diff --git a/documentation/config/_default/config.toml b/documentation/config/_default/config.toml
index fd4b3983fc..9fbd8c26a0 100644
--- a/documentation/config/_default/config.toml
+++ b/documentation/config/_default/config.toml
@@ -1,29 +1,30 @@
# FreeBSD documentation
# $FreeBSD$
baseURL = "https://docs.freebsd.org/"
-title = "The FreeBSD Project"
+title = "FreeBSD Documentation Portal"
copyright = "BSD 2-clause 'Simplified' License"
DefaultContentLanguage = "en"
defaultContentLanguageInSubdir = true
disablePathToLower = true
theme = "beastie"
disableKinds = [ "taxonomy", "taxonomyTerm" ]
authors = [ "carlavilla@FreeBSD.org" ]
preserveTOC = true
ignoreFiles = [ "chapters-order.adoc$", "toc.adoc$", "toc-tables.adoc$", "toc-figures.adoc$", "toc-examples.adoc$", "toc-1.adoc$", "toc-2.adoc$", "toc-3.adoc$", "toc-4.adoc$", "toc-5.adoc$", "books.adoc$", "chapter.adoc$" ]
enableRobotsTXT = true
[params]
websiteURL = "https://www.FreeBSD.org/"
+ description = "FreeBSD Documentation Portal"
[markup.asciidocExt]
preserveTOC = true
extensions = ["man-macro", "inter-document-references-macro", "sectnumoffset-treeprocessor", "packages-macro", "git-macro"]
[outputs]
home = [ "HTML" ]
page = [ "HTML" ]
list = [ "HTML" ]
single = [ "HTML" ]
section = [ "HTML" ]
diff --git a/documentation/content/en/articles/bsdl-gpl/_index.adoc b/documentation/content/en/articles/bsdl-gpl/_index.adoc
index 01b9cbebec..53872bcc2b 100644
--- a/documentation/content/en/articles/bsdl-gpl/_index.adoc
+++ b/documentation/content/en/articles/bsdl-gpl/_index.adoc
@@ -1,283 +1,283 @@
---
title: Why you should use a BSD style license for your Open Source Project
authors:
- author: Bruce Montague
email: brucem@alumni.cse.ucsc.edu
-releaseinfo: "$FreeBSD$"
trademarks: ["freebsd", "intel", "general"]
+description: Why you should use a BSD style license for your Open Source Project
---
= Why you should use a BSD style license for your Open Source Project
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
'''
toc::[]
[[intro]]
== Introduction
This document makes a case for using a BSD style license for software and data;
specifically it recommends using a BSD style license in place of the GPL.
It can also be read as a BSD versus GPL Open Source License introduction and summary.
[[history]]
== Very Brief Open Source History
Long before the term "Open Source" was used, software was developed by loose associations of programmers and freely exchanged.
Starting in the early 1950s, organizations such as http://www.share.org[SHARE] and http://www.decus.org[DECUS] developed much of the software that computer hardware companies bundled with their hardware offerings.
At that time computer companies were in the hardware business;
anything that reduced software cost and made more programs available made the hardware companies more competitive.
This model changed in the 1960s.
In 1965 ADR developed the first licensed software product independent of a hardware company.
ADR was competing against a free IBM package originally developed by IBM customers.
ADR patented their software in 1968.
To stop sharing of their program, they provided it under an equipment lease in which payment was spread over the lifetime of the product.
ADR thus retained ownership and could control resale and reuse.
In 1969 the US Department of Justice charged IBM with destroying businesses by bundling free software with IBM hardware.
As a result of this suit, IBM unbundled its software; that is, software became independent products separate from hardware.
In 1968 Informatics introduced the first commercial killer-app and rapidly established the concept of the software product,
the software company, and very high rates of return.
Informatics developed the perpetual license which is now standard throughout the computer industry,
wherein ownership is never transferred to the customer.
[[unix-license]]
== Unix from a BSD Licensing Perspective
AT&T, who owned the original Unix implementation,
was a publicly regulated monopoly tied up in anti-trust court;
it was legally unable to sell a product into the software market.
It was, however, able to provide it to academic institutions for the price of media.
Universities rapidly adopted Unix after an OS conference publicized its availability.
It was extremely helpful that Unix ran on the PDP-11, a very affordable 16-bit computer,
and was coded in a high-level language that was demonstrably good for systems programming.
The DEC PDP-11 had, in effect, an open hardware interface designed to make it easy for customers to write their own OS, which was common.
As DEC founder Ken Olsen famously proclaimed, "software comes from heaven when you have good hardware".
Unix author Ken Thompson returned to his alma mater, University of California Berkeley (UCB), in 1975 and taught the kernel line-by-line.
This ultimately resulted in an evolving system known as BSD (Berkeley Software Distribution).
UCB converted Unix to 32-bits, added virtual memory, and implemented the version of the TCP/IP stack upon which the Internet was essentially built.
UCB made BSD available for the cost of media, under what became known as "the BSD license".
A customer purchased Unix from AT&T and then ordered a BSD tape from UCB.
In the mid-1980s a government anti-trust case against AT&T ended with the break-up of AT&T.
AT&T still owned Unix and was now able to sell it.
AT&T embarked on an aggressive licensing effort and most commercial Unixes of the day became AT&T-derived.
In the early 1990s AT&T sued UCB over license violations related to BSD.
UCB discovered that AT&T had incorporated, without acknowledgment or payment,
many improvements due to BSD into AT&T's products, and a lengthy court case, primarily between AT&T and UCB, ensued.
During this period some UCB programmers embarked on a project to rewrite any AT&T code associated with BSD.
This project resulted in a system called 4.4BSD-Lite ("lite" because it was not a complete system; it lacked 6 key AT&T files).
A lengthy series of articles published slightly later in Dr. Dobb's Journal described a BSD-derived 386 PC version of Unix, with BSD-licensed replacement files for the 6 missing 4.4BSD-Lite files.
This system, named 386BSD, was due to ex-UCB programmer William Jolitz.
It became the original basis of all the PC BSDs in use today.
In the mid-1990s, Novell purchased AT&T's Unix rights and a (then secret) agreement was reached to terminate the lawsuit.
UCB soon terminated its support for BSD.
[[current-bsdl]]
== The Current State of FreeBSD and BSD Licenses
The so-called http://www.opensource.org/licenses/bsd-license.php[new BSD license] applied to FreeBSD within the last few years is effectively a statement that you can do anything with the program or its source,
but you do not have any warranty and none of the authors has any liability (basically, you cannot sue anybody).
This new BSD license is intended to encourage product commercialization.
Any BSD code can be sold or included in proprietary products without any restrictions on the availability of your code or your future behavior.
Do not confuse the new BSD license with "public domain".
While an item in the public domain is also free for all to use, it has no owner.
[[origins-gpl]]
== The origins of the GPL
While the future of Unix was thus muddled in the late 1980s and early 1990s, the GPL,
another development with important licensing considerations, reached fruition.
Richard Stallman, the developer of Emacs, was a member of the staff at MIT when his lab switched from home-grown to proprietary systems.
Stallman became upset when he found that he could not legally add minor improvements to the system.
(Many of Stallman's co-workers had left to form two companies based on software developed at MIT and licensed by MIT;
there appears to have been disagreement over access to the source code for this software).
Stallman devised an alternative to the commercial software license and called it the GPL, or "GNU General Public License".
He also started a non-profit foundation, the http://www.fsf.org[Free Software Foundation] (FSF),
which intended to develop an entire operating system, including all associated software, that would not be subject to proprietary licensing.
This system was called GNU, for "GNU's Not Unix".
The GPL was designed to be the antithesis of the standard proprietary license.
To this end, any modifications that were made to a GPL program were required to be given back to the GPL community (by requiring that the source of the program be available to the user) and any program that used or linked to GPL code was required to be under the GPL.
The GPL was intended to keep software from becoming proprietary.
As the last paragraph of the GPL states:
"This General Public License does not permit incorporating your program into proprietary programs."<<one>>
The http://www.opensource.org/licenses/gpl-license.php[GPL] is a complex license so here are some rules of thumb when using the GPL:
* you can charge as much as you want for distributing, supporting, or documenting the software, but you cannot sell the software itself.
* the rule-of-thumb states that if GPL source is required for a program to compile, the program must be under the GPL. Linking statically to a GPL library requires a program to be under the GPL.
* the GPL requires that any patents associated with GPLed software must be licensed for everyone's free use.
* simply aggregating software together, as when multiple programs are put on one disk, does not count as including GPLed programs in non-GPLed programs.
* output of a program does not count as a derivative work. This enables the gcc compiler to be used in commercial environments without legal problems.
* since the Linux kernel is under the GPL, any code statically linked with the Linux kernel must be GPLed. This requirement can be circumvented by dynamically linking loadable kernel modules. This permits companies to distribute binary drivers, but often has the disadvantage that they will only work for particular versions of the Linux kernel.
Due in part to its complexity, in many parts of the world today the legalities of the GPL are being ignored in regard to Linux and related software.
The long-term ramifications of this are unclear.
[[origins-lgpl]]
== The origins of Linux and the LGPL
While the commercial Unix wars raged, the Linux kernel was developed as a PC Unix clone.
Linus Torvalds credits the existence of the GNU C compiler and the associated GNU tools for the existence of Linux.
He put the Linux kernel under the GPL.
Remember that the GPL requires anything that statically links to any code under the GPL also be placed under the GPL.
The source for this code must thus be made available to the user of the program.
Dynamic linking, however, is not considered a violation of the GPL.
Pressure to put proprietary applications on Linux became overwhelming.
Such applications often must link with system libraries.
This resulted in a modified version of the GPL called the http://www.opensource.org/licenses/lgpl-license.php[LGPL] ("Library", since renamed to "Lesser", GPL).
The LGPL allows proprietary code to be linked to the GNU C library, glibc.
You do not have to release the source code which has been dynamically linked to an LGPLed library.
If you statically link an application with glibc, as is often required in embedded systems,
you cannot keep your application proprietary, that is, the source must be released.
Both the GPL and LGPL require any modifications to the code directly under the license to be released.
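As a concrete sketch of that difference, consider a small C application built with a typical Unix toolchain on a glibc-based system (the commands and file names are hypothetical; the licensing consequences are as described above):
[source,shell]
....
% cc -o app app.c            # dynamic link against glibc: app may remain proprietary under the LGPL
% cc -static -o app app.c    # static link copies glibc into app: the application source must then be released
....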
[[orphaning]]
== Open Source licenses and the Orphaning Problem
One of the serious problems associated with proprietary software is known as "orphaning".
This occurs when a single business failure or change in a product strategy causes a huge pyramid of dependent systems and companies to fail for reasons beyond their control.
Decades of experience have shown that the momentary size or success of a software supplier is no guarantee that their software will remain available, as current market conditions and strategies can change rapidly.
The GPL attempts to prevent orphaning by severing the link to proprietary intellectual property.
A BSD license gives a small company the equivalent of software-in-escrow without any legal complications or costs.
If a BSD-licensed program becomes orphaned, a company can simply take over, in a proprietary manner, the program on which it depends.
An even better situation occurs when a BSD code-base is maintained by a small informal consortium, since the development process is not dependent on the survival of a single company or product line.
The survivability of the development team when they are mentally in the zone is much more important than simple physical availability of the source code.
[[license-cannot]]
== What a license cannot do
No license can guarantee future software availability.
Although a copyright holder can traditionally change the terms of a copyright at any time, the presumption in the BSD community is that such an attempt simply causes the source to fork.
The GPL explicitly disallows revoking the license.
It has occurred, however, that a company (Mattel) purchased a GPL copyright (cphack), revoked the entire copyright, went to court, and prevailed <<two>>.
That is, they legally revoked the entire distribution and all derivative works based on the copyright.
Whether this could happen with a larger and more dispersed distribution is an open question;
there is also some confusion regarding whether the software was really under the GPL.
In another example, Red Hat purchased Cygnus, an engineering company that had taken over development of the FSF compiler tools.
Cygnus was able to do so because they had developed a business model in which they sold support for GNU software.
This enabled them to employ some 50 engineers and drive the direction of the programs by contributing the preponderance of modifications.
As Donald Rosenberg states "projects using licenses like the GPL...live under constant threat of having someone take over the project by producing a better version of the code and doing it faster than the original owners." <<three>>
[[gpl-advantages]]
== GPL Advantages and Disadvantages
A common reason to use the GPL is when modifying or extending the gcc compiler.
This is particularly apt when working with one-off specialty CPUs in environments where all software costs are likely to be considered overhead, with minimal expectations that others will use the resulting compiler.
The GPL is also attractive to small companies selling CDs in an environment where "buy-low, sell-high" may still give the end-user a very inexpensive product.
It is also attractive to companies that expect to survive by providing various forms of technical support, including documentation, for the GPLed intellectual property world.
A less publicized and unintended use of the GPL is that it is very favorable to large companies that want to undercut software companies.
In other words, the GPL is well suited for use as a marketing weapon, potentially reducing overall economic benefit and contributing to monopolistic behavior.
The GPL can present a real problem for those wishing to commercialize and profit from software.
For example, the GPL adds to the difficulty a graduate student will have in directly forming a company to commercialize his research results, or the difficulty a student will have in joining a company on the assumption that a promising research project will be commercialized.
For those who must work with statically-linked implementations of multiple software standards, the GPL is often a poor license, because it precludes using proprietary implementations of the standards.
The GPL thus minimizes the number of programs that can be built using a GPLed standard.
The GPL was intended to not provide a mechanism to develop a standard on which one engineers proprietary products.
(This does not apply to Linux applications because they do not statically link, rather they use a trap-based API.)
The GPL attempts to make programmers contribute to an evolving suite of programs, then to compete in the distribution and support of this suite.
This situation is unrealistic for many required core system standards, which may be applied in widely varying environments which require commercial customization or integration with legacy standards under existing (non-GPL) licenses.
Real-time systems are often statically linked, so the GPL and LGPL are definitely considered potential problems by many embedded systems companies.
The GPL is an attempt to keep efforts, regardless of demand, at the research and development stages.
This maximizes the benefits to researchers and developers, at an unknown cost to those who would benefit from wider distribution.
The GPL was designed to keep research results from transitioning to proprietary products.
This step is often assumed to be the last step in the traditional technology transfer pipeline and it is usually difficult enough under the best of circumstances;
the GPL was intended to make it impossible.
[[bsd-advantages]]
== BSD Advantages
A BSD style license is a good choice for long duration research or other projects that need a development environment that:
* has near zero cost
* will evolve over a long period of time
* permits anyone to retain the option of commercializing final results with minimal legal issues.
This final consideration may often be the dominant one, as it was when the Apache project decided upon its license:
"This type of license is ideal for promoting the use of a reference body of code that implements a protocol for common service.
This is another reason why we choose it for the Apache group - many of us wanted to see HTTP survive and become a true multiparty standard,
and would not have minded in the slightest if Microsoft or Netscape choose to incorporate our HTTP engine or any other component of our code into their products, if it helped further the goal of keeping HTTP common... All this means that, strategically speaking, the project needs to maintain sufficient momentum, and that participants realize greater value by contributing their code to the project, even code that would have had value if kept proprietary."
Developers tend to find the BSD license attractive as it keeps legal issues out of the way and lets them do whatever they want with the code.
In contrast, those who expect primarily to use a system rather than program it, or expect others to evolve the code, or who do not expect to make a living from their work associated with the system (such as government employees), find the GPL attractive, because it forces code developed by others to be given to them and keeps their employer from retaining copyright and thus potentially "burying" or orphaning the software.
If you want to force your competitors to help you, the GPL is attractive.
A BSD license is not simply a gift.
The question "why should we help our competitors or let them steal our work?" comes up often in relation to a BSD license.
Under a BSD license, if one company came to dominate a product niche that others considered strategic, the other companies can, with minimal effort, form a mini-consortium aimed at reestablishing parity by contributing to a competitive BSD variant that increases market competition and fairness.
This permits each company to believe that it will be able to profit from some advantage it can provide, while also contributing to economic flexibility and efficiency.
The more rapidly and easily the cooperating members can do this, the more successful they will be.
A BSD license is essentially a minimally complicated license that enables such behavior.
A key effect of the GPL, making a complete and competitive Open Source system widely available at the cost of media, is a reasonable goal.
A BSD style license, in conjunction with ad hoc consortiums of individuals, can achieve this goal without destroying the economic assumptions built around the deployment end of the technology transfer pipeline.
[[recommendations]]
== Specific Recommendations for using a BSD license
* The BSD license is preferable for transferring research results in a way that will be widely deployed and most benefit an economy.
As such, research funding agencies, such as the NSF, ONR and DARPA, should encourage, in the earliest phases of funded research projects, the adoption of BSD style licenses for software, data, results, and open hardware.
They should also encourage formation of standards based around implemented Open Source systems and ongoing Open Source projects.
* Government policy should minimize the costs and difficulties in moving from research to deployment.
When possible, grants should require results to be available under a commercialization friendly BSD style license.
* In many cases, the long-term results of a BSD style license more accurately reflect the goals proclaimed in the research charter of universities than what occurs when results are copyrighted or patented and subject to proprietary university licensing. Anecdotal evidence exists that universities are financially better rewarded in the long run by releasing research results and then appealing to donations from commercially successful alumni.
* Companies have long recognized that the creation of de facto standards is a key marketing technique. The BSD license serves this role well, if a company really has a unique advantage in evolving the system. The license is legally attractive to the widest audience while the company's expertise ensures their control. There are times when the GPL may be the appropriate vehicle for an attempt to create such a standard, especially when attempting to undermine or co-opt others. The GPL, however, penalizes the evolution of that standard, because it promotes a suite rather than a commercially applicable standard. Use of such a suite constantly raises commercialization and legal issues. It may not be possible to mix standards when some are under the GPL and others are not. A true technical standard should not mandate exclusion of other standards for non-technical reasons.
* Companies interested in promoting an evolving standard, which can become the core of other companies' commercial products, should be wary of the GPL. Regardless of the license used, the resulting software will usually devolve to whoever actually makes the majority of the engineering changes and most understands the state of the system. The GPL simply adds more legal friction to the result.
* Large companies, in which Open Source code is developed, should be aware that programmers appreciate Open Source because it leaves the software available to the employee when they change employers. Some companies encourage this behavior as an employment perk, especially when the software involved is not directly strategic. It is, in effect, a front-loaded retirement benefit with potential lost opportunity costs but no direct costs. Encouraging employees to work for peer acclaim outside the company is a cheap portable benefit a company can sometimes provide with near zero downside.
* Small companies with software projects vulnerable to orphaning should attempt to use the BSD license when possible. Companies of all sizes should consider forming such Open Source projects when it is to their mutual advantage to maintain the minimal legal and organization overheads associated with a true BSD-style Open Source project.
* Non-profits should participate in Open Source projects when possible. To minimize software engineering problems, such as mixing code under different licenses, BSD-style licenses should be encouraged. Non-profits that interact with the developing world should be particularly leery of the GPL. In some locales where application of law becomes a costly exercise, the simplicity of the new BSD license, as compared to the GPL, may be of considerable advantage.
[[conclusion]]
== Conclusion
In contrast to the GPL, which is designed to prevent the proprietary commercialization of Open Source code, the BSD license places minimal restrictions on future behavior.
This allows BSD code to remain Open Source or become integrated into commercial solutions, as a project's or company's needs change.
In other words, the BSD license does not become a legal time-bomb at any point in the development process.
In addition, since the BSD license does not come with the legal complexity of the GPL or LGPL licenses, it allows developers and companies to spend their time creating and promoting good code rather than worrying if that code violates licensing.
[[addenda]]
[bibliography]
== Bibliographical References
* [[[one,1]]] http://www.gnu.org/licenses/gpl.html
* [[[two,2]]] http://archives.cnn.com/2000/TECH/computing/03/28/cyberpatrol.mirrors/
* [[[three,3]]] Open Source: the Unauthorized White Papers, Donald K. Rosenberg, IDG Books, 2000. Quotes are from page 114, "Effects of the GNU GPL".
* [[[four,4]]] In the "What License to Use?" section of http://www.oreilly.com/catalog/opensources/book/brian.html
This whitepaper is a condensation of an original work available at http://alumni.cse.ucsc.edu/~brucem/open_source_license.htm
diff --git a/documentation/content/en/articles/building-products/_index.adoc b/documentation/content/en/articles/building-products/_index.adoc
index 8538aea5e8..577b1d3d42 100644
--- a/documentation/content/en/articles/building-products/_index.adoc
+++ b/documentation/content/en/articles/building-products/_index.adoc
@@ -1,374 +1,374 @@
---
title: Building Products with FreeBSD
authors:
- author: Joseph Koshy
email: jkoshy@FreeBSD.org
organizations:
- organization: The FreeBSD Project
-releaseinfo: "$FreeBSD$"
+description: Building Products with FreeBSD
trademarks: ["freebsd", "general"]
---
= Building Products with FreeBSD
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/urls.adoc[]
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../images/articles/building-products/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/articles/building-products/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/articles/building-products/
endif::[]
[.abstract-title]
Abstract
The FreeBSD project is a worldwide, volunteer-based, and collaborative project, which develops a portable and high-quality operating system.
The FreeBSD project distributes the source code for its product under a liberal license, with the intention of encouraging the use of its code.
Collaborating with the FreeBSD project can help organizations reduce their time to market, reduce engineering costs and improve their product quality.
This article examines the issues in using FreeBSD code in appliances and software products.
It highlights the characteristics of FreeBSD that make it an excellent substrate for product development.
The article concludes by suggesting a few "best practices" for organizations collaborating with the FreeBSD project.
'''
toc::[]
[[introduction]]
== Introduction
FreeBSD today is well-known as a high-performance server operating system.
It is deployed on millions of web servers and internet-facing hosts worldwide.
FreeBSD code also forms an integral part of many products, ranging from appliances such as network routers, firewalls, and storage devices, to personal computers.
Portions of FreeBSD have also been used in commercial shrink-wrapped software (see <<freebsd-intro>>).
In this article we look at the link:https://www.FreeBSD.org/[FreeBSD project] as a software engineering resource: a collection of building blocks and processes which you can use to build products.
While FreeBSD's source is distributed freely to the public, to fully enjoy the benefits of the project's work, organizations need to _collaborate_ with the project.
In subsequent sections of this article we discuss effective means of collaboration with the project and the pitfalls that need to be avoided while doing so.
*Caveat Reader.* The author believes that the characteristics of the FreeBSD Project listed in this article were substantially true at the time the article was conceived and written (2005).
However, the reader should keep in mind that the practices and processes used by open-source communities can change over time, and that the information in this article should therefore be taken as indicative rather than normative.
=== Target Audience
This document would be of interest to the following broad groups of people:
* Decision makers in product companies looking at ways to improve their product quality, reduce their time to market and lower engineering costs in the long term.
* Technology consultants looking for best-practices in leveraging "open-source".
* Industry observers interested in understanding the dynamics of open-source projects.
* Software developers seeking to use FreeBSD and looking for ways to contribute back.
=== Article Goals
After reading this article you should have:
* An understanding of the goals of the FreeBSD Project and its organizational structure.
* An understanding of its development model and release engineering processes.
* An understanding of how conventional corporate software development processes differ from that used in the FreeBSD project.
* Awareness of the communication channels used by the project and the level of transparency you can expect.
* Awareness of optimal ways of working with the project: how best to reduce engineering costs, improve time to market, manage security vulnerabilities, and preserve future compatibility with your product as the FreeBSD project evolves.
=== Article Structure
The rest of the article is structured as follows:
* <<freebsd-intro>> introduces the FreeBSD project, explores its organizational structure, key technologies and release engineering processes.
* <<freebsd-collaboration>> describes ways to collaborate with the FreeBSD project. It examines common pitfalls encountered by companies working with volunteer projects like FreeBSD.
* <<conclusion>> concludes.
[[freebsd-intro]]
== FreeBSD as a set of building blocks
FreeBSD makes an excellent foundation on which to build products:
* FreeBSD source code is distributed under a liberal BSD license facilitating its adoption in commercial products <<Mon2005>> with minimum hassle.
* The FreeBSD project has excellent engineering practices that can be leveraged.
* The project offers exceptional transparency into its workings, allowing organizations using its code to plan effectively for the future.
* The culture of the FreeBSD project, carried over from the Computer Science Research Group at The University of California, Berkeley <<McKu1999-1>>, fosters high-quality work. Some features in FreeBSD define the state of the art.
<<GoldGab2005>> examines the business reasons for using open-source in greater detail.
For organizations, the benefits of using FreeBSD components in their products include a shorter time to market, lower development costs and lower development risks.
=== Building with FreeBSD
Here are a few ways organizations have used FreeBSD:
* As an upstream source for tested code for libraries and utilities.
+
By being "downstream" of the project, organizations leverage the new features, bug fixes and testing that the upstream code receives.
* As an embedded OS (for example, for an OEM router and firewall device). In this model, organizations use a customized FreeBSD kernel and application program set along with a proprietary management layer for their device. OEMs benefit from new hardware support being added by the FreeBSD project upstream, and from the testing that the base system receives.
+
FreeBSD ships with a self-hosting development environment that allows easy creation of such configurations.
* As a Unix compatible environment for the management functions of high-end storage and networking devices, running on a separate processor "blade".
+
FreeBSD provides the tools for creating dedicated OS and application program images.
Its implementation of a BSD Unix API is mature and tested.
FreeBSD can also provide a stable cross-development environment for the other components of the high-end device.
* As a vehicle to get widespread testing and support from a worldwide team of developers for non-critical "intellectual property".
+
In this model, organizations contribute useful infrastructural frameworks to the FreeBSD project (for example, see man:netgraph[3]).
The widespread exposure that the code gets helps to quickly identify performance issues and bugs.
The involvement of top-notch developers also leads to useful extensions to the infrastructure that the contributing organization also benefits from.
* As a development environment supporting cross-development for embedded OSes like http://www.rtems.com/[RTEMS] and http://ecos.sourceware.org/[eCOS].
+
There are many full-fledged development environments in the {numports}-strong collection of applications ported and packaged with FreeBSD.
* As a way to support a Unix-like API in an otherwise proprietary OS, increasing its palatability for application developers.
+
Here parts of FreeBSD's kernel and application programs are "ported" to run alongside other tasks in the proprietary OS.
The availability of a stable and well tested Unix(TM) API implementation can reduce the effort needed to port popular applications to the proprietary OS.
As FreeBSD ships with high-quality documentation for its internals and has effective vulnerability management and release engineering processes, the costs of keeping up-to-date are kept low.
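For the embedded-OS and cross-development scenarios above, FreeBSD's self-hosting build system can produce a complete system for another architecture. A minimal sketch, run from the top of a FreeBSD source tree (the target architecture and destination directory are illustrative, and the install steps normally require root privileges):
[source,shell]
....
% make buildworld buildkernel TARGET_ARCH=aarch64
% make installworld installkernel TARGET_ARCH=aarch64 DESTDIR=/usr/obj/appliance-image
....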
[[freebsd-technologies]]
=== Technologies
There are a large number of technologies supported by the FreeBSD project.
A selection of these are listed below:
* A complete system that can cross-host itself for link:https://www.FreeBSD.org/platforms/[many architectures]:
* A modular symmetric multiprocessing capable kernel, with loadable kernel modules and a flexible and easy to use configuration system.
* Support for emulation of Linux(TM) and SVR4 binaries at near machine speeds. Support for binary Windows(TM) (NDIS) network drivers.
* Libraries for many programming tasks: archivers, FTP and HTTP support, thread support, in addition to a full POSIX(TM) like programming environment.
* Security features: Mandatory Access Control (man:mac[9]), jails (man:jail[2]), ACLs, and in-kernel cryptographic device support.
* Networking features: firewalling, QoS management, high-performance TCP/IP networking with support for many extensions.
+
FreeBSD's in-kernel Netgraph (man:netgraph[4]) framework allows kernel networking modules to be connected together in flexible ways.
* Support for storage technologies: Fibre Channel, SCSI, software and hardware RAID, ATA and SATA.
+
FreeBSD supports a number of filesystems, and its native UFS2 filesystem supports soft updates, snapshots and very large filesystem sizes (16TB per filesystem) <<McKu1999>>.
+
FreeBSD's in-kernel GEOM (man:geom[4]) framework allows kernel storage modules to be composed in flexible ways.
* Over {numports} ported applications, both commercial and open-source, managed via the FreeBSD ports collection.
=== Organizational Structure
FreeBSD's organizational structure is non-hierarchical.
There are essentially two kinds of contributors to FreeBSD: general users of FreeBSD, and developers with write access (known as _committers_ in the jargon) to the source base.
There are many thousands of contributors in the first group; the vast majority of contributions to FreeBSD come from individuals in this group.
Commit rights (write access) to the repository are granted to individuals who contribute consistently to the project.
Commit rights come with additional responsibilities, and new committers are assigned mentors to help them learn the ropes.
.FreeBSD Organization
image::freebsd-organization.png[]
Conflict resolution is performed by a nine member "Core Team" that is elected from the group of committers.
FreeBSD does not have "corporate" committers.
Individual committers are required to take responsibility for the changes they introduce to the code.
The link:{committers-guide}[FreeBSD Committer's guide] <<ComGuide>> documents the rules and responsibilities for committers.
FreeBSD's project model is examined in detail in <<Nik2005>>.
=== FreeBSD Release Engineering Processes
FreeBSD's release engineering processes play a major role in ensuring that its released versions are of a high quality.
At any point in time, FreeBSD's volunteers support multiple code lines (<<fig-freebsd-branches, FreeBSD Release Branches>>):
* New features and disruptive code enters on the development branch, also known as the _-CURRENT_ branch.
* _-STABLE_ branches are code lines that are branched from HEAD at regular intervals. Only tested code is allowed onto a -STABLE branch. New features are allowed once they have been tested and stabilized in the -CURRENT branch.
* _-RELEASE_ branches are maintained by the FreeBSD security team. Only bug fixes for critical issues are permitted onto -RELEASE branches.
[[fig-freebsd-branches]]
.FreeBSD Release Branches
image::freebsd-branches.png[]
Code lines are kept alive for as long as there is user and developer interest in them.
Machine architectures are grouped into "tiers"; _Tier 1_ architectures are fully supported by the project's release engineering and security teams, _Tier 2_ architectures are supported on a best effort basis, and experimental architectures comprise _Tier 3_.
The list of link:{committers-guide}#archs[supported architectures] is part of the FreeBSD documentation collection.
The release engineering team publishes a link:https://www.FreeBSD.org/releng/[road map] for future releases of FreeBSD on the project's web site.
The dates laid down in the road map are not deadlines; FreeBSD is released when its code and documentation are ready.
FreeBSD's release engineering processes are described in <<RelEngDoc>>.
[[freebsd-collaboration]]
== Collaborating with FreeBSD
Open-source projects like FreeBSD offer finished code of a very high quality.
While access to quality source code can reduce the cost of initial development, in the long-term the costs of managing change begin to dominate.
As computing environments change over the years and new security vulnerabilities are discovered, your product too needs to change and adapt.
Using open-source code is best viewed not as a one-off activity, but as an __ongoing process__.
The best projects to collaborate with are the ones that are __live__; i.e., with an active community, clear goals and a transparent working style.
* FreeBSD has an active developer community around it. At the time of writing there are many thousands of contributors from every populated continent in the world and over 300 individuals with write access to the project's source repositories.
* The goals of the FreeBSD project are <<Hub1994>>:
** To develop a high-quality operating system for popular computer hardware, and,
** To make our work available to all under a liberal license.
* FreeBSD enjoys an open and transparent working culture. Nearly all discussion in the project happens by email, on https://lists.freebsd.org/mailman/listinfo[public mailing lists] that are also archived for posterity. The project's policies are link:https://www.FreeBSD.org/internal/policies/[documented] and maintained under revision control. Participation in the project is open to all.
[[freebsd-org]]
=== Understanding FreeBSD culture
To be able to work effectively with the FreeBSD project, you need to understand the project's culture.
Volunteer-driven projects operate under different rules than for-profit corporations.
A common mistake that companies make when venturing into the open-source world is underplaying these differences.
*Motivation.* Most contributions to FreeBSD are done voluntarily without monetary rewards entering the picture. The factors that motivate individuals are complex, ranging from altruism, to an interest in solving the kinds of problems that FreeBSD attempts to solve. In this environment, "elegance is never optional"<<Nor1993>>.
*The Long Term View.* FreeBSD traces its roots back nearly twenty years to the work of the Computer Science Research Group at the University of California Berkeley.footnote:[FreeBSD's source repository contains a history of the project since its inception, and there are CDROMs available that contain earlier code from the CSRG.] A number of the original CSRG developers remain associated with the project.
The project values long-term perspectives <<Nor2001>>. A frequent acronym encountered in the project is DTRT, which stands for "Do The Right Thing".
*Development Processes.* Computer programs are tools for communication: at one level programmers communicate their intentions using a precise notation to a tool (a compiler) that translates their instructions to executable code.
At another level, the same notation is used for communication of intent between two programmers.
Formal specifications and design documents are seldom used in the project.
Clear and well-written code and well-written change logs (<<fig-change-log, A sample change log entry>>) are used in their place.
FreeBSD development happens by "rough consensus and running code"<<Carp1996>>.
[.programlisting]
....
r151864 | bde | 2005-10-29 09:34:50 -0700 (Sat, 29 Oct 2005) | 13 lines
Changed paths:
M /head/lib/msun/src/e_rem_pio2f.c
Use double precision to simplify and optimize arg reduction for small
and medium size args too: instead of conditionally subtracting a float
17+24, 17+17+24 or 17+17+17+24 bit approximation to pi/2, always
subtract a double 33+53 bit one. The float version is now closer to
the double version than to old versions of itself -- it uses the same
33+53 bit approximation as the simplest cases in the double version,
and where the float version had to switch to the slow general case at
|x| == 2^7*pi/2, it now switches at |x| == 2^19*pi/2 the same as the
double version.
This speeds up arg reduction by a factor of 2 for |x| between 3*pi/4 and
2^7*pi/4, and by a factor of 7 for |x| between 2^7*pi/4 and 2^19*pi/4.
....
.A sample change log entry [[fig-change-log]]
Communication between programmers is enhanced by the use of a common coding standard man:style[9].
*Communication Channels.* FreeBSD's contributors are spread across the world.
Email (and to a lesser extent, IRC) is the preferred means of communication in the project.
=== Best Practices for collaborating with the FreeBSD project
We now look at a few best practices for making the best use of FreeBSD in product development.
Plan for the long term::
Set up processes that help in tracking the development of FreeBSD.
For example:
+
*Track FreeBSD source code.* The project makes it easy to mirror its SVN repository using link:{committers-guide}#svn-advanced-use-setting-up-svnsync[svnsync]. Having the complete history of the source is useful when debugging complex problems and offers valuable insight into the intentions of the original developers. Use a capable source control system that allows you to easily merge changes between the upstream FreeBSD code base and your own in-house code.
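+
A minimal mirroring sketch using svnsync (the local path is illustrative, the source URL for the src tree is assumed, and the destination repository needs a pre-revprop-change hook that permits revision property changes):
+
[source,shell]
....
% svnadmin create /home/builder/freebsd-base-mirror
% printf '#!/bin/sh\nexit 0\n' > /home/builder/freebsd-base-mirror/hooks/pre-revprop-change
% chmod +x /home/builder/freebsd-base-mirror/hooks/pre-revprop-change
% svnsync init file:///home/builder/freebsd-base-mirror https://svn.freebsd.org/base
% svnsync sync file:///home/builder/freebsd-base-mirror
....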
+
<<fig-svn-blame, An annotated source listing generated using `svn blame`>> shows a portion of an annotated listing of the file referenced by the change log in <<fig-change-log, A sample change log entry>>.
The ancestry of each line of the source is clearly visible.
Annotated listings showing the history of every file that is part of FreeBSD are https://svnweb.freebsd.org/[available on the web].
+
[.programlisting]
....
#REV #WHO #DATE #TEXT
176410 bde 2008-02-19 07:42:46 -0800 (Tue, 19 Feb 2008) #include <sys/cdefs.h>
176410 bde 2008-02-19 07:42:46 -0800 (Tue, 19 Feb 2008) __FBSDID("$FreeBSD$");
2116 jkh 1994-08-19 02:40:01 -0700 (Fri, 19 Aug 1994)
2116 jkh 1994-08-19 02:40:01 -0700 (Fri, 19 Aug 1994) /* __ieee754_rem_pio2f(x,y)
8870 rgrimes 1995-05-29 22:51:47 -0700 (Mon, 29 May 1995) *
176552 bde 2008-02-25 05:33:20 -0800 (Mon, 25 Feb 2008) * return the remainder of x rem pi/2 in *y
176552 bde 2008-02-25 05:33:20 -0800 (Mon, 25 Feb 2008) * use double precision for everything except passing x
152535 bde 2005-11-16 18:20:04 -0800 (Wed, 16 Nov 2005) * use __kernel_rem_pio2() for large x
2116 jkh 1994-08-19 02:40:01 -0700 (Fri, 19 Aug 1994) */
2116 jkh 1994-08-19 02:40:01 -0700 (Fri, 19 Aug 1994)
176465 bde 2008-02-22 07:55:14 -0800 (Fri, 22 Feb 2008) #include <float.h>
176465 bde 2008-02-22 07:55:14 -0800 (Fri, 22 Feb 2008)
2116 jkh 1994-08-19 02:40:01 -0700 (Fri, 19 Aug 1994) #include "math.h"
....
.An annotated source listing generated using `svn blame` [[fig-svn-blame]]
+
*Use a gatekeeper.* Appoint a _gatekeeper_ to monitor FreeBSD development, to keep an eye out for changes that could potentially impact your products.
+
*Report bugs upstream.* If you notice a bug in the FreeBSD code that you are using, file a https://www.FreeBSD.org/support/bugreports/[bug report].
This step helps ensure that you do not have to fix the bug the next time you take a code drop from upstream.
Leverage FreeBSD's release engineering efforts::
Use code from a -STABLE development branch of FreeBSD.
These development branches are formally supported by FreeBSD's release engineering and security teams and consist of tested code.
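+
For example, a code drop from a -STABLE branch can be taken with a plain checkout (the branch number and local directory name are illustrative):
+
[source,shell]
....
% svn checkout https://svn.freebsd.org/base/stable/12 freebsd-stable
....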
Donate code to reduce costs::
A major proportion of the costs associated with developing products is that of doing maintenance.
By donating non-critical code to the project, you benefit by having your code see much wider exposure than it would otherwise get.
This in turn leads to more bugs and security vulnerabilities being flushed out and performance anomalies being identified and fixed.
Get support effectively::
For products with tight deadlines, it is recommended that you hire or enter into a consulting agreement with a developer or firm with FreeBSD experience.
The {freebsd-jobs} is a useful communication channel to find talent.
The FreeBSD project maintains a link:https://www.FreeBSD.org/commercial/consult_bycat/[gallery of consultants and consulting firms] undertaking FreeBSD work.
The http://www.bsdcertification.org/[BSD Certification Group] offers certification for all the major BSD derived OSes.
+
For less critical needs, you can ask for help on the http://lists.FreeBSD.org/mailman/listinfo[project mailing lists].
A useful guide to follow when asking for help is given in <<Ray2004>>.
Publicize your involvement::
You are not required to publicize your use of FreeBSD, but doing so helps both your effort and that of the project.
+
Letting the FreeBSD community know that your company uses FreeBSD helps improve your chances of attracting high quality talent.
A large roster of support for FreeBSD also means more mind share for it among developers.
This in turn yields a healthier foundation for your future.
Support FreeBSD developers::
Sometimes the most direct way to get a desired feature into FreeBSD is to support a developer who is already looking at a related problem.
Help can range from hardware donations to direct financial assistance.
In some countries, donations to the FreeBSD project enjoy tax benefits.
The project has a dedicated link:https://www.FreeBSD.org/donations/[donations liaison] to assist donors.
The project also maintains a web page where developers link:https://www.FreeBSD.org/donations/wantlist/[list their needs].
+
As a policy the FreeBSD project link:{contributors}[acknowledges] all contributions received on its web site.
[[conclusion]]
== Conclusion
The FreeBSD project's goals are to create and give away the source code for a high-quality operating system.
By working with the FreeBSD project you can reduce development costs and improve your time to market in a number of product development scenarios.
We examined the characteristics of the FreeBSD project that make it an excellent choice for being part of an organization's product strategy.
We then looked at the prevailing culture of the project and examined effective ways of interacting with its developers.
The article concluded with a list of best-practices that could help organizations collaborating with the project.
:sectnums!:
[bibliography]
== Bibliography
[[Carp1996]] [Carp1996] http://www.ietf.org/rfc/rfc1958.txt[The Architectural Principles of the Internet] B. Carpenter. The Internet Architecture Board. Copyright(R) 1996.
[[ComGuide]] [ComGuide] link:{committers-guide}[Committer's Guide] The FreeBSD Project. Copyright(R) 2005.
[[GoldGab2005]] [GoldGab2005] http://dreamsongs.com/IHE/IHE.html[Innovation Happens Elsewhere: Open Source as Business Strategy] Ron Goldman. Richard Gabriel. Copyright(R) 2005. Morgan-Kaufmann.
[[Hub1994]] [Hub1994] link:{contributing}[Contributing to the FreeBSD Project] Jordan Hubbard. Copyright(R) 1994-2005. The FreeBSD Project.
[[McKu1999]] [McKu1999] http://www.usenix.org/publications/library/proceedings/usenix99/mckusick.html[Soft Updates: A Technique for Eliminating Most Synchronous Writes in the Fast Filesystem] Kirk McKusick. Gregory Ganger. Copyright(R) 1999.
[[McKu1999-1]] [McKu1999-1] http://www.oreilly.com/catalog/opensources/book/kirkmck.html[Twenty Years of Berkeley Unix: From AT&T-Owned to Freely Redistributable] Marshall Kirk McKusick. http://www.oreilly.com/catalog/opensources/book/toc.html[Open Sources: Voices from the Open Source Revolution] O'Reilly Inc. Copyright(R) 1993.
[[Mon2005]] [Mon2005] link:{bsdl-gpl}[Why you should use a BSD style license for your Open Source Project] Bruce Montague. The FreeBSD Project. Copyright(R) 2005.
[[Nik2005]] [Nik2005] link:{dev-model}[A project model for the FreeBSD Project] Niklas Saers. Copyright(R) 2005. The FreeBSD Project.
[[Nor1993]] [Nor1993] http://www.norvig.com/luv-slides.ps[Tutorial on Good Lisp Programming Style] Peter Norvig. Kent Pitman. Copyright(R) 1993.
[[Nor2001]] [Nor2001] http://www.norvig.com/21-days.html[Teach Yourself Programming in Ten Years] Peter Norvig. Copyright(R) 2001.
[[Ray2004]] [Ray2004] http://www.catb.org/~esr/faqs/smart-questions.html[How to ask questions the smart way] Eric Steven Raymond. Copyright(R) 2004.
[[RelEngDoc]] [RelEngDoc] link:{releng}[FreeBSD Release Engineering] Murray Stokely. Copyright(R) 2001. The FreeBSD Project.
diff --git a/documentation/content/en/articles/committers-guide/_index.adoc b/documentation/content/en/articles/committers-guide/_index.adoc
index a79443f48c..7aad6a68ef 100644
--- a/documentation/content/en/articles/committers-guide/_index.adoc
+++ b/documentation/content/en/articles/committers-guide/_index.adoc
@@ -1,3774 +1,3774 @@
---
title: Committer's Guide
authors:
- author: The FreeBSD Documentation Project
copyright: 1999-2021 The FreeBSD Documentation Project
-releaseinfo: "$FreeBSD$"
+description: FreeBSD Committer's Guide
trademarks: ["freebsd", "coverity", "ibm", "intel", "general"]
---
= Committer's Guide
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
include::shared/authors.adoc[]
include::shared/en/teams.adoc[lines=16..-1]
include::shared/en/mailing-lists.adoc[]
include::shared/en/urls.adoc[]
[.abstract-title]
Abstract
This document provides information for the FreeBSD committer community.
All new committers should read this document before they start, and existing committers are strongly encouraged to review it from time to time.
Almost all FreeBSD developers have commit rights to one or more repositories.
However, a few developers do not, and some of the information here applies to them as well.
(For instance, some people only have rights to work with the Problem Report database).
Please see <<non-committers>> for more information.
This document may also be of interest to members of the FreeBSD community who want to learn more about how the project works.
'''
toc::[]
[[admin]]
== Administrative Details
[.informaltable]
[cols="1,1", frame="none"]
|===
|_Login Methods_
|man:ssh[1], protocol 2 only
|_Main Shell Host_
|`freefall.FreeBSD.org`
|_SMTP Host_
|`smtp.FreeBSD.org:587` (see also <<smtp-setup>>).
|`_src/_` Git Repository
|`ssh://git@gitrepo.FreeBSD.org/src.git` (see also <<git-getting-started-base-layout>>).
|`_doc/_` Git Repository
|`ssh://git@gitrepo.FreeBSD.org/doc.git` (see also <<git-getting-started-doc-layout>>).
|`_ports/_` Git Repository
|`ssh://git@gitrepo.FreeBSD.org/ports.git` (see also <<git-getting-started-ports-layout>>).
|_Internal Mailing Lists_
|developers (technically called all-developers), doc-developers, doc-committers, ports-developers, ports-committers, src-developers, src-committers. (Each project repository has its own -developers and -committers mailing lists. Archives for these lists can be found in the files [.filename]#/local/mail/repository-name-developers-archive# and [.filename]#/local/mail/repository-name-committers-archive# on the `FreeBSD.org` cluster.)
|_Core Team monthly reports_
|[.filename]#/home/core/public/monthly-reports# on the `FreeBSD.org` cluster.
|_Ports Management Team monthly reports_
|[.filename]#/home/portmgr/public/monthly-reports# on the `FreeBSD.org` cluster.
|_Noteworthy `src/` Git Branches:_
|`stable/n` (`n`-STABLE), `main` (-CURRENT)
|===
man:ssh[1] is required to connect to the project hosts. For more information, see <<ssh.guide>>.
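For example, the src repository can be cloned over SSH using the URL from the table above (the local directory name is illustrative):
[source,shell]
....
% git clone ssh://git@gitrepo.FreeBSD.org/src.git freebsd-src
....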
Useful links:
* link:https://www.FreeBSD.org/internal/[FreeBSD Project Internal Pages]
* link:https://www.FreeBSD.org/internal/machines/[FreeBSD Project Hosts]
* link:https://www.FreeBSD.org/administration/[FreeBSD Project Administrative Groups]
[[pgpkeys]]
== OpenPGP Keys for FreeBSD
Cryptographic keys conforming to the OpenPGP (__Pretty Good Privacy__) standard are used by the FreeBSD project to authenticate committers.
Messages carrying important information like public SSH keys can be signed with the OpenPGP key to prove that they are really from the committer.
See http://www.nostarch.com/pgp_ml.htm[PGP & GPG: Email for the Practical Paranoid by Michael Lucas] and http://en.wikipedia.org/wiki/Pretty_Good_Privacy[] for more information.
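For example, a file containing a new public SSH key can be clear-signed before it is mailed (a sketch; the key file name is illustrative):
[source,shell]
....
% gpg --clearsign ~/.ssh/id_ed25519.pub
....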
[[pgpkeys-creating]]
=== Creating a Key
Existing keys can be used, but should be checked with [.filename]#documentation/tools/checkkey.sh# first.
In this case, make sure the key has a FreeBSD user ID.
For those who do not yet have an OpenPGP key, or need a new key to meet FreeBSD security requirements, here we show how to generate one.
[[pgpkeys-create-steps]]
[.procedure]
====
. Install [.filename]#security/gnupg#. Enter these lines in [.filename]#~/.gnupg/gpg.conf# to set minimum acceptable defaults:
+
[.programlisting]
....
fixed-list-mode
keyid-format 0xlong
personal-digest-preferences SHA512 SHA384 SHA256 SHA224
default-preference-list SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES CAST5 BZIP2 ZLIB ZIP Uncompressed
use-agent
verify-options show-uid-validity
list-options show-uid-validity
sig-notation issuer-fpr@notations.openpgp.fifthhorseman.net=%g
cert-digest-algo SHA512
....
. Generate a key:
+
[source,shell]
....
% gpg --full-gen-key
gpg (GnuPG) 2.1.8; Copyright (C) 2015 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Warning: using insecure memory!
Please select what kind of key you want:
(1) RSA and RSA (default)
(2) DSA and Elgamal
(3) DSA (sign only)
(4) RSA (sign only)
Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 2048 <.>
Requested keysize is 2048 bits
Please specify how long the key should be valid.
0 = key does not expire
<n> = key expires in n days
<n>w = key expires in n weeks
<n>m = key expires in n months
<n>y = key expires in n years
Key is valid for? (0) 3y <.>
Key expires at Wed Nov 4 17:20:20 2015 MST
Is this correct? (y/N) y
GnuPG needs to construct a user ID to identify your key.
Real name: Chucky Daemon <.>
Email address: notreal@example.com
Comment:
You selected this USER-ID:
"Chucky Daemon <notreal@example.com>"
Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? o
You need a Passphrase to protect your secret key.
....
<.> 2048-bit keys with a three-year expiration provide adequate protection at present (2013-12). http://danielpocock.com/rsa-key-sizes-2048-or-4096-bits[] describes the situation in more detail.
<.> A three year key lifespan is short enough to obsolete keys weakened by advancing computer power, but long enough to reduce key management problems.
<.> Use your real name here, preferably matching that shown on government-issued ID to make it easier for others to verify your identity. Text that may help others identify you can be entered in the `Comment` section.
+
After the email address is entered, a passphrase is requested.
Methods of creating a secure passphrase are contentious.
Rather than suggest a single way, here are some links to sites that describe various methods: http://world.std.com/~reinhold/diceware.html[], http://www.iusmentis.com/security/passphrasefaq/[], http://xkcd.com/936/[], http://en.wikipedia.org/wiki/Passphrase[].
====
Protect the private key and passphrase.
If either the private key or passphrase may have been compromised or disclosed, immediately notify mailto:accounts@FreeBSD.org[accounts@FreeBSD.org] and revoke the key.
Committing the new key is shown in <<commit-steps, Steps for New Committers>>.
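Should revocation ever become necessary, it is easiest if a revocation certificate was generated ahead of time.
A minimal sketch (here `KEYID` is a placeholder for your key ID, and the output filename is only illustrative):

[source,shell]
....
% gpg --output revoke.asc --gen-revoke KEYID
....

Store the resulting certificate somewhere safe, separate from the private key.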
[[kerberos-ldap]]
== Kerberos and LDAP Web Password for the FreeBSD Cluster
The FreeBSD cluster requires a Kerberos password to access certain services.
The Kerberos password also serves as the LDAP web password, since LDAP is proxying to Kerberos in the cluster.
Some of the services which require this include:
* https://bugs.freebsd.org/bugzilla[Bugzilla]
* https://ci.freebsd.org[Jenkins]
To create a new Kerberos account in the FreeBSD cluster, or to reset a Kerberos password for an existing account using a random password generator:
[source,shell]
....
% ssh kpasswd.freebsd.org
....
[NOTE]
====
This must be done from a machine outside of the FreeBSD.org cluster.
====
A Kerberos password can also be set manually by logging into `freefall.FreeBSD.org` and running:
[source,shell]
....
% kpasswd
....
[NOTE]
====
Unless the Kerberos-authenticated services of the FreeBSD.org cluster have been used previously, `Client unknown` will be shown.
This error means that the `ssh kpasswd.freebsd.org` method shown above must be used first to initialize the Kerberos account.
====
[[committer.types]]
== Commit Bit Types
The FreeBSD repository has a number of components which, when combined, support the basic operating system source, documentation, third party application ports infrastructure, and various maintained utilities.
When FreeBSD commit bits are allocated, the areas of the tree where the bit may be used are specified.
Generally, the areas associated with a bit reflect who authorized the allocation of the commit bit.
Additional areas of authority may be added at a later date: when this occurs, the committer should follow normal commit bit allocation procedures for that area of the tree, seeking approval from the appropriate entity and possibly getting a mentor for that area for some period of time.
[.informaltable]
[cols="1,1,1", frame="none"]
|===
|__Committer Type__
|__Responsible__
|__Tree Components__
|src
|core@
|src/
|doc
|doceng@
|doc/, ports/, src/ documentation
|ports
|portmgr@
|ports/
|===
Commit bits allocated prior to the development of the notion of areas of authority may be appropriate for use in many parts of the tree.
However, common sense dictates that a committer who has not previously worked in an area of the tree seek review prior to committing, seek approval from the appropriate responsible party, and/or work with a mentor.
Since the rules regarding code maintenance differ by area of the tree, this is as much for the benefit of the committer working in an area of less familiarity as it is for others working on the tree.
Committers are encouraged to seek review for their work as part of the normal development process, regardless of the area of the tree where the work is occurring.
=== Policy for Committer Activity in Other Trees
* All committers may modify [.filename]#src/share/misc/committers-*.dot#, [.filename]#src/usr.bin/calendar/calendars/calendar.freebsd#, and [.filename]#ports/astro/xearth/files#.
* doc committers may commit documentation changes to [.filename]#src# files, such as man pages, READMEs, fortune databases, calendar files, and comment fixes without approval from a src committer, subject to the normal care and tending of commits.
* Any committer may make changes to any other tree with an "Approved by" from a non-mentored committer with the appropriate bit.
Mentored committers can provide a "Reviewed by" but not an "Approved by".
* Committers can acquire an additional bit by the usual process of finding a mentor who will propose them to core, doceng, or portmgr, as appropriate. When approved, they will be added to 'access' and the normal mentoring period will ensue, which will involve a continuing requirement of "Approved by" for some period.
[[git-primer]]
== Git Primer
[NOTE]
====
This section is a work in progress...
====
[[git-basics]]
=== Git basics
There are many Git primers on the web (search for "Git primer").
https://danielmiessler.com/study/git/ and https://gist.github.com/williewillus/068e9a8543de3a7ef80adb2938657b6b are good overviews.
The Git Book (https://git-scm.com/book/en/v2) is more complete, but also much longer.
There is also https://ohshitgit.com/ for common traps and pitfalls of Git, in case you need guidance to fix things up.
This document assumes a basic familiarity with Git and will try not to belabor the basics (though it will cover them briefly).
[[git-mini-primer]]
=== Git Mini Primer
This primer is less ambitiously scoped than the old Subversion Primer, but should cover the basics.
==== Scope
If you want to download FreeBSD, compile it from sources, and generally keep up to date that way, this primer is for you.
It covers getting the sources, updating them, bisecting, and touches briefly on how to cope with a few local changes.
It covers the basics, and tries to give good pointers to more in-depth treatment for when the reader finds the basics insufficient.
Other sections of this guide cover more advanced topics related to contributing to the project.
The goal of this section is to highlight those bits of Git needed to track sources.
It assumes a basic understanding of Git.
There are many primers for Git on the web, but the https://git-scm.com/book/en/v2[Git Book] provides one of the better treatments.
==== Keeping Current With The FreeBSD src Tree
[[keeping_current]]
First step: cloning a tree.
This downloads the entire tree.
There are two ways to download.
Most people will want to do a deep clone of the repository.
However, there are times when you may wish to do a shallow clone.
===== Branch names
The branch names in the new Git repository are similar to the old names.
For the stable branches, they are stable/X where X is the major release (like 11 or 12).
The main branch in the new repository is 'main'.
The main branch in the old GitHub mirror was 'master', but is now 'main'.
Both reflect the defaults of Git at the time they were created.
The 'main' branch is the default branch if you omit the '-b branch' or '--branch branch' options below.
===== Repositories
Please see the <<admin,Administrative Details>> for the latest information on where to get FreeBSD sources.
$URL below can be obtained from that page.
Note: The project doesn't use submodules as they are a poor fit for our workflows and development model.
How we track changes in third-party applications is discussed elsewhere and generally of little concern to the casual user.
===== Deep Clone
A deep clone pulls in the entire tree, as well as all the history and branches.
It is the easiest to do.
It also allows you to use Git's worktree feature to have all your active branches checked out into separate directories but with only one copy of the repository.
[source,shell]
....
% git clone -o freebsd $URL -b branch [dir]
....
is how you make a deep clone.
'branch' should be one of the branches listed in the previous section.
It is optional if it is the main branch.
'dir' is an optional directory to place it in (the default will be the name of the repo you are cloning (src, doc, etc)).
You will want a deep clone if you are interested in the history, plan on making local changes, or plan on working on more than one branch.
It is the easiest to keep up to date as well.
If you are interested in the history, but are working with only one branch and are short on space, you can also use --single-branch to only download the one branch
(though some merge commits will not reference the merged-from branch which may be important for some users who are interested in detailed versions of history).
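For example, a single-branch clone of one stable branch might look like this (a sketch; `stable/13` is only an illustration and `$URL` is as described above):

[source,shell]
....
% git clone -o freebsd -b stable/13 --single-branch $URL [dir]
....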
===== Shallow Clone
A shallow clone copies just the most current code, but none or little of the history.
This can be useful when you need to build a specific revision of FreeBSD, or when you are just starting out and plan to track the tree more fully.
You can also use it to limit history to only so many revisions.
However, see below for a significant limitation of this approach.
[source,shell]
....
% git clone -o freebsd -b branch --depth 1 $URL [dir]
....
This clones the repository, but only has the most recent version in the repository.
The rest of the history is not downloaded.
Should you change your mind later, you can do 'git fetch --unshallow' to get the old history.
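For example:

[source,shell]
....
% git fetch --unshallow
....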
[WARNING]
====
When you make a shallow clone, you will lose the commit count in your uname output.
This can make it more difficult to determine if your system needs to be updated when a security advisory is issued.
====
===== Building
Once you've downloaded the sources, building is done as described in the handbook, e.g.:
[source,shell]
....
% cd src
% make buildworld
% make buildkernel
% make installkernel
% make installworld
....
so that won't be covered in depth here.
If you want to build a custom kernel, link:{handbook}#kernelconfig[the kernel config section] of the FreeBSD Handbook recommends creating a file MYKERNEL under sys/${ARCH}/conf with your changes against GENERIC.
To have MYKERNEL disregarded by Git, it can be added to .git/info/exclude.
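For example, assuming an amd64 machine and a config file named MYKERNEL (both illustrative), the exclusion could be added from the top of the checkout:

[source,shell]
....
% echo "sys/amd64/conf/MYKERNEL" >> .git/info/exclude
....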
===== Updating
Updating both types of trees uses the same commands.
This pulls in all the revisions since your last update.
[source,shell]
....
% git pull --ff-only
....
will update the tree.
In Git, a 'fast forward' merge is one that only needs to set a new branch pointer and doesn't need to re-create the commits.
By always doing a 'fast forward' merge/pull, you'll ensure that you have an exact copy of the FreeBSD tree.
This will be important if you want to maintain local patches.
See below for how to manage local changes.
The simplest is to use --autostash on the 'git pull' command, but more sophisticated options are available.
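For example, with a reasonably recent Git, the autostash variant of the update is a one-liner (a sketch):

[source,shell]
....
% git pull --ff-only --autostash
....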
==== Selecting a Specific Version
In Git, the 'git checkout' command checks out both branches and specific versions.
Git's versions are long hashes rather than sequential numbers.
When you checkout a specific version, just specify the hash you want on the command line (the git log command can help you decide which hash you might want):
[source,shell]
....
% git checkout 08b8197a74
....
and you have that checked out.
You will be greeted with a message similar to the following:
[source,shell]
....
Note: checking out '08b8197a742a96964d2924391bf9fdfeb788865d'.
You are in a 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:
git checkout -b <new-branch-name>
HEAD is now at 08b8197a742a hook gpiokeys.4 to the build
....
where the last line is generated from the hash you are checking out and the first line of the commit message from that revision.
The hash can be abbreviated to the shortest unique length.
Git itself is inconsistent about how many digits it displays.
==== Bisecting
Sometimes, things go wrong.
The last version worked, but the one you just updated to does not.
A developer may ask you to bisect the problem to track down which commit caused the regression.
Git makes bisecting changes easy with a powerful 'git bisect' command.
Here's a brief outline of how to use it.
For more information, see https://www.metaltoad.com/blog/beginners-guide-git-bisect-process-elimination or https://git-scm.com/docs/git-bisect.
The git-bisect manual page is good at describing what can go wrong, what to do when versions won't build, when you want to use terms other than 'good' and 'bad', and so on, none of which will be covered here.
`git bisect start` will start the bisection process.
Next, you need to tell a range to go through.
'git bisect good XXXXXX' will tell it the working version and 'git bisect bad XXXXX' will tell it the bad version.
The bad version will almost always be HEAD (a special tag for what you have checked out).
The good version will be the last one you checked out.
[TIP]
====
If you want to know the last version you checked out, you should use 'git reflog':
[source,shell]
....
5ef0bd68b515 (HEAD -> main, freebsd/main, freebsd/HEAD) HEAD@{0}: pull --ff-only: Fast-forward
a8163e165c5b (upstream/main) HEAD@{1}: checkout: moving from b6fb97efb682994f59b21fe4efb3fcfc0e5b9eeb to main
...
....
shows me moving the working tree to the main branch (a816...) and then updating from upstream (to 5ef0...).
In this case, bad would be HEAD (or 5ef0bd68) and good would be a8163e165.
As you can see from the output, HEAD@{1} also often works, but isn't foolproof if you have done other things to your Git tree after updating, but before you discover the need to bisect.
====
Set the 'good' version first, then set the bad (though the order doesn't matter).
When you set the bad version, it will give you some statistics on the process:
[source,shell]
....
% git bisect start
% git bisect good a8163e165c5b
% git bisect bad HEAD
Bisecting: 1722 revisions left to test after this (roughly 11 steps)
[c427b3158fd8225f6afc09e7e6f62326f9e4de7e] Fixup r361997 by balancing parens. Duh.
....
You would then build/install that version.
If it's good, type 'git bisect good'; otherwise, type 'git bisect bad'.
If the version doesn't compile, type 'git bisect skip'.
You will get a similar message to the above after each step.
When you are done, report the bad version to the developer (or fix the bug yourself and send a patch).
'git bisect reset' will end the process and return you back to where you started (usually tip of main).
Again, the git-bisect manual (linked above) is a good resource for when things go wrong or for unusual cases.
[[git-gpg-signing]]
==== Signing the commits, tags, and pushes, with GnuPG
Git knows how to sign commits, tags, and pushes.
When you sign a Git commit or a tag, you can prove that the code you submitted came from you and wasn't altered while you were transferring it.
You also can prove that you submitted the code and not someone else.
A more in-depth documentation on signing commits and tags can be found in the https://git-scm.com/book/en/v2/Git-Tools-Signing-Your-Work[Git Tools - Signing Your Work] chapter of the Git's book.
The rationale behind signing pushes can be found in the https://github.com/git/git/commit/a85b377d0419a9dfaca8af2320cc33b051cbed04[commit that introduced the feature].
The best way is to simply tell Git you always want to sign commits, tags, and pushes.
You can do this by setting a few configuration variables:
[source,shell]
....
% git config --add user.signingKey LONG-KEY-ID
% git config --add commit.gpgSign true
% git config --add tag.gpgSign true
% git config --add push.gpgSign if-asked
....
// push.gpgSign should probably be set to `yes` once we enable it, or be set with --global, so that it is enabled for all repositories.
[NOTE]
======
To avoid possible collisions, make sure you give a long key id to Git.
You can get the long id with: `gpg --list-secret-keys --keyid-format LONG`.
======
[TIP]
======
To use specific subkeys, and not have GnuPG resolve the subkey to a primary key, append `!` to the key.
For example, to encrypt for the subkey `DEADBEEF`, use `DEADBEEF!`.
======
===== Verifying signatures
Commit signatures can be verified by running either `git verify-commit <commit hash>`, or `git log --show-signature`.
Tag signatures can be verified with `git verify-tag <tag name>`, or `git tag -v <tag name>`.
////
Commented out for now until we decide what to do.
Git pushes are a bit different, they live in a special ref in the repository.
TODO: write how to verify them
////
==== Ports Considerations
The ports tree operates the same way.
The branch names are different and the repositories are in different locations.
The cgit repository web interface for use with web browsers is at https://cgit.FreeBSD.org/ports/ .
The production Git repository is at https://git.FreeBSD.org/ports.git and at ssh://anongit@git.FreeBSD.org/ports.git (or anongit@git.FreeBSD.org:ports.git).
There is also a mirror on GitHub, see link:{handbook}mirrors/#mirrors[External mirrors] for an overview.
The 'current' branch is 'main'.
The quarterly branches are named 'yyyyQn' for year 'yyyy' and quarter 'n'.
===== Commit message formats
A hook is available in the ports repository, https://cgit.freebsd.org/ports/tree/.hooks/prepare-commit-msg[.hooks/prepare-commit-msg], to help you write your commit messages.
It can be enabled by running `git config --add core.hooksPath .hooks`.
The main point is that a commit message should be formatted in the following way:
....
category/port: Summary.
Description of why the changes were made.
PR: 12345
....
[IMPORTANT]
====
The first line is the subject of the commit: it names the port that was changed and summarizes the change.
It should contain 50 characters or fewer.
A blank line should separate it from the rest of the commit message.
The rest of the commit message should be wrapped at the 72 characters boundary.
Another blank line should be added if there are any metadata fields, so that they are easily distinguishable from the commit message.
====
==== Managing Local Changes
This section addresses tracking local changes.
If you have no local changes, you can stop reading now (it is the last section and OK to skip).
One point that is important for all of these approaches: all changes are local until pushed.
Unlike Subversion, Git uses a distributed model.
For users, for most things, there is very little difference.
However, if you have local changes, you can use the same tool to manage them as you use to pull in changes from FreeBSD.
All changes that you have not pushed are local and can easily be modified (git rebase, discussed below, does this).
===== Keeping local changes
The simplest way to keep local changes (especially trivial ones) is to use 'git stash'.
In its simplest form, you use 'git stash' to record the changes (which pushes them onto the stash stack).
Most people use this to save changes before updating the tree as described above.
They then use 'git stash apply' to re-apply them to the tree.
The stash is a stack of changes that can be examined with 'git stash list'.
The git-stash man page (https://git-scm.com/docs/git-stash) has all the details.
This method is suitable when you have tiny tweaks to the tree.
When you have anything non-trivial, you'll likely be better off keeping a local branch and rebasing.
Stashing is also integrated with the 'git pull' command: just add '--autostash' to the command line.
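A minimal sketch of the manual stash workflow described above:

[source,shell]
....
% git stash              # save the local changes
% git pull --ff-only     # update the tree as described above
% git stash apply        # re-apply the saved changes
....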
===== Keeping a local branch
[[keeping_a_local_branch]]
It is much easier to keep a local branch with Git than Subversion.
In Subversion you need to merge the commit, and resolve the conflicts.
This is manageable, but can lead to a convoluted history that's hard to upstream should that ever be necessary, or hard to replicate if you need to do so.
Git also allows one to merge, along with the same problems.
That's one way to manage the branch, but it's the least flexible.
In addition to merging, Git supports the concept of 'rebasing' which avoids these issues.
The 'git rebase' command replays all the commits of a branch at a newer location on the parent branch.
We will cover the most common scenarios that arise using it.
====== Create a branch
Let's say you want to make a change to FreeBSD's ls command to never, ever do color.
There are many reasons to do this, but this example will use that as a baseline.
The FreeBSD ls command changes from time to time, and you'll need to cope with those changes.
Fortunately, with Git rebase it usually is automatic.
[source,shell]
....
% cd src
% git checkout main
% git checkout -b no-color-ls
% cd bin/ls
% vi ls.c # hack the changes in
% git diff # check the changes
diff --git a/bin/ls/ls.c b/bin/ls/ls.c
index 7378268867ef..cfc3f4342531 100644
--- a/bin/ls/ls.c
+++ b/bin/ls/ls.c
@@ -66,6 +66,7 @@ __FBSDID("$FreeBSD$");
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
+#undef COLORLS
#ifdef COLORLS
#include <termcap.h>
#include <signal.h>
% # these look good, make the commit...
% git commit ls.c
....
The commit will pop you into an editor to describe what you've done.
Once you enter that, you have your own **local** branch in the Git repo.
Build and install it like you normally would, following the directions in the handbook.
Git differs from other version control systems in that you have to tell it explicitly which files to commit.
I have opted to do it on the commit command line, but you can also do it with 'git add', which many of the more in-depth tutorials cover.
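For example, a minimal sketch of the `git add` route:

[source,shell]
....
% git add ls.c
% git commit
....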
====== Time to update
When it is time to bring in a new version, it is almost the same as without the branches.
You would update like you would above, but there is one extra command before you update, and one after.
The following assumes you are starting with an unmodified tree.
It is important to start rebasing operations with a clean tree (Git usually requires this).
[source,shell]
....
% git checkout main
% git pull --ff-only
% git rebase -i main no-color-ls
....
This will bring up an editor that lists all the commits in it.
For this example, do not change it at all.
This is typically what you are doing while updating the baseline (though you also use the Git rebase command to curate the commits you have in the branch).
Once you are done with the above, you have to move the commits to ls.c forward from the old version of FreeBSD to the newer one.
Sometimes there are merge conflicts.
That is OK.
Do not panic.
Instead, handle them the same as any other merge conflicts.
To keep it simple, I will just describe a common issue that may arise.
A pointer to a more complete treatment can be found at the end of this section.
Let's say the included headers change upstream in a radical shift to terminfo, as well as a name change for the option.
When you updated, you might see something like this:
[source,shell]
....
Auto-merging bin/ls/ls.c
CONFLICT (content): Merge conflict in bin/ls/ls.c
error: could not apply 646e0f9cda11... no color ls
Resolve all conflicts manually, mark them as resolved with
"git add/rm <conflicted_files>", then run "git rebase --continue".
You can instead skip this commit: run "git rebase --skip".
To abort and get back to the state before "git rebase", run "git rebase --abort".
Could not apply 646e0f9cda11... no color ls
....
which looks scary.
If you bring up an editor, you will see it is a typical 3-way merge conflict resolution that you may be familiar with from other source code systems (the rest of ls.c has been omitted):
[source,shell]
....
<<<<<<< HEAD
#ifdef COLORLS_NEW
#include <terminfo.h>
=======
#undef COLORLS
#ifdef COLORLS
#include <termcap.h>
>>>>>>> 646e0f9cda11... no color ls
....
The new code is first, and your code is second.
The right fix here is to just add a #undef COLORLS_NEW before #ifdef and then delete the old changes:
[source,shell]
....
#undef COLORLS_NEW
#ifdef COLORLS_NEW
#include <terminfo.h>
....
Save the file.
The rebase was interrupted, so you have to complete it:
[source,shell]
....
% git add ls.c
% git rebase --continue
....
which tells Git that ls.c has been fixed and to continue the rebase operation.
Since there was a conflict, you will get kicked into the editor to update the commit message if necessary.
If the commit message is still accurate, just exit the editor.
If you get stuck during the rebase, do not panic.
git rebase --abort will take you back to a clean slate.
It is important, though, to start with an unmodified tree.
An aside: The above mentioned 'git reflog' comes in handy here, as it will have a list of all the (intermediate) commits that you can view or inspect or cherry-pick.
For more on this topic, https://www.freecodecamp.org/news/the-ultimate-guide-to-git-merge-and-git-rebase/ provides a rather extensive treatment.
It is a good resource for issues that arise occasionally but are too obscure for this guide.
===== Switching to a Different FreeBSD Branch
Say you wish to shift from stable/12 to the current branch.
If you have a deep clone, the following will suffice:
[source,shell]
....
% git checkout main
% # build and install here...
....
If you have a local branch, though, there are one or two caveats.
First, rebase will rewrite history, so you will likely want to do something to save it.
Second, jumping branches tends to cause more conflicts.
If we pretend the example above was relative to stable/12, then to move to main, I'd suggest the following:
[source,shell]
....
% git checkout no-color-ls
% git checkout -b no-color-ls-stable-12 # create another name for this branch
% git rebase -i stable/12 no-color-ls --onto main
....
The above checks out no-color-ls.
Then it creates a new name for the branch (no-color-ls-stable-12) in case you need to get back to it.
Then it rebases onto the main branch.
This finds all the commits on the current no-color-ls branch (back to where it meets up with the stable/12 branch) and replays them onto the main branch, creating a new no-color-ls branch there (which is why I had you create a placeholder name).
===== Migrating from an existing Git clone
If you have work based on a previous Git conversion or a locally running git-svn conversion, migrating to the new repository can encounter problems because Git has no knowledge about the connection between the two.
When you have only a few local changes, the easiest way would be to cherry-pick those changes to the new base:
[source,shell]
....
% git checkout main
% git cherry-pick old_branch..your_branch
....
Or alternatively, do the same thing with rebase:
[source,shell]
....
% git rebase --onto main master your_branch
....
If you do have a lot of changes, you would probably want to perform a merge instead.
The idea is to create a merge point that consolidates the history of the old_branch and the new FreeBSD repository (main).
Find that point by locating the same commit in both histories:
[source,shell]
....
% git show old_branch
....
You will see a commit message; now search for it in the new branch:
[source,shell]
....
% git log --grep="commit message on old_branch" freebsd/main
....
This will help you locate the corresponding commit hash on the new main branch.
Create a helper branch (in the example we call it 'stage') from that hash:
[source,shell]
....
% git checkout -b stage _hash_found_from_git_log_
....
Then perform a merge of the old branch:
[source,shell]
....
% git merge -s ours -m "Mark old branch as merged" old_branch
....
With that, it's possible to merge your work branch or the main branch in any order without problem.
Eventually, when you are ready to commit your work back to main, you can perform a rebase to main, or do a squash commit by combining everything into one commit.
[[mfc-with-git]]
=== MFC (Merge From Current) Procedures
==== Summary
MFC workflow can be summarized as `git cherry-pick -x` plus `git commit --amend` to adjust the commit message.
For multiple commits, use `git rebase -i` to squash them together and edit the commit message.
==== Single commit MFC
[source,shell]
....
% git checkout stable/X
% git cherry-pick -x $HASH --edit
....
For MFCs of merge commits, for example a vendor import, you need to specify one parent for cherry-pick purposes.
Normally, that would be the "first parent" of the branch you are cherry-picking from, so:
[source,shell]
....
% git checkout stable/X
% git cherry-pick -x $HASH -m 1 --edit
....
If things go wrong, you'll either need to abort the cherry-pick with `git cherry-pick --abort` or fix it up and do a `git cherry-pick --continue`.
Once the cherry-pick is finished, push with `git push`.
If you get an error due to losing the commit race, use `git pull --rebase` and try to push again.
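A sketch of that recovery:

[source,shell]
....
% git pull --rebase
% git push
....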
==== MFC to RELENG branch
MFCs to branches that require approval require a bit more care.
The process is the same for either a typical merge or an exceptional direct commit.
* Merge or direct commit to the appropriate `stable/X` branch first before merging to the `releng/X.Y` branch.
* Use the hash that's in the `stable/X` branch for the MFC to `releng/X.Y` branch.
* Leave both "cherry picked from" lines in the commit message.
* Be sure to add the `Approved by:` line when you are in the editor.
[source,shell]
....
% git checkout releng/13.0
% git cherry-pick -x $HASH --edit
....
If you forget to add the `Approved by:` line, you can do a `git commit --amend` to edit the commit message before you push the change.
==== Multiple commit MFC
[source,shell]
....
% git checkout -b tmp-branch stable/X
% for h in $HASH_LIST; do git cherry-pick -x $h; done
% git rebase -i stable/X
# mark each of the commits after the first as 'squash'
# Update the commit message to reflect all elements of commit, if necessary.
# Be sure to retain the "cherry picked from" lines.
% git push freebsd HEAD:stable/X
....
If the push fails due to losing the commit race, rebase and try again:
[source,shell]
....
% git checkout stable/X
% git pull
% git checkout tmp-branch
% git rebase stable/X
% git push freebsd HEAD:stable/X
....
Once the MFC is complete, you can delete the temporary branch:
[source,shell]
....
% git checkout stable/X
% git branch -d tmp-branch
....
==== MFC a vendor import
Vendor imports are the only thing in the tree that creates a merge commit in the main line.
Cherry picking merge commits into stable/XX presents an additional difficulty because there are two parents for a merge commit.
Generally, you'll want the first parent's diff since that's the diff to mainline (though there may be some exceptions).
[source,shell]
....
% git cherry-pick -x -m 1 $HASH
....
is typically what you want.
This will tell cherry-pick to apply the correct diff.
There are some hopefully rare cases where the mainline was merged backwards by the conversion script.
Should that be the case (and we've not found any yet), you'd change the above to '-m 2' to pick up the proper parent.
Just do
[source,shell]
....
% git cherry-pick --abort
% git cherry-pick -x -m 2 $HASH
....
to do that. The `--abort` will clean up the failed first attempt.
==== Redoing a MFC
If you do an MFC and it goes horribly wrong and you want to start over,
then the easiest way is to use `git reset --hard` like so:
[source,shell]
....
% git reset --hard freebsd/stable/12
....
though if you have some revs you want to keep, and others you don't,
using 'git rebase -i' is better.
==== Considerations when MFCing
When committing source commits to stable and releng branches, we have the following goals:
* Clearly distinguish direct commits from commits that land a change from another branch.
* Avoid introducing known breakage into stable and releng branches.
* Allow developers to determine which changes have or have not been landed from one branch to another.
With Subversion, we used the following practices to achieve these goals:
* Using 'MFC' and 'MFS' tags to mark commits that merged changes from another branch.
* Squashing fixup commits into the main commit when merging a change.
* Recording mergeinfo so that `svn mergeinfo --show-revs` worked.
With Git, we will need to use different strategies to achieve the same goals.
This document aims to define best practices when merging source commits using Git that achieve these goals.
In general, we aim to use Git's native support to achieve these goals rather than enforcing practices built on Subversion's model.
One general note: due to technical differences with Git, we will not be using Git "merge commits" (created via `git merge`) in stable or releng branches.
Instead, when this document refers to "merge commits", it means a commit originally made to `main` that is replicated or "landed" to a stable branch, or a commit from a stable branch that is replicated to a releng branch with some variation of `git cherry-pick`.
==== Finding Eligible Hashes to MFC
Git provides some built-in support for this via the `git cherry` and `git log --cherry` commands.
These commands compare the raw diffs of commits (but not other metadata such as log messages) to determine if two commits are identical.
This works well when each commit from head is landed as a single commit to a stable branch, but it falls over if multiple commits from main are squashed together as a single commit to a stable branch.
There are a few options for resolving this:
1. We could ban squashing of commits and instead require that committers stage all of the fixup / follow-up commits to stable into a single push.
This would still achieve the goal of stability in stable and releng branches since pushes are atomic and users doing a simple pull will never end up with a tree that has the main commit without the fixup(s).
`git bisect` is also able to cope with this model via `git bisect skip`.
2. We could adopt a consistent style for describing MFCs and write our own tooling to wrap around `git cherry` to determine the list of eligible commits.
A simple approach here might be to use the syntax from `git cherry-pick -x`, but require that a squashed commit list all of the hashes (one line per hash) at the end of the commit message.
Developers could do this by cherry-picking (`git cherry-pick -x`) each individual commit into a branch and then using `git rebase` to squash the commits down into a single commit, collecting the `-x` annotations at the end of the landed commit log.
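For reference, a minimal sketch of using `git cherry` to list commits on `main` whose changes are not yet present on a stable branch (the branch names are only illustrative):

[source,shell]
....
% git cherry -v freebsd/stable/13 freebsd/main
....

Commits marked with `+` have no equivalent change on the stable branch; commits marked with `-` do.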
==== Commit message standards
===== Marking MFCs
The project has adopted the following practice for marking MFCs:
* Use the `-x` flag with `git cherry-pick`. This adds a line to the commit message that includes the hash of the original commit when merging. Since it is added by Git directly, committers do not have to manually edit the commit log when merging.
When merging multiple commits, keep all the "cherry picked from" lines.
===== Trim Metadata?
One area that was not clearly documented with Subversion (or even CVS) is how to format metadata in log messages for MFC commits.
Should it include the metadata from the original commit unchanged, or should it be altered to reflect information about the MFC commit itself?
Historical practice has varied, though some of the variance is by field.
For example, MFCs that are relevant to a PR generally include the PR field in the MFC so that MFC commits are included in the bug tracker's audit trail.
Other fields are less clear.
For example, Phabricator shows the diff of the last commit tagged to a review, so including Phabricator URLs replaces the `main` commit with the landed commits.
The list of reviewers is also not clear.
If a reviewer has approved a change to `main`, does that mean they have approved the MFC commit? Is that true only for identical code, or also for merely trivial reworks? It's clearly not true for more extensive reworks.
Even for identical code, what if the commit doesn't conflict but introduces an ABI change? A reviewer may have ok'd a commit for `main` but may not approve of merging the same commit as-is due to the ABI breakage.
One will have to use one's best judgement until clear guidelines can be agreed upon.
For MFCs regulated by re@, new metadata fields are added, such as the Approved by tag for approved commits.
This new metadata will have to be added via `git commit --amend` or similar after the original commit has been reviewed and approved.
We may also want to reserve some metadata fields in MFC commits such as Phabricator URLs for use by re@ in the future.
Preserving existing metadata provides a very simple workflow.
Developers can just use `git cherry-pick -x` without having to edit the log message.
If instead we choose to adjust metadata in MFCs, developers will have to edit log messages explicitly via the use of `git cherry-pick --edit` or `git commit --amend`.
However, as compared to svn, at least the existing commit message can be pre-populated and metadata fields can be added or removed without having to re-enter the entire commit message.
The bottom line is that developers will likely need to curate their commit message for MFCs that are non-trivial.
==== Examples
===== Merging a Single Subversion Commit
This walks through the process of merging a commit to stable/12 that was originally committed to head in Subversion.
In this case, the original commit is r368685.
The first step is to map the Subversion commit to a Git hash.
Once you have fetched refs/notes/commits, you can pass the revision number to `git log --grep`:
[source,shell]
....
% git log main --grep 368685
commit ce8395ecfda2c8e332a2adf9a9432c2e7f35ea81
Author: John Baldwin <jhb@FreeBSD.org>
Date: Wed Dec 16 00:11:30 2020 +0000
Use the 't' modifier to print a ptrdiff_t.
Reviewed by: imp
Obtained from: CheriBSD
Sponsored by: DARPA
Differential Revision: https://reviews.freebsd.org/D27576
Notes:
svn path=/head/; revision=368685
....
Next, MFC the commit to a `stable/12` checkout:
[source,shell]
....
git checkout stable/12
git cherry-pick -x ce8395ecfda2c8e332a2adf9a9432c2e7f35ea81 --edit
....
Git will invoke the editor.
Use this to remove the metadata that only applied to the original commit (Phabricator URL and Reviewed by).
After the editor saves the updated log message, Git completes the commit:
[source,shell]
....
[stable/12 3e3a548c4874] Use the 't' modifier to print a ptrdiff_t.
Date: Wed Dec 16 00:11:30 2020 +0000
1 file changed, 1 insertion(+), 1 deletion(-)
....
The contents of the MFCd commit can be examined via `git show`:
[source,shell]
....
% git show
commit 3e3a548c487450825679e4bd63d8d1a67fd8bd2d (HEAD -> stable/12)
Author: John Baldwin <jhb@FreeBSD.org>
Date: Wed Dec 16 00:11:30 2020 +0000
Use the 't' modifier to print a ptrdiff_t.
Obtained from: CheriBSD
Sponsored by: DARPA
(cherry picked from commit ce8395ecfda2c8e332a2adf9a9432c2e7f35ea81)
diff --git a/sys/compat/linuxkpi/common/include/linux/printk.h b/sys/compat/linuxkpi/common/include/linux/printk.h
index 31802bdd2c99..e6510e9e9834 100644
--- a/sys/compat/linuxkpi/common/include/linux/printk.h
+++ b/sys/compat/linuxkpi/common/include/linux/printk.h
@@ -68,7 +68,7 @@ print_hex_dump(const char *level, const char *prefix_str,
printf("[%p] ", buf);
break;
case DUMP_PREFIX_OFFSET:
- printf("[%p] ", (const char *)((const char *)buf -
+ printf("[%#tx] ", ((const char *)buf -
(const char *)buf_old));
break;
default:
....
The MFC commit can now be published via `git push`
[source,shell]
....
% git push freebsd
Enumerating objects: 17, done.
Counting objects: 100% (17/17), done.
Delta compression using up to 4 threads
Compressing objects: 100% (7/7), done.
Writing objects: 100% (9/9), 817 bytes | 204.00 KiB/s, done.
Total 9 (delta 5), reused 1 (delta 1), pack-reused 0
To gitrepo-dev.FreeBSD.org:src.git
525bd9c9dda7..3e3a548c4874 stable/12 -> stable/12
....
===== Merging a Single Subversion Commit with a Conflict
This example is similar to the previous example except that the commit in question encounters a merge conflict.
In this case, the original commit is r368314.
As above, the first step is to map the Subversion commit to a Git hash:
[source,shell]
....
% git log main --grep 368314
commit 99963f5343a017e934e4d8ea2371a86789a46ff9
Author: John Baldwin <jhb@FreeBSD.org>
Date: Thu Dec 3 22:01:13 2020 +0000
Don't transmit mbufs that aren't yet ready on TOE sockets.
This includes mbufs waiting for data from sendfile() I/O requests, or
mbufs awaiting encryption for KTLS.
Reviewed by: np
MFC after: 2 weeks
Sponsored by: Chelsio Communications
Differential Revision: https://reviews.freebsd.org/D27469
Notes:
svn path=/head/; revision=368314
....
Next, MFC the commit to a `stable/12` checkout:
[source,shell]
....
% git checkout stable/12
% git cherry-pick -x 99963f5343a017e934e4d8ea2371a86789a46ff9 --edit
Auto-merging sys/dev/cxgbe/tom/t4_cpl_io.c
CONFLICT (content): Merge conflict in sys/dev/cxgbe/tom/t4_cpl_io.c
warning: inexact rename detection was skipped due to too many files.
warning: you may want to set your merge.renamelimit variable to at least 7123 and retry the command.
error: could not apply 99963f5343a0... Don't transmit mbufs that aren't yet ready on TOE sockets.
hint: after resolving the conflicts, mark the corrected paths
hint: with 'git add <paths>' or 'git rm <paths>'
hint: and commit the result with 'git commit'
....
In this case, the commit encountered a merge conflict in sys/dev/cxgbe/tom/t4_cpl_io.c as kernel TLS is not present in stable/12.
Note that Git does not invoke an editor to adjust the commit message due to the conflict.
`git status` confirms that this file has merge conflicts:
[source,shell]
....
% git status
On branch stable/12
Your branch is up to date with 'upstream/stable/12'.
You are currently cherry-picking commit 99963f5343a0.
(fix conflicts and run "git cherry-pick --continue")
(use "git cherry-pick --skip" to skip this patch)
(use "git cherry-pick --abort" to cancel the cherry-pick operation)
Unmerged paths:
(use "git add <file>..." to mark resolution)
both modified: sys/dev/cxgbe/tom/t4_cpl_io.c
no changes added to commit (use "git add" and/or "git commit -a")
....
After editing the file to resolve the conflict, `git status` shows the conflict as resolved:
[source,shell]
....
% git status
On branch stable/12
Your branch is up to date with 'upstream/stable/12'.
You are currently cherry-picking commit 99963f5343a0.
(all conflicts fixed: run "git cherry-pick --continue")
(use "git cherry-pick --skip" to skip this patch)
(use "git cherry-pick --abort" to cancel the cherry-pick operation)
Changes to be committed:
modified: sys/dev/cxgbe/tom/t4_cpl_io.c
....
The cherry-pick can now be completed:
[source,shell]
....
% git cherry-pick --continue
....
Since there was a merge conflict, Git invokes the editor to adjust the commit message.
Trim from the commit log the metadata fields that only applied to the original commit to head, and save the updated log message.
The contents of the MFC commit can be examined via `git show`:
[source,shell]
....
% git show
commit 525bd9c9dda7e7c7efad2d4570c7fd8e1a8ffabc (HEAD -> stable/12)
Author: John Baldwin <jhb@FreeBSD.org>
Date: Thu Dec 3 22:01:13 2020 +0000
Don't transmit mbufs that aren't yet ready on TOE sockets.
This includes mbufs waiting for data from sendfile() I/O requests, or
mbufs awaiting encryption for KTLS.
Sponsored by: Chelsio Communications
(cherry picked from commit 99963f5343a017e934e4d8ea2371a86789a46ff9)
diff --git a/sys/dev/cxgbe/tom/t4_cpl_io.c b/sys/dev/cxgbe/tom/t4_cpl_io.c
index 8e8c2b8639e6..43861f10b689 100644
--- a/sys/dev/cxgbe/tom/t4_cpl_io.c
+++ b/sys/dev/cxgbe/tom/t4_cpl_io.c
@@ -746,6 +746,8 @@ t4_push_frames(struct adapter *sc, struct toepcb *toep, int drop)
for (m = sndptr; m != NULL; m = m->m_next) {
int n;
+ if ((m->m_flags & M_NOTAVAIL) != 0)
+ break;
if (IS_AIOTX_MBUF(m))
n = sglist_count_vmpages(aiotx_mbuf_pages(m),
aiotx_mbuf_pgoff(m), m->m_len);
@@ -821,8 +823,9 @@ t4_push_frames(struct adapter *sc, struct toepcb *toep, int drop)
/* nothing to send */
if (plen == 0) {
- KASSERT(m == NULL,
- ("%s: nothing to send, but m != NULL", __func__));
+ KASSERT(m == NULL || (m->m_flags & M_NOTAVAIL) != 0,
+ ("%s: nothing to send, but m != NULL is ready",
+ __func__));
break;
}
@@ -910,7 +913,7 @@ t4_push_frames(struct adapter *sc, struct toepcb *toep, int drop)
toep->txsd_avail--;
t4_l2t_send(sc, wr, toep->l2te);
- } while (m != NULL);
+ } while (m != NULL && (m->m_flags & M_NOTAVAIL) == 0);
/* Send a FIN if requested, but only if there's no more data to send */
if (m == NULL && toep->flags & TPF_SEND_FIN)
....
The MFC commit can now be published via `git push`
[source,shell]
....
git push freebsd
Enumerating objects: 13, done.
Counting objects: 100% (13/13), done.
Delta compression using up to 4 threads
Compressing objects: 100% (7/7), done.
Writing objects: 100% (7/7), 819 bytes | 117.00 KiB/s, done.
Total 7 (delta 6), reused 0 (delta 0), pack-reused 0
To gitrepo.FreeBSD.org:src.git
f4d0bc6aa6b9..525bd9c9dda7 stable/12 -> stable/12
....
[[vendor-import-git]]
=== Vendor Imports with Git
This section describes the vendor import procedure with Git in detail.
==== Branch naming convention
All vendor branches and tags start with `vendor/`. These branches and tags are visible by default.
[NOTE]
====
This chapter follows the convention that the `freebsd` origin is the origin name for the official FreeBSD Git repository.
If you use a different convention, replace `freebsd` with the name you use instead in the examples below.
====
We will explore an example for updating NetBSD's mtree that is in our tree.
The vendor branch for this is `vendor/NetBSD/mtree`.
==== Updating an old vendor import
The vendor trees usually have only the subset of the third-party software that is appropriate to FreeBSD.
These trees are usually tiny in comparison to the FreeBSD tree.
Git worktrees are thus quite small and fast and the preferred method to use.
Make sure that whatever directory you choose below (the `../mtree`) does not currently exist.
[source,shell]
....
% git worktree add ../mtree vendor/NetBSD/mtree
....
==== Update the Sources in the Vendor Branch
Prepare a full, clean tree of the vendor sources. Import everything but merge only what is needed.
This example assumes NetBSD is checked out from their GitHub mirror in `~/git/NetBSD`.
Note that "upstream" might have added or removed files, so we want to make sure deletions are propagated as well.
rsync(1) is commonly installed, so I'll use that.
[source,shell]
....
% cd ../mtree
% rsync -va --del --exclude=".git" ~/git/NetBSD/usr.sbin/mtree/ .
% git add -A
% git status
...
% git diff --staged
...
% git commit -m"Vendor import of NetBSD's mtree at 2020-12-11"
[vendor/NetBSD/mtree 8e7aa25fcf1] Vendor import of NetBSD's mtree at 2020-12-11
7 files changed, 114 insertions(+), 82 deletions(-)
% git tag -a vendor/NetBSD/mtree/20201211
....
Note: I run the `git diff` and `git status` commands to make sure nothing weird was present.
Also I used `-m` to illustrate, but you should compose a proper message in an editor (using a commit message template).
It is also important to create an annotated tag, otherwise the push will be rejected.
Only annotated tags are allowed to be pushed.
The annotated tag gives you a chance to enter a commit message.
Enter the version you are importing, along with any salient new features or fixes in that version.
==== Updating the FreeBSD Copy
At this point you can push the import to vendor into our repo.
[source,shell]
....
% git push --follow-tags freebsd vendor/NetBSD/mtree
....
`--follow-tags` tells `git push` to also push tags associated with the locally committed revision.
==== Updating the FreeBSD source tree
Now you need to update the mtree in FreeBSD.
The sources live in `contrib/mtree` since it is upstream software.
[source,shell]
....
% cd ../src
% git subtree merge -P contrib/mtree vendor/NetBSD/mtree
....
This would generate a subtree merge commit of `contrib/mtree` against the local `vendor/NetBSD/mtree` branch.
If there were conflicts, you would need to fix them before committing.
Include details about the changes being merged in the merge commit message.
==== Rebasing your change against latest FreeBSD source tree
Because the current policy recommends against using merges, if the upstream FreeBSD `main` moved forward before you get a chance to push, you would have to redo the merge.
Regular `git rebase` or `git pull --rebase` doesn't know how to rebase a merge commit **as a merge commit**,
so instead of that you would have to recreate the commit.
The easiest way to do this would be to create a side branch with the **contents** of the merged tree:
[source,shell]
....
% cd ../src
% git fetch freebsd
% git checkout -b merge_result
% git merge freebsd/main
....
Typically, there would be no merge conflicts here (because developers tend to work on different components).
In the worst case scenario, you would still have to resolve merge conflicts, if there were any, but this should be extremely rare.
Now, checkout `freebsd/main` again as `new_merge`, and redo the merge:
[source,shell]
....
% git checkout -b new_merge freebsd/main
% git subtree merge -P contrib/mtree vendor/NetBSD/mtree
....
Instead of resolving the conflicts, perform this instead:
[source,shell]
....
% git checkout merge_result .
....
This will overwrite the conflicted files with the versions found in `merge_result`.
Examine the tree against `merge_result` to make sure that you haven't missed deleted files:
[source,shell]
....
% git diff merge_result
....
==== Pushing the changes
Once you are sure that you have a set of deltas you think is good, you can push it to a fork on GitHub or GitLab for others to review.
One nice thing about Git is that it allows you to publish rough drafts of your work for others to review.
While Phabricator is good for content review, publishing the updated vendor branch and merge commits lets others check the details as they will eventually appear in the repository.
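For example, publishing the work for review might look like this (a sketch; the remote name and URL are illustrative):

[source,shell]
....
% git remote add myfork git@github.com:myuser/freebsd-src.git
% git push myfork new_merge vendor/NetBSD/mtree
....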
After review, when you are sure it is a good change, you can push it to the FreeBSD repo:
[source,shell]
....
% git push freebsd main
....
=== Creating a new vendor branch
There are a number of ways to create a new vendor branch.
The recommended way is to create a new repository and then merge that with FreeBSD.
Suppose we are importing `glorbnitz` into the FreeBSD tree, release 3.1415.
For the sake of simplicity, we will not trim this release.
It is a simple user command that puts the nitz device into different magical glorb states and is small enough that trimming will not save much.
==== Create the repo
[source,shell]
....
% cd /some/where
% mkdir glorbnitz
% cd glorbnitz
% git init
% git checkout -b vendor/glorbnitz
....
At this point, you have a new repo, where all new commits will go on the `vendor/glorbnitz` branch.
Git experts can also do this right in their FreeBSD clone, using `git checkout --orphan vendor/glorbnitz` if they are more comfortable with that.
==== Copy the sources in
Since this is a new import, you can just cp the sources in, or use tar or even rsync as shown above.
We will then add everything, assuming there are no dot files.
[source,shell]
....
% cp -r ~/glorbnitz/* .
% git add *
....
At this point, you should have a pristine copy of glorbnitz ready to commit.
[source,shell]
....
% git commit -m"Import GlorbNitz frobnosticator revision 3.1415"
....
As above, I used `-m` for simplicity, but you should likely create a commit message that explains what a Glorb is and why you'd use a Nitz to get it.
Not everybody will know.
But for your actual commit, you should follow the <<commit-log-message,commit log message>> section instead of emulating the brief style used here.
==== Now import it into our repository
Now you need to import the branch into our repository.
[source,shell]
....
% cd /path/to/freebsd/repo/src
% git remote add glorbnitz /some/where/glorbnitz
% git fetch glorbnitz vendor/glorbnitz
....
Note that the vendor/glorbnitz branch is now in the repo. At this point the `/some/where/glorbnitz` repository can be deleted, if you like.
It was only a means to an end.
==== Tag and push
Steps from here on out are much the same as in the case of updating a vendor branch, though without the step of updating the vendor branch.
[source,shell]
....
% git worktree add ../glorbnitz vendor/glorbnitz
% cd ../glorbnitz
% git tag --annotate vendor/glorbnitz/3.1415
# Make sure the commit is good with "git show"
% git push --follow-tags freebsd vendor/glorbnitz
....
By 'good' we mean:
. All the right files are present
. None of the wrong files are present
. The vendor branch points at something sensible
. The tag looks good, and is annotated
. The commit message for the tag has a quick summary of what's new since the last tag
==== Time to finally merge it into the base tree
[source,shell]
....
% cd ../src
% git subtree add -P contrib/glorbnitz vendor/glorbnitz
# Make sure the commit is good with "git show"
% git commit --amend # one last sanity check on commit message
% git push freebsd
....
Here 'good' means:
. All the right files, and none of the wrong ones, were merged into contrib/glorbnitz.
. No other changes are in the tree.
. The commit message looks <<commit-log-message,good>>. It should contain a summary of what's changed since the last merge to the FreeBSD main line and any caveats.
. UPDATING should be updated if there is anything of note, such as user visible changes, important upgrade concerns, etc.
[NOTE]
====
This hasn't connected `glorbnitz` to the build yet.
How to do that is specific to the software being imported and is beyond the scope of this tutorial.
====
=== FreeBSD Src Committer Transition Guide
This section is designed to walk people through the conversion process from Subversion to Git, written from the source committer's point of view.
==== Migrating from a Subversion tree
This section will cover a couple of common scenarios for migrating from using the FreeBSD Subversion repo to the FreeBSD source Git repo.
The FreeBSD Git conversion is still in beta status, so some minor things may change between this and going into production.
The first thing to do is install Git. Any version of Git will do, though the latest one in ports / packages generally will be good.
Either build it from ports, or install it using pkg (though some folks might use `su` or `doas` instead of `sudo`):
[source,shell]
....
% sudo pkg install git
....
===== No staged changes migration
If you have no changes pending, the migration is straightforward.
In this, you abandon the Subversion tree and clone the Git repository.
It's likely best to retain your Subversion tree, in case there's something you've forgotten about there.
First, let's clone the repository:
[source,shell]
....
% git clone -o freebsd --config remote.freebsd.fetch='+refs/notes/*:refs/notes/*' https://git.freebsd.org/src.git freebsd-src
....
will create a clone of the FreeBSD src repository into a subdirectory called `freebsd-src` and include the 'notes' about the revisions.
We are currently mirroring the source repository to https://github.com/freebsd/freebsd-src.git as well.
https://github.com/freebsd/freebsd-legacy.git has the old GitHub mirror with the old hashes should you need that for your migration.
The GitHub `master` branch has been frozen.
As the default in Git has changed, we've shifted from `master` to `main`; the new repository uses `main`.
We also mirror the repository to GitLab at https://gitlab.com/FreeBSD/src.git .
It's useful to have the old Subversion revisions available.
This data is stored using Git notes, but Git doesn't fetch those by default.
The --config option and its argument above change the default so that the notes are fetched.
If you've cloned the repository without this, or wish to add notes to a previously cloned repository, use the following commands:
[source,shell]
....
% git config --add remote.freebsd.fetch "+refs/notes/*:refs/notes/*"
% git fetch
....
At this point you have the src checked out into a Git tree, ready to do other things.
===== But I have changes that I've not committed
If you are migrating from a tree that has changes you've not yet committed to FreeBSD, you'll need to follow the steps from the previous section first, and then follow these.
[source,shell]
....
% cd path-to-svn-checkout-tree
% svn diff > /tmp/src.diff
% cd _mumble_/freebsd-src
% git checkout -b working
....
This will create a diff of your current changes.
The last command creates a branch called `working` though you can call it whatever you want.
[source,shell]
....
% git apply /tmp/src.diff
....
This will apply all your pending changes to the working tree.
This doesn't commit the change, so you'll need to make this permanent:
[source,shell]
....
% git add _files_
% git commit
....
The last command will commit these changes to the branch.
The editor will prompt you for a commit message.
Enter one as if you were committing to FreeBSD.
At this point, your work is preserved, and in the Git repository.
===== Keeping current
So, time passes.
It's time now to update the tree for the latest changes upstream.
When you checkout `main` make sure that you have no diffs.
It's a lot easier to commit those to a branch (or use `git stash`) before doing the following.
If you are used to `git pull`, we strongly recommend using the `--ff-only` option, and further setting it as the default option.
Alternatively, `git pull --rebase` is useful if you have changes staged in the main branch.
[source,shell]
....
% git config --global pull.ff only
....
Omit `--global` if you want this setting to apply only to this repository.
[source,shell]
....
% cd freebsd-src
% git checkout main
% git pull (--ff-only|--rebase)
....
There is a common trap: the combination command `git pull` will try to perform a merge, which sometimes creates a merge commit that didn't exist before.
This can be harder to recover from.
The longer form is also recommended.
[source,shell]
....
% cd freebsd-src
% git checkout main
% git fetch freebsd
% git merge --ff-only freebsd/main
....
These commands reset your tree to the main branch, and then update it from where you pulled the tree from originally.
It's important to switch to `main` before doing this so it moves forward.
Now, it's time to move the changes forward:
[source,shell]
....
% git rebase -i main working
....
This will bring up an interactive screen to change the defaults.
For now, just exit the editor.
Everything should just apply.
If not, then you'll need to resolve the diffs.
https://docs.github.com/en/free-pro-team@latest/github/using-git/resolving-merge-conflicts-after-a-git-rebase[This github document] can help you navigate this process.
===== Time to push changes upstream
First, ensure that the push URL is properly configured for the upstream repository.
[source,shell]
....
% git remote set-url --push freebsd ssh://git@gitrepo.freebsd.org/src.git
....
Then, verify that your user name and email are configured correctly.
We require that they exactly match your passwd entry in the FreeBSD cluster.
Use
[source,shell]
....
freefall% gen-gitconfig.sh
....
on freefall.freebsd.org to get a recipe that you can use directly, assuming /usr/local/bin is in the PATH.
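If you prefer to set these values by hand, the equivalent plain Git commands look something like the following sketch (the name and address are placeholders; they must exactly match your FreeBSD cluster passwd entry):
[source,shell]
....
% cd freebsd-src
% git config user.name "Jane Committer"
% git config user.email "jane@FreeBSD.org"
....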
The command below merges the 'working' branch into the upstream mainline.
It's important that you curate your changes to be just like you want them in the FreeBSD source repo before doing this.
[source,shell]
....
% git push freebsd working:main
....
If your push is rejected due to losing a commit race, rebase your branch before trying again:
[source,shell]
....
% git checkout working
% git fetch freebsd
% git rebase freebsd/main
% git push freebsd working:main
....
===== Finding the Subversion Revision
You'll need to make sure that you've fetched the notes (see the `No staged changes migration` section above for details).
Once you have these, the notes will show up in the `git log` output like so:
[source,shell]
....
% git log
....
If you have a specific version in mind, you can use this construct:
[source,shell]
....
% git log --grep revision=XXXX
....
to find the specific revision.
The hex number after 'commit' is the hash you can use to refer to this commit.
==== Migrating from GitHub fork
Note: as of this writing, https://github.com/freebsd/freebsd-src is mirroring all official branches, along with a `master` branch which is the legacy svn2git result.
The `master` branch will not be updated anymore, and the link:https://github.com/freebsd/freebsd-src/commit/de1aa3dab23c06fec962a14da3e7b4755c5880cf[last commit] contains the instructions for migrating to the new `main` branch.
We'll retain the `master` branch for a certain time, but in the future it will only be kept in the link:https://github.com/freebsd/freebsd-legacy[freebsd-legacy] repository.
When migrating branches from a GitHub fork of the old GitHub mirror to the official repo, the process is straightforward.
This assumes that you have a `freebsd` upstream pointing to GitHub, adjust if necessary.
This also assumes a clean tree before starting...
===== Add the new `freebsd` upstream repository:
[source,shell]
....
% git remote add freebsd https://git.freebsd.org/src.git
% git fetch freebsd
% git checkout --track freebsd/main
....
===== Rebase all your WIP branches.
For each branch FOO, do the following after fetching the `freebsd` sources and creating a local `main` branch with the above checkout:
[source,shell]
....
% git rebase -i freebsd/master FOO --onto main
....
And you'll now be tracking the official repository.
You can then follow the `Keeping Current` section above to stay up to date.
If you need to then commit work to FreeBSD, you can do so following the `Time to push changes upstream` instructions.
You'll need to do the following once to update the push URL if you are a FreeBSD committer:
[source,shell]
....
% git remote set-url --push freebsd ssh://git@gitrepo.freebsd.org/src.git
....
(Note that gitrepo.freebsd.org will be changed to repo.freebsd.org in the future.)
You will also need to add `freebsd` as the location to push to.
The author recommends keeping your GitHub repository as the default push location, so that anything you push into FreeBSD is pushed deliberately, with an explicit command.
[[git-faq]]
=== Git FAQ
This section provides a number of targeted answers to questions that are likely to come up often for users and developers.
[NOTE]
====
We use the convention of naming the remote for the FreeBSD repository 'freebsd' rather than the default 'origin', to allow
people to use 'origin' for their own development and to minimize "whoops" pushes to the wrong repository.
====
==== Users
===== How do I track -current and -stable with only one copy of the repository?
**Q:** Although disk space is not a huge issue, it's more efficient to use only one copy of the repository.
With SVN mirroring, I could checkout multiple trees from the same repository.
How do I do this with Git?
**A:** You can use Git worktrees.
There's a number of ways to do this, but the simplest way is to use a clone to track -current, and a worktree to track stable releases.
While using a 'bare repository' has been put forward as a way to cope, it's more complicated and will not be documented here.
First, you need to clone the FreeBSD repository, shown here cloning into `freebsd-current` to reduce confusion.
$URL is whatever mirror works best for you:
[source,shell]
....
% git clone -o freebsd --config remote.freebsd.fetch='+refs/notes/*:refs/notes/*' $URL freebsd-current
....
then once that's cloned, you can simply create a worktree from it:
[source,shell]
....
% cd freebsd-current
% git worktree add ../freebsd-stable-12 stable/12
....
this will checkout `stable/12` into a directory named `freebsd-stable-12` that's a peer to the `freebsd-current` directory.
Once created, it's updated very similarly to how you might expect:
[source,shell]
....
% cd freebsd-current
% git checkout main
% git pull --ff-only
# changes from upstream now local and current tree updated
% cd ../freebsd-stable-12
% git merge --ff-only freebsd/stable/12
# now your stable/12 is up to date too
....
I recommend using `--ff-only` because it's safer and you avoid accidentally getting into a 'merge nightmare' where you have an extra change in your tree, forcing a complicated merge rather than a simple one.
Here's https://adventurist.me/posts/00296[a good writeup] that goes into more detail.
==== Developers
===== Ooops! I committed to `main` instead of a branch.
**Q:** From time to time, I goof up and commit to main instead of to a branch. What do I do?
**A:** First, don't panic.
Second, don't push.
In fact, you can fix almost anything if you haven't pushed.
All the answers in this section assume no push has happened.
The following answer assumes you committed to `main` and want to create a branch called `issue`:
[source,shell]
....
% git branch issue # Create the 'issue' branch
% git reset --hard freebsd/main # Reset 'main' back to the official tip
% git checkout issue # Back to where you were
....
===== Ooops! I committed something to the wrong branch!
**Q:** I was working on a feature on the `wilma` branch, but accidentally committed a change relevant to the `fred` branch in 'wilma'.
What do I do?
**A:** The answer is similar to the previous one, but with cherry picking.
This assumes there's only one commit on wilma, but will generalize to more complicated situations.
It also assumes that it's the last commit on wilma (hence using wilma in the `git cherry-pick` command), but that too can be generalized.
[source,shell]
....
# We're on branch wilma
% git checkout fred # move to fred branch
% git cherry-pick wilma # copy the misplaced commit
% git checkout wilma # go back to wilma branch
% git reset --hard HEAD^ # move what wilma refers to back 1 commit
....
Git experts would first rewind the wilma branch by 1 commit, switch over to fred and then use `git reflog` to see what that 1 deleted commit was and
cherry-pick it over.
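A rough sketch of that variant, with `$HASH` standing in for whatever hash `git reflog` reports for the dropped commit:
[source,shell]
....
# We're on branch wilma
% git reset --hard HEAD^   # rewind wilma by one commit
% git checkout fred        # switch to the fred branch
% git reflog               # find the hash of the commit that was just dropped
% git cherry-pick $HASH    # copy it over to fred
....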
**Q:** But what if I want to commit a few changes to `main`, but keep the rest in `wilma` for some reason?
**A:** The same technique above also works if you are wanting to 'land' parts of the branch you are working on into `main` before the rest of the branch is ready (say you noticed an unrelated typo, or fixed an incidental bug).
You can cherry pick those changes into main, then push to the parent repository.
Once you've done that, cleanup couldn't be simpler: just `git rebase -i`.
Git will notice you've done this and skip the common changes automatically (even if you had to change the commit message or tweak the commit slightly).
There's no need to switch back to wilma to adjust it: just rebase!
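A minimal sketch of that flow, assuming the unrelated fix is commit `$HASH` on `wilma` and your push URL is already configured as described above:
[source,shell]
....
% git checkout main
% git cherry-pick $HASH      # land only the unrelated fix on main
% git push freebsd main      # push it to the parent repository
% git rebase -i main wilma   # rebase wilma; the already-landed change is skipped
....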
**Q:** I want to split off some changes from branch `wilma` into branch `fred`
**A:** The more general answer would be the same as the previous.
You'd checkout/create the `fred` branch, cherry pick the changes you want from `wilma` one at a time, then rebase `wilma` to remove those changes you cherry picked.
`git rebase -i main wilma` will toss you into an editor, and remove the `pick` lines that correspond to the commits you copied to `fred`.
If all goes well, and there are no conflicts, you're done.
If not, you'll need to resolve the conflicts as you go.
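Condensed into commands, with `$HASH1` and `$HASH2` standing in for the commits that belong in `fred`:
[source,shell]
....
% git checkout -b fred main       # create fred starting from main
% git cherry-pick $HASH1 $HASH2   # copy the selected commits from wilma
% git rebase -i main wilma        # then delete those picks from wilma in the editor
....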
The other way to do this would be to checkout `wilma` and then create the branch `fred` to point to the same point in the tree.
You can then `git rebase -i` both these branches, selecting the changes you want in `fred` or `wilma` by retaining the pick lines, and deleting the rest in the editor.
Some people would create a tag/branch called `pre-split` before starting in case something goes wrong in the split.
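For example, recording that safety point before you start is a one-liner:
[source,shell]
....
% git branch pre-split wilma   # remember where wilma was before the split
....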
You can undo it with the following sequence:
[source,shell]
....
% git checkout pre-split # Go back
% git branch -D fred # delete the fred branch
% git checkout -B wilma # reset the wilma branch
% git branch -d pre-split # Pretend it didn't happen
....
The last step is optional.
If you are going to try again to split, you'd omit it.
**Q:** But I did things as I read along and didn't see your advice at the end to create a branch, and now `fred` and `wilma` are all screwed up.
How do I find out what `wilma` was before I started?
I don't know how many times I moved things around.
**A:** All is not lost. You can figure it out, so long as it hasn't been too long, or too many commits (hundreds).
So I created a wilma branch and committed a couple of things to it, then decided I wanted to split it into fred and wilma.
Nothing weird happened when I did that, but let's say it did.
The way to look at what you've done is with the `git reflog`:
[source,shell]
....
% git reflog
6ff9c25 (HEAD -> wilma) HEAD@{0}: rebase -i (finish): returning to refs/heads/wilma
6ff9c25 (HEAD -> wilma) HEAD@{1}: rebase -i (start): checkout main
869cbd3 HEAD@{2}: rebase -i (start): checkout wilma
a6a5094 (fred) HEAD@{3}: rebase -i (finish): returning to refs/heads/fred
a6a5094 (fred) HEAD@{4}: rebase -i (pick): Encourage contributions
1ccd109 (freebsd/main, main) HEAD@{5}: rebase -i (start): checkout main
869cbd3 HEAD@{6}: rebase -i (start): checkout fred
869cbd3 HEAD@{7}: checkout: moving from wilma to fred
869cbd3 HEAD@{8}: commit: Encourage contributions
...
%
....
Here we see the changes I've made.
You can use it to figure out where things went wrong.
I'll just point out a few things here.
The first one is that HEAD@{X} is a 'commitish' thing, so you can use it as an argument to a command, although if that command commits anything to the repository, the X numbers change.
You can also use the hash (first column).
Next, 'Encourage contributions' was the last commit I made to `wilma` before I decided to split things up.
You can also see the same hash is there when I created the `fred` branch to do that.
I started by rebasing `fred` and you see the 'start', each step, and the 'finish' for that process.
While we don't need it here, you can figure out exactly what happened.
Fortunately, to fix this, you can follow the prior answer's steps, but with the hash `869cbd3` instead of `pre-split`.
While that seems a bit verbose, it's easy to remember since you're doing one thing at a time.
You can also stack:
[source,shell]
....
% git checkout -B wilma 869cbd3
% git branch -D fred
....
and you are ready to try again.
The 'checkout -B' with the hash combines checking out and creating a branch for it.
The -B instead of -b forces the movement of a pre-existing branch.
Either way works, which is what's great (and awful) about Git.
One reason I tend to use `git checkout -B xxxx hash` instead of checking out the hash, and then creating / moving the branch is purely to avoid the slightly distressing message about detached heads:
[source,shell]
....
% git checkout 869cbd3
M faq.md
Note: checking out '869cbd3'.
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:
git checkout -b <new-branch-name>
HEAD is now at 869cbd3 Encourage contributions
% git checkout -B wilma
....
this produces the same effect, but I have to read a lot more and severed heads aren't an image I like to contemplate.
===== Ooops! I did a `git pull` and it created a merge commit, what do I do?
**Q:** I was on autopilot and did a `git pull` for my development tree and that created a merge commit on the mainline.
How do I recover?
**A:** This can happen when you invoke the pull with your development branch checked out.
Right after the pull, you will have the new merge commit checked out.
Git supports a `HEAD^#` syntax to examine the parents of a merge commit:
[source,shell]
....
git log --oneline HEAD^1 # Look at the first parent's commits
git log --oneline HEAD^2 # Look at the second parent's commits
....
From those logs, you can easily identify which commit is your development work.
Then you simply reset your branch to the corresponding `HEAD^#`:
[source,shell]
....
git reset --hard HEAD^2
....
**Q:** But I also need to fix my `main` branch. How do I do that?
**A:** Git keeps track of the remote repository branches in a `freebsd/` namespace.
To fix your `main` branch, just make it point to the remote's `main`:
[source,shell]
....
git branch -f main freebsd/main
....
There's nothing magical about branches in Git: they are just labels on a graph that are automatically moved forward by making commits.
So the above works because you're just moving a label.
There's no metadata about the branch that needs to be preserved due to this.
===== Mixing and matching branches
**Q:** So I have two branches `worker` and `async` that I'd like to combine into one branch called `feature`
while maintaining the commits in both.
**A:** This is a job for cherry pick.
[source,shell]
....
% git checkout worker
% git checkout -b feature # create a new branch
% git cherry-pick main..async # bring in the changes
....
You now have a new branch called `feature`.
This branch combines commits from both branches.
You can further curate it with `git rebase`.
**Q:** I have a branch called `driver` and I'd like to break it up into `kernel` and `userland` so I can evolve them separately and commit each branch as it becomes ready.
**A:** This takes a little bit of prep work, but `git rebase` will do the heavy
lifting here.
[source,shell]
....
% git checkout driver # Checkout the driver
% git checkout -b kernel # Create kernel branch
% git checkout -b userland # Create userland branch
....
Now you have two identical branches.
So, it's time to separate out the commits.
We'll assume first that all the commits in `driver` go into either the `kernel` or the `userland` branch, but not both.
[source,shell]
....
% git rebase -i main kernel
....
and include the changes you want (with a 'p' or 'pick' line) and delete the commits you don't (this sounds scary, but if worst comes to worst, you can throw this all away and start over with the `driver` branch since you've not yet moved it).
[source,shell]
....
% git rebase -i main userland
....
and do the same thing you did with the `kernel` branch.
**Q:** Oh great! I followed the above and forgot a commit in the `kernel` branch.
How do I recover?
**A:** You can use the `driver` branch to find the hash of the commit that is missing and
cherry-pick it.
[source,shell]
....
% git checkout kernel
% git log driver
% git cherry-pick $HASH
....
**Q:** OK. I have the same situation as the above, but my commits are all mixed up.
I need parts of one commit to go to one branch and the rest to go to the other.
In fact, I have several.
Your rebase method for selecting commits sounds tricky.
**A:** In this situation, you'd be better off to curate the original branch to separate
out the commits, and then use the above method to split the branch.
So let's assume that there's just one commit with a clean tree.
You can either use `git rebase` with an `edit` line, or you can use this with the commit on the tip.
The steps are the same either way.
The first thing we need to do is to back up one commit while leaving the changes uncommitted in the tree:
[source,shell]
....
% git reset HEAD^
....
Note: Do not, repeat do not, add `--hard` here since that also removes the changes from your tree.
Now, if you are lucky, the change needing to be split up falls entirely along file lines.
In that case you can just do the usual `git add` for the files in each group then do a `git commit`.
Note: when you do this, you'll lose the commit message when you do the reset, so if you need it for some reason, you should save a copy (though `git log $HASH` can recover it).
If you are not lucky, you'll need to split apart files.
There's another tool to do that which you can apply one file at a time.
[source,shell]
....
git add -p foo/bar.c
....
will step through the diffs, prompting you, one at a time, whether to include or exclude the hunk.
Once you're done, `git commit` and you'll have the remainder in your tree.
You can run it multiple times as well, and even over multiple files (though I find it easier to do one file at a time
and use `git rebase -i` to fold the related commits together).
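Putting those pieces together, here is a minimal sketch of splitting the tip commit of the current branch into two (the file names are placeholders):
[source,shell]
....
% git reset HEAD^             # uncommit the tip; the changes stay in the tree
% git add foo/kern_bits.c     # stage the files for the first commit
% git add -p foo/mixed.c      # stage only some hunks from a file with mixed changes
% git commit                  # create the first, smaller commit
% git add -u                  # stage the remaining tracked changes
% git commit                  # create the second commit with the rest
....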
==== Cloning and Mirroring
**Q:** I'd like to mirror the entire Git repository, how do I do that?
**A:** If all you want to do is mirror, then
[source,shell]
....
% git clone --mirror $URL
....
will do the trick.
However, there are two disadvantages to this if you want to use it for anything other than a mirror that you re-clone from.
First, this is a 'bare repository' which has the repository database, but no checked out worktree.
This is great for mirroring, but terrible for day to day work.
There's a number of ways around this with 'git worktree':
[source,shell]
....
% git clone --mirror https://git.freebsd.org/ports.git ports.git
% cd ports.git
% git worktree add ../ports main
% git worktree add ../quarterly branches/2020Q4
% cd ../ports
....
But if you aren't using your mirror for further local clones, then it's a poor match.
The second disadvantage is that Git normally rewrites the refs (branch name, tags, etc) from upstream so that your local refs can evolve independently of upstream.
This means that you'll lose changes if you are committing to this repository on anything other than private project branches.
**Q:** So what can I do instead?
**A:** Well, you can stuff all of the upstream repository's refs into a private namespace in your local repository.
Git clones everything via a 'refspec' and the default refspec is:
[source,shell]
....
fetch = +refs/heads/*:refs/remotes/freebsd/*
....
which says just fetch the branch refs.
However, the FreeBSD repository has a number of other things in it.
To see those, you can add explicit refspecs for each ref namespace, or you can fetch everything.
To setup your repository to do that:
[source,shell]
....
git config --add remote.freebsd.fetch '+refs/*:refs/freebsd/*'
....
which will put everything in the upstream repository into your local repository's 'refs/freebsd/' namespace.
Please note that this also grabs all the unconverted vendor branches, and the number of refs associated with them is quite large.
You'll need to refer to these 'refs' with their full name because they aren't in any of Git's regular namespaces.
[source,shell]
....
git log refs/freebsd/vendor/zlib/1.2.10
....
would look at the log for the vendor branch for zlib starting at 1.2.10.
=== Collaborating with others
One of the keys to good software development on a project as large as FreeBSD is the ability to collaborate with others before you push your changes to the tree.
The FreeBSD project's Git repositories do not, yet, allow user created branches to be pushed to the repository, and therefore if you wish to share your changes with others you must use another mechanism, such as a hosted GitLab or GitHub, in order to share changes in a user generated branch.
The following instructions show how to set up a user generated branch, based on the FreeBSD main branch, and push it to GitHub.
Before you begin, make sure that your local Git repo is up to date and has the correct origins set <<keeping_current,as shown above.>>
[source,shell]
....
% git remote -v
freebsd https://git.freebsd.org/src.git (fetch)
freebsd ssh://git@gitrepo.freebsd.org/src.git (push)
....
The first step is to create a fork of https://github.com/freebsd/freebsd-src[FreeBSD] on GitHub following these https://docs.github.com/en/github/getting-started-with-github/fork-a-repo[guidelines].
The destination of the fork should be your own personal GitHub account (`gvnn3` in my case).
Now add a remote on your local system that points to your fork:
[source,shell]
....
% git remote add github git@github.com:gvnn3/freebsd-src.git
% git remote -v
github git@github.com:gvnn3/freebsd-src.git (fetch)
github git@github.com:gvnn3/freebsd-src.git (push)
freebsd https://git.freebsd.org/src.git (fetch)
freebsd ssh://git@gitrepo.freebsd.org/src.git (push)
....
With this in place you can create a branch <<keeping_a_local_branch,as shown above.>>
[source,shell]
....
% git checkout -b gnn-github
....
Make whatever modifications you wish in your branch. Build, test, and once you're ready to collaborate with others it's time to push your changes into your hosted branch.
Before you can push you'll have to set the appropriate upstream, as Git will tell you the first time you try to push to your +github+ remote:
[source,shell]
....
% git push github
fatal: The current branch gnn-github has no upstream branch.
To push the current branch and set the remote as upstream, use
git push --set-upstream github gnn-github
....
Setting the upstream as +git+ advises allows the push to succeed:
[source,shell]
....
% git push --set-upstream github gnn-feature
Enumerating objects: 20486, done.
Counting objects: 100% (20486/20486), done.
Delta compression using up to 8 threads
Compressing objects: 100% (12202/12202), done.
Writing objects: 100% (20180/20180), 56.25 MiB | 13.15 MiB/s, done.
Total 20180 (delta 11316), reused 12972 (delta 7770), pack-reused 0
remote: Resolving deltas: 100% (11316/11316), completed with 247 local objects.
remote:
remote: Create a pull request for 'gnn-feature' on GitHub by visiting:
remote: https://github.com/gvnn3/freebsd-src/pull/new/gnn-feature
remote:
To github.com:gvnn3/freebsd-src.git
* [new branch] gnn-feature -> gnn-feature
Branch 'gnn-feature' set up to track remote branch 'gnn-feature' from 'github'.
....
Subsequent changes to the same branch will push correctly by default:
[source,shell]
....
% git push
Enumerating objects: 4, done.
Counting objects: 100% (4/4), done.
Delta compression using up to 8 threads
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 314 bytes | 1024 bytes/s, done.
Total 3 (delta 1), reused 1 (delta 0), pack-reused 0
remote: Resolving deltas: 100% (1/1), completed with 1 local object.
To github.com:gvnn3/freebsd-src.git
9e5243d7b659..cf6aeb8d7dda gnn-feature -> gnn-feature
....
At this point your work is now in your branch on +GitHub+ and you can
share the link with other collaborators.
[[vcs-history]]
== Version Control History
The project has moved to <<git-primer,git>>.
The FreeBSD source repository switched from CVS to Subversion on May 31st, 2008.
The first real SVN commit is __r179447__.
The source repository switched from Subversion to Git on December 23rd, 2020.
The last real svn commit is __r368820__.
The first real git commit hash is __5ef5f51d2bef80b0ede9b10ad5b0e9440b60518c__.
The FreeBSD `doc/www` repository switched from CVS to Subversion on May 19th, 2012.
The first real SVN commit is __r38821__.
The documentation repository switched from Subversion to Git on December 8th, 2020.
The last SVN commit is __r54737__.
The first real git commit hash is __3be01a475855e7511ad755b2defd2e0da5d58bbe__.
The FreeBSD `ports` repository switched from CVS to Subversion on July 14th, 2012.
The first real SVN commit is __r300894__.
The ports repository switched from Subversion to Git on April 6, 2021.
The last SVN commit is __r569609__.
The first real git commit hash is __ed8d3eda309dd863fb66e04bccaa513eee255cbf__.
[[conventions]]
== Setup, Conventions, and Traditions
There are a number of things to do as a new developer.
The first set of steps is specific to committers only.
These steps must be done by a mentor for those who are not committers.
[[conventions-committers]]
=== For New Committers
Those who have been given commit rights to the FreeBSD repositories must follow these steps.
* Get mentor approval before committing each of these changes!
* The [.filename]#.ent# and [.filename]#.xml# files mentioned below exist in the FreeBSD Documentation Project SVN repository at `svn+ssh://repo.FreeBSD.org/doc/`.
* All [.filename]#src# commits go to FreeBSD-CURRENT first before being merged to FreeBSD-STABLE. The FreeBSD-STABLE branch must maintain ABI and API compatibility with earlier versions of that branch. Do not merge changes that break this compatibility.
[[commit-steps]]
[.procedure]
====
*Procedure 1. Steps for New Committers*
. Add an Author Entity
+
[.filename]#doc/shared/authors.adoc# - Add an author entity. Later steps depend on this entity, and missing this step will cause the [.filename]#doc/# build to fail. This is a relatively easy task, but remains a good first test of version control skills.
. Update the List of Developers and Contributors
+
[.filename]#doc/en/articles/contributors/contrib-committers.adoc# - Add an entry to the "Developers" section of the link:{contributors}#staff-committers[Contributors List]. Entries are sorted by last name.
+
[.filename]#doc/en/articles/contributors/contrib-additional.adoc# - _Remove_ the entry from the "Additional Contributors" section. Entries are sorted by first name.
. Add a News Item
+
[.filename]#website/data/en/news/news.toml# - Add an entry. Look for the other entries that announce new committers and follow the format. Use the date from the commit bit approval email from mailto:core@FreeBSD.org[core@FreeBSD.org].
. Add a PGP Key
+
`{des}` has written a shell script ([.filename]#documentation/tools/addkey.sh#) to make this easier. See the https://cgit.freebsd.org/doc/plain/documentation/static/pgpkeys/README[README] file for more information.
+
Use [.filename]#documentation/tools/checkkey.sh# to verify that keys meet minimal best-practices standards.
+
After adding and checking a key, add both updated files to source control and then commit them. Entries in this file are sorted by last name.
+
[NOTE]
======
It is very important to have a current PGP/GnuPG key in the repository. The key may be required for positive identification of a committer. For example, the `{admins}` might need it for account recovery. A complete keyring of `FreeBSD.org` users is available for download from link:https://www.FreeBSD.org/doc/pgpkeyring.txt[https://www.FreeBSD.org/doc/pgpkeyring.txt].
======
. Update Mentor and Mentee Information
+
[.filename]#base/head/share/misc/committers-repository.dot# - Add an entry to the current committers section, where _repository_ is `doc`, `ports`, or `src`, depending on the commit privileges granted.
+
Add an entry for each additional mentor/mentee relationship in the bottom section.
. Generate a Kerberos Password
+
See <<kerberos-ldap>> to generate or set a Kerberos password for use with other FreeBSD services like the bug tracking database.
. Optional: Enable Wiki Account
+
https://wiki.freebsd.org[FreeBSD Wiki] Account - A wiki account allows sharing projects and ideas. Those who do not yet have an account can follow instructions on the https://wiki.freebsd.org/AboutWiki[AboutWiki Page] to obtain one. Contact mailto:wiki-admin@FreeBSD.org[wiki-admin@FreeBSD.org] if you need help with your Wiki account.
. Optional: Update Wiki Information
+
Wiki Information - After gaining access to the wiki, some people add entries to the https://wiki.freebsd.org/HowWeGotHere[How We Got Here], https://wiki.freebsd.org/IRC/Nicknames[IRC Nicks], and https://wiki.freebsd.org/Community/Dogs[Dogs of FreeBSD] pages.
. Optional: Update Ports with Personal Information
+
[.filename]#ports/astro/xearth/files/freebsd.committers.markers# and [.filename]#src/usr.bin/calendar/calendars/calendar.freebsd# - Some people add entries for themselves to these files to show where they are located or the date of their birthday.
. Optional: Prevent Duplicate Mailings
+
Subscribers to {dev-commits-doc-all}, {dev-commits-ports-all} or {dev-commits-src-all} might wish to unsubscribe to avoid receiving duplicate copies of commit messages and followups.
====
[[conventions-everyone]]
=== For Everyone
[[conventions-everyone-steps]]
[.procedure]
====
. Introduce yourself to the other developers, otherwise no one will have any idea who you are or what you are working on. The introduction need not be a comprehensive biography, just write a paragraph or two about who you are, what you plan to be working on as a developer in FreeBSD, and who will be your mentor. Email this to the {developers-name} and you will be on your way!
. Log into `freefall.FreeBSD.org` and create a [.filename]#/var/forward/user# (where _user_ is your username) file containing the e-mail address where you want mail addressed to _yourusername_@FreeBSD.org to be forwarded. This includes all of the commit messages as well as any other mail addressed to the {committers-name} and the {developers-name}. Really large mailboxes which have taken up permanent residence on `freefall` may get truncated without warning if space needs to be freed, so forward it or save it elsewhere.
+
[NOTE]
======
If your e-mail system uses SPF with strict rules, you should whitelist `mx2.FreeBSD.org` from SPF checks.
======
+
Due to the severe load dealing with SPAM places on the central mail servers that do the mailing list processing, the front-end server does do some basic checks and will drop some messages based on these checks. At the moment proper DNS information for the connecting host is the only check in place but that may change. Some people blame these checks for bouncing valid email. To have these checks turned off for your email, create a file named [.filename]#~/.spam_lover# on `freefall.FreeBSD.org`.
+
[NOTE]
======
Those who are developers but not committers will not be subscribed to the committers or developers mailing lists. The subscriptions are derived from the access rights.
======
====
[[smtp-setup]]
==== SMTP Access Setup
For those willing to send e-mail messages through the FreeBSD.org infrastructure, follow the instructions below:
[.procedure]
====
. Point your mail client at `smtp.FreeBSD.org:587`.
. Enable STARTTLS.
. Ensure your `From:` address is set to `_yourusername_@FreeBSD.org`.
. For authentication, you can use your FreeBSD Kerberos username and password (see <<kerberos-ldap>>). The `_yourusername_/mail` principal is preferred, as it is only valid for authenticating to mail resources.
+
[NOTE]
======
Do not include `@FreeBSD.org` when entering in your username.
======
+
.Additional Notes
[NOTE]
======
* The server will only accept mail from `_yourusername_@FreeBSD.org`. If you are authenticated as one user, you are not permitted to send mail from another.
* A header will be appended with the SASL username: (`Authenticated sender: _username_`).
* The host has various rate limits in place to cut down on brute force attempts.
======
====
[[smtp-setup-local-mta]]
===== Using a Local MTA to Forward Emails to the FreeBSD.org SMTP Service
It is also possible to use a local MTA to forward locally sent emails to the FreeBSD.org SMTP servers.
[[smtp-setup-local-postfix]]
.Using Postfix
[example]
====
To tell a local Postfix instance that anything from `_yourusername_@FreeBSD.org` should be forwarded to the FreeBSD.org servers, add this to your [.filename]#main.cf#:
[.programlisting]
....
sender_dependent_relayhost_maps = hash:/usr/local/etc/postfix/relayhost_maps
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/usr/local/etc/postfix/sasl_passwd
smtp_use_tls = yes
....
Create [.filename]#/usr/local/etc/postfix/relayhost_maps# with the following content:
[.programlisting]
....
yourusername@FreeBSD.org [smtp.freebsd.org]:587
....
Create [.filename]#/usr/local/etc/postfix/sasl_passwd# with the following content:
[.programlisting]
....
[smtp.freebsd.org]:587 yourusername:yourpassword
....
If the email server is used by other people, you may want to prevent them from sending e-mails from your address. To achieve this, add this to your [.filename]#main.cf#:
[.programlisting]
....
smtpd_sender_login_maps = hash:/usr/local/etc/postfix/sender_login_maps
smtpd_sender_restrictions = reject_known_sender_login_mismatch
....
Create [.filename]#/usr/local/etc/postfix/sender_login_maps# with the following content:
[.programlisting]
....
yourusername@FreeBSD.org yourlocalusername
....
Where _yourlocalusername_ is the SASL username used to connect to the local instance of Postfix.
====
[[smtp-setup-local-opensmtpd]]
.Using OpenSMTPD
[example]
====
To tell a local OpenSMTPD instance that anything from `_yourusername_@FreeBSD.org` should be forwarded to the FreeBSD.org servers, add this to your [.filename]#smtpd.conf#:
[.programlisting]
....
action "freebsd" relay host smtp+tls://freebsd@smtp.freebsd.org:587 auth <secrets>
match from any auth yourlocalusername mail-from "_yourusername_@freebsd.org" for any action "freebsd"
....
Where _yourlocalusername_ is the SASL username used to connect to the local instance of OpenSMTPD.
Create [.filename]#/usr/local/etc/mail/secrets# with the following content:
[.programlisting]
....
freebsd yourusername:yourpassword
....
====
[[mentors]]
=== Mentors
All new developers have a mentor assigned to them for the first few months.
A mentor is responsible for teaching the mentee the rules and conventions of the project and guiding their first steps in the developer community.
The mentor is also personally responsible for the mentee's actions during this initial period.
For committers: do not commit anything without first getting mentor approval.
Document that approval with an `Approved by:` line in the commit message.
When the mentor decides that a mentee has learned the ropes and is ready to commit on their own, the mentor announces it with a commit to [.filename]#conf/mentors#.
This file is in the [.filename]#svnadmin# branch of each repository:
[.informaltable]
[cols="1,1", frame="none"]
|===
|`src`
|[.filename]#base/svnadmin/conf/mentors#
|`doc`
|[.filename]#doc/svnadmin/conf/mentors#
|`ports`
|[.filename]#ports/svnadmin/conf/mentors#
|===
[[pre-commit-review]]
== Pre-Commit Review
Code review is one way to increase the quality of software.
The following guidelines apply to commits to the `head` (-CURRENT) branch of the `src` repository.
Other branches and the `ports` and `docs` trees have their own review policies, but these guidelines generally apply to commits requiring review:
* All non-trivial changes should be reviewed before they are committed to the repository.
* Reviews may be conducted by email, in Bugzilla, in Phabricator, or by another mechanism. Where possible, reviews should be public.
* The developer responsible for a code change is also responsible for making all necessary review-related changes.
* Code review can be an iterative process, which continues until the patch is ready to be committed. Specifically, once a patch is sent out for review, it should receive an explicit "looks good" before it is committed. So long as it is explicit, this can take whatever form makes sense for the review method.
* Timeouts are not a substitute for review.
Sometimes code reviews will take longer than you would hope for, especially for larger features. Accepted ways to speed up review times for your patches are:
* Review other people's patches. If you help out, everybody will be more willing to do the same for you; goodwill is our currency.
* Ping the patch. If it is urgent, provide reasons why it is important to you to get this patch landed and ping it every couple of days. If it is not urgent, the common courtesy ping rate is one week. Remember that you are asking for valuable time from other professional developers.
* Ask for help on mailing lists, IRC, etc. Others may be able to either help you directly, or suggest a reviewer.
* Split your patch into multiple smaller patches that build on each other. The smaller your patch, the higher the probability that somebody will take a quick look at it.
+
When making large changes, it is helpful to keep this in mind from the beginning of the effort as breaking large changes into smaller ones is often difficult after the fact.
Developers should participate in code reviews as both reviewers and reviewees.
If someone is kind enough to review your code, you should return the favor for someone else.
Note that while anyone is welcome to review and give feedback on a patch, only an appropriate subject-matter expert can approve a change.
This will usually be a committer who works with the code in question on a regular basis.
In some cases, no subject-matter expert may be available.
In those cases, a review by an experienced developer is sufficient when coupled with appropriate testing.
[[commit-log-message]]
== Commit Log Messages
This section contains some suggestions and traditions for how commit logs are formatted.
=== Why are commit messages important?
When you commit a change in Git, Subversion, or another version control system (VCS), you're prompted to write some text describing the commit -- a commit message.
How important is this commit message? Should you spend some significant effort writing it? Does it really matter if you simply write `fixed a bug`?
Most projects have more than one developer and last for some length of time.
Commit messages are a very important method of communicating with other developers, in the present and for the future.
FreeBSD has hundreds of active developers and hundreds of thousands of commits spanning decades of history.
Over that time the developer community has learned how valuable good commit messages are; sometimes these are hard-learned lessons.
Commit messages serve at least three purposes:
* Communicating with other developers
+
FreeBSD commits generate email to various mailing lists.
These include the commit message along with a copy of the patch itself.
Commit messages are also viewed through commands like `git log`.
These serve to make other developers aware of changes that are ongoing; that other developer may want to test the change, may have an interest in the topic and will want to review in more detail, or may have their own projects underway that would benefit from interaction.
* Making Changes Discoverable
+
In a large project with a long history it may be difficult to find changes of interest when investigating an issue or change in behaviour.
Verbose, detailed commit messages allow searches for changes that might be relevant.
For example, `git log --since 1year --grep 'USB timeout'`.
* Providing historical documentation
+
Commit messages serve to document changes for future developers, perhaps years or decades later.
This future developer may even be you, the original author.
A change that seems obvious today may be decidedly not so much later on.
The `git blame` command annotates each line of a source file with the change (hash and subject line) that brought it in.
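For instance, a quick way to ask who last touched a particular range of lines (the path and line range here are only an example):
[source,shell]
....
% git blame -L 100,120 sys/kern/kern_exec.c
....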
Having established the importance, here are elements of a good FreeBSD commit message:
=== Start with a subject line
Commit messages should start with a single-line subject that briefly summarizes the change.
The subject should, by itself, allow the reader to quickly determine if the change is of interest or not.
=== Keep subject lines short
The subject line should be as short as possible while still retaining the required information.
This is to make browsing the Git log more efficient, and so that `git log --oneline` can display the short hash and subject on a single 80-column line.
A good rule of thumb is to stay below 63 characters, and aim for about 50 or fewer if possible.
=== Prefix the subject line with a component, if applicable
If the change relates to a specific component the subject line may be prefixed with that component name and a colon (:).
✓ `foo: Add -k option to keep temporary data`
Include the prefix in the 63-character limit suggested above, so that `git log --oneline` avoids wrapping.
=== Capitalize the first letter of the subject
Capitalize the first letter of the subject itself.
The prefix, if any, is not capitalized unless necessary (e.g., `USB:` is capitalized).
=== Do not end the subject line with punctuation
Do not end with a period or other punctuation.
In this regard the subject line is like a newspaper headline.
=== Separate the subject and body with a blank line
Separate the body from the subject with a blank line.
Some trivial commits do not require a body, and will have only a subject.
✓ `ls: Fix typo in usage text`
=== Limit messages to 72 columns
`git log` and `git format-patch` indent the commit message by four spaces.
Wrapping at 72 columns provides a matching margin on the right edge.
Limiting messages to 72 characters also keeps the commit message in formatted patches below RFC 2822's suggested email line length limit of 78 characters.
This limit works well with a variety of tools that may render commit messages; line wrapping might be inconsistent with longer line length.
=== Use the present tense, imperative mood
This facilitates short subject lines and provides consistency, including with automatically generated commit messages (e.g., as generated by `git revert`).
This is important when reading a list of commit subjects.
Think of the subject as finishing the sentence "when applied, this change will ...".
✓ `foo: Implement the -k (keep) option` +
✗ `foo: Implemented the -k option` +
✗ `This change implements the -k option in foo` +
✗ `-k option added`
=== Focus on what and why, not how
Explain what the change accomplishes and why it is being done, rather than how.
Do not assume that the reader is familiar with the issue.
Explain the background and motivation for the change.
Include benchmark data if you have it.
If there are limitations or incomplete aspects of the change, describe them in the commit message.
=== Consider whether parts of the commit message could be code comments instead
Sometimes while writing a commit message you may find yourself writing a sentence or two explaining some tricky or confusing aspect of the change. When this happens consider whether it would be valuable to have that explanation as a comment in the code itself.
=== Write commit messages for your future self
While writing the commit message for a change you have all of the context in mind - what prompted the change, alternate approaches that were considered and rejected, limitations of the change, and so on.
Imagine yourself revisiting the change a year or two in the future, and write the commit message in a way that would provide that necessary context.
=== Commit messages should stand alone
You may include references to mailing list postings, benchmark result web sites, or code review links.
However, the commit message should contain all of the relevant information in case these references are no longer available in the future.
Similarly, a commit may refer to a previous commit, for example in the case of a bug fix or revert.
In addition to the commit identifier (revision or hash), include the subject line from the referenced commit (or another suitable brief reference).
With each VCS migration (from CVS to Subversion to Git) revision identifiers from previous systems may become difficult to follow.
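For example, a reference to an earlier commit in a message body might read like this (the hash and subject are purely illustrative):
[.programlisting]
....
This fixes a regression introduced by 6ff9c25 ("foo: Implement the -k (keep) option").
....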
=== Include appropriate metadata in a footer
As well as including an informative message with each commit, some additional information may be needed.
This information consists of one or more lines containing the key word or phrase, a colon, tabs for formatting, and then the additional information.
The key words or phrases are:
[.informaltable]
[cols="20%,80%", frame="none"]
|===
|`PR:`
|The problem report (if any) which is affected (typically, by being closed) by this commit. Multiple PRs may be specified on one line, separated by commas or spaces.
|`Reported by:`
|The name and e-mail address of the person that reported the issue; for developers, just the username on the FreeBSD cluster.
Typically used when there is no PR, for example if the issue was reported on
a mailing list.
|`Submitted by:`
|The name and e-mail address of the person that submitted the fix; for developers, just the username on the FreeBSD cluster.
Typically not used with Git; submitted patches should
have the author set by using `git commit --author`.
If the submitter is the maintainer of the port being committed, include "(maintainer)" after the email address.
Avoid obfuscating the email address of the submitter as this adds additional work when searching logs.
|`Reviewed by:`
|The name and e-mail address of the person or people that reviewed the change; for developers, just the username on the FreeBSD cluster. If a patch was submitted to a mailing list for review, and the review was favorable, then just include the list name.
|`Tested by:`
|The name and e-mail address of the person or people that tested the change; for developers, just the username on the FreeBSD cluster.
|`Approved by:`
a|
The name and e-mail address of the person or people that approved the change; for developers, just the username on the FreeBSD cluster.
There are several cases where approval is customary:
* while a new committer is under mentorship
* commits to an area of the tree to which you do not usually commit
* during a release cycle
* committing to a repo where you do not hold a commit bit (e.g. src committer committing to docs)
While under mentorship, get mentor approval before the commit. Enter the mentor's username in this field, and note that they are a mentor:
[source,shell]
....
Approved by: username-of-mentor (mentor)
....
If a team approved these commits then include the team name followed by the username of the approver in parentheses. For example:
[source,shell]
....
Approved by: re (username)
....
|`Obtained from:`
|The name of the project (if any) from which the code was obtained. Do not use this line for the name of an individual person.
|`Fixes:`
|The Git short hash and the title line of a commit that is fixed by this change as returned by `git log -n 1 --oneline GIT-COMMIT-HASH`.
|`MFC after:`
|To receive an e-mail reminder to MFC at a later date, specify the number of days, weeks, or months after which an MFC is planned.
|`MFC to:`
|If the commit should be merged to a subset of stable branches, specify the branch names.
|`MFC with:`
|If the commit should be merged together with a previous one in a single MFC commit (for example, where this commit corrects a bug in the previous change), specify the corresponding Git hash.
|`MFH:`
|If the commit is to be merged into a ports quarterly branch, specify the name of the quarterly branch. For example `2021Q2`.
|`Relnotes:`
|If the change is a candidate for inclusion in the release notes for the next release from the branch, set to `yes`.
|`Security:`
|If the change is related to a security vulnerability or security exposure, include one or more references or a description of the issue. If possible, include a VuXML URL or a CVE ID.
|`Event:`
|The description for the event where this commit was made. If this is a recurring event, add the year or even the month to it. For example, this could be `FooBSDcon 2019`. The idea behind this line is to put recognition to conferences, gatherings, and other types of meetups and to show that these are useful to have. Please do not use the `Sponsored by:` line for this as that is meant for organizations sponsoring certain features or developers working on them.
|`Sponsored by:`
|Sponsoring organizations for this change, if any. Separate multiple organizations with commas. If only a portion of the work was sponsored, or different amounts of sponsorship were provided to different authors, please give appropriate credit in parentheses after each sponsor name. For example, `Example.com (alice, code refactoring), Wormulon (bob), Momcorp (cindy)` shows that Alice was sponsored by Example.com to do code refactoring, while Wormulon sponsored Bob's work and Momcorp sponsored Cindy's work. Other authors were either not sponsored or chose not to list sponsorship.
|`Differential Revision:`
|The full URL of the Phabricator review. This line __must be the last line__. For example: `https://reviews.freebsd.org/D1708`.
|`Signed-off-by:`
|ID certifies compliance with https://developercertificate.org/
|===
.Commit Log for a Commit Based on a PR
[example]
====
The commit is based on a patch from a PR submitted by John Smith.
The commit message "PR" field is filled.
[.programlisting]
....
...
PR: 12345
....
The committer sets the author of the patch with `git commit --author "John Smith <John.Smith@example.com>"`.
====
.Commit Log for a Commit Needing Review
[example]
====
The virtual memory system is being changed.
Patches have been posted to the appropriate mailing list (in this case, `freebsd-arch`) and the changes have been approved.
[.programlisting]
....
...
Reviewed by: -arch
....
====
.Commit Log for a Commit Needing Approval
[example]
====
Commit a port, after working with the listed MAINTAINER, who said to go ahead and commit.
[.programlisting]
....
...
Approved by: abc (maintainer)
....
Where _abc_ is the account name of the person who approved.
====
.Commit Log for a Commit Bringing in Code from OpenBSD
[example]
====
Committing some code based on work done in the OpenBSD project.
[.programlisting]
....
...
Obtained from: OpenBSD
....
====
.Commit Log for a Change to FreeBSD-CURRENT with a Planned Commit to FreeBSD-STABLE to Follow at a Later Date.
[example]
====
Committing some code which will be merged from FreeBSD-CURRENT into the FreeBSD-STABLE branch after two weeks.
[.programlisting]
....
...
MFC after: 2 weeks
....
Where _2_ is the number of days, weeks, or months after which an MFC is planned, and _weeks_ is the unit, which may be `day`, `days`, `week`, `weeks`, `month`, or `months`.
====
It is often necessary to combine these.
Consider the situation where a user has submitted a PR containing code from the NetBSD project.
Looking at the PR, the developer sees it is not an area of the tree they normally work in, so they have the change reviewed by the `arch` mailing list.
Since the change is complex, the developer opts to MFC after one month to allow adequate testing.
The extra information to include in the commit would look something like:
.Example Combined Commit Log
[example]
====
[.programlisting]
....
PR: 54321
Reviewed by: -arch
Obtained from: NetBSD
MFC after: 1 month
Relnotes: yes
....
====
[[pref-license]]
== Preferred License for New Files
The FreeBSD Project's full license policy can be found at link:https://www.FreeBSD.org/internal/software-license/[https://www.FreeBSD.org/internal/software-license].
The rest of this section is intended to help you get started.
As a rule, when in doubt, ask.
It is much easier to give advice than to fix the source tree.
The FreeBSD Project suggests and uses this text as the preferred license scheme:
[.programlisting]
....
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
* Copyright (c) [year] [your name]
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* [id for your version control system, if any]
*/
....
The FreeBSD project strongly discourages the so-called "advertising clause" in new code.
Due to the large number of contributors to the FreeBSD project, complying with this clause for many commercial vendors has become difficult.
If you have code in the tree with the advertising clause, please consider removing it.
In fact, please consider using the above license for your code.
The FreeBSD project discourages completely new licenses and variations on the standard licenses.
New licenses require the approval of the {core-email} to reside in the main repository.
The more different licenses that are used in the tree, the more problems this causes for those wishing to utilize the code, typically through unintended consequences of a poorly worded license.
Project policy dictates that code under some non-BSD licenses must be placed only in specific sections of the repository, and in some cases, compilation must be conditional or even disabled by default.
For example, the GENERIC kernel must be compiled under only licenses identical to or substantially similar to the BSD license.
GPL, APSL, CDDL, etc, licensed software must not be compiled into GENERIC.
Developers are reminded that in open source, getting "open" right is just as important as getting "source" right, as improper handling of intellectual property has serious consequences.
Any questions or concerns should immediately be brought to the attention of the core team.
[[tracking.license.grants]]
== Keeping Track of Licenses Granted to the FreeBSD Project
Various software or data exist in the repositories for which the FreeBSD project has been granted a special license to use them.
A case in point are the Terminus fonts for use with man:vt[4].
Here the author Dimitar Zhekov has allowed us to use the "Terminus BSD Console" font under a 2-clause BSD license rather than the regular Open Font License he normally uses.
It is clearly sensible to keep a record of any such license grants.
To that end, the {core-email} has decided to keep an archive of them.
Whenever the FreeBSD project is granted a special license we require the {core-email} to be notified.
Any developers involved in arranging such a license grant, please send details to the {core-email} including:
* Contact details for people or organizations granting the special license.
* What files, directories etc. in the repositories are covered by the license grant including the revision numbers where any specially licensed material was committed.
* The date the license comes into effect from. Unless otherwise agreed, this will be the date the license was issued by the authors of the software in question.
* The license text.
* A note of any restrictions, limitations or exceptions that apply specifically to FreeBSD's usage of the licensed material.
* Any other relevant information.
Once the {core-email} is satisfied that all the necessary details have been gathered and are correct, the secretary will send a PGP-signed acknowledgement of receipt including the license details.
This receipt will be persistently archived and serve as our permanent record of the license grant.
The license archive should contain only details of license grants; this is not the place for any discussions around licensing or other subjects.
Access to data within the license archive will be available on request to the {core-email}.
[[spdx.tags]]
== SPDX Tags in the tree
The project uses https://spdx.dev[SPDX] tags in our source base.
At present, these tags are intended to help automated tools reconstruct license requirements mechanically.
All _SPDX-License-Identifier_ tags in the tree should be considered to be informative.
All files in the FreeBSD source tree with these tags also have a copy of the license which governs use of that file.
In the event of a discrepancy, the verbatim license is controlling.
The project tries to follow the https://spdx.github.io/spdx-spec/[SPDX Specification, Version 2.2].
How to mark source files and valid algebraic expressions are found in https://spdx.github.io/spdx-spec/appendix-IV-SPDX-license-expressions/[Appendix IV] and https://spdx.github.io/spdx-spec/appendix-V-using-SPDX-short-identifiers-in-source-files/[Appendix V].
The project draws identifiers from SPDX's list of valid https://spdx.org/licenses/[short license identifiers].
The project uses only the _SPDX-License-Identifier_ tag.
As of March 2021, approximately 25,000 out of 90,000 files in the tree have been marked.
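As an illustration (the exact file is unimportant), the tags can be inspected or counted with ordinary tools from a checkout of the src tree:

[source,shell]
....
% cd /usr/src
% grep -m1 'SPDX-License-Identifier:' sys/kern/kern_mutex.c
(prints the tag line, if the file carries one)
% grep -rl 'SPDX-License-Identifier:' sys | wc -l
(counts the files under sys/ that currently carry a tag)
....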
[[developer.relations]]
== Developer Relations
When working directly on your own code or on code which is already well established as your responsibility, then there is probably little need to check with other committers before jumping in with a commit.
Working on a bug in an area of the system which is clearly orphaned (and there are a few such areas, to our shame), the same applies.
When modifying parts of the system which are maintained, formally or informally, consider asking for review just as a developer would have before becoming a committer.
For ports, contact the listed `MAINTAINER` in the [.filename]#Makefile#.
To determine if an area of the tree is maintained, check the MAINTAINERS file at the root of the tree.
If nobody is listed, scan the revision history to see who has committed changes in the past.
An example script that lists each person who has committed to a given file along with the number of commits each person has made can be found on `freefall` at [.filename]#~eadler/bin/whodid#.
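With the Git tree, a rough local equivalent is a one-liner such as the following (the path is only an example):

[source,shell]
....
% cd /usr/src
% git shortlog -sn -- sys/net/if.c
(lists each author together with their number of commits touching that file)
....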
If queries go unanswered or the committer otherwise indicates a lack of interest in the area affected, go ahead and commit it.
[IMPORTANT]
====
Avoid sending private emails to maintainers.
Other people might be interested in the conversation, not just the final output.
====
If there is any doubt about a commit for any reason at all, have it reviewed before committing.
Better to have it flamed then and there rather than when it is part of the repository.
If a commit does result in controversy erupting, it may be advisable to consider backing the change out again until the matter is settled.
Remember, with a version control system we can always change it back.
Do not impugn the intentions of others.
If they see a different solution to a problem, or even a different problem, it is probably not because they are stupid, because they have questionable parentage, or because they are trying to destroy hard work, personal image, or FreeBSD, but basically because they have a different outlook on the world.
Different is good.
Disagree honestly.
Argue your position from its merits, be honest about any shortcomings it may have, and be open to seeing their solution, or even their vision of the problem, with an open mind.
Accept correction.
We are all fallible.
When you have made a mistake, apologize and get on with life.
Do not beat up yourself, and certainly do not beat up others for your mistake.
Do not waste time on embarrassment or recrimination, just fix the problem and move on.
Ask for help.
Seek out (and give) peer reviews.
One of the ways open source software is supposed to excel is in the number of eyeballs applied to it; this does not apply if nobody will review code.
[[if-in-doubt]]
== If in Doubt...
When unsure about something, whether it be a technical issue or a project convention, be sure to ask.
If you stay silent you will never make progress.
If it relates to a technical issue ask on the public mailing lists.
Avoid the temptation to email the individual person that knows the answer.
This way everyone will be able to learn from the question and the answer.
For project specific or administrative questions ask, in order:
* Your mentor or former mentor.
* An experienced committer on IRC, email, etc.
* Any team with a "hat", as they can give you a definitive answer.
* If still not sure, ask on {developers-name}.
Once your question is answered, if no one pointed you to documentation that spelled out the answer to your question, document it, as others will have the same question.
[[bugzilla]]
== Bugzilla
The FreeBSD Project utilizes Bugzilla for tracking bugs and change requests.
If you commit a fix or suggestion found in the PR database, be sure to close the PR.
It is also considered nice if you take time to close any PRs associated with your commits, if appropriate.
Committers with non-``FreeBSD.org`` Bugzilla accounts can have the old account merged with the `FreeBSD.org` account by following these steps:
[.procedure]
====
. Log in using your old account.
. Open new bug. Choose `Services` as the Product, and `Bug Tracker` as the Component. In bug description list accounts you wish to be merged.
. Log in using `FreeBSD.org` account and post comment to newly opened bug to confirm ownership. See <<kerberos-ldap>> for more details on how to generate or set a password for your `FreeBSD.org` account.
. If there are more than two accounts to merge, post comments from each of them.
====
You can find out more about Bugzilla at:
* link:{pr-guidelines}[FreeBSD Problem Report Handling Guidelines]
* link:https://www.FreeBSD.org/support/[https://www.FreeBSD.org/support]
[[phabricator]]
== Phabricator
The FreeBSD Project utilizes https://reviews.freebsd.org[Phabricator] for code review requests.
See the https://wiki.freebsd.org/CodeReview[CodeReview] wiki page for details.
Committers with non-``FreeBSD.org`` Phabricator accounts can have the old account renamed to the ``FreeBSD.org`` account by following these steps:
[.procedure]
====
. Change your Phabricator account email to your `FreeBSD.org` email.
. Open new bug on our bug tracker using your `FreeBSD.org` account, see <<bugzilla>> for more information. Choose `Services` as the Product, and `Code Review` as the Component. In bug description request that your Phabricator account be renamed, and provide a link to your Phabricator user. For example, `https://reviews.freebsd.org/p/bob_example.com/`
====
[IMPORTANT]
====
Phabricator accounts cannot be merged; please do not open a new account.
====
[[people]]
== Who's Who
Besides the repository meisters, there are other FreeBSD project members and teams whom you will probably get to know in your role as a committer. Briefly, and by no means all-inclusively, these are:
`{doceng}`::
doceng is the group responsible for the documentation build infrastructure, approving new documentation committers, and ensuring that the FreeBSD website and documentation on the FTP site are up to date with respect to the Subversion tree.
It is not a conflict resolution body.
The vast majority of documentation related discussion takes place on the {freebsd-doc}.
More details regarding the doceng team can be found in its https://www.FreeBSD.org/internal/doceng/[charter].
Committers interested in contributing to the documentation should familiarize themselves with the link:{fdp-primer}[Documentation Project Primer].
`{re-members}`::
These are the members of the `{re}`.
This team is responsible for setting release deadlines and controlling the release process.
During code freezes, the release engineers have final authority on all changes to the system for whichever branch is pending release status.
If there is something you want merged from FreeBSD-CURRENT to FreeBSD-STABLE (whatever values those may have at any given time), these are the people to talk to about it.
`{so}`::
`{so-name}` is the link:https://www.FreeBSD.org/security/[FreeBSD Security Officer] and oversees the `{security-officer}`.
`{wollman}`::
If you need advice on obscure network internals or are not sure of some potential change to the networking subsystem you have in mind, Garrett is someone to talk to.
Garrett is also very knowledgeable on the various standards applicable to FreeBSD.
{committers-name}::
{svn-src-all}, {svn-ports-all} and {svn-doc-all} are the mailing lists that the version control system uses to send commit messages to.
_Never_ send email directly to these lists.
Only send replies to this list when they are short and are directly related to a commit.
{developers-name}::
All committers are subscribed to -developers.
This list was created to be a forum for the committers' "community" issues.
Examples are Core voting, announcements, etc.
+
The {developers-name} is for the exclusive use of FreeBSD committers.
To develop FreeBSD, committers must have the ability to openly discuss matters that will be resolved before they are publicly announced.
Frank discussions of work in progress are not suitable for open publication and may harm FreeBSD.
+
All FreeBSD committers are expected not to publish or forward messages from the {developers-name} outside the list membership without permission of all of the authors.
Violators will be removed from the {developers-name}, resulting in a suspension of commit privileges.
Repeated or flagrant violations may result in permanent revocation of commit privileges.
+
This list is _not_ intended as a place for code reviews or for any technical discussion.
In fact, using it as such hurts the FreeBSD Project, as it gives the impression of a closed list where general decisions affecting all of the FreeBSD-using community are made without being "open".
Last, but not least, __never, never ever, email the {developers-name} and CC:/BCC: another FreeBSD list__.
Never, ever email another FreeBSD email list and CC:/BCC: the {developers-name}.
Doing so can greatly diminish the benefits of this list.
[[ssh.guide]]
== SSH Quick-Start Guide
[.procedure]
====
. If you do not wish to type your password in every time you use man:ssh[1], and you use keys to authenticate, man:ssh-agent[1] is there for your convenience. If you want to use man:ssh-agent[1], make sure that you run it before running other applications. X users, for example, usually do this from their [.filename]#.xsession# or [.filename]#.xinitrc#. See man:ssh-agent[1] for details.
. Generate a key pair using man:ssh-keygen[1]. The key pair will wind up in your [.filename]#$HOME/.ssh/# directory.
+
[IMPORTANT]
======
Only ECDSA, Ed25519 or RSA keys are supported.
======
. Send your public key ([.filename]#$HOME/.ssh/id_ecdsa.pub#, [.filename]#$HOME/.ssh/id_ed25519.pub#, or [.filename]#$HOME/.ssh/id_rsa.pub#) to the person setting you up as a committer so it can be put into [.filename]#yourlogin# in [.filename]#/etc/ssh-keys/# on `freefall`.
====
Now man:ssh-add[1] can be used for authentication once per session.
It prompts for the private key's pass phrase, and then stores it in the authentication agent (man:ssh-agent[1]).
Use `ssh-add -d` to remove keys stored in the agent.
Test with a simple remote command: `ssh freefall.FreeBSD.org ls /usr`.
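A minimal first session, assuming an Ed25519 key, might look like this:

[source,shell]
....
% ssh-keygen -t ed25519
(the key pair is written to $HOME/.ssh/)
% ssh-add
(enter the pass phrase once; the key is now cached by ssh-agent)
% ssh freefall.FreeBSD.org ls /usr
....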
For more information, see package:security/openssh-portable[], man:ssh[1], man:ssh-add[1], man:ssh-agent[1], man:ssh-keygen[1], and man:scp[1].
For information on adding, changing, or removing man:ssh[1] keys, see https://wiki.freebsd.org/clusteradm/ssh-keys[this article].
[[coverity]]
== Coverity(R) Availability for FreeBSD Committers
All FreeBSD developers can obtain access to Coverity analysis results of all FreeBSD Project software.
All who are interested in obtaining access to the analysis results of the automated Coverity runs can sign up at http://scan.coverity.com/[Coverity Scan].
The FreeBSD wiki includes a mini-guide for developers who are interested in working with the Coverity(R) analysis reports: https://wiki.freebsd.org/CoverityPrevent[https://wiki.freebsd.org/CoverityPrevent].
Please note that this mini-guide is only readable by FreeBSD developers, so if you cannot access this page, you will have to ask someone to add you to the appropriate Wiki access list.
Finally, all FreeBSD developers who are going to use Coverity(R) are always encouraged to ask for more details and usage information by posting any questions to the mailing list of the FreeBSD developers.
[[rules]]
== The FreeBSD Committers' Big List of Rules
Everyone involved with the FreeBSD project is expected to abide by the _Code of Conduct_ available from link:https://www.FreeBSD.org/internal/code-of-conduct/[https://www.FreeBSD.org/internal/code-of-conduct].
As committers, you form the public face of the project, and how you behave has a vital impact on the public perception of it.
This guide expands on the parts of the _Code of Conduct_ specific to committers.
. Respect other committers.
. Respect other contributors.
. Discuss any significant change _before_ committing.
. Respect existing maintainers (if listed in the `MAINTAINER` field in [.filename]#Makefile# or in [.filename]#MAINTAINER# in the top-level directory).
. Any disputed change must be backed out pending resolution of the dispute if requested by a maintainer. Security related changes may override a maintainer's wishes at the Security Officer's discretion.
. Changes go to FreeBSD-CURRENT before FreeBSD-STABLE unless specifically permitted by the release engineer or unless they are not applicable to FreeBSD-CURRENT. Any non-trivial or non-urgent change which is applicable should also be allowed to sit in FreeBSD-CURRENT for at least 3 days before merging so that it can be given sufficient testing. The release engineer has the same authority over the FreeBSD-STABLE branch as outlined for the maintainer in rule #5.
. Do not fight in public with other committers; it looks bad.
. Respect all code freezes and read the `committers` and `developers` mailing lists in a timely manner so you know when a code freeze is in effect.
. When in doubt on any procedure, ask first!
. Test your changes before committing them.
. Do not commit to contributed software without _explicit_ approval from the respective maintainers.
As noted, breaking some of these rules can be grounds for suspension or, upon repeated offense, permanent removal of commit privileges.
Individual members of core have the power to temporarily suspend commit privileges until core as a whole has the chance to review the issue.
In case of an "emergency" (a committer doing damage to the repository), a temporary suspension may also be done by the repository meisters.
Only a 2/3 majority of core has the authority to suspend commit privileges for longer than a week or to remove them permanently.
This rule does not exist to set core up as a bunch of cruel dictators who can dispose of committers as casually as empty soda cans, but to give the project a kind of safety fuse.
If someone is out of control, it is important to be able to deal with this immediately rather than be paralyzed by debate.
In all cases, a committer whose privileges are suspended or revoked is entitled to a "hearing" by core, the total duration of the suspension being determined at that time.
A committer whose privileges are suspended may also request a review of the decision after 30 days and every 30 days thereafter (unless the total suspension period is less than 30 days).
A committer whose privileges have been revoked entirely may request a review after a period of 6 months has elapsed.
This review policy is _strictly informal_ and, in all cases, core reserves the right to either act on or disregard requests for review if they feel their original decision to be the right one.
In all other aspects of project operation, core is a subset of committers and is bound by the __same rules__.
Just because someone is in core this does not mean that they have special dispensation to step outside any of the lines painted here; core's "special powers" only kick in when it acts as a group, not on an individual basis.
As individuals, the core team members are all committers first and core second.
=== Details
[[respect]]
. Respect other committers.
+
This means that you need to treat other committers as the peer-group developers that they are.
Despite our occasional attempts to prove the contrary, one does not get to be a committer by being stupid, and nothing rankles more than being treated that way by one of your peers.
Whether we always feel respect for one another or not (and everyone has off days), we still have to _treat_ other committers with respect at all times, on public forums and in private email.
+
Being able to work together long term is this project's greatest asset, one far more important than any set of changes to the code, and turning arguments about code into issues that affect our long-term ability to work harmoniously together is just not worth the trade-off by any conceivable stretch of the imagination.
+
To comply with this rule, do not send email when you are angry or otherwise behave in a manner which is likely to strike others as needlessly confrontational.
First calm down, then think about how to communicate in the most effective fashion for convincing the other persons that your side of the argument is correct; do not just blow off some steam so you can feel better in the short term at the cost of a long-term flame war.
Not only is this very bad "energy economics", but repeated displays of public aggression which impair our ability to work well together will be dealt with severely by the project leadership and may result in suspension or termination of your commit privileges.
The project leadership will take into account both public and private communications brought before it.
It will not seek the disclosure of private communications, but it will take them into account if they are volunteered by the committers involved in the complaint.
+
All of this is never an option which the project's leadership enjoys in the slightest, but unity comes first.
No amount of code or good advice is worth trading that away.
. Respect other contributors.
+
You were not always a committer.
At one time you were a contributor.
Remember that at all times.
Remember what it was like trying to get help and attention.
Do not forget that your work as a contributor was very important to you.
Remember what it was like. Do not discourage, belittle, or demean contributors.
Treat them with respect. They are our committers in waiting.
They are every bit as important to the project as committers.
Their contributions are as valid and as important as your own.
After all, you made many contributions before you became a committer.
Always remember that.
+
Consider the points raised under <<respect,Respect other committers>> and apply them also to contributors.
. Discuss any significant change _before_ committing.
+
The repository is not where changes are initially submitted for correctness or argued over; that happens first in the mailing lists or by use of the Phabricator service.
The commit will only happen once something resembling consensus has been reached.
This does not mean that permission is required before correcting every obvious syntax error or manual page misspelling, just that it is good to develop a feel for when a proposed change is not quite such a no-brainer and requires some feedback first.
People really do not mind sweeping changes if the result is something clearly better than what they had before; they just do not like being _surprised_ by those changes.
The very best way of making sure that things are on the right track is to have code reviewed by one or more other committers.
+
When in doubt, ask for review!
. Respect existing maintainers if listed.
+
Many parts of FreeBSD are not "owned" in the sense that any specific individual will jump up and yell if you commit a change to "their" area, but it still pays to check first.
One convention we use is to put a maintainer line in the [.filename]#Makefile# for any package or subtree which is being actively maintained by one or more people; see link:{developers-handbook}#policies[Source Tree Guidelines and Policies] for documentation on this.
Where sections of code have several maintainers, commits to affected areas by one maintainer need to be reviewed by at least one other maintainer.
In cases where the "maintainer-ship" of something is not clear, look at the repository logs for the files in question and see if someone has been working recently or predominantly in that area.
. Any disputed change must be backed out pending resolution of the dispute if requested by a maintainer. Security related changes may override a maintainer's wishes at the Security Officer's discretion.
+
This may be hard to swallow in times of conflict (when each side is convinced that they are in the right, of course) but a version control system makes it unnecessary to have an ongoing dispute raging when it is far easier to simply reverse the disputed change, get everyone calmed down again and then try to figure out what is the best way to proceed.
If the change turns out to be the best thing after all, it can be easily brought back.
If it turns out not to be, then the users did not have to live with the bogus change in the tree while everyone was busily debating its merits.
People _very_ rarely call for back-outs in the repository since discussion generally exposes bad or controversial changes before the commit even happens, but on such rare occasions the back-out should be done without argument so that we can get immediately on to the topic of figuring out whether it was bogus or not.
. Changes go to FreeBSD-CURRENT before FreeBSD-STABLE unless specifically permitted by the release engineer or unless they are not applicable to FreeBSD-CURRENT. Any non-trivial or non-urgent change which is applicable should also be allowed to sit in FreeBSD-CURRENT for at least 3 days before merging so that it can be given sufficient testing. The release engineer has the same authority over the FreeBSD-STABLE branch as outlined in rule #5.
+
This is another "do not argue about it" issue since it is the release engineer who is ultimately responsible (and gets beaten up) if a change turns out to be bad.
Please respect this and give the release engineer your full cooperation when it comes to the FreeBSD-STABLE branch.
The management of FreeBSD-STABLE may frequently seem to be overly conservative to the casual observer, but also bear in mind the fact that conservatism is supposed to be the hallmark of FreeBSD-STABLE and different rules apply there than in FreeBSD-CURRENT.
There is also really no point in having FreeBSD-CURRENT be a testing ground if changes are merged over to FreeBSD-STABLE immediately.
Changes need a chance to be tested by the FreeBSD-CURRENT developers, so allow some time to elapse before merging unless the FreeBSD-STABLE fix is critical, time-sensitive, or so obvious as to make further testing unnecessary (spelling fixes to manual pages, obvious bug/typo fixes, etc.). In other words, apply common sense.
+
Changes to the security branches (for example, `releng/9.3`) must be approved by a member of the `{security-officer}`, or in some cases, by a member of the `{re}`.
. Do not fight in public with other committers; it looks bad.
+
This project has a public image to uphold and that image is very important to all of us, especially if we are to continue to attract new members.
There will be occasions when, despite everyone's very best attempts at self-control, tempers are lost and angry words are exchanged.
The best thing that can be done in such cases is to minimize the effects of this until everyone has cooled back down.
Do not air angry words in public and do not forward private correspondence or other private communications to public mailing lists, mail aliases, instant messaging channels or social media sites.
What people say one-to-one is often much less sugar-coated than what they would say in public, and such communications therefore have no place there - they only serve to inflame an already bad situation.
If the person sending a flame-o-gram at least had the grace to send it privately, then have the grace to keep it private yourself.
If you feel you are being unfairly treated by another developer, and it is causing you anguish, bring the matter up with core rather than taking it public. Core will do its best to play peacemaker and get things back to sanity.
In cases where the dispute involves a change to the codebase and the participants do not appear to be reaching an amicable agreement, core may appoint a mutually-agreeable third party to resolve the dispute.
All parties involved must then agree to be bound by the decision reached by this third party.
. Respect all code freezes and read the `committers` and `developers` mailing list on a timely basis so you know when a code freeze is in effect.
+
Committing unapproved changes during a code freeze is a really big mistake and committers are expected to keep up-to-date on what is going on before jumping in after a long absence and committing 10 megabytes worth of accumulated stuff.
People who abuse this on a regular basis will have their commit privileges suspended until they get back from the FreeBSD Happy Reeducation Camp we run in Greenland.
. When in doubt on any procedure, ask first!
+
Many mistakes are made because someone is in a hurry and just assumes they know the right way of doing something.
If you have not done it before, chances are good that you do not actually know the way we do things and really need to ask first or you are going to completely embarrass yourself in public.
There is no shame in asking "how in the heck do I do this?" We already know you are an intelligent person; otherwise, you would not be a committer.
. Test your changes before committing them.
+
This may sound obvious, but if it really were so obvious then we probably would not see so many cases of people clearly not doing this.
If your changes are to the kernel, make sure you can still compile both GENERIC and LINT (a sketch of such a build follows this list).
If your changes are anywhere else, make sure you can still make world.
If your changes are to a branch, make sure your testing occurs with a machine which is running that code.
If you have a change which also may break another architecture, be sure to test on all supported architectures.
Please refer to the https://www.FreeBSD.org/internal/[FreeBSD Internal Page] for a list of available resources.
As other architectures are added to the FreeBSD supported platforms list, the appropriate shared testing resources will be made available.
. Do not commit to contributed software without _explicit_ approval from the respective maintainers.
+
Contributed software is anything under the [.filename]#src/contrib#, [.filename]#src/crypto#, or [.filename]#src/sys/contrib# trees.
+
The trees mentioned above are for contributed software usually imported onto a vendor branch.
Committing something there may cause unnecessary headaches when importing newer versions of the software.
As a general rule, consider sending patches upstream to the vendor.
Patches may be committed to FreeBSD first with permission of the maintainer.
+
Reasons for modifying upstream software range from wanting strict control over a tightly coupled dependency to lack of portability in the canonical repository's distribution of their code.
Regardless of the reason, effort to minimize the maintenance burden of the fork is helpful to fellow maintainers.
Avoid committing trivial or cosmetic changes to files since it makes every merge thereafter more difficult: such patches need to be manually re-verified at every import.
+
If a particular piece of software lacks a maintainer, you are encouraged to take up ownership.
If you are unsure of the current maintainership email {freebsd-arch} and ask.
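The build sketch referred to in rule 10 above, for a kernel and world test on an amd64 development machine, might look like this (the `-j` value and kernel configuration are only examples):

[source,shell]
....
% cd /usr/src
% make -j8 buildworld
% make -j8 buildkernel KERNCONF=GENERIC
(for kernel changes, also generate and build the LINT configuration)
% make -C sys/amd64/conf LINT
% make -j8 buildkernel KERNCONF=LINT
....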
=== Policy on Multiple Architectures
FreeBSD has added several new architecture ports during recent release cycles and is truly no longer an i386(TM)-centric operating system.
In an effort to make it easier to keep FreeBSD portable across the platforms we support, core has developed this mandate:
[.blockquote]
Our 32-bit reference platform is i386, and our 64-bit reference platform is amd64.
Major design work (including major API and ABI changes) must prove itself on at least one 32-bit and at least one 64-bit platform, preferably the primary reference platforms, before it may be committed to the source tree.
The i386 and amd64 platforms were chosen due to being more readily available to developers and as representatives of more diverse processor and system designs - big versus little endian, register file versus register stack, different DMA and cache implementations, hardware page tables versus software TLB management etc.
We will continue to re-evaluate this policy as cost and availability of the 64-bit platforms change.
Developers should also be aware of our Tier Policy for the long term support of hardware architectures.
The rules here are intended to provide guidance during the development process, and are distinct from the requirements for features and architectures listed in that section.
The Tier rules for feature support on architectures at release-time are more strict than the rules for changes during the development process.
=== Other Suggestions
When committing documentation changes, use a spell checker before committing.
For all XML docs, verify that the formatting directives are correct by running `make lint` and package:textproc/igor[].
For manual pages, run package:sysutils/manck[] and package:textproc/igor[] over the manual page to verify all of the cross references and file references are correct and that the man page has all of the appropriate `MLINKS` installed.
Do not mix style fixes with new functionality.
A style fix is any change which does not modify the functionality of the code.
Mixing the changes obfuscates the functionality change when asking for differences between revisions, which can hide any new bugs.
Do not include whitespace changes with content changes in commits to [.filename]#doc/#.
The extra clutter in the diffs makes the translators' job much more difficult.
Instead, make any style or whitespace changes in separate commits that are clearly labeled as such in the commit message.
=== Deprecating Features
When it is necessary to remove functionality from software in the base system, follow these guidelines whenever possible:
. Mention is made in the manual page and possibly the release notes that the option, utility, or interface is deprecated. Use of the deprecated feature generates a warning.
. The option, utility, or interface is preserved until the next major (point zero) release.
. The option, utility, or interface is removed and no longer documented. It is now obsolete. It is also generally a good idea to note its removal in the release notes.
=== Privacy and Confidentiality
. Most FreeBSD business is done in public.
+
FreeBSD is an _open_ project.
This means that not only can anyone use the source code, but also that most of the development process is open to public scrutiny.
. Certain sensitive matters must remain private or held under embargo.
+
There unfortunately cannot be complete transparency.
As a FreeBSD developer you will have a certain degree of privileged access to information.
Consequently you are expected to respect certain requirements for confidentiality.
Sometimes the need for confidentiality comes from external collaborators or has a specific time limit.
Mostly though, it is a matter of not releasing private communications.
. The Security Officer has sole control over the release of security advisories.
+
Where there are security problems that affect many different operating systems, FreeBSD frequently depends on early access to be able to prepare advisories for coordinated release.
Unless FreeBSD developers can be trusted to maintain security, such early access will not be made available.
The Security Officer is responsible for controlling pre-release access to information about vulnerabilities, and for timing the release of all advisories.
He may request help under condition of confidentiality from any developer with relevant knowledge to prepare security fixes.
. Communications with Core are kept confidential for as long as necessary.
+
Communications to core will initially be treated as confidential.
Eventually however, most of Core's business will be summarized into the monthly or quarterly core reports.
Care will be taken to avoid publicising any sensitive details.
Records of some particularly sensitive subjects may not be reported on at all and will be retained only in Core's private archives.
. Non-disclosure Agreements may be required for access to certain commercially sensitive data.
+
Access to certain commercially sensitive data may only be available under a Non-Disclosure Agreement.
The FreeBSD Foundation legal staff must be consulted before any binding agreements are entered into.
. Private communications must not be made public without permission.
+
Beyond the specific requirements above there is a general expectation not to publish private communications between developers without the consent of all parties involved.
Ask permission before forwarding a message onto a public mailing list, or posting it to a forum or website that can be accessed by other than the original correspondents.
. Communications on project-only or restricted access channels must be kept private.
+
Similarly to personal communications, certain internal communications channels, including FreeBSD Committer only mailing lists and restricted access IRC channels are considered private communications.
Permission is required to publish material from these sources.
. Core may approve publication.
+
Where it is impractical to obtain permission due to the number of correspondents or where permission to publish is unreasonably withheld, Core may approve release of such private matters that merit more general publication.
[[archs]]
== Support for Multiple Architectures
FreeBSD is a highly portable operating system intended to function on many different types of hardware architectures.
Maintaining clean separation of Machine Dependent (MD) and Machine Independent (MI) code, as well as minimizing MD code, is an important part of our strategy to remain agile with regards to current hardware trends.
Each new hardware architecture supported by FreeBSD adds substantially to the cost of code maintenance, toolchain support, and release engineering.
It also dramatically increases the cost of effective testing of kernel changes.
As such, there is strong motivation to differentiate between classes of support for various architectures while remaining strong in a few key architectures that are seen as the FreeBSD "target audience".
=== Statement of General Intent
The FreeBSD Project targets "production quality commercial off-the-shelf (COTS) workstation, server, and high-end embedded systems".
By retaining a focus on a narrow set of architectures of interest in these environments, the FreeBSD Project is able to maintain high levels of quality, stability, and performance, as well as minimize the load on various support teams on the project, such as the ports team, documentation team, security officer, and release engineering teams.
Diversity in hardware support broadens the options for FreeBSD consumers by offering new features and usage opportunities, but these benefits must always be carefully considered in terms of the real-world maintenance cost associated with additional platform support.
The FreeBSD Project differentiates platform targets into four tiers.
Each tier includes a list of guarantees consumers may rely on as well as obligations by the Project and developers to fulfill those guarantees.
These lists define the minimum guarantees for each tier.
The Project and developers may provide additional levels of support beyond the minimum guarantees for a given tier, but such additional support is not guaranteed.
Each platform target is assigned to a specific tier for each stable branch.
As a result, a platform target might be assigned to different tiers on concurrent stable branches.
=== Platform Targets
Support for a hardware platform consists of two components: kernel support and userland Application Binary Interfaces (ABIs).
Kernel platform support includes things needed to run a FreeBSD kernel on a hardware platform such as machine-dependent virtual memory management and device drivers.
A userland ABI specifies an interface for user processes to interact with a FreeBSD kernel and base system libraries.
A userland ABI includes system call interfaces, the layout and semantics of public data structures, and the layout and semantics of arguments passed to subroutines.
Some components of an ABI may be defined by specifications such as the layout of C++ exception objects or calling conventions for C functions.
A FreeBSD kernel also uses an ABI (sometimes referred to as the Kernel Binary Interface (KBI)) which includes the semantics and layouts of public data structures and the layout and semantics of arguments to public functions within the kernel itself.
A FreeBSD kernel may support multiple userland ABIs.
For example, FreeBSD's amd64 kernel supports FreeBSD amd64 and i386 userland ABIs as well as Linux x86_64 and i386 userland ABIs.
A FreeBSD kernel should support a "native" ABI as the default ABI.
The native "ABI" generally shares certain properties with the kernel ABI such as the C calling convention, sizes of basic types, etc.
Tiers are defined for both kernels and userland ABIs. In the common case, a platform's kernel and FreeBSD ABIs are assigned to the same tier.
=== Tier 1: Fully-Supported Architectures
Tier 1 platforms are the most mature FreeBSD platforms.
They are supported by the security officer, release engineering, and port management teams.
Tier 1 architectures are expected to be Production Quality with respect to all aspects of the FreeBSD operating system, including installation and development environments.
The FreeBSD Project provides the following guarantees to consumers of Tier 1 platforms:
* Official FreeBSD release images will be provided by the release engineering team.
* Binary updates and source patches for Security Advisories and Errata Notices will be provided for supported releases.
* Source patches for Security Advisories will be provided for supported branches.
* Binary updates and source patches for cross-platform Security Advisories will typically be provided at the time of the announcement.
* Changes to userland ABIs will generally include compatibility shims to ensure correct operation of binaries compiled against any stable branch where the platform is Tier 1. These shims might not be enabled in the default install. If compatibility shims are not provided for an ABI change, the lack of shims will be clearly documented in the release notes.
* Changes to certain portions of the kernel ABI will include compatibility shims to ensure correct operation of kernel modules compiled against the oldest supported release on the branch. Note that not all parts of the kernel ABI are protected.
* Official binary packages for third party software will be provided by the ports team. For embedded architectures, these packages may be cross-built from a different architecture.
* Most relevant ports should either build or have the appropriate filters to prevent inappropriate ones from building.
* New features which are not inherently platform-specific will be fully functional on all Tier 1 architectures.
* Features and compatibility shims used by binaries compiled against older stable branches may be removed in newer major versions. Such removals will be clearly documented in the release notes.
* Tier 1 platforms should be fully documented. Basic operations will be documented in the FreeBSD Handbook.
* Tier 1 platforms will be included in the source tree.
* Tier 1 platforms should be self-hosting either via the in-tree toolchain or an external toolchain. If an external toolchain is required, official binary packages for an external toolchain will be provided.
To maintain maturity of Tier 1 platforms, the FreeBSD Project will maintain the following resources to support development:
* Build and test automation support either in the FreeBSD.org cluster or some other location easily available for all developers. Embedded platforms may substitute an emulator available in the FreeBSD.org cluster for actual hardware.
* Inclusion in the `make universe` and `make tinderbox` targets.
* Dedicated hardware in one of the FreeBSD clusters for package building (either natively or via qemu-user).
Collectively, developers are required to provide the following to maintain the Tier 1 status of a platform:
* Changes to the source tree should not knowingly break the build of a Tier 1 platform.
* Tier 1 architectures must have a mature, healthy ecosystem of users and active developers.
* Developers should be able to build packages on commonly available, non-embedded Tier 1 systems. This can mean either native builds if non-embedded systems are commonly available for the platform in question, or it can mean cross-builds hosted on some other Tier 1 architecture.
* Changes cannot break the userland ABI. If an ABI change is required, ABI compatibility for existing binaries should be provided via use of symbol versioning or shared library version bumps.
* Changes merged to stable branches cannot break the protected portions of the kernel ABI. If a kernel ABI change is required, the change should be modified to preserve functionality of existing kernel modules.
=== Tier 2: Developmental and Niche Architectures
Tier 2 platforms are functional, but less mature FreeBSD platforms.
They are not supported by the security officer, release engineering, and port management teams.
Tier 2 platforms may be Tier 1 platform candidates that are still under active development.
Architectures reaching end of life may also be moved from Tier 1 status to Tier 2 status as the availability of resources to continue to maintain the system in a Production Quality state diminishes.
Well-supported niche architectures may also be Tier 2.
The FreeBSD Project provides the following guarantees to consumers of Tier 2 platforms:
* The ports infrastructure should include basic support for Tier 2 architectures sufficient to support building ports and packages. This includes support for basic packages such as ports-mgmt/pkg, but there is no guarantee that arbitrary ports will be buildable or functional.
* New features which are not inherently platform-specific should be feasible on all Tier 2 architectures, even if not implemented.
* Tier 2 platforms will be included in the source tree.
* Tier 2 platforms should be self-hosting either via the in-tree toolchain or an external toolchain. If an external toolchain is required, official binary packages for an external toolchain will be provided.
* Tier 2 platforms should provide functional kernels and userlands even if an official release distribution is not provided.
To maintain maturity of Tier 2 platforms, the FreeBSD Project will maintain the following resources to support development:
* Inclusion in the `make universe` and `make tinderbox` targets.
Collectively, developers are required to provide the following to maintain the Tier 2 status of a platform:
* Changes to the source tree should not knowingly break the build of a Tier 2 platform.
* Tier 2 architectures must have an active ecosystem of users and developers.
* While changes are permitted to break the userland ABI, the ABI should not be broken gratuitously. Significant userland ABI changes should be restricted to major versions.
* New features that are not yet implemented on Tier 2 architectures should provide a means of disabling them on those architectures.
=== Tier 3: Experimental Architectures
Tier 3 platforms have at least partial FreeBSD support.
They are _not_ supported by the security officer, release engineering, and port management teams.
Tier 3 platforms are architectures in the early stages of development, for non-mainstream hardware platforms, or which are considered legacy systems unlikely to see broad future use.
Initial support for Tier 3 platforms may exist in a separate repository rather than the main source repository.
The FreeBSD Project provides no guarantees to consumers of Tier 3 platforms and is not committed to maintaining resources to support development.
Tier 3 platforms may not always be buildable, nor are any kernel or userland ABIs considered stable.
=== Tier 4: Unsupported Architectures
Tier 4 platforms are not supported in any form by the project.
All systems not otherwise classified are Tier 4 systems.
When a platform transitions to Tier 4, all support for the platform is removed from the source and ports trees.
Note that ports support should remain as long as the platform is supported in a branch supported by ports.
=== Policy on Changing the Tier of an Architecture
Systems may only be moved from one tier to another by approval of the FreeBSD Core Team, which shall make that decision in collaboration with the Security Officer, Release Engineering, and ports management teams.
For a platform to be promoted to a higher tier, any missing support guarantees must be satisfied before the promotion is completed.
[[ports]]
== Ports Specific FAQ
[[ports-qa-adding]]
=== Adding a New Port
[[ports-qa-add-new]]
==== How do I add a new port?
First, please read the section about repository copies.
The easiest way to add a new port is the `addport` script located in the [.filename]#ports/Tools/scripts# directory.
It adds a port from the directory specified, determining the category automatically from the port [.filename]#Makefile#.
It also adds an entry to the port's category [.filename]#Makefile#.
It was written by `{mharo}`, `{will}`, and `{garga}`.
When sending questions about this script to the {freebsd-ports}, please also CC `{crees}`, the current maintainer.
[[ports-qa-add-new-extra]]
==== Any other things I need to know when I add a new port?
Check the port, preferably to make sure it compiles and packages correctly.
This is the recommended sequence:
[source,shell]
....
# make install
# make package
# make deinstall
# pkg add package you built above
# make deinstall
# make reinstall
# make package
....
The link:{porters-handbook}[Porters Handbook] contains more detailed instructions.
Use man:portlint[1] to check the syntax of the port.
You do not necessarily have to eliminate all warnings but make sure you have fixed the simple ones.
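For example, from the port's directory (a hypothetical path), a fairly thorough run is:

[source,shell]
....
% cd /usr/ports/category/newport
% portlint -AC
....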
If the port came from a submitter who has not contributed to the Project before, add that person's name to the link:{contributors}#contrib-additional[Additional Contributors] section of the FreeBSD Contributors List.
Close the PR if the port came in as a PR.
To close a PR, change the state to `Issue Resolved` and set the resolution to `Fixed`.
[[ports-qa-removing]]
=== Removing an Existing Port
[[ports-qa-remove-one]]
==== How do I remove an existing port?
First, please read the section about repository copies. Before you remove the port, you have to verify there are no other ports depending on it.
* Make sure there is no dependency on the port in the ports collection:
** The port's PKGNAME appears in exactly one line in a recent INDEX file.
** No other ports contain any reference to the port's directory or PKGNAME in their Makefiles.
+
[TIP]
====
When using Git, consider using `git grep`; it is much faster than `grep -r`.
====
+
* Then, remove the port:
+
[.procedure]
====
* Remove the port's files and directory with `git rm`.
* Remove the `SUBDIR` listing of the port in the parent directory [.filename]#Makefile#.
* Add an entry to [.filename]#ports/MOVED#.
* Search for entries in [.filename]#ports/security/vuxml/vuln.xml# and adjust them accordingly. In particular, check for previous packages with the new name whose version could include the new port.
* Remove the port from [.filename]#ports/LEGAL# if it is there.
====
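A condensed sketch of these steps for a hypothetical port [.filename]#category/oldport# (the [.filename]#vuxml# and [.filename]#LEGAL# checks are still done by hand):

[source,shell]
....
% cd /usr/ports
% git grep -l 'category/oldport'
(verify that nothing else still references the port)
% git rm -r category/oldport
(remove the SUBDIR entry from category/Makefile and add an entry to MOVED)
% git commit
....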
Alternatively, you can use the rmport script, from [.filename]#ports/Tools/scripts#.
This script was written by {vd}.
When sending questions about this script to the {freebsd-ports}, please also CC {crees}, the current maintainer.
[[ports-qa-move-port]]
=== How do I move a port to a new location?
[.procedure]
====
. Perform a thorough check of the ports collection for any dependencies on the old port location/name, and update them. Running `grep` on [.filename]#INDEX# is not enough because some ports have dependencies enabled by compile-time options. A full `git grep` of the ports collection is recommended.
. Remove the `SUBDIR` entry from the old category Makefile and add a `SUBDIR` entry to the new category Makefile.
. Add an entry to [.filename]#ports/MOVED#.
. Move the port with `git mv`.
. Commit the changes.
====
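A condensed example for a hypothetical move from [.filename]#devel/foo# to [.filename]#sysutils/foo# (the [.filename]#MOVED# line shown is only illustrative):

[source,shell]
....
% cd /usr/ports
% git grep -l 'devel/foo'
(update any dependent ports plus the devel and sysutils category Makefiles)
% git mv devel/foo sysutils/foo
(add a devel/foo|sysutils/foo|<date>|Moved to sysutils line to MOVED)
% git commit
....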
[[ports-qa-freeze]]
=== Ports Freeze
[[ports-qa-freeze-what]]
==== What is a “ports freeze”?
A “ports freeze” was a restricted state the ports tree was put in before a release.
It was used to ensure a higher quality for the packages shipped with a release.
It usually lasted a couple of weeks.
During that time, build problems were fixed, and the release packages were built.
This practice is no longer used, as the packages for the releases are built from the current stable, quarterly branch.
For more information on how to merge commits to the quarterly branch, see <<ports-qa-misc-request-mfh>>.
[[ports-qa-quarterly]]
=== Quarterly Branches
[[ports-qa-misc-request-mfh]]
==== What is the procedure to request authorization for merging a commit to the quarterly branch?
As of November 30, 2020, there is no need to seek explicit approval to commit to the quarterly branch.
[[ports-qa-misc-commit-mfh]]
==== What is the procedure for merging commits to the quarterly branch?
Merging commits to the quarterly branch is very similar to MFC'ing a commit in the src repository, so basically:
[source,shell]
....
% git checkout 2021Q2
% git cherry-pick -x $HASH
(verify everything is OK, for example by doing a build test)
% git push
....
where `$HASH` is the hash of the commit you want to copy over to the quarterly branch.
The `-x` parameter ensures that the hash `$HASH` of the main branch is included in the new commit message of the quarterly branch.
[[ports-qa-new-category]]
=== Creating a New Category
[[ports-qa-new-category-how]]
==== What is the procedure for creating a new category?
Please see link:{porters-handbook}#proposing-categories[Proposing a New Category] in the Porter's Handbook.
Once that procedure has been followed and the PR has been assigned to the {portmgr}, it is their decision whether or not to approve it.
If they do, it is their responsibility to:
[.procedure]
====
. Perform any needed moves. (This only applies to physical categories.)
. Update the `VALID_CATEGORIES` definition in [.filename]#ports/Mk/bsd.port.mk#.
. Assign the PR back to you.
====
[[ports-qa-new-category-physical]]
==== What do I need to do to implement a new physical category?
[.procedure]
====
. Upgrade each moved port's [.filename]#Makefile#. Do not connect the new category to the build yet.
+
To do this, you will need to:
+
[.procedure]
======
. Change the port's `CATEGORIES` (this was the point of the exercise, remember?) The new category is listed first. This will help to ensure that the PKGORIGIN is correct.
. Run a `make describe`. Since the top-level `make index` that you will be running in a few steps is an iteration of `make describe` over the entire ports hierarchy, catching any errors here will save you having to re-run that step later on.
. If you want to be really thorough, now might be a good time to run man:portlint[1].
======
+
. Check that the ``PKGORIGIN``s are correct. The ports system uses each port's `CATEGORIES` entry to create its `PKGORIGIN`, which is used to connect installed packages to the port directory they were built from. If this entry is wrong, common port tools like man:pkg_version[1] and man:portupgrade[1] fail.
+
To do this, use the [.filename]#chkorigin.sh# tool: `env PORTSDIR=/path/to/ports sh -e /path/to/ports/Tools/scripts/chkorigin.sh`. This will check every port in the ports tree, even those not connected to the build, so you can run it directly after the move operation. Hint: do not forget to look at the ``PKGORIGIN``s of any slave ports of the ports you just moved!
. On your own local system, test the proposed changes: first, comment out the `SUBDIR` entries in the old ports' categories' [.filename]##Makefile##s; then enable building the new category in [.filename]#ports/Makefile#. Run `make checksubdirs` in the affected category directories to check the `SUBDIR` entries. Next, in the [.filename]#ports/# directory, run `make index`. This can take over 40 minutes on even modern systems; however, it is a necessary step to prevent problems for other people.
. Once this is done, you can commit the updated [.filename]#ports/Makefile# to connect the new category to the build and also commit the [.filename]#Makefile# changes for the old category or categories.
. Add appropriate entries to [.filename]#ports/MOVED#.
. Update the documentation by modifying:
** the link:{porters-handbook}#PORTING-CATEGORIES[list of categories] in the Porter's Handbook
+
. Only once all the above have been done, and no one is any longer reporting problems with the new ports, should the old ports be deleted from their previous locations in the repository.
====
==== What do I need to do to implement a new virtual category?
This is much simpler than a physical category. Only a few modifications are needed:
* the link:{porters-handbook}#PORTING-CATEGORIES[list of categories] in the Porter's Handbook
[[ports-qa-misc-questions]]
=== Miscellaneous Questions
[[ports-qa-misc-blanket-approval]]
==== Are there changes that can be committed without asking the maintainer for approval?
Blanket approval for most ports applies to these types of fixes:
* Most infrastructure changes to a port (that is, modernizing, but not changing the functionality). For example, the blanket covers converting to new `USES` macros, enabling verbose builds, and switching to new ports system syntaxes.
* Trivial and _tested_ build and runtime fixes.
* Documentation or metadata changes to ports, like [.filename]#pkg-descr# or `COMMENT`.
[IMPORTANT]
====
Exceptions to this are anything maintained by the {portmgr}, or the {security-officer}.
No unauthorized commits may ever be made to ports maintained by those groups.
====
[[ports-qa-misc-correctly-building]]
==== How do I know if my port is building correctly or not?
The packages are built multiple times each week.
If a port fails, the maintainer will receive an email from `pkg-fallout@FreeBSD.org`.
Reports for all the package builds (official, experimental, and non-regression) are aggregated at link:https://pkg-status.freebsd.org/[pkg-status.FreeBSD.org].
[[ports-qa-misc-INDEX]]
==== I added a new port. Do I need to add it to the [.filename]#INDEX#?
No. The file can either be generated by running `make index`, or a pre-generated version can be downloaded with `make fetchindex`.
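For example:

[source,shell]
....
% cd /usr/ports
% make fetchindex
(or, to build it locally, which takes much longer)
% make index
....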
[[ports-qa-misc-no-touch]]
==== Are there any other files I am not allowed to touch?
Any file directly under [.filename]#ports/#, or any file under a subdirectory that starts with an uppercase letter ([.filename]#Mk/#, [.filename]#Tools/#, etc.).
In particular, the {portmgr} is very protective of [.filename]#ports/Mk/bsd.port*.mk# so do not commit changes to those files unless you want to face their wrath.
[[ports-qa-misc-updated-distfile]]
==== What is the proper procedure for updating the checksum for a port distfile when the file changes without a version change?
When the checksum for a distribution file is updated due to the author updating the file without changing the port revision, the commit message includes a summary of the relevant diffs between the original and new distfile to ensure that the distfile has not been corrupted or maliciously altered.
If the current version of the port has been in the ports tree for a while, a copy of the old distfile will usually be available on the FTP servers; otherwise, the author or maintainer should be contacted to find out why the distfile has changed.
[[ports-exp-run]]
==== How can an experimental test build of the ports tree (exp-run) be requested?
An exp-run must be completed before patches with a significant ports impact are committed.
The patch can be against the ports tree or the base system.
Full package builds will be done with the patches provided by the submitter, and the submitter is required to fix detected problems _(fallout)_ before commit.
[.procedure]
====
. Go to the link:https://bugs.freebsd.org/submit[Bugzilla new PR page].
. Select the product your patch is about.
. Fill in the bug report as normal. Remember to attach the patch.
. If at the top it says “Show Advanced Fields”, click on it. It will now say “Hide Advanced Fields” and many new fields will become available. If it already says “Hide Advanced Fields”, there is no need to do anything.
. In the “Flags” section, set the “exp-run” one to `?`. As for all other fields, hovering the mouse over any field shows more details.
. Submit. Wait for the build to run.
. {portmgr} will reply with a possible fallout.
. Depending on the fallout:
** If there is no fallout, the procedure stops here, and the change can be committed, pending any other approval required.
** If there is fallout, it _must_ be fixed, either by fixing the ports directly in the ports tree, or by adding to the submitted patch.
** When this is done, go back to step 6, note that the fallout was fixed, and wait for the exp-run to be run again. Repeat as long as there are broken ports.
====
[[non-committers]]
== Issues Specific to Developers Who Are Not Committers
A few people who have access to the FreeBSD machines do not have commit bits.
Almost all of this document will apply to these developers as well (except things specific to commits and the mailing list memberships that go with them).
In particular, we recommend that you read:
* <<admin>>
* <<conventions-everyone>>
+
[NOTE]
====
Get your mentor to add you to the "Additional Contributors" ([.filename]#~/documentation/content/en/articles/contributors/contrib-additional.adoc#), if you are not already listed there.
====
* <<developer.relations>>
* <<ssh.guide>>
* <<rules>>
[[google-analytics]]
== Information About Google Analytics
As of December 12, 2012, Google Analytics was enabled on the FreeBSD Project website to collect anonymized statistics about usage of the site.
The information collected is valuable to the FreeBSD Documentation Project for identifying various problems on the FreeBSD website.
[[google-analytics-policy]]
=== Google Analytics General Policy
The FreeBSD Project takes visitor privacy very seriously.
As such, the FreeBSD Project website honors the "Do Not Track" header _before_ fetching the tracking code from Google.
For more information, please see the https://www.FreeBSD.org/privacy/[FreeBSD Privacy Policy].
Google Analytics access is _not_ arbitrarily allowed - access must be requested, voted on by the `{doceng}`, and explicitly granted.
Requests for Google Analytics data must include a specific purpose.
For example, a valid reason for requesting access would be "to see the most frequently used web browsers when viewing FreeBSD web pages to ensure page rendering speeds are acceptable."
Conversely, "to see what web browsers are most frequently used" (without stating __why__) would be rejected.
All requests must include the timeframe for which the data would be required.
For example, it must be explicitly stated whether the requested data would be needed for a span of three weeks, or whether the request would be one-time only.
Any request for Google Analytics data without a clear, reasonable reason beneficial to the FreeBSD Project will be rejected.
[[google-analytics-data]]
=== Data Available Through Google Analytics
A few examples of the types of Google Analytics data available include:
* Commonly used web browsers
* Page load times
* Site access by language
[[misc]]
== Miscellaneous Questions
=== How do I access people.FreeBSD.org to put up personal or project information?
`people.FreeBSD.org` is the same as `freefall.FreeBSD.org`.
Just create a [.filename]#public_html# directory. Anything you place in that directory will automatically be visible under https://people.FreeBSD.org/[https://people.FreeBSD.org/].
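A minimal sketch (the man:chmod[1] step is an assumption, to make sure the web server can read the directory):
[source,shell]
....
% ssh freefall.FreeBSD.org
% mkdir -p ~/public_html
% chmod 755 ~/public_html
....
Anything copied into [.filename]#~/public_html# then appears under `https://people.FreeBSD.org/~yourlogin/`, where `yourlogin` is your FreeBSD login.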
=== Where are the mailing list archives stored?
The mailing lists are archived under [.filename]#/local/mail# on `freefall.FreeBSD.org`.
=== I would like to mentor a new committer. What process do I need to follow?
See the https://www.freebsd.org/internal/new-account/[New Account Creation Procedure] document on the internal pages.
[[benefits]]
== Benefits and Perks for FreeBSD Committers
[[benefits-recognition]]
=== Recognition
Recognition as a competent software engineer is the longest-lasting value.
In addition, getting the chance to work with some of the best people any engineer could dream of meeting is a great perk!
[[benefits-freebsdmall]]
=== FreeBSD Mall
FreeBSD committers can get a free 4-CD or DVD set at conferences from http://www.freebsdmall.com[FreeBSD Mall, Inc.].
[[benefits-irc]]
=== IRC
Developers may request a cloaked hostmask for their account on the Freenode IRC network in the form of `freebsd/developer/<freefall username>` or `freebsd/developer/<Freenode account>`.
To request a cloak, send an email to `{irc-email}` with your requested hostmask and NickServ account name.
See the https://wiki.freebsd.org/IRC/Cloaks[IRC Cloaks] wiki page for more details.
[[benefits-gandi]]
=== `Gandi.net`
Gandi provides website hosting, cloud computing, domain registration, and X.509 certificate services.
Gandi offers an E-rate discount to all FreeBSD developers.
Send mail to mailto:non-profit@gandi.net[non-profit@gandi.net] using your `@freebsd.org` mail address, and indicate your Gandi handle.
[[benefits-rsync]]
=== `rsync.net`
https://rsync.net[rsync.net] provides cloud storage for offsite backup that is optimized for UNIX users. Their service runs entirely on FreeBSD and ZFS.
rsync.net offers a free-forever 500 GB account to FreeBSD developers. Simply sign up at https://www.rsync.net/freebsd.html[https://www.rsync.net/freebsd.html] using your `@freebsd.org` address to receive this free account.
diff --git a/documentation/content/en/articles/contributing/_index.adoc b/documentation/content/en/articles/contributing/_index.adoc
index 208b019e6b..c7d61004ec 100644
--- a/documentation/content/en/articles/contributing/_index.adoc
+++ b/documentation/content/en/articles/contributing/_index.adoc
@@ -1,573 +1,573 @@
---
title: Contributing to FreeBSD
authors:
- author: Jordan Hubbard
- author: Sam Lawrance
- author: Mark Linimon
-releaseinfo: "$FreeBSD$"
+description: Contributing to the FreeBSD Project
trademarks: ["freebsd", "ieee", "general"]
---
= Contributing to FreeBSD
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
include::shared/en/mailing-lists.adoc[]
include::shared/en/urls.adoc[]
[.abstract-title]
Abstract
This article describes the different ways in which an individual or organization may contribute to the FreeBSD Project.
'''
toc::[]
So you want to contribute to FreeBSD? That is great! FreeBSD _relies_ on the contributions of its user base to survive.
Your contributions are not only appreciated, they are vital to FreeBSD's continued growth.
A large and growing number of international contributors, of greatly varying ages and areas of technical expertise, develop FreeBSD.
There is always more work to be done than there are people available to do it, and more help is always appreciated.
As a volunteer, what you do is limited only by what you want to do.
However, we do ask that you are aware of what other members of the FreeBSD community will expect of you.
You may want to take this into account before deciding to volunteer.
The FreeBSD project is responsible for an entire operating system environment, rather than just a kernel or a few scattered utilities.
As such, our [.filename]#TODO# lists span a very wide range of tasks: from documentation, beta testing and presentation, to the system installer and highly specialized types of kernel development.
People of any skill level, in almost any area, can almost certainly help the project.
Commercial entities engaged in FreeBSD-related enterprises are also encouraged to contact us.
Do you need a special extension to make your product work? You will find us receptive to your requests, given that they are not too outlandish.
Are you working on a value-added product? Please let us know! We may be able to work cooperatively on some aspect of it.
The free software world is challenging many existing assumptions about how software is developed, sold, and maintained, and we urge you to at least give it a second look.
[[contrib-what]]
== What Is Needed
The following list of tasks and sub-projects represents something of an amalgam of various [.filename]#TODO# lists and user requests.
[[non-programmer-tasks]]
=== Ongoing Non-Programmer Tasks
Many people who are involved in FreeBSD are not programmers.
The Project includes documentation writers, Web designers, and support people.
All that these people need to contribute is an investment of time and a willingness to learn.
. Read through the FAQ and Handbook periodically. If anything is poorly explained, ambiguous, out of date or incorrect, let us know. Even better, send us a fix (Docbook is not difficult to learn, but there is no objection to ASCII submissions).
. Help translate FreeBSD documentation into your native language. If documentation already exists for your language, you can help translate additional documents or verify that the translations are up-to-date and correct. First take a look at the link:{fdp-primer}#translations[Translations FAQ] in the FreeBSD Documentation Project Primer. You are not committing yourself to translating every single FreeBSD document by doing this - as a volunteer, you can do as much or as little translation as you desire. Once someone begins translating, others almost always join the effort. If you only have the time or energy to translate one part of the documentation, please translate the installation instructions.
. Read the {freebsd-questions} occasionally (or even regularly). It can be very satisfying to share your expertise and help people solve their problems; sometimes you may even learn something new yourself! These forums can also be a source of ideas for things to improve upon.
[[ongoing-programmer-tasks]]
=== Ongoing Programmer Tasks
Most of the tasks listed here may require a considerable investment of time, an in-depth knowledge of the FreeBSD kernel, or both.
However, there are also many useful tasks which are suitable for "weekend hackers".
. If you run FreeBSD-CURRENT and have a good Internet connection, there is a machine `current.FreeBSD.org` which builds a full release once a day. Every now and again, try to install the latest release from it and report any failures in the process.
. Read the {freebsd-bugs}. There may be a problem you can comment constructively on or with patches you can test. Or you could even try to fix one of the problems yourself.
. If you know of any bug fixes which have been successfully applied to -CURRENT but have not been merged into -STABLE after a decent interval (normally a couple of weeks), send the committer a polite reminder.
. Move contributed software to [.filename]#src/contrib# in the source tree.
. Make sure code in [.filename]#src/contrib# is up to date.
. Build the source tree (or just part of it) with extra warnings enabled and clean up the warnings. A list of build warnings can also be found from our https://ci.freebsd.org[CI] by selecting a build and checking "LLVM/Clang Warnings".
. Fix warnings for ports which do deprecated things like using `gets()` or including [.filename]#malloc.h#.
. If you have contributed any ports and you had to make FreeBSD-specific changes, send your patches back to the original authors (this will make your life easier when they bring out the next version).
. Get copies of formal standards like POSIX(R). Compare FreeBSD's behavior to that required by the standard. If the behavior differs, particularly in subtle or obscure corners of the specification, send in a PR about it. If you are able, figure out how to fix it and include a patch in the PR. If you think the standard is wrong, ask the standards body to consider the question.
. Suggest further tasks for this list!
=== Work through the PR Database
The https://bugs.FreeBSD.org/search/[FreeBSD PR list] shows all the current active problem reports and requests for enhancement that have been submitted by FreeBSD users.
The PR database includes both programmer and non-programmer tasks.
Look through the open PRs, and see if anything there takes your interest.
Some of these might be very simple tasks that just need an extra pair of eyes to look over them and confirm that the fix in the PR is a good one.
Others might be much more complex, or might not even have a fix included at all.
Start with the PRs that have not been assigned to anyone else.
If a PR is assigned to someone else, but it looks like something you can handle, email the person it is assigned to and ask if you can work on it; they might already have a patch ready to be tested, or further ideas that you can discuss with them.
=== Ongoing Ports Tasks
The Ports Collection is a perpetual work in progress.
We want to provide our users with an easy to use, up to date, high quality repository of third party software.
We need people to donate some of their time and effort to help us achieve this goal.
Anyone can get involved, and there are lots of different ways to do so.
Contributing to ports is an excellent way to help "give back" something to the project.
Whether you are looking for an ongoing role, or a fun challenge for a rainy day, we would love to have your help!
There are a number of easy ways you can contribute to keeping the ports tree up to date and in good working order:
* Find some cool or useful software and link:{porters-handbook}[create a port] for it.
* There are a large number of ports that have no maintainer. Become a maintainer and <<adopt-port>>.
* If you have created or adopted a port, be aware of <<maintain-port>>.
* When you are looking for a quick challenge you could <<fix-broken>>.
=== Pick one of the items from the Ideas page
The https://wiki.freebsd.org/IdeasPage[FreeBSD list of projects and ideas for volunteers] is also available for people willing to contribute to the FreeBSD project.
The list is being regularly updated and contains items for both programmers and non-programmers with information about each project.
[[contrib-how]]
== How to Contribute
Contributions to the system generally fall into one or more of the following 5 categories:
[[contrib-general]]
=== Bug Reports and General Commentary
An idea or suggestion of _general_ technical interest should be mailed to the {freebsd-hackers}.
Likewise, people with an interest in such things (and a tolerance for a _high_ volume of mail!) may subscribe to the {freebsd-hackers}.
See link:{handbook}#eresources-mail[The FreeBSD Handbook] for more information about this and other mailing lists.
If you find a bug or are submitting a specific change, please report it using the https://bugs.FreeBSD.org/submit/[bug submission form].
Try to fill in each field of the bug report.
Unless they exceed 65KB, include any patches directly in the report.
If the patch is suitable to be applied to the source tree put `[PATCH]` in the synopsis of the report.
When including patches, _do not_ use cut-and-paste because cut-and-paste turns tabs into spaces and makes them unusable.
When patches are much larger than 20KB, consider compressing them (e.g., with man:gzip[1] or man:bzip2[1]) prior to uploading them.
After filing a report, you should receive confirmation along with a tracking number.
Keep this tracking number so that you can update us with details about the problem.
See also link:{problem-reports}[this article] on how to write good problem reports.
=== Changes to the Documentation
Changes to the documentation are overseen by the {freebsd-doc}.
Please look at the link:{fdp-primer}[FreeBSD Documentation Project Primer] for complete instructions.
Send submissions and changes (even small ones are welcome!) using the same method as any other bug report.
=== Changes to Existing Source Code
An addition or change to the existing source code is a somewhat trickier affair and depends a lot on how far out of date you are with the current state of FreeBSD development.
There is a special on-going release of FreeBSD known as "FreeBSD-CURRENT" which is made available in a variety of ways for the convenience of developers working actively on the system.
See link:{handbook}#current-stable[The FreeBSD Handbook] for more information about getting and using FreeBSD-CURRENT.
Working from older sources unfortunately means that your changes may sometimes be too obsolete or too divergent for easy re-integration into FreeBSD.
Chances of this can be minimized somewhat by subscribing to the {freebsd-announce} and the {freebsd-current} lists, where discussions on the current state of the system take place.
Assuming that you can manage to secure fairly up-to-date sources to base your changes on, the next step is to produce a set of diffs to send to the FreeBSD maintainers.
This is done with the man:diff[1] command.
The preferred man:diff[1] format for submitting patches is the unified output format generated by `diff -u`.
[source,shell]
....
% diff -u oldfile newfile
....
or
[source,shell]
....
% diff -u -r -N olddir newdir
....
would generate a set of unified diffs for the given source file or directory hierarchy.
See man:diff[1] for more information.
Once you have a set of diffs (which you may test with the man:patch[1] command), you should submit them for inclusion with FreeBSD as a bug report.
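For example, a quick sanity check that a patch applies cleanly (assuming the hypothetical file name [.filename]#mychange.diff#) can be done with man:patch[1] in check-only mode, which reports problems without modifying anything:
[source,shell]
....
% patch -C < mychange.diff
....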
_Do not_ just send the diffs to the {freebsd-hackers} or they will get lost! We greatly appreciate your submission (this is a volunteer project!); because we are busy, we may not be able to address it immediately, but it will remain in the PR database until we do.
Indicate your submission by including `[PATCH]` in the synopsis of the report.
If you feel it appropriate (e.g. you have added, deleted, or renamed files), bundle your changes into a `tar` file.
Archives created with man:shar[1] are also welcome.
If your change is of a potentially sensitive nature, such as if you are unsure of copyright issues governing its further distribution, then you should send it to {core-email} directly rather than submitting it as a bug report.
The {core-email} reaches a much smaller group of people who do much of the day-to-day work on FreeBSD.
Note that this group is also _very busy_ and so you should only send mail to them where it is truly necessary.
Please refer to man:intro[9] and man:style[9] for some information on coding style.
We would appreciate it if you were at least aware of this information before submitting code.
=== New Code or Major Value-Added Packages
In the case of a significant contribution of a large body of work, or the addition of an important new feature to FreeBSD, it almost always becomes necessary to either send changes as tar files or upload them to a web or FTP site for other people to access.
If you do not have access to a web or FTP site, ask on an appropriate FreeBSD mailing list for someone to host the changes for you.
When working with large amounts of code, the touchy subject of copyrights also invariably comes up.
FreeBSD prefers free software licenses such as BSD or ISC.
Copyleft licenses such as GPLv2 are sometimes permitted.
The complete listing can be found on the link:https://www.FreeBSD.org/internal/software-license/[core team licensing policy] page.
=== Money or Hardware
We are always very happy to accept donations to further the cause of the FreeBSD Project and, in a volunteer effort like ours, a little can go a long way! Donations of hardware are also very important to expanding our list of supported peripherals since we generally lack the funds to buy such items ourselves.
[[donations]]
==== Donating Funds
The https://www.freebsdfoundation.org[FreeBSD Foundation] is a non-profit, tax-exempt foundation established to further the goals of the FreeBSD Project.
As a 501(c)3 entity, the Foundation is generally exempt from US federal income tax as well as Colorado State income tax.
Donations to a tax-exempt entity are often deductible from taxable federal income.
Donations may be sent in check form to:
[.address]
****
The FreeBSD Foundation +
P.O. Box 20247, +
Boulder, +
CO 80308 +
USA
****
The FreeBSD Foundation is also able to accept https://www.freebsdfoundation.org/donate/[online donations] through various payment options.
More information about the FreeBSD Foundation can be found in https://people.FreeBSD.org/~jdp/foundation/announcement.html[The FreeBSD Foundation -- an Introduction].
To contact the Foundation by email, write to mailto:info@FreeBSDFoundation.org[info@FreeBSDFoundation.org].
==== Donating Hardware
The FreeBSD Project happily accepts donations of hardware that it can find good use for.
If you are interested in donating hardware, please contact the link:https://www.FreeBSD.org/donations/[Donations Liaison Office].
[[ports-contributing]]
== Contributing to ports
[[adopt-port]]
=== Adopting an unmaintained port
==== Choosing an unmaintained port
Taking over maintainership of ports that are unmaintained is a great way to get involved.
Unmaintained ports are only updated and fixed when somebody volunteers to work on them.
There are a large number of unmaintained ports.
It is a good idea to start with adopting a port that you use regularly.
Unmaintained ports have their `MAINTAINER` set to `ports@FreeBSD.org`.
A list of unmaintained ports and their current errors and problem reports can be seen at the http://portsmon.FreeBSD.org/portsconcordanceformaintainer.py?maintainer=ports%40FreeBSD.org[FreeBSD Ports Monitoring System].
A list of unmaintained ports with errors can be seen at https://portsfallout.com/fallout?port=&maintainer=ports%40FreeBSD.org[PortsFallout].
Many unmaintained ports have pending updates; these can be seen at the https://portscout.freebsd.org/ports@freebsd.org.html[FreeBSD Ports distfile scanner].
Some ports affect a large number of others due to dependencies and slave port relationships.
Generally, we want people to have some experience before they maintain such ports.
You can find out whether or not a port has dependencies or slave ports by looking at a master index of ports called [.filename]#INDEX#.
(The name of the file varies by release of FreeBSD; for instance, [.filename]#INDEX-8#.) Some ports have conditional dependencies that are not included in a default [.filename]#INDEX# build.
We expect you to be able to recognize such ports by looking through other ports' [.filename]##Makefile##s.
[NOTE]
======
The FreeBSD Ports Monitoring System (portsmon) is currently not working due to recent Python updates.
======
==== How to adopt the port
First make sure you understand your <<maintain-port>>.
Also read the link:{porters-handbook}[Porter's Handbook].
_Please do not commit yourself to more than you feel you can comfortably handle._
You may request maintainership of any unmaintained port as soon as you wish.
Simply set `MAINTAINER` to your own email address and send a PR (Problem Report) with the change.
If the port has build errors or needs updating, you may wish to include any other changes in the same PR.
This will help because many committers are less willing to assign maintainership to someone who does not have a known track record with FreeBSD.
Submitting PRs that fix build errors or update ports is the best way to establish one.
File your PR with category `ports` and class `change-request`.
A committer will examine your PR, commit the changes, and finally close the PR.
Sometimes this process can take a little while (committers are volunteers, too :).
[[maintain-port]]
=== The challenge for port maintainers
This section will give you an idea of why ports need to be maintained and outline the responsibilities of a port maintainer.
[[why-maintenance]]
==== Why ports require maintenance
Creating a port is a one-off task.
Ensuring that a port is up to date and continues to build and run requires an ongoing maintenance effort.
Maintainers are the people who dedicate some of their time to meeting these goals.
The foremost reason ports need maintenance is to bring the latest and greatest in third party software to the FreeBSD community.
An additional challenge is to keep individual ports working within the Ports Collection framework as it evolves.
As a maintainer, you will need to manage the following challenges:
* *New software versions and updates.* New versions and updates of existing ported software become available all the time, and these need to be incorporated into the Ports Collection in order to provide up-to-date software.
* *Changes to dependencies.* If significant changes are made to the dependencies of your port, it may need to be updated so that it will continue to work correctly.
* *Changes affecting dependent ports.* If other ports depend on a port that you maintain, changes to your port may require coordination with other maintainers.
* *Interaction with other users, maintainers and developers.* Part of being a maintainer is taking on a support role. You are not expected to provide general support (but we welcome it if you choose to do so). What you should provide is a point of coordination for FreeBSD-specific issues regarding your ports.
* *Bug hunting.* A port may be affected by bugs which are specific to FreeBSD. You will need to investigate, find, and fix these bugs when they are reported. Thoroughly testing a port to identify problems before they make their way into the Ports Collection is even better.
* *Changes to ports infrastructure and policy.* Occasionally the systems that are used to build ports and packages are updated or a new recommendation affecting the infrastructure is made. You should be aware of these changes in case your ports are affected and require updating.
* *Changes to the base system.* FreeBSD is under constant development. Changes to software, libraries, the kernel or even policy changes can cause flow-on change requirements to ports.
==== Maintainer responsibilities
===== Keep your ports up to date
This section outlines the process to follow to keep your ports up to date.
This is an overview.
More information about upgrading a port is available in the link:{porters-handbook}[Porter's Handbook].
[.procedure]
====
. Watch for updates
+
Monitor the upstream vendor for new versions, updates and security fixes for the software.
Announcement mailing lists or news web pages are useful for doing this.
Sometimes users will contact you and ask when your port will be updated.
If you are busy with other things or for any reason just cannot update it at the moment, ask if they will help you by submitting an update.
+
You may also receive automated email from the `FreeBSD Ports Version Check` informing you that a newer version of your port's distfile is available.
More information about that system (including how to stop future emails) will be provided in the message.
. Incorporate changes
+
When they become available, incorporate the changes into the port.
You need to be able to generate a patch between the original port and your updated port.
. Review and test
+
Thoroughly review and test your changes:
** Build, install and test your port on as many platforms and architectures as you can. It is common for a port to work on one branch or platform and fail on another.
** Make sure your port's dependencies are complete. The recommended way of doing this is by installing your own ports tinderbox. See <<resources>> for more information.
** Check that the packing list is up to date. This involves adding in any new files and directories and removing unused entries.
** Verify your port using man:portlint[1] as a guide. See <<resources>> for important information about using portlint.
** Consider whether changes to your port might cause any other ports to break. If this is the case, coordinate the changes with the maintainers of those ports. This is especially important if your update changes the shared library version; in this case, at the very least, the dependent ports will need to get a `PORTREVISION` bump so that they will automatically be upgraded by automated tools such as portmaster or man:portupgrade[1].
. Submit changes
+
Send your update by submitting a PR with an explanation of the changes and a patch containing the differences between the original port and the updated one.
Please refer to link:{problem-reports}[Writing FreeBSD Problem Reports] for information on how to write a really good PR.
+
[NOTE]
======
Please do not submit a man:shar[1] archive of the entire port; instead, use man:diff[1] `-ruN`.
In this way, committers can much more easily see exactly what changes are being made.
The Porter's Handbook section on link:{porters-handbook}#port-upgrading[Upgrading] has more information. A brief sketch of generating such a diff follows this procedure.
======
. Wait
+
At some stage a committer will deal with your PR.
It may take minutes, or it may take weeks - so please be patient.
. Give feedback
+
If a committer finds a problem with your changes, they will most likely refer it back to you.
A prompt response will help get your PR committed faster, and is better for maintaining a thread of conversation when trying to resolve any problems.
. And Finally
+
Your changes will be committed and your port will have been updated.
The PR will then be closed by the committer. That's it!
====
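As mentioned in the note above, a recursive unified diff between the original and updated port is the preferred form of submission. A minimal sketch, using a hypothetical port [.filename]#category/myport# and a scratch copy named [.filename]#myport.new#:
[source,shell]
....
% cd /usr/ports/category
% cp -R myport myport.new
(edit the files under myport.new: Makefile, distinfo, pkg-plist, ...)
% diff -ruN myport myport.new > myport.diff
....
The resulting [.filename]#myport.diff# is what gets attached to the PR.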
===== Ensure your ports continue to build correctly
This section is about discovering and fixing problems that stop your ports from building correctly.
FreeBSD only guarantees that the Ports Collection works on the `-STABLE` branches.
In theory, you should be able to get by with running the latest release of each stable branch (since the ABIs are not supposed to change), but if you can run the branch, that is even better.
Since the majority of FreeBSD installations run on PC-compatible machines (what is termed the `i386` architecture), we expect you to keep the port working on that architecture.
We prefer that ports also work on the `amd64` architecture running native.
It is completely fair to ask for help if you do not have one of these machines.
[NOTE]
====
The usual failure modes for non-`x86` machines are that the original programmers assumed that, for instance, pointers are `int`-s, or that a relatively lax older gcc compiler was being used.
More and more, application authors are reworking their code to remove these assumptions - but if the author is not actively maintaining their code, you may need to do this yourself.
====
These are the tasks you need to perform to ensure your port is able to be built:
[.procedure]
====
. Watch for build failures
+
Check your mail for messages from `pkg-fallout@FreeBSD.org` and the http://portscout.FreeBSD.org[distfiles scanner] to see if any of the ports which are failing to build are out of date.
. Collect information
+
Once you are aware of a problem, collect information to help you fix it.
Build errors reported by `pkg-fallout` are accompanied by logs which will show you where the build failed.
If the failure was reported to you by a user, ask them to send you information which may help in diagnosing the problem, such as:
** Build logs
** The commands and options used to build the port (including options set in [.filename]#/etc/make.conf#)
** A list of packages installed on their system as shown by man:pkg-info[8]
** The version of FreeBSD they are running as shown by man:uname[1] `-a`
** When their ports collection was last updated
** When their ports tree and [.filename]#INDEX# were last updated
. Investigate and find a solution
+
Unfortunately there is no straightforward process to follow to do this.
Remember, though: if you are stuck, ask for help! The {freebsd-ports} is a good place to start, and the upstream developers are often very helpful.
. Submit changes
+
Just as with updating a port, you should now incorporate changes, review and test, submit your changes in a PR, and provide feedback if required.
. Send patches to upstream authors
+
In some cases, you will have to make patches to the port to make it run on FreeBSD.
Some (but not all) upstream authors will accept such patches back into their code for the next release.
If so, this may even help their users on other BSD-based systems as well and perhaps save duplicated effort.
Please consider sending any applicable patches to the authors as a courtesy.
====
===== Investigate bug reports and PRs related to your port
This section is about discovering and fixing bugs.
FreeBSD-specific bugs are generally caused by assumptions about the build and runtime environments that do not apply to FreeBSD.
You are less likely to encounter a problem of this type, but it can be more subtle and difficult to diagnose.
These are the tasks you need to perform to ensure your port continues to work as intended:
[.procedure]
====
. Respond to bug reports
+
Bugs may be reported to you through email via the https://bugs.FreeBSD.org/search/[Problem Report database].
Bugs may also be reported directly to you by users.
+
You should respond to PRs and other reports within 14 days, but please try not to take that long.
Try to respond as soon as possible, even if it is just to say you need some more time before you can work on the PR.
+
If you have not responded after 14 days, any committer may commit from a PR that you have not responded to via a `maintainer-timeout`.
. Collect information
+
If the person reporting the bug has not also provided a fix, you need to collect the information that will allow you to generate one.
+
If the bug is reproducible, you can collect most of the required information yourself.
If not, ask the person who reported the bug to collect the information for you, such as:
** A detailed description of their actions, expected program behavior and actual behavior
** Copies of input data used to trigger the bug
** Information about their build and execution environment - for example, a list of installed packages and the output of man:env[1]
** Core dumps
** Stack traces
. Eliminate incorrect reports
+
Some bug reports may be incorrect.
For example, the user may have simply misused the program; or their installed packages may be out of date and require updating.
Sometimes a reported bug is not specific to FreeBSD.
In this case report the bug to the upstream developers.
If the bug is within your capabilities to fix, you can also patch the port so that the fix is applied before the next upstream release.
. Find a solution
+
As with build errors, you will need to sort out a fix to the problem.
Again, remember to ask if you are stuck!
. Submit or approve changes
+
Just as with updating a port, you should now incorporate changes, review and test, and submit your changes in a PR (or send a follow-up if a PR already exists for the problem).
If another user has submitted changes in the PR, you can also send a follow-up saying whether or not you approve the changes.
====
===== Providing support
Part of being a maintainer is providing support - not for the software in general - but for the port and any FreeBSD-specific quirks and problems.
Users may contact you with questions, suggestions, problems and patches.
Most of the time their correspondence will be specific to FreeBSD.
Occasionally you may have to invoke your skills in diplomacy, and kindly point users seeking general support to the appropriate resources.
Less frequently, you will encounter a person asking why the `RPMS` are not up to date or how they can get the software to run under Foo Linux.
Take the opportunity to tell them that your port is up to date (if it is, of course!), and suggest that they try FreeBSD.
Sometimes users and developers will decide that you are a busy person whose time is valuable and do some of the work for you.
For example, they might:
* submit a PR or send you patches to update your port,
* investigate and perhaps provide a fix to a PR, or
* otherwise submit changes to your port.
In these cases your main obligation is to respond in a timely manner.
Again, the timeout for non-responsive maintainers is 14 days.
After this period changes may be committed unapproved.
They have taken the trouble to do this for you, so please try to at least respond promptly.
Then review, approve, modify or discuss their changes with them as soon as possible.
If you can make them feel that their contribution is appreciated (and it should be) you will have a better chance persuading them to do more things for you in the future :-).
[[fix-broken]]
=== Finding and fixing a broken port
There are some really good places to find a port that needs some attention.
You can use the https://bugs.freebsd.org/search[web interface] to the Problem Report database to search through and view unresolved PRs.
The majority of ports PRs are updates, but with a little searching and skimming over synopses you should be able to find something interesting to work on (the `sw-bug` class is a good place to start).
The other place is the http://portsmon.FreeBSD.org/[FreeBSD Ports Monitoring System].
In particular look for unmaintained ports with build errors and ports that are marked `BROKEN`.
https://portsfallout.com/[PortsFallout] shows port issues gathered from the FreeBSD package building.
It is OK to send changes for a maintained port as well, but remember to ask the maintainer in case they are already working on the problem.
Once you have found a bug or problem, collect information, investigate and fix! If there is an existing PR, follow up to that.
Otherwise create a new PR.
Your changes will be reviewed and, if everything checks out, committed.
[NOTE]
======
The FreeBSD Ports Monitoring System (portsmon) is currently not working due to recent Python updates.
======
[[mortal-coil]]
=== When to call it quits
As your interests and commitments change, you may find that you no longer have time to continue some (or all) of your ports contributions.
That is fine! Please let us know if you are no longer using a port or have otherwise lost time or interest in being a maintainer.
In this way we can go ahead and allow other people to try to work on existing problems with the port without waiting for your response.
Remember, FreeBSD is a volunteer project, so if maintaining a port is no fun any more, it is probably time to let someone else do it!
In any case, the Ports Management Team (`portmgr`) reserves the right to reset your maintainership if you have not actively maintained your port in some time.
(Currently, this is set to 3 months.)
By this, we mean that there are unresolved problems or pending updates that have not been worked on during that time.
[[resources]]
=== Resources for ports maintainers and contributors
The link:{porters-handbook}[Porter's Handbook] is your hitchhiker's guide to the ports system. Keep it handy!
link:{problem-reports}[Writing FreeBSD Problem Reports] describes how to best formulate and submit a PR.
In 2005 more than eleven thousand ports PRs were submitted! Following this article will greatly assist us in reducing the time needed to handle your PRs.
The https://bugs.freebsd.org/bugzilla/query.cgi[Problem Report database] can be searched for existing reports.
The http://portsmon.FreeBSD.org/[FreeBSD Ports Monitoring System (portsmon)] can show you cross-referenced information about ports such as build errors and problem reports.
If you are a maintainer you can use it to check on the build status of your ports.
As a contributor you can use it to find broken and unmaintained ports that need to be fixed.
The http://portscout.FreeBSD.org[FreeBSD Ports distfile scanner (portscout)] can show you ports for which the distfiles are not fetchable.
You can check on your own ports or use it to find ports that need their `MASTER_SITES` updated.
package:ports-mgmt/poudriere[] is the most thorough way to test a port through the entire cycle of installation, packaging, and deinstallation.
Documentation is located at the https://github.com/freebsd/poudriere[poudriere GitHub repository].
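A hypothetical test run, assuming a 13.0-RELEASE amd64 build jail named `130amd64` and a port at [.filename]#category/myport# (consult the poudriere documentation for the exact options for your setup):
[source,shell]
....
# poudriere jail -c -j 130amd64 -v 13.0-RELEASE
# poudriere ports -c
# poudriere testport -j 130amd64 -o category/myport
....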
man:portlint[1] is an application which can be used to verify that your port conforms to many important stylistic and functional guidelines.
portlint is a simple heuristic application, so you should use it __only as a guide__.
If portlint suggests changes which seem unreasonable, consult the link:{porters-handbook}[Porter's Handbook] or ask for advice.
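For example, running it from the port's directory with all checks enabled (the port name is hypothetical):
[source,shell]
....
% cd /usr/ports/category/myport
% portlint -AC
....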
The {freebsd-ports} is for general ports-related discussion.
It is a good place to ask for help.
You can https://lists.freebsd.org/mailman/listinfo[subscribe, or read and search the list archives].
Reading the archives of the {freebsd-ports-bugs} and the {svn-ports-head} may also be of interest.
https://portsfallout.com/[PortsFallout] is a place to help in searching for the https://lists.freebsd.org/pipermail/freebsd-pkg-fallout/[FreeBSD package-fallout archive].
[[ideas-contributing]]
== Getting Started in Other Areas
Looking for something interesting to get started that is not mentioned elsewhere in this article? The FreeBSD Project has several Wiki pages containing areas within which new contributors can get ideas on how to get started.
The https://wiki.freebsd.org/JuniorJobs[Junior Jobs] page has a list of projects that might be of interest to people just getting started in FreeBSD, and want to work on interesting things to get their feet wet.
The https://wiki.freebsd.org/IdeasPage[Ideas Page] contains various "nice to have" or "interesting" things to work on in the Project.
diff --git a/documentation/content/en/articles/contributors/_index.adoc b/documentation/content/en/articles/contributors/_index.adoc
index 1919f846c6..afd0f120b4 100644
--- a/documentation/content/en/articles/contributors/_index.adoc
+++ b/documentation/content/en/articles/contributors/_index.adoc
@@ -1,166 +1,166 @@
---
title: Contributors to FreeBSD
-releaseinfo: "$FreeBSD$"
+description: Contributors to FreeBSD
trademarks: ["freebsd", "sun", "general"]
---
= Contributors to FreeBSD
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:source-highlighter: rouge
:experimental:
:sectnumlevels: 6
include::shared/authors.adoc[]
include::shared/en/urls.adoc[]
[.abstract-title]
Abstract
This article lists individuals and organizations who have made a contribution to FreeBSD.
'''
toc::[]
[[donors]]
== Donors Gallery
[NOTE]
====
As of 2010, the following section is several years out-of-date. Donations from the past several years appear https://www.FreeBSD.org/donations/donors/[here].
====
The FreeBSD Project is indebted to the following donors and would like to publicly thank them here!
* _Contributors to the central server project:_
The following individuals and businesses made it possible for the FreeBSD Project to build a new central server machine, which replaced `freefall.FreeBSD.org` at one point, by donating the following items:
** {mbarkah} and his employer, http://www.hemi.com/[ Hemisphere Online], donated a _Pentium Pro (P6) 200MHz CPU_
** http://www.asacomputers.com/[ASA Computers] donated a _Tyan 1662 motherboard_.
** Joe McGuckin mailto:joe@via.net[joe@via.net] of http://www.via.net/[ViaNet Communications] donated a _Kingston ethernet controller._
** Jack O'Neill mailto:jack@diamond.xtalwind.net[jack@diamond.xtalwind.net] donated an _NCR 53C875 SCSI controller card_.
** Ulf Zimmermann mailto:ulf@Alameda.net[ulf@Alameda.net] of http://www.Alameda.net/[Alameda Networks] donated _128MB of memory_, a _4 Gb disk drive and the case._
* _Direct funding:_
+
The following individuals and businesses have generously contributed direct funding to the project:
** Annelise Anderson mailto:ANDRSN@HOOVER.STANFORD.EDU[ANDRSN@HOOVER.STANFORD.EDU]
** {dillon}
** http://www.bluemountain.com/[Blue Mountain Arts]
** http://www.epilogue.com/[Epilogue Technology Corporation]
** {sef}
** http://www.gta.com/[Global Technology Associates, Inc]
** Don Scott Wilde
** Gianmarco Giovannelli mailto:gmarco@masternet.it[gmarco@masternet.it]
** Josef C. Grosch mailto:joeg@truenorth.org[joeg@truenorth.org]
** Robert T. Morris
** {chuckr}
** Kenneth P. Stox mailto:ken@stox.sa.enteract.com[ken@stox.sa.enteract.com] of http://www.imagescape.com/[Imaginary Landscape, LLC.]
** Dmitry S. Kohmanyuk mailto:dk@dog.farm.org[dk@dog.farm.org]
** http://www.cdrom.co.jp/[Laser5] of Japan (a portion of the profits from sales of their various FreeBSD CDROMs).
** http://www.mmjp.or.jp/fuki/[Fuki Shuppan Publishing Co.] donated a portion of their profits from _Hajimete no FreeBSD_ (FreeBSD, Getting started) to the FreeBSD and XFree86 projects.
** http://www.ascii.co.jp/[ASCII Corp.] donated a portion of their profits from several FreeBSD-related books to the FreeBSD project.
** http://www.yokogawa.co.jp/[Yokogawa Electric Corp] has generously donated significant funding to the FreeBSD project.
** http://www.buffnet.net/[BuffNET]
** http://www.pacificsolutions.com/[Pacific Solutions]
** http://www.siemens.de/[Siemens AG] via Andre Albsmeier mailto:andre.albsmeier@mchp.siemens.de[andre.albsmeier@mchp.siemens.de]
** Chris Silva mailto:ras@interaccess.com[ras@interaccess.com]
* _Hardware contributors:_
+
The following individuals and businesses have generously contributed hardware for testing and device driver development/support:
** BSDi for providing the Pentium P5-90 and 486/DX2-66 EISA/VL systems that are being used for our development work, to say nothing of the network access and other donations of hardware resources.
** http://www.compaq.com[Compaq] has donated a variety of Alpha systems to the FreeBSD Project. Among the many generous donations are 4 AlphaStation DS10s, an AlphaServer DS20, AlphaServer 2100s, an AlphaServer 4100, 8 500Mhz Personal Workstations, 4 433Mhz Personal Workstations, and more! These machines are used for release engineering, package building, SMP development, and general development on the Alpha architecture.
** TRW Financial Systems, Inc. provided 130 PCs, three 68 GB file servers, twelve Ethernets, two routers and an ATM switch for debugging the diskless code.
** Dermot McDonnell donated the Toshiba XM3401B CDROM drive currently used in freefall.
** Chuck Robey mailto:chuckr@glue.umd.edu[chuckr@glue.umd.edu] contributed his floppy tape streamer for experimental work.
** Larry Altneu mailto:larry@ALR.COM[larry@ALR.COM], and {wilko}, provided Wangtek and Archive QIC-02 tape drives in order to improve the [.filename]#wt# driver.
** Ernst Winter (http://berklix.org/ewinter/[Deceased]) contributed a 2.88 MB floppy drive to the project. This will hopefully increase the pressure for rewriting the floppy disk driver.
** http://www.tekram.com/[Tekram Technologies] sent one each of their DC-390, DC-390U and DC-390F FAST and ULTRA SCSI host adapter cards for regression testing of the NCR and AMD drivers with their cards. They are also to be applauded for making driver sources for free operating systems available from their FTP server link:ftp://ftp.tekram.com/scsi/FreeBSD/[ftp://ftp.tekram.com/scsi/FreeBSD/].
** Larry M. Augustin contributed not only a Symbios Sym8751S SCSI card, but also a set of data books, including one about the forthcoming Sym53c895 chip with Ultra-2 and LVD support, and the latest programming manual with information on how to safely use the advanced features of the latest Symbios SCSI chips. Thanks a lot!
** {kuku} donated an FX120 12 speed Mitsumi CDROM drive for IDE CDROM driver development.
** Mike Tancsa mailto:mike@sentex.ca[mike@sentex.ca] donated four various ATM PCI cards in order to help increase support of these cards as well as help support the development effort of the netatm ATM stack.
* _Special contributors:_
** http://www.osd.bsdi.com/[BSDi] (formerly Walnut Creek CDROM) has donated almost more than we can say (see the 'About the FreeBSD Project' section of the link:{handbook}[FreeBSD Handbook] for more details). In particular, we would like to thank them for the original hardware used for `freefall.FreeBSD.org`, our primary development machine, and for `thud.FreeBSD.org`, a testing and build box. We are also indebted to them for funding various contributors over the years and providing us with unrestricted use of their T1 connection to the Internet.
** The http://www.interface-business.de/[interface business GmbH, Dresden] has been patiently supporting {joerg} who has often preferred FreeBSD work over paid work, and used to fall back to their (quite expensive) EUnet Internet connection whenever his private connection became too slow or flaky to work with it...
** http://www.bsdi.com/[Berkeley Software Design, Inc.] has contributed their DOS emulator code to the remaining BSD world, which is used in the _doscmd_ command.
[[staff-committers]]
== The FreeBSD Developers
These are the people who have commit privileges and do the engineering work on the FreeBSD source tree.
All core team members are also developers.
(in alphabetical order by last name):
include::content/en/articles/contributors/contrib-committers.adoc[]
[[contrib-corealumni]]
== Core Team Alumni
The following people were members of the FreeBSD core team during the periods indicated.
We thank them for their past efforts in the service of the FreeBSD project.
_In rough reverse chronological order:_
include::content/en/articles/contributors/contrib-corealumni.adoc[]
[[contrib-develalumni]]
== Development Team Alumni
The following people were members of the FreeBSD development team during the periods indicated.
We thank them for their past efforts in the service of the FreeBSD project.
_In rough reverse chronological order:_
include::content/en/articles/contributors/contrib-develalumni.adoc[]
[[contrib-portmgralumni]]
== Ports Management Team Alumni
The following people were members of the FreeBSD portmgr team during the periods indicated.
We thank them for their past efforts in the service of the FreeBSD project.
_In rough reverse chronological order:_
include::content/en/articles/contributors/contrib-portmgralumni.adoc[]
[[contrib-develinmemoriam]]
== Development Team: In Memoriam
During the many years that the FreeBSD Project has been in existence, sadly, some of our developers have passed away.
Here are some remembrances.
_In rough reverse chronological order of their passing:_
include::content/en/articles/contributors/contrib-develinmemoriam.adoc[]
[[contrib-derived]]
== Derived Software Contributors
This software was originally derived from William F. Jolitz's 386BSD release 0.1, though almost none of the original 386BSD specific code remains.
This software has been essentially re-implemented from the 4.4BSD-Lite release provided by the Computer Science Research Group (CSRG) at the University of California, Berkeley and associated academic contributors.
There are also portions of NetBSD and OpenBSD that have been integrated into FreeBSD as well, and we would therefore like to thank all the contributors to NetBSD and OpenBSD for their work.
[[contrib-additional]]
== Additional FreeBSD Contributors
(in alphabetical order by first name):
include::content/en/articles/contributors/contrib-additional.adoc[]
[[contrib-386bsd]]
== 386BSD Patch Kit Patch Contributors
(in alphabetical order by first name):
include::content/en/articles/contributors/contrib-386bsd.adoc[]
diff --git a/documentation/content/en/articles/cups/_index.adoc b/documentation/content/en/articles/cups/_index.adoc
index 008369f0ad..1bd26b5da6 100644
--- a/documentation/content/en/articles/cups/_index.adoc
+++ b/documentation/content/en/articles/cups/_index.adoc
@@ -1,256 +1,256 @@
---
title: CUPS on FreeBSD
authors:
- author: Chess Griffin
email: chess@chessgriffin.com
-releaseinfo: "$FreeBSD$"
+description: How to install and use CUPS on FreeBSD
trademarks: ["freebsd", "general"]
---
= CUPS on FreeBSD
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:source-highlighter: rouge
:experimental:
:sectnumlevels: 6
[.abstract-title]
Abstract
An article about configuring CUPS on FreeBSD.
'''
toc::[]
[[printing-cups]]
== An Introduction to the Common Unix Printing System (CUPS)
CUPS, the Common UNIX Printing System, provides a portable printing layer for UNIX(R)-based operating systems.
It has been developed by Easy Software Products to promote a standard printing solution for all UNIX(R) vendors and users.
CUPS uses the Internet Printing Protocol (IPP) as the basis for managing print jobs and queues.
The Line Printer Daemon (LPD), Server Message Block (SMB), and AppSocket (aka JetDirect) protocols are also supported with reduced functionality.
CUPS adds network printer browsing and PostScript Printer Description (PPD) based printing options to support real-world printing under UNIX(R).
As a result, CUPS is ideally-suited for sharing and accessing printers in mixed environments of FreeBSD, Linux(R), Mac OS(R) X, or Windows(R).
The main site for CUPS is http://www.cups.org/[http://www.cups.org/].
[[printing-cups-install]]
== Installing the CUPS Print Server
To install CUPS using a precompiled binary, issue the following command from a root terminal:
[source,shell]
....
# pkg install cups
....
Other optional, but recommended, packages are package:print/gutenprint[] and package:print/hplip[], both of which add drivers and utilities for a variety of printers.
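For example, both can be installed as packages in the same way:
[source,shell]
....
# pkg install print/gutenprint print/hplip
....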
Once installed, the CUPS configuration files can be found in the directory [.filename]#/usr/local/etc/cups#.
[[printing-cups-configuring-server]]
== Configuring the CUPS Print Server
After installation, a few files must be edited in order to configure the CUPS server.
First, create or modify, as the case may be, the file [.filename]#/etc/devfs.rules# and add the following information to set the proper permissions on all potential printer devices and to associate printers with the `cups` user group:
[.programlisting]
....
[system=10]
add path 'unlpt*' mode 0660 group cups
add path 'ulpt*' mode 0660 group cups
add path 'lpt*' mode 0660 group cups
add path 'usb/X.Y.Z' mode 0660 group cups
....
[NOTE]
====
Note that _X_, _Y_, and _Z_ should be replaced with the target USB device listed in the [.filename]#/dev/usb# directory that corresponds to the printer.
To find the correct device, examine the output of man:dmesg[8], where [.filename]#ugenX.Y# lists the printer device, which is a symbolic link to a USB device in [.filename]#/dev/usb#. A hypothetical example follows this note.
====
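A hypothetical example of tracking down the device node (the names and output will differ on your system):
[source,shell]
....
# dmesg | grep ugen
ugen0.2: <Hewlett-Packard HP LaserJet> at usbus0
# ls -l /dev/ugen0.2
lrwxr-xr-x  1 root  wheel  9 Jan  1 12:00 /dev/ugen0.2 -> usb/0.2.0
....
In this example, the path to use in [.filename]#/etc/devfs.rules# would be 'usb/0.2.0'.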
Next, add two lines to [.filename]#/etc/rc.conf# as follows:
[.programlisting]
....
cupsd_enable="YES"
devfs_system_ruleset="system"
....
These two entries will start the CUPS print server on boot and invoke the local devfs rule created above, respectively.
In order to enable CUPS printing under certain Microsoft(R) Windows(R) clients, the line below should be uncommented in [.filename]#/usr/local/etc/cups/mime.types# and [.filename]#/usr/local/etc/cups/mime.convs#:
[.programlisting]
....
application/octet-stream
....
Once these changes have been made, the man:devfs[8] and CUPS systems must both be restarted, either by rebooting the computer or issuing the following two commands in a root terminal:
[source,shell]
....
# /etc/rc.d/devfs restart
# /usr/local/etc/rc.d/cupsd restart
....
[[printing-cups-configuring-printers]]
== Configuring Printers on the CUPS Print Server
After the CUPS system has been installed and configured, the administrator can begin configuring the local printers attached to the CUPS print server.
This part of the process is very similar, if not identical, to configuring CUPS printers on other UNIX(R)-based operating systems, such as a Linux(R) distribution.
The primary means for managing and administering the CUPS server is through the web-based interface, which can be found by launching a web browser and entering http://localhost:631[http://localhost:631] in the browser's URL bar.
If the CUPS server is on another machine on the network, substitute the server's local IP address for `localhost`.
The CUPS web interface is fairly self-explanatory, as there are sections for managing printers and print jobs, authorizing users, and more.
Additionally, on the right-hand side of the Administration screen are several check-boxes allowing easy access to commonly-changed settings, such as whether to share published printers connected to the system, whether to allow remote administration of the CUPS server, and whether to allow users additional access and privileges to the printers and print jobs.
Adding a printer is generally as easy as clicking "Add Printer" at the Administration screen of the CUPS web interface, or clicking one of the "New Printers Found" buttons also at the Administration screen.
When presented with the "Device" drop-down box, simply select the desired locally-attached printer, and then continue through the process.
If one has added the package:print/gutenprint-cups[] or package:print/hplip[] ports or packages as referenced above, then additional print drivers will be available in the subsequent screens that might provide more stability or features.
[[printing-cups-clients]]
== Configuring CUPS Clients
Once the CUPS server has been configured and printers have been added and published to the network, the next step is to configure the clients, or the machines that are going to access the CUPS server.
If one has a single desktop machine that is acting as both server and client, then much of this information may not be needed.
[[printing-cups-clients-unix]]
=== UNIX(R) Clients
CUPS will also need to be installed on your UNIX(R) clients.
Once CUPS is installed on the clients, CUPS printers that are shared across the network are often automatically discovered by the printer managers of various desktop environments such as GNOME or KDE.
Alternatively, one can access the local CUPS interface on the client machine at http://localhost:631[http://localhost:631] and click on "Add Printer" in the Administration section.
When presented with the "Device" drop-down box, simply select the networked CUPS printer, if it was automatically discovered, or select `ipp` or `http` and enter the IPP or HTTP URI of the networked CUPS printer, usually in one of the two following syntaxes:
[.programlisting]
....
ipp://server-name-or-ip/printers/printername
....
[.programlisting]
....
http://server-name-or-ip:631/printers/printername
....
If the CUPS clients have difficulty finding other CUPS printers shared across the network, sometimes it is helpful to add or create a file [.filename]#/usr/local/etc/cups/client.conf# with a single entry as follows:
[.programlisting]
....
ServerName server-ip
....
In this case, _server-ip_ would be replaced by the local IP address of the CUPS server on the network.
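For instance, if the CUPS server happened to sit at the purely illustrative address `192.168.1.10`, the whole file would read:
[.programlisting]
....
ServerName 192.168.1.10
....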
[[printing-cups-clients-windows]]
=== Windows(R) Clients
Versions of Windows(R) prior to XP did not have the capability to natively network with IPP-based printers.
However, Windows(R) XP and later versions do have this capability.
Therefore, adding a CUPS printer in these versions of Windows(R) is quite easy.
Generally, the Windows(R) administrator will run the Windows(R) `Add Printer` wizard, select `Network Printer` and then enter the URI in the following syntax:
[.programlisting]
....
http://server-name-or-ip:631/printers/printername
....
If one has an older version of Windows(R) without native IPP printing support, then the general means of connecting to a CUPS printer is to use package:net/samba413[] and CUPS together, which is a topic outside the scope of this chapter.
[[printing-cups-troubleshooting]]
== CUPS Troubleshooting
Difficulties with CUPS often lie in permissions.
First, double check the man:devfs[8] permissions as outlined above.
Next, check the actual permissions of the devices created in the file system.
It is also helpful to make sure your user is a member of the `cups` group.
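As a quick sketch of those checks, with _jsmith_ standing in for whichever user actually needs to print (the device nodes shown are likewise only examples):
[source,shell]
....
# ls -l /dev/ulpt0 /dev/unlpt0
# pw groupmod cups -m jsmith
....
The first command confirms that the printer device nodes carry the `cups` group and mode 0660 set by the devfs rules above, and the second adds the hypothetical user to the `cups` group.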
If the permission check boxes in the Administration section of the CUPS web interface do not seem to be working, another fix might be to manually back up the main CUPS configuration file, [.filename]#/usr/local/etc/cups/cupsd.conf#, and then edit the various configuration options, trying different combinations.
One sample [.filename]#/usr/local/etc/cups/cupsd.conf# to test is listed below.
Please note that this sample [.filename]#cupsd.conf# sacrifices security for easier configuration; once the administrator successfully connects to the CUPS server and configures the clients, it is advisable to revisit this configuration file and begin locking down access.
[.programlisting]
....
# Log general information in error_log - change "info" to "debug" for
# troubleshooting...
LogLevel info
# Administrator user group...
SystemGroup wheel
# Listen for connections on Port 631.
Port 631
#Listen localhost:631
Listen /var/run/cups.sock
# Show shared printers on the local network.
Browsing On
BrowseOrder allow,deny
#BrowseAllow @LOCAL
BrowseAllow 192.168.1.* # change to local LAN settings
BrowseAddress 192.168.1.* # change to local LAN settings
# Default authentication type, when authentication is required...
DefaultAuthType Basic
DefaultEncryption Never # comment this line to allow encryption
# Allow access to the server from any machine on the LAN
<Location />
Order allow,deny
#Allow localhost
Allow 192.168.1.* # change to local LAN settings
</Location>
# Allow access to the admin pages from any machine on the LAN
<Location /admin>
#Encryption Required
Order allow,deny
#Allow localhost
Allow 192.168.1.* # change to local LAN settings
</Location>
# Allow access to configuration files from any machine on the LAN
<Location /admin/conf>
AuthType Basic
Require user @SYSTEM
Order allow,deny
#Allow localhost
Allow 192.168.1.* # change to local LAN settings
</Location>
# Set the default printer/job policies...
<Policy default>
# Job-related operations must be done by the owner or an administrator...
<Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs \
Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription \
Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job \
CUPS-Move-Job>
Require user @OWNER @SYSTEM
Order deny,allow
</Limit>
# All administration operations require an administrator to authenticate...
<Limit Pause-Printer Resume-Printer Set-Printer-Attributes Enable-Printer \
Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs \
Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer \
Promote-Job Schedule-Job-After CUPS-Add-Printer CUPS-Delete-Printer CUPS-Add-Class \
CUPS-Delete-Class CUPS-Accept-Jobs CUPS-Reject-Jobs CUPS-Set-Default>
AuthType Basic
Require user @SYSTEM
Order deny,allow
</Limit>
# Only the owner or an administrator can cancel or authenticate a job...
<Limit Cancel-Job CUPS-Authenticate-Job>
Require user @OWNER @SYSTEM
Order deny,allow
</Limit>
<Limit All>
Order deny,allow
</Limit>
</Policy>
....
diff --git a/documentation/content/en/articles/explaining-bsd/_index.adoc b/documentation/content/en/articles/explaining-bsd/_index.adoc
index ce636eb968..a4c56fa7fd 100644
--- a/documentation/content/en/articles/explaining-bsd/_index.adoc
+++ b/documentation/content/en/articles/explaining-bsd/_index.adoc
@@ -1,211 +1,211 @@
---
title: Explaining BSD
authors:
- author: Greg Lehey
email: grog@FreeBSD.org
-releaseinfo: "$FreeBSD$"
+description: Brief explanation about BSD
trademarks: ["freebsd", "amd", "apple", "intel", "linux", "opengroup", "sun", "unix", "general"]
---
= Explaining BSD
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:source-highlighter: rouge
:experimental:
:sectnumlevels: 6
[.abstract-title]
Abstract
In the open source world, the word "Linux" is almost synonymous with "Operating System", but it is not the only open source UNIX(R) operating system.
So what is the secret? Why is BSD not better known? This white paper addresses these and other questions.
Throughout this paper, differences between BSD and Linux will be noted __like this__.
'''
toc::[]
[[what-is-bsd]]
== What is BSD?
BSD stands for "Berkeley Software Distribution".
It is the name of distributions of source code from the University of California, Berkeley, which were originally extensions to AT&T's Research UNIX(R) operating system.
Several open source operating system projects are based on a release of this source code known as 4.4BSD-Lite.
In addition, they comprise a number of packages from other Open Source projects, including notably the GNU project.
The overall operating system comprises:
* The BSD kernel, which handles process scheduling, memory management, symmetric multi-processing (SMP), device drivers, etc.
* The C library, the base API for the system.
+
__The BSD C library is based on code from Berkeley, not the GNU project.__
* Utilities such as shells, file utilities, compilers and linkers.
+
__Some of the utilities are derived from the GNU project, others are not.__
* The X Window system, which handles graphical display.
+
The X Window system used in most versions of BSD is maintained by the http://www.X.org/[X.Org project].
FreeBSD allows the user to choose from a variety of desktop environments, such as Gnome, KDE, or Xfce; and lightweight window managers like Openbox, Fluxbox, or Awesome.
* Many other programs and utilities.
[[what-a-real-unix]]
== What, a real UNIX(R)?
The BSD operating systems are not clones, but open source derivatives of AT&T's Research UNIX(R) operating system, which is also the ancestor of the modern UNIX(R) System V.
This may surprise you.
How could that happen when AT&T has never released its code as open source?
It is true that AT&T UNIX(R) is not open source, and in a copyright sense BSD is very definitely _not_ UNIX(R), but on the other hand, AT&T has imported sources from other projects, notably the Computer Sciences Research Group (CSRG) of the University of California in Berkeley, CA. Starting in 1976, the CSRG released tapes of their software, calling them _Berkeley Software Distribution_ or __BSD__.
Initial BSD releases consisted mainly of user programs, but that changed dramatically when the CSRG landed a contract with the Defense Advanced Research Projects Agency (DARPA) to upgrade the communications protocols on their network, ARPANET.
The new protocols were known as the __Internet Protocols__, later _TCP/IP_ after the most important protocols.
The first widely distributed implementation was part of 4.2BSD, in 1982.
In the course of the 1980s, a number of new workstation companies sprang up.
Many preferred to license UNIX(R) rather than developing operating systems for themselves.
In particular, Sun Microsystems licensed UNIX(R) and implemented a version of 4.2BSD, which they called SunOS(TM).
When AT&T themselves were allowed to sell UNIX(R) commercially, they started with a somewhat bare-bones implementation called System III, to be quickly followed by System V.
The System V code base did not include networking, so all implementations included additional software from BSD, including not only the TCP/IP software but also utilities such as the _csh_ shell and the _vi_ editor.
Collectively, these enhancements were known as the __Berkeley Extensions__.
The BSD tapes contained AT&T source code and thus required a UNIX(R) source license.
By 1990, the CSRG's funding was running out, and it faced closure.
Some members of the group decided to release the BSD code, which was Open Source, without the AT&T proprietary code.
This finally happened with the __Networking Tape 2__, usually known as __Net/2__.
Net/2 was not a complete operating system: about 20% of the kernel code was missing.
One of the CSRG members, William F. Jolitz, wrote the remaining code and released it in early 1992 as __386BSD__.
At the same time, another group of ex-CSRG members formed a commercial company called http://www.bsdi.com/[Berkeley Software Design Inc.] and released a beta version of an operating system called http://www.bsdi.com/[BSD/386], which was based on the same sources.
The name of the operating system was later changed to BSD/OS.
386BSD never became a stable operating system.
Instead, two other projects split off from it in 1993: http://www.NetBSD.org/[NetBSD] and link:https://www.FreeBSD.org/[FreeBSD].
The two projects originally diverged due to differences in patience waiting for improvements to 386BSD: the NetBSD people started early in the year, and the first version of FreeBSD was not ready until the end of the year.
In the meantime, the code base had diverged sufficiently to make it difficult to merge.
In addition, the projects had different aims, as we will see below. In 1996, http://www.OpenBSD.org/[OpenBSD] split off from NetBSD, and in 2003, http://www.dragonflybsd.org/[DragonFlyBSD] split off from FreeBSD.
[[why-is-bsd-not-better-known]]
== Why is BSD not better known?
For a number of reasons, BSD is relatively unknown:
. The BSD developers are often more interested in polishing their code than marketing it.
. Much of Linux's popularity is due to factors external to the Linux projects, such as the press, and to companies formed to provide Linux services. Until recently, the open source BSDs had no such proponents.
. In 1992, AT&T sued http://www.bsdi.com/[BSDI], the vendor of BSD/386, alleging that the product contained AT&T-copyrighted code. The case was settled out of court in 1994, but the spectre of the litigation continues to haunt people. In March 2000 an article published on the web claimed that the court case had been "recently settled".
+
One detail that the lawsuit did clarify is the naming: in the 1980s, BSD was known as "BSD UNIX(R)".
With the elimination of the last vestige of AT&T code from BSD, it also lost the right to the name UNIX(R).
Thus you will see references in book titles to "the 4.3BSD UNIX(R) operating system" and "the 4.4BSD operating system".
[[comparing-bsd-and-linux]]
== Comparing BSD and Linux
So what is really the difference between, say, Debian Linux and FreeBSD? For the average user, the difference is surprisingly small: both are UNIX(R)-like operating systems.
Both are developed by non-commercial projects (this does not apply to many other Linux distributions, of course).
In the following section, we will look at BSD and compare it to Linux.
The description applies most closely to FreeBSD, which accounts for an estimated 80% of the BSD installations, but the differences from NetBSD, OpenBSD and DragonFlyBSD are small.
=== Who owns BSD?
No one person or corporation owns BSD.
It is created and distributed by a community of highly technical and committed contributors all over the world.
Some of the components of BSD are Open Source projects in their own right and managed by different project maintainers.
=== How is BSD developed and updated?
The BSD kernels are developed and updated following the Open Source development model.
Each project maintains a publicly accessible _source tree_ which contains all source files for the project, including documentation and other incidental files.
Users can obtain a complete copy of any version.
A large number of developers worldwide contribute to improvements to BSD.
They are divided into three kinds:
* _Contributors_ write code or documentation. They are not permitted to commit (add code) directly to the source tree. In order for their code to be included in the system, it must be reviewed and checked in by a registered developer, known as a __committer__.
* _Committers_ are developers with write access to the source tree. In order to become a committer, an individual must show ability in the area in which they are active.
+
It is at the individual committer's discretion whether they should obtain authority before committing changes to the source tree.
In general, an experienced committer may make changes which are obviously correct without obtaining consensus.
For example, a documentation project committer may correct typographical or grammatical errors without review.
On the other hand, developers making far-reaching or complicated changes are expected to submit their changes for review before committing them.
In extreme cases, a core team member with a function such as Principal Architect may order that changes be removed from the tree, a process known as _backing out_.
All committers receive mail describing each individual commit, so it is not possible to commit secretly.
* The _Core team_. FreeBSD and NetBSD each have a core team which manages the project. The core teams developed in the course of the projects, and their role is not always well-defined. It is not necessary to be a developer in order to be a core team member, though it is normal. The rules for the core team vary from one project to the other, but in general they have more say in the direction of the project than non-core team members have.
This arrangement differs from Linux in a number of ways:
. No one person controls the content of the system. In practice, this difference is overrated, since the Principal Architect can require that code be backed out, and even in the Linux project several people are permitted to make changes.
. On the other hand, there _is_ a central repository, a single place where you can find the entire operating system sources, including all older versions.
. BSD projects maintain the entire "Operating System", not only the kernel. This distinction is only marginally useful: neither BSD nor Linux is useful without applications. The applications used under BSD are frequently the same as the applications used under Linux.
. As a result of the formalized maintenance of a single SVN source tree, BSD development is clear, and it is possible to access any version of the system by release number or by date. SVN also allows incremental updates to the system: for example, the FreeBSD repository is updated about 100 times a day. Most of these changes are small.
=== BSD releases
FreeBSD, NetBSD and OpenBSD provide the system in three different "releases".
As with Linux, releases are assigned a number such as 1.4.1 or 3.5.
In addition, the version number has a suffix indicating its purpose:
. The development version of the system is called _CURRENT_. FreeBSD assigns a number to CURRENT, for example FreeBSD 5.0-CURRENT. NetBSD uses a slightly different naming scheme and appends a single-letter suffix which indicates changes in the internal interfaces, for example NetBSD 1.4.3G. OpenBSD does not assign a number ("OpenBSD-current"). All new development on the system goes into this branch.
. At regular intervals, between two and four times a year, the projects bring out a _RELEASE_ version of the system, which is available on CD-ROM and for free download from FTP sites, for example OpenBSD 2.6-RELEASE or NetBSD 1.4-RELEASE. The RELEASE version is intended for end users and is the normal version of the system. NetBSD also provides _patch releases_ with a third digit, for example NetBSD 1.4.2.
. As bugs are found in a RELEASE version, they are fixed, and the fixes are added to the SVN tree. In FreeBSD, the resultant version is called the _STABLE_ version, while in NetBSD and OpenBSD it continues to be called the RELEASE version. Smaller new features can also be added to this branch after a period of test in the CURRENT branch. Security and other important bug fixes are also applied to all supported RELEASE versions.
_By contrast, Linux maintains two separate code trees: the stable version and the development version.
Stable versions have an even minor version number, such as 2.0, 2.2 or 2.4.
Development versions have an odd minor version number, such as 2.1, 2.3 or 2.5.
In each case, the number is followed by a further number designating the exact release.
In addition, each vendor adds their own userland programs and utilities, so the name of the distribution is also important.
Each distribution vendor also assigns version numbers to the distribution, so a complete description might be something like "TurboLinux 6.0 with kernel 2.2.14"_
=== What versions of BSD are available?
In contrast to the numerous Linux distributions, there are only four major open source BSDs. Each BSD project maintains its own source tree and its own kernel. In practice, though, there appear to be fewer divergences between the userland code of the projects than there are in Linux.
It is difficult to categorize the goals of each project: the differences are very subjective. Basically,
* FreeBSD aims for high performance and ease of use by end users, and is a favourite of web content providers. It runs on a link:https://www.FreeBSD.org/platforms/[number of platforms] and has significantly more users than the other projects.
* NetBSD aims for maximum portability: "of course it runs NetBSD". It runs on machines from palmtops to large servers, and has even been used on NASA space missions. It is a particularly good choice for running on old non-Intel(R) hardware.
* OpenBSD aims for security and code purity: it uses a combination of the open source concept and rigorous code reviews to create a system which is demonstrably correct, making it the choice of security-conscious organizations such as banks, stock exchanges and US Government departments. Like NetBSD, it runs on a number of platforms.
* DragonFlyBSD aims for high performance and scalability under everything from a single-node UP system to a massively clustered system. DragonFlyBSD has several long-range technical goals, but the focus lies on providing an SMP-capable infrastructure that is easy to understand, maintain and develop for.
There are also two additional BSD UNIX(R) operating systems which are not open source, BSD/OS and Apple's Mac OS(R) X:
* BSD/OS was the oldest of the 4.4BSD derivatives. It was not open source, though source code licenses were available at relatively low cost. It resembled FreeBSD in many ways. Two years after the acquisition of BSDi by Wind River Systems, BSD/OS failed to survive as an independent product. Support and source code may still be available from Wind River, but all new development is focused on the VxWorks embedded operating system.
* http://www.apple.com/macosx/server/[Mac OS(R) X] is the latest version of the operating system for Apple(R)'s Mac(R) line. The BSD core of this operating system, http://developer.apple.com/darwin/[Darwin], is available as a fully functional open source operating system for x86 and PPC computers. The Aqua/Quartz graphics system and many other proprietary aspects of Mac OS(R) X remain closed-source, however. Several Darwin developers are also FreeBSD committers, and vice-versa.
=== How does the BSD license differ from the GNU Public license?
Linux is available under the http://www.fsf.org/copyleft/gpl.html[GNU General Public License] (GPL), which is designed to eliminate closed source software.
In particular, any derivative work of a product released under the GPL must also be supplied with source code if requested.
By contrast, the http://www.opensource.org/licenses/bsd-license.html[BSD license] is less restrictive: binary-only distributions are allowed.
This is particularly attractive for embedded applications.
=== What else should I know?
Since fewer applications are available for BSD than Linux, the BSD developers created a Linux compatibility package, which allows Linux programs to run under BSD.
The package includes both kernel modifications, in order to correctly perform Linux system calls, and Linux compatibility files such as the C library.
There is no noticeable difference in execution speed between a Linux application running on a Linux machine and a Linux application running on a BSD machine of the same speed.
The "all from one supplier" nature of BSD means that upgrades are much easier to handle than is frequently the case with Linux.
BSD handles library version upgrades by providing compatibility modules for earlier library versions, so it is possible to run binaries which are several years old with no problems.
=== Which should I use, BSD or Linux?
What does this all mean in practice? Who should use BSD, who should use Linux?
This is a very difficult question to answer.
Here are some guidelines:
* "If it ain't broke, don't fix it": If you already use an open source operating system, and you are happy with it, there is probably no good reason to change.
* BSD systems, in particular FreeBSD, can have notably higher performance than Linux. But this is not across the board. In many cases, there is little or no difference in performance. In some cases, Linux may perform better than FreeBSD.
* In general, BSD systems have a better reputation for reliability, mainly as a result of the more mature code base.
* BSD projects have a better reputation for the quality and completeness of their documentation. The various documentation projects aim to provide actively updated documentation, in many languages, and covering all aspects of the system.
* The BSD license may be more attractive than the GPL.
* BSD can execute most Linux binaries, while Linux cannot execute BSD binaries. Many BSD implementations can also execute binaries from other UNIX(R)-like systems. As a result, BSD may present an easier migration route from other systems than Linux would.
=== Who provides support, service, and training for BSD?
BSDi / http://www.freebsdmall.com[FreeBSD Mall, Inc.] have been providing support contracts for FreeBSD for nearly a decade.
In addition, each of the projects has a list of consultants for hire: link:https://www.FreeBSD.org/commercial/consult_bycat/[FreeBSD], http://www.netbsd.org/gallery/consultants.html[NetBSD], and http://www.openbsd.org/support.html[OpenBSD].
diff --git a/documentation/content/en/articles/filtering-bridges/_index.adoc b/documentation/content/en/articles/filtering-bridges/_index.adoc
index d880fb95f6..7a3647c0fb 100644
--- a/documentation/content/en/articles/filtering-bridges/_index.adoc
+++ b/documentation/content/en/articles/filtering-bridges/_index.adoc
@@ -1,274 +1,274 @@
---
title: Filtering Bridges
authors:
- author: Alex Dupre
email: ale@FreeBSD.org
-releaseinfo: "$FreeBSD$"
+description: Filtering Bridges in FreeBSD
trademarks: ["freebsd", "3com", "intel", "general"]
---
= Filtering Bridges
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:source-highlighter: rouge
:experimental:
:sectnumlevels: 6
include::shared/en/urls.adoc[]
[.abstract-title]
Abstract
Often it is useful to divide one physical network (like an Ethernet) into two separate segments without having to create subnets and use a router to link them together.
The device that connects the two networks in this way is called a bridge.
A FreeBSD system with two network interfaces is enough to act as a bridge.
A bridge works by scanning the MAC level addresses (Ethernet addresses) of the devices connected to each of its network interfaces and then forwarding the traffic between the two networks only if the source and the destination are on different segments.
In many ways a bridge is similar to an Ethernet switch with only two ports.
'''
toc::[]
[[filtering-bridges-why]]
== Why use a filtering bridge?
More and more frequently, thanks to the lowering costs of broadband Internet connections (xDSL) and also because of the reduction of available IPv4 addresses, many companies are connected to the Internet 24 hours a day and with few (sometimes not even a power of 2) IP addresses.
In these situations it is often desirable to have a firewall that filters incoming and outgoing traffic from and towards the Internet, but a packet filtering solution based on a router may not be applicable, either due to subnetting issues, because the router is owned by the connectivity supplier (ISP), or because it does not support such functionality.
In these scenarios the use of a filtering bridge is highly advised.
A bridge-based firewall can be configured and inserted between the xDSL router and your Ethernet hub/switch without any IP numbering issues.
[[filtering-bridges-how]]
== How to Install
Adding bridge functionalities to a FreeBSD system is not difficult.
Since the FreeBSD 4.5 release it is possible to load such functionalities as modules instead of having to rebuild the kernel, simplifying the procedure a great deal.
In the following subsections I will explain both installation methods.
[IMPORTANT]
====
_Do not_ follow both sets of instructions: one procedure _excludes_ the other.
Select the best choice according to your needs and abilities.
====
Before going on, be sure to have at least two Ethernet cards that support the promiscuous mode for both reception and transmission, since they must be able to send Ethernet packets with any address, not just their own.
Moreover, to have a good throughput, the cards should be PCI bus mastering cards.
The best choices are still the Intel EtherExpress(TM) Pro, followed by the 3Com(R) 3c9xx series.
To simplify the firewall configuration it may be useful to have two cards of different manufacturers (using different drivers) in order to distinguish clearly which interface is connected to the router and which to the inner network.
[[filtering-bridges-kernel]]
=== Kernel Configuration
So you have decided to use the older but well tested installation method.
To begin, you have to add the following rows to your kernel configuration file:
[.programlisting]
....
options BRIDGE
options IPFIREWALL
options IPFIREWALL_VERBOSE
....
The first line compiles in the bridge support, the second one the firewall, and the third one the firewall's logging functions.
Now it is necessary to build and install the new kernel.
You may find detailed instructions in the link:{handbook}#kernelconfig-building[Building and Installing a Custom Kernel] section of the FreeBSD Handbook.
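As a rough sketch only, assuming the custom kernel configuration file were named _FILTERBRIDGE_ (a hypothetical name), the rebuild could look like this; the Handbook section above remains the authoritative reference:
[source,shell]
....
# cd /usr/src
# make buildkernel KERNCONF=FILTERBRIDGE
# make installkernel KERNCONF=FILTERBRIDGE
# reboot
....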
[[filtering-bridges-modules]]
=== Modules Loading
If you have chosen to use the new and simpler installation method, the only thing to do now is add the following row to [.filename]#/boot/loader.conf#:
[.programlisting]
....
bridge_load="YES"
....
In this way, during the system startup, the [.filename]#bridge.ko# module will be loaded together with the kernel.
It is not required to add a similar row for the [.filename]#ipfw.ko# module, since it will be loaded automatically after the execution of the steps in the following section.
[[filtering-bridges-finalprep]]
== Final Preparation
Before rebooting in order to load the new kernel or the required modules (according to the previously chosen installation method), you have to make some changes to the [.filename]#/etc/rc.conf# configuration file.
The default rule of the firewall is to reject all IP packets.
Initially we will set up an `open` firewall, in order to verify its operation without any issue related to packet filtering (in case you are going to execute this procedure remotely, this configuration will prevent you from remaining isolated from the network).
Put these lines in [.filename]#/etc/rc.conf#:
[.programlisting]
....
firewall_enable="YES"
firewall_type="open"
firewall_quiet="YES"
firewall_logging="YES"
....
The first row enables the firewall (and loads the [.filename]#ipfw.ko# module if it is not compiled in the kernel), the second one sets it up in `open` mode (as explained in [.filename]#/etc/rc.firewall#), the third one suppresses the display of rules as they are loaded, and the fourth one enables logging support.
Regarding the configuration of the network interfaces, the most common way is to assign an IP to only one of the network cards, but the bridge will work equally well even if both interfaces, or neither, has a configured IP.
In the last case (IP-less) the bridge machine will be even more hidden, as it is inaccessible from the network: to configure it, you have to log in from the console or through a third network interface separated from the bridge.
Sometimes, during the system startup, some programs require network access, say for domain resolution: in this case it is necessary to assign an IP to the external interface (the one connected to the Internet, where the DNS server resides), since the bridge will be activated at the end of the startup procedure.
This means that the [.filename]#fxp0# interface (in our case) must be mentioned in the ifconfig section of [.filename]#/etc/rc.conf#, while [.filename]#xl0# is not.
Assigning an IP to both network cards does not make much sense, unless applications need to access services on both Ethernet segments during the startup procedure.
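For example, to give only the external interface an address, using the illustrative address from the firewall example later in this article and an assumed /24 netmask, [.filename]#/etc/rc.conf# might contain something like:
[.programlisting]
....
ifconfig_fxp0="inet 1.2.3.4 netmask 255.255.255.0"
....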
There is another important thing to know.
When running IP over Ethernet, there are actually two Ethernet protocols in use: one is IP, the other is ARP.
ARP does the conversion of the IP address of a host into its Ethernet address (MAC layer).
In order to allow communication between two hosts separated by the bridge, the bridge must forward ARP packets.
This protocol is not included in the IP layer, since it exists only with IP over Ethernet.
The FreeBSD firewall filters exclusively on the IP layer and therefore all non-IP packets (ARP included) will be forwarded without being filtered, even if the firewall is configured to not permit anything.
Now it is time to reboot the system and use it as before: there will be some new messages about the bridge and the firewall, but the bridge will not be activated and the firewall, being in `open` mode, will not block any operations.
If there are any problems, you should sort them out now before proceeding.
[[filtering-bridges-enabling]]
== Enabling the Bridge
At this point, to enable the bridge, you have to execute the following commands (taking care to replace the names of the two network interfaces [.filename]#fxp0# and [.filename]#xl0# with your own):
[source,shell]
....
# sysctl net.link.ether.bridge.config=fxp0:0,xl0:0
# sysctl net.link.ether.bridge.ipfw=1
# sysctl net.link.ether.bridge.enable=1
....
The first row specifies which interfaces should be attached to the bridge, the second one enables the firewall on the bridge, and finally the third one enables the bridge itself.
At this point you should be able to insert the machine between two sets of hosts without compromising any communication abilities between them.
If so, the next step is to add the `net.link.ether.bridge._[blah]_=_[blah]_` portions of these rows to the [.filename]#/etc/sysctl.conf# file, in order to have them executed at startup.
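With the example interfaces above, the resulting [.filename]#/etc/sysctl.conf# entries would look like this:
[.programlisting]
....
net.link.ether.bridge.config=fxp0:0,xl0:0
net.link.ether.bridge.ipfw=1
net.link.ether.bridge.enable=1
....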
[[filtering-bridges-ipfirewall]]
== Configuring The Firewall
Now it is time to create your own file with custom firewall rules, in order to secure the inside network.
There are some complications in doing this because not all of the firewall's functionality is available for bridged packets.
Furthermore, there is a difference between the packets that are in the process of being forwarded and packets that are being received by the local machine.
In general, incoming packets are run through the firewall only once, not twice as is normally the case; in fact they are filtered only upon receipt, so rules that use `out` or `xmit` will never match.
Personally, I use `in via`, which is an older syntax, but one that makes sense when you read it.
Another limitation is that you are restricted to using only `pass` or `drop` commands for packets filtered by a bridge.
Sophisticated things like `divert`, `forward` or `reject` are not available.
Such options can still be used, but only on traffic to or from the bridge machine itself (if it has an IP address).
New in FreeBSD 4.0 is the concept of stateful filtering.
This is a big improvement for UDP traffic, which typically is a request going out, followed shortly thereafter by a response with the exact same set of IP addresses and port numbers (but with source and destination reversed, of course).
For firewalls that have no statekeeping, there is almost no way to deal with this sort of traffic as a single session.
But with a firewall that can "remember" an outgoing UDP packet and, for the next few minutes, allow a response, handling UDP services is trivial.
The following example shows how to do it.
It is possible to do the same thing with TCP packets.
This allows you to avoid some denial of service attacks and other nasty tricks, but it also typically makes your state table grow quickly in size.
Let's look at an example setup.
Note first that at the top of [.filename]#/etc/rc.firewall# there are already standard rules for the loopback interface [.filename]#lo0#, so we do not have to worry about them here.
Custom rules should be put in a separate file (say [.filename]#/etc/rc.firewall.local#) and loaded at system startup, by modifying the row of [.filename]#/etc/rc.conf# where we defined the `open` firewall:
[.programlisting]
....
firewall_type="/etc/rc.firewall.local"
....
[IMPORTANT]
====
You have to specify the _full_ path, otherwise it will not be loaded, with the risk of leaving you isolated from the network.
====
For our example, imagine having the [.filename]#fxp0# interface connected towards the outside (Internet) and [.filename]#xl0# towards the inside (LAN). The bridge machine has the IP address `1.2.3.4` (your ISP is unlikely to give you an address quite like this, but it serves for our example).
[.programlisting]
....
# Things that we have kept state on before get to go through in a hurry
add check-state
# Throw away RFC 1918 networks
add drop all from 10.0.0.0/8 to any in via fxp0
add drop all from 172.16.0.0/12 to any in via fxp0
add drop all from 192.168.0.0/16 to any in via fxp0
# Allow the bridge machine to say anything it wants
# (if the machine is IP-less do not include these rows)
add pass tcp from 1.2.3.4 to any setup keep-state
add pass udp from 1.2.3.4 to any keep-state
add pass ip from 1.2.3.4 to any
# Allow the inside hosts to say anything they want
add pass tcp from any to any in via xl0 setup keep-state
add pass udp from any to any in via xl0 keep-state
add pass ip from any to any in via xl0
# TCP section
# Allow SSH
add pass tcp from any to any 22 in via fxp0 setup keep-state
# Allow SMTP only towards the mail server
add pass tcp from any to relay 25 in via fxp0 setup keep-state
# Allow zone transfers only by the slave name server [dns2.nic.it]
add pass tcp from 193.205.245.8 to ns 53 in via fxp0 setup keep-state
# Pass ident probes. It is better than waiting for them to timeout
add pass tcp from any to any 113 in via fxp0 setup keep-state
# Pass the "quarantine" range
add pass tcp from any to any 49152-65535 in via fxp0 setup keep-state
# UDP section
# Allow DNS only towards the name server
add pass udp from any to ns 53 in via fxp0 keep-state
# Pass the "quarantine" range
add pass udp from any to any 49152-65535 in via fxp0 keep-state
# ICMP section
# Pass 'ping'
add pass icmp from any to any icmptypes 8 keep-state
# Pass error messages generated by 'traceroute'
add pass icmp from any to any icmptypes 3
add pass icmp from any to any icmptypes 11
# Everything else is suspect
add drop log all from any to any
....
Those of you who have set up firewalls before may notice some things missing.
In particular, there are no anti-spoofing rules; in fact, we did _not_ add:
[.programlisting]
....
add deny all from 1.2.3.4/8 to any in via fxp0
....
That is, drop packets that are coming in from the outside claiming to be from our network.
This is something that you would commonly do to be sure that someone does not try to evade the packet filter, by generating nefarious packets that look like they are from the inside.
The problem with that is that there is _at least_ one host on the outside interface that you do not want to ignore: the router.
But usually, the ISP anti-spoofs at their router, so we do not need to bother that much.
The last rule seems to be an exact duplicate of the default rule, that is, do not let anything pass that is not specifically allowed.
But there is a difference: all suspected traffic will be logged.
There are two rules for passing SMTP and DNS traffic towards the mail server and the name server, if you have them.
Obviously the whole rule set should be tailored to personal taste; this is only a specific example (the rule format is described accurately in the man:ipfw[8] manual page).
Note that in order for "relay" and "ns" to work, name service lookups must work _before_ the bridge is enabled.
This is an example of making sure that you set the IP on the correct network card.
Alternatively it is possible to specify the IP address instead of the host name (required if the machine is IP-less).
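For instance, assuming the mail server happened to sit at the purely illustrative address `1.2.3.5`, the SMTP rule above could be rewritten without any name lookup:
[.programlisting]
....
# Allow SMTP only towards the mail server (by IP address)
add pass tcp from any to 1.2.3.5 25 in via fxp0 setup keep-state
....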
People that are used to setting up firewalls are probably also used to either having a `reset` or a `forward` rule for ident packets (TCP port 113).
Unfortunately, this is not an applicable option with the bridge, so the best thing is to simply pass them to their destination.
As long as that destination machine is not running an ident daemon, this is relatively harmless.
The alternative is dropping connections on port 113, which creates some problems with services like IRC (the ident probe must time out).
The only other thing that is a little weird that you may have noticed is that there is a rule to let the bridge machine speak, and another for internal hosts.
Remember that this is because the two sets of traffic will take different paths through the kernel and into the packet filter.
The inside net will go through the bridge, while the local machine will use the normal IP stack to speak.
Thus the two rules to handle the different cases.
The `in via fxp0` rules work for both paths.
In general, if you use `in via` rules throughout the filter, you will need to make an exception for locally generated packets, because they did not come in via any of our interfaces.
[[filtering-bridges-contributors]]
== Contributors
Many parts of this article have been taken, updated and adapted from an old text about bridging, edited by Nick Sayer.
Some of the inspiration came from an introduction on bridging by Steve Peterson.
A big thanks to Luigi Rizzo for the implementation of the bridge code in FreeBSD and for the time he has dedicated to me answering all of my related questions.
A thanks also goes out to Tom Rhodes, who looked over my translation from Italian (the original language of this article) into English.
diff --git a/documentation/content/en/articles/fonts/_index.adoc b/documentation/content/en/articles/fonts/_index.adoc
index 498371dfcb..70e52a1a91 100644
--- a/documentation/content/en/articles/fonts/_index.adoc
+++ b/documentation/content/en/articles/fonts/_index.adoc
@@ -1,581 +1,581 @@
---
title: Fonts and FreeBSD
subtitle: A Tutorial
authors:
- author: Dave Bodenstab
email: imdave@synet.net
-releaseinfo: "$FreeBSD$"
+description: Description of the various font files that may be used with FreeBSD
trademarks: ["freebsd", "adobe", "apple", "linux", "microsoft", "opengroup", "general"]
---
= Fonts and FreeBSD
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:source-highlighter: rouge
:experimental:
:sectnumlevels: 6
[.abstract-title]
Abstract
This document contains a description of the various font files that may be used with FreeBSD and the syscons driver, X11, Ghostscript and Groff.
Cookbook examples are provided for switching the syscons display to 80x60 mode, and for using type 1 fonts with the above application programs.
'''
toc::[]
[[intro]]
== Introduction
There are many sources of fonts available, and one might ask how they might be used with FreeBSD.
The answer can be found by carefully searching the documentation for the component that one would like to use.
This is very time consuming, so this tutorial is an attempt to provide a shortcut for others who might be interested.
[[terminology]]
== Basic Terminology
There are many different font formats and associated font file suffixes.
A few that will be addressed here are:
[.filename]#.pfa#, [.filename]#.pfb#::
PostScript(R) type 1 fonts. The [.filename]#.pfa# is the __A__scii form and [.filename]#.pfb# the __B__inary form.
[.filename]#.afm#::
The font metrics associated with a type 1 font.
[.filename]#.pfm#::
The printer font metrics associated with a type 1 font.
[.filename]#.ttf#::
A TrueType(R) font
[.filename]#.fot#::
An indirect reference to a TrueType font (not an actual font)
[.filename]#.fon#, [.filename]#.fnt#::
Bitmapped screen fonts
The [.filename]#.fot# file is used by Windows(R) as a sort of symbolic link to the actual TrueType(R) font ([.filename]#.ttf#) file. The [.filename]#.fon# font files are also used by Windows(R).
I know of no way to use this font format with FreeBSD.
[[font-formats]]
== What Font Formats Can I Use?
Which font file format is useful depends on the application being used.
FreeBSD by itself uses no fonts.
Application programs and/or drivers may make use of the font files.
Here is a small cross reference of application/driver to the font type suffixes:
Driver::
vt:::
[.filename]#.hex#
syscons:::
[.filename]#.fnt#
Application::
Ghostscript:::
[.filename]#.pfa#, [.filename]#.pfb#, [.filename]#.ttf#
X11:::
[.filename]#.pfa#, [.filename]#.pfb#
Groff:::
[.filename]#.pfa#, [.filename]#.afm#
Povray:::
[.filename]#.ttf#
The [.filename]#.fnt# suffix is used quite frequently.
I suspect that whenever someone wanted to create a specialized font file for their application, more often than not they chose this suffix.
Therefore, it is likely that files with this suffix are not all the same format; specifically, the [.filename]#.fnt# files used by syscons under FreeBSD may not be the same format as a [.filename]#.fnt# one encounters in the MS-DOS(R)/Windows(R) environment.
I have not made any attempt at using [.filename]#.fnt# files other than those provided with FreeBSD.
[[virtual-console]]
== Setting a Virtual Console to 80x60 Line Mode
First, an 8x8 font must be loaded.
To do this, [.filename]#/etc/rc.conf# should contain the line (change the font name to an appropriate one for your locale):
[.programlisting]
....
font8x8="iso-8x8" # font 8x8 from /usr/share/syscons/fonts/* (or NO).
....
The command to actually switch the mode is man:vidcontrol[1]:
[source,shell]
....
% vidcontrol VGA_80x60
....
Various screen-oriented programs, such as man:vi[1], must be able to determine the current screen dimensions.
As this is achieved through `ioctl` calls to the console driver (such as man:syscons[4]), they will correctly determine the new screen dimensions.
To make this more seamless, one can embed these commands in the startup scripts so that they take place when the system boots.
To do this, add this line to [.filename]#/etc/rc.conf#:
[.programlisting]
....
allscreens_flags="VGA_80x60" # Set this vidcontrol mode for all virtual screens
....
References: man:rc.conf[5], man:vidcontrol[1].
[[type1-fonts-x11]]
== Using Type 1 Fonts with X11
X11 can use either the [.filename]#.pfa# or the [.filename]#.pfb# format fonts.
The X11 fonts are located in various subdirectories under [.filename]#/usr/X11R6/lib/X11/fonts#.
Each font file is cross referenced to its X11 name by the contents of [.filename]#fonts.dir# in each directory.
There is already a directory named [.filename]#Type1#.
The most straightforward way to add a new font is to put it into this directory.
A better way is to keep all new fonts in a separate directory and use a symbolic link to the additional font.
This allows one to more easily keep track of one's fonts without confusing them with the fonts that were originally provided.
For example:
[source,shell]
....
Create a directory to contain the font files
% mkdir -p /usr/local/share/fonts/type1
% cd /usr/local/share/fonts/type1
Place the .pfa, .pfb and .afm files here
One might want to keep readme files, and other documentation
for the fonts here also
% cp /cdrom/fonts/atm/showboat/showboat.pfb .
% cp /cdrom/fonts/atm/showboat/showboat.afm .
Maintain an index to cross reference the fonts
% echo showboat - InfoMagic CICA, Dec 1994, /fonts/atm/showboat >>INDEX
....
Now, to use a new font with X11, one must make the font file available and update the font name files.
The X11 font names look like:
[.programlisting]
....
-bitstream-charter-medium-r-normal-xxx-0-0-0-0-p-0-iso8859-1
| | | | | | | | | | | | \ \
| | | | | \ \ \ \ \ \ \ +----+- character set
| | | | \ \ \ \ \ \ \ +- average width
| | | | \ \ \ \ \ \ +- spacing
| | | \ \ \ \ \ \ +- vertical res.
| | | \ \ \ \ \ +- horizontal res.
| | | \ \ \ \ +- points
| | | \ \ \ +- pixels
| | | \ \ \
foundry family weight slant width additional style
....
A new name needs to be created for each new font.
If you have some information from the documentation that accompanied the font, then it could serve as the basis for creating the name.
If there is no information, then you can get some idea by using man:strings[1] on the font file.
For example:
[source,shell]
....
% strings showboat.pfb | more
%!FontType1-1.0: Showboat 001.001
%%CreationDate: 1/15/91 5:16:03 PM
%%VMusage: 1024 45747
% Generated by Fontographer 3.1
% Showboat
1991 by David Rakowski. Alle Rechte Vorbehalten.
FontDirectory/Showboat known{/Showboat findfont dup/UniqueID known{dup
/UniqueID get 4962377 eq exch/FontType get 1 eq and}{pop false}ifelse
{save true}{false}ifelse}{false}ifelse
12 dict begin
/FontInfo 9 dict dup begin
/version (001.001) readonly def
/FullName (Showboat) readonly def
/FamilyName (Showboat) readonly def
/Weight (Medium) readonly def
/ItalicAngle 0 def
/isFixedPitch false def
/UnderlinePosition -106 def
/UnderlineThickness 16 def
/Notice (Showboat
1991 by David Rakowski. Alle Rechte Vorbehalten.) readonly def
end readonly def
/FontName /Showboat def
--stdin--
....
Using this information, a possible name might be:
[source,shell]
....
-type1-Showboat-medium-r-normal-decorative-0-0-0-0-p-0-iso8859-1
....
The components of our name are:
Foundry::
Let's just name all the new fonts `type1`.
Family::
The name of the font.
Weight::
Normal, bold, medium, semibold, etc.
From the man:strings[1] output above, it appears that this font has a weight of __medium__.
Slant::
__r__oman, __i__talic, __o__blique, etc.
Since the _ItalicAngle_ is zero, _roman_ will be used.
Width::
Normal, wide, condensed, extended, etc.
Until it can be examined, the assumption will be __normal__.
Additional style::
Usually omitted, but this will indicate that the font contains decorative capital letters.
Spacing::
proportional or monospaced.
_Proportional_ is used since _isFixedPitch_ is false.
All of these names are arbitrary, but one should strive to be compatible with the existing conventions.
An X11 program references a font by name, possibly with wild cards, so the name chosen should make some sense.
One might begin by simply using
[source,shell]
....
...-normal-r-normal-...-p-...
....
as the name, and then use man:xfontsel[1] to examine it and adjust the name based on the appearance of the font.
So, to complete our example:
[source,shell]
....
Make the font accessible to X11
% cd /usr/X11R6/lib/X11/fonts/Type1
% ln -s /usr/local/share/fonts/type1/showboat.pfb .
Edit fonts.dir and fonts.scale, adding the line describing the font
and incrementing the number of fonts which is found on the first line.
% ex fonts.dir
:1p
25
:1c
26
.
:$a
showboat.pfb -type1-showboat-medium-r-normal-decorative-0-0-0-0-p-0-iso8859-1
.
:wq
fonts.scale seems to be identical to fonts.dir...
% cp fonts.dir fonts.scale
Tell X11 that things have changed
% xset fp rehash
Examine the new font
% xfontsel -pattern -type1-*
....
References: man:xfontsel[1], man:xset[1], The X Windows System in a Nutshell, http://www.ora.com/[O'Reilly & Associates].
[[type1-fonts-ghostscript]]
== Using Type 1 Fonts with Ghostscript
Ghostscript references a font via its [.filename]#Fontmap#.
This must be modified in a similar way to the X11 [.filename]#fonts.dir#.
Ghostscript can use either the [.filename]#.pfa# or the [.filename]#.pfb# format fonts.
Using the font from the previous example, here is how to use it with Ghostscript:
[source,shell]
....
Put the font in Ghostscript's font directory
% cd /usr/local/share/ghostscript/fonts
% ln -s /usr/local/share/fonts/type1/showboat.pfb .
Edit Fontmap so Ghostscript knows about the font
% cd /usr/local/share/ghostscript/4.01
% ex Fontmap
:$a
/Showboat (showboat.pfb) ; % From CICA /fonts/atm/showboat
.
:wq
Use Ghostscript to examine the font
% gs prfont.ps
Aladdin Ghostscript 4.01 (1996-7-10)
Copyright (C) 1996 Aladdin Enterprises, Menlo Park, CA. All rights
reserved.
This software comes with NO WARRANTY: see the file PUBLIC for details.
Loading Times-Roman font from /usr/local/share/ghostscript/fonts/tir_____.pfb...
/1899520 581354 1300084 13826 0 done.
GS>Showboat DoFont
Loading Showboat font from /usr/local/share/ghostscript/fonts/showboat.pfb...
1939688 565415 1300084 16901 0 done.
>>showpage, press <return> to continue<<
>>showpage, press <return> to continue<<
>>showpage, press <return> to continue<<
GS>quit
....
References: [.filename]#fonts.txt# in the Ghostscript 4.01 distribution
[[type1-fonts-groff]]
== Using Type 1 Fonts with Groff
Now that the new font can be used by both X11 and Ghostscript, how can one use the new font with groff? First of all, since we are dealing with type 1 PostScript(R) fonts, the groff device that is applicable is the _ps_ device.
A font file must be created for each font that groff can use.
A groff font name is just a file in [.filename]#/usr/share/groff_font/devps#.
With our example, the font file could be [.filename]#/usr/share/groff_font/devps/SHOWBOAT#.
The file must be created using tools provided by groff.
The first tool is `afmtodit`.
This is not normally installed, so it must be retrieved from the source distribution.
I found I had to change the first line of the file, so I did:
[source,shell]
....
% cp /usr/src/gnu/usr.bin/groff/afmtodit/afmtodit.pl /tmp
% ex /tmp/afmtodit.pl
:1c
#!/usr/bin/perl -P-
.
:wq
....
This tool will create the groff font file from the metrics file ([.filename]#.afm# suffix.)
Continuing with our example:
[source,shell]
....
Many .afm files are in Mac format... ^M delimited lines
We need to convert them to UNIX(R) style ^J delimited lines
% cd /tmp
% cat /usr/local/share/fonts/type1/showboat.afm |
tr '\015' '\012' >showboat.afm
Now create the groff font file
% cd /usr/share/groff_font/devps
% /tmp/afmtodit.pl -d DESC -e text.enc /tmp/showboat.afm generate/textmap SHOWBOAT
....
The font can now be referenced with the name SHOWBOAT.
If Ghostscript is used to drive the printers on the system, then nothing more needs to be done.
However, if true PostScript(R) printers are used, then the font must be downloaded to the printer in order for the font to be used (unless the printer happens to have the showboat font built in or on an accessible font disk.)
The final step is to create a downloadable font.
The `pfbtops` tool is used to create the [.filename]#.pfa# format of the font, and the [.filename]#download# file is modified to reference the new font.
The [.filename]#download# file must reference the internal name of the font.
This can easily be determined from the groff font file as illustrated:
[source,shell]
....
Create the .pfa font file
% pfbtops /usr/local/share/fonts/type1/showboat.pfb >showboat.pfa
....
Of course, if [.filename]#.pfa# is already available, just use a symbolic link to reference it.
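For instance, had a [.filename]#showboat.pfa# already been stored alongside the other font files (a hypothetical path here), linking it in place of running `pfbtops` might look like this:
[source,shell]
....
% cd /usr/share/groff_font/devps
% ln -s /usr/local/share/fonts/type1/showboat.pfa .
....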
[source,shell]
....
Get the internal font name
% fgrep internalname SHOWBOAT
internalname Showboat
Tell groff that the font must be downloaded
% ex download
:$a
Showboat showboat.pfa
.
:wq
....
To test the font:
[source,shell]
....
% cd /tmp
% cat >example.t <<EOF
.sp 5
.ps 16
This is an example of the Showboat font:
.br
.ps 48
.vs (\n(.s+2)p
.sp
.ft SHOWBOAT
ABCDEFGHI
.br
JKLMNOPQR
.br
STUVWXYZ
.sp
.ps 16
.vs (\n(.s+2)p
.fp 5 SHOWBOAT
.ft R
To use it for the first letter of a paragraph, it will look like:
.sp 50p
\s(48\f5H\s0\fRere is the first sentence of a paragraph that uses the
showboat font as its first letter.
Additional vertical space must be used to allow room for the larger
letter.
EOF
% groff -Tps example.t >example.ps
To use ghostscript/ghostview
% ghostview example.ps
To print it
% lpr -Ppostscript example.ps
....
References: [.filename]#/usr/src/gnu/usr.bin/groff/afmtodit/afmtodit.man#, man:groff_font[5], man:groff_char[7], man:pfbtops[1].
[[convert-truetype]]
== Converting TrueType Fonts to a groff/PostScript Format For groff
This potentially requires a bit of work, simply because it depends on some utilities that are not installed as part of the base system.
They are:
`ttf2pf`::
TrueType to PostScript conversion utilities.
This allows conversion of a TrueType font to an ascii font metric ([.filename]#.afm#) file.
+
Currently available at http://sunsite.icm.edu.pl/pub/GUST/contrib/BachoTeX98/ttf2pf/[http://sunsite.icm.edu.pl/pub/GUST/contrib/BachoTeX98/ttf2pf/].
Note: These files are PostScript programs and must be downloaded to disk by holding down kbd:[Shift] when clicking on the link.
Otherwise, your browser may try to launch ghostview to view them.
+
The files of interest are:
** [.filename]#GS_TTF.PS#
** [.filename]#PF2AFM.PS#
** [.filename]#ttf2pf.ps#
+
The mixed upper and lower case in the file names is intentional, as the utilities are also meant for use with DOS shells.
[.filename]#ttf2pf.ps# refers to the others in upper case, so any renaming must be consistent with this.
(Actually, [.filename]#GS_TTF.PS# and [.filename]#PF2AFM.PS# are supposedly part of the Ghostscript distribution, but it is just as easy to use these as an isolated utility.
FreeBSD does not seem to include the latter.)
You also may want to have these installed to [.filename]#/usr/local/share/groff_font/devps#(?).
`afmtodit`::
Creates font files for use with groff from an ASCII font metrics file.
This usually resides in [.filename]#/usr/src/contrib/groff/afmtodit#, and requires some work to get going.
+
[NOTE]
====
If you are paranoid about working in the [.filename]#/usr/src# tree, simply copy the contents of the above directory to a work location.
====
+
In the work area, you will need to make the utility.
Just type:
+
[source,shell]
....
# make -f Makefile.sub afmtodit
....
+
You may also need to copy [.filename]#/usr/contrib/groff/devps/generate/textmap# to [.filename]#/usr/share/groff_font/devps/generate# if it does not already exist.
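+
A minimal sketch of that copy, assuming root privileges and that the destination directory does not yet exist:
+
[source,shell]
....
# mkdir -p /usr/share/groff_font/devps/generate
# cp /usr/contrib/groff/devps/generate/textmap /usr/share/groff_font/devps/generate/
....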
Once all these utilities are in place, you are ready to commence:
. Create [.filename]#.afm# by typing:
+
[source,shell]
....
% gs -dNODISPLAY -q -- ttf2pf.ps TTF_name PS_font_name AFM_name
....
+
Here _TTF_name_ is your TrueType font file, _PS_font_name_ is the file name for the [.filename]#.pfa# file, and _AFM_name_ is the name you wish for the [.filename]#.afm# file. If you do not specify output file names for the [.filename]#.pfa# or [.filename]#.afm# files, then default names will be generated from the TrueType font file name.
+
This also produces a [.filename]#.pfa# file, the ASCII PostScript font file ([.filename]#.pfb# is the binary form).
This will not be needed, but could (I think) be useful for a fontserver.
+
For example, to convert the 30f9 Barcode font using the default file names, use the following command:
+
[source,shell]
....
% gs -dNODISPLAY -- ttf2pf.ps 3of9.ttf
Aladdin Ghostscript 5.10 (1997-11-23)
Copyright (C) 1997 Aladdin Enterprises, Menlo Park, CA. All rights reserved.
This software comes with NO WARRANTY: see the file PUBLIC for details.
Converting 3of9.ttf to 3of9.pfa and 3of9.afm.
....
+
If you want the converted fonts to be stored in [.filename]#A.pfa# and [.filename]#B.afm#, then use this command:
+
[source,shell]
....
% gs -dNODISPLAY -- ttf2pf.ps 3of9.ttf A B
Aladdin Ghostscript 5.10 (1997-11-23)
Copyright (C) 1997 Aladdin Enterprises, Menlo Park, CA. All rights reserved.
This software comes with NO WARRANTY: see the file PUBLIC for details.
Converting 3of9.ttf to A.pfa and B.afm.
....
. Create the groff PostScript file:
+
Change directories to [.filename]#/usr/share/groff_font/devps# so as to make the following command easier to execute.
You will probably need root privileges for this.
(Or, if you are paranoid about working there, make sure you reference the files [.filename]#DESC#, [.filename]#text.enc# and [.filename]#generate/textmap# as being in this directory.)
+
[source,shell]
....
% afmtodit -d DESC -e text.enc file.afm generate/textmap PS_font_name
....
+
Here [.filename]#file.afm# is the _AFM_name_ created by `ttf2pf.ps` above, and _PS_font_name_ is the font name used in that command, as well as the name that man:groff[1] will use for references to this font.
For example, assuming you used the first `ttf2pf.ps` invocation above, the 3of9 Barcode font can be created using the command:
+
[source,shell]
....
% afmtodit -d DESC -e text.enc 3of9.afm generate/textmap 3of9
....
+
Ensure that the resulting _PS_font_name_ file (e.g., [.filename]#3of9# in the example above) is located in the directory [.filename]#/usr/share/groff_font/devps# by copying or moving it there.
+
Note that if [.filename]#ttf2pf.ps# assigns a font name using the one it finds in the TrueType font file and you want to use a different name, you must edit the [.filename]#.afm# prior to running `afmtodit`.
This name must also match the one used in the Fontmap file if you wish to pipe man:groff[1] into man:gs[1].
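+
Continuing the 3of9 example, a minimal sketch of these last two steps follows; the Ghostscript [.filename]#Fontmap# entry syntax shown is an assumption and may differ on your system:
+
[source,shell]
....
# cp 3of9 /usr/share/groff_font/devps/
....
+
[.programlisting]
....
/3of9    (3of9.pfa)    ;
....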
[[truetype-for-other-programs]]
== Can TrueType Fonts be Used with Other Programs?
The TrueType font format is used by Windows, Windows 95, and Macs.
It is quite popular and there are a great number of fonts available in this format.
Unfortunately, there are few applications that I am aware of that can use this format: Ghostscript and Povray come to mind.
Ghostscript's support, according to the documentation, is rudimentary and the results are likely to be inferior to type 1 fonts.
Povray version 3 also has the ability to use TrueType fonts, but I rather doubt many people will be creating documents as a series of raytraced pages :-).
This rather dismal situation may soon change.
The http://www.freetype.org/[FreeType Project] is currently developing a useful set of FreeType tools:
* The `xfsft` font server for X11 can serve TrueType fonts in addition to regular fonts. Though currently in beta, it is said to be quite usable. See http://www.dcs.ed.ac.uk/home/jec/programs/xfsft/[Juliusz Chroboczek's page] for further information. Porting instructions for FreeBSD can be found at http://math.missouri.edu/~stephen/software/[Stephen Montgomery's software page].
* xfstt is another font server for X11, available under link:ftp://sunsite.unc.edu/pub/Linux/X11/fonts/[ftp://sunsite.unc.edu/pub/Linux/X11/fonts/].
* A program called `ttf2bdf` can produce BDF files suitable for use in an X environment from TrueType files. Linux binaries are said to be available from link:ftp://crl.nmsu.edu/CLR/multiling/General/[ftp://crl.nmsu.edu/CLR/multiling/General/].
* and others ...
[[obtaining-additional-fonts]]
== Where Can Additional Fonts be Obtained?
Many fonts are available on the Internet.
They are either entirely free, or are shareware.
In addition, many fonts are available in the [.filename]#x11-fonts/# category of the Ports Collection.
[[additional-questions]]
== Additional Questions
* What use are the [.filename]#.pfm# files?
* Can one generate the [.filename]#.afm# from a [.filename]#.pfa# or [.filename]#.pfb#?
* How can the groff character mapping files be generated for PostScript fonts with non-standard character names?
* Can xditview and devX?? devices be set up to access all the new fonts?
* It would be good to have examples of using TrueType fonts with Povray and Ghostscript.
diff --git a/documentation/content/en/articles/freebsd-questions/_index.adoc b/documentation/content/en/articles/freebsd-questions/_index.adoc
index 95774ce2c6..51d18e0cef 100644
--- a/documentation/content/en/articles/freebsd-questions/_index.adoc
+++ b/documentation/content/en/articles/freebsd-questions/_index.adoc
@@ -1,256 +1,256 @@
---
title: How to get Best Results from the FreeBSD-questions Mailing List
authors:
- author: Greg Lehey
email: grog@FreeBSD.org
-releaseinfo: "$FreeBSD$"
+description: How to get Best Results from the FreeBSD-questions Mailing List
trademarks: ["freebsd", "microsoft", "opengroup", "qualcomm", "general"]
---
= How to get Best Results from the FreeBSD-questions Mailing List
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
include::shared/en/mailing-lists.adoc[]
include::shared/en/urls.adoc[]
[.abstract-title]
Abstract
This document provides useful information for people looking to prepare an e-mail to the FreeBSD-questions mailing list.
Advice and hints are given that will maximize the chance that the reader will receive useful replies.
This document is regularly posted to the FreeBSD-questions mailing list.
'''
toc::[]
== Introduction
`FreeBSD-questions` is a mailing list maintained by the FreeBSD project to help people who have questions about the normal use of FreeBSD.
Another group, `FreeBSD-hackers`, discusses more advanced questions such as future development work.
[NOTE]
====
The term "hacker" has nothing to do with breaking into other people's computers.
The correct term for the latter activity is "cracker", but the popular press has not found out yet.
The FreeBSD hackers disapprove strongly of cracking security, and have nothing to do with it.
For a longer description of hackers, see Eric Raymond's http://www.catb.org/~esr/faqs/hacker-howto.html[How To Become A Hacker]
====
This is a regular posting aimed to help both those seeking advice from FreeBSD-questions (the "newcomers"), and also those who answer the questions (the "hackers").
Inevitably there is some friction, which stems from the different viewpoints of the two groups.
The newcomers accuse the hackers of being arrogant, stuck-up, and unhelpful, while the hackers accuse the newcomers of being stupid, unable to read plain English, and expecting everything to be handed to them on a silver platter.
Of course, there is an element of truth in both these claims, but for the most part these viewpoints come from a sense of frustration.
In this document, I would like to do something to relieve this frustration and help everybody get better results from FreeBSD-questions.
In the following section, I recommend how to submit a question; after that, we will look at how to answer one.
== How to Subscribe to FreeBSD-questions
FreeBSD-questions is a mailing list, so you need mail access.
Point your WWW browser to the {freebsd-questions}.
In the section titled "Subscribing to freebsd-questions" fill in the "Your email address" field; the other fields are optional.
[NOTE]
====
The password fields in the subscription form provide only mild security, but should prevent others from messing with your subscription.
_Do not use a valuable password_ as it will occasionally be emailed back to you in cleartext.
====
You will receive a confirmation message from mailman; follow the included instructions to complete your subscription.
Finally, when you get the "Welcome" message from mailman telling you the details of the list and subscription area password, __please save it__.
If you ever should want to leave the list, you will need the information there.
See the next section for more details.
== How to Unsubscribe from FreeBSD-questions
When you subscribed to FreeBSD-questions, you got a welcome message from mailman.
In this message, amongst other things, it told you how to unsubscribe.
Here is a typical message:
....
Welcome to the
freebsd-questions@freebsd.org mailing list!
To post to this list, send your email to:
freebsd-questions@freebsd.org
General information about the mailing list is at:
https://lists.freebsd.org/mailman/listinfo/freebsd-questions
If you ever want to unsubscribe or change your options (e.g., switch to
or from digest mode, change your password, etc.), visit your
subscription page at:
https://lists.freebsd.org/mailman/options/freebsd-questions/grog%40lemsi.de
You can also make such adjustments via email by sending a message to:
freebsd-questions-request@freebsd.org
with the word 'help' in the subject or body (do not include the
quotes), and you will get back a message with instructions.
You must know your password to change your options (including changing
the password, itself) or to unsubscribe. It is:
12345
Normally, Mailman will remind you of your freebsd.org mailing list
passwords once every month, although you can disable this if you
prefer. This reminder will also include instructions on how to
unsubscribe or change your account options. There is also a button on
your options page that will email your current password to you.
....
From the URL specified in your "Welcome" message you may visit the "Account management page" and enter a request to "Unsubscribe" you from FreeBSD-questions mailing list.
A confirmation message will be sent to you from mailman; follow the included instructions to finish unsubscribing.
If you have done this, and you still cannot figure out what is going on, send a message to mailto:freebsd-questions-request@FreeBSD.org[freebsd-questions-request@FreeBSD.org], and they will sort things out for you.
_Do not_ send a message to FreeBSD-questions: they cannot help you.
== Should I ask `-questions` or `-hackers`?
Two mailing lists handle general questions about FreeBSD, `FreeBSD-questions` and `FreeBSD-hackers`.
In some cases, it is not really clear which group you should ask.
The following criteria should help for 99% of all questions, however:
. If the question is of a general nature, ask `FreeBSD-questions`. Examples might be questions about installing FreeBSD or the use of a particular UNIX(R) utility.
. If you think the question relates to a bug, but you are not sure, or you do not know how to look for it, send the message to `FreeBSD-questions`.
. If the question relates to a bug, and you are _sure_ that it is a bug (for example, you can pinpoint the place in the code where it happens, and you maybe have a fix), then send the message to `FreeBSD-hackers`.
. If the question relates to enhancements to FreeBSD, and you can make suggestions about how to implement them, then send the message to `FreeBSD-hackers`.
There are also a number of other link:{handbook}#eresources-mail[specialized mailing lists], which cater to more specific interests.
The criteria above still apply, and it is in your interest to stick to them, since you are more likely to get good results that way.
== Before Submitting a Question
You can (and should) do some things yourself before asking a question on one of the mailing lists:
* Try solving the problem on your own. If you post a question which shows that you have tried to solve the problem, your question will generally attract more positive attention from people reading it. Trying to solve the problem yourself will also enhance your understanding of FreeBSD, and will eventually let you use your knowledge to help others by answering questions posted to the mailing lists.
* Read the manual pages, and the FreeBSD documentation (either installed in [.filename]#/usr/doc# or accessible via WWW at http://www.FreeBSD.org[http://www.FreeBSD.org]), especially the link:{handbook}[handbook] and the link:{faq}[FAQ].
* Browse and/or search the archives for the mailing list, to see if your question or a similar one has been asked (and possibly answered) on the list. You can browse and/or search the mailing list archives at https://www.FreeBSD.org/mail[https://www.FreeBSD.org/mail] and https://www.FreeBSD.org/search/#mailinglists[https://www.FreeBSD.org/search/#mailinglists] respectively. This can be done at other WWW sites as well, for example at http://marc.theaimsgroup.com[http://marc.theaimsgroup.com].
* Use a search engine such as http://www.google.com[Google] or http://www.yahoo.com[Yahoo] to find answers to your question.
== How to Submit a Question
When submitting a question to FreeBSD-questions, consider the following points:
* Remember that nobody gets paid for answering a FreeBSD question. They do it of their own free will. You can influence this free will positively by submitting a well-formulated question supplying as much relevant information as possible. You can influence this free will negatively by submitting an incomplete, illegible, or rude question. It is perfectly possible to send a message to FreeBSD-questions and not get an answer even if you follow these rules. It is much more possible to not get an answer if you do not. In the rest of this document, we will look at how to get the most out of your question to FreeBSD-questions.
* Not everybody who answers FreeBSD questions reads every message: they look at the subject line and decide whether it interests them. Clearly, it is in your interest to specify a subject. "FreeBSD problem" or "Help" are not enough. If you provide no subject at all, many people will not bother reading it. If your subject is not specific enough, the people who can answer it may not read it.
* Format your message so that it is legible, and PLEASE DO NOT SHOUT!!!!!. We appreciate that a lot of people do not speak English as their first language, and we try to make allowances for that, but it is really painful to try to read a message full of typos or without any line breaks.
+
Do not underestimate the effect that a poorly formatted mail message has, not just on the FreeBSD-questions mailing list.
Your mail message is all people see of you, and if it is poorly formatted, one line per paragraph, badly spelt, or full of errors, it will give people a poor impression of you.
+
A lot of badly formatted messages come from http://www.lemis.com/email.html[bad mailers or badly configured mailers].
The following mailers are known to send out badly formatted messages without you finding out about them:
** Eudora(R)
** exmh
** Microsoft(R) Exchange
** Microsoft(R) Outlook(R)
+
Try not to use MIME: a lot of people use mailers which do not get on very well with MIME.
* Make sure your time and time zone are set correctly. This may seem a little silly, since your message still gets there, but many of the people you are trying to reach get several hundred messages a day. They frequently sort the incoming messages by subject and by date, and if your message does not come before the first answer, they may assume they missed it and not bother to look.
* Do not include unrelated questions in the same message. Firstly, a long message tends to scare people off, and secondly, it is more difficult to get all the people who can answer all the questions to read the message.
* Specify as much information as possible. This is a difficult area, and we need to expand on what information you need to submit, but here is a start:
** In nearly every case, it is important to know the version of FreeBSD you are running. This is particularly the case for FreeBSD-CURRENT, where you should also specify the date of the sources, though of course you should not be sending questions about -CURRENT to FreeBSD-questions.
** With any problem which _could_ be hardware related, tell us about your hardware. In case of doubt, assume it is possible that it is hardware. What kind of CPU are you using? How fast? What motherboard? How much memory? What peripherals?
+
There is a judgement call here, of course, but the output of the man:dmesg[8] command can frequently be very useful, since it tells not just what hardware you are running, but what version of FreeBSD as well.
** If you get error messages, do not say "I get error messages", say (for example) "I get the error message 'No route to host'".
** If your system panics, do not say "My system panicked", say (for example) "my system panicked with the message 'free vnode isn't'".
** If you have difficulty installing FreeBSD, please tell us what hardware you have. In particular, it is important to know the IRQs and I/O addresses of the boards installed in your machine.
** If you have difficulty getting PPP to run, describe the configuration. Which version of PPP do you use? What kind of authentication do you have? Do you have a static or dynamic IP address? What kind of messages do you get in the log file?
* A lot of the information you need to supply is the output of programs, such as man:dmesg[8], or console messages, which usually appear in [.filename]#/var/log/messages#. Do not try to copy this information by typing it in again; it is a real pain, and you are bound to make a mistake. To send log file contents, either make a copy of the file and use an editor to trim the information to what is relevant, or cut and paste into your message. For the output of programs like man:dmesg[8], redirect the output to a file and include that. For example,
+
[source,shell]
....
% dmesg > /tmp/dmesg.out
....
+
This redirects the information to the file [.filename]#/tmp/dmesg.out#.
* If you do all this, and you still do not get an answer, there could be other reasons. For example, the problem is so complicated that nobody knows the answer, or the person who does know the answer was offline. If you do not get an answer after, say, a week, it might help to re-send the message. If you do not get an answer to your second message, though, you are probably not going to get one from this forum. Resending the same message again and again will only make you unpopular.
To summarize, let's assume you know the answer to the following question (yes, it is the same one in each case).
You choose which of these two questions you would be more prepared to answer:
.Message 1
[example]
====
....
Subject: HELP!!?!??
I just can't get hits damn silly FereBSD system to
workd, and Im really good at this tsuff, but I have never seen
anythign sho difficult to install, it jst wont work whatever I try
so why don't you guys tell me what I doing wrong.
....
====
.Message 2
[example]
====
....
Subject: Problems installing FreeBSD
I've just got the FreeBSD 2.1.5 CDROM from Walnut Creek, and I'm having a lot
of difficulty installing it. I have a 66 MHz 486 with 16 MB of
memory and an Adaptec 1540A SCSI board, a 1.2GB Quantum Fireball
disk and a Toshiba 3501XA CDROM drive. The installation works just
fine, but when I try to reboot the system, I get the message
Missing Operating System.
....
====
== How to Follow up to a Question
Often you will want to send in additional information to a question you have already sent.
The best way to do this is to reply to your original message.
This has three advantages:
. You include the original message text, so people will know what you are talking about. Do not forget to trim unnecessary text out, though.
. The text in the subject line stays the same (you did remember to put one in, did you not?). Many mailers will sort messages by subject. This helps group messages together.
. The message reference numbers in the header will refer to the previous message. Some mailers, such as http://www.mutt.org/[mutt], can _thread_ messages, showing the exact relationships between the messages.
== How to Answer a Question
Before you answer a question to FreeBSD-questions, consider:
. A lot of the points on submitting questions also apply to answering questions. Read them.
. Has somebody already answered the question? The easiest way to check this is to sort your incoming mail by subject: then (hopefully) you will see the question followed by any answers, all together.
+
If somebody has already answered it, it does not automatically mean that you should not send another answer.
But it makes sense to read all the other answers first.
. Do you have something to contribute beyond what has already been said? In general, "Yeah, me too" answers do not help much, although there are exceptions, like when somebody is describing a problem they are having, and they do not know whether it is their fault or whether there is something wrong with the hardware or software. If you do send a "me too" answer, you should also include any further relevant information.
. Are you sure you understand the question? Very frequently, the person who asks the question is confused or does not express themselves very well. Even with the best understanding of the system, it is easy to send a reply which does not answer the question. This does not help: you will leave the person who submitted the question more frustrated or confused than ever. If nobody else answers, and you are not too sure either, you can always ask for more information.
. Are you sure your answer is correct? If not, wait a day or so. If nobody else comes up with a better answer, you can still reply and say, for example, "I do not know if this is correct, but since nobody else has replied, why don't you try replacing your ATAPI CDROM with a frog?".
. Unless there is a good reason to do otherwise, reply to the sender and to FreeBSD-questions. Many people on the FreeBSD-questions are "lurkers": they learn by reading messages sent and replied to by others. If you take a message which is of general interest off the list, you are depriving these people of their information. Be careful with group replies; lots of people send messages with hundreds of CCs. If this is the case, be sure to trim the Cc: lines appropriately.
. Include relevant text from the original message. Trim it to the minimum, but do not overdo it. It should still be possible for somebody who did not read the original message to understand what you are talking about.
. Use some technique to identify which text came from the original message, and which text you add. I personally find that prepending "`>`" to the original message works best. Leaving white space after the "`>`" and leaving empty lines between your text and the original text both make the result more readable.
. Put your response in the correct place (after the text to which it replies). It is very difficult to read a thread of responses where each reply comes before the text to which it replies.
. Most mailers change the subject line on a reply by prepending a text such as "Re: ". If your mailer does not do it automatically, you should do it manually.
. If the submitter did not abide by format conventions (lines too long, inappropriate subject line) _please_ fix it. In the case of an incorrect subject line (such as "HELP!!??"), change the subject line to (say) "Re: Difficulties with sync PPP (was: HELP!!??)". That way other people trying to follow the thread will have less difficulty following it.
+
In such cases, it is appropriate to say what you did and why you did it, but try not to be rude.
If you find you cannot answer without being rude, do not answer.
+
If you just want to reply to a message because of its bad format, just reply to the submitter, not to the list.
You can just send him this message in reply, if you like.
diff --git a/documentation/content/en/articles/freebsd-releng/_index.adoc b/documentation/content/en/articles/freebsd-releng/_index.adoc
index 65410cf4a0..5615986124 100644
--- a/documentation/content/en/articles/freebsd-releng/_index.adoc
+++ b/documentation/content/en/articles/freebsd-releng/_index.adoc
@@ -1,862 +1,863 @@
---
title: FreeBSD Release Engineering
authors:
- author: Glen Barber
email: gjb@FreeBSD.org
organizations:
- organization: The FreeBSD Foundation
webpage: https://www.freebsdfoundation.org/
- organization: Rubicon Communications, LLC (Netgate)
webpage: https://www.netgate.com/
+description: FreeBSD Release Engineering
trademarks: ["freebsd", "intel", "general"]
---
= FreeBSD Release Engineering
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:teamBugmeister: FreeBSD Bugmeister Team
:teamDoceng: FreeBSD Documentation Engineering Team
:teamPortmgr: FreeBSD Ports Management Team
:teamPostmaster: FreeBSD Postmaster Team
:teamRe: FreeBSD Release Engineering Team
:teamSecteam: FreeBSD Security Team
:branchHead: head/
:branchStable: stable/
:branchStablex: stable/12/
:branchReleng: releng/
:branchRelengx: releng/12.0/
:branchReleasex: release/12.0.0/
:branchRevision: 12.0
[.abstract-title]
Abstract
This article describes the release engineering process of the FreeBSD Project.
'''
toc::[]
[[introduction]]
== Introduction to the FreeBSD Release Engineering Process
Development of FreeBSD has a very specific workflow.
In general, all changes to the FreeBSD base system are committed to the {branchHead} branch, which reflects the top of the source tree.
After a reasonable testing period, changes can then be merged to the {branchStable} branches.
The default minimum timeframe before merging to {branchStable} branches is three (3) days.
Although the general rule is to wait a minimum of three days before merging from {branchHead}, there are a few special circumstances where an immediate merge may be necessary, such as a critical security fix, or a bug fix that directly inhibits the release build process.
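As an illustration only, a single change might be merged from {branchHead} to a {branchStable} branch as follows; the revision number is a placeholder, and [.filename]#/usr/src# is assumed to be a working copy of the target branch:
[source,shell]
....
% cd /usr/src
% svn merge -c r123456 ^/head .
% svn commit
....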
After several months, when the number of changes in the {branchStable} branch has grown significantly, it is time to release the next version of FreeBSD.
These releases have been historically referred to as "point" releases.
In between releases from the {branchStable} branches, approximately every two (2) years, a release will be cut directly from {branchHead}.
These releases have been historically referred to as "dot-zero" releases.
This article will highlight the workflow and responsibilities of the {teamRe} for both "dot-zero" and "point" releases.
The following sections of this article describe:
<<releng-prep>>::
General information and preparation before starting the release cycle.
<<releng-website>>::
Website Changes During the Release Cycle
<<releng-terms>>::
Terminology and general information, such as the "code slush" and "code freeze", used throughout this document.
<<releng-head>>::
The Release Engineering process for a "dot-zero" release.
<<releng-stable>>::
The Release Engineering process for a "point" release.
<<releng-building>>::
Information related to the specific procedures to build installation medium.
<<releng-mirrors>>::
Procedures to publish installation medium.
<<releng-wrapup>>::
Wrapping up the release cycle.
[[releng-prep]]
== General Information and Preparation
Approximately two months before the start of the release cycle, the {teamRe} decides on a schedule for the release.
The schedule includes the various milestone points of the release cycle, such as freeze dates, branch dates, and build dates.
For example:
[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Milestone
| Anticipated Date
|{branchHead} slush:
|May 27, 2016
|{branchHead} freeze:
|June 10, 2016
|{branchHead} KBI freeze:
|June 24, 2016
|`doc/` tree slush [1]:
|June 24, 2016
|Ports quarterly branch [2]:
|July 1, 2016
|{branchStablex} branch:
|July 8, 2016
|`doc/` tree tag [3]:
|July 8, 2016
|BETA1 build starts:
|July 8, 2016
|{branchHead} thaw:
|July 9, 2016
|BETA2 build starts:
|July 15, 2016
|BETA3 build starts [*]:
|July 22, 2016
|{branchRelengx} branch:
|July 29, 2016
|RC1 build starts:
|July 29, 2016
|{branchStablex} thaw:
|July 30, 2016
|RC2 build starts:
|August 5, 2016
|Final Ports package builds [4]:
|August 6, 2016
|Ports release tag:
|August 12, 2016
|RC3 build starts [*]:
|August 12, 2016
|RELEASE build starts:
|August 19, 2016
|RELEASE announcement:
|September 2, 2016
|===
[NOTE]
====
Items marked with "[*]" are "as needed".
====
. The `doc/` tree slush is coordinated by the {teamDoceng}.
. The Ports quarterly branch used is determined by when the final `RC` build is planned. A new quarterly branch is created on the first day of the quarter, so this metric should be used when taking the release cycle milestones into account. The quarterly branch is created by the {teamPortmgr}.
. The `doc/` tree is tagged by the {teamDoceng}.
. The final Ports package build is done by the {teamPortmgr} after the final (or what is expected to be final) `RC` build.
[NOTE]
====
If the release is being created from an existing {branchStable} branch, the KBI freeze date can be excluded, since the KBI is already considered frozen on established {branchStable} branches.
====
When writing the release cycle schedule, a number of things need to be taken into consideration, in particular milestones whose target dates depend on other, predefined milestones.
For example, the Ports Collection release tag originates from the active quarterly branch at the time of the last `RC`.
This in part defines which quarterly branch is used, when the release tag can happen, and what revision of the ports tree is used for the final `RELEASE` build.
After general agreement on the schedule, the {teamRe} emails the schedule to the FreeBSD Developers.
It is somewhat typical that many developers will inform the {teamRe} about various works-in-progress.
In some cases, an extension for the in-progress work will be requested, and in other cases, a request for "blanket approval" to a particular subset of the tree will be made.
When such requests are made, it is important to make sure timelines (even if estimated) are discussed.
For blanket approvals, the length of time for the blanket approval should be made clear.
For example, a FreeBSD developer may request blanket approvals from the start of the code slush until the start of the `RC` builds.
[NOTE]
====
In order to keep track of blanket approvals, the {teamRe} uses an internal repository to keep a running log of such requests, which defines the area upon which a blanket approval was granted, the author(s), when the blanket approval expires, and the reason the approval was granted.
One example of this is granting blanket approval to [.filename]#release/doc/# to all {teamRe} members until the final `RC` to update the release notes and other release-related documentation.
====
[NOTE]
====
The {teamRe} also uses this repository to track pending approval requests received just prior to starting various builds during the release cycle; the Release Engineer specifies the cutoff period for such requests in an email to the FreeBSD developers.
====
Depending on the underlying set of code in question, and the overall impact the set of code has on FreeBSD as a whole, such requests may be approved or denied by the {teamRe}.
The same applies to work-in-progress extensions.
For example, in-progress work for a new device driver that is otherwise isolated from the rest of the tree may be granted an extension.
A new scheduler, however, may not be feasible, especially if such dramatic changes do not exist in another branch.
The schedule is also added to the Project website, in the `doc/` repository, in [.filename]#~/website/content/en/releases/{branchRevision}R/schedule.adoc#. This file is continuously updated as the release cycle progresses.
[NOTE]
====
In most cases, the [.filename]#schedule.adoc# can be copied from a prior release and updated accordingly.
====
In addition to adding [.filename]#schedule.adoc# to the website, [.filename]#~/shared/releases.adoc# is also updated to add the link to the schedule to various subpages, as well as enabling the link to the schedule on the Project website index page.
The schedule is also linked from [.filename]#~/website/content/en/releng/_index.adoc#.
Approximately one month prior to the scheduled "code slush", the {teamRe} sends a reminder email to the FreeBSD Developers.
[[releng-terms]]
== Release Engineering Terminology
This section describes some of the terminology used throughout the rest of this document.
[[releng-terms-code-slush]]
=== The Code Slush
Although the code slush is not a hard freeze on the tree, the {teamRe} requests that bugs in the existing code base take priority over new features.
The code slush does not enforce commit approvals to the branch.
[[releng-terms-code-freeze]]
=== The Code Freeze
The code freeze marks the point in time where all commits to the branch require explicit approval from the {teamRe}.
The FreeBSD Subversion repository contains several hooks to perform sanity checks before any commit is actually committed to the tree.
One of these hooks will evaluate if committing to a particular branch requires specific approval.
To enforce commit approvals by the {teamRe}, the Release Engineer updates [.filename]#base/svnadmin/conf/approvers#, and commits the change back to the repository.
Once this is done, any change to the branch must include an "Approved by:" line in the commit message.
The "Approved by:" line must match the second column in [.filename]#base/svnadmin/conf/approvers#, otherwise the commit will be rejected by the repository hooks.
[NOTE]
====
During the code freeze, FreeBSD committers are urged to follow the link:https://wiki.freebsd.org/Releng/ChangeRequestGuidelines[Change Request Guidelines].
====
[[releng-terms-kbi-freeze]]
=== The KBI/KPI Freeze
KBI/KPI stability implies that the caller of a function will observe the same end state across two different releases of the software that implement the function.
The caller, whether it is a process, thread, or function, expects the function to operate in a certain way; otherwise, the KBI/KPI stability on the branch is broken.
[[releng-website]]
== Website Changes During the Release Cycle
This section describes the changes to the website that should occur as the release cycle progresses.
[NOTE]
====
The files specified throughout this section are relative to the `head/` branch of the `doc` repository in Subversion.
====
[[releng-website-prerelease]]
=== Website Changes Before the Release Cycle Begins
When the release cycle schedule is available, these files need to be updated to enable various different functionalities on the FreeBSD Project website:
[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| File to Edit
| What to Change
|[.filename]#~/shared/releases.adoc#
|Change `beta-upcoming` from `IGNORE` to `INCLUDE`
|[.filename]#~/shared/releases.adoc#
|Change `beta-testing` from `IGNORE` to `INCLUDE`
|===
[[releng-website-beta-rc]]
=== Website Changes During `BETA` or `RC`
When transitioning from `PRERELEASE` to `BETA`, these files need to be updated to enable the "Help Test" block on the download page. All files are relative to [.filename]#head/# in the `doc` repository:
[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| File to Edit
| What to Change
|[.filename]#share/releases.adoc#
|Update `betarel-vers` to `BETA__1__`
|[.filename]#~/website/data/en/news.toml#
|Add an entry announcing the `BETA`
|[.filename]#~/website/static/security/advisory-template.txt#
|Add the new `BETA`, `RC`, or final `RELEASE` to the template
|[.filename]#~/website/static/security/errata-template.txt#
|Add the new `BETA`, `RC`, or final `RELEASE` to the template
|===
Once the {branchRelengx} branch is created, the various release-related documents need to be generated and manually added to the `doc/` repository.
Within [.filename]#release/doc#, generate the [.filename]#errata.html#, [.filename]#hardware.html#, [.filename]#readme.html#, and [.filename]#relnotes.html# pages, which are then added to [.filename]#doc/head/en_US.ISO8859-1/htdocs/releases/X.YR/#, where _X.Y_ represents the major and minor version number of the release.
The `fbsd:nokeywords` property must be set to `on` on the newly-added files before the pre-commit hooks will allow them to be added to the repository.
[NOTE]
====
The relevant release-related documents exist in the [.filename]#doc# repository for FreeBSD 12.x and later.
====
[[releng-ports-beta-rc]]
=== Ports Changes During `BETA`, `RC`, and the Final `RELEASE`
For each build during the release cycle, the `MANIFEST` files containing the `SHA256` of the various distribution sets, such as `base.txz`, `kernel.txz`, and so on, are added to the package:misc/freebsd-release-manifests[] port.
This allows utilities such as package:ports-mgmt/poudriere[] to safely use these distribution sets by providing a mechanism through which the checksums can be verified.
[[releng-head]]
== Release from {branchHead}
This section describes the general procedures of the FreeBSD release cycle from the {branchHead} branch.
[[releng-head-builds-alpha]]
=== FreeBSD "`ALPHA`" Builds
Starting with the FreeBSD 10.0-RELEASE cycle, the notion of "`ALPHA`" builds was introduced.
Unlike the `BETA` and `RC` builds, `ALPHA` builds are not included in the FreeBSD Release schedule.
The idea behind `ALPHA` builds is to provide regular FreeBSD-provided builds before the creation of the {branchStable} branch.
FreeBSD `ALPHA` snapshots should be built approximately once a week.
For the first `ALPHA` build, the `BRANCH` value in [.filename]#sys/conf/newvers.sh# needs to be changed from `CURRENT` to `ALPHA1`.
For subsequent `ALPHA` builds, increment each `ALPHA__N__` value by one.
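For example, the relevant lines of [.filename]#sys/conf/newvers.sh# for the first build might read as follows; the revision value is illustrative:
[.programlisting]
....
REVISION="12.0"
BRANCH="ALPHA1"
....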
See <<releng-building>> for information on building the `ALPHA` images.
[[releng-head-branching]]
=== Creating the {branchStablex} Branch
When creating the {branchStable} branch, several changes are required in both the new {branchStable} branch and the {branchHead} branch.
The files listed are relative to the repository root.
To create the new {branchStablex} branch in Subversion:
[source,shell,subs="attributes"]
....
% svn cp ^/head {branchStablex}
....
Once the {branchStablex} branch has been committed, make the following edits:
[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| File to Edit
| What to Change
|[.filename]#stable/12/UPDATING#
|Update the FreeBSD version, and remove the notice about `WITNESS`
|[.filename]#stable/12/contrib/jemalloc/include/jemalloc/jemalloc_FreeBSD.h#
a|
[source,shell,subs="attributes"]
....
#ifndef MALLOC_PRODUCTION
#define MALLOC_PRODUCTION
#endif
....
|[.filename]#stable/12/lib/clang/llvm.build.mk#
|Uncomment `-DNDEBUG`
|[.filename]#stable/12/sys/\*/conf/GENERIC*#
|Remove debugging support
|[.filename]#stable/12/sys/*/conf/MINIMAL#
|Remove debugging support
|[.filename]#stable/12/release/release.conf.sample#
|Update `SRCBRANCH`
|[.filename]#stable/12/sys/*/conf/GENERIC-NODEBUG#
|Remove these kernel configurations
|[.filename]#stable/12/sys/arm/conf/std.arm*#
|Remove debugging options
|[.filename]#stable/12/sys/conf/newvers.sh#
|Update the `BRANCH` value to reflect `BETA1`
|[.filename]#stable/12/shared/mk/src.opts.mk#
|Move `REPRODUCIBLE_BUILD` from `\__DEFAULT_NO_OPTIONS` to `__DEFAULT_YES_OPTIONS`
|[.filename]#stable/12/shared/mk/src.opts.mk#
|Move `LLVM_ASSERTIONS` from `\__DEFAULT_YES_OPTIONS` to `__DEFAULT_NO_OPTIONS` (FreeBSD 13.x and later only)
|[.filename]#stable/12/libexec/rc/rc.conf#
|Set `dumpdev` from `AUTO` to `NO` (it remains configurable via [.filename]#rc.conf# for those who want it enabled by default)
|[.filename]#stable/12/release/Makefile#
|Remove the `debug.witness.trace` entries
|===
Then in the {branchHead} branch, which will now become a new major version:
[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| File to Edit
| What to Change
|[.filename]#head/UPDATING#
|Update the FreeBSD version
|[.filename]#head/sys/conf/newvers.sh#
|Update the `BRANCH` value to reflect `CURRENT`, and increment `REVISION`
|[.filename]#head/Makefile.inc1#
|Update `TARGET_TRIPLE` and `MACHINE_TRIPLE`
|[.filename]#head/sys/sys/param.h#
|Update `__FreeBSD_version`
|[.filename]#head/gnu/usr.bin/cc/cc_tools/freebsd-native.h#
|Update `FBSD_MAJOR` and `FBSD_CC_VER`
|[.filename]#head/contrib/gcc/config.gcc#
|Append the `freebsdversion.h` section
|[.filename]#head/lib/clang/llvm.build.mk#
|Update the value of `OS_VERSION`
|[.filename]#head/lib/clang/freebsd_cc_version.h#
|Update `FREEBSD_CC_VERSION`
|[.filename]#head/lib/clang/include/lld/Common/Version.inc#
|Update `LLD_REVISION_STRING`
|[.filename]#head/Makefile.libcompat#
|Update `LIB32CPUFLAGS`
|===
[[releng-stable]]
== Release from {branchStable}
This section describes the general procedures of the FreeBSD release cycle from an established {branchStable} branch.
[[releng-stable-slush]]
=== FreeBSD `stable` Branch Code Slush
In preparation for the code freeze on a `stable` branch, several files need to be updated to reflect that the release cycle is officially in progress.
These files are all relative to the top-most level of the stable branch:
[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| File to Edit
| What to Change
|[.filename]#sys/conf/newvers.sh#
|Update the `BRANCH` value to reflect `PRERELEASE`
|[.filename]#Makefile.inc1#
|Update `TARGET_TRIPLE`
|[.filename]#lib/clang/llvm.build.mk#
|Update `OS_VERSION`
|[.filename]#Makefile.libcompat#
|Update `LIB32CPUFLAGS`
|[.filename]#gnu/usr.bin/groff/tmac/mdoc.local.in#
|Add a new `.ds` entry for the FreeBSD version, and update `doc-default-operating-system` (FreeBSD 11.x and earlier only)
|===
[[releng-stable-builds-beta]]
=== FreeBSD `BETA` Builds
Following the code slush, the next phase of the release cycle is the code freeze.
This is the point at which all commits to the stable branch require explicit approval from the {teamRe}.
This is enforced by pre-commit hooks in the Subversion repository by editing [.filename]#base/svnadmin/conf/approvers# to include a regular expression matching the {branchStablex} branch for the release:
[.programlisting,subs="attributes"]
....
^/{branchStablex} re
^/{branchRelengx} re
....
[NOTE]
====
There are two general exceptions to requiring commit approval during the release cycle.
The first is any change that needs to be committed by the Release Engineer in order to proceed with the day-to-day workflow of the release cycle; the other is security fixes that may occur during the release cycle.
====
Once the code freeze is in effect, the next build from the branch is labeled `BETA1`.
This is done by updating the `BRANCH` value in [.filename]#sys/conf/newvers.sh# from `PRERELEASE` to `BETA1`.
Once this is done, the first set of `BETA` builds are started.
Subsequent `BETA` builds do not require updates to any files other than [.filename]#sys/conf/newvers.sh#, incrementing the `BETA` build number.
[[releng-stable-branching]]
=== Creating the {branchRelengx} Branch
When the first `RC` (Release Candidate) build is ready to begin, the {branchReleng} branch is created.
This is a multi-step process that must be done in a specific order to avoid anomalies such as overlapping `__FreeBSD_version` values.
The paths listed below are relative to the repository root.
The order of commits and what to change are:
[source,shell,subs="attributes"]
....
% svn cp ^/{branchStablex} {branchRelengx}
....
[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| File to Edit
| What to Change
|[.filename]#releng/12.0/sys/conf/newvers.sh#
|Change `BETA__X__` to `RC1`
|[.filename]#releng/12.0/sys/sys/param.h#
|Update `__FreeBSD_version`
|[.filename]#releng/12.0/etc/pkg/FreeBSD.conf#
|Replace `latest` with `quarterly` as the default package repository location
|[.filename]#releng/12.0/release/pkg_repos/release-dvd.conf#
|Replace `latest` with `quarterly` as the default package repository location
|[.filename]#stable/12/sys/conf/newvers.sh#
|Update `BETA__X__` with `PRERELEASE`
|[.filename]#stable/12/sys/sys/param.h#
|Update `__FreeBSD_version`
|[.filename]#svnadmin/conf/approvers#
|Add a new approvers line for the releng branch as was done for the stable branch
|===
[source,shell,subs="attributes"]
....
% svn propdel -R svn:mergeinfo {branchRelengx}
% svn commit {branchRelengx}
% svn commit {branchStablex}
....
Now that two new `__FreeBSD_version` values exist, also update [.filename]#~/documentation/content/en/books/porters-handbook/versions/chapter.adoc# in the Documentation Project repository.
After the first `RC` build has completed and tested, the {branchStable} branch can be "thawed" by removing (or commenting) the ^/{branchStablex} entry in [.filename]#svnadmin/conf/approvers#.
Following the availability of the first `RC`, {teamBugmeister} should be emailed to add the new FreeBSD `-RELEASE` to the `versions` available in the drop-down menu shown in the bug tracker.
[[releng-building]]
== Building FreeBSD Installation Media
This section describes the general procedures for producing FreeBSD development snapshots and releases.
[[releng-build-scripts]]
=== Release Build Scripts
This section describes the build scripts used by {teamRe} to produce development snapshots and releases.
[[releng-build-scripts-single]]
==== The [.filename]#release.sh# Script
Prior to FreeBSD 9.0-RELEASE, [.filename]#src/release/Makefile# was updated, and the [.filename]#src/release/generate-release.sh# script was introduced as a wrapper to automate invoking its targets.
Prior to FreeBSD 9.2-RELEASE, [.filename]#src/release/release.sh# was introduced; heavily based on [.filename]#src/release/generate-release.sh#, it added support for specifying configuration files to override various options and environment variables.
Support for configuration files makes it possible to cross build each architecture for a release by specifying a separate configuration file for each invocation.
As a brief example of using [.filename]#src/release/release.sh# to build a single release in [.filename]#/scratch#:
[source,shell,subs="attributes"]
....
# /bin/sh /usr/src/release/release.sh
....
As a brief example of using [.filename]#src/release/release.sh# to build a single, cross-built release using a different target directory, create a custom [.filename]#release.conf# containing:
[.programlisting,subs="attributes"]
....
# release.sh configuration for powerpc/powerpc64
CHROOTDIR="/scratch-powerpc64"
TARGET="powerpc"
TARGET_ARCH="powerpc64"
KERNEL="GENERIC64"
....
Then invoke [.filename]#src/release/release.sh# as:
[source,shell,subs="attributes"]
....
# /bin/sh /usr/src/release/release.sh -c $HOME/release.conf
....
See [.filename]#src/release/release.conf.sample# for more details and example usage.
[[releng-build-scripts-multiple]]
==== The [.filename]#thermite.sh# Wrapper Script
In order to make cross building the full set of architectures supported on a given branch faster and easier, and to reduce the chance of human error, a wrapper script around [.filename]#src/release/release.sh# was written to iterate through the various combinations of architectures and invoke [.filename]#src/release/release.sh# using a configuration file specific to that architecture.
The wrapper script is called [.filename]#thermite.sh#, which is available in the FreeBSD Subversion repository at `svn://svn.freebsd.org/base/user/gjb/thermite/`, in addition to configuration files used to build {branchHead} and {branchStablex} development snapshots.
Using [.filename]#thermite.sh# is covered in <<releng-build-snapshot>> and <<releng-build-release>>.
Each architecture and individual kernel have their own configuration file used by [.filename]#release.sh#.
Each branch has its own [.filename]#defaults-X.conf# configuration which contains entries common throughout each architecture, where overrides or special variables are set and/or overridden in the per-build files.
The per-build configuration file naming scheme is in the form of [.filename]#${revision}-${TARGET_ARCH}-${KERNCONF}-${type}.conf#, where the uppercase variables are equivalent to those used by the build system, and the lowercase variables are set within the configuration files, mapping to the major version of the respective branch.
Each branch also has its own [.filename]#builds-X.conf# configuration, which is used by [.filename]#thermite.sh#. The [.filename]#thermite.sh# script iterates through each ${revision}, ${TARGET_ARCH}, ${KERNCONF}, and ${type} value, creating a master list of what to build.
However, a given combination from the list will only be built if the respective configuration file exists, which is where the naming scheme above is relevant.
There are two paths of file sourcing:
* [.filename]#builds-12.conf# - [.filename]#main.conf#
+
This controls [.filename]#thermite.sh# behavior
* [.filename]#12-amd64-GENERIC-snap.conf# - [.filename]#defaults-12.conf# - [.filename]#main.conf#
+
This controls [.filename]#release/release.sh# behavior within the build
[NOTE]
====
The [.filename]#builds-12.conf#, [.filename]#defaults-12.conf#, and [.filename]#main.conf# configuration files exist to reduce repetition between the various per-build files.
====
[[releng-build-snapshot]]
=== Building FreeBSD Development Snapshots
The official release build machines have a specific filesystem layout using ZFS, of which [.filename]#thermite.sh# takes heavy advantage with clones and snapshots, ensuring a pristine build environment.
The build scripts reside in [.filename]#/releng/scripts-snapshot/scripts# or [.filename]#/releng/scripts-release/scripts# respectively, to avoid collisions between an `RC` build from a releng branch versus a `STABLE` snapshot from the respective stable branch.
A separate dataset exists for the final build images, [.filename]#/snap/ftp#. This directory contains both snapshots and releases directories.
They are only used if the `EVERYTHINGISFINE` variable is defined in [.filename]#main.conf#.
[NOTE]
====
The `EVERYTHINGISFINE` variable name was chosen to avoid colliding with a variable that might be possibly set in the user environment, accidentally enabling the behavior that depends on it being defined.
====
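A minimal sketch of the relevant [.filename]#main.conf# line; this is illustrative only, and the rest of the file is omitted:
[.programlisting]
....
# Stage the resulting images once each build completes
EVERYTHINGISFINE=1
....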
As [.filename]#thermite.sh# iterates through the master list of combinations and locates the per-build configuration file, a ZFS dataset is created under [.filename]#/releng#, such as [.filename]#/releng/12-amd64-GENERIC-snap#.
The `src/`, `ports/`, and `doc/` trees are checked out to separate ZFS datasets, such as [.filename]#/releng/12-src-snap#, which are then cloned and mounted into the respective build datasets.
This is done to avoid checking out a given tree more than once.
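Purely as an illustration of this cloning step, which [.filename]#thermite.sh# performs automatically; the pool and dataset names here are hypothetical:
[source,shell]
....
# zfs snapshot zroot/releng/12-src-snap@clean
# zfs clone -o mountpoint=/releng/12-amd64-GENERIC-snap/usr/src zroot/releng/12-src-snap@clean zroot/releng/12-amd64-GENERIC-snap-src
....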
Assuming these filesystem paths, [.filename]#thermite.sh# would be invoked as:
[source,shell,subs="attributes"]
....
# cd /releng/scripts-snapshot/scripts
# ./setrev.sh -b {branchStablex}
# ./zfs-cleanup.sh -c ./builds-12.conf
# ./thermite.sh -c ./builds-12.conf
....
Once the builds have completed, additional helper scripts are available to generate development snapshot emails which are sent to the `freebsd-snapshots@freebsd.org` mailing list:
[source,shell,subs="attributes"]
....
# cd /releng/scripts-snapshot/scripts
# ./get-checksums.sh -c ./builds-12.conf | ./generate-email.pl > snapshot-12-mail
....
[NOTE]
====
The generated output should be double-checked for correctness, and the email itself should be PGP signed, in-line.
====
[NOTE]
====
These helper scripts only apply to development snapshot builds.
Announcements during the release cycle (excluding the final release announcement) are created from an email template.
A sample of the email template currently used can be found link:here[here].
====
[[releng-build-release]]
=== Building FreeBSD Releases
When building FreeBSD releases, [.filename]#thermite.sh# is invoked in the same way as for development snapshots.
The difference between development snapshots and release builds, `BETA` and `RC` included, is that the configuration files must be named with `release` instead of `snap` as the type, as mentioned above.
In addition, the `BUILDTYPE` and `types` must be changed from `snap` to `release` in [.filename]#defaults-12.conf# and [.filename]#builds-12.conf#, respectively.
When building `BETA`, `RC`, and the final `RELEASE`, also statically set `BUILDSVNREV` to the revision on the branch reflecting the name change, and `BUILDDATE` to the date the builds are started, in `YYYYMMDD` format.
If the `doc/` and `ports/` trees have been tagged, also set `PORTBRANCH` and `DOCBRANCH` to the relevant tag path in the Subversion repository, replacing `HEAD` with the last changed revision.
Also set `releasesrc` in [.filename]#builds-12.conf# to the relevant branch, such as {branchStablex} or {branchRelengx}.
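As a hedged sketch of the overrides described above, the relevant lines might look like the following; all values are illustrative, and exactly which configuration file carries `BUILDDATE` may differ:
[.programlisting]
....
# builds-12.conf
types="release"
releasesrc="releng/12.0"

# defaults-12.conf
BUILDTYPE="release"
BUILDDATE="20160819"
....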
During the release cycle, a copy of [.filename]#CHECKSUM.SHA512# and [.filename]#CHECKSUM.SHA256# for each architecture are stored in the {teamRe} internal repository in addition to being included in the various announcement emails.
Each [.filename]#MANIFEST# containing the hashes of [.filename]#base.txz#, [.filename]#kernel.txz#, etc. are added to package:misc/freebsd-release-manifests[] in the Ports Collection, as well.
In preparation for the release build, several files need to be updated:
[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| File to Edit
| What to Change
|[.filename]#sys/conf/newvers.sh#
|Update the `BRANCH` value to `RELEASE`
|[.filename]#UPDATING#
|Add the anticipated announcement date
|[.filename]#lib/csu/common/crtbrand.c#
|Replace `__FreeBSD_version` with the value in [.filename]#sys/sys/param.h#
|===
After building the final `RELEASE`, the {branchRelengx} branch is tagged as {branchReleasex} using the revision from which the `RELEASE` was built.
Similar to creating the {branchStablex} and {branchRelengx} branches, this is done with `svn cp`.
From the repository root:
[source,shell,subs="attributes"]
....
% svn cp ^/{branchRelengx}@r306420 {branchReleasex}
% svn commit {branchReleasex}
....
[[releng-mirrors]]
== Publishing FreeBSD Installation Media to Project Mirrors
This section describes the procedure to publish FreeBSD development snapshots and releases to the Project mirrors.
[[releng-mirrors-staging]]
=== Staging FreeBSD Installation Media Images
Staging FreeBSD snapshots and releases is a two part process:
* Creating the directory structure to match the hierarchy on `ftp-master`
+
If `EVERYTHINGISFINE` is defined in the build configuration files ([.filename]#main.conf# in the case of the build scripts referenced above), this happens automatically after the build is complete, creating the directory structure in [.filename]#${DESTDIR}/R/ftp-stage# with a path structure matching what is expected on `ftp-master`.
This is equivalent to running the following directly:
+
[source,shell,subs="attributes"]
....
# make -C /usr/src/release -f Makefile.mirrors EVERYTHINGISFINE=1 ftp-stage
....
+
After each architecture is built, [.filename]#thermite.sh# will rsync the [.filename]#${DESTDIR}/R/ftp-stage# from the build to [.filename]#/snap/ftp/snapshots# or [.filename]#/snap/ftp/releases# on the build host, respectively.
* Copying the files to a staging directory on `ftp-master` before moving the files into [.filename]#pub/# to begin propagation to the Project mirrors
+
Once all builds have finished, [.filename]#/snap/ftp/snapshots#, or [.filename]#/snap/ftp/releases# for a release, is polled by `ftp-master` using rsync to [.filename]#/archive/tmp/snapshots# or [.filename]#/archive/tmp/releases#, respectively (a minimal sketch of this pull follows the list).
+
[NOTE]
====
On `ftp-master` in the FreeBSD Project infrastructure, this step requires `root` level access, as it must be executed as the `archive` user.
====
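A minimal sketch of that rsync pull, run as the `archive` user on `ftp-master`, might look like the following; the build host name is a placeholder, and the exact invocation used in the Project infrastructure may differ:
[source,shell]
....
% /usr/local/bin/rsync -avH buildhost.example.org:/snap/ftp/snapshots/ /archive/tmp/snapshots/
....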
[[releng-mirrors-publishing]]
=== Publishing FreeBSD Installation Media
Once the images are staged in [.filename]#/archive/tmp/#, they are ready to be made public by putting them in [.filename]#/archive/pub/FreeBSD#.
In order to reduce propagation time, man:pax[1] is used to create hard links from [.filename]#/archive/tmp# to [.filename]#/archive/pub/FreeBSD#.
[NOTE]
====
In order for this to be effective, both [.filename]#/archive/tmp# and [.filename]#/archive/pub# must reside on the same logical filesystem.
====
There is a caveat, however: rsync must be used after man:pax[1] in order to correct the symbolic links in [.filename]#pub/FreeBSD/snapshots/ISO-IMAGES#, which man:pax[1] replaces with hard links, increasing the propagation time.
[NOTE]
====
As with the staging steps, this requires `root` level access, as it must be executed as the `archive` user.
====
As the `archive` user:
[source,shell,subs="attributes"]
....
% cd /archive/tmp/snapshots
% pax -r -w -l . /archive/pub/FreeBSD/snapshots
% /usr/local/bin/rsync -avH /archive/tmp/snapshots/* /archive/pub/FreeBSD/snapshots/
....
Replace _snapshots_ with _releases_ as appropriate.
[[releng-wrapup]]
== Wrapping up the Release Cycle
This section describes general post-release tasks.
[[releng-wrapup-en]]
=== Post-Release Errata Notices
As the release cycle approaches conclusion, it is common to have several EN (Errata Notice) candidates to address issues that were discovered late in the cycle.
Following the release, the {teamRe} and the {teamSecteam} revisit changes that were not approved prior to the final release, and depending on the scope of the change in question, may issue an EN.
[NOTE]
====
The actual process of issuing ENs is handled by the {teamSecteam}.
====
To request an Errata Notice after a release cycle has completed, a developer should fill out the https://www.freebsd.org/security/errata-template.txt[Errata Notice template], in particular the `Background`, `Problem Description`, `Impact`, and if applicable, `Workaround` sections.
The completed Errata Notice template should be emailed together with either a patch against the {branchReleng} branch or a list of revisions from the {branchStable} branch.
For Errata Notice requests immediately following the release, the request should be emailed to both the {teamRe} and the {teamSecteam}.
Once the {branchReleng} branch has been handed over to the {teamSecteam} as described in <<releng-wrapup-handoff>>, Errata Notice requests should be sent to the {teamSecteam}.
[[releng-wrapup-handoff]]
=== Handoff to the {teamSecteam}
Roughly two weeks following the release, the Release Engineer updates [.filename]#svnadmin/conf/approvers# changing the approver column from `re` to `(so|security-officer)` for the {branchRelengx} branch.
[[releng-eol]]
== Release End-of-Life
This section describes the website-related files to update when a release reaches EoL (End-of-Life).
[[releng-eol-website]]
=== Website Updates for End-of-Life
When a release reaches End-of-Life, references to that release should be removed and/or updated on the website:
[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| File
| What to Change
|[.filename]#~/website/themes/beastie/layouts/index.html#
|Remove `u-relXXX-announce` and `u-relXXX-newsflash` references.
|[.filename]#~/website/content/en/releases/_index.adoc#
|Move the `u-relXXX-*` variables from the supported release list to the Legacy Releases list.
|[.filename]#~/website/content/en/releng/_index.adoc#
|Update the appropriate releng branch to reflect that the branch is no longer supported.
|[.filename]#~/website/content/en/security/_index.adoc#
|Remove the branch from the supported branch list.
|[.filename]#~/website/content/en/where.adoc#
|Remove the URLs for the release.
|[.filename]#~/website/themes/beastie/layouts/partials/sidenav.html#
|Remove `u-relXXX-announce` and `u-relXXX-newsflash` references.
|[.filename]#~/website/static/security/advisory-template.txt#
|Remove references to the release and releng branch.
|[.filename]#~/website/static/security/errata-template.txt#
|Remove references to the release and releng branch.
|===
diff --git a/documentation/content/en/articles/freebsd-update-server/_index.adoc b/documentation/content/en/articles/freebsd-update-server/_index.adoc
index 394836d40e..6ee32d4eee 100644
--- a/documentation/content/en/articles/freebsd-update-server/_index.adoc
+++ b/documentation/content/en/articles/freebsd-update-server/_index.adoc
@@ -1,612 +1,612 @@
---
title: Build Your Own FreeBSD Update Server
authors:
- author: Jason Helfman
email: jgh@FreeBSD.org
copyright: 2009-2011, 2013 Jason Helfman
-releaseinfo: "$FreeBSD$"
+description: Build Your Own FreeBSD Update Server
trademarks: ["freebsd", "amd", "intel", "general"]
---
= Build Your Own FreeBSD Update Server
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
include::shared/authors.adoc[]
include::shared/en/urls.adoc[]
[.abstract-title]
Abstract
This article describes building an internal FreeBSD Update Server.
The https://svnweb.freebsd.org/base/user/cperciva/freebsd-update-build/[freebsd-update-server] is written by `{cperciva}`, Security Officer Emeritus of FreeBSD.
For users who find it convenient to update their systems against an official update server, building their own FreeBSD Update Server may help extend its functionality by supporting manually tweaked FreeBSD releases or by providing a local mirror that allows faster updates for a number of machines.
'''
toc::[]
[[acknowledgments]]
== Acknowledgments
This article was subsequently printed in https://people.freebsd.org/~jgh/files/fus/BSD_03_2010_EN.pdf[BSD Magazine].
[[introduction]]
== Introduction
Experienced users or administrators are often responsible for several machines or environments.
They understand the difficult demands and challenges of maintaining such an infrastructure.
Running a FreeBSD Update Server makes it easier to deploy security and software patches to selected test machines before rolling them out to production.
It also means a number of systems can be updated from the local network rather than a potentially slower Internet connection.
This article outlines the steps involved in creating an internal FreeBSD Update Server.
[[prerequisites]]
== Prerequisites
To build an internal FreeBSD Update Server some requirements should be met.
* A running FreeBSD system.
+
[NOTE]
====
At a minimum, updates require building on a FreeBSD release greater than or equal to the target release version for distribution.
====
* A user account with at least 4 GB of available space. This will allow the creation of updates for 7.1 and 7.2, but the exact space requirements may change from version to version.
* An man:ssh[1] account on a remote machine to upload distributed updates.
* A web server, like link:{handbook}#network-apache[Apache], with over half of the space required for the build. For instance, test builds for 7.1 and 7.2 consume a total amount of 4 GB, and the webserver space needed to distribute these updates is 2.6 GB.
* Basic knowledge of shell scripting with Bourne shell, man:sh[1].
[[Configuration]]
== Configuration: Installation & Setup
Download the https://svnweb.freebsd.org/base/user/cperciva/freebsd-update-build/[freebsd-update-server] software by installing package:devel/subversion[] and package:security/ca_root_nss[], and execute:
[source,shell]
....
% svn co https://svn.freebsd.org/base/user/cperciva/freebsd-update-build freebsd-update-server
....
Update [.filename]#scripts/build.conf# appropriately.
It is sourced during all build operations.
Here is the default [.filename]#build.conf#, which should be modified to suit your environment.
[.programlisting]
....
# Main configuration file for FreeBSD Update builds. The
# release-specific configuration data is lower down in
# the scripts tree.
# Location from which to fetch releases
export FTP=ftp://ftp2.freebsd.org/pub/FreeBSD/releases <.>
# Host platform
export HOSTPLATFORM=`uname -m`
# Host name to use inside jails
export BUILDHOSTNAME=${HOSTPLATFORM}-builder.daemonology.net <.>
# Location of SSH key
export SSHKEY=/root/.ssh/id_dsa <.>
# SSH account into which files are uploaded
MASTERACCT=builder@wadham.daemonology.net <.>
# Directory into which files are uploaded
MASTERDIR=update-master.freebsd.org <.>
....
Parameters for consideration would be:
<.> This is the location where ISO images are downloaded from (by the `fetchiso()` subroutine of [.filename]#scripts/build.subr#). The location configured is not limited to FTP URIs. Any URI scheme supported by standard man:fetch[1] utility should work fine.
Customizations to the `fetchiso()` code can be installed by copying the default [.filename]#build.subr# script to the release and architecture-specific area at [.filename]#scripts/RELEASE/ARCHITECTURE/build.subr# and applying local changes.
<.> The name of the build host. This information will be displayed on updated systems when issuing:
+
[source,shell]
....
% uname -v
....
+
<.> The SSH key for uploading files to the update server. A key pair can be created by typing `ssh-keygen -t dsa`. This parameter is optional; standard password authentication will be used as a fallback authentication method when `SSHKEY` is not defined.
The man:ssh-keygen[1] manual page has more detailed information about SSH and the appropriate steps for creating and using one.
<.> Account for uploading files to the update server.
<.> Directory on the update server where files are uploaded to.
The default [.filename]#build.conf# shipped with the freebsd-update-server sources is suitable for building i386 releases of FreeBSD. As an example of building an update server for other architectures, the following steps outline the configuration changes needed for amd64:
[.procedure]
====
. Create a build environment for amd64:
+
[source,shell]
....
% mkdir -p /usr/local/freebsd-update-server/scripts/7.2-RELEASE/amd64
....
. Install a [.filename]#build.conf# in the newly created build directory. The build configuration options for FreeBSD 7.2-RELEASE on amd64 should be similar to:
+
[.programlisting]
....
# SHA256 hash of RELEASE disc1.iso image.
export RELH=1ea1f6f652d7c5f5eab7ef9f8edbed50cb664b08ed761850f95f48e86cc71ef5 <.>
# Components of the world, source, and kernels
export WORLDPARTS="base catpages dict doc games info manpages proflibs lib32"
export SOURCEPARTS="base bin contrib crypto etc games gnu include krb5 \
lib libexec release rescue sbin secure share sys tools \
ubin usbin cddl"
export KERNELPARTS="generic"
# EOL date
export EOL=1275289200 <.>
....
+
<.> The man:sha256[1] hash for the desired release is published within the respective link:https://www.FreeBSD.org/releases/[release announcement].
<.> To generate the "End of Life" number for [.filename]#build.conf#, refer to the "Estimated EOL" posted on the link:https://www.FreeBSD.org/security/security/[FreeBSD Security Website]. The value of `EOL` can be derived from the date listed on the web site, using the man:date[1] utility, for example:
+
[source,shell]
....
% date -j -f '%Y%m%d-%H%M%S' '20090401-000000' '+%s'
....
====
[[build]]
== Building Update Code
The first step is to run [.filename]#scripts/make.sh#.
This will build some binaries, create directories, and generate an RSA signing key used for approving builds.
In this step, a passphrase will have to be supplied for the final creation of the signing key.
[source,shell]
....
# sh scripts/make.sh
cc -O2 -fno-strict-aliasing -pipe findstamps.c -o findstamps
findstamps.c: In function 'usage':
findstamps.c:45: warning: incompatible implicit declaration of built-in function 'exit'
cc -O2 -fno-strict-aliasing -pipe unstamp.c -o unstamp
install findstamps ../bin
install unstamp ../bin
rm -f findstamps unstamp
Generating RSA private key, 4096 bit long modulus
................................................................................++
...................++
e is 65537 (0x10001)
Public key fingerprint:
27ef53e48dc869eea6c3136091cc6ab8589f967559824779e855d58a2294de9e
Encrypting signing key for root
enter aes-256-cbc encryption password:
Verifying - enter aes-256-cbc encryption password:
....
[NOTE]
====
Keep a note of the generated key fingerprint.
This value is required in [.filename]#/etc/freebsd-update.conf# for binary updates.
====
At this point, we are ready to stage a build.
[source,shell]
....
# cd /usr/local/freebsd-update-server
# sh scripts/init.sh amd64 7.2-RELEASE
....
What follows is a sample of an _initial_ build run.
[source,shell]
....
# sh scripts/init.sh amd64 7.2-RELEASE
Mon Aug 24 16:04:36 PDT 2009 Starting fetch for FreeBSD/amd64 7.2-RELEASE
/usr/local/freebsd-update-server/work/7.2-RELE100 of 588 MB 359 kBps 00m00s
Mon Aug 24 16:32:38 PDT 2009 Verifying disc1 hash for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 16:32:44 PDT 2009 Extracting components for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 16:34:05 PDT 2009 Constructing world+src image for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 16:35:57 PDT 2009 Extracting world+src for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 23:36:24 UTC 2009 Building world for FreeBSD/amd64 7.2-RELEASE
Tue Aug 25 00:31:29 UTC 2009 Distributing world for FreeBSD/amd64 7.2-RELEASE
Tue Aug 25 00:32:36 UTC 2009 Building and distributing kernels for FreeBSD/amd64 7.2-RELEASE
Tue Aug 25 00:44:44 UTC 2009 Constructing world components for FreeBSD/amd64 7.2-RELEASE
Tue Aug 25 00:44:56 UTC 2009 Distributing source for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 17:46:18 PDT 2009 Moving components into staging area for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 17:46:33 PDT 2009 Identifying extra documentation for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 17:47:13 PDT 2009 Extracting extra docs for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 17:47:18 PDT 2009 Indexing release for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 17:50:44 PDT 2009 Indexing world0 for FreeBSD/amd64 7.2-RELEASE
Files built but not released:
Files released but not built:
Files which differ by more than contents:
Files which differ between release and build:
kernel|generic|/GENERIC/hptrr.ko
kernel|generic|/GENERIC/kernel
src|sys|/sys/conf/newvers.sh
world|base|/boot/loader
world|base|/boot/pxeboot
world|base|/etc/mail/freebsd.cf
world|base|/etc/mail/freebsd.submit.cf
world|base|/etc/mail/sendmail.cf
world|base|/etc/mail/submit.cf
world|base|/lib/libcrypto.so.5
world|base|/usr/bin/ntpq
world|base|/usr/lib/libalias.a
world|base|/usr/lib/libalias_cuseeme.a
world|base|/usr/lib/libalias_dummy.a
world|base|/usr/lib/libalias_ftp.a
...
....
Then the build of the world is performed again, with world patches.
A more detailed explanation may be found in [.filename]#scripts/build.subr#.
[WARNING]
====
During this second build cycle, the network time protocol daemon, man:ntpd[8], is turned off.
Per `{cperciva}`, Security Officer Emeritus of FreeBSD, "the https://svnweb.freebsd.org/base/user/cperciva/freebsd-update-build/[freebsd-update-server] build code needs to identify timestamps which are stored in files so that they can be ignored when comparing builds to determine which files need to be updated.
This timestamp-finding works by doing two builds 400 days apart and comparing the results."
====
[source,shell]
....
Mon Aug 24 17:54:07 PDT 2009 Extracting world+src for FreeBSD/amd64 7.2-RELEASE
Wed Sep 29 00:54:34 UTC 2010 Building world for FreeBSD/amd64 7.2-RELEASE
Wed Sep 29 01:49:42 UTC 2010 Distributing world for FreeBSD/amd64 7.2-RELEASE
Wed Sep 29 01:50:50 UTC 2010 Building and distributing kernels for FreeBSD/amd64 7.2-RELEASE
Wed Sep 29 02:02:56 UTC 2010 Constructing world components for FreeBSD/amd64 7.2-RELEASE
Wed Sep 29 02:03:08 UTC 2010 Distributing source for FreeBSD/amd64 7.2-RELEASE
Tue Sep 28 19:04:31 PDT 2010 Moving components into staging area for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 19:04:46 PDT 2009 Extracting extra docs for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 19:04:51 PDT 2009 Indexing world1 for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 19:08:04 PDT 2009 Locating build stamps for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 19:10:19 PDT 2009 Cleaning staging area for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 19:10:19 PDT 2009 Preparing to copy files into staging area for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 19:10:20 PDT 2009 Copying data files into staging area for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 12:16:57 PDT 2009 Copying metadata files into staging area for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 12:16:59 PDT 2009 Constructing metadata index and tag for FreeBSD/amd64 7.2-RELEASE
Files found which include build stamps:
kernel|generic|/GENERIC/hptrr.ko
kernel|generic|/GENERIC/kernel
world|base|/boot/loader
world|base|/boot/pxeboot
world|base|/etc/mail/freebsd.cf
world|base|/etc/mail/freebsd.submit.cf
world|base|/etc/mail/sendmail.cf
world|base|/etc/mail/submit.cf
world|base|/lib/libcrypto.so.5
world|base|/usr/bin/ntpq
world|base|/usr/include/osreldate.h
world|base|/usr/lib/libalias.a
world|base|/usr/lib/libalias_cuseeme.a
world|base|/usr/lib/libalias_dummy.a
world|base|/usr/lib/libalias_ftp.a
...
....
Finally, the build completes.
[source,shell]
....
Values of build stamps, excluding library archive headers:
v1.2 (Aug 25 2009 00:40:36)
v1.2 (Aug 25 2009 00:38:22)
@()FreeBSD 7.2-RELEASE 0: Tue Aug 25 00:38:29 UTC 2009
FreeBSD 7.2-RELEASE 0: Tue Aug 25 00:38:29 UTC 2009
root@server.myhost.com:/usr/obj/usr/src/sys/GENERIC
7.2-RELEASE
Mon Aug 24 23:55:25 UTC 2009
Mon Aug 24 23:55:25 UTC 2009
built by root@server.myhost.com on Tue Aug 25 00:16:15 UTC 2009
built by root@server.myhost.com on Tue Aug 25 00:16:15 UTC 2009
built by root@server.myhost.com on Tue Aug 25 00:16:15 UTC 2009
built by root@server.myhost.com on Tue Aug 25 00:16:15 UTC 2009
Mon Aug 24 23:46:47 UTC 2009
ntpq 4.2.4p5-a Mon Aug 24 23:55:53 UTC 2009 (1)
* Copyright (c) 1992-2009 The FreeBSD Project.
Mon Aug 24 23:46:47 UTC 2009
Mon Aug 24 23:55:40 UTC 2009
Aug 25 2009
ntpd 4.2.4p5-a Mon Aug 24 23:55:52 UTC 2009 (1)
ntpdate 4.2.4p5-a Mon Aug 24 23:55:53 UTC 2009 (1)
ntpdc 4.2.4p5-a Mon Aug 24 23:55:53 UTC 2009 (1)
Tue Aug 25 00:21:21 UTC 2009
Tue Aug 25 00:21:21 UTC 2009
Tue Aug 25 00:21:21 UTC 2009
Mon Aug 24 23:46:47 UTC 2009
FreeBSD/amd64 7.2-RELEASE initialization build complete. Please
review the list of build stamps printed above to confirm that
they look sensible, then run
sh -e approve.sh amd64 7.2-RELEASE
to sign the release.
....
Approve the build if everything is correct.
More information on determining this can be found in the distributed source file named [.filename]#USAGE#. Execute [.filename]#scripts/approve.sh#, as directed.
This will sign the release, and move components into a staging area suitable for uploading.
[source,shell]
....
# cd /usr/local/freebsd-update-server
# sh scripts/mountkey.sh
....
[source,shell]
....
# sh -e scripts/approve.sh amd64 7.2-RELEASE
Wed Aug 26 12:50:06 PDT 2009 Signing build for FreeBSD/amd64 7.2-RELEASE
Wed Aug 26 12:50:06 PDT 2009 Copying files to patch source directories for FreeBSD/amd64 7.2-RELEASE
Wed Aug 26 12:50:06 PDT 2009 Copying files to upload staging area for FreeBSD/amd64 7.2-RELEASE
Wed Aug 26 12:50:07 PDT 2009 Updating databases for FreeBSD/amd64 7.2-RELEASE
Wed Aug 26 12:50:07 PDT 2009 Cleaning staging area for FreeBSD/amd64 7.2-RELEASE
....
After the approval process is complete, the upload procedure may be started.
[source,shell]
....
# cd /usr/local/freebsd-update-server
# sh scripts/upload.sh amd64 7.2-RELEASE
....
[NOTE]
====
In the event update code needs to be re-uploaded, this may be done by changing to the public distributions directory for the target release and updating attributes of the _uploaded_ file.
[source,shell]
....
# cd /usr/local/freebsd-update-server/pub/7.2-RELEASE/amd64
# touch -t 200801010101.01 uploaded
....
====
The uploaded files will need to be in the document root of the webserver in order for updates to be distributed.
The exact configuration will vary depending on the web server used.
For the Apache web server, please refer to the link:{handbook}#network-apache[Configuration of Apache servers] section in the Handbook.
Update client's `KeyPrint` and `ServerName` in [.filename]#/etc/freebsd-update.conf#, and perform updates as instructed in the link:{handbook}#updating-upgrading-freebsdupdate[FreeBSD Update] section of the Handbook.
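As a purely illustrative sketch of the client side, the relevant lines in [.filename]#/etc/freebsd-update.conf# could look like the following, where the server name is a placeholder and the key fingerprint is the one printed by [.filename]#scripts/make.sh# earlier:
[.programlisting]
....
ServerName update.myserver.com
KeyPrint 27ef53e48dc869eea6c3136091cc6ab8589f967559824779e855d58a2294de9e
....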
[IMPORTANT]
====
In order for FreeBSD Update Server to work properly, updates for both the _current_ release and the release _one wants to upgrade to_ need to be built.
This is necessary for determining the differences of files between releases.
For example, when upgrading a FreeBSD system from 7.1-RELEASE to 7.2-RELEASE, updates will need to be built and uploaded to your distribution server for both versions.
====
For reference, the entire run of link:../../source/articles/freebsd-update-server/init.txt[init.sh] is attached.
[[patch]]
== Building a Patch
Every time a link:https://www.FreeBSD.org/security/advisories/[security advisory] or link:https://www.FreeBSD.org/security/notices/[security notice] is announced, a patch update can be built.
For this example, 7.1-RELEASE will be used.
Because this differs from the release built earlier, a couple of assumptions are made:
* The correct directory structure for the initial build has been set up.
* An initial build for 7.1-RELEASE has been performed.
Create the patch directory of the respective release under [.filename]#/usr/local/freebsd-update-server/patches/#.
[source,shell]
....
% mkdir -p /usr/local/freebsd-update-server/patches/7.1-RELEASE/
% cd /usr/local/freebsd-update-server/patches/7.1-RELEASE
....
As an example, take the patch for man:named[8].
Read the advisory, and grab the necessary file from link:https://www.FreeBSD.org/security/advisories/[FreeBSD Security Advisories].
More information on interpreting the advisory can be found in the link:{handbook}#security-advisories[FreeBSD Handbook].
In the https://security.freebsd.org/advisories/FreeBSD-SA-09:12.bind.asc[security brief], this advisory is called `SA-09:12.bind`.
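As a hypothetical example, the patch for that advisory could be fetched like this; the exact URL is listed in the advisory itself:
[source,shell]
....
% fetch https://security.FreeBSD.org/patches/SA-09:12/bind.patch
....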
After downloading the file, rename it to reflect the appropriate patch level.
It is suggested to keep this consistent with official FreeBSD patch levels, but the name may be freely chosen.
For this build, let us follow the currently established practice of FreeBSD and call this `p7`. Rename the file:
[source,shell]
....
% cd /usr/local/freebsd-update-server/patches/7.1-RELEASE/; mv bind.patch 7-SA-09:12.bind
....
[NOTE]
====
When running a patch level build, it is assumed that previous patches are in place.
When a patch build is run, it will run all patches contained in the patch directory.
Custom patches can be added to any build; use the number zero, or any other number, as the prefix (an example follows this note).
====
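For instance, a hypothetical local patch (the file names here are invented for illustration) could be dropped into the same directory with a zero prefix:
[source,shell]
....
% cp /path/to/local-fix.patch /usr/local/freebsd-update-server/patches/7.1-RELEASE/0-local-fix
....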
[WARNING]
====
It is up to the administrator of the FreeBSD Update Server to take appropriate measures to verify the authenticity of every patch.
====
At this point, a _diff_ is ready to be built.
The software first checks whether [.filename]#scripts/init.sh# has been run on the respective release prior to running the diff build.
[source,shell]
....
# cd /usr/local/freebsd-update-server
# sh scripts/diff.sh amd64 7.1-RELEASE 7
....
What follows is a sample of a _differential_ build run.
[source,shell]
....
# sh -e scripts/diff.sh amd64 7.1-RELEASE 7
Wed Aug 26 10:09:59 PDT 2009 Extracting world+src for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 17:10:25 UTC 2009 Building world for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 18:05:11 UTC 2009 Distributing world for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 18:06:16 UTC 2009 Building and distributing kernels for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 18:17:50 UTC 2009 Constructing world components for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 18:18:02 UTC 2009 Distributing source for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 11:19:23 PDT 2009 Moving components into staging area for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 11:19:37 PDT 2009 Extracting extra docs for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 11:19:42 PDT 2009 Indexing world0 for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 11:23:02 PDT 2009 Extracting world+src for FreeBSD/amd64 7.1-RELEASE-p7
Thu Sep 30 18:23:29 UTC 2010 Building world for FreeBSD/amd64 7.1-RELEASE-p7
Thu Sep 30 19:18:15 UTC 2010 Distributing world for FreeBSD/amd64 7.1-RELEASE-p7
Thu Sep 30 19:19:18 UTC 2010 Building and distributing kernels for FreeBSD/amd64 7.1-RELEASE-p7
Thu Sep 30 19:30:52 UTC 2010 Constructing world components for FreeBSD/amd64 7.1-RELEASE-p7
Thu Sep 30 19:31:03 UTC 2010 Distributing source for FreeBSD/amd64 7.1-RELEASE-p7
Thu Sep 30 12:32:25 PDT 2010 Moving components into staging area for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 12:32:39 PDT 2009 Extracting extra docs for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 12:32:43 PDT 2009 Indexing world1 for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 12:35:54 PDT 2009 Locating build stamps for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 12:36:58 PDT 2009 Reverting changes due to build stamps for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 12:37:14 PDT 2009 Cleaning staging area for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 12:37:14 PDT 2009 Preparing to copy files into staging area for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 12:37:15 PDT 2009 Copying data files into staging area for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 12:43:23 PDT 2009 Copying metadata files into staging area for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 12:43:25 PDT 2009 Constructing metadata index and tag for FreeBSD/amd64 7.1-RELEASE-p7
...
Files found which include build stamps:
kernel|generic|/GENERIC/hptrr.ko
kernel|generic|/GENERIC/kernel
world|base|/boot/loader
world|base|/boot/pxeboot
world|base|/etc/mail/freebsd.cf
world|base|/etc/mail/freebsd.submit.cf
world|base|/etc/mail/sendmail.cf
world|base|/etc/mail/submit.cf
world|base|/lib/libcrypto.so.5
world|base|/usr/bin/ntpq
world|base|/usr/include/osreldate.h
world|base|/usr/lib/libalias.a
world|base|/usr/lib/libalias_cuseeme.a
world|base|/usr/lib/libalias_dummy.a
world|base|/usr/lib/libalias_ftp.a
...
Values of build stamps, excluding library archive headers:
v1.2 (Aug 26 2009 18:13:46)
v1.2 (Aug 26 2009 18:11:44)
@()FreeBSD 7.1-RELEASE-p7 0: Wed Aug 26 18:11:50 UTC 2009
FreeBSD 7.1-RELEASE-p7 0: Wed Aug 26 18:11:50 UTC 2009
root@server.myhost.com:/usr/obj/usr/src/sys/GENERIC
7.1-RELEASE-p7
Wed Aug 26 17:29:15 UTC 2009
Wed Aug 26 17:29:15 UTC 2009
built by root@server.myhost.com on Wed Aug 26 17:49:58 UTC 2009
built by root@server.myhost.com on Wed Aug 26 17:49:58 UTC 2009
built by root@server.myhost.com on Wed Aug 26 17:49:58 UTC 2009
built by root@server.myhost.com on Wed Aug 26 17:49:58 UTC 2009
Wed Aug 26 17:20:39 UTC 2009
ntpq 4.2.4p5-a Wed Aug 26 17:29:42 UTC 2009 (1)
* Copyright (c) 1992-2009 The FreeBSD Project.
Wed Aug 26 17:20:39 UTC 2009
Wed Aug 26 17:29:30 UTC 2009
Aug 26 2009
ntpd 4.2.4p5-a Wed Aug 26 17:29:41 UTC 2009 (1)
ntpdate 4.2.4p5-a Wed Aug 26 17:29:42 UTC 2009 (1)
ntpdc 4.2.4p5-a Wed Aug 26 17:29:42 UTC 2009 (1)
Wed Aug 26 17:55:02 UTC 2009
Wed Aug 26 17:55:02 UTC 2009
Wed Aug 26 17:55:02 UTC 2009
Wed Aug 26 17:20:39 UTC 2009
...
....
Updates are printed, and approval is requested.
[source,shell]
....
New updates:
kernel|generic|/GENERIC/kernel.symbols|f|0|0|0555|0|7c8dc176763f96ced0a57fc04e7c1b8d793f27e006dd13e0b499e1474ac47e10|
kernel|generic|/GENERIC/kernel|f|0|0|0555|0|33197e8cf15bbbac263d17f39c153c9d489348c2c534f7ca1120a1183dec67b1|
kernel|generic|/|d|0|0|0755|0||
src|base|/|d|0|0|0755|0||
src|bin|/|d|0|0|0755|0||
src|cddl|/|d|0|0|0755|0||
src|contrib|/contrib/bind9/bin/named/update.c|f|0|10000|0644|0|4d434abf0983df9bc47435670d307fa882ef4b348ed8ca90928d250f42ea0757|
src|contrib|/contrib/bind9/lib/dns/openssldsa_link.c|f|0|10000|0644|0|c6805c39f3da2a06dd3f163f26c314a4692d4cd9a2d929c0acc88d736324f550|
src|contrib|/contrib/bind9/lib/dns/opensslrsa_link.c|f|0|10000|0644|0|fa0f7417ee9da42cc8d0fd96ad24e7a34125e05b5ae075bd6e3238f1c022a712|
...
FreeBSD/amd64 7.1-RELEASE update build complete. Please review
the list of build stamps printed above and the list of updated
files to confirm that they look sensible, then run
sh -e approve.sh amd64 7.1-RELEASE
to sign the build.
....
Follow the same process as noted before for approving a build:
[source,shell]
....
# sh -e scripts/approve.sh amd64 7.1-RELEASE
Wed Aug 26 12:50:06 PDT 2009 Signing build for FreeBSD/amd64 7.1-RELEASE
Wed Aug 26 12:50:06 PDT 2009 Copying files to patch source directories for FreeBSD/amd64 7.1-RELEASE
Wed Aug 26 12:50:06 PDT 2009 Copying files to upload staging area for FreeBSD/amd64 7.1-RELEASE
Wed Aug 26 12:50:07 PDT 2009 Updating databases for FreeBSD/amd64 7.1-RELEASE
Wed Aug 26 12:50:07 PDT 2009 Cleaning staging area for FreeBSD/amd64 7.1-RELEASE
The FreeBSD/amd64 7.1-RELEASE update build has been signed and is
ready to be uploaded. Remember to run
sh -e umountkey.sh
to unmount the decrypted key once you have finished signing all
the new builds.
....
After approving the build, upload the software:
[source,shell]
....
# cd /usr/local/freebsd-update-server
# sh scripts/upload.sh amd64 7.1-RELEASE
....
For reference, the entire run of link:../../source/articles/freebsd-update-server/diff.txt[diff.sh] is attached.
[[tips]]
== Tips
* If a custom release is built using the native `make release` link:{releng}#release-build[procedure], freebsd-update-server code will work from your release. As an example, a release without ports or documentation can be built by clearing the functionality pertaining to documentation in the `findextradocs ()` and `addextradocs ()` subroutines, and by altering the download location in `fetchiso ()`, in [.filename]#scripts/build.subr#. As a last step, change the man:sha256[1] hash in [.filename]#build.conf# under your respective release and architecture and you are ready to build off your custom release.
+
[.programlisting]
....
# Compare ${WORKDIR}/release and ${WORKDIR}/$1, identify which parts
# of the world|doc subcomponent are missing from the latter, and
# build a tarball out of them.
findextradocs () {
}
# Add extra docs to ${WORKDIR}/$1
addextradocs () {
}
....
* Adding `-j _NUMBER_` flags to the `buildworld` and `obj` targets in the [.filename]#scripts/build.subr# script may speed up processing depending on the hardware used; however, it is not necessary. Using these flags in other targets is not recommended, as it may cause the build to become unreliable.
+
[.programlisting]
....
# Build the world
log "Building world"
cd /usr/src &&
make -j 2 ${COMPATFLAGS} buildworld 2>&1
# Distribute the world
log "Distributing world"
cd /usr/src/release &&
make -j 2 obj &&
make ${COMPATFLAGS} release.1 release.2 2>&1
....
* Create an appropriate link:{handbook}#network-dns[DNS] SRV record for the update server, and put others behind it with variable weights. Using this facility will provide update mirrors; however, this tip is not necessary unless you wish to provide a redundant service.
+
[.programlisting]
....
_http._tcp.update.myserver.com. IN SRV 0 2 80 host1.myserver.com.
IN SRV 0 1 80 host2.myserver.com.
IN SRV 0 0 80 host3.myserver.com.
....
diff --git a/documentation/content/en/articles/geom-class/_index.adoc b/documentation/content/en/articles/geom-class/_index.adoc
index 81795771fd..8c93697afb 100644
--- a/documentation/content/en/articles/geom-class/_index.adoc
+++ b/documentation/content/en/articles/geom-class/_index.adoc
@@ -1,402 +1,402 @@
---
title: Writing a GEOM Class
authors:
- author: Ivan Voras
email: ivoras@FreeBSD.org
-releaseinfo: "$FreeBSD$"
+description: Writing a GEOM Class in FreeBSD
trademarks: ["freebsd", "intel", "general"]
---
= Writing a GEOM Class
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
include::shared/en/urls.adoc[]
[.abstract-title]
Abstract
This text documents some starting points in developing GEOM classes, and kernel modules in general.
It is assumed that the reader is familiar with C userland programming.
'''
toc::[]
[[intro]]
== Introduction
[[intro-docs]]
=== Documentation
Documentation on kernel programming is scarce - it is one of the few areas where there is nearly nothing in the way of friendly tutorials, and the phrase "use the source!" really holds true.
However, there are some bits and pieces (some of them seriously outdated) floating around that should be studied before beginning to code:
* The link:{developers-handbook}[FreeBSD Developer's Handbook] - part of the documentation project, it does not contain anything specific to kernel programming, but rather some general useful information.
* The link:{arch-handbook}[FreeBSD Architecture Handbook] - also from the documentation project, contains descriptions of several low-level facilities and procedures. The most important chapter is 13, link:{arch-handbook}#driverbasics[Writing FreeBSD device drivers].
* The Blueprints section of http://www.freebsddiary.org[FreeBSD Diary] web site - contains several interesting articles on kernel facilities.
* The man pages in section 9 - for important documentation on kernel functions.
* The man:geom[4] man page and http://phk.freebsd.dk/pubs/[PHK's GEOM slides] - for general introduction of the GEOM subsystem.
* Man pages man:g_bio[9], man:g_event[9], man:g_data[9], man:g_geom[9], man:g_provider[9], man:g_consumer[9], man:g_access[9] & others linked from those, for documentation on specific functionalities.
* The man:style[9] man page - for documentation on the coding-style conventions which must be followed for any code which is to be committed to the FreeBSD tree.
[[prelim]]
== Preliminaries
The best way to do kernel development is to have (at least) two separate computers.
One of these would contain the development environment and sources, and the other would be used to test the newly written code by network-booting and network-mounting filesystems from the first one.
This way if the new code contains bugs and crashes the machine, it will not mess up the sources (and other "live" data).
The second system does not even require a proper display.
Instead, it could be connected with a serial cable or KVM to the first one.
But, since not everybody has two or more computers handy, there are a few things that can be done to prepare an otherwise "live" system for developing kernel code.
This setup is also applicable for developing in a http://www.vmware.com/[VMWare] or http://www.qemu.org/[QEmu] virtual machine (the next best thing after a dedicated development machine).
[[prelim-system]]
=== Modifying a System for Development
For any kernel programming a kernel with `INVARIANTS` enabled is a must-have. So enter these in your kernel configuration file:
[.programlisting]
....
options INVARIANT_SUPPORT
options INVARIANTS
....
For more debugging you should also include WITNESS support, which will alert you to mistakes in locking:
[.programlisting]
....
options WITNESS_SUPPORT
options WITNESS
....
For debugging crash dumps, a kernel with debug symbols is needed:
[.programlisting]
....
makeoptions DEBUG=-g
....
With the usual way of installing the kernel (`make installkernel`) the debug kernel will not be automatically installed.
It is called [.filename]#kernel.debug# and located in [.filename]#/usr/obj/usr/src/sys/KERNELNAME/#.
For convenience it should be copied to [.filename]#/boot/kernel/#.
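Assuming the default object directory mentioned above, the copy could be done like this:
[source,shell]
....
# cp /usr/obj/usr/src/sys/KERNELNAME/kernel.debug /boot/kernel/
....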
Another convenience is enabling the kernel debugger so you can examine a kernel panic when it happens.
For this, enter the following lines in your kernel configuration file:
[.programlisting]
....
options KDB
options DDB
options KDB_TRACE
....
For this to work you might need to set a sysctl (if it is not on by default):
[.programlisting]
....
debug.debugger_on_panic=1
....
Kernel panics will happen, so care should be taken with the filesystem cache.
In particular, having softupdates might mean the latest file version could be lost if a panic occurs before it is committed to storage.
Disabling softupdates yields a great performance hit, and still does not guarantee data consistency.
Mounting the filesystem with the "sync" option is needed for that.
For a compromise, the softupdates cache delays can be shortened.
There are three sysctls that are useful for this (best set in [.filename]#/etc/sysctl.conf#):
[.programlisting]
....
kern.filedelay=5
kern.dirdelay=4
kern.metadelay=3
....
The numbers represent seconds.
For debugging kernel panics, kernel core dumps are required.
Since a kernel panic might make filesystems unusable, this crash dump is first written to a raw partition.
Usually, this is the swap partition.
This partition must be at least as large as the physical RAM in the machine.
On the next boot, the dump is copied to a regular file.
This happens after filesystems are checked and mounted, and before swap is enabled.
This is controlled with two [.filename]#/etc/rc.conf# variables:
[.programlisting]
....
dumpdev="/dev/ad0s4b"
dumpdir="/usr/core"
....
The `dumpdev` variable specifies the swap partition and `dumpdir` tells the system where in the filesystem to relocate the core dump on reboot.
Writing kernel core dumps is slow and takes a long time, so if you have lots of memory (>256M) and panics are frequent, it could be frustrating to sit and wait while it is done (twice: first to write it to swap, then to relocate it to the filesystem).
It is convenient then to limit the amount of RAM the system will use via a [.filename]#/boot/loader.conf# tunable:
[.programlisting]
....
hw.physmem="256M"
....
If the panics are frequent and filesystems large (or you simply do not trust softupdates+background fsck) it is advisable to turn background fsck off via [.filename]#/etc/rc.conf# variable:
[.programlisting]
....
background_fsck="NO"
....
This way, the filesystems will always get checked when needed.
Note that with background fsck, a new panic could happen while it is checking the disks.
Again, the safest way is not to have many local filesystems by using another computer as an NFS server.
[[prelim-starting]]
=== Starting the Project
For the purpose of creating a new GEOM class, an empty subdirectory has to be created under an arbitrary user-accessible directory.
You do not have to create the module directory under [.filename]#/usr/src#.
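For example, an arbitrary working directory could be created like this (the path is just an illustration):
[source,shell]
....
% mkdir -p ~/src/geom_journal
% cd ~/src/geom_journal
....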
[[prelim-makefile]]
=== The Makefile
It is good practice to create [.filename]#Makefiles# for every nontrivial coding project, which of course includes kernel modules.
Creating the [.filename]#Makefile# is simple thanks to an extensive set of helper routines provided by the system.
In short, here is how a minimal [.filename]#Makefile# looks for a kernel module:
[.programlisting]
....
SRCS=g_journal.c
KMOD=geom_journal
.include <bsd.kmod.mk>
....
This [.filename]#Makefile# (with changed filenames) will do for any kernel module, and a GEOM class can reside in just one kernel module.
If more than one source file is required, list them in the `SRCS` variable, separated by whitespace.
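For instance, a hypothetical two-file module (the second file name is invented for illustration) would look like:
[.programlisting]
....
SRCS=g_journal.c g_journal_extra.c
KMOD=geom_journal
.include <bsd.kmod.mk>
....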
[[kernelprog]]
== On FreeBSD Kernel Programming
[[kernelprog-memalloc]]
=== Memory Allocation
See man:malloc[9].
Basic memory allocation is only slightly different than its userland equivalent.
Most notably, `malloc`() and `free`() accept additional parameters as is described in the man page.
A "malloc type" must be declared in the declaration section of a source file, like this:
[.programlisting]
....
static MALLOC_DEFINE(M_GJOURNAL, "gjournal data", "GEOM_JOURNAL Data");
....
To use this macro, [.filename]#sys/param.h#, [.filename]#sys/kernel.h# and [.filename]#sys/malloc.h# headers must be included.
There is another mechanism for allocating memory, the UMA (Universal Memory Allocator).
See man:uma[9] for details, but it is a special type of allocator mainly used for speedy allocation of lists comprised of same-sized items (for example, dynamic arrays of structs).
[[kernelprog-lists]]
=== Lists and Queues
See man:queue[3].
There are a LOT of cases when a list of things needs to be maintained.
Fortunately, this data structure is implemented (in several ways) by C macros included in the system.
The most used list type is TAILQ because it is the most flexible.
It is also the one with the largest memory requirements (its elements are doubly-linked) and the slowest (although the difference is on the order of several CPU instructions, so it should not be taken too seriously).
If data retrieval speed is very important, see man:tree[3] and man:hashinit[9].
[[kernelprog-bios]]
=== BIOs
Structure `bio` is used for any and all Input/Output operations concerning GEOM.
It basically contains information about what device ('provider') should satisfy the request, request type, offset, length, pointer to a buffer, and a bunch of "user-specific" flags and fields that can help implement various hacks.
The important thing here is that ``bio``s are handled asynchronously.
That means that, in most parts of the code, there is no analogue to userland's man:read[2] and man:write[2] calls that do not return until a request is done.
Rather, a developer-supplied function is called as a notification when the request gets completed (or results in error).
The asynchronous programming model (also called "event-driven") is somewhat harder than the imperative one commonly used in userland (at least it takes a while to get used to it).
In some cases the helper routines `g_write_data`() and `g_read_data`() can be used, but __not always__.
In particular, they cannot be used when a mutex is held; for example, the GEOM topology mutex or the internal mutex held during the `.start`() and `.stop`() functions.
[[geom]]
== On GEOM Programming
[[geom-ggate]]
=== Ggate
If maximum performance is not needed, a much simpler way of making a data transformation is to implement it in userland via the ggate (GEOM gate) facility.
Unfortunately, there is no easy way to convert between, or even share code between the two approaches.
[[geom-class]]
=== GEOM Class
GEOM classes are transformations on the data.
These transformations can be combined in a tree-like fashion.
Instances of GEOM classes are called __geoms__.
Each GEOM class has several "class methods" that get called when there is no geom instance available (or they are simply not bound to a single instance):
* `.init` is called when GEOM becomes aware of a GEOM class (when the kernel module gets loaded.)
* `.fini` gets called when GEOM abandons the class (when the module gets unloaded)
* `.taste` is called next, once for each provider the system has available. If applicable, this function will usually create and start a geom instance.
* `.destroy_geom` is called when the geom should be disbanded
* `.ctlconf` is called when user requests reconfiguration of existing geom
Also defined are the GEOM event functions, which will get copied to the geom instance.
Field `.geom` in the `g_class` structure is a LIST of geoms instantiated from the class.
These functions are called from the g_event kernel thread.
[[geom-softc]]
=== Softc
The name "softc" is a legacy term for "driver private data".
The name most probably comes from the archaic term "software control block".
In GEOM, it is a structure (more precise: pointer to a structure) that can be attached to a geom instance to hold whatever data is private to the geom instance.
Most GEOM classes have the following members:
* `struct g_provider *provider` : The "provider" this geom instantiates
* `uint16_t n_disks` : Number of consumers this geom consumes
* `struct g_consumer \**disks` : Array of `struct g_consumer*`. (It is not possible to use just single indirection because the `struct g_consumer*` entries are created on our behalf by GEOM).
The `softc` structure contains all the state of geom instance.
Every geom instance has its own softc.
[[geom-metadata]]
=== Metadata
Format of metadata is more-or-less class-dependent, but MUST start with:
* 16 byte buffer for null-terminated signature (usually the class name)
* uint32 version ID
It is assumed that geom classes know how to handle metadata with version IDs lower than theirs.
Metadata is located in the last sector of the provider (and thus must fit in it).
(All this is implementation-dependent but all existing code works like that, and it is supported by libraries.)
[[geom-creating]]
=== Labeling/creating a GEOM
The sequence of events is:
* user calls man:geom[8] utility (or one of its hardlinked friends)
* the utility figures out which geom class it is supposed to handle and searches for [.filename]#geom_CLASSNAME.so# library (usually in [.filename]#/lib/geom#).
* it man:dlopen[3]-s the library, extracts the definitions of command-line parameters and helper functions.
In the case of creating/labeling a new geom, this is what happens:
* man:geom[8] looks in the command-line argument for the command (usually `label`), and calls a helper function.
* The helper function checks parameters and gathers metadata, which it proceeds to write to all concerned providers.
* This "spoils" existing geoms (if any) and initializes a new round of "tasting" of the providers. The intended geom class recognizes the metadata and brings the geom up.
(The above sequence of events is implementation-dependent but all existing code works like that, and it is supported by libraries.)
[[geom-command]]
=== GEOM Command Structure
The helper [.filename]#geom_CLASSNAME.so# library exports `class_commands` structure, which is an array of `struct g_command` elements.
Commands are of uniform format and look like:
[.programlisting]
....
verb [-options] geomname [other]
....
Common verbs are:
* label - to write metadata to devices so they can be recognized at tasting and brought up in geoms
* destroy - to destroy metadata, so the geoms get destroyed
Common options are:
* `-v` : be verbose
* `-f` : force
Many actions, such as labeling and destroying metadata, can be performed in userland.
For this, `struct g_command` provides field `gc_func` that can be set to a function (in the same [.filename]#.so#) that will be called to process a verb.
If `gc_func` is NULL, the command will be passed to the kernel module, to the `.ctlreq` function of the geom class.
[[geom-geoms]]
=== Geoms
Geoms are instances of GEOM classes.
They have internal data (a softc structure) and some functions with which they respond to external events.
The event functions are:
* `.access` : calculates permissions (read/write/exclusive)
* `.dumpconf` : returns XML-formatted information about the geom
* `.orphan` : called when some underlying provider gets disconnected
* `.spoiled` : called when some underlying provider gets written to
* `.start` : handles I/O
These functions are called from the `g_down` kernel thread and there can be no sleeping in this context (see the definition of sleeping elsewhere), which limits what can be done quite a bit, but forces the handling to be fast.
Of these, the most important function for doing actual useful work is the `.start`() function, which is called when a BIO request arrives for a provider managed by an instance of a geom class.
[[geom-threads]]
=== GEOM Threads
There are three kernel threads created and run by the GEOM framework:
* `g_down` : Handles requests coming from high-level entities (such as a userland request) on the way to physical devices
* `g_up` : Handles responses from device drivers to requests made by higher-level entities
* `g_event` : Handles all other cases: creation of geom instances, access counting, "spoil" events, etc.
When a user process issues "read data X at offset Y of a file" request, this is what happens:
* The filesystem converts the request into a struct bio instance and passes it to the GEOM subsystem. It knows what geom instance should handle it because filesystems are hosted directly on a geom instance.
* The request ends up as a call to the `.start`() function made on the g_down thread and reaches the top-level geom instance.
* This top-level geom instance (for example the partition slicer) determines that the request should be routed to a lower-level instance (for example the disk driver). It makes a copy of the bio request (bio requests _ALWAYS_ need to be copied between instances, with `g_clone_bio`()!), modifies the data offset and target provider fields and executes the copy with `g_io_request`()
* The disk driver gets the bio request also as a call to `.start`() on the `g_down` thread. It talks to hardware, gets the data back, and calls `g_io_deliver`() on the bio.
* Now, the notification of bio completion "bubbles up" in the `g_up` thread. First the partition slicer gets `.done`() called in the `g_up` thread, it uses information stored in the bio to free the cloned `bio` structure (with `g_destroy_bio`()) and calls `g_io_deliver`() on the original request.
* The filesystem gets the data and transfers it to userland.
See man:g_bio[9] man page for information how the data is passed back and forth in the `bio` structure (note in particular the `bio_parent` and `bio_children` fields and how they are handled).
One important feature is: __THERE CAN BE NO SLEEPING IN G_UP AND G_DOWN THREADS__.
This means that none of the following things can be done in those threads (the list is of course not complete, but only informative):
* Calls to `msleep`() and `tsleep`(), obviously.
* Calls to `g_write_data`() and `g_read_data`(), because these sleep between passing the data to consumers and returning.
* Waiting for I/O.
* Calls to man:malloc[9] and `uma_zalloc`() with `M_WAITOK` flag set
* sx and other sleepable locks
This restriction is here to stop GEOM code clogging the I/O request path, since sleeping is usually not time-bound and there can be no guarantees on how long it will take (there are some other, more technical reasons also).
It also means that there is not much that can be done in those threads; for example, almost any complex thing requires memory allocation.
Fortunately, there is a way out: creating additional kernel threads.
[[geom-kernelthreads]]
=== Kernel Threads for Use in GEOM Code
Kernel threads are created with the man:kthread_create[9] function, and they are somewhat similar to userland threads in behavior, only they cannot return to the caller to signify termination, but must call man:kthread_exit[9].
In GEOM code, the usual use of threads is to offload processing of requests from `g_down` thread (the `.start`() function).
These threads look like "event handlers": they have a linked list of events associated with them (on which events can be posted by various functions in various threads, so it must be protected by a mutex), take the events from the list one by one and process them in a big `switch`() statement.
The main benefit of using a thread to handle I/O requests is that it can sleep when needed.
Now, this sounds good, but should be carefully thought out.
Sleeping is fine and very convenient, but it can very effectively destroy the performance of the geom transformation.
Extremely performance-sensitive classes probably should do all the work in `.start`() function call, taking great care to handle out-of-memory and similar errors.
The other benefit of having an event-handler thread like this is to serialize all the requests and responses coming from different geom threads into one thread.
This is also very convenient but can be slow.
In most cases, handling of `.done`() requests can be left to the `g_up` thread.
Mutexes in the FreeBSD kernel (see man:mutex[9]) have one distinction from their more common userland cousins: the code cannot sleep while holding a mutex.
If the code needs to sleep a lot, man:sx[9] locks may be more appropriate.
On the other hand, if you do almost everything in a single thread, you may get away with no mutexes at all.
diff --git a/documentation/content/en/articles/gjournal-desktop/_index.adoc b/documentation/content/en/articles/gjournal-desktop/_index.adoc
index 320a0b2295..99f51b0ac9 100644
--- a/documentation/content/en/articles/gjournal-desktop/_index.adoc
+++ b/documentation/content/en/articles/gjournal-desktop/_index.adoc
@@ -1,504 +1,504 @@
---
title: Implementing UFS Journaling on a Desktop PC
authors:
- author: Manolis Kiagias
email: manolis@FreeBSD.org
-releaseinfo: "$FreeBSD$"
+description: Implementing UFS Journaling on a Desktop PC
trademarks: ["freebsd", "general"]
---
= Implementing UFS Journaling on a Desktop PC
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
include::shared/authors.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/urls.adoc[]
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../images/articles/gjournal-desktop/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/articles/gjournal-desktop/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/articles/gjournal-desktop/
endif::[]
[.abstract-title]
Abstract
A journaling file system uses a log to record all transactions that take place in the file system, and preserves its integrity in the event of a system crash or power failure.
Although it is still possible to lose unsaved changes to files, journaling almost completely eliminates the possibility of file system corruption caused by an unclean shutdown.
It also shortens to a minimum the time required for after-failure file system checking.
Although the UFS file system employed by FreeBSD does not implement journaling itself, the new journal class of the GEOM framework in FreeBSD 7._X_ can be used to provide file system independent journaling.
This article explains how to implement UFS journaling on a typical desktop PC scenario.
'''
toc::[]
[[introduction]]
== Introduction
While professional servers are usually well protected from unforeseen shutdowns, the typical desktop is at the mercy of power failures, accidental resets, and other user related incidents that can lead to unclean shutdowns.
Soft Updates usually protect the file system efficiently in such cases, although most of the time a lengthy background check is required.
On rare occasions, file system corruption reaches a point where user intervention is required and data may be lost.
The new journaling capability provided by GEOM can greatly assist in such scenarios, by virtually eliminating the time required for file system checking, and ensuring that the file system is quickly restored to a consistent state.
This article describes a procedure for implementing UFS journaling on a typical desktop PC scenario (one hard disk used for both operating system and data).
It should be followed during a fresh installation of FreeBSD.
The steps are simple enough and do not require overly complex interaction with the command line.
After reading this article, you will know:
* How to reserve space for journaling during a new installation of FreeBSD.
* How to load and enable the `geom_journal` module (or build support for it in your custom kernel).
* How to convert your existing file systems to utilize journaling, and what options to use in [.filename]#/etc/fstab# to mount them.
* How to implement journaling in new (empty) partitions.
* How to troubleshoot common problems associated with journaling.
Before reading this article, you should:
* Understand basic UNIX(R) and FreeBSD concepts.
* Be familiar with the installation procedure of FreeBSD and the sysinstall utility.
[WARNING]
====
The procedure described here is intended for preparing a new installation where no actual user data is stored on the disk yet.
While it is possible to modify and extend this procedure for systems already in production, you should _backup_ all important data before doing so.
Messing around with disks and partitions at a low level can lead to fatal mistakes and data loss.
====
[[understanding-journaling]]
== Understanding Journaling in FreeBSD
The journaling provided by GEOM in FreeBSD 7._X_ is not file system specific (unlike, for example, the ext3 file system in Linux(R)) but functions at the block level.
Though this means it can be applied to different file systems, for FreeBSD 7.0-RELEASE, it can only be used on UFS2.
This functionality is provided by loading the [.filename]#geom_journal.ko# module into the kernel (or building it into a custom kernel) and using the `gjournal` command to configure the file systems.
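For example, the module can be loaded manually and, if desired, at every boot; the following is a minimal sketch using the standard kernel module tools:
[source,shell]
....
# kldload geom_journal
# echo 'geom_journal_load="YES"' >> /boot/loader.conf
....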
In general, you would like to journal large file systems, like [.filename]#/usr#.
You will, however, need to reserve some free disk space (see the following section).
When a file system is journaled, some disk space is needed to keep the journal itself.
The disk space that holds the actual data is referred to as the __data provider__, while the one that holds the journal is referred to as the __journal provider__.
The data and journal providers need to be on different partitions when journaling an existing (non-empty) partition.
When journaling a new partition, you have the option to use a single provider for both data and journal.
In any case, the `gjournal` command combines both providers to create the final journaled file system.
For example:
* You wish to journal your [.filename]#/usr# file system, stored in [.filename]#/dev/ad0s1f# (which already contains data).
* You reserved some free disk space in a partition in [.filename]#/dev/ad0s1g#.
* Using `gjournal`, a new [.filename]#/dev/ad0s1f.journal# device is created where [.filename]#/dev/ad0s1f# is the data provider, and [.filename]#/dev/ad0s1g# is the journal provider. This new device is then used for all subsequent file operations.
The amount of disk space you need to reserve for the journal provider depends on the usage load of the file system and not on the size of the data provider.
For example on a typical office desktop, a 1 GB journal provider for the [.filename]#/usr# file system will suffice, while a machine that deals with heavy disk I/O (e.g. video editing) may need more.
A kernel panic will occur if the journal space is exhausted before it has a chance to be committed.
[NOTE]
====
The journal sizes suggested here are highly unlikely to cause problems in typical desktop use (such as web browsing, word processing and playback of media files).
If your workload includes intense disk activity, use the following rule for maximum reliability: your RAM size should fit within 30% of the journal provider's space.
For example, if your system has 1 GB RAM, create an approximately 3.3 GB journal provider.
(Multiply your RAM size by 3.3 to obtain the size of the journal.)
====
For more information about journaling, please read the manual page of man:gjournal[8].
[[reserve-space]]
== Steps During the Installation of FreeBSD
=== Reserving Space for Journaling
A typical desktop machine usually has one hard disk that stores both the OS and user data.
Arguably, the default partitioning scheme selected by sysinstall is more or less suitable: A desktop machine does not need a large [.filename]#/var# partition, while [.filename]#/usr# is allocated the bulk of the disk space, since user data and a lot of packages are installed into its subdirectories.
The default partitioning (the one obtained by pressing kbd:[A] at the FreeBSD partition editor, called Disklabel) does not leave any unallocated space.
Each partition that will be journaled requires another partition for the journal.
Since the [.filename]#/usr# partition is the largest, it makes sense to shrink this partition slightly, to obtain the space required for journaling.
In our example, an 80 GB disk is used.
The following screenshot shows the default partitions created by Disklabel during installation:
image::disklabel1.png[]
If this is more or less what you need, it is very easy to adjust for journaling.
Simply use the arrow keys to move the highlight to the [.filename]#/usr# partition and press kbd:[D] to delete it.
Now, move the highlight to the disk name at the top of the screen and press kbd:[C] to create a new partition for [.filename]#/usr#.
This new partition should be smaller by 1 GB (if you intend to journal [.filename]#/usr# only), or 2 GB (if you intend to journal both [.filename]#/usr# and [.filename]#/var#).
From the pop-up that appears, opt to create a file system, and type [.filename]#/usr# as the mount point.
[NOTE]
====
Should you journal the [.filename]#/var# partition? Normally, journaling makes sense on quite large partitions.
You may decide not to journal [.filename]#/var#, although doing so on a typical desktop will cause no harm.
If the file system is lightly used (quite probable for a desktop) you may wish to allocate less disk space for its journal.
In our example, we journal both [.filename]#/usr# and [.filename]#/var#.
You may of course adjust the procedure to your own needs.
====
To keep things as simple as possible, we are going to use sysinstall to create the partitions required for journaling.
However, during installation, sysinstall insists on asking for a mount point for each partition you create.
At this point, you do not have any mount points for the partitions that will hold the journals, and in reality you __do not even need them__.
These are not partitions that we are ever going to mount somewhere.
To avoid these problems with sysinstall, we are going to create the journal partitions as swap space.
Swap is never mounted, and sysinstall has no problem creating as many swap partitions as needed.
After the first reboot, [.filename]#/etc/fstab# will have to be edited, and the extra swap space entries removed.
To create the swap, again use the arrow keys to move the highlight to the top of Disklabel screen, so that the disk name itself is highlighted.
Then press kbd:[N], enter the desired size (_1024M_), and select "swap space" from the pop-up menu that appears.
Repeat for every journal you wish to create.
In our example, we create two partitions to provide for the journals of [.filename]#/usr# and [.filename]#/var#.
The final result is shown in the following screenshot:
image::disklabel2.png[]
When you have finished creating the partitions, we suggest you write down the partition names and mount points, so you can easily refer to this information during the configuration phase.
This will help avoid mistakes that may damage your installation.
The following table shows our notes for the sample configuration:
.Partitions and Journals
[cols="1,1,1", options="header"]
|===
| Partition
| Mount Point
| Journal
|ad0s1d
|/var
|ad0s1h
|ad0s1f
|/usr
|ad0s1g
|===
Continue the installation as you would normally do.
We would however suggest you postpone installation of third party software (packages) until you have completely set up journaling.
[[first-boot]]
=== Booting for the first time
Your system will come up normally, but you will need to edit [.filename]#/etc/fstab# and remove the extra swap partitions you created for the journals.
Normally, the swap partition you will actually use is the one with the "b" suffix (i.e. ad0s1b in our example).
Remove all other swap space entries and reboot so that FreeBSD will stop using them.
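For reference, the relevant part of [.filename]#/etc/fstab# might look similar to the sketch below (the exact device names depend on your disk layout); the entry for the real swap partition, [.filename]#ad0s1b#, stays, while the lines for the journal partitions ([.filename]#ad0s1g# and [.filename]#ad0s1h# in our example) are the ones to remove:
[.programlisting]
....
# Device        Mountpoint      FStype  Options Dump    Pass#
/dev/ad0s1b     none            swap    sw      0       0
/dev/ad0s1g     none            swap    sw      0       0
/dev/ad0s1h     none            swap    sw      0       0
....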
When the system comes up again, we will be ready to configure journaling.
[[configure-journal]]
== Setting Up Journaling
[[running-gjournal]]
=== Executing `gjournal`
Having prepared all the required partitions, it is quite easy to configure journaling.
We will need to switch to single user mode, so login as `root` and type:
[source,shell]
....
# shutdown now
....
Press kbd:[Enter] to get the default shell.
We will need to unmount the partitions that will be journaled, in our example [.filename]#/usr# and [.filename]#/var#:
[source,shell]
....
# umount /usr /var
....
Load the module required for journaling:
[source,shell]
....
# gjournal load
....
Now, use your notes to determine which partition will be used for each journal.
In our example, [.filename]#/usr# is [.filename]#ad0s1f# and its journal will be [.filename]#ad0s1g#, while [.filename]#/var# is [.filename]#ad0s1d# and will be journaled to [.filename]#ad0s1h#.
The following commands are required:
[source,shell]
....
# gjournal label ad0s1f ad0s1g
GEOM_JOURNAL: Journal 2948326772: ad0s1f contains data.
GEOM_JOURNAL: Journal 2948326772: ad0s1g contains journal.
# gjournal label ad0s1d ad0s1h
GEOM_JOURNAL: Journal 3193218002: ad0s1d contains data.
GEOM_JOURNAL: Journal 3193218002: ad0s1h contains journal.
....
[NOTE]
====
If the last sector of either partition is used, `gjournal` will return an error.
You will have to run the command using the `-f` flag to force an overwrite, i.e.:
[source,shell]
....
# gjournal label -f ad0s1d ad0s1h
....
Since this is a new installation, it is highly unlikely that anything will be actually overwritten.
====
At this point, two new devices are created, namely [.filename]#ad0s1d.journal# and [.filename]#ad0s1f.journal#.
These represent the [.filename]#/var# and [.filename]#/usr# partitions we have to mount.
However, before mounting them, we must set the journal flag and clear the Soft Updates flag:
[source,shell]
....
# tunefs -J enable -n disable ad0s1d.journal
tunefs: gjournal set
tunefs: soft updates cleared
# tunefs -J enable -n disable ad0s1f.journal
tunefs: gjournal set
tunefs: soft updates cleared
....
Now, mount the new devices manually at their respective places (note that we can now use the `async` mount option):
[source,shell]
....
# mount -o async /dev/ad0s1d.journal /var
# mount -o async /dev/ad0s1f.journal /usr
....
Edit [.filename]#/etc/fstab# and update the entries for [.filename]#/usr# and [.filename]#/var#:
[.programlisting]
....
/dev/ad0s1f.journal /usr ufs rw,async 2 2
/dev/ad0s1d.journal /var ufs rw,async 2 2
....
[WARNING]
====
Make sure the above entries are correct, or you will have trouble starting up normally after you reboot!
====
Finally, edit [.filename]#/boot/loader.conf# and add the following line so the man:gjournal[8] module is loaded at every boot:
[.programlisting]
....
geom_journal_load="YES"
....
Congratulations! Your system is now set for journaling.
You can either type `exit` to return to multi-user mode, or reboot to test your configuration (recommended).
During the boot you will see messages like the following:
[source,shell]
....
ad0: 76293MB XEC XE800JD-00HBC0 08.02D08 at ata0-master SATA150
GEOM_JOURNAL: Journal 2948326772: ad0s1g contains journal.
GEOM_JOURNAL: Journal 3193218002: ad0s1h contains journal.
GEOM_JOURNAL: Journal 3193218002: ad0s1d contains data.
GEOM_JOURNAL: Journal ad0s1d clean.
GEOM_JOURNAL: Journal 2948326772: ad0s1f contains data.
GEOM_JOURNAL: Journal ad0s1f clean.
....
After an unclean shutdown, the messages will vary slightly, e.g.:
[source,shell]
....
GEOM_JOURNAL: Journal ad0s1d consistent.
....
This usually means that man:gjournal[8] used the information in the journal provider to return the file system to a consistent state.
[[gjournal-new]]
=== Journaling Newly Created Partitions
While the above procedure is necessary for journaling partitions that already contain data, journaling an empty partition is somewhat easier, since both the data and the journal provider can be stored in the same partition.
For example, assume a new disk was installed, and a new partition [.filename]#/dev/ad1s1d# was created.
Creating the journal would be as simple as:
[source,shell]
....
# gjournal label ad1s1d
....
The journal size will be 1 GB by default.
You may adjust it by using the `-s` option.
The value can be given in bytes, or suffixed with `K`, `M` or `G` to denote kilobytes, megabytes or gigabytes respectively.
Note that `gjournal` will not allow you to create unsuitably small journal sizes.
For example, to create a 2 GB journal, you could use the following command:
[source,shell]
....
# gjournal label -s 2G ad1s1d
....
You can then create a file system on your new partition, and enable journaling using the `-J` option:
[source,shell]
....
# newfs -J /dev/ad1s1d.journal
....
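If the new journaled file system should also be mounted at boot, an [.filename]#/etc/fstab# entry similar to the ones shown earlier can be added; the [.filename]#/data# mount point below is only an example:
[.programlisting]
....
/dev/ad1s1d.journal /data ufs rw,async 2 2
....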
[[configure-kernel]]
=== Building Journaling into Your Custom Kernel
If you do not wish to load `geom_journal` as a module, you can build its functions right into your kernel.
Edit your custom kernel configuration file, and make sure it includes these two lines:
[.programlisting]
....
options UFS_GJOURNAL # Note: This is already in GENERIC
options GEOM_JOURNAL # You will have to add this one
....
Rebuild and reinstall your kernel following the relevant link:{handbook}#kernelconfig[instructions in the FreeBSD Handbook.]
Do not forget to remove the relevant "load" entry from [.filename]#/boot/loader.conf# if you have previously used it.
[[troubleshooting-gjournal]]
== Troubleshooting Journaling
This section covers frequently asked questions regarding problems related to journaling.
=== I am getting kernel panics during periods of high disk activity. How is this related to journaling?
The journal probably fills up before it has a chance to get committed (flushed) to disk.
Keep in mind the size of the journal depends on the usage load, and not the size of the data provider.
If your disk activity is high, you need a larger partition for the journal.
See the note in the <<understanding-journaling>> section.
=== I made some mistake during configuration, and I cannot boot normally now. Can this be fixed some way?
You either forgot (or misspelled) the entry in [.filename]#/boot/loader.conf#, or there are errors in your [.filename]#/etc/fstab# file.
These are usually easy to fix.
Press kbd:[Enter] to get to the default single user shell.
Then locate the root of the problem:
[source,shell]
....
# cat /boot/loader.conf
....
If the `geom_journal_load` entry is missing or misspelled, the journaled devices are never created.
Load the module manually, mount all partitions, and continue with multi-user boot:
[source,shell]
....
# gjournal load
GEOM_JOURNAL: Journal 2948326772: ad0s1g contains journal.
GEOM_JOURNAL: Journal 3193218002: ad0s1h contains journal.
GEOM_JOURNAL: Journal 3193218002: ad0s1d contains data.
GEOM_JOURNAL: Journal ad0s1d clean.
GEOM_JOURNAL: Journal 2948326772: ad0s1f contains data.
GEOM_JOURNAL: Journal ad0s1f clean.
# mount -a
# exit
(boot continues)
....
If, on the other hand, this entry is correct, have a look at [.filename]#/etc/fstab#.
You will probably find a misspelled or missing entry.
In this case, mount all remaining partitions by hand and continue with the multi-user boot.
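With the example devices used throughout this article, that would look something like the following:
[source,shell]
....
# mount /dev/ad0s1d.journal /var
# mount /dev/ad0s1f.journal /usr
# exit
(boot continues)
....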
=== Can I remove journaling and return to my standard file system with Soft Updates?
Sure.
Use the following procedure, which reverses the changes.
The partitions you created for the journal providers can then be used for other purposes, if you so wish.
Login as `root` and switch to single user mode:
[source,shell]
....
# shutdown now
....
Unmount the journaled partitions:
[source,shell]
....
# umount /usr /var
....
Synchronize the journals:
[source,shell]
....
# gjournal sync
....
Stop the journaling providers:
[source,shell]
....
# gjournal stop ad0s1d.journal
# gjournal stop ad0s1f.journal
....
Clear journaling metadata from all the devices used:
[source,shell]
....
# gjournal clear ad0s1d
# gjournal clear ad0s1f
# gjournal clear ad0s1g
# gjournal clear ad0s1h
....
Clear the file system journaling flag, and restore the Soft Updates flag:
[source,shell]
....
# tunefs -J disable -n enable ad0s1d
tunefs: gjournal cleared
tunefs: soft updates set
# tunefs -J disable -n enable ad0s1f
tunefs: gjournal cleared
tunefs: soft updates set
....
Remount the old devices by hand:
[source,shell]
....
# mount -o rw /dev/ad0s1d /var
# mount -o rw /dev/ad0s1f /usr
....
Edit [.filename]#/etc/fstab# and restore it to its original state:
[.programlisting]
....
/dev/ad0s1f /usr ufs rw 2 2
/dev/ad0s1d /var ufs rw 2 2
....
Finally, edit [.filename]#/boot/loader.conf#, remove the entry that loads the `geom_journal` module and reboot.
[[further-reading]]
== Further Reading
Journaling is a fairly new feature of FreeBSD, and as such, it is not very well documented yet.
You may however find the following additional references useful:
* A link:{handbook}#geom-gjournal[new section on journaling] is now part of the FreeBSD Handbook.
* https://lists.freebsd.org/pipermail/freebsd-current/2006-June/064043.html[This post] in {freebsd-current} by man:gjournal[8]'s developer, `{pjd}`.
* https://lists.freebsd.org/pipermail/freebsd-questions/2008-April/173501.html[This post] in {freebsd-questions} by `{ivoras}`.
* The manual pages of man:gjournal[8] and man:geom[8].
diff --git a/documentation/content/en/articles/hubs/_index.adoc b/documentation/content/en/articles/hubs/_index.adoc
index 07bcdc840b..3819b76d71 100644
--- a/documentation/content/en/articles/hubs/_index.adoc
+++ b/documentation/content/en/articles/hubs/_index.adoc
@@ -1,379 +1,379 @@
---
title: Mirroring FreeBSD
authors:
- author: Jun Kuriyama
email: kuriyama@FreeBSD.org
- author: Valentino Vaschetto
email: logo@FreeBSD.org
- author: Daniel Lang
email: dl@leo.org
- author: Ken Smith
email: kensmith@FreeBSD.org
-releaseinfo: "$FreeBSD$"
+description: How to mirror FreeBSD
trademarks: ["freebsd", "general"]
---
= Mirroring FreeBSD
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
include::shared/en/mailing-lists.adoc[]
include::shared/en/urls.adoc[]
include::shared/releases.adoc[]
[.abstract-title]
Abstract
An in-progress article on how to mirror FreeBSD, aimed at hub administrators.
'''
toc::[]
[NOTE]
====
We are not accepting new mirrors at this time.
====
[[mirror-contact]]
== Contact Information
The Mirror System Coordinators can be reached through email at mailto:mirror-admin@FreeBSD.org[mirror-admin@FreeBSD.org].
There is also a {freebsd-hubs}.
[[mirror-requirements]]
== Requirements for FreeBSD Mirrors
[[mirror-diskspace]]
=== Disk Space
Disk space is one of the most important requirements.
Depending on the set of releases, architectures, and degree of completeness you want to mirror, a huge amount of disk space may be consumed.
Also keep in mind that _official_ mirrors are probably required to be complete.
The web pages should always be mirrored completely.
Also note that the numbers stated here reflect the current state (as of {rel120-current}-RELEASE/{rel113-current}-RELEASE).
Further development and releases will only increase the required amount.
Also make sure to keep some extra space (about 10-20%) in reserve, just to be safe.
Here are some approximate figures:
* Full FTP Distribution: 1.4 TB
* CTM deltas: 10 GB
* Web pages: 1 GB
The current disk usage of FTP Distribution can be found at link:ftp://ftp.FreeBSD.org/pub/FreeBSD/dir.sizes[ftp://ftp.FreeBSD.org/pub/FreeBSD/dir.sizes].
[[mirror-bandwidth]]
=== Network Connection/Bandwidth
Of course, you need to be connected to the Internet.
The required bandwidth depends on your intended use of the mirror.
If you just want to mirror some parts of FreeBSD for local use at your site/intranet, the demand may be much smaller than if you want to make the files publicly available.
If you intend to become an official mirror, the bandwidth required will be even higher.
We can only give rough estimates here:
* Local site, no public access: basically no minimum, but < 2 Mbps could make syncing too slow.
* Unofficial public site: 34 Mbps is probably a good start.
* Official site: > 100 Mbps is recommended, and your host should be connected as close as possible to your border router.
[[mirror-system]]
=== System Requirements, CPU, RAM
This depends in part on the expected number of clients, which is determined by the server's policy.
It is also affected by the types of services you want to offer.
Plain FTP or HTTP services may not require a huge amount of resources.
Watch out if you provide rsync.
This can have a huge impact on CPU and memory requirements as it is considered a memory hog.
The following are just examples to give you a very rough hint.
For a moderately visited site that offers rsync, you might consider a current CPU with around 800 MHz - 1 GHz, and at least 512 MB RAM.
This is probably the minimum you want for an _official_ site.
For a frequently used site you definitely need more RAM (consider 2 GB as a good start) and possibly more CPU, which could also mean that you need to go for an SMP system.
You also want to consider a fast disk subsystem.
Operations on the SVN repository require a fast disk subsystem (RAID is highly advised).
A SCSI controller that has a cache of its own can also speed up things since most of these services incur a large number of small modifications to the disk.
[[mirror-services]]
=== Services to Offer
Every mirror site is required to have a set of core services available.
In addition to these required services, there are a number of optional services that server administrators may choose to offer.
This section explains which services you can provide and how to go about implementing them.
[[mirror-serv-ftp]]
==== FTP (required for FTP Fileset)
This is one of the most basic services, and it is required for each mirror offering public FTP distributions.
FTP access must be anonymous, and no upload/download ratios are allowed (a ridiculous thing anyway).
Upload capability is not required (and _must_ never be allowed for the FreeBSD file space).
Also the FreeBSD archive should be available under the path [.filename]#/pub/FreeBSD#.
There is a lot of software available which can be set up to allow anonymous FTP (in alphabetical order).
* `/usr/libexec/ftpd`: FreeBSD's own ftpd can be used. Be sure to read man:ftpd[8].
* package:ftp/ncftpd[]: A commercial package, free for educational use.
* package:ftp/oftpd[]: An ftpd designed with security as a main focus.
* package:ftp/proftpd[]: A modular and very flexible ftpd.
* package:ftp/pure-ftpd[]: Another ftpd developed with security in mind.
* package:ftp/twoftpd[]: As above.
* package:ftp/vsftpd[]: The "very secure" ftpd.
FreeBSD's `ftpd`, `proftpd` and maybe `ncftpd` are among the most commonly used FTPds.
The others do not have a large userbase among mirror sites.
One thing to consider is that you may need flexibility in limiting how many simultaneous connections are allowed, thus limiting how much network bandwidth and system resources are consumed.
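As an illustration only, assuming package:ftp/proftpd[] were used (other daemons have their own mechanisms), a couple of lines in its configuration file can cap the number of simultaneous sessions:
[.programlisting]
....
MaxInstances    50
MaxClients      50 "Sorry, the maximum number of users has been reached"
....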
[[mirror-serv-rsync]]
==== Rsync (optional for FTP Fileset)
Rsync is often offered for access to the contents of the FTP area of FreeBSD, so other mirror sites can use your system as their source.
The protocol is different from FTP in many ways.
It is much more bandwidth friendly, as only differences between files are transferred instead of whole files when they change.
Rsync does require a significant amount of memory for each instance.
The size depends on the size of the synced module in terms of the number of directories and files.
Rsync can use `rsh` and `ssh` (now default) as a transport, or use its own protocol for stand-alone access (this is the preferred method for public rsync servers).
Authentication, connection limits, and other restrictions may be applied.
There is just one software package available:
* package:net/rsync[]
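A minimal [.filename]#rsyncd.conf# sketch exporting the FTP area as a module might look like the following; the connection limit and paths are only examples and should be adapted to your site:
[.programlisting]
....
uid = nobody
gid = nobody
use chroot = yes
max connections = 5

[FreeBSD]
    path = /pub/FreeBSD
    comment = FreeBSD mirror
    read only = yes
....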
[[mirror-serv-http]]
==== HTTP (required for Web Pages, Optional for FTP Fileset)
If you want to offer the FreeBSD web pages, you will need to install a web server.
You may optionally offer the FTP fileset via HTTP.
The choice of web server software is left up to the mirror administrator.
Some of the most popular choices are:
* package:www/apache24[]: Apache is still one of the most widely deployed web servers on the Internet. It is used extensively by the FreeBSD Project.
* package:www/boa[]: Boa is a single-tasking HTTP server. Unlike traditional web servers, it does not fork for each incoming connection, nor does it fork many copies of itself to handle multiple connections. Even so, it should provide very good performance for purely static content.
* package:www/cherokee[]: Cherokee is a very fast, flexible and easy to configure web server. It supports the widespread technologies nowadays: FastCGI, SCGI, PHP, CGI, SSL/TLS encrypted connections, vhosts, users authentication, on the fly encoding and load balancing. It also generates Apache compatible log files.
* package:www/lighttpd[]: lighttpd is a secure, fast, compliant and very flexible web server which has been optimized for high-performance environments. It has a very low memory footprint compared to other web servers and keeps CPU load low.
* package:www/nginx[]: nginx is a high performance edge web server with a low memory footprint and key features to build a modern and efficient web infrastructure. Features include an HTTP server, HTTP and mail reverse proxy, caching, load balancing, compression, request throttling, connection multiplexing and reuse, SSL offload and HTTP media streaming.
* package:www/thttpd[]: If you are going to be serving a large amount of static content you may find that using an application such as thttpd is more efficient than others. It is also optimized for excellent performance on FreeBSD.
[[mirror-howto]]
== How to Mirror FreeBSD
Ok, now you know the requirements and how to offer the services, but not how to get the data itself.
:-) This section explains how to actually mirror the various parts of FreeBSD, what tools to use, and where to mirror from.
[[mirror-ftp-rsync]]
=== Mirroring the FTP Site
The FTP area is the largest amount of data that needs to be mirrored.
It includes the _distribution sets_ required for network installation, the _branches_ which are actually snapshots of checked-out source trees, the _ISO Images_ to write CD-ROMs with the installation distribution, a live file system, and a snapshot of the ports tree.
All of course for various FreeBSD versions, and various architectures.
The best way to mirror the FTP area is rsync.
You can install the port package:net/rsync[] and then use rsync to sync with your upstream host.
rsync is already mentioned in <<mirror-serv-rsync>>.
Since rsync access is not required, your preferred upstream site may not allow it.
You may need to hunt around a little bit to find a site that allows rsync access.
[NOTE]
====
Since the number of rsync clients will have a significant impact on the server machine, most admins impose limitations on their server.
For a mirror, you should ask the site maintainer you are syncing from about their policy, and perhaps ask for an exception for your host (since you are a mirror).
====
A command line to mirror FreeBSD might look like:
[source,shell]
....
% rsync -vaHz --delete rsync://ftp4.de.FreeBSD.org/FreeBSD/ /pub/FreeBSD/
....
Consult the documentation for rsync, which is also available at http://rsync.samba.org/[http://rsync.samba.org/], about the various options to be used with rsync.
If you sync the whole module (rather than its subdirectories), be aware that the module directory (here "FreeBSD") will not be created, so you cannot omit the target directory. You might also want to set up a script framework that calls such a command via man:cron[8].
[[mirror-www]]
=== Mirroring the WWW Pages
The FreeBSD website should only be mirrored via rsync.
A command line to mirror the FreeBSD web site might look like:
[source,shell]
....
% rsync -vaHz --delete rsync://bit0.us-west.freebsd.org/FreeBSD-www-data/ /usr/local/www/
....
[[mirror-pkgs]]
=== Mirroring Packages
Due to the very high bandwidth, storage and administration requirements, the FreeBSD Project has decided not to allow public mirrors of packages.
For sites with lots of machines, it might be advantageous to run a caching HTTP proxy for the man:pkg[8] process.
Alternatively, specific packages and their dependencies can be fetched by running something like the following:
[source,shell]
....
% pkg fetch -d -o /usr/local/mirror vim
....
Once those packages have been fetched, the repository metadata must be generated by running:
[source,shell]
....
% pkg repo /usr/local/mirror
....
Once the packages have been fetched and the metadata for the repository has been generated, serve the packages up to the client machines via HTTP.
For additional information see the man pages for man:pkg[8], specifically the man:pkg-repo[8] page.
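Client machines could then be pointed at such a local repository with a file in [.filename]#/usr/local/etc/pkg/repos/# similar to the following sketch; the host name and path are assumptions for this example:
[.programlisting]
....
FreeBSD: { enabled: no }

mymirror: {
    url: "http://pkgmirror.example.org/mirror",
    enabled: yes
}
....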
[[mirror-how-often]]
=== How Often Should I Mirror?
Every mirror should be updated at least once per day.
Certainly a script with locking (to prevent multiple runs from happening at the same time) will be needed, run from man:cron[8].
Since nearly every admin does this in their own way, specific instructions cannot be provided.
It could work something like this (a minimal example script follows the procedure):
[.procedure]
====
. Put the command to run your mirroring application in a script. Use of a plain `/bin/sh` script is recommended.
. Add some output redirections so diagnostic messages are logged to a file.
. Test if your script works. Check the logs.
. Use man:crontab[1] to add the script to the appropriate user's man:crontab[5]. This should be a different user from the one your FTP daemon runs as, so that if file permissions inside your FTP area are not world-readable, those files cannot be accessed by anonymous FTP. This is used to "stage" releases, making sure all of the official mirror sites have all of the necessary release files on release day.
====
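The following is only a rough sketch of such a wrapper script, using man:lockf[1] for the locking; the file names used here are arbitrary examples:
[source,shell]
....
#!/bin/sh
# Example wrapper around the rsync command shown earlier.
LOCK=/var/run/mirror-freebsd.lock
LOG=/var/log/mirror-freebsd.log

# lockf -t 0 exits immediately if a previous run still holds the lock.
lockf -t 0 ${LOCK} rsync -vaHz --delete \
    rsync://ftp4.de.FreeBSD.org/FreeBSD/ /pub/FreeBSD/ >> ${LOG} 2>&1
....
A matching man:crontab[5] entry for the dedicated mirror user could then be as simple as `0 3 * * * /usr/local/bin/mirror-freebsd.sh`.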
Here are some recommended schedules:
* FTP fileset: daily
* WWW pages: daily
[[mirror-where]]
== Where to Mirror From
This is an important issue, so this section will spend some effort explaining the background.
We will say this several times: under no circumstances should you mirror from `ftp.FreeBSD.org`.
[[mirror-where-organization]]
=== A few Words About the Organization
Mirrors are organized by country.
All official mirrors have a DNS entry of the form `ftpN.CC.FreeBSD.org`.
_CC_ (i.e., country code) is the _top level domain_ (TLD) of the country where this mirror is located.
_N_ is a number, indicating that the host is the _Nth_ mirror in that country.
(The same applies to `wwwN.CC.FreeBSD.org`, etc.) There are mirrors with no _CC_ part.
These are the mirror sites that are very well connected and allow a large number of concurrent users.
`ftp.FreeBSD.org` is actually two machines, one currently located in Denmark and the other in the United States.
It is _NOT_ a master site and should never be used to mirror from.
Lots of online documentation leads "interactive" users to `ftp.FreeBSD.org`, so automated mirroring systems should find a different machine to mirror from.
Additionally there exists a hierarchy of mirrors, which is described in terms of __tiers__.
The master sites are not assigned a tier themselves, but can be described as __Tier-0__.
Mirrors that mirror from these sites can be considered __Tier-1__, mirrors of __Tier-1__ mirrors are __Tier-2__, etc.
Official sites are encouraged to be of a low __tier__, but the lower the tier, the higher the requirements as described in <<mirror-requirements>>.
Also, access to low-tier mirrors may be restricted, and access to master sites is definitely restricted.
The __tier__-hierarchy is not reflected by DNS and generally not documented anywhere except for the master sites.
However, official mirrors with low numbers like 1-4 are usually _Tier-1_ (this is just a rough hint, and there is no rule).
[[mirror-where-where]]
=== Ok, but Where Should I get the Stuff Now?
Under no circumstances should you mirror from `ftp.FreeBSD.org`.
The short answer is: from the site that is closest to you in Internet terms, or gives you the fastest access.
[[mirror-where-simple]]
==== I Just Want to Mirror from Somewhere!
If you have no special intentions or requirements, the statement in <<mirror-where-where>> applies.
This means:
[.procedure]
====
. Check for those which provide fastest access (number of hops, round-trip-times) and offer the services you intend to use (like rsync).
. Contact the administrators of your chosen site stating your request, and asking about their terms and policies.
. Set up your mirror as described above.
====
[[mirror-where-official]]
==== I am an Official Mirror, What is the Right Site for Me?
In general the description in <<mirror-where-simple>> still applies.
Of course you may want to put some weight on the fact that your upstream should be of a low tier.
There are some other considerations about _official_ mirrors that are described in <<mirror-official>>.
[[mirror-where-master]]
==== I Want to Access the Master Sites!
If you have good reasons and good prerequisites, you may want, and may be granted, access to one of the master sites.
Access to these sites is generally restricted, and there are special policies for access.
If you are already an _official_ mirror, this certainly helps you get access.
In any other case make sure your country really needs another mirror.
If it already has three or more, ask the "zone administrator" (mailto:hostmaster@CC.FreeBSD.org[hostmaster@CC.FreeBSD.org]) or {freebsd-hubs} first.
Whoever helped you become an _official_ mirror should have helped you gain access to an appropriate upstream host, either one of the master sites or a suitable Tier-1 site.
If not, you can send email to mailto:mirror-admin@FreeBSD.org[mirror-admin@FreeBSD.org] to request help with that.
There is one master site for the FTP fileset.
[[mirror-where-master-ftp]]
===== ftp-master.FreeBSD.org
This is the master site for the FTP fileset.
`ftp-master.FreeBSD.org` provides rsync access, in addition to FTP.
Refer to <<mirror-ftp-rsync>>.
Mirrors are also encouraged to allow rsync access for the FTP contents, since they are __Tier-1__-mirrors.
[[mirror-official]]
== Official Mirrors
Official mirrors are mirrors that
* a) have a `FreeBSD.org` DNS entry (usually a CNAME).
* b) are listed as an official mirror in the FreeBSD documentation (like the Handbook).
So much for distinguishing official mirrors. Official mirrors are not necessarily __Tier-1__ mirrors.
However, you probably will not find a __Tier-1__ mirror that is not also official.
[[mirror-official-requirements]]
=== Special Requirements for Official (tier-1) Mirrors
It is not so easy to state requirements for all official mirrors, since the project is fairly tolerant here.
It is easier to say what _official tier-1 mirrors_ are required to do.
All other official mirrors can consider this a strong __should__.
Tier-1 mirrors are required to:
* carry the complete fileset
* allow access to other mirror sites
* provide FTP and rsync access
Furthermore, admins should be subscribed to the {freebsd-hubs}.
See link:{handbook}#eresources-mail[this link] for details on how to subscribe.
[IMPORTANT]
====
It is _very_ important for a hub administrator, especially Tier-1 hub admins, to check the https://www.FreeBSD.org/releng/[release schedule] for the next FreeBSD release.
This is important because it will tell you when the next release is scheduled to come out, giving you time to prepare for the big spike of traffic which follows it.
It is also important that hub administrators try to keep their mirrors as up-to-date as possible (again, even more crucial for Tier-1 mirrors).
If Mirror1 does not update for a while, lower tier mirrors will begin to mirror old data from Mirror1 and thus begins a downward spiral... Keep your mirrors up to date!
====
[[mirror-official-become]]
=== How to Become Official Then?
We are not accepting any new mirrors at this time.
[[mirror-statpages]]
== Some Statistics from Mirror Sites
Here are links to the stat pages of your favorite mirrors (aka the only ones who feel like providing stats).
[[mirror-statpagesftp]]
=== FTP Site Statistics
* ftp.is.FreeBSD.org - mailto:hostmaster@is.FreeBSD.org[hostmaster@is.FreeBSD.org] - http://www.rhnet.is/status/draupnir/draupnir.html[ (Bandwidth)] http://www.rhnet.is/status/ftp/ftp-notendur.html[(FTP processes)] http://www.rhnet.is/status/ftp/http-notendur.html[(HTTP processes)]
* ftp2.ru.FreeBSD.org - mailto:mirror@macomnet.ru[mirror@macomnet.ru] - http://mirror.macomnet.net/mrtg/mirror.macomnet.net_195.128.64.25.html[(Bandwidth)] http://mirror.macomnet.net/mrtg/mirror.macomnet.net_proc.html[(HTTP and FTP users)]
diff --git a/documentation/content/en/articles/ipsec-must/_index.adoc b/documentation/content/en/articles/ipsec-must/_index.adoc
index 04bbab552c..f514789fd4 100644
--- a/documentation/content/en/articles/ipsec-must/_index.adoc
+++ b/documentation/content/en/articles/ipsec-must/_index.adoc
@@ -1,274 +1,274 @@
---
title: Independent Verification of IPsec Functionality in FreeBSD
authors:
- author: David Honig
email: honig@sprynet.com
-releaseinfo: "$FreeBSD$"
+description: Independent Verification of IPsec Functionality in FreeBSD
trademarks: ["freebsd", "opengroup", "general"]
---
= Independent Verification of IPsec Functionality in FreeBSD
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
include::shared/en/urls.adoc[]
[.abstract-title]
Abstract
You installed IPsec and it seems to be working.
How do you know? I describe a method for experimentally verifying that IPsec is working.
'''
toc::[]
[[problem]]
== The Problem
First, let's assume you have <<ipsec-install,installed IPsec>>.
How do you know it is <<caveat,working>>? Sure, your connection will not work if it is misconfigured, and it will work when you finally get it right.
man:netstat[1] will list it. But can you independently confirm it?
[[solution]]
== The Solution
First, some crypto-relevant info theory:
. Encrypted data is uniformly distributed, i.e., has maximal entropy per symbol;
. Raw, uncompressed data is typically redundant, i.e., has sub-maximal entropy.
Suppose you could measure the entropy of the data to- and from- your network interface.
Then you could see the difference between unencrypted data and encrypted data.
This would be true even if some of the data in "encrypted mode" was not encrypted---as the outermost IP header must be if the packet is to be routable.
[[MUST]]
=== MUST
Ueli Maurer's "Universal Statistical Test for Random Bit Generators" (https://web.archive.org/web/20011115002319/http://www.geocities.com/SiliconValley/Code/4704/universal.pdf[MUST]) quickly measures the entropy of a sample.
It uses a compression-like algorithm.
See <<code>> for a variant which measures successive (~quarter megabyte) chunks of a file.
[[tcpdump]]
=== Tcpdump
We also need a way to capture the raw network data.
A program called man:tcpdump[1] lets you do this, if you have enabled the _Berkeley Packet Filter_ interface in your <<kernel>>.
The command:
[source,shell]
....
tcpdump -c 4000 -s 10000 -w dumpfile.bin
....
will capture 4000 raw packets to _dumpfile.bin_.
Up to 10,000 bytes per packet will be captured in this example.
[[experiment]]
== The Experiment
Here is the experiment:
[.procedure]
====
. Open a window to an IPsec host and another window to an insecure host.
. Now start <<tcpdump>>.
. In the "secure" window, run the UNIX(R) command man:yes[1], which will stream the `y` character. After a while, stop this. Switch to the insecure window, and repeat. After a while, stop.
. Now run <<code>> on the captured packets. You should see something like the following. The important thing to note is that the secure connection has 93% (6.7) of the expected value (7.18), and the "normal" connection has 29% (2.1) of the expected value.
+
[source,shell]
....
% tcpdump -c 4000 -s 10000 -w ipsecdemo.bin
% uliscan ipsecdemo.bin
Uliscan 21 Dec 98
L=8 256 258560
Measuring file ipsecdemo.bin
Init done
Expected value for L=8 is 7.1836656
6.9396 --------------------------------------------------------
6.6177 -----------------------------------------------------
6.4100 ---------------------------------------------------
2.1101 -----------------
2.0838 -----------------
2.0983 -----------------
....
====
[[caveat]]
== Caveat
This experiment shows that IPsec _does_ seem to be distributing the payload data __uniformly__, as encryption should.
However, the experiment described here _cannot_ detect many possible flaws in a system (none of which do I have any evidence for).
These include poor key generation or exchange, data or keys being visible to others, use of weak algorithms, kernel subversion, etc.
Study the source; know the code.
[[IPsec]]
== IPsec---Definition
Internet Protocol security extensions to IPv4; required for IPv6.
A protocol for negotiating encryption and authentication at the IP (host-to-host) level.
SSL secures only one application socket; SSH secures only a login; PGP secures only a specified file or message.
IPsec encrypts everything between two hosts.
[[ipsec-install]]
== Installing IPsec
Most modern versions of FreeBSD have IPsec support in their base system.
So you will need to include the `IPSEC` option in your kernel config and, after rebuilding and reinstalling the kernel, configure IPsec connections using the man:setkey[8] command.
A comprehensive guide on running IPsec on FreeBSD is provided in the link:{handbook}#ipsec[FreeBSD Handbook].
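As a rough sketch of what the kernel configuration lines look like on recent FreeBSD versions (check the Handbook for the exact options of your release):
[.programlisting]
....
options   IPSEC        # IP security
device    crypto
....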
[[kernel]]
== src/sys/i386/conf/KERNELNAME
The following device declaration needs to be present in the kernel config file in order to capture network data with man:tcpdump[1].
Be sure to run man:config[8] after adding this, and rebuild and reinstall.
[.programlisting]
....
device bpf
....
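For the classic kernel build procedure this article assumes (newer releases would instead use `make buildkernel KERNCONF=KERNELNAME` from [.filename]#/usr/src#), the steps look roughly like this; man:config[8] prints the exact build directory to change into:
[source,shell]
....
# cd /usr/src/sys/i386/conf
# config KERNELNAME
# cd ../compile/KERNELNAME
# make depend && make && make install
# reboot
....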
[[code]]
== Maurer's Universal Statistical Test (for block size=8 bits)
You can find the same code at https://web.archive.org/web/20031204230654/http://www.geocities.com:80/SiliconValley/Code/4704/uliscanc.txt[this link].
[.programlisting]
....
/*
ULISCAN.c ---blocksize of 8
1 Oct 98
1 Dec 98
21 Dec 98 uliscan.c derived from ueli8.c
This version has // comments removed for Sun cc
This implements Ueli M Maurer's "Universal Statistical Test for Random
Bit Generators" using L=8
Accepts a filename on the command line; writes its results, with other
info, to stdout.
Handles input file exhaustion gracefully.
Ref: J. Cryptology v 5 no 2, 1992 pp 89-105
also on the web somewhere, which is where I found it.
-David Honig
honig@sprynet.com
Usage:
ULISCAN filename
outputs to stdout
*/
#define L 8
#define V (1<<L)
#define Q (10*V)
#define K (100 *Q)
#define MAXSAMP (Q + K)
#include <stdio.h>
#include <stdlib.h>     /* for exit() */
#include <math.h>

int main(argc, argv)
int argc;
char **argv;
{
  FILE *fptr;
  int i, j;
  int b, c;
  int table[V];
  double sum = 0.0;
  int iproduct = 1;
  int run;
  extern double log(/* double x */);

  printf("Uliscan 21 Dec 98 \nL=%d %d %d \n", L, V, MAXSAMP);

  if (argc < 2) {
    printf("Usage: Uliscan filename\n");
    exit(-1);
  } else {
    printf("Measuring file %s\n", argv[1]);
  }

  fptr = fopen(argv[1], "rb");

  if (fptr == NULL) {
    printf("Can't find %s\n", argv[1]);
    exit(-1);
  }

  for (i = 0; i < V; i++) {
    table[i] = 0;
  }

  for (i = 0; i < Q; i++) {
    b = fgetc(fptr);
    table[b] = i;
  }

  printf("Init done\n");
  printf("Expected value for L=8 is 7.1836656\n");

  run = 1;

  while (run) {
    sum = 0.0;
    iproduct = 1;

    if (run)
      for (i = Q; run && i < Q + K; i++) {
        j = i;
        b = fgetc(fptr);

        if (b < 0)
          run = 0;

        if (run) {
          if (table[b] > j)
            j += K;

          sum += log((double)(j - table[b]));

          table[b] = i;
        }
      }

    if (!run)
      printf("Premature end of file; read %d blocks.\n", i - Q);

    sum = (sum / ((double)(i - Q))) / log(2.0);
    printf("%4.4f ", sum);

    for (i = 0; i < (int)(sum * 8.0 + 0.50); i++)
      printf("-");

    printf("\n");

    /* refill initial table */
    if (0) {
      for (i = 0; i < Q; i++) {
        b = fgetc(fptr);

        if (b < 0) {
          run = 0;
        } else {
          table[b] = i;
        }
      }
    }
  }
}
....
diff --git a/documentation/content/en/articles/ldap-auth/_index.adoc b/documentation/content/en/articles/ldap-auth/_index.adoc
index d848ae896e..206a6d4ed6 100644
--- a/documentation/content/en/articles/ldap-auth/_index.adoc
+++ b/documentation/content/en/articles/ldap-auth/_index.adoc
@@ -1,783 +1,783 @@
---
title: LDAP Authentication
authors:
- author: Toby Burress
email: kurin@causa-sui.net
copyright: 2007-2008 The FreeBSD Documentation Project
-releaseinfo: "$FreeBSD$"
+description: Guide for the configuration of an LDAP server for authentication on FreeBSD
trademarks: ["freebsd", "general"]
---
= LDAP Authentication
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
[.abstract-title]
Abstract
This document is intended as a guide for the configuration of an LDAP server (principally an OpenLDAP server) for authentication on FreeBSD.
This is useful for situations where many servers need the same user accounts, for example as a replacement for NIS.
'''
toc::[]
[[preface]]
== Preface
This document is intended to give the reader enough of an understanding of LDAP to configure an LDAP server.
This document will also attempt to explain how to configure package:net/nss_ldap[] and package:security/pam_ldap[] on client machines for use with the LDAP server.
When finished, the reader should be able to configure and deploy a FreeBSD server that can host an LDAP directory, and to configure and deploy a FreeBSD server which can authenticate against an LDAP directory.
This article is not intended to be an exhaustive account of the security, robustness, or best practice considerations for configuring LDAP or the other services discussed herein.
While the author takes care to do everything correctly, they do not address security issues beyond a general scope.
This article should be considered to lay the theoretical groundwork only, and any actual implementation should be accompanied by careful requirement analysis.
[[ldap]]
== Configuring LDAP
LDAP stands for "Lightweight Directory Access Protocol" and is a subset of the X.500 Directory Access Protocol.
Its most recent specifications are in http://www.ietf.org/rfc/rfc4510.txt[RFC4510] and friends.
Essentially it is a database that expects to be read from more often than it is written to.
The LDAP server http://www.openldap.org/[OpenLDAP] will be used in the examples in this document; while the principles here should be generally applicable to many different servers, most of the concrete administration is OpenLDAP-specific.
There are several server versions in ports, for example package:net/openldap24-server[].
Client servers will need the corresponding package:net/openldap24-client[] libraries.
There are (basically) two areas of the LDAP service which need configuration.
The first is setting up a server to receive connections properly, and the second is adding entries to the server's directory so that FreeBSD tools know how to interact with it.
[[ldap-connect]]
=== Setting Up the Server for Connections
[NOTE]
====
This section is specific to OpenLDAP.
If you are using another server, you will need to consult that server's documentation.
====
[[ldap-connect-install]]
==== Installing OpenLDAP
First, install OpenLDAP:
[[oldap-install]]
.Installing OpenLDAP
[example]
====
[source,shell]
....
# cd /usr/ports/net/openldap24-server
# make install clean
....
====
This installs the `slapd` and `slurpd` binaries, along with the required OpenLDAP libraries.
[[ldap-connect-config]]
==== Configuring OpenLDAP
Next we must configure OpenLDAP.
You will want to require encryption in your connections to the LDAP server; otherwise your users' passwords will be transferred in plain text, which is considered insecure.
The tools we will be using support two very similar kinds of encryption, SSL and TLS.
TLS stands for "Transport Layer Security".
Services that employ TLS tend to connect on the _same_ ports as the same services without TLS; thus an SMTP server which supports TLS will listen for connections on port 25, and an LDAP server will listen on 389.
SSL stands for "Secure Sockets Layer", and services that implement SSL do _not_ listen on the same ports as their non-SSL counterparts.
Thus SMTPS listens on port 465 (not 25), HTTPS listens on 443, and LDAPS on 636.
The reason SSL uses a different port than TLS is because a TLS connection begins as plain text, and switches to encrypted traffic after the `STARTTLS` directive.
SSL connections are encrypted from the beginning.
Other than that there are no substantial differences between the two.
[NOTE]
====
We will adjust OpenLDAP to use TLS, as SSL is considered deprecated.
====
Once OpenLDAP is installed via ports, the following configuration parameters in [.filename]#/usr/local/etc/openldap/slapd.conf# will enable TLS:
[.programlisting]
....
security ssf=128
TLSCertificateFile /path/to/your/cert.crt
TLSCertificateKeyFile /path/to/your/cert.key
TLSCACertificateFile /path/to/your/cacert.crt
....
Here, `ssf=128` tells OpenLDAP to require 128-bit encryption for all connections, both search and update.
This parameter may be configured based on the security needs of your site, but you will rarely need to weaken it, as most LDAP client libraries support strong encryption.
The [.filename]#cert.crt#, [.filename]#cert.key#, and [.filename]#cacert.crt# files are necessary for clients to authenticate _you_ as the valid LDAP server.
If you simply want a server that runs, you can create a self-signed certificate with OpenSSL:
[[genrsa]]
.Generating an RSA Key
[example]
====
[source,shell]
....
% openssl genrsa -out cert.key 1024
Generating RSA private key, 1024 bit long modulus
....................++++++
...++++++
e is 65537 (0x10001)
% openssl req -new -key cert.key -out cert.csr
....
====
At this point you should be prompted for some values.
You may enter whatever values you like; however, it is important that the "Common Name" value be the fully qualified domain name of the OpenLDAP server.
In our case, and the examples here, the server is _server.example.org_.
Incorrectly setting this value will cause clients to fail when making connections.
This can be the cause of great frustration, so ensure that you follow these steps closely.
Finally, the certificate signing request needs to be signed:
[[self-sign]]
.Self-signing the Certificate
[example]
====
[source,shell]
....
% openssl x509 -req -in cert.csr -days 365 -signkey cert.key -out cert.crt
Signature ok
subject=/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd
Getting Private key
....
====
This will create a self-signed certificate that can be used for the directives in [.filename]#slapd.conf#, where [.filename]#cert.crt# and [.filename]#cacert.crt# are the same file.
If you are going to use many OpenLDAP servers (for replication via `slurpd`) you will want to see <<ssl-ca>> to generate a CA key and use it to sign individual server certificates.
Once this is done, put the following in [.filename]#/etc/rc.conf#:
[.programlisting]
....
slapd_enable="YES"
....
Then run `/usr/local/etc/rc.d/slapd start`.
This should start OpenLDAP.
Confirm that it is listening on 389 with
[source,shell]
....
% sockstat -4 -p 389
ldap slapd 3261 7 tcp4 *:389 *:*
....
[[ldap-connect-client]]
==== Configuring the Client
Install the package:net/openldap24-client[] port for the OpenLDAP libraries.
The client machines will always have OpenLDAP libraries since that is all package:security/pam_ldap[] and package:net/nss_ldap[] support, at least for the moment.
The configuration file for the OpenLDAP libraries is [.filename]#/usr/local/etc/openldap/ldap.conf#.
Edit this file to contain the following values:
[.programlisting]
....
base dc=example,dc=org
uri ldap://server.example.org/
ssl start_tls
tls_cacert /path/to/your/cacert.crt
....
[NOTE]
====
It is important that your clients have access to [.filename]#cacert.crt#, otherwise they will not be able to connect.
====
[NOTE]
====
There are two files called [.filename]#ldap.conf#.
The first is this file, which is for the OpenLDAP libraries and defines how to talk to the server.
The second is [.filename]#/usr/local/etc/ldap.conf#, and is for pam_ldap.
====
At this point you should be able to run `ldapsearch -Z` on the client machine; `-Z` means "use TLS".
If you encounter an error, then something is configured wrong; most likely it is your certificates.
Use man:openssl[1]'s `s_client` and `s_server` to ensure you have them configured and signed properly.
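As a quick sanity check of the certificate files themselves, independent of slapd, you can serve them with `s_server` on an arbitrary spare port (the port number below is only an example) and connect from the client with `s_client`:
[source,shell]
....
# openssl s_server -cert cert.crt -key cert.key -accept 8443
....
and, on the client machine:
[source,shell]
....
% openssl s_client -connect server.example.org:8443 -CAfile cacert.crt
....
A `Verify return code: 0 (ok)` near the end of the `s_client` output indicates that the client accepts the certificate chain.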
[[ldap-database]]
=== Entries in the Database
Authentication against an LDAP directory is generally accomplished by attempting to bind to the directory as the connecting user.
This is done by establishing a "simple" bind on the directory with the user name supplied.
If there is an entry with the `uid` equal to the user name and that entry's `userPassword` attribute matches the password supplied, then the bind is successful.
The first thing we have to do is figure out where in the directory our users will live.
The base entry for our database is `dc=example,dc=org`.
The default location for users that most clients seem to expect is something like `ou=people,_base_`, so that is what will be used here.
However keep in mind that this is configurable.
So the ldif entry for the `people` organizational unit will look like:
[.programlisting]
....
dn: ou=people,dc=example,dc=org
objectClass: top
objectClass: organizationalUnit
ou: people
....
All users will be created as subentries of this organizational unit.
Some thought might be given to the object class your users will belong to.
Most tools by default will use `people`, which is fine if you simply want to provide entries against which to authenticate.
However, if you are going to store user information in the LDAP database as well, you will probably want to use `inetOrgPerson`, which has many useful attributes.
In either case, the relevant schemas need to be loaded in [.filename]#slapd.conf#.
For this example we will use the `person` object class.
If you are using `inetOrgPerson`, the steps are basically identical, except that the `sn` attribute is required.
To add a user `tuser`, the ldif would be:
[.programlisting]
....
dn: uid=tuser,ou=people,dc=example,dc=org
objectClass: person
objectClass: posixAccount
objectClass: shadowAccount
objectClass: top
uidNumber: 10000
gidNumber: 10000
homeDirectory: /home/tuser
loginShell: /bin/csh
uid: tuser
cn: tuser
....
I start my LDAP users' UIDs at 10000 to avoid collisions with system accounts; you can configure whatever number you wish here, as long as it is less than 65536.
We also need group entries.
They are as configurable as user entries, but we will use the defaults below:
[.programlisting]
....
dn: ou=groups,dc=example,dc=org
objectClass: top
objectClass: organizationalUnit
ou: groups
dn: cn=tuser,ou=groups,dc=example,dc=org
objectClass: posixGroup
objectClass: top
gidNumber: 10000
cn: tuser
....
To enter these into your database, you can use `slapadd` or `ldapadd` on a file containing these entries.
Alternatively, you can use package:sysutils/ldapvi[].
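For example, assuming the entries above are saved in [.filename]#users.ldif# and that the rootdn configured in [.filename]#slapd.conf# is `cn=manager,dc=example,dc=org` (both are assumptions for this sketch), they could be added over TLS with:
[source,shell]
....
% ldapadd -Z -x -D "cn=manager,dc=example,dc=org" -W -f users.ldif
....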
The `ldapsearch` utility on the client machine should now return these entries.
If it does, your database is properly configured to be used as an LDAP authentication server.
[[client]]
== Client Configuration
The client should already have OpenLDAP libraries from <<ldap-connect-client>>, but if you are installing several client machines you will need to install package:net/openldap24-client[] on each of them.
FreeBSD requires two ports to be installed to authenticate against an LDAP server, package:security/pam_ldap[] and package:net/nss_ldap[].
[[client-auth]]
=== Authentication
package:security/pam_ldap[] is configured via [.filename]#/usr/local/etc/ldap.conf#.
[NOTE]
====
This is a _different file_ than the OpenLDAP library functions' configuration file, [.filename]#/usr/local/etc/openldap/ldap.conf#; however, it takes many of the same options; in fact it is a superset of that file.
For the rest of this section, references to [.filename]#ldap.conf# will mean [.filename]#/usr/local/etc/ldap.conf#.
====
Thus, we will want to copy all of our original configuration parameters from [.filename]#openldap/ldap.conf# to the new [.filename]#ldap.conf#.
Once this is done, we want to tell package:security/pam_ldap[] what to look for on the directory server.
We are identifying our users with the `uid` attribute.
To configure this (though it is the default), set the `pam_login_attribute` directive in [.filename]#ldap.conf#:
[[set-pam-login-attr]]
.Setting `pam_login_attribute`
[example]
====
[.programlisting]
....
pam_login_attribute uid
....
====
With this set, package:security/pam_ldap[] will search the entire LDAP directory under `base` for the value `uid=_username_`.
If it finds one and only one entry, it will attempt to bind as that user with the password it was given.
If it binds correctly, then it will allow access.
Otherwise it will fail.
Users whose shell is not in [.filename]#/etc/shells# will not be able to log in.
This is particularly important when Bash is set as the user shell on the LDAP server.
Bash is not included with a default installation of FreeBSD.
When installed from a package or port, it is located at [.filename]#/usr/local/bin/bash#.
Verify that the path to the shell on the server is set correctly:
[source,shell]
....
% getent passwd username
....
There are two choices when the output shows `/bin/bash` in the last column.
The first is to change the user's entry on the LDAP server to [.filename]#/usr/local/bin/bash#.
The second option is to create a symlink on the LDAP client computer so Bash is found at the correct location:
[source,shell]
....
# ln -s /usr/local/bin/bash /bin/bash
....
Make sure that [.filename]#/etc/shells# contains entries for both `/usr/local/bin/bash` and `/bin/bash`.
The user will then be able to log in to the system with Bash as their shell.
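For example, any missing entries can be appended as `root`:
[source,shell]
....
# echo /usr/local/bin/bash >> /etc/shells
# echo /bin/bash >> /etc/shells
....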
[[client-auth-pam]]
==== PAM
PAM, which stands for "Pluggable Authentication Modules", is the method by which FreeBSD authenticates most of its sessions.
To tell FreeBSD we wish to use an LDAP server, we will have to add a line to the appropriate PAM file.
Most of the time the appropriate PAM file is [.filename]#/etc/pam.d/sshd#, if you want to use SSH (remember to set the relevant options in [.filename]#/etc/ssh/sshd_config#, otherwise SSH will not use PAM).
To use PAM for authentication, add the line
[.programlisting]
....
auth sufficient /usr/local/lib/pam_ldap.so no_warn
....
Exactly where this line shows up in the file and which options appear in the fourth column determine the exact behavior of the authentication mechanism; see man:pam.d[5].
With this configuration you should be able to authenticate a user against an LDAP directory.
PAM will perform a bind with your credentials, and if successful will tell SSH to allow access.
However it is not a good idea to allow _every_ user in the directory into _every_ client machine.
With the current configuration, all that a user needs to log into a machine is an LDAP entry.
Fortunately there are a few ways to restrict user access.
[.filename]#ldap.conf# supports a `pam_groupdn` directive; every account that connects to this machine needs to be a member of the group specified here.
For example, if you have
[.programlisting]
....
pam_groupdn cn=servername,ou=accessgroups,dc=example,dc=org
....
in [.filename]#ldap.conf#, then only members of that group will be able to log in.
There are a few things to bear in mind, however.
Members of this group are specified in one or more `memberUid` attributes, and each attribute must have the full distinguished name of the member.
So `memberUid: someuser` will not work; it must be:
[.programlisting]
....
memberUid: uid=someuser,ou=people,dc=example,dc=org
....
Additionally, this directive is not checked in PAM during authentication but during account management, so you will need a second line in your PAM files under `account`.
This will require, in turn, _every_ user to be listed in the group, which is not necessarily what we want.
To avoid blocking users that are not in LDAP, you should enable the `ignore_unknown_user` attribute.
Finally, you should set the `ignore_authinfo_unavail` option so that you are not locked out of every computer when the LDAP server is unavailable.
Your [.filename]#pam.d/sshd# might then end up looking like this:
[[pam]]
.Sample [.filename]#pam.d/sshd#
[example]
====
[.programlisting]
....
auth required pam_nologin.so no_warn
auth sufficient pam_opie.so no_warn no_fake_prompts
auth requisite pam_opieaccess.so no_warn allow_local
auth sufficient /usr/local/lib/pam_ldap.so no_warn
auth required pam_unix.so no_warn try_first_pass
account required pam_login_access.so
account required /usr/local/lib/pam_ldap.so no_warn ignore_authinfo_unavail ignore_unknown_user
....
====
[NOTE]
====
Since we are adding these lines specifically to [.filename]#pam.d/sshd#, this will only have an effect on SSH sessions.
LDAP users will be unable to log in at the console.
To change this behavior, examine the other files in [.filename]#/etc/pam.d# and modify them accordingly.
====
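For example, to let the same LDAP users log in at the console, the `auth` and `account` lines shown above could also be added to [.filename]#/etc/pam.d/system#; this is only a sketch, so review the existing contents of that file before editing it:

[.programlisting]
....
auth		sufficient	/usr/local/lib/pam_ldap.so	no_warn
account		required	/usr/local/lib/pam_ldap.so	no_warn ignore_authinfo_unavail ignore_unknown_user
....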
[[client-nss]]
=== Name Service Switch
NSS is the service that maps attributes to names.
So, for example, if a file is owned by user `1001`, an application will query NSS for the name of `1001`, and it might get `bob` or `ted` or whatever the user's name is.
Now that our user information is kept in LDAP, we need to tell NSS to look there when queried.
The package:net/nss_ldap[] port does this.
It uses the same configuration file as package:security/pam_ldap[], and should not need any extra parameters once it is installed.
Instead, what is left is simply to edit [.filename]#/etc/nsswitch.conf# to take advantage of the directory.
Simply replace the following lines:
[.programlisting]
....
group: compat
passwd: compat
....
with
[.programlisting]
....
group: files ldap
passwd: files ldap
....
This will allow you to map usernames to UIDs and UIDs to usernames.
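A quick way to confirm that the mapping works is to query NSS directly, for example with the hypothetical `tuser` account used earlier:

[source,shell]
....
% getent passwd tuser
% id tuser
....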
Congratulations! You should now have working LDAP authentication.
[[caveats]]
=== Caveats
Unfortunately, as of the time this was written FreeBSD did not support changing user passwords with man:passwd[1].
As a result of this, most administrators are left to implement a solution themselves.
I provide some examples here.
Note that if you write your own password change script, there are some security issues you should be aware of; see <<security-passwd>>.
[[chpw-shell]]
.Shell Script for Changing Passwords
[example]
====
[.programlisting]
....
#!/bin/sh

# Read the old and new passwords with terminal echo turned off.
stty -echo
read -p "Old Password: " oldp; echo
read -p "New Password: " np1; echo
read -p "Retype New Password: " np2; echo
stty echo

if [ "$np1" != "$np2" ]; then
	echo "Passwords do not match."
	exit 1
fi

# Bind as the user with the old password and ask the server to set the new one.
ldappasswd -D uid="$USER",ou=people,dc=example,dc=org \
	-w "$oldp" \
	-a "$oldp" \
	-s "$np1"
....
====
[CAUTION]
====
This script does hardly any error checking, but more importantly, it is very cavalier about how it handles your passwords.
If you do anything like this, at least adjust the `security.bsd.see_other_uids` sysctl value:
[source,shell]
....
# sysctl security.bsd.see_other_uids=0
....
====
A more flexible (and probably more secure) approach can be used by writing a custom program, or even a web interface.
The following is part of a Ruby library that can change LDAP passwords.
It sees use both on the command line, and on the web.
[[chpw-ruby]]
.Ruby Script for Changing Passwords
[example]
====
[.programlisting]
....
require 'ldap'
require 'base64'
require 'digest'
require 'password' # ruby-password
ldap_server = "ldap.example.org"
luser = "uid=#{ENV['USER']},ou=people,dc=example,dc=org"
# get the new password, check it, and create a salted hash from it
def get_password
pwd1 = Password.get("New Password: ")
pwd2 = Password.get("Retype New Password: ")
raise if pwd1 != pwd2
pwd1.check # check password strength
salt = rand.to_s.gsub(/0\./, '')
pass = pwd1.to_s
hash = "{SSHA}"+Base64.encode64(Digest::SHA1.digest("#{pass}#{salt}")+salt).chomp!
return hash
end
oldp = Password.get("Old Password: ")
newp = get_password
# We'll just replace it. That we can bind proves that we either know
# the old password or are an admin.
replace = LDAP::Mod.new(LDAP::LDAP_MOD_REPLACE | LDAP::LDAP_MOD_BVALUES,
"userPassword",
[newp])
conn = LDAP::SSLConn.new(ldap_server, 389, true)
conn.set_option(LDAP::LDAP_OPT_PROTOCOL_VERSION, 3)
conn.bind(luser, oldp)
conn.modify(luser, [replace])
....
====
Although not guaranteed to be free of security holes (the password is kept in memory, for example), this is cleaner and more flexible than a simple `sh` script.
[[secure]]
== Security Considerations
Now that your machines (and possibly other services) are authenticating against your LDAP server, this server needs to be protected at least as well as [.filename]#/etc/master.passwd# would be on a regular server, and possibly even more so since a broken or cracked LDAP server would break every client service.
Remember, this section is not exhaustive.
You should continually review your configuration and procedures for improvements.
[[secure-readonly]]
=== Setting Attributes Read-only
Several attributes in LDAP should be read-only.
If left writable by the user, for example, a user could change his `uidNumber` attribute to `0` and get `root` access!
To begin with, the `userPassword` attribute should not be world-readable.
By default, anyone who can connect to the LDAP server can read this attribute.
To disable this, put the following in [.filename]#slapd.conf#:
[[hide-userpass]]
.Hide Passwords
[example]
====
[.programlisting]
....
access to dn.subtree="ou=people,dc=example,dc=org"
         attrs=userPassword
         by self write
         by anonymous auth
         by * none
access to *
         by self write
         by * read
....
====
This will disallow reading of the `userPassword` attribute, while still allowing users to change their own passwords.
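To verify the ACL, an anonymous search for the attribute should come back empty; assuming the user entries are `posixAccount` objects, something like:

[source,shell]
....
% ldapsearch -x -b ou=people,dc=example,dc=org '(objectClass=posixAccount)' userPassword
....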
Additionally, you'll want to keep users from changing some of their own attributes.
By default, users can change any attribute (except for those to which the LDAP schemas themselves deny changes), such as `uidNumber`.
To close this hole, modify the above to
[[attrib-readonly]]
.Read-only Attributes
[example]
====
[.programlisting]
....
access to dn.subtree="ou=people,dc=example,dc=org"
         attrs=userPassword
         by self write
         by anonymous auth
         by * none
access to attrs=homeDirectory,uidNumber,gidNumber
         by * read
access to *
         by self write
         by * read
====
This will stop users from being able to masquerade as other users.
[[secure-root]]
=== `root` Account Definition
Often the `root` or manager account for the LDAP service will be defined in the configuration file.
OpenLDAP supports this, for example, and it works, but it can lead to trouble if [.filename]#slapd.conf# is compromised.
It may be better to use this only to bootstrap yourself into LDAP, and then define a `root` account there.
Even better is to define accounts that have limited permissions, and omit a `root` account entirely.
For example, users that can add or remove user accounts are added to one group, but they cannot themselves change the membership of this group.
Such a security policy would help mitigate the effects of a leaked password.
[[manager-acct]]
==== Creating a Management Group
Say you want your IT department to be able to change home directories for users, but you do not want all of them to be able to add or remove users.
The way to do this is to add a group for these admins:
[[manager-acct-dn]]
.Creating a Management Group
[example]
====
[.programlisting]
....
dn: cn=homemanagement,dc=example,dc=org
objectClass: top
objectClass: posixGroup
cn: homemanagement
# gidNumber is required for posixGroup
gidNumber: 121
memberUid: uid=tuser,ou=people,dc=example,dc=org
memberUid: uid=user2,ou=people,dc=example,dc=org
....
====
And then change the permissions attributes in [.filename]#slapd.conf#:
[[management-acct-acl]]
.ACLs for a Home Directory Management Group
[example]
====
[.programlisting]
....
access to dn.subtree="ou=people,dc=example,dc=org"
         attrs=homeDirectory
         by dn="cn=homemanagement,dc=example,dc=org"
         dnattr=memberUid write
....
====
Now `tuser` and `user2` can change other users' home directories.
In this example we have given a subset of administrative power to certain users without giving them power in other domains.
The idea is that no single user account has the full power of a `root` account, but every power `root` had is held by at least one user.
The `root` account then becomes unnecessary and can be removed.
[[security-passwd]]
=== Password Storage
By default OpenLDAP will store the value of the `userPassword` attribute as it stores any other data: in the clear.
Most of the time it is base 64 encoded, which provides enough protection to keep an honest administrator from knowing your password, but little else.
It is a good idea, then, to store passwords in a more secure format, such as SSHA (salted SHA).
This is done by whatever program you use to change users' passwords.
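For example, the `slappasswd` utility that ships with OpenLDAP can generate a value suitable for `userPassword`; the output starts with `{SSHA}` and can be stored directly in the attribute:

[source,shell]
....
% slappasswd -h "{SSHA}" -s MySecretPassword
....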
:sectnums!:
[appendix]
[[useful]]
== Useful Aids
There are a few other programs that might be useful, particularly if you have many users and do not want to configure everything manually.
package:security/pam_mkhomedir[] is a PAM module that always succeeds; its purpose is to create home directories for users who do not have them.
If you have dozens of client servers and hundreds of users, it is much easier to use this and set up skeleton directories than to prepare every home directory.
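A sketch of how it is typically enabled, assuming the module is installed as [.filename]#/usr/local/lib/pam_mkhomedir.so#, is to add a `session` line to the relevant PAM file:

[.programlisting]
....
session		required	/usr/local/lib/pam_mkhomedir.so
....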
package:sysutils/cpu[] is a man:pw[8]-like utility that can be used to manage users in the LDAP directory.
You can call it directly, or wrap scripts around it.
It can handle both TLS (with the `-x` flag) and SSL (directly).
package:sysutils/ldapvi[] is a great utility for editing LDAP values in an LDIF-like syntax.
The directory (or subsection of the directory) is presented in the editor chosen by the `EDITOR` environment variable.
This makes it easy to make large-scale changes in the directory without having to write a custom tool.
package:security/openssh-portable[] has the ability to contact an LDAP server to verify SSH keys.
This is extremely nice if you have many servers and do not want to copy your public keys across all of them.
:sectnums!:
[appendix]
[[ssl-ca]]
== OpenSSL Certificates for LDAP
If you are hosting two or more LDAP servers, you will probably not want to use self-signed certificates, since each client will have to be configured to work with each certificate.
While this is possible, it is not nearly as simple as creating your own certificate authority, and signing your servers' certificates with that.
The steps here are presented as they are, with very little attempt at explaining what is going on; further explanation can be found in man:openssl[1] and its friends.
To create a certificate authority, we simply need a self-signed certificate and key.
The steps for this again are
[[make-cert]]
.Creating a Certificate
[example]
====
[source,shell]
....
% openssl genrsa -out root.key 1024
% openssl req -new -key root.key -out root.csr
% openssl x509 -req -days 1024 -in root.csr -signkey root.key -out root.crt
....
====
These will be your root CA key and certificate.
You will probably want to encrypt the key and store it in a cool, dry place; anyone with access to it can masquerade as one of your LDAP servers.
Next, using the first two steps above create a key [.filename]#ldap-server-one.key# and certificate signing request [.filename]#ldap-server-one.csr#.
Once you sign the signing request with [.filename]#root.key#, you will be able to use [.filename]#ldap-server-one.*# on your LDAP servers.
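For reference, creating the server key and signing request uses the same two commands shown above, only with different file names:

[source,shell]
....
% openssl genrsa -out ldap-server-one.key 1024
% openssl req -new -key ldap-server-one.key -out ldap-server-one.csr
....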
[NOTE]
====
Do not forget to use the fully qualified domain name for the "common name" attribute when generating the certificate signing request; otherwise clients will reject a connection with you, and it can be very tricky to diagnose.
====
To sign the signing request, use `-CA` and `-CAkey` instead of `-signkey`:
[[ca-sign]]
.Signing as a Certificate Authority
[example]
====
[source,shell]
....
% openssl x509 -req -days 1024 \
-in ldap-server-one.csr -CA root.crt -CAkey root.key \
-out ldap-server-one.crt
....
====
The resulting file will be the certificate that you can use on your LDAP servers.
Finally, for clients to trust all your servers, distribute [.filename]#root.crt# (the __certificate__, not the key!) to each client, and specify it in the `TLSCACertificateFile` directive in [.filename]#ldap.conf#.
diff --git a/documentation/content/en/articles/leap-seconds/_index.adoc b/documentation/content/en/articles/leap-seconds/_index.adoc
index 64271ff340..8ad3c6154e 100644
--- a/documentation/content/en/articles/leap-seconds/_index.adoc
+++ b/documentation/content/en/articles/leap-seconds/_index.adoc
@@ -1,84 +1,84 @@
---
title: FreeBSD Support for Leap Seconds
-releaseinfo: "$FreeBSD$"
+description: FreeBSD Support for Leap Seconds
---
= FreeBSD Support for Leap Seconds
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
include::shared/en/urls.adoc[]
'''
toc::[]
[[leapseconds-definition]]
== Introduction
A _leap second_ is a one-second adjustment made at specific times of year to UTC to synchronize atomic time scales with variations in the rotation of the Earth.
This article describes how FreeBSD interacts with leap seconds.
As of this writing, the next leap second will occur at 2015-Jun-30 23:59:60 UTC.
This leap second will occur during a business day for North and South America and the Asia/Pacific region.
Leap seconds are announced by http://datacenter.iers.org/[IERS] on http://datacenter.iers.org/web/guest/bulletins/-/somos/5Rgv/product/16[Bulletin C].
Standard leap second behavior is described in https://tools.ietf.org/html/rfc7164#section-3[RFC 7164].
Also see man:time2posix[3].
[[leapseconds-posix]]
== Default Leap Second Handling on FreeBSD
The easiest way to handle leap seconds is with the POSIX time rules FreeBSD uses by default, combined with link:{handbook}#network-ntp[NTP].
When man:ntpd[8] is running and the time is synchronized with upstream NTP servers that handle leap seconds correctly, the leap second will cause the system time to automatically repeat the last second of the day.
No other adjustments are necessary.
If the upstream NTP servers do not handle leap seconds correctly, man:ntpd[8] will step the time by one second after the errant upstream server has noticed and stepped itself.
If NTP is not being used, manual adjustment of the system clock will be required after the leap second has passed.
[[leapseconds-cautions]]
== Cautions
Leap seconds are inserted at the same instant all over the world: UTC midnight.
In Japan that is mid-morning, in the Pacific mid-day, in the Americas late afternoon, and in Europe at night.
We believe and expect that FreeBSD, if provided correct and stable NTP service, will work as designed during this leap second, as it did during the previous ones.
However, we caution that practically no applications have ever asked the kernel about leap seconds.
Our experience is that, as designed, leap seconds are essentially a replay of the second before the leap second, and this is a surprise to most application programmers.
Other operating systems and other computers may or may not handle the leap-second the same way as FreeBSD, and systems without correct and stable NTP service will not know anything about leap seconds at all.
It is not unheard of for computers to crash because of leap seconds, and experience has shown that a large fraction of all public NTP servers might handle and announce the leap second incorrectly.
Please try to make sure nothing horrible happens because of the leap second.
[[leapseconds-testing]]
== Testing
It is possible to test whether a leap second will be used.
Due to the nature of NTP, the test might work up to 24 hours before the leap second.
Some major reference clock sources only announce leap seconds one hour ahead of the event.
Query the NTP daemon:
[source,shell]
....
% ntpq -c 'rv 0 leap'
....
Output that includes `leap_add_sec` indicates proper support of the leap second.
Before the 24 hours leading up to the leap second, or after the leap second has passed, `leap_none` will be shown.
[[leapseconds-conclusion]]
== Conclusion
In practice, leap seconds are usually not a problem on FreeBSD.
We hope that this overview helps clarify what to expect and how to make the leap second event proceed more smoothly.
diff --git a/documentation/content/en/articles/linux-emulation/_index.adoc b/documentation/content/en/articles/linux-emulation/_index.adoc
index f19362a9d0..701bf524cc 100644
--- a/documentation/content/en/articles/linux-emulation/_index.adoc
+++ b/documentation/content/en/articles/linux-emulation/_index.adoc
@@ -1,1416 +1,1416 @@
---
title: Linux® emulation in FreeBSD
authors:
- author: Roman Divacky
email: rdivacky@FreeBSD.org
-releaseinfo: "$FreeBSD$"
+description: Linux® emulation in FreeBSD
trademarks: ["freebsd", "ibm", "adobe", "netbsd", "realnetworks", "oracle", "linux", "sun", "general"]
---
= Linux(R) emulation in FreeBSD
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
include::shared/authors.adoc[]
[.abstract-title]
Abstract
This master's thesis deals with updating the Linux(R) emulation layer (the so called _Linuxulator_).
The task was to update the layer to match the functionality of Linux(R) 2.6.
As a reference implementation, the Linux(R) 2.6.16 kernel was chosen.
The concept is loosely based on the NetBSD implementation.
Most of the work was done in the summer of 2006 as a part of the Google Summer of Code students program.
The focus was on bringing the _NPTL_ (new POSIX(R) thread library) support into the emulation layer, including _TLS_ (thread local storage), _futexes_ (fast user space mutexes), _PID mangling_, and some other minor things.
Many small problems were identified and fixed in the process.
My work was integrated into the main FreeBSD source repository and will be shipped in the upcoming 7.0R release.
We, the emulation development team, are working on making the Linux(R) 2.6 emulation the default emulation layer in FreeBSD.
'''
toc::[]
[[intro]]
== Introduction
In the last few years the open source UNIX(R)-based operating systems have started to be widely deployed on server and client machines.
Among these operating systems I would like to point out two: FreeBSD, for its BSD heritage, time-proven code base and many interesting features, and Linux(R) for its wide user base, enthusiastic open developer community and support from large companies.
FreeBSD tends to be used on server class machines serving heavy duty networking tasks, with less usage on desktop class machines for ordinary users.
Linux(R) has the same usage on servers, but it is used much more by home users.
This leads to a situation where there are many binary only programs available for Linux(R) that lack support for FreeBSD.
Naturally, a need for the ability to run Linux(R) binaries on a FreeBSD system arises and this is what this thesis deals with: the emulation of the Linux(R) kernel in the FreeBSD operating system.
During the Summer of 2006 Google Inc. sponsored a project which focused on extending the Linux(R) emulation layer (the so called Linuxulator) in FreeBSD to include Linux(R) 2.6 facilities.
This thesis is written as a part of this project.
[[inside]]
== A look inside...
In this section we are going to describe every operating system in question: how they deal with syscalls, trapframes etc., all the low-level stuff.
We also describe the way they understand common UNIX(R) primitives like what a PID is, what a thread is, etc.
In the third subsection we talk about how UNIX(R) on UNIX(R) emulation could be done in general.
[[what-is-unix]]
=== What is UNIX(R)
UNIX(R) is an operating system with a long history that has influenced almost every other operating system currently in use.
Starting in the 1960s, its development continues to this day (although in different projects).
UNIX(R) development soon forked into two main ways: the BSDs and System III/V families.
They mutually influenced each other by growing a common UNIX(R) standard.
Among the contributions that originated in BSD we can name virtual memory, TCP/IP networking, FFS, and many others.
The System V branch contributed SysV interprocess communication primitives, copy-on-write, etc.
UNIX(R) itself does not exist any more but its ideas have been used by many other operating systems world wide thus forming the so called UNIX(R)-like operating systems.
These days the most influential ones are Linux(R), Solaris, and possibly (to some extent) FreeBSD.
There are in-company UNIX(R) derivatives (AIX, HP-UX etc.), but these have been more and more migrated to the aforementioned systems.
Let us summarize typical UNIX(R) characteristics.
[[tech-details]]
=== Technical details
Every running program constitutes a process that represents a state of the computation.
The run of a process is divided between kernel-space and user-space.
Some operations can be done only from kernel space (dealing with hardware etc.), but the process should spend most of its lifetime in the user space.
The kernel is where the management of the processes, hardware, and low-level details takes place.
The kernel provides a standard unified UNIX(R) API to the user space.
The most important ones are covered below.
[[kern-proc-comm]]
==== Communication between kernel and user space process
Common UNIX(R) API defines a syscall as a way to issue commands from a user space process to the kernel.
The most common implementation is either by using an interrupt or specialized instruction (think of `SYSENTER`/`SYSCALL` instructions for ia32).
Syscalls are defined by a number.
For example in FreeBSD, the syscall number 85 is the man:swapon[2] syscall and the syscall number 132 is man:mkfifo[2].
Some syscalls need parameters, which are passed from the user-space to the kernel-space in various ways (implementation dependent).
Syscalls are synchronous.
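These numbers can be checked directly in the source tree; assuming the sources are installed under [.filename]#/usr/src#, something like:

[.programlisting]
....
% grep -E '^(85|132)[^0-9]' /usr/src/sys/kern/syscalls.master
....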
Another possible way to communicate is by using a _trap_.
Traps occur asynchronously after some event occurs (division by zero, page fault etc.).
A trap can be transparent for a process (page fault) or can result in a reaction like sending a _signal_ (division by zero).
[[proc-proc-comm]]
==== Communication between processes
There are other APIs (System V IPC, shared memory etc.) but the single most important API is signal.
Signals are sent by processes or by the kernel and received by processes.
Some signals can be ignored or handled by a user supplied routine, some result in a predefined action that cannot be altered or ignored.
[[proc-mgmt]]
==== Process management
The kernel instantiates the first process in the system (the so-called init).
Every running process can create its identical copy using the man:fork[2] syscall.
Some slightly modified versions of this syscall were introduced but the basic semantic is the same.
Every running process can morph into some other process using the man:exec[3] syscall.
Some modifications of this syscall were introduced but all serve the same basic purpose.
Processes end their lives by calling the man:exit[2] syscall.
Every process is identified by a unique number called PID.
Every process has a defined parent (identified by its PID).
[[thread-mgmt]]
==== Thread management
Traditional UNIX(R) does not define any API or implementation for threading, while POSIX(R) defines its threading API but the implementation is undefined.
Traditionally there were two ways of implementing threads: handling them as separate processes (1:1 threading) or enveloping the whole thread group in one process and managing the threading in userspace (1:N threading).
Comparing the main features of each approach:
1:1 threading
* - heavyweight threads
* - the scheduling cannot be altered by the user (slightly mitigated by the POSIX(R) API)
* + no syscall wrapping necessary
* + can utilize multiple CPUs
1:N threading
* + lightweight threads
* + scheduling can be easily altered by the user
* - syscalls must be wrapped
* - cannot utilize more than one CPU
[[what-is-freebsd]]
=== What is FreeBSD?
The FreeBSD project is one of the oldest open source operating systems currently available for daily use.
It is a direct descendant of the genuine UNIX(R) so it could be claimed that it is a true UNIX(R) although licensing issues do not permit that.
The start of the project dates back to the early 1990's when a crew of fellow BSD users patched the 386BSD operating system.
Based on this patchkit a new operating system arose named FreeBSD for its liberal license.
Another group created the NetBSD operating system with different goals in mind.
We will focus on FreeBSD.
FreeBSD is a modern UNIX(R)-based operating system with all the features of UNIX(R).
Preemptive multitasking, multiuser facilities, TCP/IP networking, memory protection, symmetric multiprocessing support, virtual memory with merged VM and buffer cache, they are all there.
One of the interesting and extremely useful features is the ability to emulate other UNIX(R)-like operating systems.
As of December 2006 and 7-CURRENT development, the following emulation functionalities are supported:
* FreeBSD/i386 emulation on FreeBSD/amd64
* FreeBSD/i386 emulation on FreeBSD/ia64
* Linux(R)-emulation of Linux(R) operating system on FreeBSD
* NDIS-emulation of Windows networking drivers interface
* NetBSD-emulation of NetBSD operating system
* PECoff-support for PECoff FreeBSD executables
* SVR4-emulation of System V revision 4 UNIX(R)
Actively developed emulations are the Linux(R) layer and various FreeBSD-on-FreeBSD layers.
The others are not expected to work properly or be usable these days.
[[freebsd-tech-details]]
==== Technical details
FreeBSD is a traditional flavor of UNIX(R) in the sense of dividing the run of a process into two halves: kernel space and user space.
There are two types of process entry to the kernel: a syscall and a trap.
There is only one way to return.
In the subsequent sections we will describe the three gates to/from the kernel.
The whole description applies to the i386 architecture as the Linuxulator only exists there but the concept is similar on other architectures.
The information was taken from [1] and the source code.
[[freebsd-sys-entries]]
===== System entries
FreeBSD has an abstraction called an execution class loader, which is a wedge into the man:execve[2] syscall.
This employs a structure `sysentvec`, which describes an executable ABI.
It contains things like errno translation table, signal translation table, various functions to serve syscall needs (stack fixup, coredumping, etc.).
Every ABI the FreeBSD kernel wants to support must define this structure, as it is used later in the syscall processing code and at some other places.
System entries are handled by trap handlers, where we can access both the kernel-space and the user-space at once.
[[freebsd-syscalls]]
===== Syscalls
Syscalls on FreeBSD are issued by executing interrupt `0x80` with register `%eax` set to a desired syscall number with arguments passed on the stack.
When a process issues an interrupt `0x80`, the `int0x80` syscall trap handler is issued (defined in [.filename]#sys/i386/i386/exception.s#), which prepares arguments (i.e. copies them on to the stack) for a call to a C function man:syscall[2] (defined in [.filename]#sys/i386/i386/trap.c#), which processes the passed in trapframe.
The processing consists of preparing the syscall (depending on the `sysvec` entry), determining if the syscall is 32-bit or 64-bit one (changes size of the parameters), then the parameters are copied, including the syscall.
Next, the actual syscall function is executed with processing of the return code (special cases for `ERESTART` and `EJUSTRETURN` errors).
Finally a `userret()` is scheduled, switching the process back to user-space.
The parameters to the actual syscall handler are passed in the form of `struct thread *td`, `struct syscall args *` arguments where the second parameter is a pointer to the copied in structure of parameters.
[[freebsd-traps]]
===== Traps
Handling of traps in FreeBSD is similar to the handling of syscalls.
Whenever a trap occurs, an assembler handler is called.
It is chosen between alltraps, alltraps with regs pushed or calltrap depending on the type of the trap.
This handler prepares arguments for a call to a C function `trap()` (defined in [.filename]#sys/i386/i386/trap.c#), which then processes the occurred trap.
After the processing it might send a signal to the process and/or exit to userland using `userret()`.
[[freebsd-exits]]
===== Exits
Exits from kernel to userspace happen using the assembler routine `doreti` regardless of whether the kernel was entered via a trap or via a syscall.
This restores the program status from the stack and returns to the userspace.
[[freebsd-unix-primitives]]
===== UNIX(R) primitives
The FreeBSD operating system adheres to the traditional UNIX(R) scheme, where every process has a unique identification number, the so called _PID_ (Process ID).
PID numbers are allocated either linearly or randomly ranging from `0` to `PID_MAX`.
The allocation of PID numbers is done using linear searching of PID space.
Every thread in a process receives the same PID number as result of the man:getpid[2] call.
There are currently two ways to implement threading in FreeBSD.
The first way is M:N threading followed by the 1:1 threading model.
The default library used is M:N threading (`libpthread`) and you can switch at runtime to 1:1 threading (`libthr`).
The plan is to switch to 1:1 library by default soon.
Although those two libraries use the same kernel primitives, they are accessed through different APIs.
The M:N library uses the `kse_*` family of syscalls while the 1:1 library uses the `thr_*` family of syscalls.
Due to this, there is no general concept of thread ID shared between kernel and userspace.
Of course, both threading libraries implement the pthread thread ID API.
Every kernel thread (as described by `struct thread`) has a `td_tid` identifier but this is not directly accessible from userland and solely serves the kernel's needs.
It is also used for 1:1 threading library as pthread's thread ID but handling of this is internal to the library and cannot be relied on.
As stated previously there are two implementations of threading in FreeBSD.
The M:N library divides the work between kernel space and userspace.
A thread is an entity that gets scheduled in the kernel but it can represent a varying number of userspace threads.
M userspace threads get mapped to N kernel threads thus saving resources while keeping the ability to exploit multiprocessor parallelism.
Further information about the implementation can be obtained from the man page or [1].
The 1:1 library directly maps a userland thread to a kernel thread thus greatly simplifying the scheme.
None of these designs implement a fairness mechanism (such a mechanism was implemented but it was removed recently because it caused serious slowdown and made the code more difficult to deal with).
[[what-is-linux]]
=== What is Linux(R)
Linux(R) is a UNIX(R)-like kernel originally developed by Linus Torvalds, and now being contributed to by a massive crowd of programmers all around the world.
From its humble beginnings to today, with wide support from companies such as IBM or Google, Linux(R) is associated with its fast development pace, full hardware support and a benevolent dictator model of organization.
Linux(R) development started in 1991 as a hobbyist project at the University of Helsinki in Finland.
Since then it has obtained all the features of a modern UNIX(R)-like OS: multiprocessing, multiuser support, virtual memory, networking, basically everything is there.
There are also highly advanced features like virtualization etc.
As of 2006 Linux(R) seems to be the most widely used open source operating system with support from independent software vendors like Oracle, RealNetworks, Adobe, etc.
Most of the commercial software distributed for Linux(R) can only be obtained in a binary form so recompilation for other operating systems is impossible.
Most of the Linux(R) development happens in a Git version control system.
Git is a distributed system so there is no central source of the Linux(R) code, but some branches are considered prominent and official.
The version number scheme implemented by Linux(R) consists of four numbers A.B.C.D.
Currently development happens in 2.6.C.D, where C represents major version, where new features are added or changed while D is a minor version for bugfixes only.
More information can be obtained from [3].
[[linux-tech-details]]
==== Technical details
Linux(R) follows the traditional UNIX(R) scheme of dividing the run of a process in two halves: the kernel and user space.
The kernel can be entered in two ways: via a trap or via a syscall.
The return is handled only in one way.
The further description applies to Linux(R) 2.6 on the i386(TM) architecture.
This information was taken from [2].
[[linux-syscalls]]
===== Syscalls
Syscalls in Linux(R) are performed (in userspace) using `syscallX` macros where X substitutes a number representing the number of parameters of the given syscall.
This macro translates to a code that loads `%eax` register with a number of the syscall and executes interrupt `0x80`.
After this, a syscall return is performed, which translates negative return values to positive `errno` values and sets `res` to `-1` in case of an error.
Whenever the interrupt `0x80` is called the process enters the kernel in the system call trap handler.
This routine saves all registers on the stack and calls the selected syscall entry.
Note that the Linux(R) calling convention expects parameters to the syscall to be passed via registers as shown here:
. parameter -> `%ebx`
. parameter -> `%ecx`
. parameter -> `%edx`
. parameter -> `%esi`
. parameter -> `%edi`
. parameter -> `%ebp`
There are some exceptions to this, where Linux(R) uses different calling convention (most notably the `clone` syscall).
[[linux-traps]]
===== Traps
The trap handlers are introduced in [.filename]#arch/i386/kernel/traps.c# and most of these handlers live in [.filename]#arch/i386/kernel/entry.S#, where handling of the traps happens.
[[linux-exits]]
===== Exits
Return from the syscall is managed by `syscall_exit`, which checks for the process having unfinished work, then checks whether we used user-supplied selectors.
If this happens stack fixing is applied and finally the registers are restored from the stack and the process returns to the userspace.
[[linux-unix-primitives]]
===== UNIX(R) primitives
In the 2.6 version, the Linux(R) operating system redefined some of the traditional UNIX(R) primitives, notably PID, TID and thread.
PID is defined not to be unique for every process, so for some processes (threads) man:getpid[2] returns the same value.
Unique identification of process is provided by TID.
This is because _NPTL_ (New POSIX(R) Thread Library) defines threads to be normal processes (so called 1:1 threading).
Spawning a new process in Linux(R) 2.6 happens using the `clone` syscall (fork variants are reimplemented using it).
This clone syscall defines a set of flags that affect behavior of the cloning process regarding thread implementation.
The semantic is a bit fuzzy as there is no single flag telling the syscall to create a thread.
Implemented clone flags are:
* `CLONE_VM` - processes share their memory space
* `CLONE_FS` - share umask, cwd and namespace
* `CLONE_FILES` - share open files
* `CLONE_SIGHAND` - share signal handlers and blocked signals
* `CLONE_PARENT` - share parent
* `CLONE_THREAD` - be thread (further explanation below)
* `CLONE_NEWNS` - new namespace
* `CLONE_SYSVSEM` - share SysV undo structures
* `CLONE_SETTLS` - setup TLS at supplied address
* `CLONE_PARENT_SETTID` - set TID in the parent
* `CLONE_CHILD_CLEARTID` - clear TID in the child
* `CLONE_CHILD_SETTID` - set TID in the child
`CLONE_PARENT` sets the real parent to the parent of the caller.
This is useful for threads because if thread A creates thread B we want thread B to be parented to the parent of the whole thread group.
`CLONE_THREAD` does exactly the same thing as `CLONE_PARENT`, `CLONE_VM` and `CLONE_SIGHAND`, rewrites PID to be the same as PID of the caller, sets exit signal to be none and enters the thread group.
`CLONE_SETTLS` sets up GDT entries for TLS handling.
The `CLONE_*_*TID` set of flags sets/clears user supplied address to TID or 0.
As you can see the `CLONE_THREAD` does most of the work and does not seem to fit the scheme very well.
The original intention is unclear (even for authors, according to comments in the code) but I think originally there was one threading flag, which was then parcelled among many other flags but this separation was never fully finished.
It is also unclear what this partition is good for, as glibc does not use it, so only hand-written use of `clone` permits a programmer to access these features.
For non-threaded programs the PID and TID are the same.
For threaded programs the first thread's PID and TID are the same, and every created thread shares the same PID and gets assigned a unique TID (because `CLONE_THREAD` is passed in); the parent is also shared by all processes forming this threaded program.
The code that implements man:pthread_create[3] in NPTL defines the clone flags like this:
[.programlisting]
....
int clone_flags = (CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGNAL
| CLONE_SETTLS | CLONE_PARENT_SETTID
| CLONE_CHILD_CLEARTID | CLONE_SYSVSEM
#if __ASSUME_NO_CLONE_DETACHED == 0
| CLONE_DETACHED
#endif
| 0);
....
The `CLONE_SIGNAL` is defined like
[.programlisting]
....
#define CLONE_SIGNAL (CLONE_SIGHAND | CLONE_THREAD)
....
The last `0` means that no signal is sent when any of the threads exits.
[[what-is-emu]]
=== What is emulation
According to a dictionary definition, emulation is the ability of a program or device to imitate another program or device.
This is achieved by providing the same reaction to a given stimulus as the emulated object.
In practice, the software world mostly sees three types of emulation - a program used to emulate a machine (QEMU, various game console emulators etc.), software emulation of a hardware facility (OpenGL emulators, floating point units emulation etc.) and operating system emulation (either in kernel of the operating system or as a userspace program).
Emulation is usually used in a place where using the original component is not feasible or not possible at all.
For example someone might want to use a program developed for a different operating system than they use.
Then emulation comes in handy.
Sometimes there is no other way but to use emulation - e.g. when the hardware device you try to use does not exist (yet or anymore).
This happens often when porting an operating system to a new (non-existent) platform.
Sometimes it is just cheaper to emulate.
Looking from an implementation point of view, there are two main approaches to the implementation of emulation.
You can either emulate the whole thing - accepting possible inputs of the original object, maintaining inner state and emitting correct output based on the state and/or input.
This kind of emulation does not require any special conditions and basically can be implemented anywhere for any device/program.
The drawback is that implementing such emulation is quite difficult, time-consuming and error-prone.
In some cases we can use a simpler approach.
Imagine you want to emulate a printer that prints from left to right on a printer that prints from right to left.
It is obvious that there is no need for a complex emulation layer but simply reversing of the printed text is sufficient.
Sometimes the emulating environment is very similar to the emulated one so just a thin layer of some translation is necessary to provide fully working emulation! As you can see this is much less demanding to implement, so less time-consuming and error-prone than the previous approach.
But the necessary condition is that the two environments must be similar enough.
The third approach combines the previous two.
Most of the time the objects do not provide the same capabilities, so in the case of emulating the more powerful one on the less powerful one we have to emulate the missing features with the full emulation described above.
This master's thesis deals with emulation of UNIX(R) on UNIX(R), which is exactly the case where only a thin layer of translation is sufficient to provide full emulation.
The UNIX(R) API consists of a set of syscalls, which are usually self contained and do not affect some global kernel state.
There are a few syscalls that affect inner state but this can be dealt with by providing some structures that maintain the extra state.
No emulation is perfect and emulations tend to lack some parts but this usually does not cause any serious drawbacks.
Imagine a game console emulator that emulates everything but music output. No doubt that the games are playable and one can use the emulator.
It might not be as comfortable as the original game console, but it is an acceptable compromise between price and comfort.
The same goes with the UNIX(R) API.
Most programs can live with a very limited set of syscalls working.
Those syscalls tend to be the oldest ones (man:read[2]/man:write[2], the man:fork[2] family, man:signal[3] handling, man:exit[3], the man:socket[2] API), hence they are easy to emulate because their semantics are shared among all the UNIX(R)es that exist today.
[[freebsd-emulation]]
== Emulation
=== How emulation works in FreeBSD
As stated earlier, FreeBSD supports running binaries from several other UNIX(R)es.
This works because FreeBSD has an abstraction called the execution class loader.
This wedges into the man:execve[2] syscall, so when man:execve[2] is about to execute a binary it examines its type.
There are basically two types of binaries in FreeBSD.
Shell-like text scripts which are identified by `#!` as their first two characters and normal (typically _ELF_) binaries, which are a representation of a compiled executable object.
The vast majority (one could say all) of the binaries in FreeBSD are of type ELF.
ELF files contain a header, which specifies the OS ABI for this ELF file.
By reading this information, the operating system can accurately determine what type of binary the given file is.
Every OS ABI must be registered in the FreeBSD kernel.
This applies to the FreeBSD native OS ABI, as well.
So when man:execve[2] executes a binary it iterates through the list of registered ABIs and when it finds the right one it starts to use the information contained in the OS ABI description (its syscall table, `errno` translation table, etc.).
So every time the process calls a syscall, it uses its own set of syscalls instead of some global one.
This effectively provides a very elegant and easy way of supporting execution of various binary formats.
The nature of emulation of different OSes (and also some other subsystems) led developers to invent an event handler mechanism.
There are various places in the kernel where a list of event handlers is called.
Every subsystem can register an event handler and the handlers are called accordingly.
For example, when a process exits there is a handler called that possibly cleans up whatever the subsystem needs cleaned.
Those simple facilities provide basically everything that is needed for the emulation infrastructure and in fact these are basically the only things necessary to implement the Linux(R) emulation layer.
[[freebsd-common-primitives]]
=== Common primitives in the FreeBSD kernel
Emulation layers need some support from the operating system.
I am going to describe some of the supported primitives in the FreeBSD operating system.
[[freebsd-locking-primitives]]
==== Locking primitives
Contributed by: `{attilio}`
The FreeBSD synchronization primitive set is based on the idea of supplying a rather large number of different primitives so that the best one can be used for every particular, appropriate situation.
From a high-level point of view you can consider three kinds of synchronization primitives in the FreeBSD kernel:
* atomic operations and memory barriers
* locks
* scheduling barriers
Below are descriptions of the three families.
For every lock, you should really check the linked manpage (where possible) for more detailed explanations.
[[freebsd-atomic-op]]
===== Atomic operations and memory barriers
Atomic operations are implemented through a set of functions performing simple arithmetics on memory operands in an atomic way with respect to external events (interrupts, preemption, etc.).
Atomic operations can guarantee atomicity just on small data types (on the order of magnitude of the `long` architecture C data type), so they should rarely be used directly in end-level code, if not only for very simple operations (like flag setting in a bitmap, for example).
In fact, it is rather simple and common to write down a wrong semantic based on just atomic operations (usually referred to as lock-less).
The FreeBSD kernel offers a way to perform atomic operations in conjunction with a memory barrier.
The memory barriers will guarantee that an atomic operation will happen following some specified ordering with respect to other memory accesses.
For example, if we need an atomic operation to happen just after all other pending writes (in terms of instruction reordering buffer activities) are completed, we need to explicitly use a memory barrier in conjunction with this atomic operation.
So it is simple to understand why memory barriers play a key role in building higher-level locks (such as refcounts, mutexes, etc.).
For a detailed explanation of atomic operations, please refer to man:atomic[9].
It is worth noting, however, that atomic operations (and memory barriers as well) should ideally only be used for building front-ending locks (such as mutexes).
[[freebsd-refcounts]]
===== Refcounts
Refcounts are interfaces for handling reference counters.
They are implemented through atomic operations and are intended to be used just for cases where the reference counter is the only thing to be protected, so even something like a spin-mutex is deprecated.
Using the refcount interface for structures, where a mutex is already used is often wrong since we should probably close the reference counter in some already protected paths.
A manpage discussing refcount does not exist currently, just check [.filename]#sys/refcount.h# for an overview of the existing API.
[[freebsd-locks]]
===== Locks
The FreeBSD kernel has a huge set of lock classes.
Every lock is defined by some peculiar properties, but probably the most important is the event linked to contesting holders (or in other terms, the behavior of threads unable to acquire the lock).
FreeBSD's locking scheme presents three different behaviors for contenders:
. spinning
. blocking
. sleeping
[NOTE]
====
The numbers are not arbitrary; they are used as the lock levels referred to below.
====
[[freebsd-spinlocks]]
===== Spinning locks
Spin locks let waiters spin until they can acquire the lock.
An important matter to deal with is that a thread contesting a spin lock is not descheduled.
Since the FreeBSD kernel is preemptive, this exposes spin locks to the risk of deadlocks that can be solved just by disabling interrupts while they are acquired.
For this and other reasons (like lack of priority propagation support, poorness in load balancing schemes between CPUs, etc.), spin locks are intended to protect very small paths of code, or ideally not to be used at all if not explicitly requested (explained later).
[[freebsd-blocking]]
===== Blocking
Block locks let waiters be descheduled and blocked until the lock owner drops the lock and wakes up one or more contenders.
In order to avoid starvation issues, blocking locks do priority propagation from the waiters to the owner.
Block locks must be implemented through the turnstile interface and are intended to be the most used kind of locks in the kernel, if no particular conditions are met.
[[freebsd-sleeping]]
===== Sleeping
Sleep locks let waiters be descheduled and fall asleep until the lock holder drops the lock and wakes up one or more waiters.
Since sleep locks are intended to protect large paths of code and to cater for asynchronous events, they do not do any form of priority propagation.
They must be implemented through the man:sleepqueue[9] interface.
The order used to acquire locks is very important, not only because of the possibility of deadlock due to lock order reversals, but also because lock acquisition should follow specific rules linked to lock natures.
If you look at the list above, the practical rule is that if a thread holds a lock of level n (where the level is the number listed close to the kind of lock) it is not allowed to acquire a lock of a superior level, since this would break the specified semantic for a path.
For example, if a thread holds a block lock (level 2), it is allowed to acquire a spin lock (level 1) but not a sleep lock (level 3), since block locks are intended to protect smaller paths than sleep locks (these rules are not about atomic operations or scheduling barriers, however).
This is a list of locks with their respective behaviors:
* spin mutex - spinning - man:mutex[9]
* sleep mutex - blocking - man:mutex[9]
* pool mutex - blocking - man:mtx_pool[9]
* sleep family - sleeping - man:sleep[9] pause tsleep msleep msleep_spin msleep_rw msleep_sx
* condvar - sleeping - man:condvar[9]
* rwlock - blocking - man:rwlock[9]
* sxlock - sleeping - man:sx[9]
* lockmgr - sleeping - man:lockmgr[9]
* semaphores - sleeping - man:sema[9]
Among these locks only mutexes, sxlocks, rwlocks and lockmgrs are intended to handle recursion, but currently recursion is only supported by mutexes and lockmgrs.
[[freebsd-scheduling]]
===== Scheduling barriers
Scheduling barriers are intended to be used in order to drive the scheduling of threads.
They consist mainly of three different stubs:
* critical sections (and preemption)
* sched_bind
* sched_pin
Generally, these should be used only in a particular context, and even if they can often replace locks, they should be avoided because they do not allow the diagnosis of simple eventual problems with locking debugging tools (such as man:witness[4]).
[[freebsd-critical]]
===== Critical sections
The FreeBSD kernel has been made preemptive basically to deal with interrupt threads.
In fact, in order to avoid high interrupt latency, time-sharing priority threads can be preempted by interrupt threads (in this way, they do not need to wait to be scheduled as the normal path would otherwise require).
Preemption, however, introduces new racing points that need to be handled, as well.
Often, in order to deal with preemption, the simplest thing to do is to completely disable it.
A critical section defines a piece of code (delimited by the pair of functions man:critical_enter[9] and man:critical_exit[9]) where preemption is guaranteed not to happen until the protected code is fully executed.
This can often replace a lock effectively but should be used carefully in order to not lose the whole advantage that preemption brings.
[[freebsd-schedpin]]
===== sched_pin/sched_unpin
Another way to deal with preemption is the `sched_pin()` interface.
If a piece of code is enclosed in the `sched_pin()` and `sched_unpin()` pair of functions, it is guaranteed that the respective thread, even if it can be preempted, will always be executed on the same CPU.
Pinning is very effective in the particular case when we have to access per-CPU data and we assume other threads will not change those data.
In that case a critical section would be too strong a condition for our code.
[[freebsd-schedbind]]
===== sched_bind/sched_unbind
`sched_bind` is an API used in order to bind a thread to a particular CPU for all the time it executes the code, until a `sched_unbind` function call unbinds it.
This feature has a key role in situations where you cannot trust the current state of CPUs (for example, at very early stages of boot), as you want to avoid your thread migrating to inactive CPUs.
Since `sched_bind` and `sched_unbind` manipulate internal scheduler structures, they need to be enclosed in `sched_lock` acquisition/releasing when used.
[[freebsd-proc]]
==== Proc structure
Various emulation layers sometimes require some additional per-process data.
A layer can manage separate structures (a list, a tree etc.) containing these data for every process, but this tends to be slow and memory consuming.
To solve this problem the FreeBSD `proc` structure contains `p_emuldata`, which is a void pointer to some emulation layer specific data.
This `proc` entry is protected by the proc mutex.
The FreeBSD `proc` structure contains a `p_sysent` entry that identifies, which ABI this process is running.
In fact, it is a pointer to the `sysentvec` described above.
So by comparing this pointer to the address where the `sysentvec` structure for the given ABI is stored we can effectively determine whether the process belongs to our emulation layer.
The code typically looks like:
[.programlisting]
....
if (__predict_true(p->p_sysent != &elf_Linux(R)_sysvec))
return;
....
As you can see, we effectively use the `__predict_true` modifier to collapse the most common case (FreeBSD process) to a simple return operation thus preserving high performance.
This code should be turned into a macro because currently it is not very flexible, i.e. we do not support Linux(R)64 emulation nor A.OUT Linux(R) processes on i386.
[[freebsd-vfs]]
==== VFS
The FreeBSD VFS subsystem is very complex but the Linux(R) emulation layer uses just a small subset via a well defined API.
It can either operate on vnodes or file handlers.
A vnode represents a virtual node, i.e. a representation of a node in the VFS.
Another representation is a file handler, which represents an opened file from the perspective of a process.
A file handler can represent a socket or an ordinary file.
A file handler contains a pointer to its vnode.
More than one file handler can point to the same vnode.
[[freebsd-namei]]
===== namei
The man:namei[9] routine is a central entry point to pathname lookup and translation.
It traverses the path point by point from the starting point to the end point using the lookup function, which is internal to VFS.
The man:namei[9] routine can cope with symlinks, absolute and relative paths.
When a path is looked up using man:namei[9] it is entered into the name cache; this behavior can be suppressed.
This routine is used all over the kernel and its performance is very critical.
[[freebsd-vn]]
===== vn_fullpath
The man:vn_fullpath[9] function makes a best effort to traverse the VFS name cache and returns a path for a given (locked) vnode.
This process is unreliable but works just fine for the most common cases.
The unreliability is because it relies on the VFS cache (it does not traverse the on-disk structures), it does not work with hardlinks, etc.
This routine is used in several places in the Linuxulator.
[[freebsd-vnode]]
===== Vnode operations
* `fgetvp` - given a thread and a file descriptor number it returns the associated vnode
* man:vn_lock[9] - locks a vnode
* `vn_unlock` - unlocks a vnode
* man:VOP_READDIR[9] - reads a directory referenced by a vnode
* man:VOP_GETATTR[9] - gets attributes of a file or a directory referenced by a vnode
* man:VOP_LOOKUP[9] - looks up a path to a given directory
* man:VOP_OPEN[9] - opens a file referenced by a vnode
* man:VOP_CLOSE[9] - closes a file referenced by a vnode
* man:vput[9] - decrements the use count for a vnode and unlocks it
* man:vrele[9] - decrements the use count for a vnode
* man:vref[9] - increments the use count for a vnode
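The following is only a hedged sketch of how these primitives typically fit together; error handling is simplified and the exact signatures vary between FreeBSD versions.
[.programlisting]
....
struct vnode *vp;
struct vattr va;
int error;

/* Resolve a file descriptor to its vnode (takes a reference). */
error = fgetvp(td, fd, &vp);
if (error != 0)
    return (error);

/* Lock the vnode, read its attributes, then unlock and release it. */
vn_lock(vp, LK_EXCLUSIVE | LK_RETRY, td);
error = VOP_GETATTR(vp, &va, td->td_ucred, td);
vput(vp);
....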
[[freebsd-file-handler]]
===== File handler operations
* `fget` - given a thread and a file descriptor number it returns associated file handler and references it
* `fdrop` - drops a reference to a file handler
* `fhold` - references a file handler
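Similarly, a minimal illustrative sketch of the file handler reference pattern (again, signatures differ between FreeBSD versions):
[.programlisting]
....
struct file *fp;
int error;

/* Look up the file handler for a descriptor and take a reference. */
error = fget(td, fd, &fp);
if (error != 0)
    return (error);

/* ... use fp, for example fp->f_vnode ... */

fdrop(fp, td);      /* drop the reference when done */
....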
[[md]]
== Linux(R) emulation layer -MD part
This section deals with the implementation of the Linux(R) emulation layer in the FreeBSD operating system.
It first describes the machine dependent part, talking about how and where the interaction between userland and the kernel is implemented.
It talks about syscalls, signals, ptrace, traps, and stack fixup.
This part discusses i386 but it is written generally so other architectures should not differ very much.
The next part is the machine independent part of the Linuxulator.
This section only covers i386 and ELF handling. A.OUT is obsolete and untested.
[[syscall-handling]]
=== Syscall handling
Syscall handling is mostly written in [.filename]#linux_sysvec.c#, which covers most of the routines pointed out in the `sysentvec` structure.
When a Linux(R) process running on FreeBSD issues a syscall, the general syscall routine calls the Linux(R) prepsyscall routine for the Linux(R) ABI.
[[linux-prepsyscall]]
==== Linux(R) prepsyscall
Linux(R) passes arguments to syscalls via registers (that is why it is limited to 6 parameters on i386) while FreeBSD uses the stack.
The Linux(R) prepsyscall routine must copy parameters from registers to the stack.
The order of the registers is: `%ebx`, `%ecx`, `%edx`, `%esi`, `%edi`, `%ebp`.
The catch is that this is true for only _most_ of the syscalls.
Some (most notably `clone`) use a different order, but this is luckily easy to fix by inserting a dummy parameter in the `linux_clone` prototype.
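A rough illustration of the idea follows; it is a sketch only, not the actual [.filename]#linux_sysvec.c# code, and `frame` and `args` are assumed names:
[.programlisting]
....
/* Copy the i386 Linux syscall arguments from the trap frame
 * registers into the argument array, in the order given above. */
args[0] = frame->tf_ebx;
args[1] = frame->tf_ecx;
args[2] = frame->tf_edx;
args[3] = frame->tf_esi;
args[4] = frame->tf_edi;
args[5] = frame->tf_ebp;
....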
[[syscall-writing]]
==== Syscall writing
Every syscall implemented in the Linuxulator must have its prototype with various flags in [.filename]#syscalls.master#.
The form of the file is:
[.programlisting]
....
...
AUE_FORK STD { int linux_fork(void); }
...
AUE_CLOSE NOPROTO { int close(int fd); }
...
....
The first column represents the syscall number.
The second column is for auditing support.
The third column represents the syscall type.
It is one of `STD`, `OBSOL`, `NOPROTO`, or `UNIMPL`.
`STD` is a standard syscall with full prototype and implementation.
`OBSOL` is obsolete and defines just the prototype.
`NOPROTO` means that the syscall is implemented elsewhere so do not prepend ABI prefix, etc.
`UNIMPL` means that the syscall will be substituted with the `nosys` syscall (a syscall just printing out a message about the syscall not being implemented and returning `ENOSYS`).
From [.filename]#syscalls.master# a script generates three files: [.filename]#linux_syscall.h#, [.filename]#linux_proto.h# and [.filename]#linux_sysent.c#.
The [.filename]#linux_syscall.h# contains definitions of syscall names and their numerical value, e.g.:
[.programlisting]
....
...
#define LINUX_SYS_linux_fork 2
...
#define LINUX_SYS_close 6
...
....
The [.filename]#linux_proto.h# contains structure definitions of arguments to every syscall, e.g.:
[.programlisting]
....
struct linux_fork_args {
    register_t dummy;
};
....
And finally, [.filename]#linux_sysent.c# contains structure describing the system entry table, used to actually dispatch a syscall, e.g.:
[.programlisting]
....
{ 0, (sy_call_t *)linux_fork, AUE_FORK, NULL, 0, 0 }, /* 2 = linux_fork */
{ AS(close_args), (sy_call_t *)close, AUE_CLOSE, NULL, 0, 0 }, /* 6 = close */
....
As you can see, `linux_fork` is implemented in the Linuxulator itself so the definition is of `STD` type and has no arguments, which is exhibited by the dummy argument structure.
On the other hand, `close` is just an alias for the real FreeBSD man:close[2], so it has no Linux arguments structure associated with it, and in the system entry table it is not prefixed with linux, as it calls the real man:close[2] in the kernel.
[[dummy-syscalls]]
==== Dummy syscalls
The Linux(R) emulation layer is not complete, as some syscalls are not implemented properly and some are not implemented at all.
The emulation layer employs a facility to mark unimplemented syscalls with the `DUMMY` macro.
These dummy definitions reside in [.filename]#linux_dummy.c# in the form of `DUMMY(syscall);`, which is then translated to various syscall auxiliary files, and the implementation consists of printing a message saying that this syscall is not implemented.
The `UNIMPL` prototype is not used because we want to be able to identify the name of the syscall that was called in order to know what syscalls are more important to implement.
[[signal-handling]]
=== Signal handling
Signal handling is done in the general FreeBSD kernel code for all binary compatibility layers, with a call into a compat-dependent layer.
The Linux(R) compatibility layer defines the `linux_sendsig` routine for this purpose.
[[linux-sendsig]]
==== Linux(R) sendsig
This routine first checks whether the signal has been installed with `SA_SIGINFO`, in which case it calls the `linux_rt_sendsig` routine instead.
Furthermore, it allocates (or reuses an already existing) signal handle context, then it builds a list of arguments for the signal handler.
It translates the signal number based on the signal translation table, assigns a handler, and translates the sigset.
Then it saves context for the `sigreturn` routine (various registers, translated trap number and signal mask).
Finally, it copies out the signal context to the userspace and prepares context for the actual signal handler to run.
[[linux-rt-sendsig]]
==== linux_rt_sendsig
This routine is similar to `linux_sendsig`; only the signal context preparation differs.
It adds `siginfo`, `ucontext`, and some POSIX(R) parts.
It might be worth considering whether those two functions could be merged, reducing code duplication and possibly even yielding faster execution.
[[linux-sigreturn]]
==== linux_sigreturn
This syscall is used for return from the signal handler.
It does some security checks and restores the original process context.
It also unmasks the signal in the process signal mask.
[[ptrace]]
=== Ptrace
Many UNIX(R) derivatives implement the man:ptrace[2] syscall in order to allow various tracking and debugging features.
This facility enables the tracing process to obtain various information about the traced process, like register dumps and any memory from the process address space, and also to trace the process, such as stepping over an instruction or stopping at system entries (syscalls and traps).
man:ptrace[2] also lets you set various information in the traced process (registers etc.).
man:ptrace[2] is a UNIX(R)-wide standard implemented in most UNIX(R)es around the world.
Linux(R) emulation in FreeBSD implements the man:ptrace[2] facility in [.filename]#linux_ptrace.c#.
It contains the routines for converting registers between Linux(R) and FreeBSD and the actual man:ptrace[2] syscall emulation.
The syscall is a long switch block that implements its counterpart in FreeBSD for every man:ptrace[2] command.
The man:ptrace[2] commands are mostly equal between Linux(R) and FreeBSD so usually just a small modification is needed.
For example, `PT_GETREGS` in Linux(R) operates on direct data while FreeBSD uses a pointer to the data so after performing a (native) man:ptrace[2] syscall, a copyout must be done to preserve Linux(R) semantics.
The man:ptrace[2] implementation in Linuxulator has some known weaknesses.
Panics have been seen when using `strace` (which is a man:ptrace[2] consumer) in the Linuxulator environment.
Also `PT_SYSCALL` is not implemented.
[[traps]]
=== Traps
Whenever a Linux(R) process running in the emulation layer traps, the trap itself is handled transparently, with the only exception being the trap translation.
Linux(R) and FreeBSD differ in opinion about what a trap is, so this is dealt with here.
The code is actually very short:
[.programlisting]
....
static int
translate_traps(int signal, int trap_code)
{
    if (signal != SIGBUS)
        return signal;
    switch (trap_code) {
    case T_PROTFLT:
    case T_TSSFLT:
    case T_DOUBLEFLT:
    case T_PAGEFLT:
        return SIGSEGV;
    default:
        return signal;
    }
}
....
[[stack-fixup]]
=== Stack fixup
The RTLD run-time link-editor expects so-called AUX tags on the stack during an `execve`, so a fixup must be done to ensure this.
Of course, every RTLD system is different, so the emulation layer must provide its own stack fixup routine to do this.
The Linuxulator does too.
The `elf_linux_fixup` routine simply copies out the AUX tags to the stack and adjusts the stack of the user space process to point right after those tags.
So RTLD can do its work.
[[aout-support]]
=== A.OUT support
The Linux(R) emulation layer on i386 also supports Linux(R) A.OUT binaries.
Pretty much everything described in the previous sections must be implemented for A.OUT support (besides trap translation and signal sending).
The support for A.OUT binaries is no longer maintained; in particular, the 2.6 emulation does not work with it, but this does not cause any problem, as the linux-base ports probably do not support A.OUT binaries at all.
This support will probably be removed in the future.
Most of the code necessary for loading Linux(R) A.OUT binaries is in the [.filename]#imgact_linux.c# file.
[[mi]]
== Linux(R) emulation layer -MI part
This section talks about the machine independent part of the Linuxulator.
It covers the emulation infrastructure needed for Linux(R) 2.6 emulation, the thread local storage (TLS) implementation (on i386) and futexes.
Then we talk briefly about some syscalls.
[[nptl-desc]]
=== Description of NPTL
One of the major areas of progress in development of Linux(R) 2.6 was threading.
Prior to 2.6, the Linux(R) threading support was implemented in the linuxthreads library.
The library was a partial implementation of POSIX(R) threading.
The threading was implemented using separate processes for each thread using the `clone` syscall to let them share the address space (and other things).
The main weaknesses of this approach were that every thread had a different PID, signal handling was broken (from the pthreads perspective), etc.
Also the performance was not very good (use of `SIGUSR` signals for thread synchronization, kernel resource consumption, etc.), so to overcome these problems a new threading system was developed and named NPTL.
The NPTL library focused on two things but a third thing came along so it is usually considered a part of NPTL.
Those two things were embedding of threads into a process structure and futexes.
The additional third thing was TLS, which is not directly required by NPTL but the whole NPTL userland library depends on it.
Those improvements yielded much improved performance and standards conformance.
NPTL is the standard threading library in Linux(R) systems these days.
The FreeBSD Linuxulator implementation approaches NPTL in three main areas: TLS, futexes, and PID mangling, which is meant to simulate Linux(R) threads.
Further sections describe each of these areas.
[[linux26-emu]]
=== Linux(R) 2.6 emulation infrastructure
These sections deal with the way Linux(R) threads are managed and how we simulate that in FreeBSD.
[[linux26-runtime]]
==== Runtime determining of 2.6 emulation
The Linux(R) emulation layer in FreeBSD supports runtime setting of the emulated version.
This is done via man:sysctl[8], namely `compat.linux.osrelease`.
Setting this man:sysctl[8] affects runtime behavior of the emulation layer.
When set to 2.6.x it sets the value of `linux_use_linux26` while setting to something else keeps it unset.
This variable (plus per-prison variables of the very same kind) determines whether 2.6 infrastructure (mainly PID mangling) is used in the code or not.
The version setting is done system-wide and this affects all Linux(R) processes.
The man:sysctl[8] should not be changed when running any Linux(R) binary as it might harm things.
[[linux-proc-thread]]
==== Linux(R) processes and thread identifiers
The semantics of Linux(R) threading are a little confusing and use entirely different nomenclature from FreeBSD.
A process in Linux(R) consists of a `struct task` embedding two identifier fields - PID and TGID.
PID is _not_ a process ID but it is a thread ID.
The TGID identifies a thread group, in other words a process.
For a single-threaded process the PID equals the TGID.
The thread in NPTL is just an ordinary process that happens to have TGID not equal to PID and have a group leader not equal to itself (and shared VM etc. of course).
Everything else happens in the same way as to an ordinary process.
There is no separation of a shared status to some external structure like in FreeBSD.
This creates some duplication of information and possible data inconsistency.
The Linux(R) kernel seems to use task -> group information in some places and task information elsewhere and it is really not very consistent and looks error-prone.
Every NPTL thread is created by a call to the `clone` syscall with a specific set of flags (more in the next subsection).
The NPTL implements strict 1:1 threading.
In FreeBSD we emulate NPTL threads with ordinary FreeBSD processes that share VM space, etc., and the PID gymnastics is just mimicked in the emulation-specific structure attached to the process. The structure attached to the process looks like:
[.programlisting]
....
struct linux_emuldata {
    pid_t pid;
    int *child_set_tid;     /* in clone(): Child's TID to set on clone */
    int *child_clear_tid;   /* in clone(): Child's TID to clear on exit */
    struct linux_emuldata_shared *shared;
    int pdeath_signal;      /* parent death signal */
    LIST_ENTRY(linux_emuldata) threads;     /* list of linux threads */
};
....
The PID is used to identify the FreeBSD process that attaches this structure.
The `child_set_tid` and `child_clear_tid` members are used for copying out the TID at the given address when a process is created and when it exits.
The `shared` pointer points to a structure shared among threads.
The `pdeath_signal` variable identifies the parent death signal and the `threads` pointer is used to link this structure to the list of threads.
The `linux_emuldata_shared` structure looks like:
[.programlisting]
....
struct linux_emuldata_shared {
    int refs;
    pid_t group_pid;
    LIST_HEAD(, linux_emuldata) threads;    /* head of list of linux threads */
};
....
The `refs` member is a reference counter used to determine when we can free the structure, to avoid memory leaks.
The `group_pid` identifies the PID (= TGID) of the whole process (= thread group).
The `threads` pointer is the head of the list of threads in the process.
The `linux_emuldata` structure can be obtained from the process using `em_find`.
The prototype of the function is:
[.programlisting]
....
struct linux_emuldata *em_find(struct proc *, int locked);
....
Here, `proc` is the process we want the emuldata structure from, and the `locked` parameter determines whether we want it locked or not.
The accepted values are `EMUL_DOLOCK` and `EMUL_DOUNLOCK`.
More about locking later.
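A hypothetical usage sketch follows; `td` is assumed to be the current thread and the unlock step is only hinted at, because the exact unlock macro is not shown in this article:
[.programlisting]
....
struct linux_emuldata *em;

/* Look up our emulation data, locked. */
em = em_find(td->td_proc, EMUL_DOLOCK);
if (em == NULL)
    return (ESRCH);
printf("Linux TGID is %d\n", em->shared->group_pid);
/* ... drop emul_lock before returning ... */
....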
[[pid-mangling]]
==== PID mangling
As there is a difference between FreeBSD and Linux(R) in the view of what a process ID and a thread ID are, we have to translate that view somehow.
We do it by PID mangling.
This means that we fake what a PID (=TGID) and TID (=PID) is between kernel and userland.
The rule of thumb is that in the kernel (in the Linuxulator) PID = PID and TGID = shared -> group_pid, while to userland we present `PID = shared -> group_pid` and `TID = proc -> p_pid`.
The PID member of the `linux_emuldata` structure is a FreeBSD PID.
The above affects mainly the getpid, getppid and gettid syscalls, where we use the PID or TGID respectively.
In the copyout of TIDs in `child_clear_tid` and `child_set_tid` we copy out the FreeBSD PID.
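As a purely hypothetical illustration of these rules (the helpers below do not exist in the source; the real getpid/gettid emulation differs in details and locking):
[.programlisting]
....
/* Userland-visible "PID" is the thread group ID. */
pid_t
linux_view_pid(struct linux_emuldata *em)
{
    return (em->shared->group_pid);
}

/* Userland-visible "TID" is the FreeBSD PID. */
pid_t
linux_view_tid(struct proc *p)
{
    return (p->p_pid);
}
....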
[[clone-syscall]]
==== Clone syscall
The `clone` syscall is the way threads are created in Linux(R).
The syscall prototype looks like this:
[.programlisting]
....
int linux_clone(l_int flags, void *stack, void *parent_tidptr, int dummy,
    void *child_tidptr);
....
The `flags` parameter tells the syscall how exactly the processes should be cloned.
As described above, Linux(R) can create processes sharing various things independently, for example two processes can share file descriptors but not VM, etc.
Last byte of the `flags` parameter is the exit signal of the newly created process.
The `stack` parameter, if non-`NULL`, tells where the thread stack is, and if it is `NULL` we are supposed to copy-on-write the calling process stack (i.e. do what the normal man:fork[2] routine does).
The `parent_tidptr` parameter is used as an address for copying out process PID (i.e. thread id) once the process is sufficiently instantiated but is not runnable yet.
The `dummy` parameter is here because of the very strange calling convention of this syscall on i386.
It uses the registers directly and does not let the compiler do it, which results in the need for a dummy parameter.
The `child_tidptr` parameter is used as an address for copying out PID once the process has finished forking and when the process exits.
The syscall itself proceeds by setting corresponding flags depending on the flags passed in.
For example, `CLONE_VM` maps to RFMEM (sharing of VM), etc.
The only nit here is `CLONE_FS` and `CLONE_FILES`, because FreeBSD does not allow setting these separately, so we fake it by not setting RFFDG (copying of the fd table and other fs information) if either of these is defined.
This does not cause any problems, because those flags are always set together.
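A hedged sketch of that flag translation follows; the constant names on the Linux(R) side are assumptions and only the mapping logic follows the text above:
[.programlisting]
....
int ff;

/* Translate Linux(R) clone() flags into rfork()-style flags. */
ff = RFPROC | RFSTOPPED;            /* new process, not runnable yet */
if (flags & LINUX_CLONE_VM)
    ff |= RFMEM;                    /* share the address space */
if (!(flags & (LINUX_CLONE_FS | LINUX_CLONE_FILES)))
    ff |= RFFDG;                    /* copy the fd table only if neither is set */
....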
After setting the flags the process is forked using the internal `fork1` routine; the process is instrumented not to be put on a run queue, i.e. not to be set runnable.
After the forking is done we possibly reparent the newly created process to emulate `CLONE_PARENT` semantics.
Next part is creating the emulation data.
Threads in Linux(R) do not signal their parents, so we set the exit signal to 0 to disable this.
After that, `child_set_tid` and `child_clear_tid` are set, enabling the functionality later in the code.
At this point we copy out the PID to the address specified by `parent_tidptr`.
The setting of process stack is done by simply rewriting thread frame `%esp` register (`%rsp` on amd64).
Next part is setting up TLS for the newly created process.
After this man:vfork[2] semantics might be emulated and finally the newly created process is put on a run queue and copying out its PID to the parent process via `clone` return value is done.
The `clone` syscall is able to emulate, and in fact is used to emulate, the classic man:fork[2] and man:vfork[2] syscalls.
Newer glibc, in the case of a 2.6 kernel, uses `clone` to implement the man:fork[2] and man:vfork[2] syscalls.
[[locking]]
==== Locking
The locking is implemented to be per-subsystem because we do not expect a lot of contention on these.
There are two locks: `emul_lock` used to protect manipulating of `linux_emuldata` and `emul_shared_lock` used to manipulate `linux_emuldata_shared`.
The `emul_lock` is a nonsleepable blocking mutex while `emul_shared_lock` is a sleepable blocking `sx_lock`.
Due to the per-subsystem locking we can coalesce some lock acquisitions, and that is why `em_find` offers non-locking access.
[[tls]]
=== TLS
This section deals with TLS also known as thread local storage.
[[trheading-intro]]
==== Introduction to threading
Threads in computer science are entities within a process that can be scheduled independently from each other.
The threads in the process share process wide data (file descriptors, etc.) but also have their own stack for their own data.
Sometimes there is a need for process-wide data specific to a given thread.
Imagine a name of the thread in execution or something like that.
The traditional UNIX(R) threading API, pthreads, provides a way to do it via man:pthread_key_create[3], man:pthread_setspecific[3] and man:pthread_getspecific[3], where a thread can create a key to the thread local data and use man:pthread_setspecific[3] and man:pthread_getspecific[3] to manipulate those data.
You can easily see that this is not the most comfortable way this could be accomplished.
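For illustration, a minimal userland example of this API (unrelated to the Linuxulator itself):
[.programlisting]
....
#include <pthread.h>
#include <stdio.h>

static pthread_key_t name_key;

static void *
worker(void *arg)
{
    /* Store and read back the per-thread "name". */
    pthread_setspecific(name_key, arg);
    printf("thread name: %s\n", (char *)pthread_getspecific(name_key));
    return (NULL);
}

int
main(void)
{
    pthread_t t1, t2;

    pthread_key_create(&name_key, NULL);
    pthread_create(&t1, NULL, worker, "alpha");
    pthread_create(&t2, NULL, worker, "beta");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return (0);
}
....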
So various producers of C/C++ compilers introduced a better way.
They defined a new modifier keyword, `__thread`, that specifies that a variable is thread specific.
A new method of accessing such variables was developed as well (at least on i386).
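A tiny usage sketch of this keyword, purely as an illustration:
[.programlisting]
....
/* One independent copy of this counter exists in every thread. */
static __thread int request_count;

void
note_request(void)
{
    request_count++;    /* no locking needed: the variable is thread-local */
}
....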
The pthreads method tends to be implemented in userspace as a trivial lookup table.
The performance of such a solution is not very good.
So the new method uses (on i386) segment registers to address a segment where the TLS area is stored, so the actual accessing of a thread variable is just appending the segment register prefix to the address and addressing through it.
The segment registers are usually `%gs` and `%fs` acting like segment selectors.
Every thread has its own area where the thread local data are stored and the segment must be loaded on every context switch.
This method is very fast and used almost exclusively in the whole i386 UNIX(R) world.
Both FreeBSD and Linux(R) implement this approach and it yields very good results.
The only drawback is the need to reload the segment on every context switch, which can slow down context switches.
FreeBSD tries to avoid this overhead by using only 1 segment descriptor for this while Linux(R) uses 3.
An interesting thing is that almost nothing uses more than 1 descriptor (only Wine seems to use 2), so Linux(R) pays this unnecessary price for context switches.
[[i386-segs]]
==== Segments on i386
The i386 architecture implements so-called segments.
A segment is a description of an area of memory: the base address (bottom) of the memory area, its end (ceiling), type, protection, etc.
The memory described by a segment can be accessed using segment selector registers (`%cs`, `%ds`, `%ss`, `%es`, `%fs`, `%gs`).
For example, let us suppose we have a segment whose base address is 0x1234, and this code:
[.programlisting]
....
mov %edx,%gs:0x10
....
This will load the content of the `%edx` register into memory location 0x1244.
Some segment registers have a special use, for example `%cs` is used for code segment and `%ss` is used for stack segment but `%fs` and `%gs` are generally unused.
Segments are either stored in a global GDT table or in a local LDT table.
LDT is accessed via an entry in the GDT.
The LDT can store more types of segments.
LDT can be per process.
Both tables define up to 8191 entries.
[[linux-i386]]
==== Implementation on Linux(R) i386
There are two main ways of setting up TLS in Linux(R).
It can be set when cloning a process using the `clone` syscall or it can call `set_thread_area`.
When a process passes the `CLONE_SETTLS` flag to `clone`, the kernel expects the memory pointed to by the `%esi` register to contain a Linux(R) user space representation of a segment, which gets translated to the machine representation of a segment and loaded into a GDT slot.
The GDT slot can be specified with a number or -1 can be used meaning that the system itself should choose the first free slot.
In practice, the vast majority of programs use only one TLS entry and do not care about the number of the entry.
We exploit this in the emulation and in fact depend on it.
[[tls-emu]]
==== Emulation of Linux(R) TLS
[[tls-i386]]
===== i386
Loading of TLS for the current thread happens by calling `set_thread_area`, while loading TLS for a newly created process in `clone` is done in a separate block in `clone`.
Those two functions are very similar.
The only difference being the actual loading of the GDT segment, which happens on the next context switch for the newly created process while `set_thread_area` must load this directly.
The code basically does this: it copies the Linux(R) form segment descriptor from userland.
The code checks for the number of the descriptor but because this differs between FreeBSD and Linux(R) we fake it a little.
We only support indexes of 6, 3 and -1.
The 6 is genuine Linux(R) number, 3 is genuine FreeBSD one and -1 means autoselection.
Then we set the descriptor number to constant 3 and copy out this to the userspace.
We rely on the userspace process using the number from the descriptor, but this works most of the time (we have never seen a case where it did not) as the userspace process typically passes in 1.
Then we convert the descriptor from the Linux(R) form to a machine dependent form (i.e. operating system independent form) and copy this to the FreeBSD defined segment descriptor.
Finally we can load it.
We assign the descriptor to the thread's PCB (process control block) and load the `%gs` segment using `load_gs`.
This loading must be done in a critical section so that nothing can interrupt us.
The `CLONE_SETTLS` case works exactly like this, except that the loading using `load_gs` is not performed.
The segment used for this (segment number 3) is shared for this use between FreeBSD processes and Linux(R) processes so the Linux(R) emulation layer does not add any overhead over plain FreeBSD.
[[tls-amd64]]
===== amd64
The amd64 implementation is similar to the i386 one but there was initially no 32bit segment descriptor used for this purpose (hence not even native 32bit TLS users worked) so we had to add such a segment and implement its loading on every context switch (when a flag signaling use of 32bit is set).
Apart from this the TLS loading is exactly the same just the segment numbers are different and the descriptor format and the loading differs slightly.
[[futexes]]
=== Futexes
[[sync-intro]]
==== Introduction to synchronization
Threads need some kind of synchronization and POSIX(R) provides some of them: mutexes for mutual exclusion, read-write locks for mutual exclusion with biased ratio of reads and writes and condition variables for signaling a status change.
It is interesting to note that POSIX(R) threading API lacks support for semaphores.
The implementations of those synchronization routines are heavily dependent on the type of threading support we have.
In a pure 1:M (userspace) model the implementation can be done solely in userspace and thus be very fast (the condition variables will probably end up being implemented using signals, i.e. not fast) and simple.
In 1:1 model, the situation is also quite clear - the threads must be synchronized using kernel facilities (which is very slow because a syscall must be performed).
The mixed M:N scenario just combines the first and second approach, or relies solely on the kernel.
Thread synchronization is a vital part of thread-enabled programming and its performance can affect the resulting program a lot.
Recent benchmarks on the FreeBSD operating system showed that an improved sx_lock implementation yielded a 40% speedup in _ZFS_ (a heavy sx user); this is in-kernel stuff but it shows clearly how important the performance of synchronization primitives is.
Threaded programs should be written with as little contention on locks as possible.
Otherwise, instead of doing useful work the thread just waits on a lock.
As a result, the most well written threaded programs show little lock contention.
[[futex-intro]]
==== Futexes introduction
Linux(R) implements 1:1 threading, i.e. it has to use in-kernel synchronization primitives.
As stated earlier, well written threaded programs have little lock contention.
So a typical lock/unlock sequence can be performed with two atomic increments/decrements of a mutex reference counter, which is very fast, as presented by the following example:
[.programlisting]
....
pthread_mutex_lock(&mutex);
...
pthread_mutex_unlock(&mutex);
....
1:1 threading forces us to perform two syscalls for those mutex calls, which is very slow.
The solution Linux(R) 2.6 implements is called futexes.
Futexes implement the check for contention in userspace and call kernel primitives only in a case of contention.
Thus the typical case takes place without any kernel intervention.
This yields reasonably fast and flexible synchronization primitives implementation.
[[futex-api]]
==== Futex API
The futex syscall looks like this:
[.programlisting]
....
int futex(void *uaddr, int op, int val, struct timespec *timeout, void *uaddr2, int val3);
....
In this example `uaddr` is an address of the mutex in userspace, `op` is an operation we are about to perform and the other parameters have per-operation meaning.
Futexes implement the following operations:
* `FUTEX_WAIT`
* `FUTEX_WAKE`
* `FUTEX_FD`
* `FUTEX_REQUEUE`
* `FUTEX_CMP_REQUEUE`
* `FUTEX_WAKE_OP`
[[futex-wait]]
===== FUTEX_WAIT
This operation verifies that the value `val` is written at address `uaddr`.
If not, `EWOULDBLOCK` is returned; otherwise the thread is queued on the futex and gets suspended.
If the argument `timeout` is non-zero it specifies the maximum time for the sleeping, otherwise the sleeping is infinite.
[[futex-wake]]
===== FUTEX_WAKE
This operation takes a futex at `uaddr` and wakes up the first `val` threads queued on this futex.
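A hedged sketch of how a userland lock might use these two operations; the names below are assumptions, this is not glibc's actual NPTL code, and `futex()` stands for the raw syscall with the prototype shown earlier:
[.programlisting]
....
#include <stddef.h>
#include <sys/types.h>
#include <machine/atomic.h>
#include <time.h>

#define FUTEX_WAIT  0   /* operation numbers as defined by the Linux ABI */
#define FUTEX_WAKE  1

/* Stand-in declaration for the raw futex syscall described above. */
int futex(volatile u_int *uaddr, int op, int val, struct timespec *timeout,
    volatile u_int *uaddr2, int val3);

void
lock(volatile u_int *lockword)
{
    /* Fast path: one atomic operation, no syscall when uncontended. */
    while (atomic_cmpset_acq_int(lockword, 0, 1) == 0)
        /* Contended: sleep in the kernel while the value is still 1. */
        futex(lockword, FUTEX_WAIT, 1, NULL, NULL, 0);
}

void
unlock(volatile u_int *lockword)
{
    atomic_store_rel_int(lockword, 0);
    /* A real implementation skips the wakeup when nobody is waiting. */
    futex(lockword, FUTEX_WAKE, 1, NULL, NULL, 0);
}
....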
[[futex-fd]]
===== FUTEX_FD
This operation associates a file descriptor with a given futex.
[[futex-requeue]]
===== FUTEX_REQUEUE
This operation takes `val` threads queued on futex at `uaddr`, wakes them up, and takes `val2` next threads and requeues them on futex at `uaddr2`.
[[futex-cmp-requeue]]
===== FUTEX_CMP_REQUEUE
This operation does the same as `FUTEX_REQUEUE` but it checks that `val3` equals `val` first.
[[futex-wake-op]]
===== FUTEX_WAKE_OP
This operation performs an atomic operation using `val3` (which encodes the operation and another value) and `uaddr`.
Then it wakes up `val` threads on futex at `uaddr` and if the atomic operation returned a positive number it wakes up `val2` threads on futex at `uaddr2`.
The operations implemented in `FUTEX_WAKE_OP`:
* `FUTEX_OP_SET`
* `FUTEX_OP_ADD`
* `FUTEX_OP_OR`
* `FUTEX_OP_AND`
* `FUTEX_OP_XOR`
[NOTE]
====
There is no `val2` parameter in the futex prototype.
The `val2` is taken from the `struct timespec *timeout` parameter for operations `FUTEX_REQUEUE`, `FUTEX_CMP_REQUEUE` and `FUTEX_WAKE_OP`.
====
[[futex-emu]]
==== Futex emulation in FreeBSD
The futex emulation in FreeBSD is taken from NetBSD and further extended by us.
It is placed in the [.filename]#linux_futex.c# and [.filename]#linux_futex.h# files.
The `futex` structure looks like:
[.programlisting]
....
struct futex {
    void *f_uaddr;
    int f_refcount;
    LIST_ENTRY(futex) f_list;
    TAILQ_HEAD(lf_waiting_paroc, waiting_proc) f_waiting_proc;
};
....
And the structure `waiting_proc` is:
[.programlisting]
....
struct waiting_proc {
    struct thread *wp_t;
    struct futex *wp_new_futex;
    TAILQ_ENTRY(waiting_proc) wp_list;
};
....
[[futex-get]]
===== futex_get / futex_put
A futex is obtained using the `futex_get` function, which searches a linear list of futexes and returns the found one or creates a new futex.
When releasing a futex from use we call the `futex_put` function, which decreases the reference counter of the futex; when the refcount reaches zero the futex is released.
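A hedged sketch of the lookup-or-create logic just described; the global list name and malloc type are assumptions, not the actual source:
[.programlisting]
....
/* Assumed global list of all futexes; the real source may differ. */
static LIST_HEAD(, futex) futex_list = LIST_HEAD_INITIALIZER(futex_list);

struct futex *
futex_get_sketch(void *uaddr)
{
    struct futex *f;

    /* Reuse an existing futex for this userspace address, if any. */
    LIST_FOREACH(f, &futex_list, f_list) {
        if (f->f_uaddr == uaddr) {
            f->f_refcount++;
            return (f);
        }
    }

    /* Otherwise create a new one with a single reference. */
    f = malloc(sizeof(*f), M_LINUX, M_WAITOK | M_ZERO);
    f->f_uaddr = uaddr;
    f->f_refcount = 1;
    TAILQ_INIT(&f->f_waiting_proc);
    LIST_INSERT_HEAD(&futex_list, f, f_list);
    return (f);
}
....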
[[futex-sleep]]
===== futex_sleep
When a futex queues a thread for sleeping, it creates a `waiting_proc` structure, puts this structure on the list inside the futex structure, and then just performs a man:tsleep[9] to suspend the thread.
The sleep can be timed out.
After man:tsleep[9] returns (the thread was woken up or it timed out) the `waiting_proc` structure is removed from the list and is destroyed.
All this is done in the `futex_sleep` function.
If we got woken up from `futex_wake` we have `wp_new_futex` set so we sleep on it.
This way the actual requeueing is done in this function.
[[futex-wake-2]]
===== futex_wake
Waking up a thread sleeping on a futex is performed in the `futex_wake` function.
First in this function we mimic the strange Linux(R) behavior, where it wakes up N threads for all operations; the only exception is that the REQUEUE operations are performed on N+1 threads.
But this usually does not make any difference as we are waking up all threads.
Next in the function, in the loop, we wake up n threads; after this we check if there is a new futex for requeueing.
If so, we requeue up to n2 threads on the new futex.
This cooperates with `futex_sleep`.
[[futex-wake-op-2]]
===== futex_wake_op
The `FUTEX_WAKE_OP` operation is quite complicated.
First we obtain two futexes at addresses `uaddr` and `uaddr2`, then we perform the atomic operation using `val3` and `uaddr2`.
Then `val` waiters on the first futex are woken up and, if the atomic operation condition holds, we wake up `val2` (i.e. `timeout`) waiters on the second futex.
[[futex-atomic-op]]
===== futex atomic operation
The atomic operation takes two parameters `encoded_op` and `uaddr`.
The encoded operation encodes the operation itself, the comparison value, the operation argument, and the comparison argument.
The pseudocode for the operation is like this one:
[.programlisting]
....
oldval = *uaddr2
*uaddr2 = oldval OP oparg
....
And this is done atomically. First, the number at `uaddr` is copied in and the operation is performed.
The code handles page faults, and if no page fault occurs `oldval` is compared to the `cmparg` argument with the cmp comparator.
[[futex-locking]]
===== Futex locking
The futex implementation uses two lock lists protecting `sx_lock` and global locks (either Giant or another `sx_lock`).
Every operation is performed locked from the start to the very end.
[[syscall-impl]]
=== Various syscalls implementation
In this section I am going to describe some smaller syscalls that are worth mentioning because their implementation is not obvious or because those syscalls are interesting from another point of view.
[[syscall-at]]
==== *at family of syscalls
During development of Linux(R) 2.6.16 kernel, the *at syscalls were added.
Those syscalls (`openat` for example) work exactly like their at-less counterparts with the slight exception of the `dirfd` parameter.
This parameter changes where the given file, on which the syscall is to be performed, is looked up.
When the `filename` parameter is absolute `dirfd` is ignored, but when the path to the file is relative, it comes into play.
The `dirfd` parameter is a directory relative to which the relative pathname is checked.
The `dirfd` parameter is a file descriptor of some directory or `AT_FDCWD`.
So for example the `openat` syscall can be like this:
[.programlisting]
....
file descriptor 123 = /tmp/foo/, current working directory = /tmp/
openat(123, "/tmp/bah", flags, mode) /* opens /tmp/bah */
openat(123, "bah", flags, mode) /* opens /tmp/foo/bah */
openat(AT_FDCWD, "bah", flags, mode) /* opens /tmp/bah */
openat(stdio, "bah", flags, mode) /* returns error because stdio is not a directory */
....
This infrastructure is necessary to avoid races when opening files outside the working directory.
Imagine that a process consists of two threads, thread A and thread B.
Thread A issues `open("/tmp/foo/bah", flags, mode)` and before returning it gets preempted and thread B runs.
Thread B does not care about the needs of thread A and renames or removes [.filename]#/tmp/foo/#.
We got a race.
To avoid this we can open [.filename]#/tmp/foo# and use it as `dirfd` for `openat` syscall.
This also enables user to implement per-thread working directories.
Linux(R) family of *at syscalls contains: `linux_openat`, `linux_mkdirat`, `linux_mknodat`, `linux_fchownat`, `linux_futimesat`, `linux_fstatat64`, `linux_unlinkat`, `linux_renameat`, `linux_linkat`, `linux_symlinkat`, `linux_readlinkat`, `linux_fchmodat` and `linux_faccessat`.
All these are implemented using the modified man:namei[9] routine and simple wrapping layer.
[[implementation]]
===== Implementation
The implementation is done by altering the man:namei[9] routine (described above) to take an additional parameter `dirfd` in its `nameidata` structure, which specifies the starting point of the pathname lookup instead of using the current working directory every time.
The resolution of `dirfd` from file descriptor number to a vnode is done in native *at syscalls.
When `dirfd` is `AT_FDCWD` the `dvp` entry in the `nameidata` structure is `NULL`, but when `dirfd` is a different number we obtain the file for this file descriptor, check whether this file is valid and, if it has a vnode attached to it, get the vnode. Then we check this vnode for being a directory.
In the actual man:namei[9] routine we simply substitute the `dvp` vnode for `dp` variable in the man:namei[9] function, which determines the starting point.
The man:namei[9] routine is not used directly but via a chain of different functions on various levels.
For example the `openat` goes like this:
[.programlisting]
....
openat() --> kern_openat() --> vn_open() --> namei()
....
For this reason `kern_open` and `vn_open` must be altered to incorporate the additional `dirfd` parameter.
No compat layer is created for those because there are not many users of this and the users can be easily converted.
This general implementation enables FreeBSD to implement its own *at syscalls.
This is being discussed right now.
[[ioctl]]
==== Ioctl
The ioctl interface is quite fragile due to its generality.
We have to bear in mind that devices differ between Linux(R) and FreeBSD, so some care must be applied to make ioctl emulation work right.
The ioctl handling is implemented in [.filename]#linux_ioctl.c#, where `linux_ioctl` function is defined.
This function simply iterates over sets of ioctl handlers to find a handler that implements a given command.
The ioctl syscall has three parameters, the file descriptor, command and an argument.
The command is a 16-bit number, which in theory is divided into high 8 bits determining the class of the ioctl command and low 8 bits, which are the actual command within the given set.
The emulation takes advantage of this division.
We implement handlers for each set, like `sound_handler` or `disk_handler`.
Each handler has a maximum command and a minimum command defined, which is used for determining what handler is used.
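A hedged sketch of that per-set dispatch idea follows; the list name, handler layout and error value are assumptions and the real [.filename]#linux_ioctl.c# differs in detail:
[.programlisting]
....
struct ioctl_handler_sketch {
    int (*func)(struct thread *, struct linux_ioctl_args *);
    int low, high;      /* command range served by this handler */
    TAILQ_ENTRY(ioctl_handler_sketch) list;
};

static TAILQ_HEAD(, ioctl_handler_sketch) handlers =
    TAILQ_HEAD_INITIALIZER(handlers);

static int
dispatch_sketch(struct thread *td, struct linux_ioctl_args *args)
{
    struct ioctl_handler_sketch *h;

    /* Find the first handler whose command range covers this command. */
    TAILQ_FOREACH(h, &handlers, list) {
        if ((int)(args->cmd & 0xffff) >= h->low &&
            (int)(args->cmd & 0xffff) <= h->high)
            return (h->func(td, args));
    }
    return (EINVAL);    /* no handler found */
}
....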
There are slight problems with this approach because Linux(R) does not use the set division consistently so sometimes ioctls for a different set are inside a set they should not belong to (SCSI generic ioctls inside cdrom set, etc.).
FreeBSD currently does not implement many Linux(R) ioctls (compared to NetBSD, for example) but the plan is to port those from NetBSD.
The trend is to use Linux(R) ioctls even in the native FreeBSD drivers because of the easy porting of applications.
[[debugging]]
==== Debugging
Every syscall should be debuggable.
For this purpose we introduce a small infrastructure.
We have the ldebug facility, which tells whether a given syscall should be debugged (settable via a sysctl).
For printing we have LMSG and ARGS macros.
Those are used for altering a printable string for uniform debugging messages.
[[conclusion]]
== Conclusion
[[results]]
=== Results
As of April 2007 the Linux(R) emulation layer is capable of emulating the Linux(R) 2.6.16 kernel quite well.
The remaining problems concern futexes, unfinished *at family of syscalls, problematic signals delivery, missing `epoll` and `inotify` and probably some bugs we have not discovered yet.
Despite this we are capable of running basically all the Linux(R) programs included in the FreeBSD Ports Collection with Fedora Core 4 at 2.6.16, and there are some rudimentary reports of success with Fedora Core 6 at 2.6.16.
The Fedora Core 6 linux_base was recently committed, enabling some further testing of the emulation layer and giving us some more hints where we should put our effort in implementing missing stuff.
We are able to run the most used applications like package:www/linux-firefox[], package:net-im/skype[] and some games from the Ports Collection.
Some of the programs exhibit bad behavior under 2.6 emulation but this is currently under investigation and hopefully will be fixed soon.
The only big application that is known not to work is the Linux(R) Java(TM) Development Kit, and this is because of the requirement of the `epoll` facility, which is not directly related to the Linux(R) kernel 2.6.
We hope to enable 2.6.16 emulation by default some time after FreeBSD 7.0 is released at least to expose the 2.6 emulation parts for some wider testing.
Once this is done we can switch to Fedora Core 6 linux_base, which is the ultimate plan.
[[future-work]]
=== Future work
Future work should focus on fixing the remaining issues with futexes, implementing the rest of the *at family of syscalls, fixing signal delivery, and possibly implementing the `epoll` and `inotify` facilities.
We hope to be able to run the most important programs flawlessly soon, so we will be able to switch to the 2.6 emulation by default and make the Fedora Core 6 the default linux_base because our currently used Fedora Core 4 is not supported any more.
The other possible goal is to share our code with NetBSD and DragonflyBSD.
NetBSD has some support for 2.6 emulation but it is far from finished and not really tested.
DragonflyBSD has expressed some interest in porting the 2.6 improvements.
Generally, as Linux(R) develops we would like to keep up with their development, implementing newly added syscalls.
Splice comes to mind first.
Some already implemented syscalls are also heavily crippled, for example `mremap` and others.
Some performance improvements can also be made, such as finer grained locking and others.
[[team]]
=== Team
I cooperated on this project with (in alphabetical order):
* `{jhb}`
* `{kib}`
* Emmanuel Dreyfus
* Scot Hetzel
* `{jkim}`
* `{netchild}`
* `{ssouhlal}`
* Li Xiao
* `{davidxu}`
I would like to thank all those people for their advice, code reviews and general support.
[[literatures]]
== Literature
. Marshall Kirk McKusick - George V. Neville-Neil. The Design and Implementation of the FreeBSD Operating System. Addison-Wesley, 2005.
. https://tldp.org[https://tldp.org]
. https://www.kernel.org[https://www.kernel.org]
diff --git a/documentation/content/en/articles/linux-users/_index.adoc b/documentation/content/en/articles/linux-users/_index.adoc
index 2198e6d09d..5406d0b7bb 100644
--- a/documentation/content/en/articles/linux-users/_index.adoc
+++ b/documentation/content/en/articles/linux-users/_index.adoc
@@ -1,357 +1,357 @@
---
title: FreeBSD Quickstart Guide for Linux® Users
authors:
- author: John Ferrell
copyright: 2008 The FreeBSD Documentation Project
-releaseinfo: "$FreeBSD$"
+description: This document is intended to quickly familiarize intermediate to advanced Linux® users with the basics of FreeBSD.
trademarks: ["freebsd", "intel", "redhat", "linux", "unix", "general"]
---
= FreeBSD Quickstart Guide for Linux(R) Users
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
include::shared/en/urls.adoc[]
[.abstract-title]
Abstract
This document is intended to quickly familiarize intermediate to advanced Linux(R) users with the basics of FreeBSD.
'''
toc::[]
[[intro]]
== Introduction
This document highlights some of the technical differences between FreeBSD and Linux(R) so that intermediate to advanced Linux(R) users can quickly familiarize themselves with the basics of FreeBSD.
This document assumes that FreeBSD is already installed.
Refer to the link:{handbook}#bsdinstall[Installing FreeBSD] chapter of the FreeBSD Handbook for help with the installation process.
[[shells]]
== Default Shell
Linux(R) users are often surprised to find that Bash is not the default shell in FreeBSD.
In fact, Bash is not included in the default installation.
Instead, FreeBSD uses man:tcsh[1] as the default root shell, and the Bourne shell-compatible man:sh[1] as the default user shell.
man:sh[1] is very similar to Bash but with a much smaller feature-set.
Generally shell scripts written for man:sh[1] will run in Bash, but the reverse is not always true.
However, Bash and other shells are available for installation using the FreeBSD link:{handbook}#ports[Packages and Ports Collection].
After installing another shell, use man:chsh[1] to change a user's default shell.
It is recommended that the `root` user's default shell remain unchanged since shells which are not included in the base distribution are installed to [.filename]#/usr/local/bin#.
In the event of a problem, the file system where [.filename]#/usr/local/bin# is located may not be mounted.
In this case, `root` would not have access to its default shell, preventing `root` from logging in and fixing the problem.
[[software]]
== Packages and Ports: Adding Software in FreeBSD
FreeBSD provides two methods for installing applications: binary packages and compiled ports.
Each method has its own benefits:
.Binary Packages
* Faster installation as compared to compiling large applications.
* Does not require an understanding of how to compile software.
* No need to install a compiler.
.Ports
* Ability to customize installation options.
* Custom patches can be applied.
If an application installation does not require any customization, installing the package is sufficient.
Compile the port instead whenever an application requires customization of the default options.
If needed, a custom package can be compiled from ports using `make package`.
A complete list of all available ports and packages can be found https://www.freebsd.org/ports/[here].
[[packages]]
=== Packages
Packages are pre-compiled applications, the FreeBSD equivalents of [.filename]#.deb# files on Debian/Ubuntu based systems and [.filename]#.rpm# files on Red Hat/Fedora based systems.
Packages are installed using `pkg`.
For example, the following command installs Apache 2.4:
[source,shell]
....
# pkg install apache24
....
For more information on packages refer to section 5.4 of the FreeBSD Handbook: link:{handbook}#pkgng-intro[Using pkgng for Binary Package Management].
[[ports]]
=== Ports
The FreeBSD Ports Collection is a framework of [.filename]#Makefiles# and patches specifically customized for installing applications from source on FreeBSD.
When installing a port, the system will fetch the source code, apply any required patches, compile the code, and install the application and any required dependencies.
The Ports Collection, sometimes referred to as the ports tree, can be installed to [.filename]#/usr/ports# using man:portsnap[8].
Detailed instructions for installing the Ports Collection can be found in link:{handbook}#ports-using[section 5.5] of the FreeBSD Handbook.
To compile a port, change to the port's directory and start the build process. The following example installs Apache 2.4 from the Ports Collection:
[source,shell]
....
# cd /usr/ports/www/apache24
# make install clean
....
A benefit of using ports to install software is the ability to customize the installation options.
This example specifies that the mod_ldap module should also be installed:
[source,shell]
....
# cd /usr/ports/www/apache24
# make WITH_LDAP="YES" install clean
....
Refer to link:{handbook}#ports-using[Using the Ports Collection] for more information.
[[startup]]
== System Startup
Many Linux(R) distributions use the SysV init system, whereas FreeBSD uses the traditional BSD-style man:init[8].
Under the BSD-style man:init[8], there are no run-levels and [.filename]#/etc/inittab# does not exist.
Instead, startup is controlled by man:rc[8] scripts.
At system boot, [.filename]#/etc/rc# reads [.filename]#/etc/rc.conf# and [.filename]#/etc/defaults/rc.conf# to determine which services are to be started.
The specified services are then started by running the corresponding service initialization scripts located in [.filename]#/etc/rc.d/# and [.filename]#/usr/local/etc/rc.d/#. These scripts are similar to the scripts located in [.filename]#/etc/init.d/# on Linux(R) systems.
The scripts found in [.filename]#/etc/rc.d/# are for applications that are part of the "base" system, such as man:cron[8], man:sshd[8], and man:syslog[3].
The scripts in [.filename]#/usr/local/etc/rc.d/# are for user-installed applications such as Apache and Squid.
Since FreeBSD is developed as a complete operating system, user-installed applications are not considered to be part of the "base" system.
User-installed applications are generally installed using link:{handbook}#ports-using[Packages or Ports].
In order to keep them separate from the base system, user-installed applications are installed under [.filename]#/usr/local/#.
Therefore, user-installed binaries reside in [.filename]#/usr/local/bin/#, configuration files are in [.filename]#/usr/local/etc/#, and so on.
Services are enabled by adding an entry for the service in [.filename]#/etc/rc.conf#.
The system defaults are found in [.filename]#/etc/defaults/rc.conf# and these default settings are overridden by settings in [.filename]#/etc/rc.conf#.
Refer to man:rc.conf[5] for more information about the available entries.
When installing additional applications, review the application's install message to determine how to enable any associated services.
The following entries in [.filename]#/etc/rc.conf# enable man:sshd[8], enable Apache 2.4, and specify that Apache should be started with SSL.
[.programlisting]
....
# enable SSHD
sshd_enable="YES"
# enable Apache with SSL
apache24_enable="YES"
apache24_flags="-DSSL"
....
Once a service has been enabled in [.filename]#/etc/rc.conf#, it can be started without rebooting the system:
[source,shell]
....
# service sshd start
# service apache24 start
....
If a service has not been enabled, it can be started from the command line using `onestart`:
[source,shell]
....
# service sshd onestart
....
[[network]]
== Network Configuration
Instead of a generic _ethX_ identifier that Linux(R) uses to identify a network interface, FreeBSD uses the driver name followed by a number.
The following output from man:ifconfig[8] shows two Intel(R) Pro 1000 network interfaces ([.filename]#em0# and [.filename]#em1#):
[source,shell]
....
% ifconfig
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=b<RXCSUM,TXCSUM,VLAN_MTU>
inet 10.10.10.100 netmask 0xffffff00 broadcast 10.10.10.255
ether 00:50:56:a7:70:b2
media: Ethernet autoselect (1000baseTX <full-duplex>)
status: active
em1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=b<RXCSUM,TXCSUM,VLAN_MTU>
inet 192.168.10.222 netmask 0xffffff00 broadcast 192.168.10.255
ether 00:50:56:a7:03:2b
media: Ethernet autoselect (1000baseTX <full-duplex>)
status: active
....
An IP address can be assigned to an interface using man:ifconfig[8].
To remain persistent across reboots, the IP configuration must be included in [.filename]#/etc/rc.conf#.
The following [.filename]#/etc/rc.conf# entries specify the hostname, IP address, and default gateway:
[.programlisting]
....
hostname="server1.example.com"
ifconfig_em0="inet 10.10.10.100 netmask 255.255.255.0"
defaultrouter="10.10.10.1"
....
Use the following entries to instead configure an interface for DHCP:
[.programlisting]
....
hostname="server1.example.com"
ifconfig_em0="DHCP"
....
[[firewall]]
== Firewall
FreeBSD does not use Linux(R) IPTABLES for its firewall.
Instead, FreeBSD offers a choice of three kernel level firewalls:
* link:{handbook}#firewalls-pf[PF]
* link:{handbook}#firewalls-ipf[IPFILTER]
* link:{handbook}#firewalls-ipfw[IPFW]
PF is developed by the OpenBSD project and ported to FreeBSD.
PF was created as a replacement for IPFILTER and its syntax is similar to that of IPFILTER.
PF can be paired with man:altq[4] to provide QoS features.
This sample PF entry allows inbound SSH:
[.programlisting]
....
pass in on $ext_if inet proto tcp from any to ($ext_if) port 22
....
IPFILTER is the firewall application developed by Darren Reed.
It is not specific to FreeBSD and has been ported to several operating systems including NetBSD, OpenBSD, SunOS, HP/UX, and Solaris.
The IPFILTER syntax to allow inbound SSH is:
[.programlisting]
....
pass in on $ext_if proto tcp from any to any port = 22
....
IPFW is the firewall developed and maintained by FreeBSD.
It can be paired with man:dummynet[4] to provide traffic shaping capabilities and simulate different types of network connections.
The IPFW syntax to allow inbound SSH would be:
[.programlisting]
....
ipfw add allow tcp from any to me 22 in via $ext_if
....
[[updates]]
== Updating FreeBSD
There are two methods for updating a FreeBSD system: from source or binary updates.
Updating from source is the most involved update method, but offers the greatest amount of flexibility.
The process involves synchronizing a local copy of the FreeBSD source code with the FreeBSD Subversion servers.
Once the local source code is up-to-date, a new version of the kernel and userland can be compiled.
Binary updates are similar to using `yum` or `apt-get` to update a Linux(R) system.
In FreeBSD, man:freebsd-update[8] can be used to fetch new binary updates and install them.
These updates can be scheduled using man:cron[8].
[NOTE]
====
When using man:cron[8] to schedule updates, use `freebsd-update cron` in the man:crontab[1] to reduce the possibility of a large number of machines all pulling updates at the same time:
[.programlisting]
....
0 3 * * * root /usr/sbin/freebsd-update cron
....
====
For more information on source and binary updates, refer to link:{handbook}#updating-upgrading[the chapter on updating] in the FreeBSD Handbook.
[[procfs]]
== procfs: Gone But Not Forgotten
In some Linux(R) distributions, one could look at [.filename]#/proc/sys/net/ipv4/ip_forward# to determine if IP forwarding is enabled.
In FreeBSD, man:sysctl[8] is instead used to view this and other system settings.
For example, use the following to determine if IP forwarding is enabled on a FreeBSD system:
[source,shell]
....
% sysctl net.inet.ip.forwarding
net.inet.ip.forwarding: 0
....
Use `-a` to list all the system settings:
[source,shell]
....
% sysctl -a | more
....
If an application requires procfs, add the following entry to [.filename]#/etc/fstab#:
[source,shell]
....
proc /proc procfs rw,noauto 0 0
....
Including `noauto` will prevent [.filename]#/proc# from being automatically mounted at boot.
To mount the file system without rebooting:
[source,shell]
....
# mount /proc
....
[[commands]]
== Common Commands
Some common command equivalents are as follows:
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Linux(R) command (Red Hat/Debian)
| FreeBSD equivalent
| Purpose
|`yum install _package_` / `apt-get install _package_`
|`pkg install _package_`
|Install package from remote repository
|`rpm -ivh _package_` / `dpkg -i _package_`
|`pkg add _package_`
|Install local package
|`rpm -qa` / `dpkg -l`
|`pkg info`
|List installed packages
|`lspci`
|`pciconf`
|List PCI devices
|`lsmod`
|`kldstat`
|List loaded kernel modules
|`modprobe`
|`kldload` / `kldunload`
|Load/Unload kernel modules
|`strace`
|`truss`
|Trace system calls
|===
[[conclusion]]
== Conclusion
This document has provided an overview of FreeBSD.
Refer to the link:{handbook}[FreeBSD Handbook] for more in-depth coverage of these topics as well as the many topics not covered by this document.
diff --git a/documentation/content/en/articles/mailing-list-faq/_index.adoc b/documentation/content/en/articles/mailing-list-faq/_index.adoc
index c4430812bd..81bb9f2b07 100644
--- a/documentation/content/en/articles/mailing-list-faq/_index.adoc
+++ b/documentation/content/en/articles/mailing-list-faq/_index.adoc
@@ -1,191 +1,191 @@
---
title: Frequently Asked Questions About The FreeBSD Mailing Lists
authors:
- author: The FreeBSD Documentation Project
-copyright: 2004-2005 The FreeBSD Documentation Project
-releaseinfo: "$FreeBSD$"
+copyright: 2004-2021 The FreeBSD Documentation Project
+description: Frequently Asked Questions About The FreeBSD Mailing Lists
---
= Frequently Asked Questions About The FreeBSD Mailing Lists
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
include::shared/authors.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/urls.adoc[]
[.abstract-title]
Abstract
This is the FAQ for the FreeBSD mailing lists.
If you are interested in helping with this project, send email to the {freebsd-doc}.
The latest version of this document is always available from the link:.[FreeBSD World Wide Web server].
It may also be downloaded as one large link:.[HTML] file with HTTP or as plain text, PostScript, PDF, etc. from the https://download.freebsd.org/ftp/doc/[FreeBSD FTP server].
You may also want to link:https://www.FreeBSD.org/search/[Search the FAQ].
'''
toc::[]
[[introduction]]
== Introduction
As is usual with FAQs, this document aims to cover the most frequently asked questions concerning the FreeBSD mailing lists (and of course answer them!).
Although originally intended to reduce bandwidth and avoid the same old questions being asked over and over again, FAQs have become recognized as valuable information resources.
This document attempts to represent a community consensus, and as such it can never really be __authoritative__.
However, if you find technical errors within this document, or have suggestions about items that should be added, please either submit a PR, or email the {freebsd-doc}.
Thanks.
=== What is the purpose of the FreeBSD mailing lists?
The FreeBSD mailing lists serve as the primary communication channels for the FreeBSD community, covering many different topic areas and communities of interest.
=== Who is the audience for the FreeBSD mailing lists?
This depends on the charter of each individual list.
Some lists are more oriented to developers; some are more oriented towards the FreeBSD community as a whole.
Please see http://lists.FreeBSD.org/mailman/listinfo[this list] for the current summary.
=== Are the FreeBSD mailing lists open for anyone to participate?
Again, this depends on the charter of each individual list.
Please read the charter of a mailing list before you post to it, and respect it when you post.
This will help everyone to have a better experience with the lists.
If, after reading the above lists, you still do not know which mailing list to post a question to, you will probably want to post to freebsd-questions (but see below, first).
Also note that the mailing lists have traditionally been open to postings from non-subscribers.
This has been a deliberate choice, to help make joining the FreeBSD community an easier process, and to encourage open sharing of ideas.
However, due to past abuse by some individuals, certain lists now have a policy where postings from non-subscribers must be manually screened to ensure that they are appropriate.
=== How can I subscribe?
You can use http://lists.FreeBSD.org/mailman/listinfo[the Mailman web interface] to subscribe to any of the public lists.
=== How can I unsubscribe?
You can use the same interface as above; or, you can follow the instructions that are at the bottom of every mailing list message that is sent.
Please do not send unsubscribe messages directly to the public lists themselves.
First, this will not accomplish your goal, and second, it will irritate the existing subscribers, and you will probably get flamed.
This is a classical mistake when using mailing lists; please try to avoid it.
=== Are archives available?
Yes. Threaded archives are available http://docs.FreeBSD.org/mail/[here].
=== Are mailing lists available in a digest format?
Yes. See http://lists.FreeBSD.org/mailman/listinfo[the Mailman web interface].
[[etiquette]]
== Mailing List Etiquette
Participation in the mailing lists, like participation in any community, requires a common basis for communication.
Please make only appropriate postings, and follow common rules of etiquette.
=== What should I do before I post?
You have already taken the most important step by reading this document.
However, if you are new to FreeBSD, you may first need to familiarize yourself with the software, and all the social history around it, by reading the numerous link:https://www.FreeBSD.org/docs/books/[books and articles] that are available.
Items of particular interest include the link:{faq}[FreeBSD Frequently Asked Questions (FAQ)] document, the link:{handbook}[FreeBSD Handbook], and the articles link:{freebsd-questions-article}[How to get best results from the FreeBSD-questions mailing list], link:{explaining-bsd}[Explaining BSD], and link:{new-users}[FreeBSD First Steps].
It is always considered bad form to ask a question that is already answered in the above documents.
This is not because the volunteers who work on this project are particularly mean people, but after a certain number of times answering the same questions over and over again, frustration begins to set in.
This is particularly true if there is an existing answer to the question that is already available.
Always keep in mind that almost all of the work done on FreeBSD is done by volunteers, and that we are only human.
=== What constitutes an inappropriate posting?
* Postings must be in accordance with the charter of the mailing list.
* Personal attacks are discouraged. As good net-citizens, we should try to hold ourselves to high standards of behavior.
* Spam is not allowed, ever. The mailing lists are actively processed to ban offenders to this rule.
=== What is considered proper etiquette when posting to the mailing lists?
* Please wrap lines at 75 characters, since not everyone uses fancy GUI mail reading programs.
* Please respect the fact that bandwidth is not infinite. Not everyone reads email through high-speed connections, so if your posting involves something like the content of [.filename]#config.log# or an extensive stack trace, please consider putting that information up on a website somewhere and just provide a URL to it. Remember, too, that these postings will be archived indefinitely, so huge postings will simply inflate the size of the archives long after their purpose has expired.
* Format your message so that it is legible, and PLEASE DO NOT SHOUT!!!!!. Do not underestimate the effect that a poorly formatted mail message has, and not just on the FreeBSD mailing lists. Your mail message is all that people see of you, and if it is poorly formatted, badly spelled, full of errors, and/or has lots of exclamation points, it will give people a poor impression of you.
* Please use an appropriate human language for a particular mailing list. Many non-English mailing lists are link:https://www.FreeBSD.org/community/mailinglists/[available].
+
For the ones that are not, we do appreciate that many people do not speak English as their first language, and we try to make allowances for that.
It is considered particularly poor form to criticize non-native speakers for spelling or grammatical errors.
FreeBSD has an excellent track record in this regard; please, help us to uphold that tradition.
* Please use a standards-compliant Mail User Agent (MUA). A lot of badly formatted messages come from http://www.lemis.com/grog/email/email.php[bad mailers or badly configured mailers]. The following mailers are known to send out badly formatted messages without you finding out about them:
** exmh
** Microsoft(R) Exchange
** Microsoft(R) Outlook(R)
+
Try not to use MIME: a lot of people use mailers which do not get on very well with MIME.
* Make sure your time and time zone are set correctly. This may seem a little silly, since your message still gets there, but many of the people on these mailing lists get several hundred messages a day. They frequently sort the incoming messages by subject and by date, and if your message does not come before the first answer, they may assume that they missed it and not bother to look.
* A lot of the information you need to supply is the output of programs, such as man:dmesg[8], or console messages, which usually appear in [.filename]#/var/log/messages#. Do not try to copy this information by typing it in again; not only is it a real pain, but you are bound to make a mistake. To send log file contents, either make a copy of the file and use an editor to trim the information to what is relevant, or cut and paste into your message. For the output of programs like `dmesg`, redirect the output to a file and include that. For example,
+
[source,shell]
....
% dmesg > /tmp/dmesg.out
....
+
This redirects the information to the file [.filename]#/tmp/dmesg.out#.
* When using cut-and-paste, please be aware that some such operations badly mangle their messages. This is of particular concern when posting contents of [.filename]#Makefiles#, where `tab` is a significant character. This is a very common, and very annoying, problem with submissions to the link:https://www.FreeBSD.org/support/[Problem Reports database]. [.filename]#Makefiles# with tabs changed to either spaces, or the annoying `=3B` escape sequence, create a great deal of aggravation for committers.
=== What are the special etiquette considerations when replying to an existing posting on the mailing lists?
* Please include relevant text from the original message. Trim it to the minimum, but do not overdo it. It should still be possible for somebody who did not read the original message to understand what you are talking about.
+
This is especially important for postings of the type "yes, I see this too", where the initial posting was dozens or hundreds of lines.
* Use some technique to identify which text came from the original message, and which text you add. A common convention is to prepend "`>`" to the original message. Leaving white space after the "`>`" and leaving empty lines between your text and the original text both make the result more readable.
* Please ensure that the attribution of the text you are quoting is correct. People can become offended if you attribute words to them that they themselves did not write.
* Please do not `top post`. By this, we mean that if you are replying to a message, please put your replies after the text that you copy in your reply.
+
** A: Because it reverses the logical flow of conversation.
** Q: Why is top posting frowned upon?
+
(Thanks to Randy Bush for the joke.)
[[recurring]]
== Recurring Topics On The Mailing Lists
Participation in the mailing lists, like participation in any community, requires a common basis for communication.
Many of the mailing lists presuppose a knowledge of the Project's history.
In particular, there are certain topics that seem to regularly occur to newcomers to the community.
It is the responsibility of each poster to ensure that their postings do not fall into one of these categories.
By doing so, you will help the mailing lists to stay on-topic, and probably save yourself being flamed in the process.
The best method to avoid this is to familiarize yourself with the http://docs.FreeBSD.org/mail/[mailing list archives], to help yourself understand the background of what has gone before.
In this, the https://www.FreeBSD.org/search/#mailinglists[mailing list search interface] is invaluable.
(If that method does not yield useful results, please supplement it with a search with your favorite major search engine).
By familiarizing yourself with the archives, not only will you learn what topics have been discussed before, but also how discussion tends to proceed on that list, who the participants are, and who the target audience is.
These are always good things to know before you post to any mailing list, not just a FreeBSD mailing list.
There is no doubt that the archives are quite extensive, and some questions recur more often than others, sometimes as followups where the subject line no longer accurately reflects the new content.
Nevertheless, the burden is on you, the poster, to do your homework to help avoid these recurring topics.
[[bikeshed]]
== What Is A "Bikeshed"?
Literally, a `bikeshed` is a small outdoor shelter into which one may store one's two-wheeled form of transportation.
However, in FreeBSD parlance, the term refers to topics that are simple enough that (nearly) anyone can offer an opinion about them, and often (nearly) everyone does. The genesis of this term is explained in more detail link:{faq}#bikeshed-painting[in this document].
You simply must have a working knowledge of this concept before posting to any FreeBSD mailing list.
More generally, a bikeshed is a topic that will tend to generate immediate meta-discussions and flames if you have not read up on its past history.
Please help us to keep the mailing lists as useful for as many people as possible by avoiding bikesheds whenever you can.
Thanks.
[[acknowledgments]]
== Acknowledgments
`{grog}`::
Original author of most of the material on mailing list etiquette, taken from the article on link:{freebsd-questions-article}[How to get best results from the FreeBSD-questions mailing list].
`{linimon}`::
Creation of the rough draft of this FAQ.
diff --git a/documentation/content/en/articles/nanobsd/_index.adoc b/documentation/content/en/articles/nanobsd/_index.adoc
index a006140591..016fa739a8 100644
--- a/documentation/content/en/articles/nanobsd/_index.adoc
+++ b/documentation/content/en/articles/nanobsd/_index.adoc
@@ -1,432 +1,432 @@
---
title: Introduction to NanoBSD
authors:
- author: Daniel Gerzo
copyright: 2006 The FreeBSD Documentation Project
-releaseinfo: "$FreeBSD$"
+description: This document provides information about the NanoBSD tools, which can be used to create FreeBSD system images for embedded applications, suitable for use on a USB key, memory card or other mass storage media.
trademarks: ["freebsd", "general"]
---
= Introduction to NanoBSD
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
include::shared/authors.adoc[]
[.abstract-title]
Abstract
This document provides information about the NanoBSD tools, which can be used to create FreeBSD system images for embedded applications, suitable for use on a USB key, memory card or other mass storage media.
'''
toc::[]
[[intro]]
== Introduction to NanoBSD
NanoBSD is a tool developed by {phk} and now maintained by {imp}.
It creates a FreeBSD system image for embedded applications, suitable for use on a USB key, memory card or other mass storage media.
It can be used to build specialized install images, designed for easy installation and maintenance of systems commonly called "computer appliances".
Computer appliances have their hardware and software bundled in the product, which means all applications are pre-installed.
The appliance is plugged into an existing network and can begin working (almost) immediately.
The features of NanoBSD include:
* Ports and packages work as in FreeBSD - Every single application can be installed and used in a NanoBSD image, the same way as in FreeBSD.
* No missing functionality - If it is possible to do something with FreeBSD, it is possible to do the same thing with NanoBSD, unless the specific feature or features were explicitly removed from the NanoBSD image when it was created.
* Everything is read-only at run-time - It is safe to pull the power-plug. There is no necessity to run man:fsck[8] after a non-graceful shutdown of the system.
* Easy to build and customize - Making use of just one shell script and one configuration file it is possible to build reduced and customized images satisfying any arbitrary set of requirements.
[[howto]]
== NanoBSD Howto
[[design]]
=== The Design of NanoBSD
Once the image is present on the medium, it is possible to boot NanoBSD.
The mass storage medium is divided into three parts by default:
* Two image partitions: `code#1` and `code#2`.
* The configuration file partition, which can be mounted under the [.filename]#/cfg# directory at run time.
These partitions are normally mounted read-only.
The [.filename]#/etc# and [.filename]#/var# directories are man:md[4] (malloc) disks.
The configuration file partition persists under the [.filename]#/cfg# directory.
It contains files for the [.filename]#/etc# directory and is briefly mounted read-only right after the system boot; therefore it is required to copy modified files from [.filename]#/etc# back to the [.filename]#/cfg# directory if changes are expected to persist after the system restarts.
.Making Persistent Changes to [.filename]#/etc/resolv.conf#
[example]
====
[source,shell]
....
# vi /etc/resolv.conf
[...]
# mount /cfg
# cp /etc/resolv.conf /cfg
# umount /cfg
....
====
[NOTE]
====
The partition containing [.filename]#/cfg# should be mounted only at boot time and while overriding the configuration files.
Keeping [.filename]#/cfg# mounted at all times is not a good idea, especially if the NanoBSD system runs off a mass storage medium that may be adversely affected by a large number of writes to the partition (like when the filesystem syncer flushes data to the system disks).
====
=== Building a NanoBSD Image
A NanoBSD image is built using a simple [.filename]#nanobsd.sh# shell script, which can be found in the [.filename]#/usr/src/tools/tools/nanobsd# directory.
This script creates an image, which can be copied onto the storage medium using the man:dd[1] utility.
The necessary commands to build a NanoBSD image are:
[source,shell]
....
# cd /usr/src/tools/tools/nanobsd <.>
# sh nanobsd.sh <.>
# cd /usr/obj/nanobsd.full <.>
# dd if=_.disk.full of=/dev/da0 bs=64k <.>
....
<.> Change the current directory to the base directory of the NanoBSD build script.
<.> Start the build process.
<.> Change the current directory to the place where the built images are located.
<.> Install NanoBSD onto the storage medium.
==== Options When Building a NanoBSD Image
When building a NanoBSD image, several build options can be passed to [.filename]#nanobsd.sh# on the command line.
These options can have a significant impact on the build process.
Some options are for verbosity purposes:
* `-h`: prints the help summary page.
* `-q`: makes output quieter.
* `-v`: makes output more verbose
Some other options can be used to restrict the building process.
Sometimes it is not necessary to rebuild everything from sources, especially if an image has already been built and only a small change has been made.
* `-k`: do not build the kernel
* `-w`: do not build world
* `-b`: do not build the kernel or world
* `-i`: do not build a disk image at all. As a file will not be created, it will not be possible to man:dd[1] it to a storage medium.
* `-f`: do not build a disk image of the first partition (which is useful for upgrade purposes)
* `-n`: add `-DNO_CLEAN` to `buildworld`, `buildkernel`. Also, all the files that have already been built in a previous run are kept.
A configuration file can be used to tweak as many elements as desired.
Load it with `-c`.
The last option is:
* `-K`: do not install a kernel. A disk image without a kernel will not be able to achieve a normal boot sequence.
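For instance, once a full build has already completed, a plausible invocation that skips the kernel and world stages and only regenerates the disk image from a configuration file (here the hypothetical [.filename]#myconf.nano#) would be:
[source,shell]
....
# cd /usr/src/tools/tools/nanobsd
# sh nanobsd.sh -b -c myconf.nano
....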
==== The Complete Image Building Process
The complete image building process is going through a lot of steps.
The exact steps taken will depend on the chosen options when starting the script.
Assuming the script is run with no particular options, this is what will happen.
. `run_early_customize`: commands that are defined in a supplied configuration file.
. `clean_build`: Just cleans the build environment by deleting the previously built files.
. `make_conf_build`: Assemble [.filename]#make.conf# from the `CONF_WORLD` and `CONF_BUILD` variables.
. `build_world`: Build world.
. `build_kernel`: Build the kernel files.
. `clean_world`: Clean the destination directory.
. `make_conf_install`: Assemble [.filename]#make.conf# from the `CONF_WORLD` and `CONF_INSTALL` variables.
. `install_world`: Install all files built during `buildworld`.
. `install_etc`: Install the necessary files in the [.filename]#/etc# directory, based on the `make distribution` command.
. `setup_nanobsd_etc`: the first configuration specific to NanoBSD takes place at this stage. The [.filename]#/etc/diskless# directory is created and the root filesystem is marked read-only.
. `install_kernel`: the kernel and modules files are installed.
. `run_customize`: all the customizing routines defined by the user will be called.
. `setup_nanobsd`: a special configuration directory layout is set up. The [.filename]#/usr/local/etc# gets moved to [.filename]#/etc/local# and a symbolic link is created back from [.filename]#/etc/local# to [.filename]#/usr/local/etc#.
. `prune_usr`: the empty directories from [.filename]#/usr# are removed.
. `run_late_customize`: the very last custom scripts can be run at this point.
. `fixup_before_diskimage`: List all installed files in a metalog.
. `create_diskimage`: creates the actual disk image, based on the provided disk geometry parameters.
. `last_orders`: does nothing for now.
=== Customizing a NanoBSD Image
This is probably the most important and most interesting feature of NanoBSD.
This is also where you will be spending most of the time when developing with NanoBSD.
Invocation of the following command will force the [.filename]#nanobsd.sh# to read its configuration from [.filename]#myconf.nano# located in the current directory:
[source,shell]
....
# sh nanobsd.sh -c myconf.nano
....
Customization is done in two ways:
* Configuration options
* Custom functions
==== Configuration Options
With configuration settings, it is possible to configure options passed to both the `buildworld` and `installworld` stages of the NanoBSD build process, as well as internal options passed to the main build process of NanoBSD.
Through these options it is possible to cut the system down, so it will fit on a medium as small as 64MB.
You can use the configuration options to trim down FreeBSD even more, until it consists of just the kernel and two or three files in userland.
The configuration file consists of configuration options, which override the default values.
The most important directives are:
* `NANO_NAME` - Name of build (used to construct the workdir names).
* `NANO_SRC` - Path to the source tree used to build the image.
* `NANO_KERNEL` - Name of kernel configuration file used to build kernel.
* `CONF_BUILD` - Options passed to the `buildworld` stage of the build.
* `CONF_INSTALL` - Options passed to the `installworld` stage of the build.
* `CONF_WORLD` - Options passed to both the `buildworld` and the `installworld` stage of the build.
* `FlashDevice` - Defines what type of media to use. Check [.filename]#FlashDevice.sub# for more details.
There are many more configuration options that could be relevant depending upon the kind of NanoBSD that is desired.
===== General Customization
There are three stages, by design, at which it is possible to make changes that affect the building process, just by setting up a variable in the provided configuration file:
* `run_early_customize`: before anything else happens.
* `run_customize`: after all the standard files have been laid out
* `run_late_customize`: at the very end of the process, just before the actual NanoBSD image is built.
To customize a NanoBSD image, at any of these steps, it is best to add a specific value to one of the corresponding variables.
The `NANO_EARLY_CUSTOMIZE` variable is used at the first step of the building process.
At this point, there is no example as to what can be done using that variable, but it may change in the future.
The `NANO_CUSTOMIZE` variable is used after the kernel, world and etc configuration files have been installed, and the etc files have been set up as a NanoBSD installation.
So it is the correct step in the building process to tweak configuration options and add packages, as in the `cust_nobeastie` example.
The `NANO_LATE_CUSTOMIZE` variable is used just before the disk image is created, so it is the very last moment to change anything.
Remember that the `setup_nanobsd` routine has already executed and that the [.filename]#etc#, [.filename]#conf# and [.filename]#cfg# directories and subdirectories have already been modified, so it is not the time to change them at this point.
Rather, it is possible to add or remove specific files.
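As a minimal sketch, and assuming the helper functions `customize_cmd` and `late_customize_cmd` provided by the script (which simply append a function name to `NANO_CUSTOMIZE` and `NANO_LATE_CUSTOMIZE` respectively), hooking one illustrative function into each of the last two stages from the configuration file could look like this; the function names and the files they touch are made up for the example:
[.programlisting]
....
# Runs at the run_customize stage, after the standard files have been laid out.
cust_motd () (
	echo "Built with NanoBSD" > ${NANO_WORLDDIR}/etc/motd
)
customize_cmd cust_motd

# Runs at the run_late_customize stage, just before the disk image is created.
cust_prune_examples () (
	rm -rf ${NANO_WORLDDIR}/usr/share/examples
)
late_customize_cmd cust_prune_examples
....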
===== Booting Options
There are also variables that can change the way the NanoBSD image boots.
Two options are passed to man:boot0cfg[8] to initialize the boot sector of the disk image:
* `NANO_BOOT0CFG`
* `NANO_BOOTLOADER`
With `NANO_BOOTLOADER` a bootloader file can be chosen.
The most common choices are [.filename]#boot0sio# and [.filename]#boot0#, depending on whether the appliance has a serial port or not.
It is best to avoid supplying a different bootloader, but it is possible.
To do so, it is best to have checked the link:{handbook}#boot/[FreeBSD Handbook] chapter on the boot process.
With `NANO_BOOT0CFG`, the booting process can be tweaked, like selecting on which partition the NanoBSD image will actually boot.
It is best to check the man:boot0cfg[8] page before changing the default value of this variable.
One option that could be interesting to change is the timeout of the booting procedure.
To do so, the `NANO_BOOT0CFG` variable can be changed to `"-o packet -s 1 -m 3 -t 36"`.
That way the booting process will start after approximately 2 seconds, since it is rarely desirable to wait the default 10 seconds before actually booting.
Good to know: the `NANO_BOOT2CFG` variable is only used in the `cust_comconsole` routine that can be called at the `NANO_CUSTOMIZE` step if the appliance has a serial port and all console input and output has to take place through it.
Be sure to check the relevant parameters of the serial port, as setting a bad parameter value can make it useless.
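Put together in the configuration file, a sketch of these two booting options could read as follows; the bootloader path assumes the serial-console variant shipped in the source tree, so adjust it to the actual appliance:
[.programlisting]
....
# Bootloader with serial console support
NANO_BOOTLOADER="boot/boot0sio"
# Boot from slice 1, use packet mode, and wait roughly 2 seconds
NANO_BOOT0CFG="-o packet -s 1 -m 3 -t 36"
....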
===== Disk Image Creation
The final step of the build process is the disk image creation.
With this step, the NanoBSD script provides a file that can simply be copied onto a disk for the appliance, and that will make it boot and start.
There are many variables that need to be set just right for the script to produce a usable disk image.
* The `NANO_DRIVE` variable must be set to the drive name of the media at runtime. Usually the default value, `ada0`, which represents the first `IDE`/`ATA`/`SATA` device on the appliance, is the correct one, but a different type of storage could also be used, such as a USB key, in which case it would be `da0` instead.
* The `NANO_MEDIASIZE` variable must be set to the size (in 512-byte sectors) of the storage media that will be used. If you set it wrong, it is possible that the NanoBSD image will not boot at all, and a message at boot time will warn about incorrect disk geometry.
* The [.filename]#/etc#, [.filename]#/var#, and [.filename]#/tmp# directories are allocated as man:md[4] (malloc) disks at boot time; so their sizes can be tailored to suit the appliance needs. The `NANO_RAM_ETCSIZE` variable sets the size of the [.filename]#/etc#; and the `NANO_RAM_TMPVARSIZE` variable sets the size of both the [.filename]#/var# and [.filename]#/tmp# directory, as [.filename]#/tmp# is symbolically linked to [.filename]#/var/tmp#. By default, both malloc disks sizes are set at 20MB each. They can always be changed, but usually the [.filename]#/etc# does not grow too much in size, so 20MB is a good starting point, whereas the [.filename]#/var# and especially [.filename]#/tmp# can grow much larger if not careful about it. For memory constrained systems, smaller filesystem sizes may be chosen.
* As NanoBSD is mainly designed to build a system image for an appliance, it is assumed that the storage media used will be relatively small. For that reason, the filesystem that is laid out is configured to have a small block size (4KB) and a small fragment size (512 bytes). The configuration options of the filesystem can be modified through the `NANO_NEWFS` variable, but the syntax must respect the man:newfs[8] command format. Also, by default, the filesystem has Soft Updates enabled. The link:{handbook}[FreeBSD Handbook] can be consulted about this.
* The different partition sizes can be set through the use of `NANO_CODESIZE`, `NANO_CONFSIZE`, and `NANO_DATASIZE` as a multiple of 512-byte sectors. `NANO_CODESIZE` defines the size of the first two image partitions: `code#1` and `code#2`. They have to be big enough to hold all the files that will be produced as a result of the `buildworld` and `buildkernel` processes. `NANO_CONFSIZE` defines the size of the configuration file partition, so it does not need to be very big; but do not make it so small that it will not hold all configuration files. Finally, `NANO_DATASIZE` defines the size of an optional partition, that can be used on the appliance. The last partition can be used, for example, to keep files created on the fly on disk.
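For example, a sketch of these settings for a hypothetical appliance booting from a USB key of roughly 1 GB, seen as [.filename]#da0# at runtime, might be:
[.programlisting]
....
# USB mass storage device name at runtime
NANO_DRIVE=da0
# Media size expressed in 512-byte sectors (roughly 1 GB here)
NANO_MEDIASIZE=$((1000000000/512))
....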
==== Custom Functions
It is possible to fine-tune NanoBSD using shell functions in the configuration file.
The following example illustrates the basic model of custom functions:
[.programlisting]
....
cust_foo () (
echo "bar=baz" > \
${NANO_WORLDDIR}/etc/foo
)
customize_cmd cust_foo
....
A more useful example of a customization function is the following, which changes the default size of the [.filename]#/etc# directory from 5MB to 30MB:
[.programlisting]
....
cust_etc_size () (
cd ${NANO_WORLDDIR}/conf
echo 30000 > default/etc/md_size
)
customize_cmd cust_etc_size
....
There are a few default pre-defined customization functions ready for use:
* `cust_comconsole` - Disables man:getty[8] on the VGA devices (the [.filename]#/dev/ttyv*# device nodes) and enables the use of the COM1 serial port as the system console.
* `cust_allow_ssh_root` - Allow `root` to login via man:sshd[8].
* `cust_install_files` - Installs files from the [.filename]#nanobsd/Files# directory, which contains some useful scripts for system administration.
==== Adding Packages
Packages can be added to a NanoBSD image to provide specific functionality on the appliance. To do so, either:
* Add the `cust_pkgng` to the `NANO_CUSTOMIZE` variable, or
* Add a `'customize_cmd cust_pkgng'` command in a customized configuration file.
Both methods achieve the same result: launching the `cust_pkgng` routine.
This routine will go through `NANO_PACKAGE_DIR` directory to find either all packages or just the list of packages in the `NANO_PACKAGE_LIST` variable.
It is common, when installing applications through pkg in a standard FreeBSD environment, for the install process to put configuration files in the [.filename]#/usr/local/etc# directory, and startup scripts in the [.filename]#/usr/local/etc/rc.d# directory.
So, after the required packages have been installed, they need to be configured in order for them to start right out of the box.
To do so, the necessary configuration files have to be installed in the correct directories.
This can be achieved by writing dedicated routines or the generic `cust_install_files` routine can be used to lay out files properly from the [.filename]#/usr/src/tools/tools/nanobsd/Files# directory.
Usually a statement, sometimes multiple statements, in the [.filename]#/etc/rc.conf# also needs to be added for each package.
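A hedged sketch of such a package-related fragment in the configuration file might look like the following; the directory path is purely illustrative:
[.programlisting]
....
# Hypothetical directory holding the pre-fetched package files;
# every package found here is installed unless NANO_PACKAGE_LIST narrows it down.
NANO_PACKAGE_DIR=/root/nanobsd-packages
# Launch the package installation routine at the NANO_CUSTOMIZE stage
customize_cmd cust_pkgng
....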
==== Configuration File Example
A complete example of a configuration file for building a custom NanoBSD image can be:
[.programlisting]
....
NANO_NAME=custom
NANO_SRC=/usr/src
NANO_KERNEL=MYKERNEL
NANO_IMAGES=2
CONF_BUILD='
WITHOUT_KLDLOAD=YES
WITHOUT_NETGRAPH=YES
WITHOUT_PAM=YES
'
CONF_INSTALL='
WITHOUT_ACPI=YES
WITHOUT_BLUETOOTH=YES
WITHOUT_FORTRAN=YES
WITHOUT_HTML=YES
WITHOUT_LPR=YES
WITHOUT_MAN=YES
WITHOUT_SENDMAIL=YES
WITHOUT_SHAREDOCS=YES
WITHOUT_EXAMPLES=YES
WITHOUT_INSTALLLIB=YES
WITHOUT_CALENDAR=YES
WITHOUT_MISC=YES
WITHOUT_SHARE=YES
'
CONF_WORLD='
WITHOUT_BIND=YES
WITHOUT_MODULES=YES
WITHOUT_KERBEROS=YES
WITHOUT_GAMES=YES
WITHOUT_RESCUE=YES
WITHOUT_LOCALES=YES
WITHOUT_SYSCONS=YES
WITHOUT_INFO=YES
'
FlashDevice SanDisk 1G
cust_nobeastie() (
touch ${NANO_WORLDDIR}/boot/loader.conf
echo "beastie_disable=\"YES\"" >> ${NANO_WORLDDIR}/boot/loader.conf
)
customize_cmd cust_comconsole
customize_cmd cust_install_files
customize_cmd cust_allow_ssh_root
customize_cmd cust_nobeastie
....
All the build and install compilation options can be found in the man:src.conf[5] man page, but not all options can or should be used when building a NanoBSD image.
The build and install options should be defined according to the needs of the image being built.
For example, the ftp client and server might not be needed.
Adding `WITHOUT_FTP=TRUE` to a configuration file in the `CONF_BUILD` section will avoid having them built.
Also, if the NanoBSD appliance will not be used to build programs, then it is possible to add `WITHOUT_BINUTILS=TRUE` in the `CONF_INSTALL` section, but not in the `CONF_BUILD` section, as the binutils are needed to build the NanoBSD image itself.
Not building a particular set of programs - through a compilation option - shortens the overall building time and lowers the required size for the disk image, whereas not installing the same specific set of programs does not lower the overall building time.
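For instance, a fragment implementing exactly that reasoning could be added to the configuration file; whether these particular knobs suit a given appliance depends on the image being built:
[.programlisting]
....
CONF_BUILD='
WITHOUT_FTP=TRUE
'
CONF_INSTALL='
WITHOUT_BINUTILS=TRUE
'
....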
=== Updating NanoBSD
The update process of NanoBSD is relatively simple:
[.procedure]
====
. Build a new NanoBSD image, as usual.
. Upload the new image into an unused partition of a running NanoBSD appliance.
+
The most important difference of this step from the initial NanoBSD installation is that now instead of using [.filename]#\_.disk.full# (which contains an image of the entire disk), the [.filename]#_.disk.image# image is installed (which contains an image of a single system partition).
. Reboot, and start the system from the newly installed partition.
. If all goes well, the upgrade is finished.
. If anything goes wrong, reboot back into the previous partition (which contains the old, working image), to restore system functionality as fast as possible. Fix any problems of the new build, and repeat the process.
====
To install a new image onto the running NanoBSD system, use either the [.filename]#updatep1# or [.filename]#updatep2# script located in the [.filename]#/root# directory, depending on which partition the current system is running from.
Depending on which services are available on the host serving the new NanoBSD image and what type of transfer is preferred, choose one of these three ways:
==== Using man:ftp[1]
If transfer speed is the top priority, use this example:
[source,shell]
....
# ftp myhost
get _.disk.image "| sh updatep1"
....
==== Using man:ssh[1]
If a secure transfer is preferred, consider using this example:
[source,shell]
....
# ssh myhost cat _.disk.image.gz | zcat | sh updatep1
....
==== Using man:nc[1]
Try this example if the remote host is running neither man:ftpd[8] nor man:sshd[8]:
[.procedure]
====
. First, open a TCP listener on the host serving the image and make it send the image to the client:
+
[source,shell]
....
myhost# nc -l 2222 < _.disk.image
....
+
[NOTE]
======
Make sure that the chosen port is not blocked by a firewall from receiving incoming connections from the NanoBSD host.
======
. Connect to the host serving the new image and execute the [.filename]#updatep1# script:
+
[source,shell]
....
# nc myhost 2222 | sh updatep1
....
====
diff --git a/documentation/content/en/articles/new-users/_index.adoc b/documentation/content/en/articles/new-users/_index.adoc
index ff1de0b682..93588811b9 100644
--- a/documentation/content/en/articles/new-users/_index.adoc
+++ b/documentation/content/en/articles/new-users/_index.adoc
@@ -1,461 +1,461 @@
---
title: For People New to Both FreeBSD and UNIX®
authors:
- author: Annelise Anderson
email: andrsn@andrsn.stanford.edu
-releaseinfo: "$FreeBSD$"
+description: Introduction for people new to both FreeBSD and UNIX®
trademarks: ["freebsd", "ibm", "microsoft", "opengroup", "general"]
---
= For People New to Both FreeBSD and UNIX(R)
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
[.abstract-title]
Abstract
Congratulations on installing FreeBSD! This introduction is for people new to both FreeBSD _and_ UNIX(R)-so it starts with basics.
'''
toc::[]
[[in-and-out]]
== Logging in and Getting Out
Log in (when you see `login:`) as a user you created during installation or as `root`.
(Your FreeBSD installation will already have an account for `root`, who can go anywhere and do anything, including deleting essential files, so be careful!)
The symbols `%` and `#` in the following stand for the prompt (yours may be different), with `%` indicating an ordinary user and `#` indicating `root`.
To log out (and get a new `login:` prompt) type
[source,shell]
....
# exit
....
as often as necessary.
Yes, press kbd:[enter] after commands, and remember that UNIX(R) is case-sensitive-``exit``, not `EXIT`.
To shut down the machine type
[source,shell]
....
# /sbin/shutdown -h now
....
Or to reboot type
[source,shell]
....
# /sbin/shutdown -r now
....
or
[source,shell]
....
# /sbin/reboot
....
You can also reboot with kbd:[Ctrl+Alt+Delete].
Give it a little time to do its work.
This is equivalent to `/sbin/reboot` in recent releases of FreeBSD and is much, much better than hitting the reset button.
You do not want to have to reinstall this thing, do you?
[[adding-a-user]]
== Adding a User with Root Privileges
If you did not create any users when you installed the system and are thus logged in as `root`, you should probably create a user now with
[source,shell]
....
# adduser
....
The first time you use `adduser`, it might ask for some defaults to save.
You might want to make the default shell man:csh[1] instead of man:sh[1], if it suggests `sh` as the default.
Otherwise just press enter to accept each default.
These defaults are saved in [.filename]#/etc/adduser.conf#, an editable file.
Suppose you create a user `jack` with full name _Jack Benimble_.
Give `jack` a password if security (even kids around who might pound on the keyboard) is an issue.
When it asks you if you want to invite `jack` into other groups, type `wheel`:
[source,shell]
....
Login group is "jack". Invite jack into other groups: wheel
....
This will make it possible to log in as `jack` and use the man:su[1] command to become `root`.
Then you will not get scolded any more for logging in as `root`.
You can quit `adduser` any time by typing kbd:[Ctrl+C], and at the end you will have a chance to approve your new user or simply type kbd:[n] for no.
You might want to create a second new user so that when you edit `jack`'s login files, you will have a hot spare in case something goes wrong.
Once you have done this, use `exit` to get back to a login prompt and log in as `jack`.
In general, it is a good idea to do as much work as possible as an ordinary user who does not have the power-and risk-of `root`.
If you already created a user and you want the user to be able to `su` to `root`, you can log in as `root` and edit the file [.filename]#/etc/group#, adding `jack` to the first line (the group `wheel`).
But first you need to practice man:vi[1], the text editor-or use the simpler text editor, man:ee[1], installed on recent versions of FreeBSD.
To delete a user, use `rmuser`.
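For reference, the [.filename]#/etc/group# edit mentioned above amounts to making the first line of the file end with the new user name; assuming the user is `jack`, the line would go from the first form below to the second:
[.programlisting]
....
wheel:*:0:root
wheel:*:0:root,jack
....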
[[looking-around]]
== Looking Around
Logged in as an ordinary user, look around and try out some commands that will access the sources of help and information within FreeBSD.
Here are some commands and what they do:
`id`::
Tells you who you are!
`pwd`::
Shows you where you are-the current working directory.
`ls`::
Lists the files in the current directory.
`ls -F`::
Lists the files in the current directory with a `*` after executables, a `/` after directories, and an `@` after symbolic links.
`ls -l`::
Lists the files in long format-size, date, permissions.
`ls -a`::
Lists hidden "dot" files with the others.
If you are `root`, the "dot" files show up without the `-a` switch.
`cd`::
Changes directories. `cd ..` backs up one level; note the space after `cd`.
`cd /usr/local` goes there. `cd ~` goes to the home directory of the person logged in-e.g., [.filename]#/usr/home/jack#.
Try `cd /cdrom`, and then `ls`, to find out if your CDROM is mounted and working.
`less _filename_`::
Lets you look at a file (named _filename_) without changing it.
Try `less /etc/fstab`.
Type `q` to quit.
`cat _filename_`::
Displays _filename_ on screen.
If it is too long and you can see only the end of it, press kbd:[ScrollLock] and use the kbd:[up-arrow] to move backward; you can use kbd:[ScrollLock] with manual pages too.
Press kbd:[ScrollLock] again to quit scrolling.
You might want to try `cat` on some of the dot files in your home directory-`cat .cshrc`, `cat .login`, `cat .profile`.
You will notice aliases in [.filename]#.cshrc# for some of the `ls` commands (they are very convenient).
You can create other aliases by editing [.filename]#.cshrc#.
You can make these aliases available to all users on the system by putting them in the system-wide `csh` configuration file, [.filename]#/etc/csh.cshrc#.
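As a small illustration (the alias names themselves are only examples), lines like these in [.filename]#.cshrc# or [.filename]#/etc/csh.cshrc# define such aliases:
[.programlisting]
....
alias ll    ls -lAF
alias la    ls -aF
alias rm    rm -i
....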
[[getting-help]]
== Getting Help and Information
Here are some useful sources of help.
_Text_ stands for something of your choice that you type in-usually a command or filename.
`apropos _text_`::
Everything containing the string _text_ in the `whatis` database.
`man _text_`::
The manual page for _text_.
The major source of documentation for UNIX(R) systems.
`man ls` will tell you all the ways to use `ls`.
Press kbd:[Enter] to move through text, kbd:[Ctrl+B] to go back a page, kbd:[Ctrl+F] to go forward, kbd:[q] or kbd:[Ctrl+C] to quit.
`which _text_`::
Tells you where in the user's path the command _text_ is found.
`locate _text_`::
All the paths where the string _text_ is found.
`whatis _text_`::
Tells you what the command _text_ does and its manual page.
Typing `whatis *` will tell you about all the binaries in the current directory.
`whereis _text_`::
Finds the file _text_, giving its full path.
You might want to try using `whatis` on some common useful commands like `cat`, `more`, `grep`, `mv`, `find`, `tar`, `chmod`, `chown`, `date`, and `script`.
`more` lets you read a page at a time as it does in DOS, e.g., `ls -l | more` or `more _filename_`.
The * works as a wildcard-e.g., `ls w*` will show you files beginning with `w`.
Are some of these not working very well? Both man:locate[1] and man:whatis[1] depend on a database that is rebuilt weekly.
If your machine is not going to be left on over the weekend (and running FreeBSD), you might want to run the commands for daily, weekly, and monthly maintenance now and then.
Run them as `root` and, for now, give each one time to finish before you start the next one.
[source,shell]
....
# periodic daily
output omitted
# periodic weekly
output omitted
# periodic monthly
output omitted
....
If you get tired of waiting, press kbd:[Alt+F2] to get another _virtual console_, and log in again.
After all, it is a multi-user, multi-tasking system.
Nevertheless these commands will probably flash messages on your screen while they are running; you can type `clear` at the prompt to clear the screen.
Once they have run, you might want to look at [.filename]#/var/mail/root# and [.filename]#/var/log/messages#.
Running such commands is part of system administration-and as a single user of a UNIX(R) system, you are your own system administrator.
Virtually everything you need to be `root` to do is system administration.
Such responsibilities are not covered very well even in those big fat books on UNIX(R), which seem to devote a lot of space to pulling down menus in window managers.
You might want to get one of the two leading books on systems administration, either Evi Nemeth et al.'s UNIX System Administration Handbook (Prentice-Hall, 1995, ISBN 0-13-15051-7)-the second edition with the red cover; or Æleen Frisch's Essential System Administration (O'Reilly & Associates, 2002, ISBN 0-596-00343-9).
I used Nemeth.
[[editing-text]]
== Editing Text
To configure your system, you need to edit text files.
Most of them will be in the [.filename]#/etc# directory; and you will need to `su` to `root` to be able to change them.
You can use the easy `ee`, but in the long run the text editor `vi` is worth learning.
There is an excellent tutorial on vi in [.filename]#/usr/src/contrib/nvi/docs/tutorial#, if you have the system sources installed.
Before you edit a file, you should probably back it up.
Suppose you want to edit [.filename]#/etc/rc.conf#.
You could just use `cd /etc` to get to the [.filename]#/etc# directory and do:
[source,shell]
....
# cp rc.conf rc.conf.orig
....
This would copy [.filename]#rc.conf# to [.filename]#rc.conf.orig#, and you could later copy [.filename]#rc.conf.orig# to [.filename]#rc.conf# to recover the original.
But even better would be moving (renaming) and then copying back:
[source,shell]
....
# mv rc.conf rc.conf.orig
# cp rc.conf.orig rc.conf
....
because `mv` preserves the original date and owner of the file.
You can now edit [.filename]#rc.conf#.
If you want the original back, you would then `mv rc.conf rc.conf.myedit` (assuming you want to preserve your edited version) and then
[source,shell]
....
# mv rc.conf.orig rc.conf
....
to put things back the way they were.
To edit a file, type
[source,shell]
....
# vi filename
....
Move through the text with the arrow keys.
kbd:[Esc] (the escape key) puts `vi` in command mode.
Here are some commands:
`x`::
delete letter the cursor is on
`dd`::
delete the entire line (even if it wraps on the screen)
`i`::
insert text at the cursor
`a`::
insert text after the cursor
Once you type `i` or `a`, you can enter text.
`Esc` puts you back in command mode where you can type
`:w`::
to write your changes to disk and continue editing
`:wq`::
to write and quit
`:q!`::
to quit without saving changes
`/_text_`::
to move the cursor to _text_; `/` kbd:[Enter] (the enter key) to find the next instance of _text_.
`G`::
to go to the end of the file
`nG`::
to go to line _n_ in the file, where _n_ is a number
kbd:[Ctrl+L]::
to redraw the screen
kbd:[Ctrl+b] and kbd:[Ctrl+f]::
go back and forward a screen, as they do with `more` and `view`.
Practice with `vi` in your home directory by creating a new file with `vi _filename_` and adding and deleting text, saving the file, and calling it up again.
`vi` delivers some surprises because it is really quite complex, and sometimes you will inadvertently issue a command that will do something you do not expect.
(Some people actually like `vi`-it is more powerful than DOS EDIT-find out about `:r`.)
Use kbd:[Esc] one or more times to be sure you are in command mode and proceed from there when it gives you trouble, save often with `:w`, and use `:q!` to get out and start over (from your last `:w`) when you need to.
Now you can `cd` to [.filename]#/etc#, `su` to `root`, use `vi` to edit the file [.filename]#/etc/group#, and add a user to `wheel` so the user has root privileges.
Just add a comma and the user's login name to the end of the first line in the file, press kbd:[Esc], and use `:wq` to write the file to disk and quit.
Instantly effective. (You did not put a space after the comma, did you?)
[[other-useful-commands]]
== Other Useful Commands
`df`::
shows file space and mounted systems.
`ps aux`::
shows processes running. `ps ax` is a narrower form.
`rm _filename_`::
remove _filename_.
`rm -R _dir_`::
removes a directory _dir_ and all subdirectories-careful!
`ls -R`::
lists files in the current directory and all subdirectories; I used a variant, `ls -AFR > where.txt`, to get a list of all the files in [.filename]#/# and (separately) [.filename]#/usr# before I found better ways to find files.
`passwd`::
to change the user's password (or ``root``'s password)
`man hier`::
manual page on the UNIX(R) filesystem
Use `find` to locate [.filename]#filename# in [.filename]#/usr# or any of its subdirectories with
[source,shell]
....
% find /usr -name "filename"
....
You can use * as a wildcard in `"_filename_"` (which should be in quotes).
If you tell `find` to search in [.filename]#/# instead of [.filename]#/usr# it will look for the file(s) on all mounted filesystems, including the CDROM and the DOS partition.
An excellent book that explains UNIX(R) commands and utilities is Abrahams & Larson, Unix for the Impatient (2nd ed., Addison-Wesley, 1996). There is also a lot of UNIX(R) information on the Internet.
[[next-steps]]
== Next Steps
You should now have the tools you need to get around and edit files, so you can get everything up and running.
There is a great deal of information in the FreeBSD handbook (which is probably on your hard drive) and link:https://www.FreeBSD.org/[FreeBSD's web site].
A wide variety of packages and ports are on the CDROM as well as the web site.
The handbook tells you more about how to use them (get the package if it exists, with `pkg add _packagename_`, where _packagename_ is the filename of the package).
The CDROM has lists of the packages and ports with brief descriptions in [.filename]#cdrom/packages/index#, [.filename]#cdrom/packages/index.txt#, and [.filename]#cdrom/ports/index#, with fuller descriptions in [.filename]#/cdrom/ports/\*/*/pkg/DESCR#, where the *s represent subdirectories of kinds of programs and program names respectively.
If you find the handbook too sophisticated (what with `lndir` and all) on installing ports from the CDROM, here is what usually works:
Find the port you want, say `kermit`. There will be a directory for it on the CDROM.
Copy the subdirectory to [.filename]#/usr/local# (a good place for software you add that should be available to all users) with:
[source,shell]
....
# cp -R /cdrom/ports/comm/kermit /usr/local
....
This should result in a [.filename]#/usr/local/kermit# subdirectory that has all the files that the `kermit` subdirectory on the CDROM has.
Next, create the directory [.filename]#/usr/ports/distfiles# if it does not already exist using `mkdir`.
Now check [.filename]#/cdrom/ports/distfiles# for a file with a name that indicates it is the port you want.
Copy that file to [.filename]#/usr/ports/distfiles#; in recent versions you can skip this step, as FreeBSD will do it for you.
In the case of `kermit`, there is no distfile.
Then `cd` to the subdirectory of [.filename]#/usr/local/kermit# that has the file [.filename]#Makefile#.
Type
[source,shell]
....
# make all install
....
During this process the port will FTP to get any compressed files it needs that it did not find on the CDROM or in [.filename]#/usr/ports/distfiles#.
If you do not have your network running yet and there was no file for the port in [.filename]#/cdrom/ports/distfiles#, you will have to get the distfile using another machine and copy it to [.filename]#/usr/ports/distfiles#.
Read [.filename]#Makefile# (with `cat` or `more` or `view`) to find out where to go (the master distribution site) to get the file and what its name is.
(Use binary file transfers!) Then go back to [.filename]#/usr/local/kermit#, find the directory with [.filename]#Makefile#, and type `make all install`.
[[your-working-environment]]
== Your Working Environment
Your shell is the most important part of your working environment.
The shell is what interprets the commands you type on the command line, and thus communicates with the rest of the operating system.
You can also write shell scripts, a series of commands to be run without intervention.
Two shells come installed with FreeBSD: `csh` and `sh`.
`csh` is good for command-line work, but scripts should be written with `sh` (or `bash`).
You can find out what shell you have by typing `echo $SHELL`.
The `csh` shell is okay, but `tcsh` does everything `csh` does and more.
It allows you to recall commands with the arrow keys and edit them.
It has tab-key completion of filenames (`csh` uses kbd:[Esc]), and it lets you switch to the directory you were last in with `cd -`.
It is also much easier to alter your prompt with `tcsh`.
It makes life a lot easier.
Here are the steps for installing a new shell:
[.procedure]
====
. Install the shell as a port or a package, just as you would any other port or package.
. Use `chsh` to change your shell to `tcsh` permanently, or type `tcsh` at the prompt to change your shell without logging in again.
====
[NOTE]
====
It can be dangerous to change `root`'s shell to something other than `sh` or `csh` on early versions of FreeBSD and many other versions of UNIX(R);
you may not have a working shell when the system puts you into single user mode.
The solution is to use `su -m` to become `root`, which will give you the `tcsh` as `root`, because the shell is part of the environment.
You can make this permanent by adding it to your [.filename]#.tcshrc# as an alias with:
[.programlisting]
....
alias su su -m
....
====
When `tcsh` starts up, it will read the [.filename]#/etc/csh.cshrc# and [.filename]#/etc/csh.login# files, as does `csh`.
It will also read [.filename]#.login# in your home directory and [.filename]#.cshrc# as well, unless you provide a [.filename]#.tcshrc#.
This you can do by simply copying [.filename]#.cshrc# to [.filename]#.tcshrc#.
Now that you have installed `tcsh`, you can adjust your prompt.
You can find the details in the manual page for `tcsh`, but here is a line to put in your [.filename]#.tcshrc# that will tell you how many commands you have typed, what time it is, and what directory you are in.
It also produces a `>` if you are an ordinary user and a `#` if you are `root`, but `tcsh` will do that in any case:
set prompt = "%h %t %~ %# "
This should go in the same place as the existing `set prompt` line if there is one, or under `if($?prompt) then` if not.
Comment out the old line; you can always switch back to it if you prefer it.
Do not forget the spaces and quotes.
You can get the [.filename]#.tcshrc# reread by typing `source .tcshrc`.
You can get a listing of other environmental variables that have been set by typing `env` at the prompt.
The result will show you your default editor, pager, and terminal type, among possibly many others.
A useful command if you log in from a remote location and cannot run a program because the terminal is not capable is `setenv TERM vt100`.
[[other]]
== Other
As `root`, you can unmount the CDROM with `/sbin/umount /cdrom`, take it out of the drive, insert another one, and mount it with `/sbin/mount_cd9660 /dev/cd0a /cdrom` assuming cd0a is the device name for your CDROM drive.
The most recent versions of FreeBSD let you mount the CDROM with just `/sbin/mount /cdrom`.
Using the live filesystem-the second of FreeBSD's CDROM disks-is useful if you have got limited space.
What is on the live filesystem varies from release to release.
You might try playing games from the CDROM.
This involves using `lndir`, which gets installed with the X Window System, to tell the program(s) where to find the necessary files, because they are in [.filename]#/cdrom# instead of in [.filename]#/usr# and its subdirectories, which is where they are expected to be.
Read `man lndir`.
[[comments-welcome]]
== Comments Welcome
If you use this guide I would be interested in knowing where it was unclear and what was left out that you think should be included, and if it was helpful.
My thanks to Eugene W. Stark, professor of computer science at SUNY-Stony Brook, and John Fieber for helpful comments.
Annelise Anderson, mailto:andrsn@andrsn.stanford.edu[andrsn@andrsn.stanford.edu]
diff --git a/documentation/content/en/articles/pam/_index.adoc b/documentation/content/en/articles/pam/_index.adoc
index 82bd501ba2..6f3d5a22d6 100644
--- a/documentation/content/en/articles/pam/_index.adoc
+++ b/documentation/content/en/articles/pam/_index.adoc
@@ -1,641 +1,641 @@
---
title: Pluggable Authentication Modules
authors:
- author: Dag-Erling Smørgrav
copyright: 2001-2003 Networks Associates Technology, Inc.
-releaseinfo: "$FreeBSD$"
+description: Pluggable Authentication Modules (PAM) in FreeBSD
trademarks: ["pam", "freebsd", "linux", "opengroup", "sun", "general"]
---
= Pluggable Authentication Modules
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
[.abstract-title]
Abstract
This article describes the underlying principles and mechanisms of the Pluggable Authentication Modules (PAM) library, and explains how to configure PAM, how to integrate PAM into applications, and how to write PAM modules.
'''
toc::[]
[[pam-intro]]
== Introduction
The Pluggable Authentication Modules (PAM) library is a generalized API for authentication-related services which allows a system administrator to add new authentication methods simply by installing new PAM modules, and to modify authentication policies by editing configuration files.
PAM was defined and developed in 1995 by Vipin Samar and Charlie Lai of Sun Microsystems, and has not changed much since.
In 1997, the Open Group published the X/Open Single Sign-on (XSSO) preliminary specification, which standardized the PAM API and added extensions for single (or rather integrated) sign-on.
At the time of this writing, this specification has not yet been adopted as a standard.
Although this article focuses primarily on FreeBSD 5.x, which uses OpenPAM, it should be equally applicable to FreeBSD 4.x, which uses Linux-PAM, and other operating systems such as Linux and Solaris(TM).
[[pam-terms]]
== Terms and Conventions
[[pam-definitions]]
=== Definitions
The terminology surrounding PAM is rather confused.
Neither Samar and Lai's original paper nor the XSSO specification made any attempt at formally defining terms for the various actors and entities involved in PAM, and the terms that they do use (but do not define) are sometimes misleading and ambiguous.
The first attempt at establishing a consistent and unambiguous terminology was a whitepaper written by Andrew G. Morgan (author of Linux-PAM) in 1999.
While Morgan's choice of terminology was a huge leap forward, it is in this author's opinion by no means perfect.
What follows is an attempt, heavily inspired by Morgan, to define precise and unambiguous terms for all actors and entities involved in PAM.
account::
The set of credentials the applicant is requesting from the arbitrator.
applicant::
The user or entity requesting authentication.
arbitrator::
The user or entity who has the privileges necessary to verify the applicant's credentials and the authority to grant or deny the request.
chain::
A sequence of modules that will be invoked in response to a PAM request.
The chain includes information about the order in which to invoke the modules, what arguments to pass to them, and how to interpret the results.
client::
The application responsible for initiating an authentication request on behalf of the applicant and for obtaining the necessary authentication information from him.
facility::
One of the four basic groups of functionality provided by PAM: authentication, account management, session management and authentication token update.
module::
A collection of one or more related functions implementing a particular authentication facility, gathered into a single (normally dynamically loadable) binary file and identified by a single name.
policy::
The complete set of configuration statements describing how to handle PAM requests for a particular service.
A policy normally consists of four chains, one for each facility, though some services do not use all four facilities.
server::
The application acting on behalf of the arbitrator to converse with the client, retrieve authentication information, verify the applicant's credentials and grant or deny requests.
service::
A class of servers providing similar or related functionality and requiring similar authentication.
PAM policies are defined on a per-service basis, so all servers that claim the same service name will be subject to the same policy.
session::
The context within which service is rendered to the applicant by the server.
One of PAM's four facilities, session management, is concerned exclusively with setting up and tearing down this context.
token::
A chunk of information associated with the account, such as a password or passphrase, which the applicant must provide to prove his identity.
transaction::
A sequence of requests from the same applicant to the same instance of the same server, beginning with authentication and session set-up and ending with session tear-down.
[[pam-usage-examples]]
=== Usage Examples
This section aims to illustrate the meanings of some of the terms defined above by way of a handful of simple examples.
==== Client and Server Are One
This simple example shows `alice` man:su[1]'ing to `root`.
[source,shell]
....
% whoami
alice
% ls -l `which su`
-r-sr-xr-x 1 root wheel 10744 Dec 6 19:06 /usr/bin/su
% su -
Password: xi3kiune
# whoami
root
....
* The applicant is `alice`.
* The account is `root`.
* The man:su[1] process is both client and server.
* The authentication token is `xi3kiune`.
* The arbitrator is `root`, which is why man:su[1] is setuid `root`.
==== Client and Server Are Separate
The example below shows `eve` trying to initiate an man:ssh[1] connection to `login.example.com`, asking to log in as `bob`, and succeeding.
Bob should have chosen a better password!
[source,shell]
....
% whoami
eve
% ssh bob@login.example.com
bob@login.example.com's password: god
Last login: Thu Oct 11 09:52:57 2001 from 192.168.0.1
Copyright (c) 1980, 1983, 1986, 1988, 1990, 1991, 1993, 1994
The Regents of the University of California. All rights reserved.
FreeBSD 4.4-STABLE (LOGIN) #4: Tue Nov 27 18:10:34 PST 2001
Welcome to FreeBSD!
%
....
* The applicant is `eve`.
* The client is Eve's man:ssh[1] process.
* The server is the man:sshd[8] process on `login.example.com`
* The account is `bob`.
* The authentication token is `god`.
* Although this is not shown in this example, the arbitrator is `root`.
==== Sample Policy
The following is FreeBSD's default policy for `sshd`:
[.programlisting]
....
sshd auth required pam_nologin.so no_warn
sshd auth required pam_unix.so no_warn try_first_pass
sshd account required pam_login_access.so
sshd account required pam_unix.so
sshd session required pam_lastlog.so no_fail
sshd password required pam_permit.so
....
* This policy applies to the `sshd` service (which is not necessarily restricted to the man:sshd[8] server.)
* `auth`, `account`, `session` and `password` are facilities.
* [.filename]#pam_nologin.so#, [.filename]#pam_unix.so#, [.filename]#pam_login_access.so#, [.filename]#pam_lastlog.so# and [.filename]#pam_permit.so# are modules. It is clear from this example that [.filename]#pam_unix.so# provides at least two facilities (authentication and account management.)
[[pam-essentials]]
== PAM Essentials
[[pam-facilities-primitives]]
=== Facilities and Primitives
The PAM API offers six different authentication primitives grouped in four facilities, which are described below.
`auth`::
_Authentication._ This facility concerns itself with authenticating the applicant and establishing the account credentials.
It provides two primitives:
** man:pam_authenticate[3] authenticates the applicant, usually by requesting an authentication token and comparing it with a value stored in a database or obtained from an authentication server.
** man:pam_setcred[3] establishes account credentials such as user ID, group membership and resource limits.
`account`::
_Account management._ This facility handles non-authentication-related issues of account availability, such as access restrictions based on the time of day or the server's work load.
It provides a single primitive:
** man:pam_acct_mgmt[3] verifies that the requested account is available.
`session`::
_Session management._ This facility handles tasks associated with session set-up and tear-down, such as login accounting.
It provides two primitives:
** man:pam_open_session[3] performs tasks associated with session set-up: add an entry in the [.filename]#utmp# and [.filename]#wtmp# databases, start an SSH agent, etc.
** man:pam_close_session[3] performs tasks associated with session tear-down: add an entry in the [.filename]#utmp# and [.filename]#wtmp# databases, stop the SSH agent, etc.
`password`::
_Password management._ This facility is used to change the authentication token associated with an account, either because it has expired or because the user wishes to change it.
It provides a single primitive:
** man:pam_chauthtok[3] changes the authentication token, optionally verifying that it is sufficiently hard to guess, has not been used previously, etc.
[[pam-modules]]
=== Modules
Modules are a very central concept in PAM; after all, they are the "M" in "PAM".
A PAM module is a self-contained piece of program code that implements the primitives in one or more facilities for one particular mechanism;
possible mechanisms for the authentication facility, for instance, include the UNIX(R) password database, NIS, LDAP and Radius.
[[pam-module-naming]]
==== Module Naming
FreeBSD implements each mechanism in a single module, named `pam_mechanism.so` (for instance, `pam_unix.so` for the UNIX(R) mechanism.)
Other implementations sometimes have separate modules for separate facilities, and include the facility name as well as the mechanism name in the module name.
To name one example, Solaris(TM) has a `pam_dial_auth.so.1` module which is commonly used to authenticate dialup users.
[[pam-module-versioning]]
==== Module Versioning
FreeBSD's original PAM implementation, based on Linux-PAM, did not use version numbers for PAM modules.
This would commonly cause problems with legacy applications, which might be linked against older versions of the system libraries, as there was no way to load a matching version of the required modules.
OpenPAM, on the other hand, looks for modules that have the same version number as the PAM library (currently 2), and only falls back to an unversioned module if no versioned module could be loaded.
Thus legacy modules can be provided for legacy applications, while allowing new (or newly built) applications to take advantage of the most recent modules.
Although Solaris(TM) PAM modules commonly have a version number, they are not truly versioned, because the number is a part of the module name and must be included in the configuration.
[[pam-chains-policies]]
=== Chains and Policies
When a server initiates a PAM transaction, the PAM library tries to load a policy for the service specified in the man:pam_start[3] call.
The policy specifies how authentication requests should be processed, and is defined in a configuration file.
This is the other central concept in PAM: the possibility for the admin to tune the system security policy (in the wider sense of the word) simply by editing a text file.
A policy consists of four chains, one for each of the four PAM facilities.
Each chain is a sequence of configuration statements, each specifying a module to invoke, some (optional) parameters to pass to the module, and a control flag that describes how to interpret the return code from the module.
Understanding the control flags is essential to understanding PAM configuration files.
There are four different control flags:
`binding`::
If the module succeeds and no earlier module in the chain has failed, the chain is immediately terminated and the request is granted.
If the module fails, the rest of the chain is executed, but the request is ultimately denied.
+
This control flag was introduced by Sun in Solaris(TM) 9 (SunOS(TM) 5.9), and is also supported by OpenPAM.
`required`::
If the module succeeds, the rest of the chain is executed, and the request is granted unless some other module fails.
If the module fails, the rest of the chain is also executed, but the request is ultimately denied.
`requisite`::
If the module succeeds, the rest of the chain is executed, and the request is granted unless some other module fails.
If the module fails, the chain is immediately terminated and the request is denied.
`sufficient`::
If the module succeeds and no earlier module in the chain has failed, the chain is immediately terminated and the request is granted.
If the module fails, the module is ignored and the rest of the chain is executed.
+
As the semantics of this flag may be somewhat confusing, especially when it is used for the last module in a chain, it is recommended that the `binding` control flag be used instead if the implementation supports it.
`optional`::
The module is executed, but its result is ignored.
If all modules in a chain are marked `optional`, all requests will always be granted.
When a server invokes one of the six PAM primitives, PAM retrieves the chain for the facility the primitive belongs to, and invokes each of the modules listed in the chain, in the order they are listed, until it reaches the end, or determines that no further processing is necessary (either because a `binding` or `sufficient` module succeeded, or because a `requisite` module failed.)
The request is granted if and only if at least one module was invoked, and all non-optional modules succeeded.
Note that it is possible, though not very common, to have the same module listed several times in the same chain.
For instance, a module that looks up user names and passwords in a directory server could be invoked multiple times with different parameters specifying different directory servers to contact.
PAM treats different occurrences of the same module in the same chain as different, unrelated modules.
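As a hedged illustration of how the control flags interact (the module names and options below are an example only, not a recommended configuration), consider an `auth` chain for the `login` service that tries Kerberos first and falls back to the local password database:
[.programlisting]
....
login   auth    sufficient      pam_krb5.so     no_warn try_first_pass
login   auth    required        pam_unix.so     no_warn try_first_pass
....
If [.filename]#pam_krb5.so# succeeds and nothing earlier in the chain has failed, the chain terminates immediately and the request is granted; if it fails, the failure is ignored and [.filename]#pam_unix.so# alone decides the outcome.
Replacing `sufficient` with `binding` would behave similarly on success, but a Kerberos failure would then cause the request to be denied even if [.filename]#pam_unix.so# succeeds.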
[[pam-transactions]]
=== Transactions
The lifecycle of a typical PAM transaction is described below.
Note that if any of these steps fails, the server should report a suitable error message to the client and abort the transaction.
. If necessary, the server obtains arbitrator credentials through a mechanism independent of PAM, most commonly by virtue of having been started by `root`, or of being setuid `root`.
. The server calls man:pam_start[3] to initialize the PAM library and specify its service name and the target account, and register a suitable conversation function.
. The server obtains various information relating to the transaction (such as the applicant's user name and the name of the host the client runs on) and submits it to PAM using man:pam_set_item[3].
. The server calls man:pam_authenticate[3] to authenticate the applicant.
. The server calls man:pam_acct_mgmt[3] to verify that the requested account is available and valid. If the password is correct but has expired, man:pam_acct_mgmt[3] will return `PAM_NEW_AUTHTOK_REQD` instead of `PAM_SUCCESS`.
. If the previous step returned `PAM_NEW_AUTHTOK_REQD`, the server now calls man:pam_chauthtok[3] to force the client to change the authentication token for the requested account.
. Now that the applicant has been properly authenticated, the server calls man:pam_setcred[3] to establish the credentials of the requested account. It is able to do this because it acts on behalf of the arbitrator, and holds the arbitrator's credentials.
. Once the correct credentials have been established, the server calls man:pam_open_session[3] to set up the session.
. The server now performs whatever service the client requested, for instance providing the applicant with a shell.
. Once the server is done serving the client, it calls man:pam_close_session[3] to tear down the session.
. Finally, the server calls man:pam_end[3] to notify the PAM library that it is done and that it can release whatever resources it has allocated in the course of the transaction.
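The following fragment sketches these steps in code.
It is only an outline, assuming OpenPAM and its man:openpam_ttyconv[3] conversation function, with error handling reduced to a single pass/fail check and with a hypothetical service name and function name; see <<pam-sample-appl>> for a complete application.
[.programlisting]
....
#include <security/pam_appl.h>
#include <security/openpam.h>	/* openpam_ttyconv() */

static struct pam_conv pamc = { openpam_ttyconv, NULL };

int
serve(const char *user, const char *tty)
{
	pam_handle_t *pamh;
	int err;

	/* step 2: initialize PAM and register the conversation function */
	err = pam_start("myservice", user, &pamc, &pamh);
	if (err != PAM_SUCCESS)
		return (err);
	/* step 3: submit transaction-related items */
	pam_set_item(pamh, PAM_TTY, tty);
	/* step 4: authenticate the applicant */
	err = pam_authenticate(pamh, 0);
	/* step 5: verify that the account is available and valid */
	if (err == PAM_SUCCESS)
		err = pam_acct_mgmt(pamh, 0);
	/* step 6: force a password change if the old one has expired */
	if (err == PAM_NEW_AUTHTOK_REQD)
		err = pam_chauthtok(pamh, PAM_CHANGE_EXPIRED_AUTHTOK);
	/* step 7: establish the credentials of the requested account */
	if (err == PAM_SUCCESS)
		err = pam_setcred(pamh, PAM_ESTABLISH_CRED);
	/* step 8: set up the session */
	if (err == PAM_SUCCESS)
		err = pam_open_session(pamh, 0);
	if (err == PAM_SUCCESS) {
		/* step 9: perform the requested service here */
		/* step 10: tear down the session */
		pam_close_session(pamh, 0);
	}
	/* step 11: release PAM resources */
	pam_end(pamh, err);
	return (err);
}
....
A real server would check the return value of every call and report failures to the client, as described above.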
[[pam-config]]
== PAM Configuration
[[pam-config-file]]
=== PAM Policy Files
[[pam-config-pam.conf]]
==== The [.filename]#/etc/pam.conf#
The traditional PAM policy file is [.filename]#/etc/pam.conf#.
This file contains all the PAM policies for your system.
Each line of the file describes one step in a chain, as shown below:
[.programlisting]
....
login auth required pam_nologin.so no_warn
....
The fields are, in order: service name, facility name, control flag, module name, and module arguments.
Any additional fields are interpreted as additional module arguments.
A separate chain is constructed for each service / facility pair, so while the order in which lines for the same service and facility appear is significant, the order in which the individual services and facilities are listed is not.
The examples in the original PAM paper grouped configuration lines by facility, and the Solaris(TM) stock [.filename]#pam.conf# still does that, but FreeBSD's stock configuration groups configuration lines by service.
Either way is fine; either way makes equal sense.
[[pam-config-pam.d]]
==== The [.filename]#/etc/pam.d#
OpenPAM and Linux-PAM support an alternate configuration mechanism, which is the preferred mechanism in FreeBSD.
In this scheme, each policy is contained in a separate file bearing the name of the service it applies to.
These files are stored in [.filename]#/etc/pam.d/#.
These per-service policy files have only four fields instead of [.filename]#pam.conf#'s five: the service name field is omitted.
Thus, instead of the sample [.filename]#pam.conf# line from the previous section, one would have the following line in [.filename]#/etc/pam.d/login#:
[.programlisting]
....
auth required pam_nologin.so no_warn
....
As a consequence of this simplified syntax, it is possible to use the same policy for multiple services by linking each service name to the same policy file.
For instance, to use the same policy for the `su` and `sudo` services, one could do as follows:
[source,shell]
....
# cd /etc/pam.d
# ln -s su sudo
....
This works because the service name is determined from the file name rather than specified in the policy file, so the same file can be used for multiple differently-named services.
Since each service's policy is stored in a separate file, the [.filename]#pam.d# mechanism also makes it very easy to install additional policies for third-party software packages.
[[pam-config-file-order]]
==== The Policy Search Order
As we have seen above, PAM policies can be found in a number of places.
What happens if policies for the same service exist in multiple places?
It is essential to understand that PAM's configuration system is centered on chains.
[[pam-config-breakdown]]
=== Breakdown of a Configuration Line
As explained in <<pam-config-file>>, each line in [.filename]#/etc/pam.conf# consists of four or more fields: the service name, the facility name, the control flag, the module name, and zero or more module arguments.
The service name is generally (though not always) the name of the application the statement applies to.
If you are unsure, refer to the individual application's documentation to determine what service name it uses.
Note that if you use [.filename]#/etc/pam.d/# instead of [.filename]#/etc/pam.conf#, the service name is specified by the name of the policy file, and omitted from the actual configuration lines, which then start with the facility name.
The facility is one of the four facility keywords described in <<pam-facilities-primitives>>.
Likewise, the control flag is one of the four keywords described in <<pam-chains-policies>>, describing how to interpret the return code from the module.
Linux-PAM supports an alternate syntax that lets you specify the action to associate with each possible return code, but this should be avoided as it is non-standard and closely tied in with the way Linux-PAM dispatches service calls (which differs greatly from the way Solaris(TM) and OpenPAM do it.)
Unsurprisingly, OpenPAM does not support this syntax.
[[pam-policies]]
=== Policies
To configure PAM correctly, it is essential to understand how policies are interpreted.
When an application calls man:pam_start[3], the PAM library loads the policy for the specified service and constructs four module chains (one for each facility.)
If one or more of these chains are empty, the corresponding chains from the policy for the `other` service are substituted.
When the application later calls one of the six PAM primitives, the PAM library retrieves the chain for the corresponding facility and calls the appropriate service function in each module listed in the chain, in the order in which they were listed in the configuration.
After each call to a service function, the module type and the error code returned by the service function are used to determine what happens next.
With a few exceptions, which we discuss below, the following table applies:
.PAM Chain Execution Summary
[cols="1,1,1,1", options="header"]
|===
|
| PAM_SUCCESS
| PAM_IGNORE
| other
|binding
|if (!fail) break;
|-
|fail = true;
|required
|-
|-
|fail = true;
|requisite
|-
|-
|fail = true; break;
|sufficient
|if (!fail) break;
|-
|-
|optional
|-
|-
|-
|===
If `fail` is true at the end of a chain, or when a "break" is reached, the dispatcher returns the error code returned by the first module that failed.
Otherwise, it returns `PAM_SUCCESS`.
The first exception of note is that the error code `PAM_NEW_AUTHTOK_REQD` is treated like a success, except that if no module failed, and at least one module returned `PAM_NEW_AUTHTOK_REQD`, the dispatcher will return `PAM_NEW_AUTHTOK_REQD`.
The second exception is that man:pam_setcred[3] treats `binding` and `sufficient` modules as if they were `required`.
The third and final exception is that man:pam_chauthtok[3] runs the entire chain twice (once for preliminary checks and once to actually set the password), and in the preliminary phase it treats `binding` and `sufficient` modules as if they were `required`.
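For readers who prefer code to tables, the following fragment is a literal rendering of the table above.
It is an illustrative sketch only, not OpenPAM's actual dispatcher; the `chain_entry` structure and `run_module()` function are hypothetical, and the exceptions described in the preceding paragraphs are not handled.
[.programlisting]
....
#include <security/pam_appl.h>
#include <stdbool.h>
#include <stddef.h>

enum ctrl_flag { CF_BINDING, CF_REQUIRED, CF_REQUISITE, CF_SUFFICIENT, CF_OPTIONAL };

struct chain_entry {
	enum ctrl_flag	flag;
	/* module, arguments, ... */
};

int run_module(struct chain_entry *);	/* hypothetical: calls the service function */

int
dispatch(struct chain_entry *chain, size_t len)
{
	int err, first_err = PAM_SUCCESS;
	bool fail = false;
	size_t i;

	for (i = 0; i < len; i++) {
		err = run_module(&chain[i]);
		if (err == PAM_SUCCESS) {
			/* binding and sufficient terminate the chain early */
			if ((chain[i].flag == CF_BINDING ||
			    chain[i].flag == CF_SUFFICIENT) && !fail)
				break;
		} else if (err != PAM_IGNORE) {
			/* failures of sufficient and optional modules are ignored */
			if (chain[i].flag == CF_BINDING ||
			    chain[i].flag == CF_REQUIRED ||
			    chain[i].flag == CF_REQUISITE) {
				fail = true;
				if (first_err == PAM_SUCCESS)
					first_err = err;
				if (chain[i].flag == CF_REQUISITE)
					break;
			}
		}
	}
	return (fail ? first_err : PAM_SUCCESS);
}
....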
[[pam-freebsd-modules]]
== FreeBSD PAM Modules
[[pam-modules-deny]]
=== man:pam_deny[8]
The man:pam_deny[8] module is one of the simplest modules available; it responds to any request with `PAM_AUTH_ERR`.
It is useful for quickly disabling a service (add it to the top of every chain), or for terminating chains of `sufficient` modules.
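As a hedged example of the latter use, shown here in [.filename]#/etc/pam.d/# style with illustrative module choices, a chain consisting only of `sufficient` modules needs something at the end to deny the request when none of them succeeds:
[.programlisting]
....
auth    sufficient      pam_opie.so     no_warn
auth    sufficient      pam_unix.so     no_warn try_first_pass
auth    required        pam_deny.so
....
If either `sufficient` module succeeds, the chain terminates before man:pam_deny[8] is reached; if both fail, their failures are ignored and [.filename]#pam_deny.so# ensures the request is denied rather than falling through to the end of the chain.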
[[pam-modules-echo]]
=== man:pam_echo[8]
The man:pam_echo[8] module simply passes its arguments to the conversation function as a `PAM_TEXT_INFO` message.
It is mostly useful for debugging, but can also serve to display messages such as "Unauthorized access will be prosecuted" before starting the authentication procedure.
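For instance, a hypothetical policy could display such a banner before the first real authentication module runs; the wording, quoting and placement here are illustrative only:
[.programlisting]
....
auth    optional        pam_echo.so     "Unauthorized access will be prosecuted"
auth    required        pam_unix.so     no_warn try_first_pass
....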
[[pam-modules-exec]]
=== man:pam_exec[8]
The man:pam_exec[8] module takes its first argument to be the name of a program to execute, and the remaining arguments are passed to that program as command-line arguments.
One possible application is to use it to run a program at login time which mounts the user's home directory.
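A hedged sketch of such a configuration might look like the following, where [.filename]#/usr/local/sbin/mount-home.sh# is a hypothetical script that mounts the user's home directory:
[.programlisting]
....
session optional        pam_exec.so     /usr/local/sbin/mount-home.sh
....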
[[pam-modules-ftpusers]]
=== man:pam_ftpusers[8]
The man:pam_ftpusers[8] module
[[pam-modules-group]]
=== man:pam_group[8]
The man:pam_group[8] module accepts or rejects applicants on the basis of their membership in a particular file group (normally `wheel` for man:su[1]).
It is primarily intended for maintaining the traditional behavior of BSD man:su[1], but has many other uses, such as excluding certain groups of users from a particular service.
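A hedged example of restricting a service to members of a particular group (the group name and placement are illustrative only):
[.programlisting]
....
auth    requisite       pam_group.so    no_warn group=staff
....
With `requisite`, applicants who are not members of `staff` are rejected immediately, before any later modules in the chain run.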
[[pam-modules-guest]]
=== man:pam_guest[8]
The man:pam_guest[8] module allows guest logins using fixed login names.
Various requirements can be placed on the password, but the default behavior is to allow any password as long as the login name is that of a guest account.
The man:pam_guest[8] module can easily be used to implement anonymous FTP logins.
[[pam-modules-krb5]]
=== man:pam_krb5[8]
The man:pam_krb5[8] module
[[pam-modules-ksu]]
=== man:pam_ksu[8]
The man:pam_ksu[8] module
[[pam-modules-lastlog]]
=== man:pam_lastlog[8]
The man:pam_lastlog[8] module
[[pam-modules-login-access]]
=== man:pam_login_access[8]
The man:pam_login_access[8] module provides an implementation of the account management primitive which enforces the login restrictions specified in the man:login.access[5] table.
[[pam-modules-nologin]]
=== man:pam_nologin[8]
The man:pam_nologin[8] module refuses non-root logins when [.filename]#/var/run/nologin# exists.
This file is normally created by man:shutdown[8] when less than five minutes remain until the scheduled shutdown time.
[[pam-modules-opie]]
=== man:pam_opie[8]
The man:pam_opie[8] module implements the man:opie[4] authentication method.
The man:opie[4] system is a challenge-response mechanism where the response to each challenge is a direct function of the challenge and a passphrase, so the response can be easily computed "just in time" by anyone possessing the passphrase, eliminating the need for password lists.
Moreover, since man:opie[4] never reuses a challenge that has been correctly answered, it is not vulnerable to replay attacks.
[[pam-modules-opieaccess]]
=== man:pam_opieaccess[8]
The man:pam_opieaccess[8] module is a companion module to man:pam_opie[8].
Its purpose is to enforce the restrictions codified in man:opieaccess[5], which regulate the conditions under which a user who would normally authenticate herself using man:opie[4] is allowed to use alternate methods.
This is most often used to prohibit the use of password authentication from untrusted hosts.
In order to be effective, the man:pam_opieaccess[8] module must be listed as `requisite` immediately after a `sufficient` entry for man:pam_opie[8], and before any other modules, in the `auth` chain.
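Putting the two modules together, an `auth` chain honouring that rule might look like the following; it is loosely modelled on FreeBSD's stock policies and should be treated as a sketch rather than a drop-in configuration:
[.programlisting]
....
auth    sufficient      pam_opie.so             no_warn no_fake_prompts
auth    requisite       pam_opieaccess.so       no_warn allow_local
auth    required        pam_unix.so             no_warn try_first_pass
....
If the applicant authenticates with man:opie[4], the `sufficient` entry terminates the chain; otherwise man:pam_opieaccess[8] decides whether falling back to man:pam_unix[8] password authentication is permitted at all.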
[[pam-modules-passwdqc]]
=== man:pam_passwdqc[8]
The man:pam_passwdqc[8] module
[[pam-modules-permit]]
=== man:pam_permit[8]
The man:pam_permit[8] module is one of the simplest modules available; it responds to any request with `PAM_SUCCESS`.
It is useful as a placeholder for services where one or more chains would otherwise be empty.
[[pam-modules-radius]]
=== man:pam_radius[8]
The man:pam_radius[8] module
[[pam-modules-rhosts]]
=== man:pam_rhosts[8]
The man:pam_rhosts[8] module
[[pam-modules-rootok]]
=== man:pam_rootok[8]
The man:pam_rootok[8] module reports success if and only if the real user id of the process calling it (which is assumed to be run by the applicant) is 0.
This is useful for non-networked services such as man:su[1] or man:passwd[1], to which `root` should have automatic access.
[[pam-modules-securetty]]
=== man:pam_securetty[8]
The man:pam_securetty[8] module
[[pam-modules-self]]
=== man:pam_self[8]
The man:pam_self[8] module reports success if and only if the name of the applicant matches that of the target account.
It is most useful for non-networked services such as man:su[1], where the identity of the applicant can be easily verified.
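Both man:pam_rootok[8] and man:pam_self[8] typically appear as `sufficient` entries near the top of an man:su[1]-style `auth` chain, so that the trivially verifiable cases are handled before any password is requested.
A hedged sketch, not a complete policy:
[.programlisting]
....
auth    sufficient      pam_rootok.so   no_warn
auth    sufficient      pam_self.so     no_warn
auth    required        pam_unix.so     no_warn try_first_pass
....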
[[pam-modules-ssh]]
=== man:pam_ssh[8]
The man:pam_ssh[8] module provides both authentication and session services.
The authentication service allows users who have passphrase-protected SSH secret keys in their [.filename]#~/.ssh# directory to authenticate themselves by typing their passphrase.
The session service starts man:ssh-agent[1] and preloads it with the keys that were decrypted in the authentication phase.
This feature is particularly useful for local logins, whether in X (using man:xdm[1] or another PAM-aware X login manager) or at the console.
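A hedged example using both services follows; whether to enable this at all is a local decision, and the option names are those documented in man:pam_ssh[8]:
[.programlisting]
....
auth    sufficient      pam_ssh.so      no_warn try_first_pass
session optional        pam_ssh.so      want_agent
....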
[[pam-modules-tacplus]]
=== man:pam_tacplus[8]
The man:pam_tacplus[8] module
[[pam-modules-unix]]
=== man:pam_unix[8]
The man:pam_unix[8] module implements traditional UNIX(R) password authentication, using man:getpwnam[3] to obtain the target account's password and compare it with the one provided by the applicant.
It also provides account management services (enforcing account and password expiration times) and password-changing services.
This is probably the single most useful module, as the great majority of admins will want to maintain historical behavior for at least some services.
[[pam-appl-prog]]
== PAM Application Programming
This section has not yet been written.
[[pam-module-prog]]
== PAM Module Programming
This section has not yet been written.
:sectnums!:
[appendix]
[[pam-sample-appl]]
== Sample PAM Application
The following is a minimal implementation of man:su[1] using PAM.
Note that it uses the OpenPAM-specific man:openpam_ttyconv[3] conversation function, which is prototyped in [.filename]#security/openpam.h#.
If you wish to build this application on a system with a different PAM library, you will have to provide your own conversation function.
A robust conversation function is surprisingly difficult to implement;
the one presented in <<pam-sample-conv>> is a good starting point, but should not be used in real-world applications.
[.programlisting]
....
include::static/source/articles/pam/su.c[]
....
:sectnums!:
[appendix]
[[pam-sample-module]]
== Sample PAM Module
The following is a minimal implementation of man:pam_unix[8], offering only authentication services.
It should build and run with most PAM implementations, but takes advantage of OpenPAM extensions if available: note the use of man:pam_get_authtok[3], which enormously simplifies prompting the user for a password.
[.programlisting]
....
include::static/source/articles/pam/pam_unix.c[]
....
:sectnums!:
[appendix]
[[pam-sample-conv]]
== Sample PAM Conversation Function
The conversation function presented below is a greatly simplified version of OpenPAM's man:openpam_ttyconv[3].
It is fully functional, and should give the reader a good idea of how a conversation function should behave, but it is far too simple for real-world use.
Even if you are not using OpenPAM, feel free to download the source code and adapt man:openpam_ttyconv[3] to your uses; we believe it to be as robust as a tty-oriented conversation function can reasonably get.
[.programlisting]
....
include::static/source/articles/pam/converse.c[]
....
:sectnums!:
[[pam-further]]
== Further Reading
=== Papers
_Making Login Services Independent of Authentication Technologies_. Vipin Samar, Charlie Lai. Sun Microsystems.
_link:https://pubs.opengroup.org/onlinepubs/8329799/toc.htm[X/Open Single Sign-on Preliminary Specification]_. The Open Group. 1-85912-144-6. June 1997.
_link:https://mirrors.kernel.org/pub/linux/libs/pam/pre/doc/draft-morgan-pam-07.txt[Pluggable Authentication Modules]_. Andrew G. Morgan. 1999-10-06.
=== User Manuals
_link:https://docs.oracle.com/cd/E26505_01/html/E27224/pam-1.html[PAM Administration]_. Sun Microsystems.
=== Related Web Pages
_link:https://www.openpam.org/[OpenPAM homepage]_ Dag-Erling Smørgrav. ThinkSec AS.
_link:http://www.kernel.org/pub/linux/libs/pam/[Linux-PAM homepage]_ Andrew Morgan.
_Solaris PAM homepage_. Sun Microsystems.
diff --git a/documentation/content/en/articles/pgpkeys/_index.adoc b/documentation/content/en/articles/pgpkeys/_index.adoc
index a09cfe8fcd..f55d39fe46 100644
--- a/documentation/content/en/articles/pgpkeys/_index.adoc
+++ b/documentation/content/en/articles/pgpkeys/_index.adoc
@@ -1,1791 +1,1791 @@
---
title: OpenPGP Keys
-releaseinfo: "$FreeBSD$"
+description: List of OpenPGP keys that can be used to verify a signature or send encrypted email to FreeBSD.org officers or developers.
---
= OpenPGP Keys
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
include::shared/authors.adoc[]
include::shared/en/teams.adoc[lines=16..-1]
'''
toc::[]
These OpenPGP keys can be used to verify a signature or send encrypted email to `FreeBSD.org` officers or developers.
The complete keyring can be downloaded at link:https://www.FreeBSD.org/doc/pgpkeyring.txt[https://www.FreeBSD.org/doc/pgpkeyring.txt].
////
Do not edit this file except as instructed by the addkey.sh script.
See the README file in /data/pgpkeys for instructions.
This article contains all the keys. The officer keys are also
shown in the Handbook PGP keys chapter.
////
[[pgpkeys-officers]]
== Officers
=== {security-officer-name} `<{security-officer-email}>`
include::static/pgpkeys/security-officer.key[]
=== {secteam-secretary-name} `<{secteam-secretary-email}>`
include::static/pgpkeys/secteam-secretary.key[]
=== {core-secretary-name} `<{core-secretary-email}>`
include::static/pgpkeys/core-secretary.key[]
=== {portmgr-secretary-name} `<{portmgr-secretary-email}>`
include::static/pgpkeys/portmgr-secretary.key[]
=== `{doceng-secretary-email}`
include::static/pgpkeys/doceng-secretary.key[]
[[pgpkeys-core]]
== Core Team Members
=== `{bapt}`
include::static/pgpkeys/bapt.key[]
=== `{emaste}`
include::static/pgpkeys/emaste.key[]
=== `{gnn}`
include::static/pgpkeys/gnn.key[]
=== `{hrs}`
include::static/pgpkeys/hrs.key[]
=== `{imp}`
include::static/pgpkeys/imp.key[]
=== `{kevans}`
include::static/pgpkeys/kevans.key[]
=== `{markj}`
include::static/pgpkeys/markj.key[]
=== `{scottl}`
include::static/pgpkeys/scottl.key[]
=== `{seanc}`
include::static/pgpkeys/seanc.key[]
[[pgpkeys-developers]]
== Developers
=== `{ariff}`
include::static/pgpkeys/ariff.key[]
=== `{tabthorpe}`
include::static/pgpkeys/tabthorpe.key[]
=== `{eadler}`
include::static/pgpkeys/eadler.key[]
=== `{mahrens}`
include::static/pgpkeys/mahrens.key[]
=== `{shaun}`
include::static/pgpkeys/shaun.key[]
=== `{brix}`
include::static/pgpkeys/brix.key[]
=== `{mandree}`
include::static/pgpkeys/mandree.key[]
=== `{will}`
include::static/pgpkeys/will.key[]
=== `{dim}`
include::static/pgpkeys/dim.key[]
=== `{anholt}`
include::static/pgpkeys/anholt.key[]
=== `{fernape}`
include::static/pgpkeys/fernape.key[]
=== `{mva}`
include::static/pgpkeys/mva.key[]
=== `{araujo}`
include::static/pgpkeys/araujo.key[]
=== `{mat}`
include::static/pgpkeys/mat.key[]
=== `{syuu}`
include::static/pgpkeys/syuu.key[]
=== `{asami}`
include::static/pgpkeys/asami.key[]
=== `{gavin}`
include::static/pgpkeys/gavin.key[]
=== `{jsa}`
include::static/pgpkeys/jsa.key[]
=== `{jadawin}`
include::static/pgpkeys/jadawin.key[]
=== `{jwb}`
include::static/pgpkeys/jwb.key[]
=== `{badger}`
include::static/pgpkeys/badger.key[]
=== `{dbaio}`
include::static/pgpkeys/dbaio.key[]
=== `{timur}`
include::static/pgpkeys/timur.key[]
=== `{jhb}`
include::static/pgpkeys/jhb.key[]
=== `{gjb}`
include::static/pgpkeys/gjb.key[]
=== `{snb}`
include::static/pgpkeys/snb.key[]
=== `{barner}`
include::static/pgpkeys/barner.key[]
=== `{lbartoletti}`
include::static/pgpkeys/lbartoletti.key[]
=== `{jbeich}`
include::static/pgpkeys/jbeich.key[]
=== `{art}`
include::static/pgpkeys/art.key[]
=== `{tobez}`
include::static/pgpkeys/tobez.key[]
=== `{damien}`
include::static/pgpkeys/damien.key[]
=== `{bdragon}`
include::static/pgpkeys/bdragon.key[]
=== `{tcberner}`
include::static/pgpkeys/tcberner.key[]
=== `{tdb}`
include::static/pgpkeys/tdb.key[]
=== `{gblach}`
include::static/pgpkeys/gblach.key[]
=== `{mbr}`
include::static/pgpkeys/mbr.key[]
=== `{wblock}`
include::static/pgpkeys/wblock.key[]
=== `{bvs}`
include::static/pgpkeys/bvs.key[]
=== `{zbb}`
include::static/pgpkeys/zbb.key[]
=== `{novel}`
include::static/pgpkeys/novel.key[]
=== `{garga}`
include::static/pgpkeys/garga.key[]
=== `{kbowling}`
include::static/pgpkeys/kbowling.key[]
=== `{alexbl}`
include::static/pgpkeys/alexbl.key[]
=== `{sbz}`
include::static/pgpkeys/sbz.key[]
=== `{ebrandi}`
include::static/pgpkeys/ebrandi.key[]
=== `{dab}`
include::static/pgpkeys/dab.key[]
=== `{harti}`
include::static/pgpkeys/harti.key[]
=== `{obraun}`
include::static/pgpkeys/obraun.key[]
=== `{makc}`
include::static/pgpkeys/makc.key[]
=== `{jmb}`
include::static/pgpkeys/jmb.key[]
=== `{antoine}`
include::static/pgpkeys/antoine.key[]
=== `{db}`
include::static/pgpkeys/db.key[]
=== `{brueffer}`
include::static/pgpkeys/brueffer.key[]
=== `{markus}`
include::static/pgpkeys/markus.key[]
=== `{sbruno}`
include::static/pgpkeys/sbruno.key[]
=== `{br}`
include::static/pgpkeys/br.key[]
=== `{oleg}`
include::static/pgpkeys/oleg.key[]
=== `{bushman}`
include::static/pgpkeys/bushman.key[]
=== `{adrian}`
include::static/pgpkeys/adrian.key[]
=== `{jch}`
include::static/pgpkeys/jch.key[]
=== `{jchandra}`
include::static/pgpkeys/jchandra.key[]
=== `{jcamou}`
include::static/pgpkeys/jcamou.key[]
=== `{acm}`
include::static/pgpkeys/acm.key[]
=== `{gahr}`
include::static/pgpkeys/gahr.key[]
=== `{dchagin}`
include::static/pgpkeys/dchagin.key[]
=== `{perky}`
include::static/pgpkeys/perky.key[]
=== `{jon}`
include::static/pgpkeys/jon.key[]
=== `{jonathan}`
include::static/pgpkeys/jonathan.key[]
=== `{loader}`
include::static/pgpkeys/loader.key[]
=== `{luoqi}`
include::static/pgpkeys/luoqi.key[]
=== `{ache}`
include::static/pgpkeys/ache.key[]
=== `{melifaro}`
include::static/pgpkeys/melifaro.key[]
=== `{seanc}`
include::static/pgpkeys/seanc.key[]
=== `{cjh}`
include::static/pgpkeys/cjh.key[]
=== `{davidch}`
include::static/pgpkeys/davidch.key[]
=== `{milki}`
include::static/pgpkeys/milki.key[]
=== `{cjc}`
include::static/pgpkeys/cjc.key[]
=== `{marcus}`
include::static/pgpkeys/marcus.key[]
=== `{nik}`
include::static/pgpkeys/nik.key[]
=== `{benjsc}`
include::static/pgpkeys/benjsc.key[]
=== `{lcook}`
include::static/pgpkeys/lcook.key[]
=== `{ngie}`
include::static/pgpkeys/ngie.key[]
=== `{tijl}`
include::static/pgpkeys/tijl.key[]
=== `{rakuco}`
include::static/pgpkeys/rakuco.key[]
=== `{dch}`
include::static/pgpkeys/dch.key[]
=== `{alc}`
include::static/pgpkeys/alc.key[]
=== `{olivier}`
include::static/pgpkeys/olivier.key[]
=== `{jeb}`
include::static/pgpkeys/jeb.key[]
=== `{bcran}`
include::static/pgpkeys/bcran.key[]
=== `{culot}`
include::static/pgpkeys/culot.key[]
=== `{aaron}`
include::static/pgpkeys/aaron.key[]
=== `{alfredo}`
include::static/pgpkeys/alfredo.key[]
=== `{bapt}`
include::static/pgpkeys/bapt.key[]
=== `{ceri}`
include::static/pgpkeys/ceri.key[]
=== `{brd}`
include::static/pgpkeys/brd.key[]
=== `{edavis}`
include::static/pgpkeys/edavis.key[]
=== `{pjd}`
include::static/pgpkeys/pjd.key[]
=== `{alexey}`
include::static/pgpkeys/alexey.key[]
=== `{bsd}`
include::static/pgpkeys/bsd.key[]
=== `{carl}`
include::static/pgpkeys/carl.key[]
=== `{carlavilla}`
include::static/pgpkeys/carlavilla.key[]
=== `{jmd}`
include::static/pgpkeys/jmd.key[]
=== `{vd}`
include::static/pgpkeys/vd.key[]
=== `{rdivacky}`
include::static/pgpkeys/rdivacky.key[]
=== `{danfe}`
include::static/pgpkeys/danfe.key[]
=== `{dd}`
include::static/pgpkeys/dd.key[]
=== `{bdrewery}`
include::static/pgpkeys/bdrewery.key[]
=== `{gad}`
include::static/pgpkeys/gad.key[]
=== `{olivierd}`
include::static/pgpkeys/olivierd.key[]
=== `{bruno}`
include::static/pgpkeys/bruno.key[]
=== `{ale}`
include::static/pgpkeys/ale.key[]
=== `{nemysis}`
include::static/pgpkeys/nemysis.key[]
=== `{peadar}`
include::static/pgpkeys/peadar.key[]
=== `{deischen}`
include::static/pgpkeys/deischen.key[]
=== `{josef}`
include::static/pgpkeys/josef.key[]
=== `{lme}`
include::static/pgpkeys/lme.key[]
=== `{ue}`
include::static/pgpkeys/ue.key[]
=== `{ru}`
include::static/pgpkeys/ru.key[]
=== `{le}`
include::static/pgpkeys/le.key[]
=== `{se}`
include::static/pgpkeys/se.key[]
=== `{kevans}`
include::static/pgpkeys/kevans.key[]
=== `{bf}`
include::static/pgpkeys/bf.key[]
=== `{sef}`
include::static/pgpkeys/sef.key[]
=== `{madpilot}`
include::static/pgpkeys/madpilot.key[]
=== `{rafan}`
include::static/pgpkeys/rafan.key[]
=== `{kami}`
include::static/pgpkeys/kami.key[]
=== `{stefanf}`
include::static/pgpkeys/stefanf.key[]
=== `{farrokhi}`
include::static/pgpkeys/farrokhi.key[]
=== `{jedgar}`
include::static/pgpkeys/jedgar.key[]
=== `{mfechner}`
include::static/pgpkeys/mfechner.key[]
=== `{feld}`
include::static/pgpkeys/feld.key[]
=== `{green}`
include::static/pgpkeys/green.key[]
=== `{lioux}`
include::static/pgpkeys/lioux.key[]
=== `{mdf}`
include::static/pgpkeys/mdf.key[]
=== `{fanf}`
include::static/pgpkeys/fanf.key[]
=== `{blackend}`
include::static/pgpkeys/blackend.key[]
=== `{petef}`
include::static/pgpkeys/petef.key[]
=== `{decke}`
include::static/pgpkeys/decke.key[]
=== `{landonf}`
include::static/pgpkeys/landonf.key[]
=== `{billf}`
include::static/pgpkeys/billf.key[]
=== `{sg}`
include::static/pgpkeys/sg.key[]
=== `{sgalabov}`
include::static/pgpkeys/sgalabov.key[]
=== `{ultima}`
include::static/pgpkeys/ultima.key[]
=== `{avg}`
include::static/pgpkeys/avg.key[]
=== `{beat}`
include::static/pgpkeys/beat.key[]
=== `{danger}`
include::static/pgpkeys/danger.key[]
=== `{sjg}`
include::static/pgpkeys/sjg.key[]
=== `{gibbs}`
include::static/pgpkeys/gibbs.key[]
=== `{pfg}`
include::static/pgpkeys/pfg.key[]
=== `{girgen}`
include::static/pgpkeys/girgen.key[]
=== `{eugen}`
include::static/pgpkeys/eugen.key[]
=== `{pgollucci}`
include::static/pgpkeys/pgollucci.key[]
=== `{trociny}`
include::static/pgpkeys/trociny.key[]
=== `{danilo}`
include::static/pgpkeys/danilo.key[]
=== `{dmgk}`
include::static/pgpkeys/dmgk.key[]
=== `{daichi}`
include::static/pgpkeys/daichi.key[]
=== `{mnag}`
include::static/pgpkeys/mnag.key[]
=== `{grehan}`
include::static/pgpkeys/grehan.key[]
=== `{jamie}`
include::static/pgpkeys/jamie.key[]
=== `{adridg}`
include::static/pgpkeys/adridg.key[]
=== `{edwin}`
include::static/pgpkeys/edwin.key[]
=== `{wg}`
include::static/pgpkeys/wg.key[]
=== `{bar}`
include::static/pgpkeys/bar.key[]
=== `{anish}`
include::static/pgpkeys/anish.key[]
=== `{jmg}`
include::static/pgpkeys/jmg.key[]
=== `{mjg}`
include::static/pgpkeys/mjg.key[]
=== `{jhale}`
include::static/pgpkeys/jhale.key[]
=== `{jah}`
include::static/pgpkeys/jah.key[]
=== `{dannyboy}`
include::static/pgpkeys/dannyboy.key[]
=== `{dhartmei}`
include::static/pgpkeys/dhartmei.key[]
=== `{ohauer}`
include::static/pgpkeys/ohauer.key[]
=== `{ehaupt}`
include::static/pgpkeys/ehaupt.key[]
=== `{jhay}`
include::static/pgpkeys/jhay.key[]
=== `{bhd}`
include::static/pgpkeys/bhd.key[]
=== `{sheldonh}`
include::static/pgpkeys/sheldonh.key[]
=== `{mikeh}`
include::static/pgpkeys/mikeh.key[]
=== `{mheinen}`
include::static/pgpkeys/mheinen.key[]
=== `{niels}`
include::static/pgpkeys/niels.key[]
=== `{jh}`
include::static/pgpkeys/jh.key[]
=== `{jgh}`
include::static/pgpkeys/jgh.key[]
=== `{ghelmer}`
include::static/pgpkeys/ghelmer.key[]
=== `{mux}`
include::static/pgpkeys/mux.key[]
=== `{wen}`
include::static/pgpkeys/wen.key[]
=== `{dhn}`
include::static/pgpkeys/dhn.key[]
=== `{jhibbits}`
include::static/pgpkeys/jhibbits.key[]
=== `{jhixson}`
include::static/pgpkeys/jhixson.key[]
=== `{pho}`
include::static/pgpkeys/pho.key[]
=== `{oh}`
include::static/pgpkeys/oh.key[]
=== `{mhorne}`
include::static/pgpkeys/mhorne.key[]
=== `{bhughes}`
include::static/pgpkeys/bhughes.key[]
=== `{mich}`
include::static/pgpkeys/mich.key[]
=== `{sunpoet}`
include::static/pgpkeys/sunpoet.key[]
=== `{lwhsu}`
include::static/pgpkeys/lwhsu.key[]
=== `{foxfair}`
include::static/pgpkeys/foxfair.key[]
=== `{whu}`
include::static/pgpkeys/whu.key[]
=== `{chinsan}`
include::static/pgpkeys/chinsan.key[]
=== `{shurd}`
include::static/pgpkeys/shurd.key[]
=== `{kibab}`
include::static/pgpkeys/kibab.key[]
=== `{davide}`
include::static/pgpkeys/davide.key[]
=== `{jkh}`
include::static/pgpkeys/jkh.key[]
=== `{sevan}`
include::static/pgpkeys/sevan.key[]
=== `{versus}`
include::static/pgpkeys/versus.key[]
=== `{pi}`
include::static/pgpkeys/pi.key[]
=== `{weongyo}`
include::static/pgpkeys/weongyo.key[]
=== `{peterj}`
include::static/pgpkeys/peterj.key[]
=== `{jinmei}`
include::static/pgpkeys/jinmei.key[]
=== `{ahze}`
include::static/pgpkeys/ahze.key[]
=== `{markj}`
include::static/pgpkeys/markj.key[]
=== `{trevor}`
include::static/pgpkeys/trevor.key[]
=== `{thj}`
include::static/pgpkeys/thj.key[]
=== `{mjoras}`
include::static/pgpkeys/mjoras.key[]
=== `{erj}`
include::static/pgpkeys/erj.key[]
=== `{allanjude}`
include::static/pgpkeys/allanjude.key[]
=== `{tj}`
include::static/pgpkeys/tj.key[]
=== `{kan}`
include::static/pgpkeys/kan.key[]
=== `{bjk}`
include::static/pgpkeys/bjk.key[]
=== `{phk}`
include::static/pgpkeys/phk.key[]
=== `{pluknet}`
include::static/pgpkeys/pluknet.key[]
=== `{cokane}`
include::static/pgpkeys/cokane.key[]
=== `{karels}`
include::static/pgpkeys/karels.key[]
=== `{kato}`
include::static/pgpkeys/kato.key[]
=== `{joe}`
include::static/pgpkeys/joe.key[]
=== `{vkashyap}`
include::static/pgpkeys/vkashyap.key[]
=== `{pkelsey}`
include::static/pgpkeys/pkelsey.key[]
=== `{pkubaj}`
include::static/pgpkeys/pkubaj.key[]
=== `{kris}`
include::static/pgpkeys/kris.key[]
=== `{keramida}`
include::static/pgpkeys/keramida.key[]
=== `{fjoe}`
include::static/pgpkeys/fjoe.key[]
=== `{manolis}`
include::static/pgpkeys/manolis.key[]
=== `{stevek}`
include::static/pgpkeys/stevek.key[]
=== `{jkim}`
include::static/pgpkeys/jkim.key[]
=== `{zack}`
include::static/pgpkeys/zack.key[]
=== `{jceel}`
include::static/pgpkeys/jceel.key[]
=== `{andreas}`
include::static/pgpkeys/andreas.key[]
=== `{kai}`
include::static/pgpkeys/kai.key[]
=== `{jkois}`
include::static/pgpkeys/jkois.key[]
=== `{sergei}`
include::static/pgpkeys/sergei.key[]
=== `{wulf}`
include::static/pgpkeys/wulf.key[]
=== `{maxim}`
include::static/pgpkeys/maxim.key[]
=== `{taras}`
include::static/pgpkeys/taras.key[]
=== `{tobik}`
include::static/pgpkeys/tobik.key[]
=== `{jkoshy}`
include::static/pgpkeys/jkoshy.key[]
=== `{wkoszek}`
include::static/pgpkeys/wkoszek.key[]
=== `{ak}`
include::static/pgpkeys/ak.key[]
=== `{skra}`
include::static/pgpkeys/skra.key[]
=== `{skreuzer}`
include::static/pgpkeys/skreuzer.key[]
=== `{gabor}`
include::static/pgpkeys/gabor.key[]
=== `{anchie}`
include::static/pgpkeys/anchie.key[]
=== `{rik}`
include::static/pgpkeys/rik.key[]
=== `{rushani}`
include::static/pgpkeys/rushani.key[]
=== `{kuriyama}`
include::static/pgpkeys/kuriyama.key[]
=== `{gleb}`
include::static/pgpkeys/gleb.key[]
=== `{rene}`
include::static/pgpkeys/rene.key[]
=== `{jlaffaye}`
include::static/pgpkeys/jlaffaye.key[]
=== `{clement}`
include::static/pgpkeys/clement.key[]
=== `{mlaier}`
include::static/pgpkeys/mlaier.key[]
=== `{dvl}`
include::static/pgpkeys/dvl.key[]
=== `{erwin}`
include::static/pgpkeys/erwin.key[]
=== `{martymac}`
include::static/pgpkeys/martymac.key[]
=== `{glarkin}`
include::static/pgpkeys/glarkin.key[]
=== `{laszlof}`
include::static/pgpkeys/laszlof.key[]
=== `{dru}`
include::static/pgpkeys/dru.key[]
=== `{lawrance}`
include::static/pgpkeys/lawrance.key[]
=== `{njl}`
include::static/pgpkeys/njl.key[]
=== `{jlh}`
include::static/pgpkeys/jlh.key[]
=== `{leeym}`
include::static/pgpkeys/leeym.key[]
=== `{sam}`
include::static/pgpkeys/sam.key[]
=== `{jylefort}`
include::static/pgpkeys/jylefort.key[]
=== `{grog}`
include::static/pgpkeys/grog.key[]
=== `{oliver}`
include::static/pgpkeys/oliver.key[]
=== `{netchild}`
include::static/pgpkeys/netchild.key[]
=== `{leitao}`
include::static/pgpkeys/leitao.key[]
=== `{ae}`
include::static/pgpkeys/ae.key[]
=== `{lesi}`
include::static/pgpkeys/lesi.key[]
=== `{achim}`
include::static/pgpkeys/achim.key[]
=== `{cel}`
include::static/pgpkeys/cel.key[]
=== `{truckman}`
include::static/pgpkeys/truckman.key[]
=== `{glewis}`
include::static/pgpkeys/glewis.key[]
=== `{qingli}`
include::static/pgpkeys/qingli.key[]
=== `{delphij}`
include::static/pgpkeys/delphij.key[]
=== `{avatar}`
include::static/pgpkeys/avatar.key[]
=== `{ijliao}`
include::static/pgpkeys/ijliao.key[]
=== `{rlibby}`
include::static/pgpkeys/rlibby.key[]
=== `{lidl}`
include::static/pgpkeys/lidl.key[]
=== `{lifanov}`
include::static/pgpkeys/lifanov.key[]
=== `{lulf}`
include::static/pgpkeys/lulf.key[]
=== `{clive}`
include::static/pgpkeys/clive.key[]
=== `{pclin}`
include::static/pgpkeys/pclin.key[]
=== `{yzlin}`
include::static/pgpkeys/yzlin.key[]
=== `{linimon}`
include::static/pgpkeys/linimon.key[]
=== `{arved}`
include::static/pgpkeys/arved.key[]
=== `{dryice}`
include::static/pgpkeys/dryice.key[]
=== `{nemoliu}`
include::static/pgpkeys/nemoliu.key[]
=== `{kevlo}`
include::static/pgpkeys/kevlo.key[]
=== `{zml}`
include::static/pgpkeys/zml.key[]
=== `{nox}`
include::static/pgpkeys/nox.key[]
=== `{remko}`
include::static/pgpkeys/remko.key[]
=== `{avl}`
include::static/pgpkeys/avl.key[]
=== `{issyl0}`
include::static/pgpkeys/issyl0.key[]
=== `{scottl}`
include::static/pgpkeys/scottl.key[]
=== `{jtl}`
include::static/pgpkeys/jtl.key[]
=== `{luporl}`
include::static/pgpkeys/luporl.key[]
=== `{wma}`
include::static/pgpkeys/wma.key[]
=== `{rmacklem}`
include::static/pgpkeys/rmacklem.key[]
=== `{vmaffione}`
include::static/pgpkeys/vmaffione.key[]
=== `{bmah}`
include::static/pgpkeys/bmah.key[]
=== `{rm}`
include::static/pgpkeys/rm.key[]
=== `{mtm}`
include::static/pgpkeys/mtm.key[]
=== `{dwmalone}`
include::static/pgpkeys/dwmalone.key[]
=== `{amdmi3}`
include::static/pgpkeys/amdmi3.key[]
=== `{marino}`
include::static/pgpkeys/marino.key[]
=== `{kwm}`
include::static/pgpkeys/kwm.key[]
=== `{emaste}`
include::static/pgpkeys/emaste.key[]
=== `{cherry}`
include::static/pgpkeys/cherry.key[]
=== `{matusita}`
include::static/pgpkeys/matusita.key[]
=== `{mm}`
include::static/pgpkeys/mm.key[]
=== `{sem}`
include::static/pgpkeys/sem.key[]
=== `{slm}`
include::static/pgpkeys/slm.key[]
=== `{mckay}`
include::static/pgpkeys/mckay.key[]
=== `{mckusick}`
include::static/pgpkeys/mckusick.key[]
=== `{tmclaugh}`
include::static/pgpkeys/tmclaugh.key[]
=== `{jmcneill}`
include::static/pgpkeys/jmcneill.key[]
=== `{xmj}`
include::static/pgpkeys/xmj.key[]
=== `{jmelo}`
include::static/pgpkeys/jmelo.key[]
=== `{mmel}`
include::static/pgpkeys/mmel.key[]
=== `{jmmv}`
include::static/pgpkeys/jmmv.key[]
=== `{kadesai}`
include::static/pgpkeys/kadesai.key[]
=== `{ken}`
include::static/pgpkeys/ken.key[]
=== `{markm}`
include::static/pgpkeys/markm.key[]
=== `{dinoex}`
include::static/pgpkeys/dinoex.key[]
=== `{sanpei}`
include::static/pgpkeys/sanpei.key[]
=== `{rmh}`
include::static/pgpkeys/rmh.key[]
=== `{jrm}`
include::static/pgpkeys/jrm.key[]
=== `{freqlabs}`
include::static/pgpkeys/freqlabs.key[]
=== `{mmokhi}`
include::static/pgpkeys/mmokhi.key[]
=== `{mmoll}`
include::static/pgpkeys/mmoll.key[]
=== `{cmt}`
include::static/pgpkeys/cmt.key[]
=== `{stephen}`
include::static/pgpkeys/stephen.key[]
=== `{marcel}`
include::static/pgpkeys/marcel.key[]
=== `{dougm}`
include::static/pgpkeys/dougm.key[]
=== `{kmoore}`
include::static/pgpkeys/kmoore.key[]
=== `{marck}`
include::static/pgpkeys/marck.key[]
=== `{mav}`
include::static/pgpkeys/mav.key[]
=== `{lippe}`
include::static/pgpkeys/lippe.key[]
=== `{rich}`
include::static/pgpkeys/rich.key[]
=== `{knu}`
include::static/pgpkeys/knu.key[]
=== `{tmm}`
include::static/pgpkeys/tmm.key[]
=== `{jsm}`
include::static/pgpkeys/jsm.key[]
=== `{max}`
include::static/pgpkeys/max.key[]
=== `{maho}`
include::static/pgpkeys/maho.key[]
=== `{yoichi}`
include::static/pgpkeys/yoichi.key[]
=== `{trasz}`
include::static/pgpkeys/trasz.key[]
=== `{neel}`
include::static/pgpkeys/neel.key[]
=== `{dbn}`
include::static/pgpkeys/dbn.key[]
=== `{bland}`
include::static/pgpkeys/bland.key[]
=== `{joneum}`
include::static/pgpkeys/joneum.key[]
=== `{gnn}`
include::static/pgpkeys/gnn.key[]
=== `{khng}`
include::static/pgpkeys/khng.key[]
=== `{simon}`
include::static/pgpkeys/simon.key[]
=== `{rnoland}`
include::static/pgpkeys/rnoland.key[]
=== `{anders}`
include::static/pgpkeys/anders.key[]
=== `{lofi}`
include::static/pgpkeys/lofi.key[]
=== `{obrien}`
include::static/pgpkeys/obrien.key[]
=== `{olgeni}`
include::static/pgpkeys/olgeni.key[]
=== `{phil}`
include::static/pgpkeys/phil.key[]
=== `{philip}`
include::static/pgpkeys/philip.key[]
=== `{jpaetzel}`
include::static/pgpkeys/jpaetzel.key[]
=== `{pgj}`
include::static/pgpkeys/pgj.key[]
=== `{hiren}`
include::static/pgpkeys/hiren.key[]
=== `{hmp}`
include::static/pgpkeys/hmp.key[]
=== `{yuripv}`
include::static/pgpkeys/yuripv.key[]
=== `{fluffy}`
include::static/pgpkeys/fluffy.key[]
=== `{sat}`
include::static/pgpkeys/sat.key[]
=== `{np}`
include::static/pgpkeys/np.key[]
=== `{royger}`
include::static/pgpkeys/royger.key[]
=== `{rpaulo}`
include::static/pgpkeys/rpaulo.key[]
=== `{misha}`
include::static/pgpkeys/misha.key[]
=== `{dumbbell}`
include::static/pgpkeys/dumbbell.key[]
=== `{mp}`
include::static/pgpkeys/mp.key[]
=== `{roam}`
include::static/pgpkeys/roam.key[]
=== `{den}`
include::static/pgpkeys/den.key[]
=== `{csjp}`
include::static/pgpkeys/csjp.key[]
=== `{gerald}`
include::static/pgpkeys/gerald.key[]
=== `{scottph}`
include::static/pgpkeys/scottph.key[]
=== `{jacula}`
include::static/pgpkeys/jacula.key[]
=== `{0mp}`
include::static/pgpkeys/0mp.key[]
=== `{pizzamig}`
include::static/pgpkeys/pizzamig.key[]
=== `{rpokala}`
include::static/pgpkeys/rpokala.key[]
=== `{jdp}`
include::static/pgpkeys/jdp.key[]
=== `{krion}`
include::static/pgpkeys/krion.key[]
=== `{sepotvin}`
include::static/pgpkeys/sepotvin.key[]
=== `{cpm}`
include::static/pgpkeys/cpm.key[]
=== `{markp}`
include::static/pgpkeys/markp.key[]
=== `{alepulver}`
include::static/pgpkeys/alepulver.key[]
=== `{kp}`
include::static/pgpkeys/kp.key[]
=== `{thomas}`
include::static/pgpkeys/thomas.key[]
=== `{hq}`
include::static/pgpkeys/hq.key[]
=== `{dfr}`
include::static/pgpkeys/dfr.key[]
=== `{bofh}`
include::static/pgpkeys/bofh.key[]
=== `{fox}`
include::static/pgpkeys/fox.key[]
=== `{lbr}`
include::static/pgpkeys/lbr.key[]
=== `{crees}`
include::static/pgpkeys/crees.key[]
=== `{rees}`
include::static/pgpkeys/rees.key[]
=== `{mr}`
include::static/pgpkeys/mr.key[]
=== `{bcr}`
include::static/pgpkeys/bcr.key[]
=== `{rezny}`
include::static/pgpkeys/rezny.key[]
=== `{trhodes}`
include::static/pgpkeys/trhodes.key[]
=== `{benno}`
include::static/pgpkeys/benno.key[]
=== `{arichardson}`
include::static/pgpkeys/arichardson.key[]
=== `{beech}`
include::static/pgpkeys/beech.key[]
=== `{matteo}`
include::static/pgpkeys/matteo.key[]
=== `{roberto}`
include::static/pgpkeys/roberto.key[]
=== `{rodrigc}`
include::static/pgpkeys/rodrigc.key[]
=== `{ler}`
include::static/pgpkeys/ler.key[]
=== `{leres}`
include::static/pgpkeys/leres.key[]
=== `{robak}`
include::static/pgpkeys/robak.key[]
=== `{guido}`
include::static/pgpkeys/guido.key[]
=== `{rea}`
include::static/pgpkeys/rea.key[]
=== `{ray}`
include::static/pgpkeys/ray.key[]
=== `{arybchik}`
include::static/pgpkeys/arybchik.key[]
=== `{niklas}`
include::static/pgpkeys/niklas.key[]
=== `{salvadore}`
include::static/pgpkeys/salvadore.key[]
=== `{bsam}`
include::static/pgpkeys/bsam.key[]
=== `{marks}`
include::static/pgpkeys/marks.key[]
=== `{alonso}`
include::static/pgpkeys/alonso.key[]
=== `{bschmidt}`
include::static/pgpkeys/bschmidt.key[]
=== `{wosch}`
include::static/pgpkeys/wosch.key[]
=== `{ed}`
include::static/pgpkeys/ed.key[]
=== `{cy}`
include::static/pgpkeys/cy.key[]
=== `{das}`
include::static/pgpkeys/das.key[]
=== `{scheidell}`
include::static/pgpkeys/scheidell.key[]
=== `{schweikh}`
include::static/pgpkeys/schweikh.key[]
=== `{matthew}`
include::static/pgpkeys/matthew.key[]
=== `{tmseck}`
include::static/pgpkeys/tmseck.key[]
=== `{stas}`
include::static/pgpkeys/stas.key[]
=== `{johalun}`
include::static/pgpkeys/johalun.key[]
=== `{johans}`
include::static/pgpkeys/johans.key[]
=== `{lev}`
include::static/pgpkeys/lev.key[]
=== `{bakul}`
include::static/pgpkeys/bakul.key[]
=== `{gshapiro}`
include::static/pgpkeys/gshapiro.key[]
=== `{arun}`
include::static/pgpkeys/arun.key[]
=== `{wxs}`
include::static/pgpkeys/wxs.key[]
=== `{nork}`
include::static/pgpkeys/nork.key[]
=== `{syrinx}`
include::static/pgpkeys/syrinx.key[]
=== `{vanilla}`
include::static/pgpkeys/vanilla.key[]
=== `{ashish}`
include::static/pgpkeys/ashish.key[]
=== `{chs}`
include::static/pgpkeys/chs.key[]
=== `{bms}`
include::static/pgpkeys/bms.key[]
=== `{demon}`
include::static/pgpkeys/demon.key[]
=== `{jesper}`
include::static/pgpkeys/jesper.key[]
=== `{scop}`
include::static/pgpkeys/scop.key[]
=== `{anray}`
include::static/pgpkeys/anray.key[]
=== `{flo}`
include::static/pgpkeys/flo.key[]
=== `{glebius}`
include::static/pgpkeys/glebius.key[]
=== `{kensmith}`
include::static/pgpkeys/kensmith.key[]
=== `{ben}`
include::static/pgpkeys/ben.key[]
=== `{des}`
include::static/pgpkeys/des.key[]
=== `{sobomax}`
include::static/pgpkeys/sobomax.key[]
=== `{asomers}`
include::static/pgpkeys/asomers.key[]
=== `{brian}`
include::static/pgpkeys/brian.key[]
=== `{sson}`
include::static/pgpkeys/sson.key[]
=== `{nsouch}`
include::static/pgpkeys/nsouch.key[]
=== `{ssouhlal}`
include::static/pgpkeys/ssouhlal.key[]
=== `{tsoome}`
include::static/pgpkeys/tsoome.key[]
=== `{loos}`
include::static/pgpkeys/loos.key[]
=== `{brnrd}`
include::static/pgpkeys/brnrd.key[]
=== `{uqs}`
include::static/pgpkeys/uqs.key[]
=== `{rink}`
include::static/pgpkeys/rink.key[]
=== `{vsevolod}`
include::static/pgpkeys/vsevolod.key[]
=== `{pstef}`
include::static/pgpkeys/pstef.key[]
=== `{zi}`
include::static/pgpkeys/zi.key[]
=== `{lstewart}`
include::static/pgpkeys/lstewart.key[]
=== `{rrs}`
include::static/pgpkeys/rrs.key[]
=== `{murray}`
include::static/pgpkeys/murray.key[]
=== `{vs}`
include::static/pgpkeys/vs.key[]
=== `{rstone}`
include::static/pgpkeys/rstone.key[]
=== `{xride}`
include::static/pgpkeys/xride.key[]
=== `{marius}`
include::static/pgpkeys/marius.key[]
=== `{cs}`
include::static/pgpkeys/cs.key[]
=== `{clsung}`
include::static/pgpkeys/clsung.key[]
=== `{gsutter}`
include::static/pgpkeys/gsutter.key[]
=== `{metal}`
include::static/pgpkeys/metal.key[]
=== `{ryusuke}`
include::static/pgpkeys/ryusuke.key[]
=== `{garys}`
include::static/pgpkeys/garys.key[]
=== `{nyan}`
include::static/pgpkeys/nyan.key[]
=== `{sahil}`
include::static/pgpkeys/sahil.key[]
=== `{tota}`
include::static/pgpkeys/tota.key[]
=== `{romain}`
include::static/pgpkeys/romain.key[]
=== `{sylvio}`
include::static/pgpkeys/sylvio.key[]
=== `{dteske}`
include::static/pgpkeys/dteske.key[]
=== `{itetcu}`
include::static/pgpkeys/itetcu.key[]
=== `{mi}`
include::static/pgpkeys/mi.key[]
=== `{gordon}`
include::static/pgpkeys/gordon.key[]
=== `{lth}`
include::static/pgpkeys/lth.key[]
=== `{jase}`
include::static/pgpkeys/jase.key[]
=== `{lx}`
include::static/pgpkeys/lx.key[]
=== `{fabient}`
include::static/pgpkeys/fabient.key[]
=== `{thierry}`
include::static/pgpkeys/thierry.key[]
=== `{thompsa}`
include::static/pgpkeys/thompsa.key[]
=== `{flz}`
include::static/pgpkeys/flz.key[]
=== `{jilles}`
include::static/pgpkeys/jilles.key[]
=== `{ganbold}`
include::static/pgpkeys/ganbold.key[]
=== `{tuexen}`
include::static/pgpkeys/tuexen.key[]
=== `{andrew}`
include::static/pgpkeys/andrew.key[]
=== `{gonzo}`
include::static/pgpkeys/gonzo.key[]
=== `{ume}`
include::static/pgpkeys/ume.key[]
=== `{junovitch}`
include::static/pgpkeys/junovitch.key[]
=== `{ups}`
include::static/pgpkeys/ups.key[]
=== `{fsu}`
include::static/pgpkeys/fsu.key[]
=== `{mikael}`
include::static/pgpkeys/mikael.key[]
=== `{ivadasz}`
include::static/pgpkeys/ivadasz.key[]
=== `{manu}`
include::static/pgpkeys/manu.key[]
=== `{vangyzen}`
include::static/pgpkeys/vangyzen.key[]
=== `{ram}`
include::static/pgpkeys/ram.key[]
=== `{bryanv}`
include::static/pgpkeys/bryanv.key[]
=== `{nectar}`
include::static/pgpkeys/nectar.key[]
=== `{avilla}`
include::static/pgpkeys/avilla.key[]
=== `{nivit}`
include::static/pgpkeys/nivit.key[]
=== `{ivoras}`
include::static/pgpkeys/ivoras.key[]
=== `{avos}`
include::static/pgpkeys/avos.key[]
=== `{stefan}`
include::static/pgpkeys/stefan.key[]
=== `{kaiw}`
include::static/pgpkeys/kaiw.key[]
=== `{adamw}`
include::static/pgpkeys/adamw.key[]
=== `{naddy}`
include::static/pgpkeys/naddy.key[]
=== `{peter}`
include::static/pgpkeys/peter.key[]
=== `{nwhitehorn}`
include::static/pgpkeys/nwhitehorn.key[]
=== `{miwi}`
include::static/pgpkeys/miwi.key[]
=== `{nate}`
include::static/pgpkeys/nate.key[]
=== `{swills}`
include::static/pgpkeys/swills.key[]
=== `{twinterg}`
include::static/pgpkeys/twinterg.key[]
=== `{def}`
include::static/pgpkeys/def.key[]
=== `{mw}`
include::static/pgpkeys/mw.key[]
=== `{wollman}`
include::static/pgpkeys/wollman.key[]
=== `{woodsb02}`
include::static/pgpkeys/woodsb02.key[]
=== `{joerg}`
include::static/pgpkeys/joerg.key[]
=== `{davidxu}`
include::static/pgpkeys/davidxu.key[]
=== `{ygy}`
include::static/pgpkeys/ygy.key[]
=== `{emax}`
include::static/pgpkeys/emax.key[]
=== `{yongari}`
include::static/pgpkeys/yongari.key[]
=== `{rcyu}`
include::static/pgpkeys/rcyu.key[]
=== `{oshogbo}`
include::static/pgpkeys/oshogbo.key[]
=== `{riggs}`
include::static/pgpkeys/riggs.key[]
=== `{egypcio}`
include::static/pgpkeys/egypcio.key[]
=== `{bz}`
include::static/pgpkeys/bz.key[]
=== `{zeising}`
include::static/pgpkeys/zeising.key[]
=== `{phantom}`
include::static/pgpkeys/phantom.key[]
=== `{sephe}`
include::static/pgpkeys/sephe.key[]
=== `{mizhka}`
include::static/pgpkeys/mizhka.key[]
=== `{zont}`
include::static/pgpkeys/zont.key[]
=== `{tz}`
include::static/pgpkeys/tz.key[]
=== `{yuri}`
include::static/pgpkeys/yuri.key[]
=== `{slavash}`
include::static/pgpkeys/slavash.key[]
=== `{arrowd}`
include::static/pgpkeys/arrowd.key[]
=== `{rigoletto}`
include::static/pgpkeys/rigoletto.key[]
=== `{kaktus}`
include::static/pgpkeys/kaktus.key[]
=== `{samm}`
include::static/pgpkeys/samm.key[]
[[pgpkeys-other]]
== Other Cluster Account Holders
=== `{arundel}`
include::static/pgpkeys/arundel.key[]
=== `{bhaga}`
include::static/pgpkeys/bhaga.key[]
=== `{bk}`
include::static/pgpkeys/bk.key[]
=== `{deb}`
include::static/pgpkeys/deb.key[]
=== `{debdrup}`
include::static/pgpkeys/debdrup.key[]
=== `{dutchdaemon}`
include::static/pgpkeys/dutchdaemon.key[]
=== `{keymaster}`
include::static/pgpkeys/keymaster.key[]
=== `{plosher}`
include::static/pgpkeys/plosher.key[]
=== `{mwlucas}`
include::static/pgpkeys/mwlucas.key[]
=== `{dhw}`
include::static/pgpkeys/dhw.key[]
=== `{eduardo}`
include::static/pgpkeys/eduardo.key[]
diff --git a/documentation/content/en/articles/port-mentor-guidelines/_index.adoc b/documentation/content/en/articles/port-mentor-guidelines/_index.adoc
index 1d9d5f4cbe..a8a50b05bc 100644
--- a/documentation/content/en/articles/port-mentor-guidelines/_index.adoc
+++ b/documentation/content/en/articles/port-mentor-guidelines/_index.adoc
@@ -1,121 +1,121 @@
---
title: Port Mentor Guidelines
organizations:
- organization: The FreeBSD Ports Management Team
copyright: 2011 Thomas Abthorpe, Chris Rees
-releaseinfo: "$FreeBSD$"
+description: Port Mentor Guidelines for FreeBSD Mentors
---
= Port Mentor Guidelines
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
include::shared/en/urls.adoc[]
'''
toc::[]
[[port-mentor.guidelines]]
== Guideline for Mentor/Mentee Relationships
This section is intended to help demystify the mentoring process and to openly promote a constructive discussion to adapt and grow the guidelines.
In our lives we have too many rules; we are not a government organization that inflicts regulation, but rather a collective of like-minded individuals working toward a common goal, maintaining the quality assurance of the product we call the Ports Tree.
[[why.mentor]]
=== Why Mentor?
* For most of us, we were mentored into the Project, so return the favor by offering to mentor somebody else in.
* You have an irresistible urge to inflict knowledge on others.
* The usual punishment applies because you are sick and tired of committing somebody else's good work!
[[mentor.comentor]]
=== Mentor/Co-Mentor
Reasons for a co-mentorship:
* Significant timezone differential. Having accessible, interactive mentor(s) available via IM is extremely helpful!
* Potential language barrier. Yes, FreeBSD is very English oriented, as is most software development; however, having a mentor who can speak the mentee's native language can be very useful.
* ENOTIME! Until there is a 30-hour day and an 8-day week, some of us only have so much time to give. Sharing the load with somebody else will make it easier.
* A rookie mentor can benefit from the experience of a senior committer/mentor.
* Two heads are better than one.
Reasons for sole mentorship:
* You do not play nicely with others.
* You prefer to have a one-on-one relationship.
* The reasons for co-mentorship do not apply to you.
[[mentor.expectations]]
=== Expectations
We expect mentors to review and test-build all proposed patches, at least for an initial period lasting more than a week or two.
We expect mentors to take responsibility for the actions of their mentee.
A mentor should follow up with all commits the mentee makes, both approved and implicit.
We expect mentors to make sure their mentees read the link:{porters-handbook}[Porter's Handbook], the link:{pr-guidelines}[PR handling guide], and the link:{committers-guide}[Committer's Guide].
While it is not necessary to memorize all the details, every committer needs to have an overview of these things to be an effective part of the community (and avoid as many rookie mistakes as possible).
[[mentees]]
=== Selecting a Mentee
There is no defined rule for what makes a candidate ready; it can be a combination of number of PRs they have submitted, the number of ports maintained, frequency of ports updates and/or level of participation in a particular area of interest like GNOME, KDE, Gecko or others.
A candidate should have almost no timeouts, be responsive to requests, and be generally helpful in supporting their ports.
There must be a history of commitment, as it is widely understood that training a committer requires time and effort.
If somebody has been around longer, and spent the time observing how things are done, there is some anticipation of accumulated knowledge.
All too often we have seen a maintainer submit a few PRs, show up in IRC and ask when they will be given a commit bit.
Being subscribed to, and following the mailing lists is very beneficial.
There is no real expectation that submitting posts on the lists will make somebody a committer, but it demonstrates a commitment.
Some mails offer insights into the knowledge of a candidate as well how they interact with others.
Similarly, participating in IRC can give somebody a higher profile.
Ask six different committers how many PRs a maintainer should submit prior to being nominated, and you will get six different answers.
Ask those same individuals how long somebody should have been participating, same dilemma.
How many ports should they have at a minimum? Now we have a bikeshed! Some things are just hard to quantify; a mentor will just have to use their best judgement and hope that portmgr agrees.
[[mentorship.duration]]
=== Mentorship Duration
As the trust level develops and grows, the mentee may be granted "implicit" commit rights.
This can include trivial changes to a [.filename]#Makefile#, [.filename]#pkg-descr# etc.
Similarly, it may include `PORTVERSION` updates that do not include `plist` changes.
Other circumstances may be formulated at the discretion of the Mentor.
However, during the period of mentorship, a port version bump that affects dependent ports should be checked by a mentor.
Just as we are all varied individuals, each mentee has different learning curves, time commitments, and other influencing factors that will contribute to the time required before they can "fly solo".
Empirically, a mentee should be observed for at least 3 months.
90-100 commits is another target that a mentor could use before releasing a mentee.
Other factors to consider prior to releasing a mentee are the number of mistakes they may have made, QATs received, etc.
If they are still making rookie mistakes, they still require mentor guidance.
[[mentor.comentor.debate]]
=== Mentor/Co-Mentor Debate
When a request gets to portmgr, it usually reads as, "I propose 'foo' for a ports commit bit, I will co-mentor with 'bar'".
Proposal received, voted, and carried.
The mentor is the primary point of contact or the "first among equals", the co-mentor is the backup.
Some reprobate, whose name shall be withheld, made the https://lists.freebsd.org/pipermail/cvs-ports/2007-September/134614.html[first recorded co-mentor commit].
Similar co-mentor commits have also been spotted in the src tree.
Does this make it right? Does this make it wrong? It seems to be part of the evolution of how things are done.
[[mentee.expectations]]
=== Expectations
We expect mentees to be prepared for constructive criticism from the community.
There's still a lot of "lore" that is not written down.
Responding well to constructive criticism is what we hope we are selecting for by first reviewing their existing contributions on IRC and mailing lists.
We warn mentees that some of the criticism they receive may be less "constructive" than others (whether through language or communication problems, or excessive nit-picking), and that dealing with this gracefully is just part of being in a large community.
In case of specific problems with specific people, or any questions, we hope that they will approach a portmgr member on IRC or by email.
diff --git a/documentation/content/en/articles/pr-guidelines/_index.adoc b/documentation/content/en/articles/pr-guidelines/_index.adoc
index deea48608d..50a0159802 100644
--- a/documentation/content/en/articles/pr-guidelines/_index.adoc
+++ b/documentation/content/en/articles/pr-guidelines/_index.adoc
@@ -1,496 +1,496 @@
---
title: Problem Report Handling Guidelines
authors:
- author: Dag-Erling Smørgrav
- author: Hiten Pandya
-releaseinfo: "$FreeBSD$"
+description: These guidelines describe recommended handling practices for FreeBSD Problem Reports (PRs).
trademarks: ["freebsd", "general"]
---
= Problem Report Handling Guidelines
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
include::shared/en/mailing-lists.adoc[]
include::shared/en/urls.adoc[]
[.abstract-title]
Abstract
These guidelines describe recommended handling practices for FreeBSD Problem Reports (PRs).
Whilst developed for the FreeBSD PR Database Maintenance Team mailto:freebsd-bugbusters@FreeBSD.org[freebsd-bugbusters@FreeBSD.org], these guidelines should be followed by anyone working with FreeBSD PRs.
'''
toc::[]
[[intro]]
== Introduction
Bugzilla is an issue management system used by the FreeBSD Project.
As accurate tracking of outstanding software defects is important to FreeBSD's quality, the correct use of the software is essential to the forward progress of the Project.
Access to Bugzilla is available to the entire FreeBSD community.
In order to maintain consistency within the database and provide a consistent user experience, guidelines have been established covering common aspects of bug management such as presenting followup, handling close requests, and so forth.
[[pr-lifecycle]]
== Problem Report Life-cycle
* The Reporter submits a bug report on the website. The bug is in the `Needs Triage` state.
* Jane Random BugBuster confirms that the bug report has sufficient information to be reproducible. If not, she goes back and forth with the reporter to obtain the needed information. At this point the bug is set to the `Open` state.
* Joe Random Committer takes interest in the PR and assigns it to himself, or Jane Random BugBuster decides that Joe is best suited to handle it and assigns it to him. The bug should be set to the `In Discussion` state.
* Joe has a brief exchange with the originator (making sure it all goes into the audit trail) and determines the cause of the problem.
* Joe pulls an all-nighter and whips up a patch that he thinks fixes the problem, and submits it in a follow-up, asking the originator to test it. He then sets the PR's state to `Patch Ready`.
* A couple of iterations later, both Joe and the originator are satisfied with the patch, and Joe commits it to `-CURRENT` (or directly to `-STABLE` if the problem does not exist in `-CURRENT`), making sure to reference the Problem Report in his commit log (and credit the originator if they submitted all or part of the patch) and, if appropriate, start an MFC countdown. The bug is set to the `Needs MFC` state.
* If the patch does not need MFCing, Joe then closes the PR as `Issue Resolved`.
[NOTE]
====
Many PRs are submitted with very little information about the problem, and some are either very complex to solve, or just scratch the surface of a larger problem; in these cases, it is very important to obtain all the necessary information needed to solve the problem.
If the problem contained within cannot be solved, or has occurred again, it is necessary to re-open the PR.
====
[[pr-states]]
== Problem Report State
It is important to update the state of a PR when certain actions are taken.
The state should accurately reflect the current state of work on the PR.
.A small example of when to change PR state
[example]
====
When a PR has been worked on and the developer(s) responsible feel comfortable about the fix, they will submit a followup to the PR and change its state to "feedback".
At this point, the originator should evaluate the fix in their context and respond indicating whether the defect has indeed been remedied.
====
A Problem Report may be in one of the following states:
open::
Initial state; the problem has been pointed out and it needs reviewing.
analyzed::
The problem has been reviewed and a solution is being sought.
feedback::
Further work requires additional information from the originator or the community; possibly information regarding the proposed solution.
patched::
A patch has been committed, but something (MFC, or maybe confirmation from originator) is still pending.
suspended::
The problem is not being worked on, due to lack of information or resources.
This is a prime candidate for somebody who is looking for a project to take on.
If the problem cannot be solved at all, it will be closed, rather than suspended.
The documentation project uses suspended for wish-list items that entail a significant amount of work which no one currently has time for.
closed::
A problem report is closed when any changes have been integrated, documented, and tested, or when fixing the problem is abandoned.
[NOTE]
====
The "patched" state is directly related to feedback, so you may go directly to "closed" state if the originator cannot test the patch, and it works in your own testing.
====
[[pr-types]]
== Types of Problem Reports
While handling problem reports, either as a developer who has direct access to the Problem Reports database or as a contributor who browses the database and submits followups with patches, comments, suggestions or change requests, you will come across several different types of PRs.
* <<pr-unassigned>>
* <<pr-assigned>>
* <<pr-dups>>
* <<pr-stale>>
* <<pr-misfiled-notpr>>
The following sections describe what each different type of PR is used for, when a PR belongs to one of these types, and what treatment each different type receives.
[[pr-unassigned]]
== Unassigned PRs
When PRs arrive, they are initially assigned to a generic (placeholder) assignee.
These are always prepended with `freebsd-`.
The exact value for this default depends on the category; in most cases, it corresponds to a specific FreeBSD mailing list.
Here is the current list, with the most common ones listed first:
[[default-assignees-common]]
.Default Assignees - most common
[cols="1,1,1", options="header"]
|===
| Type
| Categories
| Default Assignee
|base system
|bin, conf, gnu, kern, misc
|freebsd-bugs
|architecture-specific
|alpha, amd64, arm, i386, ia64, powerpc, sparc64
|freebsd-_arch_
|ports collection
|ports
|freebsd-ports-bugs
|documentation shipped with the system
|docs
|freebsd-doc
|FreeBSD web pages (not including docs)
|Website
|freebsd-www
|===
[[default-assignees-other]]
.Default Assignees - other
[cols="1,1,1", options="header"]
|===
| Type
| Categories
| Default Assignee
|advocacy efforts
|advocacy
|freebsd-advocacy
|Java Virtual Machine(TM) problems
|java
|freebsd-java
|standards compliance
|standards
|freebsd-standards
|threading libraries
|threads
|freebsd-threads
|man:usb[4] subsystem
|usb
|freebsd-usb
|===
Do not be surprised to find that the submitter of the PR has assigned it to the wrong category.
If you fix the category, do not forget to fix the assignment as well.
(In particular, our submitters seem to have a hard time understanding that just because their problem manifested on an i386 system, it might be generic to all of FreeBSD, and thus be more appropriate for `kern`.
The converse is also true, of course.)
Certain PRs may be reassigned away from these generic assignees by anyone.
There are several types of assignees: specialized mailing lists; mail aliases (used for certain limited-interest items); and individuals.
For assignees which are mailing lists, please use the long form when making the assignment (e.g., `freebsd-foo` instead of `foo`);
this will avoid duplicate emails sent to the mailing list.
[NOTE]
====
Since the list of individuals who have volunteered to be the default assignee for certain types of PRs changes so often, it is much more suitable for https://wiki.freebsd.org/AssigningPRs[the FreeBSD wiki].
====
Here is a sample list of such entities; it is probably not complete.
[[common-assignees-base]]
.Common Assignees - base system
[cols="1,1,1,1", options="header"]
|===
| Type
| Suggested Category
| Suggested Assignee
| Assignee Type
|problem specific to the ARM(R) architecture
|arm
|freebsd-arm
|mailing list
|problem specific to the MIPS(R) architecture
|kern
|freebsd-mips
|mailing list
|problem specific to the PowerPC(R) architecture
|kern
|freebsd-ppc
|mailing list
|problem with Advanced Configuration and Power Management (man:acpi[4])
|kern
|freebsd-acpi
|mailing list
|problem with Asynchronous Transfer Mode (ATM) drivers
|kern
|freebsd-atm
|mailing list
|problem with embedded or small-footprint FreeBSD systems (e.g., NanoBSD/PicoBSD/FreeBSD-arm)
|kern
|freebsd-embedded
|mailing list
|problem with FireWire(R) drivers
|kern
|freebsd-firewire
|mailing list
|problem with the filesystem code
|kern
|freebsd-fs
|mailing list
|problem with the man:geom[4] subsystem
|kern
|freebsd-geom
|mailing list
|problem with the man:ipfw[4] subsystem
|kern
|freebsd-ipfw
|mailing list
|problem with Integrated Services Digital Network (ISDN) drivers
|kern
|freebsd-isdn
|mailing list
|man:jail[8] subsystem
|kern
|freebsd-jail
|mailing list
|problem with Linux(R) or SVR4 emulation
|kern
|freebsd-emulation
|mailing list
|problem with the networking stack
|kern
|freebsd-net
|mailing list
|problem with the man:pf[4] subsystem
|kern
|freebsd-pf
|mailing list
|problem with the man:scsi[4] subsystem
|kern
|freebsd-scsi
|mailing list
|problem with the man:sound[4] subsystem
|kern
|freebsd-multimedia
|mailing list
|problems with the man:wlan[4] subsystem and wireless drivers
|kern
|freebsd-wireless
|mailing list
|problem with man:sysinstall[8] or man:bsdinstall[8]
|bin
|freebsd-sysinstall
|mailing list
|problem with the system startup scripts (man:rc[8])
|kern
|freebsd-rc
|mailing list
|problem with VIMAGE or VNET functionality and related code
|kern
|freebsd-virtualization
|mailing list
|problem with Xen emulation
|kern
|freebsd-xen
|mailing list
|===
[[common-assignees-ports]]
.Common Assignees - Ports Collection
[cols="1,1,1,1", options="header"]
|===
| Type
| Suggested Category
| Suggested Assignee
| Assignee Type
|problem with the ports framework (__not__ with an individual port!)
|ports
|portmgr
|alias
|port which is maintained by apache@FreeBSD.org
|ports
|apache
|mailing list
|port which is maintained by autotools@FreeBSD.org
|ports
|autotools
|alias
|port which is maintained by doceng@FreeBSD.org
|ports
|doceng
|alias
|port which is maintained by eclipse@FreeBSD.org
|ports
|freebsd-eclipse
|mailing list
|port which is maintained by gecko@FreeBSD.org
|ports
|gecko
|mailing list
|port which is maintained by gnome@FreeBSD.org
|ports
|gnome
|mailing list
|port which is maintained by hamradio@FreeBSD.org
|ports
|hamradio
|alias
|port which is maintained by haskell@FreeBSD.org
|ports
|haskell
|alias
|port which is maintained by java@FreeBSD.org
|ports
|freebsd-java
|mailing list
|port which is maintained by kde@FreeBSD.org
|ports
|kde
|mailing list
|port which is maintained by mono@FreeBSD.org
|ports
|mono
|mailing list
|port which is maintained by office@FreeBSD.org
|ports
|freebsd-office
|mailing list
|port which is maintained by perl@FreeBSD.org
|ports
|perl
|mailing list
|port which is maintained by python@FreeBSD.org
|ports
|freebsd-python
|mailing list
|port which is maintained by ruby@FreeBSD.org
|ports
|freebsd-ruby
|mailing list
|port which is maintained by secteam@FreeBSD.org
|ports
|secteam
|alias
|port which is maintained by vbox@FreeBSD.org
|ports
|vbox
|alias
|port which is maintained by x11@FreeBSD.org
|ports
|freebsd-x11
|mailing list
|===
Ports PRs which have a maintainer who is a ports committer may be reassigned by anyone (but note that not every FreeBSD committer is necessarily a ports committer, so you cannot simply go by the email address alone.)
For other PRs, please do not reassign them to individuals (other than yourself) unless you are certain that the assignee really wants to track the PR.
This will help to avoid the case where no one looks at fixing a particular problem because everyone assumes that the assignee is already working on it.
[[common-assignees-other]]
.Common Assignees - Other
[cols="1,1,1,1", options="header"]
|===
| Type
| Suggested Category
| Suggested Assignee
| Assignee Type
|problem with PR database
|bin
|bugmeister
|alias
|problem with Bugzilla https://bugs.freebsd.org/submit/[web form].
|doc
|bugmeister
|alias
|===
[[pr-assigned]]
== Assigned PRs
If a PR has the `responsible` field set to the username of a FreeBSD developer, it means that the PR has been handed over to that particular person for further work.
Assigned PRs should not be touched by anyone but the assignee or bugmeister.
If you have comments, submit a followup.
If for some reason you think the PR should change state or be reassigned, send a message to the assignee.
If the assignee does not respond within two weeks, unassign the PR and do as you please.
[[pr-dups]]
== Duplicate PRs
If you find more than one PR describing the same issue, choose the one that contains the largest amount of useful information and close the others, stating clearly the number of the superseding PR.
If several PRs contain non-overlapping useful information, submit all the missing information to one in a followup, including references to the others; then close the other PRs (which are now completely superseded).
[[pr-stale]]
== Stale PRs
A PR is considered stale if it has not been modified in more than six months. Apply the following procedure to deal with stale PRs:
* If the PR contains sufficient detail, try to reproduce the problem in `-CURRENT` and `-STABLE`. If you succeed, submit a followup detailing your findings and try to find someone to assign it to. Set the state to "analyzed" if appropriate.
* If the PR describes an issue which you know is the result of a usage error (incorrect configuration or otherwise), submit a followup explaining what the originator did wrong, then close the PR with the reason "User error" or "Configuration error".
* If the PR describes an error which you know has been corrected in both `-CURRENT` and `-STABLE`, close it with a message stating when it was fixed in each branch.
* If the PR describes an error which you know has been corrected in `-CURRENT`, but not in `-STABLE`, try to find out when the person who corrected it is planning to MFC it, or try to find someone else (maybe yourself?) to do it. Set the state to "patched" and assign it to whoever will do the MFC.
* In other cases, ask the originator to confirm if the problem still exists in newer versions. If the originator does not reply within a month, close the PR with the notation "Feedback timeout".
[[pr-misfiled-notpr]]
== Non-Bug PRs
Developers that come across PRs that look like they should have been posted to {freebsd-bugs} or some other list should close the PR, informing the submitter in a comment why this is not really a PR and where the message should be posted.
The email addresses that Bugzilla listens to for incoming PRs have been published as part of the FreeBSD documentation and have been announced and listed on the website.
This means that spammers found them.
Whenever you close one of these PRs, please do the following:
* Set the component to `junk` (under `Supporting Services`).
* Set Responsible to `nobody@FreeBSD.org`.
* Set State to `Issue Resolved`.
Setting the category to `junk` makes it obvious that there is no useful content within the PR, and helps to reduce the clutter within the main categories.
[[references]]
== Further Reading
This is a list of resources relevant to the proper writing and processing of problem reports.
It is by no means complete.
* link:{problem-reports}[How to Write FreeBSD Problem Reports]-guidelines for PR originators.
diff --git a/documentation/content/en/articles/problem-reports/_index.adoc b/documentation/content/en/articles/problem-reports/_index.adoc
index 3294a5266e..d61055bdfc 100644
--- a/documentation/content/en/articles/problem-reports/_index.adoc
+++ b/documentation/content/en/articles/problem-reports/_index.adoc
@@ -1,271 +1,271 @@
---
title: Writing FreeBSD Problem Reports
authors:
- author: Dag-Erling Smørgrav
- author: Mark Linimon
-releaseinfo: "$FreeBSD$"
+description: Writing FreeBSD Problem Reports
trademarks: ["freebsd", "ibm", "intel", "sun", "general"]
---
= Writing FreeBSD Problem Reports
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
include::shared/en/mailing-lists.adoc[]
include::shared/en/urls.adoc[]
[.abstract-title]
Abstract
This article describes how to best formulate and submit a problem report to the FreeBSD Project.
'''
toc::[]
[[pr-intro]]
== Introduction
One of the most frustrating experiences one can have as a software user is to submit a problem report only to have it summarily closed with a terse and unhelpful explanation like "not a bug" or "bogus PR".
Similarly, one of the most frustrating experiences as a software developer is to be flooded with problem reports that are not really problem reports but requests for support, or that contain little or no information about what the problem is and how to reproduce it.
This document attempts to describe how to write good problem reports.
What, one asks, is a good problem report? Well, to go straight to the bottom line, a good problem report is one that can be analyzed and dealt with swiftly, to the mutual satisfaction of both user and developer.
Although the primary focus of this article is on FreeBSD problem reports, most of it should apply quite well to other software projects.
Note that this article is organized thematically, not chronologically.
Read the entire document before submitting a problem report, rather than treating it as a step-by-step tutorial.
[[pr-when]]
== When to Submit a Problem Report
There are many types of problems, and not all of them should engender a problem report.
Of course, nobody is perfect, and there will be times when what seems to be a bug in a program is, in fact, a misunderstanding of the syntax for a command or a typographical error in a configuration file (though that in itself may sometimes be indicative of poor documentation or poor error handling in the application).
There are still many cases where submitting a problem report is clearly _not_ the right course of action, and will only serve to frustrate both the submitter and the developers.
Conversely, there are cases where it might be appropriate to submit a problem report about something other than a bug, such as an enhancement or a new feature.
So how does one determine what is a bug and what is not? As a simple rule of thumb, the problem is _not_ a bug if it can be expressed as a question (usually of the form "How do I do X?" or "Where can I find Y?").
It is not always quite so black and white, but the question rule covers a large majority of cases.
When looking for an answer, consider posing the question to the {freebsd-questions}.
Consider these factors when submitting PRs about ports or other software that is not part of FreeBSD itself:
* Please do not submit problem reports that simply state that a newer version of an application is available. Ports maintainers are automatically notified by portscout when a new version of an application becomes available. Actual patches to update a port to the latest version are welcome.
* For unmaintained ports (`MAINTAINER` is `ports@FreeBSD.org`), a PR without an included patch is unlikely to get picked up by a committer. To become the maintainer of an unmaintained port, submit a PR with the request (patch preferred but not required).
* In either case, following the process described in link:{porters-handbook}#port-upgrading[Porter's Handbook] will yield the best results. (You might also wish to read link:{contributing}#ports-contributing[Contributing to the FreeBSD Ports Collection].)
A bug that cannot be reproduced can rarely be fixed.
If the bug only occurred once and you cannot reproduce it, and it does not seem to happen to anybody else, chances are none of the developers will be able to reproduce it or figure out what is wrong.
That does not mean it did not happen, but it does mean that the chances of your problem report ever leading to a bug fix are very slim.
To make matters worse, often these kinds of bugs are actually caused by failing hard drives or overheating processors - you should always try to rule out these causes, whenever possible, before submitting a PR.
Next, to decide to whom you should file your problem report, you need to understand that the software that makes up FreeBSD is composed of several different elements:
* Code in the base system that is written and maintained by FreeBSD contributors, such as the kernel, the C library, and the device drivers (categorized as `kern`); the binary utilities (`bin`); the manual pages and documentation (`docs`); and the web pages (`www`). All bugs in these areas should be reported to the FreeBSD developers.
* Code in the base system that is written and maintained by others, and imported into FreeBSD and adapted. Examples include man:clang[1], and man:sendmail[8]. Most bugs in these areas should be reported to the FreeBSD developers; but in some cases they may need to be reported to the original authors instead if the problems are not FreeBSD-specific.
* Individual applications that are not in the base system but are instead part of the FreeBSD Ports Collection (category `ports`). Most of these applications are not written by FreeBSD developers; what FreeBSD provides is merely a framework for installing the application. Therefore, only report a problem to the FreeBSD developers when the problem is believed to be FreeBSD-specific; otherwise, report it to the authors of the software.
Then, ascertain whether the problem is timely.
There are few things that will annoy a developer more than receiving a problem report about a bug she has already fixed.
If the problem is in the base system, first read the FAQ section on link:{faq}#LATEST-VERSION[FreeBSD versions], if you are not already familiar with the topic.
It is not possible for FreeBSD to fix problems in anything other than certain recent branches of the base system, so filing a bug report about an older version will probably only result in a developer advising you to upgrade to a supported version to see if the problem still recurs.
The Security Officer team maintains the link:https://www.FreeBSD.org/security/[list of supported versions].
If the problem is in a port, consider filing a bug with the upstream.
The FreeBSD Project cannot fix all bugs in all software.
[[pr-prep]]
== Preparations
A good rule to follow is to always do a background search before submitting a problem report.
Maybe the problem has already been reported; maybe it is being discussed on the mailing lists, or recently was; it may even already be fixed in a newer version than what you are running.
You should therefore check all the obvious places before submitting your problem report.
For FreeBSD, this means:
* The FreeBSD link:{faq}[Frequently Asked Questions] (FAQ) list. The FAQ attempts to provide answers for a wide range of questions, such as those concerning link:{faq}#hardware[hardware compatibility], link:{faq}#applications[user applications], and link:{faq}#kernelconfig[kernel configuration].
* The link:{handbook}#eresources-mail[mailing lists]-if you are not subscribed, use https://www.FreeBSD.org/search/#mailinglists[the searchable archives] on the FreeBSD web site. If the problem has not been discussed on the lists, you might try posting a message about it and waiting a few days to see if someone can spot something that has been overlooked.
* Optionally, the entire web: use your favorite search engine to locate any references to the problem. You may even get hits from archived mailing lists or newsgroups you did not know of or had not thought to search through.
* Next, the searchable https://bugs.freebsd.org/bugzilla/query.cgi[FreeBSD PR database] (Bugzilla). Unless the problem is recent or obscure, there is a fair chance it has already been reported.
* Most importantly, attempt to see if existing documentation in the source base addresses your problem.
+
For the base FreeBSD code, you should carefully study the contents of [.filename]#/usr/src/UPDATING# on your system or the latest version at https://cgit.freebsd.org/src/tree/UPDATING[https://cgit.freebsd.org/src/tree/UPDATING].
(This is vital information if you are upgrading from one version to another-especially if you are upgrading to the FreeBSD-CURRENT branch).
+
However, if the problem is in something that was installed as a part of the FreeBSD Ports Collection, you should refer to [.filename]#/usr/ports/UPDATING# (for individual ports) or [.filename]#/usr/ports/CHANGES# (for changes that affect the entire Ports Collection).
https://cgit.freebsd.org/ports/tree/UPDATING[https://cgit.freebsd.org/ports/tree/UPDATING] and https://cgit.freebsd.org/ports/tree/CHANGES[https://cgit.freebsd.org/ports/tree/CHANGES] are also available via cgit.
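As a quick illustration, assuming default locations for the source and ports trees (and with `category/portname` standing in for the port in question), these files can be inspected directly from the command line:

[source,shell]
----
% less /usr/src/UPDATING
% grep -B 2 -A 10 "category/portname" /usr/ports/UPDATING
----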
[[pr-writing]]
== Writing the Problem Report
Now that you have decided that your issue merits a problem report, and that it is a FreeBSD problem, it is time to write the actual problem report.
Before we get into the mechanics of the program used to generate and submit PRs, here are some tips and tricks to help make sure that your PR will be most effective.
[[pr-writing-tips]]
== Tips and Tricks for Writing a Good Problem Report
* _Do not leave the "Summary" line empty._ The PRs go both onto a mailing list that goes all over the world (where the "Summary" is used for the `Subject:` line) and into a database. Anyone who comes along later and browses the database by synopsis, and finds a PR with a blank subject line, tends just to skip over it. Remember that PRs stay in this database until they are closed by someone; an anonymous one will usually just disappear in the noise.
* _Avoid using a weak "Summary" line._ You should not assume that anyone reading your PR has any context for your submission, so the more you provide, the better. For instance, what part of the system does the problem apply to? Do you only see the problem while installing, or while running? To illustrate, instead of `Summary: portupgrade is broken`, see how much more informative this seems: `Summary: port ports-mgmt/portupgrade coredumps on -current`. (In the case of ports, it is especially helpful to have both the category and portname in the "Summary" line.)
* _If you have a patch, say so._ A PR with a patch included is much more likely to be looked at than one without. Please set the `patch` Keyword in Bugzilla.
* _If you are a maintainer, say so._ If you are maintaining a part of the source code (for instance, an existing port), you definitely should set the "Class" of your PR to `maintainer-update`. This way any committer that handles your PR will not have to check.
* _Be specific._ The more information you supply about what problem you are having, the better your chance of getting a response.
** Include the version of FreeBSD you are running (there is a place to put that, see below) and on which architecture. You should include whether you are running from a release (e.g., from a CD-ROM or download), or from a system maintained by Git (and, if so, what hash and branch you are at). If you are tracking the FreeBSD-CURRENT branch, that is the very first thing someone will ask, because fixes (especially for high-profile problems) tend to get committed very quickly, and FreeBSD-CURRENT users are expected to keep up.
** Include which global options you have specified in your [.filename]#make.conf#, [.filename]#src.conf#, and [.filename]#src-env.conf#. Given the infinite number of options, not every combination may be fully supported.
** If the problem can be reproduced easily, include information that will help a developer to reproduce it themselves. If a problem can be demonstrated with specific input then include an example of that input if possible, and include both the actual and the expected output. If this data is large or cannot be made public, then do try to create a minimal file that exhibits the same issue and that can be included within the PR.
** If this is a kernel problem, then be prepared to supply the following information. (You do not have to include these by default, which only tends to fill up the database, but you should include excerpts that you think might be relevant):
*** your kernel configuration (including which hardware devices you have installed)
*** whether or not you have debugging options enabled (such as `WITNESS`), and if so, whether the problem persists when you change the sense of that option
*** the full text of any backtrace, panic or other console output, or entries in [.filename]#/var/log/messages#, if any were generated
*** the output of `pciconf -l` and relevant parts of your `dmesg` output if your problem relates to a specific piece of hardware (a brief sketch of collecting this information appears after this list)
*** the fact that you have read [.filename]#src/UPDATING# and that your problem is not listed there (someone is guaranteed to ask)
*** whether or not you can run any other kernel as a fallback (this is to rule out hardware-related issues such as failing disks and overheating CPUs, which can masquerade as kernel problems)
** If this is a ports problem, then be prepared to supply the following information. (You do not have to include these by default, which only tends to fill up the database, but you should include excerpts that you think might be relevant):
*** which ports you have installed
*** any environment variables that override the defaults in [.filename]#bsd.port.mk#, such as `PORTSDIR`
*** the fact that you have read [.filename]#ports/UPDATING# and that your problem is not listed there (someone is guaranteed to ask)
* _Avoid vague requests for features._ PRs of the form "someone should really implement something that does so-and-so" are less likely to get results than very specific requests. Remember, the source is available to everyone, so if you want a feature, the best way to ensure it being included is to get to work! Also consider the fact that many things like this would make a better topic for discussion on `freebsd-questions` than an entry in the PR database, as discussed above.
* _Make sure no one else has already submitted a similar PR._ Although this has already been mentioned above, it bears repeating here. It only takes a minute or two to use the web-based search engine at https://bugs.freebsd.org/bugzilla/query.cgi[https://bugs.freebsd.org/bugzilla/query.cgi]. (Of course, everyone is guilty of forgetting to do this now and then.)
* _Report only one issue per Problem Report._ Avoid including two or more problems within the same report unless they are related. When submitting patches, avoid adding multiple features or fixing multiple bugs in the same PR unless they are closely related-such PRs often take longer to resolve.
* _Avoid controversial requests._ If your PR addresses an area that has been controversial in the past, you should probably be prepared to not only offer patches, but also justification for why the patches are "The Right Thing To Do". As noted above, a careful search of the mailing lists using the archives at https://www.FreeBSD.org/search/#mailinglists[https://www.FreeBSD.org/search/#mailinglists] is always good preparation.
* _Be polite._ Almost anyone who would potentially work on your PR is a volunteer. No one likes to be told that they have to do something when they are already doing it for some motivation other than monetary gain. This is a good thing to keep in mind at all times on Open Source projects.
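For the hardware-related excerpts mentioned in the list above, a minimal sketch of collecting them might look like the following; the output file and the `em0` device name are only examples:

[source,shell]
----
% freebsd-version -ku > /tmp/pr-info.txt         # kernel and userland versions
% uname -m >> /tmp/pr-info.txt                   # architecture
% pciconf -l >> /tmp/pr-info.txt                 # PCI devices and attached drivers
% dmesg | grep em0 >> /tmp/pr-info.txt           # "em0" is only an example device
----

Trim the resulting file down to the relevant excerpts before pasting it into the PR, rather than attaching it wholesale.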
[[pr-writing-before-beginning]]
== Before Beginning
Similar considerations apply to use of the https://bugs.freebsd.org/bugzilla/enter_bug.cgi[web-based PR submission form].
Be careful of cut-and-paste operations that might change whitespace or other text formatting.
Finally, if the submission is lengthy, prepare the work offline so that nothing will be lost if there is a problem submitting it.
[[pr-writing-attaching-patches]]
== Attaching Patches or Files
When attaching a patch, be sure to use either `git diff` or man:diff[1] with the `-u` option to create a unified diff and make sure to specify the Git hash and branch of the repository against which you modified files, so the developers who read your report will be able to apply them easily.
For problems with the kernel or the base utilities, a patch against FreeBSD-CURRENT (the main Git branch) is preferred since all new code should be applied and tested there first.
After appropriate or substantial testing has been done, the code will be merged/migrated to the FreeBSD-STABLE branch.
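As a rough sketch, assuming the source tree is a Git checkout under [.filename]#/usr/src# and the output path is arbitrary, the diff and the accompanying Git information could be produced like this:

[source,shell]
----
% cd /usr/src
% git diff > /tmp/mychange.diff        # unified diff of uncommitted changes
% git rev-parse HEAD                   # hash to mention in the PR
% git branch --show-current            # branch to mention in the PR
----

Without Git, man:diff[1] with the `-u` option against a saved copy of the original file produces the same kind of unified diff.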
If you attach a patch inline, instead of as an attachment, note that the most common problem by far is the tendency of some email programs to render tabs as spaces, which will completely ruin anything intended to be part of a Makefile.
Do not send patches as attachments using `Content-Transfer-Encoding: quoted-printable`.
These will perform character escaping and the entire patch will be useless.
Also note that while including small patches in a PR is generally all right-particularly when they fix the problem described in the PR-large patches and especially new code which may require substantial review before committing should be placed on a web or ftp server, and the URL should be included in the PR instead of the patch.
Patches in email tend to get mangled, and the larger the patch, the harder it will be for interested parties to unmangle it.
Also, posting a patch on the web allows you to modify it without having to resubmit the entire patch in a followup to the original PR.
Finally, large patches simply increase the size of the database, since closed PRs are not actually deleted but instead kept and simply marked as complete.
You should also take note that unless you explicitly specify otherwise in your PR or in the patch itself, any patches you submit will be assumed to be licensed under the same terms as the original file you modified.
[[pr-writing-filling-template]]
== Filling out the Form
[NOTE]
====
The email address you use will become public information and may become available to spammers.
You should either have spam handling procedures in place, or use a temporary email account.
However, please note that if you do not use a valid email account at all, we will not be able to ask you questions about your PR.
====
When you file a bug, you will find the following fields:
* _Summary:_ Fill this out with a short and accurate description of the problem. The synopsis is used as the subject of the problem report email, and is used in problem report listings and summaries; problem reports with obscure synopses tend to get ignored.
* _Severity:_ One of `Affects only me`, `Affects some people` or `Affects many people`. Do not overreact; refrain from labeling your problem `Affects many people` unless it really does. FreeBSD developers will not necessarily work on your problem faster if you inflate its importance since there are so many other people who have done exactly that.
* _Category:_ Choose an appropriate category.
+
The first thing you need to do is to decide what part of the system your problem lies in.
Remember, FreeBSD is a complete operating system, which installs a kernel, the standard libraries, many peripheral drivers, and a large number of utilities (the "base system").
However, there are thousands of additional applications in the Ports Collection.
You'll first need to decide if the problem is in the base system or something installed via the Ports Collection.
+
Here is a description of the major categories:
** If a problem is with the kernel, the libraries (such as standard C library `libc`), or a peripheral driver in the base system, in general you will use the `kern` category. (There are a few exceptions; see below). In general these are things that are described in section 2, 3, or 4 of the manual pages.
** If a problem is with a binary program such as man:sh[1] or man:mount[8], you will first need to determine whether these programs are in the base system or were added via the Ports Collection. If you are unsure, you can do `whereis _programname_`. FreeBSD's convention for the Ports Collection is to install everything underneath [.filename]#/usr/local#, although this can be overridden by a system administrator. For these, you will use the `ports` category (yes, even if the port's category is `www`; see below). If the location is [.filename]#/bin#, [.filename]#/usr/bin#, [.filename]#/sbin#, or [.filename]#/usr/sbin#, it is part of the base system, and you should use the `bin` category. These are all things that are described in section 1 or 8 of the manual pages. (A short transcript illustrating the `whereis` check appears after this list.)
** If you believe that the error is in the startup `(rc)` scripts, or in some kind of other non-executable configuration file, then the right category is `conf` (configuration). These are things that are described in section 5 of the manual pages.
** If you have found a problem in the documentation set (articles, books, man pages) or website the correct choice is `docs`.
+
[NOTE]
====
If you are having a problem with something from a port named `www/_someportname_`, this nevertheless goes in the `ports` category.
====
+
There are a few more specialized categories.
** If the problem would otherwise be filed in `kern` but has to do with the USB subsystem, the correct choice is `usb`.
** If the problem would otherwise be filed in `kern` but has to do with the threading libraries, the correct choice is `threads`.
** If the problem would otherwise be in the base system, but has to do with our adherence to standards such as POSIX(R), the correct choice is `standards`.
** If you are convinced that the problem will only occur under the processor architecture you are using, select one of the architecture-specific categories: commonly `i386` for Intel-compatible machines in 32-bit mode; `amd64` for AMD machines running in 64-bit mode (this also includes Intel-compatible machines running in EM64T mode); and less commonly `arm` or `powerpc`.
+
[NOTE]
====
These categories are quite often misused for "I do not know" problems. Rather than guessing, please just use `misc`.
====
+
.Correct Use of Arch-Specific Category
[example]
====
You have a common PC-based machine, and think you have encountered a problem specific to a particular chipset or a particular motherboard: `i386` is the right category.
====
+
.Incorrect Use of Arch-Specific Category
[example]
====
You are having a problem with an add-in peripheral card on a commonly seen bus, or a problem with a particular type of hard disk drive: in this case, it probably applies to more than one architecture, and `kern` is the right category.
====
** If you really do not know where the problem lies (or the explanation does not seem to fit into the ones above), use the `misc` category. Before you do so, you may wish to ask for help on the {freebsd-questions} first. You may be advised that one of the existing categories really is a better choice.
* _Environment:_ This should describe, as accurately as possible, the environment in which the problem has been observed. This includes the operating system version, the version of the specific program or file that contains the problem, and any other relevant items such as system configuration, other installed software that influences the problem, etc.-quite simply everything a developer needs to know to reconstruct the environment in which the problem occurs.
* __Description:__ A complete and accurate description of the problem you are experiencing. Try to avoid speculating about the causes of the problem unless you are certain that you are on the right track, as it may mislead a developer into making incorrect assumptions about the problem. It should include the actions you need to take to reproduce the problem. If you know any workaround, include it. It not only helps other people with the same problem work around it, but may also help a developer understand the cause of the problem.
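To illustrate the `whereis` check mentioned in the category descriptions above (the output is abbreviated, and the program names are arbitrary examples):

[source,shell]
----
% whereis bash
bash: /usr/local/bin/bash ...
% whereis mount
mount: /sbin/mount ...
----

Here the hypothetical `bash` lives under [.filename]#/usr/local#, so a problem with it belongs in the `ports` category, while `mount` is under [.filename]#/sbin# and therefore falls under `bin`.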
[[pr-followup]]
== Follow-up
Once the problem report has been filed, you will receive a confirmation by email which will include the tracking number that was assigned to your problem report and a URL you can use to check its status.
With a little luck, someone will take an interest in your problem and try to address it, or, as the case may be, explain why it is not a problem.
You will be automatically notified of any change of status, and you will receive copies of any comments or patches someone may attach to your problem report's audit trail.
If someone requests additional information from you, or you remember or discover something you did not mention in the initial report, please submit a follow up.
The number one reason for a bug not getting fixed is lack of communication with the originator.
The easiest way is to use the comment option on the individual PR's web page, which you can reach from the https://bugs.freebsd.org/bugzilla/query.cgi[PR search page].
If the problem report remains open after the problem has gone away, just add a comment saying that the problem report can be closed, and, if possible, explaining how or when the problem was fixed.
Sometimes there is a delay of a week or two where the problem report remains untouched, not assigned or commented on by anyone.
This can happen when there is an increased problem report backlog or during a holiday season.
When a problem report has not received attention after several weeks, it is worth finding a committer particularly interested in working on it.
There are a few ways to do so, ideally in the following order, with a few days between attempting each communication channel:
* Find the relevant FreeBSD mailing list for the problem report from the link:{handbook}#eresources-mail[list in the Handbook] and send a message to that list asking about assistance or comments on the problem report.
* Join the relevant IRC channels. A partial listing is here: https://wiki.freebsd.org/IrcChannels[]. Inform the people in that channel about the problem report and ask for assistance. Be patient and stay in the channel after posting, so that the people from different time zones around the world have a chance to catch up.
* Find committers interested in the problem that was reported. If the problem was in a particular tool, binary, port, document, or source file, check the https://cgit.FreeBSD.org[Git Repository]. Locate the last few committers who made substantive changes to the file, and try to reach them via IRC or email. A list of committers and their emails can be found in the link:{contributors}[Contributors to FreeBSD] article.
Remember that these people are volunteers, just like maintainers and users, so they might not be immediately available to assist with the problem report.
Patience and consistency in the follow-ups is highly advised and appreciated.
With enough care and effort dedicated to that follow-up process, finding a committer to take care of the problem report is just a matter of time.
[[pr-problems]]
== If There Are Problems
If you found an issue with the bug system, file a bug! There is a category for exactly this purpose.
If you are unable to do so, contact the bug wranglers at mailto:bugmeister@FreeBSD.org[bugmeister@FreeBSD.org].
[[pr-further]]
== Further Reading
This is a list of resources relevant to the proper writing and processing of problem reports.
It is by no means complete.
* https://github.com/smileytechguy/reporting-bugs-effectively/blob/master/ENGLISH.md[How to Report Bugs Effectively]-an excellent essay by Simon G. Tatham on composing useful (non-FreeBSD-specific) problem reports.
* link:{pr-guidelines}[Problem Report Handling Guidelines]-valuable insight into how problem reports are handled by the FreeBSD developers.
diff --git a/documentation/content/en/articles/rc-scripting/_index.adoc b/documentation/content/en/articles/rc-scripting/_index.adoc
index 336f5c03c2..72e71bffbd 100644
--- a/documentation/content/en/articles/rc-scripting/_index.adoc
+++ b/documentation/content/en/articles/rc-scripting/_index.adoc
@@ -1,803 +1,803 @@
---
title: Practical rc.d scripting in BSD
authors:
- author: Yar Tikhiy
email: yar@FreeBSD.org
copyright: 2005-2006, 2012 The FreeBSD Project
-releaseinfo: "$FreeBSD$"
+description: Practical rc.d scripting in BSD
trademarks: ["freebsd", "netbsd", "general"]
---
= Practical rc.d scripting in BSD
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
include::shared/en/urls.adoc[]
[.abstract-title]
Abstract
Beginners may find it difficult to relate the facts from the formal documentation on the BSD [.filename]#rc.d# framework to the practical tasks of [.filename]#rc.d# scripting.
In this article, we consider a few typical cases of increasing complexity, show [.filename]#rc.d# features suited for each case, and discuss how they work.
Such an examination should provide reference points for further study of the design and efficient application of [.filename]#rc.d#.
'''
toc::[]
[[rcng-intro]]
== Introduction
The historical BSD had a monolithic startup script, [.filename]#/etc/rc#.
It was invoked by man:init[8] at system boot time and performed all userland tasks required for multi-user operation: checking and mounting file systems, setting up the network, starting daemons, and so on.
The precise list of tasks was not the same in every system; admins needed to customize it.
With few exceptions, [.filename]#/etc/rc# had to be modified, and true hackers liked it.
The real problem with the monolithic approach was that it provided no control over the individual components started from [.filename]#/etc/rc#.
For instance, [.filename]#/etc/rc# could not restart a single daemon.
The system admin had to find the daemon process by hand, kill it, wait until it actually exited, then browse through [.filename]#/etc/rc# for the flags, and finally type the full command line to start the daemon again.
The task would become even more difficult and prone to errors if the service to restart consisted of more than one daemon or demanded additional actions.
In a few words, the single script failed to fulfil what scripts are for: to make the system admin's life easier.
Later there was an attempt to split out some parts of [.filename]#/etc/rc# for the sake of starting the most important subsystems separately.
The notorious example was [.filename]#/etc/netstart# to bring up networking.
It did allow for accessing the network from single-user mode, but it did not integrate well into the automatic startup process because parts of its code needed to interleave with actions essentially unrelated to networking.
That was why [.filename]#/etc/netstart# mutated into [.filename]#/etc/rc.network#.
The latter was no longer an ordinary script; it was composed of large, tangled man:sh[1] functions called from [.filename]#/etc/rc# at different stages of system startup.
However, as the startup tasks grew diverse and sophisticated, the "quasi-modular" approach became even more of a drag than the monolithic [.filename]#/etc/rc# had been.
Without a clean and well-designed framework, the startup scripts had to bend over backwards to satisfy the needs of rapidly developing BSD-based operating systems.
It became obvious at last that more steps are necessary on the way to a fine-grained and extensible [.filename]#rc# system.
Thus BSD [.filename]#rc.d# was born.
Its acknowledged fathers were Luke Mewburn and the NetBSD community.
Later it was imported into FreeBSD.
Its name refers to the location of system scripts for individual services, which is in [.filename]#/etc/rc.d#.
Soon we will learn about more components of the [.filename]#rc.d# system and see how the individual scripts are invoked.
The basic ideas behind BSD [.filename]#rc.d# are _fine modularity_ and __code reuse__.
_Fine modularity_ means that each basic "service" such as a system daemon or primitive startup task gets its own man:sh[1] script able to start the service, stop it, reload it, and check its status.
A particular action is chosen by the command-line argument to the script.
The [.filename]#/etc/rc# script still drives system startup, but now it merely invokes the smaller scripts one by one with the `start` argument.
It is easy to perform shutdown tasks as well by running the same set of scripts with the `stop` argument, which is done by [.filename]#/etc/rc.shutdown#.
Note how closely this follows the Unix way of having a set of small specialized tools, each fulfilling its task as well as possible.
_Code reuse_ means that common operations are implemented as man:sh[1] functions and collected in [.filename]#/etc/rc.subr#.
Now a typical script can be just a few lines' worth of man:sh[1] code.
Finally, an important part of the [.filename]#rc.d# framework is man:rcorder[8], which helps [.filename]#/etc/rc# run the small scripts in an order that respects the dependencies between them.
It can help [.filename]#/etc/rc.shutdown#, too, because the proper order for the shutdown sequence is opposite to that of startup.
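If you are curious about the order in which the scripts would run on your own system, you can ask man:rcorder[8] directly; the following invocation is just an illustration and omits the keyword filtering that [.filename]#/etc/rc# actually performs:
[source,shell]
....
# rcorder /etc/rc.d/* 2>/dev/null | head
....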
The BSD [.filename]#rc.d# design is described in <<lukem, the original article by Luke Mewburn>>, and the [.filename]#rc.d# components are documented in great detail in <<manpages, the respective manual pages>>.
However, it might not appear obvious to an [.filename]#rc.d# newbie how to tie the numerous bits and pieces together in order to create a well-styled script for a particular task.
Therefore this article will try a different approach to describe [.filename]#rc.d#.
It will show which features should be used in a number of typical cases, and why.
Note that this is not a how-to document because our aim is not to give ready-made recipes, but to show a few easy entrances into the [.filename]#rc.d# realm.
Neither is this article a replacement for the relevant manual pages.
Do not hesitate to refer to them for more formal and complete documentation while reading this article.
There are prerequisites to understanding this article.
First of all, you should be familiar with the man:sh[1] scripting language in order to master [.filename]#rc.d#.
In addition, you should know how the system performs userland startup and shutdown tasks, which is described in man:rc[8].
This article focuses on the FreeBSD branch of [.filename]#rc.d#.
Nevertheless, it may be useful to NetBSD developers, too, because the two branches of BSD [.filename]#rc.d# not only share the same design but also stay similar in their aspects visible to script authors.
[[rcng-task]]
== Outlining the task
A little consideration before starting `$EDITOR` will not hurt.
In order to write a well-tempered [.filename]#rc.d# script for a system service, we should be able to answer the following questions first:
* Is the service mandatory or optional?
* Will the script serve a single program, e.g., a daemon, or perform more complex actions?
* Which other services will our service depend on, and vice versa?
From the examples that follow we will see why it is important to know the answers to these questions.
[[rcng-dummy]]
== A dummy script
The following script just emits a message each time the system boots up:
[.programlisting]
....
#!/bin/sh <.>
. /etc/rc.subr <.>
name="dummy" <.>
start_cmd="${name}_start" <.>
stop_cmd=":" <.>
dummy_start() <.>
{
	echo "Nothing started."
}
load_rc_config $name <.>
run_rc_command "$1" <.>
....
Things to note are:
&#10122; An interpreted script should begin with the magic "shebang" line.
That line specifies the interpreter program for the script.
Due to the shebang line, the script can be invoked exactly like a binary program provided that it has the execute bit set.
(See man:chmod[1].)
For example, a system admin can run our script manually, from the command line:
[source,shell]
....
# /etc/rc.d/dummy start
....
[NOTE]
====
In order to be properly managed by the [.filename]#rc.d# framework, its scripts need to be written in the man:sh[1] language.
If you have a service or port that uses a binary control utility or a startup routine written in another language, install that element in [.filename]#/usr/sbin# (for the system) or [.filename]#/usr/local/sbin# (for ports) and call it from a man:sh[1] script in the appropriate [.filename]#rc.d# directory.
====
[TIP]
====
If you would like to learn the details of why [.filename]#rc.d# scripts must be written in the man:sh[1] language, see how [.filename]#/etc/rc# invokes them by means of `run_rc_script`, then study the implementation of `run_rc_script` in [.filename]#/etc/rc.subr#.
====
&#10123; In [.filename]#/etc/rc.subr#, a number of man:sh[1] functions are defined for an [.filename]#rc.d# script to use.
The functions are documented in man:rc.subr[8].
While it is theoretically possible to write an [.filename]#rc.d# script without ever using man:rc.subr[8], its functions prove extremely handy and make the job an order of magnitude easier. So it is no surprise that everybody resorts to man:rc.subr[8] in [.filename]#rc.d# scripts.
We are not going to be an exception.
An [.filename]#rc.d# script must "source" [.filename]#/etc/rc.subr# (include it using "`.`") _before_ it calls man:rc.subr[8] functions so that man:sh[1] has an opportunity to learn the functions.
The preferred style is to source [.filename]#/etc/rc.subr# first of all.
[NOTE]
====
Some useful functions related to networking are provided by another include file, [.filename]#/etc/network.subr#.
====
&#10124; [[name-var]]The mandatory variable `name` specifies the name of our script.
It is required by man:rc.subr[8].
That is, each [.filename]#rc.d# script _must_ set `name` before it calls man:rc.subr[8] functions.
Now it is the right time to choose a unique name for our script once and for all.
We will use it in a number of places while developing the script.
For a start, let us give the same name to the script file, too.
[NOTE]
====
The current style of [.filename]#rc.d# scripting is to enclose values assigned to variables in double quotes.
Keep in mind that it is just a style issue that may not always be applicable.
You can safely omit quotes from around simple words without man:sh[1] metacharacters in them, while in certain cases you will need single quotes to prevent any interpretation of the value by man:sh[1].
A programmer should be able to tell the language syntax from style conventions and use both of them wisely.
====
&#10125; The main idea behind man:rc.subr[8] is that an [.filename]#rc.d# script provides handlers, or methods, for man:rc.subr[8] to invoke.
In particular, `start`, `stop`, and other arguments to an [.filename]#rc.d# script are handled this way.
A method is a man:sh[1] expression stored in a variable named `argument_cmd`, where _argument_ corresponds to what can be specified on the script's command line.
We will see later how man:rc.subr[8] provides default methods for the standard arguments.
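For instance, a script that wanted its own handler for the `restart` argument could provide one along these lines (a minimal sketch; the function name is hypothetical):
[.programlisting]
....
restart_cmd="${name}_restart"
dummy_restart()
{
	echo "Pretending to restart."
}
....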
[NOTE]
====
To make the code in [.filename]#rc.d# more uniform, it is common to use `${name}` wherever appropriate.
Thus a number of lines can be just copied from one script to another.
====
&#10126; We should keep in mind that man:rc.subr[8] provides default methods for the standard arguments.
Consequently, we must override a standard method with a no-op man:sh[1] expression if we want it to do nothing.
&#10127; The body of a sophisticated method can be implemented as a function.
It is a good idea to make the function name meaningful.
[IMPORTANT]
====
It is strongly recommended to add the prefix `${name}` to the names of all functions defined in our script so they never clash with the functions from man:rc.subr[8] or another common include file.
====
&#10128; This call to man:rc.subr[8] loads man:rc.conf[5] variables.
Our script makes no use of them yet, but it is still recommended to load man:rc.conf[5] because there can be man:rc.conf[5] variables controlling man:rc.subr[8] itself.
&#10129; Usually this is the last command in an [.filename]#rc.d# script.
It invokes the man:rc.subr[8] machinery to perform the requested action using the variables and methods our script has provided.
[[rcng-confdummy]]
== A configurable dummy script
Now let us add some controls to our dummy script.
As you may know, [.filename]#rc.d# scripts are controlled with man:rc.conf[5].
Fortunately, man:rc.subr[8] hides all the complications from us.
The following script uses man:rc.conf[5] via man:rc.subr[8] to see whether it is enabled in the first place, and to fetch a message to show at boot time.
These two tasks in fact are independent.
On the one hand, an [.filename]#rc.d# script can just support enabling and disabling its service.
On the other hand, a mandatory [.filename]#rc.d# script can have configuration variables.
We will do both things in the same script though:
[.programlisting]
....
#!/bin/sh
. /etc/rc.subr
name=dummy
rcvar=dummy_enable <.>
start_cmd="${name}_start"
stop_cmd=":"
load_rc_config $name <.>
: ${dummy_enable:=no} <.>
: ${dummy_msg="Nothing started."} <.>
dummy_start()
{
	echo "$dummy_msg" <.>
}
run_rc_command "$1"
....
What changed in this example?
&#10122; The variable `rcvar` specifies the name of the ON/OFF knob variable.
&#10123; Now `load_rc_config` is invoked earlier in the script, before any man:rc.conf[5] variables are accessed.
[NOTE]
====
While examining [.filename]#rc.d# scripts, keep in mind that man:sh[1] defers the evaluation of expressions in a function until the latter is called.
Therefore it is not an error to invoke `load_rc_config` as late as just before `run_rc_command` and still access man:rc.conf[5] variables from the method functions exported to `run_rc_command`.
This is because the method functions are to be called by `run_rc_command`, which is invoked _after_ `load_rc_config`.
====
&#10124; A warning will be emitted by `run_rc_command` if `rcvar` itself is set, but the indicated knob variable is unset.
If your [.filename]#rc.d# script is for the base system, you should add a default setting for the knob to [.filename]#/etc/defaults/rc.conf# and document it in man:rc.conf[5].
Otherwise it is your script that should provide a default setting for the knob.
The canonical approach to the latter case is shown in the example.
[NOTE]
====
You can make man:rc.subr[8] act as though the knob is set to `ON`, irrespective of its current setting, by prefixing the argument to the script with `one` or `force`, as in `onestart` or `forcestop`.
Keep in mind though that `force` has other dangerous effects we will touch upon below, while `one` just overrides the ON/OFF knob.
E.g., assume that `dummy_enable` is `OFF`.
The following command will run the `start` method in spite of the setting:
[source,shell]
....
# /etc/rc.d/dummy onestart
....
====
&#10125; Now the message to be shown at boot time is no longer hard-coded in the script.
It is specified by an man:rc.conf[5] variable named `dummy_msg`.
This is a trivial example of how man:rc.conf[5] variables can control an [.filename]#rc.d# script.
[IMPORTANT]
====
The names of all man:rc.conf[5] variables used exclusively by our script _must_ have the same prefix: `${name}_`.
For example: `dummy_mode`, `dummy_state_file`, and so on.
====
[NOTE]
====
While it is possible to use a shorter name internally, e.g., just `msg`, adding the unique prefix `${name}_` to all global names introduced by our script will save us from possible collisions with the man:rc.subr[8] namespace.
As a rule, [.filename]#rc.d# scripts of the base system need not provide defaults for their man:rc.conf[5] variables because the defaults should be set in [.filename]#/etc/defaults/rc.conf# instead.
On the other hand, [.filename]#rc.d# scripts for ports should provide the defaults as shown in the example.
====
&#10126; Here we use `dummy_msg` to actually control our script, i.e., to emit a variable message.
Use of a shell function is overkill here, since it only runs a single command; an equally valid alternative is:
[.programlisting]
....
start_cmd="echo \"$dummy_msg\""
....
[[rcng-daemon]]
== Startup and shutdown of a simple daemon
We said earlier that man:rc.subr[8] could provide default methods.
Obviously, such defaults cannot be too general.
They are suited for the common case of starting and shutting down a simple daemon program.
Let us assume now that we need to write an [.filename]#rc.d# script for such a daemon called `mumbled`.
Here it is:
[.programlisting]
....
#!/bin/sh
. /etc/rc.subr
name=mumbled
rcvar=mumbled_enable
command="/usr/sbin/${name}" <.>
load_rc_config $name
run_rc_command "$1"
....
Pleasingly simple, isn't it? Let us examine our little script.
The only new thing to note is as follows:
&#10122; The `command` variable is meaningful to man:rc.subr[8].
If it is set, man:rc.subr[8] will act according to the scenario of serving a conventional daemon.
In particular, the default methods will be provided for such arguments: `start`, `stop`, `restart`, `poll`, and `status`.
The daemon will be started by running `$command` with command-line flags specified by `$mumbled_flags`.
Thus all the input data for the default `start` method are available in the variables set by our script.
Unlike `start`, other methods may require additional information about the process started.
For instance, `stop` must know the PID of the process to terminate it.
In the present case, man:rc.subr[8] will scan through the list of all processes, looking for a process with its name equal to `procname`.
The latter is another variable meaningful to man:rc.subr[8], and its value defaults to that of `command`.
In other words, when we set `command`, `procname` is effectively set to the same value.
This enables our script to kill the daemon and to check if it is running in the first place.
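With such a script installed, enabling and configuring our hypothetical daemon from man:rc.conf[5] could look like this (the flag values are made up for illustration):
[.programlisting]
....
mumbled_enable="YES"
mumbled_flags="-d -c /usr/local/etc/mumbled.conf"
....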
[NOTE]
====
Some programs are in fact executable scripts.
The system runs such a script by starting its interpreter and passing the name of the script to it as a command-line argument.
This is reflected in the list of processes, which can confuse man:rc.subr[8].
You should additionally set `command_interpreter` to let man:rc.subr[8] know the actual name of the process if `$command` is a script.
For each [.filename]#rc.d# script, there is an optional man:rc.conf[5] variable that takes precedence over `command`.
Its name is constructed as follows: `${name}_program`, where `name` is the mandatory variable we discussed <<name-var, earlier>>.
E.g., in this case it will be `mumbled_program`.
It is man:rc.subr[8] that arranges `${name}_program` to override `command`.
Of course, man:sh[1] will permit you to set `${name}_program` from man:rc.conf[5] or the script itself even if `command` is unset.
In that case, the special properties of `${name}_program` are lost, and it becomes an ordinary variable your script can use for its own purposes.
However, the sole use of `${name}_program` is discouraged because using it together with `command` became an idiom of [.filename]#rc.d# scripting.
====
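As an illustration of the override described in the note above, an administrator could point the script at an alternative binary from man:rc.conf[5] without touching the script itself (the path is hypothetical):
[.programlisting]
....
mumbled_program="/usr/local/sbin/mumbled-debug"
....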
For more detailed information on default methods, refer to man:rc.subr[8].
[[rcng-daemon-adv]]
== Startup and shutdown of an advanced daemon
Let us add some meat onto the bones of the previous script and make it more complex and featureful.
The default methods can do a good job for us, but we may need some of their aspects tweaked.
Now we will learn how to tune the default methods to our needs.
[.programlisting]
....
#!/bin/sh
. /etc/rc.subr
name=mumbled
rcvar=mumbled_enable
command="/usr/sbin/${name}"
command_args="mock arguments > /dev/null 2>&1" <.>
pidfile="/var/run/${name}.pid" <.>
required_files="/etc/${name}.conf /usr/share/misc/${name}.rules" <.>
sig_reload="USR1" <.>
start_precmd="${name}_prestart" <.>
stop_postcmd="echo Bye-bye" <.>
extra_commands="reload plugh xyzzy" <.>
plugh_cmd="mumbled_plugh" <.>
xyzzy_cmd="echo 'Nothing happens.'"
mumbled_prestart()
{
	if checkyesno mumbled_smart; then <.>
		rc_flags="-o smart ${rc_flags}" <.>
	fi
	case "$mumbled_mode" in
	foo)
		rc_flags="-frotz ${rc_flags}"
		;;
	bar)
		rc_flags="-baz ${rc_flags}"
		;;
	*)
		warn "Invalid value for mumbled_mode" <.>
		return 1 <.>
		;;
	esac
	run_rc_command xyzzy <.>
	return 0
}
mumbled_plugh() <.>
{
	echo 'A hollow voice says "plugh".'
}
load_rc_config $name
run_rc_command "$1"
....
&#10122; Additional arguments to `$command` can be passed in `command_args`.
They will be added to the command line after `$mumbled_flags`.
Since the final command line is passed to `eval` for its actual execution, input and output redirections can be specified in `command_args`.
[NOTE]
====
_Never_ include dashed options, like `-X` or `--foo`, in `command_args`.
The contents of `command_args` will appear at the end of the final command line, hence they are likely to follow arguments present in `${name}_flags`; but most commands will not recognize dashed options after ordinary arguments.
A better way of passing additional options to `$command` is to add them to the beginning of `${name}_flags`.
Another way is to modify `rc_flags` <<rc-flags, as shown later>>.
====
&#10123; A good-mannered daemon should create a _pidfile_ so that its process can be found more easily and reliably.
The variable `pidfile`, if set, tells man:rc.subr[8] where it can find the pidfile for its default methods to use.
[NOTE]
====
In fact, man:rc.subr[8] will also use the pidfile to see if the daemon is already running before starting it.
This check can be skipped by using the `faststart` argument.
====
&#10124; If the daemon cannot run unless certain files exist, just list them in `required_files`, and man:rc.subr[8] will check that those files do exist before starting the daemon.
There also are `required_dirs` and `required_vars` for directories and environment variables, respectively.
They all are described in detail in man:rc.subr[8].
[NOTE]
====
The default method from man:rc.subr[8] can be forced to skip the prerequisite checks by using `forcestart` as the argument to the script.
====
&#10125; We can customize signals to send to the daemon in case they differ from the well-known ones.
In particular, `sig_reload` specifies the signal that makes the daemon reload its configuration; it is SIGHUP by default.
Another signal is sent to stop the daemon process;
the default is SIGTERM, but this can be changed by setting `sig_stop` appropriately.
[NOTE]
====
The signal names should be specified to man:rc.subr[8] without the `SIG` prefix, as shown in the example.
The FreeBSD version of man:kill[1] can recognize the `SIG` prefix, but the versions from other OS types may not.
====
&#10126;&#10127; Performing additional tasks before or after the default methods is easy.
For each command-argument supported by our script, we can define `argument_precmd` and `argument_postcmd`.
These man:sh[1] commands are invoked before and after the respective method, as is evident from their names.
[NOTE]
====
Overriding a default method with a custom `argument_cmd` still does not prevent us from making use of `argument_precmd` or `argument_postcmd` if we need to.
In particular, the former is good for checking custom, sophisticated conditions that should be met before performing the command itself.
Using `argument_precmd` along with `argument_cmd` lets us logically separate the checks from the action.
Do not forget that you can cram any valid man:sh[1] expressions into the methods, pre-, and post-commands you define.
Just invoking a function that does the real job is good style in most cases, but never let style limit your understanding of what is going on behind the curtain.
====
&#10128; If we would like to implement custom arguments, which can also be thought of as _commands_ to our script, we need to list them in `extra_commands` and provide methods to handle them.
[NOTE]
====
The `reload` command is special. On the one hand, it has a preset method in man:rc.subr[8].
On the other hand, `reload` is not offered by default.
The reason is that not all daemons use the same reload mechanism and some have nothing to reload at all.
So we need to ask explicitly that the builtin functionality be provided.
We can do so via `extra_commands`.
What do we get from the default method for `reload`? Quite often daemons reload their configuration upon reception of a signal - typically, SIGHUP.
Therefore man:rc.subr[8] attempts to reload the daemon by sending a signal to it.
The signal is preset to SIGHUP but can be customized via `sig_reload` if necessary.
====
&#10129;&#9454; Our script supports two non-standard commands, `plugh` and `xyzzy`.
We saw them listed in `extra_commands`, and now it is time to provide methods for them.
The method for `xyzzy` is just inlined while that for `plugh` is implemented as the `mumbled_plugh` function.
Non-standard commands are not invoked during startup or shutdown.
Usually they are for the system admin's convenience.
They can also be used from other subsystems, e.g., man:devd[8] if specified in man:devd.conf[5].
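For instance, a hedged sketch of a man:devd.conf[5] entry that runs our `plugh` command when a hypothetical network interface comes up might read:
[.programlisting]
....
notify 100 {
	match "system"		"IFNET";
	match "subsystem"	"em0";
	match "type"		"LINK_UP";
	action "/etc/rc.d/mumbled plugh";
};
....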
The full list of available commands can be found in the usage line printed by man:rc.subr[8] when the script is invoked without arguments.
For example, here is the usage line from the script under study:
[source,shell]
....
# /etc/rc.d/mumbled
Usage: /etc/rc.d/mumbled [fast|force|one](start|stop|restart|rcvar|reload|plugh|xyzzy|status|poll)
....
&#9453; A script can invoke its own standard or non-standard commands if needed.
This may look similar to calling functions, but we know that commands and shell functions are not always the same thing.
For instance, `xyzzy` is not implemented as a function here.
In addition, there can be a pre-command and post-command, which should be invoked in the proper order.
So the proper way for a script to run its own command is by means of man:rc.subr[8], as shown in the example.
&#10130; A handy function named `checkyesno` is provided by man:rc.subr[8].
It takes a variable name as its argument and returns a zero exit code if and only if the variable is set to `YES`, or `TRUE`, or `ON`, or `1`, case insensitive;
a non-zero exit code is returned otherwise.
In the latter case, the function tests the variable for being set to `NO`, `FALSE`, `OFF`, or `0`, case insensitive;
it prints a warning message if the variable contains anything else, i.e., junk.
Keep in mind that for man:sh[1] a zero exit code means true and a non-zero exit code means false.
[IMPORTANT]
====
The `checkyesno` function takes a __variable name__.
Do not pass the expanded _value_ of a variable to it; it will not work as expected.
The following is the correct usage of `checkyesno`:
[.programlisting]
....
if checkyesno mumbled_enable; then
	foo
fi
....
On the contrary, calling `checkyesno` as shown below will not work - at least not as expected:
[.programlisting]
....
if checkyesno "${mumbled_enable}"; then
foo
fi
....
====
&#10131; [[rc-flags]]We can affect the flags to be passed to `$command` by modifying `rc_flags` in `$start_precmd`.
&#9451; In certain cases we may need to emit an important message that should go to `syslog` as well.
This can be done easily with the following man:rc.subr[8] functions: `debug`, `info`, `warn`, and `err`.
The latter function then exits the script with the code specified.
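For example, a pre-command might bail out like this (a small sketch; the directory variable is hypothetical):
[.programlisting]
....
if [ ! -d "${mumbled_spooldir}" ]; then
	err 1 "Spool directory ${mumbled_spooldir} does not exist"
fi
....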
&#9452; The exit codes from methods and their pre-commands are not just ignored by default.
If `argument_precmd` returns a non-zero exit code, the main method will not be performed.
In turn, `argument_postcmd` will not be invoked unless the main method returns a zero exit code.
[NOTE]
====
However, man:rc.subr[8] can be instructed from the command line to ignore those exit codes and invoke all commands anyway by prefixing an argument with `force`, as in `forcestart`.
====
[[rcng-hookup]]
== Connecting a script to the rc.d framework
After a script has been written, it needs to be integrated into [.filename]#rc.d#.
The crucial step is to install the script in [.filename]#/etc/rc.d# (for the base system) or [.filename]#/usr/local/etc/rc.d# (for ports).
Both [.filename]#bsd.prog.mk# and [.filename]#bsd.port.mk# provide convenient hooks for that, and usually you do not have to worry about the proper ownership and mode.
System scripts should be installed from [.filename]#src/etc/rc.d# through the [.filename]#Makefile# found there.
Port scripts can be installed using `USE_RC_SUBR` as described link:{porters-handbook}#rc-scripts[in the Porter's Handbook].
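For a port, the hookup can be as small as one line in the port's [.filename]#Makefile# (a sketch; `mumbled` is our hypothetical service):
[.programlisting]
....
USE_RC_SUBR=	mumbled
....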
However, we should consider beforehand the place of our script in the system startup sequence.
The service handled by our script is likely to depend on other services.
For instance, a network daemon cannot function without the network interfaces and routing up and running.
Even if a service seems to demand nothing, it can hardly start before the basic filesystems have been checked and mounted.
We mentioned man:rcorder[8] already.
Now it is time to have a close look at it.
In a nutshell, man:rcorder[8] takes a set of files, examines their contents, and prints a dependency-ordered list of files from the set to `stdout`.
The point is to keep dependency information _inside_ the files so that each file can speak for itself only.
A file can specify the following information:
* the names of the "conditions" (which means services to us) it __provides__;
* the names of the "conditions" it __requires__;
* the names of the "conditions" this file should run __before__;
* additional _keywords_ that can be used to select a subset from the whole set of files (man:rcorder[8] can be instructed via options to include or omit the files having particular keywords listed.)
It is no surprise that man:rcorder[8] can handle only text files with a syntax close to that of man:sh[1].
That is, special lines understood by man:rcorder[8] look like man:sh[1] comments.
The syntax of such special lines is rather rigid to simplify their processing.
See man:rcorder[8] for details.
Besides using man:rcorder[8] special lines, a script can insist on its dependency upon another service by just starting it forcibly.
This can be needed when the other service is optional and will not start by itself because the system admin has disabled it mistakenly in man:rc.conf[5].
With this general knowledge in mind, let us consider the simple daemon script enhanced with dependency stuff:
[.programlisting]
....
#!/bin/sh
# PROVIDE: mumbled oldmumble <.>
# REQUIRE: DAEMON cleanvar frotz <.>
# BEFORE: LOGIN <.>
# KEYWORD: nojail shutdown <.>
. /etc/rc.subr
name=mumbled
rcvar=mumbled_enable
command="/usr/sbin/${name}"
start_precmd="${name}_prestart"
mumbled_prestart()
{
	if ! checkyesno frotz_enable && \
	   ! /etc/rc.d/frotz forcestatus 1>/dev/null 2>&1; then
		force_depend frotz || return 1 <.>
	fi
	return 0
}
load_rc_config $name
run_rc_command "$1"
....
As before, detailed analysis follows:
&#10122; That line declares the names of "conditions" our script provides.
Now other scripts can record a dependency on our script by those names.
[NOTE]
====
Usually a script specifies a single condition provided.
However, nothing prevents us from listing several conditions there, e.g., for compatibility reasons.
In any case, the name of the main, or the only, `PROVIDE:` condition should be the same as `${name}`.
====
&#10123;&#10124; So our script indicates which "conditions" provided by other scripts it depends on.
According to the lines, our script asks man:rcorder[8] to put it after the script(s) providing [.filename]#DAEMON# and [.filename]#cleanvar#, but before the one providing [.filename]#LOGIN#.
[NOTE]
====
The `BEFORE:` line should not be abused to work around an incomplete dependency list in the other script.
The appropriate case for using `BEFORE:` is when the other script does not care about ours, but our script can do its task better if run before the other one.
A typical real-life example is the network interfaces vs. the firewall: While the interfaces do not depend on the firewall in doing their job, the system security will benefit from the firewall being ready before there is any network traffic.
Besides conditions corresponding to a single service each, there are meta-conditions and their "placeholder" scripts used to ensure that certain groups of operations are performed before others.
These are denoted by [.filename]#UPPERCASE# names.
Their list and purposes can be found in man:rc[8].
Keep in mind that putting a service name in the `REQUIRE:` line does not guarantee that the service will actually be running by the time our script starts.
The required service may fail to start or just be disabled in man:rc.conf[5].
Obviously, man:rcorder[8] cannot track such details, and man:rc[8] will not do that either.
Consequently, the application started by our script should be able to cope with any required services being unavailable.
In certain cases, we can help it as discussed <<forcedep, below>>.
====
[[keywords]]&#10125; As we remember from the above text, man:rcorder[8] keywords can be used to select or leave out some scripts.
Namely, any man:rcorder[8] consumer can specify, through the `-k` and `-s` options, which keywords are on the "keep list" and "skip list", respectively.
From all the files to be dependency sorted, man:rcorder[8] will pick only those having a keyword from the keep list (unless empty) and not having a keyword from the skip list.
In FreeBSD, man:rcorder[8] is used by [.filename]#/etc/rc# and [.filename]#/etc/rc.shutdown#.
These two scripts define the standard list of FreeBSD [.filename]#rc.d# keywords, such as `nojail`, `nostart`, and `shutdown`; their exact meanings are documented in man:rc[8].
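For instance, a command along these lines would compute a shutdown order similar in spirit to what [.filename]#/etc/rc.shutdown# does (a simplified sketch, not the exact invocation used by the system):
[source,shell]
....
# rcorder -k shutdown /etc/rc.d/* 2>/dev/null
....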
[[forcedep]]&#10126; To begin with, `force_depend` should be used with much care.
It is generally better to revise the hierarchy of configuration variables for your [.filename]#rc.d# scripts if they are interdependent.
If you still cannot do without `force_depend`, the example offers an idiom of how to invoke it conditionally.
In the example, our `mumbled` daemon requires that another one, `frotz`, be started in advance.
However, `frotz` is optional, too; and man:rcorder[8] knows nothing about such details.
Fortunately, our script has access to all man:rc.conf[5] variables.
If `frotz_enable` is true, we hope for the best and rely on [.filename]#rc.d# to have started `frotz`.
Otherwise we forcibly check the status of `frotz`.
Finally, we enforce our dependency on `frotz` if it is found to be not running.
A warning message will be emitted by `force_depend` because it should be invoked only if a misconfiguration has been detected.
[[rcng-args]]
== Giving more flexibility to an rc.d script
When invoked during startup or shutdown, an [.filename]#rc.d# script is supposed to act on the entire subsystem it is responsible for.
E.g., [.filename]#/etc/rc.d/netif# should start or stop all network interfaces described by man:rc.conf[5].
Either task can be uniquely indicated by a single command argument such as `start` or `stop`.
Between startup and shutdown, [.filename]#rc.d# scripts help the admin to control the running system, and it is when the need for more flexibility and precision arises.
For instance, the admin may want to add the settings of a new network interface to man:rc.conf[5] and then to start it without interfering with the operation of the existing interfaces.
Next time the admin may need to shut down a single network interface.
In the spirit of the command line, the respective [.filename]#rc.d# script calls for an extra argument, the interface name.
Fortunately, man:rc.subr[8] allows for passing any number of arguments to the script's methods (within the system limits).
Due to that, the changes in the script itself can be minimal.
How can man:rc.subr[8] gain access to the extra command-line arguments?
Should it just grab them directly? Not by any means.
Firstly, an man:sh[1] function has no access to the positional parameters of its caller, but man:rc.subr[8] is just a sack of such functions.
Secondly, the good manner of [.filename]#rc.d# dictates that it is for the main script to decide which arguments are to be passed to its methods.
So the approach adopted by man:rc.subr[8] is as follows: `run_rc_command` passes on all its arguments but the first one to the respective method verbatim.
The first, omitted, argument is the name of the method itself: `start`, `stop`, etc.
It will be shifted out by `run_rc_command`, so what is `$2` in the original command line will be presented as `$1` to the method, and so on.
To illustrate this opportunity, let us modify the primitive dummy script so that its messages depend on the additional arguments supplied.
Here we go:
[.programlisting]
....
#!/bin/sh
. /etc/rc.subr
name="dummy"
start_cmd="${name}_start"
stop_cmd=":"
kiss_cmd="${name}_kiss"
extra_commands="kiss"
dummy_start()
{
	if [ $# -gt 0 ]; then <.>
		echo "Greeting message: $*"
	else
		echo "Nothing started."
	fi
}
dummy_kiss()
{
	echo -n "A ghost gives you a kiss"
	if [ $# -gt 0 ]; then <.>
		echo -n " and whispers: $*"
	fi
	case "$*" in
	*[.!?])
		echo
		;;
	*)
		echo .
		;;
	esac
}
load_rc_config $name
run_rc_command "$@" <.>
....
What essential changes can we notice in the script?
&#10122; All arguments you type after `start` can end up as positional parameters to the respective method.
We can use them in any way according to our task, skills, and fancy.
In the current example, we just pass all of them to man:echo[1] as one string in the next line - note `$*` within the double quotes.
Here is how the script can be invoked now:
[source,shell]
....
# /etc/rc.d/dummy start
Nothing started.
# /etc/rc.d/dummy start Hello world!
Greeting message: Hello world!
....
&#10123; The same applies to any method our script provides, not only to a standard one.
We have added a custom method named `kiss`, and it can take advantage of the extra arguments no less than `start` does. E.g.:
[source,shell]
....
# /etc/rc.d/dummy kiss
A ghost gives you a kiss.
# /etc/rc.d/dummy kiss Once I was Etaoin Shrdlu...
A ghost gives you a kiss and whispers: Once I was Etaoin Shrdlu...
....
&#10124; If we want just to pass all extra arguments to any method, we can merely substitute `"$@"` for `"$1"` in the last line of our script, where we invoke `run_rc_command`.
[IMPORTANT]
====
An man:sh[1] programmer ought to understand the subtle difference between `$*` and `$@` as the ways to designate all positional parameters.
For its in-depth discussion, refer to a good handbook on man:sh[1] scripting.
_Do not_ use the expressions until you fully understand them because their misuse will result in buggy and insecure scripts.
====
[NOTE]
====
Currently `run_rc_command` may have a bug that prevents it from keeping the original boundaries between arguments.
That is, arguments with embedded whitespace may not be processed correctly.
The bug stems from `$*` misuse.
====
[[rcng-furthur]]
== Further reading
[[lukem]]http://www.mewburn.net/luke/papers/rc.d.pdf[The original article by Luke Mewburn] offers a general overview of [.filename]#rc.d# and detailed rationale for its design decisions.
It provides insight on the whole [.filename]#rc.d# framework and its place in a modern BSD operating system.
[[manpages]]The manual pages man:rc[8], man:rc.subr[8], and man:rcorder[8] document the [.filename]#rc.d# components in great detail.
You cannot fully use the [.filename]#rc.d# power without studying the manual pages and referring to them while writing your own scripts.
The major source of working, real-life examples is [.filename]#/etc/rc.d# in a live system.
Its contents are easy and pleasant to read because most rough corners are hidden deep in man:rc.subr[8].
Keep in mind though that the [.filename]#/etc/rc.d# scripts were not written by angels, so they might suffer from bugs and suboptimal design decisions.
Now you can improve them!
diff --git a/documentation/content/en/articles/releng/_index.adoc b/documentation/content/en/articles/releng/_index.adoc
index 51a1b647c8..665c3cea45 100644
--- a/documentation/content/en/articles/releng/_index.adoc
+++ b/documentation/content/en/articles/releng/_index.adoc
@@ -1,443 +1,444 @@
---
title: FreeBSD Release Engineering
authors:
- author: Murray Stokely
email: murray@FreeBSD.org
webpage: https://people.FreeBSD.org/~murray/
+description: This paper describes the approach used by the FreeBSD release engineering team to make production quality releases of the FreeBSD Operating System
trademarks: ["freebsd", "intel", "general"]
---
= FreeBSD Release Engineering
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:xrefstyle: full
include::shared/releases.adoc[]
include::shared/authors.adoc[]
include::shared/en/teams.adoc[lines=16..-1]
include::shared/en/mailing-lists.adoc[]
include::shared/en/urls.adoc[]
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../images/articles/releng/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/articles/releng/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/articles/releng/
endif::[]
[.abstract-title]
Abstract
[NOTE]
====
This document is outdated and does not accurately describe the current release procedures of the FreeBSD Release Engineering team.
It is retained for historical purposes.
The current procedures used by the FreeBSD Release Engineering team are available in the link:{freebsd-releng}[FreeBSD Release Engineering] article.
====
This paper describes the approach used by the FreeBSD release engineering team to make production quality releases of the FreeBSD Operating System.
It details the methodology used for the official FreeBSD releases and describes the tools available for those interested in producing customized FreeBSD releases for corporate rollouts or commercial productization.
'''
toc::[]
[[introduction]]
== Introduction
The development of FreeBSD is a very open process.
FreeBSD is comprised of contributions from thousands of people around the world.
The FreeBSD Project provides Subversion footnote:[Subversion, http://subversion.apache.org] access to the general public so that others can have access to log messages, diffs (patches) between development branches, and other productivity enhancements that formal source code management provides.
This has been a huge help in attracting more talented developers to FreeBSD.
However, I think everyone would agree that chaos would soon manifest if write access to the main repository was opened up to everyone on the Internet.
Therefore only a "select" group of nearly 300 people are given write access to the Subversion repository.
These link:{contributors}#staff-committers[FreeBSD committers]footnote:[link:{contributors}#staff-committers[FreeBSD committers]] are usually the people who do the bulk of FreeBSD development.
An elected link:https://www.FreeBSD.org/administration/#t-core[Core Team]footnote:[link:https://www.FreeBSD.org/administration/#t-core[FreeBSD Core Team]] of developers provide some level of direction over the project.
The rapid pace of `FreeBSD` development makes the main development branch unsuitable for everyday use by the general public.
In particular, stabilizing efforts are required for polishing the development system into a production quality release.
To solve this conflict, development continues on several parallel tracks.
The main development branch is the _HEAD_ or _trunk_ of our Subversion tree, known as "FreeBSD-CURRENT" or "-CURRENT" for short.
A set of more stable branches are maintained, known as "FreeBSD-STABLE" or "-STABLE" for short.
All branches live in a master Subversion repository maintained by the FreeBSD Project.
FreeBSD-CURRENT is the "bleeding-edge" of FreeBSD development where all new changes first enter the system.
FreeBSD-STABLE is the development branch from which major releases are made.
Changes go into this branch at a different pace, and with the general assumption that they have first gone into FreeBSD-CURRENT and have been thoroughly tested by our user community.
The term _stable_ in the name of the branch refers to the presumed Application Binary Interface stability, which is promised by the project.
This means that a user application compiled on an older version of the system from the same branch works on a newer system from the same branch.
The ABI stability has improved greatly compared to previous releases.
In most cases, binaries from the older _STABLE_ systems run unmodified on newer systems, including __HEAD__, assuming that the system management interfaces are not used.
In the interim period between releases, weekly snapshots are built automatically by the FreeBSD Project build machines and made available for download from `ftp://ftp.FreeBSD.org/pub/FreeBSD/snapshots/`.
The widespread availability of binary release snapshots, and the tendency of our user community to keep up with -STABLE development with Subversion and "`make buildworld`" footnote:[link:{handbook}#makeworld[Rebuilding world]] helps to keep FreeBSD-STABLE in a very reliable condition even before the quality assurance activities ramp up pending a major release.
In addition to installation ISO snapshots, weekly virtual machine images are also provided for use with VirtualBox, qemu, or other popular emulation software.
The virtual machine images can be downloaded from `ftp://ftp.FreeBSD.org/pub/FreeBSD/snapshots/VM-IMAGES/`.
The virtual machine images are approximately 150MB man:xz[1] compressed, and contain a 10GB sparse filesystem when attached to a virtual machine.
Bug reports and feature requests are continuously submitted by users throughout the release cycle.
Problem reports are entered into our Bugzilla database through the web interface provided at https://www.freebsd.org/support/bugreports/[https://www.freebsd.org/support/bugreports/].
To service our most conservative users, individual release branches were introduced with FreeBSD 4.3.
These release branches are created shortly before a final release is made.
After the release goes out, only the most critical security fixes and additions are merged onto the release branch.
In addition to source updates via Subversion, binary patchkits are available to keep systems on the _releng/X.Y_ branches updated.
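For example, on a system tracking a release branch, the binary update mechanism is typically driven with man:freebsd-update[8] along these lines (a minimal illustration):
[source,shell]
....
# freebsd-update fetch
# freebsd-update install
....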
=== What This Article Describes
The following sections of this article describe:
<<release-proc>>::
The different phases of the release engineering process leading up to the actual system build.
<<release-build>>::
The actual build process.
<<extensibility>>::
How the base release may be extended by third parties.
<<lessons-learned>>::
Some of the lessons learned through the release of FreeBSD 4.4.
<<future>>::
Future directions of development.
[[release-proc]]
== Release Process
New releases of FreeBSD are released from the -STABLE branch at approximately four-month intervals.
The FreeBSD release process begins to ramp up 70-80 days before the anticipated release date when the release engineer sends an email to the development mailing lists to remind developers that they only have 15 days to integrate new changes before the code freeze.
During this time, many developers perform what have become known as "MFC sweeps".
MFC stands for "Merge From CURRENT" and it describes the process of merging a tested change from our -CURRENT development branch to our -STABLE branch.
Project policy requires any change to be first applied to trunk, and merged to the -STABLE branches after sufficient external testing was done by -CURRENT users (developers are expected to extensively test the change before committing to -CURRENT, but it is impossible for a person to exercise all usages of the general-purpose operating system).
The minimal MFC period is 3 days, which is typically used only for trivial or critical bugfixes.
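Mechanically, an MFC is just a Subversion merge from head into a -STABLE checkout, roughly like this (the revision number is hypothetical):
[source,shell]
....
# cd stable/9
# svn merge -c r254321 ^/head .
# svn commit
....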
=== Code Review
Sixty days before the anticipated release, the source repository enters a "code freeze".
During this time, all commits to the -STABLE branch must be approved by `{re}`.
The approval process is technically enforced by a pre-commit hook.
The kinds of changes that are allowed during this period include:
* Bug fixes.
* Documentation updates.
* Security-related fixes of any kind.
* Minor changes to device drivers, such as adding new Device IDs.
* Driver updates from the vendors.
* Any additional change that the release engineering team feels is justified, given the potential risk.
Shortly after the code freeze is started, a _BETA1_ image is built and released for widespread testing.
During the code freeze, at least one beta image or release candidate is released every two weeks until the final release is ready.
During the days preceding the final release, the release engineering team is in constant communication with the security-officer team, the documentation maintainers, and the port maintainers to ensure that all of the different components required for a successful release are available.
Once the quality of the BETA images is satisfactory and no large or potentially risky changes are planned, the release branch is created and _Release Candidate_ (RC) images are built from the release branch instead of BETA images from the STABLE branch.
Also, the freeze on the STABLE branch is lifted and the release branch enters a "hard code freeze" where it becomes much harder to justify new changes to the system unless a serious bug-fix or security issue is involved.
=== Final Release Checklist
When several BETA images have been made available for widespread testing and all major issues have been resolved, the final release "polishing" can begin.
[[rel-branch]]
==== Creating the Release Branch
[NOTE]
====
In all examples below, `$FSVN` refers to the location of the FreeBSD Subversion repository, `svn+ssh://svn.FreeBSD.org/base/`.
====
The layout of FreeBSD branches in Subversion is described in the link:{committers-guide}#subversion-primer-base-layout[Committer's Guide].
The first step in creating a branch is to identify the revision of the `stable/_X_` sources that you want to branch _from_.
[source,shell]
....
# svn log -v $FSVN/stable/9
....
The next step is to create the _release branch_:
[source,shell]
....
# svn cp $FSVN/stable/9@REVISION $FSVN/releng/9.2
....
This branch can be checked out:
[source,shell]
....
# svn co $FSVN/releng/9.2 src
....
[NOTE]
====
Creating the `releng` branch and `release` tags is done by the link:https://www.FreeBSD.org/administration/#t-re[Release Engineering Team].
====
image::branches-head.png[FreeBSD Development Branch]
image::branches-releng3.png[FreeBSD 3.x STABLE Branch]
image::branches-releng4.png[FreeBSD 4.x STABLE Branch]
image::branches-releng5.png[FreeBSD 5.x STABLE Branch]
image::branches-releng6.png[FreeBSD 6.x STABLE Branch]
image::branches-releng7.png[FreeBSD 7.x STABLE Branch]
image::branches-releng8.png[FreeBSD 8.x STABLE Branch]
image::branches-releng9.png[FreeBSD 9.x STABLE Branch]
[[versionbump]]
==== Bumping up the Version Number
Before the final release can be tagged, built, and released, the following files need to be modified to reflect the correct version of FreeBSD:
* [.filename]#doc/en_US.ISO8859-1/books/handbook/mirrors/chapter.xml#
* [.filename]#doc/en_US.ISO8859-1/books/porters-handbook/book.xml#
* [.filename]#doc/en_US.ISO8859-1/htdocs/cgi/ports.cgi#
* [.filename]#ports/Tools/scripts/release/config#
* [.filename]#doc/shared/xml/freebsd.ent#
* [.filename]#src/Makefile.inc1#
* [.filename]#src/UPDATING#
* [.filename]#src/gnu/usr.bin/groff/tmac/mdoc.local#
* [.filename]#src/release/Makefile#
* [.filename]#src/release/doc/en_US.ISO8859-1/shared/xml/release.dsl#
* [.filename]#src/release/doc/shared/examples/Makefile.relnotesng#
* [.filename]#src/release/doc/shared/xml/release.ent#
* [.filename]#src/sys/conf/newvers.sh#
* [.filename]#src/sys/sys/param.h#
* [.filename]#src/usr.sbin/pkg_install/add/main.c#
* [.filename]#doc/en_US.ISO8859-1/htdocs/search/opensearch/man.xml#
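To give a flavor of these edits, the version bump in [.filename]#src/sys/conf/newvers.sh# and [.filename]#src/sys/sys/param.h# typically amounts to something like the following (the values shown are merely illustrative):
[.programlisting]
....
# sys/conf/newvers.sh
TYPE="FreeBSD"
REVISION="9.2"
BRANCH="RELEASE"
/* sys/sys/param.h */
#define __FreeBSD_version 902000	/* illustrative value */
....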
The release notes and errata files also need to be adjusted for the new release (on the release branch) and truncated appropriately (on the stable/current branch):
* [.filename]#src/release/doc/en_US.ISO8859-1/relnotes/common/new.xml#
* [.filename]#src/release/doc/en_US.ISO8859-1/errata/article.xml#
Sysinstall should be updated to note the number of available ports and the amount of disk space required for the Ports Collection.
footnote:[FreeBSD Ports Collection https://www.FreeBSD.org/ports]
This information is currently kept in [.filename]#src/usr.sbin/bsdinstall/dist.c#.
After the release has been built, a number of files should be updated to announce the release to the world.
These files are relative to `head/` within the `doc/` subversion tree.
* [.filename]#share/images/articles/releng/branches-relengX.pic#
* [.filename]#head/shared/xml/release.ent#
* [.filename]#en_US.ISO8859-1/htdocs/releases/*#
* [.filename]#en_US.ISO8859-1/htdocs/releng/index.xml#
* [.filename]#share/xml/news.xml#
Additionally, update the "BSD Family Tree" file:
* [.filename]#src/shared/misc/bsd-family-tree#
==== Creating the Release Tag
When the final release is ready, the following command will create the `release/9.2.0` tag.
[source,shell]
....
# svn cp $FSVN/releng/9.2 $FSVN/release/9.2.0
....
The Documentation and Ports managers are responsible for tagging their respective trees with the `tags/RELEASE_9_2_0` tag.
When the Subversion `svn cp` command is used to create a __release tag__, this identifies the source at a specific point in time.
By creating tags, we ensure that future release builders will always be able to use the exact same source we used to create the official FreeBSD Project releases.
[[release-build]]
== Release Building
FreeBSD "releases" can be built by anyone with a fast machine and access to a source repository.
(That should be everyone, since we offer Subversion access! See the link:{handbook}#svn[Subversion section in the Handbook] for details.)
The _only_ special requirement is that the man:md[4] device must be available.
If the device is not loaded into your kernel, then the kernel module should be automatically loaded when man:mdconfig[8] is executed during the boot media creation phase.
All of the tools necessary to build a release are available from the Subversion repository in [.filename]#src/release#.
These tools aim to provide a consistent way to build FreeBSD releases.
A complete release can actually be built with only a single command, including the creation of ISO images suitable for burning to CDROM or DVD, and an FTP install directory.
man:release[7] fully documents the `src/release/generate-release.sh` script which is used to build a release.
`generate-release.sh` is a wrapper around the Makefile target: `make release`.
=== Building a Release
man:release[7] documents the exact commands required to build a FreeBSD release.
The following sequence of commands can build a 9.2.0 release:
[source,shell]
....
# cd /usr/src/release
# sh generate-release.sh release/9.2.0 /local3/release
....
After running these commands, all prepared release files are available in the [.filename]#/local3/release/R# directory.
The release [.filename]#Makefile# can be broken down into several distinct steps.
* Creation of a sanitized system environment in a separate directory hierarchy with "`make installworld`".
* Checkout from Subversion of a clean version of the system source, documentation, and ports into the release build hierarchy.
* Population of [.filename]#/etc# and [.filename]#/dev# in the chrooted environment.
* chroot into the release build hierarchy, to make it harder for the outside environment to taint this build.
* `make world` in the chrooted environment.
* Build of Kerberos-related binaries.
* Build [.filename]#GENERIC# kernel.
* Creation of a staging directory tree where the binary distributions will be built and packaged.
* Build and installation of the documentation toolchain needed to convert the documentation source (SGML) into HTML and text documents that will accompany the release.
* Build and installation of the actual documentation (user manuals, tutorials, release notes, hardware compatibility lists, and so on.)
* Package up distribution tarballs of the binaries and sources.
* Create FTP installation hierarchy.
* _(optionally)_ Create ISO images for CDROM/DVD media.
For more information about the release build infrastructure, please see man:release[7].
[NOTE]
====
It is important to remove any site-specific settings from [.filename]#/etc/make.conf#.
For example, it would be unwise to distribute binaries that were built on a system with `CPUTYPE` set to a specific processor.
====
=== Contributed Software ("ports")
The https://www.FreeBSD.org/ports[FreeBSD Ports collection] is a collection of over {numports} third-party software packages available for FreeBSD.
The `{portmgr}` is responsible for maintaining a consistent ports tree that can be used to create the binary packages that accompany official FreeBSD releases.
=== Release ISOs
Starting with FreeBSD 4.4, the FreeBSD Project decided to release all four ISO images that were previously sold on the _BSDi/Wind River Systems/FreeBSD Mall_ "official" CDROM distributions.
Each of the four discs must contain a [.filename]#README.TXT# file that explains the contents of the disc, a [.filename]#CDROM.INF# file that provides meta-data for the disc so that man:bsdinstall[8] can validate and use the contents, and a [.filename]#filename.txt# file that provides a manifest for the disc.
This _manifest_ can be created with a simple command:
[source,shell]
....
/stage/cdrom# find . -type f | sed -e 's/^\.\///' | sort > filename.txt
....
The specific requirements of each CD are outlined below.
==== Disc 1
The first disc is almost completely created by `make release`.
The only changes that should be made to the [.filename]#disc1# directory are the addition of a [.filename]#tools# directory, and as many popular third party software packages as will fit on the disc.
The [.filename]#tools# directory contains software that allows users to create installation floppies from other operating systems.
This disc should be made bootable so that users of modern PCs do not need to create installation floppy disks.
If a custom kernel of FreeBSD is to be included, then man:bsdinstall[8] and man:release[7] must be updated to include installation instructions.
The relevant code is contained in [.filename]#src/release# and [.filename]#src/usr.sbin/bsdinstall#.
Specifically, the file [.filename]#src/release/Makefile#, and [.filename]#dist.c#, [.filename]#dist.h#, [.filename]#menus.c#, [.filename]#install.c#, and [.filename]#Makefile# will need to be updated under [.filename]#src/usr.sbin/bsdinstall#.
Optionally, you may choose to update [.filename]#bsdinstall.8#.
==== Disc 2
The second disc is also largely created by `make release`.
This disc contains a "live filesystem" that can be used from man:bsdinstall[8] to troubleshoot a FreeBSD installation.
This disc should be bootable and should also contain a compressed copy of the CVS repository in the [.filename]#CVSROOT# directory and commercial software demos in the [.filename]#commerce# directory.
==== Multi-volume Support
Sysinstall supports multiple volume package installations.
This requires that each disc have an [.filename]#INDEX# file containing all of the packages on all volumes of a set, along with an extra field that indicates which volume that particular package is on.
Each volume in the set must also have the `CD_VOLUME` variable set in the [.filename]#cdrom.inf# file so that bsdinstall can tell which volume is which.
When a user attempts to install a package that is not on the current disc, bsdinstall will prompt the user to insert the appropriate one.
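A [.filename]#cdrom.inf# for the second disc of a set might therefore contain something like the following (a hedged sketch; the exact fields on official media may differ):
[.programlisting]
....
CD_VERSION = 9.2-RELEASE
CD_VOLUME = 2
....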
[[distribution]]
== Distribution
[[dist-ftp]]
=== FTP Sites
When the release has been thoroughly tested and packaged for distribution, the master FTP site must be updated.
The official FreeBSD public FTP sites are all mirrors of a master server that is open only to other FTP sites.
This site is known as `ftp-master`.
When the release is ready, the following files must be modified on `ftp-master`:
[.filename]#/pub/FreeBSD/releases/arch/X.Y-RELEASE/#::
The installable FTP directory as output from `make release`.
[.filename]#/pub/FreeBSD/ports/arch/packages-X.Y-release/#::
The complete package build for this release.
[.filename]#/pub/FreeBSD/releases/arch/X.Y-RELEASE/tools#::
A symlink to [.filename]#../../../tools#.
[.filename]#/pub/FreeBSD/releases/arch/X.Y-RELEASE/packages#::
A symlink to [.filename]#../../../ports/arch/packages-X.Y-release#.
[.filename]#/pub/FreeBSD/releases/arch/ISO-IMAGES/X.Y/X.Y-RELEASE-arch-*.iso#::
The ISO images. The "*" is [.filename]#disc1#, [.filename]#disc2#, etc.
If there is a [.filename]#disc1# and an alternative first installation CD (for example, a stripped-down install with no windowing system), there may be a [.filename]#mini# as well.
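For example, the [.filename]#tools# and [.filename]#packages# symlinks listed above could be created on `ftp-master` roughly as follows, with _arch_ and _X.Y_ standing in for the real architecture and version:
[source,shell]
....
# cd /pub/FreeBSD/releases/arch/X.Y-RELEASE
# ln -s ../../../tools tools
# ln -s ../../../ports/arch/packages-X.Y-release packages
....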
For more information about the distribution mirror architecture of the FreeBSD FTP sites, please see the link:{hubs}[Mirroring FreeBSD] article.
Depending on whether a package set was loaded at the same time, it may take anywhere from several hours to two days after updating `ftp-master` before a majority of the Tier-1 FTP sites have the new software.
It is imperative that the release engineers coordinate with the {mirror-announce} before announcing the general availability of new software on the FTP sites.
Ideally the release package set should be loaded at least four days prior to release day.
The release bits should be loaded between 24 and 48 hours before the planned release time with "other" file permissions turned off.
This will allow the mirror sites to download it but the general public will not be able to download it from the mirror sites.
Mail should be sent to {mirror-announce} at the time the release bits get posted saying the release has been staged and giving the time that the mirror sites should begin allowing access.
Be sure to include a time zone with the time; for example, make it relative to GMT.
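One possible way to stage the bits with "other" permissions turned off is sketched below; the actual commands used on `ftp-master` may differ:
[source,shell]
....
# chmod -R o-rwx /pub/FreeBSD/releases/arch/X.Y-RELEASE
....
At the announced time, read access for everyone can be restored:
[source,shell]
....
# chmod -R o+rX /pub/FreeBSD/releases/arch/X.Y-RELEASE
....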
[[dist-cdrom]]
=== CD-ROM Replication
Coming soon: Tips for sending FreeBSD ISOs to a replicator and quality assurance measures to be taken.
[[extensibility]]
== Extensibility
Although FreeBSD forms a complete operating system, there is nothing that forces you to use the system exactly as we have packaged it up for distribution.
We have tried to design the system to be as extensible as possible so that it can serve as a platform that other commercial products can be built on top of.
The only "rule" we have about this is that if you are going to distribute FreeBSD with non-trivial changes, we encourage you to document your enhancements!
The FreeBSD community can only help support users of the software we provide.
We certainly encourage innovation in the form of advanced installation and administration tools, for example, but we cannot be expected to answer questions about it.
=== Scripting `bsdinstall`
The FreeBSD system installation and configuration tool, man:bsdinstall[8], can be scripted to provide automated installs for large sites.
This functionality can be used in conjunction with Intel(R) PXE footnote:[link:{handbook}#network-diskless[Diskless Operation with PXE]] to bootstrap systems from the network.
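As a minimal sketch, a scripted installation is driven by an [.filename]#installerconfig# file: the preamble sets variables such as `PARTITIONS` and `DISTRIBUTIONS`, and the shell script that follows is run inside the newly installed system.
The values below are illustrative only; see man:bsdinstall[8] for the full syntax:
[.programlisting]
....
PARTITIONS=ada0
DISTRIBUTIONS="kernel.txz base.txz"

#!/bin/sh
sysrc sshd_enable=YES
sysrc ifconfig_DEFAULT=DHCP
....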
[[lessons-learned]]
== Lessons Learned from FreeBSD 4.4
The release engineering process for 4.4 formally began on August 1st, 2001.
After that date all commits to the `RELENG_4` branch of FreeBSD had to be explicitly approved by the `{re}`.
The first release candidate for the x86 architecture was released on August 16, followed by 4 more release candidates leading up to the final release on September 18th.
The security officer was very involved in the last week of the process as several security issues were found in the earlier release candidates.
A total of over _500_ emails were sent to the `{re}` in little over a month.
Our user community has made it very clear that the security and stability of a FreeBSD release should not be sacrificed for any self-imposed deadlines or target release dates.
The FreeBSD Project has grown tremendously over its lifetime and the need for standardized release engineering procedures has never been more apparent.
This will become even more important as FreeBSD is ported to new platforms.
[[future]]
== Future Directions
It is imperative for our release engineering activities to scale with our growing userbase.
Along these lines we are working very hard to document the procedures involved in producing FreeBSD releases.
* _Parallelism_ - Certain portions of the release build are actually "embarrassingly parallel". Most of the tasks are very I/O intensive, so having multiple high-speed disk drives is actually more important than using multiple processors in speeding up the `make release` process. If multiple disks are used for different hierarchies in the man:chroot[2] environment, then the CVS checkout of the [.filename]#ports# and [.filename]#doc# trees can happen simultaneously with the `make world` on another disk. Using a RAID solution (hardware or software) can significantly decrease the overall build time.
* _Cross-building releases_ - Building an IA-64 or Alpha release on x86 hardware? `make TARGET=ia64 release`.
* _Regression Testing_ - We need better automated correctness testing for FreeBSD.
* _Installation Tools_ - Our installation program has long since outlived its intended life span. Several projects are under development to provide a more advanced installation mechanism. The libh project was one such project that aimed to provide an intelligent new package framework and GUI installation program.
[[ackno]]
== Acknowledgements
I would like to thank Jordan Hubbard for giving me the opportunity to take on some of the release engineering responsibilities for FreeBSD 4.4 and also for all of his work throughout the years making FreeBSD what it is today.
Of course the release would not have been possible without all of the release-related work done by `{asami}`, `{steve}`, `{bmah}`, `{nik}`, `{obrien}`, `{kris}`, `{jhb}` and the rest of the FreeBSD development community.
I would also like to thank `{rgrimes}`, `{phk}`, and others who worked on the release engineering tools in the very early days of FreeBSD.
This article was influenced by release engineering documents from the CSRG footnote:[Marshall Kirk McKusick, Michael J. Karels, and Keith Bostic: link:http://docs.FreeBSD.org/44doc/papers/releng.html[The Release Engineering of 4.3BSD]] , the NetBSD Project, footnote:[NetBSD Developer Documentation: Release Engineering http://www.NetBSD.org/developers/releng/index.html] , and John Baldwin's proposed release engineering process notes. footnote:[John Baldwin's FreeBSD Release Engineering Proposal https://people.FreeBSD.org/~jhb/docs/releng.txt]
diff --git a/documentation/content/en/articles/remote-install/_index.adoc b/documentation/content/en/articles/remote-install/_index.adoc
index bf97ed25e6..e5343b75a9 100644
--- a/documentation/content/en/articles/remote-install/_index.adoc
+++ b/documentation/content/en/articles/remote-install/_index.adoc
@@ -1,376 +1,376 @@
---
title: Remote Installation of the FreeBSD Operating System Without a Remote Console
authors:
- author: Daniel Gerzo
email: danger@FreeBSD.org
-copyright: 2008 The FreeBSD Documentation Project
-releaseinfo: "$FreeBSD$"
+copyright: 2008-2021 The FreeBSD Documentation Project
+description: Remote Installation of the FreeBSD Operating System Without a Remote Console
trademarks: ["freebsd", "general"]
---
= Remote Installation of the FreeBSD Operating System Without a Remote Console
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
include::shared/authors.adoc[]
include::shared/en/urls.adoc[]
[.abstract-title]
Abstract
This article documents the remote installation of the FreeBSD operating system when the console of the remote system is unavailable.
The main idea behind this article is the result of a collaboration with `{mm}` with valuable input provided by `{pjd}`.
'''
toc::[]
[[background]]
== Background
There are many server hosting providers in the world, but very few of them officially support FreeBSD.
They usually provide support for a Linux(R) distribution to be installed on the servers they offer.
In some cases, these companies will install your preferred Linux(R) distribution if you request it.
Using this option, we will attempt to install FreeBSD. In other cases, they may offer a rescue system which would be used in an emergency.
It is possible to use this for our purposes as well.
This article covers the basic installation and configuration steps required to bootstrap a remote installation of FreeBSD with RAID-1 and ZFS capabilities.
[[intro]]
== Introduction
This section will summarize the purpose of this article and better explain what is covered herein.
The instructions included in this article will benefit those using services provided by colocation facilities not supporting FreeBSD.
[.procedure]
====
. As we have mentioned in the <<background>> section, many of the reputable server hosting companies provide some kind of rescue system, which is booted from their LAN and accessible over SSH. They usually provide this support in order to help their customers fix broken operating systems. As this article will explain, it is possible to install FreeBSD with the help of these rescue systems.
+
. The next section of this article will describe how to configure and build a minimal FreeBSD system on the local machine. That version will eventually be running on the remote machine from a ramdisk, which will allow us to install a complete FreeBSD operating system from an FTP mirror using the sysinstall utility.
. The rest of this article will describe the installation procedure itself, as well as the configuration of the ZFS file system.
====
[[requirements]]
=== Requirements
To continue successfully, you must:
* Have a network accessible operating system with SSH access
* Understand the FreeBSD installation process
* Be familiar with the man:sysinstall[8] utility
* Have the FreeBSD installation ISO image or CD handy
[[preparation]]
== Preparation - mfsBSD
Before FreeBSD may be installed on the target system, it is necessary to build the minimal FreeBSD operating system image which will boot from the hard drive.
This way the new system can be accessed from the network, and the rest of the installation can be done without remote access to the system console.
The mfsBSD tool-set can be used to build a tiny FreeBSD image.
As the name of mfsBSD suggests ("mfs" means "memory file system"), the resulting image runs entirely from a ramdisk.
Thanks to this feature, manipulation of the hard drives is not restricted, so it is possible to install a complete FreeBSD operating system.
The mfsBSD http://mfsbsd.vx.sk/[home page] includes pointers to the latest release of the toolset.
Please note that the internals of mfsBSD and how it all fits together are beyond the scope of this article.
The interested reader should consult the original documentation of mfsBSD for more details.
Download and extract the latest mfsBSD release and change your working directory to the directory where the mfsBSD scripts will reside:
[source,shell]
....
# fetch http://mfsbsd.vx.sk/release/mfsbsd-2.1.tar.gz
# tar xvzf mfsbsd-2.1.tar.gz
# cd mfsbsd-2.1/
....
[[mfsbsd-config]]
=== Configuration of mfsBSD
Before booting mfsBSD, a few important configuration options have to be set.
The most important that we have to get right is, naturally, the network setup.
The most suitable method to configure networking options depends on whether we know beforehand the type of the network interface we will use, and the network interface driver to be loaded for our hardware.
We will see how mfsBSD can be configured in either case.
Another important thing to set is the `root` password.
This can be done by editing [.filename]#conf/loader.conf#.
Please see the included comments.
==== The [.filename]#conf/interfaces.conf# method
When the installed network interface card is unknown, it is possible to use the auto-detection features of mfsBSD.
The startup scripts of mfsBSD can detect the correct driver to use, based on the MAC address of the interface, if we set the following options in [.filename]#conf/interfaces.conf#:
[.programlisting]
....
mac_interfaces="ext1"
ifconfig_ext1_mac="00:00:00:00:00:00"
ifconfig_ext1="inet 192.168.0.2/24"
....
Do not forget to add the `defaultrouter` information to [.filename]#conf/rc.conf#:
[.programlisting]
....
defaultrouter="192.168.0.1"
....
==== The [.filename]#conf/rc.conf# Method
When the network interface driver is known, it is more convenient to use [.filename]#conf/rc.conf# for networking options.
The syntax of this file is the same as the one used in the standard man:rc.conf[5] file of FreeBSD.
For example, if you know that a man:re[4] network interface is going to be available, you can set the following options in [.filename]#conf/rc.conf#:
[.programlisting]
....
defaultrouter="192.168.0.1"
ifconfig_re0="inet 192.168.0.2/24"
....
[[mfsbsd-build]]
=== Building an mfsBSD Image
The process of building an mfsBSD image is pretty straightforward.
The first step is to mount the FreeBSD installation CD, or the installation ISO image to [.filename]#/cdrom#.
For the sake of example, in this article we will assume that you have downloaded the FreeBSD 10.1-RELEASE ISO.
Mounting this ISO image to the [.filename]#/cdrom# directory is easy with the man:mdconfig[8] utility:
[source,shell]
....
# mdconfig -a -t vnode -u 10 -f FreeBSD-10.1-RELEASE-amd64-disc1.iso
# mount_cd9660 /dev/md10 /cdrom
....
Since the recent FreeBSD releases do not contain regular distribution sets, it is required to extract the FreeBSD distribution files from the distribution archives located on the ISO image:
[source,shell]
....
# mkdir DIST
# tar -xvf /cdrom/usr/freebsd-dist/base.txz -C DIST
# tar -xvf /cdrom/usr/freebsd-dist/kernel.txz -C DIST
....
Next, build the bootable mfsBSD image:
[source,shell]
....
# make BASE=DIST
....
[NOTE]
====
The above `make` has to be run from the top level of the mfsBSD directory tree, for example [.filename]#~/mfsbsd-2.1/#.
====
=== Booting mfsBSD
Now that the mfsBSD image is ready, it must be uploaded to the remote system running a live rescue system or pre-installed Linux(R) distribution.
The most suitable tool for this task is scp:
[source,shell]
....
# scp disk.img root@192.168.0.2:.
....
To boot the mfsBSD image properly, it must be placed on the first (bootable) device of the given machine.
This may be accomplished using the following example, provided that [.filename]#sda# is the first bootable disk device:
[source,shell]
....
# dd if=/root/disk.img of=/dev/sda bs=1M
....
If all went well, the image should now be in the MBR of the first device and the machine can be rebooted.
Watch for the machine to boot up properly with the man:ping[8] tool.
Once it has come back on-line, it should be possible to access it over man:ssh[1] as user `root` with the configured password.
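For example, assuming the 192.168.0.2 address configured earlier:
[source,shell]
....
# ping -c 3 192.168.0.2
# ssh root@192.168.0.2
....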
[[installation]]
== Installation of the FreeBSD Operating System
mfsBSD has now been booted successfully and it should be possible to log in through man:ssh[1].
This section will describe how to create and label slices, set up `gmirror` for RAID-1, and how to use `sysinstall` to install a minimal distribution of the FreeBSD operating system.
=== Preparation of Hard Drives
The first task is to allocate disk space for FreeBSD, i.e.: to create slices and partitions.
Obviously, the currently running system is fully loaded in system memory and therefore there will be no problems with manipulating hard drives.
To complete this task, it is possible to use either `sysinstall` or man:fdisk[8] in conjunction with man:bsdlabel[8].
At the start, mark all system disks as empty.
Repeat the following command for each hard drive:
[source,shell]
....
# dd if=/dev/zero of=/dev/ad0 count=2
....
Next, create slices and label them with your preferred tool.
While it is considered easier to use `sysinstall`, a powerful and probably less error-prone method is to use the standard text-based UNIX(R) tools, such as man:fdisk[8] and man:bsdlabel[8], which will also be covered in this section.
The former option is well documented in the link:{handbook}#install-steps[Installing FreeBSD] chapter of the FreeBSD Handbook.
As it was mentioned in the introduction, this article will present how to set up a system with RAID-1 and ZFS capabilities.
Our setup will consist of small man:gmirror[8] mirrored [.filename]#/# (root), [.filename]#/usr# and [.filename]#/var# file systems, with the rest of the disk space allocated for a man:zpool[8] mirrored ZFS file system.
Please note that the ZFS file system will be configured after the FreeBSD operating system is successfully installed and booted.
The following example will describe how to create slices and labels, initialize man:gmirror[8] on each partition and how to create a UFS2 file system in each mirrored partition:
[source,shell]
....
# fdisk -BI /dev/ad0 <.>
# fdisk -BI /dev/ad1
# bsdlabel -wB /dev/ad0s1 <.>
# bsdlabel -wB /dev/ad1s1
# bsdlabel -e /dev/ad0s1 <.>
# bsdlabel /dev/ad0s1 > /tmp/bsdlabel.txt && bsdlabel -R /dev/ad1s1 /tmp/bsdlabel.txt <.>
# gmirror label root /dev/ad[01]s1a <.>
# gmirror label var /dev/ad[01]s1d
# gmirror label usr /dev/ad[01]s1e
# gmirror label -F swap /dev/ad[01]s1b <.>
# newfs /dev/mirror/root <.>
# newfs /dev/mirror/var
# newfs /dev/mirror/usr
....
<.> Create a slice covering the entire disk and initialize the boot code contained in sector 0 of the given disk. Repeat this command for all hard drives in the system.
<.> Write a standard label for each disk including the bootstrap code.
<.> Now, manually edit the label of the given disk. Refer to the man:bsdlabel[8] manual page in order to find out how to create partitions. Create partitions `a` for [.filename]#/# (root) file system, `b` for swap, `d` for [.filename]#/var#, `e` for [.filename]#/usr# and finally `f` which will later be used for ZFS (an illustrative label is sketched after this list).
<.> Import the recently created label for the second hard drive, so both hard drives will be labeled in the same way.
<.> Initialize man:gmirror[8] on each partition.
<.> Note that `-F` is used for the swap partition. This instructs man:gmirror[8] to assume that the device is in the consistent state after the power/system failure.
<.> Create a UFS2 file system on each mirrored partition.
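As a purely illustrative sketch of step 3, the label opened with `bsdlabel -e` might be edited to contain partitions along the following lines; the sizes are arbitrary examples, not recommendations, and the existing `c` entry, which represents the whole slice, should be left untouched:
[.programlisting]
....
# /dev/ad0s1:
8 partitions:
#          size     offset    fstype   [fsize bsize bps/cpg]
  a:         1G          *    4.2BSD
  b:         4G          *      swap
  d:         4G          *    4.2BSD
  e:         8G          *    4.2BSD
  f:          *          *    4.2BSD
....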
=== System Installation
This is the most important part.
This section will describe how to actually install the minimal distribution of FreeBSD on the hard drives that we have prepared in the previous section.
To accomplish this goal, all file systems need to be mounted so `sysinstall` may write the contents of FreeBSD to the hard drives:
[source,shell]
....
# mount /dev/mirror/root /mnt
# mkdir /mnt/var /mnt/usr
# mount /dev/mirror/var /mnt/var
# mount /dev/mirror/usr /mnt/usr
....
When you are done, start man:sysinstall[8].
Select the [.guimenuitem]#Custom# installation from the main menu.
Select [.guimenuitem]#Options# and press kbd:[Enter].
With the help of arrow keys, move the cursor on the `Install Root` item, press kbd:[Space] and change it to [.filename]#/mnt#.
Press kbd:[Enter] to submit your changes and exit the [.guimenuitem]#Options# menu by pressing kbd:[q].
[WARNING]
====
Note that this step is very important and if skipped, `sysinstall` will be unable to install FreeBSD.
====
Go to the [.guimenuitem]#Distributions# menu, move the cursor with the arrow keys to `Minimal`, and check it by pressing kbd:[Space].
This article uses the Minimal distribution in order to save network traffic, because the system itself will be installed over FTP.
Exit this menu by choosing `Exit`.
[NOTE]
====
The [.guimenuitem]#Partition# and [.guimenuitem]#Label# menus will be skipped, as these are useless now.
====
In the [.guimenuitem]#Media# menu, select `FTP`.
Select the nearest mirror and let `sysinstall` assume that the network is already configured.
You will be returned back to the [.guimenuitem]#Custom# menu.
Finally, perform the system installation by selecting the last option, [.guimenuitem]#Commit#.
Exit `sysinstall` when it finishes the installation.
=== Post Installation Steps
The FreeBSD operating system should be installed now; however, the process is not finished yet.
It is necessary to perform some post installation steps in order to allow FreeBSD to boot in the future and to be able to log in to the system.
You must now man:chroot[8] into the freshly installed system in order to finish the installation.
Use the following command:
[source,shell]
....
# chroot /mnt
....
To complete our goal, perform these steps:
* Copy the `GENERIC` kernel to the [.filename]#/boot/kernel# directory:
+
[source,shell]
....
# cp -Rp /boot/GENERIC/* /boot/kernel
....
* Create the [.filename]#/etc/rc.conf#, [.filename]#/etc/resolv.conf# and [.filename]#/etc/fstab# files. Do not forget to properly set the network information and to enable sshd in [.filename]#/etc/rc.conf# (a sample [.filename]#/etc/rc.conf# is sketched after this list). The contents of [.filename]#/etc/fstab# will be similar to the following:
+
[.programlisting]
....
# Device Mountpoint FStype Options Dump Pass#
/dev/mirror/swap none swap sw 0 0
/dev/mirror/root / ufs rw 1 1
/dev/mirror/usr /usr ufs rw 2 2
/dev/mirror/var /var ufs rw 2 2
/dev/cd0 /cdrom cd9660 ro,noauto 0 0
....
* Create [.filename]#/boot/loader.conf# with the following contents:
+
[.programlisting]
....
geom_mirror_load="YES"
zfs_load="YES"
....
* Perform the following command, which will make ZFS available on the next boot:
+
[source,shell]
....
# sysrc zfs_enable="YES"
....
* Add additional users to the system using the man:adduser[8] tool. Do not forget to add a user to the `wheel` group so you may obtain root access after the reboot.
* Double-check all your settings.
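A sample [.filename]#/etc/rc.conf#, assuming the man:re[4] interface and the addresses used earlier in this article (the hostname is only a placeholder), might look like this:
[.programlisting]
....
hostname="remote.example.org"
defaultrouter="192.168.0.1"
ifconfig_re0="inet 192.168.0.2/24"
sshd_enable="YES"
....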
The system should now be ready for the next boot.
Use the man:reboot[8] command to reboot your system.
[[zfs]]
== ZFS
If your system survived the reboot, it should now be possible to log in.
Welcome to the fresh FreeBSD installation, performed remotely without the use of a remote console!
The only remaining step is to configure man:zpool[8] and create some man:zfs[8] file systems.
Creating and administering ZFS is very straightforward. First, create a mirrored pool:
[source,shell]
....
# zpool create tank mirror /dev/ad[01]s1f
....
Next, create some file systems:
[source,shell]
....
# zfs create tank/ports
# zfs create tank/src
# zfs set compression=gzip tank/ports
# zfs set compression=on tank/src
# zfs set mountpoint=/usr/ports tank/ports
# zfs set mountpoint=/usr/src tank/src
....
That is all.
If you are interested in more details about ZFS on FreeBSD, please refer to the https://wiki.freebsd.org/ZFS[ZFS] section of the FreeBSD Wiki.
diff --git a/documentation/content/en/articles/serial-uart/_index.adoc b/documentation/content/en/articles/serial-uart/_index.adoc
index aed42baa0f..9bd53781e0 100644
--- a/documentation/content/en/articles/serial-uart/_index.adoc
+++ b/documentation/content/en/articles/serial-uart/_index.adoc
@@ -1,1163 +1,1163 @@
---
title: Serial and UART Tutorial
authors:
- author: Frank Durda
email: uhclem@FreeBSD.org
-releaseinfo: "$FreeBSD$"
+description: How to use serial hardware and UART with FreeBSD
trademarks: ["freebsd", "microsoft", "general"]
---
= Serial and UART Tutorial
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
include::shared/authors.adoc[]
include::shared/en/urls.adoc[]
[.abstract-title]
Abstract
This article talks about using serial hardware with FreeBSD.
'''
toc::[]
[[uart]]
== The UART: What it is and how it works
_Copyright (R) 1996 `{uhclem}`, All Rights Reserved. 13 January 1996._
The Universal Asynchronous Receiver/Transmitter (UART) controller is the key component of the serial communications subsystem of a computer.
The UART takes bytes of data and transmits the individual bits in a sequential fashion.
At the destination, a second UART re-assembles the bits into complete bytes.
Serial transmission is commonly used with modems and for non-networked communication between computers, terminals and other devices.
There are two primary forms of serial transmission: Synchronous and Asynchronous.
Depending on the modes that are supported by the hardware, the name of the communication sub-system will usually include an `A` if it supports Asynchronous communications, and an `S` if it supports Synchronous communications.
Both forms are described below.
Some common acronyms are:
[.blockquote]
UART Universal Asynchronous Receiver/Transmitter
[.blockquote]
USART Universal Synchronous-Asynchronous Receiver/Transmitter
=== Synchronous Serial Transmission
Synchronous serial transmission requires that the sender and receiver share a clock with one another, or that the sender provide a strobe or other timing signal so that the receiver knows when to "read" the next bit of the data.
In most forms of serial Synchronous communication, if there is no data available at a given instant to transmit, a fill character must be sent instead so that data is always being transmitted.
Synchronous communication is usually more efficient because only data bits are transmitted between sender and receiver, but it can be more costly if extra wiring and circuits are required to share a clock signal between the sender and receiver.
A form of Synchronous transmission is used with printers and fixed disk devices in that the data is sent on one set of wires while a clock or strobe is sent on a different wire.
Printers and fixed disk devices are not normally serial devices because most fixed disk interface standards send an entire word of data for each clock or strobe signal by using a separate wire for each bit of the word.
In the PC industry, these are known as Parallel devices.
The standard serial communications hardware in the PC does not support Synchronous operations.
This mode is described here for comparison purposes only.
=== Asynchronous Serial Transmission
Asynchronous transmission allows data to be transmitted without the sender having to send a clock signal to the receiver.
Instead, the sender and receiver must agree on timing parameters in advance and special bits are added to each word which are used to synchronize the sending and receiving units.
When a word is given to the UART for Asynchronous transmissions, a bit called the "Start Bit" is added to the beginning of each word that is to be transmitted.
The Start Bit is used to alert the receiver that a word of data is about to be sent, and to force the clock in the receiver into synchronization with the clock in the transmitter.
These two clocks must be accurate enough to not have the frequency drift by more than 10% during the transmission of the remaining bits in the word.
(This requirement was set in the days of mechanical teleprinters and is easily met by modern electronic equipment.)
After the Start Bit, the individual bits of the word of data are sent, with the Least Significant Bit (LSB) being sent first.
Each bit in the transmission is transmitted for exactly the same amount of time as all of the other bits, and the receiver "looks" at the wire at approximately halfway through the period assigned to each bit to determine if the bit is a `1` or a `0`.
For example, if it takes two seconds to send each bit, the receiver will examine the signal to determine if it is a `1` or a `0` after one second has passed, then it will wait two seconds and then examine the value of the next bit, and so on.
The sender does not know when the receiver has "looked" at the value of the bit.
The sender only knows when the clock says to begin transmitting the next bit of the word.
When the entire data word has been sent, the transmitter may add a Parity Bit that the transmitter generates.
The Parity Bit may be used by the receiver to perform simple error checking.
Then at least one Stop Bit is sent by the transmitter.
When the receiver has received all of the bits in the data word, it may check for the Parity Bits (both sender and receiver must agree on whether a Parity Bit is to be used), and then the receiver looks for a Stop Bit.
If the Stop Bit does not appear when it is supposed to, the UART considers the entire word to be garbled and will report a Framing Error to the host processor when the data word is read.
The usual cause of a Framing Error is that the sender and receiver clocks were not running at the same speed, or that the signal was interrupted.
Regardless of whether the data was received correctly or not, the UART automatically discards the Start, Parity and Stop bits.
If the sender and receiver are configured identically, these bits are not passed to the host.
If another word is ready for transmission, the Start Bit for the new word can be sent as soon as the Stop Bit for the previous word has been sent.
As asynchronous data is "self synchronizing", if there is no data to transmit, the transmission line can be idle.
=== Other UART Functions
In addition to the basic job of converting data from parallel to serial for transmission and from serial to parallel on reception, a UART will usually provide additional circuits for signals that can be used to indicate the state of the transmission media, and to regulate the flow of data in the event that the remote device is not prepared to accept more data.
For example, when the device connected to the UART is a modem, the modem may report the presence of a carrier on the phone line while the computer may be able to instruct the modem to reset itself or to not take calls by raising or lowering one or more of these extra signals.
The function of each of these additional signals is defined in the EIA RS232-C standard.
=== The RS232-C and V.24 Standards
In most computer systems, the UART is connected to circuitry that generates signals that comply with the EIA RS232-C specification.
There is also a CCITT standard named V.24 that mirrors the specifications included in RS232-C.
==== RS232-C Bit Assignments (Marks and Spaces)
In RS232-C, a value of `1` is called a `Mark` and a value of `0` is called a `Space`.
When a communication line is idle, the line is said to be "Marking", or transmitting continuous `1` values.
The Start bit always has a value of `0` (a Space).
The Stop Bit always has a value of `1` (a Mark).
This means that there will always be a Mark (1) to Space (0) transition on the line at the start of every word, even when multiple words are transmitted back to back.
This guarantees that sender and receiver can resynchronize their clocks regardless of the content of the data bits that are being transmitted.
The idle time between Stop and Start bits does not have to be an exact multiple (including zero) of the bit rate of the communication link, but most UARTs are designed this way for simplicity.
In RS232-C, the "Marking" signal (a `1`) is represented by a voltage between -2 VDC and -12 VDC, and a "Spacing" signal (a `0`) is represented by a voltage between 0 and +12 VDC.
The transmitter is supposed to send +12 VDC or -12 VDC, and the receiver is supposed to allow for some voltage loss in long cables.
Some transmitters in low power devices (like portable computers) sometimes use only +5 VDC and -5 VDC, but these values are still acceptable to a RS232-C receiver, provided that the cable lengths are short.
==== RS232-C Break Signal
RS232-C also specifies a signal called a `Break`, which is caused by sending continuous Spacing values (no Start or Stop bits).
When there is no electricity present on the data circuit, the line is considered to be sending `Break`.
The `Break` signal must be of a duration longer than the time it takes to send a complete byte plus Start, Stop and Parity bits.
Most UARTs can distinguish between a Framing Error and a Break, but if the UART cannot do this, the Framing Error detection can be used to identify Breaks.
In the days of teleprinters, when numerous printers around the country were wired in series (such as news services), any unit could cause a `Break` by temporarily opening the entire circuit so that no current flowed.
This was used to allow a location with urgent news to interrupt some other location that was currently sending information.
In modern systems there are two types of Break signals.
If the Break is longer than 1.6 seconds, it is considered a "Modem Break", and some modems can be programmed to terminate the conversation and go on-hook or enter the modem's command mode when the modem detects this signal.
If the Break is smaller than 1.6 seconds, it signifies a Data Break and it is up to the remote computer to respond to this signal.
Sometimes this form of Break is used as an Attention or Interrupt signal and sometimes is accepted as a substitute for the ASCII CONTROL-C character.
Marks and Spaces are also equivalent to "Holes" and "No Holes" in paper tape systems.
[NOTE]
====
Breaks cannot be generated from paper tape or from any other byte value, since bytes are always sent with Start and Stop bits.
The UART is usually capable of generating the continuous Spacing signal in response to a special command from the host processor.
====
==== RS232-C DTE and DCE Devices
The RS232-C specification defines two types of equipment: the Data Terminal Equipment (DTE) and the Data Carrier Equipment (DCE).
Usually, the DTE device is the terminal (or computer), and the DCE is a modem.
Across the phone line at the other end of a conversation, the receiving modem is also a DCE device and the computer that is connected to that modem is a DTE device.
The DCE device receives signals on the pins that the DTE device transmits on, and vice versa.
When two devices that are both DTE or both DCE must be connected together without a modem or a similar media translator between them, a NULL modem must be used.
The NULL modem electrically re-arranges the cabling so that the transmitter output is connected to the receiver input on the other device, and vice versa.
Similar translations are performed on all of the control signals so that each device will see what it thinks are DCE (or DTE) signals from the other device.
The number of signals generated by the DTE and DCE devices is not symmetrical.
The DTE device generates fewer signals for the DCE device than the DTE device receives from the DCE.
==== RS232-C Pin Assignments
The EIA RS232-C specification (and the ITU equivalent, V.24) calls for a twenty-five pin connector (usually a DB25) and defines the purpose of most of the pins in that connector.
In the IBM Personal Computer and similar systems, a subset of RS232-C signals are provided via nine pin connectors (DB9).
The signals that are not included on the PC connector deal mainly with synchronous operation, and this transmission mode is not supported by the UART that IBM selected for use in the IBM PC.
Depending on the computer manufacturer, a DB25, a DB9, or both types of connector may be used for RS232-C communications.
(The IBM PC also uses a DB25 connector for the parallel printer interface which causes some confusion.)
Below is a table of the RS232-C signal assignments in the DB25 and DB9 connectors.
[.informaltable]
[cols="1,1,1,1,1,1,1", frame="none", options="header"]
|===
| DB25 RS232-C Pin
| DB9 IBM PC Pin
| EIA Circuit Symbol
| CCITT Circuit Symbol
| Common Name
| Signal Source
| Description
|1
|-
|AA
|101
|PG/FG
|-
|Frame/Protective Ground
|2
|3
|BA
|103
|TD
|DTE
|Transmit Data
|3
|2
|BB
|104
|RD
|DCE
|Receive Data
|4
|7
|CA
|105
|RTS
|DTE
|Request to Send
|5
|8
|CB
|106
|CTS
|DCE
|Clear to Send
|6
|6
|CC
|107
|DSR
|DCE
|Data Set Ready
|7
|5
|AV
|102
|SG/GND
|-
|Signal Ground
|8
|1
|CF
|109
|DCD/CD
|DCE
|Data Carrier Detect
|9
|-
|-
|-
|-
|-
|Reserved for Test
|10
|-
|-
|-
|-
|-
|Reserved for Test
|11
|-
|-
|-
|-
|-
|Reserved for Test
|12
|-
|CI
|122
|SRLSD
|DCE
|Sec. Recv. Line Signal Detector
|13
|-
|SCB
|121
|SCTS
|DCE
|Secondary Clear to Send
|14
|-
|SBA
|118
|STD
|DTE
|Secondary Transmit Data
|15
|-
|DB
|114
|TSET
|DCE
|Trans. Sig. Element Timing
|16
|-
|SBB
|119
|SRD
|DCE
|Secondary Received Data
|17
|-
|DD
|115
|RSET
|DCE
|Receiver Signal Element Timing
|18
|-
|-
|141
|LOOP
|DTE
|Local Loopback
|19
|-
|SCA
|120
|SRS
|DTE
|Secondary Request to Send
|20
|4
|CD
|108.2
|DTR
|DTE
|Data Terminal Ready
|21
|-
|-
|-
|RDL
|DTE
|Remote Digital Loopback
|22
|9
|CE
|125
|RI
|DCE
|Ring Indicator
|23
|-
|CH
|111
|DSRS
|DTE
|Data Signal Rate Selector
|24
|-
|DA
|113
|TSET
|DTE
|Trans. Sig. Element Timing
|25
|-
|-
|142
|-
|DCE
|Test Mode
|===
=== Bits, Baud and Symbols
Baud is a measurement of transmission speed in asynchronous communication.
Due to advances in modem communication technology, this term is frequently misused when describing the data rates in newer devices.
Traditionally, a Baud Rate represents the number of bits that are actually being sent over the media, not the amount of data that is actually moved from one DTE device to the other.
The Baud count includes the overhead bits Start, Stop and Parity that are generated by the sending UART and removed by the receiving UART.
This means that seven-bit words of data actually take 10 bits to be completely transmitted.
Therefore, a modem capable of moving 300 bits per second from one place to another can normally only move 30 7-bit words if Parity is used and one Start and Stop bit are present.
If 8-bit data words are used and Parity bits are also used, the data rate falls to 27.27 words per second, because it now takes 11 bits to send the eight-bit words, and the modem still only sends 300 bits per second.
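The arithmetic can be checked with man:bc[1]; each 7-bit word costs 10 bits on the wire (Start + 7 data + Parity + Stop) and each 8-bit word costs 11:
[source,shell]
....
% echo "scale=2; 300 / (1 + 7 + 1 + 1)" | bc
30.00
% echo "scale=2; 300 / (1 + 8 + 1 + 1)" | bc
27.27
....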
The formula for converting bytes per second into a baud rate and vice versa was simple until error-correcting modems came along.
These modems receive the serial stream of bits from the UART in the host computer (even when internal modems are used the data is still frequently serialized) and convert the bits back into bytes.
These bytes are then combined into packets and sent over the phone line using a Synchronous transmission method.
This means that the Start, Stop, and Parity bits added by the UART in the DTE (the computer) are removed by the sending modem before transmission.
When these bytes are received by the remote modem, the remote modem adds Start, Stop and Parity bits to the words, converts them to a serial format and then sends them to the receiving UART in the remote computer, which then strips the Start, Stop and Parity bits.
The reason all these extra conversions are done is so that the two modems can perform error correction, which means that the receiving modem is able to ask the sending modem to resend a block of data that was not received with the correct checksum.
This checking is handled by the modems, and the DTE devices are usually unaware that the process is occurring.
By stripping the Start, Stop and Parity bits before transmission, the additional bits of data that the two modems must share between themselves to perform error-correction are mostly hidden from the effective transmission rate seen by the sending and receiving DTE equipment.
For example, if a modem sends ten 7-bit words to another modem without including the Start, Stop and Parity bits, the sending modem will be able to add 30 bits of its own information that the receiving modem can use to do error-correction without impacting the transmission speed of the real data.
The use of the term Baud is further confused by modems that perform compression.
A single 8-bit word passed over the telephone line might represent a dozen words that were transmitted to the sending modem.
The receiving modem will expand the data back to its original content and pass that data to the receiving DTE.
Modern modems also include buffers that allow the rate that bits move across the phone line (DCE to DCE) to be a different speed than the speed that the bits move between the DTE and DCE on both ends of the conversation.
Normally the speed between the DTE and DCE is higher than the DCE to DCE speed because of the use of compression by the modems.
Because the number of bits needed to describe a byte varies during the trip between the two machines, and because different bits-per-second speeds are used on the DTE-DCE and DCE-DCE links, using the term Baud to describe the overall communication speed causes problems and can misrepresent the true transmission speed.
So Bits Per Second (bps) is the correct term to use to describe the transmission rate seen at the DCE to DCE interface and Baud or Bits Per Second are acceptable terms to use when a connection is made between two systems with a wired connection, or if a modem is in use that is not performing error-correction or compression.
Modern high speed modems (2400, 9600, 14,400, and 19,200bps) in reality still operate at or below 2400 baud, or more accurately, 2400 Symbols per second.
High speed modems are able to encode more bits of data into each Symbol using a technique called Constellation Stuffing, which is why the effective bits per second rate of the modem is higher, but the modem continues to operate within the limited audio bandwidth that the telephone system provides.
Modems operating at 28,800 and higher speeds have variable Symbol rates, but the technique is the same.
=== The IBM Personal Computer UART
Starting with the original IBM Personal Computer, IBM selected the National Semiconductor INS8250 UART for use in the IBM PC Parallel/Serial Adapter.
Subsequent generations of compatible computers from IBM and other vendors continued to use the INS8250 or improved versions of the National Semiconductor UART family.
==== National Semiconductor UART Family Tree
There have been several versions and subsequent generations of the INS8250 UART. Each major version is described below.
[.programlisting]
....
INS8250 -> INS8250B
\
\
\-> INS8250A -> INS82C50A
\
\
\-> NS16450 -> NS16C450
\
\
\-> NS16550 -> NS16550A -> PC16550D
....
INS8250::
This part was used in the original IBM PC and IBM PC/XT.
The original name for this part was the INS8250 ACE (Asynchronous Communications Element) and it is made from NMOS technology.
+
The 8250 uses eight I/O ports and has a one-byte send and a one-byte receive buffer.
This original UART has several race conditions and other flaws.
The original IBM BIOS includes code to work around these flaws, but this made the BIOS dependent on the flaws being present, so subsequent parts like the 8250A, 16450 or 16550 could not be used in the original IBM PC or IBM PC/XT.
INS8250-B::
This is a slower-speed version of the INS8250 made from NMOS technology.
It contains the same problems as the original INS8250.
INS8250A::
An improved version of the INS8250 using XMOS technology with various functional flaws corrected.
The INS8250A was used initially in PC clone computers by vendors who used "clean" BIOS designs.
Due to the corrections in the chip, this part could not be used with a BIOS compatible with the INS8250 or INS8250B.
INS82C50A::
This is a CMOS version (low power consumption) of the INS8250A and has similar functional characteristics.
NS16450::
Same as NS8250A with improvements so it can be used with faster CPU bus designs.
IBM used this part in the IBM AT and updated the IBM BIOS to no longer rely on the bugs in the INS8250.
NS16C450::
This is a CMOS version (low power consumption) of the NS16450.
NS16550::
Same as NS16450 with a 16-byte send and receive buffer, but the buffer design was flawed and could not be used reliably.
NS16550A::
Same as NS16550 with the buffer flaws corrected.
The 16550A and its successors have become the most popular UART design in the PC industry, mainly due to its ability to reliably handle higher data rates on operating systems with sluggish interrupt response times.
NS16C552::
This component consists of two NS16C550A CMOS UARTs in a single package.
PC16550D::
Same as NS16550A with subtle flaws corrected.
This is revision D of the 16550 family and is the latest design available from National Semiconductor.
==== The NS16550AF and the PC16550D are the same thing
National reorganized their part numbering system a few years ago, and the NS16550AFN no longer exists by that name.
(If you have a NS16550AFN, look at the date code on the part, which is a four digit number that usually starts with a nine.
The first two digits of the number are the year, and the last two digits are the week in that year when the part was packaged.
If you have a NS16550AFN, it is probably a few years old.)
The new numbers are like PC16550DV, with minor differences in the suffix letters depending on the package material and its shape.
(A description of the numbering system can be found below.)
It is important to understand that in some stores you may pay $15(US) for a NS16550AFN made in 1990, while the next bin holds new PC16550DN parts with minor fixes that National has made since the AFN part was in production; the PC16550DN was probably made in the past six months and costs half as much (as low as $5(US) in volume) as the NS16550AFN because the newer parts are readily available.
As the supply of NS16550AFN chips continues to shrink, the price will probably continue to increase until more people discover and accept that the PC16550DN really has the same function as the old part number.
==== National Semiconductor Part Numbering System
The older NS``__nnnnnrqp__`` part numbers are now of the format PC``__nnnnnrgp__``.
The `_r_` is the revision field. The current revision of the 16550 from National Semiconductor is `D`.
The `_p_` is the package-type field. The types are:
[.informaltable]
[cols="1,1,1", frame="none"]
|===
|"F"
|QFP
|(quad flat pack) L lead type
|"N"
|DIP
|(dual inline package) through hole straight lead type
|"V"
|LPCC
|(lead plastic chip carrier) J lead type
|===
The _g_ is the product grade field.
If an `I` precedes the package-type letter, it indicates an "industrial" grade part, which has higher specs than a standard part but not as high as a Military Specification (Milspec) component.
This is an optional field.
So what we used to call a NS16550AFN (DIP Package) is now called a PC16550DN or PC16550DIN.
=== Other Vendors and Similar UARTs
Over the years, the 8250, 8250A, 16450 and 16550 have been licensed or copied by other chip vendors.
In the case of the 8250, 8250A and 16450, the exact circuit (the "megacell") was licensed to many vendors, including Western Digital and Intel.
Other vendors reverse-engineered the part or produced emulations that had similar behavior.
In internal modems, the modem designer will frequently emulate the 8250A/16450 with the modem microprocessor, and the emulated UART will frequently have a hidden buffer consisting of several hundred bytes.
Due to the size of the buffer, these emulations can be as reliable as a 16550A in their ability to handle high speed data.
However, most operating systems will still report that the UART is only a 8250A or 16450, and may not make effective use of the extra buffering present in the emulated UART unless special drivers are used.
Some modem makers are driven by market forces to abandon a design that has hundreds of bytes of buffer and instead use a 16550A UART so that the product will compare favorably in market comparisons even though the effective performance may be lowered by this action.
A common misconception is that all parts with "16550A" written on them are identical in performance.
There are differences, and in some cases, outright flaws in most of these 16550A clones.
When the NS16550 was developed, National Semiconductor obtained several patents on the design and also limited licensing, making it harder for other vendors to provide a chip with similar features.
As a result of the patents, reverse-engineered designs and emulations had to avoid infringing the claims covered by the patents.
Subsequently, these copies almost never perform exactly the same as the NS16550A or PC16550D, which are the parts most computer and modem makers want to buy but are sometimes unwilling to pay the price required to get the genuine part.
Some of the differences in the clone 16550A parts are unimportant, while others can prevent the device from being used at all with a given operating system or driver.
These differences may show up when using other drivers, or when particular combinations of events occur that were not well tested or considered in the Windows(R) driver.
This is because most modem vendors and 16550-clone makers use the Microsoft drivers from Windows(R) for Workgroups 3.11 and the Microsoft(R) MS-DOS(R) utility as the primary tests for compatibility with the NS16550A.
This over-simplistic criterion means that if a different operating system is used, problems could appear due to subtle differences between the clones and genuine components.
National Semiconductor has made available a program named COMTEST that performs compatibility tests independent of any OS drivers.
It should be remembered that the purpose of this type of program is to demonstrate the flaws in the products of the competition, so the program will report major as well as extremely subtle differences in behavior in the part being tested.
In a series of tests performed by the author of this document in 1994, components made by National Semiconductor, TI, StarTech, and CMD as well as megacells and emulations embedded in internal modems were tested with COMTEST. A difference count for some of these components is listed below.
Since these tests were performed in 1994, they may not reflect the current performance of the given product from a vendor.
It should be noted that COMTEST normally aborts when an excessive number of problems, or certain types of problems, have been detected.
As part of this testing, COMTEST was modified so that it would not abort no matter how many differences were encountered.
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Vendor
| Part Number
| Errors (aka "differences" reported)
|National
|(PC16550DV)
|0
|National
|(NS16550AFN)
|0
|National
|(NS16C552V)
|0
|TI
|(TL16550AFN)
|3
|CMD
|(16C550PE)
|19
|StarTech
|(ST16C550J)
|23
|Rockwell
|Reference modem with internal 16550 or an emulation (RC144DPi/C3000-25)
|117
|Sierra
|Modem with an internal 16550 (SC11951/SC11351)
|91
|===
[NOTE]
====
To date, the author of this document has not found any non-National parts that report zero differences using the COMTEST program.
It should also be noted that National has had five versions of the 16550 over the years and the newest parts behave a bit differently than the classic NS16550AFN that is considered the benchmark for functionality.
COMTEST appears to turn a blind eye to the differences within the National product line and reports no errors on the National parts (except for the original 16550) even when there are official erratas that describe bugs in the A, B and C revisions of the parts, so this bias in COMTEST must be taken into account.
====
It is important to understand that a simple count of differences from COMTEST does not reveal a lot about what differences are important and which are not.
For example, about half of the differences reported in the two modems listed above that have internal UARTs were caused by the clone UARTs not supporting five- and six-bit character modes.
The real 16550, 16450, and 8250 UARTs all support these modes and COMTEST checks the functionality of these modes so over fifty differences are reported.
However, almost no modern modem supports five- or six-bit characters, particularly those with error-correction and compression capabilities.
This means that the differences related to five- and six-bit character modes can be discounted.
Many of the differences COMTEST reports have to do with timing.
In many of the clone designs, when the host reads from one port, the status bits in some other port may not update in the same amount of time (some faster, some slower) as a _real_ NS16550AFN and COMTEST looks for these differences.
This means that the number of differences can be misleading in that one device may only have one or two differences but they are extremely serious, and some other device that updates the status registers faster or slower than the reference part (that would probably never affect the operation of a properly written driver) could have dozens of differences reported.
COMTEST can be used as a screening tool to alert the administrator to the presence of potentially incompatible components that might cause problems or have to be handled as a special case.
If you run COMTEST on a 16550 that is in a modem, or on a serial port with a modem attached, you need to first issue an ATE0&W command to the modem so that the modem will not echo any of the test characters.
If you forget to do this, COMTEST will report at least this one difference:
[source,shell]
....
Error (6)...Timeout interrupt failed: IIR = c1 LSR = 61
....
=== 8250/16450/16550 Registers
The 8250/16450/16550 UART occupies eight contiguous I/O port addresses.
In the IBM PC, there are two defined locations for these eight ports and they are known collectively as [.filename]#COM1# and [.filename]#COM2#.
The makers of PC-clones and add-on cards have created two additional areas known as [.filename]#COM3# and [.filename]#COM4#, but these extra COM ports conflict with other hardware on some systems.
The most common conflict is with video adapters that provide IBM 8514 emulation.
[.filename]#COM1# is located from 0x3f8 to 0x3ff and normally uses IRQ 4.
[.filename]#COM2# is located from 0x2f8 to 0x2ff and normally uses IRQ 3.
[.filename]#COM3# is located from 0x3e8 to 0x3ef and has no standardized IRQ.
[.filename]#COM4# is located from 0x2e8 to 0x2ef and has no standardized IRQ.
A description of the I/O ports of the 8250/16450/16550 UART is provided below.
[.informaltable]
[cols="10%,10%,80%", frame="none", options="header"]
|===
| I/O Port
| Access Allowed
| Description
|+0x00
|write (DLAB==0)
|
Transmit Holding Register (THR).
Information written to this port is treated as data words and will be transmitted by the UART.
|+0x00
|read (DLAB==0)
|
Receive Buffer Register (RBR).
Any data words received by the UART from the serial link are accessed by the host by reading this port.
|+0x00
|write/read (DLAB==1)
|
Divisor Latch LSB (DLL)
This value will be divided from the master input clock (in the IBM PC, the master clock is 1.8432MHz) and the resulting clock will determine the baud rate of the UART. This register holds bits 0 thru 7 of the divisor.
|+0x01
|write/read (DLAB==1)
|
Divisor Latch MSB (DLH)
This value will be divided from the master input clock (in the IBM PC, the master clock is 1.8432MHz) and the resulting clock will determine the baud rate of the UART. This register holds bits 8 thru 15 of the divisor.
|+0x01
|write/read (DLAB==0)
|Interrupt Enable Register (IER) +
The 8250/16450/16550 UART classifies events into one of four categories. Each category can be configured to generate an interrupt when any of the events occurs. The 8250/16450/16550 UART generates a single external interrupt signal regardless of how many events in the enabled categories have occurred. It is up to the host processor to respond to the interrupt and then poll the enabled interrupt categories (usually all categories have interrupts enabled) to determine the true cause(s) of the interrupt. +
Bit 7 -> Reserved, always 0. +
Bit 6 -> Reserved, always 0. +
Bit 5 -> Reserved, always 0. +
Bit 4 -> Reserved, always 0. +
Bit 3 -> Enable Modem Status Interrupt (EDSSI). Setting this bit to "1" allows the UART to generate an interrupt when a change occurs on one or more of the status lines. +
Bit 2 -> Enable Receiver Line Status Interrupt (ELSI) Setting this bit to "1" causes the UART to generate an interrupt when an error (or a BREAK signal) has been detected in the incoming data. +
Bit 1 -> Enable Transmitter Holding Register Empty Interrupt (ETBEI) Setting this bit to "1" causes the UART to generate an interrupt when the UART has room for one or more additional characters that are to be transmitted. +
Bit 0 -> Enable Received Data Available Interrupt (ERBFI) Setting this bit to "1" causes the UART to generate an interrupt when the UART has received enough characters to exceed the trigger level of the FIFO, or the FIFO timer has expired (stale data), or a single character has been received when the FIFO is disabled.
|+0x02
|write
|FIFO Control Register (FCR) (This port does not exist on the 8250 and 16450 UART.) +
Bit 7 -> Receiver Trigger Bit #1 +
Bit 6 -> Receiver Trigger Bit #0 +
These two bits control at what point the receiver is to generate an interrupt when the FIFO is active. +
7 6 How many words are received before an interrupt is generated +
0 0 1 +
0 1 4 +
1 0 8 +
1 1 14 +
Bit 5 -> Reserved, always 0. +
Bit 4 -> Reserved, always 0. +
Bit 3 -> DMA Mode Select. If Bit 0 is set to "1" (FIFOs enabled), setting this bit changes the operation of the -RXRDY and -TXRDY signals from Mode 0 to Mode 1. +
Bit 2 -> Transmit FIFO Reset. When a "1" is written to this bit, the contents of the FIFO are discarded. Any word currently being transmitted will be sent intact. This function is useful in aborting transfers. +
Bit 1 -> Receiver FIFO Reset. When a "1" is written to this bit, the contents of the FIFO are discarded. Any word currently being assembled in the shift register will be received intact. +
Bit 0 -> 16550 FIFO Enable. When set, both the transmit and receive FIFOs are enabled. Any contents in the holding register, shift registers or FIFOs are lost when FIFOs are enabled or disabled. +
|+0x02
|read
|Interrupt Identification Register +
Bit 7 -> FIFOs enabled. On the 8250/16450 UART, this bit is zero. +
Bit 6 -> FIFOs enabled. On the 8250/16450 UART, this bit is zero. +
Bit 5 -> Reserved, always 0. +
Bit 4 -> Reserved, always 0. +
Bit 3 -> Interrupt ID Bit #2. On the 8250/16450 UART, this bit is zero. +
Bit 2 -> Interrupt ID Bit #1 +
Bit 1 -> Interrupt ID Bit #0. These three bits combine to report the category of event that caused the interrupt that is in progress. These categories have priorities, so if multiple categories of events occur at the same time, the UART will report the more important events first and the host must resolve the events in the order they are reported. All events that caused the current interrupt must be resolved before any new interrupts will be generated. (This is a limitation of the PC architecture.) +
2 1 0 Priority Description +
0 1 1 First Received Error (OE, PE, BI, or FE) +
0 1 0 Second Received Data Available +
1 1 0 Second Trigger level identification (Stale data in receive buffer) +
0 0 1 Third Transmitter has room for more words (THRE) +
0 0 0 Fourth Modem Status Change (-CTS, -DSR, -RI, or -DCD) +
Bit 0 -> Interrupt Pending Bit. If this bit is set to "0", then at least one interrupt is pending.
|+0x03
|write/read
|Line Control Register (LCR) +
Bit 7 -> Divisor Latch Access Bit (DLAB). When set, access to the data transmit/receive register (THR/RBR) and the Interrupt Enable Register (IER) is disabled. Any access to these ports is now redirected to the Divisor Latch Registers. Setting this bit, loading the Divisor Registers, and clearing DLAB should be done with interrupts disabled. +
Bit 6 -> Set Break. When set to "1", the transmitter begins to transmit continuous Spacing until this bit is set to "0". This overrides any bits of characters that are being transmitted. +
Bit 5 -> Stick Parity. When parity is enabled, setting this bit causes parity to always be "1" or "0", based on the value of Bit 4. +
Bit 4 -> Even Parity Select (EPS). When parity is enabled and Bit 5 is "0", setting this bit causes even parity to be transmitted and expected. Otherwise, odd parity is used. +
Bit 3 -> Parity Enable (PEN). When set to "1", a parity bit is inserted between the last bit of the data and the Stop Bit. The UART will also expect parity to be present in the received data. +
Bit 2 -> Number of Stop Bits (STB). If set to "1" and using 5-bit data words, 1.5 Stop Bits are transmitted and expected in each data word. For 6, 7 and 8-bit data words, 2 Stop Bits are transmitted and expected. When this bit is set to "0", one Stop Bit is used on each data word. +
Bit 1 -> Word Length Select Bit #1 (WLSB1) +
Bit 0 -> Word Length Select Bit #0 (WLSB0) +
Together these bits specify the number of bits in each data word. +
1 0 Word Length +
0 0 5 Data Bits +
0 1 6 Data Bits +
1 0 7 Data Bits +
1 1 8 Data Bits +
|+0x04
|write/read
|Modem Control Register (MCR) +
Bit 7 -> Reserved, always 0. +
Bit 6 -> Reserved, always 0. +
Bit 5 -> Reserved, always 0. +
Bit 4 -> Loop-Back Enable. When set to "1", the UART transmitter and receiver are internally connected together to allow diagnostic operations. In addition, the UART modem control outputs are connected to the UART modem control inputs. CTS is connected to RTS, DTR is connected to DSR, OUT1 is connected to RI, and OUT 2 is connected to DCD. +
Bit 3 -> OUT 2. An auxiliary output that the host processor may set high or low. In the IBM PC serial adapter (and most clones), OUT 2 is used to tri-state (disable) the interrupt signal from the 8250/16450/16550 UART. +
Bit 2 -> OUT 1. An auxiliary output that the host processor may set high or low. This output is not used on the IBM PC serial adapter. +
Bit 1 -> Request to Send (RTS). When set to "1", the output of the UART -RTS line is Low (Active). +
Bit 0 -> Data Terminal Ready (DTR). When set to "1", the output of the UART -DTR line is Low (Active). +
|+0x05
|write/read
|Line Status Register (LSR) +
Bit 7 -> Error in Receiver FIFO. On the 8250/16450 UART, this bit is zero. This bit is set to "1" when any of the bytes in the FIFO have one or more of the following error conditions: PE, FE, or BI. +
Bit 6 -> Transmitter Empty (TEMT). When set to "1", there are no words remaining in the transmit FIFO or the transmit shift register. The transmitter is completely idle. +
Bit 5 -> Transmitter Holding Register Empty (THRE). When set to "1", the FIFO (or holding register) now has room for at least one additional word to transmit. The transmitter may still be transmitting when this bit is set to "1". +
Bit 4 -> Break Interrupt (BI). The receiver has detected a Break signal. +
Bit 3 -> Framing Error (FE). A Start Bit was detected but the Stop Bit did not appear at the expected time. The received word is probably garbled. +
Bit 2 -> Parity Error (PE). The parity bit was incorrect for the word received. +
Bit 1 -> Overrun Error (OE). A new word was received and there was no room in the receive buffer. The newly-arrived word in the shift register is discarded. On 8250/16450 UARTs, the word in the holding register is discarded and the newly-arrived word is put in the holding register. +
Bit 0 -> Data Ready (DR). One or more words are in the receive FIFO that the host may read. A word must be completely received and moved from the shift register into the FIFO (or holding register for 8250/16450 designs) before this bit is set.
|+0x06
|write/read
|Modem Status Register (MSR) +
Bit 7 -> Data Carrier Detect (DCD). Reflects the state of the DCD line on the UART. +
Bit 6 -> Ring Indicator (RI). Reflects the state of the RI line on the UART. +
Bit 5 -> Data Set Ready (DSR). Reflects the state of the DSR line on the UART. +
Bit 4 -> Clear To Send (CTS). Reflects the state of the CTS line on the UART. +
Bit 3 -> Delta Data Carrier Detect (DDCD). Set to "1" if the -DCD line has changed state one or more times since the last time the MSR was read by the host. +
Bit 2 -> Trailing Edge Ring Indicator (TERI). Set to "1" if the -RI line has had a low to high transition since the last time the MSR was read by the host. +
Bit 1 -> Delta Data Set Ready (DDSR). Set to "1" if the -DSR line has changed state one or more times since the last time the MSR was read by the host. +
Bit 0 -> Delta Clear To Send (DCTS). Set to "1" if the -CTS line has changed state one or more times since the last time the MSR was read by the host. +
|+0x07
|write/read
|Scratch Register (SCR). This register performs no function in the UART. Any value can be written by the host to this location and read by the host later on.
|===
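As a worked example of the divisor arithmetic described for the DLL and DLH registers above: the UART samples each bit sixteen times, so the divided clock runs at sixteen times the desired baud rate. With the standard 1.8432MHz master clock the relationship, and a few common divisor values, are:
[.programlisting]
....
baud rate = 1,843,200 / (16 x divisor)

divisor = 1    ->  1,843,200 / 16    = 115200 baud
divisor = 12   ->  1,843,200 / 192   = 9600 baud
divisor = 96   ->  1,843,200 / 1,536 = 1200 baud
....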
=== Beyond the 16550A UART
Although National Semiconductor has not offered any components compatible with the 16550 that provide additional features, various other vendors have.
Some of these components are described below.
It should be understood that to effectively utilize these improvements, drivers may have to be provided by the chip vendor since most of the popular operating systems do not support features beyond those provided by the 16550.
ST16650::
By default this part is similar to the NS16550A, but an extended 32-byte send and receive buffer can be optionally enabled.
Made by StarTech.
TIL16660::
By default this part behaves similarly to the NS16550A, but an extended 64-byte send and receive buffer can be optionally enabled.
Made by Texas Instruments.
Hayes ESP::
This proprietary plug-in card contains a 2048-byte send and receive buffer, and supports data rates up to 230.4Kbit/sec.
Made by Hayes.
In addition to these "dumb" UARTs, many vendors produce intelligent serial communication boards.
This type of design usually provides a microprocessor that interfaces with several UARTs, processes and buffers the data, and then alerts the main PC processor when necessary.
As the UARTs are not directly accessed by the PC processor in this type of communication system, it is not necessary for the vendor to use UARTs that are compatible with the 8250, 16450, or the 16550 UART.
This leaves the designer free to use components that may have better performance characteristics.
[[sio]]
== Configuring the [.filename]#sio# driver
The [.filename]#sio# driver provides support for NS8250-, NS16450-, NS16550- and NS16550A-based EIA RS-232C (CCITT V.24) communications interfaces.
Several multiport cards are supported as well.
See the man:sio[4] manual page for detailed technical documentation.
=== Digi International (DigiBoard) PC/8
_Contributed by `{awebster}`. 26 August 1995._
Here is a config snippet from a machine with a Digi International PC/8 with 16550.
It has 8 modems connected to these 8 lines, and they work just great.
Do not forget to add `options COM_MULTIPORT` or it will not work very well!
[.programlisting]
....
device sio4 at isa? port 0x100 flags 0xb05
device sio5 at isa? port 0x108 flags 0xb05
device sio6 at isa? port 0x110 flags 0xb05
device sio7 at isa? port 0x118 flags 0xb05
device sio8 at isa? port 0x120 flags 0xb05
device sio9 at isa? port 0x128 flags 0xb05
device sio10 at isa? port 0x130 flags 0xb05
device sio11 at isa? port 0x138 flags 0xb05 irq 9
....
The trick in setting this up is that the most significant byte of the flags represents the last sio port, in this case 11 (0x0b), so the flags are 0xb05.
=== Boca 16
_Contributed by `{whiteside}`. 26 August 1995._
The procedure to make a Boca 16 port board work with FreeBSD is pretty straightforward, but you will need a couple of things to make it work:
. You either need the kernel sources installed so you can recompile the necessary options, or you will need someone else to compile it for you. The 2.0.5 default kernel does _not_ come with multiport support enabled and you will need to add a device entry for each port anyway.
. You will need to know the interrupt and IO settings for your Boca Board so you can set these options properly in the kernel.
One important note - the actual UART chips for the Boca 16 are in the connector box, not on the internal board itself.
So if you have it unplugged, probes of those ports will fail.
I have never tested booting with the box unplugged and plugging it back in, and I suggest you do not either.
If you do not already have a custom kernel configuration file set up, refer to the link:{handbook}#kernelconfig[Kernel Configuration] chapter of the FreeBSD Handbook for general procedures.
The following are the specifics for the Boca 16 board and assume you are using the kernel name MYKERNEL and editing with vi.
[.procedure]
====
. Add the line
+
[.programlisting]
....
options COM_MULTIPORT
....
to the config file.
. Where the current `device sio__n__` lines are, you will need to add 16 more devices. The following example is for a Boca Board with an interrupt of 3 and a base IO address of 100h. The IO address for each port is +8 hexadecimal from the previous port, thus the 100h, 108h, 110h... addresses.
+
[.programlisting]
....
device sio1 at isa? port 0x100 flags 0x1005
device sio2 at isa? port 0x108 flags 0x1005
device sio3 at isa? port 0x110 flags 0x1005
device sio4 at isa? port 0x118 flags 0x1005
...
device sio15 at isa? port 0x170 flags 0x1005
device sio16 at isa? port 0x178 flags 0x1005 irq 3
....
+
The flags entry _must_ be changed from this example unless you are using the exact same sio assignments.
Flags are set according to 0x``__MYY__``, where _M_ indicates the minor number of the master port (the last port on a Boca 16) and _YY_ indicates whether the FIFO is enabled or disabled (enabled), whether IRQ sharing is used (yes), and whether there is an AST/4 compatible IRQ control register (no).
In this example,
+
[.programlisting]
....
flags
0x1005
....
indicates that the master port is sio16.
If I added another board and assigned sio17 through sio32, the flags for all 16 ports on _that_ board would be 0x2005, where 20 indicates the minor number of the new master port (sio32).
Do not change the 05 setting.
. Save and complete the kernel configuration, recompile, install, and reboot. Presuming you have successfully installed the recompiled kernel and have it set to the correct address and IRQ, your boot messages should indicate the successful probe of the Boca ports as follows (obviously the sio numbers, IO addresses, and IRQs could be different):
+
[source,shell]
....
sio1 at 0x100-0x107 flags 0x1005 on isa
sio1: type 16550A (multiport)
sio2 at 0x108-0x10f flags 0x1005 on isa
sio2: type 16550A (multiport)
sio3 at 0x110-0x117 flags 0x1005 on isa
sio3: type 16550A (multiport)
sio4 at 0x118-0x11f flags 0x1005 on isa
sio4: type 16550A (multiport)
sio5 at 0x120-0x127 flags 0x1005 on isa
sio5: type 16550A (multiport)
sio6 at 0x128-0x12f flags 0x1005 on isa
sio6: type 16550A (multiport)
sio7 at 0x130-0x137 flags 0x1005 on isa
sio7: type 16550A (multiport)
sio8 at 0x138-0x13f flags 0x1005 on isa
sio8: type 16550A (multiport)
sio9 at 0x140-0x147 flags 0x1005 on isa
sio9: type 16550A (multiport)
sio10 at 0x148-0x14f flags 0x1005 on isa
sio10: type 16550A (multiport)
sio11 at 0x150-0x157 flags 0x1005 on isa
sio11: type 16550A (multiport)
sio12 at 0x158-0x15f flags 0x1005 on isa
sio12: type 16550A (multiport)
sio13 at 0x160-0x167 flags 0x1005 on isa
sio13: type 16550A (multiport)
sio14 at 0x168-0x16f flags 0x1005 on isa
sio14: type 16550A (multiport)
sio15 at 0x170-0x177 flags 0x1005 on isa
sio15: type 16550A (multiport)
sio16 at 0x178-0x17f irq 3 flags 0x1005 on isa
sio16: type 16550A (multiport master)
....
+
If the messages go by too fast to see,
+
[source,shell]
....
# dmesg | more
....
will show you the boot messages.
. Next, appropriate entries in [.filename]#/dev# for the devices must be made using the [.filename]#/dev/MAKEDEV# script. This step can be omitted if you are running FreeBSD 5.X with a kernel that has man:devfs[5] support compiled in.
+
If you do need to create the [.filename]#/dev# entries, run the following as `root`:
+
[source,shell]
....
# cd /dev
# ./MAKEDEV tty1
# ./MAKEDEV cua1
(everything in between)
# ./MAKEDEV ttyg
# ./MAKEDEV cuag
....
+
If you do not want or need call-out devices for some reason, you can dispense with making the [.filename]#cua*# devices.
. If you want a quick and sloppy way to make sure the devices are working, you can simply plug a modem into each port and (as root)
+
[source,shell]
....
# echo at > ttyd*
....
for each device you have made. You _should_ see the RX lights flash for each working port.
====
=== Support for Cheap Multi-UART Cards
_Contributed by Helge Oldach_ mailto:hmo@sep.hamburg.com[hmo@sep.hamburg.com], September 1999
Ever wondered about FreeBSD support for your $20 multi-I/O card with two (or more) COM ports, sharing IRQs? Here is how:
Usually the only option to support this kind of board is to use a distinct IRQ for each port.
For example, if your CPU board has an on-board [.filename]#COM1# port (aka [.filename]#sio0#-I/O address 0x3F8 and IRQ 4) and you have an extension board with two UARTs, you will commonly need to configure them as [.filename]#COM2# (aka [.filename]#sio1#-I/O address 0x2F8 and IRQ 3), and the third port (aka [.filename]#sio2#) as I/O 0x3E8 and IRQ 5.
Obviously this is a waste of IRQ resources, as it should be basically possible to run both extension board ports using a single IRQ with the `COM_MULTIPORT` configuration described in the previous sections.
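For reference, such a conventional one-IRQ-per-port setup might look roughly like the following sketch in the kernel configuration file (the addresses and IRQs are the common defaults mentioned above and may differ on your hardware):
[.programlisting]
....
# on-board COM1 port
device sio0 at isa? port "IO_COM1" flags 0x10 irq 4
# extension board, one IRQ per port
device sio1 at isa? port "IO_COM2" irq 3
device sio2 at isa? port "IO_COM3" irq 5
....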
Such cheap I/O boards commonly have a 4 by 3 jumper matrix for the COM ports, similar to the following:
[.programlisting]
....
o o o *
Port A |
o * o *
Port B |
o * o o
IRQ 2 3 4 5
....
Shown here is port A wired for IRQ 5 and port B wired for IRQ 3.
The IRQ columns on your specific board may vary - other boards may supply jumpers for IRQs 3, 4, 5, and 7 instead.
One could conclude that wiring both ports for IRQ 3 using a handcrafted wire-made jumper covering all three connection points in the IRQ 3 column would solve the issue, but no.
You cannot duplicate IRQ 3 because the output drivers of each UART are wired in a "totem pole" fashion, so if one of the UARTs drives IRQ 3, the output signal will not be what you would expect.
Depending on the implementation of the extension board or your motherboard, the IRQ 3 line will continuously stay up, or always stay low.
You need to decouple the IRQ drivers for the two UARTs, so that the IRQ line of the board goes up if (and only if) one of the UARTs asserts an IRQ, and stays low otherwise.
The solution was proposed by Joerg Wunsch mailto:j@ida.interface-business.de[j@ida.interface-business.de]: To solder up a wired-or consisting of two diodes (Germanium or Schottky-types strongly preferred) and a 1 kOhm resistor.
Here is the schematic, starting from the 4 by 3 jumper field above:
[.programlisting]
....
Diode
+---------->|-------+
/ |
o * o o | 1 kOhm
Port A +----|######|-------+
o * o o | |
Port B `-------------------+ ==+==
o * o o | Ground
\ |
+--------->|-------+
IRQ 2 3 4 5 Diode
....
The cathodes of the diodes are connected to a common point, together with a 1 kOhm pull-down resistor.
It is essential to connect the resistor to ground to avoid floating of the IRQ line on the bus.
Now we are ready to configure a kernel.
Staying with this example, we would configure:
[.programlisting]
....
# standard on-board COM1 port
device sio0 at isa? port "IO_COM1" flags 0x10
# patched-up multi-I/O extension board
options COM_MULTIPORT
device sio1 at isa? port "IO_COM2" flags 0x205
device sio2 at isa? port "IO_COM3" flags 0x205 irq 3
....
Note that the `flags` setting for [.filename]#sio1# and [.filename]#sio2# is truly essential; refer to man:sio[4] for details.
(Generally, the `2` in the "flags" attribute refers to [.filename]#sio#`2` which holds the IRQ, and you surely want a `5` low nibble.)
With kernel verbose mode turned on this should yield something similar to this:
[source,shell]
....
sio0: irq maps: 0x1 0x11 0x1 0x1
sio0 at 0x3f8-0x3ff irq 4 flags 0x10 on isa
sio0: type 16550A
sio1: irq maps: 0x1 0x9 0x1 0x1
sio1 at 0x2f8-0x2ff flags 0x205 on isa
sio1: type 16550A (multiport)
sio2: irq maps: 0x1 0x9 0x1 0x1
sio2 at 0x3e8-0x3ef irq 3 flags 0x205 on isa
sio2: type 16550A (multiport master)
....
Though [.filename]#/sys/i386/isa/sio.c# is somewhat cryptic with its use of the "irq maps" array above, the basic idea is that you observe `0x1` in the first, third, and fourth place.
This means that the corresponding IRQ was set upon output and cleared after, which is just what we would expect.
If your kernel does not display this behavior, most likely there is something wrong with your wiring.
[[cy]]
== Configuring the [.filename]#cy# driver
_Contributed by Alex Nash. 6 June 1996._
The Cyclades multiport cards use the [.filename]#cy# driver instead of the usual [.filename]#sio# driver used by other multiport cards.
Configuration is a simple matter of:
[.procedure]
====
. Add the [.filename]#cy# device to your kernel configuration (note that your irq and iomem settings may differ).
+
[.programlisting]
....
device cy0 at isa? irq 10 iomem 0xd4000 iosiz 0x2000
....
. Rebuild and install the new kernel.
. Make the device nodes by typing (the following example assumes an 8-port board):
+
[source,shell]
....
# cd /dev
# for i in 0 1 2 3 4 5 6 7;do ./MAKEDEV cuac$i ttyc$i;done
....
. If appropriate, add dialup entries to [.filename]#/etc/ttys# by duplicating serial device (`ttyd`) entries and using `ttyc` in place of `ttyd`. For example:
+
[.programlisting]
....
ttyc0 "/usr/libexec/getty std.38400" unknown on insecure
ttyc1 "/usr/libexec/getty std.38400" unknown on insecure
ttyc2 "/usr/libexec/getty std.38400" unknown on insecure
...
ttyc7 "/usr/libexec/getty std.38400" unknown on insecure
....
. Reboot with the new kernel.
====
== Configuring the [.filename]#si# driver
_Contributed by `{nsayer}`. 25 March 1998._
The Specialix SI/XIO and SX multiport cards use the [.filename]#si# driver.
A single machine can have up to 4 host cards.
The following host cards are supported:
* ISA SI/XIO host card (2 versions)
* EISA SI/XIO host card
* PCI SI/XIO host card
* ISA SX host card
* PCI SX host card
Although the SX and SI/XIO host cards look markedly different, their functionality is basically the same.
The host cards do not use I/O locations, but instead require a 32K chunk of memory.
The factory configuration for ISA cards places this at `0xd0000-0xd7fff`.
They also require an IRQ.
PCI cards will, of course, auto-configure themselves.
You can attach up to 4 external modules to each host card.
The external modules contain either 4 or 8 serial ports.
They come in the following varieties:
* SI 4 or 8 port modules. Up to 57600 bps on each port supported.
* XIO 8 port modules. Up to 115200 bps on each port supported. One type of XIO module has 7 serial and 1 parallel port.
* SXDC 8 port modules. Up to 921600 bps on each port supported. Like XIO, a module is available with one parallel port as well.
To configure an ISA host card, add the following line to your kernel configuration file, changing the numbers as appropriate:
[.programlisting]
....
device si0 at isa? iomem 0xd0000 irq 11
....
Valid IRQ numbers are 9, 10, 11, 12 and 15 for SX ISA host cards and 11, 12 and 15 for SI/XIO ISA host cards.
To configure an EISA or PCI host card, use this line:
[.programlisting]
....
device si0
....
After adding the configuration entry, rebuild and install your new kernel.
[NOTE]
====
The following step is not necessary if you are using man:devfs[5] in FreeBSD 5._X_.
====
After rebooting with the new kernel, you need to make the device nodes in [.filename]#/dev#.
The [.filename]#MAKEDEV# script will take care of this for you.
Count how many total ports you have and type:
[source,shell]
....
# cd /dev
# ./MAKEDEV ttyAnn cuaAnn
....
(where _nn_ is the number of ports)
If you want login prompts to appear on these ports, you will need to add lines like this to [.filename]#/etc/ttys#:
[.programlisting]
....
ttyA01 "/usr/libexec/getty std.9600" vt100 on insecure
....
Change the terminal type as appropriate.
For modems, `dialup` or `unknown` is fine.
diff --git a/documentation/content/en/articles/solid-state/_index.adoc b/documentation/content/en/articles/solid-state/_index.adoc
index 988a7e5aad..4a1b022fdb 100644
--- a/documentation/content/en/articles/solid-state/_index.adoc
+++ b/documentation/content/en/articles/solid-state/_index.adoc
@@ -1,304 +1,304 @@
---
title: FreeBSD and Solid State Devices
authors:
- author: John Kozubik
email: john@kozubik.com
-copyright: 2001, 2009 The FreeBSD Documentation Project
-releaseinfo: "$FreeBSD$"
+copyright: 2001 - 2021 The FreeBSD Documentation Project
+description: FreeBSD and Solid State Devices
trademarks: ["freebsd", "general"]
---
= FreeBSD and Solid State Devices
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
[.abstract-title]
Abstract
This article covers the use of solid state disk devices in FreeBSD to create embedded systems.
Embedded systems have the advantage of increased stability due to the lack of integral moving parts (hard drives).
Account must be taken, however, of the generally small amount of disk space available in the system and of the durability of the storage medium.
Specific topics to be covered include the types and attributes of solid state media suitable for disk use in FreeBSD, kernel options that are of interest in such an environment, the [.filename]#rc.initdiskless# mechanisms that automate the initialization of such systems and the need for read-only filesystems, and building filesystems from scratch.
The article will conclude with some general strategies for small and read-only FreeBSD environments.
'''
toc::[]
[[intro]]
== Solid State Disk Devices
The scope of this article will be limited to solid state disk devices made from flash memory.
Flash memory is a solid state memory (no moving parts) that is non-volatile (the memory maintains data even after all power sources have been disconnected).
Flash memory can withstand tremendous physical shock and is reasonably fast (the flash memory solutions covered in this article are slightly slower than an EIDE hard disk for write operations, and much faster for read operations).
One very important aspect of flash memory, the ramifications of which will be discussed later in this article, is that each sector has a limited rewrite capacity.
You can only write, erase, and write again to a sector of flash memory a certain number of times before the sector becomes permanently unusable.
Although many flash memory products automatically map bad blocks, and although some even distribute write operations evenly throughout the unit, the fact remains that there exists a limit to the amount of writing that can be done to the device.
Competitive units have between 1,000,000 and 10,000,000 writes per sector in their specification.
This figure varies due to the temperature of the environment.
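To put these endurance figures into perspective, a rough and purely illustrative calculation (the once-every-30-seconds write rate is an arbitrary assumption) shows how quickly a frequently rewritten sector can reach its limit:
[.programlisting]
....
endurance:          1,000,000 writes to a single sector
assumed write rate: one write to that sector every 30 seconds

1,000,000 x 30 s = 30,000,000 s = roughly 347 days
....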
Specifically, we will be discussing ATA compatible compact-flash units, which are quite popular as storage media for digital cameras.
Of particular interest is the fact that they pin out directly to the IDE bus and are compatible with the ATA command set.
Therefore, with a very simple and low-cost adaptor, these devices can be attached directly to an IDE bus in a computer.
Once implemented in this manner, operating systems such as FreeBSD see the device as a normal hard disk (albeit small).
Other solid state disk solutions do exist, but their expense, obscurity, and relative difficulty of use place them beyond the scope of this article.
[[kernel]]
== Kernel Options
A few kernel options are of specific interest to those creating an embedded FreeBSD system.
Anyone building an embedded FreeBSD system that uses flash memory as the system disk will be interested in memory disks and memory filesystems.
As a result of the limited number of writes that can be done to flash memory, the disk and the filesystems on the disk will most likely be mounted read-only.
In this environment, filesystems such as [.filename]#/tmp# and [.filename]#/var# are mounted as memory filesystems to allow the system to create logs and update counters and temporary files.
Memory filesystems are a critical component to a successful solid state FreeBSD implementation.
You should make sure the following lines exist in your kernel configuration file:
[.programlisting]
....
options MFS # Memory Filesystem
options MD_ROOT # md device usable as a potential root device
pseudo-device md # memory disk
....
[[ro-fs]]
== The `rc` Subsystem and Read-Only Filesystems
The post-boot initialization of an embedded FreeBSD system is controlled by [.filename]#/etc/rc.initdiskless#.
[.filename]#/etc/rc.d/var# mounts [.filename]#/var# as a memory filesystem, makes a configurable list of directories in [.filename]#/var# with the man:mkdir[1] command, and changes modes on some of those directories.
In the execution of [.filename]#/etc/rc.d/var#, one other [.filename]#rc.conf# variable comes into play - `varsize`.
A [.filename]#/var# partition is created by [.filename]#/etc/rc.d/var# based on the value of this variable in [.filename]#rc.conf#:
[.programlisting]
....
varsize=8192
....
Remember that this value is in sectors by default.
The fact that [.filename]#/var# is a read-write filesystem is an important distinction, as the [.filename]#/# partition (and any other partitions you may have on your flash media) should be mounted read-only.
Remember that in <<intro>> we detailed the limitations of flash memory - specifically the limited write capability.
The importance of not mounting filesystems on flash media read-write, and the importance of not using a swap file, cannot be overstated.
A swap file on a busy system can burn through a piece of flash media in less than one year.
Heavy logging or temporary file creation and destruction can do the same.
Therefore, in addition to removing the `swap` entry from your [.filename]#/etc/fstab#, you should also change the Options field for each filesystem to `ro` as follows:
[.programlisting]
....
# Device Mountpoint FStype Options Dump Pass#
/dev/ad0s1a / ufs ro 1 1
....
A few applications in the average system will immediately begin to fail as a result of this change.
For instance, cron will not run properly because the crontabs are missing from the [.filename]#/var# created by [.filename]#/etc/rc.d/var#, and syslog and dhcp will encounter problems as well because of the read-only filesystem and the items missing from that newly created [.filename]#/var#.
These are only temporary problems, though, and they are addressed, along with solutions for running other common software packages, in <<strategies>>.
An important thing to remember is that a filesystem that was mounted read-only with [.filename]#/etc/fstab# can be made read-write at any time by issuing the command:
[source,shell]
....
# /sbin/mount -uw partition
....
and can be toggled back to read-only with the command:
[source,shell]
....
# /sbin/mount -ur partition
....
== Building a File System from Scratch
Since ATA compatible compact-flash cards are seen by FreeBSD as normal IDE hard drives, you could theoretically install FreeBSD from the network using the kern and mfsroot floppies or from a CD.
However, even a small installation of FreeBSD using normal installation procedures can produce a system more than 200 megabytes in size.
Most people will be using smaller flash memory devices (128 megabytes is considered fairly large - 32 or even 16 megabytes is common), so an installation using normal mechanisms is not possible - there is simply not enough disk space for even the smallest of conventional installations.
The easiest way to overcome this space limitation is to install FreeBSD using conventional means to a normal hard disk.
After the installation is complete, pare down the operating system to a size that will fit onto your flash media, then tar the entire filesystem.
The following steps will guide you through the process of preparing a piece of flash memory for your tarred filesystem.
Remember, because a normal installation is not being performed, operations such as partitioning, labeling, file-system creation, etc. need to be performed by hand.
In addition to the kern and mfsroot floppy disks, you will also need to use the fixit floppy.
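Before starting the procedure, you will also want the tar file of your pared-down system at hand. A minimal sketch of creating it on the donor machine follows (the output path [.filename]#/otherdisk/tarfile.tar# is only an illustration - write the archive somewhere that is not itself being archived):
[source,shell]
....
# cd /
# tar cvf /otherdisk/tarfile.tar bin boot dev etc root sbin usr var
....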
[.procedure]
====
. Partitioning Your Flash Media Device
+
After booting with the kern and mfsroot floppies, choose `custom` from the installation menu.
In the custom installation menu, choose `partition`.
In the partition menu, you should delete all existing partitions using kbd:[d].
After deleting all existing partitions, create a partition using kbd:[c] and accept the default value for the size of the partition.
When asked for the type of the partition, make sure the value is set to `165`.
Now write this partition table to the disk by pressing kbd:[w] (this is a hidden option on this screen).
If you are using an ATA compatible compact flash card, you should choose the FreeBSD Boot Manager.
Now press kbd:[q] to quit the partition menu.
You will be shown the boot manager menu once more - repeat the choice you made earlier.
. Creating Filesystems on Your Flash Memory Device
+
Exit the custom installation menu, and from the main installation menu choose the `fixit` option.
After entering the fixit environment, enter the following command:
+
[source,shell]
....
# disklabel -e /dev/ad0c
....
+
At this point you will have entered the vi editor under the auspices of the disklabel command.
Next, you need to add an `a:` line at the end of the file. This `a:` line should look like:
+
[.programlisting]
....
a: 123456 0 4.2BSD 0 0
....
+
Where _123456_ is a number that is exactly the same as the number in the existing `c:` entry for size.
Basically you are duplicating the existing `c:` line as an `a:` line, making sure that fstype is `4.2BSD`.
Save the file and exit.
+
[source,shell]
....
# disklabel -B -r /dev/ad0c
# newfs /dev/ad0a
....
. Placing Your Filesystem on the Flash Media
+
Mount the newly prepared flash media:
+
[source,shell]
....
# mount /dev/ad0a /flash
....
+
Bring this machine up on the network so we may transfer our tar file and explode it onto our flash media filesystem.
One example of how to do this is:
+
[source,shell]
....
# ifconfig xl0 192.168.0.10 netmask 255.255.255.0
# route add default 192.168.0.1
....
+
Now that the machine is on the network, transfer your tar file.
You may be faced with a bit of a dilemma at this point - if your flash memory part is 128 megabytes, for instance, and your tar file is larger than 64 megabytes, you cannot have your tar file on the flash media at the same time as you explode it - you will run out of space.
One solution to this problem, if you are using FTP, is to untar the file while it is transferred over FTP.
If you perform your transfer in this manner, you will never have the tar file and the tar contents on your disk at the same time:
+
[source,shell]
....
ftp> get tarfile.tar "| tar xvf -"
....
+
If your tarfile is gzipped, you can accomplish this as well:
+
[source,shell]
....
ftp> get tarfile.tar "| zcat | tar xvf -"
....
+
After the contents of your tarred filesystem are on your flash memory filesystem, you can unmount the flash memory and reboot:
+
[source,shell]
....
# cd /
# umount /flash
# exit
....
+
Assuming that you configured your filesystem correctly when it was built on the normal hard disk (with your filesystems mounted read-only, and with the necessary options compiled into the kernel) you should now be successfully booting your FreeBSD embedded system.
====
[[strategies]]
== System Strategies for Small and Read Only Environments
In <<ro-fs>>, it was pointed out that the [.filename]#/var# filesystem constructed by [.filename]#/etc/rc.d/var# and the presence of a read-only root filesystem cause problems with many common software packages used with FreeBSD.
In this article, suggestions for successfully running cron, syslog, ports installations, and the Apache web server will be provided.
=== Cron
Upon boot, [.filename]#/var# gets populated by [.filename]#/etc/rc.d/var# using the list from [.filename]#/etc/mtree/BSD.var.dist#, so the [.filename]#cron#, [.filename]#cron/tabs#, [.filename]#at#, and a few other standard directories get created.
However, this does not solve the problem of maintaining cron tabs across reboots.
When the system reboots, the [.filename]#/var# filesystem that is in memory will disappear and any cron tabs you may have had in it will also disappear.
Therefore, one solution would be to create cron tabs for the users that need them, mount your [.filename]#/# filesystem as read-write and copy those cron tabs to somewhere safe, like [.filename]#/etc/tabs#, then add a line to the end of [.filename]#/etc/rc.initdiskless# that copies those crontabs into [.filename]#/var/cron/tabs# after that directory has been created during system initialization.
You may also need to add a line that changes modes and permissions on the directories you create and the files you copy with [.filename]#/etc/rc.initdiskless#.
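As a sketch, assuming the crontabs were saved to [.filename]#/etc/tabs# as suggested above, the lines appended to [.filename]#/etc/rc.initdiskless# might look like:
[.programlisting]
....
# restore saved crontabs into the freshly created memory-backed /var
cp -p /etc/tabs/* /var/cron/tabs
chmod 0600 /var/cron/tabs/*
....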
=== Syslog
[.filename]#syslog.conf# specifies the locations of certain log files that exist in [.filename]#/var/log#.
These files are not created by [.filename]#/etc/rc.d/var# upon system initialization.
Therefore, somewhere in [.filename]#/etc/rc.d/var#, after the section that creates the directories in [.filename]#/var#, you will need to add something like this:
[source,shell]
....
# touch /var/log/security /var/log/maillog /var/log/cron /var/log/messages
# chmod 0644 /var/log/*
....
=== Ports Installation
Before discussing the changes necessary to successfully use the ports tree, a reminder is necessary regarding the read-only nature of your filesystems on the flash media.
Since they are read-only, you will need to temporarily mount them read-write using the mount syntax shown in <<ro-fs>>.
You should always remount those filesystems read-only when you are done with any maintenance - unnecessary writes to the flash media could considerably shorten its lifespan.
To make it possible to enter a ports directory and successfully run `make install`, we must create a packages directory on a non-memory filesystem that will keep track of our packages across reboots.
As it is necessary to mount your filesystems as read-write for the installation of a package anyway, it is sensible to assume that an area on the flash media can also be used for package information to be written to.
First, create a package database directory.
This is normally in [.filename]#/var/db/pkg#, but we cannot place it there as it will disappear every time the system is booted.
[source,shell]
....
# mkdir /etc/pkg
....
Now, add a line to [.filename]#/etc/rc.d/var# that links the [.filename]#/etc/pkg# directory to [.filename]#/var/db/pkg#. An example:
[source,shell]
....
# ln -s /etc/pkg /var/db/pkg
....
Now, any time that you mount your filesystems as read-write and install a package, the `make install` will work, and package information will be written successfully to [.filename]#/etc/pkg# (because the filesystem will, at that time, be mounted read-write), which will always be available to the operating system as [.filename]#/var/db/pkg#.
=== Apache Web Server
[NOTE]
====
The steps in this section are only necessary if Apache is set up to write its pid or log information outside of [.filename]#/var#.
By default, Apache keeps its pid file in [.filename]#/var/run/httpd.pid# and its log files in [.filename]#/var/log#.
====
It is now assumed that Apache keeps its log files in a directory [.filename]#apache_log_dir# outside of [.filename]#/var#.
When this directory lives on a read-only filesystem, Apache will not be able to save any log files, and may have problems working.
If so, it is necessary to add a new directory to the list of directories in [.filename]#/etc/rc.d/var# to create in [.filename]#/var#, and to link [.filename]#apache_log_dir# to [.filename]#/var/log/apache#.
It is also necessary to set permissions and ownership on this new directory.
First, add the directory `log/apache` to the list of directories to be created in [.filename]#/etc/rc.d/var#.
Second, add these commands to [.filename]#/etc/rc.d/var# after the directory creation section:
[source,shell]
....
# chmod 0774 /var/log/apache
# chown nobody:nobody /var/log/apache
....
Finally, remove the existing [.filename]#apache_log_dir# directory, and replace it with a link:
[source,shell]
....
# rm -rf apache_log_dir
# ln -s /var/log/apache apache_log_dir
....
diff --git a/documentation/content/en/articles/vinum/_index.adoc b/documentation/content/en/articles/vinum/_index.adoc
index 20d786ab5a..329a417bcc 100644
--- a/documentation/content/en/articles/vinum/_index.adoc
+++ b/documentation/content/en/articles/vinum/_index.adoc
@@ -1,697 +1,698 @@
---
title: The vinum Volume Manager
authors:
- author: Greg Lehey
+description: The vinum Volume Manager in FreeBSD
---
= The vinum Volume Manager
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
include::shared/en/urls.adoc[]
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../images/articles/vinum/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/articles/vinum/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/articles/vinum/
endif::[]
'''
toc::[]
[[vinum-synopsis]]
== Synopsis
No matter the type of disks, there are always potential problems.
The disks can be too small, too slow, or too unreliable to meet the system's requirements.
While disks are getting bigger, so are data storage requirements.
Often a file system is needed that is bigger than a disk's capacity.
Various solutions to these problems have been proposed and implemented.
One method is through the use of multiple, and sometimes redundant, disks.
In addition to supporting various cards and controllers for hardware Redundant Array of Independent Disks (RAID) systems, the base FreeBSD system includes the [.filename]#vinum# volume manager, a block device driver that implements virtual disk drives and addresses these three problems.
[.filename]#vinum# provides more flexibility, performance, and reliability than traditional disk storage and implements `RAID`-0, `RAID`-1, and `RAID`-5 models, both individually and in combination.
This chapter provides an overview of potential problems with traditional disk storage, and an introduction to the [.filename]#vinum# volume manager.
[NOTE]
====
Starting with FreeBSD 5, [.filename]#vinum# has been rewritten in order to fit into the link:{handbook}#geom[GEOM architecture], while retaining the original ideas, terminology, and on-disk metadata.
This rewrite is called _gvinum_ (for _GEOM vinum_).
While this chapter uses the term [.filename]#vinum#, any command invocations should be performed with `gvinum`.
The name of the kernel module has changed from the original [.filename]#vinum.ko# to [.filename]#geom_vinum.ko#, and all device nodes reside under [.filename]#/dev/gvinum# instead of [.filename]#/dev/vinum#.
As of FreeBSD 6, the original [.filename]#vinum# implementation is no longer available in the code base.
====
[[vinum-access-bottlenecks]]
== Access Bottlenecks
Modern systems frequently need to access data in a highly concurrent manner.
For example, large FTP or HTTP servers can maintain thousands of concurrent sessions and have multiple 100 Mbit/s connections to the outside world, well beyond the sustained transfer rate of most disks.
Current disk drives can transfer data sequentially at up to 70 MB/s, but this value is of little importance in an environment where many independent processes access a drive, and where they may achieve only a fraction of these values.
In such cases, it is more interesting to view the problem from the viewpoint of the disk subsystem.
The important parameter is the load that a transfer places on the subsystem, or the time for which a transfer occupies the drives involved in the transfer.
In any disk transfer, the drive must first position the heads, wait for the first sector to pass under the read head, and then perform the transfer.
These actions can be considered to be atomic as it does not make any sense to interrupt them.
[[vinum-latency]] Consider a typical transfer of about 10 kB: the current generation of high-performance disks can position the heads in an average of 3.5 ms.
The fastest drives spin at 15,000 rpm, so the average rotational latency (half a revolution) is 2 ms.
At 70 MB/s, the transfer itself takes about 150 μs, almost nothing compared to the positioning time.
In such a case, the effective transfer rate drops to under 2 MB/s and is clearly highly dependent on the transfer size.
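Putting these figures together gives a rough picture of where the time goes for such a 10 kB transfer:
[.programlisting]
....
head positioning                   3.50 ms
rotational latency (15,000 rpm)    2.00 ms
data transfer (10 kB at 70 MB/s)   0.15 ms
                                   -------
total                             ~5.65 ms per request

10 kB every 5.65 ms = roughly 1.8 MB/s effective throughput
....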
The traditional and obvious solution to this bottleneck is "more spindles": rather than using one large disk, use several smaller disks with the same aggregate storage space.
Each disk is capable of positioning and transferring independently, so the effective throughput increases by a factor close to the number of disks used.
The actual throughput improvement is smaller than the number of disks involved.
Although each drive is capable of transferring in parallel, there is no way to ensure that the requests are evenly distributed across the drives.
Inevitably the load on one drive will be higher than on another.
The evenness of the load on the disks is strongly dependent on the way the data is shared across the drives.
In the following discussion, it is convenient to think of the disk storage as a large number of data sectors which are addressable by number, rather like the pages in a book.
The most obvious method is to divide the virtual disk into groups of consecutive sectors the size of the individual physical disks and store them in this manner, rather like taking a large book and tearing it into smaller sections.
This method is called _concatenation_ and has the advantage that the disks are not required to have any specific size relationships.
It works well when the access to the virtual disk is spread evenly about its address space.
When access is concentrated on a smaller area, the improvement is less marked.
<<vinum-concat, Concatenated Organization>> illustrates the sequence in which storage units are allocated in a concatenated organization.
[[vinum-concat]]
.Concatenated Organization
image::vinum-concat.png[]
An alternative mapping is to divide the address space into smaller, equal-sized components and store them sequentially on different devices.
For example, the first 256 sectors may be stored on the first disk, the next 256 sectors on the next disk and so on.
After filling the last disk, the process repeats until the disks are full.
This mapping is called _striping_ or RAID-0.
`RAID` offers various forms of fault tolerance, though the name RAID-0 is somewhat misleading as this level provides no redundancy.
Striping requires somewhat more effort to locate the data, and it can cause additional I/O load where a transfer is spread over multiple disks, but it can also provide a more constant load across the disks.
<<vinum-striped, Striped Organization>> illustrates the sequence in which storage units are allocated in a striped organization.
[[vinum-striped]]
.Striped Organization
image::vinum-striped.png[]
[[vinum-data-integrity]]
== Data Integrity
The final problem with disks is that they are unreliable.
Although reliability has increased tremendously over the last few years, disk drives are still the most likely core component of a server to fail.
When they do, the results can be catastrophic and replacing a failed disk drive and restoring data can result in server downtime.
One approach to this problem is _mirroring_, or `RAID-1`, which keeps two copies of the data on different physical hardware.
Any write to the volume writes to both disks; a read can be satisfied from either, so if one drive fails, the data is still available on the other drive.
Mirroring has two problems:
* It requires twice as much disk storage as a non-redundant solution.
* Writes must be performed to both drives, so they take up twice the bandwidth of a non-mirrored volume. Reads do not suffer from a performance penalty and can even be faster.
An alternative solution is _parity_, implemented in `RAID` levels 2, 3, 4 and 5.
Of these, `RAID-5` is the most interesting.
As implemented in [.filename]#vinum#, a `RAID-5` plex is a variant of a striped organization which dedicates one block of each stripe to parity for the other blocks.
As required by `RAID-5`, the location of this parity block changes from one stripe to the next.
The numbers in the data blocks indicate the relative block numbers.
[[vinum-raid5-org]]
.`RAID`-5 Organization
image::vinum-raid5-org.png[]
Compared to mirroring, `RAID-5` has the advantage of requiring significantly less storage space.
Read access is similar to that of striped organizations, but write access is significantly slower, approximately 25% of the read performance.
If one drive fails, the array can continue to operate in degraded mode where a read from one of the remaining accessible drives continues normally, but a read from the failed drive is recalculated from the corresponding block from all the remaining drives.
[[vinum-objects]]
== [.filename]#vinum# Objects
In order to address these problems, [.filename]#vinum# implements a four-level hierarchy of objects:
* The most visible object is the virtual disk, called a _volume_. Volumes have essentially the same properties as a UNIX(R) disk drive, though there are some minor differences. For one, they have no size limitations.
* Volumes are composed of _plexes_, each of which represent the total address space of a volume. This level in the hierarchy provides redundancy. Think of plexes as individual disks in a mirrored array, each containing the same data.
* Since [.filename]#vinum# exists within the UNIX(R) disk storage framework, it would be possible to use UNIX(R) partitions as the building block for multi-disk plexes. In fact, this turns out to be too inflexible as UNIX(R) disks can have only a limited number of partitions. Instead, [.filename]#vinum# subdivides a single UNIX(R) partition, the _drive_, into contiguous areas called _subdisks_, which are used as building blocks for plexes.
* Subdisks reside on [.filename]#vinum# _drives_, currently UNIX(R) partitions. [.filename]#vinum# drives can contain any number of subdisks. With the exception of a small area at the beginning of the drive, which is used for storing configuration and state information, the entire drive is available for data storage.
The following sections describe the way these objects provide the functionality required of [.filename]#vinum#.
=== Volume Size Considerations
Plexes can include multiple subdisks spread over all drives in the [.filename]#vinum# configuration.
As a result, the size of an individual drive does not limit the size of a plex or a volume.
=== Redundant Data Storage
[.filename]#vinum# implements mirroring by attaching multiple plexes to a volume.
Each plex is a representation of the data in a volume.
A volume may contain between one and eight plexes.
Although a plex represents the complete data of a volume, it is possible for parts of the representation to be physically missing, either by design (by not defining a subdisk for parts of the plex) or by accident (as a result of the failure of a drive).
As long as at least one plex can provide the data for the complete address range of the volume, the volume is fully functional.
=== Which Plex Organization?
[.filename]#vinum# implements both concatenation and striping at the plex level:
* A _concatenated plex_ uses the address space of each subdisk in turn. Concatenated plexes are the most flexible as they can contain any number of subdisks, and the subdisks may be of different length. The plex may be extended by adding additional subdisks. They require less CPU time than striped plexes, though the difference in CPU overhead is not measurable. On the other hand, they are most susceptible to hot spots, where one disk is very active and others are idle.
* A _striped plex_ stripes the data across each subdisk. The subdisks must all be the same size and there must be at least two subdisks in order to distinguish it from a concatenated plex. The greatest advantage of striped plexes is that they reduce hot spots. By choosing an optimum sized stripe, about 256 kB, the load can be evened out on the component drives. Extending a plex by adding new subdisks is so complicated that [.filename]#vinum# does not implement it.
<<vinum-comparison, [.filename]#vinum# Plex Organizations>> summarizes the advantages and disadvantages of each plex organization.
[[vinum-comparison]]
.[.filename]#vinum# Plex Organizations
[cols="1,1,1,1,1", frame="none", options="header"]
|===
| Plex type
| Minimum subdisks
| Can add subdisks
| Must be equal size
| Application
|concatenated
|1
|yes
|no
|Large data storage with maximum placement flexibility and moderate performance
|striped
|2
|no
|yes
|High performance in combination with highly concurrent access
|===
[[vinum-examples]]
== Some Examples
[.filename]#vinum# maintains a _configuration database_ which describes the objects known to an individual system.
Initially, the user creates the configuration database from one or more configuration files using man:gvinum[8].
[.filename]#vinum# stores a copy of its configuration database on each disk _device_ under its control.
This database is updated on each state change, so that a restart accurately restores the state of each [.filename]#vinum# object.
=== The Configuration File
The configuration file describes individual [.filename]#vinum# objects.
The definition of a simple volume might be:
[.programlisting]
....
drive a device /dev/da3h
volume myvol
plex org concat
sd length 512m drive a
....
This file describes four [.filename]#vinum# objects:
* The _drive_ line describes a disk partition (_drive_) and its location relative to the underlying hardware. It is given the symbolic name _a_. This separation of symbolic names from device names allows disks to be moved from one location to another without confusion.
* The _volume_ line describes a volume. The only required attribute is the name, in this case _myvol_.
* The _plex_ line defines a plex. The only required parameter is the organization, in this case _concat_. No name is necessary as the system automatically generates a name from the volume name by adding the suffix _.px_, where _x_ is the number of the plex in the volume. Thus this plex will be called _myvol.p0_.
* The _sd_ line describes a subdisk. The minimum specifications are the name of a drive on which to store it, and the length of the subdisk. No name is necessary as the system automatically assigns names derived from the plex name by adding the suffix _.sx_, where _x_ is the number of the subdisk in the plex. Thus [.filename]#vinum# gives this subdisk the name _myvol.p0.s0_.
After processing this file, man:gvinum[8] produces the following output:
[.programlisting]
....
# gvinum -> create config1
Configuration summary
Drives: 1 (4 configured)
Volumes: 1 (4 configured)
Plexes: 1 (8 configured)
Subdisks: 1 (16 configured)
D a State: up Device /dev/da3h Avail: 2061/2573 MB (80%)
V myvol State: up Plexes: 1 Size: 512 MB
P myvol.p0 C State: up Subdisks: 1 Size: 512 MB
S myvol.p0.s0 State: up PO: 0 B Size: 512 MB
....
This output shows the brief listing format of man:gvinum[8].
It is represented graphically in <<vinum-simple-vol, A Simple [.filename]#vinum# Volume>>.
[[vinum-simple-vol]]
.A Simple [.filename]#vinum# Volume
image::vinum-simple-vol.png[]
This figure and the ones which follow represent a volume, which contains the plexes, which in turn contain the subdisks.
In this example, the volume contains one plex, and the plex contains one subdisk.
This particular volume has no specific advantage over a conventional disk partition.
It contains a single plex, so it is not redundant.
The plex contains a single subdisk, so there is no difference in storage allocation from a conventional disk partition.
The following sections illustrate various more interesting configuration methods.
=== Increased Resilience: Mirroring
The resilience of a volume can be increased by mirroring.
When laying out a mirrored volume, it is important to ensure that the subdisks of each plex are on different drives, so that a drive failure will not take down both plexes.
The following configuration mirrors a volume:
[.programlisting]
....
drive b device /dev/da4h
volume mirror
plex org concat
sd length 512m drive a
plex org concat
sd length 512m drive b
....
In this example, it was not necessary to specify a definition of drive _a_ again, since [.filename]#vinum# keeps track of all objects in its configuration database.
After processing this definition, the configuration looks like:
[.programlisting]
....
Drives: 2 (4 configured)
Volumes: 2 (4 configured)
Plexes: 3 (8 configured)
Subdisks: 3 (16 configured)
D a State: up Device /dev/da3h Avail: 1549/2573 MB (60%)
D b State: up Device /dev/da4h Avail: 2061/2573 MB (80%)
V myvol State: up Plexes: 1 Size: 512 MB
V mirror State: up Plexes: 2 Size: 512 MB
P myvol.p0 C State: up Subdisks: 1 Size: 512 MB
P mirror.p0 C State: up Subdisks: 1 Size: 512 MB
P mirror.p1 C State: initializing Subdisks: 1 Size: 512 MB
S myvol.p0.s0 State: up PO: 0 B Size: 512 MB
S mirror.p0.s0 State: up PO: 0 B Size: 512 MB
S mirror.p1.s0 State: empty PO: 0 B Size: 512 MB
....
<<vinum-mirrored-vol, A Mirrored [.filename]#vinum# Volume>> shows the structure graphically.
[[vinum-mirrored-vol]]
.A Mirrored [.filename]#vinum# Volume
image::vinum-mirrored-vol.png[]
In this example, each plex contains the full 512 MB of address space.
As in the previous example, each plex contains only a single subdisk.
=== Optimizing Performance
The mirrored volume in the previous example is more resistant to failure than an unmirrored volume, but its performance is lower, as each write to the volume requires a write to both drives, using up a greater proportion of the total disk bandwidth.
Performance considerations demand a different approach: instead of mirroring, the data is striped across as many disk drives as possible.
The following configuration shows a volume with a plex striped across four disk drives:
[.programlisting]
....
drive c device /dev/da5h
drive d device /dev/da6h
volume stripe
plex org striped 512k
sd length 128m drive a
sd length 128m drive b
sd length 128m drive c
sd length 128m drive d
....
As before, it is not necessary to define the drives which are already known to [.filename]#vinum#.
After processing this definition, the configuration looks like:
[.programlisting]
....
Drives: 4 (4 configured)
Volumes: 3 (4 configured)
Plexes: 4 (8 configured)
Subdisks: 7 (16 configured)
D a State: up Device /dev/da3h Avail: 1421/2573 MB (55%)
D b State: up Device /dev/da4h Avail: 1933/2573 MB (75%)
D c State: up Device /dev/da5h Avail: 2445/2573 MB (95%)
D d State: up Device /dev/da6h Avail: 2445/2573 MB (95%)
V myvol State: up Plexes: 1 Size: 512 MB
V mirror State: up Plexes: 2 Size: 512 MB
V striped State: up Plexes: 1 Size: 512 MB
P myvol.p0 C State: up Subdisks: 1 Size: 512 MB
P mirror.p0 C State: up Subdisks: 1 Size: 512 MB
P mirror.p1 C State: initializing Subdisks: 1 Size: 512 MB
P striped.p0 State: up Subdisks: 4 Size: 512 MB
S myvol.p0.s0 State: up PO: 0 B Size: 512 MB
S mirror.p0.s0 State: up PO: 0 B Size: 512 MB
S mirror.p1.s0 State: empty PO: 0 B Size: 512 MB
S striped.p0.s0 State: up PO: 0 B Size: 128 MB
S striped.p0.s1 State: up PO: 512 kB Size: 128 MB
S striped.p0.s2 State: up PO: 1024 kB Size: 128 MB
S striped.p0.s3 State: up PO: 1536 kB Size: 128 MB
....
[[vinum-striped-vol]]
.A Striped [.filename]#vinum# Volume
image::vinum-striped-vol.png[]
This volume is represented in <<vinum-striped-vol, A Striped [.filename]#vinum# Volume>>.
The darkness of the stripes indicates the position within the plex address space, where the lightest stripes come first and the darkest last.
=== Resilience and Performance
[[vinum-resilience]]With sufficient hardware, it is possible to build volumes which show both increased resilience and increased performance compared to standard UNIX(R) partitions.
A typical configuration file might be:
[.programlisting]
....
volume raid10
plex org striped 512k
sd length 102480k drive a
sd length 102480k drive b
sd length 102480k drive c
sd length 102480k drive d
sd length 102480k drive e
plex org striped 512k
sd length 102480k drive c
sd length 102480k drive d
sd length 102480k drive e
sd length 102480k drive a
sd length 102480k drive b
....
The subdisks of the second plex are offset by two drives from those of the first plex.
This helps ensure that writes do not go to the same subdisks even if a transfer spans two drives.
<<vinum-raid10-vol, A Mirrored, Striped [.filename]#vinum# Volume>> represents the structure of this volume.
[[vinum-raid10-vol]]
.A Mirrored, Striped [.filename]#vinum# Volume
image::vinum-raid10-vol.png[]
[[vinum-object-naming]]
== Object Naming
[.filename]#vinum# assigns default names to plexes and subdisks, although they may be overridden.
Overriding the default names is not recommended as it does not bring a significant advantage and it can cause confusion.
Names may contain any non-blank character, but it is recommended to restrict them to letters, digits, and the underscore character.
The names of volumes, plexes, and subdisks may be up to 64 characters long, and the names of drives may be up to 32 characters long.
[.filename]#vinum# objects are assigned device nodes in the hierarchy [.filename]#/dev/gvinum#.
The configuration shown above would cause [.filename]#vinum# to create the following device nodes:
* Device entries for each volume. These are the main devices used by [.filename]#vinum#. The configuration above would include the devices [.filename]#/dev/gvinum/myvol#, [.filename]#/dev/gvinum/mirror#, [.filename]#/dev/gvinum/striped#, [.filename]#/dev/gvinum/raid5# and [.filename]#/dev/gvinum/raid10#.
* All volumes get direct entries under [.filename]#/dev/gvinum/#.
* The directories [.filename]#/dev/gvinum/plex#, and [.filename]#/dev/gvinum/sd#, which contain device nodes for each plex and for each subdisk, respectively.
For example, consider the following configuration file:
[.programlisting]
....
drive drive1 device /dev/sd1h
drive drive2 device /dev/sd2h
drive drive3 device /dev/sd3h
drive drive4 device /dev/sd4h
volume s64 setupstate
plex org striped 64k
sd length 100m drive drive1
sd length 100m drive drive2
sd length 100m drive drive3
sd length 100m drive drive4
....
After processing this file, man:gvinum[8] creates the following structure in [.filename]#/dev/gvinum#:
[.programlisting]
....
drwxr-xr-x 2 root wheel 512 Apr 13 16:46 plex
crwxr-xr-- 1 root wheel 91, 2 Apr 13 16:46 s64
drwxr-xr-x 2 root wheel 512 Apr 13 16:46 sd
/dev/gvinum/plex:
total 0
crwxr-xr-- 1 root wheel 25, 0x10000002 Apr 13 16:46 s64.p0
/dev/gvinum/sd:
total 0
crwxr-xr-- 1 root wheel 91, 0x20000002 Apr 13 16:46 s64.p0.s0
crwxr-xr-- 1 root wheel 91, 0x20100002 Apr 13 16:46 s64.p0.s1
crwxr-xr-- 1 root wheel 91, 0x20200002 Apr 13 16:46 s64.p0.s2
crwxr-xr-- 1 root wheel 91, 0x20300002 Apr 13 16:46 s64.p0.s3
....
Although it is recommended that plexes and subdisks should not be allocated specific names, [.filename]#vinum# drives must be named.
This makes it possible to move a drive to a different location and still recognize it automatically.
Drive names may be up to 32 characters long.
=== Creating File Systems
Volumes appear to the system to be identical to disks, with one exception.
Unlike UNIX(R) drives, [.filename]#vinum# does not partition volumes, which thus do not contain a partition table.
This has required modification to some disk utilities, notably man:newfs[8], so that it does not try to interpret the last letter of a [.filename]#vinum# volume name as a partition identifier.
For example, a disk drive may have a name like [.filename]#/dev/ad0a# or [.filename]#/dev/da2h#.
These names represent the first partition ([.filename]#a#) on the first (0) IDE disk ([.filename]#ad#) and the eighth partition ([.filename]#h#) on the third (2) SCSI disk ([.filename]#da#) respectively.
By contrast, a [.filename]#vinum# volume might be called [.filename]#/dev/gvinum/concat#, which has no relationship with a partition name.
In order to create a file system on this volume, use man:newfs[8]:
[source,shell]
....
# newfs /dev/gvinum/concat
....
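Once the file system exists, the volume can be mounted like any other disk device; the mount point below is just an example:
[source,shell]
....
# mount /dev/gvinum/concat /mnt
....
For a permanent setup, a matching entry can be added to [.filename]#/etc/fstab#.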
[[vinum-config]]
== Configuring [.filename]#vinum#
The [.filename]#GENERIC# kernel does not contain [.filename]#vinum#.
It is possible to build a custom kernel which includes [.filename]#vinum#, but this is not recommended.
The standard way to start [.filename]#vinum# is as a kernel module.
man:kldload[8] is not needed because when man:gvinum[8] starts, it checks whether the module has been loaded, and if it is not, it loads it automatically.
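As a quick sketch, starting [.filename]#vinum# by hand and confirming that the kernel module is present could look like this; the man:kldstat[8] check is optional and only shown for illustration:
[source,shell]
....
# gvinum start
# kldstat | grep geom_vinum
....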
=== Startup
[.filename]#vinum# stores configuration information on the disk slices in essentially the same form as in the configuration files.
When reading from the configuration database, [.filename]#vinum# recognizes a number of keywords which are not allowed in the configuration files.
For example, a disk configuration might contain the following text:
[.programlisting]
....
volume myvol state up
volume bigraid state down
plex name myvol.p0 state up org concat vol myvol
plex name myvol.p1 state up org concat vol myvol
plex name myvol.p2 state init org striped 512b vol myvol
plex name bigraid.p0 state initializing org raid5 512b vol bigraid
sd name myvol.p0.s0 drive a plex myvol.p0 state up len 1048576b driveoffset 265b plexoffset 0b
sd name myvol.p0.s1 drive b plex myvol.p0 state up len 1048576b driveoffset 265b plexoffset 1048576b
sd name myvol.p1.s0 drive c plex myvol.p1 state up len 1048576b driveoffset 265b plexoffset 0b
sd name myvol.p1.s1 drive d plex myvol.p1 state up len 1048576b driveoffset 265b plexoffset 1048576b
sd name myvol.p2.s0 drive a plex myvol.p2 state init len 524288b driveoffset 1048841b plexoffset 0b
sd name myvol.p2.s1 drive b plex myvol.p2 state init len 524288b driveoffset 1048841b plexoffset 524288b
sd name myvol.p2.s2 drive c plex myvol.p2 state init len 524288b driveoffset 1048841b plexoffset 1048576b
sd name myvol.p2.s3 drive d plex myvol.p2 state init len 524288b driveoffset 1048841b plexoffset 1572864b
sd name bigraid.p0.s0 drive a plex bigraid.p0 state initializing len 4194304b driveoffset 1573129b plexoffset 0b
sd name bigraid.p0.s1 drive b plex bigraid.p0 state initializing len 4194304b driveoffset 1573129b plexoffset 4194304b
sd name bigraid.p0.s2 drive c plex bigraid.p0 state initializing len 4194304b driveoffset 1573129b plexoffset 8388608b
sd name bigraid.p0.s3 drive d plex bigraid.p0 state initializing len 4194304b driveoffset 1573129b plexoffset 12582912b
sd name bigraid.p0.s4 drive e plex bigraid.p0 state initializing len 4194304b driveoffset 1573129b plexoffset 16777216b
....
The obvious differences here are the presence of explicit location information and naming, both of which are allowed but discouraged, and the information on the states.
[.filename]#vinum# does not store information about drives in the configuration information.
It finds the drives by scanning the configured disk drives for partitions with a [.filename]#vinum# label.
This enables [.filename]#vinum# to identify drives correctly even if they have been assigned different UNIX(R) drive IDs.
[[vinum-rc-startup]]
==== Automatic Startup
_Gvinum_ starts automatically once its kernel module has been loaded, which is done via man:loader.conf[5].
To load the _Gvinum_ module at boot time, add `geom_vinum_load="YES"` to [.filename]#/boot/loader.conf#.
When [.filename]#vinum# is started with `gvinum start`, [.filename]#vinum# reads the configuration database from one of the [.filename]#vinum# drives.
Under normal circumstances, each drive contains an identical copy of the configuration database, so it does not matter which drive is read.
After a crash, however, [.filename]#vinum# must determine which drive was updated most recently and read the configuration from this drive.
It then updates the configuration, if necessary, from progressively older drives.
[[vinum-root]]
== Using [.filename]#vinum# for the Root File System
For a machine that has fully-mirrored file systems using [.filename]#vinum#, it is desirable to also mirror the root file system.
Setting up such a configuration is more involved than mirroring an arbitrary file system because:
* The root file system must be available very early during the boot process, so the [.filename]#vinum# infrastructure must already be available at this time.
* The volume containing the root file system also contains the system bootstrap and the kernel. These must be read using the host system's native utilities, such as the BIOS, which often cannot be taught about the details of [.filename]#vinum#.
In the following sections, the term "root volume" is generally used to describe the [.filename]#vinum# volume that contains the root file system.
=== Starting up [.filename]#vinum# Early Enough for the Root File System
[.filename]#vinum# must be available early in the system boot as man:loader[8] must be able to load the vinum kernel module before starting the kernel.
This can be accomplished by putting this line in [.filename]#/boot/loader.conf#:
[.programlisting]
....
geom_vinum_load="YES"
....
=== Making a [.filename]#vinum#-based Root Volume Accessible to the Bootstrap
The current FreeBSD bootstrap is only 7.5 KB of code and does not understand the internal [.filename]#vinum# structures.
This means that it cannot parse the [.filename]#vinum# configuration data or figure out the elements of a boot volume.
Thus, some workarounds are necessary to provide the bootstrap code with the illusion of a standard `a` partition that contains the root file system.
For this to be possible, the following requirements must be met for the root volume:
* The root volume must not be a stripe or `RAID`-5.
* The root volume must not contain more than one concatenated subdisk per plex.
Note that it is desirable and possible to use multiple plexes, each containing one replica of the root file system.
The bootstrap process will only use one replica for finding the bootstrap and all boot files, until the kernel mounts the root file system.
Each single subdisk within these plexes needs its own `a` partition illusion, for the respective device to be bootable.
Strictly speaking, these faked `a` partitions do not need to be located at the same offset within their respective devices, compared with other devices containing plexes of the root volume.
However, it is probably a good idea to create the [.filename]#vinum# volumes that way so the resulting mirrored devices are symmetric, to avoid confusion.
In order to set up these `a` partitions for each device containing part of the root volume, the following is required:
[.procedure]
====
. The location, offset from the beginning of the device, and size of this device's subdisk that is part of the root volume needs to be examined, using the command:
+
[source,shell]
....
# gvinum l -rv root
....
+
[.filename]#vinum# offsets and sizes are measured in bytes.
They must be divided by 512 in order to obtain the block numbers that are to be used by `bsdlabel`.
. Run this command for each device that participates in the root volume:
+
[source,shell]
....
# bsdlabel -e devname
....
+
_devname_ must be either the name of the disk, like [.filename]#da0# for disks without a slice table, or the name of the slice, like [.filename]#ad0s1#.
+
If there is already an `a` partition on the device from a pre-[.filename]#vinum# root file system, it should be renamed to something else so that it remains accessible (just in case), but will no longer be used by default to bootstrap the system.
A currently mounted root file system cannot be renamed, so this must be executed either when being booted from a "Fixit" media, or in a two-step process where, in a mirror, the disk that is not currently being booted is manipulated first.
+
The offset of the [.filename]#vinum# partition on this device (if any) must be added to the offset of the respective root volume subdisk on this device.
The resulting value will become the `offset` value for the new `a` partition.
The `size` value for this partition can be taken verbatim from the calculation above.
The `fstype` should be `4.2BSD`.
The `fsize`, `bsize`, and `cpg` values should be chosen to match the actual file system, though they are fairly unimportant within this context.
+
That way, a new `a` partition will be established that overlaps the [.filename]#vinum# partition on this device.
`bsdlabel` will only allow for this overlap if the [.filename]#vinum# partition has properly been marked using the `vinum` fstype.
. A faked `a` partition now exists on each device that has one replica of the root volume. It is highly recommended to verify the result using a command like:
+
[source,shell]
....
# fsck -n /dev/devnamea
....
====
It should be remembered that all files containing control information must be relative to the root file system in the [.filename]#vinum# volume which, when setting up a new [.filename]#vinum# root volume, might not match the root file system that is currently active.
So in particular, [.filename]#/etc/fstab# and [.filename]#/boot/loader.conf# need to be taken care of.
At next reboot, the bootstrap should figure out the appropriate control information from the new [.filename]#vinum#-based root file system, and act accordingly.
At the end of the kernel initialization process, after all devices have been announced, the prominent notice that shows the success of this setup is a message like:
[source,shell]
....
Mounting root from ufs:/dev/gvinum/root
....
=== Example of a [.filename]#vinum#-based Root Setup
After the [.filename]#vinum# root volume has been set up, the output of `gvinum l -rv root` could look like:
[source,shell]
....
...
Subdisk root.p0.s0:
Size: 125829120 bytes (120 MB)
State: up
Plex root.p0 at offset 0 (0 B)
Drive disk0 (/dev/da0h) at offset 135680 (132 kB)
Subdisk root.p1.s0:
Size: 125829120 bytes (120 MB)
State: up
Plex root.p1 at offset 0 (0 B)
Drive disk1 (/dev/da1h) at offset 135680 (132 kB)
....
The values to note are `135680` for the offset, relative to partition [.filename]#/dev/da0h#.
This translates to 265 512-byte disk blocks in `bsdlabel`'s terms.
Likewise, the size of this root volume is 245760 512-byte blocks.
[.filename]#/dev/da1h#, containing the second replica of this root volume, has a symmetric setup.
The bsdlabel for these devices might look like:
[source,shell]
....
...
8 partitions:
# size offset fstype [fsize bsize bps/cpg]
a: 245760 281 4.2BSD 2048 16384 0 # (Cyl. 0*- 15*)
c: 71771688 0 unused 0 0 # (Cyl. 0 - 4467*)
h: 71771672 16 vinum # (Cyl. 0*- 4467*)
....
It can be observed that the `size` parameter for the faked `a` partition matches the value outlined above, while the `offset` parameter is the sum of the offset within the [.filename]#vinum# partition `h`, and the offset of this partition within the device or slice.
This is a typical setup that is necessary to avoid the problem described in <<vinum-root-panic, Nothing Boots, the Bootstrap Panics>>.
The entire `a` partition is completely within the `h` partition containing all the [.filename]#vinum# data for this device.
In the above example, the entire device is dedicated to [.filename]#vinum# and there is no leftover pre-[.filename]#vinum# root partition.
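As a cross-check, the numbers used above can be reproduced with simple shell arithmetic; the values are the ones from this particular example and will differ on other systems:
[source,shell]
....
# echo $((135680 / 512))
265
# echo $((265 + 16))
281
# echo $((125829120 / 512))
245760
....
The first result is the subdisk offset in 512-byte blocks, the second adds the 16-block offset of the `h` partition to give the `offset` of the faked `a` partition, and the third is its `size` in blocks.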
=== Troubleshooting
The following list contains a few known pitfalls and solutions.
==== System Bootstrap Loads, but System Does Not Boot
If for any reason the system does not continue to boot, the bootstrap can be interrupted by pressing kbd:[space] at the 10-second warning.
The loader variable `vinum.autostart` can be examined by typing `show` and manipulated using `set` or `unset`.
If the [.filename]#vinum# kernel module was not yet in the list of modules to load automatically, type `load geom_vinum`.
When ready, the boot process can be continued by typing `boot -as`; the `-as` options request the kernel to ask for the root file system to mount (`-a`) and to stop the boot process in single-user mode (`-s`), where the root file system is mounted read-only.
That way, even if only one plex of a multi-plex volume has been mounted, no data inconsistency between plexes is being risked.
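A hypothetical loader session implementing these steps might look like the following; the exact variables and modules present depend on the installation:
[source,shell]
....
OK show vinum.autostart
OK load geom_vinum
OK boot -as
....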
At the prompt asking for a root file system to mount, any device that contains a valid root file system can be entered.
If [.filename]#/etc/fstab# is set up correctly, the default should be something like `ufs:/dev/gvinum/root`.
A typical alternate choice would be something like `ufs:da0d` which could be a hypothetical partition containing the pre-[.filename]#vinum# root file system.
Care should be taken if one of the alias `a` partitions is entered here: it references only one of the subdisks of the [.filename]#vinum# root device, so in a mirrored setup this would mount only one piece of the mirrored root device.
If this file system is to be mounted read-write later on, it is necessary to remove the other plex(es) of the [.filename]#vinum# root volume since these plexes would otherwise carry inconsistent data.
==== Only Primary Bootstrap Loads
If [.filename]#/boot/loader# fails to load, but the primary bootstrap still loads (visible by a single dash in the left column of the screen right after the boot process starts), an attempt can be made to interrupt the primary bootstrap by pressing kbd:[space].
This will make the bootstrap stop in link:{handbook}#boot-boot1[stage two].
An attempt can be made here to boot off an alternate partition, like the partition containing the previous root file system that has been moved away from `a`.
[[vinum-root-panic]]
==== Nothing Boots, the Bootstrap Panics
This situation will happen if the bootstrap had been destroyed by the [.filename]#vinum# installation.
Unfortunately, [.filename]#vinum# leaves only 4 KB at the beginning of its partition free before starting to write its [.filename]#vinum# header information.
However, the stage one and two bootstraps plus the bsdlabel require 8 KB.
So if a [.filename]#vinum# partition was started at offset 0 within a slice or disk that was meant to be bootable, the [.filename]#vinum# setup will trash the bootstrap.
Similarly, if the above situation has been recovered, by booting from a "Fixit" media, and the bootstrap has been re-installed using `bsdlabel -B` as described in link:{handbook}#boot-boot1[stage two], the bootstrap will trash the [.filename]#vinum# header, and [.filename]#vinum# will no longer find its disk(s).
Though no actual [.filename]#vinum# configuration data or data in [.filename]#vinum# volumes will be trashed, and it would be possible to recover all the data by entering exactly the same [.filename]#vinum# configuration data again, the situation is hard to fix.
It is necessary to move the entire [.filename]#vinum# partition by at least 4 KB, in order to have the [.filename]#vinum# header and the system bootstrap no longer collide.
diff --git a/documentation/content/en/articles/vm-design/_index.adoc b/documentation/content/en/articles/vm-design/_index.adoc
index 2b7163da83..12a60f6962 100644
--- a/documentation/content/en/articles/vm-design/_index.adoc
+++ b/documentation/content/en/articles/vm-design/_index.adoc
@@ -1,411 +1,411 @@
---
title: Design elements of the FreeBSD VM system
authors:
- author: Matthew Dillon
email: dillon@apollo.backplane.com
-releaseinfo: "$FreeBSD$"
+description: Design elements of the FreeBSD VM system
trademarks: ["freebsd", "linux", "microsoft", "opengroup", "daemon-news", "general"]
---
= Design elements of the FreeBSD VM system
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../images/articles/vm-design/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/articles/vm-design/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/articles/vm-design/
endif::[]
[.abstract-title]
Abstract
The title is really just a fancy way of saying that I am going to attempt to describe the whole VM enchilada, hopefully in a way that everyone can follow.
For the last year I have concentrated on a number of major kernel subsystems within FreeBSD, with the VM and Swap subsystems being the most interesting and NFS being "a necessary chore".
I rewrote only small portions of the code. In the VM arena the only major rewrite I have done is to the swap subsystem.
Most of my work was cleanup and maintenance, with only moderate code rewriting and no major algorithmic adjustments within the VM subsystem.
The bulk of the VM subsystem's theoretical base remains unchanged and a lot of the credit for the modernization effort in the last few years belongs to John Dyson and David Greenman.
Not being a historian like Kirk I will not attempt to tag all the various features with people's names, since I will invariably get it wrong.
'''
toc::[]
[[introduction]]
== Introduction
Before moving along to the actual design let's spend a little time on the necessity of maintaining and modernizing any long-living codebase.
In the programming world, algorithms tend to be more important than code and it is precisely due to BSD's academic roots that a great deal of attention was paid to algorithm design from the beginning.
More attention paid to the design generally leads to a clean and flexible codebase that can be fairly easily modified, extended, or replaced over time.
While BSD is considered an "old" operating system by some people, those of us who work on it tend to view it more as a "mature" codebase which has various components modified, extended, or replaced with modern code.
It has evolved, and FreeBSD is at the bleeding edge no matter how old some of the code might be.
This is an important distinction to make and one that is unfortunately lost to many people.
The biggest error a programmer can make is to not learn from history, and this is precisely the error that many other modern operating systems have made.
Windows NT(R) is the best example of this, and the consequences have been dire.
Linux also makes this mistake to some degree-enough that we BSD folk can make small jokes about it every once in a while, anyway.
Linux's problem is simply one of a lack of experience and history to compare ideas against, a problem that is easily and rapidly being addressed by the Linux community in the same way it has been addressed in the BSD community-by continuous code development.
The Windows NT(R) folk, on the other hand, repeatedly make the same mistakes solved by UNIX(R) decades ago and then spend years fixing them.
Over and over again.
They have a severe case of "not designed here" and "we are always right because our marketing department says so".
I have little tolerance for anyone who cannot learn from history.
Much of the apparent complexity of the FreeBSD design, especially in the VM/Swap subsystem, is a direct result of having to solve serious performance issues that occur under various conditions.
These issues are not due to bad algorithmic design but instead arise from environmental factors.
In any direct comparison between platforms, these issues become most apparent when system resources begin to get stressed.
As I describe FreeBSD's VM/Swap subsystem the reader should always keep two points in mind:
. The most important aspect of performance design is what is known as "Optimizing the Critical Path". It is often the case that performance optimizations add a little bloat to the code in order to make the critical path perform better.
. A solid, generalized design outperforms a heavily-optimized design over the long run. While a generalized design may end up being slower than a heavily-optimized design when they are first implemented, the generalized design tends to be easier to adapt to changing conditions and the heavily-optimized design winds up having to be thrown away.
Any codebase that will survive and be maintainable for years must therefore be designed properly from the beginning even if it costs some performance.
Twenty years ago people were still arguing that programming in assembly was better than programming in a high-level language because it produced code that was ten times as fast.
Today, the fallibility of that argument is obvious - as are the parallels to algorithmic design and code generalization.
[[vm-objects]]
== VM Objects
The best way to begin describing the FreeBSD VM system is to look at it from the perspective of a user-level process.
Each user process sees a single, private, contiguous VM address space containing several types of memory objects.
These objects have various characteristics.
Program code and program data are effectively a single memory-mapped file (the binary file being run), but program code is read-only while program data is copy-on-write.
Program BSS is just memory allocated and filled with zeros on demand, called demand zero page fill.
Arbitrary files can be memory-mapped into the address space as well, which is how the shared library mechanism works.
Modifications made to such mappings may need to remain private to the process making them.
The fork system call adds an entirely new dimension to the VM management problem on top of the complexity already given.
A program binary data page (which is a basic copy-on-write page) illustrates the complexity.
A program binary contains a preinitialized data section which is initially mapped directly from the program file.
When a program is loaded into a process's VM space, this area is initially memory-mapped and backed by the program binary itself, allowing the VM system to free/reuse the page and later load it back in from the binary.
The moment a process modifies this data, however, the VM system must make a private copy of the page for that process.
Since the private copy has been modified, the VM system may no longer free it, because there is no longer any way to restore it later on.
You will notice immediately that what was originally a simple file mapping has become much more complex.
Data may be modified on a page-by-page basis whereas the file mapping encompasses many pages at once.
The complexity further increases when a process forks.
When a process forks, the result is two processes-each with their own private address spaces, including any modifications made by the original process prior to the call to `fork()`.
It would be silly for the VM system to make a complete copy of the data at the time of the `fork()` because it is quite possible that at least one of the two processes will only need to read from that page from then on, allowing the original page to continue to be used.
What was a private page is made copy-on-write again, since each process (parent and child) expects their own personal post-fork modifications to remain private to themselves and not affect the other.
FreeBSD manages all of this with a layered VM Object model.
The original binary program file winds up being the lowest VM Object layer.
A copy-on-write layer is pushed on top of that to hold those pages which had to be copied from the original file.
If the program modifies a data page belonging to the original file the VM system takes a fault and makes a copy of the page in the higher layer.
When a process forks, additional VM Object layers are pushed on.
This might make a little more sense with a fairly basic example.
A `fork()` is a common operation for any *BSD system, so this example will consider a program that starts up, and forks.
When the process starts, the VM system creates an object layer, let's call this A:
image::fig1.png[A picture]
A represents the file-pages may be paged in and out of the file's physical media as necessary.
Paging in from the disk is reasonable for a program, but we really do not want to page back out and overwrite the executable.
The VM system therefore creates a second layer, B, that will be physically backed by swap space:
image::fig2.png[]
On the first write to a page after this, a new page is created in B, and its contents are initialized from A.
All pages in B can be paged in or out to a swap device.
When the program forks, the VM system creates two new object layers-C1 for the parent, and C2 for the child-that rest on top of B:
image::fig3.png[]
In this case, let's say a page in B is modified by the original parent process.
The process will take a copy-on-write fault and duplicate the page in C1, leaving the original page in B untouched.
Now, let's say the same page in B is modified by the child process.
The process will take a copy-on-write fault and duplicate the page in C2.
The original page in B is now completely hidden since both C1 and C2 have a copy and B could theoretically be destroyed if it does not represent a "real" file; however, this sort of optimization is not trivial to make because it is so fine-grained.
FreeBSD does not make this optimization.
Now, suppose (as is often the case) that the child process does an `exec()`.
Its current address space is usually replaced by a new address space representing a new file.
In this case, the C2 layer is destroyed:
image::fig4.png[]
In this case, the number of children of B drops to one, and all accesses to B now go through C1.
This means that B and C1 can be collapsed together.
Any pages in B that also exist in C1 are deleted from B during the collapse.
Thus, even though the optimization in the previous step could not be made, we can recover the dead pages when either of the processes exit or `exec()`.
This model creates a number of potential problems.
The first is that you can wind up with a relatively deep stack of layered VM Objects which can cost scanning time and memory when you take a fault.
Deep layering can occur when processes fork and then fork again (either parent or child).
The second problem is that you can wind up with dead, inaccessible pages deep in the stack of VM Objects.
In our last example if both the parent and child processes modify the same page, they both get their own private copies of the page and the original page in B is no longer accessible by anyone.
That page in B can be freed.
FreeBSD solves the deep layering problem with a special optimization called the "All Shadowed Case".
This case occurs if either C1 or C2 take sufficient COW faults to completely shadow all pages in B.
Lets say that C1 achieves this.
C1 can now bypass B entirely, so rather than have C1->B->A and C2->B->A we now have C1->A and C2->B->A.
But look what also happened-now B has only one reference (C2), so we can collapse B and C2 together.
The end result is that B is deleted entirely and we have C1->A and C2->A.
It is often the case that B will contain a large number of pages and neither C1 nor C2 will be able to completely overshadow it.
If we fork again and create a set of D layers, however, it is much more likely that one of the D layers will eventually be able to completely overshadow the much smaller dataset represented by C1 or C2.
The same optimization will work at any point in the graph and the grand result of this is that even on a heavily forked machine VM Object stacks tend to not get much deeper than 4.
This is true of both the parent and the children and true whether the parent is doing the forking or whether the children cascade forks.
The dead page problem still exists in the case where C1 or C2 do not completely overshadow B.
Due to our other optimizations this case does not represent much of a problem and we simply allow the pages to be dead.
If the system runs low on memory it will swap them out, eating a little swap, but that is it.
The advantage to the VM Object model is that `fork()` is extremely fast, since no real data copying need take place.
The disadvantage is that you can build a relatively complex VM Object layering that slows page fault handling down a little, and you spend memory managing the VM Object structures.
The optimizations FreeBSD makes prove to reduce the problems enough that they can be ignored, leaving no real disadvantage.
[[swap-layers]]
== SWAP Layers
Private data pages are initially either copy-on-write or zero-fill pages.
When a change, and therefore a copy, is made, the original backing object (usually a file) can no longer be used to save a copy of the page when the VM system needs to reuse it for other purposes.
This is where SWAP comes in.
SWAP is allocated to create backing store for memory that does not otherwise have it.
FreeBSD allocates the swap management structure for a VM Object only when it is actually needed.
However, the swap management structure has had problems historically:
* Under FreeBSD 3.X the swap management structure preallocates an array that encompasses the entire object requiring swap backing store-even if only a few pages of that object are swap-backed. This creates a kernel memory fragmentation problem when large objects are mapped, or processes with large runsizes (RSS) fork.
* Also, in order to keep track of swap space, a "list of holes" is kept in kernel memory, and this tends to get severely fragmented as well. Since the "list of holes" is a linear list, the swap allocation and freeing performance is a non-optimal O(n)-per-page.
* It requires kernel memory allocations to take place during the swap freeing process, and that creates low memory deadlock problems.
* The problem is further exacerbated by holes created due to the interleaving algorithm.
* Also, the swap block map can become fragmented fairly easily resulting in non-contiguous allocations.
* Kernel memory must also be allocated on the fly for additional swap management structures when a swapout occurs.
It is evident from that list that there was plenty of room for improvement.
For FreeBSD 4.X, I completely rewrote the swap subsystem:
* Swap management structures are allocated through a hash table rather than a linear array giving them a fixed allocation size and much finer granularity.
* Rather than using a linearly linked list to keep track of swap space reservations, it now uses a bitmap of swap blocks arranged in a radix tree structure with free-space hinting in the radix node structures. This effectively makes swap allocation and freeing an O(1) operation.
* The entire radix tree bitmap is also preallocated in order to avoid having to allocate kernel memory during critical low memory swapping operations. After all, the system tends to swap when it is low on memory so we should avoid allocating kernel memory at such times in order to avoid potential deadlocks.
* To reduce fragmentation the radix tree is capable of allocating large contiguous chunks at once, skipping over smaller fragmented chunks.
I did not take the final step of having an "allocating hint pointer" that would trundle through a portion of swap as allocations were made in order to further guarantee contiguous allocations or at least locality of reference, but I ensured that such an addition could be made.
[[freeing-pages]]
== When to free a page
Since the VM system uses all available memory for disk caching, there are usually very few truly-free pages.
The VM system depends on being able to properly choose pages which are not in use to reuse for new allocations.
Selecting the optimal pages to free is possibly the single-most important function any VM system can perform because if it makes a poor selection, the VM system may be forced to unnecessarily retrieve pages from disk, seriously degrading system performance.
How much overhead are we willing to suffer in the critical path to avoid freeing the wrong page? Each wrong choice we make will cost us hundreds of thousands of CPU cycles and a noticeable stall of the affected processes, so we are willing to endure a significant amount of overhead in order to be sure that the right page is chosen.
This is why FreeBSD tends to outperform other systems when memory resources become stressed.
The free page determination algorithm is built upon a history of the use of memory pages.
To acquire this history, the system takes advantage of a page-used bit feature that most hardware page tables have.
In any case, the page-used bit is cleared and at some later point the VM system comes across the page again and sees that the page-used bit has been set.
This indicates that the page is still being actively used.
If the bit is still clear it is an indication that the page is not being actively used.
By testing this bit periodically, a use history (in the form of a counter) for the physical page is developed.
When the VM system later needs to free up some pages, checking this history becomes the cornerstone of determining the best candidate page to reuse.
For those platforms that do not have this feature, the system actually emulates a page-used bit.
It unmaps or protects a page, forcing a page fault if the page is accessed again.
When the page fault is taken, the system simply marks the page as having been used and unprotects the page so that it may be used.
While taking such page faults just to determine if a page is being used appears to be an expensive proposition, it is much less expensive than reusing the page for some other purpose only to find that a process needs it back and then have to go to disk.
FreeBSD makes use of several page queues to further refine the selection of pages to reuse as well as to determine when dirty pages must be flushed to their backing store.
Since page tables are dynamic entities under FreeBSD, it costs virtually nothing to unmap a page from the address space of any processes using it.
When a page candidate has been chosen based on the page-use counter, this is precisely what is done.
The system must make a distinction between clean pages which can theoretically be freed up at any time, and dirty pages which must first be written to their backing store before being reusable.
When a page candidate has been found it is moved to the inactive queue if it is dirty, or the cache queue if it is clean.
A separate algorithm based on the dirty-to-clean page ratio determines when dirty pages in the inactive queue must be flushed to disk.
Once this is accomplished, the flushed pages are moved from the inactive queue to the cache queue.
At this point, pages in the cache queue can still be reactivated by a VM fault at relatively low cost.
However, pages in the cache queue are considered to be "immediately freeable" and will be reused in an LRU (least-recently used) fashion when the system needs to allocate new memory.
It is important to note that the FreeBSD VM system attempts to separate clean and dirty pages for the express reason of avoiding unnecessary flushes of dirty pages (which eats I/O bandwidth), and it does not move pages between the various page queues gratuitously when the memory subsystem is not being stressed.
This is why you will see some systems with very low cache queue counts and high active queue counts when doing a `systat -vm` command.
As the VM system becomes more stressed, it makes a greater effort to maintain the various page queues at the levels determined to be the most effective.
An urban myth has circulated for years that Linux did a better job avoiding swapouts than FreeBSD, but this in fact is not true.
What was actually occurring was that FreeBSD was proactively paging out unused pages in order to make room for more disk cache while Linux was keeping unused pages in core and leaving less memory available for cache and process pages.
I do not know whether this is still true today.
[[prefault-optimizations]]
== Pre-Faulting and Zeroing Optimizations
Taking a VM fault is not expensive if the underlying page is already in core and can simply be mapped into the process, but it can become expensive if you take a whole lot of them on a regular basis.
A good example of this is running a program such as man:ls[1] or man:ps[1] over and over again.
If the program binary is mapped into memory but not mapped into the page table, then all the pages that will be accessed by the program will have to be faulted in every time the program is run.
This is unnecessary when the pages in question are already in the VM Cache, so FreeBSD will attempt to pre-populate a process's page tables with those pages that are already in the VM Cache.
One thing that FreeBSD does not yet do is pre-copy-on-write certain pages on exec.
For example, if you run the man:ls[1] program while running `vmstat 1` you will notice that it always takes a certain number of page faults, even when you run it over and over again.
These are zero-fill faults, not program code faults (which were pre-faulted in already).
Pre-copying pages on exec or fork is an area that could use more study.
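A rough sketch of that experiment, run in two terminals, is shown below; the directory listed is arbitrary, and the faults show up in the `flt` column of the man:vmstat[8] output:
[source,shell]
....
# vmstat 1            # first terminal: watch the flt column
# ls /usr/bin         # second terminal: run repeatedly
....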
A large percentage of page faults that occur are zero-fill faults.
You can usually see this by observing the `vmstat -s` output.
These occur when a process accesses pages in its BSS area.
The BSS area is expected to be initially zero but the VM system does not bother to allocate any memory at all until the process actually accesses it.
When a fault occurs the VM system must not only allocate a new page, it must zero it as well.
To optimize the zeroing operation the VM system has the ability to pre-zero pages and mark them as such, and to request pre-zeroed pages when zero-fill faults occur.
The pre-zeroing occurs whenever the CPU is idle but the number of pages the system pre-zeros is limited in order to avoid blowing away the memory caches.
This is an excellent example of adding complexity to the VM system in order to optimize the critical path.
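As mentioned above, the share of zero-fill (and copy-on-write) faults can be pulled out of the man:vmstat[8] statistics; the exact counter names vary between FreeBSD releases, so the pattern below is only a starting point:
[source,shell]
....
# vmstat -s | grep -Ei 'zero fill|copy-on-write'
....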
[[page-table-optimizations]]
== Page Table Optimizations
The page table optimizations make up the most contentious part of the FreeBSD VM design and they have shown some strain with the advent of serious use of `mmap()`.
I think this is actually a feature of most BSDs though I am not sure when it was first introduced.
There are two major optimizations.
The first is that hardware page tables do not contain persistent state but instead can be thrown away at any time with only a minor amount of management overhead.
The second is that every active page table entry in the system has a governing `pv_entry` structure which is tied into the `vm_page` structure.
FreeBSD can simply iterate through those mappings that are known to exist while Linux must check all page tables that _might_ contain a specific mapping to see if it does, which can achieve O(n^2) overhead in certain situations.
It is because of this that FreeBSD tends to make better choices on which pages to reuse or swap when memory is stressed, giving it better performance under load.
However, FreeBSD requires kernel tuning to accommodate large-shared-address-space situations such as those that can occur in a news system because it may run out of `pv_entry` structures.
Both Linux and FreeBSD need work in this area.
FreeBSD is trying to maximize the advantage of a potentially sparse active-mapping model (not all processes need to map all pages of a shared library, for example), whereas Linux is trying to simplify its algorithms.
FreeBSD generally has the performance advantage here at the cost of wasting a little extra memory, but FreeBSD breaks down in the case where a large file is massively shared across hundreds of processes.
Linux, on the other hand, breaks down in the case where many processes are sparsely-mapping the same shared library and also runs non-optimally when trying to determine whether a page can be reused or not.
[[page-coloring-optimizations]]
== Page Coloring
We will end with the page coloring optimizations.
Page coloring is a performance optimization designed to ensure that accesses to contiguous pages in virtual memory make the best use of the processor cache.
In ancient times (i.e. 10+ years ago) processor caches tended to map virtual memory rather than physical memory.
This led to a huge number of problems including having to clear the cache on every context switch in some cases, and problems with data aliasing in the cache.
Modern processor caches map physical memory precisely to solve those problems.
This means that two side-by-side pages in a process's address space may not correspond to two side-by-side pages in the cache.
In fact, if you are not careful, side-by-side pages in virtual memory could wind up using the same page in the processor cache-leading to cacheable data being thrown away prematurely and reducing CPU performance.
This is true even with multi-way set-associative caches (though the effect is mitigated somewhat).
FreeBSD's memory allocation code implements page coloring optimizations, which means that the memory allocation code will attempt to locate free pages that are contiguous from the point of view of the cache.
For example, if page 16 of physical memory is assigned to page 0 of a process's virtual memory and the cache can hold 4 pages, the page coloring code will not assign page 20 of physical memory to page 1 of a process's virtual memory.
It would, instead, assign page 21 of physical memory.
The page coloring code attempts to avoid assigning page 20 because this maps over the same cache memory as page 16 and would result in non-optimal caching.
This code adds a significant amount of complexity to the VM memory allocation subsystem as you can well imagine, but the result is well worth the effort.
Page Coloring makes VM memory as deterministic as physical memory in regards to cache performance.
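Put another way, with a cache that holds four pages, the cache "color" of a physical page is just its page number modulo four: pages 16 and 20 collide, while page 21 does not. A quick arithmetic sketch of the example above:
[source,shell]
....
# echo $((16 % 4)) $((20 % 4)) $((21 % 4))
0 0 1
....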
[[conclusion]]
== Conclusion
Virtual memory in modern operating systems must address a number of different issues efficiently and for many different usage patterns.
The modular and algorithmic approach that BSD has historically taken allows us to study and understand the current implementation as well as relatively cleanly replace large sections of the code.
There have been a number of improvements to the FreeBSD VM system in the last several years, and work is ongoing.
[[allen-briggs-qa]]
== Bonus QA session by Allen Briggs
=== What is the interleaving algorithm that you refer to in your listing of the ills of the FreeBSD 3.X swap arrangements?
FreeBSD uses a fixed swap interleave which defaults to 4.
This means that FreeBSD reserves space for four swap areas even if you only have one, two, or three.
Since swap is interleaved the linear address space representing the "four swap areas" will be fragmented if you do not actually have four swap areas.
For example, if you have two swap areas A and B FreeBSD's address space representation for that swap area will be interleaved in blocks of 16 pages:
....
A B C D A B C D A B C D A B C D
....
FreeBSD 3.X uses a "sequential list of free regions" approach to accounting for the free swap areas.
The idea is that large blocks of free linear space can be represented with a single list node ([.filename]#kern/subr_rlist.c#).
But due to the fragmentation the sequential list winds up being insanely fragmented.
In the above example, completely unused swap will have A and B shown as "free" and C and D shown as "all allocated".
Each A-B sequence requires its own list node because C and D are holes, so the list node cannot be combined with the next A-B sequence.
Why do we interleave our swap space instead of just tacking swap areas onto the end and doing something fancier? It is a whole lot easier to allocate linear swaths of an address space and have the result automatically be interleaved across multiple disks than it is to try to put that sophistication elsewhere.
The fragmentation causes other problems.
Being a linear list under 3.X, and having such a huge amount of inherent fragmentation, allocating and freeing swap winds up being an O(N) algorithm instead of an O(1) algorithm.
Combine that with other factors (heavy swapping) and you start getting into O(N^2) and O(N^3) levels of overhead, which is bad.
The 3.X system may also need to allocate KVM during a swap operation to create a new list node which can lead to a deadlock if the system is trying to pageout pages in a low-memory situation.
Under 4.X we do not use a sequential list.
Instead we use a radix tree and bitmaps of swap blocks rather than ranged list nodes.
We take the hit of preallocating all the bitmaps required for the entire swap area up front but it winds up wasting less memory due to the use of a bitmap (one bit per block) instead of a linked list of nodes.
The use of a radix tree instead of a sequential list gives us nearly O(1) performance no matter how fragmented the tree becomes.
=== How is the separation of clean and dirty (inactive) pages related to the situation where you see low cache queue counts and high active queue counts in systat -vm? Do the systat stats roll the active and dirty pages together for the active queue count?
Yes, that is confusing.
The relationship is "goal" versus "reality".
Our goal is to separate the pages but the reality is that if we are not in a memory crunch, we do not really have to.
What this means is that FreeBSD will not try very hard to separate out dirty pages (inactive queue) from clean pages (cache queue) when the system is not being stressed, nor will it try to deactivate pages (active queue -> inactive queue) when the system is not being stressed, even if they are not being used.
=== In the man:ls[1] / vmstat 1 example, would not some of the page faults be data page faults (COW from executable file to private page)? I.e., I would expect the page faults to be some zero-fill and some program data. Or are you implying that FreeBSD does do pre-COW for the program data?
A COW fault can be either zero-fill or program-data.
The mechanism is the same either way because the backing program-data is almost certainly already in the cache.
I am indeed lumping the two together.
FreeBSD does not pre-COW program data or zero-fill, but it _does_ pre-map pages that exist in its cache.
=== In your section on page table optimizations, can you give a little more detail about pv_entry and vm_page (or should vm_page be vm_pmap-as in 4.4, cf. pp. 180-181 of McKusick, Bostic, Karel, Quarterman)? Specifically, what kind of operation/reaction would require scanning the mappings?
A `vm_page` represents an (object,index#) tuple. A `pv_entry` represents a hardware page table entry (pte).
If you have five processes sharing the same physical page, and three of those processes' page tables actually map the page, that page will be represented by a single `vm_page` structure and three `pv_entry` structures.
`pv_entry` structures only represent pages mapped by the MMU (one `pv_entry` represents one pte).
This means that when we need to remove all hardware references to a `vm_page` (in order to reuse the page for something else, page it out, clear it, dirty it, and so forth) we can simply scan the linked list of pv_entry's associated with that vm_page to remove or modify the pte's from their page tables.
Under Linux there is no such linked list.
In order to remove all the hardware page table mappings for a `vm_page`, Linux must index into every VM object that _might_ have mapped the page.
For example, if you have 50 processes all mapping the same shared library and want to get rid of page X in that library, you need to index into the page table for each of those 50 processes even if only 10 of them have actually mapped the page.
So Linux is trading off the simplicity of its design against performance.
Many VM algorithms which are O(1) or O(small N) under FreeBSD wind up being O(N), O(N^2), or worse under Linux.
Since the pte's representing a particular page in an object tend to be at the same offset in all the page tables they are mapped in, reducing the number of accesses into the page tables at the same pte offset will often avoid blowing away the L1 cache line for that offset, which can lead to better performance.
FreeBSD has added complexity (the `pv_entry` scheme) in order to increase performance (to limit page table accesses to _only_ those pte's that need to be modified).
But FreeBSD has a scaling problem that Linux does not in that there are a limited number of `pv_entry` structures and this causes problems when you have massive sharing of data.
In this case you may run out of `pv_entry` structures even though there is plenty of free memory available.
This can be fixed easily enough by bumping up the number of `pv_entry` structures in the kernel config, but we really need to find a better way to do it.
In regards to the memory overhead of a page table versus the `pv_entry` scheme: Linux uses "permanent" page tables that are not thrown away, but does not need a `pv_entry` for each potentially mapped pte.
FreeBSD uses "throw away" page tables but adds in a `pv_entry` structure for each actually-mapped pte.
I think memory utilization winds up being about the same, giving FreeBSD an algorithmic advantage with its ability to throw away page tables at will with very low overhead.
=== Finally, in the page coloring section, it might help to have a little more description of what you mean here. I did not quite follow it.
Do you know how an L1 hardware memory cache works? I will explain: Consider a machine with 16MB of main memory but only 128K of L1 cache.
Generally the way this cache works is that each 128K block of main memory uses the _same_ 128K of cache.
If you access offset 0 in main memory and then offset 128K in main memory you can wind up throwing away the cached data you read from offset 0!
Now, I am simplifying things greatly.
What I just described is what is called a "direct mapped" hardware memory cache.
Most modern caches are what are called 2-way-set-associative or 4-way-set-associative caches.
The set associativity allows you to access up to N different memory regions that overlap the same cache memory without destroying the previously cached data.
But only N.
So if I have a 4-way set associative cache I can access offset 0, offset 128K, 256K and offset 384K and still be able to access offset 0 again and have it come from the L1 cache.
If I then access offset 512K, however, one of the four previously cached data objects will be thrown away by the cache.
It is extremely important... _extremely_ important for most of a processor's memory accesses to be able to come from the L1 cache, because the L1 cache operates at the processor frequency.
The moment you have an L1 cache miss and have to go to the L2 cache or to main memory, the processor will stall and potentially sit twiddling its fingers for _hundreds_ of instructions worth of time waiting for a read from main memory to complete.
Main memory (the dynamic ram you stuff into a computer) is __slow__, when compared to the speed of a modern processor core.
Ok, so now onto page coloring: All modern memory caches are what are known as _physical_ caches.
They cache physical memory addresses, not virtual memory addresses.
This allows the cache to be left alone across a process context switch, which is very important.
But in the UNIX(R) world you are dealing with virtual address spaces, not physical address spaces.
Any program you write will see the virtual address space given to it.
The actual _physical_ pages underlying that virtual address space are not necessarily physically contiguous!
In fact, you might have two pages that are side by side in a process's address space which wind up being at offset 0 and offset 128K in _physical_ memory.
A program normally assumes that two side-by-side pages will be optimally cached.
That is, that you can access data objects in both pages without having them blow away each other's cache entry.
But this is only true if the physical pages underlying the virtual address space are contiguous (insofar as the cache is concerned).
This is what page coloring does.
Instead of assigning _random_ physical pages to virtual addresses, which may result in non-optimal cache performance, page coloring assigns _reasonably-contiguous_ physical pages to virtual addresses.
Thus programs can be written under the assumption that the characteristics of the underlying hardware cache are the same for their virtual address space as they would be if the program had been run directly in a physical address space.
Note that I say "reasonably" contiguous rather than simply "contiguous".
From the point of view of a 128K direct mapped cache, the physical address 0 is the same as the physical address 128K.
So two side-by-side pages in your virtual address space may wind up being offset 128K and offset 132K in physical memory, but could also easily be offset 128K and offset 4K in physical memory and still retain the same cache performance characteristics.
So page coloring does _not_ have to assign truly contiguous pages of physical memory to contiguous pages of virtual memory; it just needs to make sure it assigns contiguous pages from the point of view of cache performance and operation.
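A minimal sketch of that idea, using the 128K direct-mapped cache from this example and 4K pages (numbers chosen for illustration, not taken from the FreeBSD VM code): a physical page's "color" is simply its position within a cache-sized window, and the allocator tries to give virtual page N a physical page whose color is N modulo the number of colors.
[.programlisting]
....
#include <stdio.h>

#define PAGE_SIZE  4096u
#define CACHE_SIZE (128u * 1024)              /* direct-mapped cache from the example */
#define NCOLORS    (CACHE_SIZE / PAGE_SIZE)   /* 32 colors */

/* Color of a physical page: which cache slot its first byte maps to. */
static unsigned page_color(unsigned long phys_page_number)
{
        return phys_page_number % NCOLORS;
}

int main(void)
{
        /* Physical pages 0 and 32 are 128K apart and share a color (they
         * collide in the cache); pages 0 and 1 do not. */
        printf("%u %u %u\n", page_color(0), page_color(32), page_color(1));
        return 0;
}
....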
diff --git a/documentation/content/en/books/arch-handbook/_index.adoc b/documentation/content/en/books/arch-handbook/_index.adoc
index 2ae3066029..a5cefc4616 100644
--- a/documentation/content/en/books/arch-handbook/_index.adoc
+++ b/documentation/content/en/books/arch-handbook/_index.adoc
@@ -1,60 +1,60 @@
---
title: FreeBSD Architecture Handbook
authors:
- author: The FreeBSD Documentation Project
-copyright: Copyright © 2000-2006, 2012-2013 The FreeBSD Documentation Project
-releaseinfo: "$FreeBSD$"
+copyright: Copyright © 2000-2006, 2012-2021 The FreeBSD Documentation Project
+description: FreeBSD Architecture Handbook Index
trademarks: ["freebsd", "apple", "microsoft", "unix", "general"]
next: books/arch-handbook/parti
---
= FreeBSD Architecture Handbook
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
ifeval::["{backend}" == "html5"]
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
:chapters-path: content/en/books/arch-handbook/
endif::[]
ifeval::["{backend}" == "pdf"]
include::../../../../shared/mirrors.adoc[]
include::../../../../shared/authors.adoc[]
include::../../../../shared/releases.adoc[]
include::../../../../shared/en/mailing-lists.adoc[]
include::../../../../shared/en/teams.adoc[]
include::../../../../shared/en/urls.adoc[]
:chapters-path:
endif::[]
ifeval::["{backend}" == "epub3"]
include::../../../../shared/mirrors.adoc[]
include::../../../../shared/authors.adoc[]
include::../../../../shared/releases.adoc[]
include::../../../../shared/en/mailing-lists.adoc[]
include::../../../../shared/en/teams.adoc[]
include::../../../../shared/en/urls.adoc[]
:chapters-path:
endif::[]
[.abstract-title]
Abstract
Welcome to the FreeBSD Architecture Handbook. This manual is a _work in progress_ and is the work of many individuals. Many sections do not yet exist and some of those that do exist need to be updated. If you are interested in helping with this project, send email to the {freebsd-doc}.
The latest version of this document is always available from the link:https://www.FreeBSD.org/[FreeBSD World Wide Web server]. It may also be downloaded in a variety of formats and compression options from the https://download.freebsd.org/ftp/doc/[FreeBSD FTP server] or one of the numerous link:{handbook}#mirrors-ftp/[mirror sites].
'''
include::content/en/books/arch-handbook/toc.adoc[]
diff --git a/documentation/content/en/books/arch-handbook/bibliography/_index.adoc b/documentation/content/en/books/arch-handbook/bibliography/_index.adoc
index a33b858ee5..1ba9e79298 100644
--- a/documentation/content/en/books/arch-handbook/bibliography/_index.adoc
+++ b/documentation/content/en/books/arch-handbook/bibliography/_index.adoc
@@ -1,28 +1,29 @@
---
title: Bibliography
prev: books/arch-handbook/partiii
+description: Bibliography of the FreeBSD Architecture Handbook
---
[appendix]
[[bibliography]]
= Bibliography
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums!:
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
[1] _Marshall Kirk McKusick, Keith Bostic, Michael J Karels, and John S Quarterman._ The Design and Implementation of the 4.4 BSD Operating System. Addison-Wesley Publishing Company, Inc. Copyright © 1996. 0-201-54979-4. 1-2.
diff --git a/documentation/content/en/books/arch-handbook/book.adoc b/documentation/content/en/books/arch-handbook/book.adoc
index 4fbb953a12..2de998232c 100644
--- a/documentation/content/en/books/arch-handbook/book.adoc
+++ b/documentation/content/en/books/arch-handbook/book.adoc
@@ -1,97 +1,97 @@
---
title: FreeBSD Architecture Handbook
authors:
- author: The FreeBSD Documentation Project
copyright: Copyright © 2000-2006, 2012-2013 The FreeBSD Documentation Project
releaseinfo: "$FreeBSD$"
trademarks: ["freebsd", "apple", "microsoft", "unix", "general"]
---
= FreeBSD Architecture Handbook
:doctype: book
:toc: macro
:toclevels: 2
:icons: font
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnums:
:sectnumlevels: 6
:partnums:
:chapter-signifier: Chapter
:part-signifier: Part
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:book: true
:pdf: false
ifeval::["{backend}" == "html5"]
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
:chapters-path: content/en/books/arch-handbook/
endif::[]
ifeval::["{backend}" == "pdf"]
include::../../../../shared/mirrors.adoc[]
include::../../../../shared/authors.adoc[]
include::../../../../shared/releases.adoc[]
include::../../../../shared/en/mailing-lists.adoc[]
include::../../../../shared/en/teams.adoc[]
include::../../../../shared/en/urls.adoc[]
:chapters-path:
endif::[]
ifeval::["{backend}" == "epub3"]
include::../../../../shared/mirrors.adoc[]
include::../../../../shared/authors.adoc[]
include::../../../../shared/releases.adoc[]
include::../../../../shared/en/mailing-lists.adoc[]
include::../../../../shared/en/teams.adoc[]
include::../../../../shared/en/urls.adoc[]
:chapters-path:
endif::[]
[.abstract-title]
Abstract
Welcome to the FreeBSD Architecture Handbook. This manual is a _work in progress_ and is the work of many individuals. Many sections do not yet exist and some of those that do exist need to be updated. If you are interested in helping with this project, send email to the {freebsd-doc}.
The latest version of this document is always available from the link:https://www.FreeBSD.org/[FreeBSD World Wide Web server]. It may also be downloaded in a variety of formats and compression options from the https://download.freebsd.org/ftp/doc/[FreeBSD FTP server] or one of the numerous link:{handbook}#mirrors-ftp/[mirror sites].
'''
toc::[]
// Section one
include::{chapters-path}parti.adoc[lines=7..8]
-include::{chapters-path}boot/_index.adoc[leveloffset=+1, lines=7..21;32..-1]
-include::{chapters-path}locking/_index.adoc[leveloffset=+1, lines=7..21;32..-1]
-include::{chapters-path}kobj/_index.adoc[leveloffset=+1, lines=7..21;32..-1]
-include::{chapters-path}jail/_index.adoc[leveloffset=+1, lines=7..21;32..-1]
-include::{chapters-path}sysinit/_index.adoc[leveloffset=+1], lines=7..21;32..-1]
-include::{chapters-path}mac/_index.adoc[leveloffset=+1, lines=12..26;37..-1]
-include::{chapters-path}vm/_index.adoc[leveloffset=+1, lines=7..21;32..-1]
-include::{chapters-path}smp/_index.adoc[leveloffset=+1, lines=7..21;32..-1]
+include::{chapters-path}boot/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
+include::{chapters-path}locking/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
+include::{chapters-path}kobj/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
+include::{chapters-path}jail/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
+include::{chapters-path}sysinit/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
+include::{chapters-path}mac/_index.adoc[leveloffset=+1, lines=13..27;38..-1]
+include::{chapters-path}vm/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
+include::{chapters-path}smp/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
// Section two
include::{chapters-path}partii.adoc[lines=7..8]
-include::{chapters-path}driverbasics/_index.adoc[leveloffset=+1, lines=7..21;32..-1]
-include::{chapters-path}isa/_index.adoc[leveloffset=+1, lines=7..21;32..-1]
-include::{chapters-path}pci/_index.adoc[leveloffset=+1, lines=7..21;32..-1]
-include::{chapters-path}scsi/_index.adoc[leveloffset=+1, lines=7..21;32..-1]
-include::{chapters-path}usb/_index.adoc[leveloffset=+1, lines=7..21;32..-1]
-include::{chapters-path}newbus/_index.adoc[leveloffset=+1, lines=12..26;37..-1]
-include::{chapters-path}sound/_index.adoc[leveloffset=+1, lines=7..21;32..-1]
-include::{chapters-path}pccard/_index.adoc[leveloffset=+1, lines=7..21;32..-1]
+include::{chapters-path}driverbasics/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
+include::{chapters-path}isa/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
+include::{chapters-path}pci/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
+include::{chapters-path}scsi/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
+include::{chapters-path}usb/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
+include::{chapters-path}newbus/_index.adoc[leveloffset=+1, lines=13..27;38..-1]
+include::{chapters-path}sound/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
+include::{chapters-path}pccard/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
// Section three
include::{chapters-path}partiii.adoc[lines=7..8]
-include::{chapters-path}bibliography/_index.adoc[leveloffset=+1, lines=6..19;28..-1]
+include::{chapters-path}bibliography/_index.adoc[leveloffset=+1, lines=7..20;29..-1]
diff --git a/documentation/content/en/books/arch-handbook/boot/_index.adoc b/documentation/content/en/books/arch-handbook/boot/_index.adoc
index 44b4e3c1be..e98c7114fe 100644
--- a/documentation/content/en/books/arch-handbook/boot/_index.adoc
+++ b/documentation/content/en/books/arch-handbook/boot/_index.adoc
@@ -1,1319 +1,1320 @@
---
title: Chapter 1. Bootstrapping and Kernel Initialization
prev: books/arch-handbook/parti
next: books/arch-handbook/locking
+description: Bootstrapping and Kernel Initialization
---
[[boot]]
= Bootstrapping and Kernel Initialization
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 1
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[boot-synopsis]]
== Synopsis
This chapter is an overview of the boot and system initialization processes, starting from the BIOS (firmware) POST, to the first user process creation. Since the initial steps of system startup are very architecture dependent, the IA-32 architecture is used as an example.
The FreeBSD boot process can be surprisingly complex. After control is passed from the BIOS, a considerable amount of low-level configuration must be done before the kernel can be loaded and executed. This setup must be done in a simple and flexible manner, allowing the user a great deal of customization possibilities.
[[boot-overview]]
== Overview
The boot process is an extremely machine-dependent activity. Not only must code be written for every computer architecture, but there may also be multiple types of booting on the same architecture. For example, a directory listing of [.filename]#/usr/src/sys/boot# reveals a great amount of architecture-dependent code. There is a directory for each of the various supported architectures. In the x86-specific [.filename]#i386# directory, there are subdirectories for different boot standards like [.filename]#mbr# (Master Boot Record), [.filename]#gpt# (GUID Partition Table), and [.filename]#efi# (Extensible Firmware Interface). Each boot standard has its own conventions and data structures. The example that follows shows booting an x86 computer from an MBR hard drive with the FreeBSD [.filename]#boot0# multi-boot loader stored in the very first sector. That boot code starts the FreeBSD three-stage boot process.
The key to understanding this process is that it is a series of stages of increasing complexity. These stages are [.filename]#boot1#, [.filename]#boot2#, and [.filename]#loader# (see man:boot[8] for more detail). The boot system executes each stage in sequence. The last stage, [.filename]#loader#, is responsible for loading the FreeBSD kernel. Each stage is examined in the following sections.
Here is an example of the output generated by the different boot stages. Actual output may differ from machine to machine:
[.informaltable]
[cols="20%,80%", frame="none"]
|===
|*FreeBSD Component*
|*Output (may vary)*
|`boot0`
a|
[source,bash]
....
F1 FreeBSD
F2 BSD
F5 Disk 2
....
|`boot2` footnote:[This prompt will appear if the user presses a key just after selecting an OS to boot at the boot0 stage.]
a|
[source,bash]
....
>>FreeBSD/i386 BOOT
Default: 1:ad(1,a)/boot/loader
boot:
....
|[.filename]#loader#
a|
[source,bash]
....
BTX loader 1.00 BTX version is 1.02
Consoles: internal video/keyboard
BIOS drive C: is disk0
BIOS 639kB/2096064kB available memory
FreeBSD/x86 bootstrap loader, Revision 1.1
Console internal video/keyboard
(root@snap.freebsd.org, Thu Jan 16 22:18:05 UTC 2014)
Loading /boot/defaults/loader.conf
/boot/kernel/kernel text=0xed9008 data=0x117d28+0x176650 syms=[0x8+0x137988+0x8+0x1515f8]
....
|kernel
a|
[source,bash]
....
Copyright (c) 1992-2013 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 10.0-RELEASE 0 r260789: Thu Jan 16 22:34:59 UTC 2014
root@snap.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64
FreeBSD clang version 3.3 (tags/RELEASE_33/final 183502) 20130610
....
|===
[[boot-bios]]
== The BIOS
When the computer powers on, the processor's registers are set to some predefined values. One of the registers is the _instruction pointer_ register, and its value after a power on is well defined: it is a 32-bit value of `0xfffffff0`. The instruction pointer register (also known as the Program Counter) points to code to be executed by the processor. Another important register is the `cr0` 32-bit control register, and its value just after a reboot is `0`. One of ``cr0``'s bits, the PE (Protection Enabled) bit, indicates whether the processor is running in 32-bit protected mode or 16-bit real mode. Since this bit is cleared at boot time, the processor boots in 16-bit real mode. Real mode means, among other things, that linear and physical addresses are identical. The reason the processor does not start immediately in 32-bit protected mode is backwards compatibility. In particular, the boot process relies on the services provided by the BIOS, and the BIOS itself works in legacy, 16-bit code.
The value of `0xfffffff0` is slightly less than 4 GB, so unless the machine has 4 GB of physical memory, it cannot point to a valid memory address. The computer's hardware translates this address so that it points to a BIOS memory block.
The BIOS (Basic Input Output System) is a chip on the motherboard that has a relatively small amount of read-only memory (ROM). This memory contains various low-level routines that are specific to the hardware supplied with the motherboard. The processor will first jump to the address `0xfffffff0`, which really resides in the BIOS's memory. Usually this address contains a jump instruction to the BIOS's POST routines.
The POST (Power On Self Test) is a set of routines including the memory check, system bus check, and other low-level initialization so the CPU can set up the computer properly. The important step of this stage is determining the boot device. Modern BIOS implementations permit the selection of a boot device, allowing booting from a floppy, CD-ROM, hard disk, or other devices.
The very last thing in the POST is the `INT 0x19` instruction. The `INT 0x19` handler reads 512 bytes from the first sector of the boot device into memory at address `0x7c00`. The term _first sector_ originates from hard drive architecture, where the magnetic plate is divided into a number of cylindrical tracks. Tracks are numbered, and every track is divided into a number (usually 64) of sectors. Track numbers start at 0, but sector numbers start from 1. Track 0 is the outermost on the magnetic plate, and sector 1, the first sector, has a special purpose. It is also called the MBR, or Master Boot Record. The remaining sectors on the first track are never used.
This sector is our boot-sequence starting point. As we will see, this sector contains a copy of our [.filename]#boot0# program. A jump is made by the BIOS to address `0x7c00` so it starts executing.
[[boot-boot0]]
== The Master Boot Record (`boot0`)
After control is received from the BIOS at memory address `0x7c00`, [.filename]#boot0# starts executing. It is the first piece of code under FreeBSD control. The task of [.filename]#boot0# is quite simple: scan the partition table and let the user choose which partition to boot from. The Partition Table is a special, standard data structure embedded in the MBR (hence embedded in [.filename]#boot0#) describing the four standard PC "partitions". [.filename]#boot0# resides in the filesystem as [.filename]#/boot/boot0#. It is a small 512-byte file, and it is exactly what FreeBSD's installation procedure wrote to the hard disk's MBR if you chose the "bootmanager" option at installation time. Indeed, [.filename]#boot0# _is_ the MBR.
As mentioned previously, the `INT 0x19` instruction causes the `INT 0x19` handler to load an MBR ([.filename]#boot0#) into memory at address `0x7c00`. The source file for [.filename]#boot0# can be found in [.filename]#sys/boot/i386/boot0/boot0.S# - which is an awesome piece of code written by Robert Nordier.
A special structure starting from offset `0x1be` in the MBR is called the _partition table_. It has four records of 16 bytes each, called _partition records_, which represent how the hard disk is partitioned, or, in FreeBSD's terminology, sliced. One byte of those 16 says whether a partition (slice) is bootable or not. Exactly one record must have that flag set, otherwise [.filename]#boot0#'s code will refuse to proceed.
A partition record has the following fields:
* the 1-byte filesystem type
* the 1-byte bootable flag
* the 6-byte descriptor in CHS format
* the 8-byte descriptor in LBA format
A partition record descriptor contains information about where exactly the partition resides on the drive. Both descriptors, LBA and CHS, describe the same information, but in different ways: LBA (Logical Block Addressing) has the starting sector for the partition and the partition's length, while CHS (Cylinder Head Sector) has coordinates for the first and last sectors of the partition. The partition table ends with the special signature `0xaa55`.
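A C sketch of that 16-byte layout may make the fields easier to picture; this follows the classic MBR partition record format, with field names chosen here for illustration (in real code the structure would be declared packed):
[.programlisting]
....
#include <stdint.h>

/* One of the four 16-byte records in the partition table at offset 0x1be. */
struct mbr_part_record {
        uint8_t  flag;          /* bootable flag: 0x80 = active, 0x00 = inactive */
        uint8_t  chs_first[3];  /* CHS coordinates of the first sector */
        uint8_t  type;          /* partition (slice) type, e.g. 0xa5 for FreeBSD */
        uint8_t  chs_last[3];   /* CHS coordinates of the last sector */
        uint32_t lba_start;     /* LBA of the first sector */
        uint32_t lba_size;      /* length of the partition in sectors */
};
/* The table holds four such records and is followed by the 0xaa55 signature. */
....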
The MBR must fit into 512 bytes, a single disk sector. This program uses low-level "tricks" like taking advantage of the side effects of certain instructions and reusing register values from previous operations to make the most out of the fewest possible instructions. Care must also be taken when handling the partition table, which is embedded in the MBR itself. For these reasons, be very careful when modifying [.filename]#boot0.S#.
Note that the [.filename]#boot0.S# source file is assembled "as is": instructions are translated one by one to binary, with no additional information (no ELF file format, for example). This kind of low-level control is achieved at link time through special control flags passed to the linker. For example, the text section of the program is set to be located at address `0x600`. In practice this means that [.filename]#boot0# must be loaded to memory address `0x600` in order to function properly.
It is worth looking at the [.filename]#Makefile# for [.filename]#boot0# ([.filename]#sys/boot/i386/boot0/Makefile#), as it defines some of the run-time behavior of [.filename]#boot0#. For instance, if a terminal connected to the serial port (COM1) is used for I/O, the macro `SIO` must be defined (`-DSIO`). `-DPXE` enables boot through PXE by pressing kbd:[F6]. Additionally, the program defines a set of _flags_ that allow further modification of its behavior. All of this is illustrated in the [.filename]#Makefile#. For example, look at the linker directives which command the linker to start the text section at address `0x600`, and to build the output file "as is" (strip out any file formatting):
[.programlisting]
....
BOOT_BOOT0_ORG?=0x600
LDFLAGS=-e start -Ttext ${BOOT_BOOT0_ORG} \
-Wl,-N,-S,--oformat,binary
....
.[.filename]#sys/boot/i386/boot0/Makefile# [[boot-boot0-makefile-as-is]]
Let us now start our study of the MBR, or [.filename]#boot0#, starting where execution begins.
[NOTE]
====
Some modifications have been made to some instructions in favor of better exposition. For example, some macros are expanded, and some macro tests are omitted when the result of the test is known. This applies to all of the code examples shown.
====
[.programlisting]
....
start:
cld # String ops inc
xorw %ax,%ax # Zero
movw %ax,%es # Address
movw %ax,%ds # data
movw %ax,%ss # Set up
movw $0x7c00,%sp # stack
....
.[.filename]#sys/boot/i386/boot0/boot0.S# [[boot-boot0-entrypoint]]
This first block of code is the entry point of the program. It is where the BIOS transfers control. First, it makes sure that the string operations autoincrement their pointer operands (the `cld` instruction) footnote:[When in doubt, we refer the reader to the official Intel manuals, which describe the exact semantics for each instruction.]. Then, as it makes no assumption about the state of the segment registers, it initializes them. Finally, it sets the stack pointer register (`%sp`) to address `0x7c00`, so we have a working stack.
The next block is responsible for the relocation and subsequent jump to the relocated code.
[.programlisting]
....
movw $0x7c00,%si # Source
movw $0x600,%di # Destination
movw $512,%cx # Word count
rep # Relocate
movsb # code
movw %di,%bp # Address variables
movb $16,%cl # Words to clear
rep # Zero
stosb # them
incb -0xe(%di) # Set the S field to 1
jmp main-0x7c00+0x600 # Jump to relocated code
....
.[.filename]#sys/boot/i386/boot0/boot0.S# [[boot-boot0-relocation]]
As [.filename]#boot0# is loaded by the BIOS to address `0x7C00`, it copies itself to address `0x600` and then transfers control there (recall that it was linked to execute at address `0x600`). The source address, `0x7c00`, is copied to register `%si`. The destination address, `0x600`, to register `%di`. The number of bytes to copy, `512` (the program's size), is copied to register `%cx`. Next, the `rep` instruction repeats the instruction that follows, that is, `movsb`, the number of times dictated by the `%cx` register. The `movsb` instruction copies the byte pointed to by `%si` to the address pointed to by `%di`. This is repeated another 511 times. On each repetition, both the source and destination registers, `%si` and `%di`, are incremented by one. Thus, upon completion of the 512-byte copy, `%di` has the value `0x600`+`512`= `0x800`, and `%si` has the value `0x7c00`+`512`= `0x7e00`; we have thus completed the code _relocation_.
Next, the destination register `%di` is copied to `%bp`. `%bp` gets the value `0x800`. The value `16` is copied to `%cl` in preparation for a new string operation (like our previous `movsb`). Now, `stosb` is executed 16 times. This instruction copies a `0` value to the address pointed to by the destination register (`%di`, which is `0x800`), and increments it. This is repeated another 15 times, so `%di` ends up with value `0x810`. Effectively, this clears the address range `0x800`-`0x80f`. This range is used as a (fake) partition table for writing the MBR back to disk. Finally, the sector field for the CHS addressing of this fake partition is given the value 1 and a jump is made to the main function from the relocated code. Note that until this jump to the relocated code, any reference to an absolute address was avoided.
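For readers more comfortable with C, the relocation and table-clearing steps amount to roughly the following sketch, which assumes a flat view of low memory (the real code, of course, runs in 16-bit real mode through segment registers):
[.programlisting]
....
#include <stdint.h>
#include <string.h>

/* 'mem' models the low memory the BIOS left us in; purely illustrative. */
static void boot0_relocate_sketch(uint8_t *mem)
{
        memcpy(&mem[0x600], &mem[0x7c00], 512);  /* copy boot0 down to 0x600 */
        memset(&mem[0x800], 0, 16);              /* clear the fake partition entry */
        mem[0x802] = 1;                          /* set its CHS sector ("S") field to 1 */
        /* ...then execution jumps to main in the relocated copy. */
}
....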
The following code block tests whether the drive number provided by the BIOS should be used, or the one stored in [.filename]#boot0#.
[.programlisting]
....
main:
testb $SETDRV,-69(%bp) # Set drive number?
jnz disable_update # Yes
testb %dl,%dl # Drive number valid?
js save_curdrive # Possibly (0x80 set)
....
.[.filename]#sys/boot/i386/boot0/boot0.S# [[boot-boot0-drivenumber]]
This code tests the `SETDRV` bit (`0x20`) in the _flags_ variable. Recall that register `%bp` points to address location `0x800`, so the test is done to the _flags_ variable at address `0x800`-`69`= `0x7bb`. This is an example of the type of modifications that can be done to [.filename]#boot0#. The `SETDRV` flag is not set by default, but it can be set in the [.filename]#Makefile#. When set, the drive number stored in the MBR is used instead of the one provided by the BIOS. We assume the defaults, and that the BIOS provided a valid drive number, so we jump to `save_curdrive`.
The next block saves the drive number provided by the BIOS, and calls `putn` to print a new line on the screen.
[.programlisting]
....
save_curdrive:
movb %dl, (%bp) # Save drive number
pushw %dx # Also in the stack
#ifdef TEST /* test code, print internal bios drive */
rolb $1, %dl
movw $drive, %si
call putkey
#endif
callw putn # Print a newline
....
.[.filename]#sys/boot/i386/boot0/boot0.S# [[boot-boot0-savedrivenumber]]
Note that we assume `TEST` is not defined, so the conditional code in it is not assembled and will not appear in our executable [.filename]#boot0#.
Our next block implements the actual scanning of the partition table. It prints to the screen the partition type for each of the four entries in the partition table. It compares each type with a list of well-known operating system file systems. Examples of recognized partition types are NTFS (Windows(R), ID 0x7), `ext2fs` (Linux(R), ID 0x83), and, of course, `ffs`/`ufs2` (FreeBSD, ID 0xa5). The implementation is fairly simple.
[.programlisting]
....
movw $(partbl+0x4),%bx # Partition table (+4)
xorw %dx,%dx # Item number
read_entry:
movb %ch,-0x4(%bx) # Zero active flag (ch == 0)
btw %dx,_FLAGS(%bp) # Entry enabled?
jnc next_entry # No
movb (%bx),%al # Load type
test %al, %al # skip empty partition
jz next_entry
movw $bootable_ids,%di # Lookup tables
movb $(TLEN+1),%cl # Number of entries
repne # Locate
scasb # type
addw $(TLEN-1), %di # Adjust
movb (%di),%cl # Partition
addw %cx,%di # description
callw putx # Display it
next_entry:
incw %dx # Next item
addb $0x10,%bl # Next entry
jnc read_entry # Till done
....
.[.filename]#sys/boot/i386/boot0/boot0.S# [[boot-boot0-partition-scan]]
It is important to note that the active flag for each entry is cleared, so after the scanning, _no_ partition entry is active in our memory copy of [.filename]#boot0#. Later, the active flag will be set for the selected partition. This ensures that only one active partition exists if the user chooses to write the changes back to disk.
The next block tests for other drives. At startup, the BIOS writes the number of drives present in the computer to address `0x475`. If there are any other drives present, [.filename]#boot0# prints the current drive to screen. The user may command [.filename]#boot0# to scan partitions on another drive later.
[.programlisting]
....
popw %ax # Drive number
subb $0x79,%al # Does next
cmpb 0x475,%al # drive exist? (from BIOS?)
jb print_drive # Yes
decw %ax # Already drive 0?
jz print_prompt # Yes
....
.[.filename]#sys/boot/i386/boot0/boot0.S# [[boot-boot0-test-drives]]
We make the assumption that a single drive is present, so the jump to `print_drive` is not performed. We also assume nothing strange happened, so we jump to `print_prompt`.
This next block just prints out a prompt followed by the default option:
[.programlisting]
....
print_prompt:
movw $prompt,%si # Display
callw putstr # prompt
movb _OPT(%bp),%dl # Display
decw %si # default
callw putkey # key
jmp start_input # Skip beep
....
.[.filename]#sys/boot/i386/boot0/boot0.S# [[boot-boot0-prompt]]
Finally, a jump is performed to `start_input`, where the BIOS services are used to start a timer and for reading user input from the keyboard; if the timer expires, the default option will be selected:
[.programlisting]
....
start_input:
xorb %ah,%ah # BIOS: Get
int $0x1a # system time
movw %dx,%di # Ticks when
addw _TICKS(%bp),%di # timeout
read_key:
movb $0x1,%ah # BIOS: Check
int $0x16 # for keypress
jnz got_key # Have input
xorb %ah,%ah # BIOS: int 0x1a, 00
int $0x1a # get system time
cmpw %di,%dx # Timeout?
jb read_key # No
....
.[.filename]#sys/boot/i386/boot0/boot0.S# [[boot-boot0-start-input]]
An interrupt is requested with number `0x1a` and argument `0` in register `%ah`. The BIOS has a predefined set of services, requested by applications as software-generated interrupts through the `int` instruction and receiving arguments in registers (in this case, `%ah`). Here, particularly, we are requesting the number of clock ticks since last midnight; this value is computed by the BIOS through the RTC (Real Time Clock). This clock can be programmed to work at frequencies ranging from 2 Hz to 8192 Hz. The BIOS sets it to 18.2 Hz at startup. When the request is satisfied, a 32-bit result is returned by the BIOS in registers `%cx` and `%dx` (lower bytes in `%dx`). This result (the `%dx` part) is copied to register `%di`, and the value of the `TICKS` variable is added to `%di`. This variable resides in [.filename]#boot0# at offset `_TICKS` (a negative value) from register `%bp` (which, recall, points to `0x800`). The default value of this variable is `0xb6` (182 in decimal). Now, the idea is that [.filename]#boot0# constantly requests the time from the BIOS, and when the value returned in register `%dx` is greater than the value stored in `%di`, the time is up and the default selection will be made. Since the RTC ticks 18.2 times per second, this condition will be met after 10 seconds (this default behavior can be changed in the [.filename]#Makefile#). Until this time has passed, [.filename]#boot0# continually asks the BIOS for any user input; this is done through `int 0x16`, argument `1` in `%ah`.
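As a quick sanity check of that arithmetic (a throwaway sketch, not code from the tree): at 18.2 ticks per second, the default `TICKS` value of `0xb6` (182) works out to about ten seconds.
[.programlisting]
....
#include <stdio.h>

int main(void)
{
        const double tick_hz = 18.2;    /* BIOS RTC tick rate at startup */
        const int    ticks   = 0xb6;    /* default TICKS value (182) */

        printf("timeout ~ %.1f seconds\n", ticks / tick_hz);   /* ~10.0 */
        return 0;
}
....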
Whether a key was pressed or the time expired, subsequent code validates the selection. Based on the selection, the register `%si` is set to point to the appropriate partition entry in the partition table. This new selection overrides the previous default one. Indeed, it becomes the new default. Finally, the ACTIVE flag of the selected partition is set. If it was enabled at compile time, the in-memory version of [.filename]#boot0# with these modified values is written back to the MBR on disk. We leave the details of this implementation to the reader.
We now end our study with the last code block from the [.filename]#boot0# program:
[.programlisting]
....
movw $0x7c00,%bx # Address for read
movb $0x2,%ah # Read sector
callw intx13 # from disk
jc beep # If error
cmpw $0xaa55,0x1fe(%bx) # Bootable?
jne beep # No
pushw %si # Save ptr to selected part.
callw putn # Leave some space
popw %si # Restore, next stage uses it
jmp *%bx # Invoke bootstrap
....
.[.filename]#sys/boot/i386/boot0/boot0.S# [[boot-boot0-check-bootable]]
Recall that `%si` points to the selected partition entry. This entry tells us where the partition begins on disk. We assume, of course, that the partition selected is actually a FreeBSD slice.
[NOTE]
====
From now on, we will favor the use of the technically more accurate term "slice" rather than "partition".
====
The transfer buffer is set to `0x7c00` (register `%bx`), and a read for the first sector of the FreeBSD slice is requested by calling `intx13`. We assume that everything went okay, so a jump to `beep` is not performed. In particular, the new sector read must end with the magic sequence `0xaa55`. Finally, the value at `%si` (the pointer to the selected partition table) is preserved for use by the next stage, and a jump is performed to address `0x7c00`, where execution of our next stage (the just-read block) is started.
[[boot-boot1]]
== `boot1` Stage
So far we have gone through the following sequence:
* The BIOS did some early hardware initialization, including the POST. The MBR ([.filename]#boot0#) was loaded from absolute disk sector one to address `0x7c00`. Execution control was passed to that location.
* [.filename]#boot0# relocated itself to the location it was linked to execute (`0x600`), followed by a jump to continue execution at the appropriate place. Finally, [.filename]#boot0# loaded the first disk sector from the FreeBSD slice to address `0x7c00`. Execution control was passed to that location.
[.filename]#boot1# is the next step in the boot-loading sequence. It is the first of three boot stages. Note that we have been dealing exclusively with disk sectors. Indeed, the BIOS loads the absolute first sector, while [.filename]#boot0# loads the first sector of the FreeBSD slice. Both loads are to address `0x7c00`. We can conceptually think of these disk sectors as containing the files [.filename]#boot0# and [.filename]#boot1#, respectively, but in reality this is not entirely true for [.filename]#boot1#. Strictly speaking, unlike [.filename]#boot0#, [.filename]#boot1# is not part of the boot blocks footnote:[There is a file /boot/boot1, but it is not what is written to the beginning of the FreeBSD slice. Instead, it is concatenated with boot2 to form boot, which is written to the beginning of the FreeBSD slice and read at boot time.]. Instead, a single, full-blown file, [.filename]#boot# ([.filename]#/boot/boot#), is what ultimately is written to disk. This file is a combination of [.filename]#boot1#, [.filename]#boot2# and the `Boot Extender` (or BTX). This single file is greater in size than a single sector (greater than 512 bytes). Fortunately, [.filename]#boot1# occupies _exactly_ the first 512 bytes of this single file, so when [.filename]#boot0# loads the first sector of the FreeBSD slice (512 bytes), it is actually loading [.filename]#boot1# and transferring control to it.
The main task of [.filename]#boot1# is to load the next boot stage. This next stage is somewhat more complex. It is composed of a server called the "Boot Extender", or BTX, and a client, called [.filename]#boot2#. As we will see, the last boot stage, [.filename]#loader#, is also a client of the BTX server.
Let us now look in detail at what exactly is done by [.filename]#boot1#, starting like we did for [.filename]#boot0#, at its entry point:
[.programlisting]
....
start:
jmp main
....
.[.filename]#sys/boot/i386/boot2/boot1.S# [[boot-boot1-entry]]
The entry point at `start` simply jumps past a special data area to the label `main`, which in turn looks like this:
[.programlisting]
....
main:
cld # String ops inc
xor %cx,%cx # Zero
mov %cx,%es # Address
mov %cx,%ds # data
mov %cx,%ss # Set up
mov $start,%sp # stack
mov %sp,%si # Source
mov $0x700,%di # Destination
incb %ch # Word count
rep # Copy
movsw # code
....
.[.filename]#sys/boot/i386/boot2/boot1.S# [[boot-boot1-main]]
Just like [.filename]#boot0#, this code relocates [.filename]#boot1#, this time to memory address `0x700`. However, unlike [.filename]#boot0#, it does not jump there. [.filename]#boot1# is linked to execute at address `0x7c00`, effectively where it was loaded in the first place. The reason for this relocation will be discussed shortly.
Next comes a loop that looks for the FreeBSD slice. Although [.filename]#boot0# loaded [.filename]#boot1# from the FreeBSD slice, no information was passed to it about this footnote:[Actually we did pass a pointer to the slice entry in register %si. However, boot1 does not assume that it was loaded by boot0 (perhaps some other MBR loaded it, and did not pass this information), so it assumes nothing.], so [.filename]#boot1# must rescan the partition table to find where the FreeBSD slice starts. Therefore it rereads the MBR:
[.programlisting]
....
mov $part4,%si # Partition
cmpb $0x80,%dl # Hard drive?
jb main.4 # No
movb $0x1,%dh # Block count
callw nread # Read MBR
....
.[.filename]#sys/boot/i386/boot2/boot1.S# [[boot-boot1-find-freebsd]]
In the code above, register `%dl` maintains information about the boot device. This is passed on by the BIOS and preserved by the MBR. Numbers `0x80` and greater tell us that we are dealing with a hard drive, so a call is made to `nread`, where the MBR is read. Arguments to `nread` are passed through `%si` and `%dh`. The memory address at label `part4` is copied to `%si`. This memory address holds a "fake partition" to be used by `nread`. The following is the data in the fake partition:
[.programlisting]
....
part4:
.byte 0x80, 0x00, 0x01, 0x00
.byte 0xa5, 0xfe, 0xff, 0xff
.byte 0x00, 0x00, 0x00, 0x00
.byte 0x50, 0xc3, 0x00, 0x00
....
.[.filename]#sys/boot/i386/boot2/Makefile# [[boot-boot2-make-fake-partition]]
In particular, the LBA for this fake partition is hardcoded to zero. This is used as an argument to the BIOS for reading absolute sector one from the hard drive. Alternatively, CHS addressing could be used. In this case, the fake partition holds cylinder 0, head 0 and sector 1, which is equivalent to absolute sector one.
Let us now proceed to take a look at `nread`:
[.programlisting]
....
nread:
mov $0x8c00,%bx # Transfer buffer
mov 0x8(%si),%ax # Get
mov 0xa(%si),%cx # LBA
push %cs # Read from
callw xread.1 # disk
jnc return # If success, return
....
.[.filename]#sys/boot/i386/boot2/boot1.S# [[boot-boot1-nread]]
Recall that `%si` points to the fake partition. The word footnote:[In the context of 16-bit real mode, a word is 2 bytes.] at offset `0x8` is copied to register `%ax` and word at offset `0xa` to `%cx`. They are interpreted by the BIOS as the lower 4-byte value denoting the LBA to be read (the upper four bytes are assumed to be zero). Register `%bx` holds the memory address where the MBR will be loaded. The instruction pushing `%cs` onto the stack is very interesting. In this context, it accomplishes nothing. However, as we will see shortly, [.filename]#boot2#, in conjunction with the BTX server, also uses `xread.1`. This mechanism will be discussed in the next section.
The code at `xread.1` further calls the `read` function, which actually calls the BIOS asking for the disk sector:
[.programlisting]
....
xread.1:
pushl $0x0 # absolute
push %cx # block
push %ax # number
push %es # Address of
push %bx # transfer buffer
xor %ax,%ax # Number of
movb %dh,%al # blocks to
push %ax # transfer
push $0x10 # Size of packet
mov %sp,%bp # Packet pointer
callw read # Read from disk
lea 0x10(%bp),%sp # Clear stack
lret # To far caller
....
.[.filename]#sys/boot/i386/boot2/boot1.S# [[boot-boot1-xread1]]
Note the long return instruction at the end of this block. This instruction pops out the `%cs` register pushed by `nread`, and returns. Finally, `nread` also returns.
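The words pushed in `xread.1` lay out, on the stack, a 16-byte disk address packet of the kind used by the BIOS extended read service (`int 0x13` with `%ah = 0x42`). A hedged C view of that layout, with field names chosen for illustration, is:
[.programlisting]
....
#include <stdint.h>

/* Sketch of the packet assembled on the stack by xread.1; %bp ends up
 * pointing at it.  Field names are illustrative. */
struct disk_addr_packet {
        uint8_t  size;          /* packet size: 0x10 */
        uint8_t  reserved;      /* zero */
        uint16_t nblocks;       /* number of sectors to transfer (from %dh) */
        uint16_t buf_offset;    /* transfer buffer offset  (%bx) */
        uint16_t buf_segment;   /* transfer buffer segment (%es) */
        uint64_t lba;           /* starting LBA; the high 32 bits are pushed as zero */
};
....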
With the MBR loaded to memory, the actual loop for searching the FreeBSD slice begins:
[.programlisting]
....
mov $0x1,%cx # Two passes
main.1:
mov $0x8dbe,%si # Partition table
movb $0x1,%dh # Partition
main.2:
cmpb $0xa5,0x4(%si) # Our partition type?
jne main.3 # No
jcxz main.5 # If second pass
testb $0x80,(%si) # Active?
jnz main.5 # Yes
main.3:
add $0x10,%si # Next entry
incb %dh # Partition
cmpb $0x5,%dh # In table?
jb main.2 # Yes
dec %cx # Do two
jcxz main.1 # passes
....
.[.filename]#sys/boot/i386/boot2/boot1.S# [[boot-boot1-find-part]]
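Under the assumption that the partition records have the 16-byte layout sketched earlier, this two-pass scan behaves roughly like the following C fragment (illustrative only):
[.programlisting]
....
#include <stdint.h>

/* Pass one looks for an *active* FreeBSD slice (type 0xa5); pass two
 * accepts any FreeBSD slice.  Returns the entry index, or -1 if none. */
static int find_freebsd_slice(const uint8_t ptable[4][16])
{
        for (int pass = 0; pass < 2; pass++)
                for (int i = 0; i < 4; i++)
                        if (ptable[i][4] == 0xa5 &&
                            (pass == 1 || (ptable[i][0] & 0x80)))
                                return i;
        return -1;
}
....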
If a FreeBSD slice is identified, execution continues at `main.5`. Note that when a FreeBSD slice is found `%si` points to the appropriate entry in the partition table, and `%dh` holds the partition number. We assume that a FreeBSD slice is found, so we continue execution at `main.5`:
[.programlisting]
....
main.5:
mov %dx,0x900 # Save args
movb $0x10,%dh # Sector count
callw nread # Read disk
mov $0x9000,%bx # BTX
mov 0xa(%bx),%si # Get BTX length and set
add %bx,%si # %si to start of boot2.bin
mov $0xc000,%di # Client page 2
mov $0xa200,%cx # Byte
sub %si,%cx # count
rep # Relocate
movsb # client
....
.[.filename]#sys/boot/i386/boot2/boot1.S# [[boot-boot1-main5]]
Recall that at this point, register `%si` points to the FreeBSD slice entry in the MBR partition table, so a call to `nread` will effectively read sectors at the beginning of this partition. The argument passed on register `%dh` tells `nread` to read 16 disk sectors. Recall that the first 512 bytes, or the first sector of the FreeBSD slice, coincides with the [.filename]#boot1# program. Also recall that the file written to the beginning of the FreeBSD slice is not [.filename]#/boot/boot1#, but [.filename]#/boot/boot#. Let us look at the size of these files in the filesystem:
[source,bash]
....
-r--r--r-- 1 root wheel 512B Jan 8 00:15 /boot/boot0
-r--r--r-- 1 root wheel 512B Jan 8 00:15 /boot/boot1
-r--r--r-- 1 root wheel 7.5K Jan 8 00:15 /boot/boot2
-r--r--r-- 1 root wheel 8.0K Jan 8 00:15 /boot/boot
....
Both [.filename]#boot0# and [.filename]#boot1# are 512 bytes each, so they fit _exactly_ in one disk sector. [.filename]#boot2# is much bigger, holding both the BTX server and the [.filename]#boot2# client. Finally, a file called simply [.filename]#boot# is 512 bytes larger than [.filename]#boot2#. This file is a concatenation of [.filename]#boot1# and [.filename]#boot2#. As already noted, [.filename]#boot0# is the file written to the absolute first disk sector (the MBR), and [.filename]#boot# is the file written to the first sector of the FreeBSD slice; [.filename]#boot1# and [.filename]#boot2# are _not_ written to disk. The command used to concatenate [.filename]#boot1# and [.filename]#boot2# into a single [.filename]#boot# is merely `cat boot1 boot2 > boot`.
So [.filename]#boot1# occupies exactly the first 512 bytes of [.filename]#boot# and, because [.filename]#boot# is written to the first sector of the FreeBSD slice, [.filename]#boot1# fits exactly in this first sector. When `nread` reads the first 16 sectors of the FreeBSD slice, it effectively reads the entire [.filename]#boot# file footnote:[512*16=8192 bytes, exactly the size of boot]. We will see more details about how [.filename]#boot# is formed from [.filename]#boot1# and [.filename]#boot2# in the next section.
Recall that `nread` uses memory address `0x8c00` as the transfer buffer to hold the sectors read. This address is conveniently chosen. Indeed, because [.filename]#boot1# belongs to the first 512 bytes, it ends up in the address range `0x8c00`-`0x8dff`. The 512 bytes that follow (range `0x8e00`-`0x8fff`) are used to store the _bsdlabel_ footnote:[Historically known as disklabel. If you ever wondered where FreeBSD stored this information, it is in this region. See man:bsdlabel[8]].
Starting at address `0x9000` is the beginning of the BTX server, and immediately following is the [.filename]#boot2# client. The BTX server acts as a kernel, and executes in protected mode in the most privileged level. In contrast, the BTX clients ([.filename]#boot2#, for example), execute in user mode. We will see how this is accomplished in the next section. The code after the call to `nread` locates the beginning of [.filename]#boot2# in the memory buffer, and copies it to memory address `0xc000`. This is because the BTX server arranges [.filename]#boot2# to execute in a segment starting at `0xa000`. We explore this in detail in the following section.
The last code block of [.filename]#boot1# enables access to memory above 1MB footnote:[This is necessary for legacy reasons. Interested readers should see .] and concludes with a jump to the starting point of the BTX server:
[.programlisting]
....
seta20:
cli # Disable interrupts
seta20.1:
dec %cx # Timeout?
jz seta20.3 # Yes
inb $0x64,%al # Get status
testb $0x2,%al # Busy?
jnz seta20.1 # Yes
movb $0xd1,%al # Command: Write
outb %al,$0x64 # output port
seta20.2:
inb $0x64,%al # Get status
testb $0x2,%al # Busy?
jnz seta20.2 # Yes
movb $0xdf,%al # Enable
outb %al,$0x60 # A20
seta20.3:
sti # Enable interrupts
jmp 0x9010 # Start BTX
....
.[.filename]#sys/boot/i386/boot2/boot1.S# [[boot-boot1-seta20]]
Note that right before the jump, interrupts are enabled.
[[btx-server]]
== The BTX Server
Next in our boot sequence is the BTX Server. Let us quickly remember how we got here:
* The BIOS loads the absolute sector one (the MBR, or [.filename]#boot0#), to address `0x7c00` and jumps there.
* [.filename]#boot0# relocates itself to `0x600`, the address it was linked to execute, and jumps over there. It then reads the first sector of the FreeBSD slice (which consists of [.filename]#boot1#) into address `0x7c00` and jumps over there.
* [.filename]#boot1# loads the first 16 sectors of the FreeBSD slice into address `0x8c00`. These 16 sectors, or 8192 bytes, make up the whole file [.filename]#boot#. The file is a concatenation of [.filename]#boot1# and [.filename]#boot2#. [.filename]#boot2#, in turn, contains the BTX server and the [.filename]#boot2# client. Finally, a jump is made to address `0x9010`, the entry point of the BTX server.
Before studying the BTX Server in detail, let us further review how the single, all-in-one [.filename]#boot# file is created. The way [.filename]#boot# is built is defined in its [.filename]#Makefile# ([.filename]#/usr/src/sys/boot/i386/boot2/Makefile#). Let us look at the rule that creates the [.filename]#boot# file:
[.programlisting]
....
boot: boot1 boot2
cat boot1 boot2 > boot
....
.[.filename]#sys/boot/i386/boot2/Makefile# [[boot-boot1-make-boot]]
This tells us that [.filename]#boot1# and [.filename]#boot2# are needed, and the rule simply concatenates them to produce a single file called [.filename]#boot#. The rules for creating [.filename]#boot1# are also quite simple:
[.programlisting]
....
boot1: boot1.out
objcopy -S -O binary boot1.out boot1
boot1.out: boot1.o
ld -e start -Ttext 0x7c00 -o boot1.out boot1.o
....
.[.filename]#sys/boot/i386/boot2/Makefile# [[boot-boot1-make-boot1]]
To apply the rule for creating [.filename]#boot1#, [.filename]#boot1.out# must be resolved. This, in turn, depends on the existence of [.filename]#boot1.o#. This last file is simply the result of assembling our familiar [.filename]#boot1.S#, without linking. Now, the rule for creating [.filename]#boot1.out# is applied. This tells us that [.filename]#boot1.o# should be linked with `start` as its entry point, and starting at address `0x7c00`. Finally, [.filename]#boot1# is created from [.filename]#boot1.out# applying the appropriate rule. This rule is the [.filename]#objcopy# command applied to [.filename]#boot1.out#. Note the flags passed to [.filename]#objcopy#: `-S` tells it to strip all relocation and symbolic information; `-O binary` indicates the output format, that is, a simple, unformatted binary file.
Having [.filename]#boot1#, let us take a look at how [.filename]#boot2# is constructed:
[.programlisting]
....
boot2: boot2.ld
@set -- `ls -l boot2.ld`; x=$$((7680-$$5)); \
echo "$$x bytes available"; test $$x -ge 0
dd if=boot2.ld of=boot2 obs=7680 conv=osync
boot2.ld: boot2.ldr boot2.bin ../btx/btx/btx
btxld -v -E 0x2000 -f bin -b ../btx/btx/btx -l boot2.ldr \
-o boot2.ld -P 1 boot2.bin
boot2.ldr:
dd if=/dev/zero of=boot2.ldr bs=512 count=1
boot2.bin: boot2.out
objcopy -S -O binary boot2.out boot2.bin
boot2.out: ../btx/lib/crt0.o boot2.o sio.o
ld -Ttext 0x2000 -o boot2.out
boot2.o: boot2.s
${CC} ${ACFLAGS} -c boot2.s
boot2.s: boot2.c boot2.h ${.CURDIR}/../../common/ufsread.c
${CC} ${CFLAGS} -S -o boot2.s.tmp ${.CURDIR}/boot2.c
sed -e '/align/d' -e '/nop/d' < boot2.s.tmp > boot2.s
rm -f boot2.s.tmp
boot2.h: boot1.out
${NM} -t d ${.ALLSRC} | awk '/([0-9])+ T xread/ \
{ x = $$1 - ORG1; \
printf("#define XREADORG %#x\n", REL1 + x) }' \
ORG1=`printf "%d" ${ORG1}` \
REL1=`printf "%d" ${REL1}` > ${.TARGET}
....
.[.filename]#sys/boot/i386/boot2/Makefile# [[boot-boot1-make-boot2]]
The mechanism for building [.filename]#boot2# is far more elaborate. Let us point out the most relevant facts. The dependency list is as follows:
[.programlisting]
....
boot2: boot2.ld
boot2.ld: boot2.ldr boot2.bin ${BTXDIR}/btx/btx
boot2.bin: boot2.out
boot2.out: ${BTXDIR}/lib/crt0.o boot2.o sio.o
boot2.o: boot2.s
boot2.s: boot2.c boot2.h ${.CURDIR}/../../common/ufsread.c
boot2.h: boot1.out
....
.[.filename]#sys/boot/i386/boot2/Makefile# [[boot-boot1-make-boot2-more]]
Note that initially there is no header file [.filename]#boot2.h#, but its creation depends on [.filename]#boot1.out#, which we already have. The rule for its creation is a bit terse, but the important thing is that the output, [.filename]#boot2.h#, is something like this:
[.programlisting]
....
#define XREADORG 0x725
....
.[.filename]#sys/boot/i386/boot2/boot2.h# [[boot-boot1-make-boot2h]]
Recall that [.filename]#boot1# was relocated (i.e., copied from `0x7c00` to `0x700`). This relocation will now make sense, because as we will see, the BTX server reclaims some memory, including the space where [.filename]#boot1# was originally loaded. However, the BTX server needs access to [.filename]#boot1#'s `xread` function; this function, according to the output of [.filename]#boot2.h#, is at location `0x725`. Indeed, the BTX server uses the `xread` function from [.filename]#boot1#'s relocated code. This function is now accessible from within the [.filename]#boot2# client.
We next build [.filename]#boot2.s# from files [.filename]#boot2.h#, [.filename]#boot2.c# and [.filename]#/usr/src/sys/boot/common/ufsread.c#. The rule for this is to compile the code in [.filename]#boot2.c# (which includes [.filename]#boot2.h# and [.filename]#ufsread.c#) into assembly code. Having [.filename]#boot2.s#, the next rule assembles [.filename]#boot2.s#, creating the object file [.filename]#boot2.o#. The next rule directs the linker to link various files ([.filename]#crt0.o#, [.filename]#boot2.o# and [.filename]#sio.o#). Note that the output file, [.filename]#boot2.out#, is linked to execute at address `0x2000`. Recall that [.filename]#boot2# will be executed in user mode, within a special user segment set up by the BTX server. This segment starts at `0xa000`. Also, remember that the [.filename]#boot2# portion of [.filename]#boot# was copied to address `0xc000`, that is, offset `0x2000` from the start of the user segment, so [.filename]#boot2# will work properly when we transfer control to it. Next, [.filename]#boot2.bin# is created from [.filename]#boot2.out# by stripping its symbols and format information; boot2.bin is a _raw_ binary. Now, note that a file [.filename]#boot2.ldr# is created as a 512-byte file full of zeros. This space is reserved for the bsdlabel.
Now that we have files [.filename]#boot1#, [.filename]#boot2.bin# and [.filename]#boot2.ldr#, only the BTX server is missing before creating the all-in-one [.filename]#boot# file. The BTX server is located in [.filename]#/usr/src/sys/boot/i386/btx/btx#; it has its own [.filename]#Makefile# with its own set of rules for building. The important thing to notice is that it is also compiled as a _raw_ binary, and that it is linked to execute at address `0x9000`. The details can be found in [.filename]#/usr/src/sys/boot/i386/btx/btx/Makefile#.
Having the files that comprise the [.filename]#boot# program, the final step is to _merge_ them. This is done by a special program called [.filename]#btxld# (source located in [.filename]#/usr/src/usr.sbin/btxld#). Some arguments to this program include the name of the output file ([.filename]#boot#), its entry point (`0x2000`) and its file format (raw binary). The various files are finally merged by this utility into the file [.filename]#boot#, which consists of [.filename]#boot1#, [.filename]#boot2#, the `bsdlabel` and the BTX server. This file, which takes exactly 16 sectors, or 8192 bytes, is what is actually written to the beginning of the FreeBSD slice during installation. Let us now proceed to study the BTX server program.
The BTX server prepares a simple environment and switches from 16-bit real mode to 32-bit protected mode, right before passing control to the client. This includes initializing and updating the following data structures:
* The `Interrupt Vector Table (IVT)` is modified. The IVT provides exception and interrupt handlers for Real-Mode code.
* The `Interrupt Descriptor Table (IDT)` is created. Entries are provided for processor exceptions, hardware interrupts, two system calls and V86 interface. The IDT provides exception and interrupt handlers for Protected-Mode code.
* A `Task-State Segment (TSS)` is created. This is necessary because the processor works in the _least_ privileged level when executing the client ([.filename]#boot2#), but in the _most_ privileged level when executing the BTX server.
* The GDT (Global Descriptor Table) is set up. Entries (descriptors) are provided for supervisor code and data, user code and data, and real-mode code and data. footnote:[Real-mode code and data are necessary when switching back to real mode from protected mode, as suggested by the Intel manuals.]
Let us now start studying the actual implementation. Recall that [.filename]#boot1# made a jump to address `0x9010`, the BTX server's entry point. Before studying program execution there, note that the BTX server has a special header at address range `0x9000-0x900f`, right before its entry point. This header is defined as follows:
[.programlisting]
....
start: # Start of code
/*
* BTX header.
*/
btx_hdr: .byte 0xeb # Machine ID
.byte 0xe # Header size
.ascii "BTX" # Magic
.byte 0x1 # Major version
.byte 0x2 # Minor version
.byte BTX_FLAGS # Flags
.word PAG_CNT-MEM_ORG>>0xc # Paging control
.word break-start # Text size
.long 0x0 # Entry address
....
.[.filename]#sys/boot/i386/btx/btx/btx.S# [[btx-header]]
Note the first two bytes are `0xeb` and `0xe`. In the IA-32 architecture, these two bytes are interpreted as a relative jump past the header into the entry point, so in theory, [.filename]#boot1# could jump here (address `0x9000`) instead of address `0x9010`. Note that the last field in the BTX header is a pointer to the client's ([.filename]#boot2#) entry point. This field is patched at link time.
Immediately following the header is the BTX server's entry point:
[.programlisting]
....
/*
* Initialization routine.
*/
init: cli # Disable interrupts
xor %ax,%ax # Zero/segment
mov %ax,%ss # Set up
mov $0x1800,%sp # stack
mov %ax,%es # Address
mov %ax,%ds # data
pushl $0x2 # Clear
popfl # flags
....
.[.filename]#sys/boot/i386/btx/btx/btx.S# [[btx-init]]
This code disables interrupts, sets up a working stack (starting at address `0x1800`) and clears the flags in the EFLAGS register. Note that the `popfl` instruction pops out a doubleword (4 bytes) from the stack and places it in the EFLAGS register. As the value actually popped is `2`, the EFLAGS register is effectively cleared (IA-32 requires that bit 1 of the EFLAGS register always be 1).
Our next code block clears (sets to `0`) the memory range `0x5e00-0x8fff`. This range is where the various data structures will be created:
[.programlisting]
....
/*
* Initialize memory.
*/
mov $0x5e00,%di # Memory to initialize
mov $(0x9000-0x5e00)/2,%cx # Words to zero
rep # Zero-fill
stosw # memory
....
.[.filename]#sys/boot/i386/btx/btx/btx.S# [[btx-clear-mem]]
Recall that [.filename]#boot1# was originally loaded to address `0x7c00`, so, with this memory initialization, that copy effectively disappeared. However, also recall that [.filename]#boot1# was relocated to `0x700`, so _that_ copy is still in memory, and the BTX server will make use of it.
Next, the real-mode IVT (Interrupt Vector Table) is updated. The IVT is an array of segment/offset pairs for exception and interrupt handlers. The BIOS normally maps hardware interrupts to interrupt vectors `0x8` to `0xf` and `0x70` to `0x77` but, as will be seen, the 8259A Programmable Interrupt Controller, the chip controlling the actual mapping of hardware interrupts to interrupt vectors, is programmed to remap these interrupt vectors from `0x8-0xf` to `0x20-0x27` and from `0x70-0x77` to `0x28-0x2f`. Thus, interrupt handlers are provided for interrupt vectors `0x20-0x2f`. The reason the BIOS-provided handlers are not used directly is that they work in 16-bit real mode, but not 32-bit protected mode. Processor mode will be switched to 32-bit protected mode shortly. However, the BTX server sets up a mechanism to effectively use the handlers provided by the BIOS:
[.programlisting]
....
/*
* Update real mode IDT for reflecting hardware interrupts.
*/
mov $intr20,%bx # Address first handler
mov $0x10,%cx # Number of handlers
mov $0x20*4,%di # First real mode IDT entry
init.0: mov %bx,(%di) # Store IP
inc %di # Address next
inc %di # entry
stosw # Store CS
add $4,%bx # Next handler
loop init.0 # Next IRQ
....
.[.filename]#sys/boot/i386/btx/btx/btx.S# [[btx-ivt]]
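Each real-mode IVT entry is simply a four-byte offset:segment pair stored at linear address `vector * 4`, which is why the loop above starts writing at `0x20*4` and advances the destination by four bytes per vector. An illustrative C view (not from the sources):

[.programlisting]
....
#include <stdint.h>

/* Illustrative only: layout of one real-mode IVT entry. */
struct ivt_entry {
	uint16_t	offset;		/* handler IP, stored by "mov %bx,(%di)" */
	uint16_t	segment;	/* handler CS, stored by "stosw" */
};

/* The entry for hardware interrupt vector 0x20 lives at 0x20 * 4 = 0x80. */
#define	IVT_ENTRY_ADDR(vector)	((vector) * 4)
....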
The next block creates the IDT (Interrupt Descriptor Table). The IDT is analogous, in protected mode, to the IVT in real mode. That is, the IDT describes the various exception and interrupt handlers used when the processor is executing in protected mode. In essence, it also consists of an array of segment/offset pairs, although the structure is somewhat more complex, because segments in protected mode are different than in real mode, and various protection mechanisms apply:
[.programlisting]
....
/*
* Create IDT.
*/
mov $0x5e00,%di # IDT's address
mov $idtctl,%si # Control string
init.1: lodsb # Get entry
cbw # count
xchg %ax,%cx # as word
jcxz init.4 # If done
lodsb # Get segment
xchg %ax,%dx # P:DPL:type
lodsw # Get control
xchg %ax,%bx # set
lodsw # Get handler offset
mov $SEL_SCODE,%dh # Segment selector
init.2: shr %bx # Handle this int?
jnc init.3 # No
mov %ax,(%di) # Set handler offset
mov %dh,0x2(%di) # and selector
mov %dl,0x5(%di) # Set P:DPL:type
add $0x4,%ax # Next handler
init.3: lea 0x8(%di),%di # Next entry
loop init.2 # Till set done
jmp init.1 # Continue
....
.[.filename]#sys/boot/i386/btx/btx/btx.S# [[btx-idt]]
Each entry in the `IDT` is 8 bytes long. Besides the segment/offset information, they also describe the segment type, privilege level, and whether the segment is present in memory or not. The construction is such that interrupt vectors from `0` to `0xf` (exceptions) are handled by function `intx00`; vector `0x10` (also an exception) is handled by `intx10`; hardware interrupts, which are later configured to start at interrupt vector `0x20` all the way to interrupt vector `0x2f`, are handled by function `intx20`. Lastly, interrupt vector `0x30`, which is used for system calls, is handled by `intx30`, and vectors `0x31` and `0x32` are handled by `intx31`. It must be noted that only the descriptors for interrupt vectors `0x30`, `0x31` and `0x32` are given privilege level 3, the same privilege level as the [.filename]#boot2# client, which means the client can execute a software-generated interrupt to these vectors through the `int` instruction without faulting (this is the way [.filename]#boot2# uses the services provided by the BTX server). Also, note that _only_ software-generated interrupts are protected from code executing in lesser privilege levels. Hardware-generated interrupts and processor-generated exceptions are _always_ handled adequately, regardless of the actual privileges involved.
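A rough C view of one such 8-byte gate descriptor may help when reading the loop above; the field names are illustrative, but the byte offsets match the stores performed at `init.2` (offset at bytes 0-1, selector at byte 2, P:DPL:type at byte 5):

[.programlisting]
....
#include <stdint.h>

/* Illustrative only: layout of an 8-byte interrupt gate descriptor. */
struct idt_gate {
	uint16_t	offset_lo;	/* handler offset, low 16 bits */
	uint16_t	selector;	/* code segment selector (SEL_SCODE) */
	uint8_t		reserved;	/* left as zero */
	uint8_t		p_dpl_type;	/* present bit, DPL and gate type */
	uint16_t	offset_hi;	/* handler offset, high 16 bits */
};
....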
The next step is to initialize the TSS (Task-State Segment). The TSS is a hardware feature that helps the operating system or executive software implement multitasking functionality through process abstraction. The IA-32 architecture demands the creation and use of _at least_ one TSS if multitasking facilities are used or different privilege levels are defined. Since the [.filename]#boot2# client is executed in privilege level 3, but the BTX server runs in privilege level 0, a TSS must be defined:
[.programlisting]
....
/*
* Initialize TSS.
*/
init.4: movb $_ESP0H,TSS_ESP0+1(%di) # Set ESP0
movb $SEL_SDATA,TSS_SS0(%di) # Set SS0
movb $_TSSIO,TSS_MAP(%di) # Set I/O bit map base
....
.[.filename]#sys/boot/i386/btx/btx/btx.S# [[btx-tss]]
Note that a value is given for the Privilege Level 0 stack pointer and stack segment in the TSS. This is needed because, if an interrupt or exception is received while executing [.filename]#boot2# in Privilege Level 3, a change to Privilege Level 0 is automatically performed by the processor, so a new working stack is needed. Finally, the I/O Map Base Address field of the TSS is given a value, which is a 16-bit offset from the beginning of the TSS to the I/O Permission Bitmap and the Interrupt Redirection Bitmap.
After the IDT and TSS are created, the processor is ready to switch to protected mode. This is done in the next block:
[.programlisting]
....
/*
* Bring up the system.
*/
mov $0x2820,%bx # Set protected mode
callw setpic # IRQ offsets
lidt idtdesc # Set IDT
lgdt gdtdesc # Set GDT
mov %cr0,%eax # Switch to protected
inc %ax # mode
mov %eax,%cr0 #
ljmp $SEL_SCODE,$init.8 # To 32-bit code
.code32
init.8: xorl %ecx,%ecx # Zero
movb $SEL_SDATA,%cl # To 32-bit
movw %cx,%ss # stack
....
.[.filename]#sys/boot/i386/btx/btx/btx.S# [[btx-prot]]
First, a call is made to `setpic` to program the 8259A PIC (Programmable Interrupt Controller). This chip is connected to multiple hardware interrupt sources. Upon receiving an interrupt from a device, it signals the processor with the appropriate interrupt vector. This can be customized so that specific interrupts are associated with specific interrupt vectors, as explained before. Next, the IDTR (Interrupt Descriptor Table Register) and GDTR (Global Descriptor Table Register) are loaded with the instructions `lidt` and `lgdt`, respectively. These registers are loaded with the base address and limit of the IDT and GDT. The following three instructions set the Protection Enable (PE) bit of the `%cr0` register. This effectively switches the processor to 32-bit protected mode. Next, a long jump is made to `init.8` using segment selector SEL_SCODE, which selects the Supervisor Code Segment. The processor is effectively executing in CPL 0, the most privileged level, after this jump. Finally, the Supervisor Data Segment is selected for the stack by assigning the segment selector SEL_SDATA to the `%ss` register. This data segment also has a privilege level of `0`.
Our last code block is responsible for loading the TR (Task Register) with the segment selector for the TSS we created earlier, and setting the User Mode environment before passing execution control to the [.filename]#boot2# client.
[.programlisting]
....
/*
* Launch user task.
*/
movb $SEL_TSS,%cl # Set task
ltr %cx # register
movl $0xa000,%edx # User base address
movzwl %ss:BDA_MEM,%eax # Get free memory
shll $0xa,%eax # To bytes
subl $ARGSPACE,%eax # Less arg space
subl %edx,%eax # Less base
movb $SEL_UDATA,%cl # User data selector
pushl %ecx # Set SS
pushl %eax # Set ESP
push $0x202 # Set flags (IF set)
push $SEL_UCODE # Set CS
pushl btx_hdr+0xc # Set EIP
pushl %ecx # Set GS
pushl %ecx # Set FS
pushl %ecx # Set DS
pushl %ecx # Set ES
pushl %edx # Set EAX
movb $0x7,%cl # Set remaining
init.9: push $0x0 # general
loop init.9 # registers
popa # and initialize
popl %es # Initialize
popl %ds # user
popl %fs # segment
popl %gs # registers
iret # To user mode
....
.[.filename]#sys/boot/i386/btx/btx/btx.S# [[btx-end]]
Note that the client's environment includes a stack segment selector and stack pointer (registers `%ss` and `%esp`). Indeed, once the TR is loaded with the TSS segment selector (instruction `ltr`), the client's stack pointer is calculated and pushed onto the stack along with the stack's segment selector. Next, the value `0x202` is pushed onto the stack; it is the value that EFLAGS will get when control is passed to the client. Also, the User Mode code segment selector and the client's entry point are pushed. Recall that this entry point is patched into the BTX header at link time. Finally, segment selectors (stored in register `%ecx`) for the segment registers `%gs`, `%fs`, `%ds` and `%es` are pushed onto the stack, along with the value in `%edx` (`0xa000`). Keep in mind the various values that have been pushed onto the stack (they will be popped out shortly). Next, values for the remaining general purpose registers are also pushed onto the stack (note the `loop` that pushes the value `0` seven times). Now the values start being popped off the stack. First, the `popa` instruction pops the eight values most recently pushed. They are stored in the general purpose registers, in order, `%edi, %esi, %ebp, %ebx, %edx, %ecx, %eax` (the doubleword destined for `%esp` is discarded). Then, the various segment selectors pushed are popped into the various segment registers. Five values still remain on the stack. They are popped when the `iret` instruction is executed. This instruction first pops the value that was pushed from the BTX header. This value is a pointer to [.filename]#boot2#'s entry point. It is placed in the register `%eip`, the instruction pointer register. Next, the segment selector for the User Code Segment is popped and copied to register `%cs`. Remember that this segment's privilege level is 3, the least privileged level. This means that we must provide values for the stack of this privilege level. This is why the processor, besides further popping the value for the EFLAGS register, does two more pops out of the stack. These values go to the stack pointer (`%esp`) and the stack segment (`%ss`). Now, execution continues at ``boot2``'s entry point.
It is important to note how the User Code Segment is defined. This segment's _base address_ is set to `0xa000`. This means that code memory addresses are _relative_ to address 0xa000; if code being executed is fetched from address `0x2000`, the _actual_ memory addressed is `0xa000+0x2000=0xc000`.
[[boot2]]
== boot2 Stage
`boot2` defines an important structure, `struct bootinfo`. This structure is initialized by `boot2` and passed to the loader, and then further to the kernel. Some fields of this structure are set by `boot2`, the rest by the loader. This structure, among other information, contains the kernel filename, the BIOS harddisk geometry, the BIOS drive number for the boot device, the physical memory available, the `envp` pointer, etc. The definition for it is:
[.programlisting]
....
/usr/include/machine/bootinfo.h:
struct bootinfo {
u_int32_t bi_version;
u_int32_t bi_kernelname; /* represents a char * */
u_int32_t bi_nfs_diskless; /* struct nfs_diskless * */
/* End of fields that are always present. */
#define bi_endcommon bi_n_bios_used
u_int32_t bi_n_bios_used;
u_int32_t bi_bios_geom[N_BIOS_GEOM];
u_int32_t bi_size;
u_int8_t bi_memsizes_valid;
u_int8_t bi_bios_dev; /* bootdev BIOS unit number */
u_int8_t bi_pad[2];
u_int32_t bi_basemem;
u_int32_t bi_extmem;
u_int32_t bi_symtab; /* struct symtab * */
u_int32_t bi_esymtab; /* struct symtab * */
/* Items below only from advanced bootloader */
u_int32_t bi_kernend; /* end of kernel space */
u_int32_t bi_envp; /* environment */
u_int32_t bi_modulep; /* preloaded modules */
};
....
`boot2` enters an infinite loop waiting for user input, then calls `load()`. If the user does not press anything, the loop breaks on a timeout, so `load()` will load the default file ([.filename]#/boot/loader#). The functions `ino_t lookup(char *filename)` and `int xfsread(ino_t inode, void *buf, size_t nbyte)` are used to read the content of a file into memory. [.filename]#/boot/loader# is an ELF binary, but its ELF header is prepended with [.filename]#a.out#'s `struct exec` structure. `load()` scans the loader's ELF header, loads the content of [.filename]#/boot/loader# into memory, and passes execution to the loader's entry point:
[.programlisting]
....
sys/boot/i386/boot2/boot2.c:
__exec((caddr_t)addr, RB_BOOTINFO | (opts & RBX_MASK),
MAKEBOOTDEV(dev_maj[dsk.type], 0, dsk.slice, dsk.unit, dsk.part),
0, 0, 0, VTOP(&bootinfo));
....
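To give an idea of how the helpers named above fit together before this `__exec()` call, here is a minimal sketch. It is not the actual [.filename]#boot2# code; it assumes `lookup()` returns `0` when the file is not found and `xfsread()` returns non-zero on error:

[.programlisting]
....
/* Sketch only: how load() could use the helpers described above. */
static void
load_file(const char *path, void *buf, size_t nbyte)
{
	ino_t ino;

	if ((ino = lookup((char *)path)) == 0)	/* file not found */
		return;
	if (xfsread(ino, buf, nbyte) != 0)	/* read error */
		return;
	/* parse the ELF headers in buf, then jump to the entry point */
}
....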
[[boot-loader]]
== loader Stage
loader is a BTX client as well. I will not describe it here in detail, as there is a comprehensive manual page written by Mike Smith, man:loader[8]. The underlying mechanisms and BTX were discussed above.
The main task for the loader is to boot the kernel. When the kernel has been loaded into memory, it is called by the loader:
[.programlisting]
....
sys/boot/common/boot.c:
/* Call the exec handler from the loader matching the kernel */
module_formats[km->m_loader]->l_exec(km);
....
[[boot-kernel]]
== Kernel Initialization
Let us take a look at the command that links the kernel. This will help identify the exact location where the loader passes execution to the kernel. This location is the kernel's actual entry point.
[.programlisting]
....
sys/conf/Makefile.i386:
ld -elf -Bdynamic -T /usr/src/sys/conf/ldscript.i386 -export-dynamic \
-dynamic-linker /red/herring -o kernel -X locore.o \
<lots of kernel .o files>
....
A few interesting things can be seen here. First, the kernel is an ELF dynamically linked binary, but the dynamic linker for the kernel is [.filename]#/red/herring#, which is definitely a bogus file. Second, taking a look at the file [.filename]#sys/conf/ldscript.i386# gives an idea about what ld options are used when linking the kernel. Reading through the first few lines, the string
[.programlisting]
....
sys/conf/ldscript.i386:
ENTRY(btext)
....
says that a kernel's entry point is the symbol `btext`. This symbol is defined in [.filename]#locore.s#:
[.programlisting]
....
sys/i386/i386/locore.s:
.text
/**********************************************************************
*
* This is where the bootblocks start us, set the ball rolling...
*
*/
NON_GPROF_ENTRY(btext)
....
First, the EFLAGS register is set to a predefined value of 0x00000002. Then the `%fs` and `%gs` segment registers are initialized; the bootstrap is trusted to have set `%cs`, `%ds`, `%es` and `%ss`:
[.programlisting]
....
sys/i386/i386/locore.s:
/* Don't trust what the BIOS gives for eflags. */
pushl $PSL_KERNEL
popfl
/*
* Don't trust what the BIOS gives for %fs and %gs. Trust the bootstrap
* to set %cs, %ds, %es and %ss.
*/
mov %ds, %ax
mov %ax, %fs
mov %ax, %gs
....
`btext` calls the routines `recover_bootinfo()`, `identify_cpu()`, and `create_pagetables()`, which are also defined in [.filename]#locore.s#. Here is a description of what they do:
[.informaltable]
[cols="1,1", frame="none"]
|===
|`recover_bootinfo`
|This routine parses the parameters to the kernel passed from the bootstrap. The kernel may have been booted in 3 ways: by the loader, described above, by the old disk boot blocks, or by the old diskless boot procedure. This function determines the booting method, and stores the `struct bootinfo` structure into the kernel memory.
|`identify_cpu`
|This function tries to find out what CPU it is running on, storing the value found in the variable `_cpu`.
|`create_pagetables`
|This function allocates and fills out a Page Table Directory at the top of the kernel memory area.
|===
The next step is to enable VME, if the CPU supports it:
[.programlisting]
....
testl $CPUID_VME, R(_cpu_feature)
jz 1f
movl %cr4, %eax
orl $CR4_VME, %eax
movl %eax, %cr4
....
Then paging is enabled:
[.programlisting]
....
/* Now enable paging */
movl R(_IdlePTD), %eax
movl %eax,%cr3 /* load ptd addr into mmu */
movl %cr0,%eax /* get control word */
orl $CR0_PE|CR0_PG,%eax /* enable paging */
movl %eax,%cr0 /* and let's page NOW! */
....
The next three lines of code are needed because paging has just been enabled, so a jump is required to continue execution at the virtualized addresses the kernel is linked to run at:
[.programlisting]
....
pushl $begin /* jump to high virtualized address */
ret
/* now running relocated at KERNBASE where the system is linked to run */
begin:
....
The function `init386()` is called with a pointer to the first free physical page, followed by `mi_startup()`. `init386` is an architecture dependent initialization function, and `mi_startup()` is an architecture independent one (the 'mi_' prefix stands for Machine Independent). The kernel never returns from `mi_startup()`, and by calling it, the kernel finishes booting:
[.programlisting]
....
sys/i386/i386/locore.s:
movl physfree, %esi
pushl %esi /* value of first for init386(first) */
call _init386 /* wire 386 chip for unix operation */
call _mi_startup /* autoconfiguration, mountroot etc */
hlt /* never returns to here */
....
=== `init386()`
`init386()` is defined in [.filename]#sys/i386/i386/machdep.c# and performs low-level initialization specific to the i386 chip. The switch to protected mode was performed by the loader. The loader has created the very first task, in which the kernel continues to operate. Before looking at the code, consider the tasks it must complete to finish initializing protected mode execution:
* Initialize the kernel tunable parameters, passed from the bootstrapping program.
* Prepare the GDT.
* Prepare the IDT.
* Initialize the system console.
* Initialize the DDB, if it is compiled into kernel.
* Initialize the TSS.
* Prepare the LDT.
* Set up proc0's pcb.
`init386()` initializes the tunable parameters passed from the bootstrap by setting the environment pointer (envp) and calling `init_param1()`. The envp pointer has been passed from the loader in the `bootinfo` structure:
[.programlisting]
....
sys/i386/i386/machdep.c:
kern_envp = (caddr_t)bootinfo.bi_envp + KERNBASE;
/* Init basic tunables, hz etc */
init_param1();
....
`init_param1()` is defined in [.filename]#sys/kern/subr_param.c#. That file has a number of sysctls, and two functions, `init_param1()` and `init_param2()`, that are called from `init386()`:
[.programlisting]
....
sys/kern/subr_param.c:
hz = HZ;
TUNABLE_INT_FETCH("kern.hz", &hz);
....
TUNABLE_<typename>_FETCH is used to fetch the value from the environment:
[.programlisting]
....
/usr/src/sys/sys/kernel.h:
#define TUNABLE_INT_FETCH(path, var) getenv_int((path), (var))
....
The sysctl `kern.hz` is the system clock tick rate. Additionally, these sysctls are set by `init_param1()`: `kern.maxswzone`, `kern.maxbcache`, `kern.maxtsiz`, `kern.dfldsiz`, `kern.maxdsiz`, `kern.dflssiz`, `kern.maxssiz`, and `kern.sgrowsiz`.
Then `init386()` prepares the Global Descriptor Table (GDT). Every task on an x86 is running in its own virtual address space, and this space is addressed by a segment:offset pair. Say, for instance, the current instruction to be executed by the processor lies at CS:EIP; then the linear virtual address for that instruction is "the virtual address of code segment CS" + EIP. For convenience, segments begin at virtual address 0 and end at a 4GB boundary. Therefore, the instruction's linear virtual address for this example is just the value of EIP. Segment registers such as CS, DS, etc. hold the selectors, i.e., indexes, into the GDT (to be more precise, an index is not a selector itself, but the INDEX field of a selector). FreeBSD's GDT holds descriptors for 15 selectors per CPU:
[.programlisting]
....
sys/i386/i386/machdep.c:
union descriptor gdt[NGDT * MAXCPU]; /* global descriptor table */
sys/i386/include/segments.h:
/*
* Entries in the Global Descriptor Table (GDT)
*/
#define GNULL_SEL 0 /* Null Descriptor */
#define GCODE_SEL 1 /* Kernel Code Descriptor */
#define GDATA_SEL 2 /* Kernel Data Descriptor */
#define GPRIV_SEL 3 /* SMP Per-Processor Private Data */
#define GPROC0_SEL 4 /* Task state process slot zero and up */
#define GLDT_SEL 5 /* LDT - eventually one per process */
#define GUSERLDT_SEL 6 /* User LDT */
#define GTGATE_SEL 7 /* Process task switch gate */
#define GBIOSLOWMEM_SEL 8 /* BIOS low memory access (must be entry 8) */
#define GPANIC_SEL 9 /* Task state to consider panic from */
#define GBIOSCODE32_SEL 10 /* BIOS interface (32bit Code) */
#define GBIOSCODE16_SEL 11 /* BIOS interface (16bit Code) */
#define GBIOSDATA_SEL 12 /* BIOS interface (Data) */
#define GBIOSUTIL_SEL 13 /* BIOS interface (Utility) */
#define GBIOSARGS_SEL 14 /* BIOS interface (Arguments) */
....
Note that those #defines are not selectors themselves, but just the INDEX field of a selector, so they are exactly the indices into the GDT. For example, the actual selector for the kernel code segment (GCODE_SEL) has the value 0x08.
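The conversion from an index to a selector value is just a shift that places the index in the INDEX field (bits 3-15) and ORs in the requested privilege level; the `GSEL()` macro used later in this chapter serves this purpose. An illustrative macro (not the real definition from [.filename]#sys/i386/include/segments.h#):

[.programlisting]
....
/* Illustrative: how a GDT index and a privilege level form a selector. */
#define	MAKE_SELECTOR(index, rpl)	(((index) << 3) | (rpl))

/* MAKE_SELECTOR(GCODE_SEL, 0) == (1 << 3) | 0 == 0x08 */
....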
The next step is to initialize the Interrupt Descriptor Table (IDT). This table is referenced by the processor when a software or hardware interrupt occurs. For example, to make a system call, a user application issues the `INT 0x80` instruction. This is a software interrupt, so the processor's hardware looks up the record with index 0x80 in the IDT. This record points to the routine that handles the interrupt; in this particular case, it is the kernel's syscall gate. The IDT may have a maximum of 256 (0x100) records. The kernel allocates NIDT records for the IDT, where NIDT is the maximum (256):
[.programlisting]
....
sys/i386/i386/machdep.c:
static struct gate_descriptor idt0[NIDT];
struct gate_descriptor *idt = &idt0[0]; /* interrupt descriptor table */
....
For each interrupt, an appropriate handler is set. The syscall gate for `INT 0x80` is set as well:
[.programlisting]
....
sys/i386/i386/machdep.c:
setidt(0x80, &IDTVEC(int0x80_syscall),
SDT_SYS386TGT, SEL_UPL, GSEL(GCODE_SEL, SEL_KPL));
....
So when a userland application issues the `INT 0x80` instruction, control will transfer to the function `_Xint0x80_syscall`, which is in the kernel code segment and will be executed with supervisor privileges.
Console and DDB are then initialized:
[.programlisting]
....
sys/i386/i386/machdep.c:
cninit();
/* skipped */
#ifdef DDB
kdb_init();
if (boothowto & RB_KDB)
Debugger("Boot flags requested debugger");
#endif
....
The Task State Segment (TSS) is another x86 protected mode structure; it is used by the hardware to store task information when a task switch occurs.
The Local Descriptor Table (LDT) is used to reference userland code and data. Several selectors are defined to point into the LDT; they are the system call gates and the user code and data selectors:
[.programlisting]
....
/usr/include/machine/segments.h:
#define LSYS5CALLS_SEL 0 /* forced by intel BCS */
#define LSYS5SIGR_SEL 1
#define L43BSDCALLS_SEL 2 /* notyet */
#define LUCODE_SEL 3
#define LSOL26CALLS_SEL 4 /* Solaris >= 2.6 system call gate */
#define LUDATA_SEL 5
/* separate stack, es,fs,gs sels ? */
/* #define LPOSIXCALLS_SEL 5*/ /* notyet */
#define LBSDICALLS_SEL 16 /* BSDI system call gate */
#define NLDT (LBSDICALLS_SEL + 1)
....
Next, proc0's Process Control Block (`struct pcb`) structure is initialized. proc0 is a `struct proc` structure that describes a kernel process. It is always present while the kernel is running, therefore it is declared as global:
[.programlisting]
....
sys/kern/kern_init.c:
struct proc proc0;
....
The structure `struct pcb` is a part of the proc structure. It is defined in [.filename]#/usr/include/machine/pcb.h# and holds a process's information specific to the i386 architecture, such as register values.
=== `mi_startup()`
This function performs a bubble sort of all the system initialization objects and then calls the entry of each object one by one:
[.programlisting]
....
sys/kern/init_main.c:
for (sipp = sysinit; *sipp; sipp++) {
/* ... skipped ... */
/* Call function */
(*((*sipp)->func))((*sipp)->udata);
/* ... skipped ... */
}
....
Although the sysinit framework is described in the link:/books/developers-handbook[Developers' Handbook], I will discuss its internals here.
Every system initialization object (sysinit object) is created by calling a SYSINIT() macro. Let us take as example an `announce` sysinit object. This object prints the copyright message:
[.programlisting]
....
sys/kern/init_main.c:
static void
print_caddr_t(void *data __unused)
{
printf("%s", (char *)data);
}
SYSINIT(announce, SI_SUB_COPYRIGHT, SI_ORDER_FIRST, print_caddr_t, copyright)
....
The subsystem ID for this object is SI_SUB_COPYRIGHT (0x0800001), which comes right after SI_SUB_CONSOLE (0x0800000). So, the copyright message will be printed out first, just after the console initialization.
Let us take a look at what exactly the macro `SYSINIT()` does. It expands to a `C_SYSINIT()` macro. The `C_SYSINIT()` macro then expands to a static `struct sysinit` structure declaration with another `DATA_SET` macro call:
[.programlisting]
....
/usr/include/sys/kernel.h:
#define C_SYSINIT(uniquifier, subsystem, order, func, ident) \
	static struct sysinit uniquifier ## _sys_init = {	\
		subsystem,					\
		order,						\
		func,						\
		ident						\
	};							\
	DATA_SET(sysinit_set, uniquifier ## _sys_init);
#define SYSINIT(uniquifier, subsystem, order, func, ident) \
C_SYSINIT(uniquifier, subsystem, order, \
(sysinit_cfunc_t)(sysinit_nfunc_t)func, (void *)ident)
....
The `DATA_SET()` macro expands to a `MAKE_SET()`, and that macro is the point where all the sysinit magic is hidden:
[.programlisting]
....
/usr/include/linker_set.h:
#define MAKE_SET(set, sym) \
static void const * const __set_##set##_sym_##sym = sym; \
__asm(".section .set." #set ",\"aw\""); \
__asm(".long " #sym); \
__asm(".previous")
#endif
#define TEXT_SET(set, sym) MAKE_SET(set, sym)
#define DATA_SET(set, sym) MAKE_SET(set, sym)
....
In our case, the following declaration will occur:
[.programlisting]
....
static struct sysinit announce_sys_init = {
SI_SUB_COPYRIGHT,
SI_ORDER_FIRST,
(sysinit_cfunc_t)(sysinit_nfunc_t) print_caddr_t,
(void *) copyright
};
static void const *const __set_sysinit_set_sym_announce_sys_init =
announce_sys_init;
__asm(".section .set.sysinit_set" ",\"aw\"");
__asm(".long " "announce_sys_init");
__asm(".previous");
....
The first `__asm` instruction will create an ELF section within the kernel's executable. This will happen at kernel link time. The section will have the name `.set.sysinit_set`. The content of this section is one 32-bit value, the address of the announce_sys_init structure, and that is what the second `__asm` statement emits. The third `__asm` instruction marks the end of the section. If a directive with the same section name occurred before, the content, i.e., the 32-bit value, will be appended to the existing section, thus forming an array of 32-bit pointers.
Running objdump on a kernel binary, you may notice the presence of such small sections:
[source,bash]
....
% objdump -h /kernel
7 .set.cons_set 00000014 c03164c0 c03164c0 002154c0 2**2
CONTENTS, ALLOC, LOAD, DATA
8 .set.kbddriver_set 00000010 c03164d4 c03164d4 002154d4 2**2
CONTENTS, ALLOC, LOAD, DATA
9 .set.scrndr_set 00000024 c03164e4 c03164e4 002154e4 2**2
CONTENTS, ALLOC, LOAD, DATA
10 .set.scterm_set 0000000c c0316508 c0316508 00215508 2**2
CONTENTS, ALLOC, LOAD, DATA
11 .set.sysctl_set 0000097c c0316514 c0316514 00215514 2**2
CONTENTS, ALLOC, LOAD, DATA
12 .set.sysinit_set 00000664 c0316e90 c0316e90 00215e90 2**2
CONTENTS, ALLOC, LOAD, DATA
....
This screen dump shows that the size of the `.set.sysinit_set` section is 0x664 bytes, so `0x664/sizeof(void *)` sysinit objects are compiled into the kernel. The other sections, such as `.set.sysctl_set`, represent other linker sets.
By defining a variable of type `struct linker_set`, the content of the `.set.sysinit_set` section is "collected" into that variable:
[.programlisting]
....
sys/kern/init_main.c:
extern struct linker_set sysinit_set; /* XXX */
....
The `struct linker_set` is defined as follows:
[.programlisting]
....
/usr/include/linker_set.h:
struct linker_set {
int ls_length;
void *ls_items[1]; /* really ls_length of them, trailing NULL */
};
....
The first field is equal to the number of sysinit objects, and the second field is a NULL-terminated array of pointers to them.
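A minimal sketch, not the actual `mi_startup()` code, of how the collected set can be walked using those two fields:

[.programlisting]
....
/* Sketch only: walking the sysinit linker set described above. */
struct linker_set *ls = &sysinit_set;
struct sysinit *sip;
int i;

for (i = 0; i < ls->ls_length; i++) {
	sip = (struct sysinit *)ls->ls_items[i];
	/* sip now points to one statically declared sysinit object */
}
....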
Returning to the `mi_startup()` discussion, it should be clear now how the sysinit objects are organized. The `mi_startup()` function sorts them and calls each one. The very last object is the system scheduler:
[.programlisting]
....
/usr/include/sys/kernel.h:
enum sysinit_sub_id {
SI_SUB_DUMMY = 0x0000000, /* not executed; for linker*/
SI_SUB_DONE = 0x0000001, /* processed*/
SI_SUB_CONSOLE = 0x0800000, /* console*/
SI_SUB_COPYRIGHT = 0x0800001, /* first use of console*/
...
SI_SUB_RUN_SCHEDULER = 0xfffffff /* scheduler: no return*/
};
....
The system scheduler sysinit object is defined in the file [.filename]#sys/vm/vm_glue.c#, and the entry point for that object is `scheduler()`. That function is actually an infinite loop, and it represents a process with PID 0, the swapper process. The proc0 structure, mentioned before, is used to describe it.
The first user process, called _init_, is created by the sysinit object `init`:
[.programlisting]
....
sys/kern/init_main.c:
static void
create_init(const void *udata __unused)
{
int error;
int s;
s = splhigh();
error = fork1(proc0, RFFDG | RFPROC, initproc);
if (error)
panic("cannot fork init: %d\n", error);
initproc->p_flag |= P_INMEM | P_SYSTEM;
cpu_set_fork_handler(initproc, start_init, NULL);
remrunqueue(initproc);
splx(s);
}
SYSINIT(init,SI_SUB_CREATE_INIT, SI_ORDER_FIRST, create_init, NULL)
....
`create_init()` allocates a new process by calling `fork1()`, but does not mark it runnable. When this new process is scheduled for execution by the scheduler, `start_init()` will be called. That function is defined in [.filename]#init_main.c#. It tries to load and exec the [.filename]#init# binary, probing [.filename]#/sbin/init# first, then [.filename]#/sbin/oinit#, [.filename]#/sbin/init.bak#, and finally [.filename]#/stand/sysinstall#:
[.programlisting]
....
sys/kern/init_main.c:
static char init_path[MAXPATHLEN] =
#ifdef INIT_PATH
__XSTRING(INIT_PATH);
#else
"/sbin/init:/sbin/oinit:/sbin/init.bak:/stand/sysinstall";
#endif
....
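The probing order comes straight from walking this colon-separated list left to right. The following standalone program is purely illustrative, it is not the `start_init()` code, but it shows the same splitting logic:

[.programlisting]
....
#include <stdio.h>
#include <string.h>

/* Illustrative only: splitting a colon-separated path list. */
int
main(void)
{
	char init_path[] =
	    "/sbin/init:/sbin/oinit:/sbin/init.bak:/stand/sysinstall";
	char *p, *next;

	for (p = init_path; p != NULL; p = next) {
		next = strchr(p, ':');
		if (next != NULL)
			*next++ = '\0';
		printf("would try to exec %s\n", p);
	}
	return (0);
}
....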
diff --git a/documentation/content/en/books/arch-handbook/driverbasics/_index.adoc b/documentation/content/en/books/arch-handbook/driverbasics/_index.adoc
index 07732d34bb..46c08ba0aa 100644
--- a/documentation/content/en/books/arch-handbook/driverbasics/_index.adoc
+++ b/documentation/content/en/books/arch-handbook/driverbasics/_index.adoc
@@ -1,328 +1,329 @@
---
title: Chapter 9. Writing FreeBSD Device Drivers
prev: books/arch-handbook/partii
next: books/arch-handbook/isa
+description: Writing FreeBSD Device Drivers
---
[[driverbasics]]
= Writing FreeBSD Device Drivers
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 9
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[driverbasics-intro]]
== Introduction
This chapter provides a brief introduction to writing device drivers for FreeBSD. A device in this context is a term used mostly for hardware-related stuff that belongs to the system, like disks, printers, or a graphics display with its keyboard. A device driver is the software component of the operating system that controls a specific device. There are also so-called pseudo-devices where a device driver emulates the behavior of a device in software without any particular underlying hardware. Device drivers can be compiled into the system statically or loaded on demand through the dynamic kernel linker facility `kld`.
Most devices in a UNIX(R)-like operating system are accessed through device-nodes, sometimes also called special files. These files are usually located under the directory [.filename]#/dev# in the filesystem hierarchy.
Device drivers can roughly be broken down into two categories: character and network device drivers.
[[driverbasics-kld]]
== Dynamic Kernel Linker Facility - KLD
The kld interface allows system administrators to dynamically add and remove functionality from a running system. This allows device driver writers to load their new changes into a running kernel without constantly rebooting to test changes.
The kld interface is used through:
* `kldload` - loads a new kernel module
* `kldunload` - unloads a kernel module
* `kldstat` - lists loaded modules
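The same operations are also available to programs through the kldload(2), kldunload(2) and kldstat(2) system calls. A small illustrative example (the module path assumes the skeleton module built below):

[.programlisting]
....
#include <sys/param.h>
#include <sys/linker.h>
#include <err.h>

/* Illustrative only: load and immediately unload a module from C. */
int
main(void)
{
	int fileid;

	if ((fileid = kldload("./skeleton.ko")) == -1)
		err(1, "kldload");
	if (kldunload(fileid) == -1)
		err(1, "kldunload");
	return (0);
}
....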
.Skeleton Layout of a kernel module
[.programlisting]
....
/*
* KLD Skeleton
* Inspired by Andrew Reiter's Daemonnews article
*/
#include <sys/types.h>
#include <sys/module.h>
#include <sys/systm.h> /* uprintf */
#include <sys/errno.h>
#include <sys/param.h> /* defines used in kernel.h */
#include <sys/kernel.h> /* types used in module initialization */
/*
* Load handler that deals with the loading and unloading of a KLD.
*/
static int
skel_loader(struct module *m, int what, void *arg)
{
int err = 0;
switch (what) {
case MOD_LOAD: /* kldload */
uprintf("Skeleton KLD loaded.\n");
break;
case MOD_UNLOAD:
uprintf("Skeleton KLD unloaded.\n");
break;
default:
err = EOPNOTSUPP;
break;
}
return(err);
}
/* Declare this module to the rest of the kernel */
static moduledata_t skel_mod = {
"skel",
skel_loader,
NULL
};
DECLARE_MODULE(skeleton, skel_mod, SI_SUB_KLD, SI_ORDER_ANY);
....
=== Makefile
FreeBSD provides a system makefile to simplify compiling a kernel module.
[.programlisting]
....
SRCS=skeleton.c
KMOD=skeleton
.include <bsd.kmod.mk>
....
Running `make` with this makefile will create a file [.filename]#skeleton.ko# that can be loaded into the kernel by typing:
[source,bash]
....
# kldload -v ./skeleton.ko
....
[[driverbasics-char]]
== Character Devices
A character device driver is one that transfers data directly to and from a user process. This is the most common type of device driver and there are plenty of simple examples in the source tree.
This simple example pseudo-device remembers whatever values are written to it and can then echo them back when read.
.Example of a Sample Echo Pseudo-Device Driver for FreeBSD 10.X - 12.X
[example]
====
[.programlisting]
....
/*
* Simple Echo pseudo-device KLD
*
* Murray Stokely
* Søren (Xride) Straarup
* Eitan Adler
*/
#include <sys/types.h>
#include <sys/module.h>
#include <sys/systm.h> /* uprintf */
#include <sys/param.h> /* defines used in kernel.h */
#include <sys/kernel.h> /* types used in module initialization */
#include <sys/conf.h> /* cdevsw struct */
#include <sys/uio.h> /* uio struct */
#include <sys/malloc.h>
#define BUFFERSIZE 255
/* Function prototypes */
static d_open_t echo_open;
static d_close_t echo_close;
static d_read_t echo_read;
static d_write_t echo_write;
/* Character device entry points */
static struct cdevsw echo_cdevsw = {
.d_version = D_VERSION,
.d_open = echo_open,
.d_close = echo_close,
.d_read = echo_read,
.d_write = echo_write,
.d_name = "echo",
};
struct s_echo {
char msg[BUFFERSIZE + 1];
int len;
};
/* vars */
static struct cdev *echo_dev;
static struct s_echo *echomsg;
MALLOC_DECLARE(M_ECHOBUF);
MALLOC_DEFINE(M_ECHOBUF, "echobuffer", "buffer for echo module");
/*
* This function is called by the kld[un]load(2) system calls to
* determine what actions to take when a module is loaded or unloaded.
*/
static int
echo_loader(struct module *m __unused, int what, void *arg __unused)
{
int error = 0;
switch (what) {
case MOD_LOAD: /* kldload */
error = make_dev_p(MAKEDEV_CHECKNAME | MAKEDEV_WAITOK,
&echo_dev,
&echo_cdevsw,
0,
UID_ROOT,
GID_WHEEL,
0600,
"echo");
if (error != 0)
break;
echomsg = malloc(sizeof(*echomsg), M_ECHOBUF, M_WAITOK |
M_ZERO);
printf("Echo device loaded.\n");
break;
case MOD_UNLOAD:
destroy_dev(echo_dev);
free(echomsg, M_ECHOBUF);
printf("Echo device unloaded.\n");
break;
default:
error = EOPNOTSUPP;
break;
}
return (error);
}
static int
echo_open(struct cdev *dev __unused, int oflags __unused, int devtype __unused,
struct thread *td __unused)
{
int error = 0;
uprintf("Opened device \"echo\" successfully.\n");
return (error);
}
static int
echo_close(struct cdev *dev __unused, int fflag __unused, int devtype __unused,
struct thread *td __unused)
{
uprintf("Closing device \"echo\".\n");
return (0);
}
/*
* The read function just takes the buf that was saved via
* echo_write() and returns it to userland for accessing.
* uio(9)
*/
static int
echo_read(struct cdev *dev __unused, struct uio *uio, int ioflag __unused)
{
size_t amt;
int error;
/*
* How big is this read operation? Either as big as the user wants,
* or as big as the remaining data. Note that the 'len' does not
* include the trailing null character.
*/
amt = MIN(uio->uio_resid, uio->uio_offset >= echomsg->len + 1 ? 0 :
echomsg->len + 1 - uio->uio_offset);
if ((error = uiomove(echomsg->msg, amt, uio)) != 0)
uprintf("uiomove failed!\n");
return (error);
}
/*
* echo_write takes in a character string and saves it
* to buf for later accessing.
*/
static int
echo_write(struct cdev *dev __unused, struct uio *uio, int ioflag __unused)
{
size_t amt;
int error;
/*
* We either write from the beginning or are appending -- do
* not allow random access.
*/
if (uio->uio_offset != 0 && (uio->uio_offset != echomsg->len))
return (EINVAL);
/* This is a new message, reset length */
if (uio->uio_offset == 0)
echomsg->len = 0;
/* Copy the string in from user memory to kernel memory */
amt = MIN(uio->uio_resid, (BUFFERSIZE - echomsg->len));
error = uiomove(echomsg->msg + uio->uio_offset, amt, uio);
/* Now we need to null terminate and record the length */
echomsg->len = uio->uio_offset;
echomsg->msg[echomsg->len] = 0;
if (error != 0)
uprintf("Write failed: bad address!\n");
return (error);
}
DEV_MODULE(echo, echo_loader, NULL);
....
====
With this driver loaded try:
[source,bash]
....
# echo -n "Test Data" > /dev/echo
# cat /dev/echo
Opened device "echo" successfully.
Test Data
Closing device "echo".
....
Real hardware devices are described in the next chapter.
[[driverbasics-block]]
== Block Devices (Are Gone)
Other UNIX(R) systems may support a second type of disk device known as block devices. Block devices are disk devices for which the kernel provides caching. This caching makes block-devices almost unusable, or at least dangerously unreliable. The caching will reorder the sequence of write operations, depriving the application of the ability to know the exact disk contents at any one instant in time.
This makes predictable and reliable crash recovery of on-disk data structures (filesystems, databases, etc.) impossible. Since writes may be delayed, there is no way the kernel can report to the application which particular write operation encountered a write error; this further compounds the consistency problem.
For this reason, no serious applications rely on block devices, and in fact, almost all applications which access disks directly take great pains to specify that character (or "raw") devices should always be used. As the implementation of the aliasing of each disk (partition) to two devices with different semantics significantly complicated the relevant kernel code, FreeBSD dropped support for cached disk devices as part of the modernization of the disk I/O infrastructure.
[[driverbasics-net]]
== Network Drivers
Drivers for network devices do not use device nodes in order to be accessed. Their selection is based on other decisions made inside the kernel, and instead of calling open(), use of a network device is generally introduced by the socket(2) system call.
For more information see ifnet(9), the source of the loopback device, and Bill Paul's network drivers.
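A short userland illustration of that difference: instead of opening a device node, a program obtains a descriptor with socket(2) and then talks to the networking stack through it:

[.programlisting]
....
#include <sys/socket.h>
#include <err.h>
#include <unistd.h>

/* Illustrative only: network devices are reached through a socket. */
int
main(void)
{
	int s;

	if ((s = socket(PF_INET, SOCK_DGRAM, 0)) == -1)
		err(1, "socket");
	/* interface ioctl(2) requests such as SIOCGIFADDR are issued on s */
	close(s);
	return (0);
}
....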
diff --git a/documentation/content/en/books/arch-handbook/isa/_index.adoc b/documentation/content/en/books/arch-handbook/isa/_index.adoc
index a51c91ad1d..454f973fb0 100644
--- a/documentation/content/en/books/arch-handbook/isa/_index.adoc
+++ b/documentation/content/en/books/arch-handbook/isa/_index.adoc
@@ -1,1103 +1,1104 @@
---
title: Chapter 10. ISA Device Drivers
prev: books/arch-handbook/driverbasics
next: books/arch-handbook/pci
+description: ISA Device Drivers
---
[[isa-driver]]
= ISA Device Drivers
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 10
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[isa-driver-synopsis]]
== Synopsis
This chapter introduces the issues relevant to writing a driver for an ISA device. The pseudo-code presented here is rather detailed and reminiscent of the real code but is still only pseudo-code. It avoids the details irrelevant to the subject of the discussion. Real-life examples can be found in the source code of real drivers. In particular, the drivers `ep` and `aha` are good sources of information.
[[isa-driver-basics]]
== Basic Information
A typical ISA driver would need the following include files:
[.programlisting]
....
#include <sys/module.h>
#include <sys/bus.h>
#include <machine/bus.h>
#include <machine/resource.h>
#include <sys/rman.h>
#include <isa/isavar.h>
#include <isa/pnpvar.h>
....
They describe the things specific to the ISA and generic bus subsystem.
The bus subsystem is implemented in an object-oriented fashion; its main structures are accessed by associated method functions.
The list of bus methods implemented by an ISA driver is like one for any other bus. For a hypothetical driver named "xxx" they would be:
* `static void xxx_isa_identify (driver_t *, device_t);` Normally used for bus drivers, not device drivers. But for ISA devices this method may have special use: if the device provides some device-specific (non-PnP) way to auto-detect devices this routine may implement it.
* `static int xxx_isa_probe (device_t dev);` Probe for a device at a known (or PnP) location. This routine can also accommodate device-specific auto-detection of parameters for partially configured devices.
* `static int xxx_isa_attach (device_t dev);` Attach and initialize device.
* `static int xxx_isa_detach (device_t dev);` Detach device before unloading the driver module.
* `static int xxx_isa_shutdown (device_t dev);` Execute shutdown of the device before system shutdown.
* `static int xxx_isa_suspend (device_t dev);` Suspend the device before the system goes to the power-save state. May also abort transition to the power-save state.
* `static int xxx_isa_resume (device_t dev);` Resume the device activity after return from power-save state.
`xxx_isa_probe()` and `xxx_isa_attach()` are mandatory; the rest of the routines are optional, depending on the device's needs.
The driver is linked to the system with the following set of descriptions.
[.programlisting]
....
/* table of supported bus methods */
static device_method_t xxx_isa_methods[] = {
/* list all the bus method functions supported by the driver */
/* omit the unsupported methods */
DEVMETHOD(device_identify, xxx_isa_identify),
DEVMETHOD(device_probe, xxx_isa_probe),
DEVMETHOD(device_attach, xxx_isa_attach),
DEVMETHOD(device_detach, xxx_isa_detach),
DEVMETHOD(device_shutdown, xxx_isa_shutdown),
DEVMETHOD(device_suspend, xxx_isa_suspend),
DEVMETHOD(device_resume, xxx_isa_resume),
DEVMETHOD_END
};
static driver_t xxx_isa_driver = {
"xxx",
xxx_isa_methods,
sizeof(struct xxx_softc),
};
static devclass_t xxx_devclass;
DRIVER_MODULE(xxx, isa, xxx_isa_driver, xxx_devclass,
load_function, load_argument);
....
Here `struct xxx_softc` is a device-specific structure that contains private driver data and descriptors for the driver's resources. The bus code automatically allocates one softc descriptor per device as needed.
If the driver is implemented as a loadable module then `load_function()` is called to do driver-specific initialization or clean-up when the driver is loaded or unloaded, and `load_argument` is passed as one of its arguments. If the driver does not support dynamic loading (in other words it must always be linked into the kernel) then these values should be set to 0 and the last definition would look like:
[.programlisting]
....
DRIVER_MODULE(xxx, isa, xxx_isa_driver,
xxx_devclass, 0, 0);
....
If the driver is for a device which supports PnP then a table of supported PnP IDs must be defined. The table consists of a list of PnP IDs supported by this driver and human-readable descriptions of the hardware types and models having these IDs. It looks like:
[.programlisting]
....
static struct isa_pnp_id xxx_pnp_ids[] = {
/* a line for each supported PnP ID */
{ 0x12345678, "Our device model 1234A" },
{ 0x12345679, "Our device model 1234B" },
{ 0, NULL }, /* end of table */
};
....
If the driver does not support PnP devices it still needs an empty PnP ID table, like:
[.programlisting]
....
static struct isa_pnp_id xxx_pnp_ids[] = {
{ 0, NULL }, /* end of table */
};
....
[[isa-driver-device-t]]
== `device_t` Pointer
`device_t` is the pointer type for the device structure. Here we consider only the methods interesting from the device driver writer's standpoint. The methods to manipulate values in the device structure are:
* `device_t device_get_parent(dev)` Get the parent bus of a device.
* `driver_t device_get_driver(dev)` Get pointer to its driver structure.
* `char *device_get_name(dev)` Get the driver name, such as `"xxx"` for our example.
* `int device_get_unit(dev)` Get the unit number (units are numbered from 0 for the devices associated with each driver).
* `char *device_get_nameunit(dev)` Get the device name including the unit number, such as "xxx0", "xxx1" and so on.
* `char *device_get_desc(dev)` Get the device description. Normally it describes the exact model of device in human-readable form.
* `device_set_desc(dev, desc)` Set the description. This makes the device description point to the string `desc`, which must not be deallocated or changed after that.
* `device_set_desc_copy(dev, desc)` Set the description. The description is copied into an internal dynamically allocated buffer, so the string desc may be changed afterwards without adverse effects.
* `void *device_get_softc(dev)` Get pointer to the device descriptor (struct `xxx_softc`) associated with this device.
* `u_int32_t device_get_flags(dev)` Get the flags specified for the device in the configuration file.
A convenience function `device_printf(dev, fmt, ...)` may be used to print messages from the device driver. It automatically prepends the unit name and a colon to the message.
The device_t methods are implemented in the file [.filename]#kern/subr_bus.c#.
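A minimal sketch of how these accessors are commonly combined in an attach routine; the message and the use of the softc are illustrative, not taken from a real driver:

[.programlisting]
....
/* Sketch only: typical use of the device_t accessors listed above. */
static int
xxx_isa_attach(device_t dev)
{
	struct xxx_softc *sc = device_get_softc(dev);

	device_printf(dev, "attaching unit %d (%s)\n",
	    device_get_unit(dev), device_get_desc(dev));
	/* initialize the fields of sc here */
	return (0);
}
....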
[[isa-driver-config]]
== Configuration File and the Order of Identifying and Probing During Auto-Configuration
The ISA devices are described in the kernel configuration file like:
[.programlisting]
....
device xxx0 at isa? port 0x300 irq 10 drq 5
iomem 0xd0000 flags 0x1 sensitive
....
The values of port, IRQ and so on are converted to the resource values associated with the device. They are optional, depending on the device's needs and abilities for auto-configuration. For example, some devices do not need DRQ at all and some allow the driver to read the IRQ setting from the device configuration ports. If a machine has multiple ISA buses the exact bus may be specified in the configuration line, like `isa0` or `isa1`, otherwise the device would be searched for on all the ISA buses.
`sensitive` is a resource requesting that this device must be probed before all non-sensitive devices. It is supported but does not seem to be used in any current driver.
For legacy ISA devices in many cases the drivers are still able to detect the configuration parameters. But each device to be configured in the system must have a config line. If two devices of some type are installed in the system but there is only one configuration line for the corresponding driver, i.e.:
[.programlisting]
....
device xxx0 at isa?
....
then only one device will be configured.
But for the devices supporting automatic identification by means of Plug-n-Play or some proprietary protocol, one configuration line is enough to configure all the devices in the system, like the one above or just simply:
[.programlisting]
....
device xxx at isa?
....
If a driver supports both auto-identified and legacy devices and both kinds are installed at once in one machine then it is enough to describe in the config file the legacy devices only. The auto-identified devices will be added automatically.
When an ISA bus is auto-configured the events happen as follows:
All the drivers' identify routines (including the PnP identify routine which identifies all the PnP devices) are called in random order. As they identify the devices they add them to the list on the ISA bus. Normally the drivers' identify routines associate their drivers with the new devices. The PnP identify routine does not know about the other drivers yet so it does not associate any with the new devices it adds.
The PnP devices are put to sleep using the PnP protocol to prevent them from being probed as legacy devices.
The probe routines of non-PnP devices marked as `sensitive` are called. If the probe for a device completes successfully, the attach routine is called for it.
The probe and attach routines of all non-PNP devices are called likewise.
The PnP devices are brought back from the sleep state and assigned the resources they request: I/O and memory address ranges, IRQs and DRQs, all of them not conflicting with the attached legacy devices.
Then for each PnP device the probe routines of all the present ISA drivers are called. The first one that claims the device gets attached. It is possible that multiple drivers would claim the device with different priority; in this case, the highest-priority driver wins. The probe routines must call `ISA_PNP_PROBE()` to compare the actual PnP ID with the list of the IDs supported by the driver and, if the ID is not in the table, return failure. That means that absolutely every driver, even the ones not supporting any PnP devices, must call `ISA_PNP_PROBE()`, at least with an empty PnP ID table, to return failure on unknown PnP devices.
The probe routine returns a positive value (the error code) on error, zero or negative value on success.
The negative return values are used when a PnP device supports multiple interfaces. For example, an older compatibility interface and a newer advanced interface which are supported by different drivers. Then both drivers would detect the device. The driver which returns a higher value in the probe routine takes precedence (in other words, the driver returning 0 has highest precedence, returning -1 is next, returning -2 is after it and so on). As a result, the devices which support only the old interface will be handled by the old driver (which should return -1 from the probe routine) while the devices supporting the new interface as well will be handled by the new driver (which should return 0 from the probe routine). If multiple drivers return the same value then the one called first wins. So if a driver returns the value 0 it may be sure that it won the priority arbitration.
The device-specific identify routines can also assign not a driver but a class of drivers to the device. Then all the drivers in the class are probed for this device, like the case with PnP. This feature is not implemented in any existing driver and is not considered further in this document.
As the PnP devices are disabled when probing the legacy devices they will not be attached twice (once as legacy and once as PnP). But in case of device-dependent identify routines it is the responsibility of the driver to make sure that the same device will not be attached by the driver twice: once as legacy user-configured and once as auto-identified.
Another practical consequence for the auto-identified devices (both PnP and device-specific) is that the flags cannot be passed to them from the kernel configuration file. So they must either not use the flags at all, use the flags from device unit 0 for all the auto-identified devices, or use the sysctl interface instead of flags.
Other unusual configurations may be accommodated by accessing the configuration resources directly with functions of families `resource_query_*()` and `resource_*_value()`. Their implementations are located in [.filename]#kern/subr_bus.c#. The old IDE disk driver [.filename]#i386/isa/wd.c# contains examples of such use. But the standard means of configuration must always be preferred. Leave parsing the configuration resources to the bus configuration code.
[[isa-driver-resources]]
== Resources
The information that a user enters into the kernel configuration file is processed and passed to the kernel as configuration resources. This information is parsed by the bus configuration code and transformed into a value of structure device_t and the bus resources associated with it. The drivers may access the configuration resources directly using functions `resource_*` for more complex cases of configuration. However, generally this is neither needed nor recommended, so this issue is not discussed further here.
The bus resources are associated with each device. They are identified by type and number within the type. For the ISA bus the following types are defined:
* _SYS_RES_IRQ_ - interrupt number
* _SYS_RES_DRQ_ - ISA DMA channel number
* _SYS_RES_MEMORY_ - range of device memory mapped into the system memory space
* _SYS_RES_IOPORT_ - range of device I/O registers
The enumeration within types starts from 0, so if a device has two memory regions it would have resources of type `SYS_RES_MEMORY` numbered 0 and 1. The resource type has nothing to do with the C language type; all the resource values have the C language type `unsigned long` and must be cast as necessary. The resource numbers do not have to be contiguous, although for ISA they normally would be. The permitted resource numbers for ISA devices are:
[.programlisting]
....
IRQ: 0-1
DRQ: 0-1
MEMORY: 0-3
IOPORT: 0-7
....
All the resources are represented as ranges, with a start value and count. For IRQ and DRQ resources the count would normally be equal to 1. The values for memory refer to the physical addresses.
Three types of activities can be performed on resources:
* set/get
* allocate/release
* activate/deactivate
Setting sets the range used by the resource. Allocation reserves the requested range so that no other driver will be able to reserve it (and checks that no other driver has reserved this range already). Activation makes the resource accessible to the driver by doing whatever is necessary for that (for example, for memory it would be mapping it into the kernel virtual address space).
The functions to manipulate resources are:
* `int bus_set_resource(device_t dev, int type, int rid, u_long start, u_long count)`
+
Set a range for a resource. Returns 0 if successful, error code otherwise. Normally, this function will return an error only if one of `type`, `rid`, `start` or `count` has a value that falls out of the permitted range.
** dev - driver's device
** type - type of resource, SYS_RES_*
** rid - resource number (ID) within type
** start, count - resource range
* `int bus_get_resource(device_t dev, int type, int rid, u_long *startp, u_long *countp)`
+
Get the range of resource. Returns 0 if successful, error code if the resource is not defined yet.
* `u_long bus_get_resource_start(device_t dev, int type, int rid) u_long bus_get_resource_count (device_t dev, int type, int rid)`
+
Convenience functions to get only the start or count. Return 0 in case of error, so if the resource start has 0 among the legitimate values it would be impossible to tell if the value is 0 or an error occurred. Luckily, no ISA resources for add-on drivers may have a start value equal to 0.
* `void bus_delete_resource(device_t dev, int type, int rid)`
+
Delete a resource, make it undefined.
* `struct resource * bus_alloc_resource(device_t dev, int type, int *rid, u_long start, u_long end, u_long count, u_int flags)`
+
Allocate a resource as a range of count values not allocated by anyone else, somewhere between start and end. Alas, alignment is not supported. If the resource was not set yet it is automatically created. The special values of start 0 and end ~0 (all ones) mean that the fixed values previously set by `bus_set_resource()` must be used instead: start and count as themselves and end=(start+count); in this case, if the resource was not defined before then an error is returned. Although rid is passed by reference it is not set anywhere by the resource allocation code of the ISA bus. (The other buses may use a different approach and modify it).
Flags are a bitmap; the flags interesting for the caller are:
* _RF_ACTIVE_ - causes the resource to be automatically activated after allocation.
* _RF_SHAREABLE_ - resource may be shared at the same time by multiple drivers.
* _RF_TIMESHARE_ - resource may be time-shared by multiple drivers, i.e., allocated at the same time by many but activated only by one at any given moment of time.
* Returns 0 on error. The allocated values may be obtained from the returned handle using methods `rhand_*()`.
* `int bus_release_resource(device_t dev, int type, int rid, struct resource *r)`
* Release the resource, r is the handle returned by `bus_alloc_resource()`. Returns 0 on success, error code otherwise.
* `int bus_activate_resource(device_t dev, int type, int rid, struct resource *r) int bus_deactivate_resource(device_t dev, int type, int rid, struct resource *r)`
* Activate or deactivate resource. Return 0 on success, error code otherwise. If the resource is time-shared and currently activated by another driver then `EBUSY` is returned.
* `int bus_setup_intr(device_t dev, struct resource *r, int flags, driver_intr_t *handler, void *arg, void **cookiep) int bus_teardown_intr(device_t dev, struct resource *r, void *cookie)`
* Associate or de-associate the interrupt handler with a device. Return 0 on success, error code otherwise.
* r - the activated resource handler describing the IRQ
+
flags - the interrupt priority level, one of:
** `INTR_TYPE_TTY` - terminals and other likewise character-type devices. To mask them use `spltty()`.
** `(INTR_TYPE_TTY | INTR_TYPE_FAST)` - terminal-type devices with a small input buffer, where input data is easily lost (such as the old-fashioned serial ports). To mask them use `spltty()`.
** `INTR_TYPE_BIO` - block-type devices, except those on the CAM controllers. To mask them use `splbio()`.
** `INTR_TYPE_CAM` - CAM (Common Access Method) bus controllers. To mask them use `splcam()`.
** `INTR_TYPE_NET` - network interface controllers. To mask them use `splimp()`.
** `INTR_TYPE_MISC` - miscellaneous devices. There is no other way to mask them than by `splhigh()` which masks all interrupts.
When an interrupt handler executes, all the other interrupts matching its priority level will be masked. The only exception is the MISC level, for which no other interrupts are masked and which is not masked by any other interrupt.
* _handler_ - pointer to the handler function, the type driver_intr_t is defined as `void driver_intr_t(void *)`
* _arg_ - the argument passed to the handler to identify this particular device. It is cast from void* to any real type by the handler. The old convention for the ISA interrupt handlers was to use the unit number as argument, the new (recommended) convention is using a pointer to the device softc structure.
* _cookie[p]_ - the value received from `setup()` is used to identify the handler when passed to `teardown()`
A number of methods are defined to operate on the resource handlers (struct resource *). Those of interest to the device driver writers are:
* `u_long rman_get_start(r) u_long rman_get_end(r)` Get the start and end of allocated resource range.
* `void *rman_get_virtual(r)` Get the virtual address of activated memory resource.
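To tie these together, here is a hypothetical fragment that defines, allocates, uses, and releases an I/O port range (the port address and count are invented for illustration):

[.programlisting]
....
struct resource *r;
int rid = 0;

/* define the range: 8 ports starting at the hypothetical address 0x300 */
if(bus_set_resource(dev, SYS_RES_IOPORT, rid, 0x300, 8) != 0)
        return ENXIO;

/* reserve and activate the range defined above */
r = bus_alloc_resource(dev, SYS_RES_IOPORT, &rid,
        /*start*/ 0, /*end*/ ~0, /*count*/ 0, RF_ACTIVE);
if(r == NULL)
        return ENXIO;

/* ... use the ports, for example via rman_get_start(r) ... */

bus_release_resource(dev, SYS_RES_IOPORT, rid, r);
....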
[[isa-driver-busmem]]
== Bus Memory Mapping
In many cases data is exchanged between the driver and the device through the memory. Two variants are possible:
(a) memory is located on the device card
(b) memory is the main memory of the computer
In case (a) the driver always copies the data back and forth between the on-card memory and the main memory as necessary. To map the on-card memory into the kernel virtual address space the physical address and length of the on-card memory must be defined as a `SYS_RES_MEMORY` resource. That resource can then be allocated and activated, and its virtual address obtained using `rman_get_virtual()`. The older drivers used the function `pmap_mapdev()` for this purpose, which should not be used directly any more. Now it is one of the internal steps of resource activation.
Most of the ISA cards will have their memory configured for a physical location somewhere in the range 640KB-1MB. Some of the ISA cards require larger memory ranges which should be placed somewhere under 16MB (because of the 24-bit address limitation on the ISA bus). In that case, if the machine has more memory than the start address of the device memory (in other words, they overlap), a memory hole must be configured at the address range used by devices. Many BIOSes allow configuration of a memory hole of 1MB starting at 14MB or 15MB. FreeBSD can handle the memory holes properly if the BIOS reports them properly (this feature may be broken on old BIOSes).
In case (b) just the address of the data is sent to the device, and the device uses DMA to actually access the data in the main memory. Two limitations are present: First, ISA cards can only access memory below 16MB. Second, the contiguous pages in virtual address space may not be contiguous in physical address space, so the device may have to do scatter/gather operations. The bus subsystem provides ready solutions for some of these problems, the rest has to be done by the drivers themselves.
Two structures are used for DMA memory allocation, `bus_dma_tag_t` and `bus_dmamap_t`. A tag describes the properties required for the DMA memory. A map represents a memory block allocated according to these properties. Multiple maps may be associated with the same tag.
Tags are organized into a tree-like hierarchy with inheritance of the properties. A child tag inherits all the requirements of its parent tag, and may make them more strict but never more loose.
Normally one top-level tag (with no parent) is created for each device unit. If multiple memory areas with different requirements are needed for each device then a tag for each of them may be created as a child of the parent tag.
The tags can be used to create a map in two ways.
First, a chunk of contiguous memory conformant with the tag requirements may be allocated (and later may be freed). This is normally used to allocate relatively long-living areas of memory for communication with the device. Loading of such memory into a map is trivial: it is always considered as one chunk in the appropriate physical memory range.
Second, an arbitrary area of virtual memory may be loaded into a map. Each page of this memory will be checked for conformance to the map requirement. If it conforms then it is left at its original location. If it does not then a fresh conformant "bounce page" is allocated and used as intermediate storage. When writing, the data from the non-conformant original pages is copied to the bounce pages first and then transferred from the bounce pages to the device. When reading, the data goes from the device to the bounce pages and is then copied to the non-conformant original pages. The process of copying between the original and bounce pages is called synchronization. This is normally used on a per-transfer basis: the buffer for each transfer is loaded, the transfer done, and the buffer unloaded.
The functions working on the DMA memory are:
* `int bus_dma_tag_create(bus_dma_tag_t parent, bus_size_t alignment, bus_size_t boundary, bus_addr_t lowaddr, bus_addr_t highaddr, bus_dma_filter_t *filter, void *filterarg, bus_size_t maxsize, int nsegments, bus_size_t maxsegsz, int flags, bus_dma_tag_t *dmat)`
+
Create a new tag. Returns 0 on success, the error code otherwise.
** _parent_ - parent tag, or NULL to create a top-level tag.
** _alignment_ - required physical alignment of the memory area to be allocated for this tag. Use value 1 for "no specific alignment". Applies only to the future `bus_dmamem_alloc()` but not `bus_dmamap_create()` calls.
** _boundary_ - physical address boundary that must not be crossed when allocating the memory. Use value 0 for "no boundary". Applies only to the future `bus_dmamem_alloc()` but not `bus_dmamap_create()` calls. Must be power of 2. If the memory is planned to be used in non-cascaded DMA mode (i.e., the DMA addresses will be supplied not by the device itself but by the ISA DMA controller) then the boundary must be no larger than 64KB (64*1024) due to the limitations of the DMA hardware.
** _lowaddr, highaddr_ - the names are slightly misleading; these values are used to limit the permitted range of physical addresses used to allocate the memory. The exact meaning varies depending on the planned future use:
*** For `bus_dmamem_alloc()` all the addresses from 0 to lowaddr-1 are considered permitted, the higher ones are forbidden.
*** For `bus_dmamap_create()` all the addresses outside the inclusive range [lowaddr; highaddr] are considered accessible. The addresses of pages inside the range are passed to the filter function which decides if they are accessible. If no filter function is supplied then the whole range is considered inaccessible.
*** For the ISA devices the normal values (with no filter function) are:
+
lowaddr = BUS_SPACE_MAXADDR_24BIT
+
highaddr = BUS_SPACE_MAXADDR
** _filter, filterarg_ - the filter function and its argument. If NULL is passed for filter then the whole range [lowaddr, highaddr] is considered inaccessible when doing `bus_dmamap_create()`. Otherwise the physical address of each attempted page in the range [lowaddr; highaddr] is passed to the filter function which decides if it is accessible. The prototype of the filter function is: `int filterfunc(void *arg, bus_addr_t paddr)`. It must return 0 if the page is accessible, non-zero otherwise.
** _maxsize_ - the maximal size of memory (in bytes) that may be allocated through this tag. In case it is difficult to estimate or could be arbitrarily big, the value for ISA devices would be `BUS_SPACE_MAXSIZE_24BIT`.
** _nsegments_ - maximal number of scatter-gather segments supported by the device. If unrestricted then the value `BUS_SPACE_UNRESTRICTED` should be used. This value is recommended for the parent tags, the actual restrictions would then be specified for the descendant tags. Tags with nsegments equal to `BUS_SPACE_UNRESTRICTED` may not be used to actually load maps, they may be used only as parent tags. The practical limit for nsegments seems to be about 250-300, higher values will cause kernel stack overflow (the hardware can not normally support that many scatter-gather buffers anyway).
** _maxsegsz_ - maximal size of a scatter-gather segment supported by the device. The maximal value for ISA device would be `BUS_SPACE_MAXSIZE_24BIT`.
** _flags_ - a bitmap of flags. The only interesting flags are:
*** _BUS_DMA_ALLOCNOW_ - requests to allocate all the potentially needed bounce pages when creating the tag.
*** _BUS_DMA_ISA_ - mysterious flag used only on Alpha machines. It is not defined for the i386 machines. Probably it should be used by all the ISA drivers for Alpha machines but it looks like there are no such drivers yet.
** _dmat_ - pointer to the storage for the new tag to be returned.
* `int bus_dma_tag_destroy(bus_dma_tag_t dmat)`
+
Destroy a tag. Returns 0 on success, the error code otherwise.
+
dmat - the tag to be destroyed.
* `int bus_dmamem_alloc(bus_dma_tag_t dmat, void** vaddr, int flags, bus_dmamap_t *mapp)`
+
Allocate an area of contiguous memory described by the tag. The size of memory to be allocated is tag's maxsize. Returns 0 on success, the error code otherwise. The result still has to be loaded by `bus_dmamap_load()` before being used to get the physical address of the memory.
** _dmat_ - the tag
** _vaddr_ - pointer to the storage for the kernel virtual address of the allocated area to be returned.
** flags - a bitmap of flags. The only interesting flag is:
*** _BUS_DMA_NOWAIT_ - if the memory is not immediately available return the error. If this flag is not set then the routine is allowed to sleep until the memory becomes available.
** _mapp_ - pointer to the storage for the new map to be returned.
* `void bus_dmamem_free(bus_dma_tag_t dmat, void *vaddr, bus_dmamap_t map)`
+
Free the memory allocated by `bus_dmamem_alloc()`. At present, freeing of memory allocated with ISA restrictions is not implemented. Because of this the recommended model of use is to keep and re-use the allocated areas for as long as possible. Do not free an area lightly only to allocate it again shortly afterwards. That does not mean that `bus_dmamem_free()` should not be used at all: hopefully it will be properly implemented soon.
** _dmat_ - the tag
** _vaddr_ - the kernel virtual address of the memory
** _map_ - the map of the memory (as returned from `bus_dmamem_alloc()`)
* `int bus_dmamap_create(bus_dma_tag_t dmat, int flags, bus_dmamap_t *mapp)`
+
Create a map for the tag, to be used in `bus_dmamap_load()` later. Returns 0 on success, the error code otherwise.
** _dmat_ - the tag
** _flags_ - theoretically, a bit map of flags. But no flags are defined yet, so at present it will be always 0.
** _mapp_ - pointer to the storage for the new map to be returned
* `int bus_dmamap_destroy(bus_dma_tag_t dmat, bus_dmamap_t map)`
+
Destroy a map. Returns 0 on success, the error code otherwise.
** dmat - the tag to which the map is associated
** map - the map to be destroyed
* `int bus_dmamap_load(bus_dma_tag_t dmat, bus_dmamap_t map, void *buf, bus_size_t buflen, bus_dmamap_callback_t *callback, void *callback_arg, int flags)`
+
Load a buffer into the map (the map must have been previously created by `bus_dmamap_create()` or `bus_dmamem_alloc()`). All the pages of the buffer are checked for conformance to the tag requirements and for those not conformant the bounce pages are allocated. An array of physical segment descriptors is built and passed to the callback routine. This callback routine is then expected to handle it in some way. The number of bounce buffers in the system is limited, so if bounce buffers are needed but not immediately available the request will be queued and the callback will be called when the bounce buffers become available. Returns 0 if the callback was executed immediately or `EINPROGRESS` if the request was queued for future execution. In the latter case the synchronization with the queued callback routine is the responsibility of the driver.
+
** _dmat_ - the tag
** _map_ - the map
** _buf_ - kernel virtual address of the buffer
** _buflen_ - length of the buffer
** _callback_, `callback_arg` - the callback function and its argument
+
The prototype of callback function is: `void callback(void *arg, bus_dma_segment_t *seg, int nseg, int error)`
+
** _arg_ - the same as callback_arg passed to `bus_dmamap_load()`
** _seg_ - array of the segment descriptors
** _nseg_ - number of descriptors in array
** _error_ - indication of the segment number overflow: if it is set to `EFBIG` then the buffer did not fit into the maximal number of segments permitted by the tag. In this case only the permitted number of descriptors will be in the array. Handling of this situation is up to the driver: depending on the desired semantics it can either consider this an error or split the buffer in two and handle the second part separately
+
Each entry in the segments array contains the fields:
+
** _ds_addr_ - physical bus address of the segment
** _ds_len_ - length of the segment
+
* `void bus_dmamap_unload(bus_dma_tag_t dmat, bus_dmamap_t map)`
+
Unload the map.
+
** _dmat_ - tag
** _map_ - loaded map
+
* `void bus_dmamap_sync (bus_dma_tag_t dmat, bus_dmamap_t map, bus_dmasync_op_t op)`
+
Synchronize a loaded buffer with its bounce pages before and after a physical transfer to or from the device. This is the function that does all the necessary copying of data between the original buffer and its mapped version. The buffers must be synchronized both before and after doing the transfer.
+
** _dmat_ - tag
** _map_ - loaded map
** _op_ - type of synchronization operation to perform:
+
** `BUS_DMASYNC_PREREAD` - before reading from device into buffer
** `BUS_DMASYNC_POSTREAD` - after reading from device into buffer
** `BUS_DMASYNC_PREWRITE` - before writing the buffer to device
** `BUS_DMASYNC_POSTWRITE` - after writing the buffer to device
As of now PREREAD and POSTWRITE are null operations but that may change in the future, so they must not be ignored in the driver. Synchronization is not needed for the memory obtained from `bus_dmamem_alloc()`.
Before calling the callback function from `bus_dmamap_load()` the segment array is stored on the stack, and it gets pre-allocated for the maximal number of segments allowed by the tag. As a result the practical limit for the number of segments on the i386 architecture is about 250-300 (the kernel stack is 4KB minus the size of the user structure, the size of a segment array entry is 8 bytes, and some space must be left). Since the array is allocated based on the maximal number, that value must not be set higher than really needed. Fortunately, for most hardware the maximal supported number of segments is much lower. But if the driver wants to handle buffers with a very large number of scatter-gather segments it should do that in portions: load part of the buffer, transfer it to the device, load the next part of the buffer, and so on.
Another practical consequence is that the number of segments may limit the size of the buffer. If all the pages in the buffer happen to be physically non-contiguous then the maximal supported buffer size for that fragmented case would be (nsegments * page_size). For example, if a maximal number of 10 segments is supported then on i386 the maximal guaranteed supported buffer size would be 40KB. If a larger size is desired then special tricks should be used in the driver.
If the hardware does not support scatter-gather at all or the driver wants to support some buffer size even if it is heavily fragmented then the solution is to allocate a contiguous buffer in the driver and use it as intermediate storage if the original buffer does not fit.
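One way to sketch that approach (names such as `sc->dma_buf`, `sc->dma_buf_p` and `XXX_DMA_BUF_SIZE` are invented): allocate a contiguous DMA area once with `bus_dmamem_alloc()` and copy each outgoing buffer into it before starting the transfer:

[.programlisting]
....
/* sc->dma_buf is assumed to come from bus_dmamem_alloc() on a tag with
 * nsegments = 1, so it is known to be physically contiguous */
static int
xxx_start_write(struct xxx_softc *sc, void *buf, bus_size_t len)
{
        if(len > XXX_DMA_BUF_SIZE)      /* does not fit the staging area */
                return EFBIG;
        bcopy(buf, sc->dma_buf, len);   /* stage the possibly fragmented buffer */
        /* ... program the device with sc->dma_buf_p (the physical address) ... */
        return 0;
}
....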
The typical call sequences when using a map depend on how the map is used; they are shown below. The characters -> are used to show the flow of time.
For a buffer which stays practically fixed during all the time between attachment and detachment of a device:
[.programlisting]
....
bus_dmamem_alloc -> bus_dmamap_load -> ...use buffer... ->
-> bus_dmamap_unload -> bus_dmamem_free
....
For a buffer that changes frequently and is passed from outside the driver:
[.programlisting]
....
bus_dmamap_create ->
-> bus_dmamap_load -> bus_dmamap_sync(PRE...) -> do transfer ->
-> bus_dmamap_sync(POST...) -> bus_dmamap_unload ->
...
-> bus_dmamap_load -> bus_dmamap_sync(PRE...) -> do transfer ->
-> bus_dmamap_sync(POST...) -> bus_dmamap_unload ->
-> bus_dmamap_destroy
....
When loading a map created by `bus_dmamem_alloc()` the passed address and size of the buffer must be the same as used in `bus_dmamem_alloc()`. In this case it is guaranteed that the whole buffer will be mapped as one segment (so the callback may be based on this assumption) and the request will be executed immediately (EINPROGRESS will never be returned). All the callback needs to do in this case is to save the physical address.
A typical example would be:
[.programlisting]
....
static void
alloc_callback(void *arg, bus_dma_segment_t *seg, int nseg, int error)
{
*(bus_addr_t *)arg = seg[0].ds_addr;
}
...
int error;
struct somedata {
...
};
struct somedata *vsomedata; /* virtual address */
bus_addr_t psomedata; /* physical bus-relative address */
bus_dma_tag_t tag_somedata;
bus_dmamap_t map_somedata;
...
error=bus_dma_tag_create(parent_tag, alignment,
boundary, lowaddr, highaddr, /*filter*/ NULL, /*filterarg*/ NULL,
/*maxsize*/ sizeof(struct somedata), /*nsegments*/ 1,
/*maxsegsz*/ sizeof(struct somedata), /*flags*/ 0,
&tag_somedata);
if(error)
return error;
error = bus_dmamem_alloc(tag_somedata, &vsomedata, /* flags*/ 0,
&map_somedata);
if(error)
return error;
bus_dmamap_load(tag_somedata, map_somedata, (void *)vsomedata,
sizeof (struct somedata), alloc_callback,
(void *) &psomedata, /*flags*/0);
....
It looks a bit long and complicated, but that is the way to do it. The practical consequence is: if multiple memory areas are always allocated together, it would be a really good idea to combine them all into one structure and allocate them as one (if the alignment and boundary limitations permit).
When loading an arbitrary buffer into the map created by `bus_dmamap_create()` special measures must be taken to synchronize with the callback in case it would be delayed. The code would look like:
[.programlisting]
....
{
int s;
int error;
s = splsoftvm();
error = bus_dmamap_load(
dmat,
dmamap,
buffer_ptr,
buffer_len,
callback,
/*callback_arg*/ buffer_descriptor,
/*flags*/0);
if (error == EINPROGRESS) {
/*
* Do whatever is needed to ensure synchronization
* with callback. Callback is guaranteed not to be started
* until we do splx() or tsleep().
*/
}
splx(s);
}
....
Two possible approaches for the processing of requests are:
1. If requests are completed by marking them explicitly as done (such as the CAM requests) then it would be simpler to put all the further processing into the callback routine, which would mark the request when it is done. Then not much extra synchronization is needed. For flow control reasons it may be a good idea to freeze the request queue until this request gets completed.
2. If requests are completed when the function returns (such as classic read or write requests on character devices) then a synchronization flag should be set in the buffer descriptor and `tsleep()` called. Later when the callback gets called it will do its processing and check this synchronization flag. If it is set then the callback should issue a wakeup. In this approach the callback function could either do all the needed processing (just like the previous case) or simply save the segments array in the buffer descriptor. Then, after the callback completes, the calling function could use this saved segments array and do all the processing; a sketch of the wakeup pattern is shown below.
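A hedged sketch of the second approach (the buffer descriptor `bd` and its `done` flag are invented names; this only extends the fragment above):

[.programlisting]
....
/* in the requesting function, still at splsoftvm() after bus_dmamap_load()
 * returned EINPROGRESS: sleep until the callback signals completion */
while (!bd->done)
        tsleep(bd, PRIBIO, "xxxdma", 0);

/* in the callback, after its processing is finished */
bd->done = 1;
wakeup(bd);
....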
[[isa-driver-dma]]
== DMA
Direct Memory Access (DMA) is implemented on the ISA bus through the DMA controller (actually, two of them, but that is an irrelevant detail). To make the early ISA devices simple and cheap, the logic of the bus control and address generation was concentrated in the DMA controller. Fortunately, FreeBSD provides a set of functions that mostly hide the annoying details of the DMA controller from the device drivers.
The simplest case is for the fairly intelligent devices. Like the bus master devices on PCI they can generate the bus cycles and memory addresses all by themselves. The only thing they really need from the DMA controller is bus arbitration. So for this purpose they pretend to be cascaded slave DMA controllers. And the only thing needed from the system DMA controller is to enable the cascaded mode on a DMA channel by calling the following function when attaching the driver:
`void isa_dmacascade(int channel_number)`
All the further activity is done by programming the device. When detaching the driver no DMA-related functions need to be called.
For the simpler devices things get more complicated. The functions used are:
* `int isa_dma_acquire(int channel_number)`
+
Reserve a DMA channel. Returns 0 on success or EBUSY if the channel was already reserved by this or a different driver. Most of the ISA devices are not able to share DMA channels anyway, so normally this function is called when attaching a device. This reservation was made redundant by the modern interface of bus resources but still must be used in addition to the latter. If it is not used then the other DMA routines called later will panic.
* `int isa_dma_release(int channel_number)`
+
Release a previously reserved DMA channel. No transfers must be in progress when the channel is released (in addition the device must not try to initiate transfer after the channel is released).
* `void isa_dmainit(int chan, u_int bouncebufsize)`
+
Allocate a bounce buffer for use with the specified channel. The requested size of the buffer can not exceed 64KB. This bounce buffer will be automatically used later if a transfer buffer happens to be not physically contiguous, outside of the memory accessible by the ISA bus, or crossing a 64KB boundary. If the transfers will always be done from buffers which conform to these conditions (such as those allocated by `bus_dmamem_alloc()` with proper limitations) then `isa_dmainit()` does not have to be called. But it is quite convenient to transfer arbitrary data using the DMA controller. The bounce buffer will automatically take care of the scatter-gather issues.
+
** _chan_ - channel number
** _bouncebufsize_ - size of the bounce buffer in bytes
+
* `void isa_dmastart(int flags, caddr_t addr, u_int nbytes, int chan)`
+
Prepare to start a DMA transfer. This function must be called to set up the DMA controller before actually starting the transfer on the device. It checks that the buffer is contiguous and falls into the ISA memory range; if not, the bounce buffer is automatically used. If a bounce buffer is required but was not set up by `isa_dmainit()` or is too small for the requested transfer size, the system will panic. In case of a write request with a bounce buffer the data will be automatically copied to the bounce buffer.
* flags - a bitmask determining the type of operation to be done. The direction bits B_READ and B_WRITE are mutually exclusive.
+
** B_READ - read from the ISA bus into memory
** B_WRITE - write from the memory to the ISA bus
** B_RAW - if set then the DMA controller will remember the buffer and after the end of transfer will automatically re-initialize itself to repeat transfer of the same buffer again (of course, the driver may change the data in the buffer before initiating another transfer in the device). If not set then the parameters will work only for one transfer, and `isa_dmastart()` will have to be called again before initiating the next transfer. Using B_RAW makes sense only if the bounce buffer is not used.
+
* addr - virtual address of the buffer
* nbytes - length of the buffer. Must be less than or equal to 64KB. A length of 0 is not allowed: the DMA controller will understand it as 64KB while the kernel code will understand it as 0, and that would cause unpredictable effects. For channel numbers 4 and higher the length must be even because these channels transfer 2 bytes at a time. In case of an odd length the last byte will not be transferred.
* chan - channel number
* `void isa_dmadone(int flags, caddr_t addr, int nbytes, int chan)`
+
Synchronize the memory after device reports that transfer is done. If that was a read operation with a bounce buffer then the data will be copied from the bounce buffer to the original buffer. Arguments are the same as for `isa_dmastart()`. Flag B_RAW is permitted but it does not affect `isa_dmadone()` in any way.
* `int isa_dmastatus(int channel_number)`
+
Returns the number of bytes left in the current transfer to be transferred. In case the flag B_READ was set in `isa_dmastart()` the number returned will never be equal to zero. At the end of transfer it will be automatically reset back to the length of buffer. The normal use is to check the number of bytes left after the device signals that the transfer is completed. If the number of bytes is not 0 then something probably went wrong with that transfer.
* `int isa_dmastop(int channel_number)`
+
Aborts the current transfer and returns the number of bytes left untransferred.
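Putting these together, a hypothetical read transfer using DMA channel `sc->drq0` (an invented softc field, with `buf` and `nbytes` assumed to describe the transfer) might be sketched as:

[.programlisting]
....
/* once, when attaching: reserve the channel and set up its bounce buffer */
if(isa_dma_acquire(sc->drq0) != 0)
        return ENXIO;
isa_dmainit(sc->drq0, /*bouncebufsize*/ 4096);

/* per transfer: program the DMA controller, start the device, synchronize */
isa_dmastart(B_READ, (caddr_t)buf, nbytes, sc->drq0);
/* ... tell the device to begin the transfer and wait for its completion ... */
isa_dmadone(B_READ, (caddr_t)buf, nbytes, sc->drq0);
if(isa_dmastatus(sc->drq0) != 0)
        printf("xxx%d: short DMA transfer\n", sc->unit);
....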
[[isa-driver-probe]]
== xxx_isa_probe
This function probes if a device is present. If the driver supports auto-detection of some part of device configuration (such as interrupt vector or memory address) this auto-detection must be done in this routine.
As for any other bus, if the device cannot be detected, or is detected but fails the self-test, or some other problem occurs, then the routine returns a positive error value. The value `ENXIO` must be returned if the device is not present. Other error values may mean other conditions. Zero or negative values mean success. Most of the drivers return zero as success.
The negative return values are used when a PnP device supports multiple interfaces. For example, an older compatibility interface and a newer advanced interface which are supported by different drivers. Then both drivers would detect the device. The driver which returns a higher value in the probe routine takes precedence (in other words, the driver returning 0 has the highest precedence, one returning -1 is next, one returning -2 is after it and so on). As a result the devices which support only the old interface will be handled by the old driver (which should return -1 from the probe routine) while the devices supporting the new interface as well will be handled by the new driver (which should return 0 from the probe routine).
The device descriptor struct xxx_softc is allocated by the system before calling the probe routine. If the probe routine returns an error the descriptor will be automatically deallocated by the system. So if a probing error occurs the driver must make sure that all the resources it used during probe are deallocated and that nothing keeps the descriptor from being safely deallocated. If the probe completes successfully the descriptor will be preserved by the system and later passed to the routine `xxx_isa_attach()`. If a driver returns a negative value it can not be sure that it will have the highest priority and its attach routine will be called. So in this case it also must release all the resources before returning and if necessary allocate them again in the attach routine. When `xxx_isa_probe()` returns 0 releasing the resources before returning is also a good idea and a well-behaved driver should do so. But in cases where there is some problem with releasing the resources the driver is allowed to keep resources between returning 0 from the probe routine and execution of the attach routine.
A typical probe routine starts with getting the device descriptor and unit:
[.programlisting]
....
struct xxx_softc *sc = device_get_softc(dev);
int unit = device_get_unit(dev);
int pnperror;
int error = 0;
sc->dev = dev; /* link it back */
sc->unit = unit;
....
Then check for the PnP devices. The check is carried out using a table containing the list of PnP IDs supported by this driver and human-readable descriptions of the device models corresponding to these IDs.
[.programlisting]
....
pnperror=ISA_PNP_PROBE(device_get_parent(dev), dev,
xxx_pnp_ids); if(pnperror == ENXIO) return ENXIO;
....
The logic of ISA_PNP_PROBE is the following: If this card (device unit) was not detected as PnP then ENOENT will be returned. If it was detected as PnP but its detected ID does not match any of the IDs in the table then ENXIO is returned. Finally, if it has PnP support and it matches one of the IDs in the table, 0 is returned and the appropriate description from the table is set by `device_set_desc()`.
If a driver supports only PnP devices then the condition would look like:
[.programlisting]
....
if(pnperror != 0)
return pnperror;
....
No special treatment is required for the drivers which do not support PnP because they pass an empty PnP ID table and will always get ENXIO if called on a PnP card.
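The PnP ID table passed to ISA_PNP_PROBE is typically a simple array of `struct isa_pnp_id` entries terminated by a zero ID; a hypothetical example (the IDs and descriptions here are invented):

[.programlisting]
....
static struct isa_pnp_id xxx_pnp_ids[] = {
        { 0x12345678, "Our device model 1234" },        /* hypothetical ID */
        { 0x9abcdef0, "Our device model 5678" },        /* hypothetical ID */
        { 0, NULL }                                     /* end of table */
};
....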
The probe routine normally needs at least some minimal set of resources, such as I/O port number to find the card and probe it. Depending on the hardware the driver may be able to discover the other necessary resources automatically. The PnP devices have all the resources pre-set by the PnP subsystem, so the driver does not need to discover them by itself.
Typically the minimal information required to get access to the device is the I/O port number. Some devices then allow the rest of the information to be read from the device configuration registers (though not all devices do that). So first we try to get the port start value:
[.programlisting]
....
sc->port0 = bus_get_resource_start(dev,
SYS_RES_IOPORT, 0 /*rid*/); if(sc->port0 == 0) return ENXIO;
....
The base port address is saved in the structure softc for future use. If it is used very often then calling the resource function each time would be prohibitively slow. If we do not get a port we just return an error. Some device drivers can instead be clever and try to probe all the possible ports, like this:
[.programlisting]
....
/* table of all possible base I/O port addresses for this device */
static struct xxx_allports {
u_short port; /* port address */
short used; /* flag: if this port is already used by some unit */
} xxx_allports[] = {
{ 0x300, 0 },
{ 0x320, 0 },
{ 0x340, 0 },
{ 0, 0 } /* end of table */
};
...
int port, i;
...
port = bus_get_resource_start(dev, SYS_RES_IOPORT, 0 /*rid*/);
if(port != 0) {
for(i=0; xxx_allports[i].port!=0; i++) {
if(xxx_allports[i].used || xxx_allports[i].port != port)
continue;
/* found it */
xxx_allports[i].used = 1;
/* do probe on a known port */
return xxx_really_probe(dev, port);
}
return ENXIO; /* port is unknown or already used */
}
/* we get here only if we need to guess the port */
for(i=0; xxx_allports[i].port!=0; i++) {
if(xxx_allports[i].used)
continue;
/* mark as used - even if we find nothing at this port
* at least we won't probe it in future
*/
xxx_allports[i].used = 1;
error = xxx_really_probe(dev, xxx_allports[i].port);
if(error == 0) /* found a device at that port */
return 0;
}
/* probed all possible addresses, none worked */
return ENXIO;
....
Of course, normally the driver's `identify()` routine should be used for such things. But there may be one valid reason why it may be better to be done in `probe()`: if this probe would drive some other sensitive device crazy. The probe routines are ordered with consideration of the `sensitive` flag: the sensitive devices get probed first and the rest of the devices later. But the `identify()` routines are called before any probes, so they show no respect to the sensitive devices and may upset them.
Now, after we have the starting port, we need to set the port count (except for PnP devices) because the kernel does not have this information in the configuration file.
[.programlisting]
....
if(pnperror /* only for non-PnP devices */
&& bus_set_resource(dev, SYS_RES_IOPORT, 0, sc->port0,
XXX_PORT_COUNT)<0)
return ENXIO;
....
Finally allocate and activate a piece of port address space (special values of start and end mean "use those we set by ``bus_set_resource()``"):
[.programlisting]
....
sc->port0_rid = 0;
sc->port0_r = bus_alloc_resource(dev, SYS_RES_IOPORT,
&sc->port0_rid,
/*start*/ 0, /*end*/ ~0, /*count*/ 0, RF_ACTIVE);
if(sc->port0_r == NULL)
return ENXIO;
....
Now, having access to the port-mapped registers, we can poke the device in some way and check whether it reacts as expected. If it does not then there is probably some other device or no device at all at this address.
Normally drivers do not set up the interrupt handlers until the attach routine. Instead they do probes in polling mode using the `DELAY()` function for timeouts. The probe routine must never hang forever; all the waits for the device must be done with timeouts. If the device does not respond within the time it is probably broken or misconfigured, and the driver must return an error. When determining the timeout interval give the device some extra time to be on the safe side: although `DELAY()` is supposed to delay for the same amount of time on any machine it has some margin of error, depending on the exact CPU.
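For example, waiting for a hypothetical "ready" bit with a generous timeout could be written as (the `xxx_read_status()` helper and `XXX_STATUS_READY` bit are invented):

[.programlisting]
....
/* hypothetical: poll a status register every 10 microseconds,
 * giving up after roughly 100 milliseconds */
for(i = 0; i < 10000; i++) {
        if(xxx_read_status(sc) & XXX_STATUS_READY)
                break;
        DELAY(10);
}
if(i == 10000) {
        error = ENXIO;  /* the device did not respond */
        goto bad;
}
....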
If the probe routine really wants to check that the interrupts really work it may configure and probe the interrupts too. But that is not recommended.
[.programlisting]
....
/* implemented in some very device-specific way */
if(error = xxx_probe_ports(sc))
goto bad; /* will deallocate the resources before returning */
....
The function `xxx_probe_ports()` may also set the device description depending on the exact model of device it discovers. But if there is only one supported device model this can just as well be done in a hardcoded way. Of course, for the PnP devices the PnP support sets the description from the table automatically.
[.programlisting]
....
if(pnperror)
device_set_desc(dev, "Our device model 1234");
....
Then the probe routine should either discover the ranges of all the resources by reading the device configuration registers or make sure that they were set explicitly by the user. We will consider it with an example of on-board memory. The probe routine should be as non-intrusive as possible, so allocation and check of functionality of the rest of resources (besides the ports) would be better left to the attach routine.
The memory address may be specified in the kernel configuration file or on some devices it may be pre-configured in non-volatile configuration registers. If both sources are available and different, which one should be used? Probably if the user bothered to set the address explicitly in the kernel configuration file they know what they are doing and this one should take precedence. An example of implementation could be:
[.programlisting]
....
/* try to find out the config address first */
sc->mem0_p = bus_get_resource_start(dev, SYS_RES_MEMORY, 0 /*rid*/);
if(sc->mem0_p == 0) { /* nope, not specified by user */
sc->mem0_p = xxx_read_mem0_from_device_config(sc);
if(sc->mem0_p == 0)
/* can't get it from device config registers either */
goto bad;
} else {
if(xxx_set_mem0_address_on_device(sc) < 0)
goto bad; /* device does not support that address */
}
/* just like the port, set the memory size,
* for some devices the memory size would not be constant
* but should be read from the device configuration registers instead
* to accommodate different models of devices. Another option would
* be to let the user set the memory size as "msize" configuration
* resource which will be automatically handled by the ISA bus.
*/
if(pnperror) { /* only for non-PnP devices */
sc->mem0_size = bus_get_resource_count(dev, SYS_RES_MEMORY, 0 /*rid*/);
if(sc->mem0_size == 0) /* not specified by user */
sc->mem0_size = xxx_read_mem0_size_from_device_config(sc);
if(sc->mem0_size == 0) {
/* suppose this is a very old model of device without
* auto-configuration features and the user gave no preference,
* so assume the minimalistic case
* (of course, the real value will vary with the driver)
*/
sc->mem0_size = 8*1024;
}
if(xxx_set_mem0_size_on_device(sc) < 0)
goto bad; /* device does not support that size */
if(bus_set_resource(dev, SYS_RES_MEMORY, /*rid*/0,
sc->mem0_p, sc->mem0_size)<0)
goto bad;
} else {
sc->mem0_size = bus_get_resource_count(dev, SYS_RES_MEMORY, 0 /*rid*/);
}
....
Resources for IRQ and DRQ are easy to check by analogy.
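A hypothetical sketch of those checks, following the same pattern as the memory example (the `xxx_read_*_from_device_config()` helpers and softc fields are invented):

[.programlisting]
....
/* IRQ: try the config file first, then the device registers */
sc->irq0 = bus_get_resource_start(dev, SYS_RES_IRQ, 0 /*rid*/);
if(sc->irq0 == 0)
        sc->irq0 = xxx_read_irq_from_device_config(sc);
if(sc->irq0 == 0)
        goto bad;
if(bus_set_resource(dev, SYS_RES_IRQ, 0, sc->irq0, 1) < 0)
        goto bad;

/* DRQ: likewise */
sc->drq0 = bus_get_resource_start(dev, SYS_RES_DRQ, 0 /*rid*/);
if(sc->drq0 == 0)
        sc->drq0 = xxx_read_drq_from_device_config(sc);
if(sc->drq0 == 0)
        goto bad;
if(bus_set_resource(dev, SYS_RES_DRQ, 0, sc->drq0, 1) < 0)
        goto bad;
....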
If all went well then release all the resources and return success.
[.programlisting]
....
xxx_free_resources(sc);
return 0;
....
Finally, handle the troublesome situations. All the resources should be deallocated before returning. We make use of the fact that the structure softc is zeroed out before being passed to us, so we can tell whether a resource was allocated: if it was, its descriptor is non-zero.
[.programlisting]
....
bad:
xxx_free_resources(sc);
if(error)
return error;
else /* exact error is unknown */
return ENXIO;
....
That would be all for the probe routine. Freeing of resources is done from multiple places, so it is moved to a function which may look like:
[.programlisting]
....
static void
xxx_free_resources(sc)
struct xxx_softc *sc;
{
/* check every resource and free if not zero */
/* interrupt handler */
if(sc->intr_r) {
bus_teardown_intr(sc->dev, sc->intr_r, sc->intr_cookie);
bus_release_resource(sc->dev, SYS_RES_IRQ, sc->intr_rid,
sc->intr_r);
sc->intr_r = 0;
}
/* all kinds of memory maps we could have allocated */
if(sc->data_p) {
bus_dmamap_unload(sc->data_tag, sc->data_map);
sc->data_p = 0;
}
if(sc->data) { /* sc->data_map may be legitimately equal to 0 */
/* the map will also be freed */
bus_dmamem_free(sc->data_tag, sc->data, sc->data_map);
sc->data = 0;
}
if(sc->data_tag) {
bus_dma_tag_destroy(sc->data_tag);
sc->data_tag = 0;
}
... free other maps and tags if we have them ...
if(sc->parent_tag) {
bus_dma_tag_destroy(sc->parent_tag);
sc->parent_tag = 0;
}
/* release all the bus resources */
if(sc->mem0_r) {
bus_release_resource(sc->dev, SYS_RES_MEMORY, sc->mem0_rid,
sc->mem0_r);
sc->mem0_r = 0;
}
...
if(sc->port0_r) {
bus_release_resource(sc->dev, SYS_RES_IOPORT, sc->port0_rid,
sc->port0_r);
sc->port0_r = 0;
}
}
....
[[isa-driver-attach]]
== xxx_isa_attach
The attach routine actually connects the driver to the system if the probe routine returned success and the system had chosen to attach that driver. If the probe routine returned 0 then the attach routine may expect to receive the device structure softc intact, as it was set by the probe routine. Also if the probe routine returns 0 it may expect that the attach routine for this device will be called at some point in the future. If the probe routine returns a negative value then the driver may make none of these assumptions.
The attach routine returns 0 if it completed successfully or error code otherwise.
The attach routine starts just like the probe routine, with getting some frequently used data into more accessible variables.
[.programlisting]
....
struct xxx_softc *sc = device_get_softc(dev);
int unit = device_get_unit(dev);
int error = 0;
....
Then allocate and activate all the necessary resources. As normally the port range will be released before returning from probe, it has to be allocated again. We expect that the probe routine had properly set all the resource ranges, as well as saved them in the structure softc. If the probe routine had left some resource allocated then it does not need to be allocated again (which would be considered an error).
[.programlisting]
....
sc->port0_rid = 0;
sc->port0_r = bus_alloc_resource(dev, SYS_RES_IOPORT, &sc->port0_rid,
/*start*/ 0, /*end*/ ~0, /*count*/ 0, RF_ACTIVE);
if(sc->port0_r == NULL)
return ENXIO;
/* on-board memory */
sc->mem0_rid = 0;
sc->mem0_r = bus_alloc_resource(dev, SYS_RES_MEMORY, &sc->mem0_rid,
/*start*/ 0, /*end*/ ~0, /*count*/ 0, RF_ACTIVE);
if(sc->mem0_r == NULL)
goto bad;
/* get its virtual address */
sc->mem0_v = rman_get_virtual(sc->mem0_r);
....
The DMA request channel (DRQ) is allocated likewise. To initialize it use functions of the `isa_dma*()` family. For example:
`isa_dmacascade(sc->drq0);`
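A hedged sketch of that allocation (the `drq0_rid` and `drq0_r` fields are invented names, following the pattern of the port and memory resources above):

[.programlisting]
....
sc->drq0_rid = 0;
sc->drq0_r = bus_alloc_resource(dev, SYS_RES_DRQ, &sc->drq0_rid,
        /*start*/ 0, /*end*/ ~0, /*count*/ 0, RF_ACTIVE);
if(sc->drq0_r == NULL)
        goto bad;
isa_dmacascade(sc->drq0);
....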
The interrupt request line (IRQ) is a bit special. Besides allocation the driver's interrupt handler should be associated with it. Historically in the old ISA drivers the argument passed by the system to the interrupt handler was the device unit number. But in modern drivers the convention suggests passing the pointer to structure softc. The important reason is that when the structures softc are allocated dynamically then getting the unit number from softc is easy while getting softc from the unit number is difficult. Also this convention makes the drivers for different buses look more uniform and allows them to share the code: each bus gets its own probe, attach, detach and other bus-specific routines while the bulk of the driver code may be shared among them.
[.programlisting]
....
sc->intr_rid = 0;
sc->intr_r = bus_alloc_resource(dev, SYS_RES_IRQ, &sc->intr_rid,
/*start*/ 0, /*end*/ ~0, /*count*/ 0, RF_ACTIVE);
if(sc->intr_r == NULL)
goto bad;
/*
* XXX_INTR_TYPE is supposed to be defined depending on the type of
* the driver, for example as INTR_TYPE_CAM for a CAM driver
*/
error = bus_setup_intr(dev, sc->intr_r, XXX_INTR_TYPE,
(driver_intr_t *) xxx_intr, (void *) sc, &sc->intr_cookie);
if(error)
goto bad;
....
If the device needs to do DMA to the main memory then this memory should be allocated as described before:
[.programlisting]
....
error=bus_dma_tag_create(NULL, /*alignment*/ 4,
/*boundary*/ 0, /*lowaddr*/ BUS_SPACE_MAXADDR_24BIT,
/*highaddr*/ BUS_SPACE_MAXADDR, /*filter*/ NULL, /*filterarg*/ NULL,
/*maxsize*/ BUS_SPACE_MAXSIZE_24BIT,
/*nsegments*/ BUS_SPACE_UNRESTRICTED,
/*maxsegsz*/ BUS_SPACE_MAXSIZE_24BIT, /*flags*/ 0,
&sc->parent_tag);
if(error)
goto bad;
/* many things get inherited from the parent tag
* sc->data is supposed to point to the structure with the shared data,
* for example for a ring buffer it could be:
* struct {
* u_short rd_pos;
* u_short wr_pos;
* char bf[XXX_RING_BUFFER_SIZE]
* } *data;
*/
error=bus_dma_tag_create(sc->parent_tag, 1,
0, BUS_SPACE_MAXADDR, 0, /*filter*/ NULL, /*filterarg*/ NULL,
/*maxsize*/ sizeof(* sc->data), /*nsegments*/ 1,
/*maxsegsz*/ sizeof(* sc->data), /*flags*/ 0,
&sc->data_tag);
if(error)
goto bad;
error = bus_dmamem_alloc(sc->data_tag, &sc->data, /* flags*/ 0,
&sc->data_map);
if(error)
goto bad;
/* xxx_alloc_callback() just saves the physical address at
* the pointer passed as its argument, in this case &sc->data_p.
* See details in the section on bus memory mapping.
* It can be implemented like:
*
* static void
* xxx_alloc_callback(void *arg, bus_dma_segment_t *seg,
* int nseg, int error)
* {
* *(bus_addr_t *)arg = seg[0].ds_addr;
* }
*/
bus_dmamap_load(sc->data_tag, sc->data_map, (void *)sc->data,
sizeof (* sc->data), xxx_alloc_callback, (void *) &sc->data_p,
/*flags*/0);
....
After all the necessary resources are allocated the device should be initialized. The initialization may include testing that all the expected features are functional.
[.programlisting]
....
if(xxx_initialize(sc) < 0)
goto bad;
....
The bus subsystem will automatically print on the console the device description set by probe. But if the driver wants to print some extra information about the device it may do so, for example:
[.programlisting]
....
device_printf(dev, "has on-card FIFO buffer of %d bytes\n", sc->fifosize);
....
If the initialization routine experiences any problems then printing messages about them before returning an error is also recommended.
The final step of the attach routine is attaching the device to its functional subsystem in the kernel. The exact way to do it depends on the type of the driver: a character device, a block device, a network device, a CAM SCSI bus device and so on.
If all went well then return success.
[.programlisting]
....
error = xxx_attach_subsystem(sc);
if(error)
goto bad;
return 0;
....
Finally, handle the troublesome situations. All the resources should be deallocated before returning an error. We make use of the fact that the structure softc is zeroed out before being passed to us, so we can tell whether a resource was allocated: if it was, its descriptor is non-zero.
[.programlisting]
....
bad:
xxx_free_resources(sc);
if(error)
return error;
else /* exact error is unknown */
return ENXIO;
....
That would be all for the attach routine.
[[isa-driver-detach]]
== xxx_isa_detach
If this function is present in the driver and the driver is compiled as a loadable module then the driver gets the ability to be unloaded. This is an important feature if the hardware supports hot plug. But the ISA bus does not support hot plug, so this feature is not particularly important for the ISA devices. The ability to unload a driver may be useful when debugging it, but in many cases installation of the new version of the driver would be required only after the old version somehow wedges the system and a reboot will be needed anyway, so the efforts spent on writing the detach routine may not be worth it. Another argument that unloading would allow upgrading the drivers on a production machine seems to be mostly theoretical. Installing a new version of a driver is a dangerous operation which should never be performed on a production machine (and which is not permitted when the system is running in secure mode). Still, the detach routine may be provided for the sake of completeness.
The detach routine returns 0 if the driver was successfully detached or the error code otherwise.
The logic of detach is a mirror of the attach. The first thing to do is to detach the driver from its kernel subsystem. If the device is currently open then the driver has two choices: refuse to be detached, or forcibly close it and proceed with the detach. The choice depends on the ability of the particular kernel subsystem to do a forced close and on the preferences of the driver's author. Generally the forced close seems to be the preferred alternative.
[.programlisting]
....
struct xxx_softc *sc = device_get_softc(dev);
int error;
error = xxx_detach_subsystem(sc);
if(error)
return error;
....
Next the driver may want to reset the hardware to some consistent state. That includes stopping any ongoing transfers, disabling the DMA channels and interrupts to avoid memory corruption by the device. For most of the drivers this is exactly what the shutdown routine does, so if it is included in the driver we can just call it.
`xxx_isa_shutdown(dev);`
And finally release all the resources and return success.
[.programlisting]
....
xxx_free_resources(sc);
return 0;
....
[[isa-driver-shutdown]]
== xxx_isa_shutdown
This routine is called when the system is about to be shut down. It is expected to bring the hardware to some consistent state. For most of the ISA devices no special action is required, so the function is not really necessary because the device will be re-initialized on reboot anyway. But some devices have to be shut down with a special procedure, to make sure that they will be properly detected after soft reboot (this is especially true for many devices with proprietary identification protocols). In any case disabling DMA and interrupts in the device registers and stopping any ongoing transfers is a good idea. The exact action depends on the hardware, so we do not consider it here in any detail.
[[isa-driver-intr]]
== xxx_intr
The interrupt handler is called when an interrupt is received which may be from this particular device. The ISA bus does not support interrupt sharing (except in some special cases), so in practice if the interrupt handler is called then the interrupt almost certainly came from its device. Still, the interrupt handler must poll the device registers and make sure that the interrupt was generated by its device. If not, it should just return.
The old convention for the ISA drivers was getting the device unit number as an argument. This is obsolete, and the new drivers receive whatever argument was specified for them in the attach routine when calling `bus_setup_intr()`. By the new convention it should be the pointer to the structure softc. So the interrupt handler commonly starts as:
[.programlisting]
....
static void
xxx_intr(struct xxx_softc *sc)
{
....
It runs at the interrupt priority level specified by the interrupt type parameter of `bus_setup_intr()`. That means that all the other interrupts of the same type as well as all the software interrupts are disabled.
To avoid races it is commonly written as a loop:
[.programlisting]
....
while(xxx_interrupt_pending(sc)) {
xxx_process_interrupt(sc);
xxx_acknowledge_interrupt(sc);
}
....
The interrupt handler has to acknowledge the interrupt to the device only, not to the interrupt controller; the system takes care of the latter.
diff --git a/documentation/content/en/books/arch-handbook/jail/_index.adoc b/documentation/content/en/books/arch-handbook/jail/_index.adoc
index 4ea78c7bb8..46bbf2565b 100644
--- a/documentation/content/en/books/arch-handbook/jail/_index.adoc
+++ b/documentation/content/en/books/arch-handbook/jail/_index.adoc
@@ -1,510 +1,511 @@
---
title: Chapter 4. The Jail Subsystem
prev: books/arch-handbook/kobj
next: books/arch-handbook/sysinit
+description: The Jail Subsystem
---
[[jail]]
= The Jail Subsystem
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 4
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
On most UNIX(R) systems, `root` has omnipotent power. This promotes insecurity. If an attacker gained `root` on a system, he would have every function at his fingertips. In FreeBSD there are sysctls which dilute the power of `root`, in order to minimize the damage caused by an attacker. Specifically, one of these functions is called `secure levels`. Similarly, another function which is present from FreeBSD 4.0 and onward, is a utility called man:jail[8]. Jail chroots an environment and sets certain restrictions on processes which are forked within the jail. For example, a jailed process cannot affect processes outside the jail, utilize certain system calls, or inflict any damage on the host environment.
Jail is becoming the new security model. People are running potentially vulnerable servers such as Apache, BIND, and sendmail within jails, so that if an attacker gains `root` within the jail, it is only an annoyance, and not a devastation. This article mainly focuses on the internals (source code) of jail. For information on how to set up a jail see the link:{handbook}#jails/[handbook entry on jails].
[[jail-arch]]
== Architecture
Jail consists of two realms: the userland program, man:jail[8], and the code implemented within the kernel: the man:jail[2] system call and associated restrictions. I will be discussing the userland program and then how jail is implemented within the kernel.
=== Userland Code
The source for the userland jail is located in [.filename]#/usr/src/usr.sbin/jail#, consisting of one file, [.filename]#jail.c#. The program takes these arguments: the path of the jail, hostname, IP address, and the command to be executed.
==== Data Structures
In [.filename]#jail.c#, the first thing I would note is the declaration of an important structure `struct jail j;` which was included from [.filename]#/usr/include/sys/jail.h#.
The definition of the `jail` structure is:
[.programlisting]
....
/usr/include/sys/jail.h:
struct jail {
u_int32_t version;
char *path;
char *hostname;
u_int32_t ip_number;
};
....
As you can see, there is an entry for each of the arguments passed to the man:jail[8] program, and indeed, they are set during its execution.
[.programlisting]
....
/usr/src/usr.sbin/jail/jail.c
char path[PATH_MAX];
...
if (realpath(argv[0], path) == NULL)
err(1, "realpath: %s", argv[0]);
if (chdir(path) != 0)
err(1, "chdir: %s", path);
memset(&j, 0, sizeof(j));
j.version = 0;
j.path = path;
j.hostname = argv[1];
....
==== Networking
One of the arguments passed to the man:jail[8] program is an IP address with which the jail can be accessed over the network. man:jail[8] translates the IP address given into host byte order and then stores it in `j` (the `jail` structure).
[.programlisting]
....
/usr/src/usr.sbin/jail/jail.c:
struct in_addr in;
...
if (inet_aton(argv[2], &in) == 0)
errx(1, "Could not make sense of ip-number: %s", argv[2]);
j.ip_number = ntohl(in.s_addr);
....
The man:inet_aton[3] function "interprets the specified character string as an Internet address, placing the address into the structure provided." The `ip_number` member in the `jail` structure is set only when the IP address placed onto the `in` structure by man:inet_aton[3] is translated into host byte order by man:ntohl[3].
==== Jailing the Process
Finally, the userland program jails the process. Jail now becomes an imprisoned process itself and then executes the command given using man:execv[3].
[.programlisting]
....
/usr/src/usr.sbin/jail/jail.c
i = jail(&j);
...
if (execv(argv[3], argv + 3) != 0)
err(1, "execv: %s", argv[3]);
....
As you can see, the `jail()` function is called, and its argument is the `jail` structure which has been filled with the arguments given to the program. Finally, the program you specify is executed. I will now discuss how jail is implemented within the kernel.
=== Kernel Space
We will now be looking at the file [.filename]#/usr/src/sys/kern/kern_jail.c#. This is the file where the man:jail[2] system call, appropriate sysctls, and networking functions are defined.
==== Sysctls
In [.filename]#kern_jail.c#, the following sysctls are defined:
[.programlisting]
....
/usr/src/sys/kern/kern_jail.c:
int jail_set_hostname_allowed = 1;
SYSCTL_INT(_security_jail, OID_AUTO, set_hostname_allowed, CTLFLAG_RW,
&jail_set_hostname_allowed, 0,
"Processes in jail can set their hostnames");
int jail_socket_unixiproute_only = 1;
SYSCTL_INT(_security_jail, OID_AUTO, socket_unixiproute_only, CTLFLAG_RW,
&jail_socket_unixiproute_only, 0,
"Processes in jail are limited to creating UNIX/IPv4/route sockets only");
int jail_sysvipc_allowed = 0;
SYSCTL_INT(_security_jail, OID_AUTO, sysvipc_allowed, CTLFLAG_RW,
&jail_sysvipc_allowed, 0,
"Processes in jail can use System V IPC primitives");
static int jail_enforce_statfs = 2;
SYSCTL_INT(_security_jail, OID_AUTO, enforce_statfs, CTLFLAG_RW,
&jail_enforce_statfs, 0,
"Processes in jail cannot see all mounted file systems");
int jail_allow_raw_sockets = 0;
SYSCTL_INT(_security_jail, OID_AUTO, allow_raw_sockets, CTLFLAG_RW,
&jail_allow_raw_sockets, 0,
"Prison root can create raw sockets");
int jail_chflags_allowed = 0;
SYSCTL_INT(_security_jail, OID_AUTO, chflags_allowed, CTLFLAG_RW,
&jail_chflags_allowed, 0,
"Processes in jail can alter system file flags");
int jail_mount_allowed = 0;
SYSCTL_INT(_security_jail, OID_AUTO, mount_allowed, CTLFLAG_RW,
&jail_mount_allowed, 0,
"Processes in jail can mount/unmount jail-friendly file systems");
....
Each of these sysctls can be accessed by the user through the man:sysctl[8] program. Throughout the kernel, these specific sysctls are recognized by their name. For example, the name of the first sysctl is `security.jail.set_hostname_allowed`.
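For illustration, a userland program could read one of these values with man:sysctlbyname[3]; a minimal sketch:

[.programlisting]
....
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

int
main(void)
{
	int allowed;
	size_t len = sizeof(allowed);

	if (sysctlbyname("security.jail.set_hostname_allowed",
	    &allowed, &len, NULL, 0) == -1) {
		perror("sysctlbyname");
		return 1;
	}
	printf("set_hostname_allowed = %d\n", allowed);
	return 0;
}
....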
==== man:jail[2] System Call
Like all system calls, the man:jail[2] system call takes two arguments, `struct thread *td` and `struct jail_args *uap`. `td` is a pointer to the `thread` structure which describes the calling thread. In this context, `uap` is a pointer to the structure in which a pointer to the `jail` structure passed by the userland [.filename]#jail.c# is contained. When I described the userland program before, you saw that the man:jail[2] system call was given a `jail` structure as its own argument.
[.programlisting]
....
/usr/src/sys/kern/kern_jail.c:
/*
* struct jail_args {
* struct jail *jail;
* };
*/
int
jail(struct thread *td, struct jail_args *uap)
....
Therefore, `uap->jail` can be used to access the `jail` structure which was passed to the system call. Next, the system call copies the `jail` structure into kernel space using the man:copyin[9] function. man:copyin[9] takes three arguments: the address of the data to be copied into kernel space (`uap->jail`), where to store it (`j`), and the size of the storage. The `jail` structure pointed to by `uap->jail` is copied into kernel space and stored in another `jail` structure, `j`.
[.programlisting]
....
/usr/src/sys/kern/kern_jail.c:
error = copyin(uap->jail, &j, sizeof(j));
....
There is another important structure defined in [.filename]#jail.h#. It is the `prison` structure. The `prison` structure is used exclusively within kernel space. Here is the definition of the `prison` structure.
[.programlisting]
....
/usr/include/sys/jail.h:
struct prison {
LIST_ENTRY(prison) pr_list; /* (a) all prisons */
int pr_id; /* (c) prison id */
int pr_ref; /* (p) refcount */
char pr_path[MAXPATHLEN]; /* (c) chroot path */
struct vnode *pr_root; /* (c) vnode to rdir */
char pr_host[MAXHOSTNAMELEN]; /* (p) jail hostname */
u_int32_t pr_ip; /* (c) ip addr host */
void *pr_linux; /* (p) linux abi */
int pr_securelevel; /* (p) securelevel */
struct task pr_task; /* (d) destroy task */
struct mtx pr_mtx;
void **pr_slots; /* (p) additional data */
};
....
The man:jail[2] system call then allocates memory for a `prison` structure and copies data between the `jail` and `prison` structure.
[.programlisting]
....
/usr/src/sys/kern/kern_jail.c:
MALLOC(pr, struct prison *, sizeof(*pr), M_PRISON, M_WAITOK | M_ZERO);
...
error = copyinstr(j.path, &pr->pr_path, sizeof(pr->pr_path), 0);
if (error)
goto e_killmtx;
...
error = copyinstr(j.hostname, &pr->pr_host, sizeof(pr->pr_host), 0);
if (error)
goto e_dropvnref;
pr->pr_ip = j.ip_number;
....
Next, we will discuss another important system call, man:jail_attach[2], which puts a process into a jail.
[.programlisting]
....
/usr/src/sys/kern/kern_jail.c:
/*
* struct jail_attach_args {
* int jid;
* };
*/
int
jail_attach(struct thread *td, struct jail_attach_args *uap)
....
This system call makes the changes that distinguish a jailed process from unjailed ones. To understand what man:jail_attach[2] does for us, certain background information is needed.
On FreeBSD, each kernel visible thread is identified by its `thread` structure, while processes are described by their `proc` structures. You can find the definitions of the `thread` and `proc` structures in [.filename]#/usr/include/sys/proc.h#. For example, the `td` argument in any system call is actually a pointer to the calling thread's `thread` structure, as stated before. The `td_proc` member in the `thread` structure pointed to by `td` is a pointer to the `proc` structure which represents the process that contains the thread represented by `td`. The `proc` structure contains members which describe the owner's identity (`p_ucred`), the process resource limits (`p_limit`), and so on. In the `ucred` structure pointed to by the `p_ucred` member of the `proc` structure, there is a pointer to the `prison` structure (`cr_prison`).
[.programlisting]
....
/usr/include/sys/proc.h:
struct thread {
...
struct proc *td_proc;
...
};
struct proc {
...
struct ucred *p_ucred;
...
};
/usr/include/sys/ucred.h
struct ucred {
...
struct prison *cr_prison;
...
};
....
In [.filename]#kern_jail.c#, the function `jail()` then calls `jail_attach()` with a given `jid`. `jail_attach()` calls `change_root()` to change the root directory of the calling process. `jail_attach()` then creates a new `ucred` structure, and attaches it to the calling process after it has successfully attached the `prison` structure to the `ucred` structure. From then on, the calling process is recognized as jailed. When the kernel routine `jailed()` is called with the newly created `ucred` structure as its argument, it returns 1 to indicate that the credential is associated with a jail. The common ancestor of all the processes forked within the jail is the process which runs man:jail[8], as it calls the man:jail[2] system call. When a program is executed through man:execve[2], it inherits the jailed property of its parent's `ucred` structure, and therefore it also has a jailed `ucred` structure.
[.programlisting]
....
/usr/src/sys/kern/kern_jail.c
int
jail(struct thread *td, struct jail_args *uap)
{
...
struct jail_attach_args jaa;
...
error = jail_attach(td, &jaa);
if (error)
goto e_dropprref;
...
}
int
jail_attach(struct thread *td, struct jail_attach_args *uap)
{
struct proc *p;
struct ucred *newcred, *oldcred;
struct prison *pr;
...
p = td->td_proc;
...
pr = prison_find(uap->jid);
...
change_root(pr->pr_root, td);
...
newcred->cr_prison = pr;
p->p_ucred = newcred;
...
}
....
When a process is forked from its parent process, the man:fork[2] system call uses `crhold()` to maintain the credential for the newly forked process. This inherently keeps the newly forked child's credential consistent with its parent's, so the child process is also jailed.
[.programlisting]
....
/usr/src/sys/kern/kern_fork.c:
p2->p_ucred = crhold(td->td_ucred);
...
td2->td_ucred = crhold(p2->p_ucred);
....
[[jail-restrictions]]
== Restrictions
Throughout the kernel there are access restrictions relating to jailed processes. Usually, these restrictions only check whether the process is jailed, and if so, return an error. For example:
[.programlisting]
....
if (jailed(td->td_ucred))
return (EPERM);
....
=== SysV IPC
System V IPC is based on messages. Processes can send each other these messages, which tell them how to act. The functions which deal with messages are: man:msgctl[3], man:msgget[3], man:msgsnd[3] and man:msgrcv[3]. Earlier, I mentioned that there were certain sysctls you could turn on or off in order to affect the behavior of jail. One of these sysctls was `security.jail.sysvipc_allowed`. By default, this sysctl is set to 0. If it were set to 1, it would defeat the whole purpose of having a jail; privileged users from the jail would be able to affect processes outside the jailed environment. The difference between a message and a signal is that a signal carries only its signal number, whereas a message can carry additional data.
[.filename]#/usr/src/sys/kern/sysv_msg.c#:
* `msgget(key, msgflg)`: `msgget` returns (and possibly creates) a message descriptor that designates a message queue for use in other functions.
* `msgctl(msgid, cmd, buf)`: Using this function, a process can query the status of a message descriptor.
* `msgsnd(msgid, msgp, msgsz, msgflg)`: `msgsnd` sends a message to the message queue identified by `msgid`.
* `msgrcv(msgid, msgp, msgsz, msgtyp, msgflg)`: `msgrcv` receives a message from the message queue identified by `msgid`.
In each of the system calls corresponding to these functions, there is this conditional:
[.programlisting]
....
/usr/src/sys/kern/sysv_msg.c:
if (!jail_sysvipc_allowed && jailed(td->td_ucred))
return (ENOSYS);
....
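The effect is visible from userland. A minimal sketch (illustrative only) run inside a jail with the default `security.jail.sysvipc_allowed=0` fails with `ENOSYS`:

[.programlisting]
....
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>

#include <stdio.h>

int
main(void)
{
	int msqid;

	/* Inside a jail with sysvipc_allowed=0 this fails with ENOSYS. */
	msqid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
	if (msqid == -1)
		perror("msgget");
	else
		printf("message queue id: %d\n", msqid);
	return (0);
}
....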
Semaphore system calls allow processes to synchronize execution by doing a set of operations atomically on a set of semaphores. Basically, semaphores provide another way for processes to lock resources. However, a process waiting on a semaphore that is in use will sleep until the resource is relinquished. The following semaphore system calls are blocked inside a jail: man:semget[2], man:semctl[2] and man:semop[2].
[.filename]#/usr/src/sys/kern/sysv_sem.c#:
* `semctl(semid, semnum, cmd, ...)`: `semctl` does the specified `cmd` on the semaphore queue indicated by `semid`.
* `semget(key, nsems, flag)`: `semget` creates an array of semaphores, corresponding to `key`.
+
`key` and `flag` take on the same meaning as they do in `msgget`.
* `semop(semid, array, nops)`: `semop` performs a group of operations indicated by `array`, to the set of semaphores identified by `semid`.
System V IPC allows processes to share memory. Processes can communicate directly with each other by sharing parts of their virtual address space and then reading and writing data stored in the shared memory. These system calls are blocked within a jailed environment: man:shmdt[2], man:shmat[2], man:shmctl[2] and man:shmget[2].
[.filename]#/usr/src/sys/kern/sysv_shm.c#:
* `shmctl(shmid, cmd, buf)`: `shmctl` does various control operations on the shared memory region identified by `shmid`.
* `shmget(key, size, flag)`: `shmget` accesses or creates a shared memory region of `size` bytes.
* `shmat(shmid, addr, flag)`: `shmat` attaches a shared memory region identified by `shmid` to the address space of a process.
* `shmdt(addr)`: `shmdt` detaches the shared memory region previously attached at `addr`.
=== Sockets
Jail treats the man:socket[2] system call and related lower-level socket functions in a special manner. In order to determine whether a certain socket is allowed to be created, it first checks to see if the sysctl `security.jail.socket_unixiproute_only` is set. If set, sockets are only allowed to be created if the family specified is either `PF_LOCAL`, `PF_INET` or `PF_ROUTE`. Otherwise, it returns an error.
[.programlisting]
....
/usr/src/sys/kern/uipc_socket.c:
int
socreate(int dom, struct socket **aso, int type, int proto,
struct ucred *cred, struct thread *td)
{
struct protosw *prp;
...
if (jailed(cred) && jail_socket_unixiproute_only &&
prp->pr_domain->dom_family != PF_LOCAL &&
prp->pr_domain->dom_family != PF_INET &&
prp->pr_domain->dom_family != PF_ROUTE) {
return (EPROTONOSUPPORT);
}
...
}
....
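From inside a jail the restriction can be observed with a short userland sketch (illustrative only): with `security.jail.socket_unixiproute_only=1`, a `PF_INET` socket is created normally, while a family outside the permitted set, such as `PF_INET6`, is refused with `EPROTONOSUPPORT`:

[.programlisting]
....
#include <sys/types.h>
#include <sys/socket.h>

#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	int s;

	s = socket(PF_INET, SOCK_STREAM, 0);	/* permitted inside a jail */
	if (s == -1)
		perror("socket(PF_INET)");
	else
		close(s);

	s = socket(PF_INET6, SOCK_STREAM, 0);	/* refused when restricted */
	if (s == -1)
		perror("socket(PF_INET6)");	/* expect EPROTONOSUPPORT */
	else
		close(s);
	return (0);
}
....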
=== Berkeley Packet Filter
The Berkeley Packet Filter provides a raw interface to data link layers in a protocol independent fashion. Whether BPF can be used in a jailed environment is now controlled by man:devfs[8].
=== Protocols
There are certain protocols which are very common, such as TCP, UDP, IP and ICMP. IP and ICMP are on the same level: the network layer. Certain precautions are taken to prevent a jailed process from binding a protocol to an arbitrary address; these checks are applied only if the `nam` parameter is set. `nam` is a pointer to a `sockaddr` structure, which describes the address on which to bind the service. A more exact definition is that `sockaddr` "may be used as a template for referring to the identifying tag and length of each address". In the function `in_pcbbind_setup()`, `sin` is a pointer to a `sockaddr_in` structure, which contains the port, address, length and domain family of the socket which is to be bound. Basically, this prevents any process inside a jail from binding to an address that does not belong to the jail in which the calling process exists.
[.programlisting]
....
/usr/src/sys/netinet/in_pcb.c:
int
in_pcbbind_setup(struct inpcb *inp, struct sockaddr *nam, in_addr_t *laddrp,
u_short *lportp, struct ucred *cred)
{
...
struct sockaddr_in *sin;
...
if (nam) {
sin = (struct sockaddr_in *)nam;
...
if (sin->sin_addr.s_addr != INADDR_ANY)
if (prison_ip(cred, 0, &sin->sin_addr.s_addr))
return(EINVAL);
...
if (lport) {
...
if (prison && prison_ip(cred, 0, &sin->sin_addr.s_addr))
return (EADDRNOTAVAIL);
...
}
}
if (lport == 0) {
...
if (laddr.s_addr != INADDR_ANY)
if (prison_ip(cred, 0, &laddr.s_addr))
return (EINVAL);
...
}
...
if (prison_ip(cred, 0, &laddr.s_addr))
return (EINVAL);
...
}
....
You might be wondering what the function `prison_ip()` does. `prison_ip()` is given three arguments: a pointer to the credential (represented by `cred`), a flag, and an IP address. It returns 1 if the IP address does NOT belong to the jail, or 0 otherwise. As you can see from the code, if it is indeed an IP address not belonging to the jail, the protocol is not allowed to bind to that address.
[.programlisting]
....
/usr/src/sys/kern/kern_jail.c:
int
prison_ip(struct ucred *cred, int flag, u_int32_t *ip)
{
u_int32_t tmp;
if (!jailed(cred))
return (0);
if (flag)
tmp = *ip;
else
tmp = ntohl(*ip);
if (tmp == INADDR_ANY) {
if (flag)
*ip = cred->cr_prison->pr_ip;
else
*ip = htonl(cred->cr_prison->pr_ip);
return (0);
}
if (tmp == INADDR_LOOPBACK) {
if (flag)
*ip = cred->cr_prison->pr_ip;
else
*ip = htonl(cred->cr_prison->pr_ip);
return (0);
}
if (cred->cr_prison->pr_ip != tmp)
return (1);
return (0);
}
....
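From userland the behavior of `prison_ip()` shows up through man:bind[2]. A hedged sketch (the addresses are placeholders): binding to `INADDR_ANY` or `INADDR_LOOPBACK` is silently rewritten to the jail's own address, while binding to an address outside the jail fails, with `EINVAL` in the code path shown above:

[.programlisting]
....
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#include <stdio.h>
#include <string.h>

int
main(void)
{
	struct sockaddr_in sin;
	int s;

	s = socket(PF_INET, SOCK_STREAM, 0);
	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_len = sizeof(sin);
	sin.sin_port = htons(8080);

	/* An address that does not belong to the jail: the bind fails. */
	sin.sin_addr.s_addr = inet_addr("192.0.2.1");
	if (bind(s, (struct sockaddr *)&sin, sizeof(sin)) == -1)
		perror("bind(192.0.2.1)");

	/* INADDR_ANY is transparently rewritten to the jail's address. */
	sin.sin_addr.s_addr = htonl(INADDR_ANY);
	if (bind(s, (struct sockaddr *)&sin, sizeof(sin)) == -1)
		perror("bind(INADDR_ANY)");
	return (0);
}
....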
=== Filesystem
Even `root` users within the jail are not allowed to unset or modify any file flags, such as the immutable, append-only, and undeletable flags, if the securelevel is greater than 0.
[.programlisting]
....
/usr/src/sys/ufs/ufs/ufs_vnops.c:
static int
ufs_setattr(ap)
...
{
...
if (!priv_check_cred(cred, PRIV_VFS_SYSFLAGS, 0)) {
if (ip->i_flags
& (SF_NOUNLINK | SF_IMMUTABLE | SF_APPEND)) {
error = securelevel_gt(cred, 0);
if (error)
return (error);
}
...
}
}
/usr/src/sys/kern/kern_priv.c
int
priv_check_cred(struct ucred *cred, int priv, int flags)
{
...
error = prison_priv_check(cred, priv);
if (error)
return (error);
...
}
/usr/src/sys/kern/kern_jail.c
int
prison_priv_check(struct ucred *cred, int priv)
{
...
switch (priv) {
...
case PRIV_VFS_SYSFLAGS:
if (jail_chflags_allowed)
return (0);
else
return (EPERM);
...
}
...
}
....
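A userland sketch (illustrative only) of the effect: with the default `security.jail.chflags_allowed=0`, even the jail's `root` cannot set a system flag, and man:chflags[2] fails with `EPERM`:

[.programlisting]
....
#include <sys/types.h>
#include <sys/stat.h>

#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	/* Assumes the file /tmp/demo already exists inside the jail. */
	if (chflags("/tmp/demo", SF_IMMUTABLE) == -1)
		perror("chflags");	/* expect EPERM inside a jail */
	else
		printf("system immutable flag set\n");
	return (0);
}
....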
diff --git a/documentation/content/en/books/arch-handbook/kobj/_index.adoc b/documentation/content/en/books/arch-handbook/kobj/_index.adoc
index bd11946acd..f2760e2252 100644
--- a/documentation/content/en/books/arch-handbook/kobj/_index.adoc
+++ b/documentation/content/en/books/arch-handbook/kobj/_index.adoc
@@ -1,257 +1,258 @@
---
title: Chapter 3. Kernel Objects
prev: books/arch-handbook/locking
next: books/arch-handbook/jail
+description: Kernel Objects
---
[[kernel-objects]]
= Kernel Objects
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 3
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
Kernel Objects, or _Kobj_, provides an object-oriented C programming system for the kernel.
As such, the data being operated on carries the description of how to operate on it.
This allows operations to be added and removed from an interface at run time and without breaking binary compatibility.
[[kernel-objects-term]]
== Terminology
Object::
A set of data - data structure - data allocation.
Method::
An operation - function.
Class::
One or more methods.
Interface::
A standard set of one or more methods.
[[kernel-objects-operation]]
== Kobj Operation
Kobj works by generating descriptions of methods.
Each description holds a unique id as well as a default function.
The description's address is used to uniquely identify the method within a class' method table.
A class is built by creating a method table associating one or more functions with method descriptions.
Before use the class is compiled.
The compilation allocates a cache and associates it with the class.
A unique id is assigned to each method description within the method table of the class, unless one has already been assigned by the compilation of another class that references it.
For every method to be used, a function is generated by a script to qualify arguments and automatically reference the method description for a lookup.
The generated function looks up the method by using the unique id associated with the method description as a hash into the cache associated with the object's class.
If the method is not cached the generated function proceeds to use the class' table to find the method.
If the method is found then the associated function within the class is used; otherwise, the default function associated with the method description is used.
These indirections can be visualized as the following:
[.programlisting]
....
object->cache<->class
....
[[kernel-objects-using]]
== Using Kobj
=== Structures
[.programlisting]
....
struct kobj_method
....
=== Functions
[.programlisting]
....
void kobj_class_compile(kobj_class_t cls);
void kobj_class_compile_static(kobj_class_t cls, kobj_ops_t ops);
void kobj_class_free(kobj_class_t cls);
kobj_t kobj_create(kobj_class_t cls, struct malloc_type *mtype, int mflags);
void kobj_init(kobj_t obj, kobj_class_t cls);
void kobj_delete(kobj_t obj, struct malloc_type *mtype);
....
=== Macros
[.programlisting]
....
KOBJ_CLASS_FIELDS
KOBJ_FIELDS
DEFINE_CLASS(name, methods, size)
KOBJMETHOD(NAME, FUNC)
....
=== Headers
[.programlisting]
....
<sys/param.h>
<sys/kobj.h>
....
=== Creating an Interface Template
The first step in using Kobj is to create an Interface.
Creating the interface involves creating a template that the script [.filename]#src/sys/kern/makeobjops.pl# can use to generate the header and code for the method declarations and method lookup functions.
Within this template the following keywords are used: `#include`, `INTERFACE`, `CODE`, `METHOD`, `STATICMETHOD`, and `DEFAULT`.
The `#include` statement and what follows it is copied verbatim to the head of the generated code file.
For example:
[.programlisting]
....
#include <sys/foo.h>
....
The `INTERFACE` keyword is used to define the interface name.
This name is concatenated with each method name as [interface name]_[method name].
Its syntax is `INTERFACE [interface name];`
For example:
[.programlisting]
....
INTERFACE foo;
....
The `CODE` keyword copies its arguments verbatim into the code file.
Its syntax is `CODE { [whatever] };`
For example:
[.programlisting]
....
CODE {
struct foo * foo_alloc_null(struct bar *)
{
return NULL;
}
};
....
The `METHOD` keyword describes a method.
Its syntax is `METHOD [return type] [method name] { [object [, arguments]] };`
For example:
[.programlisting]
....
METHOD int bar {
struct object *;
struct foo *;
struct bar;
};
....
The `DEFAULT` keyword may follow the `METHOD` keyword.
It extends the `METHOD` keyword to include the default function for the method.
The extended syntax is `METHOD [return type] [method name] { [object; [other arguments]] } DEFAULT [default function];`
For example:
[.programlisting]
....
METHOD int bar {
struct object *;
struct foo *;
int bar;
} DEFAULT foo_hack;
....
The `STATICMETHOD` keyword is used like the `METHOD` keyword, except that the kobj data is not at the head of the object structure, so casting to kobj_t would be incorrect.
Instead `STATICMETHOD` relies on the Kobj data being referenced as 'ops'.
This is also useful for calling methods directly out of a class's method table.
Other complete examples:
[.programlisting]
....
src/sys/kern/bus_if.m
src/sys/kern/device_if.m
....
=== Creating a Class
The second step in using Kobj is to create a class.
A class consists of a name, a table of methods, and the size of objects if Kobj's object handling facilities are used.
To create the class use the macro `DEFINE_CLASS()`.
To create the method table, create an array of kobj_method_t terminated by a NULL entry.
Each non-NULL entry may be created using the macro `KOBJMETHOD()`.
For example:
[.programlisting]
....
/* The method table must be defined before the class that references it. */
kobj_method_t foomethods[] = {
	KOBJMETHOD(bar_doo, foo_doo),
	KOBJMETHOD(bar_foo, foo_foo),
	{ NULL, NULL }
};

DEFINE_CLASS(fooclass, foomethods, sizeof(struct foodata));
....
The class must be "compiled".
Depending on the state of the system at the time the class is to be initialized, a statically allocated cache ("ops table") may have to be used.
This can be accomplished by declaring a `struct kobj_ops` and using `kobj_class_compile_static();` otherwise, `kobj_class_compile()` should be used.
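For example, a class that must be compiled before dynamic memory allocation is available might use a statically allocated ops cache (a minimal sketch reusing the `fooclass` declared above; the call would be made from an early initialization routine):

[.programlisting]
....
static struct kobj_ops fooclass_ops;

/* Compile fooclass using the statically allocated ops cache. */
kobj_class_compile_static(&fooclass, &fooclass_ops);
....
Once the system is fully initialized, `kobj_class_compile(&fooclass);` may be used instead, and the cache will be allocated dynamically.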
=== Creating an Object
The third step in using Kobj involves how to define the object.
Kobj object creation routines assume that Kobj data is at the head of an object.
If this is not appropriate you will have to allocate the object yourself and then use `kobj_init()` on the Kobj portion of it; otherwise, you may use `kobj_create()` to allocate and initialize the Kobj portion of the object automatically.
`kobj_init()` may also be used to change the class that an object uses.
To integrate Kobj into the object you should use the macro KOBJ_FIELDS.
For example:
[.programlisting]
....
struct foo_data {
KOBJ_FIELDS;
foo_foo;
foo_bar;
};
....
=== Calling Methods
The last step in using Kobj is simply to use the generated functions to invoke the desired method within the object's class.
This is as simple as using the interface name and the method name with a few modifications.
The interface name should be concatenated with the method name using a '_' between them, all in upper case.
For example, if the interface name was foo and the method was bar then the call would be:
[.programlisting]
....
[return value = ] FOO_BAR(object [, other parameters]);
....
=== Cleaning Up
When an object allocated through `kobj_create()` is no longer needed `kobj_delete()` may be called on it, and when a class is no longer being used `kobj_class_free()` may be called on it.
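Putting the pieces together, here is a minimal sketch of an object's life cycle, reusing the `fooclass`, `foo_data`, and `FOO_BAR()` examples above; `M_FOO` stands in for a malloc type declared elsewhere with `MALLOC_DEFINE()`:

[.programlisting]
....
struct foo_data *foo;

/* Allocate the object and initialize its Kobj portion. */
foo = (struct foo_data *)kobj_create(&fooclass, M_FOO, M_WAITOK);

/* Invoke the bar method through the generated lookup function. */
FOO_BAR(foo /* , other parameters */);

/* Release the object once it is no longer needed. */
kobj_delete((kobj_t)foo, M_FOO);
....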
diff --git a/documentation/content/en/books/arch-handbook/locking/_index.adoc b/documentation/content/en/books/arch-handbook/locking/_index.adoc
index 517c615b99..fdbdd70d4e 100644
--- a/documentation/content/en/books/arch-handbook/locking/_index.adoc
+++ b/documentation/content/en/books/arch-handbook/locking/_index.adoc
@@ -1,126 +1,127 @@
---
title: Chapter 2. Locking Notes
prev: books/arch-handbook/boot
next: books/arch-handbook/kobj
+description: Locking Notes
---
[[locking]]
= Locking Notes
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 2
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
_This chapter is maintained by the FreeBSD SMP Next Generation Project._
This document outlines the locking used in the FreeBSD kernel to permit effective multi-processing within the kernel. Locking can be achieved via several means. Data structures can be protected by mutexes or man:lockmgr[9] locks. A few variables are protected simply by always using atomic operations to access them.
[[locking-mutexes]]
== Mutexes
A mutex is simply a lock used to guarantee mutual exclusion. Specifically, a mutex may only be owned by one entity at a time. If another entity wishes to obtain a mutex that is already owned, it must wait until the mutex is released. In the FreeBSD kernel, mutexes are owned by processes.
Mutexes may be recursively acquired, but they are intended to be held for a short period of time. Specifically, one may not sleep while holding a mutex. If you need to hold a lock across a sleep, use a man:lockmgr[9] lock.
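As a point of reference, the typical man:mutex[9] life cycle looks like the following minimal sketch (the exact `mtx_init()` signature has varied between FreeBSD versions):

[.programlisting]
....
#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>

static struct mtx foo_mtx;

/* Done once, for example from an initialization routine. */
mtx_init(&foo_mtx, "foo lock", NULL, MTX_DEF);

mtx_lock(&foo_mtx);
/* ... access the data protected by foo_mtx; do not sleep here ... */
mtx_unlock(&foo_mtx);

/* Done once the protected data is being torn down. */
mtx_destroy(&foo_mtx);
....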
Each mutex has several properties of interest:
Variable Name::
The name of the struct mtx variable in the kernel source.
Logical Name::
The name of the mutex assigned to it by `mtx_init`. This name is displayed in KTR trace messages and witness errors and warnings and is used to distinguish mutexes in the witness code.
Type::
The type of the mutex in terms of the `MTX_*` flags. The meaning for each flag is related to its meaning as documented in man:mutex[9].
`MTX_DEF`:::
A sleep mutex
`MTX_SPIN`:::
A spin mutex
`MTX_RECURSE`:::
This mutex is allowed to recurse.
Protectees::
A list of data structures or data structure members that this entry protects. For data structure members, the name will be in the form of `structure name`.`member name`.
Dependent Functions::
Functions that can only be called if this mutex is held.
.Mutex List
[cols="15%,10%,10%,55%,20%", frame="all", options="header"]
|===
| Variable Name
| Logical Name
| Type
| Protectees
| Dependent Functions
|sched_lock
|"sched lock"
|`MTX_SPIN` \| `MTX_RECURSE`
|`_gmonparam`, `cnt.v_swtch`, `cp_time`, `curpriority`, `mtx`.`mtx_blocked`, `mtx`.`mtx_contested`, `proc`.`p_procq`, `proc`.`p_slpq`, `proc`.`p_sflag`, `proc`.`p_stat`, `proc`.`p_estcpu`, `proc`.`p_cpticks` `proc`.`p_pctcpu`, `proc`.`p_wchan`, `proc`.`p_wmesg`, `proc`.`p_swtime`, `proc`.`p_slptime`, `proc`.`p_runtime`, `proc`.`p_uu`, `proc`.`p_su`, `proc`.`p_iu`, `proc`.`p_uticks`, `proc`.`p_sticks`, `proc`.`p_iticks`, `proc`.`p_oncpu`, `proc`.`p_lastcpu`, `proc`.`p_rqindex`, `proc`.`p_heldmtx`, `proc`.`p_blocked`, `proc`.`p_mtxname`, `proc`.`p_contested`, `proc`.`p_priority`, `proc`.`p_usrpri`, `proc`.`p_nativepri`, `proc`.`p_nice`, `proc`.`p_rtprio`, `pscnt`, `slpque`, `itqueuebits`, `itqueues`, `rtqueuebits`, `rtqueues`, `queuebits`, `queues`, `idqueuebits`, `idqueues`, `switchtime`, `switchticks`
|`setrunqueue`, `remrunqueue`, `mi_switch`, `chooseproc`, `schedclock`, `resetpriority`, `updatepri`, `maybe_resched`, `cpu_switch`, `cpu_throw`, `need_resched`, `resched_wanted`, `clear_resched`, `aston`, `astoff`, `astpending`, `calcru`, `proc_compare`
|vm86pcb_lock
|"vm86pcb lock"
|`MTX_DEF`
|`vm86pcb`
|`vm86_bioscall`
|Giant
|"Giant"
|`MTX_DEF` \| `MTX_RECURSE`
|nearly everything
|lots
|callout_lock
|"callout lock"
|`MTX_SPIN` \| `MTX_RECURSE`
|`callfree`, `callwheel`, `nextsoftcheck`, `proc`.`p_itcallout`, `proc`.`p_slpcallout`, `softticks`, `ticks`
|
|===
[[locking-sx]]
== Shared Exclusive Locks
These locks provide basic reader-writer type functionality and may be held by a sleeping process. Currently they are backed by man:lockmgr[9].
.Shared Exclusive Lock List
[cols="20%,80%", options="header"]
|===
| Variable Name
| Protectees
|`allproc_lock`
|`allproc` `zombproc` `pidhashtbl` `proc`.`p_list` `proc`.`p_hash` `nextpid`
|`proctree_lock`
|`proc`.`p_children` `proc`.`p_sibling`
|===
[[locking-atomic]]
== Atomically Protected Variables
An atomically protected variable is a special variable that is not protected by an explicit lock. Instead, all data accesses to the variables use special atomic operations as described in man:atomic[9]. Very few variables are treated this way, although other synchronization primitives such as mutexes are implemented with atomically protected variables.
* `mtx`.`mtx_lock`
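For example, a counter protected only by atomic operations might be manipulated as in the following minimal sketch using the man:atomic[9] interface:

[.programlisting]
....
#include <sys/types.h>
#include <machine/atomic.h>

static volatile u_int foo_count;

/* Increment and decrement without holding any lock. */
atomic_add_int(&foo_count, 1);
...
atomic_subtract_int(&foo_count, 1);
....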
diff --git a/documentation/content/en/books/arch-handbook/mac/_index.adoc b/documentation/content/en/books/arch-handbook/mac/_index.adoc
index f33be4228e..b6d2903530 100644
--- a/documentation/content/en/books/arch-handbook/mac/_index.adoc
+++ b/documentation/content/en/books/arch-handbook/mac/_index.adoc
@@ -1,5756 +1,5757 @@
---
title: Chapter 6. The TrustedBSD MAC Framework
authors:
- author: Chris Costello
email: chris@FreeBSD.org
- author: Robert Watson
email: rwatson@FreeBSD.org
prev: books/arch-handbook/sysinit
next: books/arch-handbook/vm
+description: The TrustedBSD MAC Framework
---
[[mac]]
= The TrustedBSD MAC Framework
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 6
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[mac-copyright]]
== MAC Documentation Copyright
This documentation was developed for the FreeBSD Project by Chris Costello at Safeport Network Services and Network Associates Laboratories, the Security Research Division of Network Associates, Inc. under DARPA/SPAWAR contract N66001-01-C-8035 ("CBOSS"), as part of the DARPA CHATS research program.
Redistribution and use in source (SGML DocBook) and 'compiled' forms (SGML, HTML, PDF, PostScript, RTF and so forth) with or without modification, are permitted provided that the following conditions are met:
. Redistributions of source code (SGML DocBook) must retain the above copyright notice, this list of conditions and the following disclaimer as the first lines of this file unmodified.
. Redistributions in compiled form (transformed to other DTDs, converted to PDF, PostScript, RTF and other formats) must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
[IMPORTANT]
====
THIS DOCUMENTATION IS PROVIDED BY THE NETWORKS ASSOCIATES TECHNOLOGY, INC "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL NETWORKS ASSOCIATES TECHNOLOGY, INC BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS DOCUMENTATION, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
====
[[mac-synopsis]]
== Synopsis
FreeBSD includes experimental support for several mandatory access control policies, as well as a framework for kernel security extensibility, the TrustedBSD MAC Framework. The MAC Framework is a pluggable access control framework, permitting new security policies to be easily linked into the kernel, loaded at boot, or loaded dynamically at run-time. The framework provides a variety of features to make it easier to implement new security policies, including the ability to easily tag security labels (such as confidentiality information) onto system objects.
This chapter introduces the MAC policy framework and provides documentation for a sample MAC policy module.
[[mac-introduction]]
== Introduction
The TrustedBSD MAC framework provides a mechanism to allow the compile-time or run-time extension of the kernel access control model. New system policies may be implemented as kernel modules and linked to the kernel; if multiple policy modules are present, their results will be composed. The MAC Framework provides a variety of access control infrastructure services to assist policy writers, including support for transient and persistent policy-agnostic object security labels. This support is currently considered experimental.
This chapter provides information appropriate for developers of policy modules, as well as potential consumers of MAC-enabled environments, to learn about how the MAC Framework supports access control extension of the kernel.
[[mac-background]]
== Policy Background
Mandatory Access Control (MAC) refers to a set of access control policies that are mandatorily enforced on users by the operating system. MAC policies may be contrasted with Discretionary Access Control (DAC) protections, by which non-administrative users may (at their discretion) protect objects. In traditional UNIX systems, DAC protections include file permissions and access control lists; MAC protections include process controls preventing inter-user debugging and firewalls. A variety of MAC policies have been formulated by operating system designers and security researchers, including the Multi-Level Security (MLS) confidentiality policy, the Biba integrity policy, Role-Based Access Control (RBAC), Domain and Type Enforcement (DTE), and Type Enforcement (TE). Each model bases decisions on a variety of factors, including user identity, role, and security clearance, as well as security labels on objects representing concepts such as data sensitivity and integrity.
The TrustedBSD MAC Framework is capable of supporting policy modules that implement all of these policies, as well as a broad class of system hardening policies, which may use existing security attributes, such as user and group IDs, as well as extended attributes on files, and other system properties. In addition, despite the name, the MAC Framework can also be used to implement purely discretionary policies, as policy modules are given substantial flexibility in how they authorize protections.
[[mac-framework-kernel-arch]]
== MAC Framework Kernel Architecture
The TrustedBSD MAC Framework permits kernel modules to extend the operating system security policy, as well as providing infrastructure functionality required by many access control modules. If multiple policies are simultaneously loaded, the MAC Framework will usefully (for some definition of useful) compose the results of the policies.
[[mac-framework-kernel-arch-elements]]
=== Kernel Elements
The MAC Framework contains a number of kernel elements:
* Framework management interfaces
* Concurrency and synchronization primitives.
* Policy registration
* Extensible security label for kernel objects
* Policy entry point composition operators
* Label management primitives
* Entry point API invoked by kernel services
* Entry point API to policy modules
* Entry points implementations (policy life cycle, object life cycle/label management, access control checks).
* Policy-agnostic label-management system calls
* `mac_syscall()` multiplex system call
* Various security policies implemented as MAC policy modules
[[mac-framework-kernel-arch-management]]
=== Framework Management Interfaces
The TrustedBSD MAC Framework may be directly managed using sysctls, loader tunables, and system calls.
In most cases, sysctls and loader tunables of the same name modify the same parameters, and control behavior such as enforcement of protections relating to various kernel subsystems. In addition, if MAC debugging support is compiled into the kernel, several counters will be maintained tracking label allocation. It is generally advisable that per-subsystem enforcement controls not be used to control policy behavior in production environments, as they broadly impact the operation of all active policies. Instead, per-policy controls should be preferred, as they provide greater granularity and greater operational consistency for policy modules.
Loading and unloading of policy modules is performed using the system module management system calls and other system interfaces, including boot loader variables; policy modules will have the opportunity to influence load and unload events, including preventing undesired unloading of the policy.
[[mac-framework-kernel-arch-synchronization]]
=== Policy List Concurrency and Synchronization
As the set of active policies may change at run-time, and the invocation of entry points is non-atomic, synchronization is required to prevent loading or unloading of policies while an entry point invocation is in progress, freezing the set of active policies for the duration. This is accomplished by means of a framework busy count: whenever an entry point is entered, the busy count is incremented; whenever it is exited, the busy count is decremented. While the busy count is elevated, policy list changes are not permitted, and threads attempting to modify the policy list will sleep until the list is not busy. The busy count is protected by a mutex, and a condition variable is used to wake up sleepers waiting on policy list modifications. One side effect of this synchronization model is that recursion into the MAC Framework from within a policy module is permitted, although not generally used.
Various optimizations are used to reduce the overhead of the busy count, including avoiding the full cost of incrementing and decrementing if the list is empty or contains only static entries (policies that are loaded before the system starts, and cannot be unloaded). A compile-time option is also provided which prevents any change in the set of loaded policies at run-time, which eliminates the mutex locking costs associated with supporting dynamically loaded and unloaded policies as synchronization is no longer required.
As the MAC Framework is not permitted to block in some entry points, a normal sleep lock cannot be used; as a result, it is possible for the load or unload attempt to block for a substantial period of time waiting for the framework to become idle.
[[mac-framework-kernel-arch-label-synchronization]]
=== Label Synchronization
As kernel objects of interest may generally be accessed from more than one thread at a time, and simultaneous entry of more than one thread into the MAC Framework is permitted, security attribute storage maintained by the MAC Framework is carefully synchronized. In general, existing kernel synchronization on kernel object data is used to protect MAC Framework security labels on the object: for example, MAC labels on sockets are protected using the existing socket mutex. Likewise, semantics for concurrent access are generally identical to those of the container objects: for credentials, copy-on-write semantics are maintained for label contents as with the remainder of the credential structure. The MAC Framework asserts necessary locks on objects when invoked with an object reference. Policy authors must be aware of these synchronization semantics, as they will sometimes limit the types of accesses permitted on labels: for example, when a read-only reference to a credential is passed to a policy via an entry point, only read operations are permitted on the label state attached to the credential.
[[mac-framework-kernel-arch-policy-synchronization]]
=== Policy Synchronization and Concurrency
Policy modules must be written to assume that many kernel threads may simultaneously enter one or more policy entry points due to the parallel and preemptive nature of the FreeBSD kernel. If the policy module makes use of mutable state, this may require the use of synchronization primitives within the policy to prevent inconsistent views on that state resulting in incorrect operation of the policy. Policies will generally be able to make use of existing FreeBSD synchronization primitives for this purpose, including mutexes, sleep locks, condition variables, and counting semaphores. However, policies should be written to employ these primitives carefully, respecting existing kernel lock orders, and recognizing that some entry points are not permitted to sleep, limiting the use of primitives in those entry points to mutexes and wakeup operations.
When policy modules call out to other kernel subsystems, they will generally need to release any in-policy locks in order to avoid violating the kernel lock order or risking lock recursion. This will maintain policy locks as leaf locks in the global lock order, helping to avoid deadlock.
[[mac-framework-kernel-arch-registration]]
=== Policy Registration
The MAC Framework maintains two lists of active policies: a static list, and a dynamic list. The lists differ only with regards to their locking semantics: an elevated reference count is not required to make use of the static list. When kernel modules containing MAC Framework policies are loaded, the policy module will use `SYSINIT` to invoke a registration function; when a policy module is unloaded, `SYSINIT` will likewise invoke a de-registration function. Registration may fail if a policy module is loaded more than once, if insufficient resources are available for the registration (for example, the policy might require labeling and insufficient labeling state might be available), or other policy prerequisites might not be met (some policies may only be loaded prior to boot). Likewise, de-registration may fail if a policy is flagged as not unloadable.
[[mac-framework-kernel-arch-entrypoints]]
=== Entry Points
Kernel services interact with the MAC Framework in two ways: they invoke a series of APIs to notify the framework of relevant events, and they provide a policy-agnostic label structure pointer in security-relevant objects. The label pointer is maintained by the MAC Framework via label management entry points, and permits the Framework to offer a labeling service to policy modules through relatively non-invasive changes to the kernel subsystem maintaining the object. For example, label pointers have been added to processes, process credentials, sockets, pipes, vnodes, Mbufs, network interfaces, IP reassembly queues, and a variety of other security-relevant structures. Kernel services also invoke the MAC Framework when they perform important security decisions, permitting policy modules to augment those decisions based on their own criteria (possibly including data stored in security labels). Most of these security critical decisions will be explicit access control checks; however, some affect more general decision functions such as packet matching for sockets and label transition at program execution.
[[mac-framework-kernel-arch-composition]]
=== Policy Composition
When more than one policy module is loaded into the kernel at a time, the results of the policy modules will be composed by the framework using a composition operator. This operator is currently hard-coded, and requires that all active policies must approve a request for it to return success. As policies may return a variety of error conditions (success, access denied, object does not exist, ...), a precedence operator selects the resulting error from the set of errors returned by policies. In general, errors indicating that an object does not exist will be preferred to errors indicating that access to an object is denied. While it is not guaranteed that the resulting composition will be useful or secure, we have found that it is for many useful selections of policies. For example, traditional trusted systems often ship with two or more policies using a similar composition.
[[mac-framework-kernel-arch-labels]]
=== Labeling Support
As many interesting access control extensions rely on security labels on objects, the MAC Framework provides a set of policy-agnostic label management system calls covering a variety of user-exposed objects. Common label types include partition identifiers, sensitivity labels, integrity labels, compartments, domains, roles, and types. By policy agnostic, we mean that policy modules are able to completely define the semantics of meta-data associated with an object. Policy modules participate in the internalization and externalization of string-based labels provided by user applications, and can expose multiple label elements to applications if desired.
In-memory labels are stored in slab-allocated `struct label`, which consists of a fixed-length array of unions, each holding a `void *` pointer and a `long`. Policies registering for label storage will be assigned a "slot" identifier, which may be used to dereference the label storage. The semantics of the storage are left entirely up to the policy module: modules are provided with a variety of entry points associated with the kernel object life cycle, including initialization, association/creation, and destruction. Using these interfaces, it is possible to implement reference counting and other storage models. Direct access to the object structure is generally not required by policy modules to retrieve a label, as the MAC Framework generally passes both a pointer to the object and a direct pointer to the object's label into entry points. The primary exception to this rule is the process credential, which must be manually dereferenced to access the credential label. This may change in future revisions of the MAC Framework.
Initialization entry points frequently include a sleeping disposition flag indicating whether or not an initialization is permitted to sleep; if sleeping is not permitted, a failure may be returned to cancel allocation of the label (and hence object). This may occur, for example, in the network stack during interrupt handling, where sleeping is not permitted, or while the caller holds a mutex. Due to the performance cost of maintaining labels on in-flight network packets (Mbufs), policies must specifically declare a requirement that Mbuf labels be allocated. Dynamically loaded policies making use of labels must be able to handle the case where their init function has not been called on an object, as objects may already exist when the policy is loaded. The MAC Framework guarantees that uninitialized label slots will hold a 0 or NULL value, which policies may use to detect uninitialized values. However, as allocation of Mbuf labels is conditional, policies must also be able to handle a NULL label pointer for Mbufs if they have been loaded dynamically.
In the case of file system labels, special support is provided for the persistent storage of security labels in extended attributes. Where available, extended attribute transactions are used to permit consistent compound updates of security labels on vnodes--currently this support is present only in the UFS2 file system. Policy authors may choose to implement multilabel file system object labels using one (or more) extended attributes. For efficiency reasons, the vnode label (`v_label`) is a cache of any on-disk label; policies are able to load values into the cache when the vnode is instantiated, and update the cache as needed. As a result, the extended attribute need not be directly accessed with every access control check.
[NOTE]
====
Currently, if a labeled policy permits dynamic unloading, its state slot cannot be reclaimed, which places a strict (and relatively low) bound on the number of unload-reload operations for labeled policies.
====
[[mac-framework-kernel-arch-syscalls]]
=== System Calls
The MAC Framework implements a number of system calls: most of these calls support the policy-agnostic label retrieval and manipulation APIs exposed to user applications.
The label management calls accept a label description structure, `struct mac`, which contains a series of MAC label elements. Each element contains a character string name, and character string value. Each policy will be given the chance to claim a particular element name, permitting policies to expose multiple independent elements if desired. Policy modules perform the internalization and externalization between kernel labels and user-provided labels via entry points, permitting a variety of semantics. Label management system calls are generally wrapped by user library functions to perform memory allocation and error handling, simplifying user applications that must manage labels.
The following MAC-related system calls are present in the FreeBSD kernel:
* `mac_get_proc()` may be used to retrieve the label of the current process.
* `mac_set_proc()` may be used to request a change in the label of the current process.
* `mac_get_fd()` may be used to retrieve the label of an object (file, socket, pipe, ...) referenced by a file descriptor.
* `mac_get_file()` may be used to retrieve the label of an object referenced by a file system path.
* `mac_set_fd()` may be used to request a change in the label of an object (file, socket, pipe, ...) referenced by a file descriptor.
* `mac_set_file()` may be used to request a change in the label of an object referenced by a file system path.
* `mac_syscall()` permits policy modules to create new system calls without modifying the system call table; it accepts a target policy name, operation number, and opaque argument for use by the policy.
* `mac_get_pid()` may be used to request the label of another process by process id.
* `mac_get_link()` is identical to `mac_get_file()`, only it will not follow a symbolic link if it is the final entry in the path, so may be used to retrieve the label on a symlink.
* `mac_set_link()` is identical to `mac_set_file()`, only it will not follow a symbolic link if it is the final entry in a path, so may be used to manipulate the label on a symlink.
* `mac_execve()` is identical to the `execve()` system call, only it also accepts a requested label to set the process label to when beginning execution of a new program. This change in label on execution is referred to as a "transition".
* `mac_get_peer()`, actually implemented via a socket option, retrieves the label of a remote peer on a socket, if available.
In addition to these system calls, the `SIOCGIFMAC` and `SIOCSIFMAC` network interface ioctls permit the labels on network interfaces to be retrieved and set.
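For example, a userland process might inspect its own label with the library wrappers described in man:mac[3]. This is a hedged sketch; error handling is abbreviated:

[.programlisting]
....
#include <sys/mac.h>

#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
	mac_t label;
	char *text;

	if (mac_prepare_process_label(&label) == -1 ||
	    mac_get_proc(label) == -1 ||
	    mac_to_text(label, &text) == -1) {
		perror("mac");
		exit(1);
	}
	printf("process label: %s\n", text);
	free(text);
	mac_free(label);
	return (0);
}
....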
[[mac-policy-architecture]]
== MAC Policy Architecture
Security policies are either linked directly into the kernel, or compiled into loadable kernel modules that may be loaded at boot, or dynamically using the module loading system calls at runtime. Policy modules interact with the system through a set of declared entry points, providing access to a stream of system events and permitting the policy to influence access control decisions. Each policy contains a number of elements:
* Optional configuration parameters for policy.
* Centralized implementation of the policy logic and parameters.
* Optional implementation of policy life cycle events, such as initialization and destruction.
* Optional support for initializing, maintaining, and destroying labels on selected kernel objects.
* Optional support for user process inspection and modification of labels on selected objects.
* Implementation of selected access control entry points that are of interest to the policy.
* Declaration of policy identity, module entry points, and policy properties.
[[mac-policy-declaration]]
=== Policy Declaration
Modules may be declared using the `MAC_POLICY_SET()` macro, which names the policy, provides a reference to the MAC entry point vector, provides load-time flags determining how the policy framework should handle the policy, and optionally requests the allocation of label state by the framework.
[.programlisting]
....
static struct mac_policy_ops mac_policy_ops =
{
.mpo_destroy = mac_policy_destroy,
.mpo_init = mac_policy_init,
.mpo_init_bpfdesc_label = mac_policy_init_bpfdesc_label,
.mpo_init_cred_label = mac_policy_init_label,
/* ... */
.mpo_check_vnode_setutimes = mac_policy_check_vnode_setutimes,
.mpo_check_vnode_stat = mac_policy_check_vnode_stat,
.mpo_check_vnode_write = mac_policy_check_vnode_write,
};
....
The MAC policy entry point vector, `mac_policy_ops` in this example, associates functions defined in the module with specific entry points. A complete listing of available entry points and their prototypes may be found in the MAC entry point reference section. Of specific interest during module registration are the `.mpo_destroy` and `.mpo_init` entry points. `.mpo_init` will be invoked once a policy is successfully registered with the module framework but prior to any other entry points becoming active. This permits the policy to perform any policy-specific allocation and initialization, such as initialization of any data or locks. `.mpo_destroy` will be invoked when a policy module is unloaded to permit releasing of any allocated memory and destruction of locks. Currently, these two entry points are invoked with the MAC policy list mutex held to prevent any other entry points from being invoked: this will be changed, but in the meantime, policies should be careful about what kernel primitives they invoke so as to avoid lock ordering or sleeping problems.
The policy declaration's module name field exists so that the module may be uniquely identified for the purposes of module dependencies. An appropriate string should be selected. The full string name of the policy is displayed to the user via the kernel log during load and unload events, and also exported when providing status information to userland processes.
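A hedged example of such a declaration, for a hypothetical policy named `mac_policy_example` using the `mac_policy_ops` vector shown above and requesting no label state:

[.programlisting]
....
MAC_POLICY_SET(&mac_policy_ops, mac_policy_example,
    "Example MAC policy", MPC_LOADTIME_FLAG_UNLOADOK, NULL);
....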
[[mac-policy-flags]]
=== Policy Flags
The policy declaration flags field permits the module to provide the framework with information about its capabilities at the time the module is loaded. Currently, three flags are defined:
MPC_LOADTIME_FLAG_UNLOADOK::
This flag indicates that the policy module may be unloaded. If this flag is not provided, then the policy framework will reject requests to unload the module. This flag might be used by modules that allocate label state and are unable to free that state at runtime.
MPC_LOADTIME_FLAG_NOTLATE::
This flag indicates that the policy module must be loaded and initialized early in the boot process. If the flag is specified, attempts to register the module following boot will be rejected. The flag may be used by policies that require pervasive labeling of all system objects, and cannot handle objects that have not been properly initialized by the policy.
MPC_LOADTIME_FLAG_LABELMBUFS::
This flag indicates that the policy module requires labeling of Mbufs, and that memory should always be allocated for the storage of Mbuf labels. By default, the MAC Framework will not allocate label storage for Mbufs unless at least one loaded policy has this flag set. This measurably improves network performance when policies do not require Mbuf labeling. A kernel option, `MAC_ALWAYS_LABEL_MBUF`, exists to force the MAC Framework to allocate Mbuf label storage regardless of the setting of this flag, and may be useful in some environments.
[NOTE]
====
Policies using the `MPC_LOADTIME_FLAG_LABELMBUFS` without the `MPC_LOADTIME_FLAG_NOTLATE` flag set must be able to correctly handle `NULL` Mbuf label pointers passed into entry points. This is necessary as in-flight Mbufs without label storage may persist after a policy enabling Mbuf labeling has been loaded. If a policy is loaded before the network subsystem is active (i.e., the policy is not being loaded late), then all Mbufs are guaranteed to have label storage.
====
[[mac-policy-entry-points]]
=== Policy Entry Points
Four classes of entry points are offered to policies registered with the framework: entry points associated with the registration and management of policies, entry points denoting initialization, creation, destruction, and other life cycle events for kernel objects, events associated with access control decisions that the policy module may influence, and calls associated with the management of labels on objects. In addition, a `mac_syscall()` entry point is provided so that policies may extend the kernel interface without registering new system calls.
Policy module writers should be aware of the kernel locking strategy, as well as what object locks are available during which entry points. Writers should attempt to avoid deadlock scenarios by avoiding grabbing non-leaf locks inside of entry points, and also follow the locking protocol for object access and modification. In particular, writers should be aware that while necessary locks to access objects and their labels are generally held, sufficient locks to modify an object or its label may not be present for all entry points. Locking information for arguments is documented in the MAC framework entry point document.
Policy entry points will pass a reference to the object label along with the object itself. This permits labeled policies to be unaware of the internals of the object yet still make decisions based on the label. The exception to this is the process credential, which is assumed to be understood by policies as a first class security object in the kernel.
[[mac-entry-point-reference]]
== MAC Policy Entry Point Reference
[[mac-mpo-general]]
=== General-Purpose Module Entry Points
[[mac-mpo-init]]
==== `mpo_init`
[source,c]
----
void mpo_init( conf);
struct mac_policy_conf *conf;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`conf`
|MAC policy definition
|
|===
Policy load event. The policy list mutex is held, so sleep operations cannot be performed, and calls out to other kernel subsystems must be made with caution. If potentially sleeping memory allocations are required during policy initialization, they should be made using a separate module SYSINIT().
[[mpo-destroy]]
==== `mpo_destroy`
[source,c]
----
void mpo_destroy( conf);
struct mac_policy_conf *conf;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`conf`
|MAC policy definition
|
|===
Policy unload event. The policy list mutex is held, so caution should be applied.
[[mac-mpo-syscall]]
==== `mpo_syscall`
[source,c]
----
int mpo_syscall( td,
call,
arg);
struct thread *td;
int call;
void *arg;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`td`
|Calling thread
|
|`call`
|Policy-specific syscall number
|
|`arg`
|Pointer to syscall arguments
|
|===
This entry point provides a policy-multiplexed system call so that policies may provide additional services to user processes without registering specific system calls. The policy name provided during registration is used to demux calls from userland, and the arguments will be forwarded to this entry point. When implementing new services, security modules should be sure to invoke appropriate access control checks from the MAC framework as needed. For example, if a policy implements an augmented signal functionality, it should call the necessary signal access control checks to invoke the MAC framework and other registered policies.
[NOTE]
====
Modules must currently perform the `copyin()` of the syscall data on their own.
====
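A hedged sketch of what such a handler might look like; the argument structure and call number are hypothetical, not part of the framework:

[.programlisting]
....
static int
mac_policy_example_syscall(struct thread *td, int call, void *arg)
{
	struct example_args args;	/* hypothetical per-policy structure */
	int error;

	switch (call) {
	case EXAMPLE_SYSCALL_QUERY:	/* hypothetical call number */
		/* The policy must copy in its own syscall arguments. */
		error = copyin(arg, &args, sizeof(args));
		if (error)
			return (error);
		/* ... perform access control checks, then the service ... */
		return (0);
	default:
		return (EINVAL);
	}
}
....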
[[mac-mpo-thread-userret]]
==== `mpo_thread_userret`
[source,c]
----
void mpo_thread_userret( td);
struct thread *td;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`td`
|Returning thread
|
|===
This entry point permits policy modules to perform MAC-related events when a thread returns to user space, via a system call return, trap return, or otherwise. This is required for policies that have floating process labels, as it is not always possible to acquire the process lock at arbitrary points in the stack during system call processing; process labels might represent traditional authentication data, process history information, or other data. To employ this mechanism, intended changes to the process credential label may be stored in `p_label`, protected by a per-policy spin lock; the policy then sets the per-thread `TDF_ASTPENDING` flag and the per-process `PS_MACPENDM` flag to schedule a call to the userret entry point. From this entry point, the policy may create a replacement credential with less concern about the locking context. Policy writers are cautioned that event ordering relating to scheduling an AST and the AST being performed may be complex and interlaced in multithreaded applications.
[[mac-label-ops]]
=== Label Operations
[[mac-mpo-init-bpfdesc]]
==== `mpo_init_bpfdesc_label`
[source,c]
----
void mpo_init_bpfdesc_label( label);
struct label *label;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|New label to apply
|
|===
Initialize the label on a newly instantiated bpfdesc (BPF descriptor). Sleeping is permitted.
[[mac-mpo-init-cred-label]]
==== `mpo_init_cred_label`
[source,c]
----
void mpo_init_cred_label( label);
struct label *label;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|New label to initialize
|
|===
Initialize the label for a newly instantiated user credential. Sleeping is permitted.
[[mac-mpo-init-devfsdirent]]
==== `mpo_init_devfsdirent_label`
[source,c]
----
void mpo_init_devfsdirent_label( label);
struct label *label;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|New label to apply
|
|===
Initialize the label on a newly instantiated devfs entry. Sleeping is permitted.
[[mac-mpo-init-ifnet]]
==== `mpo_init_ifnet_label`
[source,c]
----
void mpo_init_ifnet_label( label);
struct label *label;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|New label to apply
|
|===
Initialize the label on a newly instantiated network interface. Sleeping is permitted.
[[mac-mpo-init-ipq]]
==== `mpo_init_ipq_label`
[source,c]
----
void mpo_init_ipq_label( label,
flag);
struct label *label;
int flag;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|New label to apply
|
|`flag`
|Sleeping/non-sleeping man:malloc[9]; see below
|
|===
Initialize the label on a newly instantiated IP fragment reassembly queue. The `flag` field may be either M_WAITOK or M_NOWAIT, and should be employed to avoid performing a sleeping man:malloc[9] during this initialization call. IP fragment reassembly queue allocation frequently occurs in performance-sensitive environments, and the implementation should be careful to avoid sleeping or long-lived operations. This entry point is permitted to fail, resulting in the failure to allocate the IP fragment reassembly queue.
[[mac-mpo-init-mbuf]]
==== `mpo_init_mbuf_label`
[source,c]
----
void mpo_init_mbuf_label( flag,
label);
int flag;
struct label *label;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`flag`
|Sleeping/non-sleeping man:malloc[9]; see below
|
|`label`
|Policy label to initialize
|
|===
Initialize the label on a newly instantiated mbuf packet header (`mbuf`). The `flag` field may be either M_WAITOK or M_NOWAIT, and should be employed to avoid performing a sleeping man:malloc[9] during this initialization call. Mbuf allocation frequently occurs in performance-sensitive environments, and the implementation should be careful to avoid sleeping or long-lived operations. This entry point is permitted to fail, resulting in the failure to allocate the mbuf header.
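As a minimal sketch, assuming a hypothetical per-policy structure, malloc type `M_MYMAC` (declared with `MALLOC_DEFINE(9)`), and label slot accessor `SLOT_SET()`, the caller-supplied flag can be passed straight through to man:malloc[9] so that `M_NOWAIT` callers never sleep:
[source,c]
----
/* Hypothetical per-policy label data. */
struct mymac_data {
	u_int	md_level;
};

static void
mymac_init_mbuf_label(int flag, struct label *label)
{
	struct mymac_data *md;

	/* flag is either M_WAITOK or M_NOWAIT, as supplied by the caller. */
	md = malloc(sizeof(*md), M_MYMAC, flag | M_ZERO);
	/*
	 * With M_NOWAIT the allocation may fail, leaving md NULL; later
	 * entry points must then tolerate an empty slot.
	 */
	SLOT_SET(label, md);	/* hypothetical per-policy slot accessor */
}
----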
[[mac-mpo-init-mount]]
==== `mpo_init_mount_label`
[source,c]
----
void mpo_init_mount_label( mntlabel,
fslabel);
struct label *mntlabel;
struct label *fslabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`mntlabel`
|Policy label to be initialized for the mount itself
|
|`fslabel`
|Policy label to be initialized for the file system
|
|===
Initialize the labels on a newly instantiated mount point. Sleeping is permitted.
[[mac-mpo-init-mount-fs-label]]
==== `mpo_init_mount_fs_label`
[source,c]
----
void mpo_init_mount_fs_label( label);
struct label *label;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|Label to be initialized
|
|===
Initialize the label on a newly mounted file system. Sleeping is permitted.
[[mac-mpo-init-pipe-label]]
==== `mpo_init_pipe_label`
[source,c]
----
void mpo_init_pipe_label( label);
struct label *label;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|Label to be filled in
|
|===
Initialize a label for a newly instantiated pipe. Sleeping is permitted.
[[mac-mpo-init-socket]]
==== `mpo_init_socket_label`
[source,c]
----
void mpo_init_socket_label( label,
flag);
struct label *label;
int flag;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|New label to initialize
|
|`flag`
|man:malloc[9] flags
|
|===
Initialize a label for a newly instantiated socket. The `flag` field may be either M_WAITOK or M_NOWAIT, and should be employed to avoid performing a sleeping man:malloc[9] during this initialization call.
[[mac-mpo-init-socket-peer-label]]
==== `mpo_init_socket_peer_label`
[source,c]
----
void mpo_init_socket_peer_label( label,
flag);
struct label *label;
int flag;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|New label to initialize
|
|`flag`
|man:malloc[9] flags
|
|===
Initialize the peer label for a newly instantiated socket. The `flag` field may be either M_WAITOK or M_NOWAIT, and should be employed to avoid performing a sleeping man:malloc[9] during this initialization call.
[[mac-mpo-init-proc-label]]
==== `mpo_init_proc_label`
[source,c]
----
void mpo_init_proc_label( label);
struct label *label;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|New label to initialize
|
|===
Initialize the label for a newly instantiated process. Sleeping is permitted.
[[mac-mpo-init-vnode]]
==== `mpo_init_vnode_label`
[source,c]
----
void mpo_init_vnode_label( label);
struct label *label;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|New label to initialize
|
|===
Initialize the label on a newly instantiated vnode. Sleeping is permitted.
[[mac-mpo-destroy-bpfdesc]]
==== `mpo_destroy_bpfdesc_label`
[source,c]
----
void mpo_destroy_bpfdesc_label( label);
struct label *label;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|bpfdesc label
|
|===
Destroy the label on a BPF descriptor. In this entry point a policy should free any internal storage associated with `label` so that it may be destroyed.
[[mac-mpo-destroy-cred]]
==== `mpo_destroy_cred_label`
[source,c]
----
void mpo_destroy_cred_label( label);
struct label *label;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|Label being destroyed
|
|===
Destroy the label on a credential. In this entry point, a policy module should free any internal storage associated with `label` so that it may be destroyed.
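A destruction sketch under the same hypothetical assumptions as the earlier initialization example (`SLOT()`/`SLOT_SET()` accessors and malloc type `M_MYMAC`) simply releases the per-policy storage:
[source,c]
----
static void
mymac_destroy_cred_label(struct label *label)
{
	/* Release this policy's storage so the label itself can be freed. */
	free(SLOT(label), M_MYMAC);	/* free(9) tolerates a NULL pointer */
	SLOT_SET(label, NULL);
}
----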
[[mac-mpo-destroy-devfsdirent]]
==== `mpo_destroy_devfsdirent_label`
[source,c]
----
void mpo_destroy_devfsdirent_label( label);
struct label *label;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|Label being destroyed
|
|===
Destroy the label on a devfs entry. In this entry point, a policy module should free any internal storage associated with `label` so that it may be destroyed.
[[mac-mpo-destroy-ifnet-label]]
==== `mpo_destroy_ifnet_label`
[source,c]
----
void mpo_destroy_ifnet_label( label);
struct label *label;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|Label being destroyed
|
|===
Destroy the label on a removed interface. In this entry point, a policy module should free any internal storage associated with `label` so that it may be destroyed.
[[mac-mpo-destroy-ipq-label]]
==== `mpo_destroy_ipq_label`
[source,c]
----
void mpo_destroy_ipq_label( label);
struct label *label;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|Label being destroyed
|
|===
Destroy the label on an IP fragment queue. In this entry point, a policy module should free any internal storage associated with `label` so that it may be destroyed.
[[mac-mpo-destroy-mbuf-label]]
==== `mpo_destroy_mbuf_label`
[source,c]
----
void mpo_destroy_mbuf_label( label);
struct label *label;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|Label being destroyed
|
|===
Destroy the label on an mbuf header. In this entry point, a policy module should free any internal storage associated with `label` so that it may be destroyed.
[[mac-mpo-destroy-mount-label]]
==== `mpo_destroy_mount_label`
[source,c]
----
void mpo_destroy_mount_label( label);
struct label *label;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|Mount point label being destroyed
|
|===
Destroy the label on a mount point. In this entry point, a policy module should free the internal storage associated with `label` so that it may be destroyed.
[[mac-mpo-destroy-mount]]
==== `mpo_destroy_mount_label`
[source,c]
----
void mpo_destroy_mount_label( mntlabel,
fslabel);
struct label *mntlabel;
struct label *fslabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`mntlabel`
|Mount point label being destroyed
|
|`fslabel`
|File system label being destroyed
|
|===
Destroy the labels on a mount point. In this entry point, a policy module should free the internal storage associated with `mntlabel` and `fslabel` so that they may be destroyed.
[[mac-mpo-destroy-socket]]
==== `mpo_destroy_socket_label`
[source,c]
----
void mpo_destroy_socket_label( label);
struct label *label;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|Socket label being destroyed
|
|===
Destroy the label on a socket. In this entry point, a policy module should free any internal storage associated with `label` so that it may be destroyed.
[[mac-mpo-destroy-socket-peer-label]]
==== `mpo_destroy_socket_peer_label`
[source,c]
----
void mpo_destroy_socket_peer_label( peerlabel);
struct label *peerlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`peerlabel`
|Socket peer label being destroyed
|
|===
Destroy the peer label on a socket. In this entry point, a policy module should free any internal storage associated with `peerlabel` so that it may be destroyed.
[[mac-mpo-destroy-pipe-label]]
==== `mpo_destroy_pipe_label`
[source,c]
----
void mpo_destroy_pipe_label( label);
struct label *label;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|Pipe label
|
|===
Destroy the label on a pipe. In this entry point, a policy module should free any internal storage associated with `label` so that it may be destroyed.
[[mac-mpo-destroy-proc-label]]
==== `mpo_destroy_proc_label`
[source,c]
----
void mpo_destroy_proc_label( label);
struct label *label;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|Process label
|
|===
Destroy the label on a process. In this entry point, a policy module should free any internal storage associated with `label` so that it may be destroyed.
[[mac-mpo-destroy-vnode-label]]
==== `mpo_destroy_vnode_label`
[source,c]
----
void mpo_destroy_vnode_label( label);
struct label *label;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|Vnode label
|
|===
Destroy the label on a vnode. In this entry point, a policy module should free any internal storage associated with `label` so that it may be destroyed.
[[mac-mpo-copy-mbuf-label]]
==== `mpo_copy_mbuf_label`
[source,c]
----
void mpo_copy_mbuf_label( src,
dest);
struct label *src;
struct label *dest;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`src`
|Source label
|
|`dest`
|Destination label
|
|===
Copy the label information in `src` into `dest`.
[[mac-mpo-copy-pipe-label]]
==== `mpo_copy_pipe_label`
[source,c]
----
void mpo_copy_pipe_label( src,
dest);
struct label *src;
struct label *dest;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`src`
|Source label
|
|`dest`
|Destination label
|
|===
Copy the label information in `src` into `dest`.
[[mac-mpo-copy-vnode-label]]
==== `mpo_copy_vnode_label`
[source,c]
----
void mpo_copy_vnode_label( src,
dest);
struct label *src;
struct label *dest;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`src`
|Source label
|
|`dest`
|Destination label
|
|===
Copy the label information in `src` into `dest`.
[[mac-mpo-externalize-cred-label]]
==== `mpo_externalize_cred_label`
[source,c]
----
int mpo_externalize_cred_label( label,
element_name,
sb,
*claimed);
struct label *label;
char *element_name;
struct sbuf *sb;
int *claimed;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|Label to be externalized
|
|`element_name`
|Name of the policy whose label should be externalized
|
|`sb`
|String buffer to be filled with a text representation of label
|
|`claimed`
|Should be incremented when `sb` can be filled in.
|
|===
Produce an externalized label based on the label structure passed. An externalized label consists of a text representation of the label contents that can be used with userland applications and read by the user. Currently, all policies' `externalize` entry points will be called, so the implementation should check the contents of `element_name` before attempting to fill in `sb`. If `element_name` does not match the name of your policy, simply return 0. Only return nonzero if an error occurs while externalizing the label data. Once the policy fills in `sb`, `*claimed` should be incremented.
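A sketch of this pattern, assuming a hypothetical policy named `mymac` whose label data is the single integer defined in the earlier initialization sketch and is reachable through the hypothetical `SLOT()` accessor:
[source,c]
----
static int
mymac_externalize_cred_label(struct label *label, char *element_name,
    struct sbuf *sb, int *claimed)
{
	struct mymac_data *md;

	if (strcmp("mymac", element_name) != 0)
		return (0);	/* not our element; other policies may claim it */
	(*claimed)++;
	md = SLOT(label);	/* hypothetical per-policy accessor */
	if (sbuf_printf(sb, "%u", md->md_level) == -1)
		return (EINVAL);
	return (0);
}
----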
[[mac-mpo-externalize-ifnet-label]]
==== `mpo_externalize_ifnet_label`
[source,c]
----
int mpo_externalize_ifnet_label( label,
element_name,
sb,
*claimed);
struct label *label;
char *element_name;
struct sbuf *sb;
int *claimed;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|Label to be externalized
|
|`element_name`
|Name of the policy whose label should be externalized
|
|`sb`
|String buffer to be filled with a text representation of label
|
|`claimed`
|Should be incremented when `sb` can be filled in.
|
|===
Produce an externalized label based on the label structure passed. An externalized label consists of a text representation of the label contents that can be used with userland applications and read by the user. Currently, all policies' `externalize` entry points will be called, so the implementation should check the contents of `element_name` before attempting to fill in `sb`. If `element_name` does not match the name of your policy, simply return 0. Only return nonzero if an error occurs while externalizing the label data. Once the policy fills in `sb`, `*claimed` should be incremented.
[[mac-mpo-externalize-pipe-label]]
==== `mpo_externalize_pipe_label`
[source,c]
----
int mpo_externalize_pipe_label( label,
element_name,
sb,
*claimed);
struct label *label;
char *element_name;
struct sbuf *sb;
int *claimed;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|Label to be externalized
|
|`element_name`
|Name of the policy whose label should be externalized
|
|`sb`
|String buffer to be filled with a text representation of label
|
|`claimed`
|Should be incremented when `sb` can be filled in.
|
|===
Produce an externalized label based on the label structure passed. An externalized label consists of a text representation of the label contents that can be used with userland applications and read by the user. Currently, all policies' `externalize` entry points will be called, so the implementation should check the contents of `element_name` before attempting to fill in `sb`. If `element_name` does not match the name of your policy, simply return 0. Only return nonzero if an error occurs while externalizing the label data. Once the policy fills in `sb`, `*claimed` should be incremented.
[[mac-mpo-externalize-socket-label]]
==== `mpo_externalize_socket_label`
[source,c]
----
int mpo_externalize_socket_label( label,
element_name,
sb,
*claimed);
struct label *label;
char *element_name;
struct sbuf *sb;
int *claimed;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|Label to be externalized
|
|`element_name`
|Name of the policy whose label should be externalized
|
|`sb`
|String buffer to be filled with a text representation of label
|
|`claimed`
|Should be incremented when `sb` can be filled in.
|
|===
Produce an externalized label based on the label structure passed. An externalized label consists of a text representation of the label contents that can be used with userland applications and read by the user. Currently, all policies' `externalize` entry points will be called, so the implementation should check the contents of `element_name` before attempting to fill in `sb`. If `element_name` does not match the name of your policy, simply return 0. Only return nonzero if an error occurs while externalizing the label data. Once the policy fills in `sb`, `*claimed` should be incremented.
[[mac-mpo-externalize-socket-peer-label]]
==== `mpo_externalize_socket_peer_label`
[source,c]
----
int mpo_externalize_socket_peer_label( label,
element_name,
sb,
*claimed);
struct label *label;
char *element_name;
struct sbuf *sb;
int *claimed;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|Label to be externalized
|
|`element_name`
|Name of the policy whose label should be externalized
|
|`sb`
|String buffer to be filled with a text representation of label
|
|`claimed`
|Should be incremented when `sb` can be filled in.
|
|===
Produce an externalized label based on the label structure passed. An externalized label consists of a text representation of the label contents that can be used with userland applications and read by the user. Currently, all policies' `externalize` entry points will be called, so the implementation should check the contents of `element_name` before attempting to fill in `sb`. If `element_name` does not match the name of your policy, simply return 0. Only return nonzero if an error occurs while externalizing the label data. Once the policy fills in `sb`, `*claimed` should be incremented.
[[mac-mpo-externalize-vnode-label]]
==== `mpo_externalize_vnode_label`
[source,c]
----
int mpo_externalize_vnode_label( label,
element_name,
sb,
*claimed);
struct label *label;
char *element_name;
struct sbuf *sb;
int *claimed;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|Label to be externalized
|
|`element_name`
|Name of the policy whose label should be externalized
|
|`sb`
|String buffer to be filled with a text representation of label
|
|`claimed`
|Should be incremented when `sb` can be filled in.
|
|===
Produce an externalized label based on the label structure passed. An externalized label consists of a text representation of the label contents that can be used with userland applications and read by the user. Currently, all policies' `externalize` entry points will be called, so the implementation should check the contents of `element_name` before attempting to fill in `sb`. If `element_name` does not match the name of your policy, simply return 0. Only return nonzero if an error occurs while externalizing the label data. Once the policy fills in `sb`, `*claimed` should be incremented.
[[mac-mpo-internalize-cred-label]]
==== `mpo_internalize_cred_label`
[source,c]
----
int mpo_internalize_cred_label( label,
element_name,
element_data,
claimed);
struct label *label;
char *element_name;
char *element_data;
int *claimed;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|Label to be filled in
|
|`element_name`
|Name of the policy whose label should be internalized
|
|`element_data`
|Text data to be internalized
|
|`claimed`
|Should be incremented when data can be successfully internalized.
|
|===
Produce an internal label structure based on externalized label data in text format. Currently, all policies' `internalize` entry points are called when internalization is requested, so the implementation should compare the contents of `element_name` to its own name in order to be sure it should be internalizing the data in `element_data`. Just as in the `externalize` entry points, the entry point should return 0 if `element_name` does not match its own name, or when data can successfully be internalized, in which case `*claimed` should be incremented.
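A matching sketch for internalization, under the same hypothetical assumptions as the externalization example, parses the text form back into the policy's label slot:
[source,c]
----
static int
mymac_internalize_cred_label(struct label *label, char *element_name,
    char *element_data, int *claimed)
{
	struct mymac_data *md;

	if (strcmp("mymac", element_name) != 0)
		return (0);	/* not our element */
	(*claimed)++;
	md = SLOT(label);	/* hypothetical per-policy accessor */
	md->md_level = strtoul(element_data, NULL, 10);
	return (0);
}
----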
[[mac-mpo-internalize-ifnet-label]]
==== `mpo_internalize_ifnet_label`
[source,c]
----
int mpo_internalize_ifnet_label( label,
element_name,
element_data,
claimed);
struct label *label;
char *element_name;
char *element_data;
int *claimed;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|Label to be filled in
|
|`element_name`
|Name of the policy whose label should be internalized
|
|`element_data`
|Text data to be internalized
|
|`claimed`
|Should be incremented when data can be successfully internalized.
|
|===
Produce an internal label structure based on externalized label data in text format. Currently, all policies' `internalize` entry points are called when internalization is requested, so the implementation should compare the contents of `element_name` to its own name in order to be sure it should be internalizing the data in `element_data`. Just as in the `externalize` entry points, the entry point should return 0 if `element_name` does not match its own name, or when data can successfully be internalized, in which case `*claimed` should be incremented.
[[mac-mpo-internalize-pipe-label]]
==== `mpo_internalize_pipe_label`
[source,c]
----
int mpo_internalize_pipe_label( label,
element_name,
element_data,
claimed);
struct label *label;
char *element_name;
char *element_data;
int *claimed;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|Label to be filled in
|
|`element_name`
|Name of the policy whose label should be internalized
|
|`element_data`
|Text data to be internalized
|
|`claimed`
|Should be incremented when data can be successfully internalized.
|
|===
Produce an internal label structure based on externalized label data in text format. Currently, all policies' `internalize` entry points are called when internalization is requested, so the implementation should compare the contents of `element_name` to its own name in order to be sure it should be internalizing the data in `element_data`. Just as in the `externalize` entry points, the entry point should return 0 if `element_name` does not match its own name, or when data can successfully be internalized, in which case `*claimed` should be incremented.
[[mac-mpo-internalize-socket-label]]
==== `mpo_internalize_socket_label`
[source,c]
----
int mpo_internalize_socket_label( label,
element_name,
element_data,
claimed);
struct label *label;
char *element_name;
char *element_data;
int *claimed;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|Label to be filled in
|
|`element_name`
|Name of the policy whose label should be internalized
|
|`element_data`
|Text data to be internalized
|
|`claimed`
|Should be incremented when data can be successfully internalized.
|
|===
Produce an internal label structure based on externalized label data in text format. Currently, all policies' `internalize` entry points are called when internalization is requested, so the implementation should compare the contents of `element_name` to its own name in order to be sure it should be internalizing the data in `element_data`. Just as in the `externalize` entry points, the entry point should return 0 if `element_name` does not match its own name, or when data can successfully be internalized, in which case `*claimed` should be incremented.
[[mac-mpo-internalize-vnode-label]]
==== `mpo_internalize_vnode_label`
[source,c]
----
int mpo_internalize_vnode_label( label,
element_name,
element_data,
claimed);
struct label *label;
char *element_name;
char *element_data;
int *claimed;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`label`
|Label to be filled in
|
|`element_name`
|Name of the policy whose label should be internalized
|
|`element_data`
|Text data to be internalized
|
|`claimed`
|Should be incremented when data can be successfully internalized.
|
|===
Produce an internal label structure based on externalized label data in text format. Currently, all policies' `internalize` entry points are called when internalization is requested, so the implementation should compare the contents of `element_name` to its own name in order to be sure it should be internalizing the data in `element_data`. Just as in the `externalize` entry points, the entry point should return 0 if `element_name` does not match its own name, or when data can successfully be internalized, in which case `*claimed` should be incremented.
[[mac-label-events]]
=== Label Events
This class of entry points is used by the MAC framework to permit policies to maintain label information on kernel objects. For each labeled kernel object of interest to a MAC policy, entry points may be registered for relevant life cycle events. All objects implement initialization, creation, and destruction hooks. Some objects will also implement relabeling, allowing user processes to change the labels on objects. Some objects will also implement object-specific events, such as label events associated with IP reassembly. A typical labeled object will have the following life cycle of entry points:
[.programlisting]
....
Label initialization o
(object-specific wait) \
Label creation o
\
Relabel events, o--<--.
Various object-specific, | |
Access control events ~-->--o
\
Label destruction o
....
Label initialization permits policies to allocate memory and set initial values for labels without context for the use of the object. The label slot allocated to a policy will be zeroed by default, so some policies may not need to perform initialization.
Label creation occurs when the kernel structure is associated with an actual kernel object. For example, mbufs may be allocated and remain unused in a pool until they are required. Mbuf allocation causes label initialization on the mbuf to take place, but mbuf creation occurs when the mbuf is associated with a datagram. Typically, context will be provided for a creation event, including the circumstances of the creation and the labels of other relevant objects in the creation process. For example, when an mbuf is created from a socket, the socket and its label will be presented to registered policies in addition to the new mbuf and its label. Memory allocation in creation events is discouraged, as it may occur in performance-sensitive parts of the kernel; in addition, creation calls are not permitted to fail, so a failure to allocate memory cannot be reported.
Object-specific events do not generally fall into the other broad classes of label events, but provide an opportunity to modify or update the label on an object based on additional context. For example, the label on an IP fragment reassembly queue may be updated during the MAC_UPDATE_IPQ entry point as a result of the acceptance of an additional mbuf to that queue.
Access control events are discussed in detail in the following section.
Label destruction permits policies to release storage or state associated with a label during its association with an object so that the kernel data structures supporting the object may be reused or released.
In addition to labels associated with specific kernel objects, an additional class of labels exists: temporary labels. These labels are used to store update information submitted by user processes. These labels are initialized and destroyed as with other label types, but the creation event is MAC_INTERNALIZE, which accepts a user label to be converted to an in-kernel representation.
[[mac-fs-label-event-ops]]
==== File System Object Labeling Event Operations
[[mac-mpo-associate-vnode-devfs]]
===== `mpo_associate_vnode_devfs`
[source,c]
----
void mpo_associate_vnode_devfs( mp,
fslabel,
de,
delabel,
vp,
vlabel);
struct mount *mp;
struct label *fslabel;
struct devfs_dirent *de;
struct label *delabel;
struct vnode *vp;
struct label *vlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`mp`
|Devfs mount point
|
|`fslabel`
|Devfs file system label (`mp->mnt_fslabel`)
|
|`de`
|Devfs directory entry
|
|`delabel`
|Policy label associated with `de`
|
|`vp`
|vnode associated with `de`
|
|`vlabel`
|Policy label associated with `vp`
|
|===
Fill in the label (`vlabel`) for a newly created devfs vnode based on the devfs directory entry passed in `de` and its label.
[[mac-mpo-associate-vnode-extattr]]
===== `mpo_associate_vnode_extattr`
[source,c]
----
int mpo_associate_vnode_extattr( mp,
fslabel,
vp,
vlabel);
struct mount *mp;
struct label *fslabel;
struct vnode *vp;
struct label *vlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`mp`
|File system mount point
|
|`fslabel`
|File system label
|
|`vp`
|Vnode to label
|
|`vlabel`
|Policy label associated with `vp`
|
|===
Attempt to retrieve the label for `vp` from the file system extended attributes. Upon success, the value `0` is returned. Should extended attribute retrieval not be supported, an accepted fallback is to copy `fslabel` into `vlabel`. In the event of an error, an appropriate value for `errno` should be returned.
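A sketch of this retrieval, assuming the policy stores a single integer level under a hypothetical system-namespace attribute named "mymac" and uses hypothetical helpers to copy and set label data:
[source,c]
----
static int
mymac_associate_vnode_extattr(struct mount *mp, struct label *fslabel,
    struct vnode *vp, struct label *vlabel)
{
	u_int level;
	int buflen, error;

	buflen = sizeof(level);
	error = vn_extattr_get(vp, IO_NODELOCKED, EXTATTR_NAMESPACE_SYSTEM,
	    "mymac", &buflen, (char *)&level, curthread);
	if (error == ENOATTR || error == EOPNOTSUPP) {
		/* No attribute or no EA support: fall back to fslabel. */
		mymac_copy_label(fslabel, vlabel);	/* hypothetical helper */
		return (0);
	}
	if (error != 0)
		return (error);
	if (buflen != sizeof(level))
		return (EPERM);
	mymac_set_level(vlabel, level);			/* hypothetical helper */
	return (0);
}
----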
[[mac-mpo-associate-vnode-singlelabel]]
===== `mpo_associate_vnode_singlelabel`
[source,c]
----
void mpo_associate_vnode_singlelabel( mp,
fslabel,
vp,
vlabel);
struct mount *mp;
struct label *fslabel;
struct vnode *vp;
struct label *vlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`mp`
|File system mount point
|
|`fslabel`
|File system label
|
|`vp`
|Vnode to label
|
|`vlabel`
|Policy label associated with `vp`
|
|===
On non-multilabel file systems, this entry point is called to set the policy label for `vp` based on the file system label, `fslabel`.
[[mac-mpo-create-devfs-device]]
===== `mpo_create_devfs_device`
[source,c]
----
void mpo_create_devfs_device( dev,
devfs_dirent,
label);
dev_t dev;
struct devfs_dirent *devfs_dirent;
struct label *label;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`dev`
|Device corresponding with `devfs_dirent`
|
|`devfs_dirent`
|Devfs directory entry to be labeled.
|
|`label`
|Label for `devfs_dirent` to be filled in.
|
|===
Fill out the label on a devfs_dirent being created for the passed device. This call will be made when the device file system is mounted, regenerated, or a new device is made available.
[[mac-mpo-create-devfs-directory]]
===== `mpo_create_devfs_directory`
[source,c]
----
void mpo_create_devfs_directory( dirname,
dirnamelen,
devfs_dirent,
label);
char *dirname;
int dirnamelen;
struct devfs_dirent *devfs_dirent;
struct label *label;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`dirname`
|Name of directory being created
|
|`dirnamelen`
|Length of string `dirname`
|
|`devfs_dirent`
|Devfs directory entry for directory being created.
|
|`label`
|Label for `devfs_dirent` to be filled in.
|
|===
Fill out the label on a devfs_dirent being created for the passed directory. This call will be made when the device file system is mounted, regenerated, or a new device requiring a specific directory hierarchy is made available.
[[mac-mpo-create-devfs-symlink]]
===== `mpo_create_devfs_symlink`
[source,c]
----
void mpo_create_devfs_symlink( cred,
mp,
dd,
ddlabel,
de,
delabel);
struct ucred *cred;
struct mount *mp;
struct devfs_dirent *dd;
struct label *ddlabel;
struct devfs_dirent *de;
struct label *delabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`mp`
|Devfs mount point
|
|`dd`
|Link destination
|
|`ddlabel`
|Label associated with `dd`
|
|`de`
|Symlink entry
|
|`delabel`
|Label associated with `de`
|
|===
Fill in the label (`delabel`) for a newly created man:devfs[5] symbolic link entry.
[[mac-mpo-create-vnode-extattr]]
===== `mpo_create_vnode_extattr`
[source,c]
----
int mpo_create_vnode_extattr( cred,
mp,
fslabel,
dvp,
dlabel,
vp,
vlabel,
cnp);
struct ucred *cred;
struct mount *mp;
struct label *fslabel;
struct vnode *dvp;
struct label *dlabel;
struct vnode *vp;
struct label *vlabel;
struct componentname *cnp;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`mp`
|File system mount point
|
|`fslabel`
|File system label
|
|`dvp`
|Parent directory vnode
|
|`dlabel`
|Label associated with `dvp`
|
|`vp`
|Newly created vnode
|
|`vlabel`
|Policy label associated with `vp`
|
|`cnp`
|Component name for `vp`
|
|===
Write out the label for `vp` to the appropriate extended attribute. If the write succeeds, fill in `vlabel` with the label, and return 0. Otherwise, return an appropriate error.
[[mac-mpo-create-mount]]
===== `mpo_create_mount`
[source,c]
----
void mpo_create_mount( cred,
mp,
mntlabel,
fslabel);
struct ucred *cred;
struct mount *mp;
struct label *mntlabel;
struct label *fslabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`mp`
|Object; file system being mounted
|
|`mntlabel`
|Policy label to be filled in for `mp`
|
|`fslabel`
|Policy label for the file system `mp` mounts.
|
|===
Fill out the labels on the mount point being created by the passed subject credential. This call will be made when a new file system is mounted.
[[mac-mpo-create-root-mount]]
===== `mpo_create_root_mount`
[source,c]
----
void mpo_create_root_mount( cred,
mp,
mntlabel,
fslabel);
struct ucred *cred;
struct mount *mp;
struct label *mntlabel;
struct label *fslabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
3+|See <<mac-mpo-create-mount>>.
|===
Fill out the labels on the mount point being created by the passed subject credential. This call will be made when the root file system is mounted, after `mpo_create_mount`.
[[mac-mpo-relabel-vnode]]
===== `mpo_relabel_vnode`
[source,c]
----
void mpo_relabel_vnode( cred,
vp,
vnodelabel,
newlabel);
struct ucred *cred;
struct vnode *vp;
struct label *vnodelabel;
struct label *newlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`vp`
|vnode to relabel
|
|`vnodelabel`
|Existing policy label for `vp`
|
|`newlabel`
|New, possibly partial label to replace `vnodelabel`
|
|===
Update the label on the passed vnode given the passed update vnode label and the passed subject credential.
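A minimal sketch, assuming a hypothetical helper that folds a possibly partial update into the existing label:
[source,c]
----
static void
mymac_relabel_vnode(struct ucred *cred, struct vnode *vp,
    struct label *vnodelabel, struct label *newlabel)
{
	/*
	 * Fold the (possibly partial) update in newlabel into the existing
	 * vnode label; the corresponding access control check entry point
	 * has already been consulted before this call is made.
	 */
	mymac_copy_update(newlabel, vnodelabel);	/* hypothetical helper */
}
----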
[[mac-mpo-setlabel-vnode-extattr]]
===== `mpo_setlabel_vnode_extattr`
[source,c]
----
int mpo_setlabel_vnode_extattr( cred,
vp,
vlabel,
intlabel);
struct ucred *cred;
struct vnode *vp;
struct label *vlabel;
struct label *intlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`vp`
|Vnode for which the label is being written
|
|`vlabel`
|Policy label associated with `vp`
|
|`intlabel`
|Label to write out
|
|===
Write out the policy label from `intlabel` to an extended attribute. This is called from `vop_stdcreatevnode_ea`.
[[mac-mpo-update-devfsdirent]]
===== `mpo_update_devfsdirent`
[source,c]
----
void mpo_update_devfsdirent( devfs_dirent,
direntlabel,
vp,
vnodelabel);
struct devfs_dirent *devfs_dirent;
struct label *direntlabel;
struct vnode *vp;
struct label *vnodelabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`devfs_dirent`
|Object; devfs directory entry
|
|`direntlabel`
|Policy label for `devfs_dirent` to be updated.
|
|`vp`
|Parent vnode
|Locked
|`vnodelabel`
|Policy label for `vp`
|
|===
Update the `devfs_dirent` label from the passed devfs vnode label. This call will be made when a devfs vnode has been successfully relabeled to commit the label change such that it lasts even if the vnode is recycled. It will also be made when a symlink is created in devfs, following a call to `mac_vnode_create_from_vnode` to initialize the vnode label.
[[mac-ipc-label-ops]]
==== IPC Object Labeling Event Operations
[[mac-mpo-create-mbuf-from-socket]]
===== `mpo_create_mbuf_from_socket`
[source,c]
----
void mpo_create_mbuf_from_socket( so,
socketlabel,
m,
mbuflabel);
struct socket *so;
struct label *socketlabel;
struct mbuf *m;
struct label *mbuflabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`so`
|Socket
|Socket locking WIP
|`socketlabel`
|Policy label for `so`
|
|`m`
|Object; mbuf
|
|`mbuflabel`
|Policy label to fill in for `m`
|
|===
Set the label on a newly created mbuf header from the passed socket label. This call is made when a new datagram or message is generated by the socket and stored in the passed mbuf.
[[mac-mpo-create-pipe]]
===== `mpo_create_pipe`
[source,c]
----
void mpo_create_pipe( cred,
pipe,
pipelabel);
struct ucred *cred;
struct pipe *pipe;
struct label *pipelabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`pipe`
|Pipe
|
|`pipelabel`
|Policy label associated with `pipe`
|
|===
Set the label on a newly created pipe from the passed subject credential. This call is made when a new pipe is created.
[[mac-mpo-create-socket]]
===== `mpo_create_socket`
[source,c]
----
void mpo_create_socket( cred,
so,
socketlabel);
struct ucred *cred;
struct socket *so;
struct label *socketlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|Immutable
|`so`
|Object; socket to label
|
|`socketlabel`
|Label to fill in for `so`
|
|===
Set the label on a newly created socket from the passed subject credential. This call is made when a socket is created.
[[mac-mpo-create-socket-from-socket]]
===== `mpo_create_socket_from_socket`
[source,c]
----
void mpo_create_socket_from_socket( oldsocket,
oldsocketlabel,
newsocket,
newsocketlabel);
struct socket *oldsocket;
struct label *oldsocketlabel;
struct socket *newsocket;
struct label *newsocketlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`oldsocket`
|Listening socket
|
|`oldsocketlabel`
|Policy label associated with `oldsocket`
|
|`newsocket`
|New socket
|
|`newsocketlabel`
|Policy label associated with `newsocket`
|
|===
Label a newly man:accept[2]ed socket, `newsocket`, based on the man:listen[2] socket, `oldsocket`.
[[mac-mpo-relabel-pipe]]
===== `mpo_relabel_pipe`
[source,c]
----
void mpo_relabel_pipe( cred,
pipe,
oldlabel,
newlabel);
struct ucred *cred;
struct pipe *pipe;
struct label *oldlabel;
struct label *newlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`pipe`
|Pipe
|
|`oldlabel`
|Current policy label associated with `pipe`
|
|`newlabel`
|Policy label update to apply to `pipe`
|
|===
Apply a new label, `newlabel`, to `pipe`.
[[mac-mpo-relabel-socket]]
===== `mpo_relabel_socket`
[source,c]
----
void mpo_relabel_socket( cred,
so,
oldlabel,
newlabel);
struct ucred *cred;
struct socket *so;
struct label *oldlabel;
struct label *newlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|Immutable
|`so`
|Object; socket
|
|`oldlabel`
|Current label for `so`
|
|`newlabel`
|Label update for `so`
|
|===
Update the label on a socket from the passed socket label update.
[[mpo-set-socket-peer-from-mbuf]]
===== `mpo_set_socket_peer_from_mbuf`
[source,c]
----
void mpo_set_socket_peer_from_mbuf( mbuf,
mbuflabel,
oldlabel,
newlabel);
struct mbuf *mbuf;
struct label *mbuflabel;
struct label *oldlabel;
struct label *newlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`mbuf`
|First datagram received over socket
|
|`mbuflabel`
|Label for `mbuf`
|
|`oldlabel`
|Current label for the socket
|
|`newlabel`
|Policy label to be filled out for the socket
|
|===
Set the peer label on a stream socket from the passed mbuf label. This call will be made when the first datagram is received by the stream socket, with the exception of Unix domain sockets.
[[mac-mpo-set-socket-peer-from-socket]]
===== `mpo_set_socket_peer_from_socket`
[source,c]
----
void mpo_set_socket_peer_from_socket( oldsocket,
oldsocketlabel,
newsocket,
newsocketpeerlabel);
struct socket *oldsocket;
struct label *oldsocketlabel;
struct socket *newsocket;
struct label *newsocketpeerlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`oldsocket`
|Local socket
|
|`oldsocketlabel`
|Policy label for `oldsocket`
|
|`newsocket`
|Peer socket
|
|`newsocketpeerlabel`
|Policy label to fill in for `newsocket`
|
|===
Set the peer label on a stream UNIX domain socket from the passed remote socket endpoint. This call will be made when the socket pair is connected, and will be made for both endpoints.
[[mac-net-labeling-event-ops]]
==== Network Object Labeling Event Operations
[[mac-mpo-create-bpfdesc]]
===== `mpo_create_bpfdesc`
[source,c]
----
void mpo_create_bpfdesc( cred,
bpf_d,
bpflabel);
struct ucred *cred;
struct bpf_d *bpf_d;
struct label *bpflabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|Immutable
|`bpf_d`
|Object; bpf descriptor
|
|`bpflabel`
|Policy label to be filled in for `bpf_d`
|
|===
Set the label on a newly created BPF descriptor from the passed subject credential. This call will be made when a BPF device node is opened by a process with the passed subject credential.
[[mac-mpo-create-ifnet]]
===== `mpo_create_ifnet`
[source,c]
----
void mpo_create_ifnet( ifnet,
ifnetlabel);
struct ifnet *ifnet;
struct label *ifnetlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`ifnet`
|Network interface
|
|`ifnetlabel`
|Policy label to fill in for `ifnet`
|
|===
Set the label on a newly created interface. This call may be made when a new physical interface becomes available to the system, or when a pseudo-interface is instantiated during the boot or as a result of a user action.
[[mac-mpo-create-ipq]]
===== `mpo_create_ipq`
[source,c]
----
void mpo_create_ipq( fragment,
fragmentlabel,
ipq,
ipqlabel);
struct mbuf *fragment;
struct label *fragmentlabel;
struct ipq *ipq;
struct label *ipqlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`fragment`
|First received IP fragment
|
|`fragmentlabel`
|Policy label for `fragment`
|
|`ipq`
|IP reassembly queue to be labeled
|
|`ipqlabel`
|Policy label to be filled in for `ipq`
|
|===
Set the label on a newly created IP fragment reassembly queue from the mbuf header of the first received fragment.
[[mac-mpo-create-datagram-from-ipq]]
===== `mpo_create_datagram_from_ipq`
[source,c]
----
void mpo_create_datagram_from_ipq( ipq,
ipqlabel,
datagram,
datagramlabel);
struct ipq *ipq;
struct label *ipqlabel;
struct mbuf *datagram;
struct label *datagramlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`ipq`
|IP reassembly queue
|
|`ipqlabel`
|Policy label for `ipq`
|
|`datagram`
|Datagram to be labeled
|
|`datagramlabel`
|Policy label to be filled in for `datagram`
|
|===
Set the label on a newly reassembled IP datagram from the IP fragment reassembly queue from which it was generated.
[[mac-mpo-create-fragment]]
===== `mpo_create_fragment`
[source,c]
----
void mpo_create_fragment( datagram,
datagramlabel,
fragment,
fragmentlabel);
struct mbuf *datagram;
struct label *datagramlabel;
struct mbuf *fragment;
struct label *fragmentlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`datagram`
|Datagram
|
|`datagramlabel`
|Policy label for `datagram`
|
|`fragment`
|Fragment to be labeled
|
|`fragmentlabel`
|Policy label to be filled in for `fragment`
|
|===
Set the label on the mbuf header of a newly created IP fragment from the label on the mbuf header of the datagram it was generated from.
[[mac-mpo-create-mbuf-from-mbuf]]
===== `mpo_create_mbuf_from_mbuf`
[source,c]
----
void mpo_create_mbuf_from_mbuf( oldmbuf,
oldmbuflabel,
newmbuf,
newmbuflabel);
struct mbuf *oldmbuf;
struct label *oldmbuflabel;
struct mbuf *newmbuf;
struct label *newmbuflabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`oldmbuf`
|Existing (source) mbuf
|
|`oldmbuflabel`
|Policy label for `oldmbuf`
|
|`newmbuf`
|New mbuf to be labeled
|
|`newmbuflabel`
|Policy label to be filled in for `newmbuf`
|
|===
Set the label on the mbuf header of a newly created datagram from the mbuf header of an existing datagram. This call may be made in a number of situations, including when an mbuf is re-allocated for alignment purposes.
[[mac-mpo-create-mbuf-linklayer]]
===== `mpo_create_mbuf_linklayer`
[source,c]
----
void mpo_create_mbuf_linklayer( ifnet,
ifnetlabel,
mbuf,
mbuflabel);
struct ifnet *ifnet;
struct label *ifnetlabel;
struct mbuf *mbuf;
struct label *mbuflabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`ifnet`
|Network interface
|
|`ifnetlabel`
|Policy label for `ifnet`
|
|`mbuf`
|mbuf header for new datagram
|
|`mbuflabel`
|Policy label to be filled in for `mbuf`
|
|===
Set the label on the mbuf header of a newly created datagram generated for the purposes of a link layer response for the passed interface. This call may be made in a number of situations, including for ARP or ND6 responses in the IPv4 and IPv6 stacks.
[[mac-mpo-create-mbuf-from-bpfdesc]]
===== `mpo_create_mbuf_from_bpfdesc`
[source,c]
----
void mpo_create_mbuf_from_bpfdesc( bpf_d,
bpflabel,
mbuf,
mbuflabel);
struct bpf_d *bpf_d;
struct label *bpflabel;
struct mbuf *mbuf;
struct label *mbuflabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`bpf_d`
|BPF descriptor
|
|`bpflabel`
|Policy label for `bpf_d`
|
|`mbuf`
|New mbuf to be labeled
|
|`mbuflabel`
|Policy label to fill in for `mbuf`
|
|===
Set the label on the mbuf header of a newly created datagram generated using the passed BPF descriptor. This call is made when a write is performed to the BPF device associated with the passed BPF descriptor.
[[mac-mpo-create-mbuf-from-ifnet]]
===== `mpo_create_mbuf_from_ifnet`
[source,c]
----
void mpo_create_mbuf_from_ifnet( ifnet,
ifnetlabel,
mbuf,
mbuflabel);
struct ifnet *ifnet;
struct label *ifnetlabel;
struct mbuf *mbuf;
struct label *mbuflabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`ifnet`
|Network interface
|
|`ifnetlabel`
|Policy label for `ifnet`
|
|`mbuf`
|mbuf header for new datagram
|
|`mbuflabel`
|Policy label to be filled in for `mbuf`
|
|===
Set the label on the mbuf header of a newly created datagram generated from the passed network interface.
[[mac-mpo-create-mbuf-multicast-encap]]
===== `mpo_create_mbuf_multicast_encap`
[source,c]
----
void mpo_create_mbuf_multicast_encap( oldmbuf,
oldmbuflabel,
ifnet,
ifnetlabel,
newmbuf,
newmbuflabel);
struct mbuf *oldmbuf;
struct label *oldmbuflabel;
struct ifnet *ifnet;
struct label *ifnetlabel;
struct mbuf *newmbuf;
struct label *newmbuflabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`oldmbuf`
|mbuf header for existing datagram
|
|`oldmbuflabel`
|Policy label for `oldmbuf`
|
|`ifnet`
|Network interface
|
|`ifnetlabel`
|Policy label for `ifnet`
|
|`newmbuf`
|mbuf header to be labeled for new datagram
|
|`newmbuflabel`
|Policy label to be filled in for `newmbuf`
|
|===
Set the label on the mbuf header of a newly created datagram generated from the existing passed datagram when it is processed by the passed multicast encapsulation interface. This call is made when an mbuf is to be delivered using the virtual interface.
[[mac-mpo-create-mbuf-netlayer]]
===== `mpo_create_mbuf_netlayer`
[source,c]
----
void mpo_create_mbuf_netlayer( oldmbuf,
oldmbuflabel,
newmbuf,
newmbuflabel);
struct mbuf *oldmbuf;
struct label *oldmbuflabel;
struct mbuf *newmbuf;
struct label *newmbuflabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`oldmbuf`
|Received datagram
|
|`oldmbuflabel`
|Policy label for `oldmbuf`
|
|`newmbuf`
|Newly created datagram
|
|`newmbuflabel`
|Policy label for `newmbuf`
|
|===
Set the label on the mbuf header of a newly created datagram generated by the IP stack in response to an existing received datagram (`oldmbuf`). This call may be made in a number of situations, including when responding to ICMP request datagrams.
[[mac-mpo-fragment-match]]
===== `mpo_fragment_match`
[source,c]
----
int mpo_fragment_match( fragment,
fragmentlabel,
ipq,
ipqlabel);
struct mbuf *fragment;
struct label *fragmentlabel;
struct ipq *ipq;
struct label *ipqlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`fragment`
|IP datagram fragment
|
|`fragmentlabel`
|Policy label for `fragment`
|
|`ipq`
|IP fragment reassembly queue
|
|`ipqlabel`
|Policy label for `ipq`
|
|===
Determine whether an mbuf header containing an IP datagram (`fragment`) fragment matches the label of the passed IP fragment reassembly queue (`ipq`). Return (1) for a successful match, or (0) for no match. This call is made when the IP stack attempts to find an existing fragment reassembly queue for a newly received fragment; if this fails, a new fragment reassembly queue may be instantiated for the fragment. Policies may use this entry point to prevent the reassembly of otherwise matching IP fragments if policy does not permit them to be reassembled based on the label or other information.
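A sketch of a simple equality-based match, reusing the hypothetical `struct mymac_data` and `SLOT()` accessor from earlier examples:
[source,c]
----
static int
mymac_fragment_match(struct mbuf *fragment, struct label *fragmentlabel,
    struct ipq *ipq, struct label *ipqlabel)
{
	struct mymac_data *f, *q;

	f = SLOT(fragmentlabel);	/* hypothetical accessors */
	q = SLOT(ipqlabel);
	/* Return 1 only if the labels are equivalent for this policy. */
	return (f->md_level == q->md_level);
}
----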
[[mac-mpo-ifnet-relabel]]
===== `mpo_relabel_ifnet`
[source,c]
----
void mpo_relabel_ifnet( cred,
ifnet,
ifnetlabel,
newlabel);
struct ucred *cred;
struct ifnet *ifnet;
struct label *ifnetlabel;
struct label *newlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`ifnet`
|Object; Network interface
|
|`ifnetlabel`
|Policy label for `ifnet`
|
|`newlabel`
|Label update to apply to `ifnet`
|
|===
Update the label of network interface, `ifnet`, based on the passed update label, `newlabel`, and the passed subject credential, `cred`.
[[mac-mpo-update-ipq]]
===== `mpo_update_ipq`
[source,c]
----
void mpo_update_ipq( fragment,
fragmentlabel,
ipq,
ipqlabel);
struct mbuf *fragment;
struct label *fragmentlabel;
struct ipq *ipq;
struct label *ipqlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`fragment`
|IP fragment
|
|`fragmentlabel`
|Policy label for `fragment`
|
|`ipq`
|IP fragment reassembly queue
|
|`ipqlabel`
|Policy label to be updated for `ipq`
|
|===
Update the label on an IP fragment reassembly queue (`ipq`) based on the acceptance of the passed IP fragment mbuf header (`fragment`).
[[mac-proc-labeling-event-ops]]
==== Process Labeling Event Operations
[[mac-mpo-create-cred]]
===== `mpo_create_cred`
[source,c]
----
void mpo_create_cred( parent_cred,
child_cred);
struct ucred *parent_cred;
struct ucred *child_cred;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`parent_cred`
|Parent subject credential
|
|`child_cred`
|Child subject credential
|
|===
Set the label of a newly created subject credential from the passed subject credential. This call will be made when man:crcopy[9] is invoked on a newly created struct ucred. This call should not be confused with a process forking or creation event.
[[mac-mpo-execve-transition]]
===== `mpo_execve_transition`
[source,c]
----
void mpo_execve_transition( old,
new,
vp,
vnodelabel);
struct ucred *old;
struct ucred *new;
struct vnode *vp;
struct label *vnodelabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`old`
|Existing subject credential
|Immutable
|`new`
|New subject credential to be labeled
|
|`vp`
|File to execute
|Locked
|`vnodelabel`
|Policy label for `vp`
|
|===
Update the label of a newly created subject credential (`new`) from the passed existing subject credential (`old`) based on a label transition caused by executing the passed vnode (`vp`). This call occurs when a process executes the passed vnode and one of the policies returns a success from the `mpo_execve_will_transition` entry point. Policies may choose to implement this call simply by invoking `mpo_create_cred` and passing the two subject credentials so as not to implement a transitioning event. Policies should not leave this entry point unimplemented if they implement `mpo_create_cred`, even if they do not implement `mpo_execve_will_transition`.
[[mac-mpo-execve-will-transition]]
===== `mpo_execve_will_transition`
[source,c]
----
int mpo_execve_will_transition( old,
vp,
vnodelabel);
struct ucred *old;
struct vnode *vp;
struct label *vnodelabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`old`
|Subject credential prior to man:execve[2]
|Immutable
|`vp`
|File to execute
|
|`vnodelabel`
|Policy label for `vp`
|
|===
Determine whether the policy will want to perform a transition event as a result of the execution of the passed vnode by the passed subject credential. Return 1 if a transition is required, 0 if not. Even if a policy returns 0, it should behave correctly in the presence of an unexpected invocation of `mpo_execve_transition`, as that call may happen as a result of another policy requesting a transition.
[[mac-mpo-create-proc0]]
===== `mpo_create_proc0`
[source,c]
----
void mpo_create_proc0( cred);
struct ucred *cred;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential to be filled in
|
|===
Create the subject credential of process 0, the parent of all kernel processes.
[[mac-mpo-create-proc1]]
===== `mpo_create_proc1`
[source,c]
----
void mpo_create_proc1( cred);
struct ucred *cred;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential to be filled in
|
|===
Create the subject credential of process 1, the parent of all user processes.
[[mac-mpo-relabel-cred]]
===== `mpo_relabel_cred`
[source,c]
----
void mpo_relabel_cred( cred,
newlabel);
struct ucred *cred;
struct label *newlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`newlabel`
|Label update to apply to `cred`
|
|===
Update the label on a subject credential from the passed update label.
[[mac-access-control-checks]]
=== Access Control Checks
Access control entry points permit policy modules to influence access control decisions made by the kernel. Generally, although not always, arguments to an access control entry point will include one or more authorizing credentials and information (possibly including a label) for any other objects involved in the operation. An access control entry point may return 0 to permit the operation, or an man:errno[2] error value. The results of invoking the entry point across various registered policy modules will be composed as follows: if all modules permit the operation to succeed, success will be returned. If one or more modules returns a failure, a failure will be returned. If more than one module returns a failure, the `errno` value to return to the user will be selected using the following precedence, implemented by the `error_select()` function in [.filename]#kern_mac.c#:
[.informaltable]
[cols="1,1", frame="none"]
|===
|Most precedence
|EDEADLK
|
|EINVAL
|
|ESRCH
|
|EACCES
|Least precedence
|EPERM
|===
If none of the error values returned by the modules is listed in the precedence chart, then an arbitrarily selected value from the set will be returned. In general, the rules provide precedence to errors in the following order: kernel failures, invalid arguments, object not present, access not permitted, other.
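As an illustration only, the composition described above could be written along the following lines; this is a sketch with invented helper names, not the actual [.filename]#kern_mac.c# source:
[source,c]
----
#include <sys/errno.h>

/*
 * Sketch only: rank the errno values named in the table above;
 * unlisted values rank lowest.
 */
static int
error_precedence(int error)
{
	switch (error) {
	case EDEADLK:
		return (5);	/* kernel failure */
	case EINVAL:
		return (4);	/* invalid argument */
	case ESRCH:
		return (3);	/* object not present */
	case EACCES:
		return (2);	/* access not permitted */
	case EPERM:
		return (1);	/* lack of privilege */
	default:
		return (0);	/* unlisted; arbitrary selection */
	}
}

/*
 * Compose two module results: success only if both succeeded,
 * otherwise the failure with the higher precedence wins.
 */
static int
error_select_sketch(int error1, int error2)
{
	if (error1 == 0)
		return (error2);
	if (error2 == 0)
		return (error1);
	return (error_precedence(error2) > error_precedence(error1) ?
	    error2 : error1);
}
----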
[[mac-mpo-bpfdesc-check-receive-from-ifnet]]
==== `mpo_check_bpfdesc_receive`
[source,c]
----
int mpo_check_bpfdesc_receive( bpf_d,
bpflabel,
ifnet,
ifnetlabel);
struct bpf_d *bpf_d;
struct label *bpflabel;
struct ifnet *ifnet;
struct label *ifnetlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`bpf_d`
|Subject; BPF descriptor
|
|`bpflabel`
|Policy label for `bpf_d`
|
|`ifnet`
|Object; network interface
|
|`ifnetlabel`
|Policy label for `ifnet`
|
|===
Determine whether the MAC framework should permit datagrams from the passed interface to be delivered to the buffers of the passed BPF descriptor. Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatches, EPERM for lack of privilege.
[[mac-mpo-check-kenv-dump]]
==== `mpo_check_kenv_dump`
[source,c]
----
int mpo_check_kenv_dump( cred);
struct ucred *cred;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|===
Determine whether the subject should be allowed to retrieve the kernel environment (see man:kenv[2]).
[[mac-mpo-check-kenv-get]]
==== `mpo_check_kenv_get`
[source,c]
----
int mpo_check_kenv_get( cred,
name);
struct ucred *cred;
char *name;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`name`
|Kernel environment variable name
|
|===
Determine whether the subject should be allowed to retrieve the value of the specified kernel environment variable.
[[mac-mpo-check-kenv-set]]
==== `mpo_check_kenv_set`
[source,c]
----
int mpo_check_kenv_set( cred,
name);
struct ucred *cred;
char *name;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`name`
|Kernel environment variable name
|
|===
Determine whether the subject should be allowed to set the specified kernel environment variable.
[[mac-mpo-check-kenv-unset]]
==== `mpo_check_kenv_unset`
[source,c]
----
int mpo_check_kenv_unset( cred,
name);
struct ucred *cred;
char *name;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`name`
|Kernel environment variable name
|
|===
Determine whether the subject should be allowed to unset the specified kernel environment variable.
[[mac-mpo-check-kld-load]]
==== `mpo_check_kld_load`
[source,c]
----
int mpo_check_kld_load( cred,
vp,
vlabel);
struct ucred *cred;
struct vnode *vp;
struct label *vlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`vp`
|Kernel module vnode
|
|`vlabel`
|Label associated with `vp`
|
|===
Determine whether the subject should be allowed to load the specified module file.
[[mac-mpo-check-kld-stat]]
==== `mpo_check_kld_stat`
[source,c]
----
int mpo_check_kld_stat( cred);
struct ucred *cred;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|===
Determine whether the subject should be allowed to retrieve a list of loaded kernel module files and associated statistics.
[[mac-mpo-check-kld-unload]]
==== `mpo_check_kld_unload`
[source,c]
----
int mpo_check_kld_unload( cred);
struct ucred *cred;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|===
Determine whether the subject should be allowed to unload a kernel module.
[[mac-mpo-check-pipe-ioctl]]
==== `mpo_check_pipe_ioctl`
[source,c]
----
int mpo_check_pipe_ioctl( cred,
pipe,
pipelabel,
cmd,
data);
struct ucred *cred;
struct pipe *pipe;
struct label *pipelabel;
unsigned long cmd;
void *data;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`pipe`
|Pipe
|
|`pipelabel`
|Policy label associated with `pipe`
|
|`cmd`
|man:ioctl[2] command
|
|`data`
|man:ioctl[2] data
|
|===
Determine whether the subject should be allowed to make the specified man:ioctl[2] call.
[[mac-mpo-check-pipe-poll]]
==== `mpo_check_pipe_poll`
[source,c]
----
int mpo_check_pipe_poll( cred,
pipe,
pipelabel);
struct ucred *cred;
struct pipe *pipe;
struct label *pipelabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`pipe`
|Pipe
|
|`pipelabel`
|Policy label associated with `pipe`
|
|===
Determine whether the subject should be allowed to poll `pipe`.
[[mac-mpo-check-pipe-read]]
==== `mpo_check_pipe_read`
[source,c]
----
int mpo_check_pipe_read( cred,
pipe,
pipelabel);
struct ucred *cred;
struct pipe *pipe;
struct label *pipelabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`pipe`
|Pipe
|
|`pipelabel`
|Policy label associated with `pipe`
|
|===
Determine whether the subject should be allowed read access to `pipe`.
[[mac-mpo-check-pipe-relabel]]
==== `mpo_check_pipe_relabel`
[source,c]
----
int mpo_check_pipe_relabel( cred,
pipe,
pipelabel,
newlabel);
struct ucred *cred;
struct pipe *pipe;
struct label *pipelabel;
struct label *newlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`pipe`
|Pipe
|
|`pipelabel`
|Current policy label associated with `pipe`
|
|`newlabel`
|Label update to `pipelabel`
|
|===
Determine whether the subject should be allowed to relabel `pipe`.
[[mac-mpo-check-pipe-stat]]
==== `mpo_check_pipe_stat`
[source,c]
----
int mpo_check_pipe_stat( cred,
pipe,
pipelabel);
struct ucred *cred;
struct pipe *pipe;
struct label *pipelabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`pipe`
|Pipe
|
|`pipelabel`
|Policy label associated with `pipe`
|
|===
Determine whether the subject should be allowed to retrieve statistics related to `pipe`.
[[mac-mpo-check-pipe-write]]
==== `mpo_check_pipe_write`
[source,c]
----
int mpo_check_pipe_write( cred,
pipe,
pipelabel);
struct ucred *cred;
struct pipe *pipe;
struct label *pipelabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`pipe`
|Pipe
|
|`pipelabel`
|Policy label associated with `pipe`
|
|===
Determine whether the subject should be allowed to write to `pipe`.
[[mac-mpo-cred-check-socket-bind]]
==== `mpo_check_socket_bind`
[source,c]
----
int mpo_check_socket_bind( cred,
socket,
socketlabel,
sockaddr);
struct ucred *cred;
struct socket *socket;
struct label *socketlabel;
struct sockaddr *sockaddr;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`socket`
|Socket to be bound
|
|`socketlabel`
|Policy label for `socket`
|
|`sockaddr`
|Address of `socket`
|
|===
Determine whether the subject credential (`cred`) can bind the passed socket (`socket`) to the passed socket address (`sockaddr`). Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatches, EPERM for lack of privilege.
[[mac-mpo-cred-check-socket-connect]]
==== `mpo_check_socket_connect`
[source,c]
----
int mpo_check_socket_connect( cred,
socket,
socketlabel,
sockaddr);
struct ucred *cred;
struct socket *socket;
struct label *socketlabel;
struct sockaddr *sockaddr;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`socket`
|Socket to be connected
|
|`socketlabel`
|Policy label for `socket`
|
|`sockaddr`
|Address of `socket`
|
|===
Determine whether the subject credential (`cred`) can connect the passed socket (`socket`) to the passed socket address (`sockaddr`). Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatches, EPERM for lack of privilege.
[[mac-mpo-check-socket-receive]]
==== `mpo_check_socket_receive`
[source,c]
----
int mpo_check_socket_receive( cred,
so,
socketlabel);
struct ucred *cred;
struct socket *so;
struct label *socketlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`so`
|Socket
|
|`socketlabel`
|Policy label associated with `so`
|
|===
Determine whether the subject should be allowed to receive information from the socket `so`.
[[mac-mpo-check-socket-send]]
==== `mpo_check_socket_send`
[source,c]
----
int mpo_check_socket_send( cred,
so,
socketlabel);
struct ucred *cred;
struct socket *so;
struct label *socketlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`so`
|Socket
|
|`socketlabel`
|Policy label associated with `so`
|
|===
Determine whether the subject should be allowed to send information across the socket `so`.
[[mac-mpo-check-cred-visible]]
==== `mpo_check_cred_visible`
[source,c]
----
int mpo_check_cred_visible( u1,
u2);
struct ucred *u1;
struct ucred *u2;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`u1`
|Subject credential
|
|`u2`
|Object credential
|
|===
Determine whether the subject credential `u1` can "see" other subjects with the passed subject credential `u2`. Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatches, EPERM for lack of privilege, or ESRCH to hide visibility. This call may be made in a number of situations, including inter-process status sysctls used by man:ps[1], and in procfs lookups.
[[mac-mpo-cred-check-socket-visible]]
==== `mpo_check_socket_visible`
[source,c]
----
int mpo_check_socket_visible( cred,
socket,
socketlabel);
struct ucred *cred;
struct socket *socket;
struct label *socketlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`socket`
|Object; socket
|
|`socketlabel`
|Policy label for `socket`
|
|===
Determine whether the subject credential (`cred`) can "see" the passed socket (`socket`) using system monitoring functions, such as those employed by man:netstat[8] and man:sockstat[1]. Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatches, EPERM for lack of privilege, or ESRCH to hide visibility.
[[mac-mpo-cred-check-ifnet-relabel]]
==== `mpo_check_ifnet_relabel`
[source,c]
----
int mpo_check_ifnet_relabel( cred,
ifnet,
ifnetlabel,
newlabel);
struct ucred *cred;
struct ifnet *ifnet;
struct label *ifnetlabel;
struct label *newlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`ifnet`
|Object; network interface
|
|`ifnetlabel`
|Existing policy label for `ifnet`
|
|`newlabel`
|Policy label update to later be applied to `ifnet`
|
|===
Determine whether the subject credential can relabel the passed network interface to the passed label update.
[[mac-mpo-cred-check-socket-relabel]]
==== `mpo_check_socket_relabel`
[source,c]
----
int mpo_check_socket_relabel( cred,
socket,
socketlabel,
newlabel);
struct ucred *cred;
struct socket *socket;
struct label *socketlabel;
struct label *newlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`socket`
|Object; socket
|
|`socketlabel`
|Existing policy label for `socket`
|
|`newlabel`
|Label update to later be applied to `socketlabel`
|
|===
Determine whether the subject credential can relabel the passed socket to the passed label update.
[[mac-mpo-cred-check-cred-relabel]]
==== `mpo_check_cred_relabel`
[source,c]
----
int mpo_check_cred_relabel( cred,
newlabel);
struct ucred *cred;
struct label *newlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`newlabel`
|Label update to later be applied to `cred`
|
|===
Determine whether the subject credential can relabel itself to the passed label update.
[[mac-mpo-cred-check-vnode-relabel]]
==== `mpo_check_vnode_relabel`
[source,c]
----
int mpo_check_vnode_relabel( cred,
vp,
vnodelabel,
newlabel);
struct ucred *cred;
struct vnode *vp;
struct label *vnodelabel;
struct label *newlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|Immutable
|`vp`
|Object; vnode
|Locked
|`vnodelabel`
|Existing policy label for `vp`
|
|`newlabel`
|Policy label update to later be applied to `vp`
|
|===
Determine whether the subject credential can relabel the passed vnode to the passed label update.
[[mpo-cred-check-mount-stat]]
==== `mpo_check_mount_stat`
[source,c]
----
int mpo_check_mount_stat( cred,
mp,
mountlabel);
struct ucred *cred;
struct mount *mp;
struct label *mountlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`mp`
|Object; file system mount
|
|`mountlabel`
|Policy label for `mp`
|
|===
Determine whether the subject credential can see the results of a statfs performed on the file system. Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatches or EPERM for lack of privilege. This call may be made in a number of situations, including during invocations of man:statfs[2] and related calls, as well as to determine what file systems to exclude from listings of file systems, such as when man:getfsstat[2] is invoked.
[[mac-mpo-cred-check-proc-debug]]
==== `mpo_check_proc_debug`
[source,c]
----
int mpo_check_proc_debug( cred,
proc);
struct ucred *cred;
struct proc *proc;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|Immutable
|`proc`
|Object; process
|
|===
Determine whether the subject credential can debug the passed process. Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatch, EPERM for lack of privilege, or ESRCH to hide visibility of the target. This call may be made in a number of situations, including use of the man:ptrace[2] and man:ktrace[2] APIs, as well as for some types of procfs operations.
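As an illustration only, a minimal policy might implement this entry point using existing credential fields rather than label data; the function below is an assumption for the example, not an existing FreeBSD module:
[source,c]
----
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/errno.h>
#include <sys/proc.h>
#include <sys/ucred.h>

/*
 * Hypothetical check: allow debugging only by root or within the same
 * real uid, and hide other processes entirely.
 */
static int
sketch_check_proc_debug(struct ucred *cred, struct proc *proc)
{
	if (cred->cr_ruid == 0)
		return (0);
	if (cred->cr_ruid != proc->p_ucred->cr_ruid)
		return (ESRCH);		/* hide visibility of the target */
	return (0);
}
----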
[[mac-mpo-cred-check-vnode-access]]
==== `mpo_check_vnode_access`
[source,c]
----
int mpo_check_vnode_access( cred,
vp,
label,
flags);
struct ucred *cred;
struct vnode *vp;
struct label *label;
int flags;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`vp`
|Object; vnode
|
|`label`
|Policy label for `vp`
|
|`flags`
|man:access[2] flags
|
|===
Determine how invocations of man:access[2] and related calls by the subject credential should return when performed on the passed vnode using the passed access flags. This should generally be implemented using the same semantics used in `mpo_check_vnode_open`. Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatches or EPERM for lack of privilege.
[[mac-mpo-cred-check-vnode-chdir]]
==== `mpo_check_vnode_chdir`
[source,c]
----
int mpo_check_vnode_chdir( cred,
dvp,
dlabel);
struct ucred *cred;
struct vnode *dvp;
struct label *dlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`dvp`
|Object; vnode to man:chdir[2] into
|
|`dlabel`
|Policy label for `dvp`
|
|===
Determine whether the subject credential can change the process working directory to the passed vnode. Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatch, or EPERM for lack of privilege.
[[mac-mpo-check-vnode-chroot]]
==== `mpo_check_vnode_chroot`
[source,c]
----
int mpo_check_vnode_chroot( cred,
dvp,
dlabel);
struct ucred *cred;
struct vnode *dvp;
struct label *dlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`dvp`
|Directory vnode
|
|`dlabel`
|Policy label associated with `dvp`
|
|===
Determine whether the subject should be allowed to man:chroot[2] into the specified directory (`dvp`).
[[mac-mpo-cred-check-vnode-create]]
==== `mpo_check_vnode_create`
[source,c]
----
int mpo_check_vnode_create( cred,
dvp,
dlabel,
cnp,
vap);
struct ucred *cred;
struct vnode *dvp;
struct label *dlabel;
struct componentname *cnp;
struct vattr *vap;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`dvp`
|Object; vnode
|
|`dlabel`
|Policy label for `dvp`
|
|`cnp`
|Component name for the vnode to be created
|
|`vap`
|Vnode attributes for the vnode to be created
|
|===
Determine whether the subject credential can create a vnode with the passed parent directory, passed name information, and passed attribute information. Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatch, or EPERM for lack of privilege. This call may be made in a number of situations, including as a result of calls to man:open[2] with O_CREAT, man:mkfifo[2], and others.
[[mac-mpo-cred-check-vnode-delete]]
==== `mpo_check_vnode_delete`
[source,c]
----
int mpo_check_vnode_delete( cred,
dvp,
dlabel,
vp,
label,
cnp);
struct ucred *cred;
struct vnode *dvp;
struct label *dlabel;
struct vnode *vp;
void *label;
struct componentname *cnp;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`dvp`
|Parent directory vnode
|
|`dlabel`
|Policy label for `dvp`
|
|`vp`
|Object; vnode to delete
|
|`label`
|Policy label for `vp`
|
|`cnp`
|Component name for `vp`
|
|===
Determine whether the subject credential can delete a vnode from the passed parent directory and passed name information. Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatch, or EPERM for lack of privilege. This call may be made in a number of situations, including as a result of calls to man:unlink[2] and man:rmdir[2]. Policies implementing this entry point should also implement `mpo_check_vnode_rename_to` to authorize deletion of objects as a result of being the target of a rename.
[[mac-mpo-cred-check-vnode-deleteacl]]
==== `mpo_check_vnode_deleteacl`
[source,c]
----
int mpo_check_vnode_deleteacl( cred,
vp,
label,
type);
struct ucred *cred;
struct vnode *vp;
struct label *label;
acl_type_t type;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|Immutable
|`vp`
|Object; vnode
|Locked
|`label`
|Policy label for `vp`
|
|`type`
|ACL type
|
|===
Determine whether the subject credential can delete the ACL of passed type from the passed vnode. Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatch, or EPERM for lack of privilege.
[[mac-mpo-cred-check-vnode-exec]]
==== `mpo_check_vnode_exec`
[source,c]
----
int mpo_check_vnode_exec( cred,
vp,
label);
struct ucred *cred;
struct vnode *vp;
struct label *label;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`vp`
|Object; vnode to execute
|
|`label`
|Policy label for `vp`
|
|===
Determine whether the subject credential can execute the passed vnode. Determination of execute privilege is made separately from decisions about any transitioning event. Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatch, or EPERM for lack of privilege.
[[mpo-cred-check-vnode-getacl]]
==== `mpo_check_vnode_getacl`
[source,c]
----
int mpo_check_vnode_getacl( cred,
vp,
label,
type);
struct ucred *cred;
struct vnode *vp;
struct label *label;
acl_type_t type;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`vp`
|Object; vnode
|
|`label`
|Policy label for `vp`
|
|`type`
|ACL type
|
|===
Determine whether the subject credential can retrieve the ACL of passed type from the passed vnode. Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatch, or EPERM for lack of privilege.
[[mac-mpo-cred-check-vnode-getextattr]]
==== `mpo_check_vnode_getextattr`
[source,c]
----
int mpo_check_vnode_getextattr( cred,
vp,
label,
attrnamespace,
name,
uio);
struct ucred *cred;
struct vnode *vp;
struct label *label;
int attrnamespace;
const char *name;
struct uio *uio;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`vp`
|Object; vnode
|
|`label`
|Policy label for `vp`
|
|`attrnamespace`
|Extended attribute namespace
|
|`name`
|Extended attribute name
|
|`uio`
|I/O structure pointer; see man:uio[9]
|
|===
Determine whether the subject credential can retrieve the extended attribute with the passed namespace and name from the passed vnode. Policies implementing labeling using extended attributes may be interested in special handling of operations on those extended attributes. Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatch, or EPERM for lack of privilege.
[[mac-mpo-check-vnode-link]]
==== `mpo_check_vnode_link`
[source,c]
----
int mpo_check_vnode_link( cred,
dvp,
dlabel,
vp,
label,
cnp);
struct ucred *cred;
struct vnode *dvp;
struct label *dlabel;
struct vnode *vp;
struct label *label;
struct componentname *cnp;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`dvp`
|Directory vnode
|
|`dlabel`
|Policy label associated with `dvp`
|
|`vp`
|Link destination vnode
|
|`label`
|Policy label associated with `vp`
|
|`cnp`
|Component name for the link being created
|
|===
Determine whether the subject should be allowed to create a link to the vnode `vp` with the name specified by `cnp`.
[[mac-mpo-check-vnode-mmap]]
==== `mpo_check_vnode_mmap`
[source,c]
----
int mpo_check_vnode_mmap( cred,
vp,
label,
prot);
struct ucred *cred;
struct vnode *vp;
struct label *label;
int prot;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`vp`
|Vnode to map
|
|`label`
|Policy label associated with `vp`
|
|`prot`
|Mmap protections (see man:mmap[2])
|
|===
Determine whether the subject should be allowed to map the vnode `vp` with the protections specified in `prot`.
[[mac-mpo-check-vnode-mmap-downgrade]]
==== `mpo_check_vnode_mmap_downgrade`
[source,c]
----
void mpo_check_vnode_mmap_downgrade( cred,
vp,
label,
prot);
struct ucred *cred;
struct vnode *vp;
struct label *label;
int *prot;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|See <<mac-mpo-check-vnode-mmap>>.
|
|`vp`
|
|
|`label`
|
|
|`prot`
|Mmap protections to be downgraded
|
|===
Downgrade the mmap protections based on the subject and object labels.
[[mac-mpo-check-vnode-mprotect]]
==== `mpo_check_vnode_mprotect`
[source,c]
----
int mpo_check_vnode_mprotect( cred,
vp,
label,
prot);
struct ucred *cred;
struct vnode *vp;
struct label *label;
int prot;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`vp`
|Mapped vnode
|
|`label`
|Policy label associated with `vp`
|
|`prot`
|Memory protections
|
|===
Determine whether the subject should be allowed to set the specified memory protections on memory mapped from the vnode `vp`.
[[mac-mpo-check-vnode-poll]]
==== `mpo_check_vnode_poll`
[source,c]
----
int mpo_check_vnode_poll( active_cred,
file_cred,
vp,
label);
struct ucred *active_cred;
struct ucred *file_cred;
struct vnode *vp;
struct label *label;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`active_cred`
|Subject credential
|
|`file_cred`
|Credential associated with the struct file
|
|`vp`
|Polled vnode
|
|`label`
|Policy label associated with `vp`
|
|===
Determine whether the subject should be allowed to poll the vnode `vp`.
[[mac-mpo-check-vnode-rename-from]]
==== `mpo_check_vnode_rename_from`
[source,c]
----
int mpo_check_vnode_rename_from( cred,
dvp,
dlabel,
vp,
label,
cnp);
struct ucred *cred;
struct vnode *dvp;
struct label *dlabel;
struct vnode *vp;
struct label *label;
struct componentname *cnp;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`dvp`
|Directory vnode
|
|`dlabel`
|Policy label associated with `dvp`
|
|`vp`
|Vnode to be renamed
|
|`label`
|Policy label associated with `vp`
|
|`cnp`
|Component name for `vp`
|
|===
Determine whether the subject should be allowed to rename the vnode `vp` to something else.
[[mac-mpo-check-vnode-rename-to]]
==== `mpo_check_vnode_rename_to`
[source,c]
----
int mpo_check_vnode_rename_to( cred,
dvp,
dlabel,
vp,
label,
samedir,
cnp);
struct ucred *cred;
struct vnode *dvp;
struct label *dlabel;
struct vnode *vp;
struct label *label;
int samedir;
struct componentname *cnp;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`dvp`
|Directory vnode
|
|`dlabel`
|Policy label associated with `dvp`
|
|`vp`
|Overwritten vnode
|
|`label`
|Policy label associated with `vp`
|
|`samedir`
|Boolean; `1` if the source and destination directories are the same
|
|`cnp`
|Destination component name
|
|===
Determine whether the subject should be allowed to rename to the vnode `vp`, into the directory `dvp`, or to the name represented by `cnp`. If there is no existing file to overwrite, `vp` and `label` will be NULL.
[[mac-mpo-cred-check-socket-listen]]
==== `mpo_check_socket_listen`
[source,c]
----
int mpo_check_socket_listen( cred,
socket,
socketlabel);
struct ucred *cred;
struct socket *socket;
struct label *socketlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`socket`
|Object; socket
|
|`socketlabel`
|Policy label for `socket`
|
|===
Determine whether the subject credential can listen on the passed socket. Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatch, or EPERM for lack of privilege.
[[mac-mpo-cred-check-vnode-lookup]]
==== `mpo_check_vnode_lookup`
[source,c]
----
int mpo_check_vnode_lookup( cred,
dvp,
dlabel,
cnp);
struct ucred *cred;
struct vnode *dvp;
struct label *dlabel;
struct componentname *cnp;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`dvp`
|Object; vnode
|
|`dlabel`
|Policy label for `dvp`
|
|`cnp`
|Component name being looked up
|
|===
Determine whether the subject credential can perform a lookup in the passed directory vnode for the passed name. Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatch, or EPERM for lack of privilege.
[[mac-mpo-cred-check-vnode-open]]
==== `mpo_check_vnode_open`
[source,c]
----
int mpo_check_vnode_open( cred,
vp,
label,
acc_mode);
struct ucred *cred;
struct vnode *vp;
struct label *label;
int acc_mode;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`vp`
|Object; vnode
|
|`label`
|Policy label for `vp`
|
|`acc_mode`
|man:open[2] access mode
|
|===
Determine whether the subject credential can perform an open operation on the passed vnode with the passed access mode. Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatch, or EPERM for lack of privilege.
[[mac-mpo-cred-check-vnode-readdir]]
==== `mpo_check_vnode_readdir`
[source,c]
----
int mpo_check_vnode_readdir( cred,
dvp,
dlabel);
struct ucred *cred;
struct vnode *dvp;
struct label *dlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`dvp`
|Object; directory vnode
|
|`dlabel`
|Policy label for `dvp`
|
|===
Determine whether the subject credential can perform a `readdir` operation on the passed directory vnode. Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatch, or EPERM for lack of privilege.
[[mac-mpo-cred-check-vnode-readlink]]
==== `mpo_check_vnode_readlink`
[source,c]
----
int mpo_check_vnode_readlink( cred,
vp,
label);
struct ucred *cred;
struct vnode *vp;
struct label *label;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`vp`
|Object; vnode
|
|`label`
|Policy label for `vp`
|
|===
Determine whether the subject credential can perform a `readlink` operation on the passed symlink vnode. Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatch, or EPERM for lack of privilege. This call may be made in a number of situations, including an explicit `readlink` call by the user process, or as a result of an implicit `readlink` during a name lookup by the process.
[[mac-mpo-cred-check-vnode-revoke]]
==== `mpo_check_vnode_revoke`
[source,c]
----
int mpo_check_vnode_revoke( cred,
vp,
label);
struct ucred *cred;
struct vnode *vp;
struct label *label;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`vp`
|Object; vnode
|
|`label`
|Policy label for `vp`
|
|===
Determine whether the subject credential can revoke access to the passed vnode. Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatch, or EPERM for lack of privilege.
[[mac-mpo-cred-check-vnode-setacl]]
==== `mpo_check_vnode_setacl`
[source,c]
----
int mpo_check_vnode_setacl( cred,
vp,
label,
type,
acl);
struct ucred *cred;
struct vnode *vp;
struct label *label;
acl_type_t type;
struct acl *acl;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`vp`
|Object; vnode
|
|`label`
|Policy label for `vp`
|
|`type`
|ACL type
|
|`acl`
|ACL
|
|===
Determine whether the subject credential can set the passed ACL of passed type on the passed vnode. Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatch, or EPERM for lack of privilege.
[[mac-mpo-cred-check-vnode-setextattr]]
==== `mpo_check_vnode_setextattr`
[source,c]
----
int mpo_check_vnode_setextattr( cred,
vp,
label,
attrnamespace,
name,
uio);
struct ucred *cred;
struct vnode *vp;
struct label *label;
int attrnamespace;
const char *name;
struct uio *uio;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`vp`
|Object; vnode
|
|`label`
|Policy label for `vp`
|
|`attrnamespace`
|Extended attribute namespace
|
|`name`
|Extended attribute name
|
|`uio`
|I/O structure pointer; see man:uio[9]
|
|===
Determine whether the subject credential can set the extended attribute of passed name and passed namespace on the passed vnode. Policies implementing security labels backed into extended attributes may want to provide additional protections for those attributes. Additionally, policies should avoid making decisions based on the data referenced from `uio`, as there is a potential race condition between this check and the actual operation. The `uio` may also be `NULL` if a delete operation is being performed. Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatch, or EPERM for lack of privilege.
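For example, a policy that stores its labels in a system extended attribute might refuse direct modification of that attribute; the policy prefix and attribute name below are assumptions for the sketch, not part of any existing module:
[source,c]
----
#include <sys/param.h>
#include <sys/systm.h>	/* strcmp() */
#include <sys/errno.h>
#include <sys/extattr.h>

struct label;			/* opaque here; provided by the framework */
struct ucred;
struct uio;
struct vnode;

#define	SKETCH_EXTATTR_NAME	"sketch_label"	/* assumed attribute name */

static int
sketch_check_vnode_setextattr(struct ucred *cred, struct vnode *vp,
    struct label *label, int attrnamespace, const char *name,
    struct uio *uio)
{
	/* Protect the attribute backing this policy's labels. */
	if (attrnamespace == EXTATTR_NAMESPACE_SYSTEM &&
	    strcmp(name, SKETCH_EXTATTR_NAME) == 0)
		return (EPERM);
	return (0);
}
----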
[[mac-mpo-cred-check-vnode-setflags]]
==== `mpo_check_vnode_setflags`
[source,c]
----
int mpo_check_vnode_setflags( cred,
vp,
label,
flags);
struct ucred *cred;
struct vnode *vp;
struct label *label;
u_long flags;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`vp`
|Object; vnode
|
|`label`
|Policy label for `vp`
|
|`flags`
|File flags; see man:chflags[2]
|
|===
Determine whether the subject credential can set the passed flags on the passed vnode. Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatch, or EPERM for lack of privilege.
[[mac-mpo-cred-check-vnode-setmode]]
==== `mpo_check_vnode_setmode`
[source,c]
----
int mpo_check_vnode_setmode( cred,
vp,
label,
mode);
struct ucred *cred;
struct vnode *vp;
struct label *label;
mode_t mode;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`vp`
|Object; vnode
|
|`label`
|Policy label for `vp`
|
|`mode`
|File mode; see man:chmod[2]
|
|===
Determine whether the subject credential can set the passed mode on the passed vnode. Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatch, or EPERM for lack of privilege.
[[mac-mpo-cred-check-vnode-setowner]]
==== `mpo_check_vnode_setowner`
[source,c]
----
int mpo_check_vnode_setowner( cred,
vp,
label,
uid,
gid);
struct ucred *cred;
struct vnode *vp;
struct label *label;
uid_t uid;
gid_t gid;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`vp`
|Object; vnode
|
|`label`
|Policy label for `vp`
|
|`uid`
|User ID
|
|`gid`
|Group ID
|
|===
Determine whether the subject credential can set the passed uid and passed gid as file uid and file gid on the passed vnode. The IDs may be set to (`-1`) to request no update. Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatch, or EPERM for lack of privilege.
[[mac-mpo-cred-check-vnode-setutimes]]
==== `mpo_check_vnode_setutimes`
[source,c]
----
int mpo_check_vnode_setutimes( cred,
vp,
label,
atime,
mtime);
struct ucred *cred;
struct vnode *vp;
struct label *label;
struct timespec atime;
struct timespec mtime;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`vp`
|Object; vnode
|
|`label`
|Policy label for `vp`
|
|`atime`
|Access time; see man:utimes[2]
|
|`mtime`
|Modification time; see man:utimes[2]
|
|===
Determine whether the subject credential can set the passed access timestamps on the passed vnode. Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatch, or EPERM for lack of privilege.
[[mac-mpo-cred-check-proc-sched]]
==== `mpo_check_proc_sched`
[source,c]
----
int mpo_check_proc_sched( ucred,
proc);
struct ucred *ucred;
struct proc *proc;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`ucred`
|Subject credential
|
|`proc`
|Object; process
|
|===
Determine whether the subject credential can change the scheduling parameters of the passed process. Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatch, EPERM for lack of privilege, or ESRCH to limit visibility.
See man:setpriority[2] for more information.
[[mac-mpo-cred-check-proc-signal]]
==== `mpo_check_proc_signal`
[source,c]
----
int mpo_check_proc_signal( cred,
proc,
signal);
struct ucred *cred;
struct proc *proc;
int signal;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`proc`
|Object; process
|
|`signal`
|Signal; see man:kill[2]
|
|===
Determine whether the subject credential can deliver the passed signal to the passed process. Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatch, EPERM for lack of privilege, or ESRCH to limit visibility.
[[mac-mpo-cred-check-vnode-stat]]
==== `mpo_check_vnode_stat`
[source,c]
----
int mpo_check_vnode_stat( cred,
vp,
label);
struct ucred *cred;
struct vnode *vp;
struct label *label;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`vp`
|Object; vnode
|
|`label`
|Policy label for `vp`
|
|===
Determine whether the subject credential can `stat` the passed vnode. Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatch, or EPERM for lack of privilege.
See man:stat[2] for more information.
[[mac-mpo-cred-check-ifnet-transmit]]
==== `mpo_check_ifnet_transmit`
[source,c]
----
int mpo_check_ifnet_transmit( cred,
ifnet,
ifnetlabel,
mbuf,
mbuflabel);
struct ucred *cred;
struct ifnet *ifnet;
struct label *ifnetlabel;
struct mbuf *mbuf;
struct label *mbuflabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`ifnet`
|Network interface
|
|`ifnetlabel`
|Policy label for `ifnet`
|
|`mbuf`
|Object; mbuf to be sent
|
|`mbuflabel`
|Policy label for `mbuf`
|
|===
Determine whether the network interface can transmit the passed mbuf. Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatch, or EPERM for lack of privilege.
[[mac-mpo-cred-check-socket-deliver]]
==== `mpo_check_socket_deliver`
[source,c]
----
int mpo_check_socket_deliver( cred,
ifnet,
ifnetlabel,
mbuf,
mbuflabel);
struct ucred *cred;
struct ifnet *ifnet;
struct label *ifnetlabel;
struct mbuf *mbuf;
struct label *mbuflabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`ifnet`
|Network interface
|
|`ifnetlabel`
|Policy label for `ifnet`
|
|`mbuf`
|Object; mbuf to be delivered
|
|`mbuflabel`
|Policy label for `mbuf`
|
|===
Determine whether the socket may receive the datagram stored in the passed mbuf header. Return 0 for success, or an `errno` value for failure. Suggested failures: EACCES for label mismatch, or EPERM for lack of privilege.
[[mac-mpo-check-socket-visible]]
==== `mpo_check_socket_visible`
[source,c]
----
int mpo_check_socket_visible( cred,
so,
socketlabel);
struct ucred *cred;
struct socket *so;
struct label *socketlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|Immutable
|`so`
|Object; socket
|
|`socketlabel`
|Policy label for `so`
|
|===
Determine whether the subject credential (`cred`) can "see" the passed socket (`so`) using system monitoring functions, such as those employed by man:netstat[8] and man:sockstat[1]. Return 0 for success, or an `errno` value for failure. Suggested failure: EACCES for label mismatches, EPERM for lack of privilege, or ESRCH to hide visibility.
[[mac-mpo-check-system-acct]]
==== `mpo_check_system_acct`
[source,c]
----
int mpo_check_system_acct( ucred,
vp,
vlabel);
struct ucred *ucred;
struct vnode *vp;
struct label *vlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`ucred`
|Subject credential
|
|`vp`
|Accounting file; man:acct[5]
|
|`vlabel`
|Label associated with `vp`
|
|===
Determine whether the subject should be allowed to enable accounting, based on its label and the label of the accounting log file.
[[mac-mpo-check-system-nfsd]]
==== `mpo_check_system_nfsd`
[source,c]
----
int mpo_check_system_nfsd( cred);
struct ucred *cred;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|===
Determine whether the subject should be allowed to call man:nfssvc[2].
[[mac-mpo-check-system-reboot]]
==== `mpo_check_system_reboot`
[source,c]
----
int mpo_check_system_reboot( cred,
howto);
struct ucred *cred;
int howto;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`howto`
|`howto` parameter from man:reboot[2]
|
|===
Determine whether the subject should be allowed to reboot the system in the specified manner.
[[mac-mpo-check-system-settime]]
==== `mpo_check_system_settime`
[source,c]
----
int mpo_check_system_settime( cred);
struct ucred *cred;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|===
Determine whether the subject should be allowed to set the system clock.
[[mac-mpo-check-system-swapon]]
==== `mpo_check_system_swapon`
[source,c]
----
int mpo_check_system_swapon( cred,
vp,
vlabel);
struct ucred *cred;
struct vnode *vp;
struct label *vlabel;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`vp`
|Swap device
|
|`vlabel`
|Label associated with `vp`
|
|===
Determine whether the subject should be allowed to add `vp` as a swap device.
[[mac-mpo-check-system-sysctl]]
==== `mpo_check_system_sysctl`
[source,c]
----
int mpo_check_system_sysctl( cred,
name,
namelen,
old,
oldlenp,
inkernel,
new,
newlen);
struct ucred *cred;
int *name;
u_int *namelen;
void *old;
size_t *oldlenp;
int inkernel;
void *new;
size_t newlen;
----
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Parameter
| Description
| Locking
|`cred`
|Subject credential
|
|`name`
|See man:sysctl[3]
|
|`namelen`
|
|
|`old`
|
|
|`oldlenp`
|
|
|`inkernel`
|Boolean; `1` if called from kernel
|
|`new`
|See man:sysctl[3]
|
|`newlen`
|
|
|===
Determine whether the subject should be allowed to make the specified man:sysctl[3] transaction.
[[mac-label-management]]
=== Label Management Calls
Relabel events occur when a user process has requested that the label on an object be modified. A two-phase update occurs: first, an access control check will be performed to determine if the update is both valid and permitted, and then the update itself is performed via a separate entry point. Relabel entry points typically accept the object, object label reference, and an update label submitted by the process. Memory allocation during relabel is discouraged, as relabel calls are not permitted to fail (failure should be reported earlier in the relabel check).
[[mac-userland-arch]]
== Userland Architecture
The TrustedBSD MAC Framework includes a number of policy-agnostic elements, including MAC library interfaces for abstractly managing labels, modifications to the system credential management and login libraries to support the assignment of MAC labels to users, and a set of tools to monitor and modify labels on processes, files, and network interfaces. More details on the user architecture will be added to this section in the near future.
[[mac-userland-labels]]
=== APIs for Policy-Agnostic Label Management
The TrustedBSD MAC Framework provides a number of library and system calls permitting applications to manage MAC labels on objects using a policy-agnostic interface. This permits applications to manipulate labels for a variety of policies without being written to support specific policies. These interfaces are used by general-purpose tools such as man:ifconfig[8], man:ls[1] and man:ps[1] to view labels on network interfaces, files, and processes. The APIs also support MAC management tools including man:getfmac[8], man:getpmac[8], man:setfmac[8], man:setfsmac[8], and man:setpmac[8]. The MAC APIs are documented in man:mac[3].
Applications handle MAC labels in two forms: an internalized form used to return and set labels on processes and objects (`mac_t`), and externalized form based on C strings appropriate for storage in configuration files, display to the user, or input from the user. Each MAC label contains a number of elements, each consisting of a name and value pair. Policy modules in the kernel bind to specific names and interpret the values in policy-specific ways. In the externalized string form, labels are represented by a comma-delimited list of name and value pairs separated by the `/` character. Labels may be directly converted to and from text using provided APIs; when retrieving labels from the kernel, internalized label storage must first be prepared for the desired label element set. Typically, this is done in one of two ways: using man:mac_prepare[3] and an arbitrary list of desired label elements, or one of the variants of the call that loads a default element set from the man:mac.conf[5] configuration file. Per-object defaults permit application writers to usefully display labels associated with objects without being aware of the policies present in the system.
[NOTE]
====
Currently, direct manipulation of label elements other than by conversion to a text string, string editing, and conversion back to an internalized label is not supported by the MAC library. Such interfaces may be added in the future if they prove necessary for application writers.
====
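As a brief illustration of these interfaces, the fragment below retrieves a file's label in its externalized text form; the element list passed to man:mac_prepare[3] (`"biba,mls"`) is an arbitrary choice for the example, not a required set.
[source,c]
----
#include <sys/types.h>
#include <sys/mac.h>

#include <stdio.h>
#include <stdlib.h>

int
main(int argc, char *argv[])
{
	mac_t label;
	char *text;

	if (argc != 2) {
		fprintf(stderr, "usage: %s file\n", argv[0]);
		return (1);
	}

	/* Prepare internalized storage for the desired label elements. */
	if (mac_prepare(&label, "biba,mls") != 0) {
		perror("mac_prepare");
		return (1);
	}

	/* Retrieve the file label and convert it to its text form. */
	if (mac_get_file(argv[1], label) != 0 ||
	    mac_to_text(label, &text) != 0) {
		perror("mac_get_file");
		mac_free(label);
		return (1);
	}

	printf("%s: %s\n", argv[1], text);
	free(text);
	mac_free(label);
	return (0);
}
----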
[[mac-userland-credentials]]
=== Binding of Labels to Users
The standard user context management interface, man:setusercontext[3], has been modified to retrieve MAC labels associated with a user's class from man:login.conf[5]. These labels are then set along with other user context when either `LOGIN_SETALL` is specified, or when `LOGIN_SETMAC` is explicitly specified.
[NOTE]
====
It is expected that, in a future version of FreeBSD, the MAC label database will be separated from the [.filename]#login.conf# user class abstraction, and be maintained in a separate database. However, the man:setusercontext[3] API should remain the same following such a change.
====
[[mac-conclusion]]
== Conclusion
The TrustedBSD MAC framework permits kernel modules to augment the system security policy in a highly integrated manner. They may do this based on existing object properties, or based on label data that is maintained with the assistance of the MAC framework. The framework is sufficiently flexible to implement a variety of policy types, including information flow security policies such as MLS and Biba, as well as policies based on existing BSD credentials or file protections. Policy authors may wish to consult this documentation as well as existing security modules when implementing a new security service.
diff --git a/documentation/content/en/books/arch-handbook/newbus/_index.adoc b/documentation/content/en/books/arch-handbook/newbus/_index.adoc
index 797011d3f8..69a4109ec4 100644
--- a/documentation/content/en/books/arch-handbook/newbus/_index.adoc
+++ b/documentation/content/en/books/arch-handbook/newbus/_index.adoc
@@ -1,194 +1,195 @@
---
title: Chapter 14. Newbus
authors:
- author: Jeroen Ruigrok van der Werven (asmodai)
email: asmodai@FreeBSD.org
- author: Hiten Pandya
email: hiten@uk.FreeBSD.org
prev: books/arch-handbook/usb
next: books/arch-handbook/sound
+description: Newbus
---
[[newbus]]
= Newbus
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 14
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
_Special thanks to Matthew N. Dodd, Warner Losh, Bill Paul, Doug Rabson, Mike Smith, Peter Wemm and Scott Long_.
This chapter explains the Newbus device framework in detail.
[[newbus-devdrivers]]
== Device Drivers
=== Purpose of a Device Driver
A device driver is a software component which provides the interface between the kernel's generic view of a peripheral (e.g., disk, network adapter) and the actual implementation of the peripheral. The _device driver interface (DDI)_ is the defined interface between the kernel and the device driver component.
=== Types of Device Drivers
In the early days of UNIX(R), and thus FreeBSD, four types of devices were defined:
* block device drivers
* character device drivers
* network device drivers
* pseudo-device drivers
_Block devices_ performed I/O in fixed-size blocks of data. This type of driver depended on the so-called _buffer cache_, which cached accessed blocks of data in a dedicated part of memory. Often this buffer cache was based on write-behind, which meant that when data was modified in memory it got synced to disk whenever the system did its periodic disk flushing, thus optimizing writes.
=== Character Devices
However, from FreeBSD 4.0 onward the distinction between block and character devices no longer exists.
[[newbus-overview]]
== Overview of Newbus
_Newbus_ is the implementation of a new bus architecture based on abstraction layers which saw its introduction in FreeBSD 3.0 when the Alpha port was imported into the source tree. It was not until 4.0 that it became the default system for device drivers. Its goals are to provide a more object-oriented means of interconnecting the various busses and devices which a host system provides to the _Operating System_.
Its main features include amongst others:
* dynamic attaching
* easy modularization of drivers
* pseudo-busses
One of the most prominent changes is the migration from the flat and ad-hoc system to a device tree layout.
At the top level resides the _"root"_ device which is the parent to hang all other devices on. For each architecture, there is typically a single child of "root" which has such things as _host-to-PCI bridges_, etc. attached to it. For x86, this "root" device is the _"nexus"_ device. For Alpha, various models of Alpha have different top-level devices corresponding to the different hardware chipsets, including _lca_, _apecs_, _cia_ and _tsunami_.
A device in the Newbus context represents a single hardware entity in the system. For instance each PCI device is represented by a Newbus device. Any device in the system can have children; a device which has children is often called a _"bus"_. Examples of common busses in the system are ISA and PCI, which manage lists of devices attached to ISA and PCI busses respectively.
Often, a connection between different kinds of bus is represented by a _"bridge"_ device, which normally has one child for the attached bus. An example of this is a _PCI-to-PCI bridge_ which is represented by a device _[.filename]#pcibN#_ on the parent PCI bus and has a child _[.filename]#pciN#_ for the attached bus. This layout simplifies the implementation of the PCI bus tree, allowing common code to be used for both top-level and bridged busses.
Each device in the Newbus architecture asks its parent to map its resources. The parent then asks its own parent until the nexus is reached. So, basically the nexus is the only part of the Newbus system which knows about all resources.
[TIP]
====
An ISA device might want to map its IO port at `0x230`, so it asks its parent, in this case the ISA bus. The ISA bus hands it over to the PCI-to-ISA bridge which in its turn asks the PCI bus, which reaches the host-to-PCI bridge and finally the nexus. The beauty of this transition upwards is that there is room to translate the requests. For example, the `0x230` IO port request might become memory-mapped at `0xb0000230` on a MIPS box by the PCI bridge.
====
Resource allocation can be controlled at any place in the device tree. For instance on many Alpha platforms, ISA interrupts are managed separately from PCI interrupts and resource allocations for ISA interrupts are managed by the Alpha's ISA bus device. On IA-32, ISA and PCI interrupts are both managed by the top-level nexus device. For both ports, memory and port address space is managed by a single entity - nexus for IA-32 and the relevant chipset driver on Alpha (e.g., CIA or tsunami).
In order to normalize access to memory and port mapped resources, Newbus integrates the `bus_space` APIs from NetBSD. These provide a single API to replace inb/outb and direct memory reads/writes. The advantage of this is that a single driver can easily use either memory-mapped registers or port-mapped registers (some hardware supports both).
This support is integrated into the resource allocation mechanism. When a resource is allocated, a driver can retrieve the associated `bus_space_tag_t` and `bus_space_handle_t` from the resource.
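The abbreviated attach fragment below sketches this pattern; the driver name and the register offset are assumptions for the example, not an existing driver:
[.programlisting]
....
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/errno.h>
#include <sys/bus.h>
#include <sys/rman.h>
#include <machine/bus.h>
#include <machine/resource.h>

static int
sketch_attach(device_t dev)
{
	struct resource *res;
	bus_space_tag_t tag;
	bus_space_handle_t handle;
	int rid = 0;

	/* Ask the parent bus (and ultimately the nexus) for the resource. */
	res = bus_alloc_resource_any(dev, SYS_RES_MEMORY, &rid, RF_ACTIVE);
	if (res == NULL)
		return (ENXIO);

	tag = rman_get_bustag(res);
	handle = rman_get_bushandle(res);

	/* The same call works for port- or memory-mapped registers. */
	(void)bus_space_read_4(tag, handle, 0);

	bus_release_resource(dev, SYS_RES_MEMORY, rid, res);
	return (0);
}
....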
Newbus also allows for definitions of interface methods in files dedicated to this purpose. These are the [.filename]#.m# files that are found under the [.filename]#src/sys# hierarchy.
The core of the Newbus system is an extensible "object-based programming" model. Each device in the system has a table of methods which it supports. The system and other devices use those methods to control the device and request services. The different methods supported by a device are defined by a number of "interfaces". An "interface" is simply a group of related methods which can be implemented by a device.
In the Newbus system, the methods for a device are provided by the various device drivers in the system. When a device is attached to a driver during _auto-configuration_, it uses the method table declared by the driver. A device can later _detach_ from its driver and _re-attach_ to a new driver with a new method table. This allows dynamic replacement of drivers which can be useful for driver development.
The interfaces are described by an interface definition language similar to the language used to define vnode operations for file systems. The interface would be stored in a methods file (which would normally be named [.filename]#foo_if.m#).
.Newbus Methods
[example]
====
[.programlisting]
....
# Foo subsystem/driver (a comment...)
INTERFACE foo
METHOD int doit {
device_t dev;
};
# DEFAULT is the method that will be used, if a method was not
# provided via: DEVMETHOD()
METHOD void doit_to_child {
device_t dev;
driver_t child;
} DEFAULT doit_generic_to_child;
....
====
When this interface is compiled, it generates a header file "[.filename]#foo_if.h#" which contains function declarations:
[.programlisting]
....
int FOO_DOIT(device_t dev);
void FOO_DOIT_TO_CHILD(device_t dev, driver_t child);
....
A source file, "[.filename]#foo_if.c#", is also created to accompany the automatically generated header file; it contains implementations of those functions which look up the location of the relevant function in the object's method table and call that function.
The system defines two main interfaces. The first fundamental interface is called _"device"_ and includes methods which are relevant to all devices. Methods in the _"device"_ interface include _"probe"_, _"attach"_ and _"detach"_ to control detection of hardware and _"shutdown"_, _"suspend"_ and _"resume"_ for critical event notification.
The second, more complex interface is _"bus"_. This interface contains methods suitable for devices which have children, including methods to access bus specific per-device information footnote:[man:bus_generic_read_ivar[9] and man:bus_generic_write_ivar[9]], event notification (`_child_detached_`, `_driver_added_`) and resource management (`_alloc_resource_`, `_activate_resource_`, `_deactivate_resource_`, `_release_resource_`).
Many methods in the "bus" interface perform services for some child of the bus device. These methods would normally use the first two arguments to specify the bus providing the service and the child device which is requesting the service. To simplify driver code, many of these methods have accessor functions which look up the parent and call a method on the parent. For instance the method `BUS_TEARDOWN_INTR(device_t dev, device_t child, ...)` can be called using the function `bus_teardown_intr(device_t child, ...)`.
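Such an accessor is typically a thin wrapper that locates the parent and invokes the corresponding method on it; a minimal sketch of that pattern (not the actual implementation in the kernel sources) might look like:

[.programlisting]
....
/* Sketch of an accessor: forward the request to the parent bus device. */
int
bus_teardown_intr(device_t child, struct resource *irq, void *cookie)
{
	return (BUS_TEARDOWN_INTR(device_get_parent(child), child, irq, cookie));
}
....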
Some bus types in the system define additional interfaces to provide access to bus-specific functionality. For instance, the PCI bus driver defines the "pci" interface which has two methods `_read_config_` and `_write_config_` for accessing the configuration registers of a PCI device.
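For example, a driver for a PCI device can reach those methods through the corresponding accessor functions; the following small sketch assumes the standard `pci_read_config()`/`pci_write_config()` accessors and the `PCIR_COMMAND`/`PCIM_CMD_BUSMASTEREN` constants from the PCI headers:

[.programlisting]
....
#include <sys/param.h>
#include <sys/bus.h>
#include <dev/pci/pcireg.h>
#include <dev/pci/pcivar.h>

static void
xxx_enable_busmaster(device_t dev)
{
	uint16_t cmd;

	/* Read, modify and write back the 16-bit PCI command register. */
	cmd = pci_read_config(dev, PCIR_COMMAND, 2);
	cmd |= PCIM_CMD_BUSMASTEREN;
	pci_write_config(dev, PCIR_COMMAND, cmd, 2);
}
....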
[[newbus-api]]
== Newbus API
As the Newbus API is huge, this section makes some effort at documenting it. More information to come in the next revision of this document.
=== Important Locations in the Source Hierarchy
[.filename]#src/sys/[arch]/[arch]# - Kernel code for a specific machine architecture resides in this directory. For example, the `i386` architecture, or the `SPARC64` architecture.
[.filename]#src/sys/dev/[bus]# - device support for a specific `[bus]` resides in this directory.
[.filename]#src/sys/dev/pci# - PCI bus support code resides in this directory.
[.filename]#src/sys/[isa|pci]# - PCI/ISA device drivers reside in this directory. The PCI/ISA bus support code used to exist in this directory in FreeBSD version `4.0`.
=== Important Structures and Type Definitions
`devclass_t` - This is a type definition of a pointer to a `struct devclass`.
`device_method_t` - This is the same as `kobj_method_t` (see [.filename]#src/sys/kobj.h#).
`device_t` - This is a type definition of a pointer to a `struct device`. `device_t` represents a device in the system. It is a kernel object. See [.filename]#src/sys/sys/bus_private.h# for implementation details.
`driver_t` - This is a type definition which references `struct driver`. The `driver` struct is a class of the `device` kernel object; it also holds data private to the driver.
*_driver_t_ Implementation*
[.programlisting]
....
struct driver {
KOBJ_CLASS_FIELDS;
void *priv; /* driver private data */
};
....
`device_state_t` - This is an enumeration type, `device_state`. It contains the possible states of a Newbus device before and after the autoconfiguration process.
*Device States _device_state_t*
[.programlisting]
....
/*
* src/sys/sys/bus.h
*/
typedef enum device_state {
DS_NOTPRESENT, /* not probed or probe failed */
DS_ALIVE, /* probe succeeded */
DS_ATTACHED, /* attach method called */
DS_BUSY /* device is open */
} device_state_t;
....
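A driver can query this state at run time; the following is a small sketch, assuming the standard `device_get_state()` accessor declared in [.filename]#src/sys/sys/bus.h#:

[.programlisting]
....
#include <sys/param.h>
#include <sys/bus.h>

/* Returns non-zero once a device has successfully attached. */
static int
xxx_is_attached(device_t dev)
{
	return (device_get_state(dev) >= DS_ATTACHED);
}
....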
diff --git a/documentation/content/en/books/arch-handbook/pccard/_index.adoc b/documentation/content/en/books/arch-handbook/pccard/_index.adoc
index 1b7be9e711..6690023a19 100644
--- a/documentation/content/en/books/arch-handbook/pccard/_index.adoc
+++ b/documentation/content/en/books/arch-handbook/pccard/_index.adoc
@@ -1,190 +1,191 @@
---
title: Chapter 16. PC Card
prev: books/arch-handbook/sound
next: books/arch-handbook/partiii
+description: PC Card
---
[[pccard]]
= PC Card
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 16
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
This chapter will talk about the FreeBSD mechanisms for writing a device driver for a PC Card or CardBus device. However, at present it just documents how to add a new device to an existing pccard driver.
[[pccard-adddev]]
== Adding a Device
Device drivers know what devices they support. There is a table of supported devices in the kernel that drivers use to attach to a device.
[[pccard-overview]]
=== Overview
PC Cards are identified in one of two ways, both based on the _Card Information Structure_ (CIS) stored on the card. The first method is to use numeric manufacturer and product numbers. The second method is to use the human readable strings that are also contained in the CIS. The PC Card bus uses a centralized database and some macros to facilitate a design pattern to help the driver writer match devices to his driver.
Original equipment manufacturers (OEMs) often develop a reference design for a PC Card product, then sell this design to other companies to market. Those companies refine the design, market the product to their target audience or geographic area, and put their own name plate onto the card. The refinements to the physical card are typically very minor, if any changes are made at all. To strengthen their brand, these vendors place their company name in the human readable strings in the CIS space, but leave the manufacturer and product IDs unchanged.
Due to this practice, FreeBSD drivers usually rely on numeric IDs for device identification. Using numeric IDs and a centralized database complicates adding IDs and support for cards to the system. One must carefully check to see who really made the card, especially when it appears that the vendor who made the card might already have a different manufacturer ID listed in the central database. Linksys, D-Link, and NetGear are several US manufacturers of LAN hardware that often sell the same design. These same designs can be sold in Japan under names such as Buffalo and Corega. Often, these devices will all have the same manufacturer and product IDs.
The PC Card bus code keeps a central database of card information, but not which driver is associated with them, in [.filename]#/sys/dev/pccard/pccarddevs#. It also provides a set of macros that allow one to easily construct simple entries in the table the driver uses to claim devices.
Finally, some really low end devices do not contain manufacturer identification at all. These devices must be detected by matching the human readable CIS strings. While it would be nice if we did not need this method as a fallback, it is necessary for some very low end CD-ROM players and Ethernet cards. This method should generally be avoided, but a number of devices are listed in this section because they were added prior to the recognition of the OEM nature of the PC Card business. When adding new devices, prefer using the numeric method.
[[pccard-pccarddevs]]
=== Format of [.filename]#pccarddevs#
There are four sections in the [.filename]#pccarddevs# files. The first section lists the manufacturer numbers for vendors that use them. This section is sorted in numerical order. The next section has all of the products that are used by these vendors, along with their product ID numbers and a description string. The description string typically is not used (instead we set the device's description based on the human readable CIS, even if we match on the numeric version). These two sections are then repeated for devices that use the string matching method. Finally, C-style comments enclosed in `/*` and `*/` characters are allowed anywhere in the file.
The first section of the file contains the vendor IDs. Please keep this list sorted in numeric order. Also, please coordinate changes to this file because we share it with NetBSD to help facilitate a common clearing house for this information. For example, here are the first few vendor IDs:
[.programlisting]
....
vendor FUJITSU 0x0004 Fujitsu Corporation
vendor NETGEAR_2 0x000b Netgear
vendor PANASONIC 0x0032 Matsushita Electric Industrial Co.
vendor SANDISK 0x0045 Sandisk Corporation
....
Chances are very good that the `NETGEAR_2` entry is really an OEM that NETGEAR purchased cards from and the author of support for those cards was unaware at the time that Netgear was using someone else's ID. These entries are fairly straightforward. The vendor keyword denotes the kind of line that this is, followed by the name of the vendor. This name will be repeated later in [.filename]#pccarddevs#, as well as used in the driver's match tables, so keep it short and a valid C identifier. A numeric ID in hex identifies the manufacturer. Do not add IDs of the form `0xffffffff` or `0xffff` because these are reserved IDs (the former is "no ID set" while the latter is sometimes seen in extremely poor quality cards to try to indicate "none"). Finally there is a string description of the company that makes the card. This string is not used in FreeBSD for anything but commentary purposes.
The second section of the file contains the products. As shown in this example, the format is similar to the vendor lines:
[.programlisting]
....
/* Allied Telesis K.K. */
product ALLIEDTELESIS LA_PCM 0x0002 Allied Telesis LA-PCM
/* Archos */
product ARCHOS ARC_ATAPI 0x0043 MiniCD
....
The `product` keyword is followed by the vendor name, repeated from above. This is followed by the product name, which is used by the driver and should be a valid C identifier, but may also start with a number. As with the vendors, the hex product ID for this card follows the same convention for `0xffffffff` and `0xffff`. Finally, there is a string description of the device itself. This string typically is not used in FreeBSD, since FreeBSD's pccard bus driver will construct a string from the human readable CIS entries, but it can be used in the rare cases where this is somehow insufficient. The products are in alphabetical order by manufacturer, then numerical order by product ID. They have a C comment before each manufacturer's entries and there is a blank line between entries.
The third section is like the previous vendor section, but with all of the manufacturer numeric IDs set to `-1`, meaning "match anything found" in the FreeBSD pccard bus code. Since these are C identifiers, their names must be unique. Otherwise the format is identical to the first section of the file.
The final section contains the entries for those cards that must be identified by string entries. This section's format is a little different from the generic section:
[.programlisting]
....
product ADDTRON AWP100 { "Addtron", "AWP-100&spWireless&spPCMCIA", "Version&sp01.02", NULL }
product ALLIEDTELESIS WR211PCM { "Allied&spTelesis&spK.K.", "WR211PCM", NULL, NULL } Allied Telesis WR211PCM
....
The familiar `product` keyword is followed by the vendor name and the card name, just as in the second section of the file. Here the format deviates from that used earlier. There is a {} grouping, followed by a number of strings. These strings correspond to the vendor, product, and extra information that is defined in a CIS_INFO tuple. These strings are filtered by the program that generates [.filename]#pccarddevs.h# to replace &sp with a real space. NULL strings mean that the corresponding part of the entry should be ignored. The example shown here contains a bad entry. It should not contain the version number unless that is critical for the operation of the card. Sometimes vendors will have many different versions of the card in the field that all work, in which case that information only makes it harder for someone with a similar card to use it with FreeBSD. Sometimes it is necessary when a vendor wishes to sell many different parts under the same brand due to market considerations (availability, price, and so forth). Then it can be critical to disambiguating the card in those rare cases where the vendor kept the same manufacturer/product pair. Regular expression matching is not available at this time.
[[pccard-probe]]
=== Sample Probe Routine
To understand how to add a device to the list of supported devices, one must understand the probe and/or match routines that many drivers have. It is complicated a little in FreeBSD 5.x because there is a compatibility layer for OLDCARD present as well. Since only the window-dressing is different, an idealized version will be presented here.
[.programlisting]
....
static const struct pccard_product wi_pccard_products[] = {
PCMCIA_CARD(3COM, 3CRWE737A, 0),
PCMCIA_CARD(BUFFALO, WLI_PCM_S11, 0),
PCMCIA_CARD(BUFFALO, WLI_CF_S11G, 0),
PCMCIA_CARD(TDK, LAK_CD011WL, 0),
{ NULL }
};
static int
wi_pccard_probe(dev)
device_t dev;
{
const struct pccard_product *pp;
if ((pp = pccard_product_lookup(dev, wi_pccard_products,
sizeof(wi_pccard_products[0]), NULL)) != NULL) {
if (pp->pp_name != NULL)
device_set_desc(dev, pp->pp_name);
return (0);
}
return (ENXIO);
}
....
Here we have a simple pccard probe routine that matches a few devices. As stated above, the name may vary (if it is not `foo_pccard_probe()` it will be `foo_pccard_match()`). The function `pccard_product_lookup()` is a generalized function that walks the table and returns a pointer to the first entry that it matches. Some drivers may use this mechanism to convey additional information about some cards to the rest of the driver, so there may be some variance in the table. The only requirement is that each row of the table must have a `struct pccard_product` as the first element.
Looking at the table `wi_pccard_products`, one notices that all the entries are of the form `PCMCIA_CARD(_foo_, _bar_, _baz_)`. The _foo_ part is the manufacturer ID from [.filename]#pccarddevs#. The _bar_ part is the product ID. _baz_ is the expected function number for this card. Many pccards can have multiple functions, and some way to disambiguate function 1 from function 0 is needed. You may see `PCMCIA_CARD_D`, which includes the device description from [.filename]#pccarddevs#. You may also see `PCMCIA_CARD2` and `PCMCIA_CARD2_D` which are used when you need to match both CIS strings and manufacturer numbers, in the "use the default description" and "take the description from pccarddevs" flavors.
[[pccard-add]]
=== Putting it All Together
To add a new device, one must first obtain the identification information from the device. The easiest way to do this is to insert the device into a PC Card or CF slot and issue `devinfo -v`. Sample output:
[.programlisting]
....
cbb1 pnpinfo vendor=0x104c device=0xac51 subvendor=0x1265 subdevice=0x0300 class=0x060700 at slot=10 function=1
cardbus1
pccard1
unknown pnpinfo manufacturer=0x026f product=0x030c cisvendor="BUFFALO" cisproduct="WLI2-CF-S11" function_type=6 at function=0
....
`manufacturer` and `product` are the numeric IDs for this product, while `cisvendor` and `cisproduct` are the product description strings from the CIS.
Since the numeric method is preferred, first try to construct an entry based on it. The above card has been slightly fictionalized for the purposes of this example. The vendor is BUFFALO, which we see already has an entry:
[.programlisting]
....
vendor BUFFALO 0x026f BUFFALO (Melco Corporation)
....
But there is no entry for this particular card. Instead we find:
[.programlisting]
....
/* BUFFALO */
product BUFFALO WLI_PCM_S11 0x0305 BUFFALO AirStation 11Mbps WLAN
product BUFFALO LPC_CF_CLT 0x0307 BUFFALO LPC-CF-CLT
product BUFFALO LPC3_CLT 0x030a BUFFALO LPC3-CLT Ethernet Adapter
product BUFFALO WLI_CF_S11G 0x030b BUFFALO AirStation 11Mbps CF WLAN
....
To add the device, we can just add this entry to [.filename]#pccarddevs#:
[.programlisting]
....
product BUFFALO WLI2_CF_S11G 0x030c BUFFALO AirStation ultra 802.11b CF
....
Once these steps are complete, the card can be added to the driver. That is a simple operation of adding one line:
[.programlisting]
....
static const struct pccard_product wi_pccard_products[] = {
PCMCIA_CARD(3COM, 3CRWE737A, 0),
PCMCIA_CARD(BUFFALO, WLI_PCM_S11, 0),
PCMCIA_CARD(BUFFALO, WLI_CF_S11G, 0),
+ PCMCIA_CARD(BUFFALO, WLI2_CF_S11G, 0),
PCMCIA_CARD(TDK, LAK_CD011WL, 0),
{ NULL }
};
....
Note the '`+`' at the start of the line that I added; it is there simply to highlight the line. Do not add it to the actual driver. Once you have added the line, you can recompile your kernel or module and test it. If the device is recognized and works, please submit a patch. If it does not work, please figure out what is needed to make it work and submit a patch. If the device is not recognized at all, you have done something wrong and should recheck each step.
If you are a FreeBSD src committer, and everything appears to be working, then you can commit the changes to the tree. However, there are some minor tricky things to be considered. [.filename]#pccarddevs# must be committed to the tree first. Then [.filename]#pccarddevs.h# must be regenerated and committed as a second step, ensuring that the right $FreeBSD$ tag is in the latter file. Finally, commit the additions to the driver.
[[pccard-pr]]
=== Submitting a New Device
Please do not send entries for new devices to the author directly. Instead, submit them as a PR and send the author the PR number for his records. This ensures that entries are not lost. When submitting a PR, it is unnecessary to include the [.filename]#pccarddevs.h# diffs in the patch, since those will be regenerated. It is necessary to include a description of the device, as well as the patches to the client driver. If you do not know the name, use OEM99 as the name, and the author will adjust OEM99 accordingly after investigation. Committers should not commit OEM99, but instead find the highest OEM entry and commit one more than that.
diff --git a/documentation/content/en/books/arch-handbook/pci/_index.adoc b/documentation/content/en/books/arch-handbook/pci/_index.adoc
index b4a7b72c9c..5adb945dd5 100644
--- a/documentation/content/en/books/arch-handbook/pci/_index.adoc
+++ b/documentation/content/en/books/arch-handbook/pci/_index.adoc
@@ -1,405 +1,406 @@
---
title: Chapter 11. PCI Devices
prev: books/arch-handbook/isa
next: books/arch-handbook/scsi
+description: PCI Devices
---
[[pci]]
= PCI Devices
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 11
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
This chapter will talk about the FreeBSD mechanisms for writing a device driver for a device on a PCI bus.
[[pci-probe]]
== Probe and Attach
This section describes how the PCI bus code iterates through the unattached devices to see whether a newly loaded kld will attach to any of them.
=== Sample Driver Source ([.filename]#mypci.c#)
[.programlisting]
....
/*
* Simple KLD to play with the PCI functions.
*
* Murray Stokely
*/
#include <sys/param.h> /* defines used in kernel.h */
#include <sys/module.h>
#include <sys/systm.h>
#include <sys/errno.h>
#include <sys/kernel.h> /* types used in module initialization */
#include <sys/conf.h> /* cdevsw struct */
#include <sys/uio.h> /* uio struct */
#include <sys/malloc.h>
#include <sys/bus.h> /* structs, prototypes for pci bus stuff and DEVMETHOD macros! */
#include <machine/bus.h>
#include <sys/rman.h>
#include <machine/resource.h>
#include <dev/pci/pcivar.h> /* For pci_get macros! */
#include <dev/pci/pcireg.h>
/* The softc holds our per-instance data. */
struct mypci_softc {
device_t my_dev;
struct cdev *my_cdev;
};
/* Function prototypes */
static d_open_t mypci_open;
static d_close_t mypci_close;
static d_read_t mypci_read;
static d_write_t mypci_write;
/* Character device entry points */
static struct cdevsw mypci_cdevsw = {
.d_version = D_VERSION,
.d_open = mypci_open,
.d_close = mypci_close,
.d_read = mypci_read,
.d_write = mypci_write,
.d_name = "mypci",
};
/*
* In the cdevsw routines, we find our softc by using the si_drv1 member
* of struct cdev. We set this variable to point to our softc in our
* attach routine when we create the /dev entry.
*/
int
mypci_open(struct cdev *dev, int oflags, int devtype, struct thread *td)
{
struct mypci_softc *sc;
/* Look up our softc. */
sc = dev->si_drv1;
device_printf(sc->my_dev, "Opened successfully.\n");
return (0);
}
int
mypci_close(struct cdev *dev, int fflag, int devtype, struct thread *td)
{
struct mypci_softc *sc;
/* Look up our softc. */
sc = dev->si_drv1;
device_printf(sc->my_dev, "Closed.\n");
return (0);
}
int
mypci_read(struct cdev *dev, struct uio *uio, int ioflag)
{
struct mypci_softc *sc;
/* Look up our softc. */
sc = dev->si_drv1;
device_printf(sc->my_dev, "Asked to read %zd bytes.\n", uio->uio_resid);
return (0);
}
int
mypci_write(struct cdev *dev, struct uio *uio, int ioflag)
{
struct mypci_softc *sc;
/* Look up our softc. */
sc = dev->si_drv1;
device_printf(sc->my_dev, "Asked to write %zd bytes.\n", uio->uio_resid);
return (0);
}
/* PCI Support Functions */
/*
* Compare the device ID of this device against the IDs that this driver
* supports. If there is a match, set the description and return success.
*/
static int
mypci_probe(device_t dev)
{
device_printf(dev, "MyPCI Probe\nVendor ID : 0x%x\nDevice ID : 0x%x\n",
pci_get_vendor(dev), pci_get_device(dev));
if (pci_get_vendor(dev) == 0x11c1) {
printf("We've got the Winmodem, probe successful!\n");
device_set_desc(dev, "WinModem");
return (BUS_PROBE_DEFAULT);
}
return (ENXIO);
}
/* Attach function is only called if the probe is successful. */
static int
mypci_attach(device_t dev)
{
struct mypci_softc *sc;
printf("MyPCI Attach for : deviceID : 0x%x\n", pci_get_devid(dev));
/* Look up our softc and initialize its fields. */
sc = device_get_softc(dev);
sc->my_dev = dev;
/*
* Create a /dev entry for this device. The kernel will assign us
* a major number automatically. We use the unit number of this
* device as the minor number and name the character device
* "mypci<unit>".
*/
sc->my_cdev = make_dev(&mypci_cdevsw, device_get_unit(dev),
UID_ROOT, GID_WHEEL, 0600, "mypci%u", device_get_unit(dev));
sc->my_cdev->si_drv1 = sc;
printf("Mypci device loaded.\n");
return (0);
}
/* Detach device. */
static int
mypci_detach(device_t dev)
{
struct mypci_softc *sc;
/* Teardown the state in our softc created in our attach routine. */
sc = device_get_softc(dev);
destroy_dev(sc->my_cdev);
printf("Mypci detach!\n");
return (0);
}
/* Called during system shutdown after sync. */
static int
mypci_shutdown(device_t dev)
{
printf("Mypci shutdown!\n");
return (0);
}
/*
* Device suspend routine.
*/
static int
mypci_suspend(device_t dev)
{
printf("Mypci suspend!\n");
return (0);
}
/*
* Device resume routine.
*/
static int
mypci_resume(device_t dev)
{
printf("Mypci resume!\n");
return (0);
}
static device_method_t mypci_methods[] = {
/* Device interface */
DEVMETHOD(device_probe, mypci_probe),
DEVMETHOD(device_attach, mypci_attach),
DEVMETHOD(device_detach, mypci_detach),
DEVMETHOD(device_shutdown, mypci_shutdown),
DEVMETHOD(device_suspend, mypci_suspend),
DEVMETHOD(device_resume, mypci_resume),
DEVMETHOD_END
};
static devclass_t mypci_devclass;
DEFINE_CLASS_0(mypci, mypci_driver, mypci_methods, sizeof(struct mypci_softc));
DRIVER_MODULE(mypci, pci, mypci_driver, mypci_devclass, 0, 0);
....
=== [.filename]#Makefile# for Sample Driver
[.programlisting]
....
# Makefile for mypci driver
KMOD= mypci
SRCS= mypci.c
SRCS+= device_if.h bus_if.h pci_if.h
.include <bsd.kmod.mk>
....
If you place the above source file and [.filename]#Makefile# into a directory, you may run `make` to compile the sample driver. Additionally, you may run `make load` to load the driver into the currently running kernel and `make unload` to unload the driver after it is loaded.
=== Additional Resources
* http://www.pcisig.org/[PCI Special Interest Group]
* PCI System Architecture, Fourth Edition by Tom Shanley, et al.
[[pci-bus]]
== Bus Resources
FreeBSD provides an object-oriented mechanism for requesting resources from a parent bus. Almost all devices will be a child member of some sort of bus (PCI, ISA, USB, SCSI, etc) and these devices need to acquire resources from their parent bus (such as memory segments, interrupt lines, or DMA channels).
=== Base Address Registers
To do anything particularly useful with a PCI device you will need to obtain the _Base Address Registers_ (BARs) from the PCI Configuration space. The PCI-specific details of obtaining the BAR are abstracted in the `bus_alloc_resource()` function.
For example, a typical driver might have something similar to this in the `attach()` function:
[.programlisting]
....
sc->bar0id = PCIR_BAR(0);
sc->bar0res = bus_alloc_resource(dev, SYS_RES_MEMORY, &sc->bar0id,
0, ~0, 1, RF_ACTIVE);
if (sc->bar0res == NULL) {
printf("Memory allocation of PCI base register 0 failed!\n");
error = ENXIO;
goto fail1;
}
sc->bar1id = PCIR_BAR(1);
sc->bar1res = bus_alloc_resource(dev, SYS_RES_MEMORY, &sc->bar1id,
0, ~0, 1, RF_ACTIVE);
if (sc->bar1res == NULL) {
printf("Memory allocation of PCI base register 1 failed!\n");
error = ENXIO;
goto fail2;
}
sc->bar0_bt = rman_get_bustag(sc->bar0res);
sc->bar0_bh = rman_get_bushandle(sc->bar0res);
sc->bar1_bt = rman_get_bustag(sc->bar1res);
sc->bar1_bh = rman_get_bushandle(sc->bar1res);
....
Handles for each base address register are kept in the `softc` structure so that they can be used to write to the device later.
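For example, the `softc` that holds these handles might contain fields like the following hypothetical sketch (the names are chosen to match the fragments above and are not taken from a real driver):

[.programlisting]
....
struct ni_softc {
	int			bar0id, bar1id;		/* resource IDs of the BARs */
	struct resource		*bar0res, *bar1res;
	bus_space_tag_t		bar0_bt, bar1_bt;
	bus_space_handle_t	bar0_bh, bar1_bh;
};
....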
These handles can then be used to read or write from the device registers with the `bus_space_*` functions. For example, a driver might contain a shorthand function to read from a board specific register like this:
[.programlisting]
....
uint16_t
board_read(struct ni_softc *sc, uint16_t address)
{
return bus_space_read_2(sc->bar1_bt, sc->bar1_bh, address);
}
....
Similarly, one could write to the registers with:
[.programlisting]
....
void
board_write(struct ni_softc *sc, uint16_t address, uint16_t value)
{
bus_space_write_2(sc->bar1_bt, sc->bar1_bh, address, value);
}
....
These functions exist in 8bit, 16bit, and 32bit versions and you should use `bus_space_{read|write}_{1|2|4}` accordingly.
[NOTE]
====
In FreeBSD 7.0 and later, you can use the `bus_*` functions instead of `bus_space_*`. The `bus_*` functions take a struct resource * pointer instead of a bus tag and handle. Thus, you could drop the bus tag and bus handle members from the `softc` and rewrite the `board_read()` function as:
[.programlisting]
....
uint16_t
board_read(struct ni_softc *sc, uint16_t address)
{
return (bus_read(sc->bar1res, address));
}
....
====
=== Interrupts
Interrupts are allocated from the object-oriented bus code in a way similar to the memory resources. First an IRQ resource must be allocated from the parent bus, and then the interrupt handler must be set up to deal with this IRQ.
Again, a sample from a device `attach()` function says more than words.
[.programlisting]
....
/* Get the IRQ resource */
sc->irqid = 0x0;
sc->irqres = bus_alloc_resource(dev, SYS_RES_IRQ, &(sc->irqid),
0, ~0, 1, RF_SHAREABLE | RF_ACTIVE);
if (sc->irqres == NULL) {
printf("IRQ allocation failed!\n");
error = ENXIO;
goto fail3;
}
/* Now we should set up the interrupt handler */
error = bus_setup_intr(dev, sc->irqres, INTR_TYPE_MISC,
my_handler, sc, &(sc->handler));
if (error) {
printf("Couldn't set up irq\n");
goto fail4;
}
....
Some care must be taken in the detach routine of the driver. You must quiesce the device's interrupt stream, and remove the interrupt handler. Once `bus_teardown_intr()` has returned, you know that your interrupt handler will no longer be called and that all threads that might have been executing this interrupt handler have returned. Since this function can sleep, you must not hold any mutexes when calling this function.
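As a rough sketch, assuming the `softc` fields from the fragment above and a hypothetical `xxx_hw_quiesce()` helper that makes the hardware stop raising interrupts, the relevant part of a `detach()` routine might look like:

[.programlisting]
....
static int
xxx_detach(device_t dev)
{
	struct xxx_softc *sc = device_get_softc(dev);

	xxx_hw_quiesce(sc);	/* hypothetical: silence the hardware first */
	/* After this returns, the handler can no longer be running. */
	bus_teardown_intr(dev, sc->irqres, sc->handler);
	bus_release_resource(dev, SYS_RES_IRQ, sc->irqid, sc->irqres);
	return (0);
}
....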
=== DMA
This section is obsolete and is present only for historical reasons. The proper method for dealing with these issues is to use the man:busdma[9] functions instead. This paragraph can be removed when this section is updated to reflect that usage; however, at the moment the API is in a bit of flux, so once that settles down it would be good to update this section to reflect it.
On the PC, peripherals that want to do bus-mastering DMA must deal with physical addresses. This is a problem since FreeBSD uses virtual memory and deals almost exclusively with virtual addresses. Fortunately, there is a function, `vtophys()` to help.
[.programlisting]
....
#include <vm/vm.h>
#include <vm/pmap.h>
#define vtophys(virtual_address) (...)
....
The solution is a bit different on the alpha however, and what we really want is a function called `vtobus()`.
[.programlisting]
....
#if defined(__alpha__)
#define vtobus(va) alpha_XXX_dmamap((vm_offset_t)va)
#else
#define vtobus(va) vtophys(va)
#endif
....
=== Deallocating Resources
It is very important to deallocate all of the resources that were allocated during `attach()`. Care must be taken to deallocate the correct stuff even on a failure condition so that the system will remain usable while your driver dies.
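A common idiom, sketched here using the `fail1` through `fail4` labels and `softc` fields from the fragments above, is a cascade of cleanup labels at the end of `attach()` so that each failure point releases exactly what had already been allocated:

[.programlisting]
....
	/* ... attach succeeded ... */
	return (0);

fail4:
	bus_release_resource(dev, SYS_RES_IRQ, sc->irqid, sc->irqres);
fail3:
	bus_release_resource(dev, SYS_RES_MEMORY, sc->bar1id, sc->bar1res);
fail2:
	bus_release_resource(dev, SYS_RES_MEMORY, sc->bar0id, sc->bar0res);
fail1:
	return (error);
....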
diff --git a/documentation/content/en/books/arch-handbook/scsi/_index.adoc b/documentation/content/en/books/arch-handbook/scsi/_index.adoc
index 6192c65cbd..49fcd635d4 100644
--- a/documentation/content/en/books/arch-handbook/scsi/_index.adoc
+++ b/documentation/content/en/books/arch-handbook/scsi/_index.adoc
@@ -1,1355 +1,1356 @@
---
title: Chapter 12. Common Access Method SCSI Controllers
prev: books/arch-handbook/pci
next: books/arch-handbook/usb
+description: Common Access Method SCSI Controllers
---
[[scsi]]
= Common Access Method SCSI Controllers
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 12
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[scsi-synopsis]]
== Synopsis
This document assumes that the reader has a general understanding of device drivers in FreeBSD and of the SCSI protocol. Much of the information in this document was extracted from the drivers:
* ncr ([.filename]#/sys/pci/ncr.c#) by Wolfgang Stanglmeier and Stefan Esser
* sym ([.filename]#/sys/dev/sym/sym_hipd.c#) by Gerard Roudier
* aic7xxx ([.filename]#/sys/dev/aic7xxx/aic7xxx.c#) by Justin T. Gibbs
and from the CAM code itself (by Justin T. Gibbs, see [.filename]#/sys/cam/*#). When some solution looked the most logical and was essentially verbatim extracted from the code by Justin T. Gibbs, I marked it as "recommended".
The document is illustrated with examples in pseudo-code. Although sometimes the examples have many details and look like real code, it is still pseudo-code. It was written to demonstrate the concepts in an understandable way. For a real driver other approaches may be more modular and efficient. It also abstracts from the hardware details, as well as issues that would cloud the demonstrated concepts or that are supposed to be described in the other chapters of the developers handbook. Such details are commonly shown as calls to functions with descriptive names, comments or pseudo-statements. Fortunately real life full-size examples with all the details can be found in the real drivers.
[[scsi-general]]
== General Architecture
CAM stands for Common Access Method. It is a generic way to address the I/O buses in a SCSI-like way. This allows a separation of the generic device drivers from the drivers controlling the I/O bus: for example the disk driver becomes able to control disks on SCSI, IDE, or any other bus, so the disk driver portion does not have to be rewritten (or copied and modified) for every new I/O bus. Thus the two most important active entities are:
* _Peripheral Modules_ - a driver for peripheral devices (disk, tape, CD-ROM, etc.)
* _SCSI Interface Modules_ (SIM) - Host Bus Adapter drivers for connecting to an I/O bus such as SCSI or IDE.
A peripheral driver receives requests from the OS, converts them to a sequence of SCSI commands and passes these SCSI commands to a SCSI Interface Module. The SCSI Interface Module is responsible for passing these commands to the actual hardware (or if the actual hardware is not SCSI but, for example, IDE then also converting the SCSI commands to the native commands of the hardware).
As we are interested in writing a SCSI adapter driver here, from this point on we will consider everything from the SIM standpoint.
A typical SIM driver needs to include the following CAM-related header files:
[.programlisting]
....
#include <cam/cam.h>
#include <cam/cam_ccb.h>
#include <cam/cam_sim.h>
#include <cam/cam_xpt_sim.h>
#include <cam/cam_debug.h>
#include <cam/scsi/scsi_all.h>
....
The first thing each SIM driver must do is register itself with the CAM subsystem. This is done during the driver's `xxx_attach()` function (here and further xxx_ is used to denote the unique driver name prefix). The `xxx_attach()` function itself is called by the system bus auto-configuration code which we do not describe here.
This is achieved in multiple steps: first it is necessary to allocate the queue of requests associated with this SIM:
[.programlisting]
....
struct cam_devq *devq;
if(( devq = cam_simq_alloc(SIZE) )==NULL) {
error; /* some code to handle the error */
}
....
Here `SIZE` is the size of the queue to be allocated, that is, the maximal number of requests it can contain. It is the number of requests that the SIM driver can handle in parallel on one SCSI card. Commonly it can be calculated as:
[.programlisting]
....
SIZE = NUMBER_OF_SUPPORTED_TARGETS * MAX_SIMULTANEOUS_COMMANDS_PER_TARGET
....
Next we create a descriptor of our SIM:
[.programlisting]
....
struct cam_sim *sim;
if(( sim = cam_sim_alloc(action_func, poll_func, driver_name,
softc, unit, mtx, max_dev_transactions,
max_tagged_dev_transactions, devq) )==NULL) {
cam_simq_free(devq);
error; /* some code to handle the error */
}
....
Note that if we are not able to create a SIM descriptor we free the `devq` also because we can do nothing else with it and we want to conserve memory.
If a SCSI card has multiple SCSI buses on it then each bus requires its own `cam_sim` structure.
An interesting question is what to do if a SCSI card has more than one SCSI bus, do we need one `devq` structure per card or per SCSI bus? The answer given in the comments to the CAM code is: either way, as the driver's author prefers.
The arguments are:
* `action_func` - pointer to the driver's `xxx_action` function.
+
[source,c]
----
static void xxx_action(struct cam_sim *sim, union ccb *ccb);
----
* `poll_func` - pointer to the driver's `xxx_poll()`
+
[source,c]
----
static void xxx_poll(struct cam_sim *sim);
----
* driver_name - the name of the actual driver, such as "ncr" or "wds".
* `softc` - pointer to the driver's internal descriptor for this SCSI card. This pointer will be used by the driver in future to get private data.
* unit - the controller unit number, for example for controller "mps0" this number will be 0
* mtx - Lock associated with this SIM. For SIMs that don't know about locking, pass in Giant. For SIMs that do, pass in the lock used to guard this SIM's data structures. This lock will be held when xxx_action and xxx_poll are called.
* max_dev_transactions - maximal number of simultaneous transactions per SCSI target in the non-tagged mode. This value will be almost universally equal to 1, with possible exceptions only for the non-SCSI cards. Also the drivers that hope to take advantage by preparing one transaction while another one is executed may set it to 2 but this does not seem to be worth the complexity.
* max_tagged_dev_transactions - the same thing, but in the tagged mode. Tags are the SCSI way to initiate multiple transactions on a device: each transaction is assigned a unique tag and the transaction is sent to the device. When the device completes some transaction it sends back the result together with the tag so that the SCSI adapter (and the driver) can tell which transaction was completed. This argument is also known as the maximal tag depth. It depends on the abilities of the SCSI adapter.
Finally we register the SCSI buses associated with our SCSI adapter:
[.programlisting]
....
if(xpt_bus_register(sim, softc, bus_number) != CAM_SUCCESS) {
cam_sim_free(sim, /*free_devq*/ TRUE);
error; /* some code to handle the error */
}
....
If there is one `devq` structure per SCSI bus (i.e., we consider a card with multiple buses as multiple cards with one bus each) then the bus number will always be 0; otherwise each bus on the SCSI card should get a distinct number. Each bus needs its own separate cam_sim structure.
After that our controller is completely hooked to the CAM system. The value of `devq` can be discarded now: sim will be passed as an argument in all further calls from CAM and devq can be derived from it.
CAM provides a framework for asynchronous events. Some events originate from the lower levels (the SIM drivers), some events originate from the peripheral drivers, some events originate from the CAM subsystem itself. Any driver can register callbacks for some types of asynchronous events, so that it will be notified when these events occur.
A typical example of such an event is a device reset. Each transaction and event identifies the devices to which it applies by the means of "path". The target-specific events normally occur during a transaction with this device. So the path from that transaction may be re-used to report this event (this is safe because the event path is copied in the event reporting routine but not deallocated nor passed anywhere further). Also it is safe to allocate paths dynamically at any time including the interrupt routines, although that incurs certain overhead, and a possible problem with this approach is that there may be no free memory at that time. For a bus reset event we need to define a wildcard path including all devices on the bus. So we can create the path for the future bus reset events in advance and avoid problems with the future memory shortage:
[.programlisting]
....
struct cam_path *path;
if(xpt_create_path(&path, /*periph*/NULL,
cam_sim_path(sim), CAM_TARGET_WILDCARD,
CAM_LUN_WILDCARD) != CAM_REQ_CMP) {
xpt_bus_deregister(cam_sim_path(sim));
cam_sim_free(sim, /*free_devq*/TRUE);
error; /* some code to handle the error */
}
softc->wpath = path;
softc->sim = sim;
....
As you can see the path includes:
* ID of the peripheral driver (NULL here because we have none)
* ID of the SIM driver (`cam_sim_path(sim)`)
* SCSI target number of the device (CAM_TARGET_WILDCARD means "all devices")
* SCSI LUN number of the subdevice (CAM_LUN_WILDCARD means "all LUNs")
If the driver can not allocate this path it will not be able to work normally, so in that case we dismantle that SCSI bus.
And we save the path pointer in the `softc` structure for future use. After that we save the value of sim (or we can also discard it on exit from `xxx_attach()` if we wish).
That is all for a minimalistic initialization. To do things right there is one more issue left.
For a SIM driver there is one particularly interesting event: when a target device is considered lost. In this case resetting the SCSI negotiations with this device may be a good idea. So we register a callback for this event with CAM. The request is passed to CAM by requesting CAM action on a CAM control block for this type of request:
[.programlisting]
....
struct ccb_setasync csa;
xpt_setup_ccb(&csa.ccb_h, path, /*priority*/5);
csa.ccb_h.func_code = XPT_SASYNC_CB;
csa.event_enable = AC_LOST_DEVICE;
csa.callback = xxx_async;
csa.callback_arg = sim;
xpt_action((union ccb *)&csa);
....
Now we take a look at the `xxx_action()` and `xxx_poll()` driver entry points.
[source,c]
----
static void xxx_action(struct cam_sim *sim, union ccb *ccb);
----
Do some action on request of the CAM subsystem. Sim describes the SIM for the request, CCB is the request itself. CCB stands for "CAM Control Block". It is a union of many specific instances, each describing arguments for some type of transactions. All of these instances share the CCB header where the common part of arguments is stored.
CAM supports the SCSI controllers working in both initiator ("normal") mode and target (simulating a SCSI device) mode. Here we only consider the part relevant to the initiator mode.
There are a few functions and macros (in other words, methods) defined to access the public data in the struct sim:
* `cam_sim_path(sim)` - the path ID (see above)
* `cam_sim_name(sim)` - the name of the sim
* `cam_sim_softc(sim)` - the pointer to the softc (driver private data) structure
* `cam_sim_unit(sim)` - the unit number
* `cam_sim_bus(sim)` - the bus ID
To identify the device, `xxx_action()` can get the unit number and pointer to its structure softc using these functions.
The type of request is stored in `ccb->ccb_h.func_code`. So generally `xxx_action()` consists of a big switch:
[.programlisting]
....
struct xxx_softc *softc = (struct xxx_softc *) cam_sim_softc(sim);
struct ccb_hdr *ccb_h = &ccb->ccb_h;
int unit = cam_sim_unit(sim);
int bus = cam_sim_bus(sim);
switch(ccb_h->func_code) {
case ...:
...
default:
ccb_h->status = CAM_REQ_INVALID;
xpt_done(ccb);
break;
}
....
As can be seen from the default case (if an unknown command was received) the return code of the command is set into `ccb->ccb_h.status` and the completed CCB is returned back to CAM by calling `xpt_done(ccb)`.
`xpt_done()` does not have to be called from `xxx_action()`: For example an I/O request may be enqueued inside the SIM driver and/or its SCSI controller. Then when the device would post an interrupt signaling that the processing of this request is complete `xpt_done()` may be called from the interrupt handling routine.
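For instance, the interrupt handler of the SIM driver might complete a request along these lines (a hypothetical sketch using the `hcb` and `free_hcb()` conventions of this chapter):

[.programlisting]
....
static void
xxx_intr_complete(struct xxx_softc *softc, struct xxx_hcb *hcb)
{
	union ccb *ccb = hcb->ccb;

	ccb->ccb_h.status = CAM_REQ_CMP;	/* the transaction completed OK */
	free_hcb(hcb);		/* also removes hcb from any internal lists */
	xpt_done(ccb);
}
....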
Actually, the CCB status is not only assigned as a return code but a CCB has some status all the time. Before a CCB is passed to the `xxx_action()` routine it gets the status CAM_REQ_INPROG meaning that it is in progress. There are a surprising number of status values defined in [.filename]#/sys/cam/cam.h# which should be able to represent the status of a request in great detail. More interesting yet, the status is in fact a "bitwise or" of an enumerated status value (the lower 6 bits) and possible additional flag-like bits (the upper bits). The enumerated values will be discussed later in more detail. The summary of them can be found in the Errors Summary section. The possible status flags are:
* _CAM_DEV_QFRZN_ - if the SIM driver gets a serious error (for example, the device does not respond to the selection or breaks the SCSI protocol) when processing a CCB it should freeze the request queue by calling `xpt_freeze_simq()`, return the other enqueued but not processed yet CCBs for this device back to the CAM queue, then set this flag for the troublesome CCB and call `xpt_done()`. This flag causes the CAM subsystem to unfreeze the queue after it handles the error.
* _CAM_AUTOSNS_VALID_ - if the device returned an error condition and the flag CAM_DIS_AUTOSENSE is not set in CCB the SIM driver must execute the REQUEST SENSE command automatically to extract the sense (extended error information) data from the device. If this attempt was successful the sense data should be saved in the CCB and this flag set.
* _CAM_RELEASE_SIMQ_ - like CAM_DEV_QFRZN but used in case there is some problem (or resource shortage) with the SCSI controller itself. Then all the future requests to the controller should be stopped by `xpt_freeze_simq()`. The controller queue will be restarted after the SIM driver overcomes the shortage and informs CAM by returning some CCB with this flag set.
* _CAM_SIM_QUEUED_ - when SIM puts a CCB into its request queue this flag should be set (and removed when this CCB gets dequeued before being returned back to CAM). This flag is not used anywhere in the CAM code now, so its purpose is purely diagnostic.
* _CAM_QOS_VALID_ - The QOS data is now valid.
The function `xxx_action()` is not allowed to sleep, so all the synchronization for resource access must be done using SIM or device queue freezing. Besides the aforementioned flags the CAM subsystem provides functions `xpt_release_simq()` and `xpt_release_devq()` to unfreeze the queues directly, without passing a CCB to CAM.
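For example, when a resource shortage ends and the driver does not happen to have a CCB to return with `CAM_RELEASE_SIMQ` set, it might unfreeze the queue directly; this is a small sketch assuming the `RESOURCE_SHORTAGE` flag used elsewhere in this chapter:

[.programlisting]
....
if (softc->flags & RESOURCE_SHORTAGE) {
	softc->flags &= ~RESOURCE_SHORTAGE;
	xpt_release_simq(sim, /*run_queue*/ 1);
}
....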
The CCB header contains the following fields:
* _path_ - path ID for the request
* _target_id_ - target device ID for the request
* _target_lun_ - LUN ID of the target device
* _timeout_ - timeout interval for this command, in milliseconds
* _timeout_ch_ - a convenience place for the SIM driver to store the timeout handle (the CAM subsystem itself does not make any assumptions about it)
* _flags_ - various bits of information about the request
* _spriv_ptr0_, _spriv_ptr1_ - fields reserved for private use by the SIM driver (such as linking to the SIM queues or SIM private control blocks); actually, they exist as unions: spriv_ptr0 and spriv_ptr1 have the type (void *), spriv_field0 and spriv_field1 have the type unsigned long, sim_priv.entries[0].bytes and sim_priv.entries[1].bytes are byte arrays of the size consistent with the other incarnations of the union, and sim_priv.bytes is one array, twice as big.
The recommended way of using the SIM private fields of CCB is to define some meaningful names for them and use these meaningful names in the driver, like:
[.programlisting]
....
#define ccb_some_meaningful_name sim_priv.entries[0].bytes
#define ccb_hcb spriv_ptr1 /* for hardware control block */
....
The most common initiator mode requests are:
* _XPT_SCSI_IO_ - execute an I/O transaction
+
The instance "struct ccb_scsiio csio" of the union ccb is used to transfer the arguments. They are:
** _cdb_io_ - pointer to the SCSI command buffer or the buffer itself
** _cdb_len_ - SCSI command length
** _data_ptr_ - pointer to the data buffer (gets a bit complicated if scatter/gather is used)
** _dxfer_len_ - length of the data to transfer
** _sglist_cnt_ - counter of the scatter/gather segments
** _scsi_status_ - place to return the SCSI status
** _sense_data_ - buffer for the SCSI sense information if the command returns an error (the SIM driver is supposed to run the REQUEST SENSE command automatically in this case if the CCB flag CAM_DIS_AUTOSENSE is not set)
** _sense_len_ - the length of that buffer (if it happens to be higher than the size of sense_data the SIM driver must silently assume the smaller value)
** _resid_, _sense_resid_ - if the transfer of data or SCSI sense returned an error these are the returned counters of the residual (not transferred) data. They do not seem to be especially meaningful, so in a case when they are difficult to compute (say, counting bytes in the SCSI controller's FIFO buffer) an approximate value will do as well. For a successfully completed transfer they must be set to zero.
** _tag_action_ - the kind of tag to use:
*** CAM_TAG_ACTION_NONE - do not use tags for this transaction
*** MSG_SIMPLE_Q_TAG, MSG_HEAD_OF_Q_TAG, MSG_ORDERED_Q_TAG - value equal to the appropriate tag message (see /sys/cam/scsi/scsi_message.h); this gives only the tag type, the SIM driver must assign the tag value itself
+
The general logic of handling this request is the following:
+
The first thing to do is to check for possible races, to make sure that the command did not get aborted when it was sitting in the queue:
+
[.programlisting]
....
struct ccb_scsiio *csio = &ccb->csio;
if ((ccb_h->status & CAM_STATUS_MASK) != CAM_REQ_INPROG) {
xpt_done(ccb);
return;
}
....
+
Also we check that the device is supported at all by our controller:
+
[.programlisting]
....
if(ccb_h->target_id > OUR_MAX_SUPPORTED_TARGET_ID
|| ccb_h->target_id == OUR_SCSI_CONTROLLERS_OWN_ID) {
ccb_h->status = CAM_TID_INVALID;
xpt_done(ccb);
return;
}
if(ccb_h->target_lun > OUR_MAX_SUPPORTED_LUN) {
ccb_h->status = CAM_LUN_INVALID;
xpt_done(ccb);
return;
}
....
+
Then allocate whatever data structures (such as a card-dependent hardware control block) we need to process this request. If we can not then freeze the SIM queue and remember that we have a pending operation, return the CCB back and ask CAM to re-queue it. Later when the resources become available the SIM queue must be unfrozen by returning a ccb with the `CAM_RELEASE_SIMQ` bit set in its status. Otherwise, if all went well, link the CCB with the hardware control block (HCB) and mark it as queued.
+
[.programlisting]
....
struct xxx_hcb *hcb = allocate_hcb(softc, unit, bus);
if(hcb == NULL) {
softc->flags |= RESOURCE_SHORTAGE;
xpt_freeze_simq(sim, /*count*/1);
ccb_h->status = CAM_REQUEUE_REQ;
xpt_done(ccb);
return;
}
hcb->ccb = ccb; ccb_h->ccb_hcb = (void *)hcb;
ccb_h->status |= CAM_SIM_QUEUED;
....
+
Extract the target data from the CCB into the hardware control block. Check if we are asked to assign a tag and if so then generate a unique tag and build the SCSI tag messages. The SIM driver is also responsible for negotiations with the devices to set the maximal mutually supported bus width, synchronous rate and offset.
+
[.programlisting]
....
hcb->target = ccb_h->target_id; hcb->lun = ccb_h->target_lun;
generate_identify_message(hcb);
if( ccb_h->tag_action != CAM_TAG_ACTION_NONE )
generate_unique_tag_message(hcb, ccb_h->tag_action);
if( !target_negotiated(hcb) )
generate_negotiation_messages(hcb);
....
+
Then set up the SCSI command. The command storage may be specified in the CCB in many interesting ways, specified by the CCB flags. The command buffer can be contained in the CCB or pointed to; in the latter case the pointer may be physical or virtual. Since the hardware commonly needs a physical address we always convert the address to the physical one, typically using the busdma API.
+
If a physical address is requested it is OK to return the CCB with the status `CAM_REQ_INVALID`; the current drivers do that. If necessary a physical address can also be converted or mapped back to a virtual address, but with great pain, so we do not do that.
+
[.programlisting]
....
if(ccb_h->flags & CAM_CDB_POINTER) {
/* CDB is a pointer */
if(!(ccb_h->flags & CAM_CDB_PHYS)) {
/* CDB pointer is virtual */
hcb->cmd = vtobus(csio->cdb_io.cdb_ptr);
} else {
/* CDB pointer is physical */
hcb->cmd = csio->cdb_io.cdb_ptr ;
}
} else {
/* CDB is in the ccb (buffer) */
hcb->cmd = vtobus(csio->cdb_io.cdb_bytes);
}
hcb->cmdlen = csio->cdb_len;
....
+
Now it is time to set up the data. Again, the data storage may be specified in the CCB in many interesting ways, specified by the CCB flags. First we get the direction of the data transfer. The simplest case is if there is no data to transfer:
+
[.programlisting]
....
int dir = (ccb_h->flags & CAM_DIR_MASK);
if (dir == CAM_DIR_NONE)
goto end_data;
....
+
Then we check if the data is in one chunk or in a scatter-gather list, and whether the addresses are physical or virtual. The SCSI controller may be able to handle only a limited number of chunks of limited length. If the request hits this limitation we return an error. We use a special function to return the CCB so that the HCB resource shortages are handled in one place. The functions to add chunks are driver-dependent, and here we leave them without detailed implementation. See the description of the SCSI command (CDB) handling for the details on the address-translation issues. If some variation is too difficult or impossible to implement with a particular card it is OK to return the status `CAM_REQ_INVALID`. Actually, it seems like the scatter-gather ability is not used anywhere in the CAM code now. But at least the case for a single non-scattered virtual buffer must be implemented; it is actively used by CAM.
+
[.programlisting]
....
int rv;
initialize_hcb_for_data(hcb);
if(!(ccb_h->flags & CAM_SCATTER_VALID)) {
    /* single buffer */
    if(!(ccb_h->flags & CAM_DATA_PHYS)) {
        rv = add_virtual_chunk(hcb, csio->data_ptr, csio->dxfer_len, dir);
    } else {
        rv = add_physical_chunk(hcb, csio->data_ptr, csio->dxfer_len, dir);
    }
} else {
int i;
struct bus_dma_segment *segs;
segs = (struct bus_dma_segment *)csio->data_ptr;
if ((ccb_h->flags & CAM_SG_LIST_PHYS) != 0) {
/* The SG list pointer is physical */
rv = setup_hcb_for_physical_sg_list(hcb, segs, csio->sglist_cnt);
} else if (!(ccb_h->flags & CAM_DATA_PHYS)) {
/* SG buffer pointers are virtual */
for (i = 0; i < csio->sglist_cnt; i++) {
rv = add_virtual_chunk(hcb, segs[i].ds_addr,
segs[i].ds_len, dir);
if (rv != CAM_REQ_CMP)
break;
}
} else {
/* SG buffer pointers are physical */
for (i = 0; i < csio->sglist_cnt; i++) {
rv = add_physical_chunk(hcb, segs[i].ds_addr,
segs[i].ds_len, dir);
if (rv != CAM_REQ_CMP)
break;
}
}
}
if(rv != CAM_REQ_CMP) {
/* we expect that add_*_chunk() functions return CAM_REQ_CMP
* if they added a chunk successfully, CAM_REQ_TOO_BIG if
* the request is too big (too many bytes or too many chunks),
* CAM_REQ_INVALID in case of other troubles
*/
free_hcb_and_ccb_done(hcb, ccb, rv);
return;
}
end_data:
....
+
If disconnection is disabled for this CCB we pass this information to the hcb:
+
[.programlisting]
....
if(ccb_h->flags & CAM_DIS_DISCONNECT)
hcb_disable_disconnect(hcb);
....
+
If the controller is able to run REQUEST SENSE command all by itself then the value of the flag CAM_DIS_AUTOSENSE should also be passed to it, to prevent automatic REQUEST SENSE if the CAM subsystem does not want it.
+
The only thing left is to set up the timeout, pass our hcb to the hardware and return; the rest will be done by the interrupt handler (or timeout handler).
+
[.programlisting]
....
ccb_h->timeout_ch = timeout(xxx_timeout, (caddr_t) hcb,
(ccb_h->timeout * hz) / 1000); /* convert milliseconds to ticks */
put_hcb_into_hardware_queue(hcb);
return;
....
+
And here is a possible implementation of the function returning CCB:
+
[.programlisting]
....
static void
free_hcb_and_ccb_done(struct xxx_hcb *hcb, union ccb *ccb, u_int32_t status)
{
struct xxx_softc *softc = hcb->softc;
ccb->ccb_h.ccb_hcb = 0;
if(hcb != NULL) {
untimeout(xxx_timeout, (caddr_t) hcb, ccb->ccb_h.timeout_ch);
/* we're about to free a hcb, so the shortage has ended */
if(softc->flags & RESOURCE_SHORTAGE) {
softc->flags &= ~RESOURCE_SHORTAGE;
status |= CAM_RELEASE_SIMQ;
}
free_hcb(hcb); /* also removes hcb from any internal lists */
}
ccb->ccb_h.status = status |
(ccb->ccb_h.status & ~(CAM_STATUS_MASK|CAM_SIM_QUEUED));
xpt_done(ccb);
}
....
* _XPT_RESET_DEV_ - send the SCSI "BUS DEVICE RESET" message to a device
+
There is no data transferred in the CCB except the header, and the most interesting argument of it is target_id. Depending on the controller hardware, a hardware control block just like for the XPT_SCSI_IO request may be constructed (see the XPT_SCSI_IO request description) and sent to the controller, or the SCSI controller may be immediately programmed to send this RESET message to the device, or this request may simply not be supported (in which case return the status `CAM_REQ_INVALID`). Also, on completion of the request all the disconnected transactions for this target must be aborted (probably in the interrupt routine).
+
Also all the current negotiations for the target are lost on reset, so they might be cleaned too. Or their clearing may be deferred, because the target will request re-negotiation on the next transaction anyway.
* _XPT_RESET_BUS_ - send the RESET signal to the SCSI bus
+
No arguments are passed in the CCB, the only interesting argument is the SCSI bus indicated by the struct sim pointer.
+
A minimalistic implementation would forget the SCSI negotiations for all the devices on the bus and return the status CAM_REQ_CMP.
+
The proper implementation would in addition actually reset the SCSI bus (possibly also resetting the SCSI controller) and mark all the CCBs being processed, both those in the hardware queue and those being disconnected, as done with the status CAM_SCSI_BUS_RESET. Like:
+
[.programlisting]
....
int targ, lun;
struct xxx_hcb *h, *hh;
struct ccb_trans_settings neg;
struct cam_path *path;
/* The SCSI bus reset may take a long time, in this case its completion
* should be checked by interrupt or timeout. But for simplicity
* we assume here that it is really fast.
*/
reset_scsi_bus(softc);
/* drop all enqueued CCBs */
for(h = softc->first_queued_hcb; h != NULL; h = hh) {
hh = h->next;
free_hcb_and_ccb_done(h, h->ccb, CAM_SCSI_BUS_RESET);
}
/* the clean values of negotiations to report */
neg.bus_width = 8;
neg.sync_period = neg.sync_offset = 0;
neg.valid = (CCB_TRANS_BUS_WIDTH_VALID
| CCB_TRANS_SYNC_RATE_VALID | CCB_TRANS_SYNC_OFFSET_VALID);
/* drop all disconnected CCBs and clean negotiations */
for(targ=0; targ <= OUR_MAX_SUPPORTED_TARGET; targ++) {
clean_negotiations(softc, targ);
/* report the event if possible */
if(xpt_create_path(&path, /*periph*/NULL,
cam_sim_path(sim), targ,
CAM_LUN_WILDCARD) == CAM_REQ_CMP) {
xpt_async(AC_TRANSFER_NEG, path, &neg);
xpt_free_path(path);
}
for(lun=0; lun <= OUR_MAX_SUPPORTED_LUN; lun++)
for(h = softc->first_discon_hcb[targ][lun]; h != NULL; h = hh) {
hh=h->next;
free_hcb_and_ccb_done(h, h->ccb, CAM_SCSI_BUS_RESET);
}
}
ccb->ccb_h.status = CAM_REQ_CMP;
xpt_done(ccb);
/* report the event */
xpt_async(AC_BUS_RESET, softc->wpath, NULL);
return;
....
+
Implementing the SCSI bus reset as a function may be a good idea because it can be re-used by the timeout function as a last resort if things go wrong.
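+
A sketch of such a function, wrapping the logic shown above so that both the XPT_RESET_BUS case and the timeout handler can call it:
+
[.programlisting]
....
static void
xxx_reset_bus(struct xxx_softc *softc)
{
    /* the body is the XPT_RESET_BUS logic shown above: reset the bus,
     * drop the queued and disconnected CCBs, clean the negotiations
     * and report the events with xpt_async()
     */
}
....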
* _XPT_ABORT_ - abort the specified CCB
+
The arguments are transferred in the instance "struct ccb_abort cab" of the union ccb. The only argument field in it is:
+
_abort_ccb_ - pointer to the CCB to be aborted
+
If the abort is not supported, just return the status CAM_UA_ABORT. This is also the easy way to minimally implement this call: return CAM_UA_ABORT in any case.
+
The hard way is to implement this request honestly. First check that abort applies to a SCSI transaction:
+
[.programlisting]
....
struct ccb *abort_ccb;
abort_ccb = ccb->cab.abort_ccb;
if(abort_ccb->ccb_h.func_code != XPT_SCSI_IO) {
ccb->ccb_h.status = CAM_UA_ABORT;
xpt_done(ccb);
return;
}
....
+
Then it is necessary to find this CCB in our queue. This can be done by walking the list of all our hardware control blocks in search of the one associated with this CCB:
+
[.programlisting]
....
struct xxx_hcb *hcb, *h;
hcb = NULL;
/* We assume that softc->first_hcb is the head of the list of all
* HCBs associated with this bus, including those enqueued for
* processing, being processed by hardware and disconnected ones.
*/
for(h = softc->first_hcb; h != NULL; h = h->next) {
if(h->ccb == abort_ccb) {
hcb = h;
break;
}
}
if(hcb == NULL) {
/* no such CCB in our queue */
ccb->ccb_h.status = CAM_PATH_INVALID;
xpt_done(ccb);
return;
}
....
+
Now we look at the current processing status of the HCB. It may be sitting in the queue waiting to be sent to the SCSI bus, being transferred right now, disconnected and waiting for the result of the command, or actually completed by the hardware but not yet marked as done by the software. To make sure that we do not get into any races with the hardware we mark the HCB as being aborted, so that if this HCB is about to be sent to the SCSI bus the SCSI controller will see this flag and skip it.
+
[.programlisting]
....
int hstatus;
/* shown as a function, in case special action is needed to make
* this flag visible to hardware
*/
set_hcb_flags(hcb, HCB_BEING_ABORTED);
abort_again:
hstatus = get_hcb_status(hcb);
switch(hstatus) {
case HCB_SITTING_IN_QUEUE:
remove_hcb_from_hardware_queue(hcb);
/* FALLTHROUGH */
case HCB_COMPLETED:
/* this is an easy case */
free_hcb_and_ccb_done(hcb, abort_ccb, CAM_REQ_ABORTED);
break;
....
+
If the CCB is being transferred right now we would like to signal to the SCSI controller, in some hardware-dependent way, that we want to abort the current transfer. The SCSI controller would set the SCSI ATTENTION signal and, when the target responds to it, send an ABORT message. We also reset the timeout to make sure that the target is not sleeping forever. If the command does not get aborted in some reasonable time, like 10 seconds, the timeout routine will go ahead and reset the whole SCSI bus. Because the command will be aborted in some reasonable time we can just return the abort request now as successfully completed, and mark the aborted CCB as aborted (but not mark it as done yet).
+
[.programlisting]
....
case HCB_BEING_TRANSFERRED:
untimeout(xxx_timeout, (caddr_t) hcb, abort_ccb->ccb_h.timeout_ch);
abort_ccb->ccb_h.timeout_ch =
timeout(xxx_timeout, (caddr_t) hcb, 10 * hz);
abort_ccb->ccb_h.status = CAM_REQ_ABORTED;
/* ask the controller to abort that HCB, then generate
* an interrupt and stop
*/
if(signal_hardware_to_abort_hcb_and_stop(hcb) < 0) {
/* oops, we missed the race with hardware, this transaction
* got off the bus before we aborted it, try again */
goto abort_again;
}
break;
....
+
If the CCB is in the list of disconnected ones then set it up as an abort request and re-queue it at the front of the hardware queue. Reset the timeout and report the abort request as completed.
+
[.programlisting]
....
case HCB_DISCONNECTED:
untimeout(xxx_timeout, (caddr_t) hcb, abort_ccb->ccb_h.timeout_ch);
abort_ccb->ccb_h.timeout_ch =
timeout(xxx_timeout, (caddr_t) hcb, 10 * hz);
put_abort_message_into_hcb(hcb);
put_hcb_at_the_front_of_hardware_queue(hcb);
break;
}
ccb->ccb_h.status = CAM_REQ_CMP;
xpt_done(ccb);
return;
....
+
That is all for the ABORT request, although there is one more issue. Because the ABORT message aborts all the ongoing transactions on a LUN, we have to mark all the other active transactions on this LUN as aborted as well. That should be done in the interrupt routine, after the transaction gets aborted.
+
Implementing the CCB abort as a function may be quite a good idea, because this function can be re-used if an I/O transaction times out. The only difference would be that the timed-out transaction would return the status CAM_CMD_TIMEOUT for the timed-out request. Then the case XPT_ABORT would be small, like this:
+
[.programlisting]
....
case XPT_ABORT:
struct ccb *abort_ccb;
abort_ccb = ccb->cab.abort_ccb;
if(abort_ccb->ccb_h.func_code != XPT_SCSI_IO) {
ccb->ccb_h.status = CAM_UA_ABORT;
xpt_done(ccb);
return;
}
if(xxx_abort_ccb(abort_ccb, CAM_REQ_ABORTED) < 0)
/* no such CCB in our queue */
ccb->ccb_h.status = CAM_PATH_INVALID;
else
ccb->ccb_h.status = CAM_REQ_CMP;
xpt_done(ccb);
return;
....
* _XPT_SET_TRAN_SETTINGS_ - explicitly set values of SCSI transfer settings
+
The arguments are transferred in the instance "struct ccb_trans_setting cts" of the union ccb:
** _valid_ - a bitmask showing which settings should be updated:
** _CCB_TRANS_SYNC_RATE_VALID_ - synchronous transfer rate
** _CCB_TRANS_SYNC_OFFSET_VALID_ - synchronous offset
** _CCB_TRANS_BUS_WIDTH_VALID_ - bus width
** _CCB_TRANS_DISC_VALID_ - set enable/disable disconnection
** _CCB_TRANS_TQ_VALID_ - set enable/disable tagged queuing
** _flags_ - consists of two parts, binary arguments and identification of sub-operations. The binary arguments are:
*** _CCB_TRANS_DISC_ENB_ - enable disconnection
*** _CCB_TRANS_TAG_ENB_ - enable tagged queuing
** the sub-operations are:
*** _CCB_TRANS_CURRENT_SETTINGS_ - change the current negotiations
*** _CCB_TRANS_USER_SETTINGS_ - remember the desired user values
** _sync_period_, _sync_offset_ - self-explanatory, if sync_offset==0 then the asynchronous mode is requested
** _bus_width_ - bus width, in bits (not bytes)
+
Two sets of negotiated parameters are supported, the user settings and the current settings. The user settings are not really used much in the SIM drivers; this is mostly just a piece of memory where the upper levels can store (and later recall) their ideas about the parameters. Setting the user parameters does not cause re-negotiation of the transfer rates. But when the SCSI controller does a negotiation it must never set the values higher than the user parameters, so they are essentially the upper boundary.
+
The current settings are, as the name says, current. Changing them means that the parameters must be re-negotiated on the next transfer. Again, these "new current settings" are not supposed to be forced on the device; they are just used as the initial step of negotiations. Also they must be limited by the actual capabilities of the SCSI controller: for example, if the SCSI controller has an 8-bit bus and the request asks to set 16-bit wide transfers, this parameter must be silently truncated to 8-bit transfers before sending it to the device.
+
One caveat is that the bus width and synchronous parameters are per target while the disconnection and tag enabling parameters are per LUN.
+
The recommended implementation is to keep three sets of negotiated (bus width and synchronous transfer) parameters (a possible softc layout for them is sketched after this list):
** _user_ - the user set, as above
** _current_ - those actually in effect
** _goal_ - those requested by setting of the "current" parameters
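+
The softc fields assumed by the code below might look like this (a sketch only; the array bounds and field names are illustrative):
+
[.programlisting]
....
/* per-target negotiation parameters (bus width and synchronous transfer) */
u_int8_t user_sync_period[MAX_TARGETS];
u_int8_t user_sync_offset[MAX_TARGETS];
u_int8_t user_bus_width[MAX_TARGETS];
u_int8_t goal_sync_period[MAX_TARGETS];
u_int8_t goal_sync_offset[MAX_TARGETS];
u_int8_t goal_bus_width[MAX_TARGETS];
u_int8_t current_sync_period[MAX_TARGETS];
u_int8_t current_sync_offset[MAX_TARGETS];
u_int8_t current_bus_width[MAX_TARGETS];
/* per-LUN disconnection and tagged queuing flags */
u_int8_t user_tflags[MAX_TARGETS][MAX_LUNS];
u_int8_t current_tflags[MAX_TARGETS][MAX_LUNS];
....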
+
The code looks like:
+
[.programlisting]
....
struct ccb_trans_settings *cts;
int targ, lun;
int flags;
cts = &ccb->cts;
targ = ccb_h->target_id;
lun = ccb_h->target_lun;
flags = cts->flags;
if(flags & CCB_TRANS_USER_SETTINGS) {
if(flags & CCB_TRANS_SYNC_RATE_VALID)
softc->user_sync_period[targ] = cts->sync_period;
if(flags & CCB_TRANS_SYNC_OFFSET_VALID)
softc->user_sync_offset[targ] = cts->sync_offset;
if(flags & CCB_TRANS_BUS_WIDTH_VALID)
softc->user_bus_width[targ] = cts->bus_width;
if(flags & CCB_TRANS_DISC_VALID) {
softc->user_tflags[targ][lun] &= ~CCB_TRANS_DISC_ENB;
softc->user_tflags[targ][lun] |= flags & CCB_TRANS_DISC_ENB;
}
if(flags & CCB_TRANS_TQ_VALID) {
softc->user_tflags[targ][lun] &= ~CCB_TRANS_TQ_ENB;
softc->user_tflags[targ][lun] |= flags & CCB_TRANS_TQ_ENB;
}
}
if(flags & CCB_TRANS_CURRENT_SETTINGS) {
if(flags & CCB_TRANS_SYNC_RATE_VALID)
softc->goal_sync_period[targ] =
max(cts->sync_period, OUR_MIN_SUPPORTED_PERIOD);
if(flags & CCB_TRANS_SYNC_OFFSET_VALID)
softc->goal_sync_offset[targ] =
min(cts->sync_offset, OUR_MAX_SUPPORTED_OFFSET);
if(flags & CCB_TRANS_BUS_WIDTH_VALID)
softc->goal_bus_width[targ] = min(cts->bus_width, OUR_BUS_WIDTH);
if(flags & CCB_TRANS_DISC_VALID) {
softc->current_tflags[targ][lun] &= ~CCB_TRANS_DISC_ENB;
softc->current_tflags[targ][lun] |= flags & CCB_TRANS_DISC_ENB;
}
if(flags & CCB_TRANS_TQ_VALID) {
softc->current_tflags[targ][lun] &= ~CCB_TRANS_TQ_ENB;
softc->current_tflags[targ][lun] |= flags & CCB_TRANS_TQ_ENB;
}
}
ccb->ccb_h.status = CAM_REQ_CMP;
xpt_done(ccb);
return;
....
+
Then when the next I/O request is processed it will check whether it has to re-negotiate, for example by calling the function target_negotiated(hcb). It can be implemented like this:
+
[.programlisting]
....
int
target_negotiated(struct xxx_hcb *hcb)
{
struct softc *softc = hcb->softc;
int targ = hcb->targ;
if( softc->current_sync_period[targ] != softc->goal_sync_period[targ]
|| softc->current_sync_offset[targ] != softc->goal_sync_offset[targ]
|| softc->current_bus_width[targ] != softc->goal_bus_width[targ] )
return 0; /* FALSE */
else
return 1; /* TRUE */
}
....
+
After the values are re-negotiated the resulting values must be assigned to both the current and goal parameters, so that for future I/O transactions the current and goal parameters are the same and `target_negotiated()` returns TRUE. When the card is initialized (in `xxx_attach()`) the current negotiation values must be initialized to narrow asynchronous mode, while the goal and user values must be initialized to the maximal values supported by the controller.
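+
A sketch of this initialization in `xxx_attach()`, using the hypothetical softc fields shown earlier (the default tflags value is an assumption, a real driver would take it from its configuration):
+
[.programlisting]
....
    int targ, lun;
    for(targ = 0; targ <= OUR_MAX_SUPPORTED_TARGET; targ++) {
        /* current: narrow asynchronous mode */
        softc->current_bus_width[targ] = 8;
        softc->current_sync_period[targ] = 0;
        softc->current_sync_offset[targ] = 0;
        /* goal and user: the best the controller can do */
        softc->goal_bus_width[targ] = softc->user_bus_width[targ] =
            OUR_BUS_WIDTH;
        softc->goal_sync_period[targ] = softc->user_sync_period[targ] =
            OUR_MIN_SUPPORTED_PERIOD;
        softc->goal_sync_offset[targ] = softc->user_sync_offset[targ] =
            OUR_MAX_SUPPORTED_OFFSET;
        for(lun = 0; lun <= OUR_MAX_SUPPORTED_LUN; lun++)
            softc->user_tflags[targ][lun] = softc->current_tflags[targ][lun] =
                CCB_TRANS_DISC_ENB | CCB_TRANS_TAG_ENB;
    }
....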
* _XPT_GET_TRAN_SETTINGS_ - get values of SCSI transfer settings
+
This operation is the reverse of XPT_SET_TRAN_SETTINGS. Fill the CCB instance "struct ccb_trans_settings cts" with data as requested by the flags CCB_TRANS_CURRENT_SETTINGS or CCB_TRANS_USER_SETTINGS (if both are set then the existing drivers return the current settings). Set all the bits in the valid field.
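+
A possible implementation, using the same hypothetical softc arrays as the XPT_SET_TRAN_SETTINGS example above:
+
[.programlisting]
....
    struct ccb_trans_settings *cts;
    int targ, lun;
    cts = &ccb->cts;
    targ = ccb_h->target_id;
    lun = ccb_h->target_lun;
    if(cts->flags & CCB_TRANS_CURRENT_SETTINGS) {
        cts->sync_period = softc->current_sync_period[targ];
        cts->sync_offset = softc->current_sync_offset[targ];
        cts->bus_width = softc->current_bus_width[targ];
        cts->flags = softc->current_tflags[targ][lun]
            & (CCB_TRANS_DISC_ENB | CCB_TRANS_TAG_ENB);
    } else { /* user settings */
        cts->sync_period = softc->user_sync_period[targ];
        cts->sync_offset = softc->user_sync_offset[targ];
        cts->bus_width = softc->user_bus_width[targ];
        cts->flags = softc->user_tflags[targ][lun]
            & (CCB_TRANS_DISC_ENB | CCB_TRANS_TAG_ENB);
    }
    cts->valid = CCB_TRANS_SYNC_RATE_VALID | CCB_TRANS_SYNC_OFFSET_VALID
        | CCB_TRANS_BUS_WIDTH_VALID | CCB_TRANS_DISC_VALID
        | CCB_TRANS_TQ_VALID;
    ccb->ccb_h.status = CAM_REQ_CMP;
    xpt_done(ccb);
    return;
....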
* _XPT_CALC_GEOMETRY_ - calculate logical (BIOS) geometry of the disk
+
The arguments are transferred in the instance "struct ccb_calc_geometry ccg" of the union ccb:
** _block_size_ - input, block (A.K.A sector) size in bytes
** _volume_size_ - input, volume size in bytes
** _cylinders_ - output, logical cylinders
** _heads_ - output, logical heads
** _secs_per_track_ - output, logical sectors per track
+
If the returned geometry differs too much from what the SCSI controller BIOS expects, and a disk on this SCSI controller is used as a boot disk, the system may not be able to boot. A typical calculation, taken from the aic7xxx driver, is:
+
[.programlisting]
....
struct ccb_calc_geometry *ccg;
u_int32_t size_mb;
u_int32_t secs_per_cylinder;
int extended;
ccg = &ccb->ccg;
size_mb = ccg->volume_size
/ ((1024L * 1024L) / ccg->block_size);
extended = check_cards_EEPROM_for_extended_geometry(softc);
if (size_mb > 1024 && extended) {
ccg->heads = 255;
ccg->secs_per_track = 63;
} else {
ccg->heads = 64;
ccg->secs_per_track = 32;
}
secs_per_cylinder = ccg->heads * ccg->secs_per_track;
ccg->cylinders = ccg->volume_size / secs_per_cylinder;
ccb->ccb_h.status = CAM_REQ_CMP;
xpt_done(ccb);
return;
....
+
This gives the general idea; the exact calculation depends on the quirks of the particular BIOS. If the BIOS provides no way to set the "extended translation" flag in EEPROM, this flag should normally be assumed equal to 1. Other popular geometries are:
+
[.programlisting]
....
128 heads, 63 sectors - Symbios controllers
16 heads, 63 sectors - old controllers
....
+
Some system BIOSes and SCSI BIOSes fight with each other with variable success, for example a combination of Symbios 875/895 SCSI and Phoenix BIOS can give geometry 128/63 after power up and 255/63 after a hard reset or soft reboot.
* _XPT_PATH_INQ_ - path inquiry, in other words get the SIM driver and SCSI controller (also known as HBA - Host Bus Adapter) properties
+
The properties are returned in the instance "struct ccb_pathinq cpi" of the union ccb:
** version_num - the SIM driver version number, now all drivers use 1
** hba_inquiry - bitmask of features supported by the controller:
** PI_MDP_ABLE - supports MDP message (something from SCSI3?)
** PI_WIDE_32 - supports 32 bit wide SCSI
** PI_WIDE_16 - supports 16 bit wide SCSI
** PI_SDTR_ABLE - can negotiate synchronous transfer rate
** PI_LINKED_CDB - supports linked commands
** PI_TAG_ABLE - supports tagged commands
** PI_SOFT_RST - supports soft reset alternative (hard reset and soft reset are mutually exclusive within a SCSI bus)
** target_sprt - flags for target mode support, 0 if unsupported
** hba_misc - miscellaneous controller features:
** PIM_SCANHILO - bus scans from high ID to low ID
** PIM_NOREMOVE - removable devices not included in scan
** PIM_NOINITIATOR - initiator role not supported
** PIM_NOBUSRESET - user has disabled initial BUS RESET
** hba_eng_cnt - mysterious HBA engine count, something related to compression, now is always set to 0
** vuhba_flags - vendor-unique flags, unused now
** max_target - maximal supported target ID (7 for 8-bit bus, 15 for 16-bit bus, 127 for Fibre Channel)
** max_lun - maximal supported LUN ID (7 for older SCSI controllers, 63 for newer ones)
** async_flags - bitmask of installed Async handler, unused now
** hpath_id - highest Path ID in the subsystem, unused now
** unit_number - the controller unit number, cam_sim_unit(sim)
** bus_id - the bus number, cam_sim_bus(sim)
** initiator_id - the SCSI ID of the controller itself
** base_transfer_speed - nominal transfer speed in KB/s for asynchronous narrow transfers, equal to 3300 for SCSI
** sim_vid - SIM driver's vendor id, a zero-terminated string of maximal length SIM_IDLEN including the terminating zero
** hba_vid - SCSI controller's vendor id, a zero-terminated string of maximal length HBA_IDLEN including the terminating zero
** dev_name - device driver name, a zero-terminated string of maximal length DEV_IDLEN including the terminating zero, equal to cam_sim_name(sim)
+
The recommended way of setting the string fields is using strncpy, like:
+
[.programlisting]
....
strncpy(cpi->dev_name, cam_sim_name(sim), DEV_IDLEN);
....
+
After setting the values set the status to CAM_REQ_CMP and mark the CCB as done.
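+
Putting it all together, a skeleton of the whole XPT_PATH_INQ handler might look like this (the capability flags and literal values are only placeholders, and `our_scsi_id()` is a hypothetical helper):
+
[.programlisting]
....
    struct ccb_pathinq *cpi = &ccb->cpi;
    cpi->version_num = 1;
    cpi->hba_inquiry = PI_SDTR_ABLE | PI_TAG_ABLE | PI_WIDE_16;
    cpi->target_sprt = 0;
    cpi->hba_misc = 0;
    cpi->hba_eng_cnt = 0;
    cpi->max_target = 15;
    cpi->max_lun = 7;
    cpi->initiator_id = our_scsi_id(softc);
    cpi->base_transfer_speed = 3300;
    cpi->unit_number = cam_sim_unit(sim);
    cpi->bus_id = cam_sim_bus(sim);
    strncpy(cpi->sim_vid, "FreeBSD", SIM_IDLEN);
    strncpy(cpi->hba_vid, "XXX Technology", HBA_IDLEN);
    strncpy(cpi->dev_name, cam_sim_name(sim), DEV_IDLEN);
    ccb->ccb_h.status = CAM_REQ_CMP;
    xpt_done(ccb);
    return;
....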
[[scsi-polling]]
== Polling
[source,c]
----
static void
xxx_poll(struct cam_sim *sim);
----
The poll function is used to simulate interrupts when the interrupt subsystem is not functioning (for example, when the system has crashed and is creating the system dump). The CAM subsystem sets the proper interrupt level before calling the poll routine. So all it needs to do is to call the interrupt routine (or the other way around, the poll routine may be doing the real action and the interrupt routine would just call the poll routine). Why bother with a separate function then? This has to do with different calling conventions. The `xxx_poll` routine gets the struct cam_sim pointer as its argument while the PCI interrupt routine by common convention gets a pointer to the struct `xxx_softc` and the ISA interrupt routine gets just the device unit number. So the poll routine would normally look like:
[.programlisting]
....
static void
xxx_poll(struct cam_sim *sim)
{
xxx_intr((struct xxx_softc *)cam_sim_softc(sim)); /* for PCI device */
}
....
or
[.programlisting]
....
static void
xxx_poll(struct cam_sim *sim)
{
xxx_intr(cam_sim_unit(sim)); /* for ISA device */
}
....
[[scsi-async]]
== Asynchronous Events
If an asynchronous event callback has been set up then the callback function should be defined.
[.programlisting]
....
static void
ahc_async(void *callback_arg, u_int32_t code, struct cam_path *path, void *arg)
....
* callback_arg - the value supplied when registering the callback
* code - identifies the type of event
* path - identifies the devices to which the event applies
* arg - event-specific argument
Implementation for a single type of event, AC_LOST_DEVICE, looks like:
[.programlisting]
....
struct xxx_softc *softc;
struct cam_sim *sim;
int targ;
struct ccb_trans_settings neg;
sim = (struct cam_sim *)callback_arg;
softc = (struct xxx_softc *)cam_sim_softc(sim);
switch (code) {
case AC_LOST_DEVICE:
targ = xpt_path_target_id(path);
if(targ <= OUR_MAX_SUPPORTED_TARGET) {
clean_negotiations(softc, targ);
/* send indication to CAM */
neg.bus_width = 8;
neg.sync_period = neg.sync_offset = 0;
neg.valid = (CCB_TRANS_BUS_WIDTH_VALID
| CCB_TRANS_SYNC_RATE_VALID | CCB_TRANS_SYNC_OFFSET_VALID);
xpt_async(AC_TRANSFER_NEG, path, &neg);
}
break;
default:
break;
}
....
[[scsi-interrupts]]
== Interrupts
The exact type of the interrupt routine depends on the type of the peripheral bus (PCI, ISA and so on) to which the SCSI controller is connected.
The interrupt routines of the SIM drivers run at the interrupt level splcam. So `splcam()` should be used in the driver to synchronize activity between the interrupt routine and the rest of the driver (for a multiprocessor-aware driver things get yet more interesting but we ignore this case here). The pseudo-code in this document happily ignores the problems of synchronization. The real code must not ignore them. A simple-minded approach is to set `splcam()` on the entry to the other routines and reset it on return, thus protecting them with one big critical section. To make sure that the interrupt level is always restored, a wrapper function can be defined, like:
[.programlisting]
....
static void
xxx_action(struct cam_sim *sim, union ccb *ccb)
{
int s;
s = splcam();
xxx_action1(sim, ccb);
splx(s);
}
static void
xxx_action1(struct cam_sim *sim, union ccb *ccb)
{
... process the request ...
}
....
This approach is simple and robust, but the problem with it is that interrupts may get blocked for a relatively long time and this would negatively affect the system's performance. On the other hand, the functions of the `spl()` family have rather high overhead, so a vast number of tiny critical sections may not be good either.
The conditions handled by the interrupt routine and the details depend very much on the hardware. We consider the set of "typical" conditions.
First, we check if a SCSI reset was encountered on the bus (probably caused by another SCSI controller on the same SCSI bus). If so, we drop all the enqueued and disconnected requests, report the events and re-initialize our SCSI controller. It is important that during this initialization the controller does not issue another reset, or else two controllers on the same SCSI bus could ping-pong resets forever. The case of a fatal controller error or hang could be handled in the same place, but it would probably also need to send a RESET signal to the SCSI bus to reset the status of the connections with the SCSI devices.
[.programlisting]
....
int fatal=0;
struct ccb_trans_settings neg;
struct cam_path *path;
if( detected_scsi_reset(softc)
|| (fatal = detected_fatal_controller_error(softc)) ) {
int targ, lun;
struct xxx_hcb *h, *hh;
/* drop all enqueued CCBs */
for(h = softc->first_queued_hcb; h != NULL; h = hh) {
hh = h->next;
free_hcb_and_ccb_done(h, h->ccb, CAM_SCSI_BUS_RESET);
}
/* the clean values of negotiations to report */
neg.bus_width = 8;
neg.sync_period = neg.sync_offset = 0;
neg.valid = (CCB_TRANS_BUS_WIDTH_VALID
| CCB_TRANS_SYNC_RATE_VALID | CCB_TRANS_SYNC_OFFSET_VALID);
/* drop all disconnected CCBs and clean negotiations */
for(targ=0; targ <= OUR_MAX_SUPPORTED_TARGET; targ++) {
clean_negotiations(softc, targ);
/* report the event if possible */
if(xpt_create_path(&path, /*periph*/NULL,
cam_sim_path(sim), targ,
CAM_LUN_WILDCARD) == CAM_REQ_CMP) {
xpt_async(AC_TRANSFER_NEG, path, &neg);
xpt_free_path(path);
}
for(lun=0; lun <= OUR_MAX_SUPPORTED_LUN; lun++)
for(h = softc->first_discon_hcb[targ][lun]; h != NULL; h = hh) {
hh=h->next;
if(fatal)
free_hcb_and_ccb_done(h, h->ccb, CAM_UNREC_HBA_ERROR);
else
free_hcb_and_ccb_done(h, h->ccb, CAM_SCSI_BUS_RESET);
}
}
/* report the event */
xpt_async(AC_BUS_RESET, softc->wpath, NULL);
/* re-initialization may take a lot of time, in such case
* its completion should be signaled by another interrupt or
* checked on timeout - but for simplicity we assume here that
* it is really fast
*/
if(!fatal) {
reinitialize_controller_without_scsi_reset(softc);
} else {
reinitialize_controller_with_scsi_reset(softc);
}
schedule_next_hcb(softc);
return;
}
....
If the interrupt is not caused by a controller-wide condition then probably something has happened to the current hardware control block. Depending on the hardware there may be other non-HCB-related events; we just do not consider them here. Then we analyze what happened to this HCB:
[.programlisting]
....
struct xxx_hcb *hcb, *h, *hh;
int hcb_status, scsi_status;
int ccb_status;
int targ;
int lun_to_freeze;
hcb = get_current_hcb(softc);
if(hcb == NULL) {
/* either stray interrupt or something went very wrong
* or this is something hardware-dependent
*/
handle as necessary;
return;
}
targ = hcb->target;
hcb_status = get_status_of_current_hcb(softc);
....
First we check if the HCB has completed and if so we check the returned SCSI status.
[.programlisting]
....
if(hcb_status == COMPLETED) {
scsi_status = get_completion_status(hcb);
....
Then check whether this status is related to the REQUEST SENSE command and, if so, handle it in a simple way.
[.programlisting]
....
if(hcb->flags & DOING_AUTOSENSE) {
if(scsi_status == GOOD) { /* autosense was successful */
hcb->ccb->ccb_h.status |= CAM_AUTOSNS_VALID;
free_hcb_and_ccb_done(hcb, hcb->ccb, CAM_SCSI_STATUS_ERROR);
} else {
autosense_failed:
free_hcb_and_ccb_done(hcb, hcb->ccb, CAM_AUTOSENSE_FAIL);
}
schedule_next_hcb(softc);
return;
}
....
Otherwise the command itself has completed; pay more attention to the details. If auto-sense is not disabled for this CCB and the command has failed with sense data then run the REQUEST SENSE command to receive that data.
[.programlisting]
....
hcb->ccb->csio.scsi_status = scsi_status;
calculate_residue(hcb);
if( (hcb->ccb->ccb_h.flags & CAM_DIS_AUTOSENSE)==0
&& ( scsi_status == CHECK_CONDITION
|| scsi_status == COMMAND_TERMINATED) ) {
/* start auto-SENSE */
hcb->flags |= DOING_AUTOSENSE;
setup_autosense_command_in_hcb(hcb);
restart_current_hcb(softc);
return;
}
if(scsi_status == GOOD)
free_hcb_and_ccb_done(hcb, hcb->ccb, CAM_REQ_CMP);
else
free_hcb_and_ccb_done(hcb, hcb->ccb, CAM_SCSI_STATUS_ERROR);
schedule_next_hcb(softc);
return;
}
....
One typical class of events is negotiation events: negotiation messages received from a SCSI target (in answer to our negotiation attempt or on the target's initiative) or the target being unable to negotiate (it rejects our negotiation messages or does not answer them).
[.programlisting]
....
switch(hcb_status) {
case TARGET_REJECTED_WIDE_NEG:
/* revert to 8-bit bus */
softc->current_bus_width[targ] = softc->goal_bus_width[targ] = 8;
/* report the event */
neg.bus_width = 8;
neg.valid = CCB_TRANS_BUS_WIDTH_VALID;
xpt_async(AC_TRANSFER_NEG, hcb->ccb->ccb_h.path, &neg);
continue_current_hcb(softc);
return;
case TARGET_ANSWERED_WIDE_NEG:
{
int wd;
wd = get_target_bus_width_request(softc);
if(wd <= softc->goal_bus_width[targ]) {
/* answer is acceptable */
softc->current_bus_width[targ] =
softc->goal_bus_width[targ] = neg.bus_width = wd;
/* report the event */
neg.valid = CCB_TRANS_BUS_WIDTH_VALID;
xpt_async(AC_TRANSFER_NEG, hcb->ccb->ccb_h.path, &neg);
} else {
prepare_reject_message(hcb);
}
}
continue_current_hcb(softc);
return;
case TARGET_REQUESTED_WIDE_NEG:
{
int wd;
wd = get_target_bus_width_request(softc);
wd = min (wd, OUR_BUS_WIDTH);
wd = min (wd, softc->user_bus_width[targ]);
if(wd != softc->current_bus_width[targ]) {
/* the bus width has changed */
softc->current_bus_width[targ] =
softc->goal_bus_width[targ] = neg.bus_width = wd;
/* report the event */
neg.valid = CCB_TRANS_BUS_WIDTH_VALID;
xpt_async(AC_TRANSFER_NEG, hcb->ccb->ccb_h.path, &neg);
}
prepare_width_nego_response(hcb, wd);
}
continue_current_hcb(softc);
return;
}
....
Then we handle any errors that could have happened during auto-sense in the same simple-minded way as before. Otherwise we look closer at the details again.
[.programlisting]
....
if(hcb->flags & DOING_AUTOSENSE)
goto autosense_failed;
switch(hcb_status) {
....
The next event we consider is an unexpected disconnect, which is considered normal after an ABORT or BUS DEVICE RESET message and abnormal in all other cases.
[.programlisting]
....
case UNEXPECTED_DISCONNECT:
if(requested_abort(hcb)) {
/* abort affects all commands on that target+LUN, so
* mark all disconnected HCBs on that target+LUN as aborted too
*/
for(h = softc->first_discon_hcb[hcb->target][hcb->lun];
h != NULL; h = hh) {
hh=h->next;
free_hcb_and_ccb_done(h, h->ccb, CAM_REQ_ABORTED);
}
ccb_status = CAM_REQ_ABORTED;
} else if(requested_bus_device_reset(hcb)) {
int lun;
/* reset affects all commands on that target, so
* mark all disconnected HCBs on that target+LUN as reset
*/
for(lun=0; lun <= OUR_MAX_SUPPORTED_LUN; lun++)
for(h = softc->first_discon_hcb[hcb->target][lun];
h != NULL; h = hh) {
hh=h->next;
free_hcb_and_ccb_done(h, h->ccb, CAM_SCSI_BUS_RESET);
}
/* send event */
xpt_async(AC_SENT_BDR, hcb->ccb->ccb_h.path, NULL);
/* this was the CAM_RESET_DEV request itself, it is completed */
ccb_status = CAM_REQ_CMP;
} else {
calculate_residue(hcb);
ccb_status = CAM_UNEXP_BUSFREE;
/* request the further code to freeze the queue */
hcb->ccb->ccb_h.status |= CAM_DEV_QFRZN;
lun_to_freeze = hcb->lun;
}
break;
....
If the target refuses to accept tags we notify CAM about that and return back all commands for this LUN:
[.programlisting]
....
case TAGS_REJECTED:
/* report the event */
neg.flags = 0 & ~CCB_TRANS_TAG_ENB;
neg.valid = CCB_TRANS_TQ_VALID;
xpt_async(AC_TRANSFER_NEG, hcb->ccb->ccb_h.path, &neg);
ccb_status = CAM_MSG_REJECT_REC;
/* request the further code to freeze the queue */
hcb->ccb->ccb_h.status |= CAM_DEV_QFRZN;
lun_to_freeze = hcb->lun;
break;
....
Then we check a number of other conditions, with processing basically limited to setting the CCB status:
[.programlisting]
....
case SELECTION_TIMEOUT:
ccb_status = CAM_SEL_TIMEOUT;
/* request the further code to freeze the queue */
hcb->ccb->ccb_h.status |= CAM_DEV_QFRZN;
lun_to_freeze = CAM_LUN_WILDCARD;
break;
case PARITY_ERROR:
ccb_status = CAM_UNCOR_PARITY;
break;
case DATA_OVERRUN:
case ODD_WIDE_TRANSFER:
ccb_status = CAM_DATA_RUN_ERR;
break;
default:
/* all other errors are handled in a generic way */
ccb_status = CAM_REQ_CMP_ERR;
/* request the further code to freeze the queue */
hcb->ccb->ccb_h.status |= CAM_DEV_QFRZN;
lun_to_freeze = CAM_LUN_WILDCARD;
break;
}
....
Then we check if the error was serious enough to warrant freezing the input queue until it gets processed, and do so if it is:
[.programlisting]
....
if(hcb->ccb->ccb_h.status & CAM_DEV_QFRZN) {
/* freeze the queue */
xpt_freeze_devq(hcb->ccb->ccb_h.path, /*count*/1);
/* re-queue all commands for this target/LUN back to CAM */
for(h = softc->first_queued_hcb; h != NULL; h = hh) {
hh = h->next;
if(targ == h->targ
&& (lun_to_freeze == CAM_LUN_WILDCARD || lun_to_freeze == h->lun) )
free_hcb_and_ccb_done(h, h->ccb, CAM_REQUEUE_REQ);
}
}
free_hcb_and_ccb_done(hcb, hcb->ccb, ccb_status);
schedule_next_hcb(softc);
return;
....
This concludes the generic interrupt handling although specific controllers may require some additions.
[[scsi-errors]]
== Errors Summary
When executing an I/O request many things may go wrong. The reason for the error can be reported in the CCB status in great detail. Examples of use are spread throughout this document. For completeness, here is the summary of recommended responses for the typical error conditions:
* _CAM_RESRC_UNAVAIL_ - some resource is temporarily unavailable and the SIM driver cannot generate an event when it will become available. An example of this resource would be some intra-controller hardware resource for which the controller does not generate an interrupt when it becomes available.
* _CAM_UNCOR_PARITY_ - unrecovered parity error occurred
* _CAM_DATA_RUN_ERR_ - data overrun or unexpected data phase (going in other direction than specified in CAM_DIR_MASK) or odd transfer length for wide transfer
* _CAM_SEL_TIMEOUT_ - selection timeout occurred (target does not respond)
* _CAM_CMD_TIMEOUT_ - command timeout occurred (the timeout function ran)
* _CAM_SCSI_STATUS_ERROR_ - the device returned error
* _CAM_AUTOSENSE_FAIL_ - the device returned error and the REQUEST SENSE COMMAND failed
* _CAM_MSG_REJECT_REC_ - MESSAGE REJECT message was received
* _CAM_SCSI_BUS_RESET_ - received SCSI bus reset
* _CAM_REQ_CMP_ERR_ - "impossible" SCSI phase occurred or something else as weird or just a generic error if further detail is not available
* _CAM_UNEXP_BUSFREE_ - unexpected disconnect occurred
* _CAM_BDR_SENT_ - BUS DEVICE RESET message was sent to the target
* _CAM_UNREC_HBA_ERROR_ - unrecoverable Host Bus Adapter Error
* _CAM_REQ_TOO_BIG_ - the request was too large for this controller
* _CAM_REQUEUE_REQ_ - this request should be re-queued to preserve transaction ordering. This typically occurs when the SIM recognizes an error that should freeze the queue and must place other queued requests for the target at the SIM level back into the XPT queue. Typical cases of such errors are selection timeouts, command timeouts and similar conditions. In such cases the troublesome command returns the status indicating the error, and the other commands which have not been sent to the bus yet get re-queued.
* _CAM_LUN_INVALID_ - the LUN ID in the request is not supported by the SCSI controller
* _CAM_TID_INVALID_ - the target ID in the request is not supported by the SCSI controller
[[scsi-timeout]]
== Timeout Handling
When the timeout for an HCB expires that request should be aborted, just as with an XPT_ABORT request. The only difference is that the returned status of the aborted request should be CAM_CMD_TIMEOUT instead of CAM_REQ_ABORTED (which is why the abort is better implemented as a function). But there is one more possible problem: what if the abort request itself gets stuck? In this case the SCSI bus should be reset, just as with an XPT_RESET_BUS request (and the idea of implementing it as a function called from both places applies here too). We should also reset the whole SCSI bus if a device reset request gets stuck. So, after all, the timeout function would look like:
[.programlisting]
....
static void
xxx_timeout(void *arg)
{
struct xxx_hcb *hcb = (struct xxx_hcb *)arg;
struct xxx_softc *softc;
struct ccb_hdr *ccb_h;
softc = hcb->softc;
ccb_h = &hcb->ccb->ccb_h;
if(hcb->flags & HCB_BEING_ABORTED
|| ccb_h->func_code == XPT_RESET_DEV) {
xxx_reset_bus(softc);
} else {
xxx_abort_ccb(hcb->ccb, CAM_CMD_TIMEOUT);
}
}
....
When we abort a request all the other disconnected requests to the same target/LUN get aborted too. So the question appears: should we return them with status CAM_REQ_ABORTED or CAM_CMD_TIMEOUT? The current drivers use CAM_CMD_TIMEOUT. This seems logical, because if one request timed out then probably something really bad is happening to the device, so if the others were not disturbed they would time out by themselves.
diff --git a/documentation/content/en/books/arch-handbook/smp/_index.adoc b/documentation/content/en/books/arch-handbook/smp/_index.adoc
index a47de5f260..b3f9f87ccb 100644
--- a/documentation/content/en/books/arch-handbook/smp/_index.adoc
+++ b/documentation/content/en/books/arch-handbook/smp/_index.adoc
@@ -1,341 +1,342 @@
---
title: Chapter 8. SMPng Design Document
prev: books/arch-handbook/vm
next: books/arch-handbook/partii
+description: SMPng Design Document
---
[[smp]]
= SMPng Design Document
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 8
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[smp-intro]]
== Introduction
This document presents the current design and implementation of the SMPng Architecture. First, the basic primitives and tools are introduced. Next, a general architecture for the FreeBSD kernel's synchronization and execution model is laid out. Then, locking strategies for specific subsystems are discussed, documenting the approaches taken to introduce fine-grained synchronization and parallelism for each subsystem. Finally, detailed implementation notes are provided to motivate design choices, and make the reader aware of important implications involving the use of specific primitives.
This document is a work-in-progress, and will be updated to reflect on-going design and implementation activities associated with the SMPng Project. Many sections currently exist only in outline form, but will be fleshed out as work proceeds. Updates or suggestions regarding the document may be directed to the document editors.
The goal of SMPng is to allow concurrency in the kernel. The kernel is basically one rather large and complex program. To make the kernel multi-threaded we use some of the same tools used to make other programs multi-threaded. These include mutexes, shared/exclusive locks, semaphores, and condition variables. For the definitions of these and other SMP-related terms, please see the <<smp-glossary>> section of this article.
[[smp-lock-fundamentals]]
== Basic Tools and Locking Fundamentals
=== Atomic Instructions and Memory Barriers
There are several existing treatments of memory barriers and atomic instructions, so this section will not include a lot of detail. To put it simply, one can not go around reading variables without a lock if a lock is used to protect writes to that variable. This becomes obvious when you consider that memory barriers simply determine relative order of memory operations; they do not make any guarantee about timing of memory operations. That is, a memory barrier does not force the contents of a CPU's local cache or store buffer to flush. Instead, the memory barrier at lock release simply ensures that all writes to the protected data will be visible to other CPU's or devices if the write to release the lock is visible. The CPU is free to keep that data in its cache or store buffer as long as it wants. However, if another CPU performs an atomic instruction on the same datum, the first CPU must guarantee that the updated value is made visible to the second CPU along with any other operations that memory barriers may require.
For example, assuming a simple model where data is considered visible when it is in main memory (or a global cache), when an atomic instruction is triggered on one CPU, other CPU's store buffers and caches must flush any writes to that same cache line along with any pending operations behind a memory barrier.
This requires one to take special care when using an item protected by atomic instructions. For example, in the sleep mutex implementation, we have to use an `atomic_cmpset` rather than an `atomic_set` to turn on the `MTX_CONTESTED` bit. The reason is that we read the value of `mtx_lock` into a variable and then make a decision based on that read. However, the value we read may be stale, or it may change while we are making our decision. Thus, when the `atomic_set` is executed, it may end up setting the bit on a different value than the one we made the decision on. Therefore, we have to use an `atomic_cmpset` to set the value only if the value we made the decision on is up-to-date and valid.
Finally, atomic instructions only allow one item to be updated or read. If one needs to atomically update several items, then a lock must be used instead. For example, if two counters must be read and have values that are consistent relative to each other, then those counters must be protected by a lock rather than by separate atomic instructions.
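For example, a pair of counters that must stay consistent relative to each other could be protected with a mutex like this (a sketch; the names are illustrative):
[.programlisting]
....
static struct mtx stats_mtx;   /* protects pkts_in and pkts_out; set up
                                * elsewhere with mtx_init(&stats_mtx,
                                * "stats", NULL, MTX_DEF) */
static u_int pkts_in, pkts_out;

static void
stats_update(void)
{
	mtx_lock(&stats_mtx);
	pkts_in++;
	pkts_out++;
	mtx_unlock(&stats_mtx);
}

static void
stats_read(u_int *in, u_int *out)
{
	mtx_lock(&stats_mtx);
	*in = pkts_in;          /* both values are consistent ... */
	*out = pkts_out;        /* ... relative to each other */
	mtx_unlock(&stats_mtx);
}
....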
=== Read Locks Versus Write Locks
Read locks do not need to be as strong as write locks. Both types of locks need to ensure that the data they are accessing is not stale. However, only write access requires exclusive access. Multiple threads can safely read a value. Using different types of locks for reads and writes can be implemented in a number of ways.
First, sx locks can be used in this manner by using an exclusive lock when writing and a shared lock when reading. This method is quite straightforward.
A second method is a bit more obscure. You can protect a datum with multiple locks. Then for reading that data you simply need to have a read lock of one of the locks. However, to write to the data, you need to have a write lock of all of the locks. This can make writing rather expensive but can be useful when data is accessed in various ways. For example, the parent process pointer is protected by both the `proctree_lock` sx lock and the per-process mutex. Sometimes the proc lock is easier, as we are just looking up the parent of a process that we already have locked. However, other places such as `inferior` need to walk the tree of processes via parent pointers, and locking each process would be prohibitive, as well as a pain when trying to guarantee that the condition you are checking remains valid for both the check and the actions taken as a result of the check.
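A sketch of the general pattern with two generic locks (the names are illustrative and this is not the actual process-tree code):
[.programlisting]
....
struct foo;

/* shared_ptr is protected by BOTH tree_sx and obj_mtx */
static struct sx tree_sx;
static struct mtx obj_mtx;
static struct foo *shared_ptr;

static struct foo *
read_ptr(void)
{
	struct foo *p;

	mtx_lock(&obj_mtx);     /* holding either lock is enough to read */
	p = shared_ptr;
	mtx_unlock(&obj_mtx);
	return (p);
}

static void
write_ptr(struct foo *newp)
{
	sx_xlock(&tree_sx);     /* a writer must hold all of the locks */
	mtx_lock(&obj_mtx);
	shared_ptr = newp;
	mtx_unlock(&obj_mtx);
	sx_xunlock(&tree_sx);
}
....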
=== Locking Conditions and Results
If you need a lock to check the state of a variable so that you can take an action based on the state you read, you can not just hold the lock while reading the variable and then drop the lock before you act on the value you read. Once you drop the lock, the variable can change rendering your decision invalid. Thus, you must hold the lock both while reading the variable and while performing the action as a result of the test.
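For example (a sketch; `sc`, `sc_state` and `start_io()` are illustrative placeholders, and `start_io()` is assumed not to sleep):
[.programlisting]
....
	int ready;

	/* WRONG: the state may change after the lock is dropped */
	mtx_lock(&sc->sc_mtx);
	ready = (sc->sc_state == STATE_READY);
	mtx_unlock(&sc->sc_mtx);
	if (ready)
		start_io(sc);   /* sc_state may no longer be STATE_READY */

	/* RIGHT: hold the lock across both the check and the action */
	mtx_lock(&sc->sc_mtx);
	if (sc->sc_state == STATE_READY)
		start_io(sc);
	mtx_unlock(&sc->sc_mtx);
....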
[[smp-design]]
== General Architecture and Design
=== Interrupt Handling
Following the pattern of several other multi-threaded UNIX(R) kernels, FreeBSD deals with interrupt handlers by giving them their own thread context. Providing a context for interrupt handlers allows them to block on locks. To help avoid latency, however, interrupt threads run at real-time kernel priority. Thus, interrupt handlers should not execute for very long to avoid starving other kernel threads. In addition, since multiple handlers may share an interrupt thread, interrupt handlers should not sleep or use a sleepable lock to avoid starving another interrupt handler.
The interrupt threads currently in FreeBSD are referred to as heavyweight interrupt threads. They are called this because switching to an interrupt thread involves a full context switch. In the initial implementation, the kernel was not preemptive and thus interrupts that interrupted a kernel thread would have to wait until the kernel thread blocked or returned to userland before they would have an opportunity to run.
To deal with the latency problems, the kernel in FreeBSD has been made preemptive. Currently, we only preempt a kernel thread when we release a sleep mutex or when an interrupt comes in. However, the plan is to make the FreeBSD kernel fully preemptive as described below.
Not all interrupt handlers execute in a thread context. Instead, some handlers execute directly in primary interrupt context. These interrupt handlers are currently misnamed "fast" interrupt handlers since the `INTR_FAST` flag used in earlier versions of the kernel is used to mark these handlers. The only interrupts which currently use these types of interrupt handlers are clock interrupts and serial I/O device interrupts. Since these handlers do not have their own context, they may not acquire blocking locks and thus may only use spin mutexes.
Finally, there is one optional optimization that can be added in MD code called lightweight context switches. Since an interrupt thread executes in a kernel context, it can borrow the vmspace of any process. Thus, in a lightweight context switch, the switch to the interrupt thread does not switch vmspaces but borrows the vmspace of the interrupted thread. In order to ensure that the vmspace of the interrupted thread does not disappear out from under us, the interrupted thread is not allowed to execute until the interrupt thread is no longer borrowing its vmspace. This can happen when the interrupt thread either blocks or finishes. If an interrupt thread blocks, then it will use its own context when it is made runnable again. Thus, it can release the interrupted thread.
The cons of this optimization are that it is very machine specific and complex, and thus only worth the effort if there is a large performance improvement. At this point it is probably too early to tell, and in fact it will probably hurt performance, as almost all interrupt handlers will immediately block on Giant and require a thread fix-up when they block. Also, an alternative method of interrupt handling has been proposed by Mike Smith that works like so:
. Each interrupt handler has two parts: a predicate which runs in primary interrupt context and a handler which runs in its own thread context.
. If an interrupt handler has a predicate, then when an interrupt is triggered, the predicate is run. If the predicate returns true then the interrupt is assumed to be fully handled and the kernel returns from the interrupt. If the predicate returns false or there is no predicate, then the threaded handler is scheduled to run.
Fitting light weight context switches into this scheme might prove rather complicated. Since we may want to change to this scheme at some point in the future, it is probably best to defer work on light weight context switches until we have settled on the final interrupt handling architecture and determined how light weight context switches might or might not fit into it.
=== Kernel Preemption and Critical Sections
==== Kernel Preemption in a Nutshell
Kernel preemption is fairly simple. The basic idea is that a CPU should always be doing the highest priority work available. Well, that is the ideal at least. There are a couple of cases where the expense of achieving the ideal is not worth being perfect.
Implementing full kernel preemption is very straightforward: when you schedule a thread to be executed by putting it on a run queue, you check to see if its priority is higher than the currently executing thread. If so, you initiate a context switch to that thread.
While locks can protect most data in the case of a preemption, not all of the kernel is preemption safe. For example, if a thread holding a spin mutex is preempted and the new thread attempts to grab the same spin mutex, the new thread may spin forever, as the interrupted thread may never get a chance to execute. Also, some code such as the code to assign an address space number for a process during `exec` on the Alpha needs to not be preempted as it supports the actual context switch code. Preemption is disabled for these code sections by using a critical section.
==== Critical Sections
The responsibility of the critical section API is to prevent context switches inside of a critical section. With a fully preemptive kernel, every `setrunqueue` of a thread other than the current thread is a preemption point. One implementation is for `critical_enter` to set a per-thread flag that is cleared by its counterpart. If `setrunqueue` is called with this flag set, it does not preempt regardless of the priority of the new thread relative to the current thread. However, since critical sections are used in spin mutexes to prevent context switches and multiple spin mutexes can be acquired, the critical section API must support nesting. For this reason the current implementation uses a nesting count instead of a single per-thread flag.
In order to minimize latency, preemptions inside of a critical section are deferred rather than dropped. If a thread that would normally be preempted to is made runnable while the current thread is in a critical section, then a per-thread flag is set to indicate that there is a pending preemption. When the outermost critical section is exited, the flag is checked. If the flag is set, then the current thread is preempted to allow the higher priority thread to run.
Interrupts pose a problem with regards to spin mutexes. If a low-level interrupt handler needs a lock, it needs to not interrupt any code needing that lock to avoid possible data structure corruption. Currently, providing this mechanism is piggybacked onto critical section API by means of the `cpu_critical_enter` and `cpu_critical_exit` functions. Currently this API disables and re-enables interrupts on all of FreeBSD's current platforms. This approach may not be purely optimal, but it is simple to understand and simple to get right. Theoretically, this second API need only be used for spin mutexes that are used in primary interrupt context. However, to make the code simpler, it is used for all spin mutexes and even all critical sections. It may be desirable to split out the MD API from the MI API and only use it in conjunction with the MI API in the spin mutex implementation. If this approach is taken, then the MD API likely would need a rename to show that it is a separate API.
==== Design Tradeoffs
As mentioned earlier, a couple of trade-offs have been made to sacrifice cases where perfect preemption may not always provide the best performance.
The first trade-off is that the preemption code does not take other CPUs into account. Suppose we have two CPUs, A and B, with the priority of A's thread at 4 and the priority of B's thread at 2. If CPU B makes a thread with priority 1 runnable, then in theory, we want CPU A to switch to the new thread so that we will be running the two highest priority runnable threads. However, the cost of determining which CPU to enforce a preemption on as well as actually signaling that CPU via an IPI along with the synchronization that would be required would be enormous. Thus, the current code would instead force CPU B to switch to the higher priority thread. Note that this still puts the system in a better position as CPU B is executing a thread of priority 1 rather than a thread of priority 2.
The second trade-off limits immediate kernel preemption to real-time priority kernel threads. In the simple case of preemption defined above, a thread is always preempted immediately (or as soon as a critical section is exited) if a higher priority thread is made runnable. However, many threads executing in the kernel only execute in a kernel context for a short time before either blocking or returning to userland. Thus, if the kernel preempts these threads to run another non-realtime kernel thread, the kernel may switch out the executing thread just before it is about to sleep or execute. The cache on the CPU must then adjust to the new thread. When the kernel returns to the preempted thread, it must refill all the cache information that was lost. In addition, two extra context switches are performed that could be avoided if the kernel deferred the preemption until the first thread blocked or returned to userland. Thus, by default, the preemption code will only preempt immediately if the higher priority thread is a real-time priority thread.
Turning on full kernel preemption for all kernel threads has value as a debugging aid since it exposes more race conditions. It is especially useful on UP systems where many races are hard to simulate otherwise. Thus, there is a kernel option `FULL_PREEMPTION` to enable preemption for all kernel threads that can be used for debugging purposes.
=== Thread Migration
Simply put, a thread migrates when it moves from one CPU to another. In a non-preemptive kernel this can only happen at well-defined points such as when calling `msleep` or returning to userland. However, in a preemptive kernel, an interrupt can force a preemption and possible migration at any time. This can have negative effects on per-CPU data since, with the exception of `curthread` and `curpcb`, the data can change whenever you migrate. Since you can potentially migrate at any time this renders unprotected per-CPU data access rather useless. Thus it is desirable to be able to disable migration for sections of code that need per-CPU data to be stable.
Critical sections currently prevent migration since they do not allow context switches. However, this may be too strong of a requirement to enforce in some cases since a critical section also effectively blocks interrupt threads on the current processor. As a result, another API has been provided to allow the current thread to indicate that if it preempted it should not migrate to another CPU.
This API is known as thread pinning and is provided by the scheduler. The API consists of two functions: `sched_pin` and `sched_unpin`. These functions manage a per-thread nesting count `td_pinned`. A thread is pinned when its nesting count is greater than zero and a thread starts off unpinned with a nesting count of zero. Each scheduler implementation is required to ensure that pinned threads are only executed on the CPU that they were executing on when the `sched_pin` was first called. Since the nesting count is only written to by the thread itself and is only read by other threads when the pinned thread is not executing but while `sched_lock` is held, then `td_pinned` does not need any locking. The `sched_pin` function increments the nesting count and `sched_unpin` decrements the nesting count. Note that these functions only operate on the current thread and bind the current thread to the CPU it is executing on at the time. To bind an arbitrary thread to a specific CPU, the `sched_bind` and `sched_unbind` functions should be used instead.
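For example, a thread that wants to update a per-CPU statistics slot without being migrated in the middle of the update might do the following (the `pcpu_stats` array is illustrative; note that pinning only prevents migration, it does not protect against interrupt handlers on the same CPU):
[.programlisting]
....
	sched_pin();            /* stay on this CPU until unpinned */
	pcpu_stats[PCPU_GET(cpuid)].packets++;
	sched_unpin();
....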
=== Callouts
The `timeout` kernel facility permits kernel services to register functions for execution as part of the `softclock` software interrupt. Events are scheduled based on a desired number of clock ticks, and callbacks to the consumer-provided function will occur at approximately the right time.
The global list of pending timeout events is protected by a global spin mutex, `callout_lock`; all access to the timeout list must be performed with this mutex held. When `softclock` is woken up, it scans the list of pending timeouts for those that should fire. In order to avoid lock order reversal, the `softclock` thread will release the `callout_lock` mutex when invoking the provided `timeout` callback function. If the `CALLOUT_MPSAFE` flag was not set during registration, then Giant will be grabbed before invoking the callout, and then released afterwards. The `callout_lock` mutex will be re-grabbed before proceeding. The `softclock` code is careful to leave the list in a consistent state while releasing the mutex. If `DIAGNOSTIC` is enabled, then the time taken to execute each function is measured, and a warning is generated if it exceeds a threshold.
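A sketch of typical consumer usage of this facility (the `foo_` names are illustrative):
[.programlisting]
....
static struct callout_handle foo_handle;

static void
foo_tick(void *arg)
{
	struct foo_softc *sc = arg;

	/* ... the periodic work goes here ... */

	foo_handle = timeout(foo_tick, sc, hz); /* re-arm for ~1 second */
}

static void
foo_start(struct foo_softc *sc)
{
	foo_handle = timeout(foo_tick, sc, hz); /* first call in ~1 second */
}

static void
foo_stop(struct foo_softc *sc)
{
	untimeout(foo_tick, sc, foo_handle);    /* cancel if still pending */
}
....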
[[smp-lock-strategies]]
== Specific Locking Strategies
=== Credentials
`struct ucred` is the kernel's internal credential structure, and is generally used as the basis for process-driven access control within the kernel. BSD-derived systems use a "copy-on-write" model for credential data: multiple references may exist for a credential structure, and when a change needs to be made, the structure is duplicated, modified, and then the reference replaced. Due to wide-spread caching of the credential to implement access control on open, this results in substantial memory savings. With a move to fine-grained SMP, this model also saves substantially on locking operations by requiring that modification only occur on an unshared credential, avoiding the need for explicit synchronization when consuming a known-shared credential.
Credential structures with a single reference are considered mutable; shared credential structures must not be modified or a race condition is risked. A mutex, `cr_mtxp` protects the reference count of `struct ucred` so as to maintain consistency. Any use of the structure requires a valid reference for the duration of the use, or the structure may be released out from under the illegitimate consumer.
The `struct ucred` mutex is a leaf mutex and is implemented via a mutex pool for performance reasons.
Usually, credentials are used in a read-only manner for access control decisions, and in this case `td_ucred` is generally preferred because it requires no locking. When a process' credential is updated the `proc` lock must be held across the check and update operations, thus avoiding races. The process credential `p_ucred` must be used for check and update operations to prevent time-of-check, time-of-use races.
If system call invocations will perform access control after an update to the process credential, the value of `td_ucred` must also be refreshed to the current process value. This will prevent use of a stale credential following a change. The kernel automatically refreshes the `td_ucred` pointer in the thread structure from the process `p_ucred` whenever a process enters the kernel, permitting use of a fresh credential for kernel access control.
=== File Descriptors and File Descriptor Tables
Details to follow.
=== Jail Structures
`struct prison` stores administrative details pertinent to the maintenance of jails created using the man:jail[2] API. This includes the per-jail hostname, IP address, and related settings. This structure is reference-counted since pointers to instances of the structure are shared by many credential structures. A single mutex, `pr_mtx` protects read and write access to the reference count and all mutable variables inside the struct jail. Some variables are set only when the jail is created, and a valid reference to the `struct prison` is sufficient to read these values. The precise locking of each entry is documented via comments in [.filename]#sys/jail.h#.
=== MAC Framework
The TrustedBSD MAC Framework maintains data in a variety of kernel objects, in the form of `struct label`. In general, labels in kernel objects are protected by the same lock as the remainder of the kernel object. For example, the `v_label` label in `struct vnode` is protected by the vnode lock on the vnode.
In addition to labels maintained in standard kernel objects, the MAC Framework also maintains a list of registered and active policies. The policy list is protected by a global mutex (`mac_policy_list_lock`) and a busy count (also protected by the mutex). Since many access control checks may occur in parallel, entry to the framework for a read-only access to the policy list requires holding the mutex while incrementing (and later decrementing) the busy count. The mutex need not be held for the duration of the MAC entry operation: some operations, such as label operations on file system objects, are long-lived. To modify the policy list, such as during policy registration and de-registration, the mutex must be held and the reference count must be zero, to prevent modification of the list while it is in use.
A condition variable, `mac_policy_list_not_busy`, is available to threads that need to wait for the list to become unbusy, but this condition variable must only be waited on if the caller is holding no other locks, or a lock order violation may be possible. The busy count, in effect, acts as a form of shared/exclusive lock over access to the framework: the difference is that, unlike with an sx lock, consumers waiting for the list to become unbusy may be starved, rather than permitting lock order problems with regards to the busy count and other locks that may be held on entry to (or inside) the MAC Framework.
=== Modules
For the module subsystem there exists a single lock that is used to protect the shared data. This lock is a shared/exclusive (SX) lock and has a good chance of needing to be acquired (shared or exclusively); therefore, a few macros have been added to make access to the lock easier. These macros can be located in [.filename]#sys/module.h# and are quite basic in terms of usage. The main structures protected under this lock are the `module_t` structures (when shared) and the global `modulelist_t` structure, `modules`. One should review the related source code in [.filename]#kern/kern_module.c# to further understand the locking strategy.
=== Newbus Device Tree
The newbus system will have one sx lock. Readers will hold a shared (read) lock (man:sx_slock[9]) and writers will hold an exclusive (write) lock (man:sx_xlock[9]). Internal functions will not do locking at all. Externally visible ones will lock as needed. Those items that do not matter if the race is won or lost will not be locked, since they tend to be read all over the place (e.g., man:device_get_softc[9]). There will be relatively few changes to the newbus data structures, so a single lock should be sufficient and not impose a performance penalty.
=== Pipes
...
=== Processes and Threads
- process hierarchy
- proc locks, references
- thread-specific copies of proc entries to freeze during system calls, including td_ucred
- inter-process operations
- process groups and sessions
=== Scheduler
Lots of references to `sched_lock` and notes pointing at specific primitives and related magic elsewhere in the document.
=== Select and Poll
The `select` and `poll` functions permit threads to block waiting on events on file descriptors--most frequently, whether or not the file descriptors are readable or writable.
...
=== SIGIO
The SIGIO service permits a process to request the delivery of a SIGIO signal to its process group when the read/write status of specified file descriptors changes. At most one process or process group is permitted to register for SIGIO from any given kernel object, and that process or group is referred to as the owner. Each object supporting SIGIO registration contains a pointer field that is `NULL` if the object is not registered, or points to a `struct sigio` describing the registration. This field is protected by a global mutex, `sigio_lock`. Callers to SIGIO maintenance functions must pass in this field "by reference" so that local register copies of the field are not made when unprotected by the lock.
One `struct sigio` is allocated for each registered object associated with any process or process group, and contains back-pointers to the object, owner, signal information, a credential, and the general disposition of the registration. Each process or process group contains a list of registered `struct sigio` structures, `p_sigiolst` for processes, and `pg_sigiolst` for process groups. These lists are protected by the process or process group locks respectively. Most fields in each `struct sigio` are constant for the duration of the registration, with the exception of the `sio_pgsigio` field, which links the `struct sigio` into the process or process group list. Developers implementing new kernel objects supporting SIGIO will, in general, want to avoid holding structure locks while invoking SIGIO supporting functions, such as `fsetown` or `funsetown`, to avoid defining a lock order between structure locks and the global SIGIO lock. This is generally possible through use of an elevated reference count on the structure, such as reliance on a file descriptor reference to a pipe during a pipe operation.
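A hedged sketch of this pattern follows; the `foo_softc` object and its handlers are hypothetical, while `fsetown()`, `fgetown()`, and `pgsigio()` are the maintenance functions referred to above, each taking the registration field by reference:

[.programlisting]
....
#include <sys/param.h>
#include <sys/errno.h>
#include <sys/filio.h>
#include <sys/signal.h>
#include <sys/signalvar.h>

struct foo_softc {
    struct sigio    *sc_sigio;  /* SIGIO registration, or NULL */
    /* ... other fields protected by the softc lock ... */
};

static int
foo_ioctl_sketch(struct foo_softc *sc, u_long cmd, caddr_t data)
{
    switch (cmd) {
    case FIOSETOWN:
        /*
         * Pass the field by reference; the softc lock is deliberately
         * not held here, to avoid ordering it against sigio_lock.
         */
        return (fsetown(*(int *)data, &sc->sc_sigio));
    case FIOGETOWN:
        *(int *)data = fgetown(&sc->sc_sigio);
        return (0);
    }
    return (ENOTTY);
}

/* On I/O readiness, deliver SIGIO to the registered owner, if any. */
static void
foo_ready(struct foo_softc *sc)
{
    if (sc->sc_sigio != NULL)
        pgsigio(&sc->sc_sigio, SIGIO, 0);
}
....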
=== Sysctl
The `sysctl` MIB service is invoked from both within the kernel and from userland applications using a system call. At least two issues are raised in locking: first, the protection of the structures maintaining the namespace, and second, interactions with kernel variables and functions that are accessed by the sysctl interface. Since sysctl permits the direct export (and modification) of kernel statistics and configuration parameters, the sysctl mechanism must become aware of appropriate locking semantics for those variables. Currently, sysctl makes use of a single global sx lock to serialize use of `sysctl`; however, it is assumed to operate under Giant and other protections are not provided. The remainder of this section speculates on locking and semantic changes to sysctl.
- The order of operations for sysctl's that update values needs to change from "read old, copyin and copyout, write new" to "copyin, lock, read old and write new, unlock, copyout". Normal sysctl's that just copy out the old value and set a new value that they copy in may still be able to follow the old model. However, it may be cleaner to use the second model for all of the sysctl handlers to avoid lock operations.
- To allow for the common case, a sysctl could embed a pointer to a mutex in the SYSCTL_FOO macros and in the struct. This would work for most sysctl's. For values protected by sx locks, spin mutexes, or other locking strategies besides a single sleep mutex, SYSCTL_PROC nodes could be used to get the locking right (see the sketch after this list).
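The following `SYSCTL_PROC` handler is only a hedged sketch of this idea: it keeps the locking confined to the reads and writes of the kernel variable, while using the stock `sysctl_handle_int()` helper, so the copyout/copyin ordering differs slightly from the second model above. The `foo_limit` variable and `foo_mtx` mutex are hypothetical.

[.programlisting]
....
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/sysctl.h>

static struct mtx foo_mtx;      /* hypothetical lock */
static int foo_limit;           /* hypothetical tunable protected by foo_mtx */

static int
sysctl_foo_limit(SYSCTL_HANDLER_ARGS)
{
    int error, val;

    mtx_lock(&foo_mtx);
    val = foo_limit;            /* read the old value under the lock */
    mtx_unlock(&foo_mtx);

    error = sysctl_handle_int(oidp, &val, 0, req);
    if (error != 0 || req->newptr == NULL)
        return (error);

    mtx_lock(&foo_mtx);
    foo_limit = val;            /* write the new value under the lock */
    mtx_unlock(&foo_mtx);
    return (0);
}
SYSCTL_PROC(_kern, OID_AUTO, foo_limit, CTLTYPE_INT | CTLFLAG_RW,
    NULL, 0, sysctl_foo_limit, "I", "Example value protected by a mutex");
....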
=== Taskqueue
The taskqueue interface has two basic locks associated with it to protect the related shared data. The `taskqueue_queues_mutex` serves as a lock to protect the `taskqueue_queues` TAILQ. The other mutex associated with this system is the one in the `struct taskqueue` data structure; this synchronization primitive protects the integrity of the data in `struct taskqueue`. Note that there are no separate macros to assist users in locking their own work, since these locks are most likely not going to be used outside of [.filename]#kern/subr_taskqueue.c#.
[[smp-implementation-notes]]
== Implementation Notes
=== Sleep Queues
A sleep queue is a structure that holds the list of threads asleep on a wait channel. Each thread that is not asleep on a wait channel carries a sleep queue structure around with it. When a thread blocks on a wait channel, it donates its sleep queue structure to that wait channel. Sleep queues associated with a wait channel are stored in a hash table.
The sleep queue hash table holds sleep queues for wait channels that have at least one blocked thread. Each entry in the hash table is called a sleepqueue chain. The chain contains a linked list of sleep queues and a spin mutex. The spin mutex protects the list of sleep queues as well as the contents of the sleep queue structures on the list. Only one sleep queue is associated with a given wait channel. If multiple threads block on a wait channel, then the sleep queues associated with all but the first thread are stored on a list of free sleep queues in the master sleep queue. When a thread is removed from the sleep queue, it is given one of the sleep queue structures from the master queue's free list if it is not the only thread asleep on the queue. The last thread is given the master sleep queue when it is resumed. Since threads may be removed from the sleep queue in a different order than they are added, a thread may depart from a sleep queue with a different sleep queue structure than the one it arrived with.
The `sleepq_lock` function locks the spin mutex of the sleep queue chain that maps to a specific wait channel. The `sleepq_lookup` function looks in the hash table for the master sleep queue associated with a given wait channel. If no master sleep queue is found, it returns `NULL`. The `sleepq_release` function unlocks the spin mutex associated with a given wait channel.
A thread is added to a sleep queue via the `sleepq_add` function. This function accepts the wait channel, a pointer to the mutex that protects the wait channel, a wait message description string, and a mask of flags. The sleep queue chain should be locked via `sleepq_lock` before this function is called. If no mutex protects the wait channel (or it is protected by Giant), then the mutex pointer argument should be `NULL`. The flags argument contains a type field that indicates the kind of sleep queue that the thread is being added to and a flag to indicate if the sleep is interruptible (`SLEEPQ_INTERRUPTIBLE`). Currently there are only two types of sleep queues: traditional sleep queues managed via the `msleep` and `wakeup` functions (`SLEEPQ_MSLEEP`) and condition variable sleep queues (`SLEEPQ_CONDVAR`). The sleep queue type and lock pointer argument are used solely for internal assertion checking. Code that calls `sleepq_add` should explicitly unlock any interlock protecting the wait channel after the associated sleepqueue chain has been locked via `sleepq_lock` and before blocking on the sleep queue via one of the waiting functions.
A timeout for a sleep is set by invoking `sleepq_set_timeout`. The function accepts the wait channel and the timeout time as a relative tick count as its arguments. If a sleep should be interrupted by arriving signals, the `sleepq_catch_signals` function should be called as well. This function accepts the wait channel as its only parameter. If there is already a signal pending for this thread, then `sleepq_catch_signals` will return a signal number; otherwise, it will return 0.
Once a thread has been added to a sleep queue, it blocks using one of the `sleepq_wait` functions. There are four wait functions depending on whether or not the caller wishes to use a timeout or have the sleep aborted by caught signals or an interrupt from the userland thread scheduler. The `sleepq_wait` function simply waits until the current thread is explicitly resumed by one of the wakeup functions. The `sleepq_timedwait` function waits until either the thread is explicitly resumed or the timeout set by an earlier call to `sleepq_set_timeout` expires. The `sleepq_wait_sig` function waits until either the thread is explicitly resumed or its sleep is aborted. The `sleepq_timedwait_sig` function waits until either the thread is explicitly resumed, the timeout set by an earlier call to `sleepq_set_timeout` expires, or the thread's sleep is aborted. All of the wait functions accept the wait channel as their first parameter. In addition, the `sleepq_timedwait_sig` function accepts a second boolean parameter to indicate if the earlier call to `sleepq_catch_signals` found a pending signal.
If the thread is explicitly resumed or is aborted by a signal, then a value of zero is returned by the wait function to indicate a successful sleep. If the thread is resumed by either a timeout or an interrupt from the userland thread scheduler then an appropriate errno value is returned instead. Note that since `sleepq_wait` can only return 0 it does not return anything and the caller should assume a successful sleep. Also, if a thread's sleep times out and is aborted simultaneously then `sleepq_timedwait_sig` will return an error indicating that a timeout occurred. If an error value of 0 is returned and either `sleepq_wait_sig` or `sleepq_timedwait_sig` was used to block, then the function `sleepq_calc_signal_retval` should be called to check for any pending signals and calculate an appropriate return value if any are found. The signal number returned by the earlier call to `sleepq_catch_signals` should be passed as the sole argument to `sleepq_calc_signal_retval`.
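The following fragment ties these calls together for an interruptible, timed sleep. It is only a sketch written against the interface as described above; exact prototypes have changed between FreeBSD versions, and `foo_wchan`, `foo_mtx`, and the surrounding logic are hypothetical:

[.programlisting]
....
static struct mtx foo_mtx;  /* hypothetical interlock protecting the condition */
static int foo_wchan;       /* hypothetical wait channel */

static int
foo_sleep(void)
{
    int error, sig;

    /* foo_mtx is held on entry and protects the sleep condition. */
    sleepq_lock(&foo_wchan);
    sleepq_add(&foo_wchan, &foo_mtx, "foowt",
        SLEEPQ_MSLEEP | SLEEPQ_INTERRUPTIBLE);
    mtx_unlock(&foo_mtx);               /* drop the interlock before blocking */
    sleepq_set_timeout(&foo_wchan, hz); /* give up after roughly one second */
    sig = sleepq_catch_signals(&foo_wchan);
    error = sleepq_timedwait_sig(&foo_wchan, sig != 0);
    if (error == 0)
        error = sleepq_calc_signal_retval(sig);
    return (error);
}
....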
Threads asleep on a wait channel are explicitly resumed by the `sleepq_broadcast` and `sleepq_signal` functions. Both functions accept the wait channel from which to resume threads, a priority to raise resumed threads to, and a flags argument to indicate which type of sleep queue is being resumed. The priority argument is treated as a minimum priority. If a thread being resumed already has a higher priority (numerically lower) than the priority argument then its priority is not adjusted. The flags argument is used for internal assertions to ensure that sleep queues are not being treated as the wrong type. For example, the condition variable functions should not resume threads on a traditional sleep queue. The `sleepq_broadcast` function resumes all threads that are blocked on the specified wait channel while `sleepq_signal` only resumes the highest priority thread blocked on the wait channel. The sleep queue chain should first be locked via the `sleepq_lock` function before calling these functions.
A sleeping thread may have its sleep interrupted by calling the `sleepq_abort` function. This function must be called with `sched_lock` held and the thread must be queued on a sleep queue. A thread may also be removed from a specific sleep queue via the `sleepq_remove` function. This function accepts both a thread and a wait channel as an argument and only awakens the thread if it is on the sleep queue for the specified wait channel. If the thread is not on a sleep queue or it is on a sleep queue for a different wait channel, then this function does nothing.
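The wakeup side is symmetric. Again this is only a sketch: the argument order of the resume functions and the explicit release of the chain lock are assumptions here, since both have varied between FreeBSD versions.

[.programlisting]
....
/* Resume every thread sleeping on foo_wchan (compare wakeup()). */
sleepq_lock(&foo_wchan);
sleepq_broadcast(&foo_wchan, SLEEPQ_MSLEEP /* flags */, 0 /* pri: no boost */);
sleepq_release(&foo_wchan);

/* Resume only the highest-priority waiter (compare wakeup_one()). */
sleepq_lock(&foo_wchan);
sleepq_signal(&foo_wchan, SLEEPQ_MSLEEP /* flags */, 0 /* pri: no boost */);
sleepq_release(&foo_wchan);
....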
=== Turnstiles
- Compare/contrast with sleep queues.
- Lookup/wait/release. - Describe TDF_TSNOBLOCK race.
- Priority propagation.
=== Details of the Mutex Implementation
- Should we require mutexes to be owned for mtx_destroy() since we can not safely assert that they are unowned by anyone else otherwise?
==== Spin Mutexes
- Use a critical section...
==== Sleep Mutexes
- Describe the races with contested mutexes
- Why it is safe to read mtx_lock of a contested mutex when holding the turnstile chain lock.
=== Witness
- What does it do
- How does it work
[[smp-misc]]
== Miscellaneous Topics
=== Interrupt Source and ICU Abstractions
- struct isrc
- pic drivers
=== Other Random Questions/Topics
- Should we pass an interlock into `sema_wait`?
- Should we have non-sleepable sx locks?
- Add some info about proper use of reference counts.
:sectnums!:
[glossary]
[[smp-glossary]]
== Glossary
[.glosslist]
atomic::
An operation is atomic if all of its effects are visible to other CPUs together when the proper access protocol is followed. In the degenerate case, atomic instructions are provided directly by machine architectures. At a higher level, if several members of a structure are protected by a lock, then a set of operations are atomic if they are all performed while holding the lock without releasing the lock in between any of the operations.
+
See Also operation.
block::
A thread is blocked when it is waiting on a lock, resource, or condition. Unfortunately this term is a bit overloaded as a result.
+
See Also sleep.
critical section::
A section of code that is not allowed to be preempted. A critical section is entered and exited using the man:critical_enter[9] API.
MD::
Machine dependent.
+
See Also MI.
memory operation::
A memory operation reads and/or writes to a memory location.
MI::
Machine independent.
+
See Also MD.
operation::
See memory operation.
primary interrupt context::
Primary interrupt context refers to the code that runs when an interrupt occurs. This code can either run an interrupt handler directly or schedule an asynchronous interrupt thread to execute the interrupt handlers for a given interrupt source.
realtime kernel thread::
A high priority kernel thread. Currently, the only realtime priority kernel threads are interrupt threads.
+
See Also thread.
sleep::
A thread is asleep when it is blocked on a condition variable or a sleep queue via msleep or tsleep.
+
See Also block.
sleepable lock::
A sleepable lock is a lock that can be held by a thread which is asleep. Lockmgr locks and sx locks are currently the only sleepable locks in FreeBSD. Eventually, some sx locks such as the allproc and proctree locks may become non-sleepable locks.
+
See Also sleep.
thread::
A kernel thread represented by a struct thread. Threads own locks and hold a single execution context.
wait channel::
A kernel virtual address that threads may sleep on.
:sectnums:
diff --git a/documentation/content/en/books/arch-handbook/sound/_index.adoc b/documentation/content/en/books/arch-handbook/sound/_index.adoc
index a7b6567206..94c43f0172 100644
--- a/documentation/content/en/books/arch-handbook/sound/_index.adoc
+++ b/documentation/content/en/books/arch-handbook/sound/_index.adoc
@@ -1,332 +1,333 @@
---
title: Chapter 15. Sound Subsystem
prev: books/arch-handbook/newbus
next: books/arch-handbook/pccard
+description: FreeBSD Sound Subsystem
---
[[oss]]
= Sound Subsystem
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 15
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[oss-intro]]
== Introduction
The FreeBSD sound subsystem cleanly separates generic sound handling issues from device-specific ones. This makes it easier to add support for new hardware.
The man:pcm[4] framework is the central piece of the sound subsystem. It mainly implements the following elements:
* A system call interface (read, write, ioctls) to digitized sound and mixer functions. The ioctl command set is compatible with the legacy _OSS_ or _Voxware_ interface, allowing common multimedia applications to be ported without modification.
* Common code for processing sound data (format conversions, virtual channels).
* A uniform software interface to hardware-specific audio interface modules.
* Additional support for some common hardware interfaces (ac97), or shared hardware-specific code (e.g., ISA DMA routines).
The support for specific sound cards is implemented by hardware-specific drivers, which provide channel and mixer interfaces to plug into the generic [.filename]#pcm# code.
In this chapter, the term [.filename]#pcm# will refer to the central, common part of the sound driver, as opposed to the hardware-specific modules.
The prospective driver writer will of course want to start from an existing module and use the code as the ultimate reference. But, while the sound code is nice and clean, it is also mostly devoid of comments. This document tries to give an overview of the framework interface and answer some questions that may arise while adapting the existing code.
As an alternative, or in addition to starting from a working example, you can find a commented driver template at https://people.FreeBSD.org/~cg/template.c[https://people.FreeBSD.org/~cg/template.c].
[[oss-files]]
== Files
All the relevant code lives in [.filename]#/usr/src/sys/dev/sound/#, except for the public ioctl interface definitions, found in [.filename]#/usr/src/sys/sys/soundcard.h#
Under [.filename]#/usr/src/sys/dev/sound/#, the [.filename]#pcm/# directory holds the central code, while the [.filename]#pci/#, [.filename]#isa/# and [.filename]#usb/# directories have the drivers for PCI and ISA boards, and for USB audio devices.
[[pcm-probe-and-attach]]
== Probing, Attaching, etc.
Sound drivers probe and attach in almost the same way as any hardware driver module. You might want to look at the crossref:isa-driver[isa-driver,ISA] or crossref:pci[pci,PCI] specific sections of the handbook for more information.
However, sound drivers differ in some ways:
* They declare themselves as [.filename]#pcm# class devices, with a `struct snddev_info` device private structure:
+
[.programlisting]
....
static driver_t xxx_driver = {
    "pcm",
    xxx_methods,
    sizeof(struct snddev_info)
};

DRIVER_MODULE(snd_xxxpci, pci, xxx_driver, pcm_devclass, 0, 0);
MODULE_DEPEND(snd_xxxpci, snd_pcm, PCM_MINVER, PCM_PREFVER, PCM_MAXVER);
....
+
Most sound drivers need to store additional private information about their device. A private data structure is usually allocated in the attach routine. Its address is passed to [.filename]#pcm# by the calls to `pcm_register()` and `mixer_init()`. [.filename]#pcm# later passes back this address as a parameter in calls to the sound driver interfaces.
* The sound driver attach routine should declare its MIXER or AC97 interface to [.filename]#pcm# by calling `mixer_init()`. For a MIXER interface, this causes in turn a call to <<xxxmixer-init,`xxxmixer_init()`>>.
* The sound driver attach routine declares its general CHANNEL configuration to [.filename]#pcm# by calling `pcm_register(dev, sc, nplay, nrec)`, where `sc` is the address of the device data structure, used in further calls from [.filename]#pcm#, and `nplay` and `nrec` are the number of play and record channels (a hedged attach/detach sketch appears at the end of this section).
* The sound driver attach routine declares each of its channel objects by calls to `pcm_addchan()`. This sets up the channel glue in [.filename]#pcm# and causes in turn a call to <<xxxchannel-init,`xxxchannel_init()`>>.
* The sound driver detach routine should call `pcm_unregister()` before releasing its resources.
There are two possible methods to handle non-PnP devices:
* Use a `device_identify()` method (example: [.filename]#sound/isa/es1888.c#). The `device_identify()` method probes for the hardware at known addresses and, if it finds a supported device, creates a new pcm device which is then passed to probe/attach.
* Use a custom kernel configuration with appropriate hints for pcm devices (example: [.filename]#sound/isa/mss.c#).
[.filename]#pcm# drivers should implement `device_suspend`, `device_resume` and `device_shutdown` routines, so that power management and module unloading function correctly.
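Putting these calls together, a hedged attach/detach skeleton might look like the following; the `xxx_` names and kobj class symbols are placeholders, and only `mixer_init()`, `pcm_register()`, `pcm_addchan()`, and `pcm_unregister()` are the calls described above:

[.programlisting]
....
static int
xxx_attach(device_t dev)
{
    struct xxx_info *sc;

    /* The driver's private data; the softc itself is pcm's snddev_info. */
    sc = malloc(sizeof(*sc), M_DEVBUF, M_WAITOK | M_ZERO);

    /* ... map registers, set up interrupts, initialize the hardware ... */

    if (mixer_init(dev, &xxxmixer_class, sc) != 0)
        return (ENXIO);
    if (pcm_register(dev, sc, 1, 1))    /* one play, one record channel */
        return (ENXIO);
    pcm_addchan(dev, PCMDIR_PLAY, &xxxchannel_class, sc);
    pcm_addchan(dev, PCMDIR_REC, &xxxchannel_class, sc);

    return (0);
}

static int
xxx_detach(device_t dev)
{
    int error;

    error = pcm_unregister(dev);
    if (error != 0)
        return (error);
    /* ... release interrupts, DMA memory and other resources ... */
    return (0);
}
....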
[[oss-interfaces]]
== Interfaces
The interface between the [.filename]#pcm# core and the sound drivers is defined in terms of <<kernel-objects,kernel objects>>.
There are two main interfaces that a sound driver will usually provide: _CHANNEL_ and either _MIXER_ or _AC97_.
The _AC97_ interface is a very small hardware access (register read/write) interface, implemented by drivers for hardware with an AC97 codec. In this case, the actual MIXER interface is provided by the shared AC97 code in [.filename]#pcm#.
=== The CHANNEL Interface
==== Common Notes for Function Parameters
Sound drivers usually have a private data structure to describe their device, and one structure for each play and record data channel that it supports.
For all CHANNEL interface functions, the first parameter is an opaque pointer.
The second parameter is a pointer to the private channel data structure, except for `channel_init()` which has a pointer to the private device structure (and returns the channel pointer for further use by [.filename]#pcm#).
==== Overview of Data Transfer Operations
For sound data transfers, the [.filename]#pcm# core and the sound drivers communicate through a shared memory area, described by a `struct snd_dbuf`.
`struct snd_dbuf` is private to [.filename]#pcm#, and sound drivers obtain values of interest by calls to accessor functions (`sndbuf_getxxx()`).
The shared memory area has a size of `sndbuf_getsize()` and is divided into fixed size blocks of `sndbuf_getblksz()` bytes.
When playing, the general transfer mechanism is as follows (reverse the idea for recording):
* [.filename]#pcm# initially fills up the buffer, then calls the sound driver's <<channel-trigger,`xxxchannel_trigger()`>> function with a parameter of PCMTRIG_START.
* The sound driver then arranges to repeatedly transfer the whole memory area (`sndbuf_getbuf()`, `sndbuf_getsize()`) to the device, in blocks of `sndbuf_getblksz()` bytes. It calls back the [.filename]#pcm# function `chn_intr()` for each transferred block (this will typically happen at interrupt time; see the sketch after this list).
* `chn_intr()` arranges to copy new data to the area that was transferred to the device (now free), and make appropriate updates to the `snd_dbuf` structure.
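For example, a driver's interrupt handler typically acknowledges the hardware and then notifies [.filename]#pcm# that another block has completed. This is only a hedged sketch; the `xxx_` structures and the `channel` back-pointer saved in `channel_init()` are placeholders:

[.programlisting]
....
static void
xxx_intr(void *arg)
{
    struct xxx_info *sc = arg;
    struct xxx_chinfo *ch = &sc->play_channel;  /* hypothetical field */

    /* ... read and acknowledge the interrupt status in the hardware ... */

    /* Tell pcm that another block of the buffer has been transferred. */
    chn_intr(ch->channel);
}
....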
[[xxxchannel-init]]
==== channel_init
`xxxchannel_init()` is called to initialize each of the play or record channels. The calls are initiated from the sound driver attach routine. (See the <<pcm-probe-and-attach,probe and attach section>>).
[.programlisting]
....
static void *
xxxchannel_init(kobj_t obj, void *data,
    struct snd_dbuf *b, struct pcm_channel *c, int dir) <.>
{
    struct xxx_info *sc = data;
    struct xxx_chinfo *ch;
    ...
    return ch; <.>
}
....
<.> `b` is the address for the channel `struct snd_dbuf`. It should be initialized in the function by calling `sndbuf_alloc()`. The buffer size to use is normally a small multiple of the 'typical' unit transfer size for your device. `c` is the [.filename]#pcm# channel control structure pointer. This is an opaque object. The function should store it in the local channel structure, to be used in later calls to [.filename]#pcm# (ie: `chn_intr(c)`). `dir` indicates the channel direction (`PCMDIR_PLAY` or `PCMDIR_REC`).
<.> The function should return a pointer to the private area used to control this channel. This will be passed as a parameter to other channel interface calls.
==== channel_setformat
`xxxchannel_setformat()` should set up the hardware for the specified channel for the specified sound format.
[.programlisting]
....
static int
xxxchannel_setformat(kobj_t obj, void *data, u_int32_t format) <.>
{
    struct xxx_chinfo *ch = data;
    ...
    return 0;
}
....
<.> `format` is specified as an `AFMT_XXX` value ([.filename]#soundcard.h#).
==== channel_setspeed
`xxxchannel_setspeed()` sets up the channel hardware for the specified sampling speed, and returns the possibly adjusted speed.
[.programlisting]
....
static int
xxxchannel_setspeed(kobj_t obj, void *data, u_int32_t speed)
{
    struct xxx_chinfo *ch = data;
    ...
    return speed;
}
....
==== channel_setblocksize
`xxxchannel_setblocksize()` sets the block size, which is the size of unit transactions between [.filename]#pcm# and the sound driver, and between the sound driver and the device. Typically, this would be the number of bytes transferred before an interrupt occurs. During a transfer, the sound driver should call [.filename]#pcm#'s `chn_intr()` every time this size has been transferred.
Most sound drivers only take note of the block size here, to be used when an actual transfer will be started.
[.programlisting]
....
static int
xxxchannel_setblocksize(kobj_t obj, void *data, u_int32_t blocksize)
{
    struct xxx_chinfo *ch = data;
    ...
    return blocksize; <.>
}
....
<.> The function returns the possibly adjusted block size. In case the block size is indeed changed, `sndbuf_resize()` should be called to adjust the buffer.
[[channel-trigger]]
==== channel_trigger
`xxxchannel_trigger()` is called by [.filename]#pcm# to control data transfer operations in the driver.
[.programlisting]
....
static int
xxxchannel_trigger(kobj_t obj, void *data, int go) <.>
{
    struct xxx_chinfo *ch = data;
    ...
    return 0;
}
....
<.> `go` defines the action for the current call. The possible values are:
[NOTE]
====
If the driver uses ISA DMA, `sndbuf_isadma()` should be called before performing actions on the device, and will take care of the DMA chip side of things.
====
==== channel_getptr
`xxxchannel_getptr()` returns the current offset in the transfer buffer. This will typically be called by `chn_intr()`, and this is how [.filename]#pcm# knows where it can transfer new data.
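A minimal skeleton in the style of the other channel methods might look as follows; the register read is of course device-specific, and this listing is only a sketch:

[.programlisting]
....
static int
xxxchannel_getptr(kobj_t obj, void *data)
{
    struct xxx_chinfo *ch = data;
    int ptr;

    /* Ask the hardware how far the current transfer has progressed. */
    ptr = ...;  /* device-specific register read, in bytes from the buffer start */

    return ptr;
}
....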
==== channel_free
`xxxchannel_free()` is called to free up channel resources, for example when the driver is unloaded, and should be implemented if the channel data structures are dynamically allocated or if `sndbuf_alloc()` was not used for buffer allocation.
==== channel_getcaps
[.programlisting]
....
struct pcmchan_caps *
xxxchannel_getcaps(kobj_t obj, void *data)
{
    return &xxx_caps; <.>
}
....
<.> The routine returns a pointer to a (usually statically-defined) `pcmchan_caps` structure (defined in [.filename]#sound/pcm/channel.h#). The structure holds the minimum and maximum sampling frequencies, and the accepted sound formats. Look at any sound driver for an example.
==== More Functions
`channel_reset()`, `channel_resetdone()`, and `channel_notify()` are for special purposes and should not be implemented in a driver without discussing it on the {freebsd-multimedia}.
`channel_setdir()` is deprecated.
=== The MIXER Interface
[[xxxmixer-init]]
==== mixer_init
`xxxmixer_init()` initializes the hardware and tells [.filename]#pcm# what mixer devices are available for playing and recording.
[.programlisting]
....
static int
xxxmixer_init(struct snd_mixer *m)
{
    struct xxx_info *sc = mix_getdevinfo(m);
    u_int32_t v;

    [Initialize hardware]

    [Set appropriate bits in v for play mixers] <.>
    mix_setdevs(m, v);

    [Set appropriate bits in v for record mixers]
    mix_setrecdevs(m, v);

    return 0;
}
....
<.> Set bits in an integer value and call `mix_setdevs()` and `mix_setrecdevs()` to tell [.filename]#pcm# what devices exist.
Mixer bits definitions can be found in [.filename]#soundcard.h# (`SOUND_MASK_XXX` values and `SOUND_MIXER_XXX` bit shifts).
==== mixer_set
`xxxmixer_set()` sets the volume level for one mixer device.
[.programlisting]
....
static int
xxxmixer_set(struct snd_mixer *m, unsigned dev,
    unsigned left, unsigned right) <.>
{
    struct sc_info *sc = mix_getdevinfo(m);

    [set volume level]

    return left | (right << 8); <.>
}
....
<.> The device is specified as a `SOUND_MIXER_XXX` value. The volume values are specified in range [0-100]. A value of zero should mute the device.
<.> As the hardware levels probably will not match the input scale, and some rounding will occur, the routine returns the actual level values (in range 0-100) as shown.
==== mixer_setrecsrc
`xxxmixer_setrecsrc()` sets the recording source device.
[.programlisting]
....
static int
xxxmixer_setrecsrc(struct snd_mixer *m, u_int32_t src) <.>
{
    struct xxx_info *sc = mix_getdevinfo(m);

    [look for non zero bit(s) in src, set up hardware]

    [update src to reflect actual action]
    return src; <.>
}
....
<.> The desired recording devices are specified as a bit field.
<.> The actual devices set for recording are returned. Some drivers can only set one device for recording. The function should return -1 if an error occurs.
==== mixer_uninit, mixer_reinit
`xxxmixer_uninit()` should ensure that all sound is muted and if possible mixer hardware should be powered down.
`xxxmixer_reinit()` should ensure that the mixer hardware is powered up and any settings not controlled by `mixer_set()` or `mixer_setrecsrc()` are restored.
=== The AC97 Interface
The _AC97_ interface is implemented by drivers with an AC97 codec. It only has three methods:
* `xxxac97_init()` returns the number of ac97 codecs found.
* `ac97_read()` and `ac97_write()` read or write a specified register.
The _AC97_ interface is used by the AC97 code in [.filename]#pcm# to perform higher level operations. Look at [.filename]#sound/pci/maestro3.c# or many others under [.filename]#sound/pci/# for an example.
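Because the interfaces are kernel objects, an AC97-based driver typically publishes its methods through a method table and then hands the shared AC97 mixer class to [.filename]#pcm#. The following is only a hedged sketch: the `xxx_` names are placeholders, and macro and helper names such as `AC97_DECLARE`, `AC97_CREATE`, and `ac97_getmixerclass()` follow common usage in the in-tree drivers and should be checked against [.filename]#sound/pcm/ac97.h#.

[.programlisting]
....
static kobj_method_t xxx_ac97_methods[] = {
    KOBJMETHOD(ac97_init,   xxx_ac97_init),
    KOBJMETHOD(ac97_read,   xxx_ac97_read),
    KOBJMETHOD(ac97_write,  xxx_ac97_write),
    KOBJMETHOD_END
};
AC97_DECLARE(xxx_ac97);

/* In attach: create the codec and let the shared AC97 code provide the mixer. */
static int
xxx_attach_mixer(device_t dev, struct xxx_info *sc)
{
    struct ac97_info *codec;

    codec = AC97_CREATE(dev, sc, xxx_ac97);
    if (codec == NULL)
        return (ENXIO);
    return (mixer_init(dev, ac97_getmixerclass(), codec));
}
....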
diff --git a/documentation/content/en/books/arch-handbook/sysinit/_index.adoc b/documentation/content/en/books/arch-handbook/sysinit/_index.adoc
index dab2239d7a..401dfc4c71 100644
--- a/documentation/content/en/books/arch-handbook/sysinit/_index.adoc
+++ b/documentation/content/en/books/arch-handbook/sysinit/_index.adoc
@@ -1,146 +1,147 @@
---
title: Chapter 5. The SYSINIT Framework
prev: books/arch-handbook/jail
next: books/arch-handbook/mac
+description: The SYSINIT Framework
---
[[sysinit]]
= The SYSINIT Framework
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 5
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
SYSINIT is the framework for a generic call sort and dispatch mechanism. FreeBSD currently uses it for the dynamic initialization of the kernel. SYSINIT allows FreeBSD's kernel subsystems to be reordered, added, removed, and replaced at kernel link time when the kernel or one of its modules is loaded, without having to edit a statically ordered initialization routine and recompile the kernel. This system also allows kernel modules, currently called _KLD's_, to be separately compiled, linked, and initialized at boot time and loaded even later while the system is already running. This is accomplished using the "kernel linker" and "linker sets".
[[sysinit-term]]
== Terminology
Linker Set::
A linker technique in which the linker gathers statically declared data throughout a program's source files into a single contiguously addressable unit of data.
[[sysinit-operation]]
== SYSINIT Operation
SYSINIT relies on the ability of the linker to take static data declared at multiple locations throughout a program's source and group it together as a single contiguous chunk of data. This linker technique is called a "linker set". SYSINIT uses two linker sets to maintain two data sets containing each consumer's call order, function, and a pointer to the data to pass to that function.
SYSINIT uses two priorities when ordering the functions for execution. The first priority is a subsystem ID giving an overall order for SYSINIT's dispatch of functions. Current predeclared ID's are in [.filename]#<sys/kernel.h># in the enum list `sysinit_sub_id`. The second priority used is an element order within the subsystem. Current predeclared subsystem element orders are in [.filename]#<sys/kernel.h># in the enum list `sysinit_elem_order`.
There are currently two uses for SYSINIT. Function dispatch at system startup and kernel module loads, and function dispatch at system shutdown and kernel module unload. Kernel subsystems often use system startup SYSINIT's to initialize data structures, for example the process scheduling subsystem uses a SYSINIT to initialize the run queue data structure. Device drivers should avoid using `SYSINIT()` directly. Instead drivers for real devices that are part of a bus structure should use `DRIVER_MODULE()` to provide a function that detects the device and, if it is present, initializes the device. It will do a few things specific to devices and then call `SYSINIT()` itself. For pseudo-devices, which are not part of a bus structure, use `DEV_MODULE()`.
[[sysinit-using]]
== Using SYSINIT
=== Interface
==== Headers
[.programlisting]
....
<sys/kernel.h>
....
==== Macros
[.programlisting]
....
SYSINIT(uniquifier, subsystem, order, func, ident)
SYSUNINIT(uniquifier, subsystem, order, func, ident)
....
=== Startup
The `SYSINIT()` macro creates the necessary SYSINIT data in SYSINIT's startup data set for SYSINIT to sort and dispatch a function at system startup and module load. `SYSINIT()` takes a uniquifier that SYSINIT uses to identify the particular function dispatch data, the subsystem order, the subsystem element order, the function to call, and the data to pass the function. All functions must take a constant pointer argument.
.Example of a `SYSINIT()`
[example]
====
[.programlisting]
....
#include <sys/kernel.h>

void foo_null(void *unused)
{
    foo_doo();
}
SYSINIT(foo, SI_SUB_FOO, SI_ORDER_FOO, foo_null, NULL);

struct foo foo_voodoo = {
    FOO_VOODOO
};

void foo_arg(void *vdata)
{
    struct foo *foo = (struct foo *)vdata;
    foo_data(foo);
}
SYSINIT(bar, SI_SUB_FOO, SI_ORDER_FOO, foo_arg, &foo_voodoo);
....
====
Note that `SI_SUB_FOO` and `SI_ORDER_FOO` need to be in the `sysinit_sub_id` and `sysinit_elem_order` enum's as mentioned above. Either use existing ones or add your own to the enum's. You can also use math for fine-tuning the order a SYSINIT will run in. This example shows a SYSINIT that needs to be run just barely before the SYSINIT's that handle tuning kernel parameters.
.Example of Adjusting `SYSINIT()` Order
[example]
====
[.programlisting]
....
static void
mptable_register(void *dummy __unused)
{
    apic_register_enumerator(&mptable_enumerator);
}
SYSINIT(mptable_register, SI_SUB_TUNABLES - 1, SI_ORDER_FIRST,
    mptable_register, NULL);
....
====
=== Shutdown
The `SYSUNINIT()` macro behaves similarly to the `SYSINIT()` macro except that it adds the SYSINIT data to SYSINIT's shutdown data set.
.Example of a `SYSUNINIT()`
[example]
====
[.programlisting]
....
#include <sys/kernel.h>

void foo_cleanup(void *unused)
{
    foo_kill();
}
SYSUNINIT(foobar, SI_SUB_FOO, SI_ORDER_FOO, foo_cleanup, NULL);

struct foo_stack foo_stack = {
    FOO_STACK_VOODOO
};

void foo_flush(void *vdata)
{
}
SYSUNINIT(barfoo, SI_SUB_FOO, SI_ORDER_FOO, foo_flush, &foo_stack);
....
====
diff --git a/documentation/content/en/books/arch-handbook/usb/_index.adoc b/documentation/content/en/books/arch-handbook/usb/_index.adoc
index 18a06c4a0b..f7b3d9c69a 100644
--- a/documentation/content/en/books/arch-handbook/usb/_index.adoc
+++ b/documentation/content/en/books/arch-handbook/usb/_index.adoc
@@ -1,167 +1,168 @@
---
title: Chapter 13. USB Devices
prev: books/arch-handbook/scsi
next: books/arch-handbook/newbus
+description: USB Devices in FreeBSD
---
[[usb]]
= USB Devices
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 13
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[usb-intro]]
== Introduction
The Universal Serial Bus (USB) is a new way of attaching devices to personal computers. The bus architecture features two-way communication and has been developed as a response to devices becoming smarter and requiring more interaction with the host. USB support is included in all current PC chipsets and is therefore available in all recently built PCs. Apple's introduction of the USB-only iMac has been a major incentive for hardware manufacturers to produce USB versions of their devices. The future PC specifications specify that all legacy connectors on PCs should be replaced by one or more USB connectors, providing generic plug and play capabilities. Support for USB hardware was available at a very early stage in NetBSD and was developed by Lennart Augustsson for the NetBSD project. The code has been ported to FreeBSD and we are currently maintaining a shared code base. For the implementation of the USB subsystem a number of features of USB are important.
_Lennart Augustsson has done most of the implementation of the USB support for the NetBSD project. Many thanks for this incredible amount of work. Many thanks also to Ardy and Dirk for their comments and proofreading of this paper._
* Devices connect to ports on the computer directly or on devices called hubs, forming a treelike device structure.
* The devices can be connected and disconnected at run time.
* Devices can suspend themselves and trigger resumes of the host system
* As the devices can be powered from the bus, the host software has to keep track of power budgets for each hub.
* Different quality of service requirements by the different device types, together with the maximum of 126 devices that can be connected to the same bus, require proper scheduling of transfers on the shared bus to take full advantage of the 12Mbps bandwidth available (over 400Mbps with USB 2.0).
* Devices are intelligent and contain easily accessible information about themselves
The development of drivers for the USB subsystem and devices connected to it is supported by the specifications that have been developed and will be developed. These specifications are publicly available from the USB home pages. Apple has been very strong in pushing for standards based drivers, by making drivers for the generic classes available in their operating system MacOS and discouraging the use of separate drivers for each new device. This chapter tries to collate essential information for a basic understanding of the USB 2.0 implementation stack in FreeBSD/NetBSD. It is recommended however to read it together with the relevant 2.0 specifications and other developer resources:
* USB 2.0 Specification (http://www.usb.org/developers/docs/usb20_docs/[http://www.usb.org/developers/docs/usb20_docs/])
* Universal Host Controller Interface (UHCI) Specification (link:ftp://ftp.netbsd.org/pub/NetBSD/misc/blymn/uhci11d.pdf[ftp://ftp.netbsd.org/pub/NetBSD/misc/blymn/uhci11d.pdf])
* Open Host Controller Interface (OHCI) Specification (link:ftp://ftp.compaq.com/pub/supportinformation/papers/hcir1_0a.pdf[ftp://ftp.compaq.com/pub/supportinformation/papers/hcir1_0a.pdf])
* Developer section of USB home page (http://www.usb.org/developers/[http://www.usb.org/developers/])
=== Structure of the USB Stack
The USB support in FreeBSD can be split into three layers. The lowest layer contains the host controller driver, providing a generic interface to the hardware and its scheduling facilities. It supports initialisation of the hardware, scheduling of transfers and handling of completed and/or failed transfers. Each host controller driver implements a virtual hub providing hardware independent access to the registers controlling the root ports on the back of the machine.
The middle layer handles the device connection and disconnection, basic initialisation of the device, driver selection, the communication channels (pipes) and does resource management. This services layer also controls the default pipes and the device requests transferred over them.
The top layer contains the individual drivers supporting specific (classes of) devices. These drivers implement the protocol that is used over the pipes other than the default pipe. They also implement additional functionality to make the device available to other parts of the kernel or userland. They use the USB driver interface (USBDI) exposed by the services layer.
[[usb-hc]]
== Host Controllers
The host controller (HC) controls the transmission of packets on the bus. Frames of 1 millisecond are used. At the start of each frame the host controller generates a Start of Frame (SOF) packet.
The SOF packet is used to synchronise to the start of the frame and to keep track of the frame number. Within each frame, packets are transferred, either from host to device (out) or from device to host (in). Transfers are always initiated by the host (polled transfers). Therefore there can only be one host per USB bus. Each transfer of a packet has a status stage in which the recipient of the data can return either ACK (acknowledge reception), NAK (retry), STALL (error condition) or nothing (garbled data stage, device not available or disconnected). Section 8.5 of the USB 2.0 Specification describes packets in more detail. Four different types of transfers can occur on a USB bus: control, bulk, interrupt and isochronous. The types of transfers and their characteristics are described below.
Large transfers between the device on the USB bus and the device driver are split up into multiple packets by the host controller or the HC driver.
Device requests (control transfers) to the default endpoints are special. They consist of two or three phases: SETUP, DATA (optional) and STATUS. The set-up packet is sent to the device. If there is a data phase, the direction of the data packet(s) is given in the set-up packet. The direction in the status phase is the opposite of the direction during the data phase, or IN if there was no data phase. The host controller hardware also provides registers with the current status of the root ports and the changes that have occurred since the last reset of the status change register. Access to these registers is provided through a virtualised hub as suggested in the USB specification. The virtual hub must comply with the hub device class given in chapter 11 of that specification. It must provide a default pipe through which device requests can be sent to it. It returns the standard and hub class specific set of descriptors. It should also provide an interrupt pipe that reports changes happening at its ports. There are currently two specifications for host controllers available: Universal Host Controller Interface (UHCI) from Intel and Open Host Controller Interface (OHCI) from Compaq, Microsoft, and National Semiconductor. The UHCI specification has been designed to reduce hardware complexity by requiring the host controller driver to supply a complete schedule of the transfers for each frame. OHCI type controllers are much more independent by providing a more abstract interface doing a lot of work themselves.
=== UHCI
The UHCI host controller maintains a framelist with 1024 pointers to per frame data structures. It understands two different data types: transfer descriptors (TD) and queue heads (QH). Each TD represents a packet to be communicated to or from a device endpoint. QHs are a means to group TDs (and QHs) together.
Each transfer consists of one or more packets. The UHCI driver splits large transfers into multiple packets. For every transfer, apart from isochronous transfers, a QH is allocated. For every type of transfer these QHs are collected at a QH for that type. Isochronous transfers have to be executed first because of the fixed latency requirement and are directly referred to by the pointer in the framelist. The last isochronous TD refers to the QH for interrupt transfers for that frame. All QHs for interrupt transfers point at the QH for control transfers, which in turn points at the QH for bulk transfers. The following diagram gives a graphical overview of this:
This results in the following schedule being run in each frame. After fetching the pointer for the current frame from the framelist, the controller first executes the TDs for all the isochronous packets in that frame. The last of these TDs refers to the QH for the interrupt transfers for that frame. The host controller will then descend from that QH to the QHs for the individual interrupt transfers. After finishing that queue, the QH for the interrupt transfers will refer the controller to the QH for all control transfers. It will execute all the subqueues scheduled there, followed by all the transfers queued at the bulk QH. To facilitate the handling of finished or failed transfers, different types of interrupts are generated by the hardware at the end of each frame. In the last TD for a transfer, the Interrupt-On Completion bit is set by the HC driver to flag an interrupt when the transfer has completed. An error interrupt is flagged if a TD reaches its maximum error count. If the short packet detect bit is set in a TD and less than the set packet length is transferred, this interrupt is flagged to notify the controller driver of the completed transfer. It is the host controller driver's task to find out which transfer has completed or produced an error. When called, the interrupt service routine will locate all the finished transfers and call their callbacks.
Refer to the UHCI Specification for a more elaborate description.
=== OHCI
Programming an OHCI host controller is much simpler. The controller assumes that a set of endpoints is available, and is aware of scheduling priorities and the ordering of the types of transfers in a frame. The main data structure used by the host controller is the endpoint descriptor (ED) to which a queue of transfer descriptors (TDs) is attached. The ED contains the maximum packet size allowed for an endpoint and the controller hardware does the splitting into packets. The pointers to the data buffers are updated after each transfer and when the start and end pointer are equal, the TD is retired to the done-queue. The four types of endpoints (interrupt, isochronous, control, and bulk) have their own queues. Control and bulk endpoints are queued each at their own queue. Interrupt EDs are queued in a tree, with the level in the tree defining the frequency at which they run.
The schedule being run by the host controller in each frame looks as follows. The controller will first run the non-periodic control and bulk queues, up to a time limit set by the HC driver. Then the interrupt transfers for that frame number are run, by using the lower five bits of the frame number as an index into level 0 of the tree of interrupts EDs. At the end of this tree the isochronous EDs are connected and these are traversed subsequently. The isochronous TDs contain the frame number of the first frame the transfer should be run in. After all the periodic transfers have been run, the control and bulk queues are traversed again. Periodically the interrupt service routine is called to process the done queue and call the callbacks for each transfer and reschedule interrupt and isochronous endpoints.
See the OHCI Specification for a more elaborate description.

The middle layer provides access to the device in a controlled way and maintains resources in use by the different drivers and the services layer. The layer takes care of the following aspects:
* The device configuration information
* The pipes to communicate with a device
* Probing, attaching and detaching from a device.
[[usb-dev]]
== USB Device Information
=== Device Configuration Information
Each device provides different levels of configuration information. Each device has one or more configurations, of which one is selected during probe/attach. A configuration provides power and bandwidth requirements. Within each configuration there can be multiple interfaces. A device interface is a collection of endpoints. For example, USB speakers can have an interface for the audio data (Audio Class) and an interface for the knobs, dials and buttons (HID Class). All interfaces in a configuration are active at the same time and can be attached to by different drivers. Each interface can have alternates, providing different quality of service parameters. In cameras, for example, this is used to provide different frame sizes and numbers of frames per second.
Within each interface, 0 or more endpoints can be specified. Endpoints are the unidirectional access points for communicating with a device. They provide buffers to temporarily store incoming or outgoing data from the device. Each endpoint has a unique address within a configuration, the endpoint's number plus its direction. The default endpoint, endpoint 0, is not part of any interface and available in all configurations. It is managed by the services layer and not directly available to device drivers.
This hierarchical configuration information is described in the device by a standard set of descriptors (see section 9.6 of the USB specification). They can be requested through the Get Descriptor Request. The services layer caches these descriptors to avoid unnecessary transfers on the USB bus. Access to the descriptors is provided through function calls.
* Device descriptors: General information about the device, like Vendor, Product and Revision Id, supported device class, subclass and protocol if applicable, maximum packet size for the default endpoint, etc.
* Configuration descriptors: The number of interfaces in this configuration, suspend and resume functionality supported and power requirements.
* Interface descriptors: interface class, subclass and protocol if applicable, number of alternate settings for the interface and the number of endpoints.
* Endpoint descriptors: Endpoint address, direction and type, maximum packet size supported and polling frequency if type is interrupt endpoint. There is no descriptor for the default endpoint (endpoint 0) and it is never counted in an interface descriptor.
* String descriptors: In the other descriptors, string indices are supplied for some fields. These can be used to retrieve descriptive strings, possibly in multiple languages.
Class specifications can add their own descriptor types that are available through the Get Descriptor Request.
=== Pipes

Communication to endpoints on a device flows through so-called pipes. Drivers submit transfers to endpoints to a pipe and provide a callback to be called on completion or failure of the transfer (asynchronous transfers) or wait for completion (synchronous transfer). Transfers to an endpoint are serialised in the pipe. A transfer can either complete, fail or time out (if a time-out has been set). There are two types of time-outs for transfers. Time-outs can happen due to a time-out on the USB bus (milliseconds). These time-outs are seen as failures and can be due to disconnection of the device. A second form of time-out is implemented in software and is triggered when a transfer does not complete within a specified amount of time (seconds). These are caused by a device acknowledging negatively (NAK) the transferred packets. The cause for this is the device not being ready to receive data, buffer under- or overrun or protocol errors.
If a transfer over a pipe is larger than the maximum packet size specified in the associated endpoint descriptor, the host controller (OHCI) or the HC driver (UHCI) will split the transfer into packets of maximum packet size, with the last packet possibly smaller than the maximum packet size.
Sometimes it is not a problem for a device to return less data than requested. For example, a bulk-in transfer to a modem might request 200 bytes of data, but the modem has only 5 bytes available at that time. The driver can set the short packet (SPD) flag. It allows the host controller to accept a packet even if the amount of data transferred is less than requested. This flag is only valid for in-transfers, as the amount of data to be sent to a device is always known beforehand. If an unrecoverable error occurs in a device during a transfer, the pipe is stalled. Before any more data is accepted or sent, the driver needs to resolve the cause of the stall and clear the endpoint stall condition by sending the clear endpoint halt device request over the default pipe. The default endpoint should never stall.
There are four different types of endpoints and corresponding pipes:

* Control pipe / default pipe: There is one control pipe per device, connected to the default endpoint (endpoint 0). The pipe carries the device requests and associated data. The difference between transfers over the default pipe and other pipes is that the protocol for the transfers is described in the USB specification. These requests are used to reset and configure the device. A basic set of commands that must be supported by each device is provided in chapter 9 of the USB specification. The commands supported on this pipe can be extended by a device class specification to support additional functionality.
* Bulk pipe: This is the USB equivalent to a raw transmission medium.
* Interrupt pipe: The host sends a request for data to the device and if the device has nothing to send, it will NAK the data packet. Interrupt transfers are scheduled at a frequency specified when creating the pipe.
* Isochronous pipe: These pipes are intended for isochronous data, for example video or audio streams, with fixed latency, but no guaranteed delivery. Some support for pipes of this type is available in the current implementation.

Packets in control, bulk and interrupt transfers are retried if an error occurs during transmission or the device acknowledges the packet negatively (NAK) due to, for example, lack of buffer space to store the incoming data. Isochronous packets are however not retried in case of failed delivery or NAK of a packet, as this might violate the timing constraints.
The availability of the necessary bandwidth is calculated during the creation of the pipe. Transfers are scheduled within frames of 1 millisecond. The bandwidth allocation within a frame is prescribed by the USB specification, section 5.6 [2]. Isochronous and interrupt transfers are allowed to consume up to 90% of the bandwidth within a frame. Packets for control and bulk transfers are scheduled after all isochronous and interrupt packets and will consume all the remaining bandwidth.
More information on scheduling of transfers and bandwidth reclamation can be found in chapter 5 of the USB specification, section 1.3 of the UHCI specification, and section 3.4.2 of the OHCI specification.
[[usb-devprobe]]
== Device Probe and Attach
After the notification by the hub that a new device has been connected, the service layer switches on the port, providing the device with 100 mA of current. At this point the device is in its default state and listening to device address 0. The services layer will proceed to retrieve the various descriptors through the default pipe. After that it will send a Set Address request to move the device away from the default device address (address 0). Multiple device drivers might be able to support the device. For example a modem driver might be able to support an ISDN TA through the AT compatibility interface. A driver for that specific model of the ISDN adapter might however be able to provide much better support for this device. To support this flexibility, the probes return priorities indicating their level of support. Support for a specific revision of a product ranks the highest and the generic driver the lowest priority. It might also be that multiple drivers could attach to one device if there are multiple interfaces within one configuration. Each driver only needs to support a subset of the interfaces.
The probing for a driver for a newly attached device checks first for device-specific drivers. If none is found, the probe code iterates over all supported configurations until a driver attaches in a configuration. To support devices with multiple drivers on different interfaces, the probe iterates over all interfaces in a configuration that have not yet been claimed by a driver. Configurations that exceed the power budget for the hub are ignored. During attach the driver should initialise the device to its proper state, but not reset it, as this would make the device disconnect itself from the bus and restart the probing process for it. To avoid consuming unnecessary bandwidth, the driver should not claim the interrupt pipe at attach time, but should postpone allocating the pipe until the file is opened and the data is actually used. When the file is closed the pipe should be closed again, even though the device might still be attached.
=== Device Disconnect and Detach
A device driver should expect to receive errors during any transaction with the device. The design of USB supports and encourages the disconnection of devices at any point in time. Drivers should make sure that they do the right thing when the device disappears.
Furthermore a device that has been disconnected and reconnected will not be reattached at the same device instance. This might change in the future when more devices support serial numbers (see the device descriptor) or other means of defining an identity for a device have been developed.
The disconnection of a device is signaled by a hub in the interrupt packet delivered to the hub driver. The status change information indicates which port has seen a connection change. The detach methods of all device drivers for the device connected to that port are called and the structures are cleaned up. If the port status indicates that in the meantime a device has been connected to that port, the procedure for probing and attaching the device will be started. A device reset will produce a disconnect-connect sequence on the hub and will be handled as described above.
[[usb-protocol]]
== USB Drivers Protocol Information
The protocol used over pipes other than the default pipe is left undefined by the USB specification. Information on this can be found from various sources. The most accurate source is the developer's section on the USB home pages. From these pages, a growing number of device class specifications are available. These specifications specify what a compliant device should look like from a driver perspective, the basic functionality it needs to provide, and the protocol that is to be used over the communication channels. The USB specification includes the description of the Hub Class. A class specification for Human Interface Devices (HID) has been created to cater for keyboards, tablets, bar-code readers, buttons, knobs, switches, etc. A third example is the class specification for mass storage devices. For a full list of device classes see the developers section on the USB home pages.
For many devices, however, the protocol information has not yet been published. Information on the protocol being used might be available from the company making the device. Some companies will require you to sign a Non-Disclosure Agreement (NDA) before giving you the specifications. This in most cases precludes making the driver open source.
Another good source of information is the Linux driver sources, as a number of companies have started to provide drivers for Linux for their devices. It is always a good idea to contact the authors of those drivers for their source of information.
=== Example: Human Interface Devices
The specification for Human Interface Devices like keyboards, mice, tablets, buttons, dials, etc. is referred to in other device class specifications and is used in many devices.
For example audio speakers provide endpoints to the digital to analogue converters and possibly an extra pipe for a microphone. They also provide a HID endpoint in a separate interface for the buttons and dials on the front of the device. The same is true for the monitor control class. It is straightforward to build support for these interfaces through the available kernel and userland libraries together with the HID class driver or the generic driver. Another device that serves as an example for interfaces within one configuration driven by different device drivers is a cheap keyboard with built-in legacy mouse port. To avoid having the cost of including the hardware for a USB hub in the device, manufacturers combined the mouse data received from the PS/2 port on the back of the keyboard and the key presses from the keyboard into two separate interfaces in the same configuration. The mouse and keyboard drivers each attach to the appropriate interface and allocate the pipes to the two independent endpoints.
=== Example: Firmware download
Many devices that have been developed are based on a general purpose processor with an additional USB core added to it. Since the development of drivers and firmware for USB devices is still very new, many devices require the downloading of the firmware after they have been connected.
The procedure followed is straightforward. The device identifies itself through a vendor and product Id. The first driver probes and attaches to it and downloads the firmware into it. After that the device soft resets itself and the driver is detached. After a short pause the device announces its presence on the bus. The device will have changed its vendor/product/revision Id to reflect the fact that it has been supplied with firmware and as a consequence a second driver will probe it and attach to it.
An example of these types of devices is the ActiveWire I/O board, based on the EZ-USB chip. For this chip a generic firmware downloader is available. The firmware downloaded into the ActiveWire board changes the revision Id. It will then perform a soft reset of the USB part of the EZ-USB chip to disconnect from the USB bus and again reconnect.
=== Example: Mass Storage Devices
Support for mass storage devices is mainly built around existing protocols. The Iomega USB Zipdrive is based on the SCSI version of their drive. The SCSI commands and status messages are wrapped in blocks and transferred over the bulk pipes to and from the device, emulating a SCSI controller over the USB wire. ATAPI and UFI commands are supported in a similar fashion.
The Mass Storage Specification supports two different ways of wrapping the command block. The initial attempt was based on sending the command and status through the default pipe and using bulk transfers for the data to be moved between the host and the device. Based on experience, a second approach was designed that wraps the command and status blocks and sends them over the bulk-out and bulk-in endpoints. The specification specifies exactly what has to happen when, and what has to be done when an error condition is encountered. The biggest challenge when writing drivers for these devices is to fit the USB-based protocol into the existing support for mass storage devices. CAM provides hooks to do this in a fairly straightforward way. ATAPI is less simple, as historically the IDE interface has never had many different appearances.
The support for the USB floppy from Y-E Data is again less straightforward as a new command set has been designed.
diff --git a/documentation/content/en/books/arch-handbook/vm/_index.adoc b/documentation/content/en/books/arch-handbook/vm/_index.adoc
index 12b9bf56e6..5593eb4322 100644
--- a/documentation/content/en/books/arch-handbook/vm/_index.adoc
+++ b/documentation/content/en/books/arch-handbook/vm/_index.adoc
@@ -1,108 +1,109 @@
---
title: Chapter 7. Virtual Memory System
prev: books/arch-handbook/mac
next: books/arch-handbook/smp
+description: Virtual Memory System in FreeBSD
---
[[vm]]
= Virtual Memory System
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 7
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[vm-physmem]]
== Management of Physical Memory `vm_page_t`
Physical memory is managed on a page-by-page basis through the `vm_page_t` structure. Pages of physical memory are categorized through the placement of their respective `vm_page_t` structures on one of several paging queues.
A page can be in a wired, active, inactive, cache, or free state. Except for the wired state, the page is typically placed in a doubly linked list queue representing the state that it is in. Wired pages are not placed on any queue.
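The states just described can be pictured as a small enumeration. The following is only an illustrative model, not the actual definitions in the VM headers, where this information lives in `vm_page_t` flags and queue indices.
[.programlisting]
....
/*
 * Illustrative model of the page states described above; not the
 * real kernel definitions.
 */
enum page_state {
	PAGE_WIRED,	/* in use by the kernel, on no queue */
	PAGE_ACTIVE,	/* recently referenced */
	PAGE_INACTIVE,	/* aging, candidate for laundering */
	PAGE_CACHE,	/* clean, still bound to an object, reusable */
	PAGE_FREE	/* truly free */
};
....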
FreeBSD implements a more involved paging queue for cached and free pages in order to implement page coloring. Each of these states involves multiple queues arranged according to the size of the processor's L1 and L2 caches. When a new page needs to be allocated, FreeBSD attempts to obtain one that is reasonably well aligned from the point of view of the L1 and L2 caches relative to the VM object the page is being allocated for.
Additionally, a page may be held with a reference count or locked with a busy count. The VM system also implements an "ultimate locked" state for a page using the PG_BUSY bit in the page's flags.
In general terms, each of the paging queues operates in a LRU fashion. A page is typically placed in a wired or active state initially. When wired, the page is usually associated with a page table somewhere. The VM system ages the page by scanning pages in a more active paging queue (LRU) in order to move them to a less-active paging queue. Pages that get moved into the cache are still associated with a VM object but are candidates for immediate reuse. Pages in the free queue are truly free. FreeBSD attempts to minimize the number of pages in the free queue, but a certain minimum number of truly free pages must be maintained in order to accommodate page allocation at interrupt time.
If a process attempts to access a page that does not exist in its page table but does exist in one of the paging queues (such as the inactive or cache queues), a relatively inexpensive page reactivation fault occurs which causes the page to be reactivated. If the page does not exist in system memory at all, the process must block while the page is brought in from disk.
FreeBSD dynamically tunes its paging queues and attempts to maintain reasonable ratios of pages in the various queues as well as attempts to maintain a reasonable breakdown of clean versus dirty pages. The amount of rebalancing that occurs depends on the system's memory load. This rebalancing is implemented by the pageout daemon and involves laundering dirty pages (syncing them with their backing store), noticing when pages are actively referenced (resetting their position in the LRU queues or moving them between queues), migrating pages between queues when the queues are out of balance, and so forth. FreeBSD's VM system is willing to take a reasonable number of reactivation page faults to determine how active or how idle a page actually is. This leads to better decisions being made as to when to launder or swap-out a page.
[[vm-cache]]
== The Unified Buffer Cache `vm_object_t`
FreeBSD implements the idea of a generic "VM object". VM objects can be associated with backing store of various types: unbacked, swap-backed, physical device-backed, or file-backed storage. Since the filesystem uses the same VM objects to manage in-core data relating to files, the result is a unified buffer cache.
VM objects can be _shadowed_. That is, they can be stacked on top of each other. For example, you might have a swap-backed VM object stacked on top of a file-backed VM object in order to implement a MAP_PRIVATE mmap()ing. This stacking is also used to implement various sharing properties, including copy-on-write, for forked address spaces.
It should be noted that a `vm_page_t` can only be associated with one VM object at a time. The VM object shadowing implements the perceived sharing of the same page across multiple instances.
[[vm-fileio]]
== Filesystem I/O `struct buf`
vnode-backed VM objects, such as file-backed objects, generally need to maintain their own clean/dirty info independent from the VM system's idea of clean/dirty. For example, when the VM system decides to synchronize a physical page to its backing store, the VM system needs to mark the page clean before the page is actually written to its backing store. Additionally, filesystems need to be able to map portions of a file or file metadata into KVM in order to operate on it.
The entities used to manage this are known as filesystem buffers, ``struct buf``'s, or ``bp``'s. When a filesystem needs to operate on a portion of a VM object, it typically maps part of the object into a struct buf and then maps the pages in the struct buf into KVM. In the same manner, disk I/O is typically issued by mapping portions of objects into buffer structures and then issuing the I/O on the buffer structures. The underlying vm_page_t's are typically busied for the duration of the I/O. Filesystem buffers also have their own notion of being busy, which is useful to filesystem driver code which would rather operate on filesystem buffers instead of hard VM pages.
FreeBSD reserves a limited amount of KVM to hold mappings from struct bufs, but it should be made clear that this KVM is used solely to hold mappings and does not limit the ability to cache data. Physical data caching is strictly a function of ``vm_page_t``'s, not filesystem buffers. However, since filesystem buffers are used to placehold I/O, they do inherently limit the amount of concurrent I/O possible. As there are usually a few thousand filesystem buffers available, this is not usually a problem.
[[vm-pagetables]]
== Mapping Page Tables `vm_map_t, vm_entry_t`
FreeBSD separates the physical page table topology from the VM system. All hard per-process page tables can be reconstructed on the fly and are usually considered throwaway. Special page tables such as those managing KVM are typically permanently preallocated. These page tables are not throwaway.
FreeBSD associates portions of vm_objects with address ranges in virtual memory through `vm_map_t` and `vm_entry_t` structures. Page tables are directly synthesized from the `vm_map_t`/`vm_entry_t`/ `vm_object_t` hierarchy. Recall that I mentioned that physical pages are only directly associated with a `vm_object`; that is not quite true. ``vm_page_t``'s are also linked into page tables that they are actively associated with. One `vm_page_t` can be linked into several _pmaps_, as page tables are called. However, the hierarchical association holds, so all references to the same page in the same object reference the same `vm_page_t` and thus give us buffer cache unification across the board.
[[vm-kvm]]
== KVM Memory Mapping
FreeBSD uses KVM to hold various kernel structures. The single largest entity held in KVM is the filesystem buffer cache. That is, mappings relating to `struct buf` entities.
Unlike Linux, FreeBSD does _not_ map all of physical memory into KVM. This means that FreeBSD can handle memory configurations up to 4 GB on 32-bit platforms. In fact, if the MMU were capable of it, FreeBSD could theoretically handle memory configurations up to 8 TB on a 32-bit platform. However, since most 32-bit platforms are only capable of mapping 4 GB of RAM, this is a moot point.
KVM is managed through several mechanisms. The main mechanism used to manage KVM is the _zone allocator_. The zone allocator takes a chunk of KVM and splits it up into constant-sized blocks of memory in order to allocate a specific type of structure. You can use `vmstat -m` to get an overview of current KVM utilization broken down by zone.
[[vm-tuning]]
== Tuning the FreeBSD VM System
A concerted effort has been made to make the FreeBSD kernel dynamically tune itself. Typically you do not need to mess with anything beyond the `maxusers` and `NMBCLUSTERS` kernel config options. That is, kernel compilation options specified in (typically) [.filename]#/usr/src/sys/i386/conf/CONFIG_FILE#. A description of all available kernel configuration options can be found in [.filename]#/usr/src/sys/i386/conf/LINT#.
In a large system configuration you may wish to increase `maxusers`. Values typically range from 10 to 128. Note that raising `maxusers` too high can cause the system to overflow available KVM resulting in unpredictable operation. It is better to leave `maxusers` at some reasonable number and add other options, such as `NMBCLUSTERS`, to increase specific resources.
If your system is going to use the network heavily, you may want to increase `NMBCLUSTERS`. Typical values range from 1024 to 4096.
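For example, a network-heavy machine might carry entries like the following in its kernel configuration file; the values shown are illustrative, not recommendations.
[.programlisting]
....
maxusers        64
options         NMBCLUSTERS=4096
....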
The `NBUF` parameter is also traditionally used to scale the system. This parameter determines the amount of KVA the system can use to map filesystem buffers for I/O. Note that this parameter has nothing whatsoever to do with the unified buffer cache! This parameter is dynamically tuned in 3.0-CURRENT and later kernels and should generally not be adjusted manually. We recommend that you _not_ try to specify an `NBUF` parameter. Let the system pick it. Too small a value can result in extremely inefficient filesystem operation while too large a value can starve the page queues by causing too many pages to become wired down.
By default, FreeBSD kernels are not optimized. You can set debugging and optimization flags with the `makeoptions` directive in the kernel configuration. Note that you should not use `-g` unless you can accommodate the large (typically 7 MB+) kernels that result.
[.programlisting]
....
makeoptions DEBUG="-g"
makeoptions COPTFLAGS="-O -pipe"
....
Sysctl provides a way to tune kernel parameters at run-time. You typically do not need to mess with any of the sysctl variables, especially the VM related ones.
Run time VM and system tuning is relatively straightforward. First, use Soft Updates on your UFS/FFS filesystems whenever possible. [.filename]#/usr/src/sys/ufs/ffs/README.softupdates# contains instructions (and restrictions) on how to configure it.
Second, configure sufficient swap. You should have a swap partition configured on each physical disk, up to four, even on your "work" disks. You should have at least 2x the swap space as you have main memory, and possibly even more if you do not have a lot of memory. You should also size your swap partition based on the maximum memory configuration you ever intend to put on the machine so you do not have to repartition your disks later on. If you want to be able to accommodate a crash dump, your first swap partition must be at least as large as main memory and [.filename]#/var/crash# must have sufficient free space to hold the dump.
NFS-based swap is perfectly acceptable on 4.X or later systems, but you must be aware that the NFS server will take the brunt of the paging load.
diff --git a/documentation/content/en/books/design-44bsd/_index.adoc b/documentation/content/en/books/design-44bsd/_index.adoc
index 1c730ed3b2..bab937e2fc 100644
--- a/documentation/content/en/books/design-44bsd/_index.adoc
+++ b/documentation/content/en/books/design-44bsd/_index.adoc
@@ -1,507 +1,507 @@
---
title: The Design and Implementation of the 4.4BSD Operating System
authors:
- author: Marshall Kirk McKusick
- author: Keith Bostic
- author: Michael J. Karels
- author: John S. Quarterman
copyright: 1996 Addison-Wesley Longman, Inc
-releaseinfo: "$FreeBSD$"
+description: The Design and Implementation of the 4.4BSD Operating System. Second chapter.
trademarks: ["design-44bsd"]
---
= The Design and Implementation of the 4.4BSD Operating System
:doctype: book
:toc: macro
:toclevels: 2
:icons: font
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnums:
:sectnumlevels: 6
:partnums:
:chapter-signifier: Chapter
:part-signifier: Part
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:sectnumoffset: 2
ifeval::["{backend}" == "html5"]
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
:imagesdir: ../../images/books/design-44bsd/
endif::[]
ifeval::["{backend}" == "pdf"]
include::../../../../shared/mirrors.adoc[]
include::../../../../shared/authors.adoc[]
include::../../../../shared/releases.adoc[]
include::../../../../shared/en/mailing-lists.adoc[]
include::../../../../shared/en/teams.adoc[]
include::../../../../shared/en/urls.adoc[]
:imagesdir: ../../../static/images/books/design-44bsd/
endif::[]
ifeval::["{backend}" == "epub3"]
include::../../../../shared/mirrors.adoc[]
include::../../../../shared/authors.adoc[]
include::../../../../shared/releases.adoc[]
include::../../../../shared/en/mailing-lists.adoc[]
include::../../../../shared/en/teams.adoc[]
include::../../../../shared/en/urls.adoc[]
:imagesdir: ../../../static/images/books/design-44bsd/
endif::[]
'''
toc::[]
[[overview]]
== Design Overview of 4.4BSD
[[overview-facilities]]
=== 4.4BSD Facilities and the Kernel
The 4.4BSD kernel provides four basic facilities: processes, a filesystem, communications, and system startup. This section outlines where each of these four basic services is described in this book.
. Processes constitute a thread of control in an address space. Mechanisms for creating, terminating, and otherwise controlling processes are described in Chapter 4. The system multiplexes separate virtual-address spaces for each process; this memory management is discussed in Chapter 5.
. The user interface to the filesystem and devices is similar; common aspects are discussed in Chapter 6. The filesystem is a set of named files, organized in a tree-structured hierarchy of directories, and of operations to manipulate them, as presented in Chapter 7. Files reside on physical media such as disks. 4.4BSD supports several organizations of data on the disk, as set forth in Chapter 8. Access to files on remote machines is the subject of Chapter 9. Terminals are used to access the system; their operation is the subject of Chapter 10.
. Communication mechanisms provided by traditional UNIX systems include simplex reliable byte streams between related processes (see pipes, Section 11.1), and notification of exceptional events (see signals, Section 4.7). 4.4BSD also has a general interprocess-communication facility. This facility, described in Chapter 11, uses access mechanisms distinct from those of the filesystem, but, once a connection is set up, a process can access it as though it were a pipe. There is a general networking framework, discussed in Chapter 12, that is normally used as a layer underlying the IPC facility. Chapter 13 describes a particular networking implementation in detail.
. Any real operating system has operational issues, such as how to start it running. Startup and operational issues are described in Chapter 14.
Sections 2.3 through 2.14 present introductory material related to Chapters 3 through 14. We shall define terms, mention basic system calls, and explore historical developments. Finally, we shall give the reasons for many major design decisions.
==== The Kernel
The _kernel_ is the part of the system that runs in protected mode and mediates access by all user programs to the underlying hardware (e.g., CPU, disks, terminals, network links) and software constructs (e.g., filesystem, network protocols). The kernel provides the basic system facilities; it creates and manages processes, and provides functions to access the filesystem and communication facilities. These functions, called _system calls_, appear to user processes as library subroutines. These system calls are the only interface that processes have to these facilities. Details of the system-call mechanism are given in Chapter 3, as are descriptions of several kernel mechanisms that do not execute as the direct result of a process doing a system call.
A _kernel_, in traditional operating-system terminology, is a small nucleus of software that provides only the minimal facilities necessary for implementing additional operating-system services. In contemporary research operating systems -- such as Chorus <<biblio-rozier, [Rozier et al, 1988]>>, Mach <<biblio-accetta, [Accetta et al, 1986]>>, Tunis <<biblio-ewens, [Ewens et al, 1985]>>, and the V Kernel <<biblio-cheriton, [Cheriton, 1988]>> -- this division of functionality is more than just a logical one. Services such as filesystems and networking protocols are implemented as client application processes of the nucleus or kernel.
The 4.4BSD kernel is not partitioned into multiple processes. This basic design decision was made in the earliest versions of UNIX. The first two implementations by Ken Thompson had no memory mapping, and thus made no hardware-enforced distinction between user and kernel space <<biblio-ritchie, [Ritchie, 1988]>>. A message-passing system could have been implemented as readily as the actually implemented model of kernel and user processes. The monolithic kernel was chosen for simplicity and performance. And the early kernels were small; the inclusion of facilities such as networking into the kernel has increased its size. The current trend in operating-systems research is to reduce the kernel size by placing such services in user space.
Users ordinarily interact with the system through a command-language interpreter, called a _shell_, and perhaps through additional user application programs. Such programs and the shell are implemented with processes. Details of such programs are beyond the scope of this book, which instead concentrates almost exclusively on the kernel.
Sections 2.3 and 2.4 describe the services provided by the 4.4BSD kernel, and give an overview of the latter's design. Later chapters describe the detailed design and implementation of these services as they appear in 4.4BSD.
[[overview-kernel-organization]]
=== Kernel Organization
In this section, we view the organization of the 4.4BSD kernel in two ways:
[arabic]
. As a static body of software, categorized by the functionality offered by the modules that make up the kernel
. By its dynamic operation, categorized according to the services provided to users
The largest part of the kernel implements the system services that applications access through system calls. In 4.4BSD, this software has been organized according to the following:
* Basic kernel facilities: timer and system-clock handling, descriptor management, and process management
* Memory-management support: paging and swapping
* Generic system interfaces: the I/O, control, and multiplexing operations performed on descriptors
* The filesystem: files, directories, pathname translation, file locking, and I/O buffer management
* Terminal-handling support: the terminal-interface driver and terminal line disciplines
* Interprocess-communication facilities: sockets
* Support for network communication: communication protocols and generic network facilities, such as routing
.Machine-independent software in the 4.4BSD kernel
[[table-mach-indep]]
[cols=",,",options="header",]
|===
|Category |Lines of code |Percentage of kernel
|headers |9,393 |4.6
|initialization |1,107 |0.6
|kernel facilities |8,793 |4.4
|generic interfaces |4,782 |2.4
|interprocess communication |4,540 |2.2
|terminal handling |3,911 |1.9
|virtual memory |11,813 |5.8
|vnode management |7,954 |3.9
|filesystem naming |6,550 |3.2
|fast filestore |4,365 |2.2
|log-structure filestore |4,337 |2.1
|memory-based filestore |645 |0.3
|cd9660 filesystem |4,177 |2.1
|miscellaneous filesystems (10) |12,695 |6.3
|network filesystem |17,199 |8.5
|network communication |8,630 |4.3
|internet protocols |11,984 |5.9
|ISO protocols |23,924 |11.8
|X.25 protocols |10,626 |5.3
|XNS protocols |5,192 |2.6
|===
Most of the software in these categories is machine independent and is portable across different hardware architectures.
The machine-dependent aspects of the kernel are isolated from the mainstream code. In particular, none of the machine-independent code contains conditional code for specific architectures. When an architecture-dependent action is needed, the machine-independent code calls an architecture-dependent function that is located in the machine-dependent code. The software that is machine dependent includes
* Low-level system-startup actions
* Trap and fault handling
* Low-level manipulation of the run-time context of a process
* Configuration and initialization of hardware devices
* Run-time support for I/O devices
.Machine-dependent software for the HP300 in the 4.4BSD kernel
[[table-mach-dep]]
[cols=",,",options="header",]
|===
|Category |Lines of code |Percentage of kernel
|machine dependent headers |1,562 |0.8
|device driver headers |3,495 |1.7
|device driver source |17,506 |8.7
|virtual memory |3,087 |1.5
|other machine dependent |6,287 |3.1
|routines in assembly language |3,014 |1.5
|HP/UX compatibility |4,683 |2.3
|===
<<table-mach-indep>> summarizes the machine-independent software that constitutes the 4.4BSD kernel for the HP300. The numbers in column 2 are for lines of C source code, header files, and assembly language. Virtually all the software in the kernel is written in the C programming language; less than 2 percent is written in assembly language. As the statistics in <<table-mach-dep>> show, the machine-dependent software, excluding HP/UX and device support, accounts for a minuscule 6.9 percent of the kernel.
Only a small part of the kernel is devoted to initializing the system. This code is used when the system is _bootstrapped_ into operation and is responsible for setting up the kernel hardware and software environment (see Chapter 14). Some operating systems (especially those with limited physical memory) discard or _overlay_ the software that performs these functions after that software has been executed. The 4.4BSD kernel does not reclaim the memory used by the startup code because that memory space is barely 0.5 percent of the kernel resources used on a typical machine. Also, the startup code does not appear in one place in the kernel -- it is scattered throughout, and it usually appears in places logically associated with what is being initialized.
[[overview-kernel-service]]
=== Kernel Services
The boundary between the kernel- and user-level code is enforced by hardware-protection facilities provided by the underlying hardware. The kernel operates in a separate address space that is inaccessible to user processes. Privileged operations -- such as starting I/O and halting the central processing unit (CPU) -- are available to only the kernel. Applications request services from the kernel with _system calls_. System calls are used to cause the kernel to execute complicated operations, such as writing data to secondary storage, and simple operations, such as returning the current time of day. All system calls appear _synchronous_ to applications: The application does not run while the kernel does the actions associated with a system call. The kernel may finish some operations associated with a system call after it has returned. For example, a _write_ system call will copy the data to be written from the user process to a kernel buffer while the process waits, but will usually return from the system call before the kernel buffer is written to the disk.
A system call usually is implemented as a hardware trap that changes the CPU's execution mode and the current address-space mapping. Parameters supplied by users in system calls are validated by the kernel before being used. Such checking ensures the integrity of the system. All parameters passed into the kernel are copied into the kernel's address space, to ensure that validated parameters are not changed as a side effect of the system call. System-call results are returned by the kernel, either in hardware registers or by their values being copied to user-specified memory addresses. Like parameters passed into the kernel, addresses used for the return of results must be validated to ensure that they are part of an application's address space. If the kernel encounters an error while processing a system call, it returns an error code to the user. For the C programming language, this error code is stored in the global variable _errno_, and the function that executed the system call returns the value -1.
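A minimal userland fragment shows the convention just described: a failed system call returns -1 and leaves the reason in _errno_. The path used here is hypothetical and only serves to force a failure.
[.programlisting]
....
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
	/* A path assumed not to exist, to make open(2) fail. */
	int fd = open("/nonexistent/file", O_RDONLY);

	if (fd == -1)
		printf("open failed: errno %d (%s)\n", errno, strerror(errno));
	return (0);
}
....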
User applications and the kernel operate independently of each other. 4.4BSD does not store I/O control blocks or other operating-system-related data structures in the application's address space. Each user-level application is provided an independent address space in which it executes. The kernel makes most state changes, such as suspending a process while another is running, invisible to the processes involved.
[[overview-process-management]]
=== Process Management
4.4BSD supports a multitasking environment. Each task or thread of execution is termed a _process_. The _context_ of a 4.4BSD process consists of user-level state, including the contents of its address space and the run-time environment, and kernel-level state, which includes scheduling parameters, resource controls, and identification information. The context includes everything used by the kernel in providing services for the process. Users can create processes, control the processes' execution, and receive notification when the processes' execution status changes. Every process is assigned a unique value, termed a _process identifier_ (PID). This value is used by the kernel to identify a process when reporting status changes to a user, and by a user when referencing a process in a system call.
The kernel creates a process by duplicating the context of another process. The new process is termed a _child process_ of the original _parent process_. The context duplicated in process creation includes both the user-level execution state of the process and the process's system state managed by the kernel. Important components of the kernel state are described in Chapter 4.
[[fig-process-lifecycle]]
.Process lifecycle
image:fig1.png[Process lifecycle]
The process lifecycle is depicted in <<fig-process-lifecycle>>. A process may create a new process that is a copy of the original by using the _fork_ system call. The _fork_ call returns twice: once in the parent process, where the return value is the process identifier of the child, and once in the child process, where the return value is 0. The parent-child relationship induces a hierarchical structure on the set of processes in the system. The new process shares all its parent's resources, such as file descriptors, signal-handling status, and memory layout.
Although there are occasions when the new process is intended to be a copy of the parent, the loading and execution of a different program is a more useful and typical action. A process can overlay itself with the memory image of another program, passing to the newly created image a set of parameters, using the system call _execve_. One parameter is the name of a file whose contents are in a format recognized by the system -- either a binary-executable file or a file that causes the execution of a specified interpreter program to process its contents.
A process may terminate by executing an _exit_ system call, sending 8 bits of exit status to its parent. If a process wants to communicate more than a single byte of information with its parent, it must either set up an interprocess-communication channel using pipes or sockets, or use an intermediate file. Interprocess communication is discussed extensively in Chapter 11.
A process can suspend execution until any of its child processes terminate using the _wait_ system call, which returns the PID and exit status of the terminated child process. A parent process can arrange to be notified by a signal when a child process exits or terminates abnormally. Using the _wait4_ system call, the parent can retrieve information about the event that caused termination of the child process and about resources consumed by the process during its lifetime. If a process is orphaned because its parent exits before it is finished, then the kernel arranges for the child's exit status to be passed back to a special system process, _init_ (see Sections 3.1 and 14.6).
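The fork, exit, and wait cycle described in the preceding paragraphs takes only a few lines of C. This sketch uses plain _wait_ rather than _wait4_ for brevity.
[.programlisting]
....
#include <sys/types.h>
#include <sys/wait.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	pid_t pid, done;
	int status;

	pid = fork();		/* returns twice: child PID in parent, 0 in child */
	if (pid == 0)
		_exit(7);	/* child: pass 8 bits of status to the parent */

	done = wait(&status);	/* parent: block until the child terminates */
	if (done != -1 && WIFEXITED(status))
		printf("child %ld exited with status %d\n",
		    (long)done, WEXITSTATUS(status));
	return (0);
}
....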
The details of how the kernel creates and destroys processes are given in Chapter 5.
Processes are scheduled for execution according to a _process-priority_ parameter. This priority is managed by a kernel-based scheduling algorithm. Users can influence the scheduling of a process by specifying a parameter (_nice_) that weights the overall scheduling priority, but are still obligated to share the underlying CPU resources according to the kernel's scheduling policy.
==== Signals
The system defines a set of _signals_ that may be delivered to a process. Signals in 4.4BSD are modeled after hardware interrupts. A process may specify a user-level subroutine to be a _handler_ to which a signal should be delivered. When a signal is generated, it is blocked from further occurrence while it is being _caught_ by the handler. Catching a signal involves saving the current process context and building a new one in which to run the handler. The signal is then delivered to the handler, which can either abort the process or return to the executing process (perhaps after setting a global variable). If the handler returns, the signal is unblocked and can be generated (and caught) again.
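A typical userland arrangement, as described above, installs a handler that merely records the event in a global variable. The following is a minimal sketch using _sigaction_.
[.programlisting]
....
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t got_sigint;

/* The handler only sets a global variable, as suggested in the text. */
static void
catch_sigint(int signo)
{
	got_sigint = 1;
}

int
main(void)
{
	struct sigaction sa;

	sa.sa_handler = catch_sigint;
	sigemptyset(&sa.sa_mask);	/* SIGINT itself is blocked while being caught */
	sa.sa_flags = 0;
	sigaction(SIGINT, &sa, NULL);

	while (!got_sigint)
		pause();		/* sleep until a signal arrives */
	printf("caught SIGINT\n");
	return (0);
}
....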
Alternatively, a process may specify that a signal is to be _ignored_, or that a default action, as determined by the kernel, is to be taken. The default action of certain signals is to terminate the process. This termination may be accompanied by creation of a _core file_ that contains the current memory image of the process for use in postmortem debugging.
Some signals cannot be caught or ignored. These signals include _SIGKILL_, which kills runaway processes, and the job-control signal _SIGSTOP_.
A process may choose to have signals delivered on a special stack so that sophisticated software stack manipulations are possible. For example, a language supporting coroutines needs to provide a stack for each coroutine. The language run-time system can allocate these stacks by dividing up the single stack provided by 4.4BSD. If the kernel does not support a separate signal stack, the space allocated for each coroutine must be expanded by the amount of space required to catch a signal.
All signals have the same _priority_. If multiple signals are pending simultaneously, the order in which signals are delivered to a process is implementation specific. Signal handlers execute with the signal that caused their invocation to be blocked, but other signals may yet occur. Mechanisms are provided so that processes can protect critical sections of code against the occurrence of specified signals.
The detailed design and implementation of signals is described in Section 4.7.
==== Process Groups and Sessions
Processes are organized into _process groups_. Process groups are used to control access to terminals and to provide a means of distributing signals to collections of related processes. A process inherits its process group from its parent process. Mechanisms are provided by the kernel to allow a process to alter its process group or the process group of its descendents. Creating a new process group is easy; the value of a new process group is ordinarily the process identifier of the creating process.
The group of processes in a process group is sometimes referred to as a _job_ and is manipulated by high-level system software, such as the shell. A common kind of job created by a shell is a _pipeline_ of several processes connected by pipes, such that the output of the first process is the input of the second, the output of the second is the input of the third, and so forth. The shell creates such a job by forking a process for each stage of the pipeline, then putting all those processes into a separate process group.
A user process can send a signal to each process in a process group, as well as to a single process. A process in a specific process group may receive software interrupts affecting the group, causing the group to suspend or resume execution, or to be interrupted or terminated.
A terminal has a process-group identifier assigned to it. This identifier is normally set to the identifier of a process group associated with the terminal. A job-control shell may create a number of process groups associated with the same terminal; the terminal is the _controlling terminal_ for each process in these groups. A process may read from a descriptor for its controlling terminal only if the terminal's process-group identifier matches that of the process. If the identifiers do not match, the process will be blocked if it attempts to read from the terminal. By changing the process-group identifier of the terminal, a shell can arbitrate a terminal among several different jobs. This arbitration is called _job control_ and is described, with process groups, in Section 4.8.
Just as a set of related processes can be collected into a process group, a set of process groups can be collected into a _session_. The main uses for sessions are to create an isolated environment for a daemon process and its children, and to collect together a user's login shell and the jobs that that shell spawns.
[[overview-memory-management]]
=== Memory Management
Each process has its own private address space. The address space is initially divided into three logical segments: _text_, _data_, and _stack_. The text segment is read-only and contains the machine instructions of a program. The data and stack segments are both readable and writable. The data segment contains the initialized and uninitialized data portions of a program, whereas the stack segment holds the application's run-time stack. On most machines, the stack segment is extended automatically by the kernel as the process executes. A process can expand or contract its data segment by making a system call, whereas a process can change the size of its text segment only when the segment's contents are overlaid with data from the filesystem, or when debugging takes place. The initial contents of the segments of a child process are duplicates of the segments of a parent process.
The entire contents of a process address space do not need to be resident for a process to execute. If a process references a part of its address space that is not resident in main memory, the system _pages_ the necessary information into memory. When system resources are scarce, the system uses a two-level approach to maintain available resources. If a modest amount of memory is available, the system will take memory resources away from processes if these resources have not been used recently. Should there be a severe resource shortage, the system will resort to _swapping_ the entire context of a process to secondary storage. The _demand paging_ and _swapping_ done by the system are effectively transparent to processes. A process may, however, advise the system about expected future memory utilization as a performance aid.
==== BSD Memory-Management Design Decisions
The support of large sparse address spaces, mapped files, and shared memory was a requirement for 4.2BSD. An interface was specified, called _mmap_, that allowed unrelated processes to request a shared mapping of a file into their address spaces. If multiple processes mapped the same file into their address spaces, changes to the file's portion of an address space by one process would be reflected in the area mapped by the other processes, as well as in the file itself. Ultimately, 4.2BSD was shipped without the _mmap_ interface, because of pressure to make other features, such as networking, available.
Further development of the _mmap_ interface continued during the work on 4.3BSD. Over 40 companies and research groups participated in the discussions leading to the revised architecture that was described in the Berkeley Software Architecture Manual <<biblio-mckusick-1, [McKusick et al, 1994]>>. Several of the companies have implemented the revised interface <<biblio-gingell, [Gingell et al, 1987]>>.
Once again, time pressure prevented 4.3BSD from providing an implementation of the interface. Although the latter could have been built into the existing 4.3BSD virtual-memory system, the developers decided not to put it in because that implementation was nearly 10 years old. Furthermore, the original virtual-memory design was based on the assumption that computer memories were small and expensive, whereas disks were locally connected, fast, large, and inexpensive. Thus, the virtual-memory system was designed to be frugal with its use of memory at the expense of generating extra disk traffic. In addition, the 4.3BSD implementation was riddled with VAX memory-management hardware dependencies that impeded its portability to other computer architectures. Finally, the virtual-memory system was not designed to support the tightly coupled multiprocessors that are becoming increasingly common and important today.
Attempts to improve the old implementation incrementally seemed doomed to failure. A completely new design, on the other hand, could take advantage of large memories, conserve disk transfers, and have the potential to run on multiprocessors. Consequently, the virtual-memory system was completely replaced in 4.4BSD. The 4.4BSD virtual-memory system is based on the Mach 2.0 VM system <<biblio-tevanian, [Tevanian, 1987]>>, with updates from Mach 2.5 and Mach 3.0. It features efficient support for sharing, a clean separation of machine-independent and machine-dependent features, as well as (currently unused) multiprocessor support. Processes can map files anywhere in their address space. They can share parts of their address space by doing a shared mapping of the same file. Changes made by one process are visible in the address space of the other process, and also are written back to the file itself. Processes can also request private mappings of a file, which prevents any changes that they make from being visible to other processes mapping the file or being written back to the file itself.
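The difference between the two kinds of mapping can be seen in a short userland sketch; [.filename]#data.bin# is a hypothetical file assumed to be at least one page long.
[.programlisting]
....
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>

int
main(void)
{
	char *shared_map, *private_map;

	int fd = open("data.bin", O_RDWR);	/* hypothetical file */
	if (fd == -1)
		return (1);

	/* MAP_SHARED: stores are visible to other mappers and reach the file. */
	shared_map = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
	    MAP_SHARED, fd, 0);

	/* MAP_PRIVATE: copy-on-write; stores stay private to this process. */
	private_map = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
	    MAP_PRIVATE, fd, 0);

	if (shared_map != MAP_FAILED)
		shared_map[0] = 'S';	/* eventually written back to data.bin */
	if (private_map != MAP_FAILED)
		private_map[0] = 'P';	/* never reaches data.bin */

	close(fd);
	return (0);
}
....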
Another issue with the virtual-memory system is the way that information is passed into the kernel when a system call is made. 4.4BSD always copies data from the process address space into a buffer in the kernel. For read or write operations that are transferring large quantities of data, doing the copy can be time consuming. An alternative to doing the copying is to remap the process memory into the kernel. The 4.4BSD kernel always copies the data for several reasons:
* Often, the user data are not page aligned and are not a multiple of the hardware page length.
* If the page is taken away from the process, it will no longer be able to reference that page. Some programs depend on the data remaining in the buffer even after those data have been written.
* If the process is allowed to keep a copy of the page (as it is in current 4.4BSD semantics), the page must be made _copy-on-write_. A copy-on-write page is one that is protected against being written by being made read-only. If the process attempts to modify the page, the kernel gets a write fault. The kernel then makes a copy of the page that the process can modify. Unfortunately, the typical process will immediately try to write new data to its output buffer, forcing the data to be copied anyway.
* When pages are remapped to new virtual-memory addresses, most memory-management hardware requires that the hardware address-translation cache be purged selectively. The cache purges are often slow. The net effect is that remapping is slower than copying for blocks of data less than 4 to 8 Kbyte.
The biggest incentives for memory mapping are the needs for accessing big files and for passing large quantities of data between processes. The _mmap_ interface provides a way for both of these tasks to be done without copying.
==== Memory Management Inside the Kernel
The kernel often does allocations of memory that are needed for only the duration of a single system call. In a user process, such short-term memory would be allocated on the run-time stack. Because the kernel has a limited run-time stack, it is not feasible to allocate even moderate-sized blocks of memory on it. Consequently, such memory must be allocated through a more dynamic mechanism. For example, when the system must translate a pathname, it must allocate a 1-Kbyte buffer to hold the name. Other blocks of memory must be more persistent than a single system call, and thus could not be allocated on the stack even if there was space. An example is protocol-control blocks that remain throughout the duration of a network connection.
Demands for dynamic memory allocation in the kernel have increased as more services have been added. A generalized memory allocator reduces the complexity of writing code inside the kernel. Thus, the 4.4BSD kernel has a single memory allocator that can be used by any part of the system. It has an interface similar to the C library routines _malloc_ and _free_ that provide memory allocation to application programs <<biblio-mckusick-2, [McKusick & Karels, 1988]>>. Like the C library interface, the allocation routine takes a parameter specifying the size of memory that is needed. The range of sizes for memory requests is not constrained; however, physical memory is allocated and is not paged. The free routine takes a pointer to the storage being freed, but does not require the size of the piece of memory being freed.
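The interface just described can be sketched using the FreeBSD-style _malloc(9)_ spelling, with the predefined M_TEMP type and the M_WAITOK flag; the 4.4BSD sources spell this slightly differently, so treat the details as illustrative rather than exact.
[.programlisting]
....
#include <sys/param.h>
#include <sys/malloc.h>

/*
 * Illustrative only: allocate a 1-Kbyte pathname buffer for the
 * duration of an operation, as in the pathname-translation example
 * above, then release it.
 */
static void
translate_pathname(void)
{
	char *namebuf;

	namebuf = malloc(1024, M_TEMP, M_WAITOK);	/* may sleep for memory */

	/* ... use the buffer ... */

	free(namebuf, M_TEMP);		/* no size argument is required */
}
....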
[[overview-io-system]]
=== I/O System
The basic model of the UNIX I/O system is a sequence of bytes that can be accessed either randomly or sequentially. There are no _access methods_ and no _control blocks_ in a typical UNIX user process.
Different programs expect various levels of structure, but the kernel does not impose structure on I/O. For instance, the convention for text files is lines of ASCII characters separated by a single newline character (the ASCII line-feed character), but the kernel knows nothing about this convention. For the purposes of most programs, the model is further simplified to being a stream of data bytes, or an _I/O stream_. It is this single common data form that makes the characteristic UNIX tool-based approach work <<biblio-kernighan, [Kernighan & Pike, 1984]>>. An I/O stream from one program can be fed as input to almost any other program. (This kind of traditional UNIX I/O stream should not be confused with the Eighth Edition stream I/O system or with the System V, Release 3 STREAMS, both of which can be accessed as traditional I/O streams.)
==== Descriptors and I/O
UNIX processes use _descriptors_ to reference I/O streams. Descriptors are small unsigned integers obtained from the _open_ and _socket_ system calls. The _open_ system call takes as arguments the name of a file and a permission mode to specify whether the file should be open for reading or for writing, or for both. This system call also can be used to create a new, empty file. A _read_ or _write_ system call can be applied to a descriptor to transfer data. The _close_ system call can be used to deallocate any descriptor.
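A short userland example of the descriptor interface just described; [.filename]#notes.txt# is a hypothetical file.
[.programlisting]
....
#include <fcntl.h>
#include <unistd.h>

int
main(void)
{
	char buf[512];
	ssize_t n;

	int fd = open("notes.txt", O_RDONLY);	/* obtain a small-integer descriptor */
	if (fd == -1)
		return (1);

	n = read(fd, buf, sizeof(buf));		/* transfer data via the descriptor */
	if (n > 0)
		write(STDOUT_FILENO, buf, (size_t)n);	/* descriptor 1: standard output */

	close(fd);				/* deallocate the descriptor */
	return (0);
}
....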
Descriptors represent underlying objects supported by the kernel, and are created by system calls specific to the type of object. In 4.4BSD, three kinds of objects can be represented by descriptors: files, pipes, and sockets.
* A _file_ is a linear array of bytes with at least one name. A file exists until all its names are deleted explicitly and no process holds a descriptor for it. A process acquires a descriptor for a file by opening that file's name with the _open_ system call. I/O devices are accessed as files.
* A _pipe_ is a linear array of bytes, as is a file, but it is used solely as an I/O stream, and it is unidirectional. It also has no name, and thus cannot be opened with _open_. Instead, it is created by the _pipe_ system call, which returns two descriptors, one of which accepts input that is sent to the other descriptor reliably, without duplication, and in order. The system also supports a named pipe or FIFO. A FIFO has properties identical to a pipe, except that it appears in the filesystem; thus, it can be opened using the _open_ system call. Two processes that wish to communicate each open the FIFO: One opens it for reading, the other for writing.
* A _socket_ is a transient object that is used for interprocess communication; it exists only as long as some process holds a descriptor referring to it. A socket is created by the _socket_ system call, which returns a descriptor for it. There are different kinds of sockets that support various communication semantics, such as reliable delivery of data, preservation of message ordering, and preservation of message boundaries.
In systems before 4.2BSD, pipes were implemented using the filesystem; when sockets were introduced in 4.2BSD, pipes were reimplemented as sockets.
The kernel keeps for each process a _descriptor table_, which is a table that the kernel uses to translate the external representation of a descriptor into an internal representation. (The descriptor is merely an index into this table.) The descriptor table of a process is inherited from that process's parent, and thus access to the objects to which the descriptors refer also is inherited. The main ways that a process can obtain a descriptor are by opening or creation of an object, and by inheritance from the parent process. In addition, socket IPC allows passing of descriptors in messages between unrelated processes on the same machine.
Every valid descriptor has an associated _file offset_ in bytes from the beginning of the object. Read and write operations start at this offset, which is updated after each data transfer. For objects that permit random access, the file offset also may be set with the _lseek_ system call. Ordinary files permit random access, and some devices do, as well. Pipes and sockets do not.
When a process terminates, the kernel reclaims all the descriptors that were in use by that process. If the process was holding the final reference to an object, the object's manager is notified so that it can do any necessary cleanup actions, such as final deletion of a file or deallocation of a socket.
==== Descriptor Management
Most processes expect three descriptors to be open already when they start running. These descriptors are 0, 1, 2, more commonly known as _standard input_, _standard output_, and _standard error_, respectively. Usually, all three are associated with the user's terminal by the login process (see Section 14.6) and are inherited through _fork_ and _exec_ by processes run by the user. Thus, a program can read what the user types by reading standard input, and the program can send output to the user's screen by writing to standard output. The standard error descriptor also is open for writing and is used for error output, whereas standard output is used for ordinary output.
These (and other) descriptors can be mapped to objects other than the terminal; such mapping is called _I/O redirection_, and all the standard shells permit users to do it. The shell can direct the output of a program to a file by closing descriptor 1 (standard output) and opening the desired output file to produce a new descriptor 1. It can similarly redirect standard input to come from a file by closing descriptor 0 and opening the file.
Pipes allow the output of one program to be input to another program without rewriting or even relinking of either program. Instead of descriptor 1 (standard output) of the source program being set up to write to the terminal, it is set up to be the input descriptor of a pipe. Similarly, descriptor 0 (standard input) of the sink program is set up to reference the output of the pipe, instead of the terminal keyboard. The resulting set of two processes and the connecting pipe is known as a _pipeline_. Pipelines can be arbitrarily long series of processes connected by pipes.
The _open_, _pipe_, and _socket_ system calls produce new descriptors with the lowest unused number usable for a descriptor. For pipelines to work, some mechanism must be provided to map such descriptors into 0 and 1. The _dup_ system call creates a copy of a descriptor that points to the same file-table entry. The new descriptor is also the lowest unused one, but if the desired descriptor is closed first, _dup_ can be used to do the desired mapping. Care is required, however: If descriptor 1 is desired, and descriptor 0 happens also to have been closed, descriptor 0 will be the result. To avoid this problem, the system provides the _dup2_ system call; it is like _dup_, but it takes an additional argument specifying the number of the desired descriptor (if the desired descriptor was already open, _dup2_ closes it before reusing it).
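The technique reads more clearly in code. The following sketch builds the equivalent of the pipeline `ls | wc -l` using _pipe_, _fork_, _dup2_, and _execlp_.
[.programlisting]
....
#include <unistd.h>

int
main(void)
{
	int pfd[2];

	pipe(pfd);			/* pfd[0]: read end, pfd[1]: write end */

	if (fork() == 0) {		/* first child: the source program */
		dup2(pfd[1], 1);	/* standard output now feeds the pipe */
		close(pfd[0]);
		close(pfd[1]);
		execlp("ls", "ls", (char *)NULL);
		_exit(127);
	}
	if (fork() == 0) {		/* second child: the sink program */
		dup2(pfd[0], 0);	/* standard input now drains the pipe */
		close(pfd[0]);
		close(pfd[1]);
		execlp("wc", "wc", "-l", (char *)NULL);
		_exit(127);
	}
	close(pfd[0]);			/* parent keeps neither end open */
	close(pfd[1]);
	return (0);
}
....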
==== Devices
Hardware devices have filenames, and may be accessed by the user via the same system calls used for regular files. The kernel can distinguish a _device special file_ or _special file_, and can determine to what device it refers, but most processes do not need to make this determination. Terminals, printers, and tape drives are all accessed as though they were streams of bytes, like 4.4BSD disk files. Thus, device dependencies and peculiarities are kept in the kernel as much as possible, and even in the kernel most of them are segregated in the device drivers.
Hardware devices can be categorized as either _structured_ or _unstructured_; they are known as _block_ or _character_ devices, respectively. Processes typically access devices through _special files_ in the filesystem. I/O operations to these files are handled by kernel-resident software modules termed _device drivers_. Most network-communication hardware devices are accessible through only the interprocess-communication facilities, and do not have special files in the filesystem name space, because the _raw-socket_ interface provides a more natural interface than does a special file.
Structured or block devices are typified by disks and magnetic tapes, and include most random-access devices. The kernel supports read-modify-write-type buffering actions on block-oriented structured devices to allow the latter to be read and written in a totally random byte-addressed fashion, like regular files. Filesystems are created on block devices.
Unstructured devices are those devices that do not support a block structure. Familiar unstructured devices are communication lines, raster plotters, and unbuffered magnetic tapes and disks. Unstructured devices typically support large block I/O transfers.
Unstructured files are called _character devices_ because the first of these to be implemented were terminal device drivers. The kernel interface to the driver for these devices proved convenient for other devices that were not block structured.
Device special files are created by the _mknod_ system call. There is an additional system call, _ioctl_, for manipulating the underlying device parameters of special files. The operations that can be done differ for each device. This system call allows the special characteristics of devices to be accessed, rather than overloading the semantics of other system calls. For example, there is an _ioctl_ on a tape drive to write an end-of-tape mark, instead of there being a special or modified version of _write_.
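As a brief illustration of the tape example, the following sketch writes one end-of-file mark using the magnetic-tape ioctl interface declared in `<sys/mtio.h>`; the device pathname is illustrative and error handling is omitted.

[source,c]
----
#include <sys/ioctl.h>
#include <sys/mtio.h>
#include <fcntl.h>
#include <unistd.h>

int
main(void)
{
	struct mtop mt;
	int fd;

	fd = open("/dev/nsa0", O_WRONLY);	/* a no-rewind tape device; the name is illustrative */
	mt.mt_op = MTWEOF;			/* operation: write end-of-file marks */
	mt.mt_count = 1;			/* write exactly one mark */
	ioctl(fd, MTIOCTOP, &mt);
	close(fd);
	return (0);
}
----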
==== Socket IPC
The 4.2BSD kernel introduced an IPC mechanism more flexible than pipes, based on _sockets_. A socket is an endpoint of communication referred to by a descriptor, just like a file or a pipe. Two processes can each create a socket, and then connect those two endpoints to produce a reliable byte stream. Once connected, the descriptors for the sockets can be read or written by processes, just as the latter would do with a pipe. The transparency of sockets allows the kernel to redirect the output of one process to the input of another process residing on another machine. A major difference between pipes and sockets is that pipes require a common parent process to set up the communications channel. A connection between sockets can be set up by two unrelated processes, possibly residing on different machines.
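The following minimal sketch illustrates this byte-stream behaviour with a connected pair of local-domain sockets; _socketpair_ is used for brevity, whereas unrelated processes would instead use _socket_, _bind_, _listen_, _connect_, and _accept_. Error handling is omitted.

[source,c]
----
#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>

int
main(void)
{
	int sv[2];
	char buf[32];

	socketpair(AF_LOCAL, SOCK_STREAM, 0, sv);	/* a connected pair of endpoints */

	if (fork() == 0) {		/* child: writes on one endpoint */
		close(sv[0]);
		write(sv[1], "hello", 5);
		_exit(0);
	}
	close(sv[1]);			/* parent: reads on the other endpoint */
	read(sv[0], buf, sizeof(buf));
	return (0);
}
----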
System V provides local interprocess communication through FIFOs (also known as _named pipes_). FIFOs appear as an object in the filesystem that unrelated processes can open and send data through in the same way as they would communicate through a pipe. Thus, FIFOs do not require a common parent to set them up; they can be connected after a pair of processes are up and running. Unlike sockets, FIFOs can be used on only a local machine; they cannot be used to communicate between processes on different machines. FIFOs are implemented in 4.4BSD only because they are required by the POSIX.1 standard. Their functionality is a subset of the socket interface.
The socket mechanism requires extensions to the traditional UNIX I/O system calls to provide the associated naming and connection semantics. Rather than overloading the existing interface, the developers used the existing interfaces to the extent that the latter worked without being changed, and designed new interfaces to handle the added semantics. The _read_ and _write_ system calls were used for byte-stream type connections, but six new system calls were added to allow sending and receiving addressed messages such as network datagrams. The system calls for writing messages include _send_, _sendto_, and _sendmsg_. The system calls for reading messages include _recv_, _recvfrom_, and _recvmsg_. In retrospect, the first two in each class are special cases of the others; _recvfrom_ and _sendto_ probably should have been added as library interfaces to _recvmsg_ and _sendmsg_, respectively.
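That relationship can be sketched as follows: the same datagram is transmitted once with _sendto_ and once with an equivalent _sendmsg_ call. The helper function name is purely illustrative, and the destination address is assumed to have been filled in by the caller.

[source,c]
----
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <string.h>

ssize_t
send_both_ways(int s, void *buf, size_t len, struct sockaddr *sa, socklen_t salen)
{
	struct msghdr msg;
	struct iovec iov;

	/* The simple form. */
	(void)sendto(s, buf, len, 0, sa, salen);

	/* The same operation expressed with the general form. */
	memset(&msg, 0, sizeof(msg));
	iov.iov_base = buf;
	iov.iov_len = len;
	msg.msg_name = sa;
	msg.msg_namelen = salen;
	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	return (sendmsg(s, &msg, 0));
}
----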
==== Scatter/Gather I/O
In addition to the traditional _read_ and _write_ system calls, 4.2BSD introduced the ability to do scatter/gather I/O. Scatter input uses the _readv_ system call to allow a single read to be placed in several different buffers. Conversely, the _writev_ system call allows several different buffers to be written in a single atomic write. Instead of passing a single buffer and length parameter, as is done with _read_ and _write_, the process passes in a pointer to an array of buffers and lengths, along with a count describing the size of the array.
This facility allows buffers in different parts of a process address space to be written atomically, without the need to copy them to a single contiguous buffer. Atomic writes are necessary in the case where the underlying abstraction is record based, such as tape drives that output a tape block on each write request. It is also convenient to be able to read a single request into several different buffers (such as a record header into one place and the data into another). Although an application can simulate the ability to scatter data by reading the data into a large buffer and then copying the pieces to their intended destinations, the cost of memory-to-memory copying in such cases often would more than double the running time of the affected application.
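A small sketch of gather output follows: a record header and its data live in separate buffers but are written with a single atomic _writev_ request. The record layout and function name are illustrative.

[source,c]
----
#include <sys/types.h>
#include <sys/uio.h>

struct header {
	int	type;
	int	length;
};

ssize_t
write_record(int fd, struct header *hdr, char *data, size_t datalen)
{
	struct iovec iov[2];

	iov[0].iov_base = hdr;		/* first buffer: the record header */
	iov[0].iov_len = sizeof(*hdr);
	iov[1].iov_base = data;		/* second buffer: the record data */
	iov[1].iov_len = datalen;
	return (writev(fd, iov, 2));	/* both buffers written in one atomic request */
}
----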
Just as _send_ and _recv_ could have been implemented as library interfaces to _sendto_ and _recvfrom_, it also would have been possible to simulate _read_ with _readv_ and _write_ with _writev_. However, _read_ and _write_ are used so much more frequently that the added cost of simulating them would not have been worthwhile.
==== Multiple Filesystem Support
With the expansion of network computing, it became desirable to support both local and remote filesystems. To simplify the support of multiple filesystems, the developers added a new virtual node or _vnode_ interface to the kernel. The set of operations exported from the vnode interface appears much like the filesystem operations previously supported by the local filesystem. However, they may be supported by a wide range of filesystem types:
* Local disk-based filesystems
* Files imported using a variety of remote filesystem protocols
* Read-only CD-ROM filesystems
* Filesystems providing special-purpose interfaces -- for example, the `/proc` filesystem
A few variants of 4.4BSD, such as FreeBSD, allow filesystems to be loaded dynamically when the filesystems are first referenced by the _mount_ system call. The vnode interface is described in Section 6.5; its ancillary support routines are described in Section 6.6; several of the special-purpose filesystems are described in Section 6.7.
[[overview-filesystem]]
=== Filesystems
A regular file is a linear array of bytes, and can be read and written starting at any byte in the file. The kernel distinguishes no record boundaries in regular files, although many programs recognize line-feed characters as distinguishing the ends of lines, and other programs may impose other structure. No system-related information about a file is kept in the file itself, but the filesystem stores a small amount of ownership, protection, and usage information with each file.
A _filename_ component is a string of up to 255 characters. These filenames are stored in a type of file called a _directory_. The information in a directory about a file is called a _directory entry_ and includes, in addition to the filename, a pointer to the file itself. Directory entries may refer to other directories, as well as to plain files. A hierarchy of directories and files is thus formed, and is called a _filesystem_;
.A small filesystem
[[fig-small-fs]]
image:fig2.png[A small filesystem]
a small one is shown in <<fig-small-fs>>. Directories may contain subdirectories, and there is no inherent limitation to the depth with which directory nesting may occur. To protect the consistency of the filesystem, the kernel does not permit processes to write directly into directories. A filesystem may include not only plain files and directories, but also references to other objects, such as devices and sockets.
The filesystem forms a tree, the beginning of which is the _root directory_, sometimes referred to by the name _slash_, spelled with a single solidus character (/). The root directory contains files; in our example in Fig. 2.2, it contains `vmunix`, a copy of the kernel-executable object file. It also contains directories; in this example, it contains the `usr` directory. Within the `usr` directory is the `bin` directory, which mostly contains executable object code of programs, such as the files `ls` and `vi`.
A process identifies a file by specifying that file's _pathname_, which is a string composed of zero or more filenames separated by slash (/) characters. The kernel associates two directories with each process for use in interpreting pathnames. A process's _root directory_ is the topmost point in the filesystem that the process can access; it is ordinarily set to the root directory of the entire filesystem. A pathname beginning with a slash is called an _absolute pathname_, and is interpreted by the kernel starting with the process's root directory.
A pathname that does not begin with a slash is called a _relative pathname_, and is interpreted relative to the _current working directory_ of the process. (This directory also is known by the shorter names _current directory_ or _working directory_.) The current directory itself may be referred to directly by the name _dot_, spelled with a single period (`.`). The filename _dot-dot_ (`..`) refers to a directory's parent directory. The root directory is its own parent.
A process may set its root directory with the _chroot_ system call, and its current directory with the _chdir_ system call. Any process may do _chdir_ at any time, but _chroot_ is permitted only to a process with superuser privileges. _Chroot_ is normally used to set up restricted access to the system.
Using the filesystem shown in Fig. 2.2, if a process has the root of the filesystem as its root directory, and has `/usr` as its current directory, it can refer to the file `vi` either from the root with the absolute pathname `/usr/bin/vi`, or from its current directory with the relative pathname `bin/vi`.
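Expressed as a minimal sketch, the same file is opened first by its absolute pathname and then, after a _chdir_, by a relative pathname; error handling is omitted.

[source,c]
----
#include <fcntl.h>
#include <unistd.h>

int
main(void)
{
	int fd1, fd2;

	fd1 = open("/usr/bin/vi", O_RDONLY);	/* interpreted from the root directory */

	chdir("/usr");				/* make /usr the current directory */
	fd2 = open("bin/vi", O_RDONLY);		/* interpreted relative to /usr */

	close(fd1);
	close(fd2);
	return (0);
}
----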
System utilities and databases are kept in certain well-known directories. Part of the well-defined hierarchy includes a directory that contains the _home directory_ for each user -- for example, `/usr/staff/mckusick` and `/usr/staff/karels` in Fig. 2.2. When users log in, the current working directory of their shell is set to the home directory. Within their home directories, users can create directories as easily as they can regular files. Thus, a user can build arbitrarily complex subhierarchies.
The user usually knows of only one filesystem, but the system may know that this one virtual filesystem is really composed of several physical filesystems, each on a different device. A physical filesystem may not span multiple hardware devices. Since most physical disk devices are divided into several logical devices, there may be more than one filesystem per physical device, but there will be no more than one per logical device. One filesystem -- the filesystem that anchors all absolute pathnames -- is called the _root filesystem_, and is always available. Others may be mounted; that is, they may be integrated into the directory hierarchy of the root filesystem. References to a directory that has a filesystem mounted on it are converted transparently by the kernel into references to the root directory of the mounted filesystem.
The _link_ system call takes the name of an existing file and another name to create for that file. After a successful _link_, the file can be accessed by either filename. A filename can be removed with the _unlink_ system call. When the final name for a file is removed (and the final process that has the file open closes it), the file is deleted.
Files are organized hierarchically in _directories_. A directory is a type of file, but, in contrast to regular files, a directory has a structure imposed on it by the system. A process can read a directory as it would an ordinary file, but only the kernel is permitted to modify a directory. Directories are created by the _mkdir_ system call and are removed by the _rmdir_ system call. Before 4.2BSD, the _mkdir_ and _rmdir_ system calls were implemented by doing a series of _link_ and _unlink_ system calls. There were three reasons for adding system calls explicitly to create and delete directories:
[arabic]
. The operation could be made atomic. If the system crashed, the directory would not be left half-constructed, as could happen when a series of link operations were used.
. When a networked filesystem is being run, the creation and deletion of files and directories need to be specified atomically so that they can be serialized.
. When supporting non-UNIX filesystems, such as an MS-DOS filesystem, on another partition of the disk, the other filesystem may not support link operations. Although other filesystems might support the concept of directories, they probably would not create and delete the directories with links, as the UNIX filesystem does. Consequently, they could create and delete directories only if explicit directory create and delete requests were presented.
The _chown_ system call sets the owner and group of a file, and _chmod_ changes protection attributes. _Stat_ applied to a filename can be used to read back such properties of a file. The _fchown_, _fchmod_, and _fstat_ system calls are applied to a descriptor, instead of to a filename, to do the same set of operations. The _rename_ system call can be used to give a file a new name in the filesystem, replacing one of the file's old names. Like the directory-creation and directory-deletion operations, the _rename_ system call was added to 4.2BSD to provide atomicity to name changes in the local filesystem. Later, it proved useful explicitly to export renaming operations to foreign filesystems and over the network.
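A brief sketch of these attribute calls follows, first by filename and then on an open descriptor; the filename and permission bits are illustrative and error handling is omitted.

[source,c]
----
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

int
main(void)
{
	struct stat sb;
	int fd;

	stat("notes.txt", &sb);			/* read attributes by filename */
	chmod("notes.txt", 0640);		/* owner: rw, group: r, others: none */

	fd = open("notes.txt", O_RDONLY);
	fstat(fd, &sb);				/* the same attributes, by descriptor */
	fchmod(fd, 0600);
	close(fd);
	return (0);
}
----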
The _truncate_ system call was added to 4.2BSD to allow files to be shortened to an arbitrary offset. The call was added primarily in support of the Fortran run-time library, which has the semantics such that the end of a random-access file is set to be wherever the program most recently accessed that file. Without the _truncate_ system call, the only way to shorten a file was to copy the part that was desired to a new file, to delete the old file, then to rename the copy to the original name. As well as this algorithm being slow, the library could potentially fail on a full filesystem.
Once the filesystem had the ability to shorten files, the kernel took advantage of that ability to shorten large empty directories. The advantage of shortening empty directories is that it reduces the time spent in the kernel searching them when names are being created or deleted.
Newly created files are assigned the user identifier of the process that created them and the group identifier of the directory in which they were created. A three-level access-control mechanism is provided for the protection of files. These three levels specify the accessibility of a file to
[arabic]
. The user who owns the file
. The group that owns the file
. Everyone else
Each level of access has separate indicators for read permission, write permission, and execute permission.
Files are created with zero length, and may grow when they are written. While a file is open, the system maintains a pointer into the file indicating the current location in the file associated with the descriptor. This pointer can be moved about in the file in a random-access fashion. Processes sharing a file descriptor through a _fork_ or _dup_ system call share the current location pointer. Descriptors created by separate _open_ system calls have separate current location pointers. Files may have _holes_ in them. Holes are void areas in the linear extent of the file where data have never been written. A process can create these holes by positioning the pointer past the current end-of-file and writing. When read, holes are treated by the system as zero-valued bytes.
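The following sketch creates such a hole by positioning the offset well past the end-of-file before writing; the filename and sizes are illustrative and error handling is omitted.

[source,c]
----
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

int
main(void)
{
	int fd;

	fd = open("sparse.dat", O_RDWR | O_CREAT | O_TRUNC, 0644);
	write(fd, "begin", 5);			/* data at the start of the file */
	lseek(fd, 1024 * 1024, SEEK_CUR);	/* move the offset well past end-of-file */
	write(fd, "end", 3);			/* this write leaves a hole behind it */
	close(fd);
	return (0);
}
----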
Earlier UNIX systems had a limit of 14 characters per filename component. This limitation was often a problem. For example, in addition to the natural desire of users to give files long descriptive names, a common way of forming filenames is as `basename.extension`, where the extension (indicating the kind of file, such as `.c` for C source or `.o` for intermediate binary object) is one to three characters, leaving 10 to 12 characters for the basename. Source-code-control systems and editors usually take up another two characters, either as a prefix or a suffix, for their purposes, leaving eight to 10 characters. It is easy to use 10 or 12 characters in a single English word as a basename (e.g., ``multiplexer'').
It is possible to keep within these limits, but it is inconvenient or even dangerous, because other UNIX systems accept strings longer than the limit when creating files, but then _truncate_ to the limit. A C language source file named `multiplexer.c` (already 13 characters) might have a source-code-control file with `s.` prepended, producing a filename `s.multiplexer` that is indistinguishable from the source-code-control file for `multiplexer.ms`, a file containing `troff` source for documentation for the C program. The contents of the two original files could easily get confused with no warning from the source-code-control system. Careful coding can detect this problem, but the long filenames first introduced in 4.2BSD practically eliminate it.
[[overview-filestore]]
=== Filestores
The operations defined for local filesystems are divided into two parts. Common to all local filesystems are hierarchical naming, locking, quotas, attribute management, and protection. These features are independent of how the data will be stored. 4.4BSD has a single implementation to provide these semantics.
The other part of the local filesystem is the organization and management of the data on the storage media. Laying out the contents of files on the storage media is the responsibility of the filestore. 4.4BSD supports three different filestore layouts:
* The traditional Berkeley Fast Filesystem
* The log-structured filesystem, based on the Sprite operating-system design <<biblio-rosenblum, [Rosenblum & Ousterhout, 1992]>>
* A memory-based filesystem
Although the organizations of these filestores are completely different, these differences are indistinguishable to the processes using the filestores.
The Fast Filesystem organizes data into cylinder groups. Files that are likely to be accessed together, based on their locations in the filesystem hierarchy, are stored in the same cylinder group. Files that are not expected to be accessed together are moved into different cylinder groups. Thus, files written at the same time may be placed far apart on the disk.
The log-structured filesystem organizes data as a log. All data being written at any point in time are gathered together, and are written at the same disk location. Data are never overwritten; instead, a new copy of the file is written that replaces the old one. The old files are reclaimed by a garbage-collection process that runs when the filesystem becomes full and additional free space is needed.
The memory-based filesystem is designed to store data in virtual memory. It is used for filesystems that need to support fast but temporary data, such as `/tmp`. The goal of the memory-based filesystem is to keep the storage packed as compactly as possible to minimize the usage of virtual-memory resources.
[[overview-nfs]]
=== Network Filesystem
Initially, networking was used to transfer data from one machine to another. Later, it evolved to allowing users to log in remotely to another machine. The next logical step was to bring the data to the user, instead of having the user go to the data -- and network filesystems were born. Users working locally do not experience the network delays on each keystroke, so they have a more responsive environment.
Bringing the filesystem to a local machine was among the first of the major client-server applications. The _server_ is the remote machine that exports one or more of its filesystems. The _client_ is the local machine that imports those filesystems. From the local client's point of view, a remotely mounted filesystem appears in the file-tree name space just like any other locally mounted filesystem. Local clients can change into directories on the remote filesystem, and can read, write, and execute binaries within that remote filesystem identically to the way that they can do these operations on a local filesystem.
When the local client does an operation on a remote filesystem, the request is packaged and is sent to the server. The server does the requested operation and returns either the requested information or an error indicating why the request was denied. To get reasonable performance, the client must cache frequently accessed data. The complexity of remote filesystems lies in maintaining cache consistency between the server and its many clients.
Although many remote-filesystem protocols have been developed over the years, the most pervasive one in use among UNIX systems is the Network Filesystem (NFS), whose protocol and most widely used implementation were done by Sun Microsystems. The 4.4BSD kernel supports the NFS protocol, although the implementation was done independently from the protocol specification <<biblio-macklem, [Macklem, 1994]>>. The NFS protocol is described in Chapter 9.
[[overview-terminal]]
=== Terminals
Terminals support the standard system I/O operations, as well as a collection of terminal-specific operations to control input-character editing and output delays. At the lowest level are the terminal device drivers that control the hardware terminal ports. Terminal input is handled according to the underlying communication characteristics, such as baud rate, and according to a set of software-controllable parameters, such as parity checking.
Layered above the terminal device drivers are line disciplines that provide various degrees of character processing. The default line discipline is selected when a port is being used for an interactive login. The line discipline is run in _canonical mode_; input is processed to provide standard line-oriented editing functions, and input is presented to a process on a line-by-line basis.
Screen editors and programs that communicate with other computers generally run in _noncanonical mode_ (also commonly referred to as _raw mode_ or _character-at-a-time mode_). In this mode, input is passed through to the reading process immediately and without interpretation. All special-character input processing is disabled, no erase or other line editing processing is done, and all characters are passed to the program that is reading from the terminal.
It is possible to configure the terminal in thousands of combinations between these two extremes. For example, a screen editor that wanted to receive user interrupts asynchronously might enable the special characters that generate signals and enable output flow control, but otherwise run in noncanonical mode; all other characters would be passed through to the process uninterpreted.
On output, the terminal handler provides simple formatting services, including
* Converting the line-feed character to the two-character carriage-return-line-feed sequence
* Inserting delays after certain standard control characters
* Expanding tabs
* Displaying echoed nongraphic ASCII characters as a two-character sequence of the form ``^C'' (i.e., the ASCII caret character followed by the ASCII character that is the character's value offset from the ASCII ``@'' character).
Each of these formatting services can be disabled individually by a process through control requests.
[[overview-ipc]]
=== Interprocess Communication
Interprocess communication in 4.4BSD is organized in _communication domains_. Domains currently supported include the _local domain_, for communication between processes executing on the same machine; the _internet domain_, for communication between processes using the TCP/IP protocol suite (perhaps within the Internet); the ISO/OSI protocol family for communication between sites required to run them; and the _XNS domain_, for communication between processes using the XEROX Network Systems (XNS) protocols.
Within a domain, communication takes place between communication endpoints known as _sockets_. As mentioned in Section 2.6, the _socket_ system call creates a socket and returns a descriptor; other IPC system calls are described in Chapter 11. Each socket has a type that defines its communications semantics; these semantics include properties such as reliability, ordering, and prevention of duplication of messages.
Each socket has associated with it a _communication protocol_. This protocol provides the semantics required by the socket according to the latter's type. Applications may request a specific protocol when creating a socket, or may allow the system to select a protocol that is appropriate for the type of socket being created.
Sockets may have addresses bound to them. The form and meaning of socket addresses are dependent on the communication domain in which the socket is created. Binding a name to a socket in the local domain causes a file to be created in the filesystem.
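As a minimal sketch of the local-domain case, a successful _bind_ on a local-domain socket creates an object in the filesystem at the given pathname; the pathname is illustrative and error handling is omitted.

[source,c]
----
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	struct sockaddr_un sun;
	int s;

	s = socket(AF_LOCAL, SOCK_STREAM, 0);

	memset(&sun, 0, sizeof(sun));
	sun.sun_family = AF_LOCAL;
	strncpy(sun.sun_path, "/tmp/demo.sock", sizeof(sun.sun_path) - 1);

	bind(s, (struct sockaddr *)&sun, sizeof(sun));	/* creates /tmp/demo.sock in the filesystem */
	close(s);
	return (0);
}
----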
Normal data transmitted and received through sockets are untyped. Data-representation issues are the responsibility of libraries built on top of the interprocess-communication facilities. In addition to transporting normal data, communication domains may support the transmission and reception of specially typed data, termed _access rights_. The local domain, for example, uses this facility to pass descriptors between processes.
Networking implementations on UNIX before 4.2BSD usually worked by overloading the character-device interfaces. One goal of the socket interface was for naive programs to be able to work without change on stream-style connections. Such programs can work only if the _read_ and _write_ system calls are unchanged. Consequently, the original interfaces were left intact, and were made to work on stream-type sockets. A new interface was added for more complicated sockets, such as those used to send datagrams, with which a destination address must be presented with each _send_ call.
Another benefit is that the new interface is highly portable. Shortly after a test release was available from Berkeley, the socket interface had been ported to System III by a UNIX vendor (although AT&T did not support the socket interface until the release of System V Release 4, deciding instead to use the Eighth Edition stream mechanism). The socket interface was also ported to run in many Ethernet boards by vendors, such as Excelan and Interlan, that were selling into the PC market, where the machines were too small to run networking in the main processor. More recently, the socket interface was used as the basis for Microsoft's Winsock networking interface for Windows.
[[overview-network-communication]]
=== Network Communication
Some of the communication domains supported by the _socket_ IPC mechanism provide access to network protocols. These protocols are implemented as a separate software layer logically below the socket software in the kernel. The kernel provides many ancillary services, such as buffer management, message routing, standardized interfaces to the protocols, and interfaces to the network interface drivers for the use of the various network protocols.
At the time that 4.2BSD was being implemented, there were many networking protocols in use or under development, each with its own strengths and weaknesses. There was no clearly superior protocol or protocol suite. By supporting multiple protocols, 4.2BSD could provide interoperability and resource sharing among the diverse set of machines that was available in the Berkeley environment. Multiple-protocol support also provides for future changes. Today's protocols designed for 10- to 100-Mbit-per-second Ethernets are likely to be inadequate for tomorrow's 1- to 10-Gbit-per-second fiber-optic networks. Consequently, the network-communication layer is designed to support multiple protocols. New protocols are added to the kernel without the support for older protocols being affected. Older applications can continue to operate using the old protocol over the same physical network as is used by newer applications running with a newer network protocol.
[[overview-network-implementation]]
=== Network Implementation
The first protocol suite implemented in 4.2BSD was DARPA's Transmission Control Protocol/Internet Protocol (TCP/IP). The CSRG chose TCP/IP as the first network to incorporate into the socket IPC framework, because a 4.1BSD-based implementation was publicly available from a DARPA-sponsored project at Bolt, Beranek, and Newman (BBN). That was an influential choice: The 4.2BSD implementation is the main reason for the extremely widespread use of this protocol suite. Later performance and capability improvements to the TCP/IP implementation have also been widely adopted. The TCP/IP implementation is described in detail in Chapter 13.
The release of 4.3BSD added the Xerox Network Systems (XNS) protocol suite, partly building on work done at the University of Maryland and at Cornell University. This suite was needed to connect isolated machines that could not communicate using TCP/IP.
The release of 4.4BSD added the ISO protocol suite because of the latter's increasing visibility both within and outside the United States. Because of the somewhat different semantics defined for the ISO protocols, some minor changes were required in the socket interface to accommodate these semantics. The changes were made such that they were invisible to clients of other existing protocols. The ISO protocols also required extensive addition to the two-level routing tables provided by the kernel in 4.3BSD. The greatly expanded routing capabilities of 4.4BSD include arbitrary levels of routing with variable-length addresses and network masks.
[[overview-operation]]
=== System Operation
Bootstrapping mechanisms are used to start the system running. First, the 4.4BSD kernel must be loaded into the main memory of the processor. Once loaded, it must go through an initialization phase to set the hardware into a known state. Next, the kernel must do autoconfiguration, a process that finds and configures the peripherals that are attached to the processor. The system begins running in single-user mode while a start-up script does disk checks and starts the accounting and quota checking. Finally, the start-up script starts the general system services and brings up the system to full multiuser operation.
During multiuser operation, processes wait for login requests on the terminal lines and network ports that have been configured for user access. When a login request is detected, a login process is spawned and user validation is done. When the login validation is successful, a login shell is created from which the user can run additional processes.
:sectnums!:
[bibliography]
[[references]]
== References
[[biblio-accetta]] Accetta et al, 1986 Mach: A New Kernel Foundation for UNIX Development M.Accetta R.Baron W.Bolosky D.Golub R.Rashid A.Tevanian M.Young 93-113 USENIX Association Conference Proceedings USENIX Association June 1986
[[biblio-cheriton]] Cheriton, 1988 The V Distributed System D. R.Cheriton 314-333 Comm ACM, 31, 3 March 1988
[[biblio-ewens]] Ewens et al, 1985 Tunis: A Distributed Multiprocessor Operating System P.Ewens D. R.Blythe M.Funkenhauser R. C.Holt 247-254 USENIX Association Conference Proceedings USENIX Association June 1985
[[biblio-gingell]] Gingell et al, 1987 Virtual Memory Architecture in SunOS R.Gingell J.Moran W.Shannon 81-94 USENIX Association Conference Proceedings USENIX Association June 1987
[[biblio-kernighan]] Kernighan & Pike, 1984 The UNIX Programming Environment B. W.Kernighan R.Pike Prentice-Hall Englewood Cliffs NJ 1984
[[biblio-macklem]] Macklem, 1994 The 4.4BSD NFS Implementation R.Macklem 6:1-14 4.4BSD System Manager's Manual O'Reilly & Associates, Inc. Sebastopol CA 1994
[[biblio-mckusick-2]] McKusick & Karels, 1988 Design of a General Purpose Memory Allocator for the 4.3BSD UNIX Kernel M. K.McKusick M. J.Karels 295-304 USENIX Association Conference Proceedings USENIX Association June 1988
[[biblio-mckusick-1]] McKusick et al, 1994 Berkeley Software Architecture Manual, 4.4BSD Edition M. K.McKusick M. J.Karels S. J.Leffler W. N.Joy R. S.Faber 5:1-42 4.4BSD Programmer's Supplementary Documents O'Reilly & Associates, Inc. Sebastopol CA 1994
[[biblio-ritchie]] Ritchie, 1988 Early Kernel Design private communication D. M.Ritchie March 1988
[[biblio-rosenblum]] Rosenblum & Ousterhout, 1992 The Design and Implementation of a Log-Structured File System M.Rosenblum J.Ousterhout 26-52 ACM Transactions on Computer Systems, 10, 1 Association for Computing Machinery February 1992
[[biblio-rozier]] Rozier et al, 1988 Chorus Distributed Operating Systems M.Rozier V.Abrossimov F.Armand I.Boule M.Gien M.Guillemont F.Herrmann C.Kaiser S.Langlois P.Leonard W.Neuhauser 305-370 USENIX Computing Systems, 1, 4 Fall 1988
[[biblio-tevanian]] Tevanian, 1987 Architecture-Independent Virtual Memory Management for Parallel and Distributed Environments: The Mach Approach Technical Report CMU-CS-88-106, A.Tevanian Department of Computer Science, Carnegie-Mellon University Pittsburgh PA December 1987
diff --git a/documentation/content/en/books/dev-model/_index.adoc b/documentation/content/en/books/dev-model/_index.adoc
index 887493f584..c5b4061159 100644
--- a/documentation/content/en/books/dev-model/_index.adoc
+++ b/documentation/content/en/books/dev-model/_index.adoc
@@ -1,1000 +1,1000 @@
---
title: A project model for the FreeBSD Project
authors:
- author: Niklas Saers
copyright: Copyright © 2002-2005 Niklas Saers
-releaseinfo: "$FreeBSD$"
+description: A project model for the FreeBSD Project
trademarks: ["freebsd", "ibm", "ieee", "adobe", "intel", "linux", "microsoft", "opengroup", "sun", "netbsd", "general"]
---
= A project model for the FreeBSD Project
:doctype: book
:toc: macro
:toclevels: 2
:icons: font
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnums:
:sectnumlevels: 6
:partnums:
:chapter-signifier: Chapter
:part-signifier: Part
:source-highlighter: rouge
:experimental:
:skip-front-matter:
ifeval::["{backend}" == "html5"]
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
:imagesdir: ../../../../images/books/dev-model/
endif::[]
ifeval::["{backend}" == "pdf"]
include::../../../../shared/mirrors.adoc[]
include::../../../../shared/authors.adoc[]
include::../../../../shared/releases.adoc[]
include::../../../../shared/en/mailing-lists.adoc[]
include::../../../../shared/en/teams.adoc[]
include::../../../../shared/en/urls.adoc[]
:imagesdir: ../../../../static/images/books/dev-model/
endif::[]
ifeval::["{backend}" == "epub3"]
include::../../../../shared/mirrors.adoc[]
include::../../../../shared/authors.adoc[]
include::../../../../shared/releases.adoc[]
include::../../../../shared/en/mailing-lists.adoc[]
include::../../../../shared/en/teams.adoc[]
include::../../../../shared/en/urls.adoc[]
:imagesdir: ../../../../static/images/books/dev-model/
endif::[]
'''
toc::[]
[[foreword]]
[.abstract-title]
Foreword
Up until now, the FreeBSD project has published a number of documents describing how different parts of its work are done. However, a project model summarising how the project is structured is needed because of the increasing number of project members. footnote:[This goes hand-in-hand with Brooks' law that adding another person to a late project will make it later, since it increases the communication needs. A project model is a tool to reduce the communication needs.] This paper will provide such a project model and is donated to the FreeBSD Documentation project where it can evolve together with the project so that it can at any point in time reflect the way the project works. It is based on [<<thesis, Saers,2003>>].
I would like to thank the following people for taking the time to explain things that were unclear to me and for proofreading the document.
* Andrey A. Chernov mailto:ache@freebsd.org[ache@freebsd.org]
* Bruce A. Mah mailto:bmah@freebsd.org[bmah@freebsd.org]
* Dag-Erling Smørgrav mailto:des@freebsd.org[des@freebsd.org]
* Giorgos Keramidas mailto:keramida@freebsd.org[keramida@freebsd.org]
* Ingvil Hovig mailto:ingvil.hovig@skatteetaten.no[ingvil.hovig@skatteetaten.no]
* Jesper Holck mailto:jeh.inf@cbs.dk[jeh.inf@cbs.dk]
* John Baldwin mailto:jhb@freebsd.org[jhb@freebsd.org]
* John Polstra mailto:jdp@freebsd.org[jdp@freebsd.org]
* Kirk McKusick mailto:mckusick@freebsd.org[mckusick@freebsd.org]
* Mark Linimon mailto:linimon@freebsd.org[linimon@freebsd.org]
* Marleen Devos
* Niels Jørgenssen mailto:nielsj@ruc.dk[nielsj@ruc.dk]
* Nik Clayton mailto:nik@freebsd.org[nik@freebsd.org]
* Poul-Henning Kamp mailto:phk@freebsd.org[phk@freebsd.org]
* Simon L. Nielsen mailto:simon@freebsd.org[simon@freebsd.org]
[[overview]]
== Overview
A project model is a means to reduce the communications overhead in a project. As shown by [<<brooks, Brooks, 1995>>], increasing the number of project participants increases the communication in the project exponentially. FreeBSD has during the past few years increased both its number of active users and its number of committers, and the communication in the project has risen accordingly. This project model will serve to reduce this overhead by providing an up-to-date description of the project.
During the Core elections in 2002, Mark Murray stated "I am opposed to a long rule-book, as that satisfies lawyer-tendencies, and is counter to the technocentricity that the project so badly needs." [<<bsd-election2002, FreeBSD, 2002B>>]. This project model is not meant to be a tool to justify creating impositions for developers, but as a tool to facilitate coordination. It is meant as a description of the project, with an overview of how the different processes are executed. It is an introduction to how the FreeBSD project works.
The FreeBSD project model will be described as of July 1st, 2004. It is based on Niels Jørgensen's paper [<<jorgensen2001, Jørgensen, 2001>>], FreeBSD's official documents, discussions on FreeBSD mailing lists and interviews with developers.
After providing definitions of the terms used, this document will outline the organisational structure (including role descriptions and communication lines), discuss the methodology model, present the tools used for process control, and then present the defined processes. Finally it will outline the major sub-projects of the FreeBSD project.
[<<freebsd-developer-handbook, FreeBSD, 2002A>>] Sections 1.2 and 1.3 give the vision and the architectural guidelines for the project. The vision is "To produce the best UNIX-like operating system package possible, with due respect to the original software tools ideology as well as usability, performance and stability." The architectural guidelines help determine whether a problem that someone wants solved is within the scope of the project.
[[definitions]]
== Definitions
[[ref-activity]]
=== Activity
An "activity" is an element of work performed during the course of a project [<<ref-pmbok, PMI, 2000>>]. It has an output and leads towards an outcome. Such an output can either be an input to another activity or a part of the process' delivery.
[[def-process]]
=== Process
A "process" is a series of activities that lead towards a particular outcome. A process can consist of one or more sub-processes. An example of a process is software design.
[[ref-hat]]
=== Hat
A "hat" is synonymous with role. A hat has certain responsibilities in a process and for the process outcome. The hat executes activities. It is well defined what issues the hat should be contacted about by the project members and people outside the project.
[[ref-outcome]]
=== Outcome
An "outcome" is the final output of the process. This is synonymous with deliverable, that is defined as "any measurable, tangible, verifiable outcome, result or item that must be produced to complete a project or part of a project. Often used more narrowly in reference to an external deliverable, which is a deliverable that is subject to approval by the project sponsor or customer" by [<<ref-pmbok, PMI, 2000>>]. Examples of outcomes are a piece of software, a decision made or a report written.
[[ref-freebsd]]
=== FreeBSD
When saying "FreeBSD" we will mean the BSD derivative UNIX-like operating system FreeBSD, whereas when saying "the FreeBSD Project" we will mean the project organisation.
[[model-orgstruct]]
== Organisational structure
While no-one takes ownership of FreeBSD, the FreeBSD organisation is divided into core, committers and contributors and is part of the FreeBSD community that lives around it.
The FreeBSD Project's structure (in order of descending authority)
[.informaltable]
[cols="1,1", options="header"]
|===
| Group
| Number of people
|Core members
|9
|Committers
|269
|Contributors
|~3000
|===
The number of committers has been determined by going through CVS logs from January 1st, 2004 to December 31st, 2004, and the number of contributors by going through the list of contributions and problem reports.
The main resource in the FreeBSD community is its developers: the committers and contributors. It is with their contributions that the project can move forward. Regular developers are referred to as contributors. As of January 1st, 2003, there are an estimated 5500 contributors on the project.
Committers are developers with the privilege of being able to commit changes. These are usually the most active developers who are willing to spend their time not only integrating their own code but integrating code submitted by the developers who do not have this privilege. They are also the developers who elect the core team, and they have access to closed discussions.
The project can be grouped into four distinct parts, and most developers will focus their involvement in one part of FreeBSD. The four parts are kernel development, userland development, ports and documentation. When referring to the base system, both kernel and userland are meant.
This split changes our table to look like this:
The FreeBSD Project's structure with committers in categories
[.informaltable]
[cols="1,1,1", options="header"]
|===
| Group
| Category
| Number of people
|Core members
|
|9
|Committers
|Kernel
|56
|
|Userland
|50
|
|Docs
|9
|
|Ports
|120
|
|Total
|269
|Contributors
|
|~3000
|===
The number of committers per area has been determined by going through CVS logs from January 1st, 2004 to December 31st, 2004. Note that many committers work in multiple areas, making the total number higher than the real number of committers. The total number of committers at that time was 269.
Committers fall into three groups: committers who are only concerned with one area of the project (for instance file systems), committers who are involved only with one sub-project, and committers who commit to different parts of the code, including sub-projects. Because some committers work on different parts, the total number in the committers section of the table is higher than in the above table.
The kernel is the main building block of FreeBSD. While the userland applications are protected against faults in other userland applications, the entire system is vulnerable to errors in the kernel. This, combined with the vast number of dependencies within the kernel and the fact that it is not easy to see all the consequences of a kernel change, demands developers with a relatively full understanding of the kernel. Multiple development efforts in the kernel also require closer coordination than userland applications do.
The core utilities, known as userland, provide the interface that identifies FreeBSD, including the user interface, the shared libraries and the external interfaces to connecting clients. Currently, 162 people are involved in userland development and maintenance, many being maintainers for their own part of the code. Maintainership will be discussed in the <<role-maintainer>> section.
Documentation is handled by <<sub-project-documentation>> and includes all documents surrounding the FreeBSD project, including the web pages. During 2004, 101 people made commits to the FreeBSD Documentation Project.
Ports is the collection of meta-data that is needed to make software packages build correctly on FreeBSD. An example of a port is the port for the web-browser Mozilla. It contains information about where to fetch the source, what patches to apply and how, and how the package should be installed on the system. This allows automated tools to fetch, build and install the package. As of this writing, there are more than 12600 ports available,footnote:[Statistics are generated by counting the number of entries in the file fetched by portsdb by April 1st, 2005. portsdb is a part of the port sysutils/portupgrade.] ranging from web servers to games, programming languages and most of the application types that are in use on modern computers. Ports will be discussed further in the section <<sub-project-ports>>.
[[methodology-model]]
== Methodology model
[[development-model]]
=== Development model
There is no defined model for how people write code in FreeBSD. However, Niels Jørgensen has suggested a model of how written code is integrated into the project.
Jørgensen's model for change integration
[.informaltable]
[cols="1,1,1", options="header"]
|===
| Stage
| Next if successful
| Next if unsuccessful
|code
|review
|
|review
|pre-commit test
|code
|pre-commit test
|development release
|code
|development release
|parallel debugging
|code
|parallel debugging
|production release
|code
|production release
|
|code
|===
The "development release" is the FreeBSD-CURRENT ("-CURRENT") branch and the "production release" is the FreeBSD-STABLE branch ("-STABLE") [<<jorgensen2001, Jørgensen, 2001>>].
This is a model for one change, and shows that after coding, developers seek community review and try integrating it with their own systems. After integrating the change into the development release, called FreeBSD-CURRENT, it is tested by many users and developers in the FreeBSD community. After it has gone through enough testing, it is merged into the production release, called FreeBSD-STABLE. Unless each stage is finished successfully, the developer needs to go back and make modifications in the code and restart the process. To integrate a change with either -CURRENT or -STABLE is called making a commit.
Jørgensen found that most FreeBSD developers work individually, meaning that this model is used in parallel by many developers on the different ongoing development efforts. A developer can also be working on multiple changes, so that while they are waiting for review or people to test one or more of their changes, they may be writing another change.
As each commit represents an increment, this is a massively incremental model. The commits are in fact so frequent that during one year footnote:[The period from January 1st, 2004 to December 31st, 2004 was examined to find this number.] , 85427 commits were made, making a daily average of 233 commits.
Within the "code" bracket in Jørgensen's model, each programmer has their own working style and follows their own development models. The bracket could very well have been called "development" as it includes requirements gathering and analysis, system and detailed design, implementation and verification. However, the only output from these stages is the source code or system documentation.
From a stepwise model's perspective (such as the waterfall model), the other brackets can be seen as further verification and system integration. This system integration is also important to see if a change is accepted by the community. Up until the code is committed, the developer is free to choose how much to communicate about it to the rest of the project. In order for -CURRENT to work as a buffer (so that bright ideas that had some undiscovered drawbacks can be backed out) the minimum time a commit should be in -CURRENT before merging it to -STABLE is 3 days. Such a merge is referred to as an MFC (Merge From Current).
It is important to notice the word "change". Most commits do not contain radical new features, but are maintenance updates.
The only exceptions from this model are security fixes and changes to features that are deprecated in the -CURRENT branch. In these cases, changes can be committed directly to the -STABLE branch.
In addition to the many people working on the project itself, there are many projects related to the FreeBSD Project. These are either projects developing brand new features, sub-projects, or projects whose outcome is incorporated into FreeBSD footnote:[For instance, the development of the Bluetooth stack started as a sub-project until it was deemed stable enough to be merged into the -CURRENT branch. Now it is a part of the core FreeBSD system.]. These projects fit into the FreeBSD Project just like regular development efforts: they produce code that is integrated with the FreeBSD Project. However, some of them (like Ports and Documentation) apply to both branches and have the privilege of committing directly to both -CURRENT and -STABLE.
There are no standards for how design should be done, nor is design collected in a centralised repository. The main design is that of 4.4BSD. footnote:[According to Kirk McKusick, after 20 years of developing UNIX operating systems, the interfaces are for the most part figured out. There is therefore no need for much design. However, new applications of the system and new hardware lead to some implementations being more beneficial than those that used to be preferred. One example is the introduction of web browsing that made the normal TCP/IP connection a short burst of data rather than a steady stream over a longer period of time.] As design is a part of the "Code" bracket in Jørgensen's model, it is up to every developer or sub-project how this should be done. Even if the design were stored in a central repository, the output from the design stages would be of limited use, as the differences in methodologies would make the designs poorly interoperable, if at all. For the overall design of the project, the project relies on the sub-projects to negotiate fitting interfaces with each other rather than having interfaces dictated.
[[release-branches]]
=== Release branches
The releases of FreeBSD are best illustrated by a tree with many branches where each major branch represents a major version. Minor versions are represented by branches of the major branches.
In the following release tree, arrows that follow one another in a particular direction represent a branch. Boxes with full lines and diamonds represent official releases. Boxes with dotted lines represent the development branch at that time. Security branches are represented by ovals. Diamonds differ from boxes in that they represent a fork, meaning a place where a branch splits into two branches where one of the branches becomes a sub-branch. For example, at 4.0-RELEASE the 4.0-CURRENT branch split into 4-STABLE and 5.0-CURRENT. At 4.5-RELEASE, the branch forked off a security branch called RELENG_4_5.
.The FreeBSD release tree
image::branches.png[Refer to table below for a screen-reader friendly version.]
[.informaltable]
[cols="1,1,1", options="header"]
|===
| Major release
| Forked from
| Following minor releases
|...
|
|
|3.0 Current (development branch)
|
|Releng 3 branches: 3.0 Release to 3.5 Release, leading to 3.5.1 Release and the subsequent 3 Stable branch
|4.0 Current (development branch)
|3.1 Release
|Releng 4 branches: 4.1 Release to 4.6 Release (and 4.6.2 Release), then 4.7 Release to 4.11 Release (all starting at 4.3 Release also leading to a Releng_4_n branch), and the subsequent 4 Release branch
|5.0 Current (development branch)
|4.0 Release
|Releng 5 branches: 5.0 Release to 5.4 Release (all except 5.0 and 5.3 also leading to a Releng_5_n branch), and the subsequent 5 Release branch
|6.0 Current (development branch)
|5.3 Release
|
|...
|
|
|===
The latest -CURRENT version is always referred to as -CURRENT, while the latest -STABLE release is always referred to as -STABLE. In this figure, -STABLE refers to 4-STABLE while -CURRENT refers to 5.0-CURRENT following 5.0-RELEASE. [<<freebsd-releng, FreeBSD, 2002E>>]
A "major release" is always made from the -CURRENT branch. However, the -CURRENT branch does not need to fork at that point in time, but can focus on stabilising. An example of this is that following 3.0-RELEASE, 3.1-RELEASE was also a continuation of the -CURRENT-branch, and -CURRENT did not become a true development branch until this version was released and the 3-STABLE branch was forked. When -CURRENT returns to becoming a development branch, it can only be followed by a major release. 5-STABLE is predicted to be forked off 5.0-CURRENT at around 5.3-RELEASE. It is not until 5-STABLE is forked that the development branch will be branded 6.0-CURRENT.
A "minor release" is made from the -CURRENT branch following a major release, or from the -STABLE branch.
Following and including 4.3-RELEASE,footnote:[The first release this actually happened for was 4.5-RELEASE, but security branches were at the same time created for 4.3-RELEASE and 4.4-RELEASE.] when a minor release has been made, it becomes a "security branch". This is meant for organisations that do not want to follow the -STABLE branch and the potential new/changed features it offers, but instead require an absolutely stable environment, only updating to implement security updates. footnote:[There is a terminology overlap with respect to the word "stable", which leads to some confusion. The -STABLE branch is still a development branch, whose goal is to be useful for most people. If it is never acceptable for a system to get changes that are not announced at the time it is deployed, that system should run a security branch.]
Each update to a security branch is called a "patchlevel". For every security enhancement that is done, the patchlevel number is increased, making it easy for people tracking the branch to see what security enhancements they have implemented. In cases where there have been especially serious security flaws, an entire new release can be made from a security branch. An example of this is 4.6.2-RELEASE.
[[model-summary]]
=== Model summary
To summarise, the development model of FreeBSD can be seen as the following tree:
.The overall development model
image::freebsd-code-model.png[Refer to paragraphs below for a screen-reader friendly version.]
The tree of the FreeBSD development with ongoing development efforts and continuous integration.
The tree symbolises the release versions with major versions spawning new main branches and minor versions being versions of the main branch. The top branch is the -CURRENT branch where all new development is integrated, and the -STABLE branch is the branch directly below it. Below the -STABLE branch are old, unsupported versions.
Clouds of development efforts hang over the project where developers use the development models they see fit. The product of their work is then integrated into -CURRENT where it undergoes parallel debugging and is finally merged from -CURRENT into -STABLE. Security fixes are merged from -STABLE to the security branches.
Many committers have a special area of responsibility. These roles are called hats. These hats can be either project roles, such as public relations officer, or maintainer for a certain area of the code. Because this is a project where people volunteer their spare time, people with assigned hats are not always available. They must therefore appoint a deputy who can perform the hat's role in their absence. The other option is to have the role held by a group.
Many of these hats are not formalised. Formalised hats have a charter stating the exact purpose of the hat along with its privileges and responsibilities. The writing of such charters is a new part of the project and has thus yet to be completed for all hats. These hat descriptions are not such a formalisation, but rather a summary of each role, with links to the charter where available and contact addresses.
[[sect-hats]]
== Hats
[[general-hats]]
=== General Hats
[[role-contributor]]
==== Contributor
A Contributor contributes to the FreeBSD project either as a developer, as an author, by sending problem reports, or in other ways contributing to the progress of the project. A contributor has no special privileges in the FreeBSD project. [<<freebsd-contributors, FreeBSD, 2002F>>]
[[role-committer]]
==== Committer
A person who has the required privileges to add their code or documentation to the repository. A committer has made a commit within the past 12 months. [<<freebsd-developer-handbook, FreeBSD, 2000A>>] An active committer is a committer who has made an average of one commit per month during that time.
It is worth noting that there are no technical barriers preventing someone who has gained commit privileges to the main project or a sub-project from making commits in parts of that project's source that the committer did not specifically get permission to modify. However, when wanting to make modifications to parts a committer has not been involved in before, they should read the commit logs to see what has happened in that area, and also read the MAINTAINERS file to see if the maintainer of that part has any special requests on how changes to the code should be made.
[[role-core]]
==== Core Team
The core team is elected by the committers from the pool of committers and serves as the board of directors of the FreeBSD project. It promotes active contributors to committers, assigns people to well-defined hats, and is the final arbiter of decisions involving which way the project should be heading. As of July 1st, 2004, core consisted of 9 members. Elections are held every two years.
[[role-maintainer]]
==== Maintainership
Maintainership of an area of the code means that the person is responsible for what is allowed to go into that area and has the final say should disagreements over the code occur. This involves proactive work aimed at stimulating contributions and reactive work in reviewing commits.
With the FreeBSD source comes the MAINTAINERS file that contains a one-line summary of how each maintainer would like contributions to be made. Having this notice and contact information enables developers to focus on the development effort rather than being stuck in a slow correspondence should the maintainer be unavailable for some time.
If the maintainer is unavailable for an unreasonably long period of time, and other people do a significant amount of work, maintainership may be switched without the maintainer's approval. This is based on the stance that maintainership should be demonstrated, not declared.
Maintainership of a particular piece of code is a hat that is not held as a group.
[[official-hats]]
=== Official Hats
The official hats in the FreeBSD Project are hats that are more or less formalised and mainly administrative roles. They have the authority and responsibility for their area. The following list shows the responsibility lines and gives a description of each hat, including who it is held by.
[[role-doc-manager]]
==== Documentation project manager
The <<sub-project-documentation>> architect is responsible for defining and following up on documentation goals for the committers in the Documentation project, which they supervise.
Hat held by: The DocEng team mailto:doceng@FreeBSD.org[doceng@FreeBSD.org]. The https://www.freebsd.org/internal/doceng/[DocEng Charter].
[[role-postmaster]]
==== Postmaster
The Postmaster is responsible for mail being correctly delivered to the committers' email address. They are also responsible for ensuring that the mailing lists work and should take measures against possible disruptions of mail such as having troll-, spam- and virus-filters.
Hat currently held by: the Postmaster Team mailto:postmaster@FreeBSD.org[postmaster@FreeBSD.org].
[[role-release-coordination]]
==== Release Coordination
The responsibilities of the Release Engineering Team are
* Setting, publishing and following a release schedule for official releases
* Documenting and formalising release engineering procedures
* Creation and maintenance of code branches
* Coordinating with the Ports and Documentation teams to have an updated set of packages and documentation released with the new releases
* Coordinating with the Security team so that pending releases are not affected by recently disclosed vulnerabilities.
Further information about the development process is available in the <<process-release-engineering>> section.
[[role-releng]]
Hat held by: the Release Engineering team mailto:re@FreeBSD.org[re@FreeBSD.org]. The https://www.freebsd.org/releng/charter/[ Release Engineering Charter].
[[role-pr-cr]]
==== Public Relations & Corporate Liaison
The Public Relations & Corporate Liaison's responsibilities are:
* Making press statements when events that are important to the FreeBSD Project occur.
* Being the official contact person for corporations that work closely with the FreeBSD Project.
* Taking steps to promote FreeBSD within both the Open Source community and the corporate world.
* Handling the "freebsd-advocacy" mailing list.
This hat is currently not occupied.
[[role-security-officer]]
==== Security Officer
The Security Officer's main responsibility is to coordinate information exchange with others in the security community and in the FreeBSD project. The Security Officer is also responsible for taking action when security problems are reported and promoting proactive development behavior when it comes to security.
Because of the fear that information about vulnerabilities may leak out to people with malicious intent before a patch is available, only a small group, consisting of the Security Officer, a deputy and two <<role-core>> members, receives sensitive information about security issues. However, to create or implement a patch, the Security Officer has the Security Officer Team mailto:security-team@FreeBSD.org[security-team@FreeBSD.org] to help do the work.
[[role-repo-manager]]
==== Source Repository Manager
The Source Repository Manager is the only one who is allowed to directly modify the repository without using the <<tool-svn>> tool. It is their responsibility to ensure that technical problems that arise in the repository are resolved quickly. The source repository manager has the authority to back out commits if this is necessary to resolve an SVN technical problem.
Hat held by: the Source Repository Manager mailto:clusteradm@FreeBSD.org[clusteradm@FreeBSD.org].
[[role-election-manager]]
==== Election Manager
The Election Manager is responsible for the <<process-core-election>> process. The manager is responsible for running and maintaining the election system, and is the final authority should minor unforeseen events happen in the election process. Major unforeseen events have to be discussed with the <<role-core>>.
Hat held only during elections.
[[role-webmaster]]
==== Web site Management
The Web site Management hat is responsible for coordinating the rollout of updated web pages on mirrors around the world, for the overall structure of the primary web site and the system it is running upon. The management needs to coordinate the content with <<sub-project-documentation>> and acts as maintainer for the "www" tree.
Hat held by: the FreeBSD Webmasters mailto:www@FreeBSD.org[www@FreeBSD.org].
[[role-ports-manager]]
==== Ports Manager
The Ports Manager acts as a liaison between <<sub-project-ports>> and the core project, and all requests from the project should go to the ports manager.
Hat held by: the Ports Management Team mailto:portmgr@FreeBSD.org[portmgr@FreeBSD.org]. The https://www.freebsd.org/portmgr/charter/[Portmgr charter].
[[role-standards]]
==== Standards
The Standards hat is responsible for ensuring that FreeBSD complies with the standards it is committed to, keeping up to date on the development of these standards, and notifying FreeBSD developers of important changes, allowing them to take a proactive role and decrease the time between a standards update and FreeBSD's compliance.
Hat currently held by: Garrett Wollman mailto:wollman@FreeBSD.org[wollman@FreeBSD.org].
[[role-core-secretary]]
==== Core Secretary
The Core Secretary's main responsibility is to write drafts of, and publish, the final Core Reports. The secretary also keeps the core agenda, thus ensuring that no issues are dropped unresolved.
Hat currently held by: {bofh}.
[[role-bugmeister]]
==== Bugmeister
The Bugmeister is responsible for ensuring that the maintenance database is in working order, that the entries are correctly categorised and that there are no invalid entries. They supervise bugbusters.
Hat currently held by: the Bugmeister Team mailto:bugmeister@FreeBSD.org[bugmeister@FreeBSD.org].
[[role-donations]]
==== Donations Liaison Officer
The task of the Donations Liaison Officer is to match developers who have needs with people or organisations willing to make a donation.
Hat held by: the Donations Liaison Office mailto:donations@FreeBSD.org[donations@FreeBSD.org]. The https://www.freebsd.org/donations/[Donations Liaison Charter].
[[role-admin]]
==== Admin
(Also called "FreeBSD Cluster Admin")
The admin team consists of the people responsible for administering the computers that the project relies on to synchronise its distributed work and communication. It consists mainly of those people who have physical access to the servers.
Hat held by: the Admin team mailto:admin@FreeBSD.org[admin@FreeBSD.org].
[[proc-depend-hats]]
=== Process dependent hats
[[role-problem-originator]]
==== Report originator
The person originally responsible for filing a Problem Report.
[[role-bugbuster]]
==== Bugbuster
A person who will either find the right person to solve the problem, or close the PR if it is a duplicate or otherwise not an interesting one.
[[role-mentor]]
==== Mentor
A mentor is a committer who takes it upon themselves to introduce a new committer to the project, ensuring that the new committer's setup is valid, that they know the tools required for their work, and that they know what is expected of them in terms of behavior.
[[role-vendor]]
==== Vendor
The person(s) or organisation from whom external code comes and to whom patches are sent.
[[role-reviewer]]
==== Reviewers
People on the mailing list where the request for review is posted.
The following section describes the defined project processes. Issues that are not covered by these processes are handled on an ad-hoc basis, based on what has been customary in similar cases.
[[model-processes]]
== Processes
[[proc-addrem-committer]]
=== Adding new and removing old committers
The Core team has the responsibility of granting commit privileges to contributors and removing them. This can only be done through a vote on the core mailing list. The ports and documentation sub-projects can grant commit privileges to people working on these projects, but have to date not removed such privileges.
Normally a contributor is recommended to core by a committer. Contributors or outsiders who contact core directly asking to be made a committer are not looked upon favourably, and such requests are usually rejected.
If the area of particular interest to the developer potentially overlaps with other committers' areas of maintainership, the opinion of those maintainers is sought. However, it is frequently one of these maintainers who recommends the developer.
When a contributor is given committer status, they are assigned a mentor. The committer who recommended the new committer will, in the general case, take it upon themselves to be the new committer's mentor.
When a contributor is given their commit bit, a <<tool-pgp>>-signed email is sent from either <<role-core-secretary>>, <<role-ports-manager>>, or nik@freebsd.org to admins@freebsd.org, the assigned mentor, the new committer, and core, confirming the approval of a new account. The mentor then gathers a password line, an <<tool-ssh2>> public key, and a PGP key from the new committer and sends them to <<role-admin>>. When the new account is created, the mentor activates the commit bit and guides the new committer through the rest of the initial process.
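As an illustration of the material the mentor gathers, a new committer might produce the SSH2 public key and PGP public key along the following lines; the key type, key size, and email address are placeholders chosen for the example, not project requirements.

[source,shell]
----
# Generate an SSH2 key pair; the public half (~/.ssh/id_rsa.pub) is what
# the mentor forwards to the admin team.
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa

# Create a PGP key pair (interactive), then export the public key so it
# can be sent to the mentor. The address below is a placeholder.
gpg --gen-key
gpg --armor --export new.committer@example.org > pgp-public-key.asc
----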
.Process summary: adding a new committer
image::proc-add-committer.png[Refer to paragraph below for a screen-reader friendly version.]
When a contributor sends a piece of code, the receiving committer may choose to recommend that the contributor be given commit privileges. If they recommend this to core, core will vote on the recommendation. If the vote is in favour, a mentor is assigned to the new committer and the new committer has to email their details to the administrators for an account to be created. After this, the new committer is all set to make their first commit. By tradition, this is done by adding their name to the committers list.
Recall that a committer is considered to be someone who has committed code during the past 12 months. However, it is not until after 18 months of inactivity have passed that commit privileges are eligible to be revoked. [<<freebsd-expiration-policy, FreeBSD, 2002H>>] There are, however, no automatic procedures for doing this. For reactions concerning commit privileges not triggered by time, see <<process-reactions,section 1.5.8>>.
.Process summary: removing a committer
image::proc-rm-committer.png[Refer to paragraph below for a screen-reader friendly version.]
When Core decides to clean up the committers list, they check who has not made a commit for the past 18 months. Committers who have not done so have their commit bits revoked and their account removed by the administrators.
It is also possible for committers to request that their commit bit be retired if for some reason they are no longer going to be actively committing to the project. In this case, it can also be restored at a later time by core, should the committer ask.
Roles in this process:
. <<role-core>>
. <<role-contributor>>
. <<role-committer>>
. <<role-maintainer>>
. <<role-mentor>>
[<<freebsd-bylaws, FreeBSD, 2000A>>] [<<freebsd-expiration-policy, FreeBSD, 2002H>>] [<<freebsd-new-account, FreeBSD, 2002I>>]
[[committing]]
=== Committing code
The committing of new or modified code is one of the most frequent processes in the FreeBSD project and will usually happen many times a day. Committing of code can only be done by a "committer". Committers commit either code written by themselves, code submitted to them, or code submitted through a <<model-pr,problem report>>.
When a developer writes non-trivial code, they should seek a code review from the community. This is done by sending mail to the relevant list asking for review. Before submitting the code for review, they should ensure it compiles correctly with the entire tree and that all relevant tests run. This is called the "pre-commit test". When contributed code is received, it should be reviewed by the committer and tested the same way.
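A minimal sketch of such a pre-commit test is shown below, assuming the change lives somewhere under [.filename]#/usr/src#. Which make targets are appropriate depends on the change, so this is illustrative rather than a mandated procedure.

[source,shell]
----
# Rebuild the world and the kernel with the change applied, to catch
# compile breakage across the tree before asking for review.
cd /usr/src
make buildworld
make buildkernel
----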
When a change is committed to a part of the source that has been contributed from an outside <<role-vendor>>, the maintainer should ensure that the patch is contributed back to the vendor. This is in line with the open source philosophy and makes it easier to stay in sync with outside projects as the patches do not have to be reapplied every time a new release is made.
After the code has been available for review and no further changes are necessary, the code is committed into the development branch, -CURRENT. If the change applies for the -STABLE branch or the other branches as well, a "Merge From Current" ("MFC") countdown is set by the committer. After the number of days the committer chose when setting the MFC have passed, an email will automatically be sent to the committer reminding them to commit it to the -STABLE branch (and possibly security branches as well). Only security critical changes should be merged to security branches.
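A rough sketch of how this can look in practice follows; the file path, log message, revision number, and merge period are invented for illustration, and the reminder itself is generated by project infrastructure rather than by anything the committer types.

[source,shell]
----
# Commit to head (-CURRENT); the "MFC after:" line in the log message
# records the countdown chosen by the committer.
svn commit -m "Fix a race in the example driver.

MFC after:      2 weeks" sys/dev/example/example.c

# When the countdown has expired and the change has survived parallel
# debugging, merge it from head into a checkout of the stable branch.
cd /path/to/stable-checkout
svn merge -c 123456 ^/head .
svn commit
----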
Delaying the commit to -STABLE and other branches allows for "parallel debugging", where the committed code is tested on a wide range of configurations. This makes the changes merged to -STABLE contain fewer faults, thus giving the branch its name.
.Process summary: A committer commits code
image::proc-commit.png[Refer to paragraph below for a screen-reader friendly version.]
When a committer has written a piece of code and wants to commit it, they first need to determine if it is trivial enough to go in without prior review or if it should first be reviewed by the developer community. If the code is trivial or has been reviewed, and the committer is not the maintainer, they should consult the maintainer before proceeding. If the code is contributed by an outside vendor, the maintainer should create a patch that is sent back to the vendor. The code is then committed and subsequently deployed by the users. Should they find problems with the code, these are reported, and the committer can go back to writing a patch. If a vendor is affected, they can choose to implement or ignore the patch.
.Process summary: A contributor commits code
image::proc-contrib.png[Refer to paragraphs below and above for a screen-reader friendly version.]
The difference when a contributor makes a code contribution is that they submit the code through the Bugzilla interface. This report is picked up by the maintainer who reviews the code and commits it.
Hats included in this process are:
. <<role-committer>>
. <<role-contributor>>
. <<role-vendor>>
. <<role-reviewer>>
[<<freebsd-committer, FreeBSD, 2001>>] [<<jorgensen2001, Jørgensen, 2001>>]
[[process-core-election]]
=== Core election
Core elections are held at least every two years.footnote:[The first Core election was held in September 2000.] Nine core members are elected. New elections are held if the number of core members drops below seven. New elections can also be held should at least 1/3 of the active committers demand this.
When an election is to take place, core announces this at least 6 weeks in advance, and appoints an election manager to run the elections.
Only committers can be elected into core. The candidates need to submit their candidacy at least one week before the election starts, but can refine their statements until the voting starts. They are presented in the http://election.uk.freebsd.org/candidates.html[candidates list]. When writing their election statements, the candidates must answer a few standard questions submitted by the election manager.
During elections, the rule that a committer must have committed during the 12 past months is followed strictly. Only these committers are eligible to vote.
When voting, each committer may vote once in support of up to nine nominees. The voting is done over a period of four weeks, with reminders being posted on the "developers" mailing list, which is available to all committers.
The election results are released one week after the election ends, and the new core team takes office one week after the results have been posted.
Should there be a voting tie, this will be resolved by the new, unambiguously elected core members.
Votes and candidate statements are archived, but the archives are not publicly available.
.Process summary: Core elections
image::proc-elections.png[Refer to paragraph below for a screen-reader friendly version.]
Core announces the election and selects an election manager, who prepares the elections; when ready, candidates can announce their candidacies by submitting their statements. The committers then vote. After the voting is over, the election results are announced and the new core team takes office.
Hats in core elections are:
* <<role-core>>
* <<role-committer>>
* <<role-election-manager>>
[<<freebsd-bylaws, FreeBSD, 2000A>>] [<<bsd-election2002, FreeBSD, 2002B>>] [<<freebsd-election, FreeBSD, 2002G>>]
[[new-features]]
=== Development of new features
Within the project there are sub-projects that are working on new features. These projects are generally done by one person [<<jorgensen2001, Jørgensen, 2001>>]. Every project is free to organise development as it sees fit. However, when the project is merged to the -CURRENT branch it must follow the project guidelines. When the code has been well tested in the -CURRENT branch and deemed stable enough and relevant to the -STABLE branch, it is merged to the -STABLE branch.
The requirements of the project are given by developer wishes, requests from the community in the form of direct requests by mail, Problem Reports, commercial funding for the development of features, or contributions by the scientific community. Wishes that fall within the responsibility of a developer are given to that developer, who prioritises their time between the request and their own wishes. A common way to do this is to maintain a TODO list within the project. Items that do not fall within someone's responsibility are collected on TODO lists unless someone volunteers to take responsibility for them. All requests, their distribution and follow-up are handled by the <<tool-bugzilla>> tool.
Requirements analysis happens in two ways. The requests that come in are discussed on mailing lists, both within the main project and in the sub-project that the request belongs to, or that was spawned by the request. Furthermore, individual developers on the sub-project evaluate the feasibility of the requests and determine the prioritisation between them. Other than archives of the discussions that have taken place, this phase creates no outcome that is merged into the main project.
As the requests are prioritised by the individual developers on the basis of doing what they find interesting, necessary, or are funded to do, there is no overall strategy or prioritisation of what requests to regard as requirements and following up their correct implementation. However, most developers have some shared vision of what issues are more important, and they can ask for guidelines from the release engineering team.
The verification phase of the project is two-fold. Before committing code to the current branch, developers request that their code be reviewed by their peers. This review is for the most part done by functional testing, but code review is also important. When the code is committed to the branch, broader functional testing happens, which may trigger further code review and debugging should the code not behave as expected. This second form of verification may be regarded as structural verification. Although the sub-projects themselves may write formal tests such as unit tests, these are usually not collected by the main project and are usually removed before the code is committed to the current branch. footnote:[More and more tests are, however, performed when building the system (make world). These tests are a very new addition, and no systematic framework for them has yet been created.]
[[model-maintenance]]
=== Maintenance
It is an advantage to the project to have, for each area of the source, at least one person who knows the area well. Some parts of the code have designated maintainers. Others have de-facto maintainers, and some parts of the system do not have maintainers. The maintainer is usually a person from the sub-project that wrote and integrated the code, or someone who has ported it from the platform it was written for.footnote:[sendmail and named are examples of code that has been merged from other platforms.] The maintainer's job is to make sure the code is in sync with the project the code comes from, if it is contributed code, and to apply patches submitted by the community or write fixes for issues that are discovered.
The main bulk of work that is put into the FreeBSD project is maintenance. [<<jorgensen2001, Jørgensen, 2001>>] has made a figure showing the life cycle of changes.
Jørgensen's model for change integration
[.informaltable]
[cols="1,1,1", options="header"]
|===
| Stage
| Next if successful
| Next if unsuccessful
|code
|review
|
|review
|pre-commit test
|code
|pre-commit test
|development release
|code
|development release
|parallel debugging
|code
|parallel debugging
|production release
|code
|production release
|
|code
|===
Here "development release" refers to the -CURRENT branch while "production release" refers to the -STABLE branch. The "pre-commit test" is the functional testing by peer developers when asked to do so or trying out the code to determine the status of the sub-project. "Parallel debugging" is the functional testing that can trigger more review, and debugging when the code is included in the -CURRENT branch.
At the time of writing, there were 269 committers in the project. When they commit a change to a branch, that constitutes a new release. It is very common for users in the community to track a particular branch. The immediate existence of a new release makes the changes widely available right away and allows for rapid feedback from the community. This also gives the community the response time they expect on issues that are of importance to them. This makes the community more engaged, and thus allows for more and better feedback, which again spurs more maintenance and ultimately should create a better product.
Before making changes to code in parts of the tree that have a history unknown to the committer, the committer is required to read the commit logs to see why certain features are implemented the way they are, in order not to repeat mistakes that have previously been thought through or resolved.
[[model-pr]]
=== Problem reporting
Before FreeBSD 10, FreeBSD included a problem reporting tool called `send-pr`. Problems include bug reports, feature requests, feature enhancements and notices of new versions of external software that are included in the project. Although `send-pr` is available, users and developers are encouraged to submit issues using our https://bugs.freebsd.org/submit/[problem report form].
Problem reports are sent to an email address where they are inserted into the Problem Reports maintenance database. A <<role-bugbuster>> classifies the problem and sends it to the correct group or maintainer within the project. After someone has taken responsibility for the report, the report is analysed. This analysis includes verifying the problem and working out a solution for it. Often feedback is required from the report originator or even from the FreeBSD community. Once a patch for the problem is made, the originator may be asked to try it out. Finally, the working patch is integrated into the project, and documented if applicable. It then goes through the regular maintenance cycle as described in section <<model-maintenance>>. These are the states a problem report can be in: open, analyzed, feedback, patched, suspended and closed. The suspended state is used when further progress is not possible due to a lack of information, or when the task would require so much work that nobody is working on it at the moment.
.Process summary: problem reporting
image::proc-pr.png[Refer to paragraph below for a screen-reader friendly version.]
A problem is reported by the report originator. It is then classified by a bugbuster and handed to the correct maintainer. They verify the problem and discuss the problem with the originator until they have enough information to create a working patch. This patch is then committed and the problem report is closed.
The roles included in this process are:
. <<role-problem-originator>>
. <<role-maintainer>>
. <<role-bugbuster>>
[<<freebsd-handle-pr, FreeBSD, 2002C>>]. [<<freebsd-send-pr, FreeBSD, 2002D>>]
[[process-reactions]]
=== Reacting to misbehavior
[<<freebsd-committer, FreeBSD, 2001>>] has a number of rules that committers should follow. However, it happens that these rules are broken. The following rules exist in order to be able to react to misbehavior. They specify which actions result in a suspension of the committer's commit privileges, and for how long.
* Committing during code freezes without the approval of the Release Engineering team - 2 days
* Committing to a security branch without approval - 2 days
* Commit wars - 5 days to all participating parties
* Impolite or inappropriate behavior - 5 days
[<<ref-freebsd-trenches, Lehey, 2002>>]
For the suspensions to be effective, any single core member can implement a suspension before discussing it on the "core" mailing list. Repeat offenders can, with a 2/3 vote by core, receive harsher penalties, including permanent removal of commit privileges. (However, the latter is always viewed as a last resort, due to its inherent tendency to create controversy.) All suspensions are posted to the "developers" mailing list, a list available to committers only.
It is important to note that a committer cannot be suspended for making technical errors. All penalties come from breaches of social etiquette.
Hats involved in this process:
* <<role-core>>
* <<role-committer>>
[[process-release-engineering]]
=== Release engineering
The FreeBSD project has a Release Engineering team with a principal release engineer who is responsible for creating releases of FreeBSD that can be brought out to the user community via the net or sold in retail outlets. Since FreeBSD is available on multiple platforms and releases for the different architectures are made available at the same time, the team has one person in charge of each architecture. There are also roles in the team responsible for coordinating quality assurance efforts, building a package set, and keeping the set of documents updated. When this text refers to the release engineer, a representative of the release engineering team is meant.
When a release is coming, the FreeBSD project changes shape somewhat. A release schedule is made, containing feature and code freezes, interim releases, and the final release. A feature freeze means no new features may be committed to the branch without the release engineers' explicit consent. A code freeze means no changes to the code (such as bug fixes) may be committed without the release engineers' explicit consent. This feature and code freeze is known as stabilising. During the release process, the release engineer has full authority to revert to older versions of code and thus "back out" changes, should they find that the changes are not suitable for inclusion in the release.
There are three different kinds of releases:
. .0 releases are the first release of a major version. These are branched off the -CURRENT branch and have a significantly longer release engineering cycle due to the unstable nature of the -CURRENT branch.
. .X releases are releases of the -STABLE branch. They are scheduled to come out every 4 months.
. .X.Y releases are security releases that follow the .X branch. These come out only when sufficient security fixes have been merged since the last release on that branch. New features are rarely included, and the security team is far more involved in these than in regular releases.
For releases of the -STABLE branch, the release process starts 45 days before the anticipated release date. During the first phase, the first 15 days, the developers merge the changes from -CURRENT that they want to have in the release into the release branch. When this period is over, the code enters a 15-day code freeze in which only bug fixes, documentation updates, security-related fixes and minor device driver changes are allowed. These changes must be approved by the release engineer in advance. At the beginning of the last 15-day period a release candidate is created for widespread testing. Updates are less likely to be allowed during this period, except for important bug fixes and security updates. In this final period, all releases are considered release candidates. At the end of the release process, a release is created with the new version number, including binary distributions on web sites and the creation of CD-ROM images.footnote:[Many commercial vendors use these images to create CD-ROMs that are sold in retail outlets.] However, the release is not considered "really released" until a <<tool-pgp>>-signed message stating exactly that is sent to the mailing list freebsd-announce; anything labelled as a "release" before that may well be in-process and subject to change before the PGP-signed message is sent.
The releases of the -CURRENT branch (that is, all releases that end in ".0") are very similar, but with a timeframe twice as long. The process starts 8 weeks prior to the release with the announcement of the release timeline. Two weeks into the release process, the feature freeze is initiated and performance tweaks should be kept to a minimum. Four weeks prior to the release, an official beta version is made available. Two weeks prior to release, the code is officially branched into a new version. This version is given release candidate status, and as with the release engineering of -STABLE, the code freeze of the release candidate is hardened. However, development on the main development branch can continue. Other than these differences, the release engineering processes are alike.
.0 releases go into their own branch and are aimed mainly at early adopters. The branch then goes through a period of stabilisation, and it is not until the <<role-releng, Release Engineering Team>> decides that the demands on stability have been satisfied that the branch becomes -STABLE and -CURRENT targets the next major version. While this has for the most part happened around the .1 versions, it is not a requirement.
Most releases are made when a given date, deemed to be a long enough time since the previous release, arrives. A target is set for having major releases every 18 months and minor releases every 4 months. The user community has made it very clear that security and stability cannot be sacrificed to self-imposed deadlines and target release dates. To keep schedule slips from becoming too long with regard to security and stability issues, extra discipline is required when committing changes to -STABLE. In summary, the release engineering process consists of the following steps:
. Make release schedule
. Feature freeze
. Code freeze
. Make branch
. Release candidate
. Stabilize release (loop back to previous step as many times as necessary; when release is considered stable, proceed with next step)
. Build packages
. Warn mirrors
. Publish release
[<<freebsd-releng, FreeBSD, 2002E>>]
[[tools]]
== Tools
The major tools supporting the development process are Bugzilla, Mailman, and OpenSSH. These are externally developed tools that are commonly used in the open source world.
[[tool-svn]]
=== Subversion (SVN)
Subversion ("SVN") is a system to handle multiple versions of text files and tracking who committed what changes and why. A project lives within a "repository" and different versions are considered different "branches".
[[tool-bugzilla]]
=== Bugzilla
Bugzilla is a maintenance database consisting of a set of tools to track bugs at a central site. It supports the bug tracking process for sending and handling bugs as well as querying and updating the database and editing bug reports. The project uses its web interface to send "Problem Reports" to the project's central Bugzilla server. The committers also have web and command-line clients.
[[model-mailman]]
=== Mailman
Mailman is a program that automates the management of mailing lists. The FreeBSD Project uses it to run 16 general lists, 60 technical lists, 4 limited lists and 5 lists with SVN commit logs. It is also used for many mailing lists set up and used by other people and projects in the FreeBSD community. General lists are lists for the general public, technical lists are mainly for the development of specific areas of interest, and closed lists are for internal communication not intended for the general public. The majority of all the communication in the project goes through these 85 lists [<<ref-bsd-handbook, FreeBSD, 2003A>>, Appendix C].
[[tool-pgp]]
=== Pretty Good Privacy
Pretty Good Privacy, better known as PGP, is a cryptosystem using a public key architecture to allow people to digitally sign and/or encrypt information in order to ensure secure communication between two parties. A signature is used when sending information out to many recipients, enabling them to verify that the information has not been tampered with before they received it. In the FreeBSD Project this is the primary means of ensuring that information has been written by the person who claims to have written it, and not altered in transit.
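As an example of the verification side, a recipient who has obtained and imported the signer's public key can check a clearsigned announcement roughly as sketched below; the file names are placeholders, and how the key is obtained and trusted is up to the recipient.

[source,shell]
----
# Import the signer's public key once.
gpg --import signer-public-key.asc

# Verify that the announcement matches its signature and has not been altered.
gpg --verify announcement.asc
----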
[[tool-ssh2]]
=== Secure Shell
Secure Shell is a standard for securely logging into a remote system and for executing commands on the remote system. It allows other connections, called tunnels, to be established and protected between the two involved systems. This standard exists in two primary versions, and only version two is used for the FreeBSD Project. The most common implementation of the standard is OpenSSH, which is part of the project's main distribution. Since its source is updated more often than FreeBSD releases are made, the latest version is also available in the ports tree.
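The tunnel mechanism mentioned above can be sketched with a local port forward. In the example below, freefall.FreeBSD.org is the committers' shell machine, while the user name and port numbers are arbitrary; the -2 flag, which forces protocol version 2, is only needed on OpenSSH versions old enough to still offer protocol 1.

[source,shell]
----
# Log in to the project shell machine, forcing protocol version 2.
ssh -2 jdoe@freefall.FreeBSD.org

# Forward local port 8025 through the encrypted connection to port 25
# on the remote machine (a "tunnel").
ssh -2 -L 8025:localhost:25 jdoe@freefall.FreeBSD.org
----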
[[sub-projects]]
== Sub-projects
Sub-projects are formed to reduce the amount of communication needed to coordinate the group of developers. When a problem area is sufficiently isolated, most communication takes place within the group focusing on the problem, requiring less communication with the other groups than if the group were not isolated.
[[sub-project-ports]]
=== The Ports Subproject
A "port" is a set of meta-data and patches that are needed to fetch, compile and install correctly an external piece of software on a FreeBSD system. The amount of ports has grown at a tremendous rate, as shown by the following figure.
.Number of ports added between 1996 and 2008 [[fig-ports]]
image::portsstatus.png[Refer to tables below for a screen-reader friendly version.]
<<fig-ports>> shows the number of ports available to FreeBSD in the period 1995 to 2008. The curve appears to have grown exponentially at first, then linearly from the middle of 2001 to the middle of 2007 at a rate of about 2000 ports/year, before the growth rate decreases.
Approximate dates each multiple of 1000 ports is reached
[.informaltable]
[cols="1,1", options="header"]
|===
| Number of ports
| Approximate date
|1000
|Late 1997
|2000
|Late 1998
|3000
|Early 2000
|4000
|Late 2000
|5000
|Mid 2001
|6000
|4th quarter of 2001
|7000
|Mid 2002
|8000
|4th quarter of 2002
|9000
|Mid 2003
|10000
|End of 2003
|11000
|Mid 2004
|12000
|End of 2004
|13000
|Mid 2005
|14000
|Early 2006
|15000
|Mid 2006
|16000
|3rd quarter 2006
|17000
|2nd quarter 2007
|===
Approximate number of ports at the start of each year
[.informaltable]
[cols="1,1", options="header"]
|===
| Year
| Approximate number of ports
|1995
|100
|1996
|300
|1997
|700
|1998
|1200
|1999
|2000
|2000
|2900
|2001
|4300
|2002
|6200
|2003
|8100
|2004
|10050
|2005
|12100
|2006
|14000
|2007
|16200
|2008
|17900
|===
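To make the "fetch, compile and install" cycle described above concrete, installing software from the ports tree usually comes down to a couple of make targets, as in the sketch below; shells/bash is used purely as an example port, and a ports tree is assumed to be present under [.filename]#/usr/ports#.

[source,shell]
----
# Build and install a port, then remove its temporary work directory.
# Installing requires root privileges.
cd /usr/ports/shells/bash
make install clean
----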
As the external software described by a port is often under continued development, the amount of work required to maintain the ports is already large and increasing. This has led to the ports part of the FreeBSD project gaining a more empowered structure, and it is more and more becoming a sub-project of the FreeBSD project.
Ports has its own core team with the <<role-ports-manager>> as its leader, and this team can appoint committers without FreeBSD Core's approval. Unlike in the FreeBSD Project, where a large amount of maintenance work is frequently rewarded with a commit bit, the ports sub-project contains many active maintainers who are not committers.
Unlike the main project, the ports tree is not branched. Every release of FreeBSD follows the current ports collection and thus has up-to-date information on where to find programs and how to build them. This, however, means that a port that depends on details of the base system may need variations depending on which version of FreeBSD it runs on.
With an unbranched ports repository it is not possible to guarantee that any port will run on anything other than -CURRENT and -STABLE, in particular older, minor releases. There is neither the infrastructure nor volunteer time needed to guarantee this.
For efficiency of communication, teams depending on Ports, such as the release engineering team, have their own ports liaisons.
[[sub-project-documentation]]
=== The FreeBSD Documentation Project
The FreeBSD Documentation project was started in January 1995. From an initial group of a project leader, four team leaders and 16 members, there are now a total of 44 committers. The documentation mailing list has just under 300 members, indicating that there is quite a large community around it.
The goal of the Documentation project is to provide good and useful documentation of the FreeBSD project, thus making it easier for new users to get familiar with the system and detailing advanced features for the users.
The main tasks in the Documentation project are to work on current projects in the "FreeBSD Documentation Set", and translate the documentation to other languages.
Like the FreeBSD source, the documentation is split into the same branches. This is done so that there is always an updated version of the documentation for each version. Only documentation errors are corrected in the security branches.
Like the ports sub-project, the Documentation project can appoint documentation committers without FreeBSD Core's approval. [<<freebsd-doceng-charter, FreeBSD, 2003B>>].
The Documentation project has link:{fdp-primer}[a primer]. This is used both to introduce new project members to the standard tools and syntaxes and to act as a reference when working on the project.
:sectnums!:
[bibliography]
[[bibliography]]
== References
[[brooks]]
[Brooks, 1995] Frederick P. Brooks. Copyright © 1975, 1995 Pearson Education Limited. 0201835959. Addison-Wesley Pub Co. The Mythical Man-Month. Essays on Software Engineering, Anniversary Edition (2nd Edition).
[[thesis]]
[Saers, 2003] Niklas Saers. Copyright © 2003. A project model for the FreeBSD Project. Candidatus Scientiarum thesis. http://niklas.saers.com/thesis.
[[jorgensen2001]]
[Jørgensen, 2001] Niels Jørgensen. Copyright © 2001. Putting it All in the Trunk. Incremental Software Development in the FreeBSD Open Source Project. http://www.dat.ruc.dk/~nielsj/research/papers/freebsd.pdf.
[[ref-pmbok]]
[PMI, 2000] Project Management Institute. Copyright © 1996, 2000 Project Management Institute. 1-880410-23-0. Project Management Institute. Newtown Square Pennsylvania USA . PMBOK Guide. A Guide to the Project Management Body of Knowledge, 2000 Edition.
[[freebsd-bylaws]]
[FreeBSD, 2000A] Copyright © 2002 The FreeBSD Project. Core Bylaws. https://www.freebsd.org/internal/bylaws/.
[[freebsd-developer-handbook]]
[FreeBSD, 2002A] Copyright © 2002 The FreeBSD Documentation Project. FreeBSD Developer's Handbook. link:{developers-handbook}[Developers Handbook].
[[bsd-election2002]]
[FreeBSD, 2002B] Copyright © 2002 The FreeBSD Project. Core team election 2002. http://election.uk.freebsd.org/candidates.html.
[[freebsd-handle-pr]]
[FreeBSD, 2002C] Dag-Erling Smørgrav and Hiten Pandya. Copyright © 2002 The FreeBSD Documentation Project. The FreeBSD Documentation Project. Problem Report Handling Guidelines. link:{pr-guidelines}[Problem Report Handling Guidelines].
[[freebsd-send-pr]]
[FreeBSD, 2002D] Dag-Erling Smørgrav. Copyright © 2002 The FreeBSD Documentation Project. The FreeBSD Documentation Project. Writing FreeBSD Problem Reports. link:{problem-reports}[Writing FreeBSD Problem Reports].
[[freebsd-committer]]
[FreeBSD, 2001] Copyright © 2001 The FreeBSD Documentation Project. The FreeBSD Documentation Project. Committers Guide. link:{committers-guide}[Committer's Guide].
[[freebsd-releng]]
[FreeBSD, 2002E] Murray Stokely. Copyright © 2002 The FreeBSD Documentation Project. The FreeBSD Documentation Project. FreeBSD Release Engineering. link:{releng}[FreeBSD Release Engineering].
[[ref-bsd-handbook]]
[FreeBSD, 2003A] The FreeBSD Documentation Project. FreeBSD Handbook. link:{handbook}[FreeBSD Handbook].
[[freebsd-contributors]]
[FreeBSD, 2002F] Copyright © 2002 The FreeBSD Documentation Project. The FreeBSD Documentation Project. Contributors to FreeBSD. link:{contributors}[Contributors to FreeBSD].
[[freebsd-election]]
[FreeBSD, 2002G] Copyright © 2002 The FreeBSD Project. The FreeBSD Project. Core team elections 2002. http://election.uk.freebsd.org.
[[freebsd-expiration-policy]]
[FreeBSD, 2002H] Copyright © 2002 The FreeBSD Project. The FreeBSD Project. Commit Bit Expiration Policy. 2002/04/06 15:35:30. https://www.freebsd.org/internal/expire-bits/.
[[freebsd-new-account]]
[FreeBSD, 2002I] Copyright © 2002 The FreeBSD Project. The FreeBSD Project. New Account Creation Procedure. 2002/08/19 17:11:27. https://www.freebsd.org/internal/new-account/.
[[freebsd-doceng-charter]]
[FreeBSD, 2003B] Copyright © 2002 The FreeBSD Documentation Project. The FreeBSD Documentation Project. FreeBSD DocEng Team Charter. 2003/03/16 12:17. https://www.freebsd.org/internal/doceng/.
[[ref-freebsd-trenches]]
[Lehey, 2002] Greg Lehey. Copyright © 2002 Greg Lehey. Greg Lehey. Two years in the trenches. The evolution of a software project. http://www.lemis.com/grog/In-the-trenches.pdf.
diff --git a/documentation/content/en/books/developers-handbook/_index.adoc b/documentation/content/en/books/developers-handbook/_index.adoc
index 3082156c0f..c7407e78d1 100644
--- a/documentation/content/en/books/developers-handbook/_index.adoc
+++ b/documentation/content/en/books/developers-handbook/_index.adoc
@@ -1,58 +1,58 @@
---
title: FreeBSD Developers' Handbook
authors:
- author: The FreeBSD Documentation Project
-copyright: 1995-2020 The FreeBSD Documentation Project
-releaseinfo: "$FreeBSD: head/en_US.ISO8859-1/books/developers-handbook/book.xml 54255 2020-06-15 08:13:08Z bcr $"
+copyright: 1995-2021 The FreeBSD Documentation Project
+description: FreeBSD Developers' Handbook Index
trademarks: ["freebsd", "apple", "ibm", "ieee", "intel", "linux", "microsoft", "opengroup", "sun", "general"]
next: books/developers-handbook/parti
---
= FreeBSD Developers' Handbook
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
ifeval::["{backend}" == "html5"]
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
endif::[]
ifeval::["{backend}" == "pdf"]
include::../../../../shared/mirrors.adoc[]
include::../../../../shared/authors.adoc[]
include::../../../../shared/releases.adoc[]
include::../../../../shared/en/mailing-lists.adoc[]
include::../../../../shared/en/teams.adoc[]
include::../../../../shared/en/urls.adoc[]
endif::[]
ifeval::["{backend}" == "epub3"]
include::../../../../shared/mirrors.adoc[]
include::../../../../shared/authors.adoc[]
include::../../../../shared/releases.adoc[]
include::../../../../shared/en/mailing-lists.adoc[]
include::../../../../shared/en/teams.adoc[]
include::../../../../shared/en/urls.adoc[]
endif::[]
[.abstract-title]
[abstract]
Abstract
Welcome to the Developers' Handbook. This manual is a _work in progress_ and is the work of many individuals. Many sections do not yet exist and some of those that do exist need to be updated. If you are interested in helping with this project, send email to the {freebsd-doc}.
The latest version of this document is always available from the link:https://www.FreeBSD.org[FreeBSD World Wide Web server]. It may also be downloaded in a variety of formats and compression options from the link:https://download.freebsd.org/ftp/doc/[FreeBSD FTP server] or one of the numerous link:{handbook}#mirrors-ftp/[mirror sites].
'''
include::content/en/books/developers-handbook/toc.adoc[]
diff --git a/documentation/content/en/books/developers-handbook/bibliography/_index.adoc b/documentation/content/en/books/developers-handbook/bibliography/_index.adoc
index fa861831c5..b4cc86fc48 100644
--- a/documentation/content/en/books/developers-handbook/bibliography/_index.adoc
+++ b/documentation/content/en/books/developers-handbook/bibliography/_index.adoc
@@ -1,40 +1,41 @@
---
title: Appendices
prev: books/developers-handbook/partv
+description: FreeBSD Developers Handbook Bibliography
---
[appendix]
[[bibliography]]
= Appendices
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums!:
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: A
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
[[COD,1]] [1] Dave A Patterson and John L Hennessy. Copyright(R) 1998 Morgan Kaufmann Publishers, Inc. 1-55860-428-6. Morgan Kaufmann Publishers, Inc. Computer Organization and Design. The Hardware / Software Interface. 1-2.
[[APUE, 2]] [2] W. Richard Stevens. Copyright(R) 1993 Addison Wesley Longman, Inc. 0-201-56317-7. Addison Wesley Longman, Inc. Advanced Programming in the Unix Environment. 1-2.
[[DIFOS, 3]] [3] Marshall Kirk McKusick and George Neville-Neil. Copyright(R) 2004 Addison-Wesley. 0-201-70245-2. Addison-Wesley. The Design and Implementation of the FreeBSD Operating System. 1-2.
[[Phrack, 4]] [4] Aleph One. Phrack 49; "Smashing the Stack for Fun and Profit".
[[StackGuard, 5]] [5] Chrispin Cowan, Calton Pu, and Dave Maier. StackGuard; Automatic Adaptive Detection and Prevention of Buffer-Overflow Attacks.
[[OpenBSD, 6]] [6] Todd Miller and Theo de Raadt. strlcpy and strlcat -- consistent, safe string copy and concatenation.
diff --git a/documentation/content/en/books/developers-handbook/book.adoc b/documentation/content/en/books/developers-handbook/book.adoc
index 7953a77d83..a9e497d499 100644
--- a/documentation/content/en/books/developers-handbook/book.adoc
+++ b/documentation/content/en/books/developers-handbook/book.adoc
@@ -1,99 +1,99 @@
---
title: FreeBSD Developers' Handbook
authors:
- author: The FreeBSD Documentation Project
copyright: 1995-2020 The FreeBSD Documentation Project
releaseinfo: "$FreeBSD: head/en_US.ISO8859-1/books/developers-handbook/book.xml 54255 2020-06-15 08:13:08Z bcr $"
trademarks: ["freebsd", "apple", "ibm", "ieee", "intel", "linux", "microsoft", "opengroup", "sun", "general"]
---
= FreeBSD Developers' Handbook
:doctype: book
:toc: macro
:toclevels: 2
:icons: font
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnums:
:sectnumlevels: 6
:partnums:
:chapter-signifier: Chapter
:part-signifier: Part
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:book: true
:pdf: false
ifeval::["{backend}" == "html5"]
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
:imagesdir: ../../../../images/books/developers-handbook/
:chapters-path: content/en/books/developers-handbook/
endif::[]
ifeval::["{backend}" == "pdf"]
include::../../../../shared/mirrors.adoc[]
include::../../../../shared/authors.adoc[]
include::../../../../shared/releases.adoc[]
include::../../../../shared/en/mailing-lists.adoc[]
include::../../../../shared/en/teams.adoc[]
include::../../../../shared/en/urls.adoc[]
:imagesdir: ../../../../static/images/books/developers-handbook/
:chapters-path:
endif::[]
ifeval::["{backend}" == "epub3"]
include::../../../../shared/mirrors.adoc[]
include::../../../../shared/authors.adoc[]
include::../../../../shared/releases.adoc[]
include::../../../../shared/en/mailing-lists.adoc[]
include::../../../../shared/en/teams.adoc[]
include::../../../../shared/en/urls.adoc[]
:imagesdir: ../../../../static/images/books/developers-handbook/
:chapters-path:
endif::[]
[.abstract-title]
[abstract]
Abstract
Welcome to the Developers' Handbook. This manual is a _work in progress_ and is the work of many individuals. Many sections do not yet exist and some of those that do exist need to be updated. If you are interested in helping with this project, send email to the {freebsd-doc}.
The latest version of this document is always available from the link:https://www.FreeBSD.org[FreeBSD World Wide Web server]. It may also be downloaded in a variety of formats and compression options from the link:https://download.freebsd.org/ftp/doc/[FreeBSD FTP server] or one of the numerous link:{handbook}#mirrors-ftp/[mirror sites].
'''
toc::[]
// Section one
include::{chapters-path}parti.adoc[lines=7..8]
-include::{chapters-path}introduction/_index.adoc[leveloffset=+1, lines=10..24;35..-1]
-include::{chapters-path}tools/_index.adoc[leveloffset=+1, lines=10..26;37..-1]
-include::{chapters-path}secure/_index.adoc[leveloffset=+1, lines=9..23;34..-1]
-include::{chapters-path}l10n/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
-include::{chapters-path}policies/_index.adoc[leveloffset=+1, lines=10..24;35..-1]
-include::{chapters-path}testing/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
+include::{chapters-path}introduction/_index.adoc[leveloffset=+1, lines=11..25;36..-1]
+include::{chapters-path}tools/_index.adoc[leveloffset=+1, lines=11..27;38..-1]
+include::{chapters-path}secure/_index.adoc[leveloffset=+1, lines=10..24;35..-1]
+include::{chapters-path}l10n/_index.adoc[leveloffset=+1, lines=9..23;34..-1]
+include::{chapters-path}policies/_index.adoc[leveloffset=+1, lines=11..25;36..-1]
+include::{chapters-path}testing/_index.adoc[leveloffset=+1, lines=9..23;34..-1]
// Section two
include::{chapters-path}partii.adoc[lines=7..8]
-include::{chapters-path}sockets/_index.adoc[leveloffset=+1, lines=9..23;35..-1]
-include::{chapters-path}ipv6/_index.adoc[leveloffset=+1, lines=9..23;34..-1]
+include::{chapters-path}sockets/_index.adoc[leveloffset=+1, lines=10..24;36..-1]
+include::{chapters-path}ipv6/_index.adoc[leveloffset=+1, lines=10..24;35..-1]
// Section three
include::{chapters-path}partiii.adoc[lines=7..8]
-include::{chapters-path}kernelbuild/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
-include::{chapters-path}kerneldebug/_index.adoc[leveloffset=+1, lines=11..25;36..-1]
+include::{chapters-path}kernelbuild/_index.adoc[leveloffset=+1, lines=9..23;34..-1]
+include::{chapters-path}kerneldebug/_index.adoc[leveloffset=+1, lines=12..26;37..-1]
// Section four
include::{chapters-path}partiv.adoc[lines=7..8]
-include::{chapters-path}x86/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
+include::{chapters-path}x86/_index.adoc[leveloffset=+1, lines=9..23;34..-1]
// Appendices
include::{chapters-path}partv.adoc[lines=7..8]
-include::{chapters-path}bibliography/_index.adoc[leveloffset=+1, lines=6..20;29..-1]
+include::{chapters-path}bibliography/_index.adoc[leveloffset=+1, lines=7..21;30..-1]
diff --git a/documentation/content/en/books/developers-handbook/introduction/_index.adoc b/documentation/content/en/books/developers-handbook/introduction/_index.adoc
index 019f80f577..5dcd910d76 100644
--- a/documentation/content/en/books/developers-handbook/introduction/_index.adoc
+++ b/documentation/content/en/books/developers-handbook/introduction/_index.adoc
@@ -1,132 +1,133 @@
---
title: Chapter 1. Introduction
authors:
- author: Murray Stokely
- author: Jeroen Ruigrok van der Werven
prev: books/developers-handbook/parti
next: books/developers-handbook/tools
+description: Introduction to the FreeBSD Developers Handbook
---
[[introduction]]
= Introduction
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 1
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[introduction-devel]]
== Developing on FreeBSD
So here we are. The system is installed and you are ready to start programming. But where to start? What does FreeBSD provide? What can it do for me, as a programmer?
These are some of the questions this chapter tries to answer. Of course, programming, like any other trade, has different levels of proficiency. For some it is a hobby, for others it is their profession. The information in this chapter is aimed toward the beginning programmer; indeed, it could prove useful to any programmer unfamiliar with the FreeBSD platform.
[[introduction-bsdvision]]
== The BSD Vision
To produce the best UNIX(R) like operating system package possible, with due respect to the original software tools ideology as well as usability, performance and stability.
[[introduction-archguide]]
== Architectural Guidelines
Our ideology can be described by the following guidelines:
* Do not add new functionality unless an implementor cannot complete a real application without it.
* It is as important to decide what a system is not as to decide what it is. Do not serve all the world's needs; rather, make the system extensible so that additional needs can be met in an upwardly compatible fashion.
* The only thing worse than generalizing from one example is generalizing from no examples at all.
* If a problem is not completely understood, it is probably best to provide no solution at all.
* If you can get 90 percent of the desired effect for 10 percent of the work, use the simpler solution.
* Isolate complexity as much as possible.
* Provide mechanism, rather than policy. In particular, place user interface policy in the client's hands.
From Scheifler & Gettys: "X Window System"
[[introduction-layout]]
== The Layout of /usr/src
The complete source code to FreeBSD is available from our public repository. The source code is normally installed in [.filename]#/usr/src# which contains the following subdirectories:
[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Directory
| Description
|[.filename]#bin/#
|Source for files in [.filename]#/bin#
|[.filename]#cddl/#
|Utilities covered by the Common Development and Distribution License
|[.filename]#contrib/#
|Source for files from contributed software
|[.filename]#crypto/#
|Cryptographical sources
|[.filename]#etc/#
|Source for files in [.filename]#/etc#
|[.filename]#gnu/#
|Utilities covered by the GNU General Public License
|[.filename]#include/#
|Source for files in [.filename]#/usr/include#
|[.filename]#kerberos5/#
|Source for Kerberos version 5
|[.filename]#lib/#
|Source for files in [.filename]#/usr/lib#
|[.filename]#libexec/#
|Source for files in [.filename]#/usr/libexec#
|[.filename]#release/#
|Files required to produce a FreeBSD release
|[.filename]#rescue/#
|Build system for the [.filename]#/rescue# utilities
|[.filename]#sbin/#
|Source for files in [.filename]#/sbin#
|[.filename]#secure/#
|Contributed cryptographic sources
|[.filename]#share/#
|Source for files in [.filename]#/usr/share#
|[.filename]#sys/#
|Kernel source files
|[.filename]#tests/#
|The FreeBSD test suite
|[.filename]#tools/#
|Tools used for maintenance and testing of FreeBSD
|[.filename]#usr.bin/#
|Source for files in [.filename]#/usr/bin#
|[.filename]#usr.sbin/#
|Source for files in [.filename]#/usr/sbin#
|===
diff --git a/documentation/content/en/books/developers-handbook/ipv6/_index.adoc b/documentation/content/en/books/developers-handbook/ipv6/_index.adoc
index 1d86a91fe4..9f781f7798 100644
--- a/documentation/content/en/books/developers-handbook/ipv6/_index.adoc
+++ b/documentation/content/en/books/developers-handbook/ipv6/_index.adoc
@@ -1,665 +1,666 @@
---
title: Chapter 8. IPv6 Internals
authors:
- author: Yoshinobu Inoue
prev: books/developers-handbook/sockets
next: books/developers-handbook/partiii
+description: IPv6 Internals
---
[[ipv6]]
= IPv6 Internals
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 8
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[ipv6-implementation]]
== IPv6/IPsec Implementation
This section should explain IPv6 and IPsec related implementation internals. These functionalities are derived from http://www.kame.net/[KAME project]
[[ipv6details]]
=== IPv6
==== Conformance
The IPv6-related functions conform, or try to conform, to the latest set of IPv6 specifications. For future reference we list some of the relevant documents below (_NOTE_: this is not a complete list - a complete one would be too hard to maintain...).
For details please refer to the specific chapters in this document, the RFCs, the manual pages, or comments in the source code.
Conformance tests have been performed on the KAME STABLE kit at the TAHI project. Results can be viewed at http://www.tahi.org/report/KAME/[http://www.tahi.org/report/KAME/]. In the past we also attended the University of New Hampshire IOL tests (http://www.iol.unh.edu/[http://www.iol.unh.edu/]) with earlier snapshots.
* RFC1639: FTP Operation Over Big Address Records (FOOBAR)
** RFC2428 is preferred over RFC1639. FTP clients will first try RFC2428, then fall back to RFC1639 if that fails.
* RFC1886: DNS Extensions to support IPv6
* RFC1933: Transition Mechanisms for IPv6 Hosts and Routers
** IPv4 compatible address is not supported.
** automatic tunneling (described in 4.3 of this RFC) is not supported.
** man:gif[4] interface implements IPv[46]-over-IPv[46] tunnel in a generic way, and it covers "configured tunnel" described in the spec. See <<gif,23.5.1.5>> in this document for details.
* RFC1981: Path MTU Discovery for IPv6
* RFC2080: RIPng for IPv6
** usr.sbin/route6d supports this.
* RFC2292: Advanced Sockets API for IPv6
** For supported library functions/kernel APIs, see [.filename]#sys/netinet6/ADVAPI#.
* RFC2362: Protocol Independent Multicast-Sparse Mode (PIM-SM)
** RFC2362 defines packet formats for PIM-SM. [.filename]#draft-ietf-pim-ipv6-01.txt# is written based on this.
* RFC2373: IPv6 Addressing Architecture
** supports node required addresses, and conforms to the scope requirement.
* RFC2374: An IPv6 Aggregatable Global Unicast Address Format
** supports 64-bit length of Interface ID.
* RFC2375: IPv6 Multicast Address Assignments
** Userland applications use the well-known addresses assigned in the RFC.
* RFC2428: FTP Extensions for IPv6 and NATs
** RFC2428 is preferred over RFC1639. FTP clients will first try RFC2428, then fall back to RFC1639 if that fails.
* RFC2460: IPv6 specification
* RFC2461: Neighbor discovery for IPv6
** See <<neighbor-discovery,23.5.1.2>> in this document for details.
* RFC2462: IPv6 Stateless Address Autoconfiguration
** See <<ipv6-pnp,23.5.1.4>> in this document for details.
* RFC2463: ICMPv6 for IPv6 specification
** See <<icmpv6,23.5.1.9>> in this document for details.
* RFC2464: Transmission of IPv6 Packets over Ethernet Networks
* RFC2465: MIB for IPv6: Textual Conventions and General Group
** Necessary statistics are gathered by the kernel. Actual IPv6 MIB support is provided as a patchkit for ucd-snmp.
* RFC2466: MIB for IPv6: ICMPv6 group
** Necessary statistics are gathered by the kernel. Actual IPv6 MIB support is provided as a patchkit for ucd-snmp.
* RFC2467: Transmission of IPv6 Packets over FDDI Networks
* RFC2497: Transmission of IPv6 packet over ARCnet Networks
* RFC2553: Basic Socket Interface Extensions for IPv6
** IPv4 mapped address (3.7) and special behavior of IPv6 wildcard bind socket (3.8) are supported. See <<ipv6-wildcard-socket,23.5.1.12>> in this document for details.
* RFC2675: IPv6 Jumbograms
** See <<ipv6-jumbo,23.5.1.7>> in this document for details.
* RFC2710: Multicast Listener Discovery for IPv6
* RFC2711: IPv6 router alert option
* [.filename]#draft-ietf-ipngwg-router-renum-08#: Router renumbering for IPv6
* [.filename]#draft-ietf-ipngwg-icmp-namelookups-02#: IPv6 Name Lookups Through ICMP
* [.filename]#draft-ietf-ipngwg-icmp-name-lookups-03#: IPv6 Name Lookups Through ICMP
* [.filename]#draft-ietf-pim-ipv6-01.txt#: PIM for IPv6
** man:pim6dd[8] implements dense mode. man:pim6sd[8] implements sparse mode.
* [.filename]#draft-itojun-ipv6-tcp-to-anycast-00#: Disconnecting TCP connection toward IPv6 anycast address
* [.filename]#draft-yamamoto-wideipv6-comm-model-00#
** See <<ipv6-sas,23.5.1.6>> in this document for details.
* [.filename]#draft-ietf-ipngwg-scopedaddr-format-00.txt#: An Extension of Format for IPv6 Scoped Addresses
[[neighbor-discovery]]
==== Neighbor Discovery
Neighbor Discovery is fairly stable. Currently Address Resolution, Duplicated Address Detection, and Neighbor Unreachability Detection are supported. In the near future we will be adding Proxy Neighbor Advertisement support to the kernel, and an Unsolicited Neighbor Advertisement transmission command as an administrative tool.
If DAD fails, the address will be marked "duplicated" and a message will be logged to syslog (and usually to the console). The "duplicated" mark can be checked with man:ifconfig[8]. It is the administrator's responsibility to check for and recover from DAD failures. This behavior should be improved in the near future.
Some network drivers loop multicast packets back to themselves, even if instructed not to do so (especially in promiscuous mode). In such cases DAD may fail, because the DAD engine sees an inbound NS packet (actually from the node itself) and considers it a sign of a duplicate. As a workaround, you may want to look at the #if condition marked "heuristics" in sys/netinet6/nd6_nbr.c:nd6_dad_timer() (note that the code fragment in the "heuristics" section is not spec conformant).
The Neighbor Discovery specification (RFC2461) does not describe neighbor cache handling in the following cases:
. when there was no neighbor cache entry, and the node received an unsolicited RS/NS/NA/redirect packet without a link-layer address
. neighbor cache handling on a medium without link-layer addresses (a neighbor cache entry is needed for the IsRouter bit)
For the first case, we implemented a workaround based on discussions on the IETF ipngwg mailing list. For more details, see the comments in the source code and the email thread starting at (IPng 7155), dated Feb 6 1999.
The IPv6 on-link determination rule (RFC2461) is quite different from the assumptions in the BSD network code. At this moment, no on-link determination rule is supported when the default router list is empty (RFC2461, section 5.2, last sentence in the 2nd paragraph - note that the spec misuses the words "host" and "node" in several places in that section).
To avoid possible DoS attacks and infinite loops, only 10 options per ND packet are accepted now. Therefore, if you have 20 prefix options attached to an RA, only the first 10 prefixes will be recognized. If this troubles you, please ask about it on the FreeBSD-CURRENT mailing list and/or modify nd6_maxndopt in [.filename]#sys/netinet6/nd6.c#. If there is enough demand we may provide a sysctl knob for the variable.
[[ipv6-scope-index]]
==== Scope Index
IPv6 uses scoped addresses. It is therefore very important to specify the scope index (interface index for a link-local address, or site index for a site-local address) together with an IPv6 address. Without a scope index, a scoped IPv6 address is ambiguous to the kernel, and the kernel will not be able to determine the outbound interface for a packet.
Ordinary userland applications should use the advanced API (RFC2292) to specify the scope index, or interface index. For a similar purpose, the sin6_scope_id member of the sockaddr_in6 structure is defined in RFC2553. However, the semantics of sin6_scope_id are rather vague. If you care about the portability of your application, we suggest you use the advanced API rather than sin6_scope_id.
In the kernel, the interface index for a link-local scoped address is embedded into the second 16-bit word (the 3rd and 4th bytes) of the IPv6 address. For example, you may see something like:
[source,bash]
....
fe80:1::200:f8ff:fe01:6317
....
in the routing table and interface address structure (struct in6_ifaddr). The address above is a link-local unicast address which belongs to the network interface whose interface index is 1. The embedded index enables us to identify IPv6 link-local addresses across multiple interfaces effectively, and with only a small code change.
Routing daemons and configuration programs, like man:route6d[8] and man:ifconfig[8], will need to manipulate the "embedded" scope index. These programs use routing sockets and ioctls (like SIOCGIFADDR_IN6), and the kernel API will return IPv6 addresses with the second 16-bit word filled in. These APIs are for manipulating kernel-internal structures; programs that use them have to be prepared for differences between kernels anyway.
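For illustration only, here is a minimal sketch (not code from the actual man:route6d[8] or man:ifconfig[8] sources) of how a userland program could recover the embedded index from the second 16-bit word of such an address and clear it before displaying the address:
[.programlisting]
....
#include <netinet/in.h>

/*
 * Hypothetical helper: extract the interface index that the kernel
 * embeds into the 3rd and 4th bytes of a link-local address, and zero
 * those bytes so the address can be printed in its standard form.
 */
static unsigned int
extract_embedded_index(struct in6_addr *a)
{
	unsigned int ifindex;

	ifindex = (a->s6_addr[2] << 8) | a->s6_addr[3];
	a->s6_addr[2] = 0;
	a->s6_addr[3] = 0;
	return (ifindex);
}
....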
When you specify a scoped address on the command line, NEVER write the embedded form (such as ff02:1::1 or fe80:2::fedc). This is not supposed to work. Always use the standard form, like ff02::1 or fe80::fedc, with a command line option for specifying the interface (like `ping6 -I ne0 ff02::1`). In general, if a command does not have a command line option to specify the outgoing interface, that command is not ready to accept scoped addresses. This may seem contrary to IPv6's premise of supporting the "dentist office" situation. We believe that the specifications need some improvement here.
Some of the userland tools support an extended numeric IPv6 syntax, as documented in [.filename]#draft-ietf-ipngwg-scopedaddr-format-00.txt#. You can specify the outgoing link by using the name of the outgoing interface, like "fe80::1%ne0". This way you will be able to specify a link-local scoped address without much trouble.
To use this extension in your program, you will need to use man:getaddrinfo[3], and man:getnameinfo[3] with NI_WITHSCOPEID. The implementation currently assumes a 1-to-1 relationship between a link and an interface, which is stronger than what the specs say.
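For example, a minimal sketch (the interface name "ne0" and the service "http" are only placeholders) that resolves a scoped link-local address with man:getaddrinfo[3]:
[.programlisting]
....
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
	struct addrinfo hints, *res;
	int error;

	memset(&hints, 0, sizeof(hints));
	hints.ai_family = AF_INET6;
	hints.ai_socktype = SOCK_STREAM;

	/* The "%ne0" suffix names the outgoing interface. */
	error = getaddrinfo("fe80::1%ne0", "http", &hints, &res);
	if (error) {
		fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(error));
		return (1);
	}
	/* res->ai_addr now carries the scope (interface) information. */
	freeaddrinfo(res);
	return (0);
}
....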
[[ipv6-pnp]]
==== Plug and Play
Most of the IPv6 stateless address autoconfiguration is implemented in the kernel. Neighbor Discovery functions are implemented in the kernel as a whole. Router Advertisement (RA) input for hosts is implemented in the kernel. Router Solicitation (RS) output for endhosts, RS input for routers, and RA output for routers are implemented in the userland.
===== Assignment of link-local, and special addresses
An IPv6 link-local address is generated from the IEEE802 address (Ethernet MAC address). Each interface is automatically assigned an IPv6 link-local address when the interface comes up (IFF_UP). Also, a direct route for the link-local address is added to the routing table.
Here is an example of netstat output:
[source,bash]
....
Internet6:
Destination Gateway Flags Netif Expire
fe80:1::%ed0/64 link#1 UC ed0
fe80:2::%ep0/64 link#2 UC ep0
....
Interfaces that have no IEEE802 address (pseudo interfaces like tunnel interfaces, or ppp interfaces) will borrow an IEEE802 address from other interfaces, such as Ethernet interfaces, whenever possible. If there is no IEEE802 hardware attached, a last-resort pseudo-random value, MD5(hostname), will be used as the source of the link-local address. If this is not suitable for your usage, you will need to configure the link-local address manually.
If an interface is not capable of handling IPv6 (for example, due to lack of multicast support), a link-local address will not be assigned to that interface. See section 2 for details.
Each interface joins the solicited-node multicast address and the link-local all-nodes multicast address (e.g., fe80::1:ff01:6317 and ff02::1, respectively, on the link the interface is attached to). In addition to a link-local address, the loopback address (::1) will be assigned to the loopback interface. Also, ::1/128 and ff01::/32 are automatically added to the routing table, and the loopback interface joins the node-local multicast group ff01::1.
===== Stateless address autoconfiguration on Hosts
In the IPv6 specification, nodes are separated into two categories: _routers_ and _hosts_. Routers forward packets addressed to others; hosts do not forward packets. net.inet6.ip6.forwarding defines whether this node is a router or a host (a router if it is 1, a host if it is 0).
When a host hears a Router Advertisement from a router, it may autoconfigure itself by stateless address autoconfiguration. This behavior can be controlled by net.inet6.ip6.accept_rtadv (the host autoconfigures itself if it is set to 1). By autoconfiguration, a network address prefix for the receiving interface (usually a global address prefix) is added. A default route is also configured. Routers periodically generate Router Advertisement packets. To request an adjacent router to generate an RA packet, a host can transmit a Router Solicitation. To generate an RS packet at any time, use the _rtsol_ command. The man:rtsold[8] daemon is also available; it generates Router Solicitations whenever necessary, and works well for nomadic usage (notebooks/laptops). If one wishes to ignore Router Advertisements, use sysctl to set net.inet6.ip6.accept_rtadv to 0.
To generate Router Advertisement from a router, use the man:rtadvd[8] daemon.
Note that the IPv6 specification assumes the following, and leaves nonconforming cases unspecified:
* Only hosts will listen to router advertisements
* Hosts have single network interface (except loopback)
Therefore, it is unwise to enable net.inet6.ip6.accept_rtadv on routers or multi-interface hosts. A misconfigured node can behave strangely (nonconforming configurations are allowed for those who would like to do some experiments).
To summarize the sysctl knob:
[source,bash]
....
accept_rtadv forwarding role of the node
--- --- ---
0 0 host (to be manually configured)
0 1 router
1 0 autoconfigured host
(spec assumes that host has single
interface only, autoconfigured host
with multiple interface is
out-of-scope)
1 1 invalid, or experimental
(out-of-scope of spec)
....
RFC2462 has a validation rule for the incoming RA prefix information option, in 5.5.3 (e). This is to protect hosts from malicious (or misconfigured) routers that advertise a very short prefix lifetime. There was an update from Jim Bound to the ipngwg mailing list (look for "(ipng 6712)" in the archive), and Jim's update is implemented.
See <<neighbor-discovery,23.5.1.2>> in this document for the relationship between DAD and autoconfiguration.
[[gif]]
==== Generic Tunnel Interface
GIF (Generic InterFace) is a pseudo interface for configured tunnels. Details are described in man:gif[4]. Currently the following configurations are available:
* v6 in v6
* v6 in v4
* v4 in v6
* v4 in v4
Use man:gifconfig[8] to assign the physical (outer) source and destination addresses to gif interfaces. A configuration that uses the same address family for the inner and outer IP headers (v4 in v4, or v6 in v6) is dangerous. It is very easy to configure interfaces and routing tables to perform infinite levels of tunneling. _Please be warned_.
gif can be configured to be ECN-friendly. See <<ipsec-ecn,23.5.4.5>> for the ECN-friendliness of tunnels, and man:gif[4] for how to configure it.
If you would like to configure an IPv4-in-IPv6 tunnel with a gif interface, read man:gif[4] carefully. You will need to remove the IPv6 link-local address that is automatically assigned to the gif interface.
[[ipv6-sas]]
==== Source Address Selection
The current source selection rule is scope-oriented (there are some exceptions - see below). For a given destination, a source IPv6 address is selected by the following rules:
. If the source address is explicitly specified by the user (e.g., via the advanced API), the specified address is used.
. If there is an address assigned to the outgoing interface (which is usually determined by looking up the routing table) that has the same scope as the destination address, the address is used.
+
This is the most typical case.
. If there is no address that satisfies the above condition, choose a global address assigned to one of the interfaces on the sending node.
. If there is no address that satisfies the above condition, and destination address is site local scope, choose a site local address assigned to one of the interfaces on the sending node.
. If there is no address that satisfies the above condition, choose the address associated with the routing table entry for the destination. This is the last resort, which may cause scope violation.
For instance, ::1 is selected for ff01::1, and fe80:1::200:f8ff:fe01:6317 for fe80:1::2a0:24ff:feab:839b (note that the embedded interface index - described in <<ipv6-scope-index,23.5.1.3>> - helps us choose the right source address; those embedded indices will not be on the wire). If the outgoing interface has multiple addresses for the scope, a source is selected on a longest-match basis (rule 3). Suppose 2001:0DB8:808:1:200:f8ff:fe01:6317 and 2001:0DB8:9:124:200:f8ff:fe01:6317 are assigned to the outgoing interface. 2001:0DB8:808:1:200:f8ff:fe01:6317 is chosen as the source for the destination 2001:0DB8:800::1.
Note that the above rule is not documented in the IPv6 spec; it is considered an "up to implementation" item. There are some cases where we do not use the above rule. One example is a connected TCP session, where we use the address kept in the tcb as the source. Another example is the source address for a Neighbor Advertisement. Under the spec (RFC2461 7.2.2) the NA's source should be the target address of the corresponding NS. In this case we follow the spec rather than the above longest-match rule.
For new connections (when rule 1 does not apply), deprecated addresses (addresses with preferred lifetime = 0) will not be chosen as the source address if other choices are available. If no other choices are available, a deprecated address will be used as a last resort. If there are multiple deprecated addresses to choose from, the above scope rule will be used to choose among them. If you would like to prohibit the use of deprecated addresses for some reason, set net.inet6.ip6.use_deprecated to 0. The issue related to deprecated addresses is described in RFC2462 5.5.4 (NOTE: there is some debate underway in the IETF ipngwg on how to use "deprecated" addresses).
[[ipv6-jumbo]]
==== Jumbo Payload
The Jumbo Payload hop-by-hop option is implemented and can be used to send IPv6 packets with payloads longer than 65,535 octets. But currently no physical interface whose MTU is more than 65,535 is supported, so such payloads can be seen only on the loopback interface (i.e., lo0).
If you want to try jumbo payloads, you first have to reconfigure the kernel so that the MTU of the loopback interface is more than 65,535 bytes; add the following to the kernel configuration file:
`options "LARGE_LOMTU" #To test jumbo payload`
and recompile the new kernel.
Then you can test jumbo payloads with the man:ping6[8] command using the -b and -s options. The -b option must be specified to enlarge the size of the socket buffer, and the -s option specifies the length of the packet, which should be more than 65,535. For example, type the following:
[source,bash]
....
% ping6 -b 70000 -s 68000 ::1
....
The IPv6 specification requires that the Jumbo Payload option must not be used in a packet that carries a fragment header. If this condition is broken, an ICMPv6 Parameter Problem message must be sent to the sender. The specification is followed, but you usually cannot see an ICMPv6 error caused by this requirement.
When an IPv6 packet is received, the frame length is checked and compared to the length specified in the payload length field of the IPv6 header or in the value of the Jumbo Payload option, if any. If the former is shorter than the latter, the packet is discarded and statistics are incremented. You can see the statistics in the output of the man:netstat[8] command with the `-s -p ip6` option:
[source,bash]
....
% netstat -s -p ip6
ip6:
(snip)
1 with data size < data length
....
So, the kernel does not send an ICMPv6 error unless the erroneous packet is an actual Jumbo Payload, that is, its packet size is more than 65,535 bytes. As described above, no physical interface with such a huge MTU is currently supported, so an ICMPv6 error is rarely returned.
TCP/UDP over jumbogram is not supported at this moment. This is because we have no medium (other than loopback) to test this. Contact us if you need this.
IPsec does not work on jumbograms. This is due to some specification twists in supporting AH with jumbograms (the AH header size influences the payload length, and this makes it really hard to authenticate an inbound packet carrying both the jumbo payload option and AH).
There are fundamental issues in *BSD support for jumbograms. We would like to address them, but we need more time. To name a few:
* The mbuf pkthdr.len field is typed as "int" in 4.4BSD, so it will not hold a jumbogram with len > 2G on 32-bit architecture CPUs. If we want to support jumbograms properly, the field must be expanded to hold 4G + IPv6 header + link-layer header. Therefore, it must be expanded to at least int64_t (u_int32_t is NOT enough).
* We mistakenly use "int" to hold packet length in many places. We need to convert these to a larger integral type. This needs great care, as we may experience overflow during packet length computation.
* We mistakenly check the ip6_plen field of the IPv6 header for the packet payload length in various places. We should be checking mbuf pkthdr.len instead. ip6_input() will perform a sanity check on the jumbo payload option on input, and we can safely use mbuf pkthdr.len afterwards.
* The TCP code needs careful updates in a bunch of places, of course.
==== Loop Prevention in Header Processing
The IPv6 specification allows an arbitrary number of extension headers to be placed in packets. If we implemented the IPv6 packet processing code the way the BSD IPv4 code is implemented, the kernel stack could overflow due to a long function call chain. The sys/netinet6 code is carefully designed to avoid kernel stack overflow, so it defines its own protocol switch structure, "struct ip6protosw" (see [.filename]#netinet6/ip6protosw.h#). There is no such update to the IPv4 part (sys/netinet) for compatibility, but a small change is added to its pr_input() prototype, so "struct ipprotosw" is also defined. As a result, if you receive an IPsec-over-IPv4 packet with a massive number of IPsec headers, the kernel stack may blow up. IPsec-over-IPv6 is okay. (Of course, for all those IPsec headers to be processed, each such IPsec header must pass each IPsec check, so an anonymous attacker will not be able to mount such an attack.)
[[icmpv6]]
==== ICMPv6
After RFC2463 was published, the IETF ipngwg decided to disallow ICMPv6 error packets against ICMPv6 redirects, to prevent ICMPv6 storms on a network medium. This is already implemented in the kernel.
==== Applications
For userland programming, we support the IPv6 socket API as specified in RFC2553, RFC2292 and upcoming Internet drafts.
TCP/UDP over IPv6 is available and quite stable. You can enjoy man:telnet[1], man:ftp[1], man:rlogin[1], man:rsh[1], man:ssh[1], etc. These applications are protocol independent; that is, they automatically choose IPv4 or IPv6 according to DNS.
==== Kernel Internals
While ip_forward() calls ip_output(), ip6_forward() directly calls if_output() since routers must not divide IPv6 packets into fragments.
ICMPv6 should contain the original packet as far as possible, up to 1280 octets. A UDP6/IP6 port unreach, for instance, should contain all extension headers and the *unchanged* UDP6 and IP6 headers. So, all IP6 functions except TCP never convert network byte order into host byte order, in order to preserve the original packet.
tcp_input(), udp6_input() and icmp6_input() cannot assume that the IP6 header immediately precedes the transport headers, due to extension headers. So, in6_cksum() was implemented to handle packets whose IP6 header and transport header are not contiguous. Neither TCP/IP6 nor UDP6/IP6 header structures exist for checksum calculation.
To process the IP6 header, extension headers and transport headers easily, network drivers are now required to store packets in one internal mbuf or in one or more external mbufs. A typical old driver prepares two internal mbufs for 96 - 204 bytes of data; now, however, such packet data is stored in one external mbuf.
`netstat -s -p ip6` tells you whether or not your driver conforms to this requirement. In the following example, "cce0" violates the requirement. (For more information, refer to Section 2.)
[source,bash]
....
Mbuf statistics:
317 one mbuf
two or more mbuf::
lo0 = 8
cce0 = 10
3282 one ext mbuf
0 two or more ext mbuf
....
Each input function calls IP6_EXTHDR_CHECK at the beginning to check that the region between the IP6 header and the header being processed is contiguous. IP6_EXTHDR_CHECK calls m_pullup() only if the mbuf has the M_LOOP flag, that is, if the packet comes from the loopback interface. m_pullup() is never called for packets coming from physical network interfaces.
Neither the IP nor the IP6 reassembly functions ever call m_pullup().
[[ipv6-wildcard-socket]]
==== IPv4 Mapped Address and IPv6 Wildcard Socket
RFC2553 describes the IPv4 mapped address (3.7) and the special behavior of the IPv6 wildcard bind socket (3.8). The spec allows you to:
* Accept IPv4 connections by AF_INET6 wildcard bind socket.
* Transmit IPv4 packet over AF_INET6 socket by using special form of the address like ::ffff:10.1.1.1.
However, the spec itself is very complicated and does not specify how the socket layer should behave. Here we call the former the "listening side" and the latter the "initiating side", for reference purposes.
You can perform a wildcard bind on both address families on the same port.
The following table shows the behavior of FreeBSD 4.x.
[source,bash]
....
listening side initiating side
(AF_INET6 wildcard (connection to ::ffff:10.1.1.1)
socket gets IPv4 conn.)
--- ---
FreeBSD 4.x configurable supported
default: enabled
....
The following sections give more details, and explain how you can configure the behavior.
Comments on the listening side:
It seems that RFC2553 says too little about the wildcard bind issue, especially about the port space issue, failure modes and the relationship between AF_INET/INET6 wildcard binds. There can be several separate interpretations of this RFC which conform to it but behave differently. So, to implement a portable application you should assume nothing about the kernel's behavior. Using man:getaddrinfo[3] is the safest way. Port number space and wildcard bind issues were discussed in detail on the ipv6imp mailing list in mid March 1999, and it looks like there is no concrete consensus (meaning, it is up to implementers). You may want to check the mailing list archives.
If a server application would like to accept IPv4 and IPv6 connections, there are two alternatives.
One is using AF_INET and AF_INET6 sockets (you will need two sockets). Use man:getaddrinfo[3] with AI_PASSIVE in ai_flags, and call man:socket[2] and man:bind[2] for all the addresses returned. By opening multiple sockets, you can accept connections on the socket with the proper address family. IPv4 connections will be accepted by the AF_INET socket, and IPv6 connections will be accepted by the AF_INET6 socket.
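A minimal sketch of this two-socket approach (error handling trimmed; the helper name is ours and the backlog of 5 is arbitrary):
[.programlisting]
....
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <string.h>
#include <unistd.h>

#define MAXSOCK 2

/* Open one listening socket per address family returned by getaddrinfo(). */
int
open_listeners(const char *port, int socks[MAXSOCK])
{
	struct addrinfo hints, *res, *ai;
	int n = 0;

	memset(&hints, 0, sizeof(hints));
	hints.ai_family = AF_UNSPEC;	/* both IPv4 and IPv6 */
	hints.ai_socktype = SOCK_STREAM;
	hints.ai_flags = AI_PASSIVE;	/* wildcard addresses */

	if (getaddrinfo(NULL, port, &hints, &res) != 0)
		return (-1);
	for (ai = res; ai != NULL && n < MAXSOCK; ai = ai->ai_next) {
		socks[n] = socket(ai->ai_family, ai->ai_socktype,
		    ai->ai_protocol);
		if (socks[n] < 0)
			continue;
		if (bind(socks[n], ai->ai_addr, ai->ai_addrlen) == 0 &&
		    listen(socks[n], 5) == 0)
			n++;
		else
			close(socks[n]);
	}
	freeaddrinfo(res);
	return (n);
}
....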
The other way is using a single AF_INET6 wildcard bind socket. Use man:getaddrinfo[3] with AI_PASSIVE in ai_flags and AF_INET6 in ai_family, and set the first argument (the hostname) to NULL. Then call man:socket[2] and man:bind[2] with the address returned (it should be the IPv6 unspecified address). You can accept both IPv4 and IPv6 packets via this one socket.
To portably support only IPv6 traffic on an AF_INET6 wildcard bound socket, always check the peer address when a connection is made to the AF_INET6 listening socket. If the address is an IPv4 mapped address, you may want to reject the connection. You can check this condition by using the IN6_IS_ADDR_V4MAPPED() macro.
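A minimal sketch of such a check on a freshly accepted connection (assuming `s` is the AF_INET6 listening socket; the helper name is ours):
[.programlisting]
....
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

/* Accept a connection, but drop it if the peer is an IPv4 mapped address. */
int
accept_v6_only(int s)
{
	struct sockaddr_in6 peer;
	socklen_t len = sizeof(peer);
	int ns;

	ns = accept(s, (struct sockaddr *)&peer, &len);
	if (ns < 0)
		return (-1);
	if (IN6_IS_ADDR_V4MAPPED(&peer.sin6_addr)) {
		/* An IPv4 client arrived over the wildcard socket. */
		close(ns);
		return (-1);
	}
	return (ns);
}
....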
To resolve this issue more easily, there is a system-dependent man:setsockopt[2] option, IPV6_BINDV6ONLY, used as shown below.
[.programlisting]
....
int on = 1;

if (setsockopt(s, IPPROTO_IPV6, IPV6_BINDV6ONLY,
    (char *)&on, sizeof(on)) < 0)
	/* handle setsockopt(2) error */;
....
When this call succeeds, this socket will only receive IPv6 packets.
Comments on the initiating side:
Advice to application implementers: to implement a portable IPv6 application (one which works on multiple IPv6 kernels), we believe that the following is the key to success:
* NEVER hardcode AF_INET or AF_INET6.
* Use man:getaddrinfo[3] and man:getnameinfo[3] throughout the system. Never use gethostby*(), getaddrby*(), inet_*() or getipnodeby*(). (To update existing applications to be IPv6 aware easily, getipnodeby*() will sometimes be useful. But if possible, try to rewrite the code to use man:getaddrinfo[3] and man:getnameinfo[3].)
* If you would like to connect to a destination, use man:getaddrinfo[3] and try all the destinations returned, like man:telnet[1] does (see the sketch after this list).
* Some IPv6 stacks ship with a buggy man:getaddrinfo[3]. Ship a minimal working version with your application and use that as a last resort.
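The "try every address returned" pattern mentioned above can be sketched as follows (the helper name is ours; error handling is trimmed):
[.programlisting]
....
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <string.h>
#include <unistd.h>

/* Connect to host:port, walking the getaddrinfo() results in order. */
int
connect_to(const char *host, const char *port)
{
	struct addrinfo hints, *res, *ai;
	int s = -1;

	memset(&hints, 0, sizeof(hints));
	hints.ai_family = AF_UNSPEC;	/* let DNS decide IPv4 vs. IPv6 */
	hints.ai_socktype = SOCK_STREAM;

	if (getaddrinfo(host, port, &hints, &res) != 0)
		return (-1);
	for (ai = res; ai != NULL; ai = ai->ai_next) {
		s = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
		if (s < 0)
			continue;
		if (connect(s, ai->ai_addr, ai->ai_addrlen) == 0)
			break;		/* success */
		close(s);
		s = -1;
	}
	freeaddrinfo(res);
	return (s);
}
....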
If you would like to use an AF_INET6 socket for both IPv4 and IPv6 outgoing connections, you will need to use man:getipnodebyname[3]. When you would like to update your existing application to be IPv6 aware with minimal effort, this approach might be chosen. But please note that it is a temporary solution, because man:getipnodebyname[3] itself is not recommended, as it does not handle scoped IPv6 addresses at all. For IPv6 name resolution, man:getaddrinfo[3] is the preferred API, so you should rewrite your application to use man:getaddrinfo[3] when you get the time to do it.
When writing applications that make outgoing connections, the story is much simpler if you treat AF_INET and AF_INET6 as totally separate address families. The {set,get}sockopt issues become simpler, and the DNS issues become simpler. We do not recommend relying upon IPv4 mapped addresses.
===== Unified TCP and inpcb Code
FreeBSD 4.x uses shared TCP code between IPv4 and IPv6 (from sys/netinet/tcp*) and separate udp4/6 code. It uses a unified inpcb structure.
The platform can be configured to support IPv4 mapped addresses. The kernel configuration is summarized as follows:
* By default, an AF_INET6 socket will grab IPv4 connections under certain conditions, and can initiate a connection to an IPv4 destination embedded in an IPv4 mapped IPv6 address.
* You can disable it for the entire system with sysctl as below.
+
`sysctl net.inet6.ip6.mapped_addr=0`
====== Listening Side
Each socket can be configured to support the special AF_INET6 wildcard bind (enabled by default). You can disable it on a per-socket basis with man:setsockopt[2] as below.
[.programlisting]
....
int on = 1;

if (setsockopt(s, IPPROTO_IPV6, IPV6_BINDV6ONLY,
    (char *)&on, sizeof(on)) < 0)
	/* handle setsockopt(2) error */;
....
A wildcard AF_INET6 socket grabs an IPv4 connection if and only if the following conditions are satisfied:
* there is no AF_INET socket that matches the IPv4 connection
* the AF_INET6 socket is configured to accept IPv4 traffic, i.e., getsockopt(IPV6_BINDV6ONLY) returns 0.
There is no problem with open/close ordering.
====== Initiating Side
FreeBSD 4.x supports outgoing connections to IPv4 mapped addresses (::ffff:10.1.1.1), if the node is configured to support IPv4 mapped addresses.
==== sockaddr_storage
When RFC2553 was about to be finalized, there was discussion on how the struct sockaddr_storage members should be named. One proposal was to prepend "__" to the members (like "__ss_len"), as they should not be touched. The other proposal was not to prepend it (like "ss_len"), as we need to touch those members directly. There was no clear consensus on the matter.
As a result, RFC2553 defines struct sockaddr_storage as follows:
[.programlisting]
....
struct sockaddr_storage {
u_char __ss_len; /* address length */
u_char __ss_family; /* address family */
/* and bunch of padding */
};
....
In contrast, the XNET draft defines it as follows:
[.programlisting]
....
struct sockaddr_storage {
u_char ss_len; /* address length */
u_char ss_family; /* address family */
/* and bunch of padding */
};
....
In December 1999, it was agreed that RFC2553bis should pick the latter (XNET) definition.
The current implementation conforms to the XNET definition, based on the RFC2553bis discussion.
If you look at multiple IPv6 implementations, you will be able to see both definitions. As a userland programmer, the most portable way of dealing with this is to:
. ensure ss_family and/or ss_len are available on the platform, by using GNU autoconf,
. have -Dss_family=__ss_family to unify all occurrences (including header file) into __ss_family, or
. never touch __ss_family; cast to sockaddr * and use sa_family, like:
+
[.programlisting]
....
struct sockaddr_storage ss;
family = ((struct sockaddr *)&ss)->sa_family;
....
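Building on that cast, here is a minimal sketch (assuming `s` is a connected socket; the helper name is ours) that never touches the underscore-prefixed members directly:
[.programlisting]
....
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <stdio.h>

/* Print the peer address of a connected socket in a portable way. */
void
print_peer(int s)
{
	struct sockaddr_storage ss;
	socklen_t len = sizeof(ss);
	char host[NI_MAXHOST];

	if (getpeername(s, (struct sockaddr *)&ss, &len) < 0)
		return;
	/* Only sa_family is inspected, via a cast to struct sockaddr. */
	if (((struct sockaddr *)&ss)->sa_family == AF_INET6)
		printf("peer is IPv6\n");
	if (getnameinfo((struct sockaddr *)&ss, len, host, sizeof(host),
	    NULL, 0, NI_NUMERICHOST) == 0)
		printf("peer address: %s\n", host);
}
....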
=== Network Drivers
The following two items are now required to be supported by standard drivers:
. The mbuf clustering requirement. In this stable release, we changed MINCLSIZE to MHLEN+1 for all operating systems in order to make all drivers behave as we expect.
. Multicast. If man:ifmcstat[8] yields no multicast group for an interface, that interface has to be patched.
If a driver does not support these requirements, then it cannot be used for IPv6 and/or IPsec communication. If you find any problem with your card when using IPv6/IPsec, please report it to the {freebsd-bugs}.
(NOTE: In the past we required all PCMCIA drivers to have a call to in6_ifattach(). We no longer have such a requirement.)
=== Translator
We categorize IPv4/IPv6 translators into 4 types:
* _Translator A_ --- It is used in the early stage of transition to make it possible to establish a connection from an IPv6 host in an IPv6 island to an IPv4 host in the IPv4 ocean.
* _Translator B_ --- It is used in the early stage of transition to make it possible to establish a connection from an IPv4 host in the IPv4 ocean to an IPv6 host in an IPv6 island.
* _Translator C_ --- It is used in the late stage of transition to make it possible to establish a connection from an IPv4 host in an IPv4 island to an IPv6 host in the IPv6 ocean.
* _Translator D_ --- It is used in the late stage of transition to make it possible to establish a connection from an IPv6 host in the IPv6 ocean to an IPv4 host in an IPv4 island.
[[ipsec-implementation]]
=== IPsec
IPsec is mainly organized into three components:
. Policy Management
. Key Management
. AH and ESP handling
==== Policy Management
The kernel implements experimental policy management code. There are two ways to manage security policy. One is to configure per-socket policy using man:setsockopt[2]; in this case, policy configuration is described in man:ipsec_set_policy[3]. The other is to configure kernel packet filter-based policy using the PF_KEY interface, via man:setkey[8].
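As an outline of the per-socket style (a sketch only: the header location and policy string vary between releases, the program must be linked with libipsec, and the helper name is ours):
[.programlisting]
....
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet6/ipsec.h>	/* newer trees ship <netipsec/ipsec.h> */
#include <stdlib.h>
#include <string.h>

/* Require ESP transport mode for traffic sent on this socket. */
int
require_transport_esp(int s)
{
	char policy[] = "out ipsec esp/transport//require";
	char *buf;
	int error;

	buf = ipsec_set_policy(policy, strlen(policy));
	if (buf == NULL)
		return (-1);
	error = setsockopt(s, IPPROTO_IPV6, IPV6_IPSEC_POLICY,
	    buf, ipsec_get_policylen(buf));
	free(buf);
	return (error);
}
....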
Policy entries are not re-ordered by their indexes, so the order in which you add them is very significant.
==== Key Management
The key management code implemented in this kit (sys/netkey) is a home-brew PFKEY v2 implementation. This conforms to RFC2367.
The home-brew IKE daemon, "racoon", is included in the kit (kame/kame/racoon). Basically, you will need to run racoon as a daemon, then set up a policy to require keys (like `ping -P 'out ipsec esp/transport//use'`). The kernel will contact the racoon daemon as necessary to exchange keys.
==== AH and ESP Handling
The IPsec module is implemented as "hooks" into the standard IPv4/IPv6 processing. When sending a packet, ip{,6}_output() checks whether ESP/AH processing is required by checking whether a matching SPD (Security Policy Database) entry is found. If ESP/AH is needed, {esp,ah}{4,6}_output() will be called and the mbuf will be updated accordingly. When a packet is received, {esp,ah}4_input() will be called based on the protocol number, i.e., (*inetsw[proto])(). {esp,ah}4_input() will decrypt/check the authenticity of the packet, and strip off the daisy-chained header and padding for ESP/AH. It is safe to strip off the ESP/AH header on packet reception, since we will never use the received packet in its "as is" form.
When using ESP/AH, the TCP4/6 effective data segment size will be affected by the extra daisy-chained headers inserted by ESP/AH. Our code takes care of this case.
Basic crypto functions can be found in the directory "sys/crypto". ESP/AH transforms are listed in {esp,ah}_core.c with wrapper functions. If you wish to add an algorithm, add a wrapper function in {esp,ah}_core.c, and add your crypto algorithm code into sys/crypto.
Tunnel mode is partially supported in this release, with the following restrictions:
* The IPsec tunnel is not combined with the GIF generic tunneling interface. This needs great care, because we may create an infinite loop between ip_output() and tunnelifp->if_output(). Opinion varies on whether it is better to unify them or not.
* MTU and Don't Fragment bit (IPv4) considerations need more checking, but they basically work fine.
* The authentication model for AH tunnels must be revisited. We will need to improve the policy management engine eventually.
==== Conformance to RFCs and IDs
The IPsec code in the kernel conforms (or, tries to conform) to the following standards:
"old IPsec" specification documented in [.filename]#rfc182[5-9].txt#
"new IPsec" specification documented in [.filename]#rfc240[1-6].txt#, [.filename]#rfc241[01].txt#, [.filename]#rfc2451.txt# and [.filename]#draft-mcdonald-simple-ipsec-api-01.txt# (draft expired, but you can take from link:ftp://ftp.kame.net/pub/internet-drafts/[ ftp://ftp.kame.net/pub/internet-drafts/]). (NOTE: IKE specifications, [.filename]#rfc241[7-9].txt# are implemented in userland, as "racoon" IKE daemon)
Currently supported algorithms are:
* old IPsec AH
** null crypto checksum (no document, just for debugging)
** keyed MD5 with 128bit crypto checksum ([.filename]#rfc1828.txt#)
** keyed SHA1 with 128bit crypto checksum (no document)
** HMAC MD5 with 128bit crypto checksum ([.filename]#rfc2085.txt#)
** HMAC SHA1 with 128bit crypto checksum (no document)
* old IPsec ESP
** null encryption (no document, similar to [.filename]#rfc2410.txt#)
** DES-CBC mode ([.filename]#rfc1829.txt#)
* new IPsec AH
** null crypto checksum (no document, just for debugging)
** keyed MD5 with 96bit crypto checksum (no document)
** keyed SHA1 with 96bit crypto checksum (no document)
** HMAC MD5 with 96bit crypto checksum ([.filename]#rfc2403.txt#)
** HMAC SHA1 with 96bit crypto checksum ([.filename]#rfc2404.txt#)
* new IPsec ESP
** null encryption ([.filename]#rfc2410.txt#)
** DES-CBC with derived IV ([.filename]#draft-ietf-ipsec-ciph-des-derived-01.txt#, draft expired)
** DES-CBC with explicit IV ([.filename]#rfc2405.txt#)
** 3DES-CBC with explicit IV ([.filename]#rfc2451.txt#)
** BLOWFISH CBC ([.filename]#rfc2451.txt#)
** CAST128 CBC ([.filename]#rfc2451.txt#)
** RC5 CBC ([.filename]#rfc2451.txt#)
** each of the above can be combined with:
*** ESP authentication with HMAC-MD5(96bit)
*** ESP authentication with HMAC-SHA1(96bit)
The following algorithms are NOT supported:
* old IPsec AH
** HMAC MD5 with 128bit crypto checksum + 64bit replay prevention ([.filename]#rfc2085.txt#)
** keyed SHA1 with 160bit crypto checksum + 32bit padding ([.filename]#rfc1852.txt#)
IPsec (in the kernel) and IKE (in userland, as "racoon") have been tested at several interoperability test events, and they are known to interoperate well with many other implementations. Also, the current IPsec implementation has quite wide coverage of the IPsec crypto algorithms documented in RFCs (we only cover algorithms without intellectual property issues).
[[ipsec-ecn]]
==== ECN Consideration on IPsec Tunnels
ECN-friendly IPsec tunnel is supported as described in [.filename]#draft-ipsec-ecn-00.txt#.
The normal IPsec tunnel is described in RFC2401. On encapsulation, the IPv4 TOS field (or the IPv6 traffic class field) will be copied from the inner IP header to the outer IP header. On decapsulation, the outer IP header will simply be dropped. The decapsulation rule is not compatible with ECN, since the ECN bit in the outer IP TOS/traffic class field will be lost.
To make the IPsec tunnel ECN-friendly, we should modify the encapsulation and decapsulation procedures. These are described in http://www.aciri.org/floyd/papers/draft-ipsec-ecn-00.txt[ http://www.aciri.org/floyd/papers/draft-ipsec-ecn-00.txt], chapter 3.
The IPsec tunnel implementation can give you three behaviors, by setting net.inet.ipsec.ecn (or net.inet6.ipsec6.ecn) to some value:
* RFC2401: no consideration for ECN (sysctl value -1)
* ECN forbidden (sysctl value 0)
* ECN allowed (sysctl value 1)
Note that the behavior is configurable on a per-node basis, not a per-SA basis (draft-ipsec-ecn-00 wants per-SA configuration, but that seems excessive).
The behavior is summarized as follows (see source code for more detail):
[source,bash]
....
encapsulate decapsulate
--- ---
RFC2401 copy all TOS bits drop TOS bits on outer
from inner to outer. (use inner TOS bits as is)
ECN forbidden copy TOS bits except for ECN drop TOS bits on outer
(masked with 0xfc) from inner (use inner TOS bits as is)
to outer. set ECN bits to 0.
ECN allowed copy TOS bits except for ECN use inner TOS bits with some
CE (masked with 0xfe) from change. if outer ECN CE bit
inner to outer. is 1, enable ECN CE bit on
set ECN CE bit to 0. the inner.
....
General strategy for configuration is as follows:
* if both IPsec tunnel endpoints are capable of ECN-friendly behavior, you should configure both ends to "ECN allowed" (sysctl value 1).
* if the other end is very strict about TOS bits, use "RFC2401" (sysctl value -1).
* in other cases, use "ECN forbidden" (sysctl value 0).
The default behavior is "ECN forbidden" (sysctl value 0).
For more information, please refer to:
http://www.aciri.org/floyd/papers/draft-ipsec-ecn-00.txt[ http://www.aciri.org/floyd/papers/draft-ipsec-ecn-00.txt], RFC2481 (Explicit Congestion Notification), src/sys/netinet6/{ah,esp}_input.c
(Thanks go to Kenjiro Cho mailto:kjc@csl.sony.co.jp[kjc@csl.sony.co.jp] for the detailed analysis.)
==== Interoperability
Here are some of the platforms against which the KAME code has tested IPsec/IKE interoperability in the past. Note that both ends may have modified their implementations since then, so use the following list for reference purposes only.
Altiga, Ashley-laurent (vpcom.com), Data Fellows (F-Secure), Ericsson ACC, FreeS/WAN, HITACHI, IBM AIX(R), IIJ, Intel, Microsoft(R) Windows NT(R), NIST (linux IPsec + plutoplus), Netscreen, OpenBSD, RedCreek, Routerware, SSH, Secure Computing, Soliton, Toshiba, VPNet, Yamaha RT100i
diff --git a/documentation/content/en/books/developers-handbook/kernelbuild/_index.adoc b/documentation/content/en/books/developers-handbook/kernelbuild/_index.adoc
index 7322b4cab3..c19392cacb 100644
--- a/documentation/content/en/books/developers-handbook/kernelbuild/_index.adoc
+++ b/documentation/content/en/books/developers-handbook/kernelbuild/_index.adoc
@@ -1,75 +1,76 @@
---
title: Chapter 9. Building and Installing a FreeBSD Kernel
authors:
prev: books/developers-handbook/partiii
next: books/developers-handbook/kerneldebug
+description: Building and Installing a FreeBSD Kernel
---
[[kernelbuild]]
= Building and Installing a FreeBSD Kernel
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 9
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
Being a kernel developer requires an understanding of the kernel build process. To debug the FreeBSD kernel it is necessary to be able to build one. There are two known ways to do so:
The supported procedure to build and install a kernel is documented in the link:{handbook}#kernelconfig-building[Building and Installing a Custom Kernel] chapter of the FreeBSD Handbook.
[NOTE]
====
It is assumed that the reader of this chapter is familiar with the information described in the link:{handbook}#kernelconfig-building[Building and Installing a Custom Kernel] chapter of the FreeBSD Handbook. If this is not the case, please read through the above-mentioned chapter to understand how the build process works.
====
[[kernelbuild-traditional]]
== Building the Faster but Brittle Way
Building the kernel this way may be useful when working on the kernel code, and it may actually be faster than the documented procedure when only an option or two have been tweaked in the kernel configuration file. On the other hand, it might lead to unexpected kernel build breakage.
[.procedure]
. Run man:config[8] to generate the kernel source code:
+
[source,bash]
....
# /usr/sbin/config MYKERNEL
....
. Change into the build directory. man:config[8] will print the name of this directory after being run as above.
+
[source,bash]
....
# cd ../compile/MYKERNEL
....
. Compile the kernel:
+
[source,bash]
....
# make depend
# make
....
. Install the new kernel:
+
[source,bash]
....
# make install
....
diff --git a/documentation/content/en/books/developers-handbook/kerneldebug/_index.adoc b/documentation/content/en/books/developers-handbook/kerneldebug/_index.adoc
index 930d941c3f..a23e10fe08 100644
--- a/documentation/content/en/books/developers-handbook/kerneldebug/_index.adoc
+++ b/documentation/content/en/books/developers-handbook/kerneldebug/_index.adoc
@@ -1,737 +1,738 @@
---
title: Chapter 10. Kernel Debugging
authors:
- author: Paul Richards
- author: Jörg Wunsch
- author: Robert Watson
prev: books/developers-handbook/kernelbuild
next: books/developers-handbook/partiv
+description: FreeBSD Kernel Debugging
---
[[kerneldebug]]
= Kernel Debugging
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 10
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[kerneldebug-obtain]]
== Obtaining a Kernel Crash Dump
When running a development kernel (e.g., FreeBSD-CURRENT), running a kernel under extreme conditions (e.g., very high load averages, tens of thousands of connections, an exceedingly high number of concurrent users, hundreds of man:jail[8]s, etc.), or using a new feature or device driver on FreeBSD-STABLE (e.g., PAE), a kernel will sometimes panic. In the event that it does, this chapter will demonstrate how to extract useful information from the crash.
A system reboot is inevitable once a kernel panics. Once a system is rebooted, the contents of a system's physical memory (RAM) are lost, as well as any bits that were on the swap device before the panic. To preserve the bits in physical memory, the kernel uses the swap device as a temporary place to store the bits that were in RAM across a reboot after a crash. In doing this, when FreeBSD boots after a crash, a kernel image can be extracted and debugging can take place.
[NOTE]
====
A swap device that has been configured as a dump device still acts as a swap device. Dumps to non-swap devices (such as tapes or CDRWs, for example) are not supported at this time. A "swap device" is synonymous with a "swap partition."
====
Several types of kernel crash dumps are available:
Full memory dumps::
Hold the complete contents of physical memory.
Minidumps::
Hold only memory pages in use by the kernel (FreeBSD 6.2 and higher).
Textdumps::
Hold captured, scripted, or interactive debugger output (FreeBSD 7.1 and higher).
Minidumps are the default dump type as of FreeBSD 7.0, and in most cases will capture all necessary information present in a full memory dump, as most problems can be isolated using only kernel state.
[[config-dumpdev]]
=== Configuring the Dump Device
Before the kernel will dump the contents of its physical memory to a dump device, a dump device must be configured. A dump device is specified by using the man:dumpon[8] command to tell the kernel where to save kernel crash dumps. The man:dumpon[8] program must be called after the swap partition has been configured with man:swapon[8]. This is normally handled by setting the `dumpdev` variable in man:rc.conf[5] to the path of the swap device (the recommended way to extract a kernel dump) or `AUTO` to use the first configured swap device. The default for `dumpdev` is `AUTO` in HEAD, and changed to `NO` on RELENG_* branches (except for RELENG_7, which was left set to `AUTO`). On FreeBSD 9.0-RELEASE and later versions, bsdinstall will ask whether crash dumps should be enabled on the target system during the install process.
[TIP]
====
Check [.filename]#/etc/fstab# or man:swapinfo[8] for a list of swap devices.
====
[IMPORTANT]
====
Make sure the `dumpdir` specified in man:rc.conf[5] exists before a kernel crash!
[source,bash]
....
# mkdir /var/crash
# chmod 700 /var/crash
....
Also, remember that the contents of [.filename]#/var/crash# is sensitive and very likely contains confidential information such as passwords.
====
[[extract-dump]]
=== Extracting a Kernel Dump
Once a dump has been written to a dump device, the dump must be extracted before the swap device is mounted. To extract a dump from a dump device, use the man:savecore[8] program. If `dumpdev` has been set in man:rc.conf[5], man:savecore[8] will be called automatically on the first multi-user boot after the crash and before the swap device is mounted. The location of the extracted core is placed in the man:rc.conf[5] value `dumpdir`, by default [.filename]#/var/crash# and will be named [.filename]#vmcore.0#.
In the event that there is already a file called [.filename]#vmcore.0# in [.filename]#/var/crash# (or whatever `dumpdir` is set to), the kernel will increment the trailing number for every crash to avoid overwriting an existing [.filename]#vmcore# (e.g., [.filename]#vmcore.1#). man:savecore[8] will always create a symbolic link named [.filename]#vmcore.last# in [.filename]#/var/crash# after a dump is saved. This symbolic link can be used to locate the name of the most recent dump.
The man:crashinfo[8] utility generates a text file containing a summary of information from a full memory dump or minidump. If `dumpdev` has been set in man:rc.conf[5], man:crashinfo[8] will be invoked automatically after man:savecore[8]. The output is saved to a file in `dumpdir` named [.filename]#core.txt.N#.
[TIP]
====
If you are testing a new kernel but need to boot a different one in order to get your system up and running again, boot it only into single user mode using the `-s` flag at the boot prompt, and then perform the following steps:
[source,bash]
....
# fsck -p
# mount -a -t ufs # make sure /var/crash is writable
# savecore /var/crash /dev/ad0s1b
# exit # exit to multi-user
....
This instructs man:savecore[8] to extract a kernel dump from [.filename]#/dev/ad0s1b# and place the contents in [.filename]#/var/crash#. Do not forget to make sure the destination directory [.filename]#/var/crash# has enough space for the dump. Also, do not forget to specify the correct path to your swap device as it is likely different than [.filename]#/dev/ad0s1b#!
====
=== Testing Kernel Dump Configuration
The kernel includes a man:sysctl[8] node that requests a kernel panic. This can be used to verify that your system is properly configured to save kernel crash dumps. You may wish to remount existing file systems as read-only in single user mode before triggering the crash to avoid data loss.
[source,bash]
....
# shutdown now
...
Enter full pathname of shell or RETURN for /bin/sh:
# mount -a -u -r
# sysctl debug.kdb.panic=1
debug.kdb.panic:panic: kdb_sysctl_panic
...
....
After rebooting, your system should save a dump in [.filename]#/var/crash# along with a matching summary from man:crashinfo[8].
[[kerneldebug-gdb]]
== Debugging a Kernel Crash Dump with `kgdb`
[NOTE]
====
This section covers man:kgdb[1]. The latest version is included in the package:devel/gdb[]. An older version is also present in FreeBSD 11 and earlier.
====
To enter into the debugger and begin getting information from the dump, start kgdb:
[source,bash]
....
# kgdb -n N
....
Where _N_ is the suffix of the [.filename]#vmcore.N# to examine. To open the most recent dump use:
[source,bash]
....
# kgdb -n last
....
Normally, man:kgdb[1] should be able to locate the kernel running at the time the dump was generated. If it is not able to locate the correct kernel, pass the pathname of the kernel and dump as two arguments to kgdb:
[source,bash]
....
# kgdb /boot/kernel/kernel /var/crash/vmcore.0
....
You can debug the crash dump using the kernel sources just like you can for any other program.
This dump is from a 5.2-BETA kernel and the crash comes from deep within the kernel. The output below has been modified to include line numbers on the left. This first trace inspects the instruction pointer and obtains a back trace. The address that is used on line 41 for the `list` command is the instruction pointer and can be found on line 17. Most developers will request having at least this information sent to them if you are unable to debug the problem yourself. If, however, you do solve the problem, make sure that your patch winds its way into the source tree via a problem report, mailing lists, or by being able to commit it!
[source,bash]
....
1:# cd /usr/obj/usr/src/sys/KERNCONF
2:# kgdb kernel.debug /var/crash/vmcore.0
3:GNU gdb 5.2.1 (FreeBSD)
4:Copyright 2002 Free Software Foundation, Inc.
5:GDB is free software, covered by the GNU General Public License, and you are
6:welcome to change it and/or distribute copies of it under certain conditions.
7:Type "show copying" to see the conditions.
8:There is absolutely no warranty for GDB. Type "show warranty" for details.
9:This GDB was configured as "i386-undermydesk-freebsd"...
10:panic: page fault
11:panic messages:
12:---
13:Fatal trap 12: page fault while in kernel mode
14:cpuid = 0; apic id = 00
15:fault virtual address = 0x300
16:fault code: = supervisor read, page not present
17:instruction pointer = 0x8:0xc0713860
18:stack pointer = 0x10:0xdc1d0b70
19:frame pointer = 0x10:0xdc1d0b7c
20:code segment = base 0x0, limit 0xfffff, type 0x1b
21: = DPL 0, pres 1, def32 1, gran 1
22:processor eflags = resume, IOPL = 0
23:current process = 14394 (uname)
24:trap number = 12
25:panic: page fault
26 cpuid = 0;
27:Stack backtrace:
28
29:syncing disks, buffers remaining... 2199 2199 panic: mi_switch: switch in a critical section
30:cpuid = 0;
31:Uptime: 2h43m19s
32:Dumping 255 MB
33: 16 32 48 64 80 96 112 128 144 160 176 192 208 224 240
34:---
35:Reading symbols from /boot/kernel/snd_maestro3.ko...done.
36:Loaded symbols for /boot/kernel/snd_maestro3.ko
37:Reading symbols from /boot/kernel/snd_pcm.ko...done.
38:Loaded symbols for /boot/kernel/snd_pcm.ko
39:#0 doadump () at /usr/src/sys/kern/kern_shutdown.c:240
40:240 dumping++;
41:(kgdb) list *0xc0713860
42:0xc0713860 is in lapic_ipi_wait (/usr/src/sys/i386/i386/local_apic.c:663).
43:658 incr = 0;
44:659 delay = 1;
45:660 } else
46:661 incr = 1;
47:662 for (x = 0; x < delay; x += incr) {
48:663 if ((lapic->icr_lo & APIC_DELSTAT_MASK) == APIC_DELSTAT_IDLE)
49:664 return (1);
50:665 ia32_pause();
51:666 }
52:667 return (0);
53:(kgdb) backtrace
54:#0 doadump () at /usr/src/sys/kern/kern_shutdown.c:240
55:#1 0xc055fd9b in boot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:372
56:#2 0xc056019d in panic () at /usr/src/sys/kern/kern_shutdown.c:550
57:#3 0xc0567ef5 in mi_switch () at /usr/src/sys/kern/kern_synch.c:470
58:#4 0xc055fa87 in boot (howto=256) at /usr/src/sys/kern/kern_shutdown.c:312
59:#5 0xc056019d in panic () at /usr/src/sys/kern/kern_shutdown.c:550
60:#6 0xc0720c66 in trap_fatal (frame=0xdc1d0b30, eva=0)
61: at /usr/src/sys/i386/i386/trap.c:821
62:#7 0xc07202b3 in trap (frame=
63: {tf_fs = -1065484264, tf_es = -1065484272, tf_ds = -1065484272, tf_edi = 1, tf_esi = 0, tf_ebp = -602076292, tf_isp = -602076324, tf_ebx = 0, tf_edx = 0, tf_ecx = 1000000, tf_eax = 243, tf_trapno = 12, tf_err = 0, tf_eip = -1066321824, tf_cs = 8, tf_eflags = 65671, tf_esp = 243, tf_ss = 0})
64: at /usr/src/sys/i386/i386/trap.c:250
65:#8 0xc070c9f8 in calltrap () at {standard input}:94
66:#9 0xc07139f3 in lapic_ipi_vectored (vector=0, dest=0)
67: at /usr/src/sys/i386/i386/local_apic.c:733
68:#10 0xc0718b23 in ipi_selected (cpus=1, ipi=1)
69: at /usr/src/sys/i386/i386/mp_machdep.c:1115
70:#11 0xc057473e in kseq_notify (ke=0xcc05e360, cpu=0)
71: at /usr/src/sys/kern/sched_ule.c:520
72:#12 0xc0575cad in sched_add (td=0xcbcf5c80)
73: at /usr/src/sys/kern/sched_ule.c:1366
74:#13 0xc05666c6 in setrunqueue (td=0xcc05e360)
75: at /usr/src/sys/kern/kern_switch.c:422
76:#14 0xc05752f4 in sched_wakeup (td=0xcbcf5c80)
77: at /usr/src/sys/kern/sched_ule.c:999
78:#15 0xc056816c in setrunnable (td=0xcbcf5c80)
79: at /usr/src/sys/kern/kern_synch.c:570
80:#16 0xc0567d53 in wakeup (ident=0xcbcf5c80)
81: at /usr/src/sys/kern/kern_synch.c:411
82:#17 0xc05490a8 in exit1 (td=0xcbcf5b40, rv=0)
83: at /usr/src/sys/kern/kern_exit.c:509
84:#18 0xc0548011 in sys_exit () at /usr/src/sys/kern/kern_exit.c:102
85:#19 0xc0720fd0 in syscall (frame=
86: {tf_fs = 47, tf_es = 47, tf_ds = 47, tf_edi = 0, tf_esi = -1, tf_ebp = -1077940712, tf_isp = -602075788, tf_ebx = 672411944, tf_edx = 10, tf_ecx = 672411600, tf_eax = 1, tf_trapno = 12, tf_err = 2, tf_eip = 671899563, tf_cs = 31, tf_eflags = 642, tf_esp = -1077940740, tf_ss = 47})
87: at /usr/src/sys/i386/i386/trap.c:1010
88:#20 0xc070ca4d in Xint0x80_syscall () at {standard input}:136
89:---Can't read userspace from dump, or kernel process---
90:(kgdb) quit
....
[TIP]
====
If your system is crashing regularly and you are running out of disk space, deleting old [.filename]#vmcore# files in [.filename]#/var/crash# could save a considerable amount of disk space!
====
[[kerneldebug-online-ddb]]
== On-Line Kernel Debugging Using DDB
While `kgdb` as an off-line debugger provides a very high level of user interface, there are some things it cannot do. The most important ones are setting breakpoints in and single-stepping kernel code.
If you need to do low-level debugging on your kernel, there is an on-line debugger available called DDB. It allows setting of breakpoints, single-stepping kernel functions, examining and changing kernel variables, etc. However, it cannot access kernel source files, and only has access to the global and static symbols, not to the full debug information like `kgdb` does.
To configure your kernel to include DDB, add the options
[.programlisting]
....
options KDB
....
[.programlisting]
....
options DDB
....
to your config file, and rebuild. (See link:{handbook}/[The FreeBSD Handbook] for details on configuring the FreeBSD kernel).
Once your DDB kernel is running, there are several ways to enter DDB. The first, and earliest way is to use the boot flag `-d`. The kernel will start up in debug mode and enter DDB prior to any device probing. Hence you can even debug the device probe/attach functions. To use this, exit the loader's boot menu and enter `boot -d` at the loader prompt.
The second scenario is to drop to the debugger once the system has booted. There are two simple ways to accomplish this. If you would like to break to the debugger from the command prompt, simply type the command:
[source,bash]
....
# sysctl debug.kdb.enter=1
....
Alternatively, if you are at the system console, you may use a hot-key on the keyboard. The default break-to-debugger sequence is kbd:[Ctrl+Alt+ESC]. For syscons, this sequence can be remapped and some of the distributed maps out there do this, so check to make sure you know the right sequence to use. There is an option available for serial consoles that allows the use of a serial line BREAK on the console line to enter DDB (`options BREAK_TO_DEBUGGER` in the kernel config file). It is not the default since there are a lot of serial adapters around that gratuitously generate a BREAK condition, for example when pulling the cable.
The third way is that any panic condition will branch to DDB if the kernel is configured to use it. For this reason, it is not wise to configure a kernel with DDB for a machine running unattended.
To obtain the unattended functionality, add:
[.programlisting]
....
options KDB_UNATTENDED
....
to the kernel configuration file and rebuild/reinstall.
The DDB commands roughly resemble some `gdb` commands. The first thing you probably need to do is to set a breakpoint:
[source,bash]
....
break function-name address
....
Numbers are taken as hexadecimal by default, but to make them distinct from symbol names, hexadecimal numbers starting with the letters `a-f` need to be preceded with `0x` (this is optional for other numbers). Simple expressions are allowed, for example: `function-name + 0x103`.
To exit the debugger and continue execution, type:
[source,bash]
....
continue
....
To get a stack trace of the current thread, use:
[source,bash]
....
trace
....
To get a stack trace of an arbitrary thread, specify a process ID or thread ID as a second argument to `trace`.
If you want to remove a breakpoint, use
[source,bash]
....
del
del address-expression
....
The first form will be accepted immediately after a breakpoint hit, and deletes the current breakpoint. The second form can remove any breakpoint, but you need to specify the exact address; this can be obtained from:
[source,bash]
....
show b
....
or:
[source,bash]
....
show break
....
To single-step the kernel, try:
[source,bash]
....
s
....
This will step into functions, but you can make DDB trace them until the matching return statement is reached by:
[source,bash]
....
n
....
[NOTE]
====
This is different from ``gdb``'s `next` statement; it is like ``gdb``'s `finish`. Pressing kbd:[n] more than once will cause a continue.
====
To examine data from memory, use (for example):
[source,bash]
....
x/wx 0xf0133fe0,40
x/hd db_symtab_space
x/bc termbuf,10
x/s stringbuf
....
for word/halfword/byte access, and hexadecimal/decimal/character/string display. The number after the comma is the object count. To display the next 0x10 items, simply use:
[source,bash]
....
x ,10
....
Similarly, use
[source,bash]
....
x/ia foofunc,10
....
to disassemble the first 0x10 instructions of `foofunc`, and display them along with their offset from the beginning of `foofunc`.
To modify memory, use the write command:
[source,bash]
....
w/b termbuf 0xa 0xb 0
w/w 0xf0010030 0 0
....
The command modifier (`b`/`h`/`w`) specifies the size of the data to be written, the first following expression is the address to write to and the remainder is interpreted as data to write to successive memory locations.
If you need to know the current registers, use:
[source,bash]
....
show reg
....
Alternatively, you can display a single register value by e.g.
[source,bash]
....
p $eax
....
and modify it by:
[source,bash]
....
set $eax new-value
....
Should you need to call some kernel functions from DDB, simply say:
[source,bash]
....
call func(arg1, arg2, ...)
....
The return value will be printed.
For a man:ps[1] style summary of all running processes, use:
[source,bash]
....
ps
....
Now you have examined why your kernel failed, and you wish to reboot. Remember that, depending on the severity of previous malfunctioning, not all parts of the kernel might still be working as expected. Perform one of the following actions to shut down and reboot your system:
[source,bash]
....
panic
....
This will cause your kernel to dump core and reboot, so you can later analyze the core on a higher level with man:kgdb[1].
[source,bash]
....
call boot(0)
....
This might be a good way to cleanly shut down the running system, `sync()` all disks, and finally, in some cases, reboot. As long as the disk and filesystem interfaces of the kernel are not damaged, this could be a good way for an almost clean shutdown.
[source,bash]
....
reset
....
This is the final way out of disaster and almost the same as hitting the Big Red Button.
If you need a short command summary, simply type:
[source,bash]
....
help
....
It is highly recommended to have a printed copy of the man:ddb[4] manual page ready for a debugging session. Remember that it is hard to read the on-line manual while single-stepping the kernel.
[[kerneldebug-online-gdb]]
== On-Line Kernel Debugging Using Remote GDB
This feature has been supported since FreeBSD 2.2, and it is actually a very neat one.
GDB has already supported _remote debugging_ for a long time. This is done using a very simple protocol along a serial line. Unlike the other methods described above, you will need two machines for doing this. One is the host providing the debugging environment, including all the sources, and a copy of the kernel binary with all the symbols in it, and the other one is the target machine that simply runs a similar copy of the very same kernel (but stripped of the debugging information).
You should configure the kernel in question with `config -g` if building the "traditional" way. If building the "new" way, make sure that `makeoptions DEBUG=-g` is in the configuration. In both cases, include `DDB` in the configuration, and compile it as usual. This gives a large binary, due to the debugging information. Copy this kernel to the target machine, strip the debugging symbols off with `strip -x`, and boot it using the `-d` boot option. Connect the serial line of the target machine that has "flags 080" set on its uart device to any serial line of the debugging host. See man:uart[4] for information on how to set the flags on an uart device. Now, on the debugging machine, go to the compile directory of the target kernel, and start `gdb`:
[source,bash]
....
% kgdb kernel
GDB is free software and you are welcome to distribute copies of it
under certain conditions; type "show copying" to see the conditions.
There is absolutely no warranty for GDB; type "show warranty" for details.
GDB 4.16 (i386-unknown-freebsd),
Copyright 1996 Free Software Foundation, Inc...
(kgdb)
....
Initialize the remote debugging session (assuming the first serial port is being used) by:
[source,bash]
....
(kgdb) target remote /dev/cuau0
....
Now, on the target host (the one that entered DDB right before even starting the device probe), type:
[source,bash]
....
Debugger("Boot flags requested debugger")
Stopped at Debugger+0x35: movb $0, edata+0x51bc
db> gdb
....
DDB will respond with:
[source,bash]
....
Next trap will enter GDB remote protocol mode
....
Every time you type `gdb`, the mode will be toggled between remote GDB and local DDB. In order to force a next trap immediately, simply type `s` (step). Your hosting GDB will now gain control over the target kernel:
[source,bash]
....
Remote debugging using /dev/cuau0
Debugger (msg=0xf01b0383 "Boot flags requested debugger")
at ../../i386/i386/db_interface.c:257
(kgdb)
....
You can use this session almost as any other GDB session, including full access to the source, running it in gud-mode inside an Emacs window (which gives you an automatic source code display in another Emacs window), etc.
[[kerneldebug-console]]
== Debugging a Console Driver
Since you need a console driver to run DDB on, things are more complicated if the console driver itself is failing. In that case, consider using a serial console (either with modified boot blocks, or by specifying `-h` at the `Boot:` prompt) and hooking up a standard terminal to your first serial port. DDB works on any configured console driver, including a serial console.
[[kerneldebug-deadlocks]]
== Debugging Deadlocks
You may experience so called deadlocks, a situation where a system stops doing useful work. To provide a helpful bug report in this situation, use man:ddb[4] as described in the previous section. Include the output of `ps` and `trace` for suspected processes in the report.
If possible, consider doing further investigation. The recipe below is especially useful if you suspect that a deadlock occurs in the VFS layer. Add these options to the kernel configuration file.
[.programlisting]
....
makeoptions DEBUG=-g
options INVARIANTS
options INVARIANT_SUPPORT
options WITNESS
options WITNESS_SKIPSPIN
options DEBUG_LOCKS
options DEBUG_VFS_LOCKS
options DIAGNOSTIC
....
When a deadlock occurs, in addition to the output of the `ps` command, provide the output of the `show pcpu`, `show allpcpu`, `show locks`, `show alllocks`, `show lockedvnods`, and `alltrace` commands.
To obtain meaningful backtraces for threaded processes, use `thread thread-id` to switch to the thread stack, and do a backtrace with `where`.
[[kerneldebug-dcons]]
== Kernel debugging with Dcons
man:dcons[4] is a very simple console driver that is not directly connected with any physical devices. It just reads and writes characters from and to a buffer in a kernel or loader. Due to its simple nature, it is very useful for kernel debugging, especially with a FireWire(R) device. Currently, FreeBSD provides two ways to interact with the buffer from outside of the kernel using man:dconschat[8].
=== Dcons over FireWire(R)
Most FireWire(R) (IEEE1394) host controllers are based on the OHCI specification that supports physical access to the host memory. This means that once the host controller is initialized, we can access the host memory without the help of software (kernel). We can exploit this facility for interaction with man:dcons[4]. man:dcons[4] provides functionality similar to that of a serial console. It emulates two serial ports, one for the console and DDB, the other for GDB. Since remote memory access is fully handled by the hardware, the man:dcons[4] buffer is accessible even when the system crashes.
FireWire(R) devices are not limited to those integrated into motherboards. PCI cards exist for desktops, and a cardbus interface can be purchased for laptops.
==== Enabling FireWire(R) and Dcons support on the target machine
To enable FireWire(R) and Dcons support in the kernel of the _target machine_:
* Make sure your kernel supports `dcons`, `dcons_crom` and `firewire`. `Dcons` should be statically linked with the kernel. For `dcons_crom` and `firewire`, modules should be OK.
* Make sure physical DMA is enabled. You may need to add `hw.firewire.phydma_enable=1` to [.filename]#/boot/loader.conf#.
* Add options for debugging.
* Add `dcons_gdb=1` in [.filename]#/boot/loader.conf# if you use GDB over FireWire(R).
* Enable `dcons` in [.filename]#/etc/ttys#.
* Optionally, to force `dcons` to be the high-level console, add `hw.firewire.dcons_crom.force_console=1` to [.filename]#loader.conf#.
To enable FireWire(R) and Dcons support in man:loader[8] on i386 or amd64:
Add `LOADER_FIREWIRE_SUPPORT=YES` in [.filename]#/etc/make.conf# and rebuild man:loader[8]:
[source,bash]
....
# cd /sys/boot/i386 && make clean && make && make install
....
To enable man:dcons[4] as an active low-level console, add `boot_multicons="YES"` to [.filename]#/boot/loader.conf#.
Here are a few configuration examples. A sample kernel configuration file would contain:
[source,bash]
....
device dcons
device dcons_crom
options KDB
options DDB
options GDB
options ALT_BREAK_TO_DEBUGGER
....
And a sample [.filename]#/boot/loader.conf# would contain:
[source,bash]
....
dcons_crom_load="YES"
dcons_gdb=1
boot_multicons="YES"
hw.firewire.phydma_enable=1
hw.firewire.dcons_crom.force_console=1
....
==== Enabling FireWire(R) and Dcons support on the host machine
To enable FireWire(R) support in the kernel on the _host machine_:
[source,bash]
....
# kldload firewire
....
Find out the EUI64 (the unique 64 bit identifier) of the FireWire(R) host controller, and use man:fwcontrol[8] or `dmesg` to find the EUI64 of the target machine.
Run man:dconschat[8], with:
[source,bash]
....
# dconschat -e \# -br -G 12345 -t 00-11-22-33-44-55-66-77
....
The following key combinations can be used once man:dconschat[8] is running:
[.informaltable]
[cols="1,1"]
|===
|kbd:[~+.]
|Disconnect
|kbd:[~+Ctrl+B]
|ALT BREAK
|kbd:[~+Ctrl+R]
|RESET target
|kbd:[~+Ctrl+Z]
|Suspend dconschat
|===
Attach remote GDB by starting man:kgdb[1] with a remote debugging session:
[source,bash]
....
kgdb -r :12345 kernel
....
==== Some general tips
Here are some general tips:
To take full advantage of the speed of FireWire(R), disable other slow console drivers:
[source,bash]
....
# conscontrol delete ttyd0 # serial console
# conscontrol delete consolectl # video/keyboard
....
There exists a GDB mode for man:emacs[1]; this is what you will need to add to your [.filename]#.emacs#:
[source,bash]
....
(setq gud-gdba-command-name "kgdb -a -a -a -r :12345")
(setq gdb-many-windows t)
(xterm-mouse-mode 1)
M-x gdba
....
And for DDD ([.filename]#devel/ddd#):
[source,bash]
....
# remote serial protocol
LANG=C ddd --debugger kgdb -r :12345 kernel
# live core debug
LANG=C ddd --debugger kgdb kernel /dev/fwmem0.2
....
=== Dcons with KVM
We can directly read the man:dcons[4] buffer via [.filename]#/dev/mem# on live systems, and from the core dump on crashed systems. Both give output similar to `dmesg -a`, but the man:dcons[4] buffer includes more information.
==== Using Dcons with KVM
To use man:dcons[4] with KVM:
Dump a man:dcons[4] buffer of a live system:
[source,bash]
....
# dconschat -1
....
Dump a man:dcons[4] buffer of a crash dump:
[source,bash]
....
# dconschat -1 -M vmcore.XX
....
Live core debugging can be done via:
[source,bash]
....
# fwcontrol -m target_eui64
# kgdb kernel /dev/fwmem0.2
....
[[kerneldebug-options]]
== Glossary of Kernel Options for Debugging
This section provides a brief glossary of compile-time kernel options used for debugging:
* `options KDB`: compiles in the kernel debugger framework. Required for `options DDB` and `options GDB`. Little or no performance overhead. By default, the debugger will be entered on panic instead of an automatic reboot.
* `options KDB_UNATTENDED`: change the default value of the `debug.debugger_on_panic` sysctl to 0, which controls whether the debugger is entered on panic. When `options KDB` is not compiled into the kernel, the behavior is to automatically reboot on panic; when it is compiled into the kernel, the default behavior is to drop into the debugger unless `options KDB_UNATTENDED` is compiled in. If you want to leave the kernel debugger compiled into the kernel but want the system to come back up unless you're on-hand to use the debugger for diagnostics, use this option.
* `options KDB_TRACE`: change the default value of the `debug.trace_on_panic` sysctl to 1, which controls whether the debugger automatically prints a stack trace on panic. Especially if running with `options KDB_UNATTENDED`, this can be helpful to gather basic debugging information on the serial or firewire console while still rebooting to recover.
* `options DDB`: compile in support for the console debugger, DDB. This interactive debugger runs on whatever the active low-level console of the system is, which includes the video console, serial console, or firewire console. It provides basic integrated debugging facilities, such as stack tracing, process and thread listing, dumping of lock state, VM state, file system state, and kernel memory management. DDB does not require software running on a second machine or being able to generate a core dump or full debugging kernel symbols, and provides detailed diagnostics of the kernel at run-time. Many bugs can be fully diagnosed using only DDB output. This option depends on `options KDB`.
* `options GDB`: compile in support for the remote debugger, GDB, which can operate over serial cable or firewire. When the debugger is entered, GDB may be attached to inspect structure contents, generate stack traces, etc. Some kernel state is more awkward to access than in DDB, which is able to generate useful summaries of kernel state automatically, such as automatically walking lock debugging or kernel memory management structures, and a second machine running the debugger is required. On the other hand, GDB combines information from the kernel source and full debugging symbols, and is aware of full data structure definitions, local variables, and is scriptable. This option is not required to run GDB on a kernel core dump. This option depends on `options KDB`.
* `options BREAK_TO_DEBUGGER`, `options ALT_BREAK_TO_DEBUGGER`: allow a break signal or alternative signal on the console to enter the debugger. If the system hangs without a panic, this is a useful way to reach the debugger. Due to the current kernel locking, a break signal generated on a serial console is significantly more reliable at getting into the debugger, and is generally recommended. This option has little or no performance impact.
* `options INVARIANTS`: compile into the kernel a large number of run-time assertion checks and tests, which constantly test the integrity of kernel data structures and the invariants of kernel algorithms. These tests can be expensive, so are not compiled in by default, but help provide useful "fail stop" behavior, in which certain classes of undesired behavior enter the debugger before kernel data corruption occurs, making them easier to debug. Tests include memory scrubbing and use-after-free testing, which is one of the more significant sources of overhead. This option depends on `options INVARIANT_SUPPORT`.
* `options INVARIANT_SUPPORT`: many of the tests present in `options INVARIANTS` require modified data structures or additional kernel symbols to be defined.
* `options WITNESS`: this option enables run-time lock order tracking and verification, and is an invaluable tool for deadlock diagnosis. WITNESS maintains a graph of acquired lock orders by lock type, and checks the graph at each acquire for cycles (implicit or explicit). If a cycle is detected, a warning and stack trace are generated to the console, indicating that a potential deadlock might have occurred. WITNESS is required in order to use the `show locks`, `show witness` and `show alllocks` DDB commands. This debug option has significant performance overhead, which may be somewhat mitigated through the use of `options WITNESS_SKIPSPIN`. Detailed documentation may be found in man:witness[4].
* `options WITNESS_SKIPSPIN`: disable run-time checking of spinlock lock order with WITNESS. As spin locks are acquired most frequently in the scheduler, and scheduler events occur often, this option can significantly speed up systems running with WITNESS. This option depends on `options WITNESS`.
* `options WITNESS_KDB`: change the default value of the `debug.witness.kdb` sysctl to 1, which causes WITNESS to enter the debugger when a lock order violation is detected, rather than simply printing a warning. This option depends on `options WITNESS`.
* `options SOCKBUF_DEBUG`: perform extensive run-time consistency checking on socket buffers, which can be useful for debugging both socket bugs and race conditions in protocols and device drivers that interact with sockets. This option significantly impacts network performance, and may change the timing in device driver races.
* `options DEBUG_VFS_LOCKS`: track lock acquisition points for lockmgr/vnode locks, expanding the amount of information displayed by `show lockedvnods` in DDB. This option has a measurable performance impact.
* `options DEBUG_MEMGUARD`: a replacement for the man:malloc[9] kernel memory allocator that uses the VM system to detect reads or writes from allocated memory after free. Details may be found in man:memguard[9]. This option has a significant performance impact, but can be very helpful in debugging kernel memory corruption bugs.
* `options DIAGNOSTIC`: enable additional, more expensive diagnostic tests along the lines of `options INVARIANTS`.
diff --git a/documentation/content/en/books/developers-handbook/l10n/_index.adoc b/documentation/content/en/books/developers-handbook/l10n/_index.adoc
index 4966af0aa2..7562b785c8 100644
--- a/documentation/content/en/books/developers-handbook/l10n/_index.adoc
+++ b/documentation/content/en/books/developers-handbook/l10n/_index.adoc
@@ -1,209 +1,210 @@
---
title: Chapter 4. Localization and Internationalization - L10N and I18N
authors:
prev: books/developers-handbook/secure
next: books/developers-handbook/policies
+description: Localization and Internationalization - L10N and I18N in FreeBSD
---
[[l10n]]
= Localization and Internationalization - L10N and I18N
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 4
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[l10n-programming]]
== Programming I18N Compliant Applications
To make your application more useful for speakers of other languages, we hope that you will write it to be I18N compliant. The GNU gcc compiler and GUI libraries like QT and GTK support I18N through special handling of strings. Making a program I18N compliant is very easy. It allows contributors to port your application to other languages quickly. Refer to the library-specific I18N documentation for more details.
Contrary to common perception, I18N compliant code is easy to write. Usually, it only involves wrapping your strings with library-specific functions. In addition, please be sure to allow for wide or multibyte character support.
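As a minimal, illustrative sketch of the wide and multibyte character point (the message string and buffer size here are arbitrary assumptions, not taken from any particular program), a program can honor the user's locale with man:setlocale[3] and convert multibyte text to wide characters before processing it:
[.programlisting]
....
#include <locale.h>
#include <stdio.h>
#include <stdlib.h>
#include <wchar.h>

int
main(void)
{
	const char *msg = "Hello, world";	/* placeholder message */
	wchar_t wbuf[128];

	/* Use the user's locale instead of the default "C" locale. */
	setlocale(LC_ALL, "");

	/* Convert the (possibly multibyte) string into wide characters. */
	if (mbstowcs(wbuf, msg, sizeof(wbuf) / sizeof(wbuf[0])) == (size_t)-1) {
		perror("mbstowcs");
		return (1);
	}
	wprintf(L"%ls\n", wbuf);
	return (0);
}
....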
=== A Call to Unify the I18N Effort
It has come to our attention that the individual I18N/L10N efforts for each country have been repeating each other's efforts. Many of us have been reinventing the wheel repeatedly and inefficiently. We hope that the various major groups in I18N could congregate into a group effort similar to the Core Team's responsibility.
Currently, we hope that, when you write or port I18N programs, you will send them out to each country's related FreeBSD mailing list for testing. In the future, we hope to create applications that work in all the languages out-of-the-box without dirty hacks.
The {freebsd-i18n} has been established. If you are an I18N/L10N developer, please send your comments, ideas, questions, and anything you deem related to it.
=== Perl and Python
Perl and Python have I18N and wide character handling libraries. Please use them for I18N compliance.
[[posix-nls]]
== Localized Messages with POSIX.1 Native Language Support (NLS)
Beyond the basic I18N functions, like supporting various input encodings or supporting national conventions, such as the different decimal separators, at a higher level of I18N, it is possible to localize the messages written to the output by the various programs. A common way of doing this is using the POSIX.1 NLS functions, which are provided as a part of the FreeBSD base system.
[[nls-catalogs]]
=== Organizing Localized Messages into Catalog Files
POSIX.1 NLS is based on catalog files, which contain the localized messages in the desired encoding. The messages are organized into sets and each message is identified by an integer number in the containing set. The catalog files are conventionally named after the locale they contain localized messages for, followed by the `.msg` extension. For instance, the Hungarian messages for ISO8859-2 encoding should be stored in a file called [.filename]#hu_HU.ISO8859-2#.
These catalog files are common text files that contain the numbered messages. It is possible to write comments by starting the line with a `$` sign. Set boundaries are also separated by special comments, where the keyword `set` must directly follow the `$` sign. The `set` keyword is then followed by the set number. For example:
[.programlisting]
....
$set 1
....
The actual message entries start with the message number, followed by the localized message. The well-known modifiers from man:printf[3] are accepted:
[.programlisting]
....
15 "File not found: %s\n"
....
The language catalog files have to be compiled into a binary form before they can be opened from the program. This conversion is done with the man:gencat[1] utility. Its first argument is the filename of the compiled catalog and its further arguments are the input catalogs. The localized messages can also be organized into multiple catalog files, and then all of them can be processed with man:gencat[1].
[[nls-using]]
=== Using the Catalog Files from the Source Code
Using the catalog files is simple. To use the related functions, [.filename]#nl_types.h# must be included. Before using a catalog, it has to be opened with man:catopen[3]. The function takes two arguments. The first parameter is the name of the installed and compiled catalog. Usually, the name of the program is used, such as grep. This name will be used when looking for the compiled catalog file. The man:catopen[3] call looks for this file in [.filename]#/usr/share/nls/locale/catname# and in [.filename]#/usr/local/share/nls/locale/catname#, where `locale` is the locale set and `catname` is the catalog name being discussed. The second parameter is a constant, which can have two values:
* `NL_CAT_LOCALE`, which means that the used catalog file will be based on `LC_MESSAGES`.
* `0`, which means that `LANG` has to be used to open the proper catalog.
The man:catopen[3] call returns a catalog identifier of type `nl_catd`. Please refer to the manual page for a list of possible returned error codes.
After opening a catalog, man:catgets[3] can be used to retrieve a message. The first parameter is the catalog identifier returned by man:catopen[3], the second one is the number of the set, the third one is the number of the message, and the fourth one is a fallback message, which will be returned if the requested message cannot be retrieved from the catalog file.
After using the catalog file, it must be closed by calling man:catclose[3], which has one argument, the catalog id.
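As an illustration only, here is a minimal sketch that ties the three calls together; the catalog name `myapp`, the set and message numbers, and the fallback text are assumptions made for this example:
[.programlisting]
....
#include <nl_types.h>
#include <stdio.h>

int
main(void)
{
	nl_catd catalog;

	/* Open the "myapp" catalog for the locale selected by LC_MESSAGES. */
	catalog = catopen("myapp", NL_CAT_LOCALE);

	/* Set 1, message 1; the fourth argument is returned on failure. */
	printf("%s\n", catgets(catalog, 1, 1, "some random message"));

	if (catalog != (nl_catd)-1)
		catclose(catalog);
	return (0);
}
....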
[[nls-example]]
=== A Practical Example
The following example demonstrates a simple way to use NLS catalogs in a flexible manner.
The following lines need to be put into a common header file of the program, which is included in all source files where localized messages are necessary:
[.programlisting]
....
#ifdef WITHOUT_NLS
#define getstr(n) nlsstr[n]
#else
#include <nl_types.h>
extern nl_catd catalog;
#define getstr(n) catgets(catalog, 1, n, nlsstr[n])
#endif
extern char *nlsstr[];
....
Next, put these lines into the global declaration part of the main source file:
[.programlisting]
....
#ifndef WITHOUT_NLS
#include <nl_types.h>
nl_catd catalog;
#endif
/*
* Default messages to use when NLS is disabled or no catalog
* is found.
*/
char *nlsstr[] = {
	"",
	/* 1*/	"some random message",
	/* 2*/	"some other message"
};
....
Next come the real code snippets, which open, read, and close the catalog:
[.programlisting]
....
#ifndef WITHOUT_NLS
catalog = catopen("myapp", NL_CAT_LOCALE);
#endif
...
printf(getstr(1));
...
#ifndef WITHOUT_NLS
catclose(catalog);
#endif
....
==== Reducing Strings to Localize
There is a good way of reducing the strings that need to be localized by using libc error messages. This is also useful to just avoid duplication and provide consistent error messages for the common errors that can be encountered by a great many of programs.
First, here is an example that does not use libc error messages:
[.programlisting]
....
#include <err.h>
...
if (!S_ISDIR(st.st_mode))
	errx(1, "argument is not a directory");
....
This can be transformed to print an error message by reading `errno` and printing an error message accordingly:
[.programlisting]
....
#include <err.h>
#include <errno.h>
...
if (!S_ISDIR(st.st_mode)) {
	errno = ENOTDIR;
	err(1, NULL);
}
....
In this example, the custom string is eliminated, thus translators will have less work when localizing the program and users will see the usual "Not a directory" error message when they encounter this error. This message will probably seem more familiar to them. Please note that it was necessary to include [.filename]#errno.h# in order to directly access `errno`.
It is worth noting that there are cases when `errno` is set automatically by a preceding call, so it is not necessary to set it explicitly:
[.programlisting]
....
#include <err.h>
...
if ((p = malloc(size)) == NULL)
	err(1, NULL);
....
[[nls-mk]]
=== Making use of [.filename]#bsd.nls.mk#
Using the catalog files requires a few repeatable steps, such as compiling the catalogs and installing them to the proper location. In order to simplify this process even more, [.filename]#bsd.nls.mk# introduces some macros. It is not necessary to include [.filename]#bsd.nls.mk# explicitly; it is pulled in from the common Makefiles, such as [.filename]#bsd.prog.mk# or [.filename]#bsd.lib.mk#.
Usually it is enough to define `NLSNAME`, which should be the catalog name mentioned as the first argument of man:catopen[3], and to list the catalog files in `NLS` without their `.msg` extension. Here is an example, which makes it possible to disable NLS when used with the code examples before. The `WITHOUT_NLS` man:make[1] variable has to be defined in order to build the program without NLS support.
[.programlisting]
....
.if !defined(WITHOUT_NLS)
NLS= es_ES.ISO8859-1
NLS+= hu_HU.ISO8859-2
NLS+= pt_BR.ISO8859-1
.else
CFLAGS+= -DWITHOUT_NLS
.endif
....
Conventionally, the catalog files are placed under the [.filename]#nls# subdirectory and this is the default behavior of [.filename]#bsd.nls.mk#. It is possible, though, to override the location of the catalogs with the `NLSSRCDIR` man:make[1] variable. The default name of the precompiled catalog files also follows the naming convention mentioned before. It can be overridden by setting the `NLSNAME` variable. There are other options to fine tune the processing of the catalog files, but usually they are not needed, thus they are not described here. For further information on [.filename]#bsd.nls.mk#, please refer to the file itself; it is short and easy to understand.
diff --git a/documentation/content/en/books/developers-handbook/policies/_index.adoc b/documentation/content/en/books/developers-handbook/policies/_index.adoc
index 0f437fd48c..01d0ea98b9 100644
--- a/documentation/content/en/books/developers-handbook/policies/_index.adoc
+++ b/documentation/content/en/books/developers-handbook/policies/_index.adoc
@@ -1,271 +1,272 @@
---
title: Chapter 5. Source Tree Guidelines and Policies
authors:
- author: Poul-Henning Kamp
- author: Giorgos Keramidas
prev: books/developers-handbook/l10n
next: books/developers-handbook/testing
+description: Source Tree Guidelines and Policies
---
[[policies]]
= Source Tree Guidelines and Policies
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 5
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
This chapter documents various guidelines and policies in force for the FreeBSD source tree.
[[policies-style]]
== Style Guidelines
Consistent coding style is extremely important, particularly with large projects like FreeBSD. Code should follow the FreeBSD coding styles described in man:style[9] and man:style.Makefile[5].
[[policies-maintainer]]
== `MAINTAINER` on Makefiles
If a particular portion of the FreeBSD [.filename]#src/# distribution is being maintained by a person or group of persons, this is communicated through an entry in [.filename]#src/MAINTAINERS#. Maintainers of ports within the Ports Collection express their maintainership to the world by adding a `MAINTAINER` line to the [.filename]#Makefile# of the port in question:
[.programlisting]
....
MAINTAINER= email-addresses
....
[TIP]
====
For other parts of the repository, or for sections not listed as having a maintainer, or when you are unsure who the active maintainer is, try looking at the recent commit history of the relevant parts of the source tree. It is quite often the case that a maintainer is not explicitly named, but the people who are actively working in a part of the source tree for, say, the last couple of years are interested in reviewing changes. Even if this is not specifically mentioned in the documentation or the source itself, asking for a review as a form of courtesy is a very reasonable thing to do.
====
The role of the maintainer is as follows:
* The maintainer owns and is responsible for that code. This means that he or she is responsible for fixing bugs and answering problem reports pertaining to that piece of the code, and in the case of contributed software, for tracking new versions, as appropriate.
* Changes to directories which have a maintainer defined shall be sent to the maintainer for review before being committed. Only if the maintainer does not respond for an unacceptable period of time, to several emails, will it be acceptable to commit changes without review by the maintainer. However, it is suggested that you try to have the changes reviewed by someone else if at all possible.
* It is of course not acceptable to add a person or group as maintainer unless they agree to assume this duty. On the other hand it does not have to be a committer and it can easily be a group of people.
[[policies-contributed]]
== Contributed Software
Some parts of the FreeBSD distribution consist of software that is actively being maintained outside the FreeBSD project. For historical reasons, we call this _contributed_ software. Some examples are sendmail, gcc and patch.
Over the last couple of years, various methods have been used in dealing with this type of software and all have some number of advantages and drawbacks. No clear winner has emerged.
Since this is the case, after some debate one of these methods has been selected as the "official" method and will be required for future imports of software of this kind. Furthermore, it is strongly suggested that existing contributed software converge on this model over time, as it has significant advantages over the old method, including the ability to easily obtain diffs relative to the "official" versions of the source by everyone (even without direct repository access). This will make it significantly easier to return changes to the primary developers of the contributed software.
Ultimately, however, it comes down to the people actually doing the work. If using this model is particularly unsuited to the package being dealt with, exceptions to these rules may be granted only with the approval of the core team and with the general consensus of the other developers. The ability to maintain the package in the future will be a key issue in the decisions.
[NOTE]
====
Because it makes it harder to import future versions, minor, trivial, and/or cosmetic changes are _strongly discouraged_ on files that are still tracking the vendor branch.
====
[[vendor-import-svn]]
=== Vendor Imports with SVN
This section describes the vendor import procedure with Subversion in detail.
[.procedure]
. *Preparing the Tree*
+
If this is your first import after the switch to SVN, you will have to flatten and clean up the vendor tree, and bootstrap merge history in the main tree. If not, you can safely omit this step.
+
During the conversion from CVS to SVN, vendor branches were imported with the same layout as the main tree. For example, the foo vendor sources ended up in [.filename]#vendor/foo/dist/contrib/foo#, but it is pointless and rather inconvenient. What we really want is to have the vendor source directly in [.filename]#vendor/foo/dist#, like this:
+
[source,bash]
....
% cd vendor/foo/dist/contrib/foo
% svn move $(svn list) ../..
% cd ../..
% svn remove contrib
% svn propdel -R svn:mergeinfo
% svn commit
....
+
Note that the `propdel` bit is necessary because starting with 1.5, Subversion will automatically add `svn:mergeinfo` to any directory you copy or move. In this case, you will not need this information, since you are not going to merge anything from the tree you deleted.
+
[NOTE]
====
You may want to flatten the tags as well. The procedure is exactly the same. If you do this, put off the commit until the end.
====
+
Check the [.filename]#dist# tree and perform any cleanup that is deemed to be necessary. You may want to disable keyword expansion, as it makes no sense on unmodified vendor code. In some cases, it can even be harmful.
+
[source,bash]
....
% svn propdel svn:keywords -R .
% svn commit
....
+
Bootstrapping of `svn:mergeinfo` on the target directory (in the main tree) to the revision that corresponds to the last change made to the vendor tree prior to importing new sources is also needed:
+
[source,bash]
....
% cd head/contrib/foo
% svn merge --record-only ^/vendor/foo/dist@12345678 .
% svn commit
....
+
With some shells, the `^` in the above command may need to be escaped with a backslash.
. *Importing New Sources*
+
Prepare a full, clean tree of the vendor sources. With SVN, we can keep a full distribution in the vendor tree without bloating the main tree. Import everything but merge only what is needed.
+
Note that you will need to add any files that were added since the last vendor import, and remove any that were removed. To facilitate this, you should prepare sorted lists of the contents of the vendor tree and of the sources you are about to import:
+
[source,bash]
....
% cd vendor/foo/dist
% svn list -R | grep -v '/$' | sort > ../old
% cd ../foo-9.9
% find . -type f | cut -c 3- | sort > ../new
....
+
With these two files, the following command will list removed files (files only in [.filename]#old#):
+
[source,bash]
....
% comm -23 ../old ../new
....
+
While the command below will list added files (files only in [.filename]#new#):
+
[source,bash]
....
% comm -13 ../old ../new
....
+
Let us put this together:
+
[source,bash]
....
% cd vendor/foo/foo-9.9
% tar cf - . | tar xf - -C ../dist
% cd ../dist
% comm -23 ../old ../new | xargs svn remove
% comm -13 ../old ../new | xargs svn add
....
+
[WARNING]
====
If there are new directories in the new distribution, the last command will fail. You will have to add the directories, and run it again. Conversely, if any directories were removed, you will have to remove them manually.
====
+
Check properties on any new files:
** All text files should have `svn:eol-style` set to `native`.
** All binary files should have `svn:mime-type` set to `application/octet-stream`, unless there is a more appropriate media type.
** Executable files should have `svn:executable` set to `*`.
** There should be no other properties on any file in the tree.
+
[NOTE]
====
You are ready to commit, but you should first check the output of `svn stat` and `svn diff` to make sure everything is in order.
====
+
Once you have committed the new vendor release, you should tag it for future reference. The best and quickest way is to do it directly in the repository:
+
[source,bash]
....
% svn copy ^/vendor/foo/dist svn_base/vendor/foo/9.9
....
+
To get the new tag, you can update your working copy of [.filename]#vendor/foo#.
+
[NOTE]
====
If you choose to do the copy in the checkout instead, do not forget to remove the generated `svn:mergeinfo` as described above.
====
. *Merging to __-HEAD__*
+
After you have prepared your import, it is time to merge. Option `--accept=postpone` tells SVN not to handle merge conflicts yet, because they will be taken care of manually:
+
[source,bash]
....
% cd head/contrib/foo
% svn update
% svn merge --accept=postpone ^/vendor/foo/dist
....
+
Resolve any conflicts, and make sure that any files that were added or removed in the vendor tree have been properly added or removed in the main tree. It is always a good idea to check differences against the vendor branch:
+
[source,bash]
....
% svn diff --no-diff-deleted --old=^/vendor/foo/dist --new=.
....
+
`--no-diff-deleted` tells SVN not to check files that are in the vendor tree but not in the main tree.
+
[NOTE]
====
With SVN, there is no concept of on or off the vendor branch. If a file that previously had local modifications no longer does, just remove any left-over cruft, such as FreeBSD version tags, so it no longer shows up in diffs against the vendor tree.
====
+
If any changes are required for the world to build with the new sources, make them now, and test until you are satisfied that everything builds and runs correctly.
. *Commit*
+
Now, you are ready to commit. Make sure you get everything in one go. Ideally, you would have done all steps in a clean tree, in which case you can just commit from the top of that tree. That is the best way to avoid surprises. If you do it properly, the tree will move atomically from a consistent state with the old code to a consistent state with the new code.
[[policies-encumbered]]
== Encumbered Files
It might occasionally be necessary to include an encumbered file in the FreeBSD source tree. For example, if a device requires a small piece of binary code to be loaded to it before the device will operate, and we do not have the source to that code, then the binary file is said to be encumbered. The following policies apply to including encumbered files in the FreeBSD source tree.
. Any file which is interpreted or executed by the system CPU(s) and not in source format is encumbered.
. Any file with a license more restrictive than BSD or GNU is encumbered.
. A file which contains downloadable binary data for use by the hardware is not encumbered, unless (1) or (2) apply to it. It must be stored in an architecture neutral ASCII format (file2c or uuencoding is recommended).
. Any encumbered file requires specific approval from the link:https://www.FreeBSD.org/administration/#t-core[Core Team] before it is added to the repository.
. Encumbered files go in [.filename]#src/contrib# or [.filename]#src/sys/contrib#.
. The entire module should be kept together. There is no point in splitting it, unless there is code-sharing with non-encumbered code.
. Object files are named [.filename]#arch/filename.o.uu#.
. Kernel files:
.. Should always be referenced in [.filename]#conf/files.*# (for build simplicity).
.. Should always be in [.filename]#LINT#, but the link:https://www.FreeBSD.org/administration/#t-core[Core Team] decides per case if it should be commented out or not. The link:https://www.FreeBSD.org/administration/#t-core[Core Team] can, of course, change their minds later on.
.. The _Release Engineer_ decides whether or not it goes into the release.
. User-land files:
.. The link:https://www.FreeBSD.org/administration/#t-core[Core team] decides if the code should be part of `make world`.
.. The link:https://www.FreeBSD.org/administration/#t-re[Release Engineering] decides if it goes into the release.
[[policies-shlib]]
== Shared Libraries
If you are adding shared library support to a port or other piece of software that does not have one, the version numbers should follow these rules. Generally, the resulting numbers will have nothing to do with the release version of the software.
The three principles of shared library building are:
* Start from `1.0`
* If there is a change that is backwards compatible, bump minor number (note that ELF systems ignore the minor number)
* If there is an incompatible change, bump major number
For instance, added functions and bugfixes result in the minor version number being bumped, while deleted functions, changed function call syntax, etc. will force the major version number to change.
Stick to version numbers of the form major.minor (`_x_._y_`). Our a.out dynamic linker does not handle version numbers of the form `_x_._y_._z_` well. Any version number after the `_y_` (i.e., the third digit) is totally ignored when comparing shared lib version numbers to decide which library to link with. Given two shared libraries that differ only in the "micro" revision, `ld.so` will link with the higher one. That is, if you link with [.filename]#libfoo.so.3.3.3#, the linker only records `3.3` in the headers, and will link with anything starting with `_libfoo.so.3_._(anything >= 3)_._(highest available)_`.
[NOTE]
====
`ld.so` will always use the highest "minor" revision. For instance, it will use [.filename]#libc.so.2.2# in preference to [.filename]#libc.so.2.0#, even if the program was initially linked with [.filename]#libc.so.2.0#.
====
In addition, our ELF dynamic linker does not handle minor version numbers at all. However, one should still specify a major and minor version number as our [.filename]#Makefile#s "do the right thing" based on the type of system.
For non-port libraries, it is also our policy to change the shared library version number only once between releases. In addition, it is our policy to change the major shared library version number only once between major OS releases (i.e., from 6.0 to 7.0). When you make a change to a system library that requires the version number to be bumped, check the [.filename]#Makefile#'s commit logs. It is the responsibility of the committer to ensure that the first such change since the release results in the shared library version number in the [.filename]#Makefile# being updated, and that any subsequent changes do not.
diff --git a/documentation/content/en/books/developers-handbook/secure/_index.adoc b/documentation/content/en/books/developers-handbook/secure/_index.adoc
index 8ed21a75c2..f490e0b2dd 100644
--- a/documentation/content/en/books/developers-handbook/secure/_index.adoc
+++ b/documentation/content/en/books/developers-handbook/secure/_index.adoc
@@ -1,210 +1,211 @@
---
title: Chapter 3. Secure Programming
authors:
- author: Murray Stokely
prev: books/developers-handbook/tools
next: books/developers-handbook/l10n
+description: Secure Programming in FreeBSD
---
[[secure]]
= Secure Programming
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 3
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[secure-synopsis]]
== Synopsis
This chapter describes some of the security issues that have plagued UNIX(R) programmers for decades and some of the new tools available to help programmers avoid writing exploitable code.
[[secure-philosophy]]
== Secure Design Methodology
Writing secure applications takes a very scrutinous and pessimistic outlook on life. Applications should be run with the principle of "least privilege" so that no process is ever running with more than the bare minimum access that it needs to accomplish its function. Previously tested code should be reused whenever possible to avoid common mistakes that others may have already fixed.
One of the pitfalls of the UNIX(R) environment is how easy it is to make assumptions about the sanity of the environment. Applications should never trust user input (in all its forms), system resources, inter-process communication, or the timing of events. UNIX(R) processes do not execute synchronously so logical operations are rarely atomic.
[[secure-bufferov]]
== Buffer Overflows
Buffer Overflows have been around since the very beginnings of the von Neumann crossref:bibliography[cod,1] architecture. They first gained widespread notoriety in 1988 with the Morris Internet worm. Unfortunately, the same basic attack remains effective today. By far the most common type of buffer overflow attack is based on corrupting the stack.
Most modern computer systems use a stack to pass arguments to procedures and to store local variables. A stack is a last in first out (LIFO) buffer in the high memory area of a process image. When a program invokes a function a new "stack frame" is created. This stack frame consists of the arguments passed to the function as well as a dynamic amount of local variable space. The "stack pointer" is a register that holds the current location of the top of the stack. Since this value is constantly changing as new values are pushed onto the top of the stack, many implementations also provide a "frame pointer" that is located near the beginning of a stack frame so that local variables can more easily be addressed relative to this value. crossref:bibliography[cod,1] The return address for function calls is also stored on the stack, and this is the cause of stack-overflow exploits since overflowing a local variable in a function can overwrite the return address of that function, potentially allowing a malicious user to execute any code he or she wants.
Although stack-based attacks are by far the most common, it would also be possible to overrun the stack with a heap-based (malloc/free) attack.
The C programming language does not perform automatic bounds checking on arrays or pointers as many other languages do. In addition, the standard C library is filled with a handful of very dangerous functions.
[.informaltable]
[cols="1,1", frame="none"]
|===
|`strcpy`(char *dest, const char *src)
|
May overflow the dest buffer
|`strcat`(char *dest, const char *src)
|
May overflow the dest buffer
|`getwd`(char *buf)
|
May overflow the buf buffer
|`gets`(char *s)
|
May overflow the s buffer
|`[vf]scanf`(const char *format, ...)
|
May overflow its arguments.
|`realpath`(char *path, char resolved_path[])
|
May overflow the path buffer
|`[v]sprintf`(char *str, const char *format, ...)
|
May overflow the str buffer.
|===
=== Example Buffer Overflow
The following example code contains a buffer overflow designed to overwrite the return address and skip the instruction immediately following the function call. (Inspired by crossref:bibliography[Phrack,4])
[.programlisting]
....
#include <stdio.h>
#include <string.h>

void manipulate(char *buffer) {
	char newbuffer[80];
	strcpy(newbuffer,buffer);
}

int main() {
	char ch,buffer[4096];
	int i=0;

	while ((buffer[i++] = getchar()) != '\n') {};

	i=1;
	manipulate(buffer);
	i=2;
	printf("The value of i is : %d\n",i);
	return 0;
}
....
Let us examine what the memory image of this process would look like if we were to input 160 spaces into our little program before hitting return.
[XXX figure here!]
Obviously more malicious input can be devised to execute actual compiled instructions (such as executing [.filename]#/bin/sh#).
=== Avoiding Buffer Overflows
The most straightforward solution to the problem of stack overflows is to always use length-restricted memory and string copy functions. `strncpy` and `strncat` are part of the standard C library. These functions accept a length value as a parameter which should be no larger than the size of the destination buffer. They will then copy up to 'length' bytes from the source to the destination. However, there are a number of problems with these functions. Neither function guarantees NUL termination if the source string is at least as long as the destination buffer. The length parameter is also used inconsistently between `strncpy` and `strncat`, so it is easy for programmers to get confused as to their proper usage. There is also a significant performance loss compared to `strcpy` when copying a short string into a large buffer, since `strncpy` NUL-fills the destination up to the size specified.
Another set of memory copy functions exists to get around these problems. The `strlcpy` and `strlcat` functions guarantee that they will always NUL-terminate the destination string when given a non-zero length argument.
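As a minimal sketch of the difference, the return value of `strlcpy` (the length of the string it tried to create) can be compared with the destination size to detect truncation; on FreeBSD both functions are declared in [.filename]#string.h#:
[.programlisting]
....
#include <stdio.h>
#include <string.h>
int main(void) {
	char buf[16];
	/* strlcpy never writes past the end of buf and always NUL-terminates it. */
	if (strlcpy(buf, "a source string that is far too long", sizeof buf) >= sizeof buf)
		fprintf(stderr, "input truncated\n");
	printf("stored: \"%s\"\n", buf);
	return 0;
}
....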
==== Compiler based run-time bounds checking
Unfortunately there is still a very large assortment of code in public use which blindly copies memory around without using any of the bounded copy routines we just discussed. Fortunately, there is a way to help prevent such attacks - run-time bounds checking, which is implemented by several C/C++ compilers.
ProPolice is one such compiler feature, and is integrated into man:gcc[1] versions 4.1 and later. It replaces and extends the earlier StackGuard man:gcc[1] extension.
ProPolice helps to protect against stack-based buffer overflows and other attacks by placing pseudo-random numbers in key areas of the stack before calling any function. When a function returns, these "canaries" are checked and if they are found to have been changed the executable is immediately aborted. Thus any attempt to modify the return address or other variables stored on the stack in an attempt to get malicious code to run is unlikely to succeed, as the attacker would also have to manage to leave the pseudo-random canaries untouched.
Recompiling your application with ProPolice is an effective means of stopping most buffer-overflow attacks, but it can still be compromised.
==== Library based run-time bounds checking
Compiler-based mechanisms are completely useless for binary-only software which you cannot recompile. For these situations there are a number of libraries which re-implement the unsafe functions of the C library (`strcpy`, `fscanf`, `getwd`, etc.) and ensure that these functions can never write past the stack pointer.
* libsafe
* libverify
* libparanoia
Unfortunately these library-based defenses have a number of shortcomings. These libraries only protect against a very small set of security related issues and they neglect to fix the actual problem. These defenses may fail if the application was compiled with -fomit-frame-pointer. Also, the LD_PRELOAD and LD_LIBRARY_PATH environment variables can be overwritten/unset by the user.
[[secure-setuid]]
== SetUID issues
There are at least 6 different IDs associated with any given process, and you must therefore be very careful with the access that your process has at any given time. In particular, all seteuid applications should give up their privileges as soon as they are no longer required.
The real user ID can only be changed by a superuser process. The login program sets this when a user initially logs in and it is seldom changed.
The effective user ID is set by the `exec()` functions if a program has its seteuid bit set. An application can call `seteuid()` at any time to set the effective user ID to either the real user ID or the saved set-user-ID. When the effective user ID is set by `exec()` functions, the previous value is saved in the saved set-user-ID.
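A minimal sketch of this privilege bracketing in a set-user-ID program, assuming the real user ID is the unprivileged one, might look like this:
[.programlisting]
....
#include <stdio.h>
#include <unistd.h>
int main(void) {
	uid_t real = getuid();		/* the invoking user */
	uid_t privileged = geteuid();	/* also held in the saved set-user-ID */
	seteuid(real);			/* give up the privilege immediately */
	/* ... untrusted work happens here ... */
	if (seteuid(privileged) == -1) {	/* regain it only when needed */
		perror("seteuid");
		return 1;
	}
	/* ... the one privileged operation ... */
	seteuid(real);			/* and drop it again */
	return 0;
}
....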
[[secure-chroot]]
== Limiting your program's environment
The traditional method of restricting a process is with the `chroot()` system call. This system call changes the root directory from which all other paths are referenced for a process and any child processes. For this call to succeed the process must have execute (search) permission on the directory being referenced. The new environment does not actually take effect until you `chdir()` into your new environment. It should also be noted that a process can easily break out of a chroot environment if it has root privilege. This could be accomplished by creating device nodes to read kernel memory, attaching a debugger to a process outside of the man:chroot[8] environment, or in many other creative ways.
The behavior of the `chroot()` system call can be controlled somewhat with the kern.chroot_allow_open_directories `sysctl` variable. When this value is set to 0, `chroot()` will fail with EPERM if there are any directories open. If set to the default value of 1, then `chroot()` will fail with EPERM if there are any directories open and the process is already subject to a `chroot()` call. For any other value, the check for open directories will be bypassed completely.
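A minimal sketch of the call sequence, using a hypothetical directory [.filename]#/var/empty-example# and the UID `65534` ("nobody") as the unprivileged account, might look like this:
[.programlisting]
....
#include <stdio.h>
#include <unistd.h>
int main(void) {
	const char *dir = "/var/empty-example";	/* hypothetical jail directory */
	if (chroot(dir) == -1 || chdir("/") == -1) {	/* chdir makes the new root take effect */
		perror(dir);
		return 1;
	}
	if (setuid(65534) == -1) {	/* drop root so the process cannot break back out */
		perror("setuid");
		return 1;
	}
	/* from here on, "/" refers to /var/empty-example */
	return 0;
}
....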
=== FreeBSD's jail functionality
The concept of a Jail extends upon the `chroot()` by limiting the powers of the superuser to create a true "virtual server". Once a prison is set up all network communication must take place through the specified IP address, and the power of "root privilege" in this jail is severely constrained.
While in a prison, any tests of superuser power within the kernel using the `suser()` call will fail. However, some calls to `suser()` have been changed to a new interface `suser_xxx()`. This function is responsible for recognizing or denying access to superuser power for imprisoned processes.
A superuser process within a jailed environment has the power to:
* Manipulate credential with `setuid`, `seteuid`, `setgid`, `setegid`, `setgroups`, `setreuid`, `setregid`, `setlogin`
* Set resource limits with `setrlimit`
* Modify some sysctl nodes (kern.hostname)
* `chroot()`
* Set flags on a vnode: `chflags`, `fchflags`
* Set attributes of a vnode such as file permission, owner, group, size, access time, and modification time.
* Bind to privileged ports in the Internet domain (ports < 1024)
`Jail` is a very useful tool for running applications in a secure environment but it does have some shortcomings. Currently, the IPC mechanisms have not been converted to the `suser_xxx` interface, so applications such as MySQL cannot be run within a jail. Superuser access may have a very limited meaning within a jail, but there is no way to specify exactly what "very limited" means.
=== POSIX(R).1e Process Capabilities
POSIX(R) has released a working draft that adds event auditing, access control lists, fine grained privileges, information labeling, and mandatory access control.
This is a work in progress and is the focus of the http://www.trustedbsd.org/[TrustedBSD] project. Some of the initial work has been committed to FreeBSD-CURRENT (cap_set_proc(3)).
[[secure-trust]]
== Trust
An application should never assume that anything about the user's environment is sane. This includes (but is certainly not limited to): user input, signals, environment variables, resources, IPC, mmaps, the filesystem working directory, file descriptors, the number of open files, etc.
You should never assume that you can catch all forms of invalid input that a user might supply. Instead, your application should use positive filtering to only allow a specific subset of inputs that you deem safe. Improper data validation has been the cause of many exploits, especially with CGI scripts on the world wide web. For filenames you need to be extra careful about paths ("../", "/"), symbolic links, and shell escape characters.
Perl has a really cool feature called "Taint" mode which can be used to prevent scripts from using data derived outside the program in an unsafe way. This mode will check command line arguments, environment variables, locale information, the results of certain syscalls (`readdir()`, `readlink()`, `getpwxxx()`), and all file input.
[[secure-race-conditions]]
== Race Conditions
A race condition is anomalous behavior caused by the unexpected dependence on the relative timing of events. In other words, a programmer incorrectly assumed that a particular event would always happen before another.
Some of the common causes of race conditions are signals, access checks, and file opens. Signals are asynchronous events by nature so special care must be taken in dealing with them. Checking access with `access(2)` then `open(2)` is clearly non-atomic. Users can move files in between the two calls. Instead, privileged applications should `seteuid()` and then call `open()` directly. Along the same lines, an application should always set a proper umask before `open()` to obviate the need for spurious `chmod()` calls.
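A small sketch of both points, using hypothetical file names, might look like this:
[.programlisting]
....
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>
int main(void) {
	int fd;
	/*
	 * Racy: between access() and open() the file could be replaced,
	 * for example with a symbolic link to a file the real user may
	 * not read:
	 *
	 *	if (access("data", R_OK) == 0)
	 *		fd = open("data", O_RDONLY);
	 */
	seteuid(getuid());		/* instead, let open() perform the check itself */
	if ((fd = open("data", O_RDONLY)) == -1) {
		perror("data");
		return 1;
	}
	close(fd);
	umask(077);			/* restrictive mode from the start... */
	fd = open("output", O_WRONLY | O_CREAT | O_EXCL, 0600);
	if (fd == -1) {			/* ...so no chmod() is needed later */
		perror("output");
		return 1;
	}
	close(fd);
	return 0;
}
....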
diff --git a/documentation/content/en/books/developers-handbook/sockets/_index.adoc b/documentation/content/en/books/developers-handbook/sockets/_index.adoc
index a224743a18..44db501822 100644
--- a/documentation/content/en/books/developers-handbook/sockets/_index.adoc
+++ b/documentation/content/en/books/developers-handbook/sockets/_index.adoc
@@ -1,893 +1,894 @@
---
title: Chapter 7. Sockets
authors:
- author: G. Adam Stanislav
prev: books/developers-handbook/partii
next: books/developers-handbook/ipv6
+description: FreeBSD Sockets
---
[[sockets]]
= Sockets
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 7
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
:imagesdir: ../../../../images/books/developers-handbook/
toc::[]
[[sockets-synopsis]]
== Synopsis
BSD sockets take interprocess communications to a new level. It is no longer necessary for the communicating processes to run on the same machine. They still _can_, but they do not have to.
Not only do these processes not have to run on the same machine, they do not have to run under the same operating system. Thanks to BSD sockets, your FreeBSD software can smoothly cooperate with a program running on a Macintosh(R), another one running on a Sun(TM) workstation, yet another one running under Windows(R) 2000, all connected with an Ethernet-based local area network.
But your software can equally well cooperate with processes running in another building, or on another continent, inside a submarine, or a space shuttle.
It can also cooperate with processes that are not part of a computer (at least not in the strict sense of the word), but of such devices as printers, digital cameras, medical equipment. Just about anything capable of digital communications.
[[sockets-diversity]]
== Networking and Diversity
We have already hinted at the _diversity_ of networking. Many different systems have to talk to each other. And they have to speak the same language. They also have to _understand_ the same language the same way.
People often think that _body language_ is universal. But it is not. Back in my early teens, my father took me to Bulgaria. We were sitting at a table in a park in Sofia, when a vendor approached us trying to sell us some roasted almonds.
I had not learned much Bulgarian by then, so, instead of saying no, I shook my head from side to side, the "universal" body language for _no_. The vendor quickly started serving us some almonds.
I then remembered I had been told that in Bulgaria shaking your head sideways meant _yes_. Quickly, I started nodding my head up and down. The vendor noticed, took his almonds, and walked away. To an uninformed observer, I did not change the body language: I continued using the language of shaking and nodding my head. What changed was the _meaning_ of the body language. At first, the vendor and I interpreted the same language as having completely different meaning. I had to adjust my own interpretation of that language so the vendor would understand.
It is the same with computers: The same symbols may have different, even outright opposite meaning. Therefore, for two computers to understand each other, they must not only agree on the same _language_, but on the same _interpretation_ of the language.
[[sockets-protocols]]
== Protocols
While various programming languages tend to have complex syntax and use a number of multi-letter reserved words (which makes them easy for the human programmer to understand), the languages of data communications tend to be very terse. Instead of multi-byte words, they often use individual _bits_. There is a very convincing reason for it: While data travels _inside_ your computer at speeds approaching the speed of light, it often travels considerably slower between two computers.
As the languages used in data communications are so terse, we usually refer to them as _protocols_ rather than languages.
As data travels from one computer to another, it always uses more than one protocol. These protocols are _layered_. The data can be compared to the inside of an onion: You have to peel off several layers of "skin" to get to the data. This is best illustrated with a picture:
.Protocol Layers
image::layers.png[]
In this example, we are trying to get an image from a web page we are connected to via an Ethernet.
The image consists of raw data, which is simply a sequence of RGB values that our software can process, i.e., convert into an image and display on our monitor.
Alas, our software has no way of knowing how the raw data is organized: Is it a sequence of RGB values, or a sequence of grayscale intensities, or perhaps of CMYK encoded colors? Is the data represented by 8-bit quanta, or are they 16 bits in size, or perhaps 4 bits? How many rows and columns does the image consist of? Should certain pixels be transparent?
I think you get the picture...
To inform our software how to handle the raw data, it is encoded as a PNG file. It could be a GIF, or a JPEG, but it is a PNG.
And PNG is a protocol.
At this point, I can hear some of you yelling, _"No, it is not! It is a file format!"_
Well, of course it is a file format. But from the perspective of data communications, a file format is a protocol: The file structure is a _language_, a terse one at that, communicating to our _process_ how the data is organized. Ergo, it is a _protocol_.
Alas, if all we received was the PNG file, our software would be facing a serious problem: How is it supposed to know the data is representing an image, as opposed to some text, or perhaps a sound, or what not? Secondly, how is it supposed to know the image is in the PNG format as opposed to GIF, or JPEG, or some other image format?
To obtain that information, we are using another protocol: HTTP. This protocol can tell us exactly that the data represents an image, and that it uses the PNG protocol. It can also tell us some other things, but let us stay focused on protocol layers here.
So, now we have some data wrapped in the PNG protocol, wrapped in the HTTP protocol. How did we get it from the server?
By using TCP/IP over Ethernet, that is how. Indeed, that is three more protocols. Instead of continuing inside out, I am now going to talk about Ethernet, simply because it is easier to explain the rest that way.
Ethernet is an interesting system of connecting computers in a _local area network_ (LAN). Each computer has a _network interface card_ (NIC), which has a unique 48-bit ID called its _address_. No two Ethernet NICs in the world have the same address.
These NICs are all connected with each other. Whenever one computer wants to communicate with another in the same Ethernet LAN, it sends a message over the network. Every NIC sees the message. But as part of the Ethernet _protocol_, the data contains the address of the destination NIC (among other things). So, only one of all the network interface cards will pay attention to it, the rest will ignore it.
But not all computers are connected to the same network. Just because we have received the data over our Ethernet does not mean it originated in our own local area network. It could have come to us from some other network (which may not even be Ethernet based) connected with our own network via the Internet.
All data is transferred over the Internet using IP, which stands for _Internet Protocol_. Its basic role is to let us know where in the world the data has arrived from, and where it is supposed to go to. It does not _guarantee_ we will receive the data, only that we will know where it came from _if_ we do receive it.
Even if we do receive the data, IP does not guarantee we will receive various chunks of data in the same order the other computer has sent it to us. So, we can receive the center of our image before we receive the upper left corner and after the lower right, for example.
It is TCP (_Transmission Control Protocol_) that asks the sender to resend any lost data and that places it all into the proper order.
All in all, it took _five_ different protocols for one computer to communicate to another what an image looks like. We received the data wrapped into the PNG protocol, which was wrapped into the HTTP protocol, which was wrapped into the TCP protocol, which was wrapped into the IP protocol, which was wrapped into the Ethernet protocol.
Oh, and by the way, there probably were several other protocols involved somewhere on the way. For example, if our LAN was connected to the Internet through a dial-up call, it used the PPP protocol over the modem which used one (or several) of the various modem protocols, et cetera, et cetera, et cetera...
As a developer you should be asking by now, _"How am I supposed to handle it all?"_
Luckily for you, you are _not_ supposed to handle it all. You _are_ supposed to handle some of it, but not all of it. Specifically, you need not worry about the physical connection (in our case Ethernet and possibly PPP, etc). Nor do you need to handle the Internet Protocol, or the Transmission Control Protocol.
In other words, you do not have to do anything to receive the data from the other computer. Well, you do have to _ask_ for it, but that is almost as simple as opening a file.
Once you have received the data, it is up to you to figure out what to do with it. In our case, you would need to understand the HTTP protocol and the PNG file structure.
To use an analogy, all the internetworking protocols become a gray area: Not so much because we do not understand how it works, but because we are no longer concerned about it. The sockets interface takes care of this gray area for us:
.Sockets Covered Protocol Layers
image::slayers.png[]
We only need to understand any protocols that tell us how to _interpret the data_, not how to _receive_ it from another process, nor how to _send_ it to another process.
[[sockets-model]]
== The Sockets Model
BSD sockets are built on the basic UNIX(R) model: _Everything is a file._ In our example, then, sockets would let us receive an _HTTP file_, so to speak. It would then be up to us to extract the _PNG file_ from it.
Due to the complexity of internetworking, we cannot just use the `open` system call, or the `open()` C function. Instead, we need to take several steps to "open" a socket.
Once we do, however, we can start treating the _socket_ the same way we treat any _file descriptor_: We can `read` from it, `write` to it, `pipe` it, and, eventually, `close` it.
[[sockets-essential-functions]]
== Essential Socket Functions
While FreeBSD offers different functions to work with sockets, we only _need_ four to "open" a socket. And in some cases we only need two.
[[sockets-client-server]]
=== The Client-Server Difference
Typically, one of the ends of a socket-based data communication is a _server_, the other is a _client_.
[[sockets-common-elements]]
==== The Common Elements
[[sockets-socket]]
===== `socket`
The one function used by both clients and servers is man:socket[2]. It is declared this way:
[.programlisting]
....
int socket(int domain, int type, int protocol);
....
The return value is of the same type as that of `open`, an integer. FreeBSD allocates its value from the same pool as that of file handles. That is what allows sockets to be treated the same way as files.
The `domain` argument tells the system what _protocol family_ you want it to use. Many of them exist, some are vendor specific, others are very common. They are declared in [.filename]#sys/socket.h#.
Use `PF_INET` for UDP, TCP and other Internet protocols (IPv4).
Five values are defined for the `type` argument, again, in [.filename]#sys/socket.h#. All of them start with "`SOCK_`". The most common one is `SOCK_STREAM`, which tells the system you are asking for a _reliable stream delivery service_ (which is TCP when used with `PF_INET`).
If you asked for `SOCK_DGRAM`, you would be requesting a _connectionless datagram delivery service_ (in our case, UDP).
If you wanted to be in charge of the low-level protocols (such as IP), or even network interfaces (e.g., the Ethernet), you would need to specify `SOCK_RAW`.
Finally, the `protocol` argument depends on the previous two arguments, and is not always meaningful. In that case, use `0` for its value.
[NOTE]
.The Unconnected Socket
====
Nowhere in the `socket` function have we specified to what other system we should be connected. Our newly created socket remains _unconnected_.
This is on purpose: To use a telephone analogy, we have just attached a modem to the phone line. We have neither told the modem to make a call, nor to answer if the phone rings.
====
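Putting the three arguments together, a minimal sketch that creates (and immediately closes) such an unconnected TCP socket might look like this:
[.programlisting]
....
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>
int main(void) {
	int s;
	/* Ask for a reliable stream (TCP) socket in the IPv4 Internet family. */
	if ((s = socket(PF_INET, SOCK_STREAM, 0)) < 0) {
		perror("socket");
		return 1;
	}
	/* The socket exists, but is not yet connected to anything. */
	close(s);
	return 0;
}
....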
[[sockets-sockaddr]]
===== `sockaddr`
Various functions of the sockets family expect the address of (or pointer to, to use C terminology) a small area of memory. The various C declarations in [.filename]#sys/socket.h# refer to it as `struct sockaddr`. This structure is declared in the same file:
[.programlisting]
....
/*
* Structure used by kernel to store most
* addresses.
*/
struct sockaddr {
unsigned char sa_len; /* total length */
sa_family_t sa_family; /* address family */
char sa_data[14]; /* actually longer; address value */
};
#define SOCK_MAXADDRLEN 255 /* longest possible addresses */
....
Please note the _vagueness_ with which the `sa_data` field is declared, just as an array of `14` bytes, with the comment hinting there can be more than `14` of them.
This vagueness is quite deliberate. Sockets is a very powerful interface. While most people perhaps think of it as nothing more than the Internet interface-and most applications probably use it for that nowadays-sockets can be used for just about _any_ kind of interprocess communications, of which the Internet (or, more precisely, IP) is only one.
The [.filename]#sys/socket.h# refers to the various types of protocols sockets will handle as _address families_, and lists them right before the definition of `sockaddr`:
[.programlisting]
....
/*
* Address families.
*/
#define AF_UNSPEC 0 /* unspecified */
#define AF_LOCAL 1 /* local to host (pipes, portals) */
#define AF_UNIX AF_LOCAL /* backward compatibility */
#define AF_INET 2 /* internetwork: UDP, TCP, etc. */
#define AF_IMPLINK 3 /* arpanet imp addresses */
#define AF_PUP 4 /* pup protocols: e.g. BSP */
#define AF_CHAOS 5 /* mit CHAOS protocols */
#define AF_NS 6 /* XEROX NS protocols */
#define AF_ISO 7 /* ISO protocols */
#define AF_OSI AF_ISO
#define AF_ECMA 8 /* European computer manufacturers */
#define AF_DATAKIT 9 /* datakit protocols */
#define AF_CCITT 10 /* CCITT protocols, X.25 etc */
#define AF_SNA 11 /* IBM SNA */
#define AF_DECnet 12 /* DECnet */
#define AF_DLI 13 /* DEC Direct data link interface */
#define AF_LAT 14 /* LAT */
#define AF_HYLINK 15 /* NSC Hyperchannel */
#define AF_APPLETALK 16 /* Apple Talk */
#define AF_ROUTE 17 /* Internal Routing Protocol */
#define AF_LINK 18 /* Link layer interface */
#define pseudo_AF_XTP 19 /* eXpress Transfer Protocol (no AF) */
#define AF_COIP 20 /* connection-oriented IP, aka ST II */
#define AF_CNT 21 /* Computer Network Technology */
#define pseudo_AF_RTIP 22 /* Help Identify RTIP packets */
#define AF_IPX 23 /* Novell Internet Protocol */
#define AF_SIP 24 /* Simple Internet Protocol */
#define pseudo_AF_PIP 25 /* Help Identify PIP packets */
#define AF_ISDN 26 /* Integrated Services Digital Network*/
#define AF_E164 AF_ISDN /* CCITT E.164 recommendation */
#define pseudo_AF_KEY 27 /* Internal key-management function */
#define AF_INET6 28 /* IPv6 */
#define AF_NATM 29 /* native ATM access */
#define AF_ATM 30 /* ATM */
#define pseudo_AF_HDRCMPLT 31 /* Used by BPF to not rewrite headers
* in interface output routine
*/
#define AF_NETGRAPH 32 /* Netgraph sockets */
#define AF_SLOW 33 /* 802.3ad slow protocol */
#define AF_SCLUSTER 34 /* Sitara cluster protocol */
#define AF_ARP 35
#define AF_BLUETOOTH 36 /* Bluetooth sockets */
#define AF_MAX 37
....
The one used for IP is AF_INET. It is a symbol for the constant `2`.
It is the _address family_ listed in the `sa_family` field of `sockaddr` that decides how exactly the vaguely named bytes of `sa_data` will be used.
Specifically, whenever the _address family_ is AF_INET, we can use `struct sockaddr_in` found in [.filename]#netinet/in.h#, wherever `sockaddr` is expected:
[.programlisting]
....
/*
* Socket address, internet style.
*/
struct sockaddr_in {
uint8_t sin_len;
sa_family_t sin_family;
in_port_t sin_port;
struct in_addr sin_addr;
char sin_zero[8];
};
....
We can visualize its organization this way:
.sockaddr_in structure
image::sain.png[]
The three important fields are `sin_family`, which is byte 1 of the structure, `sin_port`, a 16-bit value found in bytes 2 and 3, and `sin_addr`, a 32-bit integer representation of the IP address, stored in bytes 4-7.
Now, let us try to fill it out. Let us assume we are trying to write a client for the _daytime_ protocol, which simply states that its server will write a text string representing the current date and time to port 13. We want to use TCP/IP, so we need to specify `AF_INET` in the address family field. `AF_INET` is defined as `2`. Let us use the IP address of `192.43.244.18`, which is the time server of the US federal government (`time.nist.gov`).
.Specific example of sockaddr_in
image::sainfill.png[]
By the way, the `sin_addr` field is declared as being of the `struct in_addr` type, which is defined in [.filename]#netinet/in.h#:
[.programlisting]
....
/*
* Internet address (a structure for historical reasons)
*/
struct in_addr {
in_addr_t s_addr;
};
....
In addition, `in_addr_t` is a 32-bit integer.
The `192.43.244.18` is just a convenient notation of expressing a 32-bit integer by listing all of its 8-bit bytes, starting with the _most significant_ one.
So far, we have viewed `sockaddr` as an abstraction. Our computer does not store `short` integers as a single 16-bit entity, but as a sequence of 2 bytes. Similarly, it stores 32-bit integers as a sequence of 4 bytes.
Suppose we coded something like this:
[.programlisting]
....
sa.sin_family = AF_INET;
sa.sin_port = 13;
sa.sin_addr.s_addr = (((((192 << 8) | 43) << 8) | 244) << 8) | 18;
....
What would the result look like?
Well, that depends, of course. On a Pentium(R), or other x86, based computer, it would look like this:
.sockaddr_in on an Intel system
image::sainlsb.png[]
On a different system, it might look like this:
.sockaddr_in on an MSB system
image::sainmsb.png[]
And on a PDP it might look different yet. But the above two are the most common ways in use today.
Ordinarily, wanting to write portable code, programmers pretend that these differences do not exist. And they get away with it (except when they code in assembly language). Alas, you cannot get away with it that easily when coding for sockets.
Why?
Because when communicating with another computer, you usually do not know whether it stores data _most significant byte_ (MSB) or _least significant byte_ (LSB) first.
You might be wondering, _"So, will sockets not handle it for me?"_
It will not.
While that answer may surprise you at first, remember that the general sockets interface only understands the `sa_len` and `sa_family` fields of the `sockaddr` structure. You do not have to worry about the byte order there (of course, on FreeBSD `sa_family` is only 1 byte anyway, but many other UNIX(R) systems do not have `sa_len` and use 2 bytes for `sa_family`, and expect the data in whatever order is native to the computer).
But the rest of the data is just `sa_data[14]` as far as sockets goes. Depending on the _address family_, sockets just forwards that data to its destination.
Indeed, when we enter a port number, it is because we want the other computer to know what service we are asking for. And, when we are the server, we read the port number so we know what service the other computer is expecting from us. Either way, sockets only has to forward the port number as data. It does not interpret it in any way.
Similarly, we enter the IP address to tell everyone on the way where to send our data to. Sockets, again, only forwards it as data.
That is why we (the _programmers_, not the _sockets_) have to distinguish between the byte order used by our computer and a conventional byte order to send the data in to the other computer.
We will call the byte order our computer uses the _host byte order_, or just the _host order_.
There is a convention of sending the multi-byte data over IP _MSB first_. This, we will refer to as the _network byte order_, or simply the _network order_.
Now, if we compiled the above code for an Intel based computer, our _host byte order_ would produce:
.Host byte order on an Intel system
image::sainlsb.png[]
But the _network byte order_ requires that we store the data MSB first:
.Network byte order
image::sainmsb.png[]
Unfortunately, our _host order_ is the exact opposite of the _network order_.
We have several ways of dealing with it. One would be to _reverse_ the values in our code:
[.programlisting]
....
sa.sin_family = AF_INET;
sa.sin_port = 13 << 8;
sa.sin_addr.s_addr = (((((18 << 8) | 244) << 8) | 43) << 8) | 192;
....
This will _trick_ our compiler into storing the data in the _network byte order_. In some cases, this is exactly the way to do it (e.g., when programming in assembly language). In most cases, however, it can cause a problem.
Suppose, you wrote a sockets-based program in C. You know it is going to run on a Pentium(R), so you enter all your constants in reverse and force them to the _network byte order_. It works well.
Then, some day, your trusted old Pentium(R) becomes a rusty old Pentium(R). You replace it with a system whose _host order_ is the same as the _network order_. You need to recompile all your software. All of your software continues to perform well, except the one program you wrote.
You have since forgotten that you had forced all of your constants to the opposite of the _host order_. You spend some quality time tearing out your hair, calling the names of all gods you ever heard of (and some you made up), hitting your monitor with a nerf bat, and performing all the other traditional ceremonies of trying to figure out why something that has worked so well is suddenly not working at all.
Eventually, you figure it out, say a couple of swear words, and start rewriting your code.
Luckily, you are not the first one to face the problem. Someone else has created the man:htons[3] and man:htonl[3] C functions to convert a `short` and `long` respectively from the _host byte order_ to the _network byte order_, and the man:ntohs[3] and man:ntohl[3] C functions to go the other way.
On _MSB-first_ systems these functions do nothing. On _LSB-first_ systems they convert values to the proper order.
So, regardless of what system your software is compiled on, your data will end up in the correct order if you use these functions.
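For example, the `sockaddr_in` assignments shown earlier become portable when written with these functions, which are declared in [.filename]#arpa/inet.h#:
[.programlisting]
....
/* The same assignment as before, but portable: htons() and htonl()
 * convert from host order to network order, and are no-ops on
 * MSB-first systems. */
sa.sin_family = AF_INET;
sa.sin_port = htons(13);
sa.sin_addr.s_addr = htonl((((((192 << 8) | 43) << 8) | 244) << 8) | 18);
....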
[[sockets-client-functions]]
==== Client Functions
Typically, the client initiates the connection to the server. The client knows which server it is about to call: It knows its IP address, and it knows the _port_ the server resides at. It is akin to you picking up the phone and dialing the number (the _address_), then, after someone answers, asking for the person in charge of wingdings (the _port_).
[[sockets-connect]]
===== `connect`
Once a client has created a socket, it needs to connect it to a specific port on a remote system. It uses man:connect[2]:
[.programlisting]
....
int connect(int s, const struct sockaddr *name, socklen_t namelen);
....
The `s` argument is the socket, i.e., the value returned by the `socket` function. The `name` is a pointer to `sockaddr`, the structure we have talked about extensively. Finally, `namelen` informs the system how many bytes are in our `sockaddr` structure.
If `connect` is successful, it returns `0`. Otherwise it returns `-1` and stores the error code in `errno`.
There are many reasons why `connect` may fail. For example, when attempting an Internet connection, the IP address may not exist, the remote host may be down or too busy, or it may not have a server listening at the specified port. Or it may outright _refuse_ any request for the specific service.
[[sockets-first-client]]
===== Our First Client
We now know enough to write a very simple client, one that will get current time from `192.43.244.18` and print it to [.filename]#stdout#.
[.programlisting]
....
/*
* daytime.c
*
* Programmed by G. Adam Stanislav
*/
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
int main() {
register int s;
register int bytes;
struct sockaddr_in sa;
char buffer[BUFSIZ+1];
if ((s = socket(PF_INET, SOCK_STREAM, 0)) < 0) {
perror("socket");
return 1;
}
bzero(&sa, sizeof sa);
sa.sin_family = AF_INET;
sa.sin_port = htons(13);
sa.sin_addr.s_addr = htonl((((((192 << 8) | 43) << 8) | 244) << 8) | 18);
if (connect(s, (struct sockaddr *)&sa, sizeof sa) < 0) {
perror("connect");
close(s);
return 2;
}
while ((bytes = read(s, buffer, BUFSIZ)) > 0)
write(1, buffer, bytes);
close(s);
return 0;
}
....
Go ahead, enter it in your editor, save it as [.filename]#daytime.c#, then compile and run it:
[source,bash]
....
% cc -O3 -o daytime daytime.c
% ./daytime
52079 01-06-19 02:29:25 50 0 1 543.9 UTC(NIST) *
%
....
In this case, the date was June 19, 2001, the time was 02:29:25 UTC. Naturally, your results will vary.
[[sockets-server-functions]]
==== Server Functions
The typical server does not initiate the connection. Instead, it waits for a client to call it and request services. It does not know when the client will call, nor how many clients will call. It may be just sitting there, waiting patiently, one moment; the next moment, it can find itself swamped with requests from a number of clients, all calling in at the same time.
The sockets interface offers three basic functions to handle this.
[[sockets-bind]]
===== `bind`
Ports are like extensions to a phone line: After you dial a number, you dial the extension to get to a specific person or department.
There are 65535 IP ports, but a server usually processes requests that come in on only one of them. It is like telling the phone room operator that we are now at work and available to answer the phone at a specific extension. We use man:bind[2] to tell sockets which port we want to serve.
[.programlisting]
....
int bind(int s, const struct sockaddr *addr, socklen_t addrlen);
....
Besides specifying the port in `addr`, the server may include its IP address. However, it can just use the symbolic constant INADDR_ANY to indicate it will serve all requests to the specified port regardless of what its IP address is. This symbol, along with several similar ones, is declared in [.filename]#netinet/in.h#:
[.programlisting]
....
#define INADDR_ANY (u_int32_t)0x00000000
....
Suppose we were writing a server for the _daytime_ protocol over TCP/IP. Recall that it uses port 13. Our `sockaddr_in` structure would look like this:
.Example Server sockaddr_in
image::sainserv.png[]
[[sockets-listen]]
===== `listen`
To continue our office phone analogy, after you have told the phone central operator what extension you will be at, you now walk into your office, and make sure your own phone is plugged in and the ringer is turned on. Plus, you make sure your call waiting is activated, so you can hear the phone ring even while you are talking to someone.
The server ensures all of that with the man:listen[2] function.
[.programlisting]
....
int listen(int s, int backlog);
....
In here, the `backlog` variable tells sockets how many incoming requests to accept while you are busy processing the last request. In other words, it determines the maximum size of the queue of pending connections.
[[sockets-accept]]
===== `accept`
After you hear the phone ringing, you accept the call by answering it. You have now established a connection with your client. This connection remains active until either you or your client hang up.
The server accepts the connection by using the man:accept[2] function.
[.programlisting]
....
int accept(int s, struct sockaddr *addr, socklen_t *addrlen);
....
Note that this time `addrlen` is a pointer. This is necessary because in this case it is `accept` that fills out `addr`, the `sockaddr_in` structure.
The return value is an integer. Indeed, `accept` returns a _new socket_. You will use this new socket to communicate with the client.
What happens to the old socket? It continues to listen for more requests (remember the `backlog` variable we passed to `listen`?) until we `close` it.
Now, the new socket is meant only for communications. It is fully connected. We cannot pass it to `listen` again, trying to accept additional connections.
[[sockets-first-server]]
===== Our First Server
Our first server will be somewhat more complex than our first client was: Not only do we have more sockets functions to use, but we need to write it as a daemon.
This is best achieved by creating a _child process_ after binding the port. The main process then exits and returns control to the shell (or whatever program invoked it).
The child calls `listen`, then starts an endless loop, which accepts a connection, serves it, and eventually closes its socket.
[.programlisting]
....
/*
* daytimed - a port 13 server
*
* Programmed by G. Adam Stanislav
* June 19, 2001
*/
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#define BACKLOG 4
int main() {
register int s, c;
socklen_t b;
struct sockaddr_in sa;
time_t t;
struct tm *tm;
FILE *client;
if ((s = socket(PF_INET, SOCK_STREAM, 0)) < 0) {
perror("socket");
return 1;
}
bzero(&sa, sizeof sa);
sa.sin_family = AF_INET;
sa.sin_port = htons(13);
if (INADDR_ANY)
sa.sin_addr.s_addr = htonl(INADDR_ANY);
if (bind(s, (struct sockaddr *)&sa, sizeof sa) < 0) {
perror("bind");
return 2;
}
switch (fork()) {
case -1:
perror("fork");
return 3;
break;
default:
close(s);
return 0;
break;
case 0:
break;
}
listen(s, BACKLOG);
for (;;) {
b = sizeof sa;
if ((c = accept(s, (struct sockaddr *)&sa, &b)) < 0) {
perror("daytimed accept");
return 4;
}
if ((client = fdopen(c, "w")) == NULL) {
perror("daytimed fdopen");
return 5;
}
if ((t = time(NULL)) < 0) {
perror("daytimed time");
return 6;
}
tm = gmtime(&t);
fprintf(client, "%.4i-%.2i-%.2iT%.2i:%.2i:%.2iZ\n",
tm->tm_year + 1900,
tm->tm_mon + 1,
tm->tm_mday,
tm->tm_hour,
tm->tm_min,
tm->tm_sec);
fclose(client);
}
}
....
We start by creating a socket. Then we fill out the `sockaddr_in` structure in `sa`. Note the conditional use of INADDR_ANY:
[.programlisting]
....
if (INADDR_ANY)
sa.sin_addr.s_addr = htonl(INADDR_ANY);
....
Its value is `0`. Since we have just used `bzero` on the entire structure, it would be redundant to set it to `0` again. But if we port our code to some other system where INADDR_ANY is perhaps not a zero, we need to assign it to `sa.sin_addr.s_addr`. Most modern C compilers are clever enough to notice that INADDR_ANY is a constant. As long as it is a zero, they will optimize the entire conditional statement out of the code.
After we have called `bind` successfully, we are ready to become a _daemon_: We use `fork` to create a child process. In both the parent and the child, the `s` variable is our socket. The parent process will not need it, so it calls `close`, then it returns `0` to inform its own parent that it has terminated successfully.
Meanwhile, the child process continues working in the background. It calls `listen` and sets its backlog to `4`. It does not need a large value here because _daytime_ is not a protocol many clients request all the time, and because it can process each request instantly anyway.
Finally, the daemon starts an endless loop, which performs the following steps:
[.procedure]
. Call `accept`. It waits here until a client contacts it. At that point, it receives a new socket, `c`, which it can use to communicate with this particular client.
. It uses the C function `fdopen` to turn the socket from a low-level _file descriptor_ to a C-style `FILE` pointer. This will allow the use of `fprintf` later on.
. It checks the time, and prints it in the _ISO 8601_ format to the `client` "file". It then uses `fclose` to close the file. That will automatically close the socket as well.
We can _generalize_ this, and use it as a model for many other servers:
.Sequential Server
image::serv.png[]
This flowchart is good for _sequential servers_, i.e., servers that can serve one client at a time, just as we were able to with our _daytime_ server. This is only possible whenever there is no real "conversation" going on between the client and the server: As soon as the server detects a connection to the client, it sends out some data and closes the connection. The entire operation may take nanoseconds, and it is finished.
The advantage of this flowchart is that, except for the brief moment after the parent ``fork``s and before it exits, there is always only one _process_ active: Our server does not take up much memory and other system resources.
Note that we have added _initialize daemon_ in our flowchart. We did not need to initialize our own daemon, but this is a good place in the flow of the program to set up any `signal` handlers, open any files we may need, etc.
Just about everything in the flow chart can be used literally on many different servers. The _serve_ entry is the exception. We think of it as a _"black box"_, i.e., something you design specifically for your own server, and just "plug it into the rest."
Not all protocols are that simple. Many receive a request from the client, reply to it, then receive another request from the same client. As a result, they do not know in advance how long they will be serving the client. Such servers usually start a new process for each client. While the new process is serving its client, the daemon can continue listening for more connections.
Now, go ahead, save the above source code as [.filename]#daytimed.c# (it is customary to end the names of daemons with the letter `d`). After you have compiled it, try running it:
[source,bash]
....
% ./daytimed
bind: Permission denied
%
....
What happened here? As you will recall, the _daytime_ protocol uses port 13. But all ports below 1024 are reserved to the superuser (otherwise, anyone could start a daemon pretending to serve a commonly used port, while causing a security breach).
Try again, this time as the superuser:
[source,bash]
....
# ./daytimed
#
....
What... Nothing? Let us try again:
[source,bash]
....
# ./daytimed
bind: Address already in use
#
....
Every port can only be bound by one program at a time. Our first attempt was indeed successful: It started the child daemon and returned quietly. It is still running and will continue to run until you either kill it, or any of its system calls fail, or you reboot the system.
Fine, we know it is running in the background. But is it working? How do we know it is a proper _daytime_ server? Simple:
[source,bash]
....
% telnet localhost 13
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
2001-06-19T21:04:42Z
Connection closed by foreign host.
%
....
telnet tried the new IPv6, and failed. It retried with IPv4 and succeeded. The daemon works.
If you have access to another UNIX(R) system via telnet, you can use it to test accessing the server remotely. My computer does not have a static IP address, so this is what I did:
[source,bash]
....
% who
whizkid ttyp0 Jun 19 16:59 (216.127.220.143)
xxx ttyp1 Jun 19 16:06 (xx.xx.xx.xx)
% telnet 216.127.220.143 13
Trying 216.127.220.143...
Connected to r47.bfm.org.
Escape character is '^]'.
2001-06-19T21:31:11Z
Connection closed by foreign host.
%
....
Again, it worked. Will it work using the domain name?
[source,bash]
....
% telnet r47.bfm.org 13
Trying 216.127.220.143...
Connected to r47.bfm.org.
Escape character is '^]'.
2001-06-19T21:31:40Z
Connection closed by foreign host.
%
....
By the way, telnet prints the _Connection closed by foreign host_ message after our daemon has closed the socket. This shows us that, indeed, using `fclose(client);` in our code works as advertised.
[[sockets-helper-functions]]
== Helper Functions
The FreeBSD C library contains many helper functions for sockets programming. For example, in our sample client we hard coded the `time.nist.gov` IP address. But we do not always know the IP address. Even if we do, our software is more flexible if it allows the user to enter the IP address, or even the domain name.
[[sockets-gethostbyname]]
=== `gethostbyname`
While there is no way to pass the domain name directly to any of the sockets functions, the FreeBSD C library comes with the man:gethostbyname[3] and man:gethostbyname2[3] functions, declared in [.filename]#netdb.h#.
[.programlisting]
....
struct hostent * gethostbyname(const char *name);
struct hostent * gethostbyname2(const char *name, int af);
....
Both return a pointer to the `hostent` structure, with much information about the domain. For our purposes, the `h_addr_list[0]` field of the structure points at `h_length` bytes of the correct address, already stored in the _network byte order_.
This allows us to create a much more flexible-and much more useful-version of our daytime program:
[.programlisting]
....
/*
* daytime.c
*
* Programmed by G. Adam Stanislav
* 19 June 2001
*/
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>
#include <unistd.h>
int main(int argc, char *argv[]) {
register int s;
register int bytes;
struct sockaddr_in sa;
struct hostent *he;
char buf[BUFSIZ+1];
char *host;
if ((s = socket(PF_INET, SOCK_STREAM, 0)) < 0) {
perror("socket");
return 1;
}
bzero(&sa, sizeof sa);
sa.sin_family = AF_INET;
sa.sin_port = htons(13);
host = (argc > 1) ? (char *)argv[1] : "time.nist.gov";
if ((he = gethostbyname(host)) == NULL) {
herror(host);
return 2;
}
bcopy(he->h_addr_list[0],&sa.sin_addr, he->h_length);
if (connect(s, (struct sockaddr *)&sa, sizeof sa) < 0) {
perror("connect");
return 3;
}
while ((bytes = read(s, buf, BUFSIZ)) > 0)
write(1, buf, bytes);
close(s);
return 0;
}
....
We now can type a domain name (or an IP address, it works both ways) on the command line, and the program will try to connect to its _daytime_ server. Otherwise, it will still default to `time.nist.gov`. However, even in this case we will use `gethostbyname` rather than hard coding `192.43.244.18`. That way, even if its IP address changes in the future, we will still find it.
Since it takes virtually no time to get the time from your local server, you could run daytime twice in a row: First to get the time from `time.nist.gov`, the second time from your own system. You can then compare the results and see how exact your system clock is:
[source,bash]
....
% daytime ; daytime localhost
52080 01-06-20 04:02:33 50 0 0 390.2 UTC(NIST) *
2001-06-20T04:02:35Z
%
....
As you can see, my system was two seconds ahead of the NIST time.
[[sockets-getservbyname]]
=== `getservbyname`
Sometimes you may not be sure what port a certain service uses. The man:getservbyname[3] function, also declared in [.filename]#netdb.h#, comes in very handy in those cases:
[.programlisting]
....
struct servent * getservbyname(const char *name, const char *proto);
....
The `servent` structure contains the `s_port` field, which holds the proper port number, already in _network byte order_.
Had we not known the correct port for the _daytime_ service, we could have found it this way:
[.programlisting]
....
struct servent *se;
...
if ((se = getservbyname("daytime", "tcp")) == NULL) {
fprintf(stderr, "Cannot determine which port to use.\n");
return 7;
}
sa.sin_port = se->s_port;
....
You usually do know the port. But if you are developing a new protocol, you may be testing it on an unofficial port. Some day, you will register the protocol and its port (if nowhere else, at least in your [.filename]#/etc/services#, which is where `getservbyname` looks). Instead of returning an error in the above code, you just use the temporary port number. Once you have listed the protocol in [.filename]#/etc/services#, your software will find its port without you having to rewrite the code.
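As a sketch, a program for a hypothetical protocol named `myproto` could fall back to an arbitrary test port while the protocol is still unregistered:
[.programlisting]
....
struct servent *se;
...
if ((se = getservbyname("myproto", "tcp")) != NULL)
	sa.sin_port = se->s_port;	/* already in network byte order */
else
	sa.sin_port = htons(49152);	/* hypothetical unofficial test port */
....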
[[sockets-concurrent-servers]]
== Concurrent Servers
Unlike a sequential server, a _concurrent server_ has to be able to serve more than one client at a time. For example, a _chat server_ may be serving a specific client for hours-it cannot wait till it stops serving a client before it serves the next one.
This requires a significant change in our flowchart:
.Concurrent Server
image::serv2.png[]
We moved the _serve_ from the _daemon process_ to its own _server process_. However, because each child process inherits all open files (and a socket is treated just like a file), the new process inherits not only the _"accepted handle,"_ i.e., the socket returned by the `accept` call, but also the _top socket_, i.e., the one opened by the top process right at the beginning.
However, the _server process_ does not need this socket and should `close` it immediately. Similarly, the _daemon process_ no longer needs the _accepted socket_, and not only should, but _must_ `close` it-otherwise, it will run out of available _file descriptors_ sooner or later.
After the _server process_ is done serving, it should close the _accepted socket_. Instead of returning to `accept`, it now exits.
Under UNIX(R), a process does not really _exit_. Instead, it _returns_ to its parent. Typically, a parent process ``wait``s for its child process, and obtains a return value. However, our _daemon process_ cannot simply stop and wait. That would defeat the whole purpose of creating additional processes. But if it never does `wait`, its children will become _zombies_-no longer functional but still roaming around.
For that reason, the _daemon process_ needs to set up _signal handlers_ in its _initialize daemon_ phase. At least the SIGCHLD signal has to be processed, so the daemon can collect the return values of its terminated children and release the system resources they are taking up.
That is why our flowchart now contains a _process signals_ box, which is not connected to any other box. By the way, many servers also process SIGHUP, typically interpreting it as a signal from the superuser that they should reread their configuration files. This allows us to change settings without having to kill and restart these servers.
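As a minimal sketch of the pieces described above, assuming a hypothetical `serve()` function as the protocol-specific "black box" and a listening socket `s`, the daemon installs a SIGCHLD handler to reap its children, and each process closes the socket it does not need:
[.programlisting]
....
#include <netinet/in.h>
#include <signal.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>
/* Protocol-specific handler -- the "black box" (hypothetical). */
extern void serve(int c);
static void reap(int sig) {
	/* Collect finished children so they do not linger as zombies. */
	while (waitpid(-1, NULL, WNOHANG) > 0)
		;
}
void concurrent_loop(int s) {
	struct sockaddr_in sa;
	socklen_t len;
	int c;
	signal(SIGCHLD, reap);		/* part of "initialize daemon" */
	for (;;) {
		len = sizeof sa;
		if ((c = accept(s, (struct sockaddr *)&sa, &len)) < 0)
			continue;
		switch (fork()) {
		case 0:			/* server process */
			close(s);	/* does not need the top socket */
			serve(c);
			close(c);	/* done with the accepted socket */
			_exit(0);
		case -1:		/* fork failed */
		default:		/* daemon process */
			close(c);	/* must close the accepted socket */
			break;
		}
	}
}
....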
diff --git a/documentation/content/en/books/developers-handbook/testing/_index.adoc b/documentation/content/en/books/developers-handbook/testing/_index.adoc
index 1b8459bfd3..e8d14c74c1 100644
--- a/documentation/content/en/books/developers-handbook/testing/_index.adoc
+++ b/documentation/content/en/books/developers-handbook/testing/_index.adoc
@@ -1,167 +1,168 @@
---
title: Chapter 6. Regression and Performance Testing
authors:
prev: books/developers-handbook/policies
next: books/developers-handbook/partii
+description: Regression and Performance Testing
---
[[testing]]
= Regression and Performance Testing
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 6
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
Regression tests are used to exercise a particular bit of the system to check that it works as expected, and to make sure that old bugs are not reintroduced.
The FreeBSD regression testing tools can be found in the FreeBSD source tree in the directory [.filename]#src/tools/regression#.
[[testing-micro-benchmark]]
== Micro Benchmark Checklist
This section contains hints for doing proper micro-benchmarking on FreeBSD or of FreeBSD itself.
It is not possible to use all of the suggestions below every single time, but the more of them that are used, the better the benchmark's ability to detect small differences will be.
* Disable APM and any other kind of clock fiddling (ACPI ?).
* Run in single user mode. E.g., man:cron[8], and other daemons only add noise. The man:sshd[8] daemon can also cause problems. If ssh access is required during testing either disable the SSHv1 key regeneration, or kill the parent `sshd` daemon during the tests.
* Do not run man:ntpd[8].
* If man:syslog[3] events are generated, run man:syslogd[8] with an empty [.filename]#/etc/syslogd.conf#, otherwise, do not run it.
* Minimize disk-I/O, avoid it entirely if possible.
* Do not mount file systems that are not needed.
* Mount [.filename]#/#, [.filename]#/usr#, and any other file system as read-only if possible. This removes atime updates to disk (etc.) from the I/O picture.
* Reinitialize the read/write test file system with man:newfs[8] and populate it from a man:tar[1] or man:dump[8] file before every run. Unmount and mount it before starting the test. This results in a consistent file system layout. For a worldstone test this would apply to [.filename]#/usr/obj# (just reinitialize with `newfs` and mount). To get 100% reproducibility, populate the file system from a man:dd[1] file (i.e.: `dd if=myimage of=/dev/ad0s1h bs=1m`)
* Use malloc backed or preloaded man:md[4] partitions.
* Reboot between individual iterations of the test, this gives a more consistent state.
* Remove all non-essential device drivers from the kernel. For instance if USB is not needed for the test, do not put USB in the kernel. Drivers which attach often have timeouts ticking away.
* Unconfigure hardware that is not in use. Detach disks with man:atacontrol[8] and man:camcontrol[8] if the disks are not used for the test.
* Do not configure the network unless it is being tested, or wait until after the test has been performed to ship the results off to another computer.
+
If the system must be connected to a public network, watch out for spikes of broadcast traffic. Even though it is hardly noticeable, it will take up CPU cycles. Multicast has similar caveats.
* Put each file system on its own disk. This minimizes jitter from head-seek optimizations.
* Minimize output to serial or VGA consoles. Running output into files gives less jitter. (Serial consoles easily become a bottleneck.) Do not touch keyboard while the test is running, even kbd:[space] or kbd:[back-space] shows up in the numbers.
* Make sure the test is long enough, but not too long. If the test is too short, timestamping is a problem. If it is too long, temperature changes and drift will affect the frequency of the quartz crystals in the computer. Rule of thumb: more than a minute, less than an hour.
* Try to keep the temperature as stable as possible around the machine. This affects both quartz crystals and disk drive algorithms. To get a really stable clock, consider stabilized clock injection. E.g., get an OCXO + PLL, and inject its output into the clock circuits instead of the motherboard xtal. Contact {phk} for more information about this.
* Run the test at least 3 times, but it is better to run it more than 20 times, both for the "before" and the "after" code. Try to interleave if possible (i.e.: do not run 20 times before then 20 times after), as this makes it possible to spot environmental effects. Do not interleave 1:1, but 3:3, as this makes it possible to spot interaction effects.
+
A good pattern is: `bababa{bbbaaa}*`. This gives hint after the first 1+1 runs (so it is possible to stop the test if it goes entirely the wrong way), a standard deviation after the first 3+3 (gives a good indication if it is going to be worth a long run) and trending and interaction numbers later on.
* Use man:ministat[1] to see if the numbers are significant. Consider buying "The Cartoon Guide to Statistics" (ISBN 0062731025), highly recommended if you have forgotten or never learned about standard deviation and Student's T.
* Do not use background man:fsck[8] unless the test is a benchmark of background `fsck`. Also, disable `background_fsck` in [.filename]#/etc/rc.conf# unless the benchmark is started at least 60 seconds plus the ``fsck`` runtime after boot, as man:rc[8] wakes up and checks whether `fsck` needs to run on any file systems when background `fsck` is enabled. Likewise, make sure there are no snapshots lying around unless the benchmark is a test with snapshots.
* If the benchmark shows unexpectedly bad performance, check for things like high interrupt volume from an unexpected source. Some versions of ACPI have been reported to "misbehave" and generate excess interrupts. To help diagnose odd test results, take a few snapshots of `vmstat -i` and look for anything unusual.
* Be careful about optimization parameters for the kernel and for userspace, and likewise about debugging options. It is easy to let something slip through and realize later that the test was not comparing the same thing.
* Do not ever benchmark with the `WITNESS` and `INVARIANTS` kernel options enabled unless the test is intended to benchmark those features. `WITNESS` can cause 400%+ drops in performance. Likewise, userspace man:malloc[3] parameters default differently in -CURRENT from the way they ship in production releases.
[[testing-tinderbox]]
== The FreeBSD Source Tinderbox
The source Tinderbox consists of:
* A build script, [.filename]#tinderbox#, that automates checking out a specific version of the FreeBSD source tree and building it.
* A supervisor script, [.filename]#tbmaster#, that monitors individual Tinderbox instances, logs their output, and emails failure notices.
* A CGI script named [.filename]#index.cgi# that reads a set of tbmaster logs and presents an easy-to-read HTML summary of them.
* A set of build servers that continually test the tip of the most important FreeBSD code branches.
* A webserver that keeps a complete set of Tinderbox logs and displays an up-to-date summary.
The scripts were developed and are maintained by {des}. They are now written in Perl, a move on from their original incarnation as shell scripts. All scripts and configuration files are kept in https://www.freebsd.org/cgi/cvsweb.cgi/projects/tinderbox/[/projects/tinderbox/].
For more information about the tinderbox and tbmaster scripts, see their respective man pages: tinderbox(1) and tbmaster(1).
== The index.cgi Script
The [.filename]#index.cgi# script generates the HTML summary of tinderbox and tbmaster logs. Although originally intended to be used as a CGI script, as indicated by its name, this script can also be run from the command line or from a man:cron[8] job, in which case it will look for logs in the directory where the script is located. It will automatically detect context, generating HTTP headers when it is run as a CGI script. It conforms to XHTML standards and is styled using CSS.
The script starts in the `main()` block by attempting to verify that it is running on the official Tinderbox website. If it is not, a page indicating it is not an official website is produced, and a URL to the official site is provided.
Next, it scans the log directory to get an inventory of configurations, branches and architectures for which log files exist, to avoid hard-coding a list into the script and potentially ending up with blank rows or columns. This information is derived from the names of the log files matching the following pattern:
[.programlisting]
....
tinderbox-$config-$branch-$arch-$machine.{brief,full}
....
The configurations used on the official Tinderbox build servers are named for the branches they build. For example, the `releng_8` configuration is used to build `RELENG_8` as well as all still-supported release branches.
Once all of this startup procedure has been successfully completed, `do_config()` is called for each configuration.
The `do_config()` function generates HTML for a single Tinderbox configuration.
It works by first generating a header row, then iterating over each branch build with the specified configuration, producing a single row of results for each in the following manner:
* For each architecture:
** For each machine within that architecture:
*** If a brief log file exists, then:
**** Call `success()` to determine the outcome of the build.
**** Output the modification size.
**** Output the size of the brief log file with a link to the log file itself.
**** If a full log file also exists, then:
***** Output the size of the full log file with a link to the log file itself.
*** Otherwise:
**** No output.
The `success()` function mentioned above scans a brief log file for the string "tinderbox run completed" in order to determine whether the build was successful.
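The check itself is trivial. The sketch below is purely illustrative, written in C for brevity (the real script is Perl, and the function name here is hypothetical):
[.programlisting]
....
#include <stdio.h>
#include <string.h>

/* Illustrative sketch only: return 1 if the brief log contains the
   completion marker, 0 otherwise. The real check is done in Perl. */
int
build_succeeded(const char *logfile)
{
    FILE *fp = fopen(logfile, "r");
    char line[1024];
    int ok = 0;

    if (fp == NULL)
        return 0;
    while (fgets(line, sizeof(line), fp) != NULL)
        if (strstr(line, "tinderbox run completed") != NULL) {
            ok = 1;
            break;
        }
    fclose(fp);
    return ok;
}
....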
Configurations and branches are sorted according to their branch rank. This is computed as follows:
* `HEAD` and `CURRENT` have rank 9999.
* `RELENG_x` has rank _xx99_.
* `RELENG_x_y` has rank _xxyy_.
This means that `HEAD` always ranks highest, and `RELENG` branches are ranked in numerical order, with each `STABLE` branch ranking higher than the release branches forked off of it. For instance, for FreeBSD 8, the order from highest to lowest would be:
* `RELENG_8` (branch rank 899).
* `RELENG_8_3` (branch rank 803).
* `RELENG_8_2` (branch rank 802).
* `RELENG_8_1` (branch rank 801).
* `RELENG_8_0` (branch rank 800).
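The ranking rule above is simple enough to express directly in code. The following is only an illustrative sketch in C (the actual [.filename]#index.cgi# is written in Perl, and the function name is hypothetical):
[.programlisting]
....
#include <stdio.h>
#include <string.h>

/* Illustrative sketch of the branch ranking rule described above. */
int
branch_rank(const char *branch)
{
    int x, y;

    if (strcmp(branch, "HEAD") == 0 || strcmp(branch, "CURRENT") == 0)
        return 9999;
    if (sscanf(branch, "RELENG_%d_%d", &x, &y) == 2)
        return x * 100 + y;   /* e.g., RELENG_8_3 -> 803 */
    if (sscanf(branch, "RELENG_%d", &x) == 1)
        return x * 100 + 99;  /* e.g., RELENG_8 -> 899 */
    return 0;                 /* unknown branch name */
}
....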
The colors that Tinderbox uses for each cell in the table are defined by CSS. Successful builds are displayed with green text; unsuccessful builds are displayed with red text. The color fades as time passes since the corresponding build, with every half hour bringing the color closer to grey.
== Official Build Servers
The official Tinderbox build servers are hosted by http://www.sentex.ca[Sentex Data Communications], who also host the FreeBSD Netperf Cluster.
Three build servers currently exist:
_freebsd-current.sentex.ca_ builds:
* `HEAD` for amd64, arm, i386, i386/pc98, ia64, mips, powerpc, powerpc64, and sparc64.
* `RELENG_9` and supported 9._X_ branches for amd64, arm, i386, i386/pc98, ia64, mips, powerpc, powerpc64, and sparc64.
_freebsd-stable.sentex.ca_ builds:
* `RELENG_8` and supported 8._X_ branches for amd64, i386, i386/pc98, ia64, mips, powerpc and sparc64.
_freebsd-legacy.sentex.ca_ builds:
* `RELENG_7` and supported 7._X_ branches for amd64, i386, i386/pc98, ia64, powerpc, and sparc64.
== Official Summary Site
Summaries and logs from the official build servers are available online at http://tinderbox.FreeBSD.org[http://tinderbox.FreeBSD.org], hosted by {des} and set up as follows:
* A man:cron[8] job checks the build servers at regular intervals and downloads any new log files using man:rsync[1].
* Apache is set up to use [.filename]#index.cgi# as `DirectoryIndex`.
diff --git a/documentation/content/en/books/developers-handbook/tools/_index.adoc b/documentation/content/en/books/developers-handbook/tools/_index.adoc
index feeaee40b8..3720cf414c 100644
--- a/documentation/content/en/books/developers-handbook/tools/_index.adoc
+++ b/documentation/content/en/books/developers-handbook/tools/_index.adoc
@@ -1,1492 +1,1493 @@
---
title: Chapter 2. Programming Tools
authors:
- author: James Raynard
- author: Murray Stokely
prev: books/developers-handbook/introduction
next: books/developers-handbook/secure
+description: Programming Tools
---
[[tools]]
= Programming Tools
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 2
:c-plus-plus-command: c++
:clang-plus-plus-command: clang++
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[tools-synopsis]]
== Synopsis
This chapter is an introduction to using some of the programming tools supplied with FreeBSD, although much of it will be applicable to many other versions of UNIX(R). It does _not_ attempt to describe coding in any detail. Most of the chapter assumes little or no previous programming knowledge, although it is hoped that most programmers will find something of value in it.
[[tools-intro]]
== Introduction
FreeBSD offers an excellent development environment. Compilers for C and C++ and an assembler come with the basic system, not to mention classic UNIX(R) tools such as `sed` and `awk`. If that is not enough, there are many more compilers and interpreters in the Ports Collection. The following section, <<tools-programming,Introduction to Programming>>, lists some of the available options. FreeBSD is very compatible with standards such as POSIX(R) and ANSI C, as well as with its own BSD heritage, so it is possible to write applications that will compile and run with little or no modification on a wide range of platforms.
However, all this power can be rather overwhelming at first if you have never written programs on a UNIX(R) platform before. This document aims to help you get up and running, without getting too deeply into more advanced topics. The intention is that this document should give you enough of the basics to be able to make some sense of the documentation.
Most of the document requires little or no knowledge of programming, although it does assume a basic competence with using UNIX(R) and a willingness to learn!
[[tools-programming]]
== Introduction to Programming
A program is a set of instructions that tell the computer to do various things; sometimes the instruction it has to perform depends on what happened when it performed a previous instruction. This section gives an overview of the two main ways in which you can give these instructions, or "commands" as they are usually called. One way uses an _interpreter_, the other a _compiler_. As human languages are too difficult for a computer to understand in an unambiguous way, commands are usually written in one or other of the languages specially designed for the purpose.
=== Interpreters
With an interpreter, the language comes as an environment, where you type in commands at a prompt and the environment executes them for you. For more complicated programs, you can type the commands into a file and get the interpreter to load the file and execute the commands in it. If anything goes wrong, many interpreters will drop you into a debugger to help you track down the problem.
The advantage of this is that you can see the results of your commands immediately, and mistakes can be corrected readily. The biggest disadvantage comes when you want to share your programs with someone. They must have the same interpreter, or you must have some way of giving it to them, and they need to understand how to use it. Also users may not appreciate being thrown into a debugger if they press the wrong key! From a performance point of view, interpreters can use up a lot of memory, and generally do not generate code as efficiently as compilers.
In my opinion, interpreted languages are the best way to start if you have not done any programming before. This kind of environment is typically found with languages like Lisp, Smalltalk, Perl and Basic. It could also be argued that the UNIX(R) shell (`sh`, `csh`) is itself an interpreter, and many people do in fact write shell "scripts" to help with various "housekeeping" tasks on their machine. Indeed, part of the original UNIX(R) philosophy was to provide lots of small utility programs that could be linked together in shell scripts to perform useful tasks.
=== Interpreters Available with FreeBSD
Here is a list of interpreters that are available from the FreeBSD Ports Collection, with a brief discussion of some of the more popular interpreted languages.
Instructions on how to get and install applications from the Ports Collection can be found in the link:{handbook}#ports-using/[Ports section] of the handbook.
BASIC::
Short for Beginner's All-purpose Symbolic Instruction Code. Developed in the 1960s for teaching university students to program and provided with every self-respecting personal computer in the 1980s, BASIC has been the first programming language for many programmers. It is also the foundation for Visual Basic.
+
The Bywater Basic Interpreter can be found in the Ports Collection as package:lang/bwbasic[] and Phil Cockroft's Basic Interpreter (formerly Rabbit Basic) is available as package:lang/pbasic[].
Lisp::
A language that was developed in the late 1950s as an alternative to the "number-crunching" languages that were popular at the time. Instead of being based on numbers, Lisp is based on lists; in fact, the name is short for "List Processing". It is very popular in AI (Artificial Intelligence) circles.
+
Lisp is an extremely powerful and sophisticated language, but can be rather large and unwieldy.
+
Various implementations of Lisp that can run on UNIX(R) systems are available in the Ports Collection for FreeBSD. GNU Common Lisp can be found as package:lang/gcl[]. CLISP by Bruno Haible and Michael Stoll is available as package:lang/clisp[]. For CMUCL, which includes a highly-optimizing compiler too, or simpler Lisp implementations like SLisp, which implements most of the Common Lisp constructs in a few hundred lines of C code, package:lang/cmucl[] and package:lang/slisp[] are available respectively.
Perl::
Very popular with system administrators for writing scripts; also often used on World Wide Web servers for writing CGI scripts.
+
Perl is available in the Ports Collection as package:lang/perl5.24[] for all FreeBSD releases.
Scheme::
A dialect of Lisp that is rather more compact and cleaner than Common Lisp. Popular in Universities as it is simple enough to teach to undergraduates as a first language, while it has a high enough level of abstraction to be used in research work.
+
Scheme is available from the Ports Collection as package:lang/elk[] for the Elk Scheme Interpreter. The MIT Scheme Interpreter can be found in package:lang/mit-scheme[] and the SCM Scheme Interpreter in package:lang/scm[].
Icon::
Icon is a high-level language with extensive facilities for processing strings and structures. The version of Icon for FreeBSD can be found in the Ports Collection as package:lang/icon[].
Logo::
Logo is a language that is easy to learn, and has been used as an introductory programming language in various courses. It is an excellent tool to work with when teaching programming to smaller age groups, as it makes creation of elaborate geometric shapes an easy task.
+
The latest version of Logo for FreeBSD is available from the Ports Collection in package:lang/logo[].
Python::
Python is an Object-Oriented, interpreted language. Its advocates argue that it is one of the best languages to start programming with, since it is relatively easy to start with, but is not limited in comparison to other popular interpreted languages that are used for the development of large, complex applications (Perl and Tcl are two other languages that are popular for such tasks).
+
The latest version of Python is available from the Ports Collection in package:lang/python[].
Ruby::
Ruby is an interpreted, purely object-oriented programming language. It has become widely popular because of its easy-to-understand syntax, flexibility when writing code, and the ability to easily develop and maintain large, complex programs.
+
Ruby is available from the Ports Collection as package:lang/ruby25[].
Tcl and Tk::
Tcl is an embeddable, interpreted language that has become widely used, mostly because of its portability to many platforms. It can be used both for quickly writing small, prototype applications and (when combined with Tk, a GUI toolkit) for fully-fledged, featureful programs.
+
Various versions of Tcl are available as ports for FreeBSD. The latest version, Tcl 8.7, can be found in package:lang/tcl87[].
=== Compilers
Compilers are rather different. First of all, you write your code in a file (or files) using an editor. You then run the compiler and see if it accepts your program. If it did not compile, grit your teeth and go back to the editor; if it did compile and gave you a program, you can run it either at a shell command prompt or in a debugger to see if it works properly.footnote:[If you run it in the shell, you may get a core dump.]
Obviously, this is not quite as direct as using an interpreter. However it allows you to do a lot of things which are very difficult or even impossible with an interpreter, such as writing code which interacts closely with the operating system-or even writing your own operating system! It is also useful if you need to write very efficient code, as the compiler can take its time and optimize the code, which would not be acceptable in an interpreter. Moreover, distributing a program written for a compiler is usually more straightforward than one written for an interpreter-you can just give them a copy of the executable, assuming they have the same operating system as you.
As the edit-compile-run-debug cycle is rather tedious when using separate programs, many commercial compiler makers have produced Integrated Development Environments (IDEs for short). FreeBSD does not include an IDE in the base system, but package:devel/kdevelop[] is available in the Ports Collection and many use Emacs for this purpose. Using Emacs as an IDE is discussed in <<emacs>>.
[[tools-compiling]]
== Compiling with `cc`
This section deals with the gcc and clang compilers for C and C++, since they come with the FreeBSD base system. Starting with FreeBSD 10.X, `clang` is installed as `cc`. The details of producing a program with an interpreter vary considerably between interpreters, and are usually well covered in the documentation and on-line help for the interpreter.
Once you have written your masterpiece, the next step is to convert it into something that will (hopefully!) run on FreeBSD. This usually involves several steps, each of which is done by a separate program.
[.procedure]
. Pre-process your source code to remove comments and do other tricks like expanding macros in C.
. Check the syntax of your code to see if you have obeyed the rules of the language. If you have not, it will complain!
. Convert the source code into assembly language-this is very close to machine code, but still understandable by humans. Allegedly.
. Convert the assembly language into machine code-yep, we are talking bits and bytes, ones and zeros here.
. Check that you have used things like functions and global variables in a consistent way. For example, if you have called a non-existent function, it will complain.
. If you are trying to produce an executable from several source code files, work out how to fit them all together.
. Work out how to produce something that the system's run-time loader will be able to load into memory and run.
. Finally, write the executable on the filesystem.
The word _compiling_ is often used to refer to just steps 1 to 4-the others are referred to as _linking_. Sometimes step 1 is referred to as _pre-processing_ and steps 3-4 as _assembling_.
Fortunately, almost all this detail is hidden from you, as `cc` is a front end that manages calling all these programs with the right arguments for you; simply typing
[source,bash]
....
% cc foobar.c
....
will cause [.filename]#foobar.c# to be compiled by all the steps above. If you have more than one file to compile, just do something like
[source,bash]
....
% cc foo.c bar.c
....
Note that the syntax checking is just that-checking the syntax. It will not check for any logical mistakes you may have made, like putting the program into an infinite loop, or using a bubble sort when you meant to use a binary sort.footnote:[In case you did not know, a binary sort is an efficient way of sorting things into order and a bubble sort is not.]
There are lots and lots of options for `cc`, which are all in the manual page. Here are a few of the most important ones, with examples of how to use them.
`-o _filename_`::
The output name of the file. If you do not use this option, `cc` will produce an executable called [.filename]#a.out#.footnote:[The reasons for this are buried in the mists of history.]
+
[source,bash]
....
% cc foobar.c executable is a.out
% cc -o foobar foobar.c executable is foobar
....
`-c`::
Just compile the file, do not link it. Useful for toy programs where you just want to check the syntax, or if you are using a [.filename]#Makefile#.
+
[source,bash]
....
% cc -c foobar.c
....
+
This will produce an _object file_ (not an executable) called [.filename]#foobar.o#. This can be linked together with other object files into an executable.
`-g`::
Create a debug version of the executable. This makes the compiler put information into the executable about which line of which source file corresponds to which function call. A debugger can use this information to show the source code as you step through the program, which is _very_ useful; the disadvantage is that all this extra information makes the program much bigger. Normally, you compile with `-g` while you are developing a program and then compile a "release version" without `-g` when you are satisfied it works properly.
+
[source,bash]
....
% cc -g foobar.c
....
+
This will produce a debug version of the program. footnote:[Note, we did not use the -o flag to specify the executable name, so we will get an executable called a.out. Producing a debug version called foobar is left as an exercise for the reader!]
`-O`::
Create an optimized version of the executable. The compiler performs various clever tricks to try to produce an executable that runs faster than normal. You can add a number after the `-O` to specify a higher level of optimization, but this often exposes bugs in the compiler's optimizer.
+
[source,bash]
....
% cc -O -o foobar foobar.c
....
+
This will produce an optimized version of [.filename]#foobar#.
The following three flags will force `cc` to check that your code complies with the relevant international standard, often referred to as the ANSI standard, though strictly speaking it is an ISO standard.
`-Wall`::
Enable all the warnings which the authors of `cc` believe are worthwhile. Despite the name, it will not enable all the warnings `cc` is capable of.
`-ansi`::
Turn off most, but not all, of the non-ANSI C features provided by `cc`. Despite the name, it does not strictly guarantee that your code will comply with the standard.
`-pedantic`::
Turn off _all_ ``cc``'s non-ANSI C features.
Without these flags, `cc` will allow you to use some of its non-standard extensions to the language. Some of these are very useful, but will not work with other compilers-in fact, one of the main aims of the standard is to allow people to write code that will work with any compiler on any system. This is known as _portable code_.
Generally, you should try to make your code as portable as possible, as otherwise you may have to completely rewrite the program later to get it to work somewhere else-and who knows what you may be using in a few years time?
[source,bash]
....
% cc -Wall -ansi -pedantic -o foobar foobar.c
....
This will produce an executable [.filename]#foobar# after checking [.filename]#foobar.c# for standard compliance.
`-l__library__`::
Specify a function library to be used at link time.
+
The most common example of this is when compiling a program that uses some of the mathematical functions in C. Unlike most other platforms, these are in a separate library from the standard C one and you have to tell the compiler to add it.
+
The rule is that if the library is called [.filename]#libsomething.a#, you give `cc` the argument `-l__something__`. For example, the math library is [.filename]#libm.a#, so you give `cc` the argument `-lm`. A common "gotcha" with the math library is that it has to be the last library on the command line.
+
[source,bash]
....
% cc -o foobar foobar.c -lm
....
+
This will link the math library functions into [.filename]#foobar#.
+
If you are compiling C++ code, use {c-plus-plus-command}. {c-plus-plus-command} can also be invoked as {clang-plus-plus-command} on FreeBSD.
+
[source,bash]
....
% c++ -o foobar foobar.cc
....
+
This will produce an executable [.filename]#foobar# from the C++ source file [.filename]#foobar.cc#.
=== Common `cc` Queries and Problems
==== I am trying to write a program which uses the sin() function and I get an error like this. What does it mean?
[source,bash]
....
/var/tmp/cc0143941.o: Undefined symbol `_sin' referenced from text segment
....
When using mathematical functions like `sin()`, you have to tell `cc` to link in the math library, like so:
[source,bash]
....
% cc -o foobar foobar.c -lm
....
==== All right, I wrote this simple program to practice using -lm. All it does is raise 2.1 to the power of 6.
[.programlisting]
....
#include <stdio.h>
int main() {
    float f;

    f = pow(2.1, 6);
    printf("2.1 ^ 6 = %f\n", f);
    return 0;
}
....
and I compiled it as:
[source,bash]
....
% cc temp.c -lm
....
like you said I should, but I get this when I run it:
[source,bash]
....
% ./a.out
2.1 ^ 6 = 1023.000000
....
This is not the right answer! What is going on?
When the compiler sees you call a function, it checks if it has already seen a prototype for it. If it has not, it assumes the function returns an int, which is definitely not what you want here.
==== So how do I fix this?
The prototypes for the mathematical functions are in [.filename]#math.h#. If you include this file, the compiler will be able to find the prototype and it will stop doing strange things to your calculation!
[.programlisting]
....
#include <math.h>
#include <stdio.h>
int main() {
...
....
After recompiling it as you did before, run it:
[source,bash]
....
% ./a.out
2.1 ^ 6 = 85.766121
....
If you are using any of the mathematical functions, _always_ include [.filename]#math.h# and remember to link in the math library.
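For reference, here is the complete corrected program: the same listing as before, with the extra include added.
[.programlisting]
....
#include <math.h>
#include <stdio.h>

int main() {
    float f;

    f = pow(2.1, 6);
    printf("2.1 ^ 6 = %f\n", f);
    return 0;
}
....
Compile it with `cc temp.c -lm` as before, and the output should match the corrected run shown above.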
==== I compiled a file called foobar.c and I cannot find an executable called foobar. Where has it gone?
Remember, `cc` will call the executable [.filename]#a.out# unless you tell it differently. Use the `-o _filename_` option:
[source,bash]
....
% cc -o foobar foobar.c
....
==== OK, I have an executable called foobar, I can see it when I run ls, but when I type in foobar at the command prompt it tells me there is no such file. Why can it not find it?
Unlike MS-DOS(R), UNIX(R) does not look in the current directory when it is trying to find out which executable you want it to run, unless you tell it to. Type `./foobar`, which means "run the file called [.filename]#foobar# in the current directory."
==== I called my executable test, but nothing happens when I run it. What is going on?
Most UNIX(R) systems have a program called `test` in [.filename]#/usr/bin# and the shell is picking that one up before it gets to checking the current directory. Either type:
[source,bash]
....
% ./test
....
or choose a better name for your program!
==== I compiled my program and it seemed to run all right at first, then there was an error and it said something about core dumped. What does that mean?
The name _core dump_ dates back to the very early days of UNIX(R), when the machines used core memory for storing data. Basically, if the program failed under certain conditions, the system would write the contents of core memory to disk in a file called [.filename]#core#, which the programmer could then pore over to find out what went wrong.
==== Fascinating stuff, but what am I supposed to do now?
Use a debugger to analyze the core (see <<debugging>>).
==== When my program dumped core, it said something about a segmentation fault. What is that?
This basically means that your program tried to perform some sort of illegal operation on memory; UNIX(R) is designed to protect the operating system and other programs from rogue programs.
Common causes for this are:
* Trying to write to a NULL pointer, eg
+
[.programlisting]
....
char *foo = NULL;
strcpy(foo, "bang!");
....
* Using a pointer that has not been initialized, eg
+
[.programlisting]
....
char *foo;
strcpy(foo, "bang!");
....
+
The pointer will have some random value that, with luck, will point into an area of memory that is not available to your program and the kernel will kill your program before it can do any damage. If you are unlucky, it will point somewhere inside your own program and corrupt one of your data structures, causing the program to fail mysteriously.
* Trying to access past the end of an array, eg
+
[.programlisting]
....
int bar[20];
bar[27] = 6;
....
* Trying to store something in read-only memory, eg
+
[.programlisting]
....
char *foo = "My string";
strcpy(foo, "bang!");
....
+
UNIX(R) compilers often put string literals like `"My string"` into read-only areas of memory.
* Doing naughty things with `malloc()` and `free()`, eg
+
[.programlisting]
....
char bar[80];
free(bar);
....
+
or
+
[.programlisting]
....
char *foo = malloc(27);
free(foo);
free(foo);
....
Making one of these mistakes will not always lead to an error, but they are always bad practice. Some systems and compilers are more tolerant than others, which is why programs that ran well on one system can crash when you try them on another.
==== Sometimes when I get a core dump it says bus error. It says in my UNIX(R) book that this means a hardware problem, but the computer still seems to be working. Is this true?
No, fortunately not (unless of course you really do have a hardware problem...). This is usually another way of saying that you accessed memory in a way you should not have.
==== This dumping core business sounds as though it could be quite useful, if I can make it happen when I want to. Can I do this, or do I have to wait until there is an error?
Yes, just go to another console or xterm, do
[source,bash]
....
% ps
....
to find out the process ID of your program, and do
[source,bash]
....
% kill -ABRT pid
....
where `_pid_` is the process ID you looked up.
This is useful if your program has got stuck in an infinite loop, for instance. If your program happens to trap SIGABRT, there are several other signals which have a similar effect.
Alternatively, you can create a core dump from inside your program, by calling the `abort()` function. See the manual page of man:abort[3] to learn more.
If you want to create a core dump from outside your program, but do not want the process to terminate, you can use the `gcore` program. See the manual page of man:gcore[1] for more information.
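As a minimal illustration (a hypothetical example, not part of any FreeBSD program), the following program dumps core deliberately by calling man:abort[3]:
[.programlisting]
....
#include <stdio.h>
#include <stdlib.h>

int main() {
    printf("about to dump core on purpose\n");
    fflush(stdout);
    abort();  /* terminates the process with SIGABRT, producing a core dump */
    /* not reached */
    return 0;
}
....
Assuming core dumps are enabled and not restricted by resource limits, running the compiled program should leave a core file named after it in the current directory, ready to be examined with a debugger as described in <<debugging>>.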
[[tools-make]]
== Make
=== What is `make`?
When you are working on a simple program with only one or two source files, typing in
[source,bash]
....
% cc file1.c file2.c
....
is not too bad, but it quickly becomes very tedious when there are several files-and it can take a while to compile, too.
One way to get around this is to use object files and only recompile the source file if the source code has changed. So we could have something like:
[source,bash]
....
% cc file1.o file2.o … file37.c …
....
if we had changed [.filename]#file37.c#, but not any of the others, since the last time we compiled. This may speed up the compilation quite a bit, but does not solve the typing problem.
Or we could write a shell script to solve the typing problem, but it would have to re-compile everything, making it very inefficient on a large project.
What happens if we have hundreds of source files lying about? What if we are working in a team with other people who forget to tell us when they have changed one of their source files that we use?
Perhaps we could put the two solutions together and write something like a shell script that would contain some kind of magic rule saying when a source file needs compiling. Now all we need is a program that can understand these rules, as it is a bit too complicated for the shell.
This program is called `make`. It reads in a file, called a _makefile_, that tells it how different files depend on each other, and works out which files need to be re-compiled and which ones do not. For example, a rule could say something like "if [.filename]#fromboz.o# is older than [.filename]#fromboz.c#, that means someone must have changed [.filename]#fromboz.c#, so it needs to be re-compiled." The makefile also has rules telling make _how_ to re-compile the source file, making it a much more powerful tool.
Makefiles are typically kept in the same directory as the source they apply to, and can be called [.filename]#makefile#, [.filename]#Makefile# or [.filename]#MAKEFILE#. Most programmers use the name [.filename]#Makefile#, as this puts it near the top of a directory listing, where it can easily be seen.footnote:[They do not use the MAKEFILE form as block capitals are often used for documentation files like README.]
=== Example of Using `make`
Here is a very simple make file:
[.programlisting]
....
foo: foo.c
	cc -o foo foo.c
....
It consists of two lines, a dependency line and a creation line.
The dependency line here consists of the name of the program (known as the _target_), followed by a colon, then whitespace, then the name of the source file. When `make` reads this line, it looks to see if [.filename]#foo# exists; if it exists, it compares the time [.filename]#foo# was last modified to the time [.filename]#foo.c# was last modified. If [.filename]#foo# does not exist, or is older than [.filename]#foo.c#, it then looks at the creation line to find out what to do. In other words, this is the rule for working out when [.filename]#foo.c# needs to be re-compiled.
The creation line starts with a tab (press kbd:[tab]) and then the command you would type to create [.filename]#foo# if you were doing it at a command prompt. If [.filename]#foo# is out of date, or does not exist, `make` then executes this command to create it. In other words, this is the rule which tells make how to re-compile [.filename]#foo.c#.
So, when you type `make`, it will make sure that [.filename]#foo# is up to date with respect to your latest changes to [.filename]#foo.c#. This principle can be extended to [.filename]#Makefile#s with hundreds of targets-in fact, on FreeBSD, it is possible to compile the entire operating system just by typing `make world` in the appropriate directory!
Another useful property of makefiles is that the targets do not have to be programs. For instance, we could have a make file that looks like this:
[.programlisting]
....
foo: foo.c
	cc -o foo foo.c

install:
	cp foo /home/me
....
We can tell make which target we want to make by typing:
[source,bash]
....
% make target
....
`make` will then only look at that target and ignore any others. For example, if we type `make foo` with the makefile above, make will ignore the `install` target.
If we just type `make` on its own, make will always look at the first target and then stop without looking at any others. So if we typed `make` here, it will just go to the `foo` target, re-compile [.filename]#foo# if necessary, and then stop without going on to the `install` target.
Notice that the `install` target does not actually depend on anything! This means that the command on the following line is always executed when we try to make that target by typing `make install`. In this case, it will copy [.filename]#foo# into the user's home directory. This is often used by application makefiles, so that the application can be installed in the correct directory when it has been correctly compiled.
This is a slightly confusing subject to try to explain. If you do not quite understand how `make` works, the best thing to do is to write a simple program like "hello world" and a make file like the one above and experiment. Then progress to using more than one source file, or having the source file include a header file. `touch` is very useful here-it changes the date on a file without you having to edit it.
=== Make and include-files
C code often starts with a list of files to include, for example [.filename]#stdio.h#. Some of these files are system include files, some of them are from the project you are working on:
[.programlisting]
....
#include <stdio.h>
#include "foo.h"
int main(....
....
To make sure that this file is recompiled the moment [.filename]#foo.h# is changed, you have to add it in your [.filename]#Makefile#:
[.programlisting]
....
foo: foo.c foo.h
....
As your project grows and you have more and more include files of your own to maintain, it becomes a pain to keep track of all the include files and the files which depend on them. If you change an include file but forget to recompile all the files which depend on it, the results will be devastating. `clang` has an option to analyze your files and to produce a list of include files and their dependencies: `-MM`.
If you add this to your Makefile:
[.programlisting]
....
depend:
	cc -E -MM *.c > .depend
....
and run `make depend`, the file [.filename]#.depend# will appear with a list of object-files, C-files and the include-files:
[.programlisting]
....
foo.o: foo.c foo.h
....
If you change [.filename]#foo.h#, next time you run `make` all files depending on [.filename]#foo.h# will be recompiled.
Do not forget to run `make depend` each time you add an include-file to one of your files.
=== FreeBSD Makefiles
Makefiles can be rather complicated to write. Fortunately, BSD-based systems like FreeBSD come with some very powerful ones as part of the system. One very good example of this is the FreeBSD ports system. Here is the essential part of a typical ports [.filename]#Makefile#:
[.programlisting]
....
MASTER_SITES= ftp://freefall.cdrom.com/pub/FreeBSD/LOCAL_PORTS/
DISTFILES= scheme-microcode+dist-7.3-freebsd.tgz
.include <bsd.port.mk>
....
Now, if we go to the directory for this port and type `make`, the following happens:
[.procedure]
. A check is made to see if the source code for this port is already on the system.
. If it is not, an FTP connection to the URL in MASTER_SITES is set up to download the source.
. The checksum for the source is calculated and compared with that of a known, good copy of the source. This is to make sure that the source was not corrupted while in transit.
. Any changes required to make the source work on FreeBSD are applied-this is known as _patching_.
. Any special configuration needed for the source is done. (Many UNIX(R) program distributions try to work out which version of UNIX(R) they are being compiled on and which optional UNIX(R) features are present-this is where they are given the information in the FreeBSD ports scenario).
. The source code for the program is compiled. In effect, we change to the directory where the source was unpacked and do `make`-the program's own make file has the necessary information to build the program.
. We now have a compiled version of the program. If we wish, we can test it now; when we feel confident about the program, we can type `make install`. This will cause the program and any supporting files it needs to be copied into the correct location; an entry is also made into a `package database`, so that the port can easily be uninstalled later if we change our mind about it.
Now I think you will agree that is rather impressive for a four line script!
The secret lies in the last line, which tells `make` to look in the system makefile called [.filename]#bsd.port.mk#. It is easy to overlook this line, but this is where all the clever stuff comes from-someone has written a makefile that tells `make` to do all the things above (plus a couple of other things I did not mention, including handling any errors that may occur) and anyone can get access to that just by putting a single line in their own make file!
If you want to have a look at these system makefiles, they are in [.filename]#/usr/share/mk#, but it is probably best to wait until you have had a bit of practice with makefiles, as they are very complicated (and if you do look at them, make sure you have a flask of strong coffee handy!)
=== More Advanced Uses of `make`
`Make` is a very powerful tool, and can do much more than the simple example above shows. Unfortunately, there are several different versions of `make`, and they all differ considerably. The best way to learn what they can do is probably to read the documentation-hopefully this introduction will have given you a base from which you can do this.
The version of make that comes with FreeBSD is the Berkeley make; there is a tutorial for it in [.filename]#/usr/share/doc/psd/12.make#. To view it, do
[source,bash]
....
% zmore paper.ascii.gz
....
in that directory.
Many applications in the ports use GNU make, which has a very good set of "info" pages. If you have installed any of these ports, GNU make will automatically have been installed as `gmake`. It is also available as a port and package in its own right.
To view the info pages for GNU make, you will have to edit [.filename]#dir# in the [.filename]#/usr/local/info# directory to add an entry for it. This involves adding a line like
[.programlisting]
....
* Make: (make). The GNU Make utility.
....
to the file. Once you have done this, you can type `info` and then select [.guimenuitem]#make# from the menu (or in Emacs, do `C-h i`).
[[debugging]]
== Debugging
=== Introduction to Available Debuggers
Using a debugger allows running the program under more controlled circumstances. Typically, it is possible to step through the program a line at a time, inspect the value of variables, change them, tell the debugger to run up to a certain point and then stop, and so on. It is also possible to attach to a program that is already running, or load a core file to investigate why the program crashed. It is even possible to debug the kernel, though that is a little trickier than the user applications we will be discussing in this section.
This section is intended to be a quick introduction to using debuggers and does not cover specialized topics such as debugging the kernel. For more information about that, refer to crossref:kerneldebug[kerneldebug,Kernel Debugging].
The standard debugger supplied with FreeBSD {rel121-current} is called `lldb` (LLVM debugger). As it is part of the standard installation for that release, there is no need to do anything special to use it. It has good command help, accessible via the `help` command, as well as https://lldb.llvm.org/[a web tutorial and documentation].
[NOTE]
====
The `lldb` command is available for FreeBSD {rel113-current} link:{handbook}#ports-using/[from ports or packages] as package:devel/llvm[]. This will install the default version of lldb (currently 9.0).
====
The other debugger available with FreeBSD is called `gdb` (GNU debugger). Unlike lldb, it is not installed by default on FreeBSD {rel121-current}; to use it, link:{handbook}#ports-using/[install] package:devel/gdb[] from ports or packages. The version installed by default on FreeBSD {rel113-current} is old; instead, install package:devel/gdb[] there as well. It has quite good on-line help, as well as a set of info pages.
Which one to use is largely a matter of taste. If familiar with only one, use that one. People familiar with neither or both but wanting to use one from inside Emacs will need to use `gdb`, as `lldb` is unsupported by Emacs. Otherwise, try both and see which one you prefer.
=== Using lldb
==== Starting lldb
Start up lldb by typing
[source,bash]
....
% lldb -- progname
....
==== Running a Program with lldb
Compile the program with `-g` to get the most out of using `lldb`. It will work without, but will only display the name of the function currently running, instead of the source code. If it displays a line like:
[source,bash]
....
Breakpoint 1: where = temp`main, address = …
....
(without an indication of source code filename and line number) when setting a breakpoint, this means that the program was not compiled with `-g`.
[TIP]
====
Most `lldb` commands have shorter forms that can be used instead. The longer forms are used here for clarity.
====
At the `lldb` prompt, type `breakpoint set -n main`. This will tell the debugger to stop execution at the beginning of the program's code, skipping the preliminary set-up code in the program being run. Now type `process launch` to actually start the program-it will start at the beginning of the set-up code and then get stopped by the debugger when it calls `main()`.
To step through the program a line at a time, type `thread step-over`. When the program gets to a function call, step into it by typing `thread step-in`. Once in a function call, return from it by typing `thread step-out` or use `up` and `down` to take a quick look at the caller.
Here is a simple example of how to spot a mistake in a program with `lldb`. This is our program (with a deliberate mistake):
[.programlisting]
....
#include <stdio.h>

int bazz(int anint);

main() {
    int i;

    printf("This is my program\n");
    bazz(i);
    return 0;
}

int bazz(int anint) {
    printf("You gave me %d\n", anint);
    return anint;
}
....
This program sets i to be `5` and passes it to a function `bazz()` which prints out the number we gave it.
Compiling and running the program displays
[source,bash]
....
% cc -g -o temp temp.c
% ./temp
This is my program
You gave me -5360
....
That is not what was expected! Time to see what is going on!
[source,bash]
....
% lldb -- temp
(lldb) target create "temp"
Current executable set to 'temp' (x86_64).
(lldb) breakpoint set -n main Skip the set-up code
Breakpoint 1: where = temp`main + 15 at temp.c:8:2, address = 0x00000000002012ef lldb puts breakpoint at main()
(lldb) process launch Run as far as main()
Process 9992 launching
Process 9992 launched: '/home/pauamma/tmp/temp' (x86_64) Program starts running
Process 9992 stopped
* thread #1, name = 'temp', stop reason = breakpoint 1.1 lldb stops at main()
frame #0: 0x00000000002012ef temp`main at temp.c:8:2
5 main() {
6 int i;
7
-> 8 printf("This is my program\n"); Indicates the line where it stopped
9 bazz(i);
10 return 0;
11 }
(lldb) thread step-over Go to next line
This is my program Program prints out
Process 9992 stopped
* thread #1, name = 'temp', stop reason = step over
frame #0: 0x0000000000201300 temp`main at temp.c:9:7
6 int i;
7
8 printf("This is my program\n");
-> 9 bazz(i);
10 return 0;
11 }
12
(lldb) thread step-in step into bazz()
Process 9992 stopped
* thread #1, name = 'temp', stop reason = step in
frame #0: 0x000000000020132b temp`bazz(anint=-5360) at temp.c:14:29 lldb displays stack frame
11 }
12
13 int bazz(int anint) {
-> 14 printf("You gave me %d\n", anint);
15 return anint;
16 }
(lldb)
....
Hang on a minute! How did anint get to be `-5360`? Was it not set to `5` in `main()`? Let us move up to `main()` and have a look.
[source,bash]
....
(lldb) up Move up call stack
frame #1: 0x000000000020130b temp`main at temp.c:9:2 lldb displays stack frame
6 int i;
7
8 printf("This is my program\n");
-> 9 bazz(i);
10 return 0;
11 }
12
(lldb) frame variable i Show us the value of i
(int) i = -5360 lldb displays -5360
....
Oh dear! Looking at the code, we forgot to initialize i. We meant to put
[.programlisting]
....
...
main() {
    int i;

    i = 5;
    printf("This is my program\n");
...
....
but we left the `i=5;` line out. As we did not initialize i, it had whatever number happened to be in that area of memory when the program ran, which in this case happened to be `-5360`.
[NOTE]
====
The `lldb` command displays the stack frame every time we go into or out of a function, even if we are using `up` and `down` to move around the call stack. This shows the name of the function and the values of its arguments, which helps us keep track of where we are and what is going on. (The stack is a storage area where the program stores information about the arguments passed to functions and where to go when it returns from a function call.)
====
==== Examining a Core File with lldb
A core file is basically a file which contains the complete state of the process when it crashed. In "the good old days", programmers had to print out hex listings of core files and sweat over machine code manuals, but now life is a bit easier. Incidentally, under FreeBSD and other 4.4BSD systems, a core file is called [.filename]#progname.core# instead of just [.filename]#core#, to make it clearer which program a core file belongs to.
To examine a core file, specify the name of the core file in addition to the program itself. Instead of starting up `lldb` in the usual way, type `lldb -c _progname_.core -- _progname_`.
The debugger will display something like this:
[source,bash,subs="verbatim,quotes"]
....
% lldb -c [.filename]#progname.core# -- [.filename]#progname#
(lldb) target create "[.filename]#progname#" --core "[.filename]#progname#.core"
Core file '/home/pauamma/tmp/[.filename]#progname.core#' (x86_64) was loaded.
(lldb)
....
In this case, the program was called [.filename]#progname#, so the core file is called [.filename]#progname.core#. The debugger does not display why the program crashed or where. For this, use `thread backtrace all`. This will also show how the function where the program dumped core was called.
[source,bash,subs="verbatim,quotes"]
....
(lldb) thread backtrace all
* thread #1, name = 'progname', stop reason = signal SIGSEGV
* frame #0: 0x0000000000201347 progname`bazz(anint=5) at temp2.c:17:10
frame #1: 0x0000000000201312 progname`main at temp2.c:10:2
frame #2: 0x000000000020110f progname`_start(ap=<unavailable>, cleanup=<unavailable>) at crt1.c:76:7
(lldb)
....
`SIGSEGV` indicates that the program tried to access memory (run code or read/write data usually) at a location that does not belong to it, but does not give any specifics. For that, look at the source code at line 17 of file temp2.c, in `bazz()`. The backtrace also shows that in this case, `bazz()` was called from `main()`.
==== Attaching to a Running Program with lldb
One of the neatest features about `lldb` is that it can attach to a program that is already running. Of course, that requires sufficient permissions to do so. A common problem is stepping through a program that forks and wanting to trace the child, but the debugger will only trace the parent.
To do that, start up another `lldb`, use `ps` to find the process ID for the child, and do
[source,bash]
....
(lldb) process attach -p pid
....
in `lldb`, and then debug as usual.
For that to work well, the code that calls `fork` to create the child needs to do something like the following (courtesy of the `gdb` info pages):
[.programlisting]
....
...
if ((pid = fork()) < 0) /* _Always_ check this */
    error();
else if (pid == 0) { /* child */
    int PauseMode = 1;

    while (PauseMode)
        sleep(10); /* Wait until someone attaches to us */
    ...
} else { /* parent */
    ...
....
Now all that is needed is to attach to the child, set PauseMode to `0` with `expr PauseMode = 0` and wait for the `sleep()` call to return.
=== Remote Debugging Using LLDB
[NOTE]
====
The described functionality is available starting with LLDB version 12.0.0. Users of FreeBSD releases containing an earlier LLDB version may wish to use the snapshot available in link:{handbook}#ports-using/[ports or packages], as package:devel/llvm-devel[].
====
Starting with LLDB 12.0.0, remote debugging is supported on FreeBSD. This means that `lldb-server` can be started to debug a program on one host, while the interactive `lldb` client connects to it from another one.
To launch a new process to be debugged remotely, run `lldb-server` on the remote server by typing
[source,bash]
....
% lldb-server g host:port -- progname
....
The process will be stopped immediately after launching, and `lldb-server` will wait for the client to connect.
Start `lldb` locally and type the following command to connect to the remote server:
[source,bash]
....
(lldb) gdb-remote host:port
....
`lldb-server` can also attach to a running process. To do that, type the following on the remote server:
[source,bash]
....
% lldb-server g host:port --attach pid-or-name
....
=== Using gdb
==== Starting gdb
Start up gdb by typing
[source,bash]
....
% gdb progname
....
although many people prefer to run it inside Emacs. To do this, type:
[source,bash]
....
M-x gdb RET progname RET
....
Finally, for those finding its text-based command-prompt style off-putting, there is a graphical front-end for it (package:devel/xxgdb[]) in the Ports Collection.
==== Running a Program with gdb
Compile the program with `-g` to get the most out of using `gdb`. It will work without, but will only display the name of the function currently running, instead of the source code. A line like:
[source,bash]
....
... (no debugging symbols found) ...
....
when `gdb` starts up means that the program was not compiled with `-g`.
At the `gdb` prompt, type `break main`. This will tell the debugger to skip the preliminary set-up code in the program being run and to stop execution at the beginning of the program's code. Now type `run` to start the program- it will start at the beginning of the set-up code and then get stopped by the debugger when it calls `main()`.
To step through the program a line at a time, press `n`. When at a function call, step into it by pressing `s`. Once in a function call, return from it by pressing `f`, or use `up` and `down` to take a quick look at the caller.
Here is a simple example of how to spot a mistake in a program with `gdb`. This is our program (with a deliberate mistake):
[.programlisting]
....
#include <stdio.h>

int bazz(int anint);

main() {
    int i;

    printf("This is my program\n");
    bazz(i);
    return 0;
}

int bazz(int anint) {
    printf("You gave me %d\n", anint);
    return anint;
}
....
This program sets i to be `5` and passes it to a function `bazz()` which prints out the number we gave it.
Compiling and running the program displays
[source,bash]
....
% cc -g -o temp temp.c
% ./temp
This is my program
You gave me 4231
....
That was not what we expected! Time to see what is going on!
[source,bash]
....
% gdb temp
GDB is free software and you are welcome to distribute copies of it
under certain conditions; type "show copying" to see the conditions.
There is absolutely no warranty for GDB; type "show warranty" for details.
GDB 4.13 (i386-unknown-freebsd), Copyright 1994 Free Software Foundation, Inc.
(gdb) break main Skip the set-up code
Breakpoint 1 at 0x160f: file temp.c, line 9. gdb puts breakpoint at main()
(gdb) run Run as far as main()
Starting program: /home/james/tmp/temp Program starts running
Breakpoint 1, main () at temp.c:9 gdb stops at main()
(gdb) n Go to next line
This is my program Program prints out
(gdb) s step into bazz()
bazz (anint=4231) at temp.c:17 gdb displays stack frame
(gdb)
....
Hang on a minute! How did anint get to be `4231`? Was it not set to `5` in `main()`? Let us move up to `main()` and have a look.
[source,bash]
....
(gdb) up Move up call stack
#1 0x1625 in main () at temp.c:11 gdb displays stack frame
(gdb) p i Show us the value of i
$1 = 4231 gdb displays 4231
....
Oh dear! Looking at the code, we forgot to initialize i. We meant to put
[.programlisting]
....
...
main() {
    int i;

    i = 5;
    printf("This is my program\n");
...
....
but we left the `i=5;` line out. As we did not initialize i, it had whatever number happened to be in that area of memory when the program ran, which in this case happened to be `4231`.
[NOTE]
====
The `gdb` command displays the stack frame every time we go into or out of a function, even if we are using `up` and `down` to move around the call stack. This shows the name of the function and the values of its arguments, which helps us keep track of where we are and what is going on. (The stack is a storage area where the program stores information about the arguments passed to functions and where to go when it returns from a function call.)
====
==== Examining a Core File with gdb
A core file is basically a file which contains the complete state of the process when it crashed. In "the good old days", programmers had to print out hex listings of core files and sweat over machine code manuals, but now life is a bit easier. Incidentally, under FreeBSD and other 4.4BSD systems, a core file is called [.filename]#progname.core# instead of just [.filename]#core#, to make it clearer which program a core file belongs to.
To examine a core file, start up `gdb` in the usual way. Instead of typing `break` or `run`, type
[source,bash]
....
(gdb) core progname.core
....
If the core file is not in the current directory, type `dir /path/to/core/file` first.
The debugger should display something like this:
[source,bash,subs="verbatim,quotes"]
....
% gdb [.filename]#progname#
GDB is free software and you are welcome to distribute copies of it
under certain conditions; type "show copying" to see the conditions.
There is absolutely no warranty for GDB; type "show warranty" for details.
GDB 4.13 (i386-unknown-freebsd), Copyright 1994 Free Software Foundation, Inc.
(gdb) core [.filename]#progname.core#
Core was generated by `[.filename]#progname#'.
Program terminated with signal 11, Segmentation fault.
Cannot access memory at address 0x7020796d.
#0 0x164a in bazz (anint=0x5) at temp.c:17
(gdb)
....
In this case, the program was called [.filename]#progname#, so the core file is called [.filename]#progname.core#. We can see that the program crashed due to trying to access an area in memory that was not available to it in a function called `bazz`.
Sometimes it is useful to be able to see how a function was called, as the problem could have occurred a long way up the call stack in a complex program. `bt` causes `gdb` to print out a back-trace of the call stack:
[source,bash]
....
(gdb) bt
#0 0x164a in bazz (anint=0x5) at temp.c:17
#1 0xefbfd888 in end ()
#2 0x162c in main () at temp.c:11
(gdb)
....
The `end()` function is called when a program crashes; in this case, the `bazz()` function was called from `main()`.
==== Attaching to a Running Program with gdb
One of the neatest features about `gdb` is that it can attach to a program that is already running. Of course, that requires sufficient permissions to do so. A common problem is stepping through a program that forks and wanting to trace the child, but the debugger will only trace the parent.
To do that, start up another `gdb`, use `ps` to find the process ID for the child, and do
[source,bash]
....
(gdb) attach pid
....
in `gdb`, and then debug as usual.
For that to work well, the code that calls `fork` to create the child needs to do something like the following (courtesy of the `gdb` info pages):
[.programlisting]
....
	...
	if ((pid = fork()) < 0)		/* _Always_ check this */
		error();
	else if (pid == 0) {		/* child */
		int PauseMode = 1;

		while (PauseMode)
			sleep(10);	/* Wait until someone attaches to us */
		...
	} else {			/* parent */
		...
....
Now all that is needed is to attach to the child, set PauseMode to `0`, and wait for the `sleep()` call to return!
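For example, if `ps` showed the child's process ID as 1234 (a made-up number; use whatever you actually see), the session would look something like this:
[source,bash]
....
% gdb progname
(gdb) attach 1234
(gdb) set variable PauseMode = 0
(gdb) continue
....
Once `PauseMode` is `0`, the `while` loop falls through after the current `sleep()` returns, and the child carries on running under the debugger's control.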
[[emacs]]
== Using Emacs as a Development Environment
=== Emacs
Emacs is a highly customizable editor-indeed, it has been customized to the point where it is more like an operating system than an editor! Many developers and sysadmins do in fact spend practically all their time working inside Emacs, leaving it only to log out.
It is impossible even to summarize everything Emacs can do here, but here are some of the features of interest to developers:
* Very powerful editor, allowing search-and-replace on both strings and regular expressions (patterns), jumping to start/end of block expression, etc, etc.
* Pull-down menus and online help.
* Language-dependent syntax highlighting and indentation.
* Completely customizable.
* You can compile and debug programs within Emacs.
* On a compilation error, you can jump to the offending line of source code.
* Friendly-ish front-end to the `info` program used for reading GNU hypertext documentation, including the documentation on Emacs itself.
* Friendly front-end to `gdb`, allowing you to look at the source code as you step through your program.
And doubtless many more that have been overlooked.
Emacs can be installed on FreeBSD using the package:editors/emacs[] port.
Once it is installed, start it up and do `C-h t` to read an Emacs tutorial-that means hold down kbd:[control], press kbd:[h], let go of kbd:[control], and then press kbd:[t]. (Alternatively, you can use the mouse to select [.guimenuitem]#Emacs Tutorial# from the menu:Help[] menu.)
Although Emacs does have menus, it is well worth learning the key bindings, as it is much quicker when you are editing something to press a couple of keys than to try to find the mouse and then click on the right place. And, when you are talking to seasoned Emacs users, you will find they often casually throw around expressions like "`M-x replace-s RET foo RET bar RET`" so it is useful to know what they mean. And in any case, Emacs has far too many useful functions for them to all fit on the menu bars.
Fortunately, it is quite easy to pick up the key-bindings, as they are displayed next to the menu item. My advice is to use the menu item for, say, opening a file until you understand how it works and feel confident with it, then try doing `C-x C-f`. When you are happy with that, move on to another menu command.
If you cannot remember what a particular combination of keys does, select [.guimenuitem]#Describe Key# from the menu:Help[] menu and type it in-Emacs will tell you what it does. You can also use the [.guimenuitem]#Command Apropos# menu item to find out all the commands which contain a particular word in them, with the key binding next to it.
By the way, the expression above means hold down the kbd:[Meta] key, press kbd:[x], release the kbd:[Meta] key, type `replace-s` (short for `replace-string`-another feature of Emacs is that you can abbreviate commands), press the kbd:[return] key, type `foo` (the string you want replaced), press the kbd:[return] key, type bar (the string you want to replace `foo` with) and press kbd:[return] again. Emacs will then do the search-and-replace operation you have just requested.
If you are wondering what on earth kbd:[Meta] is, it is a special key that many UNIX(R) workstations have. Unfortunately, PCs do not have one, so it is usually kbd:[alt] (or if you are unlucky, the kbd:[escape] key).
Oh, and to get out of Emacs, do `C-x C-c` (that means hold down the kbd:[control] key, press kbd:[x], press kbd:[c] and release the kbd:[control] key). If you have any unsaved files open, Emacs will ask you if you want to save them. (Ignore the bit in the documentation where it says `C-z` is the usual way to leave Emacs-that leaves Emacs hanging around in the background, and is only really useful if you are on a system which does not have virtual terminals).
=== Configuring Emacs
Emacs does many wonderful things; some of them are built in, some of them need to be configured.
Instead of using a proprietary macro language for configuration, Emacs uses a version of Lisp specially adapted for editors, known as Emacs Lisp. Working with Emacs Lisp can be quite helpful if you want to go on and learn something like Common Lisp. Emacs Lisp has many features of Common Lisp, although it is considerably smaller (and thus easier to master).
The best way to learn Emacs Lisp is to download the link:ftp://ftp.gnu.org/old-gnu/emacs/elisp-manual-19-2.4.tar.gz[Emacs Lisp Reference Manual].
However, there is no need to actually know any Lisp to get started with configuring Emacs, as I have included a sample [.filename]#.emacs#, which should be enough to get you started. Just copy it into your home directory and restart Emacs if it is already running; it will read the commands from the file and (hopefully) give you a useful basic setup.
=== A Sample [.filename]#.emacs#
Unfortunately, there is far too much here to explain in detail; however, there are one or two points worth mentioning.
* Everything beginning with a `;` is a comment and is ignored by Emacs.
* In the first line, the `-*- Emacs-Lisp -*-` is so that we can edit [.filename]#.emacs# itself within Emacs and get all the fancy features for editing Emacs Lisp. Emacs usually tries to guess this based on the filename, and may not get it right for [.filename]#.emacs#.
* The kbd:[tab] key is bound to an indentation function in some modes, so when you press the tab key, it will indent the current line of code. If you want to put a tab character in whatever you are writing, hold the kbd:[control] key down while you are pressing the kbd:[tab] key.
* This file supports syntax highlighting for C, C++, Perl, Lisp and Scheme, by guessing the language from the filename.
* Emacs already has a pre-defined function called `next-error`. In a compilation output window, this allows you to move from one compilation error to the next by doing `M-n`; we define a complementary function, `previous-error`, that allows you to go to a previous error by doing `M-p`. The nicest feature of all is that `C-c C-c` will open up the source file in which the error occurred and jump to the appropriate line.
* We enable Emacs's ability to act as a server, so that if you are doing something outside Emacs and you want to edit a file, you can just type in
+
[source,bash]
....
% emacsclient filename
....
+
and then you can edit the file in your Emacs!footnote:[Many Emacs users set their EDITOR environment to emacsclient so this happens every time they need to edit a file.]
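If you want this to happen automatically, you can take the footnote's advice and make `emacsclient` your default editor; for example, `csh` users could add the following line to [.filename]#.cshrc# (Bourne-style shells would use `export EDITOR=emacsclient` instead):
[source,bash]
....
setenv EDITOR emacsclient
....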
.A Sample [.filename]#.emacs#
====
[.programlisting]
....
;; -*-Emacs-Lisp-*-
;; This file is designed to be re-evaled; use the variable first-time
;; to avoid any problems with this.
(defvar first-time t
"Flag signifying this is the first time that .emacs has been evaled")
;; Meta
(global-set-key "\M- " 'set-mark-command)
(global-set-key "\M-\C-h" 'backward-kill-word)
(global-set-key "\M-\C-r" 'query-replace)
(global-set-key "\M-r" 'replace-string)
(global-set-key "\M-g" 'goto-line)
(global-set-key "\M-h" 'help-command)
;; Function keys
(global-set-key [f1] 'manual-entry)
(global-set-key [f2] 'info)
(global-set-key [f3] 'repeat-complex-command)
(global-set-key [f4] 'advertised-undo)
(global-set-key [f5] 'eval-current-buffer)
(global-set-key [f6] 'buffer-menu)
(global-set-key [f7] 'other-window)
(global-set-key [f8] 'find-file)
(global-set-key [f9] 'save-buffer)
(global-set-key [f10] 'next-error)
(global-set-key [f11] 'compile)
(global-set-key [f12] 'grep)
(global-set-key [C-f1] 'compile)
(global-set-key [C-f2] 'grep)
(global-set-key [C-f3] 'next-error)
(global-set-key [C-f4] 'previous-error)
(global-set-key [C-f5] 'display-faces)
(global-set-key [C-f8] 'dired)
(global-set-key [C-f10] 'kill-compilation)
;; Keypad bindings
(global-set-key [up] "\C-p")
(global-set-key [down] "\C-n")
(global-set-key [left] "\C-b")
(global-set-key [right] "\C-f")
(global-set-key [home] "\C-a")
(global-set-key [end] "\C-e")
(global-set-key [prior] "\M-v")
(global-set-key [next] "\C-v")
(global-set-key [C-up] "\M-\C-b")
(global-set-key [C-down] "\M-\C-f")
(global-set-key [C-left] "\M-b")
(global-set-key [C-right] "\M-f")
(global-set-key [C-home] "\M-<")
(global-set-key [C-end] "\M->")
(global-set-key [C-prior] "\M-<")
(global-set-key [C-next] "\M->")
;; Mouse
(global-set-key [mouse-3] 'imenu)
;; Misc
(global-set-key [C-tab] "\C-q\t") ; Control tab quotes a tab.
(setq backup-by-copying-when-mismatch t)
;; Treat 'y' or <CR> as yes, 'n' as no.
(fset 'yes-or-no-p 'y-or-n-p)
(define-key query-replace-map [return] 'act)
(define-key query-replace-map [?\C-m] 'act)
;; Load packages
(require 'desktop)
(require 'tar-mode)
;; Pretty diff mode
(autoload 'ediff-buffers "ediff" "Intelligent Emacs interface to diff" t)
(autoload 'ediff-files "ediff" "Intelligent Emacs interface to diff" t)
(autoload 'ediff-files-remote "ediff"
"Intelligent Emacs interface to diff")
(if first-time
(setq auto-mode-alist
(append '(("\\.cpp$" . c++-mode)
("\\.hpp$" . c++-mode)
("\\.lsp$" . lisp-mode)
("\\.scm$" . scheme-mode)
("\\.pl$" . perl-mode)
) auto-mode-alist)))
;; Auto font lock mode
(defvar font-lock-auto-mode-list
(list 'c-mode 'c++-mode 'c++-c-mode 'emacs-lisp-mode 'lisp-mode 'perl-mode 'scheme-mode)
"List of modes to always start in font-lock-mode")
(defvar font-lock-mode-keyword-alist
'((c++-c-mode . c-font-lock-keywords)
(perl-mode . perl-font-lock-keywords))
"Associations between modes and keywords")
(defun font-lock-auto-mode-select ()
"Automatically select font-lock-mode if the current major mode is in font-lock-auto-mode-list"
(if (memq major-mode font-lock-auto-mode-list)
(progn
(font-lock-mode t))
)
)
(global-set-key [M-f1] 'font-lock-fontify-buffer)
;; New dabbrev stuff
;(require 'new-dabbrev)
(setq dabbrev-always-check-other-buffers t)
(setq dabbrev-abbrev-char-regexp "\\sw\\|\\s_")
(add-hook 'emacs-lisp-mode-hook
'(lambda ()
(set (make-local-variable 'dabbrev-case-fold-search) nil)
(set (make-local-variable 'dabbrev-case-replace) nil)))
(add-hook 'c-mode-hook
'(lambda ()
(set (make-local-variable 'dabbrev-case-fold-search) nil)
(set (make-local-variable 'dabbrev-case-replace) nil)))
(add-hook 'text-mode-hook
'(lambda ()
(set (make-local-variable 'dabbrev-case-fold-search) t)
(set (make-local-variable 'dabbrev-case-replace) t)))
;; C++ and C mode...
(defun my-c++-mode-hook ()
(setq tab-width 4)
(define-key c++-mode-map "\C-m" 'reindent-then-newline-and-indent)
(define-key c++-mode-map "\C-ce" 'c-comment-edit)
(setq c++-auto-hungry-initial-state 'none)
(setq c++-delete-function 'backward-delete-char)
(setq c++-tab-always-indent t)
(setq c-indent-level 4)
(setq c-continued-statement-offset 4)
(setq c++-empty-arglist-indent 4))
(defun my-c-mode-hook ()
(setq tab-width 4)
(define-key c-mode-map "\C-m" 'reindent-then-newline-and-indent)
(define-key c-mode-map "\C-ce" 'c-comment-edit)
(setq c-auto-hungry-initial-state 'none)
(setq c-delete-function 'backward-delete-char)
(setq c-tab-always-indent t)
;; BSD-ish indentation style
(setq c-indent-level 4)
(setq c-continued-statement-offset 4)
(setq c-brace-offset -4)
(setq c-argdecl-indent 0)
(setq c-label-offset -4))
;; Perl mode
(defun my-perl-mode-hook ()
(setq tab-width 4)
(define-key c++-mode-map "\C-m" 'reindent-then-newline-and-indent)
(setq perl-indent-level 4)
(setq perl-continued-statement-offset 4))
;; Scheme mode...
(defun my-scheme-mode-hook ()
(define-key scheme-mode-map "\C-m" 'reindent-then-newline-and-indent))
;; Emacs-Lisp mode...
(defun my-lisp-mode-hook ()
(define-key lisp-mode-map "\C-m" 'reindent-then-newline-and-indent)
(define-key lisp-mode-map "\C-i" 'lisp-indent-line)
(define-key lisp-mode-map "\C-j" 'eval-print-last-sexp))
;; Add all of the hooks...
(add-hook 'c++-mode-hook 'my-c++-mode-hook)
(add-hook 'c-mode-hook 'my-c-mode-hook)
(add-hook 'scheme-mode-hook 'my-scheme-mode-hook)
(add-hook 'emacs-lisp-mode-hook 'my-lisp-mode-hook)
(add-hook 'lisp-mode-hook 'my-lisp-mode-hook)
(add-hook 'perl-mode-hook 'my-perl-mode-hook)
;; Complement to next-error
(defun previous-error (n)
"Visit previous compilation error message and corresponding source code."
(interactive "p")
(next-error (- n)))
;; Misc...
(transient-mark-mode 1)
(setq mark-even-if-inactive t)
(setq visible-bell nil)
(setq next-line-add-newlines nil)
(setq compile-command "make")
(setq suggest-key-bindings nil)
(put 'eval-expression 'disabled nil)
(put 'narrow-to-region 'disabled nil)
(put 'set-goal-column 'disabled nil)
(if (>= emacs-major-version 21)
(setq show-trailing-whitespace t))
;; Elisp archive searching
(autoload 'format-lisp-code-directory "lispdir" nil t)
(autoload 'lisp-dir-apropos "lispdir" nil t)
(autoload 'lisp-dir-retrieve "lispdir" nil t)
(autoload 'lisp-dir-verify "lispdir" nil t)
;; Font lock mode
(defun my-make-face (face color &optional bold)
"Create a face from a color and optionally make it bold"
(make-face face)
(copy-face 'default face)
(set-face-foreground face color)
(if bold (make-face-bold face))
)
(if (eq window-system 'x)
(progn
(my-make-face 'blue "blue")
(my-make-face 'red "red")
(my-make-face 'green "dark green")
(setq font-lock-comment-face 'blue)
(setq font-lock-string-face 'bold)
(setq font-lock-type-face 'bold)
(setq font-lock-keyword-face 'bold)
(setq font-lock-function-name-face 'red)
(setq font-lock-doc-string-face 'green)
(add-hook 'find-file-hooks 'font-lock-auto-mode-select)
(setq baud-rate 1000000)
(global-set-key "\C-cmm" 'menu-bar-mode)
(global-set-key "\C-cms" 'scroll-bar-mode)
(global-set-key [backspace] 'backward-delete-char)
; (global-set-key [delete] 'delete-char)
(standard-display-european t)
(load-library "iso-transl")))
;; X11 or PC using direct screen writes
(if window-system
(progn
;; (global-set-key [M-f1] 'hilit-repaint-command)
;; (global-set-key [M-f2] [?\C-u M-f1])
(setq hilit-mode-enable-list
'(not text-mode c-mode c++-mode emacs-lisp-mode lisp-mode
scheme-mode)
hilit-auto-highlight nil
hilit-auto-rehighlight 'visible
hilit-inhibit-hooks nil
hilit-inhibit-rebinding t)
(require 'hilit19)
(require 'paren))
(setq baud-rate 2400) ; For slow serial connections
)
;; TTY type terminal
(if (and (not window-system)
(not (equal system-type 'ms-dos)))
(progn
(if first-time
(progn
(keyboard-translate ?\C-h ?\C-?)
(keyboard-translate ?\C-? ?\C-h)))))
;; Under UNIX
(if (not (equal system-type 'ms-dos))
(progn
(if first-time
(server-start))))
;; Add any face changes here
(add-hook 'term-setup-hook 'my-term-setup-hook)
(defun my-term-setup-hook ()
(if (eq window-system 'pc)
(progn
;; (set-face-background 'default "red")
)))
;; Restore the "desktop" - do this as late as possible
(if first-time
(progn
(desktop-load-default)
(desktop-read)))
;; Indicate that this file has been read at least once
(setq first-time nil)
;; No need to debug anything now
(setq debug-on-error nil)
;; All done
(message "All done, %s%s" (user-login-name) ".")
....
====
=== Extending the Range of Languages Emacs Understands
Now, this is all very well if you only want to program in the languages already catered for in [.filename]#.emacs# (C, C++, Perl, Lisp and Scheme), but what happens if a new language called "whizbang" comes out, full of exciting features?
The first thing to do is find out if whizbang comes with any files that tell Emacs about the language. These usually end in [.filename]#.el#, short for "Emacs Lisp". For example, if whizbang is a FreeBSD port, we can locate these files by doing
[source,bash]
....
% find /usr/ports/lang/whizbang -name "*.el" -print
....
and install them by copying them into the Emacs site Lisp directory. On FreeBSD, this is [.filename]#/usr/local/share/emacs/site-lisp#.
So for example, if the output from the find command was
[source,bash]
....
/usr/ports/lang/whizbang/work/misc/whizbang.el
....
we would do
[source,bash]
....
# cp /usr/ports/lang/whizbang/work/misc/whizbang.el /usr/local/share/emacs/site-lisp
....
Next, we need to decide what extension whizbang source files have. Let us say for the sake of argument that they all end in [.filename]#.wiz#. We need to add an entry to our [.filename]#.emacs# to make sure Emacs will be able to use the information in [.filename]#whizbang.el#.
Find the `auto-mode-alist` entry in [.filename]#.emacs# and add a line for whizbang, such as:
[.programlisting]
....
...
("\\.lsp$" . lisp-mode)
("\\.wiz$" . whizbang-mode)
("\\.scm$" . scheme-mode)
...
....
This means that Emacs will automatically go into `whizbang-mode` when you edit a file ending in [.filename]#.wiz#.
Just below this, you will find the `font-lock-auto-mode-list` entry. Add `whizbang-mode` to it like so:
[.programlisting]
....
;; Auto font lock mode
(defvar font-lock-auto-mode-list
(list 'c-mode 'c++-mode 'c++-c-mode 'emacs-lisp-mode 'whizbang-mode 'lisp-mode 'perl-mode 'scheme-mode)
"List of modes to always start in font-lock-mode")
....
This means that Emacs will always enable `font-lock-mode` (i.e., syntax highlighting) when editing a [.filename]#.wiz# file.
And that is all that is needed. If there is anything else you want done automatically when you open up a [.filename]#.wiz# file, you can add a `whizbang-mode` hook (see `my-scheme-mode-hook` for a simple example that adds `auto-indent`).
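For instance, a whizbang hook modeled on the hooks already in the sample [.filename]#.emacs# might look like this (whizbang is, of course, made up, and `whizbang-mode-map` only exists if [.filename]#whizbang.el# defines it):
[.programlisting]
....
;; Whizbang mode...
(defun my-whizbang-mode-hook ()
  (setq tab-width 4)
  (define-key whizbang-mode-map "\C-m" 'reindent-then-newline-and-indent))
(add-hook 'whizbang-mode-hook 'my-whizbang-mode-hook)
....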
[[tools-reading]]
== Further Reading
For information about setting up a development environment for contributing fixes to FreeBSD itself, please see man:development[7].
* Brian Harvey and Matthew Wright _Simply Scheme_ MIT 1994. ISBN 0-262-08226-8
* Randall Schwartz _Learning Perl_ O'Reilly 1993 ISBN 1-56592-042-2
* Patrick Henry Winston and Berthold Klaus Paul Horn _Lisp (3rd Edition)_ Addison-Wesley 1989 ISBN 0-201-08319-1
* Brian W. Kernighan and Rob Pike _The Unix Programming Environment_ Prentice-Hall 1984 ISBN 0-13-937681-X
* Brian W. Kernighan and Dennis M. Ritchie _The C Programming Language (2nd Edition)_ Prentice-Hall 1988 ISBN 0-13-110362-8
* Bjarne Stroustrup _The C++ Programming Language_ Addison-Wesley 1991 ISBN 0-201-53992-6
* W. Richard Stevens _Advanced Programming in the Unix Environment_ Addison-Wesley 1992 ISBN 0-201-56317-7
* W. Richard Stevens _Unix Network Programming_ Prentice-Hall 1990 ISBN 0-13-949876-1
diff --git a/documentation/content/en/books/developers-handbook/x86/_index.adoc b/documentation/content/en/books/developers-handbook/x86/_index.adoc
index 4307e1dd63..dcd04afa8e 100644
--- a/documentation/content/en/books/developers-handbook/x86/_index.adoc
+++ b/documentation/content/en/books/developers-handbook/x86/_index.adoc
@@ -1,3851 +1,3852 @@
---
title: Chapter 11. x86 Assembly Language Programming
authors:
prev: books/developers-handbook/partiv
next: books/developers-handbook/partv
+description: x86 Assembly Language Programming
---
[[x86]]
= x86 Assembly Language Programming
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 11
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
_This chapter was written by {stanislav}._
[[x86-intro]]
== Synopsis
Assembly language programming under UNIX(R) is highly undocumented. It is generally assumed that no one would ever want to use it because various UNIX(R) systems run on different microprocessors, so everything should be written in C for portability.
In reality, C portability is quite a myth. Even C programs need to be modified when ported from one UNIX(R) to another, regardless of what processor each runs on. Typically, such a program is full of conditional statements depending on the system it is compiled for.
Even if we believe that all of UNIX(R) software should be written in C, or some other high-level language, we still need assembly language programmers: Who else would write the section of C library that accesses the kernel?
In this chapter I will attempt to show you how you can use assembly language writing UNIX(R) programs, specifically under FreeBSD.
This chapter does not explain the basics of assembly language. There are enough resources about that: for a complete online course in assembly language, see Randall Hyde's http://webster.cs.ucr.edu/[Art of Assembly Language]; or, if you prefer a printed book, take a look at Jeff Duntemann's Assembly Language Step-by-Step (ISBN: 0471375233). However, once the chapter is finished, any assembly language programmer will be able to write programs for FreeBSD quickly and efficiently.
Copyright (R) 2000-2001 G. Adam Stanislav. All rights reserved.
[[x86-the-tools]]
== The Tools
[[x86-the-assembler]]
=== The Assembler
The most important tool for assembly language programming is the assembler, the software that converts assembly language code into machine language.
Two very different assemblers are available for FreeBSD. One is man:as[1], which uses the traditional UNIX(R) assembly language syntax. It comes with the system.
The other is /usr/ports/devel/nasm. It uses the Intel syntax. Its main advantage is that it can assemble code for many operating systems. It needs to be installed separately, but is completely free.
This chapter uses nasm syntax because most assembly language programmers coming to FreeBSD from other operating systems will find it easier to understand. And, because, quite frankly, that is what I am used to.
[[x86-the-linker]]
=== The Linker
The output of the assembler, like that of any compiler, needs to be linked to form an executable file.
The standard man:ld[1] linker comes with FreeBSD. It works with the code assembled with either assembler.
[[x86-system-calls]]
== System Calls
[[x86-default-calling-convention]]
=== Default Calling Convention
By default, the FreeBSD kernel uses the C calling convention. Further, although the kernel is accessed using `int 80h`, it is assumed the program will call a function that issues `int 80h`, rather than issuing `int 80h` directly.
This convention is very convenient, and quite superior to the Microsoft(R) convention used by MS-DOS(R). Why? Because the UNIX(R) convention allows any program written in any language to access the kernel.
An assembly language program can do that as well. For example, we could open a file:
[.programlisting]
....
kernel:
int 80h ; Call kernel
ret
open:
push dword mode
push dword flags
push dword path
mov eax, 5
call kernel
add esp, byte 12
ret
....
This is a very clean and portable way of coding. If you need to port the code to a UNIX(R) system which uses a different interrupt, or a different way of passing parameters, all you need to change is the kernel procedure.
But assembly language programmers like to shave off cycles. The above example requires a `call/ret` combination. We can eliminate it by ``push``ing an extra dword:
[.programlisting]
....
open:
push dword mode
push dword flags
push dword path
mov eax, 5
push eax ; Or any other dword
int 80h
add esp, byte 16
....
The `5` that we have placed in `EAX` identifies the kernel function, in this case `open`.
[[x86-alternate-calling-convention]]
=== Alternate Calling Convention
FreeBSD is an extremely flexible system. It offers other ways of calling the kernel. For it to work, however, the system must have Linux emulation installed.
Linux is a UNIX(R) like system. However, its kernel uses the same system-call convention as MS-DOS(R): passing parameters in registers. As with the UNIX(R) convention, the function number is placed in `EAX`. The parameters, however, are not passed on the stack but in `EBX, ECX, EDX, ESI, EDI, EBP`:
[.programlisting]
....
open:
mov eax, 5
mov ebx, path
mov ecx, flags
mov edx, mode
int 80h
....
This convention has a great disadvantage over the UNIX(R) way, at least as far as assembly language programming is concerned: Every time you make a kernel call you must `push` the registers, then `pop` them later. This makes your code bulkier and slower. Nevertheless, FreeBSD gives you a choice.
If you do choose the Linux convention, you must let the system know about it. After your program is assembled and linked, you need to brand the executable:
[source,bash]
....
% brandelf -t Linux filename
....
[[x86-use-geneva]]
=== Which Convention Should You Use?
If you are coding specifically for FreeBSD, you should always use the UNIX(R) convention: It is faster, you can store global variables in registers, you do not have to brand the executable, and you do not impose the installation of the Linux emulation package on the target system.
If you want to create portable code that can also run on Linux, you will probably still want to give the FreeBSD users as efficient a code as possible. I will show you how you can accomplish that after I have explained the basics.
[[x86-call-numbers]]
=== Call Numbers
To tell the kernel which system service you are calling, place its number in `EAX`. Of course, you need to know what the number is.
[[x86-the-syscalls-file]]
==== The [.filename]#syscalls# File
The numbers are listed in [.filename]#syscalls#. `locate syscalls` finds this file in several different formats, all produced automatically from [.filename]#syscalls.master#.
You can find the master file for the default UNIX(R) calling convention in [.filename]#/usr/src/sys/kern/syscalls.master#. If you need to use the other convention implemented in the Linux emulation mode, read [.filename]#/usr/src/sys/i386/linux/syscalls.master#.
[NOTE]
====
Not only do FreeBSD and Linux use different calling conventions, they sometimes use different numbers for the same functions.
====
[.filename]#syscalls.master# describes how the call is to be made:
[.programlisting]
....
0 STD NOHIDE { int nosys(void); } syscall nosys_args int
1 STD NOHIDE { void exit(int rval); } exit rexit_args void
2 STD POSIX { int fork(void); }
3 STD POSIX { ssize_t read(int fd, void *buf, size_t nbyte); }
4 STD POSIX { ssize_t write(int fd, const void *buf, size_t nbyte); }
5 STD POSIX { int open(char *path, int flags, int mode); }
6 STD POSIX { int close(int fd); }
etc...
....
It is the leftmost column that tells us the number to place in `EAX`.
The rightmost column tells us what parameters to `push`. They are ``push``ed _from right to left_.
For example, to `open` a file, we need to `push` the `mode` first, then `flags`, then the address at which the `path` is stored.
[[x86-return-values]]
== Return Values
A system call would not be useful most of the time if it did not return some kind of a value: The file descriptor of an open file, the number of bytes read to a buffer, the system time, etc.
Additionally, the system needs to inform us if an error occurs: A file does not exist, system resources are exhausted, we passed an invalid parameter, etc.
[[x86-man-pages]]
=== Man Pages
The traditional place to look for information about various system calls under UNIX(R) systems are the manual pages. FreeBSD describes its system calls in section 2, sometimes in section 3.
For example, man:open[2] says:
[.blockquote]
If successful, `open()` returns a non-negative integer, termed a file descriptor. It returns `-1` on failure, and sets `errno` to indicate the error.
The assembly language programmer new to UNIX(R) and FreeBSD will immediately ask the puzzling question: Where is `errno` and how do I get to it?
[NOTE]
====
The information presented in the manual pages applies to C programs. The assembly language programmer needs additional information.
====
[[x86-where-return-values]]
=== Where Are the Return Values?
Unfortunately, it depends... For most system calls it is in `EAX`, but not for all. A good rule of thumb, when working with a system call for the first time, is to look for the return value in `EAX`. If it is not there, you need further research.
[NOTE]
====
I am aware of one system call that returns the value in `EDX`: `SYS_fork`. All others I have worked with use `EAX`. But I have not worked with them all yet.
====
[TIP]
====
If you cannot find the answer here or anywhere else, study libc source code and see how it interfaces with the kernel.
====
[[x86-where-errno]]
=== Where Is `errno`?
Actually, nowhere...
`errno` is part of the C language, not the UNIX(R) kernel. When accessing kernel services directly, the error code is returned in `EAX`, the same register the proper return value generally ends up in.
This makes perfect sense. If there is no error, there is no error code. If there is an error, there is no return value. One register can contain either.
[[x86-how-to-know-error]]
=== Determining Whether an Error Occurred
When using the standard FreeBSD calling convention, the `carry flag` is cleared upon success, set upon failure.
When using the Linux emulation mode, the signed value in `EAX` is non-negative upon success, and contains the return value. In case of an error, the value is negative, i.e., `-errno`.
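In practice, the check right after the `int 80h` looks something like this (just a sketch; `.error` is a placeholder label):
[.programlisting]
....
; FreeBSD (default) convention
	int 80h
	jc .error	; carry set: EAX contains the error code

; Linux emulation convention
	int 80h
	or eax, eax
	js .error	; EAX is negative: it contains -errno
....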
[[x86-portable-code]]
== Creating Portable Code
Portability is generally not one of the strengths of assembly language. Yet, writing assembly language programs for different platforms is possible, especially with nasm. I have written assembly language libraries that can be assembled for such different operating systems as Windows(R) and FreeBSD.
It is all the more possible when you want your code to run on two platforms which, while different, are based on similar architectures.
For example, FreeBSD is UNIX(R), Linux is UNIX(R) like. I only mentioned three differences between them (from an assembly language programmer's perspective): The calling convention, the function numbers, and the way of returning values.
[[x86-deal-with-function-numbers]]
=== Dealing with Function Numbers
In many cases the function numbers are the same. However, even when they are not, the problem is easy to deal with: Instead of using numbers in your code, use constants which you have declared differently depending on the target architecture:
[.programlisting]
....
%ifdef LINUX
%define SYS_execve 11
%else
%define SYS_execve 59
%endif
....
[[x86-deal-with-geneva]]
=== Dealing with Conventions
Both the calling convention and the return value (the `errno` problem) can be resolved with macros:
[.programlisting]
....
%ifdef LINUX
%macro system 0
call kernel
%endmacro
align 4
kernel:
push ebx
push ecx
push edx
push esi
push edi
push ebp
mov ebx, [esp+32]
mov ecx, [esp+36]
mov edx, [esp+40]
mov esi, [esp+44]
mov edi, [esp+48]
mov ebp, [esp+52]
int 80h
pop ebp
pop edi
pop esi
pop edx
pop ecx
pop ebx
or eax, eax
js .errno
clc
ret
.errno:
neg eax
stc
ret
%else
%macro system 0
int 80h
%endmacro
%endif
....
[[x86-deal-with-other-portability]]
=== Dealing with Other Portability Issues
The above solutions can handle most cases of writing code portable between FreeBSD and Linux. Nevertheless, with some kernel services the differences are deeper.
In that case, you need to write two different handlers for those particular system calls, and use conditional assembly. Luckily, most of your code does something other than calling the kernel, so usually you will only need a few such conditional sections in your code.
[[x86-portable-library]]
=== Using a Library
You can avoid portability issues in your main code altogether by writing a library of system calls. Create a separate library for FreeBSD, a different one for Linux, and yet other libraries for more operating systems.
In your library, write a separate function (or procedure, if you prefer the traditional assembly language terminology) for each system call. Use the C calling convention of passing parameters. But still use `EAX` to pass the call number in. In that case, your FreeBSD library can be very simple, as many seemingly different functions can be just labels to the same code:
[.programlisting]
....
sys.open:
sys.close:
[etc...]
int 80h
ret
....
Your Linux library will require more separate functions. But even here you can group system calls using the same number of parameters:
[.programlisting]
....
sys.exit:
sys.close:
[etc... one-parameter functions]
push ebx
mov ebx, [esp+12]
int 80h
pop ebx
jmp sys.return
...
sys.return:
or eax, eax
js sys.err
clc
ret
sys.err:
neg eax
stc
ret
....
The library approach may seem inconvenient at first because it requires you to produce a separate file your code depends on. But it has many advantages: For one, you only need to write it once and can use it for all your programs. You can even let other assembly language programmers use it, or perhaps use one written by someone else. But perhaps the greatest advantage of the library is that your code can be ported to other systems, even by other programmers, by simply writing a new library without any changes to your code.
If you do not like the idea of having a library, you can at least place all your system calls in a separate assembly language file and link it with your main program. Here, again, all porters have to do is create a new object file to link with your main program.
[[x86-portable-include]]
=== Using an Include File
If you are releasing your software as (or with) source code, you can use macros and place them in a separate file, which you include in your code.
Porters of your software will simply write a new include file. No library or external object file is necessary, yet your code is portable without any need to edit the code.
[NOTE]
====
This is the approach we will use throughout this chapter. We will name our include file [.filename]#system.inc#, and add to it whenever we deal with a new system call.
====
We can start our [.filename]#system.inc# by declaring the standard file descriptors:
[.programlisting]
....
%define stdin 0
%define stdout 1
%define stderr 2
....
Next, we create a symbolic name for each system call:
[.programlisting]
....
%define SYS_nosys 0
%define SYS_exit 1
%define SYS_fork 2
%define SYS_read 3
%define SYS_write 4
; [etc...]
....
We add a short, non-global procedure with a long name, so we do not accidentally reuse the name in our code:
[.programlisting]
....
section .text
align 4
access.the.bsd.kernel:
int 80h
ret
....
We create a macro which takes one argument, the syscall number:
[.programlisting]
....
%macro system 1
mov eax, %1
call access.the.bsd.kernel
%endmacro
....
Finally, we create macros for each syscall. These macros take no arguments.
[.programlisting]
....
%macro sys.exit 0
system SYS_exit
%endmacro
%macro sys.fork 0
system SYS_fork
%endmacro
%macro sys.read 0
system SYS_read
%endmacro
%macro sys.write 0
system SYS_write
%endmacro
; [etc...]
....
Go ahead, enter it into your editor and save it as [.filename]#system.inc#. We will add more to it as we discuss more syscalls.
[[x86-first-program]]
== Our First Program
We are now ready for our first program, the mandatory Hello, World!
[.programlisting]
....
1: %include 'system.inc'
2:
3: section .data
4: hello db 'Hello, World!', 0Ah
5: hbytes equ $-hello
6:
7: section .text
8: global _start
9: _start:
10: push dword hbytes
11: push dword hello
12: push dword stdout
13: sys.write
14:
15: push dword 0
16: sys.exit
....
Here is what it does: Line 1 includes the defines, the macros, and the code from [.filename]#system.inc#.
Lines 3-5 are the data: Line 3 starts the data section/segment. Line 4 contains the string "Hello, World!" followed by a new line (`0Ah`). Line 5 creates a constant that contains the length of the string from line 4 in bytes.
Lines 7-16 contain the code. Note that FreeBSD uses the _elf_ file format for its executables, which requires every program to start at the point labeled `_start` (or, more precisely, the linker expects that). This label has to be global.
Lines 10-13 ask the system to write `hbytes` bytes of the `hello` string to `stdout`.
Lines 15-16 ask the system to end the program with the return value of `0`. The `SYS_exit` syscall never returns, so the code ends there.
[NOTE]
====
If you have come to UNIX(R) from an MS-DOS(R) assembly language background, you may be used to writing directly to the video hardware. You will never have to worry about this in FreeBSD, or any other flavor of UNIX(R). As far as you are concerned, you are writing to a file known as [.filename]#stdout#. This can be the video screen, or a telnet terminal, or an actual file, or even the input of another program. Which one it is, is for the system to figure out.
====
[[x86-assemble-1]]
=== Assembling the Code
Type the code (except the line numbers) in an editor, and save it in a file named [.filename]#hello.asm#. You need nasm to assemble it.
[[x86-get-nasm]]
==== Installing nasm
If you do not have nasm, type:
[source,bash]
....
% su
Password:your root password
# cd /usr/ports/devel/nasm
# make install
# exit
%
....
You may type `make install clean` instead of just `make install` if you do not want to keep nasm source code.
Either way, FreeBSD will automatically download nasm from the Internet, compile it, and install it on your system.
[NOTE]
====
If your system is not FreeBSD, you need to get nasm from its https://sourceforge.net/projects/nasm[home page]. You can still use it to assemble FreeBSD code.
====
Now you can assemble, link, and run the code:
[source,bash]
....
% nasm -f elf hello.asm
% ld -s -o hello hello.o
% ./hello
Hello, World!
%
....
[[x86-unix-filters]]
== Writing UNIX(R) Filters
A common type of UNIX(R) application is a filter-a program that reads data from the [.filename]#stdin#, processes it somehow, then writes the result to [.filename]#stdout#.
In this chapter, we shall develop a simple filter, and learn how to read from [.filename]#stdin# and write to [.filename]#stdout#. This filter will convert each byte of its input into a hexadecimal number followed by a blank space.
[.programlisting]
....
%include 'system.inc'
section .data
hex db '0123456789ABCDEF'
buffer db 0, 0, ' '
section .text
global _start
_start:
; read a byte from stdin
push dword 1
push dword buffer
push dword stdin
sys.read
add esp, byte 12
or eax, eax
je .done
; convert it to hex
movzx eax, byte [buffer]
mov edx, eax
shr dl, 4
mov dl, [hex+edx]
mov [buffer], dl
and al, 0Fh
mov al, [hex+eax]
mov [buffer+1], al
; print it
push dword 3
push dword buffer
push dword stdout
sys.write
add esp, byte 12
jmp short _start
.done:
push dword 0
sys.exit
....
In the data section we create an array called `hex`. It contains the 16 hexadecimal digits in ascending order. The array is followed by a buffer which we will use for both input and output. The first two bytes of the buffer are initially set to `0`. This is where we will write the two hexadecimal digits (the first byte also is where we will read the input). The third byte is a space.
The code section consists of four parts: Reading the byte, converting it to a hexadecimal number, writing the result, and eventually exiting the program.
To read the byte, we ask the system to read one byte from [.filename]#stdin#, and store it in the first byte of the `buffer`. The system returns the number of bytes read in `EAX`. This will be `1` while data is coming, or `0`, when no more input data is available. Therefore, we check the value of `EAX`. If it is `0`, we jump to `.done`, otherwise we continue.
[NOTE]
====
For simplicity's sake, we are ignoring the possibility of an error condition at this time.
====
The hexadecimal conversion reads the byte from the `buffer` into `EAX`, or actually just `AL`, while clearing the remaining bits of `EAX` to zeros. We also copy the byte to `EDX` because we need to convert the upper four bits (nibble) separately from the lower four bits. We store the result in the first two bytes of the buffer.
Next, we ask the system to write the three bytes of the buffer, i.e., the two hexadecimal digits and the blank space, to [.filename]#stdout#. We then jump back to the beginning of the program and process the next byte.
Once there is no more input left, we ask the system to exit our program, returning a zero, which is the traditional value meaning the program was successful.
Go ahead, and save the code in a file named [.filename]#hex.asm#, then type the following (the `^D` means press the control key and type `D` while holding the control key down):
[source,bash]
....
% nasm -f elf hex.asm
% ld -s -o hex hex.o
% ./hex
Hello, World!
48 65 6C 6C 6F 2C 20 57 6F 72 6C 64 21 0A Here I come!
48 65 72 65 20 49 20 63 6F 6D 65 21 0A ^D %
....
[NOTE]
====
If you are migrating to UNIX(R) from MS-DOS(R), you may be wondering why each line ends with `0A` instead of `0D 0A`. This is because UNIX(R) does not use the cr/lf convention, but a "new line" convention, which is `0A` in hexadecimal.
====
Can we improve this? Well, for one, it is a bit confusing because once we have converted a line of text, our input no longer starts at the beginning of the line. We can modify it to print a new line instead of a space after each `0A`:
[.programlisting]
....
%include 'system.inc'
section .data
hex db '0123456789ABCDEF'
buffer db 0, 0, ' '
section .text
global _start
_start:
mov cl, ' '
.loop:
; read a byte from stdin
push dword 1
push dword buffer
push dword stdin
sys.read
add esp, byte 12
or eax, eax
je .done
; convert it to hex
movzx eax, byte [buffer]
mov [buffer+2], cl
cmp al, 0Ah
jne .hex
mov [buffer+2], al
.hex:
mov edx, eax
shr dl, 4
mov dl, [hex+edx]
mov [buffer], dl
and al, 0Fh
mov al, [hex+eax]
mov [buffer+1], al
; print it
push dword 3
push dword buffer
push dword stdout
sys.write
add esp, byte 12
jmp short .loop
.done:
push dword 0
sys.exit
....
We have stored the space in the `CL` register. We can do this safely because, unlike Microsoft(R) Windows(R), UNIX(R) system calls do not modify the value of any register they do not use to return a value in.
That means we only need to set `CL` once. We have, therefore, added a new label `.loop` and jump to it for the next byte instead of jumping to `_start`. We have also added the `.hex` label so we can either have a blank space or a new line as the third byte of the `buffer`.
Once you have changed [.filename]#hex.asm# to reflect these changes, type:
[source,bash]
....
% nasm -f elf hex.asm
% ld -s -o hex hex.o
% ./hex
Hello, World!
48 65 6C 6C 6F 2C 20 57 6F 72 6C 64 21 0A
Here I come!
48 65 72 65 20 49 20 63 6F 6D 65 21 0A
^D %
....
That looks better. But this code is quite inefficient! We are making two system calls for every single byte (one to read it, another to write the output).
[[x86-buffered-io]]
== Buffered Input and Output
We can improve the efficiency of our code by buffering our input and output. We create an input buffer and read a whole sequence of bytes at one time. Then we fetch them one by one from the buffer.
We also create an output buffer. We store our output in it until it is full. At that time we ask the kernel to write the contents of the buffer to [.filename]#stdout#.
The program ends when there is no more input. But we still need to ask the kernel to write the contents of our output buffer to [.filename]#stdout# one last time, otherwise some of our output would make it to the output buffer, but never be sent out. Do not forget that, or you will be wondering why some of your output is missing.
[.programlisting]
....
%include 'system.inc'
%define BUFSIZE 2048
section .data
hex db '0123456789ABCDEF'
section .bss
ibuffer resb BUFSIZE
obuffer resb BUFSIZE
section .text
global _start
_start:
sub eax, eax
sub ebx, ebx
sub ecx, ecx
mov edi, obuffer
.loop:
; read a byte from stdin
call getchar
; convert it to hex
mov dl, al
shr al, 4
mov al, [hex+eax]
call putchar
mov al, dl
and al, 0Fh
mov al, [hex+eax]
call putchar
mov al, ' '
cmp dl, 0Ah
jne .put
mov al, dl
.put:
call putchar
jmp short .loop
align 4
getchar:
or ebx, ebx
jne .fetch
call read
.fetch:
lodsb
dec ebx
ret
read:
push dword BUFSIZE
mov esi, ibuffer
push esi
push dword stdin
sys.read
add esp, byte 12
mov ebx, eax
or eax, eax
je .done
sub eax, eax
ret
align 4
.done:
call write ; flush output buffer
push dword 0
sys.exit
align 4
putchar:
stosb
inc ecx
cmp ecx, BUFSIZE
je write
ret
align 4
write:
sub edi, ecx ; start of buffer
push ecx
push edi
push dword stdout
sys.write
add esp, byte 12
sub eax, eax
sub ecx, ecx ; buffer is empty now
ret
....
We now have a third section in the source code, named `.bss`. This section is not included in our executable file, and, therefore, cannot be initialized. We use `resb` instead of `db`. It simply reserves the requested size of uninitialized memory for our use.
We take advantage of the fact that the system does not modify the registers: We use registers for what, otherwise, would have to be global variables stored in the `.data` section. This is also why the UNIX(R) convention of passing parameters to system calls on the stack is superior to the Microsoft convention of passing them in the registers: We can keep the registers for our own use.
We use `EDI` and `ESI` as pointers to the next byte to be read from or written to. We use `EBX` and `ECX` to keep count of the number of bytes in the two buffers, so we know when to dump the output to, or read more input from, the system.
Let us see how it works now:
[source,bash]
....
% nasm -f elf hex.asm
% ld -s -o hex hex.o
% ./hex
Hello, World!
Here I come!
48 65 6C 6C 6F 2C 20 57 6F 72 6C 64 21 0A
48 65 72 65 20 49 20 63 6F 6D 65 21 0A
^D %
....
Not what you expected? The program did not print the output until we pressed `^D`. That is easy to fix by inserting three lines of code to write the output every time we have converted a new line to `0A`. I have marked the three lines with > (do not copy the > in your [.filename]#hex.asm#).
[.programlisting]
....
%include 'system.inc'
%define BUFSIZE 2048
section .data
hex db '0123456789ABCDEF'
section .bss
ibuffer resb BUFSIZE
obuffer resb BUFSIZE
section .text
global _start
_start:
sub eax, eax
sub ebx, ebx
sub ecx, ecx
mov edi, obuffer
.loop:
; read a byte from stdin
call getchar
; convert it to hex
mov dl, al
shr al, 4
mov al, [hex+eax]
call putchar
mov al, dl
and al, 0Fh
mov al, [hex+eax]
call putchar
mov al, ' '
cmp dl, 0Ah
jne .put
mov al, dl
.put:
call putchar
> cmp al, 0Ah
> jne .loop
> call write
jmp short .loop
align 4
getchar:
or ebx, ebx
jne .fetch
call read
.fetch:
lodsb
dec ebx
ret
read:
push dword BUFSIZE
mov esi, ibuffer
push esi
push dword stdin
sys.read
add esp, byte 12
mov ebx, eax
or eax, eax
je .done
sub eax, eax
ret
align 4
.done:
call write ; flush output buffer
push dword 0
sys.exit
align 4
putchar:
stosb
inc ecx
cmp ecx, BUFSIZE
je write
ret
align 4
write:
sub edi, ecx ; start of buffer
push ecx
push edi
push dword stdout
sys.write
add esp, byte 12
sub eax, eax
sub ecx, ecx ; buffer is empty now
ret
....
Now, let us see how it works:
[source,bash]
....
% nasm -f elf hex.asm
% ld -s -o hex hex.o
% ./hex
Hello, World!
48 65 6C 6C 6F 2C 20 57 6F 72 6C 64 21 0A
Here I come!
48 65 72 65 20 49 20 63 6F 6D 65 21 0A
^D %
....
Not bad for a 644-byte executable, is it!
[NOTE]
====
This approach to buffered input/output still contains a hidden danger. I will discuss-and fix-it later, when I talk about the <<x86-buffered-dark-side,dark side of buffering>>.
====
[[x86-ungetc]]
=== How to Unread a Character
[WARNING]
====
This may be a somewhat advanced topic, mostly of interest to programmers familiar with the theory of compilers. If you wish, you may <<x86-command-line,skip to the next section>>, and perhaps read this later.
====
While our sample program does not require it, more sophisticated filters often need to look ahead. In other words, they may need to see what the next character is (or even several characters). If the next character is of a certain value, it is part of the token currently being processed. Otherwise, it is not.
For example, you may be parsing the input stream for a textual string (e.g., when implementing a language compiler): If a character is followed by another character, or perhaps a digit, it is part of the token you are processing. If it is followed by white space, or some other value, then it is not part of the current token.
This presents an interesting problem: How to return the next character back to the input stream, so it can be read again later?
One possible solution is to store it in a character variable, then set a flag. We can modify `getchar` to check the flag, and if it is set, fetch the byte from that variable instead of the input buffer, and reset the flag. But, of course, that slows us down.
The C language has an `ungetc()` function, just for that purpose. Is there a quick way to implement it in our code? I would like you to scroll back up and take a look at the `getchar` procedure and see if you can find a nice and fast solution before reading the next paragraph. Then come back here and see my own solution.
The key to returning a character back to the stream is in how we are getting the characters to start with:
First we check if the buffer is empty by testing the value of `EBX`. If it is zero, we call the `read` procedure.
If we do have a character available, we use `lodsb`, then decrease the value of `EBX`. The `lodsb` instruction is effectively identical to:
[.programlisting]
....
mov al, [esi]
inc esi
....
The byte we have fetched remains in the buffer until the next time `read` is called. We do not know when that happens, but we do know it will not happen until the next call to `getchar`. Hence, to "return" the last-read byte back to the stream, all we have to do is decrease the value of `ESI` and increase the value of `EBX`:
[.programlisting]
....
ungetc:
dec esi
inc ebx
ret
....
But, be careful! We are perfectly safe doing this if our look-ahead is at most one character at a time. If we are examining more than one upcoming character and call `ungetc` several times in a row, it will work most of the time, but not all the time (and will be tough to debug). Why?
Because as long as `getchar` does not have to call `read`, all of the pre-read bytes are still in the buffer, and our `ungetc` works without a glitch. But the moment `getchar` calls `read`, the contents of the buffer change.
We can always rely on `ungetc` working properly on the last character we have read with `getchar`, but not on anything we have read before that.
If your program reads more than one byte ahead, you have at least two choices:
If possible, modify the program so it only reads one byte ahead. This is the simplest solution.
If that option is not available, first of all determine the maximum number of characters your program needs to return to the input stream at one time. Increase that number slightly, just to be sure, preferably to a multiple of 16-so it aligns nicely. Then modify the `.bss` section of your code, and create a small "spare" buffer right before your input buffer, something like this:
[.programlisting]
....
section .bss
resb 16 ; or whatever the value you came up with
ibuffer resb BUFSIZE
obuffer resb BUFSIZE
....
You also need to modify your `ungetc` to pass the value of the byte to unget in `AL`:
[.programlisting]
....
ungetc:
dec esi
inc ebx
mov [esi], al
ret
....
With this modification, you can call `ungetc` up to 17 times in a row safely (the first call will still be within the buffer, the remaining 16 may be either within the buffer or within the "spare").
[[x86-command-line]]
== Command Line Arguments
Our hex program will be more useful if it can read the names of an input and output file from its command line, i.e., if it can process the command line arguments. But... Where are they?
Before a UNIX(R) system starts a program, it ``push``es some data on the stack, then jumps to the `_start` label of the program. Yes, I said jumps, not calls. That means the data can be accessed by reading `[esp+offset]`, or by simply ``pop``ping it.
The value at the top of the stack contains the number of command line arguments. It is traditionally called `argc`, for "argument count."
Command line arguments follow next, all `argc` of them. These are typically referred to as `argv`, for "argument value(s)." That is, we get `argv[0]`, `argv[1]`, `...`, `argv[argc-1]`. These are not the actual arguments, but pointers to arguments, i.e., memory addresses of the actual arguments. The arguments themselves are NUL-terminated character strings.
The `argv` list is followed by a NULL pointer, which is simply a `0`. There is more, but this is enough for our purposes right now.
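To make the layout concrete, here is a sketch of how the values could be read in place (without popping them) right at `_start`:
[.programlisting]
....
_start:
	mov eax, [esp]		; argc
	mov ebx, [esp+4]	; argv[0] - pointer to the program name
	mov ecx, [esp+8]	; argv[1] - pointer to the first argument, if any
	; ...
	; argv[argc] is the NULL pointer that terminates the list
....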
[NOTE]
====
If you have come from the MS-DOS(R) programming environment, the main difference is that each argument is in a separate string. The second difference is that there is no practical limit on how many arguments there can be.
====
Armed with this knowledge, we are almost ready for the next version of [.filename]#hex.asm#. First, however, we need to add a few lines to [.filename]#system.inc#:
First, we need to add two new entries to our list of system call numbers:
[.programlisting]
....
%define SYS_open 5
%define SYS_close 6
....
Then we add two new macros at the end of the file:
[.programlisting]
....
%macro sys.open 0
system SYS_open
%endmacro
%macro sys.close 0
system SYS_close
%endmacro
....
Here, then, is our modified source code:
[.programlisting]
....
%include 'system.inc'
%define BUFSIZE 2048
section .data
fd.in dd stdin
fd.out dd stdout
hex db '0123456789ABCDEF'
section .bss
ibuffer resb BUFSIZE
obuffer resb BUFSIZE
section .text
align 4
err:
push dword 1 ; return failure
sys.exit
align 4
global _start
_start:
add esp, byte 8 ; discard argc and argv[0]
pop ecx
jecxz .init ; no more arguments
; ECX contains the path to input file
push dword 0 ; O_RDONLY
push ecx
sys.open
jc err ; open failed
add esp, byte 8
mov [fd.in], eax
pop ecx
jecxz .init ; no more arguments
; ECX contains the path to output file
push dword 420 ; file mode (644 octal)
push dword 0200h | 0400h | 01h
; O_CREAT | O_TRUNC | O_WRONLY
push ecx
sys.open
jc err
add esp, byte 12
mov [fd.out], eax
.init:
sub eax, eax
sub ebx, ebx
sub ecx, ecx
mov edi, obuffer
.loop:
; read a byte from input file or stdin
call getchar
; convert it to hex
mov dl, al
shr al, 4
mov al, [hex+eax]
call putchar
mov al, dl
and al, 0Fh
mov al, [hex+eax]
call putchar
mov al, ' '
cmp dl, 0Ah
jne .put
mov al, dl
.put:
call putchar
cmp al, dl
jne .loop
call write
jmp short .loop
align 4
getchar:
or ebx, ebx
jne .fetch
call read
.fetch:
lodsb
dec ebx
ret
read:
push dword BUFSIZE
mov esi, ibuffer
push esi
push dword [fd.in]
sys.read
add esp, byte 12
mov ebx, eax
or eax, eax
je .done
sub eax, eax
ret
align 4
.done:
call write ; flush output buffer
; close files
push dword [fd.in]
sys.close
push dword [fd.out]
sys.close
; return success
push dword 0
sys.exit
align 4
putchar:
stosb
inc ecx
cmp ecx, BUFSIZE
je write
ret
align 4
write:
sub edi, ecx ; start of buffer
push ecx
push edi
push dword [fd.out]
sys.write
add esp, byte 12
sub eax, eax
sub ecx, ecx ; buffer is empty now
ret
....
In our `.data` section we now have two new variables, `fd.in` and `fd.out`. We store the input and output file descriptors here.
In the `.text` section we have replaced the references to `stdin` and `stdout` with `[fd.in]` and `[fd.out]`.
The `.text` section now starts with a simple error handler, which does nothing but exit the program with a return value of `1`. The error handler is before `_start` so we are within a short distance from where the errors occur.
Naturally, the program execution still begins at `_start`. First, we remove `argc` and `argv[0]` from the stack: They are of no interest to us (in this program, that is).
We pop `argv[1]` to `ECX`. This register is particularly suited for pointers, as we can handle NULL pointers with `jecxz`. If `argv[1]` is not NULL, we try to open the file named in the first argument. Otherwise, we continue the program as before: Reading from `stdin`, writing to `stdout`. If we fail to open the input file (e.g., it does not exist), we jump to the error handler and quit.
If all went well, we now check for the second argument. If it is there, we open the output file. Otherwise, we send the output to `stdout`. If we fail to open the output file (e.g., it exists and we do not have the write permission), we, again, jump to the error handler.
The rest of the code is the same as before, except we close the input and output files before exiting, and, as mentioned, we use `[fd.in]` and `[fd.out]`.
Our executable is now a whopping 768 bytes long.
Can we still improve it? Of course! Every program can be improved. Here are a few ideas of what we could do:
* Have our error handler print a message to `stderr`.
* Add error handlers to the `read` and `write` functions.
* Close `stdin` when we open an input file, `stdout` when we open an output file.
* Add command line switches, such as `-i` and `-o`, so we can list the input and output files in any order, or perhaps read from `stdin` and write to a file.
* Print a usage message if command line arguments are incorrect.
I shall leave these enhancements as an exercise to the reader: You already know everything you need to know to implement them.
[[x86-environment]]
== UNIX(R) Environment
An important UNIX(R) concept is the environment, which is defined by _environment variables_. Some are set by the system, others by you, yet others by the shell, or any program that loads another program.
[[x86-find-environment]]
=== How to Find Environment Variables
I said earlier that when a program starts executing, the stack contains `argc` followed by the NULL-terminated `argv` array, followed by something else. The "something else" is the _environment_, or, to be more precise, a NULL-terminated array of pointers to _environment variables_. This is often referred to as `env`.
The structure of `env` is the same as that of `argv`: a list of memory addresses followed by a NULL (`0`). In this case, there is no `envc`; we figure out where the array ends by searching for the final NULL.
The variables usually come in the `name=value` format, but sometimes the `=value` part may be missing. We need to account for that possibility.
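For comparison, here is a hedged C sketch of mine that walks the same array; the three-argument form of `main` shown here is a common UNIX(R) extension, not standard C:

[.programlisting]
....
#include <stdio.h>
#include <string.h>

/* Walk the NULL-terminated environment array, splitting each entry
 * at the first '=' (if there is one). */
int main(int argc, char **argv, char **envp)
{
    (void)argc;
    (void)argv;

    for (char **p = envp; *p != NULL; p++) {
        char *eq = strchr(*p, '=');

        if (eq == NULL)                 /* no "=value" part at all */
            printf("%s = (undefined)\n", *p);
        else
            printf("%.*s = %s\n", (int)(eq - *p), *p, eq + 1);
    }
    return 0;
}
....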
[[x86-webvar]]
=== webvars
I could just show you some code that prints the environment the same way the UNIX(R) env command does. But I thought it would be more interesting to write a simple assembly language CGI utility.
[[x86-cgi]]
==== CGI: a Quick Overview
I have a http://www.whizkidtech.redprince.net/cgi-bin/tutorial[detailed CGI tutorial] on my web site, but here is a very quick overview of CGI:
* The web server communicates with the CGI program by setting _environment variables_.
* The CGI program sends its output to [.filename]#stdout#. The web server reads it from there.
* It must start with an HTTP header followed by two blank lines.
* It then prints the HTML code, or whatever other type of data it is producing.
[NOTE]
====
While certain _environment variables_ use standard names, others vary, depending on the web server. That makes webvars quite a useful diagnostic tool.
====
[[x86-webvars-the-code]]
==== The Code
Our webvars program, then, must send out the HTTP header followed by some HTML mark-up. It then must read the _environment variables_ one by one and send them out as part of the HTML page.
The code follows. I placed comments and explanations right inside the code:
[.programlisting]
....
;;;;;;; webvars.asm ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;
; Copyright (c) 2000 G. Adam Stanislav
; All rights reserved.
;
; Redistribution and use in source and binary forms, with or without
; modification, are permitted provided that the following conditions
; are met:
; 1. Redistributions of source code must retain the above copyright
; notice, this list of conditions and the following disclaimer.
; 2. Redistributions in binary form must reproduce the above copyright
; notice, this list of conditions and the following disclaimer in the
; documentation and/or other materials provided with the distribution.
;
; THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
; ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
; IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
; ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
; FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
; DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
; OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
; HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
; LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
; OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
; SUCH DAMAGE.
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;
; Version 1.0
;
; Started: 8-Dec-2000
; Updated: 8-Dec-2000
;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
%include 'system.inc'
section .data
http db 'Content-type: text/html', 0Ah, 0Ah
db '<?xml version="1.0" encoding="utf-8"?>', 0Ah
db '<!DOCTYPE html PUBLIC "-//W3C/DTD XHTML Strict//EN" '
db '"DTD/xhtml1-strict.dtd">', 0Ah
db '<html xmlns="http://www.w3.org/1999/xhtml" '
db 'xml.lang="en" lang="en">', 0Ah
db '<head>', 0Ah
db '<title>Web Environment</title>', 0Ah
db '<meta name="author" content="G. Adam Stanislav" />', 0Ah
db '</head>', 0Ah, 0Ah
db '<body bgcolor="#ffffff" text="#000000" link="#0000ff" '
db 'vlink="#840084" alink="#0000ff">', 0Ah
db '<div class="webvars">', 0Ah
db '<h1>Web Environment</h1>', 0Ah
db '<p>The following <b>environment variables</b> are defined '
db 'on this web server:</p>', 0Ah, 0Ah
db '<table align="center" width="80" border="0" cellpadding="10" '
db 'cellspacing="0" class="webvars">', 0Ah
httplen equ $-http
left db '<tr>', 0Ah
db '<td class="name"><tt>'
leftlen equ $-left
middle db '</tt></td>', 0Ah
db '<td class="value"><tt><b>'
midlen equ $-middle
undef db '<i>(undefined)</i>'
undeflen equ $-undef
right db '</b></tt></td>', 0Ah
db '</tr>', 0Ah
rightlen equ $-right
wrap db '</table>', 0Ah
db '</div>', 0Ah
db '</body>', 0Ah
db '</html>', 0Ah, 0Ah
wraplen equ $-wrap
section .text
global _start
_start:
; First, send out all the http and xhtml stuff that is
; needed before we start showing the environment
push dword httplen
push dword http
push dword stdout
sys.write
; Now find how far on the stack the environment pointers
; are. We have 12 bytes we have pushed before "argc"
mov eax, [esp+12]
; We need to remove the following from the stack:
;
; The 12 bytes we pushed for sys.write
; The 4 bytes of argc
; The EAX*4 bytes of argv
; The 4 bytes of the NULL after argv
;
; Total:
; 20 + eax * 4
;
; Because stack grows down, we need to ADD that many bytes
; to ESP.
lea esp, [esp+20+eax*4]
cld ; This should already be the case, but let's be sure.
; Loop through the environment, printing it out
.loop:
pop edi
or edi, edi ; Done yet?
je near .wrap
; Print the left part of HTML
push dword leftlen
push dword left
push dword stdout
sys.write
; It may be tempting to search for the '=' in the env string next.
; But it is possible there is no '=', so we search for the
; terminating NUL first.
mov esi, edi ; Save start of string
sub ecx, ecx
not ecx ; ECX = FFFFFFFF
sub eax, eax
repne scasb
not ecx ; ECX = string length + 1
mov ebx, ecx ; Save it in EBX
; Now is the time to find '='
mov edi, esi ; Start of string
mov al, '='
repne scasb
not ecx
add ecx, ebx ; Length of name
push ecx
push esi
push dword stdout
sys.write
; Print the middle part of HTML table code
push dword midlen
push dword middle
push dword stdout
sys.write
; Find the length of the value
not ecx
lea ebx, [ebx+ecx-1]
; Print "undefined" if 0
or ebx, ebx
jne .value
mov ebx, undeflen
mov edi, undef
.value:
push ebx
push edi
push dword stdout
sys.write
; Print the right part of the table row
push dword rightlen
push dword right
push dword stdout
sys.write
; Get rid of the 60 bytes we have pushed
add esp, byte 60
; Get the next variable
jmp .loop
.wrap:
; Print the rest of HTML
push dword wraplen
push dword wrap
push dword stdout
sys.write
; Return success
push dword 0
sys.exit
....
This code produces a 1,396-byte executable. Most of it is data, i.e., the HTML mark-up we need to send out.
Assemble and link it as usual:
[source,bash]
....
% nasm -f elf webvars.asm
% ld -s -o webvars webvars.o
....
To use it, you need to upload [.filename]#webvars# to your web server. Depending on how your web server is set up, you may have to store it in a special [.filename]#cgi-bin# directory, or perhaps rename it with a [.filename]#.cgi# extension.
Then you need to use your browser to view its output. To see its output on my web server, please go to http://www.int80h.org/webvars/[http://www.int80h.org/webvars/]. If curious about the additional environment variables present in a password protected web directory, go to http://www.int80h.org/private/[http://www.int80h.org/private/], using the name `asm` and password `programmer`.
[[x86-files]]
== Working with Files
We have already done some basic file work: We know how to open and close them, how to read and write them using buffers. But UNIX(R) offers much more functionality when it comes to files. We will examine some of it in this section, and end up with a nice file conversion utility.
Indeed, let us start at the end, that is, with the file conversion utility. It always makes programming easier when we know from the start what the end product is supposed to do.
One of the first programs I wrote for UNIX(R) was link:ftp://ftp.int80h.org/unix/tuc/[tuc], a text-to-UNIX(R) file converter. It converts a text file from other operating systems to a UNIX(R) text file. In other words, it changes the various kinds of line endings to the newline convention of UNIX(R). It saves the output in a different file. Optionally, it converts a UNIX(R) text file to a DOS text file.
I have used tuc extensively, but always only to convert from some other OS to UNIX(R), never the other way. I have always wished it would just overwrite the file instead of me having to send the output to a different file. Most of the time, I end up using it like this:
[source,bash]
....
% tuc myfile tempfile
% mv tempfile myfile
....
It would be nice to have a ftuc, i.e., _fast tuc_, and use it like this:
[source,bash]
....
% ftuc myfile
....
In this chapter, then, we will write ftuc in assembly language (the original tuc is in C), and study various file-oriented kernel services in the process.
At first sight, such a file conversion is very simple: All you have to do is strip the carriage returns, right?
If you answered yes, think again: That approach will work most of the time (at least with MS-DOS(R) text files), but will fail occasionally.
The problem is that not all non UNIX(R) text files end their line with the carriage return / line feed sequence. Some use carriage returns without line feeds. Others combine several blank lines into a single carriage return followed by several line feeds. And so on.
A text file converter, then, must be able to handle any possible line endings:
* carriage return / line feed
* carriage return
* line feed / carriage return
* line feed
It should also handle files that use some kind of a combination of the above (e.g., carriage return followed by several line feeds).
[[x86-finite-state-machine]]
=== Finite State Machine
The problem is easily solved by the use of a technique called _finite state machine_, originally developed by the designers of digital electronic circuits. A _finite state machine_ is a digital circuit whose output depends not only on its input but also on its previous input, i.e., on its state. The microprocessor is an example of a _finite state machine_: Our assembly language code is assembled into machine language in which some instructions produce a single byte of machine code, while others produce several bytes. As the microprocessor fetches the bytes from memory one by one, some of them simply change its state rather than produce any output. When all the bytes of the op code are fetched, the microprocessor produces some output, or changes the value of a register, etc.
Because of that, all software is essentially a sequence of state instructions for the microprocessor. Nevertheless, the concept of _finite state machine_ is useful in software design as well.
Our text file converter can be designed as a _finite state machine_ with three possible states. We could call them states 0-2, but it will make our life easier if we give them symbolic names:
* ordinary
* cr
* lf
Our program will start in the ordinary state. During this state, the program action depends on its input as follows:
* If the input is anything other than a carriage return or line feed, the input is simply passed on to the output. The state remains unchanged.
* If the input is a carriage return, the state is changed to cr. The input is then discarded, i.e., no output is made.
* If the input is a line feed, the state is changed to lf. The input is then discarded.
Whenever we are in the cr state, it is because the last input was a carriage return, which was unprocessed. What our software does in this state again depends on the current input:
* If the input is anything other than a carriage return or line feed, output a line feed, then output the input, then change the state to ordinary.
* If the input is a carriage return, we have received two (or more) carriage returns in a row. We discard the input, we output a line feed, and leave the state unchanged.
* If the input is a line feed, we output the line feed and change the state to ordinary. Note that this is not the same as the first case above - if we tried to combine them, we would be outputting two line feeds instead of one.
Finally, we are in the lf state after we have received a line feed that was not preceded by a carriage return. This will happen when our file already is in UNIX(R) format, or whenever several lines in a row are expressed by a single carriage return followed by several line feeds, or when a line ends with a line feed / carriage return sequence. Here is how we need to handle our input in this state:
* If the input is anything other than a carriage return or line feed, we output a line feed, then output the input, then change the state to ordinary. This is exactly the same action as in the cr state upon receiving the same kind of input.
* If the input is a carriage return, we discard the input, we output a line feed, then change the state to ordinary.
* If the input is a line feed, we output the line feed, and leave the state unchanged.
[[x86-final-state]]
==== The Final State
The above _finite state machine_ works for the entire file, but leaves the possibility that the final line end will be ignored. That will happen whenever the file ends with a single carriage return or a single line feed. I did not think of it when I wrote tuc, only to discover that occasionally it strips the last line ending.
This problem is easily fixed by checking the state after the entire file was processed. If the state is not ordinary, we simply need to output one last line feed.
[NOTE]
====
Now that we have expressed our algorithm as a _finite state machine_, we could easily design a dedicated digital electronic circuit (a "chip") to do the conversion for us. Of course, doing so would be considerably more expensive than writing an assembly language program.
====
[[x86-tuc-counter]]
==== The Output Counter
Because our file conversion program may be combining two characters into one, we need to use an output counter. We initialize it to `0`, and increase it every time we send a character to the output. At the end of the program, the counter will tell us what size we need to set the file to.
[[x86-software-fsm]]
=== Implementing FSM in Software
The hardest part of working with a _finite state machine_ is analyzing the problem and expressing it as a _finite state machine_. That accomplished, the software almost writes itself.
In a high-level language, such as C, there are several main approaches. One is to use a `switch` statement which chooses what function should be run. For example,
[.programlisting]
....
switch (state) {
default:
case REGULAR:
regular(inputchar);
break;
case CR:
cr(inputchar);
break;
case LF:
lf(inputchar);
break;
}
....
Another approach is by using an array of function pointers, something like this:
[.programlisting]
....
(output[state])(inputchar);
....
Yet another is to have `state` be a function pointer, set to point at the appropriate function:
[.programlisting]
....
(*state)(inputchar);
....
This is the approach we will use in our program because it is very easy to do in assembly language, and very fast, too. We will simply keep the address of the right procedure in `EBX`, and then just issue:
[.programlisting]
....
call ebx
....
This is possibly faster than hardcoding the address in the code because the microprocessor does not have to fetch the address from memory; it is already stored in one of its registers. I said _possibly_ because with the caching modern microprocessors do, either way may be equally fast.
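For reference, here is a hedged C sketch of mine implementing the whole three-state machine described above with the same state-as-a-function-pointer idea; the names are made up, and the code is not taken from tuc or ftuc:

[.programlisting]
....
#include <stdio.h>

static void ordinary(int c);
static void cr(int c);
static void lf(int c);

static void (*state)(int) = ordinary;   /* start in the ordinary state */

static void ordinary(int c)
{
    if (c == '\r')
        state = cr;                     /* hold the CR back            */
    else if (c == '\n')
        state = lf;                     /* hold the LF back            */
    else
        putchar(c);
}

static void cr(int c)
{
    putchar('\n');                      /* the held CR becomes an LF   */
    if (c == '\r')
        return;                         /* another CR: stay in cr      */
    if (c != '\n')
        putchar(c);
    state = ordinary;
}

static void lf(int c)
{
    putchar('\n');                      /* emit the held LF            */
    if (c == '\n')
        return;                         /* another LF: stay in lf      */
    if (c != '\r')
        putchar(c);
    state = ordinary;
}

int main(void)
{
    int c;

    while ((c = getchar()) != EOF)
        (*state)(c);
    if (state != ordinary)              /* the final state check       */
        putchar('\n');
    return 0;
}
....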
[[memory-mapped-files]]
=== Memory Mapped Files
Because our program works on a single file, we cannot use the approach that worked for us before, i.e., to read from an input file and to write to an output file.
UNIX(R) allows us to map a file, or a section of a file, into memory. To do that, we first need to open the file with the appropriate read/write flags. Then we use the `mmap` system call to map it into the memory. One nice thing about `mmap` is that it automatically works with virtual memory: We can map more of the file into the memory than we have physical memory available, yet still access it through regular memory op codes, such as `mov`, `lods`, and `stos`. Whatever changes we make to the memory image of the file will be written to the file by the system. We do not even have to keep the file open: As long as it stays mapped, we can read from it and write to it.
The 32-bit Intel microprocessors can access up to four gigabytes of memory - physical or virtual. The FreeBSD system allows us to use up to half of it for file mapping.
For simplicity's sake, in this tutorial we will only convert files that can be mapped into the memory in their entirety. There are probably not too many text files that exceed two gigabytes in size. If our program encounters one, it will simply display a message suggesting we use the original tuc instead.
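Before we look at the raw syscall, here is a hedged C-level sketch of mine showing the general idea: map a whole file read/write, change it in place, and unmap it (error handling kept to a minimum):

[.programlisting]
....
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2)
        return 1;

    int fd = open(argv[1], O_RDWR);
    if (fd < 0)
        return 1;

    off_t len = lseek(fd, 0, SEEK_END);     /* one way to learn the size */
    if (len <= 0)
        return 1;

    char *p = mmap(NULL, (size_t)len, PROT_READ | PROT_WRITE,
        MAP_SHARED, fd, 0);
    if (p == MAP_FAILED)
        return 1;

    p[0] = '#';                     /* the change lands in the file */

    munmap(p, (size_t)len);
    close(fd);
    return 0;
}
....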
If you examine your copy of [.filename]#syscalls.master#, you will find two separate syscalls named `mmap`. This is because of the evolution of UNIX(R): There was the traditional BSD `mmap`, syscall 71. That one was superseded by the POSIX(R) `mmap`, syscall 197. The FreeBSD system supports both because older programs were written using the original BSD version. But new software uses the POSIX(R) version, which is what we will use.
The [.filename]#syscalls.master# lists the POSIX(R) version like this:
[.programlisting]
....
197 STD BSD { caddr_t mmap(caddr_t addr, size_t len, int prot, \
int flags, int fd, long pad, off_t pos); }
....
This differs slightly from what man:mmap[2] says. That is because man:mmap[2] describes the C version.
The difference is in the `long pad` argument, which is not present in the C version. However, the FreeBSD syscalls add a 32-bit pad after ``push``ing a 64-bit argument. In this case, `off_t` is a 64-bit value.
When we are finished working with a memory-mapped file, we unmap it with the `munmap` syscall.
[TIP]
====
For an in-depth treatment of `mmap`, see W. Richard Stevens' http://www.int80h.org/cgi-bin/isbn?isbn=0130810819[Unix Network Programming, Volume 2, Chapter 12].
====
[[x86-file-size]]
=== Determining File Size
Because we need to tell `mmap` how many bytes of the file to map into the memory, and because we want to map the entire file, we need to determine the size of the file.
We can use the `fstat` syscall to get all the information about an open file that the system can give us. That includes the file size.
Again, [.filename]#syscalls.master# lists two versions of `fstat`, a traditional one (syscall 62), and a POSIX(R) one (syscall 189). Naturally, we will use the POSIX(R) version:
[.programlisting]
....
189 STD POSIX { int fstat(int fd, struct stat *sb); }
....
This is a very straightforward call: We pass to it the address of a `stat` structure and the descriptor of an open file. It will fill out the contents of the `stat` structure.
I do, however, have to say that I tried to declare the `stat` structure in the `.bss` section, and `fstat` did not like it: It set the carry flag indicating an error. After I changed the code to allocate the structure on the stack, everything was working fine.
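At the C level, a hedged sketch of the same call might look like this; note that the `stat` structure is allocated on the stack here, too:

[.programlisting]
....
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2)
        return 1;

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0)
        return 1;

    struct stat sb;                 /* lives on the stack */
    if (fstat(fd, &sb) < 0)
        return 1;

    printf("%s is %lld bytes long\n", argv[1], (long long)sb.st_size);
    close(fd);
    return 0;
}
....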
[[x86-ftruncate]]
=== Changing the File Size
Because our program may combine carriage return / line feed sequences into straight line feeds, our output may be smaller than our input. However, since we are placing our output into the same file we read the input from, we may have to change the size of the file.
The `ftruncate` system call allows us to do just that. Despite its somewhat misleading name, the `ftruncate` system call can be used both to truncate the file (make it smaller) and to grow it.
And yes, we will find two versions of `ftruncate` in [.filename]#syscalls.master#, an older one (130), and a newer one (201). We will use the newer one:
[.programlisting]
....
201 STD BSD { int ftruncate(int fd, int pad, off_t length); }
....
Please note that this one contains an `int pad` again.
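At the C level, a hedged sketch of the same operation might look like this; the C library hides the pad argument for us, and the length of 1024 bytes is just an example:

[.programlisting]
....
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2)
        return 1;

    int fd = open(argv[1], O_RDWR);
    if (fd < 0)
        return 1;

    /* Make the file exactly 1024 bytes long, shrinking or growing it. */
    if (ftruncate(fd, 1024) < 0)
        return 1;

    close(fd);
    return 0;
}
....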
[[x86-ftuc]]
=== ftuc
We now know everything we need to write ftuc. We start by adding some new lines in [.filename]#system.inc#. First, we define some constants and structures, somewhere at or near the beginning of the file:
[.programlisting]
....
;;;;;;; open flags
%define O_RDONLY 0
%define O_WRONLY 1
%define O_RDWR 2
;;;;;;; mmap flags
%define PROT_NONE 0
%define PROT_READ 1
%define PROT_WRITE 2
%define PROT_EXEC 4
;;
%define MAP_SHARED 0001h
%define MAP_PRIVATE 0002h
;;;;;;; stat structure
struc stat
st_dev resd 1 ; = 0
st_ino resd 1 ; = 4
st_mode resw 1 ; = 8, size is 16 bits
st_nlink resw 1 ; = 10, ditto
st_uid resd 1 ; = 12
st_gid resd 1 ; = 16
st_rdev resd 1 ; = 20
st_atime resd 1 ; = 24
st_atimensec resd 1 ; = 28
st_mtime resd 1 ; = 32
st_mtimensec resd 1 ; = 36
st_ctime resd 1 ; = 40
st_ctimensec resd 1 ; = 44
st_size resd 2 ; = 48, size is 64 bits
st_blocks resd 2 ; = 56, ditto
st_blksize resd 1 ; = 64
st_flags resd 1 ; = 68
st_gen resd 1 ; = 72
st_lspare resd 1 ; = 76
st_qspare resd 4 ; = 80
endstruc
....
We define the new syscalls:
[.programlisting]
....
%define SYS_mmap 197
%define SYS_munmap 73
%define SYS_fstat 189
%define SYS_ftruncate 201
....
We add the macros for their use:
[.programlisting]
....
%macro sys.mmap 0
system SYS_mmap
%endmacro
%macro sys.munmap 0
system SYS_munmap
%endmacro
%macro sys.ftruncate 0
system SYS_ftruncate
%endmacro
%macro sys.fstat 0
system SYS_fstat
%endmacro
....
And here is our code:
[.programlisting]
....
;;;;;;; Fast Text-to-Unix Conversion (ftuc.asm) ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;
;; Started: 21-Dec-2000
;; Updated: 22-Dec-2000
;;
;; Copyright 2000 G. Adam Stanislav.
;; All rights reserved.
;;
;;;;;;; v.1 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
%include 'system.inc'
section .data
db 'Copyright 2000 G. Adam Stanislav.', 0Ah
db 'All rights reserved.', 0Ah
usg db 'Usage: ftuc filename', 0Ah
usglen equ $-usg
co db "ftuc: Can't open file.", 0Ah
colen equ $-co
fae db 'ftuc: File access error.', 0Ah
faelen equ $-fae
ftl db 'ftuc: File too long, use regular tuc instead.', 0Ah
ftllen equ $-ftl
mae db 'ftuc: Memory allocation error.', 0Ah
maelen equ $-mae
section .text
align 4
memerr:
push dword maelen
push dword mae
jmp short error
align 4
toolong:
push dword ftllen
push dword ftl
jmp short error
align 4
facerr:
push dword faelen
push dword fae
jmp short error
align 4
cantopen:
push dword colen
push dword co
jmp short error
align 4
usage:
push dword usglen
push dword usg
error:
push dword stderr
sys.write
push dword 1
sys.exit
align 4
global _start
_start:
pop eax ; argc
pop eax ; program name
pop ecx ; file to convert
jecxz usage
pop eax
or eax, eax ; Too many arguments?
jne usage
; Open the file
push dword O_RDWR
push ecx
sys.open
jc cantopen
mov ebp, eax ; Save fd
sub esp, byte stat_size
mov ebx, esp
; Find file size
push ebx
push ebp ; fd
sys.fstat
jc facerr
mov edx, [ebx + st_size + 4]
; File is too long if EDX != 0 ...
or edx, edx
jne near toolong
mov ecx, [ebx + st_size]
; ... or if it is above 2 GB
or ecx, ecx
js near toolong
; Do nothing if the file is 0 bytes in size
jecxz .quit
; Map the entire file in memory
push edx
push edx ; starting at offset 0
push edx ; pad
push ebp ; fd
push dword MAP_SHARED
push dword PROT_READ | PROT_WRITE
push ecx ; entire file size
push edx ; let system decide on the address
sys.mmap
jc near memerr
mov edi, eax
mov esi, eax
push ecx ; for SYS_munmap
push edi
; Use EBX for state machine
mov ebx, ordinary
mov ah, 0Ah
cld
.loop:
lodsb
call ebx
loop .loop
cmp ebx, ordinary
je .filesize
; Output final lf
mov al, ah
stosb
inc edx
.filesize:
; truncate file to new size
push dword 0 ; high dword
push edx ; low dword
push eax ; pad
push ebp
sys.ftruncate
; close it (ebp still pushed)
sys.close
add esp, byte 16
sys.munmap
.quit:
push dword 0
sys.exit
align 4
ordinary:
cmp al, 0Dh
je .cr
cmp al, ah
je .lf
stosb
inc edx
ret
align 4
.cr:
mov ebx, cr
ret
align 4
.lf:
mov ebx, lf
ret
align 4
cr:
cmp al, 0Dh
je .cr
cmp al, ah
je .lf
xchg al, ah
stosb
inc edx
xchg al, ah
; fall through
.lf:
stosb
inc edx
mov ebx, ordinary
ret
align 4
.cr:
mov al, ah
stosb
inc edx
ret
align 4
lf:
cmp al, ah
je .lf
cmp al, 0Dh
je .cr
xchg al, ah
stosb
inc edx
xchg al, ah
stosb
inc edx
mov ebx, ordinary
ret
align 4
.cr:
mov ebx, ordinary
mov al, ah
; fall through
.lf:
stosb
inc edx
ret
....
[WARNING]
====
Do not use this program on files stored on a disk formatted by MS-DOS(R) or Windows(R). There seems to be a subtle bug in the FreeBSD code when using `mmap` on these drives mounted under FreeBSD: If the file is over a certain size, `mmap` will just fill the memory with zeros, and then copy them to the file, overwriting its contents.
====
[[x86-one-pointed-mind]]
== One-Pointed Mind
As a student of Zen, I like the idea of a one-pointed mind: Do one thing at a time, and do it well.
This, indeed, is very much how UNIX(R) works as well. While a typical Windows(R) application is attempting to do everything imaginable (and is, therefore, riddled with bugs), a typical UNIX(R) program does only one thing, and it does it well.
The typical UNIX(R) user then essentially assembles his own applications by writing a shell script which combines the various existing programs by piping the output of one program to the input of another.
When writing your own UNIX(R) software, it is generally a good idea to see what parts of the problem can be handled by existing programs, and to write your own program only for the part of the problem that does not have an existing solution.
[[x86-csv]]
=== CSV
I will illustrate this principle with a specific real-life example I was faced with recently:
I needed to extract the 11th field of each record from a database I downloaded from a web site. The database was a CSV file, i.e., a list of _comma-separated values_. That is quite a standard format for sharing data among people who may be using different database software.
The first line of the file contains the list of various fields separated by commas. The rest of the file contains the data listed line by line, with values separated by commas.
I tried awk, using the comma as a separator. But because several lines contained a quoted comma, awk was extracting the wrong field from those lines.
Therefore, I needed to write my own software to extract the 11th field from the CSV file. However, going with the UNIX(R) spirit, I only needed to write a simple filter that would do the following:
* Remove the first line from the file;
* Change all unquoted commas to a different character;
* Remove all quotation marks.
Strictly speaking, I could use sed to remove the first line from the file, but doing so in my own program was very easy, so I decided to do it and reduce the size of the pipeline.
At any rate, writing a program like this took me about 20 minutes. Writing a program that extracts the 11th field from the CSV file would take a lot longer, and I could not reuse it to extract some other field from some other database.
This time I decided to let it do a little more work than a typical tutorial program would:
* It parses its command line for options;
* It displays proper usage if it finds wrong arguments;
* It produces meaningful error messages.
Here is its usage message:
[source,bash]
....
Usage: csv [-t<delim>] [-c<comma>] [-p] [-o <outfile>] [-i <infile>]
....
All parameters are optional, and can appear in any order.
The `-t` parameter declares what to replace the commas with. The `tab` is the default here. For example, `-t;` will replace all unquoted commas with semicolons.
I did not need the `-c` option, but it may come in handy in the future. It lets me declare that I want a character other than a comma replaced with something else. For example, `-c@` will replace all at signs (useful if you want to split a list of email addresses to their user names and domains).
The `-p` option preserves the first line, i.e., it does not delete it. By default, we delete the first line because in a CSV file it contains the field names rather than data.
The `-i` and `-o` options let me specify the input and the output files. Defaults are [.filename]#stdin# and [.filename]#stdout#, so this is a regular UNIX(R) filter.
I made sure that both `-i filename` and `-ifilename` are accepted. I also made sure that only one input and one output file may be specified.
To get the 11th field of each record, I can now do:
[source,bash]
....
% csv '-t;' data.csv | awk '-F;' '{print $11}'
....
The code stores the options (except for the file descriptors) in `EDX`: The comma in `DH`, the new separator in `DL`, and the flag for the `-p` option in the highest bit of `EDX`, so a check for its sign will give us a quick decision what to do.
Here is the code:
[.programlisting]
....
;;;;;;; csv.asm ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;
; Convert a comma-separated file to a something-else separated file.
;
; Started: 31-May-2001
; Updated: 1-Jun-2001
;
; Copyright (c) 2001 G. Adam Stanislav
; All rights reserved.
;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
%include 'system.inc'
%define BUFSIZE 2048
section .data
fd.in dd stdin
fd.out dd stdout
usg db 'Usage: csv [-t<delim>] [-c<comma>] [-p] [-o <outfile>] [-i <infile>]', 0Ah
usglen equ $-usg
iemsg db "csv: Can't open input file", 0Ah
iemlen equ $-iemsg
oemsg db "csv: Can't create output file", 0Ah
oemlen equ $-oemsg
section .bss
ibuffer resb BUFSIZE
obuffer resb BUFSIZE
section .text
align 4
ierr:
push dword iemlen
push dword iemsg
push dword stderr
sys.write
push dword 1 ; return failure
sys.exit
align 4
oerr:
push dword oemlen
push dword oemsg
push dword stderr
sys.write
push dword 2
sys.exit
align 4
usage:
push dword usglen
push dword usg
push dword stderr
sys.write
push dword 3
sys.exit
align 4
global _start
_start:
add esp, byte 8 ; discard argc and argv[0]
mov edx, (',' << 8) | 9
.arg:
pop ecx
or ecx, ecx
je near .init ; no more arguments
; ECX contains the pointer to an argument
cmp byte [ecx], '-'
jne usage
inc ecx
mov ax, [ecx]
.o:
cmp al, 'o'
jne .i
; Make sure we are not asked for the output file twice
cmp dword [fd.out], stdout
jne usage
; Find the path to output file - it is either at [ECX+1],
; i.e., -ofile --
; or in the next argument,
; i.e., -o file
inc ecx
or ah, ah
jne .openoutput
pop ecx
jecxz usage
.openoutput:
push dword 420 ; file mode (644 octal)
push dword 0200h | 0400h | 01h
; O_CREAT | O_TRUNC | O_WRONLY
push ecx
sys.open
jc near oerr
add esp, byte 12
mov [fd.out], eax
jmp short .arg
.i:
cmp al, 'i'
jne .p
; Make sure we are not asked twice
cmp dword [fd.in], stdin
jne near usage
; Find the path to the input file
inc ecx
or ah, ah
jne .openinput
pop ecx
or ecx, ecx
je near usage
.openinput:
push dword 0 ; O_RDONLY
push ecx
sys.open
jc near ierr ; open failed
add esp, byte 8
mov [fd.in], eax
jmp .arg
.p:
cmp al, 'p'
jne .t
or ah, ah
jne near usage
or edx, 1 << 31
jmp .arg
.t:
cmp al, 't' ; redefine output delimiter
jne .c
or ah, ah
je near usage
mov dl, ah
jmp .arg
.c:
cmp al, 'c'
jne near usage
or ah, ah
je near usage
mov dh, ah
jmp .arg
align 4
.init:
sub eax, eax
sub ebx, ebx
sub ecx, ecx
mov edi, obuffer
; See if we are to preserve the first line
or edx, edx
js .loop
.firstline:
; get rid of the first line
call getchar
cmp al, 0Ah
jne .firstline
.loop:
; read a byte from stdin
call getchar
; is it a comma (or whatever the user asked for)?
cmp al, dh
jne .quote
; Replace the comma with a tab (or whatever the user wants)
mov al, dl
.put:
call putchar
jmp short .loop
.quote:
cmp al, '"'
jne .put
; Print everything until you get another quote or EOL. If it
; is a quote, skip it. If it is EOL, print it.
.qloop:
call getchar
cmp al, '"'
je .loop
cmp al, 0Ah
je .put
call putchar
jmp short .qloop
align 4
getchar:
or ebx, ebx
jne .fetch
call read
.fetch:
lodsb
dec ebx
ret
read:
jecxz .read
call write
.read:
push dword BUFSIZE
mov esi, ibuffer
push esi
push dword [fd.in]
sys.read
add esp, byte 12
mov ebx, eax
or eax, eax
je .done
sub eax, eax
ret
align 4
.done:
call write ; flush output buffer
; close files
push dword [fd.in]
sys.close
push dword [fd.out]
sys.close
; return success
push dword 0
sys.exit
align 4
putchar:
stosb
inc ecx
cmp ecx, BUFSIZE
je write
ret
align 4
write:
jecxz .ret ; nothing to write
sub edi, ecx ; start of buffer
push ecx
push edi
push dword [fd.out]
sys.write
add esp, byte 12
sub eax, eax
sub ecx, ecx ; buffer is empty now
.ret:
ret
....
Much of it is taken from [.filename]#hex.asm# above. But there is one important difference: I no longer call `write` whenever I am outputting a line feed. Yet, the code can be used interactively.
I have found a better solution for the interactive problem since I first started writing this chapter. I wanted to make sure each line is printed out separately only when needed. After all, there is no need to flush out every line when used non-interactively.
The new solution I use now is to call `write` every time I find the input buffer empty. That way, when running in the interactive mode, the program reads one line from the user's keyboard, processes it, and sees its input buffer is empty. It flushes its output and reads the next line.
[[x86-buffered-dark-side]]
==== The Dark Side of Buffering
This change prevents a mysterious lockup in a very specific case. I refer to it as the _dark side of buffering_, mostly because it presents a danger that is not quite obvious.
It is unlikely to happen with a program like the csv above, so let us consider yet another filter: In this case we expect our input to be raw data representing color values, such as the _red_, _green_, and _blue_ intensities of a pixel. Our output will be the negative of our input.
Such a filter would be very simple to write. Most of it would look just like all the other filters we have written so far, so I am only going to show you its inner loop:
[.programlisting]
....
.loop:
call getchar
not al ; Create a negative
call putchar
jmp short .loop
....
Because this filter works with raw data, it is unlikely to be used interactively.
But it could be called by image manipulation software. And, unless it calls `write` before each call to `read`, chances are it will lock up.
Here is what might happen:
[.procedure]
. The image editor will load our filter using the C function `popen()`.
. It will read the first row of pixels from a bitmap or pixmap.
. It will write the first row of pixels to the _pipe_ leading to the `fd.in` of our filter.
. Our filter will read each pixel from its input, turn it to a negative, and write it to its output buffer.
. Our filter will call `getchar` to fetch the next pixel.
. `getchar` will find an empty input buffer, so it will call `read`.
. `read` will call the `SYS_read` system call.
. The _kernel_ will suspend our filter until the image editor sends more data to the pipe.
. The image editor will read from the other pipe, connected to the `fd.out` of our filter so it can set the first row of the output image _before_ it sends us the second row of the input.
. The _kernel_ suspends the image editor until it receives some output from our filter, so it can pass it on to the image editor.
At this point our filter waits for the image editor to send it more data to process, while the image editor is waiting for our filter to send it the result of the processing of the first row. But the result sits in our output buffer.
The filter and the image editor will continue waiting for each other forever (or, at least, until they are killed). Our software has just entered a crossref:secure[secure-race-conditions,race condition].
This problem does not exist if our filter flushes its output buffer _before_ asking the _kernel_ for more input data.
[[x86-fpu]]
== Using the FPU
Strangely enough, most of assembly language literature does not even mention the existence of the FPU, or _floating point unit_, let alone discuss programming it.
Yet, never does assembly language shine more than when we create highly optimized FPU code by doing things that can be done _only_ in assembly language.
[[x86-fpu-organization]]
=== Organization of the FPU
The FPU consists of eight 80-bit floating-point registers. These are organized in a stack fashion: you can `push` a value on the TOS (_top of stack_) and you can `pop` it.
That said, the assembly language op codes are not `push` and `pop` because those are already taken.
You can `push` a value on TOS by using `fld`, `fild`, and `fbld`. Several other op codes let you `push` many common _constants_-such as _pi_-on the TOS.
Similarly, you can `pop` a value by using `fst`, `fstp`, `fist`, `fistp`, and `fbstp`. Actually, only the op codes that end with a _p_ will literally `pop` the value; the rest will `store` it somewhere else without removing it from the TOS.
We can transfer the data between the TOS and the computer memory either as a 32-bit, 64-bit, or 80-bit _real_, a 16-bit, 32-bit, or 64-bit _integer_, or an 80-bit _packed decimal_.
The 80-bit _packed decimal_ is a special case of _binary coded decimal_ which is very convenient when converting between the ASCII representation of data and the internal data of the FPU. It allows us to use 18 significant digits.
No matter how we represent data in the memory, the FPU always stores it in the 80-bit _real_ format in its registers.
Its internal precision is at least 19 decimal digits, so even if we choose to display results as ASCII in the full 18-digit precision, we are still showing correct results.
We can perform mathematical operations on the TOS: We can calculate its _sine_, we can _scale_ it (i.e., we can multiply or divide it by a power of 2), we can calculate its base-2 _logarithm_, and many other things.
We can also _multiply_ or _divide_ it by, _add_ it to, or _subtract_ it from, any of the FPU registers (including itself).
The official Intel op code for the TOS is `st`, and for the _registers_ `st(0)`-`st(7)`. `st` and `st(0)`, then, refer to the same register.
For whatever reasons, the original author of nasm has decided to use different op codes, namely `st0`-`st7`. In other words, there are no parentheses, and the TOS is always `st0`, never just `st`.
[[x86-fpu-packed-decimal]]
==== The Packed Decimal Format
The _packed decimal_ format uses 10 bytes (80 bits) of memory to represent 18 digits. The number represented there is always an _integer_.
[TIP]
====
You can use it to get decimal places by multiplying the TOS by a power of 10 first.
====
The highest bit of the highest byte (byte 9) is the _sign bit_: If it is set, the number is _negative_, otherwise, it is _positive_. The rest of the bits of this byte are unused/ignored.
The remaining 9 bytes store the 18 digits of the number: 2 digits per byte.
The _more significant digit_ is stored in the high _nibble_ (4 bits), the _less significant digit_ in the low _nibble_.
That said, you might think that `-1234567` would be stored in the memory like this (using hexadecimal notation):
[.programlisting]
....
80 00 00 00 00 00 01 23 45 67
....
Alas it is not! As with everything else of Intel make, even the _packed decimal_ is _little-endian_.
That means our `-1234567` is stored like this:
[.programlisting]
....
67 45 23 01 00 00 00 00 00 80
....
Remember that, or you will be pulling your hair out in desperation!
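If you prefer to see the encoding spelled out, here is a hedged C sketch of mine that builds the ten bytes of a packed decimal from an ordinary integer; the function name is made up:

[.programlisting]
....
#include <stdio.h>
#include <stdint.h>

/* Build the 80-bit packed decimal (the format fbld and fbstp use):
 * bytes 0-8 hold 18 BCD digits, least significant pair first,
 * byte 9 carries the sign bit. */
static void to_packed_decimal(long long value, uint8_t bcd[10])
{
    unsigned long long v;
    int i;

    v = (value < 0) ? -(unsigned long long)value : (unsigned long long)value;
    for (i = 0; i < 9; i++) {
        bcd[i]  = (uint8_t)(v % 10);            /* low nibble  */
        v /= 10;
        bcd[i] |= (uint8_t)((v % 10) << 4);     /* high nibble */
        v /= 10;
    }
    bcd[9] = (value < 0) ? 0x80 : 0x00;         /* sign byte   */
}

int main(void)
{
    uint8_t bcd[10];
    int i;

    to_packed_decimal(-1234567, bcd);
    for (i = 0; i < 10; i++)        /* prints 67 45 23 01 00 ... 00 80 */
        printf("%02X ", bcd[i]);
    putchar('\n');
    return 0;
}
....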
[NOTE]
====
The book to read-if you can find it-is Richard Startz' http://www.amazon.com/exec/obidos/ASIN/013246604X/whizkidtechnomag[8087/80287/80387 for the IBM PC & Compatibles]. Though it does seem to take the fact about the little-endian storage of the _packed decimal_ for granted. I kid you not about the desperation of trying to figure out what was wrong with the filter I show below _before_ it occurred to me I should try the little-endian order even for this type of data.
====
[[x86-pinhole-photography]]
=== Excursion to Pinhole Photography
To write meaningful software, we must not only understand our programming tools, but also the field we are creating software for.
Our next filter will help us whenever we want to build a _pinhole camera_, so, we need some background in _pinhole photography_ before we can continue.
[[x86-camera]]
==== The Camera
The easiest way to describe any camera ever built is as some empty space enclosed in some lightproof material, with a small hole in the enclosure.
The enclosure is usually sturdy (e.g., a box), though sometimes it is flexible (the bellows). It is quite dark inside the camera. However, the hole lets light rays in through a single point (though in some cases there may be several). These light rays form an image, a representation of whatever is outside the camera, in front of the hole.
If some light sensitive material (such as film) is placed inside the camera, it can capture the image.
The hole often contains a _lens_, or a lens assembly, often called the _objective_.
[[x86-the-pinhole]]
==== The Pinhole
But, strictly speaking, the lens is not necessary: The original cameras did not use a lens but a _pinhole_. Even today, _pinholes_ are used, both as a tool to study how cameras work, and to achieve a special kind of image.
The image produced by the _pinhole_ is all equally sharp. Or _blurred_. There is an ideal size for a pinhole: If it is either larger or smaller, the image loses its sharpness.
[[x86-focal-length]]
==== Focal Length
This ideal pinhole diameter is a function of the square root of _focal length_, which is the distance of the pinhole from the film.
[.programlisting]
....
D = PC * sqrt(FL)
....
Here, `D` is the ideal diameter of the pinhole, `FL` is the focal length, and `PC` is a pinhole constant. According to Jay Bender, its value is `0.04`, while Kenneth Connors has determined it to be `0.037`. Others have proposed other values. Plus, this value is for daylight only: Other types of light will require a different constant, whose value can only be determined by experimentation.
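As a quick sanity check of the formula, here is a hedged C sketch; the sample focal length is arbitrary:

[.programlisting]
....
#include <math.h>
#include <stdio.h>

int main(void)
{
    double fl = 150.0;              /* focal length, millimeters  */
    double pc = 0.04;               /* Bender's pinhole constant  */
    double d  = pc * sqrt(fl);      /* ideal pinhole diameter, mm */

    printf("FL = %g mm -> D = %.3f mm\n", fl, d);
    return 0;
}
....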
[[x86-f-number]]
==== The F-Number
The f-number is a very useful measure of how much light reaches the film. A light meter can determine that, for example, to expose a film of specific sensitivity with f5.6 may require the exposure to last 1/1000 sec.
It does not matter whether it is a 35-mm camera, or a 6x9cm camera, etc. As long as we know the f-number, we can determine the proper exposure.
The f-number is easy to calculate:
[.programlisting]
....
F = FL / D
....
In other words, the f-number equals the focal length divided by the diameter of the pinhole. It also means a higher f-number implies either a smaller pinhole or a larger focal distance, or both. That, in turn, implies that the higher the f-number, the longer the exposure has to be.
Furthermore, while pinhole diameter and focal distance are one-dimensional measurements, both the film and the pinhole are two-dimensional. That means that if you have measured the exposure at f-number `A` as `t`, then the exposure at f-number `B` is:
[.programlisting]
....
t * (B / A)²
....
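Here is a hedged C sketch of both formulas, using made-up sample values:

[.programlisting]
....
#include <math.h>
#include <stdio.h>

int main(void)
{
    double fl = 150.0;                  /* focal length, mm                */
    double d  = 0.49;                   /* pinhole diameter, mm            */
    double f  = fl / d;                 /* f-number = FL / D               */

    double a   = 5.6;                   /* f-number the meter was read at  */
    double t_a = 1.0 / 1000.0;          /* measured exposure: 1/1000 s     */
    double t_f = t_a * pow(f / a, 2.0); /* exposure needed at our f-number */

    printf("f-number %.0f needs a %.2f second exposure\n", f, t_f);
    return 0;
}
....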
[[x86-normalized-f-number]]
==== Normalized F-Number
While many modern cameras can change the diameter of their pinhole, and thus their f-number, quite smoothly and gradually, such was not always the case.
To allow for different f-numbers, cameras typically contained a metal plate with several holes of different sizes drilled in it.
Their sizes were chosen according to the above formula in such a way that the resultant f-number was one of standard f-numbers used on all cameras everywhere. For example, a very old Kodak Duaflex IV camera in my possession has three such holes for f-numbers 8, 11, and 16.
A more recently made camera may offer f-numbers of 2.8, 4, 5.6, 8, 11, 16, 22, and 32 (as well as others). These numbers were not chosen arbitrarily: They all are powers of the square root of 2, though they may be rounded somewhat.
[[x86-f-stop]]
==== The F-Stop
A typical camera is designed in such a way that setting any of the normalized f-numbers changes the feel of the dial. It will naturally _stop_ in that position. Because of that, these positions of the dial are called f-stops.
Since the f-numbers at each stop are powers of the square root of 2, moving the dial by 1 stop will double the amount of light required for proper exposure. Moving it by 2 stops will quadruple the required exposure. Moving the dial by 3 stops will require an eightfold increase in exposure, etc.
[[x86-pinhole-software]]
=== Designing the Pinhole Software
We are now ready to decide what exactly we want our pinhole software to do.
[[xpinhole-processing-input]]
==== Processing Program Input
Since its main purpose is to help us design a working pinhole camera, we will use the _focal length_ as the input to the program. This is something we can determine without software: Proper focal length is determined by the size of the film and by the need to shoot "regular" pictures, wide angle pictures, or telephoto pictures.
Most of the programs we have written so far worked with individual characters, or bytes, as their input: The hex program converted individual bytes into a hexadecimal number, the csv program either let a character through, or deleted it, or changed it to a different character, etc.
One program, ftuc, used the state machine to consider at most two input bytes at a time.
But our pinhole program cannot just work with individual characters; it has to deal with larger syntactic units.
For example, if we want the program to calculate the pinhole diameter (and other values we will discuss later) at the focal lengths of `100 mm`, `150 mm`, and `210 mm`, we may want to enter something like this:
[source,bash]
....
100, 150, 210
....
Our program needs to consider more than a single byte of input at a time. When it sees the first `1`, it must understand it is seeing the first digit of a decimal number. When it sees the `0` and the other `0`, it must know it is seeing more digits of the same number.
When it encounters the first comma, it must know it is no longer receiving the digits of the first number. It must be able to convert the digits of the first number into the value of `100`. And the digits of the second number into the value of `150`. And, of course, the digits of the third number into the numeric value of `210`.
We need to decide what delimiters to accept: Do the input numbers have to be separated by a comma? If so, how do we treat two numbers separated by something else?
Personally, I like to keep it simple. Something either is a number, so I process it. Or it is not a number, so I discard it. I do not like the computer complaining about me typing in an extra character when it is _obvious_ that it is an extra character. Duh!
Plus, it allows me to break up the monotony of computing and type in a query instead of just a number:
[source,bash]
....
What is the best pinhole diameter for the
focal length of 150?
....
There is no reason for the computer to spit out a number of complaints:
[source,bash]
....
Syntax error: What
Syntax error: is
Syntax error: the
Syntax error: best
....
Et cetera, et cetera, et cetera.
Secondly, I like the `#` character to denote the start of a comment which extends to the end of the line. This does not take too much effort to code, and lets me treat input files for my software as executable scripts.
In our case, we also need to decide what units the input should come in: We choose _millimeters_ because that is how most photographers measure the focal length.
Finally, we need to decide whether to allow the use of the decimal point (in which case we must also consider the fact that much of the world uses a decimal _comma_).
In our case, allowing for the decimal point/comma would offer a false sense of precision: There is little if any noticeable difference between the focal lengths of `50` and `51`, so allowing the user to input something like `50.5` is not a good idea. This is my opinion, mind you, but I am the one writing this program. You can make other choices in yours, of course.
[[x86-pinhole-options]]
==== Offering Options
The most important thing we need to know when building a pinhole camera is the diameter of the pinhole. Since we want to shoot sharp images, we will use the above formula to calculate the pinhole diameter from focal length. As experts are offering several different values for the `PC` constant, we will need to have the choice.
It is traditional in UNIX(R) programming to have two main ways of choosing program parameters, plus to have a default for the time the user does not make a choice.
Why have two ways of choosing?
One is to allow a (relatively) _permanent_ choice that applies automatically each time the software is run without us having to tell it over and over what we want it to do.
The permanent choices may be stored in a configuration file, typically found in the user's home directory. The file usually has the same name as the application but starts with a dot. Often _"rc"_ is added to the file name. So, ours could be [.filename]#~/.pinhole# or [.filename]#~/.pinholerc#. (The [.filename]#~/# means the current user's home directory.)
The configuration file is used mostly by programs that have many configurable parameters. Those that have only one (or a few) often use a different method: They expect to find the parameter in an _environment variable_. In our case, we might look at an environment variable named `PINHOLE`.
Usually, a program uses one or the other of the above methods. Otherwise, if a configuration file said one thing, but an environment variable another, the program might get confused (or just too complicated).
Because we only need to choose _one_ such parameter, we will go with the second method and search the environment for a variable named `PINHOLE`.
The other way allows us to make _ad hoc_ decisions: _"Though I usually want you to use 0.039, this time I want 0.03872."_ In other words, it allows us to _override_ the permanent choice.
This type of choice is usually done with command line parameters.
Finally, a program _always_ needs a _default_. The user may not make any choices. Perhaps he does not know what to choose. Perhaps he is "just browsing." Preferably, the default will be the value most users would choose anyway. That way they do not need to choose. Or, rather, they can choose the default without an additional effort.
Given this system, the program may find conflicting options, and handle them this way:
[.procedure]
. If it finds an _ad hoc_ choice (e.g., command line parameter), it should accept that choice. It must ignore any permanent choice and any default.
. _Otherwise_, if it finds a permanent option (e.g., an environment variable), it should accept it, and ignore the default.
. _Otherwise_, it should use the default.
We also need to decide what _format_ our `PC` option should have.
At first sight, it seems obvious to use the `PINHOLE=0.04` format for the environment variable, and `-p0.04` for the command line.
Allowing that is actually a security risk. The `PC` constant is a very small number. Naturally, we will test our software using various small values of `PC`. But what will happen if someone runs the program choosing a huge value?
It may crash the program because we have not designed it to handle huge numbers.
Or, we may spend more time on the program so it can handle huge numbers. We might do that if we were writing commercial software for a computer-illiterate audience.
Or, we might say, _"Tough! The user should know better."_
Or, we just may make it impossible for the user to enter a huge number. This is the approach we will take: We will use an _implied 0._ prefix.
In other words, if the user wants `0.04`, we will expect him to type `-p04`, or set `PINHOLE=04` in his environment. So, if he says `-p9999999`, we will interpret it as ``0.9999999``-still ridiculous but at least safer.
Secondly, many users will just want to go with either Bender's constant or Connors' constant. To make it easier on them, we will interpret `-b` as identical to `-p04`, and `-c` as identical to `-p037`.
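A hedged C sketch of the implied `0.` prefix might look like this; the function name is made up:

[.programlisting]
....
#include <stdio.h>

/* Treat the digits after -p (or in PINHOLE) as a fractional part,
 * so "04" means 0.04 and "9999999" can never exceed 1. */
static double parse_pc(const char *digits)
{
    double value = 0.0, scale = 0.1;

    for (; *digits >= '0' && *digits <= '9'; digits++) {
        value += (*digits - '0') * scale;
        scale /= 10.0;
    }
    return value;
}

int main(void)
{
    printf("%g\n", parse_pc("04"));         /* 0.04      */
    printf("%g\n", parse_pc("037"));        /* 0.037     */
    printf("%g\n", parse_pc("9999999"));    /* 0.9999999 */
    return 0;
}
....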
[[x86-pinhole-output]]
==== The Output
We need to decide what we want our software to send to the output, and in what format.
Since our input allows for an unspecified number of focal length entries, it makes sense to use a traditional database-style output of showing the result of the calculation for each focal length on a separate line, while separating all values on one line by a `tab` character.
Optionally, we should also allow the user to specify the use of the CSV format we have studied earlier. In this case, we will print out a line of comma-separated names describing each field of every line, then show our results as before, but substituting a `comma` for the `tab`.
We need a command line option for the CSV format. We cannot use `-c` because that already means _use Connors' constant_. For some strange reason, many web sites refer to CSV files as _"Excel spreadsheet"_ (though the CSV format predates Excel). We will, therefore, use the `-e` switch to inform our software we want the output in the CSV format.
We will start each line of the output with the focal length. This may sound repetitious at first, especially in the interactive mode: The user types in the focal length, and we are repeating it.
But the user can type several focal lengths on one line. The input can also come in from a file or from the output of another program. In that case the user does not see the input at all.
By the same token, the output can go to a file which we will want to examine later, or it could go to the printer, or become the input of another program.
So, it makes perfect sense to start each line with the focal length as entered by the user.
No, wait! Not as entered by the user. What if the user types in something like this:
[source,bash]
....
00000000150
....
Clearly, we need to strip those leading zeros.
So, we might consider reading the user input as is, converting it to binary inside the FPU, and printing it out from there.
But...
What if the user types something like this:
[source,bash]
....
17459765723452353453534535353530530534563507309676764423
....
Ha! The packed decimal FPU format lets us input 18-digit numbers. But the user has entered more than 18 digits. How do we handle that?
Well, we _could_ modify our code to read the first 18 digits, enter them into the FPU, then read more, multiply what we already have on the TOS by 10 raised to the number of additional digits, and then `add` to it.
Yes, we could do that. But in _this_ program it would be ridiculous (in a different one it may be just the thing to do): Even the circumference of the Earth expressed in millimeters only takes 11 digits. Clearly, we cannot build a camera that large (not yet, anyway).
So, if the user enters such a huge number, he is either bored, or testing us, or trying to break into the system, or playing games-doing anything but designing a pinhole camera.
What will we do?
We will slap him in the face, in a manner of speaking:
[source,bash]
....
17459765723452353453534535353530530534563507309676764423 ??? ??? ??? ??? ???
....
To achieve that, we will simply ignore any leading zeros. Once we find a non-zero digit, we will initialize a counter to `0` and start taking three steps:
[.procedure]
. Send the digit to the output.
. Append the digit to a buffer we will use later to produce the packed decimal we can send to the FPU.
. Increase the counter.
Now, while we are taking these three steps, we also need to watch out for one of two conditions:
* If the counter grows above 18, we stop appending to the buffer. We continue reading the digits and sending them to the output.
* If, or rather _when_, the next input character is not a digit, we are done inputting for now.
+
Incidentally, we can simply discard the non-digit, unless it is a `#`, which we must return to the input stream. It starts a comment, so we must see it after we are done producing output and start looking for more input.
That still leaves one possibility uncovered: If all the user enters is a zero (or several zeros), we will never find a non-zero to display.
We can determine this has happened whenever our counter stays at `0`. In that case we need to send `0` to the output, and perform another "slap in the face":
[source,bash]
....
0 ??? ??? ??? ??? ???
....
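Here is a condensed sketch of the digit-reading loop just described (the complete version, which also takes care of returning a `#` to the input stream, appears in the full listing below):
[.programlisting]
....
; Condensed sketch of the digit-reading loop.
; EBP counts the digits we have kept, DL tells us whether
; we have seen a non-zero digit yet.
.number:
cmp al, '0'
je .number0
mov dl, 1 ; no longer a leading zero
.number0:
or dl, dl
je .nextnumber ; still skipping leading zeros
push eax
call putchar ; send the digit to the output
pop eax
inc ebp ; count it
cmp ebp, 19
jae .nextnumber ; more than 18 digits - stop appending
mov [dbuffer+ebp], al ; append the digit to the buffer
.nextnumber:
call getchar
jc .work ; no more input - process what we have
cmp al, '0'
jl .work ; not a digit - done inputting for now
cmp al, '9'
ja .work
jmp short .number
....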
Once we have displayed the focal length and determined it is valid (greater than `0` but not exceeding 18 digits), we can calculate the pinhole diameter.
It is not by coincidence that _pinhole_ contains the word _pin_. Indeed, many a pinhole literally is a _pin hole_, a hole carefully punched with the tip of a pin.
That is because a typical pinhole is very small. Our formula gets the result in millimeters. We will multiply it by `1000`, so we can output the result in _microns_.
At this point we have yet another trap to face: _Too much precision._
Yes, the FPU was designed for high precision mathematics. But we are not dealing with high precision mathematics. We are dealing with physics (optics, specifically).
Suppose we want to convert a truck into a pinhole camera (we would not be the first ones to do that!). Suppose its box is `12` meters long, so we have the focal length of `12000`. Well, using Bender's constant, it gives us square root of `12000` multiplied by `0.04`, which is `4.381780460` millimeters, or `4381.780460` microns.
Put either way, the result is absurdly precise. Our truck is not _exactly_ `12000` millimeters long. We did not measure its length with such a precision, so stating we need a pinhole with the diameter of `4.381780460` millimeters is, well, deceiving. `4.4` millimeters would do just fine.
[NOTE]
====
I "only" used ten digits in the above example. Imagine the absurdity of going for all 18!
====
We need to limit the number of significant digits of our result. One way of doing it is by using an integer representing microns. So, our truck would need a pinhole with the diameter of `4382` microns. Looking at that number, we still decide that `4400` microns, or `4.4` millimeters is close enough.
Additionally, we can decide that no matter how big a result we get, we only want to display four significant digits (or any other number of them, of course). Alas, the FPU does not offer rounding to a specific number of digits (after all, it does not view the numbers as decimal but as binary).
We, therefore, must devise an algorithm to reduce the number of significant digits.
Here is mine (I think it is awkward-if you know a better one, _please_, let me know):
[.procedure]
. Initialize a counter to `0`.
. While the number is greater than or equal to `10000`, divide it by `10` and increase the counter.
. Output the result.
. While the counter is greater than `0`, output `0` and decrease the counter.
[NOTE]
====
The `10000` is only good if you want _four_ significant digits. For any other number of significant digits, replace `10000` with `10` raised to the number of significant digits.
====
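Here is a minimal sketch of that procedure in FPU terms. It assumes the value to be rounded is on the TOS and reuses the integer variables `ten` and `tthou` from our data section; the real program, as we shall see shortly, multiplies by the `0.1` it keeps on the FPU stack instead of dividing:
[.programlisting]
....
; Sketch only: reduce the value on the TOS to four significant digits.
; EBP counts how many trailing zeros we will have to print afterwards.
sub ebp, ebp ; counter = 0
.reduce:
ficom dword [tthou] ; compare the TOS with 10000
fstsw ax
sahf
jb .output ; TOS < 10000 - done reducing
fidiv dword [ten] ; divide it by 10
inc ebp ; and remember that we did
jmp short .reduce
.output:
; print the integer part of the TOS here,
; followed by EBP zeros
....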
We will, then, output the pinhole diameter in microns, rounded off to four significant digits.
At this point, we know the _focal length_ and the _pinhole diameter_. That means we have enough information to also calculate the _f-number_.
We will display the f-number, rounded to four significant digits. Chances are the f-number will tell us very little. To make it more meaningful, we can find the nearest _normalized f-number_, i.e., the nearest power of the square root of 2.
We do that by multiplying the actual f-number by itself, which, of course, will give us its `square`. We will then calculate its base-2 logarithm, which is much easier to do than calculating the base-square-root-of-2 logarithm! We will round the result to the nearest integer. Next, we will raise 2 to the result. Actually, the FPU gives us a good shortcut to do that: We can use the `fscale` op code to "scale" 1, which is analogous to ``shift``ing an integer left. Finally, we calculate the square root of it all, and we have the nearest normalized f-number.
If all that sounds overwhelming-or too much work, perhaps-it may become much clearer if you see the code. It takes 9 op codes altogether:
[.programlisting]
....
fmul st0, st0
fld1
fld st1
fyl2x
frndint
fld1
fscale
fsqrt
fstp st1
....
The first line, `fmul st0, st0`, squares the contents of the TOS (top of the stack, same as `st`, called `st0` by nasm). The `fld1` pushes `1` on the TOS.
The next line, `fld st1`, pushes the square back to the TOS. At this point the square is both in `st` and `st(2)` (it will become clear why we leave a second copy on the stack in a moment). `st(1)` contains `1`.
Next, `fyl2x` calculates base-2 logarithm of `st` multiplied by `st(1)`. That is why we placed `1` on `st(1)` before.
At this point, `st` contains the logarithm we have just calculated, `st(1)` contains the square of the actual f-number we saved for later.
`frndint` rounds the TOS to the nearest integer. `fld1` pushes a `1`. `fscale` shifts the `1` we have on the TOS by the value in `st(1)`, effectively raising 2 to `st(1)`.
Finally, `fsqrt` calculates the square root of the result, i.e., the nearest normalized f-number.
We now have the nearest normalized f-number on the TOS, the base-2 logarithm rounded to the nearest integer in `st(1)`, and the square of the actual f-number in `st(2)`. We are saving the value in `st(2)` for later.
But we do not need the contents of `st(1)` anymore. The last line, `fstp st1`, places the contents of `st` to `st(1)`, and pops. As a result, what was `st(1)` is now `st`, what was `st(2)` is now `st(1)`, etc. The new `st` contains the normalized f-number. The new `st(1)` contains the square of the actual f-number we have stored there for posterity.
At this point, we are ready to output the normalized f-number. Because it is normalized, we will not round it off to four significant digits, but will send it out in its full precision.
The normalized f-number is useful as long as it is reasonably small and can be found on our light meter. Otherwise we need a different method of determining proper exposure.
Earlier we have figured out the formula of calculating proper exposure at an arbitrary f-number from that measured at a different f-number.
Every light meter I have ever seen can determine proper exposure at f5.6. We will, therefore, calculate an _"f5.6 multiplier,"_ i.e., by how much we need to multiply the exposure measured at f5.6 to determine the proper exposure for our pinhole camera.
From the above formula we know this factor can be calculated by dividing our f-number (the actual one, not the normalized one) by `5.6`, and squaring the result.
Mathematically, dividing the square of our f-number by the square of `5.6` will give us the same result.
Computationally, we do not want to square two numbers when we can only square one. So, the first solution seems better at first.
But...
`5.6` is a _constant_. We do not have to have our FPU waste precious cycles. We can just tell it to divide the square of the f-number by whatever `5.6²` equals. Or we can divide the f-number by `5.6`, and then square the result. The two ways now seem equal.
But, they are not!
Having studied the principles of photography above, we remember that `5.6` is actually the square root of 2 raised to the fifth power. An _irrational_ number. The square of this number is _exactly_ `32`.
Not only is `32` an integer, it is a power of 2. We do not need to divide the square of the f-number by `32`. We only need to use `fscale` to shift it right by five positions. In the FPU lingo it means we will `fscale` it with `st(1)` equal to `-5`. That is _much faster_ than a division.
So, now it has become clear why we have saved the square of the f-number on the top of the FPU stack. The calculation of the f5.6 multiplier is the easiest calculation of this entire program! We will output it rounded to four significant digits.
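In code, assuming the square of the f-number is on the TOS and `-5` is sitting in `st(1)` (which is exactly how our program arranges the FPU stack), the whole conversion is a single op code:
[.programlisting]
....
fscale ; st0 = st0 * 2^(-5), i.e., the square divided by 32
....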
There is one more useful number we can calculate: The number of stops our f-number is from f5.6. This may help us if our f-number is just outside the range of our light meter, but we have a shutter which lets us set various speeds, and this shutter uses stops.
Say, our f-number is 5 stops from f5.6, and the light meter says we should use 1/1000 sec. Then we can set our shutter speed to 1/1000 first, then move the dial by 5 stops.
This calculation is quite easy as well. All we have to do is to calculate the base-2 logarithm of the f5.6 multiplier we had just calculated (though we need its value from before we rounded it off). We then output the result rounded to the nearest integer. We do not need to worry about having more than four significant digits in this one: The result is most likely to have only one or two digits anyway.
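Assuming the unrounded f5.6 multiplier is still on the TOS, the calculation takes just three op codes:
[.programlisting]
....
fld1 ; st0 = 1, st1 = multiplier
fxch st1 ; st0 = multiplier, st1 = 1
fyl2x ; st0 = 1 * log2(multiplier) = stops from f5.6
; printnumber will round this to the nearest integer on output
....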
[[x86-fpu-optimizations]]
=== FPU Optimizations
In assembly language we can optimize the FPU code in ways impossible in high-level languages, including C.
Whenever a C function needs to calculate a floating-point value, it loads all necessary variables and constants into FPU registers. It then does whatever calculation is required to get the correct result. Good C compilers can optimize that part of the code really well.
It "returns" the value by leaving the result on the TOS. However, before it returns, it cleans up. Any variables and constants it used in its calculation are now gone from the FPU.
It cannot do what we just did above: We calculated the square of the f-number and kept it on the stack for later use by another function.
We _knew_ we would need that value later on. We also knew we had enough room on the stack (which only has room for 8 numbers) to store it there.
A C compiler has no way of knowing that a value it has on the stack will be required again in the very near future.
Of course, the C programmer may know it. But the only recourse he has is to store the value in a memory variable.
That means, for one, the value will be changed from the 80-bit precision used internally by the FPU to a C _double_ (64 bits) or even _single_ (32 bits).
That also means that the value must be moved from the TOS into the memory, and then back again. Alas, of all FPU operations, the ones that access the computer memory are the slowest.
So, whenever programming the FPU in assembly language, look for ways of keeping intermediate results on the FPU stack.
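To illustrate the difference, this is roughly what compiled C code ends up doing with an intermediate result it wants to reuse (`temp` stands for a hypothetical 64-bit memory variable, not something in our program):
[.programlisting]
....
fstp qword [temp] ; store the value as a 64-bit double in memory...
; ...other calculations...
fld qword [temp] ; ...then load it again when it is needed
....
In assembly language we can simply leave the value where it is and refer to it later as `st(1)`, `st(2)`, and so on.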
We can take that idea even further! In our program we are using a _constant_ (the one we named `PC`).
It does not matter how many pinhole diameters we are calculating: 1, 10, 20, 1000, we are always using the same constant. Therefore, we can optimize our program by keeping the constant on the stack all the time.
Early on in our program, we are calculating the value of the above constant. We need to divide our input by `10` for every digit in the constant.
It is much faster to multiply than to divide. So, at the start of our program, we divide `10` into `1` to obtain `0.1`, which we then keep on the stack: Instead of dividing the input by `10` for every digit, we multiply it by `0.1`.
By the way, we do not input `0.1` directly, even though we could. We have a reason for that: While `0.1` can be expressed with just one decimal place, we do not know how many _binary_ places it takes. We, therefore, let the FPU calculate its binary value to its own high precision.
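It only takes three op codes, the same ones you will find near the start of `.setconst` in the listing below:
[.programlisting]
....
fld1 ; st0 = 1
fild dword [ten] ; st0 = 10, st1 = 1
fdivp st1, st0 ; st0 = 1/10 = 0.1, to the FPU's full precision
....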
We are using other constants: We multiply the pinhole diameter by `1000` to convert it from millimeters to microns. We compare numbers to `10000` when we are rounding them off to four significant digits. So, we keep both `1000` and `10000` on the stack. And, of course, we reuse the `0.1` when rounding off numbers to four digits.
Last but not least, we keep `-5` on the stack. We need it to scale the square of the f-number, instead of dividing it by `32`. It is not by coincidence we load this constant last. That makes it the top of the stack when only the constants are on it. So, when the square of the f-number is being scaled, the `-5` is at `st(1)`, precisely where `fscale` expects it to be.
It is common to create certain constants from scratch instead of loading them from the memory. That is what we are doing with `-5`:
[.programlisting]
....
fld1 ; TOS = 1
fadd st0, st0 ; TOS = 2
fadd st0, st0 ; TOS = 4
fld1 ; TOS = 1
faddp st1, st0 ; TOS = 5
fchs ; TOS = -5
....
We can generalize all these optimizations into one rule: _Keep repeat values on the stack!_
[TIP]
====
_PostScript(R)_ is a stack-oriented programming language. There are many more books available about PostScript(R) than about the FPU assembly language: Mastering PostScript(R) will help you master the FPU.
====
[[x86-pinhole-the-code]]
=== pinhole-The Code
[.programlisting]
....
;;;;;;; pinhole.asm ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;
; Find various parameters of a pinhole camera construction and use
;
; Started: 9-Jun-2001
; Updated: 10-Jun-2001
;
; Copyright (c) 2001 G. Adam Stanislav
; All rights reserved.
;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
%include 'system.inc'
%define BUFSIZE 2048
section .data
align 4
ten dd 10
thousand dd 1000
tthou dd 10000
fd.in dd stdin
fd.out dd stdout
envar db 'PINHOLE=' ; Exactly 8 bytes, or 2 dwords long
pinhole db '04,', ; Bender's constant (0.04)
connors db '037', 0Ah ; Connors' constant
usg db 'Usage: pinhole [-b] [-c] [-e] [-p <value>] [-o <outfile>] [-i <infile>]', 0Ah
usglen equ $-usg
iemsg db "pinhole: Can't open input file", 0Ah
iemlen equ $-iemsg
oemsg db "pinhole: Can't create output file", 0Ah
oemlen equ $-oemsg
pinmsg db "pinhole: The PINHOLE constant must not be 0", 0Ah
pinlen equ $-pinmsg
toobig db "pinhole: The PINHOLE constant may not exceed 18 decimal places", 0Ah
biglen equ $-toobig
huhmsg db 9, '???'
separ db 9, '???'
sep2 db 9, '???'
sep3 db 9, '???'
sep4 db 9, '???', 0Ah
huhlen equ $-huhmsg
header db 'focal length in millimeters,pinhole diameter in microns,'
db 'F-number,normalized F-number,F-5.6 multiplier,stops '
db 'from F-5.6', 0Ah
headlen equ $-header
section .bss
ibuffer resb BUFSIZE
obuffer resb BUFSIZE
dbuffer resb 20 ; decimal input buffer
bbuffer resb 10 ; BCD buffer
section .text
align 4
huh:
call write
push dword huhlen
push dword huhmsg
push dword [fd.out]
sys.write
add esp, byte 12
ret
align 4
perr:
push dword pinlen
push dword pinmsg
push dword stderr
sys.write
push dword 4 ; return failure
sys.exit
align 4
consttoobig:
push dword biglen
push dword toobig
push dword stderr
sys.write
push dword 5 ; return failure
sys.exit
align 4
ierr:
push dword iemlen
push dword iemsg
push dword stderr
sys.write
push dword 1 ; return failure
sys.exit
align 4
oerr:
push dword oemlen
push dword oemsg
push dword stderr
sys.write
push dword 2
sys.exit
align 4
usage:
push dword usglen
push dword usg
push dword stderr
sys.write
push dword 3
sys.exit
align 4
global _start
_start:
add esp, byte 8 ; discard argc and argv[0]
sub esi, esi
.arg:
pop ecx
or ecx, ecx
je near .getenv ; no more arguments
; ECX contains the pointer to an argument
cmp byte [ecx], '-'
jne usage
inc ecx
mov ax, [ecx]
inc ecx
.o:
cmp al, 'o'
jne .i
; Make sure we are not asked for the output file twice
cmp dword [fd.out], stdout
jne usage
; Find the path to output file - it is either at [ECX+1],
; i.e., -ofile --
; or in the next argument,
; i.e., -o file
or ah, ah
jne .openoutput
pop ecx
jecxz usage
.openoutput:
push dword 420 ; file mode (644 octal)
push dword 0200h | 0400h | 01h
; O_CREAT | O_TRUNC | O_WRONLY
push ecx
sys.open
jc near oerr
add esp, byte 12
mov [fd.out], eax
jmp short .arg
.i:
cmp al, 'i'
jne .p
; Make sure we are not asked twice
cmp dword [fd.in], stdin
jne near usage
; Find the path to the input file
or ah, ah
jne .openinput
pop ecx
or ecx, ecx
je near usage
.openinput:
push dword 0 ; O_RDONLY
push ecx
sys.open
jc near ierr ; open failed
add esp, byte 8
mov [fd.in], eax
jmp .arg
.p:
cmp al, 'p'
jne .c
or ah, ah
jne .pcheck
pop ecx
or ecx, ecx
je near usage
mov ah, [ecx]
.pcheck:
cmp ah, '0'
jl near usage
cmp ah, '9'
ja near usage
mov esi, ecx
jmp .arg
.c:
cmp al, 'c'
jne .b
or ah, ah
jne near usage
mov esi, connors
jmp .arg
.b:
cmp al, 'b'
jne .e
or ah, ah
jne near usage
mov esi, pinhole
jmp .arg
.e:
cmp al, 'e'
jne near usage
or ah, ah
jne near usage
mov al, ','
mov [huhmsg], al
mov [separ], al
mov [sep2], al
mov [sep3], al
mov [sep4], al
jmp .arg
align 4
.getenv:
; If ESI = 0, we did not have a -p argument,
; and need to check the environment for "PINHOLE="
or esi, esi
jne .init
sub ecx, ecx
.nextenv:
pop esi
or esi, esi
je .default ; no PINHOLE envar found
; check if this envar starts with 'PINHOLE='
mov edi, envar
mov cl, 2 ; 'PINHOLE=' is 2 dwords long
rep cmpsd
jne .nextenv
; Check if it is followed by a digit
mov al, [esi]
cmp al, '0'
jl .default
cmp al, '9'
jbe .init
; fall through
align 4
.default:
; We got here because we had no -p argument,
; and did not find the PINHOLE envar.
mov esi, pinhole
; fall through
align 4
.init:
sub eax, eax
sub ebx, ebx
sub ecx, ecx
sub edx, edx
mov edi, dbuffer+1
mov byte [dbuffer], '0'
; Convert the pinhole constant to real
.constloop:
lodsb
cmp al, '9'
ja .setconst
cmp al, '0'
je .processconst
jb .setconst
inc dl
.processconst:
inc cl
cmp cl, 18
ja near consttoobig
stosb
jmp short .constloop
align 4
.setconst:
or dl, dl
je near perr
finit
fild dword [tthou]
fld1
fild dword [ten]
fdivp st1, st0
fild dword [thousand]
mov edi, obuffer
mov ebp, ecx
call bcdload
.constdiv:
fmul st0, st2
loop .constdiv
fld1
fadd st0, st0
fadd st0, st0
fld1
faddp st1, st0
fchs
; If we are creating a CSV file,
; print header
cmp byte [separ], ','
jne .bigloop
push dword headlen
push dword header
push dword [fd.out]
sys.write
.bigloop:
call getchar
jc near done
; Skip to the end of the line if you got '#'
cmp al, '#'
jne .num
call skiptoeol
jmp short .bigloop
.num:
; See if you got a number
cmp al, '0'
jl .bigloop
cmp al, '9'
ja .bigloop
; Yes, we have a number
sub ebp, ebp
sub edx, edx
.number:
cmp al, '0'
je .number0
mov dl, 1
.number0:
or dl, dl ; Skip leading 0's
je .nextnumber
push eax
call putchar
pop eax
inc ebp
cmp ebp, 19
jae .nextnumber
mov [dbuffer+ebp], al
.nextnumber:
call getchar
jc .work
cmp al, '#'
je .ungetc
cmp al, '0'
jl .work
cmp al, '9'
ja .work
jmp short .number
.ungetc:
dec esi
inc ebx
.work:
; Now, do all the work
or dl, dl
je near .work0
cmp ebp, 19
jae near .toobig
call bcdload
; Calculate pinhole diameter
fld st0 ; save it
fsqrt
fmul st0, st3
fld st0
fmul st5
sub ebp, ebp
; Round off to 4 significant digits
.diameter:
fcom st0, st7
fstsw ax
sahf
jb .printdiameter
fmul st0, st6
inc ebp
jmp short .diameter
.printdiameter:
call printnumber ; pinhole diameter
; Calculate F-number
fdivp st1, st0
fld st0
sub ebp, ebp
.fnumber:
fcom st0, st6
fstsw ax
sahf
jb .printfnumber
fmul st0, st5
inc ebp
jmp short .fnumber
.printfnumber:
call printnumber ; F number
; Calculate normalized F-number
fmul st0, st0
fld1
fld st1
fyl2x
frndint
fld1
fscale
fsqrt
fstp st1
sub ebp, ebp
call printnumber
; Calculate time multiplier from F-5.6
fscale
fld st0
; Round off to 4 significant digits
.fmul:
fcom st0, st6
fstsw ax
sahf
jb .printfmul
inc ebp
fmul st0, st5
jmp short .fmul
.printfmul:
call printnumber ; F multiplier
; Calculate F-stops from 5.6
fld1
fxch st1
fyl2x
sub ebp, ebp
call printnumber
mov al, 0Ah
call putchar
jmp .bigloop
.work0:
mov al, '0'
call putchar
align 4
.toobig:
call huh
jmp .bigloop
align 4
done:
call write ; flush output buffer
; close files
push dword [fd.in]
sys.close
push dword [fd.out]
sys.close
finit
; return success
push dword 0
sys.exit
align 4
skiptoeol:
; Keep reading until you come to cr, lf, or eof
call getchar
jc done
cmp al, 0Ah
jne .cr
ret
.cr:
cmp al, 0Dh
jne skiptoeol
ret
align 4
getchar:
or ebx, ebx
jne .fetch
call read
.fetch:
lodsb
dec ebx
clc
ret
read:
jecxz .read
call write
.read:
push dword BUFSIZE
mov esi, ibuffer
push esi
push dword [fd.in]
sys.read
add esp, byte 12
mov ebx, eax
or eax, eax
je .empty
sub eax, eax
ret
align 4
.empty:
add esp, byte 4
stc
ret
align 4
putchar:
stosb
inc ecx
cmp ecx, BUFSIZE
je write
ret
align 4
write:
jecxz .ret ; nothing to write
sub edi, ecx ; start of buffer
push ecx
push edi
push dword [fd.out]
sys.write
add esp, byte 12
sub eax, eax
sub ecx, ecx ; buffer is empty now
.ret:
ret
align 4
bcdload:
; EBP contains the number of chars in dbuffer
push ecx
push esi
push edi
lea ecx, [ebp+1]
lea esi, [dbuffer+ebp-1]
shr ecx, 1
std
mov edi, bbuffer
sub eax, eax
mov [edi], eax
mov [edi+4], eax
mov [edi+2], ax
.loop:
lodsw
sub ax, 3030h
shl al, 4
or al, ah
mov [edi], al
inc edi
loop .loop
fbld [bbuffer]
cld
pop edi
pop esi
pop ecx
sub eax, eax
ret
align 4
printnumber:
push ebp
mov al, [separ]
call putchar
; Print the integer at the TOS
mov ebp, bbuffer+9
fbstp [bbuffer]
; Check the sign
mov al, [ebp]
dec ebp
or al, al
jns .leading
; We got a negative number (should never happen)
mov al, '-'
call putchar
.leading:
; Skip leading zeros
mov al, [ebp]
dec ebp
or al, al
jne .first
cmp ebp, bbuffer
jae .leading
; We are here because the result was 0.
; Print '0' and return
mov al, '0'
jmp putchar
.first:
; We have found the first non-zero.
; But it is still packed
test al, 0F0h
jz .second
push eax
shr al, 4
add al, '0'
call putchar
pop eax
and al, 0Fh
.second:
add al, '0'
call putchar
.next:
cmp ebp, bbuffer
jb .done
mov al, [ebp]
push eax
shr al, 4
add al, '0'
call putchar
pop eax
and al, 0Fh
add al, '0'
call putchar
dec ebp
jmp short .next
.done:
pop ebp
or ebp, ebp
je .ret
.zeros:
mov al, '0'
call putchar
dec ebp
jne .zeros
.ret:
ret
....
The code follows the same format as all the other filters we have seen before, with one subtle exception:
____
We are no longer assuming that the end of input implies the end of things to do, something we took for granted in the _character-oriented_ filters.
This filter does not process characters. It processes a _language_ (albeit a very simple one, consisting only of numbers).
When we have no more input, it can mean one of two things:
* We are done and can quit. This is the same as before.
* The last character we have read was a digit. We have stored it at the end of our ASCII-to-float conversion buffer. We now need to convert the contents of that buffer into a number and write the last line of our output.
For that reason, we have modified our `getchar` and our `read` routines to return with the `carry flag` _clear_ whenever we are fetching another character from the input, or the `carry flag` _set_ whenever there is no more input.
Of course, we are still using assembly language magic to do that! Take a good look at `getchar`. It _always_ returns with the `carry flag` _clear_.
Yet, our main code relies on the `carry flag` to tell it when to quit-and it works.
The magic is in `read`. Whenever it receives more input from the system, it just returns to `getchar`, which fetches a character from the input buffer, _clears_ the `carry flag` and returns.
But when `read` receives no more input from the system, it does _not_ return to `getchar` at all. Instead, the `add esp, byte 4` op code adds `4` to `ESP`, _sets_ the `carry flag`, and returns.
So, where does it return to? Whenever a program uses the `call` op code, the microprocessor ``push``es the return address, i.e., it stores it on the top of the stack (not the FPU stack, the system stack, which is in the memory). When a program uses the `ret` op code, the microprocessor ``pop``s the return address from the stack, and jumps to it.
But since we added `4` to `ESP` (which is the stack pointer register), we have effectively given the microprocessor a minor case of _amnesia_: It no longer remembers it was `getchar` that ``call``ed `read`.
And since `getchar` never ``push``ed anything before ``call``ing `read`, the top of the stack now contains the return address to whatever or whoever ``call``ed `getchar`. As far as that caller is concerned, he ``call``ed `getchar`, which ``ret``urned with the `carry flag` set!
____
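Here is that fragment of `read` again, with the sleight of hand spelled out in the comments:
[.programlisting]
....
.empty:
add esp, byte 4 ; discard the return address getchar pushed
stc ; set the carry flag: no more input
ret ; ...which now returns to whoever called getchar
....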
Other than that, the `bcdload` routine is caught up in the middle of a Lilliputian conflict between the Big-Endians and the Little-Endians.
It is converting the text representation of a number into that number: The text is stored in the big-endian order, but the _packed decimal_ is little-endian.
To solve the conflict, we use the `std` op code early on. We cancel it with `cld` later on: It is quite important we do not `call` anything that may depend on the default setting of the _direction flag_ while `std` is active.
Everything else in this code should be quite clear, provided you have read the entire chapter that precedes it.
It is a classical example of the adage that programming requires a lot of thought and only a little coding. Once we have thought through every tiny detail, the code almost writes itself.
[[x86-pinhole-using]]
=== Using pinhole
Because we have decided to make the program _ignore_ any input except for numbers (it even ignores numbers that appear inside a comment), we can actually perform _textual queries_. We do not _have to_, but we _can_.
In my humble opinion, forming a textual query, instead of having to follow a very strict syntax, makes software much more user friendly.
Suppose we want to build a pinhole camera to use 4x5 inch film. The standard focal length for that film is about 150 mm. We want to _fine-tune_ our focal length so the pinhole diameter is as round a number as possible. Let us also suppose we are quite comfortable with cameras but somewhat intimidated by computers. Rather than just typing in a bunch of numbers, we want to _ask_ a couple of questions.
Our session might look like this:
[source,bash]
....
% pinhole
Computer,
What size pinhole do I need for the focal length of 150?
150 490 306 362 2930 12
Hmmm... How about 160?
160 506 316 362 3125 12
Let's make it 155, please.
155 498 311 362 3027 12
Ah, let's try 157...
157 501 313 362 3066 12
156?
156 500 312 362 3047 12
That's it! Perfect! Thank you very much!
^D
....
We have found that, while for a focal length of 150 mm our pinhole diameter should be 490 microns, or 0.49 mm, if we go with the almost identical focal length of 156 mm we can get away with a pinhole diameter of exactly one half of a millimeter.
[[x86-pinhole-scripting]]
=== Scripting
Because we have chosen the `#` character to denote the start of a comment, we can treat our pinhole software as a _scripting language_.
You have probably seen shell _scripts_ that start with:
[.programlisting]
....
#! /bin/sh
....
...or...
[.programlisting]
....
#!/bin/sh
....
...because the blank space after the `#!` is optional.
Whenever UNIX(R) is asked to run an executable file which starts with the `#!`, it assumes the file is a script. It takes the rest of that first line as a command, appends the name of the script file to it, and tries to execute the resulting command.
Suppose now that we have installed pinhole in [.filename]#/usr/local/bin/#. We can now write a script to calculate the various pinhole diameters suitable for the focal lengths commonly used with 120 film.
The script might look something like this:
[.programlisting]
....
#! /usr/local/bin/pinhole -b -i
# Find the best pinhole diameter
# for the 120 film
### Standard
80
### Wide angle
30, 40, 50, 60, 70
### Telephoto
100, 120, 140
....
Because 120 is a medium size film, we may name this file [.filename]#medium#.
We can set its permissions to execute, and run it as if it were a program:
[source,bash]
....
% chmod 755 medium
% ./medium
....
UNIX(R) will interpret that last command as:
[source,bash]
....
% /usr/local/bin/pinhole -b -i ./medium
....
It will run that command and display:
[source,bash]
....
80 358 224 256 1562 11
30 219 137 128 586 9
40 253 158 181 781 10
50 283 177 181 977 10
60 310 194 181 1172 10
70 335 209 181 1367 10
100 400 250 256 1953 11
120 438 274 256 2344 11
140 473 296 256 2734 11
....
Now, let us enter:
[source,bash]
....
% ./medium -c
....
UNIX(R) will treat that as:
[source,bash]
....
% /usr/local/bin/pinhole -b -i ./medium -c
....
That gives it two conflicting options: `-b` and `-c` (use Bender's constant and use Connors' constant). We have programmed it so that later options override earlier ones-our program will calculate everything using Connors' constant:
[source,bash]
....
80 331 242 256 1826 11
30 203 148 128 685 9
40 234 171 181 913 10
50 262 191 181 1141 10
60 287 209 181 1370 10
70 310 226 256 1598 11
100 370 270 256 2283 11
120 405 296 256 2739 11
140 438 320 362 3196 12
....
We decide we want to go with Bender's constant after all. We want to save its values as a comma-separated file:
[source,bash]
....
% ./medium -b -e > bender
% cat bender
focal length in millimeters,pinhole diameter in microns,F-number,normalized F-number,F-5.6 multiplier,stops from F-5.6
80,358,224,256,1562,11
30,219,137,128,586,9
40,253,158,181,781,10
50,283,177,181,977,10
60,310,194,181,1172,10
70,335,209,181,1367,10
100,400,250,256,1953,11
120,438,274,256,2344,11
140,473,296,256,2734,11
%
....
[[x86-caveats]]
== Caveats
Assembly language programmers who "grew up" under MS-DOS(R) and Windows(R) often tend to take shortcuts. Reading the keyboard scan codes and writing directly to video memory are two classical examples of practices which, under MS-DOS(R), are not frowned upon but are considered the right thing to do.
The reason? Both the PC BIOS and MS-DOS(R) are notoriously slow when performing these operations.
You may be tempted to continue similar practices in the UNIX(R) environment. For example, I have seen a web site which explains how to access the keyboard scan codes on a popular UNIX(R) clone.
That is generally a _very bad idea_ in a UNIX(R) environment! Let me explain why.
[[x86-protected]]
=== UNIX(R) Is Protected
For one thing, it may simply not be possible. UNIX(R) runs in protected mode. Only the kernel and device drivers are allowed to access hardware directly. Perhaps a particular UNIX(R) clone will let you read the keyboard scan codes, but chances are a real UNIX(R) operating system will not. And even if one version may let you do it, the next one may not, so your carefully crafted software may become a dinosaur overnight.
[[x86-abstraction]]
=== UNIX(R) Is an Abstraction
But there is a much more important reason not to try accessing the hardware directly (unless, of course, you are writing a device driver), even on the UNIX(R) like systems that let you do it:
_UNIX(R) is an abstraction!_
There is a major difference in the philosophy of design between MS-DOS(R) and UNIX(R). MS-DOS(R) was designed as a single-user system. It is run on a computer with a keyboard and a video screen attached directly to that computer. User input is almost guaranteed to come from that keyboard. Your program's output virtually always ends up on that screen.
This is NEVER guaranteed under UNIX(R). It is quite common for a UNIX(R) user to pipe and redirect program input and output:
[source,bash]
....
% program1 | program2 | program3 > file1
....
If you have written program2, your input does not come from the keyboard but from the output of program1. Similarly, your output does not go to the screen but becomes the input for program3 whose output, in turn, goes to [.filename]#file1#.
But there is more! Even if you made sure that your input comes from, and your output goes to, the terminal, there is no guarantee the terminal is a PC: It may not have its video memory where you expect it, nor may its keyboard be producing PC-style scan codes. It may be a Macintosh(R), or any other computer.
Now you may be shaking your head: My software is in PC assembly language, how can it run on a Macintosh(R)? But I did not say your software would be running on a Macintosh(R), only that its terminal may be a Macintosh(R).
Under UNIX(R), the terminal does not have to be directly attached to the computer that runs your software, it can even be on another continent, or, for that matter, on another planet. It is perfectly possible that a Macintosh(R) user in Australia connects to a UNIX(R) system in North America (or anywhere else) via telnet. The software then runs on one computer, while the terminal is on a different computer: If you try to read the scan codes, you will get the wrong input!
The same holds true for any other hardware: A file you are reading may be on a disk you have no direct access to. A camera you are reading images from may be on a space shuttle, connected to you via satellites.
That is why under UNIX(R) you must never make any assumptions about where your data is coming from and going to. Always let the system handle the physical access to the hardware.
[NOTE]
====
These are caveats, not absolute rules. Exceptions are possible. For example, if a text editor has determined it is running on a local machine, it may want to read the scan codes directly for improved control. I am not mentioning these caveats to tell you what to do or what not to do, just to make you aware of certain pitfalls that await you if you have just arrived at UNIX(R) from MS-DOS(R). Of course, creative people often break rules, and it is OK as long as they know they are breaking them and why.
====
[[x86-acknowledgements]]
== Acknowledgements
This tutorial would never have been possible without the help of many experienced FreeBSD programmers from the {freebsd-hackers}, many of whom have patiently answered my questions, and pointed me in the right direction in my attempts to explore the inner workings of UNIX(R) system programming in general and FreeBSD in particular.
Thomas M. Sommers opened the door for me. His https://web.archive.org/web/20090914064615/http://www.codebreakers-journal.com/content/view/262/27[How do I write "Hello, world" in FreeBSD assembler?] web page was my first encounter with an example of assembly language programming under FreeBSD.
Jake Burkholder has kept the door open by willingly answering all of my questions and supplying me with example assembly language source code.
Copyright (C) 2000-2001 G. Adam Stanislav. All rights reserved.
diff --git a/documentation/content/en/books/faq/_index.adoc b/documentation/content/en/books/faq/_index.adoc
index 5db269a12b..cf3d7379df 100644
--- a/documentation/content/en/books/faq/_index.adoc
+++ b/documentation/content/en/books/faq/_index.adoc
@@ -1,2642 +1,2642 @@
---
title: Frequently Asked Questions for FreeBSD 11.X, 12.X, and 13.X
authors:
- author: The FreeBSD Documentation Project
-copyright: 1995-2020 The FreeBSD Documentation Project
-releaseinfo: "$FreeBSD$"
+copyright: 1995-2021 The FreeBSD Documentation Project
+description: Frequently Asked Questions (FAQ) for FreeBSD 11.X, 12.X, and 13.X
trademarks: ["freebsd", "ibm", "ieee", "adobe", "intel", "linux", "microsoft", "opengroup", "sun", "netbsd", "general"]
---
= Frequently Asked Questions for FreeBSD {rel2-relx} and {rel-relx}
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnums:
:sectnumlevels: 6
:partnums:
:chapter-signifier: Chapter
:part-signifier: Part
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:rel-numbranch: 4
:rel-head: 14-CURRENT
:rel-head-relx: 14.X
:rel-head-releng: head/
:rel-relx: 13.X
:rel-stable: 13-STABLE
:rel-releng: stable/13/
:rel-relengdate: December 2018
:rel2-relx: 12.X
:rel2-stable: 12-STABLE
:rel2-releng: stable/12/
:rel2-relengdate: December 2018
:rel3-relx: 11.X
:rel3-stable: 11-STABLE
:rel3-releng: stable/11/
:rel3-relengdate: October 2016
ifeval::["{backend}" == "html5"]
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
endif::[]
ifeval::["{backend}" == "pdf"]
include::../../../../shared/mirrors.adoc[]
include::../../../../shared/authors.adoc[]
include::../../../../shared/releases.adoc[]
include::../../../../shared/en/mailing-lists.adoc[]
include::../../../../shared/en/teams.adoc[]
include::../../../../shared/en/urls.adoc[]
endif::[]
ifeval::["{backend}" == "epub3"]
include::../../../../shared/mirrors.adoc[]
include::../../../../shared/authors.adoc[]
include::../../../../shared/releases.adoc[]
include::../../../../shared/en/mailing-lists.adoc[]
include::../../../../shared/en/teams.adoc[]
include::../../../../shared/en/urls.adoc[]
endif::[]
[.abstract-title]
Abstract
This is the Frequently Asked Questions (FAQ) for FreeBSD versions {rel-relx}, {rel2-relx}, and {rel3-relx}. Every effort has been made to make this FAQ as informative as possible; if you have any suggestions as to how it may be improved, send them to the {freebsd-doc}.
The latest version of this document is always available from the link:{faq}[FreeBSD website]. It may also be downloaded as one large link:.[HTML] file with HTTP or as a variety of other formats from the https://download.freebsd.org/ftp/doc/[FreeBSD FTP server].
'''
toc::[]
[[introduction]]
== Introduction
[[what-is-FreeBSD]]
=== What is FreeBSD?
FreeBSD is a modern operating system for desktops, laptops, servers, and embedded systems with support for a large number of https://www.FreeBSD.org/platforms/[platforms].
It is based on U.C. Berkeley's "4.4BSD-Lite" release, with some "4.4BSD-Lite2" enhancements. It is also based indirectly on William Jolitz's port of U.C. Berkeley's "Net/2" to the i386(TM), known as "386BSD", though very little of the 386BSD code remains.
FreeBSD is used by companies, Internet Service Providers, researchers, computer professionals, students and home users all over the world in their work, education and recreation.
For more detailed information on FreeBSD, refer to the link:{handbook}[FreeBSD Handbook].
[[FreeBSD-goals]]
=== What is the goal of the FreeBSD Project?
The goal of the FreeBSD Project is to provide a stable and fast general purpose operating system that may be used for any purpose without strings attached.
[[bsd-license-restrictions]]
=== Does the FreeBSD license have any restrictions?
Yes. Those restrictions do not control how the code is used, but how to treat the FreeBSD Project itself. The license itself is available at https://www.FreeBSD.org/copyright/freebsd-license/[license] and can be summarized like this:
* Do not claim that you wrote this.
* Do not sue us if it breaks.
* Do not remove or modify the license.
Many of us have a significant investment in the project and would certainly not mind a little financial compensation now and then, but we definitely do not insist on it. We believe that our first and foremost "mission" is to provide code to any and all comers, and for whatever purpose, so that the code gets the widest possible use and provides the widest possible benefit. This, we believe, is one of the most fundamental goals of Free Software and one that we enthusiastically support.
Code in our source tree which falls under the https://www.FreeBSD.org/copyright/COPYING[GNU General Public License (GPL)] or https://www.FreeBSD.org/copyright/COPYING.LIB[GNU Library General Public License (LGPL)] comes with slightly more strings attached, though at least on the side of enforced access rather than the usual opposite. Due to the additional complexities that can evolve in the commercial use of GPL software, we do, however, endeavor to replace such software with submissions under the more relaxed https://www.FreeBSD.org/copyright/freebsd-license/[FreeBSD license] whenever possible.
[[replace-current-OS]]
=== Can FreeBSD replace my current operating system?
For most people, yes. But this question is not quite that cut-and-dried.
Most people do not actually use an operating system. They use applications. The applications are what really use the operating system. FreeBSD is designed to provide a robust and full-featured environment for applications. It supports a wide variety of web browsers, office suites, email readers, graphics programs, programming environments, network servers, and much more. Most of these applications can be managed through the https://www.FreeBSD.org/ports/[Ports Collection].
If an application is only available on one operating system, that operating system cannot just be replaced. Chances are, there is a very similar application on FreeBSD, however. As a solid office or Internet server or a reliable workstation, FreeBSD will almost certainly do everything you need. Many computer users across the world, including both novices and experienced UNIX(R) administrators, use FreeBSD as their only desktop operating system.
Users migrating to FreeBSD from another UNIX(R)-like environment will find FreeBSD to be similar. Windows(R) and Mac OS(R) users may be interested in instead using https://www.ghostbsd.org/[GhostBSD], https://www.midnightbsd.org/[MidnightBSD] or https://www.nomadbsd.org/[NomadBSD], three FreeBSD-based desktop distributions. Non-UNIX(R) users should expect to invest some additional time learning the UNIX(R) way of doing things. This FAQ and the link:{handbook}[FreeBSD Handbook] are excellent places to start.
[[why-called-FreeBSD]]
=== Why is it called FreeBSD?
* It may be used free of charge, even by commercial users.
* Full source for the operating system is freely available, and the minimum possible restrictions have been placed upon its use, distribution and incorporation into other work (commercial or non-commercial).
* Anyone who has an improvement or bug fix is free to submit their code and have it added to the source tree (subject to one or two obvious provisions).
It is worth pointing out that the word "free" is being used in two ways here: one meaning "at no cost" and the other meaning "do whatever you like". Apart from one or two things you _cannot_ do with the FreeBSD code, for example pretending you wrote it, you can really do whatever you like with it.
[[differences-to-other-bsds]]
=== What are the differences between FreeBSD and NetBSD, OpenBSD, and other open source BSD operating systems?
James Howard wrote a good explanation of the history and differences between the various projects, called https://jameshoward.us/archive/bsd-family-tree/[The BSD Family Tree] which goes a fair way to answering this question. Some of the information is out of date, but the history portion in particular remains accurate.
Most of the BSDs share patches and code, even today. All of the BSDs have common ancestry.
The design goals of FreeBSD are described in <<FreeBSD-goals>>, above. The design goals of the other most popular BSDs may be summarized as follows:
* OpenBSD aims for operating system security above all else. The OpenBSD team wrote man:ssh[1] and man:pf[4], which have both been ported to FreeBSD.
* NetBSD aims to be easily ported to other hardware platforms.
* DragonFly BSD is a fork of FreeBSD 4.8 that has since developed many interesting features of its own, including the HAMMER file system and support for user-mode "vkernels".
[[latest-version]]
=== What is the latest version of FreeBSD?
At any point in the development of FreeBSD, there can be multiple parallel branches. {rel-relx} releases are made from the {rel-stable} branch, and {rel2-relx} releases are made from the {rel2-stable} branch.
Up until the release of 12.0, the {rel2-relx} series was the one known as _-STABLE_. However, as of {rel-head-relx}, the {rel2-relx} branch will be designated for an "extended support" status and receive only fixes for major problems, such as security-related fixes.
Releases are made <<release-freq,every few months>>. While many people stay more up-to-date with the FreeBSD sources (see the questions on <<current,FreeBSD-CURRENT>> and <<stable,FreeBSD-STABLE>>) than that, doing so is more of a commitment, as the sources are a moving target.
More information on FreeBSD releases can be found on the https://www.FreeBSD.org/releng/#release-build[Release Engineering page] and in man:release[7].
[[current]]
=== What is _FreeBSD-CURRENT_?
link:{handbook}#current[FreeBSD-CURRENT] is the development version of the operating system, which will in due course become the new FreeBSD-STABLE branch. As such, it is really only of interest to developers working on the system and die-hard hobbyists. See the link:{handbook}#current[relevant section] in the link:{handbook}[Handbook] for details on running _-CURRENT_.
Users not familiar with FreeBSD should not use FreeBSD-CURRENT. This branch sometimes evolves quite quickly and, due to mistakes, can be unbuildable at times. People who use FreeBSD-CURRENT are expected to be able to analyze, debug, and report problems.
[[stable]]
=== What is the FreeBSD-STABLE concept?
_FreeBSD-STABLE_ is the development branch from which major releases are made. Changes go into this branch at a slower pace and with the general assumption that they have first been tested in FreeBSD-CURRENT. However, at any given time, the sources for FreeBSD-STABLE may or may not be suitable for general use, as it may uncover bugs and corner cases that were not yet found in FreeBSD-CURRENT. Users who do not have the resources to perform testing should instead run the most recent release of FreeBSD. _FreeBSD-CURRENT_, on the other hand, has been one unbroken line since 2.0 was released.
For more detailed information on branches see "link:{releng}#rel-branch[FreeBSD Release Engineering: Creating the Release Branch]", the status of the branches and the upcoming release schedule can be found on the https://www.FreeBSD.org/releng[Release Engineering Information] page.
Version https://download.FreeBSD.org/ftp/releases/amd64/amd64/{rel121-current}-RELEASE/[{rel121-current}] is the latest release from the {rel-stable} branch; it was released in {rel121-current-date}. Version https://download.FreeBSD.org/ftp/releases/amd64/amd64/{rel113-current}-RELEASE/[{rel113-current}] is the latest release from the {rel2-stable} branch; it was released in {rel113-current-date}.
[[release-freq]]
=== When are FreeBSD releases made?
The {re} releases a new major version of FreeBSD about every 18 months and a new minor version about every 8 months, on average. Release dates are announced well in advance, so that the people working on the system know when their projects need to be finished and tested. A testing period precedes each release, to ensure that the addition of new features does not compromise the stability of the release. Many users regard this caution as one of the best things about FreeBSD, even though waiting for all the latest goodies to reach _-STABLE_ can be a little frustrating.
More information on the release engineering process (including a schedule of upcoming releases) can be found on the https://www.FreeBSD.org/releng/[release engineering] pages on the FreeBSD Web site.
For people who need or want a little more excitement, binary snapshots are made weekly as discussed above.
[[snapshot-freq]]
=== When are FreeBSD snapshots made?
FreeBSD link:https://www.FreeBSD.org/snapshots/[snapshot] releases are made based on the current state of the _-CURRENT_ and _-STABLE_ branches. The goals behind each snapshot release are:
* To test the latest version of the installation software.
* To give people who would like to run _-CURRENT_ or _-STABLE_ but who do not have the time or bandwidth to follow it on a day-to-day basis an easy way of bootstrapping it onto their systems.
* To preserve a fixed reference point for the code in question, just in case we break something really badly later. (Although Subversion normally prevents anything horrible like this happening.)
* To ensure that all new features and fixes in need of testing have the greatest possible number of potential testers.
No claims are made that any _-CURRENT_ snapshot can be considered "production quality" for any purpose. If a stable and fully tested system is needed, stick to full releases.
Snapshot releases are directly available from link:https://www.FreeBSD.org/snapshots/[snapshot].
Official snapshots are generated on a regular basis for all actively developed branches.
[[responsible]]
=== Who is responsible for FreeBSD?
The key decisions concerning the FreeBSD project, such as the overall direction of the project and who is allowed to add code to the source tree, are made by a link:https://www.FreeBSD.org/administration#t-core[core team] of 9 people. There is a much larger team of more than 350 link:{contributors}#staff-committers[committers] who are authorized to make changes directly to the FreeBSD source tree.
However, most non-trivial changes are discussed in advance in the <<mailing,mailing lists>>, and there are no restrictions on who may take part in the discussion.
[[where-get]]
=== Where can I get FreeBSD?
Every significant release of FreeBSD is available via anonymous FTP from the https://download.FreeBSD.org/ftp/releases/[FreeBSD FTP site]:
* The latest {rel-stable} release, {rel121-current}-RELEASE can be found in the https://download.FreeBSD.org/ftp/releases/amd64/amd64/{rel121-current}-RELEASE/[{rel121-current}-RELEASE directory].
* link:https://www.FreeBSD.org/snapshots/[Snapshot] releases are made monthly for the <<current,-CURRENT>> and <<stable,-STABLE>> branch, these being of service purely to bleeding-edge testers and developers.
* The latest {rel2-stable} release, {rel113-current}-RELEASE can be found in the https://download.FreeBSD.org/ftp/releases/amd64/amd64/{rel113-current}-RELEASE/[{rel113-current}-RELEASE directory].
Information about obtaining FreeBSD on CD, DVD, and other media can be found in link:{handbook}#mirrors/[the Handbook].
[[access-pr]]
=== How do I access the Problem Report database?
The Problem Report database of all user change requests may be queried by using our web-based PR https://bugs.FreeBSD.org/search/[query] interface.
The link:https://www.FreeBSD.org/support/bugreports[web-based problem report submission interface] can be used to submit problem reports through a web browser.
Before submitting a problem report, read link:{problem-reports}[Writing FreeBSD Problem Reports], an article on how to write good problem reports.
[[support]]
== Documentation and Support
[[books]]
=== What good books are there about FreeBSD?
The project produces a wide range of documentation, available online from this link: https://www.FreeBSD.org/docs/[https://www.FreeBSD.org/docs/].
[[doc-formats]]
=== Is the documentation available in other formats, such as plain text (ASCII), or PDF?
Yes. The documentation is available in a number of different formats and compression schemes on the FreeBSD FTP site, in the https://download.freebsd.org/ftp/doc/[/ftp/doc/] directory.
The documentation is categorized in a number of different ways. These include:
* The document's name, such as `faq`, or `handbook`.
* The document's language and encoding. These are based on the locale names found under [.filename]#/usr/share/locale# on a FreeBSD system. The current languages and encodings are as follows:
+
[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Name
| Meaning
|`en_US.ISO8859-1`
|English (United States)
|`bn_BD.ISO10646-1`
|Bengali or Bangla (Bangladesh)
|`da_DK.ISO8859-1`
|Danish (Denmark)
|`de_DE.ISO8859-1`
|German (Germany)
|`el_GR.ISO8859-7`
|Greek (Greece)
|`es_ES.ISO8859-1`
|Spanish (Spain)
|`fr_FR.ISO8859-1`
|French (France)
|`hu_HU.ISO8859-2`
|Hungarian (Hungary)
|`it_IT.ISO8859-15`
|Italian (Italy)
|`ja_JP.eucJP`
|Japanese (Japan, EUC encoding)
|`ko_KR.UTF-8`
|Korean (Korea, UTF-8 encoding)
|`mn_MN.UTF-8`
|Mongolian (Mongolia, UTF-8 encoding)
|`nl_NL.ISO8859-1`
|Dutch (Netherlands)
|`pl_PL.ISO8859-2`
|Polish (Poland)
|`pt_BR.ISO8859-1`
|Portuguese (Brazil)
|`ru_RU.KOI8-R`
|Russian (Russia, KOI8-R encoding)
|`tr_TR.ISO8859-9`
|Turkish (Turkey)
|`zh_CN.UTF-8`
|Simplified Chinese (China, UTF-8 encoding)
|`zh_TW.UTF-8`
|Traditional Chinese (Taiwan, UTF-8 encoding)
|===
+
[NOTE]
====
Some documents may not be available in all languages.
====
* The document's format. We produce the documentation in a number of different output formats. Each format has its own advantages and disadvantages. Some formats are better suited for online reading, while others are meant to be aesthetically pleasing when printed on paper. Having the documentation available in any of these formats ensures that our readers will be able to read the parts they are interested in, either on their monitor, or on paper after printing the documents. The currently available formats are:
+
[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Format
| Meaning
|`html-split`
|A collection of small, linked, HTML files.
|`html`
|One large HTML file containing the entire document
|`pdf`
|Adobe's Portable Document Format
|`txt`
|Plain text
|===
* The compression and packaging scheme.
.. Where the format is `html-split`, the files are bundled up using man:tar[1]. The resulting [.filename]#.tar# is then compressed using the compression schemes detailed in the next point.
.. All the other formats generate one file. For example, [.filename]#article.pdf#, [.filename]#book.html#, and so on.
+
These files are then compressed using either the `zip` or `bz2` compression schemes. man:tar[1] can be used to uncompress these files.
+
So the PDF version of the Handbook, compressed using `bzip2` will be stored in a file called [.filename]#book.pdf.bz2# in the [.filename]#handbook/# directory.
After choosing the format and compression mechanism, download the compressed files, uncompress them, and then copy the appropriate documents into place.
For example, the split HTML version of the FAQ, compressed using man:bzip2[1], can be found in [.filename]#doc/en_US.ISO8859-1/books/faq/book.html-split.tar.bz2# To download and uncompress that file, type:
[source,shell]
....
# fetch https://download.freebsd.org/ftp/doc/en_US.ISO8859-1/books/faq/book.html-split.tar.bz2
# tar xvf book.html-split.tar.bz2
....
If the file is compressed, tar will automatically detect the appropriate format and decompress it correctly, resulting in a collection of [.filename]#.html# files. The main one is called [.filename]#index.html#, which will contain the table of contents, introductory material, and links to the other parts of the document.
[[mailing]]
=== Where do I find info on the FreeBSD mailing lists? What FreeBSD news groups are available?
Refer to the link:{handbook}#eresources-mail[Handbook entry on mailing-lists] and the link:{handbook}#eresources-news/[Handbook entry on newsgroups].
[[irc]]
=== Are there FreeBSD IRC (Internet Relay Chat) channels?
Yes, most major IRC networks host a FreeBSD chat channel:
* Channel `#FreeBSDhelp` on http://www.efnet.org/index.php[EFNet] is a channel dedicated to helping FreeBSD users.
* Channel `#FreeBSD` on http://freenode.net/[Freenode] is a general help channel with many users at any time. The conversations have been known to run off-topic for a while, but priority is given to users with FreeBSD questions. Other users can help with the basics, referring to the Handbook whenever possible and providing links for learning more about a particular topic. This is primarily an English speaking channel, though it does have users from all over the world. Non-native English speakers should try to ask the question in English first and then relocate to `##freebsd-lang` as appropriate.
* Channel `#FreeBSD` on http://www.dal.net/[DALNET] is available at `irc.dal.net` in the US and `irc.eu.dal.net` in Europe.
* Channel `#FreeBSD` on http://www.undernet.org/[UNDERNET] is available at `us.undernet.org` in the US and `eu.undernet.org` in Europe. Since it is a help channel, be prepared to read the documents you are referred to.
* Channel `#FreeBSD` on http://www.rusnet.org.ru/[RUSNET] is a Russian language channel dedicated to helping FreeBSD users. This is also a good place for non-technical discussions.
* Channel `#bsdchat` on http://freenode.net/[Freenode] is a Traditional Chinese (UTF-8 encoding) language channel dedicated to helping FreeBSD users. This is also a good place for non-technical discussions.
The FreeBSD wiki has a https://wiki.freebsd.org/IRC/Channels[good list] of IRC channels.
Each of these channels is distinct and not connected to the others. Since their chat styles differ, try each one to find the channel best suited to your chat style.
[[forums]]
=== Are there any web based forums to discuss FreeBSD?
The official FreeBSD forums are located at https://forums.FreeBSD.org/[https://forums.FreeBSD.org/].
[[training]]
=== Where can I get commercial FreeBSD training and support?
http://www.ixsystems.com[iXsystems, Inc.], parent company of the http://www.freebsdmall.com/[FreeBSD Mall], provides commercial FreeBSD and TrueOS software http://www.ixsystems.com/support[support], in addition to FreeBSD development and tuning solutions.
BSD Certification Group, Inc. provides system administration certifications for DragonFly BSD, FreeBSD, NetBSD, and OpenBSD. Refer to http://www.BSDCertification.org[their site] for more information.
Any other organizations providing training and support should contact the Project to be listed here.
[[install]]
== Installation
[[which-architecture]]
=== Which platform should I download? I have a 64 bit capable Intel(R) CPU, but I only see amd64.
amd64 is the term FreeBSD uses for 64-bit compatible x86 architectures (also known as "x86-64" or "x64"). Most modern computers should use amd64. Older hardware should use i386. When installing on a non-x86-compatible architecture, select the platform which best matches the hardware.
[[floppy-download]]
=== Which file do I download to get FreeBSD?
On the https://www.freebsd.org/where/[Getting FreeBSD] page, select `[iso]` next to the architecture that matches the hardware.
Any of the following can be used:
[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| File
| Description
|[.filename]#disc1.iso#
|Contains enough to install FreeBSD and a minimal set of packages.
|[.filename]#dvd1.iso#
|Similar to [.filename]#disc1.iso# but with additional packages.
|[.filename]#memstick.img#
|A bootable image sufficient for writing to a USB stick.
|[.filename]#bootonly.iso#
|A minimal image that requires network access during installation to completely install FreeBSD.
|===
Full instructions on this procedure and a little bit more about installation issues in general can be found in the link:{handbook}#bsdinstall/[Handbook entry on installing FreeBSD].
[[floppy-image-too-large]]
=== What do I do if the install image does not boot?
This can be caused by not downloading the image in _binary_ mode when using FTP.
Some FTP clients default their transfer mode to _ascii_ and attempt to change any end-of-line characters received to match the conventions used by the client's system. This will almost invariably corrupt the boot image. Check the SHA-256 checksum of the downloaded boot image: if it is not _exactly_ the same as the one published on the server, then the download process is suspect.
When using a command line FTP client, type _binary_ at the FTP command prompt after getting connected to the server and before starting the download of the image.
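As a sketch of the checksum check mentioned above (the image filename below is only an example; use the one matching the downloaded release), man:sha256[1] can compute the checksum locally:
[source,shell]
....
# sha256 FreeBSD-13.2-RELEASE-amd64-memstick.img
....
Compare the printed value against the corresponding entry in the [.filename]#CHECKSUM.SHA256*# file published alongside the image on the download site.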
[[install-instructions-location]]
=== Where are the instructions for installing FreeBSD?
Installation instructions can be found at link:{handbook}#bsdinstall/[Handbook entry on installing FreeBSD].
[[custom-boot-floppy]]
=== How can I make my own custom release or install disk?
Customized FreeBSD installation media can be created by building a custom release. Follow the instructions in the link:{releng}[Release Engineering] article.
[[windows-coexist]]
=== Can Windows(R) co-exist with FreeBSD? (x86-specific)
If Windows(R) is installed first, then yes. The FreeBSD boot manager will then offer a choice between booting Windows(R) and FreeBSD. If Windows(R) is installed afterwards, it will overwrite the boot manager. If that happens, see the next section.
[[bootmanager-restore]]
=== Another operating system destroyed my Boot Manager. How do I get it back? (x86-specific)
This depends upon the boot manager. The FreeBSD boot selection menu can be reinstalled using man:boot0cfg[8]. For example, to restore the boot menu onto the disk _ada0_:
[source,shell]
....
# boot0cfg -B ada0
....
The non-interactive MBR bootloader can be installed using man:gpart[8]:
[source,shell]
....
# gpart bootcode -b /boot/mbr ada0
....
For more complex situations, including GPT disks, see man:gpart[8].
[[need-complete-sources]]
=== Do I need to install the source?
In general, no. There is nothing in the base system which requires the presence of the source to operate. Some ports, like package:sysutils/lsof[], will not build unless the source is installed. In particular, if the port builds a kernel module or directly operates on kernel structures, the source must be installed.
[[need-kernel]]
=== Do I need to build a kernel?
Usually not. The supplied `GENERIC` kernel contains the drivers an ordinary computer will need. man:freebsd-update[8], the FreeBSD binary upgrade tool, cannot upgrade custom kernels, another reason to stick with the `GENERIC` kernel when possible. For computers with very limited RAM, such as embedded systems, it may be worthwhile to build a smaller custom kernel containing just the required drivers.
[[password-encryption]]
=== Should I use DES, Blowfish, or MD5 passwords and how do I specify which form my users receive?
FreeBSD uses _SHA512_ by default. DES passwords are still available for backwards compatibility with operating systems that still use the less secure password format. FreeBSD also supports the Blowfish and MD5 password formats. Which password format to use for new passwords is controlled by the `passwd_format` login capability in [.filename]#/etc/login.conf#, which takes values of `des`, `blf` (if these are available) or `md5`. See the man:login.conf[5] manual page for more information about login capabilities.
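For example (a minimal sketch; adjust to the local configuration), to switch new passwords to the Blowfish format, change the `passwd_format` entry in the `default` class of [.filename]#/etc/login.conf#:
[.programlisting]
....
	:passwd_format=blf:\
....
Then rebuild the login capability database:
[source,shell]
....
# cap_mkdb /etc/login.conf
....
Existing passwords keep their old format until they are next changed with man:passwd[1].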
[[ffs-limits]]
=== What are the limits for FFS file systems?
For FFS file systems, the largest file system is practically limited by the amount of memory required to man:fsck[8] the file system. man:fsck[8] requires one bit per fragment, which with the default fragment size of 4 KB equates to 32 MB of memory per TB of disk. This does mean that on architectures which limit userland processes to 2 GB (e.g., i386(TM)), the maximum man:fsck[8]'able filesystem is ~60 TB.
If there were no man:fsck[8] memory limit, the maximum filesystem size would be 2^64 (blocks) * 32 KB => 16 exa-blocks * 32 KB => 512 zettabytes.
The maximum size of a single FFS file is approximately 2 PB with the default block size of 32 KB. Each 32 KB block can point to 4096 blocks. With triple indirect blocks, the calculation is 32 KB * 12 + 32 KB * 4096 + 32 KB * 4096^2 + 32 KB * 4096^3. Increasing the block size to 64 KB will increase the max file size by a factor of 16.
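As a rough check of the arithmetic above (a sketch only, not how the kernel computes these limits), man:bc[1] reproduces both figures: the first command computes the man:fsck[8] memory needed per TB of disk with 4 KB fragments (32 MB), and the second the approximate maximum file size in bytes (just over 2 PiB):
[source,shell]
....
% echo '2^40 / (4 * 1024) / 8' | bc
33554432
% echo '(12 + 4096 + 4096^2 + 4096^3) * 32 * 1024' | bc
2252349704110080
....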
[[archsw-readin-failed-error]]
=== Why do I get an error message, readin failed after compiling and booting a new kernel?
The world and kernel are out of sync. This is not supported. Be sure to use `make buildworld` and `make buildkernel` to update the kernel.
Boot the system by specifying the kernel directly at the second stage, pressing any key when the `|` shows up before loader is started.
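In outline, the usual sequence to bring the world and kernel back into sync looks like this (a sketch; see the Handbook chapter on updating FreeBSD from source and any release-specific notes in [.filename]#/usr/src/UPDATING# for the full procedure):
[source,shell]
....
# cd /usr/src
# make buildworld
# make buildkernel
# make installkernel
# shutdown -r now
....
After the reboot, run `make installworld` from [.filename]#/usr/src# to install the matching userland.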
[[general-configuration-tool]]
=== Is there a tool to perform post-installation configuration tasks?
Yes. man:bsdconfig[8] provides a nice interface to configure FreeBSD post-installation.
[[hardware]]
== Hardware Compatibility
[[compatibility-general]]
=== General
[[which-hardware-to-get]]
==== I want to get a piece of hardware for my FreeBSD system. Which model/brand/type is best?
This is discussed continually on the FreeBSD mailing lists but is to be expected since hardware changes so quickly. Read through the Hardware Notes for FreeBSD link:{u-rel121-hardware}[{rel121-current}] or link:{u-rel113-hardware}[{rel113-current}] and search the mailing list https://www.FreeBSD.org/search/#mailinglists[archives] before asking about the latest and greatest hardware. Chances are a discussion about that type of hardware took place just last week.
Before purchasing a laptop, check the archives for {freebsd-questions}, or possibly a specific mailing list for a particular hardware type.
[[memory-upper-limitation]]
==== What are the limits for memory?
FreeBSD as an operating system generally supports as much physical memory (RAM) as the platform it is running on does. Keep in mind that different platforms have different limits for memory; for example i386(TM) without PAE supports at most 4 GB of memory (and usually less than that because of PCI address space) and i386(TM) with PAE supports at most 64 GB memory. As of FreeBSD 10, AMD64 platforms support up to 4 TB of physical memory.
[[memory-i386-over-4gb]]
==== Why does FreeBSD report less than 4 GB memory when installed on an i386(TM) machine?
The total address space on i386(TM) machines is 32-bit, meaning that at most 4 GB of memory is addressable (can be accessed). Furthermore, some addresses in this range are reserved by hardware for different purposes, for example for using and controlling PCI devices, for accessing video memory, and so on. Therefore, the total amount of memory usable by the operating system for its kernel and applications is limited to significantly less than 4 GB. Usually, 3.2 GB to 3.7 GB is the maximum usable physical memory in this configuration.
To access more than 3.2 GB to 3.7 GB of installed memory (meaning up to 4 GB but also more than 4 GB), a special tweak called PAE must be used. PAE stands for Physical Address Extension and is a way for 32-bit x86 CPUs to address more than 4 GB of memory. It remaps the memory that would otherwise be overlaid by address reservations for hardware devices above the 4 GB range and uses it as additional physical memory (see man:pae[4]). Using PAE has some drawbacks; this mode of memory access is a little bit slower than the normal (without PAE) mode and loadable modules (see man:kld[4]) are not supported. This means all drivers must be compiled into the kernel.
The most common way to enable PAE is to build a new kernel with the special ready-provided kernel configuration file called [.filename]#PAE#, which is already configured to build a safe kernel. Note that some entries in this kernel configuration file are too conservative and some drivers marked as unready to be used with PAE are actually usable. A rule of thumb is that if the driver is usable on 64-bit architectures (like AMD64), it is also usable with PAE. When creating a custom kernel configuration file, PAE can be enabled by adding the following line:
[.programlisting]
....
options PAE
....
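For example (a sketch, assuming the source tree is installed in [.filename]#/usr/src#), the ready-provided [.filename]#PAE# configuration can be built and installed like any other kernel configuration:
[source,shell]
....
# cd /usr/src
# make buildkernel KERNCONF=PAE
# make installkernel KERNCONF=PAE
....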
PAE is not much used nowadays because most new x86 hardware also supports running in 64-bit mode, known as AMD64 or Intel(R) 64. It has a much larger address space and does not need such tweaks. FreeBSD supports AMD64 and it is recommended that this version of FreeBSD be used instead of the i386(TM) version if 4 GB or more memory is required.
[[compatibility-processors]]
=== Architectures and Processors
[[architectures]]
==== Does FreeBSD support architectures other than the x86?
Yes. FreeBSD divides support into multiple tiers. Tier 1 architectures, such as i386 or amd64, are fully supported. Tiers 2 and 3 are supported on a best-effort basis. A full explanation of the tier system is available in the link:{committers-guide}#archs/[Committer's Guide.]
A complete list of supported architectures can be found on the https://www.FreeBSD.org/platforms/[platforms page.]
[[smp-support]]
==== Does FreeBSD support Symmetric Multiprocessing (SMP)?
FreeBSD supports symmetric multiprocessing (SMP) on all non-embedded platforms (e.g., i386, amd64). SMP is also supported in arm and MIPS kernels, although some CPUs may not support this. FreeBSD's SMP implementation uses fine-grained locking, and performance scales nearly linearly with the number of CPUs.
man:smp[4] has more details.
[[microcode]]
==== What is microcode? How do I install Intel(R) CPU microcode updates?
Microcode is a method of programmatically implementing hardware-level instructions. This allows CPU bugs to be fixed without replacing the CPU itself.
Install package:sysutils/devcpu-data[], then add:
[.programlisting]
....
microcode_update_enable="YES"
....
to [.filename]#/etc/rc.conf#.
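To apply the update without rebooting, the port's rc script can be run directly. The service name below is assumed from the `rc.conf` variable above; check the script actually installed under [.filename]#/usr/local/etc/rc.d/# if it differs:
[source,shell]
....
# service microcode_update start
....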
[[compatibility-peripherals]]
=== Peripherals
[[supported-peripherals]]
==== What kind of peripherals does FreeBSD support?
See the complete list in the Hardware Notes for FreeBSD link:{u-rel121-hardware}[{rel121-current}] or link:{u-rel113-hardware}[{rel113-current}].
[[compatibility-kbd-mice]]
=== Keyboards and Mice
[[moused]]
==== Is it possible to use a mouse outside the X Window system?
The default console driver, man:vt[4], provides the ability to use a mouse pointer in text consoles to cut & paste text. Run the mouse daemon, man:moused[8], and turn on the mouse pointer in the virtual console:
[source,shell]
....
# moused -p /dev/xxxx -t yyyy
# vidcontrol -m on
....
Where _xxxx_ is the mouse device name and _yyyy_ is a protocol type for the mouse. The mouse daemon can automatically determine the protocol type of most mice, except old serial mice. Specify the `auto` protocol to invoke automatic detection. If automatic detection does not work, see the man:moused[8] manual page for a list of supported protocol types.
For a PS/2 mouse, add `moused_enable="YES"` to [.filename]#/etc/rc.conf# to start the mouse daemon at boot time. Additionally, to use the mouse daemon on all virtual terminals instead of just the console, add `allscreens_flags="-m on"` to [.filename]#/etc/rc.conf#.
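Putting both settings together, a typical PS/2 mouse setup in [.filename]#/etc/rc.conf# looks like this:
[.programlisting]
....
moused_enable="YES"
allscreens_flags="-m on"
....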
When the mouse daemon is running, access to the mouse must be coordinated between the mouse daemon and other programs such as X Windows. Refer to the FAQ entry <<x-and-moused,Why does my mouse not work with X?>> for more details on this issue.
[[text-mode-cut-paste]]
==== How do I cut and paste text with a mouse in the text console?
It is not possible to remove data using the mouse. However, it is possible to copy and paste. Once the mouse daemon is running as described in the <<moused,previous question>>, hold down button 1 (left button) and move the mouse to select a region of text. Then, press button 2 (middle button) to paste it at the text cursor. Pressing button 3 (right button) will "extend" the selected region of text.
If the mouse does not have a middle button, it is possible to emulate one or remap buttons using mouse daemon options. See the man:moused[8] manual page for details.
[[mouse-wheel-buttons]]
==== My mouse has a fancy wheel and buttons. Can I use them in FreeBSD?
The answer is, unfortunately, "It depends". Mice with additional features require a specialized driver in most cases. Unless the mouse device driver or the user program has specific support for the mouse, it will act just like a standard two- or three-button mouse.
For the possible usage of wheels in the X Window environment, refer to <<x-and-wheel,that section>>.
[[keyboard-delete-key]]
==== How do I use my delete key in sh and csh?
For the Bourne Shell, add the following line to [.filename]#~/.shrc#. See man:sh[1] and man:editrc[5].
[.programlisting]
....
bind ^[[3~ ed-delete-next-char # for xterm
....
For the C Shell, add the following line to [.filename]#~/.cshrc#. See man:csh[1].
[.programlisting]
....
bindkey ^[[3~ delete-char # for xterm
....
[[compatibility-other]]
=== Other Hardware
[[es1370-silent-pcm]]
==== Workarounds for no sound from my man:pcm[4] sound card?
Some sound cards set their output volume to 0 at every boot. Run the following command every time the machine boots:
[source,shell]
....
# mixer pcm 100 vol 100 cd 100
....
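One way to run the command automatically at boot (a sketch; other approaches, such as a dedicated rc.d script, work too) is to append it to [.filename]#/etc/rc.local#, which is executed late in the boot process if it exists:
[.programlisting]
....
mixer pcm 100 vol 100 cd 100
....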
[[power-management-support]]
==== Does FreeBSD support power management on my laptop?
FreeBSD supports the ACPI features found in modern hardware. Further information can be found in man:acpi[4].
[[troubleshoot]]
== Troubleshooting
[[pae]]
=== Why is FreeBSD finding the wrong amount of memory on i386(TM) hardware?
The most likely reason is the difference between physical memory addresses and virtual addresses.
The convention for most PC hardware is to use the memory area between 3.5 GB and 4 GB for a special purpose (usually for PCI). This address space is used to access PCI hardware. As a result, real physical memory cannot be accessed in that address range.
What happens to the memory that should appear in that location is hardware dependent. Unfortunately, some hardware does nothing and the ability to use that last 500 MB of RAM is entirely lost.
Luckily, most hardware remaps the memory to a higher location so that it can still be used. However, this can cause some confusion when watching the boot messages.
On a 32-bit version of FreeBSD, the memory appears lost, since it will be remapped above 4 GB, which a 32-bit kernel is unable to access. In this case, the solution is to build a PAE enabled kernel. See the entry on memory limits for more information.
On a 64-bit version of FreeBSD, or when running a PAE-enabled kernel, FreeBSD will correctly detect and remap the memory so it is usable. During boot, however, it may seem as if FreeBSD is detecting more memory than the system really has, due to the described remapping. This is normal and the available memory will be corrected as the boot process completes.
[[signal11]]
=== Why do my programs occasionally die with Signal 11 errors?
Signal 11 errors are caused when a process has attempted to access memory which the operating system has not granted it access to. If something like this is happening at seemingly random intervals, start investigating the cause.
These problems can usually be attributed to either:
. If the problem is occurring only in a specific custom application, it is probably a bug in the code.
. If it is a problem with part of the base FreeBSD system, it may also be buggy code, but more often than not these problems are found and fixed long before general FAQ readers get to use these bits of code (that is what -CURRENT is for).
It is probably not a FreeBSD bug if the problem occurs while compiling a program but the point at which the compiler fails changes each time.
For example, if `make buildworld` fails while trying to compile [.filename]#ls.c# into [.filename]#ls.o# and, when run again, it fails in the same place, this is a broken build. Try updating source and try again. If the compile fails elsewhere, it is almost certainly due to hardware.
In the first case, use a debugger such as man:gdb[1] to find the point in the program which is attempting to access a bogus address and fix it.
In the second case, verify which piece of hardware is at fault.
Common causes of this include:
. The hard disks might be overheating: Check that the fans are still working, as the disk and other hardware might be overheating.
. The processor is overheating: this might be because the processor has been overclocked, or the fan on the processor might have died. In either case, ensure that the hardware is running at what it is specified to run at, at least while trying to solve this problem. If it is not, clock it back to the default settings.
+
Regarding overclocking, it is far cheaper to have a slow system than a fried system that needs replacing! Also the community is not sympathetic to problems on overclocked systems.
. Dodgy memory: if multiple memory SIMMS/DIMMS are installed, pull them all out and try running the machine with each SIMM or DIMM individually to narrow the problem down to either the problematic DIMM/SIMM or perhaps even a combination.
. Over-optimistic motherboard settings: the BIOS settings, and some motherboard jumpers, provide options to set various timings. The defaults are often sufficient, but sometimes setting the wait states on RAM too low, or setting the "RAM Speed: Turbo" option, will cause strange behavior. One option is to reset the BIOS to its defaults, after noting the current settings first.
. Unclean or insufficient power to the motherboard. Remove any unused I/O boards, hard disks, or CD-ROMs, or disconnect the power cable from them, to see if the power supply can manage a smaller load. Or try another power supply, preferably one with a little more power. For instance, if the current power supply is rated at 250 Watts, try one rated at 300 Watts.
Read the section on <<signal11,Signal 11>> for a further explanation and a discussion on how memory testing software or hardware can still pass faulty memory. There is an extensive FAQ on this at http://www.bitwizard.nl/sig11/[the SIG11 problem FAQ].
Finally, if none of this has helped, it is possibly a bug in FreeBSD. Follow <<access-pr,these instructions>> to send a problem report.
[[trap-12-panic]]
=== My system crashes with either Fatal trap 12: page fault in kernel mode, or panic:, and spits out a bunch of information. What should I do?
The FreeBSD developers are interested in these errors, but need more information than just the error message. Copy the full crash message. Then consult the FAQ section on <<kernel-panic-troubleshooting,kernel panics>>, build a debugging kernel, and get a backtrace. This might sound difficult, but does not require any programming skills. Just follow the instructions.
[[proc-table-full]]
=== What is the meaning of the error maxproc limit exceeded by uid %i, please see tuning(7) and login.conf(5)?
The FreeBSD kernel will only allow a certain number of processes to exist at one time. The number is based on the `kern.maxusers` man:sysctl[8] variable. `kern.maxusers` also affects various other in-kernel limits, such as network buffers. If the machine is heavily loaded, increase `kern.maxusers`. This will increase these other system limits in addition to the maximum number of processes.
To adjust the `kern.maxusers` value, see the link:{handbook}#kern-maxfiles[File/Process Limits] section of the Handbook. While that section refers to open files, the same limits apply to processes.
If the machine is lightly loaded but running a very large number of processes, adjust the `kern.maxproc` tunable by defining it in [.filename]#/boot/loader.conf#. The tunable will not get adjusted until the system is rebooted. For more information about tuning tunables, see man:loader.conf[5]. If these processes are being run by a single user, adjust `kern.maxprocperuid` to be one less than the new `kern.maxproc` value. It must be at least one less because one system program, man:init[8], must always be running.
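For example (the numbers are only illustrative; pick values appropriate for the workload), the tunables described above could be set in [.filename]#/boot/loader.conf# like this:
[.programlisting]
....
kern.maxproc="10000"
kern.maxprocperuid="9999"
....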
[[remote-fullscreen]]
=== Why do full screen applications on remote machines misbehave?
The remote machine may be setting the terminal type to something other than `xterm`, which is required by the FreeBSD console. Alternatively the kernel may have the wrong values for the width and height of the terminal.
Check that the value of the `TERM` environment variable is `xterm`. If the remote machine does not support that, try `vt100`.
Run `stty -a` to check what the kernel thinks the terminal dimensions are. If they are incorrect, they can be changed by running `stty rows _RR_ cols _CC_`.
Alternatively, if the client machine has package:x11/xterm[] installed, then running `resize` will query the terminal for the correct dimensions and set them.
[[connection-delay]]
=== Why does it take so long to connect to my computer via ssh or telnet?
The symptom: there is a long delay between the time the TCP connection is established and the time when the client software asks for a password (or, in man:telnet[1]'s case, when a login prompt appears).
The problem: more likely than not, the delay is caused by the server software trying to resolve the client's IP address into a hostname. Many servers, including the Telnet and SSH servers that come with FreeBSD, do this to store the hostname in a log file for future reference by the administrator.
The remedy: if the problem occurs whenever connecting the client computer to any server, the problem is with the client. If the problem only occurs when someone connects to the server computer, the problem is with the server.
If the problem is with the client, the only remedy is to fix the DNS so the server can resolve it. If this is on a local network, consider it a server problem and keep reading. If this is on the Internet, contact your ISP.
If the problem is with the server on a local network, configure the server to resolve address-to-hostname queries for the local address range. See man:hosts[5] and man:named[8] for more information. If this is on the Internet, the problem may be that the local server's resolver is not functioning correctly. To check, try to look up another host such as `www.yahoo.com`. If it does not work, that is the problem.
Following a fresh install of FreeBSD, it is also possible that domain and name server information is missing from [.filename]#/etc/resolv.conf#. This will often cause a delay in SSH, as the option `UseDNS` is set to `yes` by default in [.filename]#/etc/ssh/sshd_config#. If this is causing the problem, either fill in the missing information in [.filename]#/etc/resolv.conf# or set `UseDNS` to `no` in [.filename]#sshd_config# as a temporary workaround.
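As a sketch of the workaround described above, set the option in [.filename]#/etc/ssh/sshd_config#:
[.programlisting]
....
UseDNS no
....
Then restart the daemon so the change takes effect:
[source,shell]
....
# service sshd restart
....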
[[file-table-full]]
=== Why does file: table is full show up repeatedly in man:dmesg[8]?
This error message indicates that the number of available file descriptors has been exhausted on the system. Refer to the link:{handbook}#kern-maxfiles[kern.maxfiles] section of the link:{handbook}#configtuning-kernel-limits/[Tuning Kernel Limits] chapter of the Handbook for a discussion and solution.
[[computer-clock-skew]]
=== Why does the clock on my computer keep incorrect time?
The computer has two or more clocks, and FreeBSD has chosen to use the wrong one.
Run man:dmesg[8], and check for lines that contain `Timecounter`. The one with the highest quality value is the one FreeBSD chose.
[source,shell]
....
# dmesg | grep Timecounter
Timecounter "i8254" frequency 1193182 Hz quality 0
Timecounter "ACPI-fast" frequency 3579545 Hz quality 1000
Timecounter "TSC" frequency 2998570050 Hz quality 800
Timecounters tick every 1.000 msec
....
Confirm this by checking the `kern.timecounter.hardware` man:sysctl[3].
[source,shell]
....
# sysctl kern.timecounter.hardware
kern.timecounter.hardware: ACPI-fast
....
It may be a broken ACPI timer. The simplest solution is to disable the ACPI timer in [.filename]#/boot/loader.conf#:
[.programlisting]
....
debug.acpi.disabled="timer"
....
Or the BIOS may modify the TSC clock, perhaps to change the speed of the processor when running from batteries or when going into a power saving mode. FreeBSD is unaware of these adjustments and appears to gain or lose time.
In this example, the `i8254` clock is also available, and can be selected by writing its name to the `kern.timecounter.hardware` man:sysctl[3].
[source,shell]
....
# sysctl kern.timecounter.hardware=i8254
kern.timecounter.hardware: TSC -> i8254
....
The computer should now start keeping more accurate time.
To have this change automatically run at boot time, add the following line to [.filename]#/etc/sysctl.conf#:
[.programlisting]
....
kern.timecounter.hardware=i8254
....
[[indefinite-wait-buffer]]
=== What does the error swap_pager: indefinite wait buffer: mean?
This means that a process is trying to page memory from disk, and the page attempt has hung trying to access the disk for more than 20 seconds. It might be caused by bad blocks on the disk drive, disk wiring, cables, or any other disk I/O-related hardware. If the drive itself is bad, disk errors will appear in [.filename]#/var/log/messages# and in the output of `dmesg`. Otherwise, check the cables and connections.
[[lock-order-reversal]]
=== What is a lock order reversal?
The FreeBSD kernel uses a number of resource locks to arbitrate contention for certain resources. When multiple kernel threads try to obtain multiple resource locks, there is always the potential for a deadlock, where two threads have each obtained one of the locks and block forever waiting for the other thread to release one of the other locks. This sort of locking problem can be avoided if all threads obtain the locks in the same order.
A run-time lock diagnostic system called man:witness[4], enabled in FreeBSD-CURRENT and disabled by default for stable branches and releases, detects the potential for deadlocks due to locking errors, including errors caused by obtaining multiple resource locks with a different order from different parts of the kernel. The man:witness[4] framework tries to detect this problem as it happens, and reports it by printing a message to the system console about a `lock order reversal` (often referred to also as LOR).
It is possible to get false positives, as man:witness[4] is conservative. A true positive report _does not_ mean that a system is deadlocked; instead it should be understood as a warning that a deadlock could have happened here.
[NOTE]
====
Problematic LORs tend to get fixed quickly, so check the {freebsd-current} before posting to it.
====
[[called-with-non-sleepable-locks-held]]
=== What does Called ... with the following non-sleepable locks held mean?
This means that a function that may sleep was called while a mutex (or other unsleepable) lock was held.
The reason this is an error is because mutexes are not intended to be held for long periods of time; they are supposed to only be held to maintain short periods of synchronization. This programming contract allows device drivers to use mutexes to synchronize with the rest of the kernel during interrupts. Interrupts (under FreeBSD) may not sleep. Hence it is imperative that no subsystem in the kernel block for an extended period while holding a mutex.
To catch such errors, assertions may be added to the kernel that interact with the man:witness[4] subsystem to emit a warning or fatal error (depending on the system configuration) when a potentially blocking call is made while holding a mutex.
In summary, such warnings are non-fatal, however with unfortunate timing they could cause undesirable effects ranging from a minor blip in the system's responsiveness to a complete system lockup.
For additional information about locking in FreeBSD see man:locking[9].
[[touch-not-found]]
=== Why does buildworld/installworld die with the message touch: not found?
This error does not mean that the man:touch[1] utility is missing. The error is instead probably due to the dates of the files being set sometime in the future. If the CMOS clock is set to local time, run `adjkerntz -i` to adjust the kernel clock when booting into single-user mode.
[[applications]]
== User Applications
[[user-apps]]
=== Where are all the user applications?
Refer to link:https://www.FreeBSD.org/ports/[the ports page] for info on software packages ported to FreeBSD.
Most ports should work on all supported versions of FreeBSD. Those that do not are specifically marked as such. Each time a FreeBSD release is made, a snapshot of the ports tree at the time of release is also included in the [.filename]#ports/# directory.
FreeBSD supports compressed binary packages to easily install and uninstall ports. Use man:pkg[7] to control the installation of packages.
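For example (the package name is only an example), installing and later removing a binary package looks like this:
[source,shell]
....
# pkg install tmux
# pkg delete tmux
....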
[[how-do-download-ports-tree]]
=== How do I download the Ports tree? Should I be using Git?
See crossref:handbook[ports-using-installation-methods,Installing the Ports Collection].
[[ports-4x]]
=== Why can I not build this port on my {rel2-relx} -, or {rel-relx} -STABLE machine?
If the installed FreeBSD version lags significantly behind _-CURRENT_ or _-STABLE_, update the Ports Collection using the instructions in link:{handbook}#ports-using/[Using the Ports Collection]. If the system is up-to-date, someone might have committed a change to the port which works for _-CURRENT_ but which broke the port for _-STABLE_. https://bugs.FreeBSD.org/submit/[Submit] a bug report, since the Ports Collection is supposed to work for both the _-CURRENT_ and _-STABLE_ branches.
[[make-index]]
=== I just tried to build INDEX using make index, and it failed. Why?
First, make sure that the Ports Collection is up-to-date. Errors that affect building [.filename]#INDEX# from an up-to-date copy of the Ports Collection are high-visibility and are thus almost always fixed immediately.
There are rare cases where [.filename]#INDEX# will not build due to odd cases involving `OPTIONS_SET` being set in [.filename]#make.conf#. If you suspect that this is the case, try to make [.filename]#INDEX# with those variables turned off before reporting it to {freebsd-ports}.
[[ports-update]]
=== I updated the sources, now how do I update my installed ports?
FreeBSD does not include a port upgrading tool, but it does have some tools to make the upgrade process somewhat easier. Additional tools are available to simplify port handling and are described in the link:{handbook}#ports-using/[Upgrading Ports] section in the FreeBSD Handbook.
[[ports-major-upgrade]]
=== Do I need to recompile every port each time I perform a major version update?
Yes! While a recent system will run with software compiled under an older release, things will randomly crash and fail to work once other ports are installed or updated.
When the system is upgraded, various shared libraries, loadable modules, and other parts of the system will be replaced with newer versions. Applications linked against the older versions may fail to start or, in other cases, fail to function properly.
For more information, see link:{handbook}#freebsdupdate-upgrade[the section on upgrades] in the FreeBSD Handbook.
[[ports-minor-upgrade]]
=== Do I need to recompile every port each time I perform a minor version update?
In general, no. FreeBSD developers do their utmost to guarantee binary compatibility across all releases with the same major version number. Any exceptions will be documented in the Release Notes, and advice given there should be followed.
[[minimal-sh]]
=== Why is /bin/sh so minimal? Why does FreeBSD not use bash or another shell?
Many people need to write shell scripts which will be portable across many systems. That is why POSIX(R) specifies the shell and utility commands in great detail. Most scripts are written in Bourne shell (man:sh[1]) because several important programming interfaces (man:make[1], man:system[3], man:popen[3], and analogues in higher-level scripting languages like Perl and Tcl) are specified to use the Bourne shell to interpret commands. Because the Bourne shell is so often and widely used, it is important for it to be quick to start, be deterministic in its behavior, and have a small memory footprint.
The existing implementation is our best effort at meeting as many of these requirements simultaneously as we can. To keep `/bin/sh` small, we have not provided many of the convenience features that other shells have. That is why other more featureful shells like `bash`, `scsh`, man:tcsh[1], and `zsh` are available. Compare the memory utilization of these shells by looking at the "VSZ" and "RSS" columns in a `ps -u` listing.
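For example, to see those columns for the currently running shell (the same works for any other shell's process ID):
[source,shell]
....
% ps -u -p $$
....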
[[kernelconfig]]
== Kernel Configuration
[[make-kernel]]
=== I would like to customize my kernel. Is it difficult?
Not at all! Check out the link:{handbook}#kernelconfig/[kernel config section of the Handbook].
[NOTE]
====
The new [.filename]#kernel# will be installed to the [.filename]#/boot/kernel# directory along with its modules, while the old kernel and its modules will be moved to the [.filename]#/boot/kernel.old# directory. If a mistake is made in the configuration, simply boot the previous version of the kernel.
====
[[why-kernel-big]]
=== Why is my kernel so big?
`GENERIC` kernels shipped with FreeBSD are compiled in _debug mode_. Kernels built in debug mode contain debug data in separate files that are used for debugging. FreeBSD releases prior to 11.0 store these debug files in the same directory as the kernel itself, [.filename]#/boot/kernel/#. In FreeBSD 11.0 and later the debug files are stored in [.filename]#/usr/lib/debug/boot/kernel/#. Note that there will be little or no performance loss from running a debug kernel, and it is useful to keep one around in case of a system panic.
When running low on disk space, there are different options to reduce the size of [.filename]#/boot/kernel/# and [.filename]#/usr/lib/debug/#.
To not install the symbol files, make sure the following line exists in [.filename]#/etc/src.conf#:
[.programlisting]
....
WITHOUT_KERNEL_SYMBOLS=yes
....
For more information see man:src.conf[5].
If you want to avoid building debug files altogether, make sure that both of the following are true:
* This line does not exist in the kernel configuration file:
+
[.programlisting]
....
makeoptions DEBUG=-g
....
* man:config[8] is not run with `-g`.
Either of these settings (the `makeoptions` line or the `-g` flag) will cause the kernel to be built in debug mode.
To build and install only the specified modules, list them in [.filename]#/etc/make.conf#:
[.programlisting]
....
MODULES_OVERRIDE= accf_http ipfw
....
Replace _accf_http ipfw_ with a list of needed modules. Only the listed modules will be built. This reduces the size of the kernel directory and decreases the amount of time needed to build the kernel. For more information, read [.filename]#/usr/share/examples/etc/make.conf#.
Unneeded devices can be removed from the kernel to further reduce the size. See <<make-kernel>> for more information.
To put any of these options into effect, follow the instructions to link:{handbook}#kernelconfig-building/[build and install] the new kernel.
For reference, the FreeBSD 11 amd64 kernel ([.filename]#/boot/kernel/kernel#) is approximately 25 MB.
[[generic-kernel-build-failure]]
=== Why does every kernel I try to build fail to compile, even GENERIC?
There are a number of possible causes for this problem:
* The source tree is different from the one used to build the currently running system. When attempting an upgrade, read [.filename]#/usr/src/UPDATING#, paying particular attention to the "COMMON ITEMS" section at the end.
* The `make buildkernel` did not complete successfully. The `make buildkernel` target relies on files generated by the `make buildworld` target to complete its job correctly.
* Even when building <<stable,FreeBSD-STABLE>>, it is possible that the source tree was fetched at a time when it was either being modified or it was broken. Only releases are guaranteed to be buildable, although <<stable,FreeBSD-STABLE>> builds fine the majority of the time. Try re-fetching the source tree and see if the problem goes away. Try using a different mirror in case the previous one is having problems.
[[scheduler-in-use]]
=== Which scheduler is in use on a running system?
The name of the scheduler currently being used is directly available as the value of the `kern.sched.name` sysctl:
[source,shell]
....
% sysctl kern.sched.name
kern.sched.name: ULE
....
[[scheduler-kern-quantum]]
=== What is kern.sched.quantum?
`kern.sched.quantum` is the maximum number of ticks a process can run without being preempted in the 4BSD scheduler.
[[disks]]
== Disks, File Systems, and Boot Loaders
[[adding-disks]]
=== How can I add my new hard disk to my FreeBSD system?
See the link:{handbook}#disks-adding/[Adding Disks] section in the FreeBSD Handbook.
[[new-huge-disk]]
=== How do I move my system over to my huge new disk?
The best way is to reinstall the operating system on the new disk, then move the user data over. This is highly recommended when tracking _-STABLE_ for more than one release or when updating a release instead of installing a new one. Install booteasy on both disks with man:boot0cfg[8] and dual boot until you are happy with the new configuration. Skip the next paragraph to find out how to move the data after doing this.
Alternatively, partition and label the new disk with either man:sade[8] or man:gpart[8]. If the disks are MBR-formatted, booteasy can be installed on both disks with man:boot0cfg[8] so that the computer can dual boot to the old or new system after the copying is done.
Once the new disk is set up, the data cannot just be copied. Instead, use tools that understand device files and system flags, such as man:dump[8]. Although it is recommended to move the data while in single-user mode, it is not required.
When the disks are formatted with UFS, never use anything but man:dump[8] and man:restore[8] to move the root file system. These commands should also be used when moving a single partition to another empty partition. The sequence of steps to use `dump` to move the data from one UFS partition to a new partition is:
[.procedure]
====
. `newfs` the new partition.
. `mount` it on a temporary mount point.
. `cd` to that directory.
. `dump` the old partition, piping output to the new one.
====
For example, to move [.filename]#/dev/ada1s1a# with [.filename]#/mnt# as the temporary mount point, type:
[source,shell]
....
# newfs /dev/ada1s1a
# mount /dev/ada1s1a /mnt
# cd /mnt
# dump 0af - / | restore rf -
....
Rearranging partitions with `dump` takes a bit more work. To merge a partition like [.filename]#/var# into its parent, create the new partition large enough for both, move the parent partition as described above, then move the child partition into the empty directory that the first move created:
[source,shell]
....
# newfs /dev/ada1s1a
# mount /dev/ada1s1a /mnt
# cd /mnt
# dump 0af - / | restore rf -
# cd var
# dump 0af - /var | restore rf -
....
To split a directory from its parent, say putting [.filename]#/var# on its own partition when it was not before, create both partitions, then mount the child partition on the appropriate directory in the temporary mount point, then move the old single partition:
[source,shell]
....
# newfs /dev/ada1s1a
# newfs /dev/ada1s1d
# mount /dev/ada1s1a /mnt
# mkdir /mnt/var
# mount /dev/ada1s1d /mnt/var
# cd /mnt
# dump 0af - / | restore rf -
....
The man:cpio[1] and man:pax[1] utilities are also available for moving user data. These are known to lose file flag information, so use them with caution.
[[safe-softupdates]]
=== Which partitions can safely use Soft Updates? I have heard that Soft Updates on / can cause problems. What about Journaled Soft Updates?
Short answer: Soft Updates can usually be safely used on all partitions.
Long answer: Soft Updates has two characteristics that may be undesirable on certain partitions. First, a Soft Updates partition has a small chance of losing data during a system crash. The partition will not be corrupted as the data will simply be lost. Second, Soft Updates can cause temporary space shortages.
When using Soft Updates, the kernel can take up to thirty seconds to write changes to the physical disk. When a large file is deleted the file still resides on disk until the kernel actually performs the deletion. This can cause a very simple race condition. Suppose one large file is deleted and another large file is immediately created. The first large file is not yet actually removed from the physical disk, so the disk might not have enough room for the second large file. This will produce an error that the partition does not have enough space, even though a large chunk of space has just been released. A few seconds later, the file creation works as expected.
If a system should crash after the kernel accepts a chunk of data for writing to disk, but before that data is actually written out, data could be lost. This risk is extremely small, but generally manageable.
These issues affect all partitions using Soft Updates. So, what does this mean for the root partition?
Vital information on the root partition changes very rarely. If the system crashed during the thirty-second window after such a change is made, it is possible that data could be lost. This risk is negligible for most applications, but be aware that it exists. If the system cannot tolerate this much risk, do not use Soft Updates on the root file system!
[.filename]#/# is traditionally one of the smallest partitions. If [.filename]#/tmp# is on [.filename]#/#, there may be intermittent space problems. Symlinking [.filename]#/tmp# to [.filename]#/var/tmp# will solve this problem.
Finally, man:dump[8] does not work in live mode (-L) on a filesystem with Journaled Soft Updates (SU+J).
[[mount-foreign-fs]]
=== Can I mount other foreign file systems under FreeBSD?
FreeBSD supports a variety of other file systems.
UFS::
UFS CD-ROMs can be mounted directly on FreeBSD. Mounting disk partitions from Digital UNIX and other systems that support UFS may be more complex, depending on the details of the disk partitioning for the operating system in question.
ext2/ext3::
FreeBSD supports `ext2fs`, `ext3fs`, and `ext4fs` partitions. See man:ext2fs[5] for more information.
NTFS::
FUSE based NTFS support is available as a port (package:sysutils/fusefs-ntfs[]). For more information, see man:ntfs-3g[8].
FAT::
FreeBSD includes a read-write FAT driver. For more information, see man:mount_msdosfs[8].
ZFS::
FreeBSD includes a port of Sun(TM)'s ZFS driver. The current recommendation is to use it only on amd64 platforms with sufficient memory. For more information, see man:zfs[8].
FreeBSD includes the Network File System NFS and the FreeBSD Ports Collection provides several FUSE applications to support many other file systems.
[[mount-dos]]
=== How do I mount a secondary DOS partition?
The secondary DOS partitions are found after _all_ the primary partitions. For example, if `E` is the second DOS partition on the second SCSI drive, there will be a device file for "slice 5" in [.filename]#/dev#. To mount it:
[source,shell]
....
# mount -t msdosfs /dev/da1s5 /dos/e
....
[[crypto-file-system]]
=== Is there a cryptographic file system for FreeBSD?
Yes, man:gbde[8] and man:geli[8]. See the link:{handbook}#disks-encrypting/[Encrypting Disk Partitions] section of the FreeBSD Handbook.
[[grub-loader]]
=== How do I boot FreeBSD and Linux(R) using GRUB?
To boot FreeBSD using GRUB, add the following to either [.filename]#/boot/grub/menu.lst# or [.filename]#/boot/grub/grub.conf#, depending upon which is used by the Linux(R) distribution.
[.programlisting]
....
title FreeBSD 9.1
root (hd0,a)
kernel /boot/loader
....
Where _hd0,a_ points to the root partition on the first disk. To specify the slice number, use something like this _(hd0,2,a)_. By default, if the slice number is omitted, GRUB searches the first slice which has the `a` partition.
[[booteasy-loader]]
=== How do I boot FreeBSD and Linux(R) using BootEasy?
Install LILO at the start of the Linux(R) boot partition instead of in the Master Boot Record. Then boot LILO from BootEasy.
This is recommended when running Windows(R) and Linux(R) as it makes it simpler to get Linux(R) booting again if Windows(R) is reinstalled.
[[changing-bootprompt]]
=== How do I change the boot prompt from ??? to something more meaningful?
This cannot be accomplished with the standard boot manager without rewriting it. There are a number of other boot managers in the [.filename]#sysutils# category of the Ports Collection.
[[removable-drives]]
=== How do I use a new removable drive?
If the drive already has a file system on it, use a command like this:
[source,shell]
....
# mount -t msdosfs /dev/da0s1 /mnt
....
If the drive will only be used with FreeBSD systems, partition it with UFS or ZFS. This will provide long filename support and improvements in performance and stability. If the drive will be used by other operating systems, a more portable choice, such as msdosfs, is better.
[source,shell]
....
# dd if=/dev/zero of=/dev/da0 count=2
# gpart create -s GPT /dev/da0
# gpart add -t freebsd-ufs /dev/da0
....
Finally, create a new file system:
[source,shell]
....
# newfs /dev/da0p1
....
and mount it:
[source,shell]
....
# mount /dev/da0p1 /mnt
....
It is a good idea to add a line to [.filename]#/etc/fstab# (see man:fstab[5]) so you can just type `mount /mnt` in the future:
[.programlisting]
....
/dev/da0p1 /mnt ufs rw,noauto 0 0
....
[[mount-cd-superblock]]
=== Why do I get Incorrect super block when mounting a CD?
The type of device to mount must be specified. This is described in the Handbook section on link:{handbook}#mounting-cd[Using Data CDs].
[[cdrom-not-configured]]
=== Why do I get Device not configured when mounting a CD?
This generally means that there is no CD in the drive, or the drive is not visible on the bus. Refer to the link:{handbook}#mounting-cd[Using Data CDs] section of the Handbook for a detailed discussion of this issue.
[[cdrom-unicode-filenames]]
=== Why do all non-English characters in filenames show up as ? on my CDs when mounted in FreeBSD?
The CD probably uses the "Joliet" extension for storing information about files and directories. This is discussed in the Handbook section on link:{handbook}#mounting-cd[Using Data CD-ROMs].
[[burncd-isofs]]
=== A CD burned under FreeBSD cannot be read under any other operating system. Why?
This means a raw file was burned to the CD, rather than creating an ISO 9660 file system. Take a look at the Handbook section on link:{handbook}#mounting-cd[Using Data CDs].
[[copy-cd]]
=== How can I create an image of a data CD?
This is discussed in the Handbook section on link:{handbook}#mkisofs[Writing Data to an ISO File System]. For more on working with CD-ROMs, see the link:{handbook}#creating-cds/[Creating CDs Section] in the Storage chapter in the Handbook.
[[mount-audio-CD]]
=== Why can I not mount an audio CD?
Trying to mount an audio CD will produce an error like `cd9660: /dev/cd0: Invalid argument`. This is because `mount` only works on file systems. Audio CDs do not have file systems; they just have data. Instead, use a program that reads audio CDs, such as the package:audio/xmcd[] package or port.
[[multi-session-CD]]
=== How do I mount a multi-session CD?
By default, man:mount[8] will attempt to mount the last data track (session) of a CD. To load an earlier session, use the `-s` command line argument. Refer to man:mount_cd9660[8] for specific examples.
[[user-floppymount]]
=== How do I let ordinary users mount CD-ROMs, DVDs, USB drives, and other removable media?
As `root` set the sysctl variable `vfs.usermount` to `1`.
[source,shell]
....
# sysctl vfs.usermount=1
....
To make this persist across reboots, add the line `vfs.usermount=1` to [.filename]#/etc/sysctl.conf# so that it is set at system boot time.
Users can only mount devices they have read permissions to. To allow users to mount a device, permissions must be set in [.filename]#/etc/devfs.conf#.
For example, to allow users to mount the first USB drive add:
[.programlisting]
....
# Allow all users to mount a USB drive.
own /dev/da0 root:operator
perm /dev/da0 0666
....
All users can now mount devices they can read onto a directory that they own:
[source,shell]
....
% mkdir ~/my-mount-point
% mount -t msdosfs /dev/da0 ~/my-mount-point
....
Unmounting the device is simple:
[source,shell]
....
% umount ~/my-mount-point
....
Enabling `vfs.usermount`, however, has negative security implications. A better way to access MS-DOS(R) formatted media is to use the package:emulators/mtools[] package in the Ports Collection.
[NOTE]
====
The device name used in the previous examples must be changed according to the configuration.
====
[[du-vs-df]]
=== The du and df commands show different amounts of disk space available. What is going on?
This is due to how these commands actually work. `du` goes through the directory tree, measures how large each file is, and presents the totals. `df` just asks the file system how much space it has left. They seem to be the same thing, but a file without a directory entry will affect `df` but not `du`.
When a program is using a file, and the file is deleted, the file is not really removed from the file system until the program stops using it. The file is immediately deleted from the directory listing, however. As an example, consider a file large enough to affect the output of `du` and `df`. A file being viewed with `more` can be deleted without causing an error. The entry is removed from the directory so no other program or user can access it. However, `du` shows that it is gone as it has walked the directory tree and the file is not listed. `df` shows that it is still there, as the file system knows that `more` is still using that space. Once the `more` session ends, `du` and `df` will agree.
This situation is common on web servers. Many people set up a FreeBSD web server and forget to rotate the log files. The access log fills up [.filename]#/var#. The new administrator deletes the file, but the system still complains that the partition is full. Stopping and restarting the web server program would free the file, allowing the system to release the disk space. To prevent this from happening, set up man:newsyslog[8].
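A minimal sketch of such a man:newsyslog[8] entry (the log path, size, and flags are only examples; see man:newsyslog.conf[5] for the exact fields) added to [.filename]#/etc/newsyslog.conf# might look like:
[.programlisting]
....
# logfilename              mode count size when  flags
/var/log/httpd-access.log  644  7     1000 *     JC
....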
Note that Soft Updates can delay the freeing of disk space and it can take up to 30 seconds for the change to be visible.
[[add-swap-space]]
=== How can I add more swap space?
This section link:{handbook}#adding-swap-space/[of the Handbook] describes how to do this.
[[manufacturer-disk-size]]
=== Why does FreeBSD see my disk as smaller than the manufacturer says it is?
Disk manufacturers calculate gigabytes as a billion bytes each, whereas FreeBSD calculates them as 1,073,741,824 bytes each. This explains why, for example, FreeBSD's boot messages will report a disk that supposedly has 80 GB as holding 76,319 MB.
Also note that FreeBSD will (by default) <<disk-more-than-full,reserve>> 8% of the disk space.
[[disk-more-than-full]]
=== How is it possible for a partition to be more than 100% full?
A portion of each UFS partition (8%, by default) is reserved for use by the operating system and the `root` user. man:df[1] does not count that space when calculating the `Capacity` column, so it can exceed 100%. Notice that the `Blocks` column is always greater than the sum of the `Used` and `Avail` columns, usually by a factor of 8%.
For more details, look up `-m` in man:tunefs[8].
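To see the current reserve (and the other tuning parameters) for a file system, run man:tunefs[8] in print mode; the device name below is only an example:
[source,shell]
....
# tunefs -p /dev/ada0p2
....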
[[all-about-zfs]]
== ZFS
[[how-much-ram-for-zfs]]
=== What is the minimum amount of RAM one should have to run ZFS?
A minimum of 4 GB of RAM is required for comfortable usage, but individual workloads can vary widely.
[[what-is-zil]]
=== What is the ZIL and when does it get used?
The ZIL (ZFS intent log) is a write log used to implement POSIX(R) write commitment semantics across crashes. Normally, writes are bundled up into transaction groups and written to disk when filled ("Transaction Group Commit"). However, syscalls like man:fsync[2] require a commitment that the data is written to stable storage before returning. The ZIL is needed for writes that have been acknowledged as written but which are not yet on disk as part of a transaction. The transaction groups are timestamped. In the event of a crash, the last valid timestamp is found and missing data is merged in from the ZIL.
[[need-ssd-for-zil]]
=== Do I need a SSD for ZIL?
By default, ZFS stores the ZIL in the pool with all the data. If an application has a heavy write load, storing the ZIL on a separate device that has very fast synchronous, sequential write performance can improve overall system performance. For other workloads, an SSD is unlikely to make much of an improvement.
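A separate log device can be attached to an existing pool with `zpool add`; the pool name and device below are placeholders:
[source,shell]
....
# zpool add mypool log ada1p1
....
See man:zpool[8] for details, including mirrored log devices.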
[[what-is-l2arc]]
=== What is the L2ARC?
The L2ARC is a read cache stored on a fast device such as an SSD. This cache is not persistent across reboots. Note that RAM is used as the first layer of cache and the L2ARC is only needed if there is insufficient RAM.
L2ARC needs space in the ARC to index it. So, perversely, a working set that fits perfectly in the ARC will not fit perfectly any more if an L2ARC is used, because part of the ARC is holding the L2ARC index, pushing part of the working set into the L2ARC, which is slower than RAM.
[[should-enable-dedup]]
=== Is enabling deduplication advisable?
Generally speaking, no.
Deduplication takes up a significant amount of RAM and may slow down read and write disk access times. Unless one is storing data that is very heavily duplicated, such as virtual machine images or user backups, it is possible that deduplication will do more harm than good. Another consideration is the inability to revert deduplication status. If data is written when deduplication is enabled, disabling dedup will not cause those blocks which were deduplicated to be replicated until they are next modified.
Deduplication can also lead to some unexpected situations. In particular, deleting files may become much slower.
[[zpool-fully-full]]
=== I cannot delete or create files on my ZFS pool. How can I fix this?
This could happen because the pool is 100% full. ZFS requires space on the disk to write transaction metadata. To restore the pool to a usable state, truncate the file to delete:
[source,shell]
....
% truncate -s 0 unimportant-file
....
File truncation works because a new transaction is not started; new spare blocks are created instead.
[NOTE]
====
On systems with additional ZFS dataset tuning, such as deduplication, the space may not be immediately available.
====
[[zfs-ssd-trim]]
=== Does ZFS support TRIM for Solid State Drives?
ZFS TRIM support was added to FreeBSD 10-CURRENT with revision link:https://svnweb.freebsd.org/changeset/base/240868[r240868]. It was later merged to the FreeBSD-STABLE branches in link:https://svnweb.freebsd.org/changeset/base/252162[r252162] and link:https://svnweb.freebsd.org/changeset/base/251419[r251419].
ZFS TRIM is enabled by default, and can be turned off by adding this line to [.filename]#/etc/sysctl.conf#:
[.programlisting]
....
vfs.zfs.trim.enabled=0
....
[NOTE]
====
ZFS TRIM support was added to GELI as of link:https://svnweb.freebsd.org/changeset/base/286444[r286444]. Please see man:geli[8] and the `-T` switch.
====
[[admin]]
== System Administration
[[startup-config-files]]
=== Where are the system start-up configuration files?
The primary configuration file is [.filename]#/etc/defaults/rc.conf# which is described in man:rc.conf[5]. System startup scripts such as [.filename]#/etc/rc# and [.filename]#/etc/rc.d#, which are described in man:rc[8], include this file. _Do not edit this file!_ Instead, to edit an entry in [.filename]#/etc/defaults/rc.conf#, copy the line into [.filename]#/etc/rc.conf# and change it there.
For example, to start man:sshd[8], the included OpenSSH daemon:
[source,shell]
....
# echo 'sshd_enable="YES"' >> /etc/rc.conf
....
Alternatively, use man:sysrc[8] to modify [.filename]#/etc/rc.conf#:
[source,shell]
....
# sysrc sshd_enable="YES"
....
To start up local services, place shell scripts in the [.filename]#/usr/local/etc/rc.d# directory. These shell scripts should be set executable; the default file mode is `555`.
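For example, assuming a hypothetical local start-up script named [.filename]#myservice#:
[source,shell]
....
# chmod 555 /usr/local/etc/rc.d/myservice
....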
[[adding-users]]
=== How do I add a user easily?
Use the man:adduser[8] command, or the man:pw[8] command for more complicated situations.
To remove the user, use the man:rmuser[8] command or, if necessary, man:pw[8].
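A minimal sketch of the non-interactive route (the user name `lisa` is a placeholder): man:pw[8] creates the account with a home directory and shell, and man:rmuser[8] removes it again:
[source,shell]
....
# pw useradd lisa -m -s /bin/sh
# rmuser lisa
....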
[[root-not-found-cron-errors]]
=== Why do I keep getting messages like root: not found after editing /etc/crontab?
This is normally caused by editing the system crontab. This is not the correct way to do things, as the system crontab has a different format from the per-user crontabs: the system crontab has an extra field specifying which user to run the command as. When such a file is installed as a per-user crontab with man:crontab[1], man:cron[8] assumes this user name is the first word of the command to execute. Since no such command exists, this error message is displayed.
To delete the extra, incorrect crontab:
[source,shell]
....
# crontab -r
....
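For reference, entries in the system crontab carry the extra _who_ field between the time fields and the command; the stock [.filename]#/etc/crontab# contains lines roughly of this form (shown only to illustrate the extra field):
[.programlisting]
....
#minute hour mday month wday who  command
*/5     *    *    *     *    root /usr/libexec/atrun
....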
[[su-wheel-group]]
=== Why do I get the error, you are not in the correct group to su root when I try to su to root?
This is a security feature. In order to `su` to `root`, or any other account with superuser privileges, the user account must be a member of the `wheel` group. If this feature were not there, anybody with an account on a system who also found out ``root``'s password would be able to gain superuser level access to the system.
To allow someone to `su` to `root`, put them in the `wheel` group using `pw`:
[source,shell]
....
# pw groupmod wheel -m lisa
....
The above example will add user `lisa` to the group `wheel`.
[[rcconf-readonly]]
=== I made a mistake in rc.conf, or another startup file, and now I cannot edit it because the file system is read-only. What should I do?
Restart the system using `boot -s` at the loader prompt to enter single-user mode. When prompted for a shell pathname, press kbd:[Enter] and run `mount -urw /` to re-mount the root file system in read/write mode. You may also need to run `mount -a -t ufs` to mount the file system where your favorite editor is defined. If that editor is on a network file system, either configure the network manually before mounting the network file systems, or use an editor which resides on a local file system, such as man:ed[1].
In order to use a full screen editor such as man:vi[1] or man:emacs[1], run `export TERM=xterm` so that these editors can load the correct data from the man:termcap[5] database.
After performing these steps, edit [.filename]#/etc/rc.conf# to fix the syntax error. The error message displayed immediately after the kernel boot messages should indicate the number of the line in the file which is at fault.
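Put together, a typical recovery session might look like this sketch, assuming [.filename]#/usr# is a separate UFS file system that holds the editor:
[source,shell]
....
# mount -urw /
# mount -a -t ufs
# export TERM=xterm
# vi /etc/rc.conf
....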
[[printer-setup]]
=== Why am I having trouble setting up my printer?
See the link:{handbook}#printing/[Handbook entry on printing] for troubleshooting tips.
[[keyboard-mappings]]
=== How can I correct the keyboard mappings for my system?
Refer to the Handbook section on link:{handbook}#using-localization/[using localization], specifically the section on link:{handbook}#setting-console[console setup].
[[user-quotas]]
=== Why can I not get user quotas to work properly?
. It is possible that the kernel is not configured to use quotas. In this case, add the following line to the kernel configuration file and recompile the kernel:
+
[.programlisting]
....
options QUOTA
....
+
Refer to the link:{handbook}#quotas/[Handbook entry on quotas] for full details.
. Do not turn on quotas on [.filename]#/#.
. Put the quota file on the file system that the quotas are to be enforced on:
+
[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| File System
| Quota file
|[.filename]#/usr#
|[.filename]#/usr/admin/quotas#
|[.filename]#/home#
|[.filename]#/home/admin/quotas#
|...
|...
|===
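As a hedged sketch of the remaining setup (the device, mount point, and file system layout are assumptions), quotas are typically enabled in [.filename]#/etc/rc.conf# and marked on the file system in [.filename]#/etc/fstab#:
[.programlisting]
....
# /etc/rc.conf
quota_enable="YES"
check_quotas="YES"

# /etc/fstab: add userquota and/or groupquota to the options field
/dev/ada0p5  /home  ufs  rw,userquota  2  2
....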
[[sysv-ipc]]
=== Does FreeBSD support System V IPC primitives?
Yes, FreeBSD supports System V-style IPC, including shared memory, messages and semaphores, in the [.filename]#GENERIC# kernel. With a custom kernel, support may be loaded with the [.filename]#sysvshm.ko#, [.filename]#sysvsem.ko# and [.filename]#sysvmsg.ko# kernel modules, or enabled in the custom kernel by adding the following lines to the kernel configuration file:
[.programlisting]
....
options SYSVSHM # enable shared memory
options SYSVSEM # enable semaphores
options SYSVMSG # enable message queues
....
Recompile and install the kernel.
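Alternatively, on a custom kernel built without these options, the modules can be loaded at runtime:
[source,shell]
....
# kldload sysvshm sysvsem sysvmsg
....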
[[sendmail-alternative]]
=== What other mail-server software can I use instead of Sendmail?
The http://www.sendmail.org/[Sendmail] server is the default mail-server software for FreeBSD, but it can be replaced with another MTA installed from the Ports Collection. Available ports include package:mail/exim[], package:mail/postfix[], and package:mail/qmail[]. Search the mailing lists for discussions regarding the advantages and disadvantages of the available MTAs.
[[forgot-root-pw]]
=== I have forgotten the root password! What do I do?
Do not panic! Restart the system and type `boot -s` at the `Boot:` prompt to enter single-user mode. At the question about the shell to use, press kbd:[Enter], which will display a `#` prompt. Enter `mount -urw /` to remount the root file system read/write, then run `mount -a` to remount all the file systems. Run `passwd root` to change the `root` password, then run man:exit[1] to continue booting.
[NOTE]
====
If you are still prompted to give the `root` password when entering the single-user mode, it means that the console has been marked as `insecure` in [.filename]#/etc/ttys#. In this case, it will be required to boot from a FreeBSD installation disk, choose the [.guimenuitem]#Live CD# or [.guimenuitem]#Shell# at the beginning of the install process and issue the commands mentioned above. Mount the specific partition in this case and then chroot to it. For example, replace `mount -urw /` with `mount /dev/ada0p1 /mnt; chroot /mnt` for a system on _ada0p1_.
====
[NOTE]
====
If the root partition cannot be mounted from single-user mode, it is possible that the partitions are encrypted and it is impossible to mount them without the access keys. For more information see the section about encrypted disks in the FreeBSD link:{handbook}#disks-encrypting/[Handbook].
====
[[CAD-reboot]]
=== How do I keep kbd:[Control] + kbd:[Alt] + kbd:[Delete] from rebooting the system?
When using man:vt[4], the default console driver, this can be done by setting the following man:sysctl[8]:
[source,shell]
....
# sysctl kern.vt.kbd_reboot=0
....
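To make the setting persist across reboots, the same variable can be added to [.filename]#/etc/sysctl.conf#:
[.programlisting]
....
kern.vt.kbd_reboot=0
....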
[[dos-to-unix-txt]]
=== How do I reformat DOS text files to UNIX(R) ones?
Use this man:perl[1] command:
[source,shell]
....
% perl -i.bak -npe 's/\r\n/\n/g' file(s)
....
where _file(s)_ is one or more files to process. The modification is done in-place, with the original file stored with a [.filename]#.bak# extension.
Alternatively, use man:tr[1]:
[source,shell]
....
% tr -d '\r' < dos-text-file > unix-file
....
_dos-text-file_ is the file containing DOS text while _unix-file_ will contain the converted output. This can be quite a bit faster than using `perl`.
Yet another way to reformat DOS text files is to use the package:converters/dosunix[] port from the Ports Collection. Consult its documentation about the details.
[[reread-rc]]
=== How do I re-read [.filename]#/etc/rc.conf# and re-start [.filename]#/etc/rc# without a reboot?
Go into single-user mode and then back to multi-user mode:
[source,shell]
....
# shutdown now
# return
# exit
....
[[release-candidate]]
=== I tried to update my system to the latest _-STABLE_, but got _-BETAx_, _-RC_ or __-PRERELEASE__! What is going on?
Short answer: it is just a name. _RC_ stands for "Release Candidate". It signifies that a release is imminent. In FreeBSD, _-PRERELEASE_ is typically synonymous with the code freeze before a release. (For some releases, the _-BETA_ label was used in the same way as _-PRERELEASE_.)
Long answer: FreeBSD derives its releases from one of two places. Major, dot-zero releases, such as 9.0-RELEASE, are branched from the head of the development stream, commonly referred to as <<current,-CURRENT>>. Minor releases, such as 6.3-RELEASE or 5.2-RELEASE, have been snapshots of the active <<stable,-STABLE>> branch. Starting with 4.3-RELEASE, each release also now has its own branch which can be tracked by people requiring an extremely conservative rate of development (typically only security advisories).
When a release is about to be made, the branch from which it will be derived has to undergo a certain process. Part of this process is a code freeze. When a code freeze is initiated, the name of the branch is changed to reflect that it is about to become a release. For example, if the branch used to be called 6.2-STABLE, its name will be changed to 6.3-PRERELEASE to signify the code freeze and that extra pre-release testing should be happening. Bug fixes can still be committed to be part of the release. When the source code is in shape for the release, the name will be changed to 6.3-RC to signify that a release is about to be made from it. Once in the RC stage, only the most critical bugs found can be fixed. Once the release (6.3-RELEASE in this example) and release branch have been made, the branch will be renamed to 6.3-STABLE.
For more information on version numbers and the various Subversion branches, refer to the link:{releng}[Release Engineering] article.
[[kernel-chflag-failure]]
=== I tried to install a new kernel, and the man:chflags[1] failed. How do I get around this?
Short answer: the security level is greater than 0. Reboot directly to single-user mode to install the kernel.
Long answer: FreeBSD disallows changing system flags at security levels greater than 0. To check the current security level:
[source,shell]
....
# sysctl kern.securelevel
....
The security level cannot be lowered in multi-user mode, so boot to single-user mode to install the kernel, or change the security level in [.filename]#/etc/rc.conf# then reboot. See the man:init[8] manual page for details on `securelevel`, and see [.filename]#/etc/defaults/rc.conf# and the man:rc.conf[5] manual page for more information on [.filename]#rc.conf#.
[[kernel-securelevel-time]]
=== I cannot change the time on my system by more than one second! How do I get around this?
Short answer: the system is at a security level greater than 1. Reboot directly to single-user mode to change the date.
Long answer: FreeBSD disallows changing the time by more than one second at security levels greater than 1. To check the security level:
[source,shell]
....
# sysctl kern.securelevel
....
The security level cannot be lowered in multi-user mode. Either boot to single-user mode to change the date or change the security level in [.filename]#/etc/rc.conf# and reboot. See the man:init[8] manual page for details on `securelevel`, and see [.filename]#/etc/defaults/rc.conf# and the man:rc.conf[5] manual page for more information on [.filename]#rc.conf#.
[[statd-mem-leak]]
=== Why is rpc.statd using 256 MB of memory?
There is no memory leak, and it is not actually using 256 MB of memory. For convenience, `rpc.statd` maps an obscene amount of memory into its address space. There is nothing terribly wrong with this from a technical standpoint; it just throws off things like man:top[1] and man:ps[1].
man:rpc.statd[8] maps its status file (resident on [.filename]#/var#) into its address space; to save worrying about remapping the status file later when it needs to grow, it maps the status file with a generous size. This is very evident from the source code, where one can see that the length argument to man:mmap[2] is `0x10000000`, or one sixteenth of the address space on an IA32, or exactly 256 MB.
[[unsetting-schg]]
=== Why can I not unset the schg file flag?
The system is running at securelevel greater than 0. Lower the securelevel and try again. For more information, see <<securelevel,the FAQ entry on securelevel>> and the man:init[8] manual page.
[[vnlru]]
=== What is vnlru?
`vnlru` flushes and frees vnodes when the system hits the `kern.maxvnodes` limit. This kernel thread sits mostly idle, and only activates when there is a huge amount of RAM and users are accessing tens of thousands of tiny files.
[[top-memory-states]]
=== What do the various memory states displayed by top mean?
* `Active`: pages recently statistically used.
* `Inactive`: pages recently statistically unused.
* `Laundry`: pages recently statistically unused but known to be dirty, that is, whose contents need to be paged out before they can be reused.
* `Free`: pages without data content, which can be immediately reused.
* `Wired`: pages that are fixed into memory, usually for kernel purposes, but also sometimes for special use in processes.
Pages are most often written to disk (sort of a VM sync) when they are in the laundry state, but active or inactive pages can also be synced. This depends upon the CPU tracking of the modified bit being available, and in certain situations there can be an advantage for a block of VM pages to be synced, regardless of the queue they belong to. In most common cases, it is best to think of the laundry queue as a queue of relatively unused pages that might or might not be in the process of being written to disk. The inactive queue contains a mix of clean and dirty pages; clean pages near the head of the queue are reclaimed immediately to alleviate a free page shortage, and dirty pages are moved to the laundry queue for deferred processing.
There are some other flags (e.g., busy flag or busy count) that might modify some of the described rules.
[[free-memory-amount]]
=== How much free memory is available?
There are a couple of kinds of "free memory". The most common is the amount of memory immediately available without reclaiming memory already in use. That is the size of the free pages queue plus some other reserved pages. This amount is exported by the `vm.stats.vm.v_free_count` man:sysctl[8], shown, for instance, by man:top[1]. Another kind of "free memory" is the total amount of virtual memory available to userland processes, which depends on the sum of swap space and usable memory. Other kinds of "free memory" could also be defined, but doing so is of little practical use; what matters is keeping the paging rate low and avoiding running out of swap space.
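For example, the free page count can be read directly:
[source,shell]
....
% sysctl vm.stats.vm.v_free_count
....
The value is a page count; multiply by the page size (`hw.pagesize`) to get bytes.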
[[var-empty]]
=== What is [.filename]#/var/empty#?
[.filename]#/var/empty# is a directory that the man:sshd[8] program uses when performing privilege separation. The [.filename]#/var/empty# directory is empty, owned by `root` and has the `schg` flag set. This directory should not be deleted.
[[newsyslog-expectations]]
=== I just changed [.filename]#/etc/newsyslog.conf#. How can I check if it does what I expect?
To see what man:newsyslog[8] will do, use the following:
[source,shell]
....
% newsyslog -nrvv
....
[[timezone]]
=== My time is wrong, how can I change the timezone?
Use man:tzsetup[8].
[[X11]]
== The X Window System and Virtual Consoles
[[whatis-X]]
=== What is the X Window System?
The X Window System (commonly `X11`) is the most widely available windowing system capable of running on UNIX(R) or UNIX(R)-like systems, including FreeBSD. http://www.x.org/wiki/[The X.Org Foundation] administers the http://en.wikipedia.org/wiki/X_Window_System_core_protocol[X protocol standards], with the current reference implementation being version 11 release 7.7, which is why references are often shortened to `X11`.
Many implementations are available for different architectures and operating systems. An implementation of the server-side code is properly known as an `X server`.
[[running-X]]
=== I want to run Xorg, how do I go about it?
To install Xorg, do one of the following:
* Use the package:x11/xorg[] meta-port, which builds and installs every Xorg component.
* Use package:x11/xorg-minimal[], which builds and installs only the necessary Xorg components.
* Install Xorg from FreeBSD packages:
[source,shell]
....
# pkg install xorg
....
After the installation of Xorg, follow the instructions from the link:{handbook}#x-config/[X11 Configuration] section of the FreeBSD Handbook.
[[running-X-securelevels]]
=== I tried to run X, but I get a No devices detected. error when I type startx. What do I do now?
The system is probably running at a raised `securelevel`. It is not possible to start X at a raised `securelevel` because X requires write access to man:io[4]. For more information, see the man:init[8] manual page.
There are two solutions to the problem: set the `securelevel` back down to zero or run man:xdm[1] (or an alternative display manager) at boot time before the `securelevel` is raised.
See <<xdm-boot>> for more information about running man:xdm[1] at boot time.
[[x-and-moused]]
=== Why does my mouse not work with X?
When using man:vt[4], the default console driver, FreeBSD can be configured to support a mouse pointer on each virtual screen. To avoid conflicting with X, man:vt[4] supports a virtual device called [.filename]#/dev/sysmouse#. All mouse events received from the real mouse device are written to the man:sysmouse[4] device via man:moused[8]. To use the mouse on one or more virtual consoles, _and_ use X, see <<moused>> and set up man:moused[8].
Then edit [.filename]#/etc/X11/xorg.conf# and make sure the following lines exist:
[.programlisting]
....
Section "InputDevice"
Option "Protocol" "SysMouse"
Option "Device" "/dev/sysmouse"
.....
....
Starting with Xorg version 7.4, the `InputDevice` sections in [.filename]#xorg.conf# are ignored in favor of autodetected devices. To restore the old behavior, add the following line to the `ServerLayout` or `ServerFlags` section:
[.programlisting]
....
Option "AutoAddDevices" "false"
....
Some people prefer to use [.filename]#/dev/mouse# under X. To make this work, [.filename]#/dev/mouse# should be linked to [.filename]#/dev/sysmouse# (see man:sysmouse[4]) by adding the following line to [.filename]#/etc/devfs.conf# (see man:devfs.conf[5]):
[.programlisting]
....
link sysmouse mouse
....
This link can be created by restarting man:devfs[5] with the following command (as `root`):
[source,shell]
....
# service devfs restart
....
[[x-and-wheel]]
=== My mouse has a fancy wheel. Can I use it in X?
Yes, if X is configured for a 5 button mouse. To do this, add the lines `Buttons 5` and `ZAxisMapping 4 5` to the "InputDevice" section of [.filename]#/etc/X11/xorg.conf#, as seen in this example:
[.programlisting]
....
Section "InputDevice"
Identifier "Mouse1"
Driver "mouse"
Option "Protocol" "auto"
Option "Device" "/dev/sysmouse"
Option "Buttons" "5"
Option "ZAxisMapping" "4 5"
EndSection
....
The mouse can be enabled in Emacs by adding these lines to [.filename]#~/.emacs#:
[.programlisting]
....
;; wheel mouse
(global-set-key [mouse-4] 'scroll-down)
(global-set-key [mouse-5] 'scroll-up)
....
[[x-and-synaptic]]
=== My laptop has a Synaptics touchpad. Can I use it in X?
Yes, after configuring a few things to make it work.
In order to use the Xorg synaptics driver, first remove `moused_enable` from [.filename]#rc.conf#.
To enable synaptics, add the following line to [.filename]#/boot/loader.conf#:
[.programlisting]
....
hw.psm.synaptics_support="1"
....
Add the following to [.filename]#/etc/X11/xorg.conf#:
[.programlisting]
....
Section "InputDevice"
Identifier "Touchpad0"
Driver "synaptics"
Option "Protocol" "psm"
Option "Device" "/dev/psm0"
EndSection
....
And be sure to add the following into the "ServerLayout" section:
[.programlisting]
....
InputDevice "Touchpad0" "SendCoreEvents"
....
[[no-remote-x11]]
=== How do I use remote X displays?
For security reasons, the default setting is to not allow a machine to remotely open a window.
To enable this feature, start X with the optional `-listen_tcp` argument:
[source,shell]
....
% startx -listen_tcp
....
[[virtual-console]]
=== What is a virtual console and how do I make more?
Virtual consoles provide several simultaneous sessions on the same machine without doing anything complicated like setting up a network or running X.
When the system starts, it will display a login prompt on the monitor after displaying all the boot messages. Type in your login name and password to start working on the first virtual console.
To start another session, perhaps to look at documentation for a program or to read mail while waiting for an FTP transfer to finish, hold down kbd:[Alt] and press kbd:[F2]. This will display the login prompt for the second virtual console. To go back to the original session, press kbd:[Alt+F1].
The default FreeBSD installation has eight virtual consoles enabled. kbd:[Alt+F1], kbd:[Alt+F2], kbd:[Alt+F3], and so on will switch between these virtual consoles.
To enable more virtual consoles, edit [.filename]#/etc/ttys# (see man:ttys[5]) and add entries for [.filename]#ttyv8# to [.filename]#ttyvc#, after the comment on "Virtual terminals":
[.programlisting]
....
# Edit the existing entry for ttyv8 in /etc/ttys and change
# "off" to "on".
ttyv8 "/usr/libexec/getty Pc" xterm on secure
ttyv9 "/usr/libexec/getty Pc" xterm on secure
ttyva "/usr/libexec/getty Pc" xterm on secure
ttyvb "/usr/libexec/getty Pc" xterm on secure
....
The more virtual terminals, the more resources that are used. This can be problematic on systems with 8 MB RAM or less. Consider changing `secure` to `insecure`.
[IMPORTANT]
====
In order to run an X server, at least one virtual terminal must be left set to `off` for it to use. This means that only eleven of the Alt-function keys can be used as virtual consoles, leaving one for the X server.
====
For example, to run X and eleven virtual consoles, the setting for virtual terminal 12 should be:
[.programlisting]
....
ttyvb "/usr/libexec/getty Pc" xterm off secure
....
The easiest way to activate the virtual consoles is to reboot.
[[vty-from-x]]
=== How do I access the virtual consoles from X?
Use kbd:[Ctrl+Alt+Fn] to switch back to a virtual console. Press kbd:[Ctrl+Alt+F1] to return to the first virtual console.
Once at a text console, use kbd:[Alt+Fn] to move between them.
To return to the X session, switch to the virtual console running X. If X was started from the command line using `startx`, the X session will attach to the next unused virtual console, not the text console from which it was invoked. For eight active virtual terminals, X will run on the ninth, so use kbd:[Alt+F9].
[[xdm-boot]]
=== How do I start XDM on boot?
There are two schools of thought on how to start man:xdm[1]. One school starts `xdm` from [.filename]#/etc/ttys# (see man:ttys[5]) using the supplied example, while the other sets `xdm_enable=yes` in [.filename]#/etc/rc.conf#. Both are equally valid, and one may work in situations where the other does not. In both cases the result is the same: X will pop up a graphical login prompt.
The man:ttys[5] method has the advantage of documenting which vty X will start on and passing the responsibility of restarting the X server on logout to man:init[8]. The man:rc[8] method makes it easy to `kill xdm` if there is a problem starting the X server.
When using the man:rc[8] method, `xdm_tty` (default `ttyv8`) can be set in [.filename]#/etc/rc.conf# to choose which vty man:xdm[1] opens on.
[[xconsole-failure]]
=== Why do I get Couldn't open console when I run xconsole?
When X is started with `startx`, the permissions on [.filename]#/dev/console# will _not_ get changed, resulting in things like `xterm -C` and `xconsole` not working.
This is because of the way console permissions are set by default. On a multi-user system, one does not necessarily want just any user to be able to write on the system console. For users who are logging directly onto a machine with a VTY, the man:fbtab[5] file exists to solve such problems.
In a nutshell, make sure an uncommented line of the form is in [.filename]#/etc/fbtab# (see man:fbtab[5]):
[.programlisting]
....
/dev/ttyv0 0600 /dev/console
....
This will ensure that whoever logs in on [.filename]#/dev/ttyv0# will own the console.
[[ps2-x]]
=== Why does my PS/2 mouse misbehave under X?
The mouse and the mouse driver may have become out of synchronization. In rare cases, the driver may also erroneously report synchronization errors:
[.programlisting]
....
psmintr: out of sync (xxxx != yyyy)
....
If this happens, disable the synchronization check code by setting the driver flags for the PS/2 mouse driver to `0x100`. This is most easily achieved by adding `hint.psm.0.flags="0x100"` to [.filename]#/boot/loader.conf# and rebooting.
[[mouse-button-reverse]]
=== How do I reverse the mouse buttons?
Type `xmodmap -e "pointer = 3 2 1"`. Add this command to [.filename]#~/.xinitrc# or [.filename]#~/.xsession# to make it happen automatically.
[[install-splash]]
=== How do I install a splash screen and where do I find them?
The detailed answer for this question can be found in the link:{handbook}#boot-splash/[Boot Time Splash Screens] section of the FreeBSD Handbook.
[[windows-keys]]
=== Can I use the kbd:[Windows] keys on my keyboard in X?
Yes. Use man:xmodmap[1] to define which functions the keys should perform.
Assuming all Windows keyboards are standard, the keycodes for these three keys are the following:
* 115 - kbd:[Windows] key, between the left-hand kbd:[Ctrl] and kbd:[Alt] keys
* 116 - kbd:[Windows] key, to the right of kbd:[AltGr]
* 117 - kbd:[Menu], to the left of the right-hand kbd:[Ctrl]
To have the left kbd:[Windows] key print a comma, try this:
[source,shell]
....
# xmodmap -e "keycode 115 = comma"
....
To have the kbd:[Windows] key-mappings enabled automatically every time X is started, either put the `xmodmap` commands in [.filename]#~/.xinitrc# or, preferably, create a [.filename]#~/.xmodmaprc# and include the `xmodmap` options, one per line, then add the following line to [.filename]#~/.xinitrc#:
[.programlisting]
....
xmodmap $HOME/.xmodmaprc
....
For example, map the three keys to kbd:[F13], kbd:[F14], and kbd:[F15], respectively, which makes it easy to map them to useful functions within applications or the window manager.
To do this, put the following in [.filename]#~/.xmodmaprc#:
[.programlisting]
....
keycode 115 = F13
keycode 116 = F14
keycode 117 = F15
....
For the package:x11-wm/fvwm2[] desktop manager, one could map the keys so that kbd:[F13] iconifies or de-iconifies the window the cursor is in, kbd:[F14] brings the window the cursor is in to the front or, if it is already at the front, pushes it to the back, and kbd:[F15] pops up the main Workplace menu even if the cursor is not on the desktop, which is useful when no part of the desktop is visible.
The following entries in [.filename]#~/.fvwmrc# implement the aforementioned setup:
[.programlisting]
....
Key F13 FTIWS A Iconify
Key F14 FTIWS A RaiseLower
Key F15 A A Menu Workplace Nop
....
[[x-3d-acceleration]]
=== How can I get 3D hardware acceleration for OpenGL(R)?
The availability of 3D acceleration depends on the version of Xorg and the type of video chip. For an nVidia chip, use the binary drivers provided for FreeBSD by installing one of the following ports:
* The latest versions of nVidia cards are supported by the package:x11/nvidia-driver[] port.
* Older drivers are available as package:x11/nvidia-driver-[].
nVidia provides detailed information on which card is supported by which driver on their web site: http://www.nvidia.com/object/IO_32667.html[http://www.nvidia.com/object/IO_32667.html].
For Matrox G200/G400, check the package:x11-drivers/xf86-video-mga[] port.
For ATI Rage 128 and Radeon see man:ati[4], man:r128[4] and man:radeon[4].
[[networking]]
== Networking
[[diskless-booting]]
=== Where can I get information on diskless booting?
"Diskless booting" means that the FreeBSD box is booted over a network, and reads the necessary files from a server instead of its hard disk. For full details, see link:{handbook}#network-diskless/[the Handbook entry on diskless booting].
[[router]]
=== Can a FreeBSD box be used as a dedicated network router?
Yes. Refer to the Handbook entry on link:{handbook}#advanced-networking/[advanced networking], specifically the section on link:{handbook}#network-routing/[routing and gateways].
[[natd]]
=== Does FreeBSD support NAT or Masquerading?
Yes. For instructions on how to use NAT over a PPP connection, see the link:{handbook}#userppp/[Handbook entry on PPP]. To use NAT over some other sort of network connection, look at the link:{handbook}#network-natd[natd] section of the Handbook.
[[ethernet-aliases]]
=== How can I set up Ethernet aliases?
If the alias is on the same subnet as an address already configured on the interface, add `netmask 0xffffffff` to this command:
[source,shell]
....
# ifconfig ed0 alias 192.0.2.2 netmask 0xffffffff
....
Otherwise, specify the network address and netmask as usual:
[source,shell]
....
# ifconfig ed0 alias 172.16.141.5 netmask 0xffffff00
....
More information can be found in the FreeBSD link:{handbook}#configtuning-virtual-hosts/[Handbook].
[[nfs-linux]]
=== Why can I not NFS-mount from a Linux(R) box?
Some versions of the Linux(R) NFS code only accept mount requests from a privileged port; try to issue the following command:
[source,shell]
....
# mount -o -P linuxbox:/blah /mnt
....
[[exports-errors]]
=== Why does mountd keep telling me it can't change attributes and that I have a bad exports list on my FreeBSD NFS server?
The most frequent problem is not understanding the correct format of [.filename]#/etc/exports#. Review man:exports[5] and the link:{handbook}#network-nfs/[NFS] entry in the Handbook, especially the section on link:{handbook}#configuring-nfs[configuring NFS].
[[ip-multicast]]
=== How do I enable IP multicast support?
Install the package:net/mrouted[] package or port and add `mrouted_enable="YES"` to [.filename]#/etc/rc.conf# to start this service at boot time.
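Putting the pieces together, a sketch assuming the package's rc script is named `mrouted`:
[source,shell]
....
# pkg install mrouted
# sysrc mrouted_enable="YES"
# service mrouted start
....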
[[fqdn-hosts]]
=== Why do I have to use the FQDN for hosts on my site?
See the answer in the FreeBSD link:{handbook}#mail-trouble/[Handbook].
[[network-permission-denied]]
=== Why do I get an error, Permission denied, for all networking operations?
If the kernel is compiled with the `IPFIREWALL` option, be aware that the default policy is to deny all packets that are not explicitly allowed.
If the firewall is unintentionally misconfigured, restore network operability by typing the following as `root`:
[source,shell]
....
# ipfw add 65534 allow all from any to any
....
Consider setting `firewall_type="open"` in [.filename]#/etc/rc.conf#.
For further information on configuring this firewall, see the link:{handbook}#firewalls-ipfw/[Handbook chapter].
[[ipfw-fwd]]
=== Why is my `ipfw` “fwd” rule to redirect a service to another machine not working?
Possibly because network address translation (NAT) is needed instead of just forwarding packets. A "fwd" rule only forwards packets, it does not actually change the data inside the packet. Consider this rule:
[source,shell]
....
01000 fwd 10.0.0.1 from any to foo 21
....
When a packet with a destination address of _foo_ arrives at the machine with this rule, the packet is forwarded to _10.0.0.1_, but it still has the destination address of _foo_. The destination address of the packet is not changed to _10.0.0.1_. Most machines would probably drop a packet that they receive with a destination address that is not their own. Therefore, using a "fwd" rule does not often work the way the user expects. This behavior is a feature and not a bug.
See the <<service-redirect,FAQ about redirecting services>>, the man:natd[8] manual, or one of the several port redirecting utilities in the link:https://www.FreeBSD.org/ports/[Ports Collection] for a correct way to do this.
[[service-redirect]]
=== How can I redirect service requests from one machine to another?
FTP and other service requests can be redirected with the package:sysutils/socket[] package or port. Replace the entry for the service in [.filename]#/etc/inetd.conf# to call `socket`, as seen in this example for ftpd:
[.programlisting]
....
ftp stream tcp nowait nobody /usr/local/bin/socket socket ftp.example.com ftp
....
where _ftp.example.com_ and _ftp_ are the host and port to redirect to, respectively.
[[bandwidth-mgr-tool]]
=== Where can I get a bandwidth management tool?
There are three bandwidth management tools available for FreeBSD. man:dummynet[4] is integrated into FreeBSD as part of man:ipfw[4]. http://www.sonycsl.co.jp/person/kjc/programs.html[ALTQ] has been integrated into FreeBSD as part of man:pf[4]. Bandwidth Manager from http://www.etinc.com/[Emerging Technologies] is a commercial product.
[[bpf-not-configured]]
=== Why do I get /dev/bpf0: device not configured?
The running application requires the Berkeley Packet Filter (man:bpf[4]), but it was removed from a custom kernel. Add this to the kernel config file and build a new kernel:
[.programlisting]
....
device bpf # Berkeley Packet Filter
....
[[mount-smb-share]]
=== How do I mount a disk from a Windows(R) machine that is on my network, like smbmount in Linux(R)?
Use the SMBFS toolset. It includes a set of kernel modifications and a set of userland programs. The programs and information are available as man:mount_smbfs[8] in the base system.
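A minimal sketch, assuming a server named _fileserver_, a share named _share_, and a user named _user_ (all placeholders):
[source,shell]
....
# mount_smbfs //user@fileserver/share /mnt
....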
[[icmp-response-bw-limit]]
=== What are these messages about: Limiting icmp/open port/closed port response in my log files?
This kernel message indicates that some activity is provoking it to send a large amount of ICMP or TCP reset (RST) responses. ICMP responses are often generated as a result of attempted connections to unused UDP ports. TCP resets are generated as a result of attempted connections to unopened TCP ports. Among others, these are the kinds of activities which may cause these messages:
* Brute-force denial of service (DoS) attacks (as opposed to single-packet attacks which exploit a specific vulnerability).
* Port scans which attempt to connect to a large number of ports (as opposed to only trying a few well-known ports).
The first number in the message indicates how many packets the kernel would have sent if the limit was not in place, and the second indicates the limit. This limit is controlled using `net.inet.icmp.icmplim`. This example sets the limit to `300` packets per second:
[source,shell]
....
# sysctl net.inet.icmp.icmplim=300
....
To disable these messages without disabling response limiting, use `net.inet.icmp.icmplim_output` to disable the output:
[source,shell]
....
# sysctl net.inet.icmp.icmplim_output=0
....
Finally, to disable response limiting completely, set `net.inet.icmp.icmplim` to `0`. Disabling response limiting is discouraged for the reasons listed above.
[[unknown-hw-addr-format]]
=== What are these arp: unknown hardware address format error messages?
This means that some device on the local Ethernet is using a MAC address in a format that FreeBSD does not recognize. This is probably caused by someone experimenting with an Ethernet card somewhere else on the network. This is most commonly seen on cable modem networks. It is harmless, and should not affect the performance of the FreeBSD system.
[[arp-wrong-iface]]
=== Why do I keep seeing messages like: 192.168.0.10 is on fxp1 but got reply from 00:15:17:67:cf:82 on rl0, and how do I disable it?
An ARP reply for that address unexpectedly arrived on a different interface than the one it is expected on. To disable these messages, set `net.link.ether.inet.log_arp_wrong_iface` to `0`.
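For example, to turn the logging off:
[source,shell]
....
# sysctl net.link.ether.inet.log_arp_wrong_iface=0
....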
[[ipv6-only]]
=== How do I compile an IPv6 only kernel?
Configure your kernel with these settings:
[source,shell]
....
include GENERIC
ident GENERIC-IPV6ONLY
makeoptions MKMODULESENV+="WITHOUT_INET_SUPPORT="
nooptions INET
nodevice gre
....
[[security]]
== Security
[[sandbox]]
=== What is a sandbox?
"Sandbox" is a security term. It can mean two things:
* A process which is placed inside a set of virtual walls that are designed to prevent someone who breaks into the process from being able to break into the wider system.
+
The process is only able to run inside the walls. Since nothing the process does in regards to executing code is supposed to be able to breach the walls, a detailed audit of its code is not needed in order to be able to say certain things about its security.
+
The walls might be a user ID, for example. This is the definition used in the man:security[7] and man:named[8] man pages.
+
Take the `ntalk` service, for example (see man:inetd[8]). This service used to run as user ID `root`. Now it runs as user ID `tty`. The `tty` user is a sandbox designed to make it more difficult for someone who has successfully hacked into the system via `ntalk` to hack beyond that user ID.
* A process which is placed inside a simulation of the machine. It means that someone who is able to break into the process may believe that he can break into the wider machine but is, in fact, only breaking into a simulation of that machine and not modifying any real data.
+
The most common way to accomplish this is to build a simulated environment in a subdirectory and then run the processes in that directory chrooted, so that [.filename]#/# for that process is this directory, not the real [.filename]#/# of the system.
+
Another common use is to mount an underlying file system read-only and then create a file system layer on top of it that gives a process a seemingly writeable view into that file system. The process may believe it is able to write to those files, but only the process sees the effects - other processes in the system do not, necessarily.
+
An attempt is made to make this sort of sandbox so transparent that the user (or hacker) does not realize that he is sitting in it.
UNIX(R) implements two core sandboxes. One is at the process level, and one is at the userid level.
Every UNIX(R) process is completely firewalled off from every other UNIX(R) process. One process cannot modify the address space of another.
A UNIX(R) process is owned by a particular userid. If the user ID is not the `root` user, it serves to firewall the process off from processes owned by other users. The user ID is also used to firewall off on-disk data.
[[securelevel]]
=== What is securelevel?
`securelevel` is a security mechanism implemented in the kernel. When the securelevel is positive, the kernel restricts certain tasks; not even the superuser (`root`) is allowed to do them. The securelevel mechanism limits the ability to:
* Unset certain file flags, such as `schg` (the system immutable flag).
* Write to kernel memory via [.filename]#/dev/mem# and [.filename]#/dev/kmem#.
* Load kernel modules.
* Alter firewall rules.
To check the status of the securelevel on a running system:
[source,shell]
....
# sysctl -n kern.securelevel
....
The output contains the current value of the securelevel. If it is greater than 0, at least some of the securelevel's protections are enabled.
The securelevel of a running system cannot be lowered as this would defeat its purpose. If a task requires that the securelevel be non-positive, change the `kern_securelevel` and `kern_securelevel_enable` variables in [.filename]#/etc/rc.conf# and reboot.
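For instance, to keep the securelevel non-positive after the next reboot, the boot-time raise can be disabled in [.filename]#/etc/rc.conf#; a minimal sketch:
[.programlisting]
....
# Stop raising the securelevel at boot; it then stays at its default of -1:
kern_securelevel_enable="NO"
....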
For more information on securelevel and the specific things all the levels do, consult man:init[8].
[WARNING]
====
Securelevel is not a silver bullet; it has many known deficiencies. More often than not, it provides a false sense of security.
One of its biggest problems is that in order for it to be at all effective, all files used in the boot process up until the securelevel is set must be protected. If an attacker can get the system to execute their code prior to the securelevel being set (which happens quite late in the boot process since some things the system must do at start-up cannot be done at an elevated securelevel), its protections are invalidated. While this task of protecting all files used in the boot process is not technically impossible, if it is achieved, system maintenance will become a nightmare since one would have to take the system down, at least to single-user mode, to modify a configuration file.
This point and others are often discussed on the mailing lists, particularly the {freebsd-security}. Search the archives link:https://www.FreeBSD.org/search/[here] for an extensive discussion. A more fine-grained mechanism is preferred.
====
[[toor-account]]
=== What is this UID 0 toor account? Have I been compromised?
Do not worry. `toor` is an "alternative" superuser account, where toor is root spelled backwards. It is intended to be used with a non-standard shell so the default shell for `root` does not need to change. This is important as shells which are not part of the base distribution, but are instead installed from ports or packages, are installed in [.filename]#/usr/local/bin# which, by default, resides on a different file system. If ``root``'s shell is located in [.filename]#/usr/local/bin# and the file system containing [.filename]#/usr/local/bin# is not mounted, `root` will not be able to log in to fix a problem and will have to reboot into single-user mode in order to enter the path to a shell.
Some people use `toor` for day-to-day `root` tasks with a non-standard shell, leaving `root`, with a standard shell, for single-user mode or emergencies. By default, a user cannot log in using `toor` as it does not have a password, so log in as `root` and set a password for `toor` before using it to log in.
[[serial]]
== Serial Communications
This section answers common questions about serial communications with FreeBSD.
[[serial-console-prompt]]
=== How do I get the boot: prompt to show on the serial console?
See link:{handbook}#serialconsole-setup/[this section of the Handbook].
[[found-serial]]
=== How do I tell if FreeBSD found my serial ports or modem cards?
As the FreeBSD kernel boots, it will probe for the serial ports for which the kernel is configured. Either watch the boot messages closely or run this command after the system is up and running:
[source,shell]
....
% grep -E '^(sio|uart)[0-9]' < /var/run/dmesg.boot
sio0: <16550A-compatible COM port> port 0x3f8-0x3ff irq 4 flags 0x10 on acpi0
sio0: type 16550A
sio1: <16550A-compatible COM port> port 0x2f8-0x2ff irq 3 on acpi0
sio1: type 16550A
....
This example shows two serial ports. The first is on IRQ4, port address `0x3f8`, and has a 16550A-type UART chip. The second uses the same kind of chip but is on IRQ3 and is at port address `0x2f8`. Internal modem cards are treated just like serial ports, except that they always have a modem attached to the port.
The [.filename]#GENERIC# kernel includes support for two serial ports using the same IRQ and port address settings in the above example. If these settings are not right for the system, or if there are more modem cards or serial ports than the kernel is configured for, reconfigure the kernel using the instructions in <<make-kernel,building a kernel>>.
[[access-serial-ports]]
=== How do I access the serial ports on FreeBSD? (x86-specific)
The third serial port, [.filename]#sio2#, or [.filename]#COM3#, is on [.filename]#/dev/cuad2# for dial-out devices, and on [.filename]#/dev/ttyd2# for dial-in devices. What is the difference between these two classes of devices?
When opening [.filename]#/dev/ttydX# in blocking mode, a process will wait for the corresponding [.filename]#cuadX# device to become inactive, and then wait for the carrier detect line to go active. When the [.filename]#cuadX# device is opened, it makes sure the serial port is not already in use by the [.filename]#ttydX# device. If the port is available, it steals it from the [.filename]#ttydX# device. Also, the [.filename]#cuadX# device does not care about carrier detect. With this scheme and an auto-answer modem, remote users can log in and local users can still dial out with the same modem and the system will take care of all the conflicts.
[[enable-multiport-serial]]
=== How do I enable support for a multi-port serial card?
The section on kernel configuration provides information about configuring the kernel. For a multi-port serial card, place an man:sio[4] line for each serial port on the card in the man:device.hints[5] file. But place the IRQ specifiers on only one of the entries. All of the ports on the card should share one IRQ. For consistency, use the last serial port to specify the IRQ. Also, specify the following option in the kernel configuration file:
[.programlisting]
....
options COM_MULTIPORT
....
The following [.filename]#/boot/device.hints# example is for an AST 4-port serial card on IRQ 12:
[.programlisting]
....
hint.sio.4.at="isa"
hint.sio.4.port="0x2a0"
hint.sio.4.flags="0x701"
hint.sio.5.at="isa"
hint.sio.5.port="0x2a8"
hint.sio.5.flags="0x701"
hint.sio.6.at="isa"
hint.sio.6.port="0x2b0"
hint.sio.6.flags="0x701"
hint.sio.7.at="isa"
hint.sio.7.port="0x2b8"
hint.sio.7.flags="0x701"
hint.sio.7.irq="12"
....
The flags indicate that the master port has minor number `7` (`0x700`), and all the ports share an IRQ (`0x001`).
[[default-serial-params]]
=== Can I set the default serial parameters for a port?
See the link:{handbook}#serial/#serial-hw-config[Serial Communications] section in the FreeBSD Handbook.
[[cannot-tip]]
=== Why can I not run tip or cu?
The built-in man:tip[1] and man:cu[1] utilities can only access the [.filename]#/var/spool/lock# directory via user `uucp` and group `dialer`. Use the `dialer` group to control who has access to the modem or remote systems by adding user accounts to `dialer`.
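For example, to add a hypothetical user `lisa` to the `dialer` group:
[source,shell]
....
# pw groupmod dialer -m lisa
....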
Alternatively, everyone can be configured to run man:tip[1] and man:cu[1] by typing:
[source,shell]
....
# chmod 4511 /usr/bin/cu
# chmod 4511 /usr/bin/tip
....
[[misc]]
== Miscellaneous Questions
[[more-swap]]
=== FreeBSD uses a lot of swap space even when the computer has free memory left. Why?
FreeBSD will proactively move entirely idle, unused pages of main memory into swap in order to make more main memory available for active use. This heavy use of swap is balanced by using the extra free memory for caching.
Note that while FreeBSD is proactive in this regard, it does not arbitrarily decide to swap pages when the system is truly idle. Thus, the system will not be all paged out after leaving it idle overnight.
[[top-freemem]]
=== Why does top show very little free memory even when I have very few programs running?
The simple answer is that free memory is wasted memory. Any memory that programs do not actively allocate is used within the FreeBSD kernel as disk cache. The values shown by man:top[1] labeled as `Inact` and `Laundry` are cached data at different aging levels. This cached data means the system does not have to access a slow disk again for data it has accessed recently, thus increasing overall performance. In general, a low value shown for `Free` memory in man:top[1] is good, provided it is not _very_ low.
[[chmod-symlinks]]
=== Why will `chmod` not change the permissions on symlinks?
Symlinks do not have permissions, and by default, man:chmod[1] will follow symlinks to change the permissions on the target file, if possible. For the file [.filename]#foo# with a symlink named [.filename]#bar#, this command will always succeed:
[source,shell]
....
% chmod g-w bar
....
However, the permissions on [.filename]#bar# will not have changed.
When changing modes of the file hierarchies rooted in the files instead of the files themselves, use either `-H` or `-L` together with `-R` to make this work. See man:chmod[1] and man:symlink[7] for more information.
[WARNING]
====
`-R` does a _recursive_ man:chmod[1]. Be careful about specifying directories or symlinks to directories to man:chmod[1]. To change the permissions of a directory referenced by a symlink, use man:chmod[1] without any options and follow the symlink with a trailing slash ([.filename]#/#). For example, if [.filename]#foo# is a symlink to directory [.filename]#bar#, to change the permissions of [.filename]#foo# (actually [.filename]#bar#), do something like:
[source,shell]
....
% chmod 555 foo/
....
With the trailing slash, man:chmod[1] will follow the symlink, [.filename]#foo#, to change the permissions of the directory, [.filename]#bar#.
====
[[dos-binaries]]
=== Can I run DOS binaries under FreeBSD?
Yes. A DOS emulation program, package:emulators/doscmd[], is available in the FreeBSD Ports Collection.
If doscmd will not suffice, package:emulators/pcemu[] emulates an 8088 and enough BIOS services to run many DOS text-mode applications. It requires the X Window System.
The Ports Collection also has package:emulators/dosbox[]. The main focus of this application is emulating old DOS games using the local file system for files.
[[translation]]
=== What do I need to do to translate a FreeBSD document into my native language?
See the link:{fdp-primer}#translations/[Translation FAQ] in the FreeBSD Documentation Project Primer.
[[freebsd-mail-bounces]]
=== Why does my email to any address at FreeBSD.org bounce?
The `FreeBSD.org` mail system implements some Postfix checks on incoming mail and rejects mail that is either from misconfigured relays or otherwise appears likely to be spam. Some of the specific requirements are:
* The IP address of the SMTP client must "reverse-resolve" to a forward confirmed hostname.
* The fully-qualified hostname given in the SMTP conversation (either HELO or EHLO) must resolve to the IP address of the client.
Other advice to help mail reach its destination includes:
* Mail should be sent in plain text, and messages sent to mailing lists should generally be no more than 200KB in length.
* Avoid excessive cross posting. Choose _one_ mailing list which seems most relevant and send it there.
If you still have trouble with email infrastructure at `FreeBSD.org`, send a note with the details to mailto:postmaster@freebsd.org[postmaster@freebsd.org]. Include a date/time interval so that logs may be reviewed, and note that we only keep one week's worth of mail logs. (Be sure to specify the time zone or offset from UTC.)
[[free-account]]
=== Where can I find a free FreeBSD account?
While FreeBSD does not provide open access to any of their servers, others do provide open access UNIX(R) systems. The charge varies and limited services may be available.
http://www.arbornet.org/[Arbornet, Inc], also known as _M-Net_, has been providing open access to UNIX(R) systems since 1983. Starting on an Altos running System III, the site switched to BSD/OS in 1991. In June of 2000, the site switched again to FreeBSD. _M-Net_ can be accessed via telnet and SSH and provides basic access to the entire FreeBSD software suite. However, network access is limited to members and patrons who donate to the system, which is run as a non-profit organization. _M-Net_ also provides a bulletin board system and interactive chat.
[[daemon-name]]
=== What is the cute little red guy's name?
He does not have one, and is just called "the BSD daemon". If you insist upon using a name, call him "beastie". Note that "beastie" is pronounced "BSD".
More about the BSD daemon is available on his http://www.mckusick.com/beastie/index.html[home page].
[[use-beastie]]
=== Can I use the BSD daemon image?
Perhaps. The BSD daemon is copyrighted by Marshall Kirk McKusick. Check his http://www.mckusick.com/beastie/mainpage/copyright.html[Statement on the Use of the BSD Daemon Figure] for detailed usage terms.
In summary, the image can be used in a tasteful manner, for personal use, so long as appropriate credit is given. Before using the logo commercially, contact {mckusick} for permission. More details are available on the http://www.mckusick.com/beastie/index.html[BSD Daemon's home page].
[[daemon-images]]
=== Do you have any BSD daemon images I could use?
Xfig and eps drawings are available under [.filename]#/usr/share/examples/BSD_daemon/#.
[[glossary]]
=== I have seen an acronym or other term on the mailing lists and I do not understand what it means. Where should I look?
Refer to the link:{handbook}#freebsd-glossary/[FreeBSD Glossary].
[[bikeshed-painting]]
=== Why should I care what color the bikeshed is?
The really, really short answer is that you should not. The somewhat longer answer is that just because you are capable of building a bikeshed does not mean you should stop others from building one just because you do not like the color they plan to paint it. This is a metaphor indicating that you need not argue about every little feature just because you know enough to do so. Some people have commented that the amount of noise generated by a change is inversely proportional to the complexity of the change.
The longer and more complete answer is that after a very long argument about whether man:sleep[1] should take fractional second arguments, {phk} posted a long message entitled link:http://www.bikeshed.com[A bike shed (any color will do) on greener grass...]. The appropriate portions of that message are quoted below.
****
“What is it about this bike shed?” Some of you have asked me.
It is a long story, or rather it is an old story, but it is quite short actually. C. Northcote Parkinson wrote a book in the early 1960s, called “Parkinson's Law”, which contains a lot of insight into the dynamics of management.
[snip a bit of commentary on the book]
In the specific example involving the bike shed, the other vital component is an atomic power-plant, I guess that illustrates the age of the book.
Parkinson shows how you can go into the board of directors and get approval for building a multi-million or even billion dollar atomic power plant, but if you want to build a bike shed you will be tangled up in endless discussions.
Parkinson explains that this is because an atomic plant is so vast, so expensive and so complicated that people cannot grasp it, and rather than try, they fall back on the assumption that somebody else checked all the details before it got this far. Richard P. Feynmann gives a couple of interesting, and very much to the point, examples relating to Los Alamos in his books.
A bike shed on the other hand. Anyone can build one of those over a weekend, and still have time to watch the game on TV. So no matter how well prepared, no matter how reasonable you are with your proposal, somebody will seize the chance to show that he is doing his job, that he is paying attention, that he is here.
In Denmark we call it “setting your fingerprint”. It is about personal pride and prestige, it is about being able to point somewhere and say “There! I did that.” It is a strong trait in politicians, but present in most people given the chance. Just think about footsteps in wet cement.
--Poul-Henning Kamp <phk@FreeBSD.org> on freebsd-hackers, October 2, 1999
****
[[funnies]]
== The FreeBSD Funnies
[[very-very-cool]]
=== How cool is FreeBSD?
[qanda]
Q::Has anyone done any temperature testing while running FreeBSD? I know Linux(R) runs cooler than DOS, but have never seen a mention of FreeBSD. It seems to run really hot.
A:: No, but we have done numerous taste tests on blindfolded volunteers who have also had 250 micrograms of LSD-25 administered beforehand. 35% of the volunteers said that FreeBSD tasted sort of orange, whereas Linux(R) tasted like purple haze. Neither group mentioned any significant variances in temperature. We eventually had to throw the results of this survey out entirely anyway when we found that too many volunteers were wandering out of the room during the tests, thus skewing the results. We think most of the volunteers are at Apple now, working on their new "scratch and sniff" GUI. It is a funny old business we are in!
Seriously, FreeBSD uses the HLT (halt) instruction when the system is idle thus lowering its energy consumption and therefore the heat it generates. Also if you have ACPI (Advanced Configuration and Power Interface) configured, then FreeBSD can also put the CPU into a low power mode.
[[letmeoutofhere]]
=== Who is scratching in my memory banks??
[qanda]
Q:: Is there anything "odd" that FreeBSD does when compiling the kernel which would cause the memory to make a scratchy sound? When compiling (and for a brief moment after recognizing the floppy drive upon startup, as well), a strange scratchy sound emanates from what appears to be the memory banks.
A:: Yes! You will see frequent references to "daemons" in the BSD documentation, and what most people do not know is that this refers to genuine, non-corporeal entities that now possess your computer. The scratchy sound coming from your memory is actually high-pitched whispering exchanged among the daemons as they best decide how to deal with various system administration tasks.
If the noise gets to you, a good `fdisk /mbr` from DOS will get rid of them, but do not be surprised if they react adversely and try to stop you. In fact, if at any point during the exercise you hear the satanic voice of Bill Gates coming from the built-in speaker, take off running and do not ever look back! Freed from the counterbalancing influence of the BSD daemons, the twin demons of DOS and Windows(R) are often able to re-assert total control over your machine to the eternal damnation of your soul. Now that you know, given a choice you would probably prefer to get used to the scratchy noises, no?
[[changing-lightbulbs]]
=== How many FreeBSD hackers does it take to change a lightbulb?
One thousand, one hundred and sixty-nine:
Twenty-three to complain to -CURRENT about the lights being out;
Four to claim that it is a configuration problem, and that such matters really belong on -questions;
Three to submit PRs about it, one of which is misfiled under doc and consists only of "it's dark";
One to commit an untested lightbulb which breaks buildworld, then back it out five minutes later;
Eight to flame the PR originators for not including patches in their PRs;
Five to complain about buildworld being broken;
Thirty-one to answer that it works for them, and they must have updated at a bad time;
One to post a patch for a new lightbulb to -hackers;
One to complain that he had patches for this three years ago, but when he sent them to -CURRENT they were just ignored, and he has had bad experiences with the PR system; besides, the proposed new lightbulb is non-reflexive;
Thirty-seven to scream that lightbulbs do not belong in the base system, that committers have no right to do things like this without consulting the Community, and WHAT IS -CORE DOING ABOUT IT!?
Two hundred to complain about the color of the bicycle shed;
Three to point out that the patch breaks man:style[9];
Seventeen to complain that the proposed new lightbulb is under GPL;
Five hundred and eighty-six to engage in a flame war about the comparative advantages of the GPL, the BSD license, the MIT license, the NPL, and the personal hygiene of unnamed FSF founders;
Seven to move various portions of the thread to -chat and -advocacy;
One to commit the suggested lightbulb, even though it shines dimmer than the old one;
Two to back it out with a furious flame of a commit message, arguing that FreeBSD is better off in the dark than with a dim lightbulb;
Forty-six to argue vociferously about the backing out of the dim lightbulb and demanding a statement from -core;
Eleven to request a smaller lightbulb so it will fit their Tamagotchi if we ever decide to port FreeBSD to that platform;
Seventy-three to complain about the SNR on -hackers and -chat and unsubscribe in protest;
Thirteen to post "unsubscribe", "How do I unsubscribe?", or "Please remove me from the list", followed by the usual footer;
One to commit a working lightbulb while everybody is too busy flaming everybody else to notice;
Thirty-one to point out that the new lightbulb would shine 0.364% brighter if compiled with TenDRA (although it will have to be reshaped into a cube), and that FreeBSD should therefore switch to TenDRA instead of GCC;
One to complain that the new lightbulb lacks fairings;
Nine (including the PR originators) to ask "what is MFC?";
Fifty-seven to complain about the lights being out two weeks after the bulb has been changed.
_{nik} adds:_
_I was laughing quite hard at this._
_And then I thought, "Hang on, shouldn't there be '1 to document it.' in that list somewhere?"_
_And then I was enlightened :-)_
_{tabthorpe}_ says: "None, _real_ FreeBSD hackers are not afraid of the dark!"
[[dev-null]]
=== Where does data written to [.filename]#/dev/null# go?
It goes into a special data sink in the CPU where it is converted to heat which is vented through the heatsink / fan assembly. This is why CPU cooling is increasingly important; as people get used to faster processors, they become careless with their data and more and more of it ends up in [.filename]#/dev/null#, overheating their CPUs. If you delete [.filename]#/dev/null# (which effectively disables the CPU data sink) your CPU may run cooler but your system will quickly become constipated with all that excess data and start to behave erratically. If you have a fast network connection you can cool down your CPU by reading data out of [.filename]#/dev/random# and sending it off somewhere; however you run the risk of overheating your network connection and [.filename]#/# or angering your ISP, as most of the data will end up getting converted to heat by their equipment, but they generally have good cooling, so if you do not overdo it you should be OK.
_Paul Robinson adds:_
There are other methods. As every good sysadmin knows, it is part of standard practice to send data to the screen of interesting variety to keep all the pixies that make up your picture happy. Screen pixies (commonly mis-typed or re-named as "pixels") are categorized by the type of hat they wear (red, green or blue) and will hide or appear (thereby showing the color of their hat) whenever they receive a little piece of food. Video cards turn data into pixie-food, and then send them to the pixies - the more expensive the card, the better the food, so the better behaved the pixies are. They also need constant stimulation - this is why screen savers exist.
To take your suggestions further, you could just throw the random data to console, thereby letting the pixies consume it. This causes no heat to be produced at all, keeps the pixies happy and gets rid of your data quite quickly, even if it does make things look a bit messy on your screen.
Incidentally, as an ex-admin of a large ISP who experienced many problems attempting to maintain a stable temperature in a server room, I would strongly discourage people sending the data they do not want out to the network. The fairies who do the packet switching and routing get annoyed by it as well.
[[punk-my-friend]]
=== My colleague sits at the computer too much, how can I prank her?
Install package:games/sl[] and wait for her to mistype `sl` for `ls`.
[[advanced]]
== Advanced Topics
[[learn-advanced]]
=== How can I learn more about FreeBSD's internals?
See the link:{arch-handbook}[FreeBSD Architecture Handbook].
Additionally, much general UNIX(R) knowledge is directly applicable to FreeBSD.
[[how-to-contribute]]
=== How can I contribute to FreeBSD? What can I do to help?
We accept all types of contributions: documentation, code, and even art. See the article on link:{contributing}[Contributing to FreeBSD] for specific advice on how to do this.
And thanks for the thought!
[[define-snap-release]]
=== What are snapshots and releases?
There are currently {rel-numbranch} active/semi-active branches in the FreeBSD http://svnweb.FreeBSD.org/base/[Subversion Repository]. (Earlier branches are only changed very rarely, which is why there are only {rel-numbranch} active branches of development):
* {rel2-releng} AKA {rel2-stable}
* {rel-releng} AKA {rel-stable}
* {rel-head-releng} AKA _-CURRENT_ AKA {rel-head}
`HEAD` is not an actual branch tag. It is a symbolic constant for the current, non-branched development stream known as _-CURRENT_.
Right now, _-CURRENT_ is the {rel-head-relx} development stream; the {rel-stable} branch, {rel-releng}, forked off from _-CURRENT_ in {rel-relengdate} and the {rel2-stable} branch, {rel2-releng}, forked off from _-CURRENT_ in {rel2-relengdate}.
[[kernel-panic-troubleshooting]]
=== How can I make the most of the data I see when my kernel panics?
Here is a typical kernel panic:
[.programlisting]
....
Fatal trap 12: page fault while in kernel mode
fault virtual address = 0x40
fault code = supervisor read, page not present
instruction pointer = 0x8:0xf014a7e5
stack pointer = 0x10:0xf4ed6f24
frame pointer = 0x10:0xf4ed6f28
code segment = base 0x0, limit 0xfffff, type 0x1b
= DPL 0, pres 1, def32 1, gran 1
processor eflags = interrupt enabled, resume, IOPL = 0
current process = 80 (mount)
interrupt mask =
trap number = 12
panic: page fault
....
This message is not enough. While the instruction pointer value is important, it is also configuration dependent as it varies depending on the kernel image. If it is a [.filename]#GENERIC# kernel image from one of the snapshots, it is possible for somebody else to track down the offending function, but for a custom kernel, only you can tell us where the fault occurred.
To proceed:
[.procedure]
====
. Write down the instruction pointer value. Note that the `0x8:` part at the beginning is not significant in this case: it is the `0xf0xxxxxx` part that we want.
. When the system reboots, do the following:
+
[source,shell]
....
% nm -n kernel.that.caused.the.panic | grep f0xxxxxx
....
+
where `f0xxxxxx` is the instruction pointer value. The odds are you will not get an exact match since the symbols in the kernel symbol table are for the entry points of functions and the instruction pointer address will be somewhere inside a function, not at the start. If you do not get an exact match, omit the last digit from the instruction pointer value and try again:
+
[source,shell]
....
% nm -n kernel.that.caused.the.panic | grep f0xxxxx
....
+
If that does not yield any results, chop off another digit. Repeat until there is some sort of output. The result will be a possible list of functions which caused the panic. This is a less than exact mechanism for tracking down the point of failure, but it is better than nothing.
====
However, the best way to track down the cause of a panic is by capturing a crash dump, then using man:kgdb[1] to generate a stack trace on the crash dump.
In any case, the method is this:
[.procedure]
====
. Make sure that the following line is included in the kernel configuration file:
+
[.programlisting]
....
makeoptions DEBUG=-g # Build kernel with gdb(1) debug symbols
....
. Change to the [.filename]#/usr/src# directory:
+
[source,shell]
....
# cd /usr/src
....
. Compile the kernel:
+
[source,shell]
....
# make buildkernel KERNCONF=MYKERNEL
....
. Wait for man:make[1] to finish compiling, then install the new kernel:
+
[source,shell]
....
# make installkernel KERNCONF=MYKERNEL
....
. Reboot.
====
[NOTE]
====
If `KERNCONF` is not included, the [.filename]#GENERIC# kernel will instead be built and installed.
====
The man:make[1] process will have built two kernels: [.filename]#/usr/obj/usr/src/sys/MYKERNEL/kernel# and [.filename]#/usr/obj/usr/src/sys/MYKERNEL/kernel.debug#. [.filename]#kernel# was installed as [.filename]#/boot/kernel/kernel#, while [.filename]#kernel.debug# can be used as the source of debugging symbols for man:kgdb[1].
To capture a crash dump, edit [.filename]#/etc/rc.conf# and set `dumpdev` to point to either the swap partition or `AUTO`. This will cause the man:rc[8] scripts to use the man:dumpon[8] command to enable crash dumps. This command can also be run manually. After a panic, the crash dump can be recovered using man:savecore[8]; if `dumpdev` is set in [.filename]#/etc/rc.conf#, the man:rc[8] scripts will run man:savecore[8] automatically and put the crash dump in [.filename]#/var/crash#.
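As a minimal sketch (the swap device name [.filename]#/dev/ada0p3# below is only an illustration; use the local swap partition):
[source,shell]
....
# sysrc dumpdev="AUTO"
# dumpon -v /dev/ada0p3
# savecore /var/crash /dev/ada0p3
....
The first command stores the setting in [.filename]#/etc/rc.conf#; the other two show the manual equivalents of what the man:rc[8] scripts do at boot and after a panic.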
[NOTE]
====
FreeBSD crash dumps are usually the same size as physical RAM. Therefore, make sure there is enough space in [.filename]#/var/crash# to hold the dump. Alternatively, run man:savecore[8] manually and have it recover the crash dump to another directory with more room. It is possible to limit the size of the crash dump by using `options MAXMEM=N` where _N_ is the size of the kernel's memory usage in KB. For example, for 1 GB of RAM, limit the kernel's memory usage to 128 MB, so that the crash dump size will be 128 MB instead of 1 GB.
====
Once the crash dump has been recovered, get a stack trace as follows:
[source,shell]
....
% kgdb /usr/obj/usr/src/sys/MYKERNEL/kernel.debug /var/crash/vmcore.0
(kgdb) backtrace
....
Note that there may be several screens worth of information. Ideally, use man:script[1] to capture all of them. Using the unstripped kernel image with all the debug symbols should show the exact line of kernel source code where the panic occurred. The stack trace is usually read from the bottom up to trace the exact sequence of events that led to the crash. man:kgdb[1] can also be used to print out the contents of various variables or structures to examine the system state at the time of the crash.
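For example, to inspect a particular frame from the backtrace (the frame number and the `td` variable below are only placeholders for whatever the trace actually shows), the usual man:gdb[1]-style commands work at the `(kgdb)` prompt:
[source,shell]
....
(kgdb) frame 12
(kgdb) list
(kgdb) print *td
....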
[TIP]
====
If a second computer is available, man:kgdb[1] can be configured to do remote debugging, including setting breakpoints and single-stepping through the kernel code.
====
[NOTE]
====
If `DDB` is enabled and the kernel drops into the debugger, a panic and a crash dump can be forced by typing `panic` at the `ddb` prompt. It may stop in the debugger again during the panic phase. If it does, type `continue` and it will finish the crash dump.
====
[[dlsym-failure]]
=== Why has dlsym() stopped working for ELF executables?
The ELF toolchain does not, by default, make the symbols defined in an executable visible to the dynamic linker. Consequently `dlsym()` searches on handles obtained from calls to `dlopen(NULL, flags)` will fail to find such symbols.
To search, using `dlsym()`, for symbols present in the main executable of a process, link the executable using the `--export-dynamic` option to the ELF linker (man:ld[1]).
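For example, a minimal sketch of such a link step (file names are placeholders):
[source,shell]
....
% cc -o prog prog.c -Wl,--export-dynamic
....
The `-Wl` prefix simply passes `--export-dynamic` through the compiler driver to man:ld[1].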
[[change-kernel-address-space]]
=== How can I increase or reduce the kernel address space on i386?
By default, the kernel address space is 1 GB (2 GB for PAE) for i386. When running a network-intensive server or using ZFS, this will probably not be enough.
Add the following line to the kernel configuration file to increase available space and rebuild the kernel:
[.programlisting]
....
options KVA_PAGES=N
....
To find the correct value of _N_, divide the desired address space size (in megabytes) by four. (For example, it is `512` for 2 GB.)
[[acknowledgments]]
== Acknowledgments
This innocent little Frequently Asked Questions document has been written, rewritten, edited, folded, spindled, mutilated, eviscerated, contemplated, discombobulated, cogitated, regurgitated, rebuilt, castigated, and reinvigorated over the last decade, by a cast of hundreds if not thousands. Repeatedly.
We wish to thank every one of the people responsible, and we encourage you to link:{contributing}[join them] in making this FAQ even better.
diff --git a/documentation/content/en/books/fdp-primer/_index.adoc b/documentation/content/en/books/fdp-primer/_index.adoc
index 3ec2036db2..403e249f11 100644
--- a/documentation/content/en/books/fdp-primer/_index.adoc
+++ b/documentation/content/en/books/fdp-primer/_index.adoc
@@ -1,32 +1,33 @@
---
title: FreeBSD Documentation Project Primer for New Contributors
authors:
- author: The FreeBSD Documentation Project
copyright: 1998-2021 DocEng
-trademarks: ["general"]
+trademarks: ["general"]
+description: FreeBSD Documentation Project Primer for New Contributors Index
next: books/fdp-primer/preface
---
= FreeBSD Documentation Project Primer for New Contributors
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:experimental:
[.abstract-title]
Abstract
Thank you for becoming a part of the FreeBSD Documentation Project.
Your contribution is extremely valuable, and we appreciate it.
This primer covers details needed to start contributing to the FreeBSD Documentation Project, or FDP, including tools, software, and the philosophy behind the Documentation Project.
This is a work in progress.
Corrections and additions are always welcome.
'''
include::content/en/books/fdp-primer/toc.adoc[]
diff --git a/documentation/content/en/books/fdp-primer/asciidoctor-primer/_index.adoc b/documentation/content/en/books/fdp-primer/asciidoctor-primer/_index.adoc
index c2b5bc2ee8..87ace29ecb 100644
--- a/documentation/content/en/books/fdp-primer/asciidoctor-primer/_index.adoc
+++ b/documentation/content/en/books/fdp-primer/asciidoctor-primer/_index.adoc
@@ -1,224 +1,225 @@
---
title: Chapter 6. AsciiDoctor Primer
prev: books/fdp-primer/doc-build
next: books/fdp-primer/rosetta
+description: A brief introduction to AsciiDoctor
---
[[asciidoctor-primer]]
= AsciiDoctor Primer
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 6
include::shared/en/urls.adoc[]
toc::[]
Most FDP documentation is written with AsciiDoc.
This chapter explains what that means, how to read and understand the documentation source, and the techniques used.
For a complete reference of AsciiDoctor's capabilities, please consult the link:https://docs.asciidoctor.org/home/[Asciidoctor documentation].
Some of the examples used in this chapter have been taken from the link:https://docs.asciidoctor.org/asciidoc/latest/syntax-quick-reference[AsciiDoc Syntax Quick Reference].
[[asciidoctor-primer-overview]]
== Overview
In the original days of computers, electronic text was simple.
There were a few character sets like ASCII or EBCDIC, but that was about it.
Text was text, and what you saw really was what you got.
No frills, no formatting, no intelligence.
Inevitably, this was not enough.
When text is in a machine-usable format, machines are expected to be able to use and manipulate it intelligently.
Authors want to indicate that certain phrases should be emphasized, or added to a glossary, or made into hyperlinks.
Filenames could be shown in a “typewriter” style font for viewing on screen, but as “italics” when printed, or any of a myriad of other options for presentation.
It was once hoped that Artificial Intelligence (AI) would make this easy.
The computer would read the document and automatically identify key phrases, filenames, text that the reader should type in, examples, and more.
Unfortunately, real life has not happened quite like that, and computers still require assistance before they can meaningfully process text.
More precisely, they need help identifying what is what.
Consider this text:
To remove [.filename]#/tmp/foo#, use man:rm[1].
[source,shell]
----
% rm /tmp/foo
----
It is easy for the reader to see which parts are filenames, which are commands to be typed in, which parts are references to manual pages, and so on.
But the computer processing the document cannot reliably determine this.
For this we need markup.
The previous example is actually represented in this document like this:
....
To remove [.filename]#/tmp/foo#, use man:rm[1].
[source,shell]
----
% rm /tmp/foo
----
....
[[asciidoctor-headings]]
== Headings
AsciiDoctor supports six heading levels.
If the document type is `article`, only one level 0 (`=`) heading can be used.
If the document type is `book`, there can be multiple level 0 (`=`) headings.
This is an example of headings in an `article`.
....
= Document Title (Level 0)
== Level 1 Section Title
=== Level 2 Section Title
==== Level 3 Section Title
===== Level 4 Section Title
====== Level 5 Section Title
== Another Level 1 Section Title
....
[WARNING]
====
Section levels cannot be skipped when nesting sections.
The following syntax is not correct.
....
= Document Title
== Level 1
==== Level 3
....
====
[[asciidoctor-paragraphs]]
== Paragraphs
Paragraphs don't require special markup in AsciiDoc.
A paragraph is defined by one or more consecutive lines of text.
To create a new paragraph leave one blank line.
For example, this is a heading with two paragraphs.
....
= This is the heading
This is the first paragraph.
This is also the first paragraph.
And this is the second paragraph.
....
[[asciidoctor-lists]]
== Lists
AsciiDoctor supports two types of lists: ordered and unordered.
To get more information about lists, check the link:https://docs.asciidoctor.org/asciidoc/latest/syntax-quick-reference/#lists[AsciiDoc Syntax Quick Reference].
[[asciidoctor-ordered-lists]]
=== Ordered lists
To create an ordered list use the `.` character.
For example, this is an ordered list.
....
. First item
. Second item
.. Subsecond item
. Third item
....
And this would be rendered as:
. First item
. Second item
.. Subsecond item
. Third item
[[asciidoctor-unordered-lists]]
=== Unordered lists
To create an unordered list use the `*` character.
For example, this is an unordered list.
....
* First item
* Second item
** Subsecond item
* Third item
....
And this would be rendered as:
* First item
* Second item
** Subsecond item
* Third item
[[asciidoctor-links]]
== Links
[[asciidoctor-links-external]]
=== External links
To point to another website the `link` macro should be used.
....
link:https://www.FreeBSD.org[FreeBSD]
....
[NOTE]
====
As the AsciiDoctor documentation describes, the `link` macro is not required when the target starts with a URL scheme like `https`.
However, it is a good practice to use it anyway to ensure that AsciiDoctor renders the link correctly, especially in non-Latin languages like Japanese.
====
[[asciidoctor-links-internal]]
=== Internal link
To point to another book or article the AsciiDoctor variables should be used.
For example, if we are in the `cups` article and we want to point to `ipsec-must` these steps should be used.
. Include the [.filename]#urls.adoc# file from [.filename]#~/doc/shared# folder.
+
....
\include::shared/{lang}/urls.adoc[]
....
+
. Then create a link using the AsciiDoctor variable to the `ipsec-must` article.
+
....
link:{ipsec-must}[IPSec-Must article]
....
+
And this would be rendered as.
+
link:{ipsec-must}[IPSec-Must article]
[[asciidoctor-conclusion]]
== Conclusion
This is the conclusion of this AsciiDoctor primer.
For reasons of space and complexity, several things have not been covered in depth (or at all).
diff --git a/documentation/content/en/books/fdp-primer/book.adoc b/documentation/content/en/books/fdp-primer/book.adoc
index b12b59b48e..f3668ced27 100644
--- a/documentation/content/en/books/fdp-primer/book.adoc
+++ b/documentation/content/en/books/fdp-primer/book.adoc
@@ -1,114 +1,115 @@
---
title: FreeBSD Documentation Project Primer for New Contributors
authors:
- author: The FreeBSD Documentation Project
copyright: 1998-2021 DocEng
-trademarks: ["general"]
+description: FreeBSD Documentation Project Primer for New Contributors Index
+trademarks: ["general"]
---
= FreeBSD Documentation Project Primer for New Contributors
:doctype: book
:toc: macro
:toclevels: 2
:icons: font
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnums:
:sectnumlevels: 6
:partnums:
:chapter-signifier: Chapter
:part-signifier: Part
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:book: true
:pdf: false
ifeval::["{backend}" == "html5"]
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
:chapters-path: content/en/books/fdp-primer/
endif::[]
ifeval::["{backend}" == "pdf"]
include::../../../../shared/mirrors.adoc[]
include::../../../../shared/authors.adoc[]
include::../../../../shared/releases.adoc[]
include::../../../../shared/en/mailing-lists.adoc[]
include::../../../../shared/en/teams.adoc[]
include::../../../../shared/en/urls.adoc[]
:chapters-path:
endif::[]
ifeval::["{backend}" == "epub3"]
include::../../../../shared/mirrors.adoc[]
include::../../../../shared/authors.adoc[]
include::../../../../shared/releases.adoc[]
include::../../../../shared/en/mailing-lists.adoc[]
include::../../../../shared/en/teams.adoc[]
include::../../../../shared/en/urls.adoc[]
:chapters-path:
endif::[]
[.abstract-title]
[abstract]
Abstract
Thank you for becoming a part of the FreeBSD Documentation Project. Your contribution is extremely valuable, and we appreciate it.
This primer covers details needed to start contributing to the FreeBSD Documentation Project, or FDP, including tools, software, and the philosophy behind the Documentation Project.
This is a work in progress. Corrections and additions are always welcome.
'''
toc::[]
include::content/en/books/fdp-primer/toc-figures.adoc[]
include::content/en/books/fdp-primer/toc-tables.adoc[]
include::content/en/books/fdp-primer/toc-examples.adoc[]
:sectnums!:
-include::{chapters-path}preface/_index.adoc[leveloffset=+1, lines=7..-1]
+include::{chapters-path}preface/_index.adoc[leveloffset=+1, lines=8..-1]
:sectnums:
-include::{chapters-path}overview/_index.adoc[leveloffset=+1, lines=7..21; 25..-1]
+include::{chapters-path}overview/_index.adoc[leveloffset=+1, lines=8..22; 26..-1]
-include::{chapters-path}tools/_index.adoc[leveloffset=+1, lines=7..21; 25..-1]
+include::{chapters-path}tools/_index.adoc[leveloffset=+1, lines=8..22; 26..-1]
-include::{chapters-path}working-copy/_index.adoc[leveloffset=+1, lines=7..21; 25..-1]
+include::{chapters-path}working-copy/_index.adoc[leveloffset=+1, lines=8..22; 26..-1]
-include::{chapters-path}structure/_index.adoc[leveloffset=+1, lines=7..21; 25..-1]
+include::{chapters-path}structure/_index.adoc[leveloffset=+1, lines=8..22; 26..-1]
-include::{chapters-path}doc-build/_index.adoc[leveloffset=+1, lines=7..21; 25..-1]
+include::{chapters-path}doc-build/_index.adoc[leveloffset=+1, lines=8..22; 26..-1]
-include::{chapters-path}asciidoctor-primer/_index.adoc[leveloffset=+1, lines=7..21; 27..-1]
+include::{chapters-path}asciidoctor-primer/_index.adoc[leveloffset=+1, lines=8..22; 28..-1]
-include::{chapters-path}rosetta/_index.adoc[leveloffset=+1, lines=7..21; 25..-1]
+include::{chapters-path}rosetta/_index.adoc[leveloffset=+1, lines=8..22; 26..-1]
-include::{chapters-path}translations/_index.adoc[leveloffset=+1, lines=7..21; 28..-1]
+include::{chapters-path}translations/_index.adoc[leveloffset=+1, lines=8..22; 29..-1]
-include::{chapters-path}po-translations/_index.adoc[leveloffset=+1, lines=7..21; 27..-1]
+include::{chapters-path}po-translations/_index.adoc[leveloffset=+1, lines=8..22; 28..-1]
-include::{chapters-path}manual-pages/_index.adoc[leveloffset=+1, lines=7..21; 25..-1]
+include::{chapters-path}manual-pages/_index.adoc[leveloffset=+1, lines=8..22; 26..-1]
-include::{chapters-path}writing-style/_index.adoc[leveloffset=+1, lines=7..21; 27..-1]
+include::{chapters-path}writing-style/_index.adoc[leveloffset=+1, lines=8..22; 28..-1]
-include::{chapters-path}editor-config/_index.adoc[leveloffset=+1, lines=7..21; 25..-1]
+include::{chapters-path}editor-config/_index.adoc[leveloffset=+1, lines=8..22; 26..-1]
-include::{chapters-path}see-also/_index.adoc[leveloffset=+1, lines=7..21; 27..-1]
+include::{chapters-path}see-also/_index.adoc[leveloffset=+1, lines=8..22; 28..-1]
:sectnums!:
-include::{chapters-path}examples/_index.adoc[leveloffset=+1, lines=6..21; 25..-1]
+include::{chapters-path}examples/_index.adoc[leveloffset=+1, lines=7..22; 26..-1]
:sectnums:
diff --git a/documentation/content/en/books/fdp-primer/doc-build/_index.adoc b/documentation/content/en/books/fdp-primer/doc-build/_index.adoc
index c3a0404fd0..ef8fec17bf 100644
--- a/documentation/content/en/books/fdp-primer/doc-build/_index.adoc
+++ b/documentation/content/en/books/fdp-primer/doc-build/_index.adoc
@@ -1,282 +1,283 @@
---
title: Chapter 5. The FreeBSD Documentation Build Process
prev: books/fdp-primer/structure
next: books/fdp-primer/asciidoctor-primer
+description: Describes the FreeBSD Documentation Build Process
---
[[doc-build]]
= The FreeBSD Documentation Build Process
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 5
toc::[]
This chapter covers organization of the documentation build process and how man:make[1] is used to control it.
[[doc-build-rendering]]
== Rendering AsciiDoc into Output
Different types of output can be produced from a single AsciiDoc source file.
[cols="20%,20%,60%", frame="none", options="header"]
|===
| Formats
| File Type
| Description
|`html`
|HTML
|An `article` or `book` chapter.
|`pdf`
|PDF
|Portable Document Format.
|`epub`
|EPUB
|Electronic Publication.
ePub file format.
|===
[[doc-build-rendering-html]]
=== Rendering to html
To render the documentation and the website to `html` use one of the following examples.
[[documentation-build-example]]
.Build the documentation
[example]
====
[source,shell]
....
% cd ~/doc/documentation
% make
....
====
[[website-build-example]]
.Build the website
[example]
====
[source,shell]
....
% cd ~/doc/website
% make
....
====
[[project-build-example]]
.Build the entire documentation project
[example]
====
[source,shell]
....
% cd ~/doc
% make -j2
....
====
[[doc-build-rendering-pdf]]
=== Rendering to pdf
To generate a document in `pdf` format use this command.
In this example the English Handbook will be used.
In order to export the document correctly all the extensions should be passed using the `-r` argument.
[[document-pdf-example]]
.Build a document in pdf
[example]
====
[source,shell]
....
% cd ~/doc/documentation
% asciidoctor-pdf \
-r ./shared/lib/man-macro.rb \
-r ./shared/lib/git-macro.rb \
-r ./shared/lib/packages-macro.rb \
-r ./shared/lib/inter-document-references-macro.rb \
-r ./shared/lib/sectnumoffset-treeprocessor.rb \
--doctype=book \
-a skip-front-matter \
-a pdf-theme=./themes/default-pdf-theme.yml \
-o /tmp/handbook.pdf \
content/en/books/handbook/book.adoc
....
====
[[doc-build-toolset]]
== The FreeBSD Documentation Build Toolset
These are the tools used to build and install the FDP documentation.
* The primary build tool is man:make[1], specifically Berkeley Make.
* Python is used to generate the Table of Contents and other related Tables.
* Hugo
* AsciiDoctor
* Git
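One way to install this toolset is through the package:textproc/docproj[] meta-port, assuming it still pulls in the full documentation toolchain:
[source,shell]
....
# pkg install docproj
....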
[[doc-build-makefile]]
== Understanding the Makefile in the Documentation Tree
There are three [.filename]#Makefile# files for building some or all of the documentation project.
* The [.filename]#Makefile# in the [.filename]#documentation# directory will build only the documentation.
* The [.filename]#Makefile# in the [.filename]#website# directory will build only the website.
* The [.filename]#Makefile# at the top of the tree will build both the documentation and the website.
The [.filename]#Makefile# files appearing in subdirectories also support `make run` to serve built content with Hugo's internal webserver.
This webserver runs on port 1313 by default.
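For example, to serve the already-built documentation locally:
[source,shell]
....
% cd ~/doc/documentation
% make run
....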
[[documentation-makefile]]
=== Documentation Makefile
This [.filename]#Makefile# takes the following form:
[source,shell]
....
# Generate the FreeBSD documentation
#
# Copyright (c) 2020-2021, The FreeBSD Documentation Project
# Copyright (c) 2020-2021, Sergio Carlavilla <carlavilla@FreeBSD.org>
#
# Targets intended for use on the command line
#
# all (default) - generate the books TOC and compile all the documentation
# run - serves the built documentation site for local browsing
#
# The run target uses hugo's built-in webserver to make the documentation site
# available for local browsing. The documentation should have been built prior
# to attempting to use the `run` target. By default, hugo will start its
# webserver on port 1313.
MAINTAINER=carlavilla@FreeBSD.org <.>
PYTHON_CMD = /usr/local/bin/python3 <.>
HUGO_CMD = /usr/local/bin/hugo <.>
LANGUAGES = en,es,pt_BR,de,ja,zh_CN,zh_TW,ru,el,hu,it,mn,nl,pl,fr <.>
RUBYLIB = ../shared/lib
.export RUBYLIB
.ifndef HOSTNAME
.HOST+=localhost
.else
.HOST+=$(HOSTNAME)
.endif
.ORDER: all run<.>
.ORDER: starting-message generate-books-toc
.ORDER: starting-message build
.ORDER: generate-books-toc build
all: starting-message generate-books-toc build <.>
starting-message: .PHONY <.>
@echo ---------------------------------------------------------------
@echo Building the documentation
@echo ---------------------------------------------------------------
generate-books-toc: .PHONY <.>
${PYTHON_CMD} ./tools/books-toc-parts-creator.py -l ${LANGUAGES}
${PYTHON_CMD} ./tools/books-toc-creator.py -l ${LANGUAGES}
${PYTHON_CMD} ./tools/books-toc-figures-creator.py -l ${LANGUAGES}
${PYTHON_CMD} ./tools/books-toc-tables-creator.py -l ${LANGUAGES}
${PYTHON_CMD} ./tools/books-toc-examples-creator.py -l ${LANGUAGES}
run: .PHONY <.>
${HUGO_CMD} server -D --baseURL="http://$(.HOST):1313"
build: .PHONY <.>
${HUGO_CMD} --minify
....
<.> The `MAINTAINER` flag specifies who is the maintainer of this Makefile.
<.> `PYTHON_CMD` flag specifies the location of the Python binary.
<.> `HUGO_CMD` flag specifies the location of the Hugo binary.
<.> `LANGUAGES` flag specifies in which languages the table of contents has to be generated.
<.> `.ORDER` directives are used to ensure multiple make jobs may run without problem.
<.> `all` target generates the books' tables of contents ("TOCs"), builds the documentation and puts the result in [.filename]#~/doc/documentation/public#.
<.> `starting-message` shows a message in the CLI to show the user that the process is running.
<.> `generate-books-toc` calls the scripts to generate the books TOCs.
<.> `run` runs the Hugo webserver on port 1313, or on a random free port if that one is already in use.
<.> `build` builds the documentation and puts the result in [.filename]#~/doc/documentation/public#.
[[website-makefile]]
=== Website Makefile
This [.filename]#Makefile# takes the form of:
[source,shell]
....
# Generate the FreeBSD website
#
# Copyright (c) 2020-2021, The FreeBSD Documentation Project
# Copyright (c) 2020-2021, Sergio Carlavilla <carlavilla@FreeBSD.org>
#
# Targets intended for use on the command line
#
# all (default) - generate the releases.toml and compile all the website
# run - serves the built documentation site for local browsing
#
# The run target uses hugo's built-in webserver to make the documentation site
# available for local browsing. The documentation should have been built prior
# to attempting to use the `run` target. By default, hugo will start its
# webserver on port 1313.
MAINTAINER=carlavilla@FreeBSD.org <.>
PYTHON_CMD = /usr/local/bin/python3 <.>
HUGO_CMD = /usr/local/bin/hugo <.>
RUBYLIB = ../shared/lib
.export RUBYLIB
.ifndef HOSTNAME
.HOST+=localhost
.else
.HOST+=$(HOSTNAME)
.endif
.ORDER: all run<.>
.ORDER: starting-message generate-releases
.ORDER: starting-message build
.ORDER: generate-releases build
all: starting-message generate-releases run <.>
starting-message: .PHONY <.>
@echo ---------------------------------------------------------------
@echo Building the website
@echo ---------------------------------------------------------------
generate-releases: .PHONY <.>
${PYTHON_CMD} ./tools/releases-toml.py -p ./shared/releases.adoc
run: .PHONY <.>
${HUGO_CMD} server -D --baseURL="http://$(.HOST):1313"
build: .PHONY <.>
${HUGO_CMD}
....
<.> The `MAINTAINER` flag specifies who is the maintainer of this Makefile.
<.> `PYTHON_CMD` flag specifies the location of the Python binary.
<.> `HUGO_CMD` flag specifies the location of the Hugo binary.
<.> `.ORDER` directives are used to ensure multiple make jobs may run without problem.
<.> `all` target builds the website and puts the result in [.filename]#~/doc/website/public#.
<.> `starting-message` shows a message in the CLI to show the user that the process is running.
<.> `generate-releases` calls the script used to convert from AsciiDoc variables to TOML variables.
With this conversion, the releases variables can be used in AsciiDoc and in the Hugo custom templates.
<.> `run` runs the Hugo webserver on port 1313, or on a random free port if that one is already in use.
<.> `build` builds the website and puts the result in [.filename]#~/doc/website/public#.
diff --git a/documentation/content/en/books/fdp-primer/editor-config/_index.adoc b/documentation/content/en/books/fdp-primer/editor-config/_index.adoc
index 5f184292dd..6d925d006e 100644
--- a/documentation/content/en/books/fdp-primer/editor-config/_index.adoc
+++ b/documentation/content/en/books/fdp-primer/editor-config/_index.adoc
@@ -1,221 +1,222 @@
---
title: Chapter 12. Editor Configuration
prev: books/fdp-primer/writing-style
next: books/fdp-primer/see-also
+description: Configuration used in the text editors in the FreeBSD Documentation Project
---
[[editor-config]]
= Editor Configuration
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 12
toc::[]
Adjusting your text editor configuration can make working on document files quicker and easier, and help documents conform to FDP guidelines.
[[editor-config-vim]]
== Vim
Install from package:editors/vim[], package:editors/vim-console[], or package:editors/vim-tiny[] then follow the configuration instructions in <<editor-config-vim-config>>.
[[editor-config-vim-use]]
=== Use
Press kbd:[P] to reformat paragraphs or text that has been selected in Visual mode.
Press kbd:[T] to replace groups of eight spaces with a tab.
[[editor-config-vim-config]]
=== Configuration
Edit [.filename]#~/.vimrc#, adding these lines to the end of the file:
[.programlisting]
....
if has("autocmd")
au BufNewFile,BufRead *.sgml,*.ent,*.xsl,*.xml call Set_SGML()
au BufNewFile,BufRead *.[1-9] call ShowSpecial()
endif " has(autocmd)
function Set_Highlights()
"match ExtraWhitespace /^\s* \s*\|\s\+$/
highlight default link OverLength ErrorMsg
match OverLength /\%71v.\+/
return 0
endfunction " Set_Highlights()
function ShowSpecial()
setlocal list listchars=tab:>>,trail:*,eol:$
hi def link nontext ErrorMsg
return 0
endfunction " ShowSpecial()
function Set_SGML()
setlocal number
syn match sgmlSpecial "&[^;]*;"
setlocal syntax=sgml
setlocal filetype=xml
setlocal shiftwidth=2
setlocal textwidth=70
setlocal tabstop=8
setlocal softtabstop=2
setlocal formatprg="fmt -p"
setlocal autoindent
setlocal smartindent
" Rewrap paragraphs
noremap P gqj
" Replace spaces with tabs
noremap T :s/ /\t/<CR>
call ShowSpecial()
call Set_Highlights()
return 0
endfunction " Set_SGML()
....
[[editor-config-emacs]]
== Emacs
Install from package:editors/emacs[] or package:editors/emacs-devel[].
[[editor-config-emacs-validation]]
=== Validation
Emacs's nxml-mode uses compact RELAX NG schemas for validating XML.
A compact RELAX NG schema for FreeBSD's extension to DocBook 5.0 is included in the documentation repository.
To configure nxml-mode to validate using this schema, create [.filename]#~/.emacs.d/schema/schemas.xml# and add these lines to the file:
....
<locatingRules xmlns="http://thaiopensource.com/ns/locating-rules/1.0">
  <documentElement localName="section" typeId="DocBook"/>
  <documentElement localName="chapter" typeId="DocBook"/>
  <documentElement localName="article" typeId="DocBook"/>
  <documentElement localName="book" typeId="DocBook"/>
  <typeId id="DocBook" uri="/usr/local/share/xml/docbook/5.0/rng/docbook.rnc"/>
</locatingRules>
....
[[editor-config-emacs-igor]]
=== Automated Proofreading with Flycheck and Igor
The Flycheck package is available from Milkypostman's Emacs Lisp Package Archive (MELPA).
If MELPA is not already in Emacs's packages-archives, it can be added by evaluating
....
(add-to-list 'package-archives '("melpa" . "http://stable.melpa.org/packages/") t)
....
Add the line to Emacs's initialization file (one of [.filename]#~/.emacs#, [.filename]#~/.emacs.el#, or [.filename]#~/.emacs.d/init.el#) to make this change permanent.
To install Flycheck, evaluate
....
(package-install 'flycheck)
....
Create a Flycheck checker for package:textproc/igor[] by evaluating
....
(flycheck-define-checker igor
"FreeBSD Documentation Project sanity checker.
See URLs https://www.freebsd.org/docproj/ and
http://www.freshports.org/textproc/igor/."
:command ("igor" "-X" source-inplace)
:error-parser flycheck-parse-checkstyle
:modes (nxml-mode)
:standard-input t)
(add-to-list 'flycheck-checkers 'igor 'append)
....
Again, add these lines to Emacs's initialization file to make the changes permanent.
[[editor-config-emacs-specifc]]
=== FreeBSD Documentation Specific Settings
To apply settings specific to the FreeBSD documentation project, create [.filename]#.dir-locals.el# in the root directory of the documentation repository and add these lines to the file:
....
;;; Directory Local Variables
;;; For more information see (info "(emacs) Directory Variables")
((nxml-mode
(eval . (turn-on-auto-fill))
(fill-column . 70)
(eval . (require 'flycheck))
(eval . (flycheck-mode 1))
(flycheck-checker . igor)
(eval . (add-to-list 'rng-schema-locating-files "~/.emacs.d/schema/schemas.xml"))))
....
[[editor-config-nano]]
== nano
Install from package:editors/nano[] or package:editors/nano-devel[].
[[editor-config-nano-config]]
=== Configuration
Copy the sample XML syntax highlight file to the user's home directory:
[source,shell]
....
% cp /usr/local/share/nano/xml.nanorc ~/.nanorc
....
Use an editor to replace the lines in the [.filename]#~/.nanorc# `syntax "xml"` block with these rules:
....
syntax "xml" "\.([jrs]html?|xml|xslt?)$"
# trailing whitespace
color ,blue "[[:space:]]+$"
# multiples of eight spaces at the start of a line
# (after zero or more tabs) should be a tab
color ,blue "^([TAB]*[ ]{8})+"
# tabs after spaces
color ,yellow "( )+TAB"
# highlight indents that have an odd number of spaces
color ,red "^(([ ]{2})+|(TAB+))*[ ]{1}[^ ]{1}"
# lines longer than 70 characters
color ,yellow "^(.{71})|(TAB.{63})|(TAB{2}.{55})|(TAB{3}.{47}).+$"
....
Process the file to create embedded tabs:
[source,shell]
....
% perl -i'' -pe 's/TAB/\t/g' ~/.nanorc
....
[[editor-config-nano-use]]
=== Use
Specify additional helpful options when running the editor:
[source,shell]
....
% nano -AKipwz -r 70 -T8 _index.adoc
....
Users of man:csh[1] can define an alias in [.filename]#~/.cshrc# to automate these options:
....
alias nano "nano -AKipwz -r 70 -T8"
....
After the alias is defined, the options will be added automatically:
[source,shell]
....
% nano _index.adoc
....
diff --git a/documentation/content/en/books/fdp-primer/examples/_index.adoc b/documentation/content/en/books/fdp-primer/examples/_index.adoc
index 65a97518ef..93cb777a0e 100644
--- a/documentation/content/en/books/fdp-primer/examples/_index.adoc
+++ b/documentation/content/en/books/fdp-primer/examples/_index.adoc
@@ -1,138 +1,139 @@
---
title: Appendix A. Examples
prev: books/fdp-primer/see-also/
+description: Example of an article and a book used in the FreeBSD Documentation Project
---
[appendix]
[[examples]]
= Examples
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: A
toc::[]
These examples are not exhaustive - they do not contain all the elements that might be desirable to use, particularly in a document's front matter.
For more examples of AsciiDoctor, examine the AsciiDoc source for this and other documents available in the Git `doc` repository, or available online starting at link:https://cgit.freebsd.org/doc/[https://cgit.freebsd.org/doc/].
[[examples-asciidoctor-book]]
== AsciiDoctor `book`
.AsciiDoctor `book`
[example]
====
[.programlisting]
....
---
title: An Example Book
authors:
- author: The FreeBSD Documentation Project
copyright: 1995-2021 The FreeBSD Documentation Project
releaseinfo: ""
trademarks: ["general"]
---
= An Example Book
:doctype: book
:toc: macro
:toclevels: 2
:icons: font
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnums:
:sectnumlevels: 6
:partnums:
:chapter-signifier: Chapter
:part-signifier: Part
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:book: true
:pdf: false
ifeval::["{backend}" == "html5"]
:chapters-path: content/en/books/bookname/
endif::[]
ifeval::["{backend}" == "pdf"]
:chapters-path:
endif::[]
ifeval::["{backend}" == "epub3"]
:chapters-path:
endif::[]
[abstract]
Abstract
Abstract section
'''
toc::[]
:sectnums!:
\include::{chapters-path}preface/_index.adoc[leveloffset=+1]
:sectnums:
\include::{chapters-path}parti.adoc[lines=7..18]
\include::{chapters-path}chapter-name/_index.adoc[leveloffset=+1]
....
====
[[examples-asciidoctor-article]]
== AsciiDoctor `article`
.AsciiDoctor `article`
[example]
====
[.programlisting]
....
---
title: An Example Article
authors:
- author: Your name and surname
email: foo@example.com
trademarks: ["general"]
---
\= An Example Article
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
'''
toc::[]
\== My First Section
This is the first section in my article.
\== My First Sub-Section
This is the first sub-section in my article.
....
====
diff --git a/documentation/content/en/books/fdp-primer/manual-pages/_index.adoc b/documentation/content/en/books/fdp-primer/manual-pages/_index.adoc
index 057e965a9d..292743d612 100644
--- a/documentation/content/en/books/fdp-primer/manual-pages/_index.adoc
+++ b/documentation/content/en/books/fdp-primer/manual-pages/_index.adoc
@@ -1,536 +1,537 @@
---
title: Chapter 10. Manual Pages
prev: books/fdp-primer/po-translations
next: books/fdp-primer/writing-style
+description: How to work with the FreeBSD Manual Pages
---
[[manual-pages]]
= Manual Pages
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 10
toc::[]
[[manual-pages-introduction]]
== Introduction
_Manual pages_, commonly shortened to _man pages_, were conceived as readily-available reminders for command syntax, device driver details, or configuration file formats.
They have become an extremely valuable quick-reference from the command line for users, system administrators, and programmers.
Although intended as reference material rather than tutorials, the EXAMPLES sections of manual pages often provide detailed use cases.
Manual pages are generally shown interactively by the man:man[1] command.
When the user types `man ls`, a search is performed for a manual page matching `ls`.
The first matching result is displayed.
[[manual-pages-sections]]
== Sections
Manual pages are grouped into _sections_.
Each section contains manual pages for a specific category of documentation:
[.informaltable]
[cols="1,8", options="header"]
|===
| Section Number
| Category
|1
|General Commands
|2
|System Calls
|3
|Library Functions
|4
|Kernel Interfaces
|5
|File Formats
|6
|Games
|7
|Miscellaneous
|8
|System Manager
|9
|Kernel Developer
|===
[[manual-pages-markup]]
== Markup
Various markup forms and rendering programs have been used for manual pages.
FreeBSD has used man:groff[7] and the newer man:mandoc[1].
Most existing FreeBSD manual pages, and all new ones, use the man:mdoc[7] form of markup.
This is a simple line-based markup that is reasonably expressive.
It is mostly semantic: parts of text are marked up for what they are, rather than for how they should appear when rendered.
There is some appearance-based markup which is usually best avoided.
Manual page source is usually interpreted and displayed to the screen interactively.
The source files can be ordinary text files or compressed with man:gzip[1] to save space.
Manual pages can also be rendered to other formats, including PostScript for printing or PDF generation.
See man:man[1].
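For example, man:mandoc[1] can render a source page straight to PDF (the file name below is only a placeholder):
[source,shell]
....
% mandoc -T pdf ./mynewmanpage.8 > mynewmanpage.8.pdf
....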
[[manual-pages-markup-sections]]
=== Manual Page Sections
Manual pages are composed of several standard sections.
Each section has a title in upper case, and the sections for a particular type of manual page appear in a specific order.
For a category 1 General Command manual page, the sections are:
[.informaltable]
[cols="2,4", options="header"]
|===
| Section Name
| Description
|NAME
|Name of the command
|SYNOPSIS
|Format of options and arguments
|DESCRIPTION
|Description of purpose and usage
|ENVIRONMENT
|Environment settings that affect operation
|EXIT STATUS
|Error codes returned on exit
|EXAMPLES
|Examples of usage
|COMPATIBILITY
|Compatibility with other implementations
|SEE ALSO
|Cross-reference to related manual pages
|STANDARDS
|Compatibility with standards like POSIX
|HISTORY
|History of implementation
|BUGS
|Known bugs
|AUTHORS
|People who created the command or wrote the manual page.
|===
Some sections are optional, and the combination of sections for a specific type of manual page varies.
Examples of the most common types are shown later in this chapter.
[[manual-pages-markup-macros]]
=== Macros
man:mdoc[7] markup is based on _macros_.
Lines that begin with a dot contain macro commands, each two or three letters long.
For example, consider this portion of the man:ls[1] manual page:
[.programlisting]
....
.Dd December 1, 2015
.Dt LS 1
.Sh NAME
.Nm ls
.Nd list directory contents
.Sh SYNOPSIS
.Nm
.Op Fl -libxo
.Op Fl ABCFGHILPRSTUWZabcdfghiklmnopqrstuwxy1,
.Op Fl D Ar format
.Op Ar
.Sh DESCRIPTION
For each operand that names a
.Ar file
of a type other than
directory,
.Nm
displays its name as well as any requested,
associated information.
For each operand that names a
.Ar file
of type directory,
.Nm
displays the names of files contained
within that directory, as well as any requested, associated
information.
....
A _Document date_ and _Document title_ are defined.
A _Section header_ for the NAME section is defined.
Then the _Name_ of the command and a one-line _Name description_ are defined.
The SYNOPSIS section begins.
This section describes the command-line options and arguments accepted.
_Name_ (`.Nm`) has already been defined, and repeating it here just displays the defined value in the text.
An _Optional_ _Flag_ called `-libxo` is shown.
The `Fl` macro adds a dash to the beginning of flags, so this appears in the manual page as `--libxo`.
A long list of optional single-character flags is shown.
An optional `-D` flag is defined.
If the `-D` flag is given, it must be followed by an _Argument_.
The argument is a _format_, a string that tells man:ls[1] what to display and how to display it.
Details on the format string are given later in the manual page.
A final optional argument is defined.
Since no name is specified for the argument, the default of `file ...` is used.
The _Section header_ for the DESCRIPTION section is defined.
When rendered with the command `man ls`, the result displayed on the screen looks like this:
[.programlisting]
....
LS(1) FreeBSD General Commands Manual LS(1)
NAME
ls - list directory contents
SYNOPSIS
ls [--libxo] [-ABCFGHILPRSTUWZabcdfghiklmnopqrstuwxy1,] [-D format]
[file ...]
DESCRIPTION
For each operand that names a file of a type other than directory, ls
displays its name as well as any requested, associated information. For
each operand that names a file of type directory, ls displays the names
of files contained within that directory, as well as any requested,
associated information.
....
Optional values are shown inside square brackets.
[[manual-pages-markup-guidelines]]
=== Markup Guidelines
The man:mdoc[7] markup language is not very strict.
For clarity and consistency, the FreeBSD Documentation Project adds some additional style guidelines:
Only the first letter of macros is upper case::
Always use upper case for the first letter of a macro and lower case for the remaining letters.
Begin new sentences on new lines::
Start a new sentence on a new line, do not begin it on the same line as an existing sentence.
Update `.Dd` when making non-trivial changes to a manual page::
The _Document date_ informs the reader about the last time the manual page was updated.
It is important to update whenever non-trivial changes are made to the manual pages.
Trivial changes like spelling or punctuation fixes that do not affect usage can be made without updating `.Dd`.
Give examples::
Show the reader examples when possible.
Even trivial examples are valuable, because what is trivial to the writer is not necessarily trivial to the reader.
Three examples are a good goal.
A trivial example shows the minimal requirements, a serious example shows actual use, and an in-depth example demonstrates unusual or non-obvious functionality.
Include the BSD license::
Include the BSD license on new manual pages.
The preferred license is available from the link:{committers-guide}[Committer's Guide].
[[manual-pages-markup-tricks]]
=== Markup Tricks
Add a space before punctuation on a line with macros. Example:
[.programlisting]
....
.Sh SEE ALSO
.Xr geom 4 ,
.Xr boot0cfg 8 ,
.Xr geom 8 ,
.Xr gptboot 8
....
Note how the commas at the end of the `.Xr` lines have been placed after a space.
The `.Xr` macro expects two parameters to follow it, the name of an external manual page, and a section number.
The space separates the punctuation from the section number.
Without the space, the external links would incorrectly point to section `4,` or `8,`.
[[manual-pages-markup-important-macros]]
=== Important Macros
Some very common macros will be shown here.
For more usage examples, see man:mdoc[7], man:groff_mdoc[7], or search for actual use in [.filename]#/usr/share/man/man*# directories.
For example, to search for examples of the `.Bd` _Begin display_ macro:
[source,shell]
....
% find /usr/share/man/man* | xargs zgrep '.Bd'
....
[[manual-pages-markup-important-macros-organizational]]
==== Organizational Macros
Some macros are used to define logical blocks of a manual page.
[.informaltable]
[cols="1,8", options="header"]
|===
| Organizational Macro
| Use
|`.Sh`
|Section header.
Followed by the name of the section, traditionally all upper case.
Think of these as chapter titles.
|`.Ss`
|Subsection header.
Followed by the name of the subsection.
Used to divide a `.Sh` section into subsections.
|`.Bl`
|Begin list. Start a list of items.
|`.El`
|End a list.
|`.Bd`
|Begin display.
Begin a special area of text, like an indented area.
|`.Ed`
|End display.
|===
[[manual-pages-markup-important-macros-inline]]
==== Inline Macros
Many macros are used to mark up inline text.
[.informaltable]
[cols="1,8", options="header"]
|===
| Inline Macro
| Use
|`.Nm`
|Name.
Called with a name as a parameter on the first use, then used later without the parameter to display the name that has already been defined.
|`.Pa`
|Path to a file.
Used to mark up filenames and directory paths.
|===
[[manual-pages-sample-structures]]
== Sample Manual Page Structures
This section shows the minimal desired contents for several common categories of manual pages.
[[manual-pages-sample-structures-section-1-8]]
=== Section 1 or 8 Command
The preferred basic structure for a section 1 or 8 command:
[.programlisting]
....
.Dd August 25, 2017
.Dt EXAMPLECMD 8
.Os
.Sh NAME
.Nm examplecmd
.Nd "command to demonstrate section 1 and 8 man pages"
.Sh SYNOPSIS
.Nm
.Op Fl v
.Sh DESCRIPTION
The
.Nm
utility does nothing except demonstrate a trivial but complete
manual page for a section 1 or 8 command.
.Sh SEE ALSO
.Xr exampleconf 5
.Sh AUTHORS
.An Firstname Lastname Aq Mt flastname@example.com
....
[[manual-pages-sample-structures-section-4]]
=== Section 4 Device Driver
The preferred basic structure for a section 4 device driver:
[.programlisting]
....
.Dd August 25, 2017
.Dt EXAMPLEDRIVER 4
.Os
.Sh NAME
.Nm exampledriver
.Nd "driver to demonstrate section 4 man pages"
.Sh SYNOPSIS
To compile this driver into the kernel, add this line to the
kernel configuration file:
.Bd -ragged -offset indent
.Cd "device exampledriver"
.Ed
.Pp
To load the driver as a module at boot, add this line to
.Xr loader.conf 5 :
.Bd -literal -offset indent
exampledriver_load="YES"
.Ed
.Sh DESCRIPTION
The
.Nm
driver provides an opportunity to show a skeleton or template
file for section 4 manual pages.
.Sh HARDWARE
The
.Nm
driver supports these cards from the aptly-named Nonexistent
Technologies:
.Pp
.Bl -bullet -compact
.It
NT X149.2 (single and dual port)
.It
NT X149.8 (single port)
.El
.Sh DIAGNOSTICS
.Bl -diag
.It "flashing green light"
Something bad happened.
.It "flashing red light"
Something really bad happened.
.It "solid black light"
Power cord is unplugged.
.El
.Sh SEE ALSO
.Xr example 8
.Sh HISTORY
The
.Nm
device driver first appeared in
.Fx 49.2 .
.Sh AUTHORS
.An Firstname Lastname Aq Mt flastname@example.com
....
[[manual-pages-sample-structures-section-5]]
=== Section 5 Configuration File
The preferred basic structure for a section 5 configuration file:
[.programlisting]
....
.Dd August 25, 2017
.Dt EXAMPLECONF 5
.Os
.Sh NAME
.Nm example.conf
.Nd "config file to demonstrate section 5 man pages"
.Sh DESCRIPTION
.Nm
is an example configuration file.
.Sh SEE ALSO
.Xr example 8
.Sh AUTHORS
.An Firstname Lastname Aq Mt flastname@example.com
....
[[manual-pages-testing]]
== Testing
Testing a new manual page can be challenging.
Fortunately there are some tools that can assist in the task.
Some of them, like man:man[1], do not look in the current directory.
It is a good idea to prefix the filename with `./` if the new manual page is in the current directory.
An absolute path can also be used.
Use man:mandoc[1]'s linter to check for parsing errors:
[source,shell]
....
% mandoc -T lint ./mynewmanpage.8
....
Use package:textproc/igor[] to proofread the manual page:
[source,shell]
....
% igor ./mynewmanpage.8
....
Use man:man[1] to check the final result of your changes:
[source,shell]
....
% man ./mynewmanpage.8
....
You can use man:col[1] to filter the output of man:man[1] and get rid of the backspace characters before loading the result in your favorite editor for spell checking:
[source,shell]
....
% man ./mynewmanpage.8 | col -b | vim -R -
....
Spell-checking with fully-featured dictionaries is encouraged,
and can be accomplished by using package:textproc/hunspell[] or package:textproc/aspell[] combined with package:textproc/en-hunspell[] or package:textproc/en-aspell[], respectively.
For instance:
[source,shell]
....
% aspell check --lang=en --mode=nroff ./mynewmanpage.8
....
[[manual-pages-examples-as-templates]]
== Example Manual Pages to Use as Templates
Some manual pages are suitable as in-depth examples.
[.informaltable]
[cols="1,4", options="header"]
|===
| Manual Page
| Path to Source Location
|man:cp[1]
|[.filename]#/usr/src/bin/cp/cp.1#
|man:vt[4]
|[.filename]#/usr/src/share/man/man4/vt.4#
|man:crontab[5]
|[.filename]#/usr/src/usr.sbin/cron/crontab/crontab.5#
|man:gpart[8]
|[.filename]#/usr/src/sbin/geom/class/part/gpart.8#
|===
[[manual-pages-resources]]
== Resources
Resources for manual page writers:
* man:man[1]
* man:mandoc[1]
* man:groff_mdoc[7]
* http://manpages.bsd.lv/mdoc.html[Practical UNIX Manuals: mdoc]
* http://manpages.bsd.lv/history.html[History of UNIX Manpages]
diff --git a/documentation/content/en/books/fdp-primer/overview/_index.adoc b/documentation/content/en/books/fdp-primer/overview/_index.adoc
index ba37cba76e..3fcbd195a8 100644
--- a/documentation/content/en/books/fdp-primer/overview/_index.adoc
+++ b/documentation/content/en/books/fdp-primer/overview/_index.adoc
@@ -1,124 +1,125 @@
---
title: Chapter 1. Overview
prev: books/fdp-primer/preface
next: books/fdp-primer/tools
+description: Overview about the FreeBSD Documentation Process
---
[[overview]]
= Overview
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 1
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
toc::[]
Welcome to the FreeBSD Documentation Project (FDP).
Quality documentation is crucial to the success of FreeBSD, and we value your contributions very highly.
This document describes how the FDP is organized, how to write and submit documentation, and how to effectively use the available tools.
Everyone is welcome to contribute to the FDP.
Willingness to contribute is the only membership requirement.
This primer shows how to:
* Identify which parts of FreeBSD are maintained by the FDP.
* Install the required documentation tools and files.
* Make changes to the documentation.
* Submit changes back for review and inclusion in the FreeBSD documentation.
[[overview-quick-start]]
== Quick Start
Some preparatory steps must be taken before editing the FreeBSD documentation.
First, subscribe to the {freebsd-doc}.
Some team members also interact on the `#bsddocs` IRC channel on http://www.efnet.org/[EFnet].
These people can help with questions or problems involving the documentation.
[.procedure]
====
. Install these packages. They provide all of the software needed to edit and build FreeBSD documentation. The Git package is needed to obtain a working copy of the documentation and to generate patches.
+
[source,shell]
....
# pkg install gohugo python3 git-lite rubygem-asciidoctor rubygem-rouge
....
+
. Optional: to generate PDF documentation, install `asciidoctor-pdf`.
+
[source,shell]
....
# pkg install rubygem-asciidoctor-pdf
....
+
. Install a local working copy of the documentation from the FreeBSD repository in [.filename]#~/doc# (see crossref:working-copy[working-copy,The Working Copy]).
+
[source,shell]
....
% git clone https://git.FreeBSD.org/doc.git ~/doc
....
+
. Edit the documentation files that require changes. If a file needs major changes, consult the mailing list for input.
+
Review the output and edit the file to fix any problems shown, then rerun the command to find any remaining problems.
Repeat until all of the errors are resolved.
+
. *_Always_* build and test the changes before submitting them. Running `make` in the top-level directory of the documentation will generate that documentation in HTML format.
+
[source,shell]
....
% make
....
+
. When changes are complete and tested, generate a "diff file":
+
[source,shell]
....
% cd ~/doc
% git diff > bsdinstall.diff.txt
....
+
Give the diff file a descriptive name.
In the example above, changes have been made to the [.filename]#bsdinstall# portion of the Handbook.
. Submit the diff file using the web-based https://bugs.FreeBSD.org/bugzilla/enter_bug.cgi?product=Documentation[Problem Report] system. If using the web form, enter a Summary of _[patch] short description of problem_. Select the Component `Documentation`. In the Description field, enter a short description of the changes and any important details about them. Use the btn:[Add an attachment] button to attach the diff file. Finally, use the btn:[Submit Bug] button to submit your diff to the problem report system.
====
[[overview-doc]]
== The FreeBSD Documentation Set
The FDP is responsible for four categories of FreeBSD documentation.
* _Handbook_: The Handbook is the comprehensive online resource and reference for FreeBSD users.
* _FAQ_: The FAQ uses a short question and answer format to address questions that are frequently asked on the various mailing lists and forums devoted to FreeBSD. This format does not permit long and comprehensive answers.
* _Manual pages_: The English language system manual pages are usually not written by the FDP, as they are part of the base system. However, the FDP can reword parts of existing manual pages to make them clearer or to correct inaccuracies.
* _Web site_: This is the main FreeBSD presence on the web, visible at https://www.freebsd.org/[https://www.FreeBSD.org/] and many mirrors around the world. The web site is typically a new user's first exposure to FreeBSD.
Translation teams are responsible for translating the Handbook and web site into different languages.
Manual pages are not translated at present.
Documentation source for the FreeBSD web site, Handbook, and FAQ is available in the documentation repository at `https://cgit.freebsd.org/doc/`.
Source for manual pages is available in a separate source repository located at `https://cgit.freebsd.org/src/`.
Documentation commit messages are visible with `git log`.
Commit messages are also archived at link:{git-doc-all}[].
Web frontends to both of these repositories are available at https://cgit.freebsd.org/doc/[] and https://cgit.freebsd.org/src/[].
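For example, to see the most recent documentation commit messages in a local working copy (this assumes the clone created in the Quick Start above):
[source,shell]
....
% cd ~/doc
% git log --oneline -10
....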
Many people have written tutorials or how-to articles about FreeBSD.
Some are stored as part of the FDP files.
In other cases, the author has decided to keep the documentation separate.
The FDP endeavors to provide links to as much of this external documentation as possible.
diff --git a/documentation/content/en/books/fdp-primer/po-translations/_index.adoc b/documentation/content/en/books/fdp-primer/po-translations/_index.adoc
index b38f21a016..3601ce7b74 100644
--- a/documentation/content/en/books/fdp-primer/po-translations/_index.adoc
+++ b/documentation/content/en/books/fdp-primer/po-translations/_index.adoc
@@ -1,418 +1,419 @@
---
title: Chapter 9. PO Translations
prev: books/fdp-primer/translations
next: books/fdp-primer/manual-pages
+description: How to work with PO translation in the FreeBSD Documentation Project
---
[[po-translations]]
= PO Translations
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 9
include::shared/en/urls.adoc[]
toc::[]
[[po-translations-introduction]]
== Introduction
The http://www.gnu.org/software/gettext/[GNU gettext] system offers translators an easy way to create and maintain translations of documents.
Translatable strings are extracted from the original document into a PO (Portable Object) file.
Translated versions of the strings are entered with a separate editor.
The strings can be used directly or built into a complete translated version of the original document.
[[po-translations-quick-start]]
== Quick Start
The procedure shown in crossref:overview[overview-quick-start,Quick Start] is assumed to have already been performed.
The `TRANSLATOR` option is required and already enabled by default in the package:textproc/docproj[] port.
This example shows the creation of a Spanish translation of the short link:{leap-seconds}[Leap Seconds] article.
[[po-translations-quick-start-install-po-editor]]
[.procedure]
====
.Procedure: Install a PO Editor
. A PO editor is needed to edit translation files. This example uses package:editors/poedit[].
+
[source,shell]
....
# pkg install poedit
....
====
[[po-translations-quick-start-initial-setup]]
[.procedure]
====
.Procedure: Initial Setup
When a new translation is first created, the directory structure must be created or copied from the English original:
. Create a directory for the new translation. The English article source is in [.filename]#~/doc/documentation/content/en/articles/leap-seconds/#. The Spanish translation will go in [.filename]#~/doc/documentation/content/es/articles/leap-seconds/#. The path is the same except for the name of the language directory.
+
[source,shell]
....
% mkdir ~/doc/documentation/content/es/articles/leap-seconds
....
. Copy the [.filename]#_index.adoc# from the original document into the translation directory:
+
[source,shell]
....
% cp ~/doc/documentation/content/en/articles/leap-seconds/_index.adoc \
~/doc/documentation/content/es/articles/leap-seconds/
....
====
[[po-translations-quick-start-translation]]
[.procedure]
====
.Procedure: Translation
Translating a document consists of two steps: extracting translatable strings from the original document, and entering translations for those strings.
These steps are repeated until the translator feels that enough of the document has been translated to produce a usable translated document.
. Extract the translatable strings from the original English version into a PO file:
+
[source,shell]
....
% cd ~/doc
% po4a-gettextize \
--format asciidoc \
--option compat=asciidoctor \
--option yfm_keys=title,part \
--master "documentation/content/en/articles/leap-seconds/_index.adoc" \
--master-charset "UTF-8" \
--copyright-holder "The FreeBSD Project" \
--package-name "FreeBSD Documentation" \
--po "documentation/content/es/articles/leap-seconds/_index.po"
....
+
. Use a PO editor to enter translations in the PO file. There are several different editors available. [.filename]#poedit# from package:editors/poedit[] is shown here.
+
[source,shell]
....
% poedit documentation/content/es/articles/leap-seconds/_index.po
....
====
[[po-translations-quick-generating-a-translated-document]]
[.procedure]
====
.Procedure: Generating a Translated Document
. Generate the translated document:
+
[source,shell]
....
% cd ~/doc
% po4a-translate \
--format asciidoc \
--option compat=asciidoctor \
--option yfm_keys=title,part \
--master "documentation/content/en/articles/leap-seconds/_index.adoc" \
--master-charset "UTF-8" \
--po "documentation/content/es/articles/leap-seconds/_index.po" \
--localized "documentation/content/es/articles/leap-seconds/_index.adoc" \
--localized-charset "UTF-8" \
--keep 0
....
+
The name of the generated document matches the name of the English original, usually [.filename]#_index.adoc#.
+
. Check the generated file by rendering it to HTML and viewing it with a web browser:
+
[source,shell]
....
% cd ~/doc/documentation
% make
....
====
[[po-translations-creating]]
== Creating New Translations
The first step to creating a new translated document is locating or creating a directory to hold it.
FreeBSD puts translated documents in a subdirectory named for their language and region in the format [.filename]#lang#.
_lang_ is a two-character lowercase language code, optionally followed by a hyphen and a lowercase region code, for example [.filename]#es# or [.filename]#pt-br#.
[[po-translations-language-names]]
.Language Names
[cols="1,1,1", frame="none", options="header"]
|===
| Language
| Region
| Translated Directory Name
|English
|United States
|[.filename]#en#
|Bengali
|Bangladesh
|[.filename]#bn#
|Danish
|Denmark
|[.filename]#da#
|German
|Germany
|[.filename]#de#
|Greek
|Greece
|[.filename]#el#
|Spanish
|Spain
|[.filename]#es#
|French
|France
|[.filename]#fr#
|Hungarian
|Hungary
|[.filename]#hu#
|Italian
|Italy
|[.filename]#it#
|Japanese
|Japan
|[.filename]#ja#
|Korean
|Korea
|[.filename]#ko#
|Mongolian
|Mongolia
|[.filename]#mn#
|Dutch
|Netherlands
|[.filename]#nl#
|Polish
|Poland
|[.filename]#pl#
|Portuguese
|Brazil
|[.filename]#pt-br#
|Russian
|Russia
|[.filename]#ru#
|Turkish
|Turkey
|[.filename]#tr#
|Chinese
|China
|[.filename]#zh-cn#
|Chinese
|Taiwan
|[.filename]#zh-tw#
|===
The translations are in subdirectories of the main documentation directory,
here assumed to be [.filename]#~/doc/documentation/# as shown in <<overview-quick-start>>.
For example, German translations are located in [.filename]#~/doc/documentation/content/de/#,
and French translations are in [.filename]#~/doc/documentation/content/fr/#.
Each language directory contains separate subdirectories named for the type of documents, usually [.filename]#articles/# and [.filename]#books/#.
Combining these directory names gives the complete path to an article or book.
For example, the French translation of the NanoBSD article is in [.filename]#~/doc/documentation/content/fr/articles/nanobsd/#,
and the Mongolian translation of the Handbook is in [.filename]#~/doc/documentation/content/mn/books/handbook/#.
A new language directory must be created when translating a document to a new language.
If the language directory already exists, only a subdirectory in the [.filename]#articles/# or [.filename]#books/# directory is needed.
[[po-translations-creating-example]]
.Creating a Spanish Translation of the Porter's Handbook
[example]
====
Create a new Spanish translation of the link:{porters-handbook}[Porter's Handbook].
The original is a book in [.filename]#~/doc/documentation/content/en/books/porters-handbook/#.
[.procedure]
======
. The Spanish language books directory [.filename]#~/doc/documentation/content/es/books/# already exists, so only a new subdirectory for the Porter's Handbook is needed:
+
[source,shell]
....
% cd ~/doc/documentation/content/es/books
% mkdir porters-handbook
....
. Copy the content from the original book:
+
[source,shell]
....
% cd porters-handbook
% cp -R ~/doc/documentation/content/en/books/porters-handbook/* .
....
+
Now the document structure is ready for the translator to begin translating with the `po4a` tools.
======
====
[[po-translations-translating]]
== Translating
The gettext system greatly reduces the number of things that must be tracked by a translator.
Strings to be translated are extracted from the original document into a PO file.
Then a PO editor is used to enter the translated versions of each string.
The FreeBSD PO translation system does not overwrite PO files, so the extraction step can be run at any time to update the PO file.
A PO editor is used to edit the file.
package:editors/poedit[] is shown in these examples because it is simple and has minimal requirements.
Other PO editors offer features to make the job of translating easier.
The Ports Collection offers several of these editors, including package:devel/gtranslator[].
It is important to preserve the PO file.
It contains all of the work that translators have done.
[[po-translations-translating-example]]
.Translating the Porter's Handbook to Spanish
[example]
====
[.procedure]
======
. Change to the base directory and update all PO files.
+
[source,shell]
....
% cd ~/doc
% po4a-gettextize \
--format asciidoc \
--option compat=asciidoctor \
--option yfm_keys=title,part \
--master "documentation/content/en/books/porters-handbook/_index.adoc" \
--master-charset "UTF-8" \
--copyright-holder "The FreeBSD Project" \
--package-name "FreeBSD Documentation" \
--po "documentation/content/es/books/porters-handbook/_index.po"
....
. Enter translations using a PO editor:
+
[source,shell]
....
% poedit documentation/content/es/books/porters-handbook/_index.po
....
======
These steps are necessary for all `.adoc` files, excluding `chapters-order.adoc` and `toc-*.adoc`.
====
[[po-translations-tips]]
== Tips for Translators
[[po-translations-tips-xmltags]]
=== Preserving AsciiDoc macros
Preserve AsciiDoc macros that are shown in the English original.
.Preserving AsciiDoc macros
[example]
====
English original:
[.programlisting]
....
msgid ""
"This example shows the creation of a Spanish translation of the short "
"link:{leap-seconds}[Leap Seconds] article."
....
Spanish translation:
[.programlisting]
....
msgid ""
"Este ejemplo muestra la creación de un artículo con poco contenido como el artículo "
"link:{leap-seconds}[Leap Seconds]."
....
====
[[po-translations-tips-spaces]]
=== Preserving Spaces
Preserve existing spaces at the beginning and end of strings to be translated.
The translated version must have these spaces also.
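For instance, in this hypothetical entry the trailing space before the closing quote must also appear in the translation:
.Preserving spaces (hypothetical entry)
[example]
====
English original:
[.programlisting]
....
# hypothetical entry; note the trailing space before the closing quote
msgid "The file is stored in "
....
Spanish translation:
[.programlisting]
....
msgid "El fichero se almacena en "
....
====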
[[po-translations-tips-verbatim]]
=== Verbatim Tags
The contents of some tags should be copied verbatim, not translated:
* `man:man[1]`
* `package:package[]`
* `link`
* `image`
* `include`
* `Admonitions`
* `id` attributes
* `Heading tags`
* `source`
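For example, in this hypothetical entry the man page and package references stay exactly as in the English original:
.Verbatim tags (hypothetical entry)
[example]
====
English original:
[.programlisting]
....
# hypothetical entry
msgid "See man:git[1] and package:devel/git[] for more details."
....
Spanish translation:
[.programlisting]
....
msgid "Consulte man:git[1] y package:devel/git[] para más detalles."
....
====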
[[po-translations-building]]
== Building a Translated Document
A translated version of the original document can be created at any time.
Any untranslated portions of the original will be included in English in the resulting document.
Most PO editors have an indicator that shows how much of the translation has been completed.
This makes it easy for the translator to see when enough strings have been translated to make building the final document worthwhile.
[[po-translations-submitting]]
== Submitting the New Translation
Prepare the new translation files for submission.
This includes adding the files to the version control system, setting additional properties on them, then creating a diff for submission.
The diff files created by these examples can be attached to a https://bugs.freebsd.org/bugzilla/enter_bug.cgi?product=Documentation[documentation bug report] or https://reviews.freebsd.org/[code review].
[[po-translations-submitting-spanish]]
.Spanish Translation of the NanoBSD Article
[example]
====
[.procedure]
======
. Create a diff of the new files from the [.filename]#~/doc/# base directory so the full path is shown with the filenames. This helps committers identify the target language directory.
+
[source,shell]
....
% cd ~/doc
% git diff documentation/content/es/articles/nanobsd/ > /tmp/es_nanobsd.diff
....
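+
If the new files are not yet tracked by Git, a plain `git diff` will not include them.
One way to handle this is to stage the new files first and create the diff from the staged changes:
+
[source,shell]
....
% git add documentation/content/es/articles/nanobsd/
% git diff --staged documentation/content/es/articles/nanobsd/ > /tmp/es_nanobsd.diff
....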
======
====
diff --git a/documentation/content/en/books/fdp-primer/preface/_index.adoc b/documentation/content/en/books/fdp-primer/preface/_index.adoc
index f1ce0a6516..c3db7905f7 100644
--- a/documentation/content/en/books/fdp-primer/preface/_index.adoc
+++ b/documentation/content/en/books/fdp-primer/preface/_index.adoc
@@ -1,131 +1,132 @@
---
title: Preface
prev: books/fdp-primer/
next: books/fdp-primer/overview
+description: Preface about the FreeBSD Documentation Project
---
[preface]
[[preface]]
= Preface
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
[[preface-prompts]]
== Shell Prompts
This table shows the default system prompt and superuser prompt.
The examples use these prompts to indicate which type of user is running the example.
[.informaltable]
[cols="1,2", frame="none", options="header"]
|===
| User
| Prompt
|Normal user
|%
|`root`
|#
|===
[[preface-conventions]]
== Typographic Conventions
This table describes the typographic conventions used in this book.
[.informaltable]
[cols="1,2", frame="none", options="header"]
|===
| Meaning
| Examples
|The names of commands.
|Use `ls -l` to list all files.
|The names of files.
|Edit [.filename]#.login#.
|On-screen computer output.
a|
[source,shell]
....
You have mail.
....
|What the user types, contrasted with on-screen computer output.
a|
[source,shell]
....
% date +"The time is %H:%M"
The time is 09:18
....
|Manual page references.
|Use man:su[1] to change user identity.
|User and group names.
|Only `root` can do this.
|Emphasis.
|The user _must_ do this.
|Text that the user is expected to replace with the actual text.
|To search for a keyword in the manual pages, type `man -k _keyword_`.
|Environment variables.
|`$HOME` is set to the user's home directory.
|===
[[preface-notes]]
== Notes, Tips, Important Information, Warnings, and Examples
Notes, warnings, and examples appear within the text.
[NOTE]
====
Notes are represented like this, and contain information to take note of, as it may affect what the user does.
====
[TIP]
====
Tips are represented like this, and contain information helpful to the user, such as showing an easier way to do something.
====
[IMPORTANT]
====
Important information is represented like this.
Typically, these show extra steps the user may need to take.
====
[WARNING]
====
Warnings are represented like this, and contain information warning about possible damage if the instructions are not followed.
This damage may be physical, to the hardware or the user, or it may be non-physical, such as the inadvertent deletion of important files.
====
.A Sample Example
[example]
====
Examples are represented like this, and typically contain examples showing a walkthrough, or the results of a particular action.
====
[[preface-acknowledgements]]
== Acknowledgments
My thanks to Sue Blake, Patrick Durusau, Jon Hamilton, Peter Flynn, and Christopher Maden, who took the time to read early drafts of this document and offer many valuable comments and criticisms.
diff --git a/documentation/content/en/books/fdp-primer/rosetta/_index.adoc b/documentation/content/en/books/fdp-primer/rosetta/_index.adoc
index b6221643aa..3bb8298c09 100644
--- a/documentation/content/en/books/fdp-primer/rosetta/_index.adoc
+++ b/documentation/content/en/books/fdp-primer/rosetta/_index.adoc
@@ -1,291 +1,292 @@
---
title: Chapter 7. Rosetta Stone
prev: books/fdp-primer/asciidoctor-primer
next: books/fdp-primer/translations
+description: Rosetta Stone with the differences between Docbook and AsciiDoc
---
[[rosetta]]
= Rosetta Stone
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 7
toc::[]
[[docbook-vs-asciidoc]]
== Comparison between Docbook and AsciiDoc
This rosetta stone shows the differences between Docbook and AsciiDoc.
.Comparison between Docbook and AsciiDoc
[cols="1,4,4"]
|===
|Language Feature |Docbook | AsciiDoc
|*Bold*
|<strong>bold</strong>
|\*bold*
|*Italic*
|<emphasis>Italic</emphasis>
|\_Italic_
|*Monospace*
|<literal>Monospace</literal>
|\`Monospace`
|*Paragraph*
|<para>This is a paragraph</para>
|This is a paragraph
|*Keycap*
|<keycap>F11</keycap>
|\kbd:[F11]
|*Links*
a|
[source,xml]
----
<link xlink:href="https://www.freebsd.org/where/">Download FreeBSD</link>
----
a|
[source]
----
link:https://www.freebsd.org/where/[Download FreeBSD]
----
|*Sections*
a|
[source,xml]
----
<sect1 xml:id="id">
<title>Section 1</title>
</sect1>
----
a|
[source]
----
[[id]]
= Section 1
----
|*Unordered list*
a|
[source,xml]
----
<itemizedlist>
<listitem>
<para>When to build a custom kernel.</para>
</listitem>
<listitem>
<para>How to take a hardware inventory.</para>
</listitem>
</itemizedlist>
----
a|
[source]
----
* When to build a custom kernel.
* How to take a hardware inventory.
----
|*Ordered list*
a|
[source,xml]
----
<orderedlist>
<listitem>
<para>One</para>
</listitem>
<listitem>
<para>Two</para>
</listitem>
<listitem>
<para>Three</para>
</listitem>
<listitem>
<para>Four</para>
</listitem>
</orderedlist>
----
a|
[source]
----
. One
. Two
. Three
. Four
----
|*Variable list*
a|
[source,xml]
----
<variablelist>
<varlistentry>
<term>amd64</term>
<listitem>
<para>This is the most common desktop...</para>
</listitem>
</varlistentry>
</variablelist>
----
a|
[source]
----
amd64::
This is the most common desktop...
----
|*Source code*
a|
[source,xml]
----
<screen>
&prompt.root; <userinput>mkdir -p /var/spool/lpd/lp</userinput>
</screen>
----
a|
[source]
....
[source,shell]
----
# mkdir -p /var/spool/lpd/lp
----
....
|*Literal block*
a|
[source,xml]
----
<programlisting>
include GENERIC
ident MYKERNEL
options IPFIREWALL
options DUMMYNET
options IPFIREWALL_DEFAULT_TO_ACCEPT
options IPDIVERT
</programlisting>
----
a|
[source]
----
....
include GENERIC
ident MYKERNEL
options IPFIREWALL
options DUMMYNET
options IPFIREWALL_DEFAULT_TO_ACCEPT
options IPDIVERT
....
----
|*Images*
a|
[source,xml]
----
<figure xml:id="bsdinstall-newboot-loader-menu">
<title>FreeBSD Boot Loader Menu</title>
<mediaobject>
<imageobject>
<imagedata fileref="bsdinstall/bsdinstall-newboot-loader-menu"/>
</imageobject>
<textobject>
<literallayout>ASCII art replacement is no longer supported.</literallayout>
</textobject>
<textobject>
<phrase>The FreeBSD loader menu, with options 1-6 to boot
multi-user, boot single user, escape to loader prompt, reboot,
select a kernel to load, and select boot options</phrase>
</textobject>
</mediaobject>
</figure>
----
a|
[source]
----
[[bsdinstall-newboot-loader-menu]]
.FreeBSD Boot Loader Menu
image::bsdinstall/bsdinstall-newboot-loader-menu[The FreeBSD loader menu, with options 1-6 to boot multi-user, boot single user, escape to loader prompt, reboot, select a kernel to load, and select boot options]
----
|*Includes*
|n/a
a|
[source]
----
\include::chapter.adoc[]
----
|*Tables*
a|
[source,xml]
----
<table xml:id="partition-schemes" frame="none" rowsep="1" pgwide="1">
<title>Partitioning Schemes</title>
<tgroup cols="2" align="left">
<thead>
<row>
<entry align="left">Abbreviation</entry>
<entry align="left">Description</entry>
</row>
</thead>
<tbody>
<row>
<entry>APM</entry>
<entry>Apple Partition Map, used by PowerPC(R).</entry>
</row>
</tbody>
</tgroup>
</table>
----
a|
[source]
----
[[partition-schemes]]
.Partitioning Schemes
[cols="1,1", frame="none", options="header"]
\|===
\| Abbreviation
\| Description
\|APM
\|Apple Partition Map, used by PowerPC(R).
\|===
----
|*Admonitions*
a|
[source,xml]
----
<tip>
<para>This is a tip</para>
</tip>
----
a|
[source]
----
[TIP]
====
This is a tip
====
----
|===
diff --git a/documentation/content/en/books/fdp-primer/see-also/_index.adoc b/documentation/content/en/books/fdp-primer/see-also/_index.adoc
index 6ab1433f4b..166b3c92ea 100644
--- a/documentation/content/en/books/fdp-primer/see-also/_index.adoc
+++ b/documentation/content/en/books/fdp-primer/see-also/_index.adoc
@@ -1,46 +1,47 @@
---
title: Chapter 13. See Also
prev: books/fdp-primer/editor-config/
next: books/fdp-primer/examples
+description: More information about the FreeBSD Documentation Project
---
[[see-also]]
= See Also
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 13
include::shared/en/urls.adoc[]
toc::[]
This document is deliberately not an exhaustive discussion of AsciiDoc and the FreeBSD Documentation Project.
For more information about these, you are encouraged to see the following web sites.
[[see-also-fdp]]
== The FreeBSD Documentation Project
* link:https://www.FreeBSD.org/docproj/[The FreeBSD Documentation Project web pages]
* link:{handbook}[The FreeBSD Handbook]
[[see-also-asciidoc]]
== AsciiDoctor
* link:https://asciidoctor.org/[AsciiDoctor]
[[see-also-html]]
== HTML
* link:http://www.w3.org/[The World Wide Web Consortium]
* link:https://dev.w3.org/html5/spec-LC/[The HTML 5 specification]
* link:https://www.w3.org/Style/CSS/specs.en.html[CSS specification]
diff --git a/documentation/content/en/books/fdp-primer/structure/_index.adoc b/documentation/content/en/books/fdp-primer/structure/_index.adoc
index 2f190b6dd2..7973dcd8b6 100644
--- a/documentation/content/en/books/fdp-primer/structure/_index.adoc
+++ b/documentation/content/en/books/fdp-primer/structure/_index.adoc
@@ -1,247 +1,248 @@
---
title: Chapter 4. Documentation Directory Structure
prev: books/fdp-primer/working-copy
next: books/fdp-primer/doc-build
+description: Documentation Directory Structure explanation used in the FreeBSD Documentation Project
---
[[structure]]
= Documentation Directory Structure
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 4
toc::[]
Files and directories in the [.filename]#doc/# tree follow a structure meant to:
. Make it easy to automate converting the document to other formats.
. Promote consistency between the different documentation organizations, to make it easier to switch between working on different documents.
. Make it easy to decide where in the tree new documentation should be placed.
In addition, the documentation tree must accommodate documents in many different languages.
It is important that the documentation tree structure does not enforce any particular defaults or cultural preferences.
[[structure-top]]
== The Top Level, doc/
There are three sections under [.filename]#doc/#; the [.filename]#documentation# and [.filename]#website# sections share the same structure.
[cols="20%,80%", frame="none", options="header"]
|===
| Directory
| Usage
|[.filename]#documentation#
|Contains all the articles and books in AsciiDoc format.
Contains subdirectories to further categorize the information by languages.
|[.filename]#shared#
|Contains files that are not specific to the various translations of the documentation.
Contains subdirectories to further categorize the information by languages and three files to store the authors, releases and mirrors information.
This directory is shared between `documentation` and the `website`.
|[.filename]#website#
|Contains the link:https://www.FreeBSD.org[FreeBSD website] in AsciiDoc format.
Contains subdirectories to further categorize the information by languages.
|===
[[structure-locale]]
== The Directories
These directories contain the documentation and the website.
The documentation is organized into subdirectories below this level, following the link:https://gohugo.io/getting-started/directory-structure/[Hugo directory structure].
[cols="20%,80%", frame="none", options="header"]
|===
| Directory
| Usage
|[.filename]#archetypes#
|Contains templates for creating new articles, books, and web pages.
For more information, see link:https://gohugo.io/content-management/archetypes/[the Hugo archetypes documentation].
|[.filename]#config#
|Contains the Hugo configuration files.
One main file and one file per language.
For more information, see link:https://gohugo.io/getting-started/configuration/[the Hugo configuration documentation].
|[.filename]#content#
|Contains the books, articles, and web pages.
One directory exists for each available translation of the documentation, for example `en` and `zh-tw`.
|[.filename]#data#
|Contains custom data used to build the website, in link:https://en.wikipedia.org/wiki/TOML[TOML] format.
This directory is used to store the events, news, press, etc.
For more information, see link:https://gohugo.io/templates/data-templates/[the Hugo data templates documentation].
|[.filename]#static#
|Contains static assets: images, security advisories, the pgpkeys, etc.
For more information, see link:https://gohugo.io/content-management/static-files/[the Hugo static files documentation].
|[.filename]#themes#
|Contains the templates, in the form of `.html` files, that specify how the website looks.
For more information, see link:https://gohugo.io/templates/[the Hugo templates documentation].
|[.filename]#tools#
|Contains tools used to enhance the documentation build,
for example to generate the Table of Contents of the books.
| [.filename]#beastie.png#
| This image doesn't need an introduction ;)
| [.filename]#LICENSE#
| License of the documentation, shared and website. BSD 2-Clause License.
| [.filename]#Makefile#
| The [.filename]#Makefile# defines the build process of the documentation and the website.
|===
[[structure-document]]
== Document-Specific Information
This section contains specific notes about particular documents managed by the FDP.
[[structure-document-books]]
== The Books: books/
The books are written in AsciiDoc and organized as an AsciiDoc `book`.
The books are divided into ``part``s, each of which contains several ``chapter``s.
``chapter``s are further subdivided into sections (`=`) and subsections (`==`, `===`) and so on.
[[structure-document-books-physical]]
=== Physical Organization
There are a number of files and directories within the books directory, all with the same structure.
[[structure-document-books-physical-index]]
==== _index.adoc
The [.filename]#_index.adoc# file defines some AsciiDoc variables that affect how the AsciiDoc source is converted to other formats and lists the Table of Contents, Table of Examples, Table of Figures, Table of Tables, and the abstract section.
[[structure-document-books-physical-book]]
==== book.adoc
The [.filename]#book.adoc# file defines some AsciiDoc variables that affect how the AsciiDoc source is converted to other formats and lists the Table of Contents, Table of Examples, Table of Figures, Table of Tables, the abstract section, and all the chapters.
This file is used to generate the PDF with `asciidoctor-pdf` and to generate the book in one `html` page.
[[structure-document-books-physical-part]]
==== part*.adoc
The [.filename]#part*.adoc# files store a brief introduction to one part of the book.
[[structure-document-books-physical-toc]]
==== toc*.adoc
The [.filename]#toc*.adoc# files store the Table of Contents, Table of Figures, Table of Examples, Table of Tables, and a separate Table of Contents for each part.
All of these files are generated by the Python `tools`.
*Please do not edit them.*
[[structure-document-books-physical-chapters-order]]
==== chapters-order.adoc
The [.filename]#chapters-order.adoc# file stores the order of the book chapters.
[IMPORTANT]
====
Please be careful with this file.
It is used by the Python `tools` to generate the Table of Contents of the books.
Before editing this file, first contact the mailto:doceng@freebsd.org[Documentation Engineering] team.
====
[[structure-document-handbook-physical-chapters]]
==== directory/_index.adoc
Each chapter in the Handbook is stored in a file called [.filename]#_index.adoc# in a separate directory from the other chapters.
For example, this is the header of one chapter:
[.programlisting]
....
---
title: Chapter 8. Configuring the FreeBSD Kernel
part: Part II. Common Tasks
prev: books/handbook/multimedia
next: books/handbook/printing
---
[[kernelconfig]]
\= Configuring the FreeBSD Kernel <.>
...
....
<.> The character at the end of the line should not be used in a production document.
This character is here to skip this title in the autogenerated [.filename]#toc-*.adoc# files.
When the HTML5 version of the Handbook is produced, this will yield [.filename]#kernelconfig/index.html#.
A brief look will show that there are many directories with individual [.filename]#_index.adoc# files, including [.filename]#basics/_index.adoc#, [.filename]#introduction/_index.adoc#, and [.filename]#printing/_index.adoc#.
[IMPORTANT]
====
Do not name chapters or directories after their ordering within the Handbook.
This ordering can change as the content within the Handbook is reorganized.
Reorganization should be possible without renaming files, unless entire chapters are being promoted or demoted within the hierarchy.
====
[[structure-document-articles]]
== The Articles: articles/
The articles are written in AsciiDoc and organized as an AsciiDoc `article`.
The articles are divided into sections (`=`) and subsections (`==`, `===`) and so on.
[[structure-document-articles-physical]]
=== Physical Organization
There is one [.filename]#_index.adoc# file per article.
[[structure-document-articles-physical-index]]
==== _index.adoc
The [.filename]#_index.adoc# file contains all the AsciiDoc variables and the content.
For example, this is the header of one article; the structure is very similar to that of a book chapter:
[.programlisting]
....
---
title: Why you should use a BSD style license for your Open Source Project
authors:
- author: Bruce Montague
email: brucem@alumni.cse.ucsc.edu
releaseinfo: "$FreeBSD$"
trademarks: ["freebsd", "intel", "general"]
---
\= Why you should use a BSD style license for your Open Source Project <1>
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
'''
toc::[]
[[intro]]
\== Introduction <1>
....
<1> The character at the end of the line should not be used in a production document.
This character is here to skip this title in the autogenerated [.filename]#toc-*.adoc# files.
diff --git a/documentation/content/en/books/fdp-primer/tools/_index.adoc b/documentation/content/en/books/fdp-primer/tools/_index.adoc
index 87f35747ac..9085b010a2 100644
--- a/documentation/content/en/books/fdp-primer/tools/_index.adoc
+++ b/documentation/content/en/books/fdp-primer/tools/_index.adoc
@@ -1,49 +1,50 @@
---
title: Chapter 2. Tools
prev: books/fdp-primer/overview
next: books/fdp-primer/working-copy
+description: Tools used in the FreeBSD Documentation Project
---
[[tools]]
= Tools
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 2
toc::[]
Several software tools are used to manage the FreeBSD documentation and render it to different output formats.
Some of these tools are required and must be installed before working through the examples in the following chapters.
Some are optional, adding capabilities or making the job of creating documentation less demanding.
[[tools-required]]
== Required Tools
Install `gohugo` and `rubygem-asciidoctor` from the Ports Collection, as shown in crossref:overview[overview,the overview chapter].
These applications are required to do useful work with the FreeBSD documentation.
Some further notes on particular components are given below.
[[tools-optional]]
== Optional Tools
These applications are not required, but can make working on the documentation easier or add capabilities.
[[tools-optional-software]]
=== Software
Vim (package:editors/vim[])::
A popular editor for working with AsciiDoctor.
Emacs (package:editors/emacs[])::
Both of these editors include a special mode for editing documents.
This mode includes commands to reduce the amount of typing needed, and help reduce the possibility of errors.
diff --git a/documentation/content/en/books/fdp-primer/translations/_index.adoc b/documentation/content/en/books/fdp-primer/translations/_index.adoc
index 897e3d5c67..ea7f0b94c6 100644
--- a/documentation/content/en/books/fdp-primer/translations/_index.adoc
+++ b/documentation/content/en/books/fdp-primer/translations/_index.adoc
@@ -1,230 +1,231 @@
---
title: Chapter 8. Translations
prev: books/fdp-primer/rosetta
next: books/fdp-primer/po-translations
+description: FAQ about the translation process in the FreeBSD Documentation Project
---
[[translations]]
= Translations
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 8
include::shared/en/teams.adoc[]
include::shared/en/mailing-lists.adoc[]
toc::[]
This is the FAQ for people translating the FreeBSD documentation (FAQ, Handbook, tutorials, manual pages, and others) to different languages.
It is _very_ heavily based on the translation FAQ from the FreeBSD German Documentation Project, originally written by Frank Gründer
mailto:elwood@mc5sys.in-berlin.de[elwood@mc5sys.in-berlin.de] and translated back to English by Bernd Warken
mailto:bwarken@mayn.de[bwarken@mayn.de].
== What do i18n and l10n mean?
i18n means internationalization and l10n means localization.
They are just a convenient shorthand.
i18n can be read as "i" followed by 18 letters, followed by "n".
Similarly, l10n is "l" followed by 10 letters, followed by "n".
== Is there a mailing list for translators?
Yes. Different translation groups have their own mailing lists.
The https://www.freebsd.org/docproj/translations[list of translation projects] has more information about the mailing lists and web sites run by each translation project.
In addition there is mailto:freebsd-translators@freebsd.org[freebsd-translators@freebsd.org] for general translation discussion.
== Are more translators needed?
Yes. The more people that work on translation the faster it gets done, and the faster changes to the English documentation are mirrored in the translated documents.
You do not have to be a professional translator to be able to help.
== What languages do I need to know?
Ideally, you will have a good knowledge of written English, and obviously you will need to be fluent in the language you are translating to.
English is not strictly necessary.
For example, you could do a Hungarian translation of the FAQ from the Spanish translation.
== What software do I need to know?
It is strongly recommended that you maintain a local copy of the FreeBSD Git repository (at least the documentation part).
This can be done by running:
[source,shell]
....
% git clone https://git.FreeBSD.org/doc.git ~/doc
....
https://git.FreeBSD.org/[git.FreeBSD.org] is a public `git` server.
[NOTE]
====
This will require the package:git-lite[] package to be installed.
====
You should be comfortable using git.
This will allow you to see what has changed between different versions of the files that make up the documentation.
For example, to view the differences between revisions `abff932fe8` and `2191c44469` of [.filename]#documentation/content/en/articles/committers-guide/_index.adoc#, run:
[source,shell]
....
% git diff abff932fe8 2191c44469 documentation/content/en/articles/committers-guide/_index.adoc
....
Please see the complete explanation of using Git in FreeBSD in the link:{handbook}mirrors/#git[FreeBSD Handbook].
== How do I find out who else might be translating to the same language?
The https://www.FreeBSD.org/docproj/translations/[Documentation Project translations page] lists the translation efforts that are currently known about.
If others are already working on translating documentation to your language, please do not duplicate their efforts.
Instead, contact them to see how you can help.
If no one is listed on that page as translating for your language, then send a message to the {freebsd-doc} in case someone else is thinking of doing a translation, but has not announced it yet.
== No one else is translating to my language. What do I do?
Congratulations, you have just started the "FreeBSD _your-language-here_ Documentation Translation Project".
Welcome aboard.
First, decide whether or not you have got the time to spare.
Since you are the only person working on your language at the moment it is going to be your responsibility to publicize your work and coordinate any volunteers that might want to help you.
Write an email to the Documentation Project mailing list, announcing that you are going to translate the documentation, so the Documentation Project translations page can be maintained.
If there is already someone in your country providing FreeBSD mirroring services you should contact them and ask if you can have some webspace for your project, and possibly an email address or mailing list services.
Then pick a document and start translating.
It is best to start with something fairly small - either the FAQ, or one of the tutorials.
== I have translated some documentation, where do I send it?
That depends.
If you are already working with a translation team (such as the Japanese team, or the German team) then they will have their own procedures for handling submitted documentation, and these will be outlined on their web pages.
If you are the only person working on a particular language (or you are responsible for a translation project and want to submit your changes back to the FreeBSD project) then you should send your translation to the FreeBSD project (see the next question).
== I am the only person working on translating to this language, how do I submit my translation?
First, make sure your translation is organized properly.
This means that it should drop into the existing documentation tree and build straight away.
[WARNING]
====
Hugo needs the language codes in lowercase.
For example, instead of `pt_BR` Hugo uses `pt-br`.
====
Currently, the FreeBSD documentation is stored in a top level directory called [.filename]#documentation/#.
Directories below this are named according to the language code they are written in, as defined in ISO639 ([.filename]#/usr/share/misc/iso639# on a version of FreeBSD newer than 20th January 1999).
If your language can be encoded in different ways (for example, Chinese) then there should be directories below this, one for each encoding format you have provided.
Finally, you should have directories for each document.
For example, a hypothetical Swedish translation might look like:
[.programlisting]
....
documentation/
content/
sv/
books/
faq/
_index.adoc
....
`sv` is the name of the translation, in [.filename]#lang# form.
Use the `git diff` command to generate a diff and send it to the link:https://reviews.freebsd.org/[review system].
[source,shell]
....
% git diff > sv-faq.diff
....
You should use Bugzilla to link:https://bugs.freebsd.org/bugzilla/enter_bug.cgi[submit a report] indicating that you have submitted the documentation.
It would be very helpful if you could get other people to look over your translation and double check it first, since it is unlikely that the person committing it will be fluent in the language.
Someone (probably the Documentation Project Manager, currently {doceng}) will then take your translation and confirm that it builds.
In particular, the following things will be looked at:
. Does `make` in the [.filename]#root# directory work correctly?
If there are any problems then whoever is looking at the submission will get back to you to work them out.
If there are no problems your translation will be committed as soon as possible.
== Can I include language or country specific text in my translation?
We would prefer that you did not.
For example, suppose that you are translating the Handbook to Korean, and want to include a section about retailers in Korea in your Handbook.
There is no real reason why that information should not be in the English (or German, or Spanish, or Japanese, or ...) versions as well.
It is feasible that an English speaker in Korea might try to pick up a copy of FreeBSD whilst over there.
It also helps increase FreeBSD's perceived presence around the globe, which is not a bad thing.
If you have country specific information, please submit it as a change to the English Handbook (using Bugzilla) and then translate the change back to your language in the translated Handbook.
Thanks.
=== Addressing the reader
In the English documents, the reader is addressed as "you", there is no formal/informal distinction as there is in some languages.
If you are translating to a language which does distinguish, use whichever form is typically used in other technical documentation in your language.
If in doubt, use a mildly polite form.
=== Do I need to include any additional information in my translations?
Yes.
The header of the English version of each document will look something like this:
[.programlisting]
....
---
title: Why you should use a BSD style license for your Open Source Project
releaseinfo: "$FreeBSD: head/en_US.ISO8859-1/articles/bsdl-gpl/article.xml 53942 2020-03-01 12:23:40Z carlavilla $"
trademarks: ["freebsd", "intel", "general"]
---
= Why you should use a BSD style license for your Open Source Project
....
The exact boilerplate may change, but it will always include a $FreeBSD$ line and the phrase `The FreeBSD Documentation Project`.
Note that the $FreeBSD$ part is expanded automatically by Git, so it should be empty (just `$FreeBSD$`) for new files.
Your translated documents should include their own $FreeBSD$ line, and change the `FreeBSD Documentation Project` line to `The FreeBSD _language_ Documentation Project`.
In addition, you should add a third line which indicates which revision of the English text this is based on.
So, the Spanish version of this file might start:
[.programlisting]
....
---
title: Soporte para segundos intercalares en FreeBSD
releaseinfo: "$FreeBSD: head/es_ES.ISO8859-1/articles/leap-seconds/article.xml 53090 2019-06-01 17:52:59Z carlavilla $"
---
= Soporte para segundos intercalares en FreeBSD
....
diff --git a/documentation/content/en/books/fdp-primer/working-copy/_index.adoc b/documentation/content/en/books/fdp-primer/working-copy/_index.adoc
index b6375a6e55..8a1abedaf8 100644
--- a/documentation/content/en/books/fdp-primer/working-copy/_index.adoc
+++ b/documentation/content/en/books/fdp-primer/working-copy/_index.adoc
@@ -1,135 +1,136 @@
---
title: Chapter 3. The Working Copy
prev: books/fdp-primer/tools
next: books/fdp-primer/structure
+description: How to get a working copy of the FreeBSD Documentation Project
---
[[working-copy]]
= The Working Copy
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 3
toc::[]
The _working copy_ is a copy of the FreeBSD repository documentation tree downloaded onto the local computer.
Changes are made to the local working copy, tested, and then submitted as patches to be committed to the main repository.
A full copy of the documentation tree can occupy 550 megabytes of disk space.
Allow for a full gigabyte of space to have room for temporary files and test versions of various output formats.
link:https://git-scm.com/[Git] is used to manage the FreeBSD documentation files.
It is obtained by installing the Git package:
[source,shell]
....
# pkg install git-lite
....
[[working-copy-doc-and-src]]
== Documentation and Manual Pages
FreeBSD documentation is not just books and articles.
Manual pages for all the commands and configuration files are also part of the documentation, and part of the FDP's territory.
Two repositories are involved: `doc` for the books and articles, and `src` for the operating system and manual pages.
To edit manual pages, the `src` repository must be checked out separately.
Repositories may contain multiple versions of documentation and source code.
New modifications are almost always made only to the latest version, called `main`.
[[working-copy-choosing-directory]]
== Choosing a Directory
FreeBSD documentation is traditionally stored in [.filename]#/usr/doc/#, and system source code with manual pages in [.filename]#/usr/src/#.
These directory trees are relocatable, and users may want to put the working copies in other locations to avoid interfering with existing information in the main directories.
The examples that follow use [.filename]#~/doc# and [.filename]#~/src#, both subdirectories of the user's home directory.
[[working-copy-checking-out]]
== Checking Out a Copy
A download of a working copy from the repository is called a _clone_, and is done with `git clone`.
This example clones a copy of the latest version (`main`) of the main documentation tree:
[source,shell]
....
% git clone https://git.FreeBSD.org/doc.git ~/doc
....
A checkout of the source code to work on manual pages is very similar:
[source,shell]
....
% git clone https://git.FreeBSD.org/src.git ~/src
....
[[working-copy-updating]]
== Updating a Working Copy
The documents and files in the FreeBSD repository change daily.
People modify files and commit changes frequently.
Even a short time after an initial checkout, there will already be differences between the local working copy and the main FreeBSD repository.
To update the local version with the changes that have been made to the main repository, use `git pull` on the directory containing the local working copy:
[source,shell]
....
% cd ~/doc
% git pull --ff-only
....
Get in the protective habit of using `git pull` before editing document files.
Someone else may have edited that file very recently, and the local working copy will not include the latest changes until it has been updated.
Editing the newest version of a file is much easier than trying to combine an older, edited local file with the newer version from the repository.
[[working-copy-revert]]
== Reverting Changes
Sometimes it turns out that changes were not necessary after all, or the writer just wants to start over.
Files can be "reset" to their unchanged form with `git restore`.
For example, to erase the edits made to [.filename]#_index.adoc# and reset it to unmodified form:
[source,shell]
....
% git restore _index.adoc
....
[[working-copy-making-diff]]
== Making a Diff
After edits to a file or group of files are completed, the differences between the local working copy and the version on the FreeBSD repository must be collected into a single file for submission.
These _diff_ files are produced by redirecting the output of `git diff` into a file:
[source,shell]
....
% cd ~/doc
% git diff > doc-fix-spelling.diff
....
Give the file a meaningful name that identifies the contents.
The example above is for spelling fixes to the whole documentation tree.
If the diff file is to be submitted with the web "link:https://bugs.FreeBSD.org/bugzilla/enter_bug.cgi[Submit a FreeBSD problem report]" interface, add a [.filename]#.txt# extension to give the earnest and simple-minded web form a clue that the contents are plain text.
Be careful: `git diff` includes all changes made in the current directory and any subdirectories.
If there are files in the working copy with edits that are not ready to be submitted yet, provide a list of only the files that are to be included:
[source,shell]
....
% cd ~/doc
% git diff disks/_index.adoc printers/_index.adoc > disks-printers.diff
....
[[working-copy-git-references]]
== Git References
These examples show very basic usage of Git.
More detail is available in the https://git-scm.com/book/en/v2[Git Book] and the https://git-scm.com/doc[Git documentation].
diff --git a/documentation/content/en/books/fdp-primer/writing-style/_index.adoc b/documentation/content/en/books/fdp-primer/writing-style/_index.adoc
index afb1494d6a..71f12ab1c9 100644
--- a/documentation/content/en/books/fdp-primer/writing-style/_index.adoc
+++ b/documentation/content/en/books/fdp-primer/writing-style/_index.adoc
@@ -1,231 +1,232 @@
---
title: Chapter 11. Writing Style
prev: books/fdp-primer/manual-pages
next: books/fdp-primer/editor-config
+description: Writing Style and some conventions used in the FreeBSD Documentation Project
---
[[writing-style]]
= Writing Style
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 11
include::shared/en/mailing-lists.adoc[]
toc::[]
[[writing-style-tips]]
== Tips
Technical documentation can be improved by consistent use of several principles.
Most of these can be classified into three goals: _be clear_, _be complete_, and _be concise_.
These goals can conflict with each other. Good writing consists of a balance between them.
[[writing-style-be-clear]]
=== Be Clear
Clarity is extremely important.
The reader may be a novice, or reading the document in a second language.
Strive for simple, uncomplicated text that clearly explains the concepts.
Avoid flowery or embellished speech, jokes, or colloquial expressions.
Write as simply and clearly as possible.
Simple text is easier to understand and translate.
Keep explanations as short, simple, and clear as possible.
Avoid empty phrases like "in order to", which usually just means "to".
Avoid potentially patronizing words like "basically".
Avoid Latin terms like "i.e.," or "cf.", which may be unknown outside of academic or scientific groups.
Write in a formal style.
Avoid addressing the reader as "you".
For example, say "copy the file to [.filename]#/tmp#" rather than "you can copy the file to [.filename]#/tmp#".
Give clear, correct, _tested_ examples.
A trivial example is better than no example.
A good example is better yet.
Do not give bad examples, identifiable by apologies or sentences like "but really it should never be done that way".
Bad examples are worse than no examples.
Give good examples, because _even when warned not to use the example as shown_,
the reader will usually just use the example as shown.
Avoid _weasel words_ like "should", "might", "try", or "could".
These words imply that the speaker is unsure of the facts, and create doubt in the reader.
Similarly, give instructions as imperative commands: not "you should do this", but merely "do this".
[[writing-style-be-complete]]
=== Be Complete
Do not make assumptions about the reader's abilities or skill level.
Tell them what they need to know.
Give links to other documents to provide background information without having to recreate it.
Put yourself in the reader's place, anticipate the questions they will ask, and answer them.
[[writing-style-be-concise]]
=== Be Concise
While features should be documented completely,
sometimes there is so much information that the reader cannot easily find the specific detail needed.
The balance between being complete and being concise is a challenge.
One approach is to have an introduction,
then a "quick start" section that describes the most common situation,
followed by an in-depth reference section.
[[writing-style-guidelines]]
== Guidelines
To promote consistency between the myriad authors of the FreeBSD documentation,
some guidelines have been drawn up for authors to follow.
Use American English Spelling::
There are several variants of English, with different spellings for the same word.
Where spellings differ, use the American English variant.
"color", not "colour", "rationalize", not "rationalise", and so on.
+
[NOTE]
====
The use of British English may be accepted in the case of a contributed article;
however, the spelling must be consistent within the whole document.
Other documents, such as books, the web site, and manual pages, must use American English.
====
Do not use contractions::
Do not use contractions.
Always spell the phrase out in full.
"Don't use contractions" is wrong.
+
Avoiding contractions makes for a more formal tone, is more precise, and is slightly easier for translators.
Use the serial comma::
In a list of items within a paragraph, separate each item from the others with a comma.
Separate the last item from the others with a comma and the word "and".
+
For example:
+
This is a list of one, two and three items.
+
Is this a list of three items, "one", "two", and "three", or a list of two items, "one" and "two and three"?
+
It is better to be explicit and include a serial comma:
+
This is a list of one, two, and three items.
Avoid redundant phrases::
Do not use redundant phrases.
In particular, "the command", "the file", and "man command" are often redundant.
+
For example, commands:
+
Wrong: Use the `git` command to update sources.
+
Right: Use `git` to update sources.
+
Filenames:
+
Wrong: ... in the filename [.filename]#/etc/rc.local#...
+
Right: ... in [.filename]#/etc/rc.local#...
+
Manual page references (the second example uses `citerefentry` with the man:csh[1] entity):
+
Wrong: See `man csh` for more information.
+
Right: See man:csh[1].
For more information about writing style, see http://www.bartleby.com/141/[Elements of Style], by William Strunk.
[[writing-style-guide]]
== Style Guide
To keep the source for the documentation consistent when many different people are editing it, please follow these style conventions.
[[one-sentence-per-line]]
== One sentence per line
Use Semantic Line Breaks in the documentation, a technique called "one sentence per line".
The idea of this technique is to make the documentation easier to write and to read.
To get more information about this technique read the link:https://sembr.org/[Semantic Line Breaks] page.
This is an example which does not use "one sentence per line".
....
All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.
....
And this is an example which uses the technique.
....
All human beings are born free and equal in dignity and rights.
They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.
....
[[writing-style-acronyms]]
=== Acronyms
Acronyms should be defined the first time they appear in a document, as in: "Network Time Protocol (NTP)".
After the acronym has been defined, use the acronym alone unless it makes more sense contextually to use the whole term.
Acronyms are usually defined only once per chapter or per document.
All acronyms should be enclosed in ` characters.
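For example, a minimal illustration of that markup in the AsciiDoc source:
....
The server clock is synchronized with the Network Time Protocol (`NTP`).
Later sentences can then refer to `NTP` alone.
....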
[[writing-style-special-characters]]
== Special Character List
This list of special characters shows the correct syntax and the output when used in FreeBSD documentation.
If a character is not on this list, ask about it on the {freebsd-doc}.
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Name
| Syntax
| Rendered
| Copyright
| +(C)+
| (C)
| Registered
| +(R)+
| (R)
| Trademark
| +(TM)+
| (TM)
| Em dash
| +--+
| --
| Ellipses
| +...+
| ...
| Single right arrow
| +->+
| ->
| Double right arrow
| +=>+
| =>
| Single left arrow
| +<-+
| <-
| Double left arrow
| +<=+
| <=
|===
diff --git a/documentation/content/en/books/handbook/_index.adoc b/documentation/content/en/books/handbook/_index.adoc
index e22ea09e43..0fdd49727d 100644
--- a/documentation/content/en/books/handbook/_index.adoc
+++ b/documentation/content/en/books/handbook/_index.adoc
@@ -1,39 +1,39 @@
---
title: FreeBSD Handbook
authors:
- author: The FreeBSD Documentation Project
copyright: 1995-2021 The FreeBSD Documentation Project
-releaseinfo: "$FreeBSD$"
+description: FreeBSD Handbook Index
trademarks: ["freebsd", "ibm", "ieee", "redhat", "3com", "adobe", "apple", "intel", "linux", "microsoft", "opengroup", "sun", "realnetworks", "oracle", "3ware", "arm", "adaptec", "google", "heidelberger", "intuit", "lsilogic", "themathworks", "thomson", "vmware", "wolframresearch", "xiph", "xfree86", "general"]
next: books/handbook/preface
---
= FreeBSD Handbook
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
[.abstract-title]
Abstract
Welcome to FreeBSD! This handbook covers the installation and day to day use of _FreeBSD {rel130-current}-RELEASE_, _FreeBSD {rel122-current}-RELEASE_ and _FreeBSD {rel114-current}-RELEASE_. This book is the result of ongoing work by many individuals. Some sections might be outdated. Those interested in helping to update and expand this document should send email to the {freebsd-doc}.
The latest version of this book is available from the https://www.FreeBSD.org/[FreeBSD web site]. Previous versions can be obtained from https://docs.FreeBSD.org/doc/[https://docs.FreeBSD.org/doc/]. The book can be downloaded in a variety of formats and compression options from the https://download.freebsd.org/ftp/doc/[FreeBSD FTP server] or one of the numerous link:./mirrors#mirrors-ftp[mirror sites]. Printed copies can be purchased at the https://www.freebsdmall.com/[FreeBSD Mall]. Searches can be performed on the handbook and other documents on the link:https://www.FreeBSD.org/search/[search page].
'''
include::content/en/books/handbook/toc.adoc[]
include::content/en/books/handbook/toc-figures.adoc[]
include::content/en/books/handbook/toc-tables.adoc[]
include::content/en/books/handbook/toc-examples.adoc[]
diff --git a/documentation/content/en/books/handbook/advanced-networking/_index.adoc b/documentation/content/en/books/handbook/advanced-networking/_index.adoc
index 76fa248741..33cb761d1c 100644
--- a/documentation/content/en/books/handbook/advanced-networking/_index.adoc
+++ b/documentation/content/en/books/handbook/advanced-networking/_index.adoc
@@ -1,2821 +1,2822 @@
---
title: Chapter 32. Advanced Networking
part: IV. Network Communication
prev: books/handbook/firewalls
next: books/handbook/partv
+description: "Advanced networking in FreeBSD: basics of gateways and routes, CARP, how to configure multiple VLANs on FreeBSD, etc"
---
[[advanced-networking]]
= Advanced Networking
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 32
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../../images/books/handbook/advanced-networking/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/books/handbook/advanced-networking/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/books/handbook/advanced-networking/
endif::[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[advanced-networking-synopsis]]
== Synopsis
This chapter covers a number of advanced networking topics.
After reading this chapter, you will know:
* The basics of gateways and routes.
* How to set up USB tethering.
* How to set up IEEE(R) 802.11 and Bluetooth(R) devices.
* How to make FreeBSD act as a bridge.
* How to set up network PXE booting.
* How to set up IPv6 on a FreeBSD machine.
* How to enable and utilize the features of the Common Address Redundancy Protocol (CARP) in FreeBSD.
* How to configure multiple VLANs on FreeBSD.
* How to configure a Bluetooth(R) headset.
Before reading this chapter, you should:
* Understand the basics of the [.filename]#/etc/rc# scripts.
* Be familiar with basic network terminology.
* Know how to configure and install a new FreeBSD kernel (crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel]).
* Know how to install additional third-party software (crossref:ports[ports,Installing Applications: Packages and Ports]).
[[network-routing]]
== Gateways and Routes
_Routing_ is the mechanism that allows a system to find the network path to another system. A _route_ is a defined pair of addresses which represent the "destination" and a "gateway". The route indicates that when trying to get to the specified destination, send the packets through the specified gateway. There are three types of destinations: individual hosts, subnets, and "default". The "default route" is used if no other routes apply. There are also three types of gateways: individual hosts, interfaces, also called links, and Ethernet hardware (MAC) addresses. Known routes are stored in a routing table.
This section provides an overview of routing basics. It then demonstrates how to configure a FreeBSD system as a router and offers some troubleshooting tips.
[[network-routing-default]]
=== Routing Basics
To view the routing table of a FreeBSD system, use man:netstat[1]:
[source,shell]
....
% netstat -r
Routing tables
Internet:
Destination Gateway Flags Refs Use Netif Expire
default outside-gw UGS 37 418 em0
localhost localhost UH 0 181 lo0
test0 0:e0:b5:36:cf:4f UHLW 5 63288 re0 77
10.20.30.255 link#1 UHLW 1 2421
example.com link#1 UC 0 0
host1 0:e0:a8:37:8:1e UHLW 3 4601 lo0
host2 0:e0:a8:37:8:1e UHLW 0 5 lo0 =>
host2.example.com link#1 UC 0 0
224 link#1 UC 0 0
....
The entries in this example are as follows:
default::
The first route in this table specifies the `default` route. When the local system needs to make a connection to a remote host, it checks the routing table to determine if a known path exists. If the remote host matches an entry in the table, the system checks to see if it can connect using the interface specified in that entry.
+
If the destination does not match an entry, or if all known paths fail, the system uses the entry for the default route. For hosts on a local area network, the `Gateway` field in the default route is set to the system which has a direct connection to the Internet. When reading this entry, verify that the `Flags` column indicates that the gateway is usable (`UG`).
+
The default route for a machine which itself is functioning as the gateway to the outside world will be the gateway machine at the Internet Service Provider (ISP).
localhost::
The second route is the `localhost` route. The interface specified in the `Netif` column for `localhost` is [.filename]#lo0#, also known as the loopback device. This indicates that all traffic for this destination should be internal, rather than sending it out over the network.
MAC address::
The addresses beginning with `0:e0:` are MAC addresses. FreeBSD will automatically identify any hosts, `test0` in the example, on the local Ethernet and add a route for that host over the Ethernet interface, [.filename]#re0#. This type of route has a timeout, seen in the `Expire` column, which is used if the host does not respond in a specific amount of time. When this happens, the route to this host will be automatically deleted. These hosts are identified using the Routing Information Protocol (RIP), which calculates routes to local hosts based upon a shortest path determination.
subnet::
FreeBSD will automatically add subnet routes for the local subnet. In this example, `10.20.30.255` is the broadcast address for the subnet `10.20.30` and `example.com` is the domain name associated with that subnet. The designation `link#1` refers to the first Ethernet card in the machine.
+
Local network hosts and local subnets have their routes automatically configured by a daemon called man:routed[8]. If it is not running, only routes which are statically defined by the administrator will exist.
host::
The `host1` line refers to the host by its Ethernet address. Since it is the sending host, FreeBSD knows to use the loopback interface ([.filename]#lo0#) rather than the Ethernet interface.
+
The two `host2` lines represent aliases which were created using man:ifconfig[8]. The `=>` symbol after the [.filename]#lo0# interface says that an alias has been set in addition to the loopback address. Such routes only show up on the host that supports the alias and all other hosts on the local network will have a `link#1` line for such routes.
224::
The final line (destination subnet `224`) deals with multicasting.
Various attributes of each route can be seen in the `Flags` column. <<routeflags>> summarizes some of these flags and their meanings:
[[routeflags]]
.Commonly Seen Routing Table Flags
[cols="1,1", frame="none", options="header"]
|===
| Command
| Purpose
|U
|The route is active (up).
|H
|The route destination is a single host.
|G
|Send anything for this destination on to this gateway, which will figure out from there where to send it.
|S
|This route was statically configured.
|C
|Clones a new route based upon this route for machines to connect to. This type of route is normally used for local networks.
|W
|The route was auto-configured based upon a local area network (clone) route.
|L
|Route involves references to Ethernet (link) hardware.
|===
On a FreeBSD system, the default route can be defined in [.filename]#/etc/rc.conf# by specifying the IP address of the default gateway:
[.programlisting]
....
defaultrouter="10.20.30.1"
....
It is also possible to manually add the route using `route`:
[source,shell]
....
# route add default 10.20.30.1
....
Note that manually added routes will not survive a reboot. For more information on manual manipulation of network routing tables, refer to man:route[8].
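For example, a manually added default route can be removed again with `route`:
[source,shell]
....
# route delete default
....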
[[network-static-routes]]
=== Configuring a Router with Static Routes
A FreeBSD system can be configured as the default gateway, or router, for a network if it is a dual-homed system. A dual-homed system is a host which resides on at least two different networks. Typically, each network is connected to a separate network interface, though IP aliasing can be used to bind multiple addresses, each on a different subnet, to one physical interface.
In order for the system to forward packets between interfaces, FreeBSD must be configured as a router. Internet standards and good engineering practice prevent the FreeBSD Project from enabling this feature by default, but it can be configured to start at boot by adding this line to [.filename]#/etc/rc.conf#:
[.programlisting]
....
gateway_enable="YES" # Set to YES if this host will be a gateway
....
To enable routing now, set the man:sysctl[8] variable `net.inet.ip.forwarding` to `1`. To stop routing, reset this variable to `0`.
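For example, to enable forwarding immediately without a reboot:
[source,shell]
....
# sysctl net.inet.ip.forwarding=1
....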
The routing table of a router needs additional routes so it knows how to reach other networks. Routes can be either added manually using static routes or routes can be automatically learned using a routing protocol. Static routes are appropriate for small networks and this section describes how to add a static routing entry for a small network.
[NOTE]
====
For large networks, static routes quickly become unscalable. FreeBSD comes with the standard BSD routing daemon man:routed[8], which provides the routing protocols RIP, versions 1 and 2, and IRDP. Support for the BGP and OSPF routing protocols can be installed using the package:net/zebra[] package or port.
====
Consider the following network:
image::static-routes.png[]
In this scenario, `RouterA` is a FreeBSD machine that is acting as a router to the rest of the Internet. It has a default route set to `10.0.0.1` which allows it to connect with the outside world. `RouterB` is already configured to use `192.168.1.1` as its default gateway.
Before adding any static routes, the routing table on `RouterA` looks like this:
[source,shell]
....
% netstat -nr
Routing tables
Internet:
Destination Gateway Flags Refs Use Netif Expire
default 10.0.0.1 UGS 0 49378 xl0
127.0.0.1 127.0.0.1 UH 0 6 lo0
10.0.0.0/24 link#1 UC 0 0 xl0
192.168.1.0/24 link#2 UC 0 0 xl1
....
With the current routing table, `RouterA` does not have a route to the `192.168.2.0/24` network. The following command adds the `Internal Net 2` network to ``RouterA``'s routing table using `192.168.1.2` as the next hop:
[source,shell]
....
# route add -net 192.168.2.0/24 192.168.1.2
....
Now, `RouterA` can reach any host on the `192.168.2.0/24` network. However, the routing information will not persist if the FreeBSD system reboots. If a static route needs to be persistent, add it to [.filename]#/etc/rc.conf#:
[.programlisting]
....
# Add Internal Net 2 as a persistent static route
static_routes="internalnet2"
route_internalnet2="-net 192.168.2.0/24 192.168.1.2"
....
The `static_routes` configuration variable is a list of strings separated by a space, where each string references a route name. The variable `route_internalnet2` contains the static route for that route name.
Using more than one string in `static_routes` creates multiple static routes. The following shows an example of adding static routes for the `192.168.0.0/24` and `192.168.1.0/24` networks:
[.programlisting]
....
static_routes="net1 net2"
route_net1="-net 192.168.0.0/24 192.168.0.1"
route_net2="-net 192.168.1.0/24 192.168.1.1"
....
[[network-routing-troubleshooting]]
=== Troubleshooting
When an address space is assigned to a network, the service provider configures their routing tables so that all traffic for the network will be sent to the link for the site. But how do external sites know to send their packets to the network's ISP?
There is a system that keeps track of all assigned address spaces and defines their point of connection to the Internet backbone, or the main trunk lines that carry Internet traffic across the country and around the world. Each backbone machine has a copy of a master set of tables, which direct traffic for a particular network to a specific backbone carrier, and from there down the chain of service providers until it reaches a particular network.
It is the task of the service provider to advertise to the backbone sites that they are the point of connection, and thus the path inward, for a site. This is known as route propagation.
Sometimes, there is a problem with route propagation and some sites are unable to connect. Perhaps the most useful command for trying to figure out where routing is breaking down is `traceroute`. It is useful when `ping` fails.
When using `traceroute`, include the address of the remote host to connect to. The output will show the gateway hosts along the path of the attempt, eventually either reaching the target host, or terminating because of a lack of connection. For more information, refer to man:traceroute[8].
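For example, to trace the path toward a remote host (the host name below is only illustrative):
[source,shell]
....
% traceroute www.example.com
....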
[[network-routing-multicast]]
=== Multicast Considerations
FreeBSD natively supports both multicast applications and multicast routing. Multicast applications do not require any special configuration in order to run on FreeBSD. Support for multicast routing requires that the following option be compiled into a custom kernel:
[.programlisting]
....
options MROUTING
....
The multicast routing daemon, mrouted, can be installed using the package:net/mrouted[] package or port. This daemon implements the DVMRP multicast routing protocol and is configured by editing [.filename]#/usr/local/etc/mrouted.conf# in order to set up the tunnels and DVMRP. The installation of mrouted also installs map-mbone and mrinfo, as well as their associated man pages. Refer to these for configuration examples.
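For example, assuming the binary package carries the same name as the port, it can be installed with:
[source,shell]
....
# pkg install mrouted
....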
[NOTE]
====
DVMRP has largely been replaced by the PIM protocol in many multicast installations. Refer to man:pim[4] for more information.
====
[[network-wireless]]
== Wireless Networking
=== Wireless Networking Basics
Most wireless networks are based on the IEEE(R) 802.11 standards. A basic wireless network consists of multiple stations communicating with radios that broadcast in either the 2.4GHz or 5GHz band, though this varies according to the locale and is also changing to enable communication in the 2.3GHz and 4.9GHz ranges.
802.11 networks are organized in two ways. In _infrastructure mode_, one station acts as a master with all the other stations associating to it, the network is known as a BSS, and the master station is termed an access point (AP). In a BSS, all communication passes through the AP; even when one station wants to communicate with another wireless station, messages must go through the AP. In the second form of network, there is no master and stations communicate directly. This form of network is termed an IBSS and is commonly known as an _ad-hoc network_.
802.11 networks were first deployed in the 2.4GHz band using protocols defined by the IEEE(R) 802.11 and 802.11b standard. These specifications include the operating frequencies and the MAC layer characteristics, including framing and transmission rates, as communication can occur at various rates. Later, the 802.11a standard defined operation in the 5GHz band, including different signaling mechanisms and higher transmission rates. Still later, the 802.11g standard defined the use of 802.11a signaling and transmission mechanisms in the 2.4GHz band in such a way as to be backwards compatible with 802.11b networks.
Separate from the underlying transmission techniques, 802.11 networks have a variety of security mechanisms. The original 802.11 specifications defined a simple security protocol called WEP. This protocol uses a fixed pre-shared key and the RC4 cryptographic cipher to encode data transmitted on a network. Stations must all agree on the fixed key in order to communicate. This scheme was shown to be easily broken and is now rarely used except to discourage transient users from joining networks. Current security practice is given by the IEEE(R) 802.11i specification that defines new cryptographic ciphers and an additional protocol to authenticate stations to an access point and exchange keys for data communication. Cryptographic keys are periodically refreshed and there are mechanisms for detecting and countering intrusion attempts. Another security protocol specification commonly used in wireless networks is termed WPA, which was a precursor to 802.11i. WPA specifies a subset of the requirements found in 802.11i and is designed for implementation on legacy hardware. Specifically, WPA requires only the TKIP cipher that is derived from the original WEP cipher. 802.11i permits use of TKIP but also requires support for a stronger cipher, AES-CCM, for encrypting data. The AES cipher was not required in WPA because it was deemed too computationally costly to be implemented on legacy hardware.
The other standard to be aware of is 802.11e. It defines protocols for deploying multimedia applications, such as streaming video and voice over IP (VoIP), in an 802.11 network. Like 802.11i, 802.11e also has a precursor specification termed WME (later renamed WMM) that has been defined by an industry group as a subset of 802.11e that can be deployed now to enable multimedia applications while waiting for the final ratification of 802.11e. The most important thing to know about 802.11e and WME/WMM is that it enables prioritized traffic over a wireless network through Quality of Service (QoS) protocols and enhanced media access protocols. Proper implementation of these protocols enables high speed bursting of data and prioritized traffic flow.
FreeBSD supports networks that operate using 802.11a, 802.11b, and 802.11g. The WPA and 802.11i security protocols are likewise supported (in conjunction with any of 11a, 11b, and 11g) and QoS and traffic prioritization required by the WME/WMM protocols are supported for a limited set of wireless devices.
[[network-wireless-quick-start]]
=== Quick Start
Connecting a computer to an existing wireless network is a very common situation. This procedure shows the steps required.
[.procedure]
. Obtain the SSID (Service Set Identifier) and PSK (Pre-Shared Key) for the wireless network from the network administrator.
. Identify the wireless adapter. The FreeBSD [.filename]#GENERIC# kernel includes drivers for many common wireless adapters. If the wireless adapter is one of those models, it will be shown in the output from man:ifconfig[8]:
+
[source,shell]
....
% ifconfig | grep -B3 -i wireless
....
+
On FreeBSD 11 or higher, use this command instead:
+
[source,shell]
....
% sysctl net.wlan.devices
....
+
If a wireless adapter is not listed, an additional kernel module might be required, or it might be a model not supported by FreeBSD.
+
This example shows the Atheros `ath0` wireless adapter.
. Add an entry for this network to [.filename]#/etc/wpa_supplicant.conf#. If the file does not exist, create it. Replace _myssid_ and _mypsk_ with the SSID and PSK provided by the network administrator.
+
[.programlisting]
....
network={
ssid="myssid"
psk="mypsk"
}
....
. Add entries to [.filename]#/etc/rc.conf# to configure the network on startup:
+
[.programlisting]
....
wlans_ath0="wlan0"
ifconfig_wlan0="WPA SYNCDHCP"
....
. Restart the computer, or restart the network service to connect to the network:
+
[source,shell]
....
# service netif restart
....
[[network-wireless-basic]]
=== Basic Setup
==== Kernel Configuration
To use wireless networking, a wireless networking card is needed and the kernel needs to be configured with the appropriate wireless networking support. The kernel is separated into multiple modules so that only the required support needs to be configured.
The most commonly used wireless devices are those that use parts made by Atheros. These devices are supported by man:ath[4] and require the following line to be added to [.filename]#/boot/loader.conf#:
[.programlisting]
....
if_ath_load="YES"
....
The Atheros driver is split up into three separate pieces: the driver (man:ath[4]), the hardware support layer that handles chip-specific functions (man:ath_hal[4]), and an algorithm for selecting the rate for transmitting frames. When this support is loaded as kernel modules, any dependencies are automatically handled. To load support for a different type of wireless device, specify the module for that device. This example is for devices based on the Intersil Prism parts (man:wi[4]) driver:
[.programlisting]
....
if_wi_load="YES"
....
[NOTE]
====
The examples in this section use an man:ath[4] device and the device name in the examples must be changed according to the configuration. A list of available wireless drivers and supported adapters can be found in the FreeBSD Hardware Notes, available on the https://www.FreeBSD.org/releases/[Release Information] page of the FreeBSD website. If a native FreeBSD driver for the wireless device does not exist, it may be possible to use the Windows(R) driver with the help of the crossref:config[config-network-ndis,NDIS] driver wrapper.
====
In addition, the modules that implement cryptographic support for the security protocols to use must be loaded. These are intended to be dynamically loaded on demand by the man:wlan[4] module, but for now they must be manually configured. The following modules are available: man:wlan_wep[4], man:wlan_ccmp[4], and man:wlan_tkip[4]. The man:wlan_ccmp[4] and man:wlan_tkip[4] drivers are only needed when using the WPA or 802.11i security protocols. If the network does not use encryption, man:wlan_wep[4] support is not needed. To load these modules at boot time, add the following lines to [.filename]#/boot/loader.conf#:
[.programlisting]
....
wlan_wep_load="YES"
wlan_ccmp_load="YES"
wlan_tkip_load="YES"
....
Once this information has been added to [.filename]#/boot/loader.conf#, reboot the FreeBSD box. Alternately, load the modules by hand using man:kldload[8].
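For example, the cryptographic modules listed above can be loaded by hand like this:
[source,shell]
....
# kldload wlan_wep
# kldload wlan_ccmp
# kldload wlan_tkip
....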
[NOTE]
====
For users who do not want to use modules, it is possible to compile these drivers into the kernel by adding the following lines to a custom kernel configuration file:
[.programlisting]
....
device wlan # 802.11 support
device wlan_wep # 802.11 WEP support
device wlan_ccmp # 802.11 CCMP support
device wlan_tkip # 802.11 TKIP support
device wlan_amrr # AMRR transmit rate control algorithm
device ath # Atheros pci/cardbus NIC's
device ath_hal # pci/cardbus chip support
options AH_SUPPORT_AR5416 # enable AR5416 tx/rx descriptors
device ath_rate_sample # SampleRate tx rate control for ath
....
With this information in the kernel configuration file, recompile the kernel and reboot the FreeBSD machine.
====
Information about the wireless device should appear in the boot messages, like this:
[source,shell]
....
ath0: <Atheros 5212> mem 0x88000000-0x8800ffff irq 11 at device 0.0 on cardbus1
ath0: [ITHREAD]
ath0: AR2413 mac 7.9 RF2413 phy 4.5
....
==== Setting the Correct Region
Since the regulatory situation differs in various parts of the world, it is necessary to set the regulatory domain that applies to your location correctly, so that the system has the correct information about which channels can be used.
The available region definitions can be found in [.filename]#/etc/regdomain.xml#. To set the data at runtime, use `ifconfig`:
[source,shell]
....
# ifconfig wlan0 regdomain ETSI country AT
....
To persist the settings, add them to [.filename]#/etc/rc.conf#:
[source,shell]
....
# sysrc create_args_wlan0="country AT regdomain ETSI"
....
=== Infrastructure Mode
Infrastructure (BSS) mode is the mode that is typically used. In this mode, a number of wireless access points are connected to a wired network. Each wireless network has its own name, called the SSID. Wireless clients connect to the wireless access points.
==== FreeBSD Clients
===== How to Find Access Points
To scan for available networks, use man:ifconfig[8]. This request may take a few moments to complete as it requires the system to switch to each available wireless frequency and probe for available access points. Only the superuser can initiate a scan:
[source,shell]
....
# ifconfig wlan0 create wlandev ath0
# ifconfig wlan0 up scan
SSID/MESH ID BSSID CHAN RATE S:N INT CAPS
dlinkap 00:13:46:49:41:76 11 54M -90:96 100 EPS WPA WME
freebsdap 00:11:95:c3:0d:ac 1 54M -83:96 100 EPS WPA
....
[NOTE]
====
The interface must be `up` before it can scan. Subsequent scan requests do not require the interface to be marked as up again.
====
The output of a scan request lists each BSS/IBSS network found. Besides listing the name of the network, the `SSID`, the output also shows the `BSSID`, which is the MAC address of the access point. The `CAPS` field identifies the type of each network and the capabilities of the stations operating there:
.Station Capability Codes
[cols="1,1", frame="none", options="header"]
|===
| Capability Code
| Meaning
|`E`
|Extended Service Set (ESS). Indicates that the station is part of an infrastructure network rather than an IBSS/ad-hoc network.
|`I`
|IBSS/ad-hoc network. Indicates that the station is part of an ad-hoc network rather than an ESS network.
|`P`
|Privacy. Encryption is required for all data frames exchanged within the BSS using cryptographic means such as WEP, TKIP or AES-CCMP.
|`S`
|Short Preamble. Indicates that the network is using short preambles, defined in 802.11b High Rate/DSSS PHY, and utilizes a 56 bit sync field rather than the 128 bit field used in long preamble mode.
|`s`
|Short slot time. Indicates that the 802.11g network is using a short slot time because there are no legacy (802.11b) stations present.
|===
One can also display the current list of known networks with:
[source,shell]
....
# ifconfig wlan0 list scan
....
This information may be updated automatically by the adapter or manually with a `scan` request. Old data is automatically removed from the cache, so over time this list may shrink unless more scans are done.
===== Basic Settings
This section provides a simple example of how to make the wireless network adapter work in FreeBSD without encryption. Once familiar with these concepts, it is strongly recommended to use <<network-wireless-wpa,WPA>> to set up the wireless network.
There are three basic steps to configure a wireless network: select an access point, authenticate the station, and configure an IP address. The following sections discuss each step.
====== Selecting an Access Point
Most of the time, it is sufficient to let the system choose an access point using the built-in heuristics. This is the default behavior when an interface is marked as up or it is listed in [.filename]#/etc/rc.conf#:
[.programlisting]
....
wlans_ath0="wlan0"
ifconfig_wlan0="DHCP"
....
If there are multiple access points, a specific one can be selected by its SSID:
[.programlisting]
....
wlans_ath0="wlan0"
ifconfig_wlan0="ssid your_ssid_here DHCP"
....
In an environment where there are multiple access points with the same SSID, which is often done to simplify roaming, it may be necessary to associate to one specific device. In this case, the BSSID of the access point can be specified, with or without the SSID:
[.programlisting]
....
wlans_ath0="wlan0"
ifconfig_wlan0="ssid your_ssid_here bssid xx:xx:xx:xx:xx:xx DHCP"
....
There are other ways to constrain the choice of an access point, such as limiting the set of frequencies the system will scan on. This may be useful for a multi-band wireless card as scanning all the possible channels can be time-consuming. To limit operation to a specific band, use the `mode` parameter:
[.programlisting]
....
wlans_ath0="wlan0"
ifconfig_wlan0="mode 11g ssid your_ssid_here DHCP"
....
This example will force the card to operate in 802.11g, which is defined only for 2.4GHz frequencies so any 5GHz channels will not be considered. This can also be achieved with the `channel` parameter, which locks operation to one specific frequency, and the `chanlist` parameter, to specify a list of channels for scanning. More information about these parameters can be found in man:ifconfig[8].
====== Authentication
Once an access point is selected, the station needs to authenticate before it can pass data. Authentication can happen in several ways. The most common scheme, open authentication, allows any station to join the network and communicate. This is the authentication to use for test purposes the first time a wireless network is set up. Other schemes require cryptographic handshakes to be completed before data traffic can flow, either using pre-shared keys or secrets, or more complex schemes that involve backend services such as RADIUS. Open authentication is the default setting. The next most common setup is WPA-PSK, also known as WPA Personal, which is described in <<network-wireless-wpa-wpa-psk>>.
[NOTE]
====
If using an Apple(R) AirPort(R) Extreme base station for an access point, shared-key authentication together with a WEP key needs to be configured. This can be configured in [.filename]#/etc/rc.conf# or by using man:wpa_supplicant[8]. For a single AirPort(R) base station, access can be configured with:
[.programlisting]
....
wlans_ath0="wlan0"
ifconfig_wlan0="authmode shared wepmode on weptxkey 1 wepkey 01234567 DHCP"
....
In general, shared key authentication should be avoided because it uses the WEP key material in a highly-constrained manner, making it even easier to crack the key. If WEP must be used for compatibility with legacy devices, it is better to use WEP with `open` authentication. More information regarding WEP can be found in <<network-wireless-wep>>.
====
====== Getting an IP Address with DHCP
Once an access point is selected and the authentication parameters are set, an IP address must be obtained in order to communicate. Most of the time, the IP address is obtained via DHCP. To achieve that, edit [.filename]#/etc/rc.conf# and add `DHCP` to the configuration for the device:
[.programlisting]
....
wlans_ath0="wlan0"
ifconfig_wlan0="DHCP"
....
The wireless interface is now ready to bring up:
[source,shell]
....
# service netif start
....
Once the interface is running, use man:ifconfig[8] to see the status of the interface [.filename]#ath0#:
[source,shell]
....
# ifconfig wlan0
wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
ether 00:11:95:d5:43:62
inet 192.168.1.100 netmask 0xffffff00 broadcast 192.168.1.255
media: IEEE 802.11 Wireless Ethernet OFDM/54Mbps mode 11g
status: associated
ssid dlinkap channel 11 (2462 Mhz 11g) bssid 00:13:46:49:41:76
country US ecm authmode OPEN privacy OFF txpower 21.5 bmiss 7
scanvalid 60 bgscan bgscanintvl 300 bgscanidle 250 roam:rssi 7
roam:rate 5 protmode CTS wme burst
....
The `status: associated` line means that it is connected to the wireless network. The `bssid 00:13:46:49:41:76` is the MAC address of the access point and `authmode OPEN` indicates that the communication is not encrypted.
====== Static IP Address
If an IP address cannot be obtained from a DHCP server, set a fixed IP address. Replace the `DHCP` keyword shown above with the address information. Be sure to retain any other parameters for selecting the access point:
[.programlisting]
....
wlans_ath0="wlan0"
ifconfig_wlan0="inet 192.168.1.100 netmask 255.255.255.0 ssid your_ssid_here"
....
[[network-wireless-wpa]]
===== WPA
Wi-Fi Protected Access (WPA) is a security protocol used together with 802.11 networks to address the lack of proper authentication and the weakness of WEP. WPA leverages the 802.1X authentication protocol and uses one of several ciphers instead of WEP for data integrity. The only cipher required by WPA is the Temporary Key Integrity Protocol (TKIP). TKIP is a cipher that extends the basic RC4 cipher used by WEP by adding integrity checking, tamper detection, and measures for responding to detected intrusions. TKIP is designed to work on legacy hardware with only software modification. It represents a compromise that improves security but is still not entirely immune to attack. WPA also specifies the AES-CCMP cipher as an alternative to TKIP, and that is preferred when possible. For this specification, the term WPA2 or RSN is commonly used.
WPA defines authentication and encryption protocols. Authentication is most commonly done using one of two techniques: by 802.1X and a backend authentication service such as RADIUS, or by a minimal handshake between the station and the access point using a pre-shared secret. The former is commonly termed WPA Enterprise and the latter is known as WPA Personal. Since most people will not set up a RADIUS backend server for their wireless network, WPA-PSK is by far the most commonly encountered configuration for WPA.
The control of the wireless connection and the key negotiation or authentication with a server is done using man:wpa_supplicant[8]. This program requires a configuration file, [.filename]#/etc/wpa_supplicant.conf#, to run. More information regarding this file can be found in man:wpa_supplicant.conf[5].
[[network-wireless-wpa-wpa-psk]]
====== WPA-PSK
WPA-PSK, also known as WPA Personal, is based on a pre-shared key (PSK) which is generated from a given password and used as the master key in the wireless network. This means every wireless user will share the same key. WPA-PSK is intended for small networks where the use of an authentication server is not possible or desired.
[WARNING]
====
Always use strong passwords that are sufficiently long and made from a rich alphabet so that they will not be easily guessed or attacked.
====
The first step is the configuration of [.filename]#/etc/wpa_supplicant.conf# with the SSID and the pre-shared key of the network:
[.programlisting]
....
network={
ssid="freebsdap"
psk="freebsdmall"
}
....
Then, in [.filename]#/etc/rc.conf#, indicate that the wireless device configuration will be done with WPA and the IP address will be obtained with DHCP:
[.programlisting]
....
wlans_ath0="wlan0"
ifconfig_wlan0="WPA DHCP"
....
Then, bring up the interface:
[source,shell]
....
# service netif start
Starting wpa_supplicant.
DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 5
DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 6
DHCPOFFER from 192.168.0.1
DHCPREQUEST on wlan0 to 255.255.255.255 port 67
DHCPACK from 192.168.0.1
bound to 192.168.0.254 -- renewal in 300 seconds.
wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
ether 00:11:95:d5:43:62
inet 192.168.0.254 netmask 0xffffff00 broadcast 192.168.0.255
media: IEEE 802.11 Wireless Ethernet OFDM/36Mbps mode 11g
status: associated
ssid freebsdap channel 1 (2412 Mhz 11g) bssid 00:11:95:c3:0d:ac
country US ecm authmode WPA2/802.11i privacy ON deftxkey UNDEF
AES-CCM 3:128-bit txpower 21.5 bmiss 7 scanvalid 450 bgscan
bgscanintvl 300 bgscanidle 250 roam:rssi 7 roam:rate 5 protmode CTS
wme burst roaming MANUAL
....
Or, try to configure the interface manually using the information in [.filename]#/etc/wpa_supplicant.conf#:
[source,shell]
....
# wpa_supplicant -i wlan0 -c /etc/wpa_supplicant.conf
Trying to associate with 00:11:95:c3:0d:ac (SSID='freebsdap' freq=2412 MHz)
Associated with 00:11:95:c3:0d:ac
WPA: Key negotiation completed with 00:11:95:c3:0d:ac [PTK=CCMP GTK=CCMP]
CTRL-EVENT-CONNECTED - Connection to 00:11:95:c3:0d:ac completed (auth) [id=0 id_str=]
....
The next operation is to launch man:dhclient[8] to get the IP address from the DHCP server:
[source,shell]
....
# dhclient wlan0
DHCPREQUEST on wlan0 to 255.255.255.255 port 67
DHCPACK from 192.168.0.1
bound to 192.168.0.254 -- renewal in 300 seconds.
# ifconfig wlan0
wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
ether 00:11:95:d5:43:62
inet 192.168.0.254 netmask 0xffffff00 broadcast 192.168.0.255
media: IEEE 802.11 Wireless Ethernet OFDM/36Mbps mode 11g
status: associated
ssid freebsdap channel 1 (2412 Mhz 11g) bssid 00:11:95:c3:0d:ac
country US ecm authmode WPA2/802.11i privacy ON deftxkey UNDEF
AES-CCM 3:128-bit txpower 21.5 bmiss 7 scanvalid 450 bgscan
bgscanintvl 300 bgscanidle 250 roam:rssi 7 roam:rate 5 protmode CTS
wme burst roaming MANUAL
....
[NOTE]
====
If [.filename]#/etc/rc.conf# has an `ifconfig_wlan0="DHCP"` entry, man:dhclient[8] will be launched automatically after man:wpa_supplicant[8] associates with the access point.
====
If DHCP is not possible or desired, set a static IP address after man:wpa_supplicant[8] has authenticated the station:
[source,shell]
....
# ifconfig wlan0 inet 192.168.0.100 netmask 255.255.255.0
# ifconfig wlan0
wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
ether 00:11:95:d5:43:62
inet 192.168.0.100 netmask 0xffffff00 broadcast 192.168.0.255
media: IEEE 802.11 Wireless Ethernet OFDM/36Mbps mode 11g
status: associated
ssid freebsdap channel 1 (2412 Mhz 11g) bssid 00:11:95:c3:0d:ac
country US ecm authmode WPA2/802.11i privacy ON deftxkey UNDEF
AES-CCM 3:128-bit txpower 21.5 bmiss 7 scanvalid 450 bgscan
bgscanintvl 300 bgscanidle 250 roam:rssi 7 roam:rate 5 protmode CTS
wme burst roaming MANUAL
....
When DHCP is not used, the default gateway and the nameserver also have to be manually set:
[source,shell]
....
# route add default your_default_router
# echo "nameserver your_DNS_server" >> /etc/resolv.conf
....
[[network-wireless-wpa-eap-tls]]
====== WPA with EAP-TLS
The second way to use WPA is with an 802.1X backend authentication server. In this case, WPA is called WPA Enterprise to differentiate it from the less secure WPA Personal. Authentication in WPA Enterprise is based on the Extensible Authentication Protocol (EAP).
EAP does not come with an encryption method. Instead, EAP is embedded inside an encrypted tunnel. There are many EAP authentication methods, but EAP-TLS, EAP-TTLS, and EAP-PEAP are the most common.
EAP with Transport Layer Security (EAP-TLS) is a well-supported wireless authentication protocol since it was the first EAP method to be certified by the http://www.wi-fi.org/[Wi-Fi Alliance]. EAP-TLS requires three certificates to run: the certificate of the Certificate Authority (CA) installed on all machines, the server certificate for the authentication server, and one client certificate for each wireless client. In this EAP method, both the authentication server and wireless client authenticate each other by presenting their respective certificates, and then verify that these certificates were signed by the organization's CA.
As previously, the configuration is done via [.filename]#/etc/wpa_supplicant.conf#:
[.programlisting]
....
network={
ssid="freebsdap" <.>
proto=RSN <.>
key_mgmt=WPA-EAP <.>
eap=TLS <.>
identity="loader" <.>
ca_cert="/etc/certs/cacert.pem" <.>
client_cert="/etc/certs/clientcert.pem" <.>
private_key="/etc/certs/clientkey.pem" <.>
private_key_passwd="freebsdmallclient" <.>
}
....
<.> This field indicates the network name (SSID).
<.> This example uses the RSN IEEE(R) 802.11i protocol, also known as WPA2.
<.> The `key_mgmt` line refers to the key management protocol to use. In this example, it is WPA using EAP authentication.
<.> This field indicates the EAP method for the connection.
<.> The `identity` field contains the identity string for EAP.
<.> The `ca_cert` field indicates the pathname of the CA certificate file. This file is needed to verify the server certificate.
<.> The `client_cert` line gives the pathname to the client certificate file. This certificate is unique to each wireless client of the network.
<.> The `private_key` field is the pathname to the client certificate private key file.
<.> The `private_key_passwd` field contains the passphrase for the private key.
Then, add the following lines to [.filename]#/etc/rc.conf#:
[.programlisting]
....
wlans_ath0="wlan0"
ifconfig_wlan0="WPA DHCP"
....
The next step is to bring up the interface:
[source,shell]
....
# service netif start
Starting wpa_supplicant.
DHCPREQUEST on wlan0 to 255.255.255.255 port 67 interval 7
DHCPREQUEST on wlan0 to 255.255.255.255 port 67 interval 15
DHCPACK from 192.168.0.20
bound to 192.168.0.254 -- renewal in 300 seconds.
wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
ether 00:11:95:d5:43:62
inet 192.168.0.254 netmask 0xffffff00 broadcast 192.168.0.255
media: IEEE 802.11 Wireless Ethernet DS/11Mbps mode 11g
status: associated
ssid freebsdap channel 1 (2412 Mhz 11g) bssid 00:11:95:c3:0d:ac
country US ecm authmode WPA2/802.11i privacy ON deftxkey UNDEF
AES-CCM 3:128-bit txpower 21.5 bmiss 7 scanvalid 450 bgscan
bgscanintvl 300 bgscanidle 250 roam:rssi 7 roam:rate 5 protmode CTS
wme burst roaming MANUAL
....
It is also possible to bring up the interface manually using man:wpa_supplicant[8] and man:ifconfig[8].
[[network-wireless-wpa-eap-ttls]]
====== WPA with EAP-TTLS
With EAP-TLS, both the authentication server and the client need a certificate. With EAP-TTLS, a client certificate is optional. This method is similar to a web server which creates a secure SSL tunnel even if visitors do not have client-side certificates. EAP-TTLS uses an encrypted TLS tunnel for safe transport of the authentication data.
The required configuration can be added to [.filename]#/etc/wpa_supplicant.conf#:
[.programlisting]
....
network={
ssid="freebsdap"
proto=RSN
key_mgmt=WPA-EAP
eap=TTLS <.>
identity="test" <.>
password="test" <.>
ca_cert="/etc/certs/cacert.pem" <.>
phase2="auth=MD5" <.>
}
....
<.> This field specifies the EAP method for the connection.
<.> The `identity` field contains the identity string for EAP authentication inside the encrypted TLS tunnel.
<.> The `password` field contains the passphrase for the EAP authentication.
<.> The `ca_cert` field indicates the pathname of the CA certificate file. This file is needed to verify the server certificate.
<.> This field specifies the authentication method used in the encrypted TLS tunnel. In this example, EAP with MD5-Challenge is used. The "inner authentication" phase is often called "phase2".
Next, add the following lines to [.filename]#/etc/rc.conf#:
[.programlisting]
....
wlans_ath0="wlan0"
ifconfig_wlan0="WPA DHCP"
....
The next step is to bring up the interface:
[source,shell]
....
# service netif start
Starting wpa_supplicant.
DHCPREQUEST on wlan0 to 255.255.255.255 port 67 interval 7
DHCPREQUEST on wlan0 to 255.255.255.255 port 67 interval 15
DHCPREQUEST on wlan0 to 255.255.255.255 port 67 interval 21
DHCPACK from 192.168.0.20
bound to 192.168.0.254 -- renewal in 300 seconds.
wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
ether 00:11:95:d5:43:62
inet 192.168.0.254 netmask 0xffffff00 broadcast 192.168.0.255
media: IEEE 802.11 Wireless Ethernet DS/11Mbps mode 11g
status: associated
ssid freebsdap channel 1 (2412 Mhz 11g) bssid 00:11:95:c3:0d:ac
country US ecm authmode WPA2/802.11i privacy ON deftxkey UNDEF
AES-CCM 3:128-bit txpower 21.5 bmiss 7 scanvalid 450 bgscan
bgscanintvl 300 bgscanidle 250 roam:rssi 7 roam:rate 5 protmode CTS
wme burst roaming MANUAL
....
[[network-wireless-wpa-eap-peap]]
====== WPA with EAP-PEAP
[NOTE]
====
PEAPv0/EAP-MSCHAPv2 is the most common PEAP method. In this chapter, the term PEAP is used to refer to that method.
====
Protected EAP (PEAP) is designed as an alternative to EAP-TTLS and is the most used EAP standard after EAP-TLS. In a network with mixed operating systems, PEAP should be the most supported standard after EAP-TLS.
PEAP is similar to EAP-TTLS as it uses a server-side certificate to authenticate clients by creating an encrypted TLS tunnel between the client and the authentication server, which protects the ensuing exchange of authentication information. PEAP authentication differs from EAP-TTLS as it broadcasts the username in the clear and only the password is sent in the encrypted TLS tunnel. EAP-TTLS will use the TLS tunnel for both the username and password.
Add the following lines to [.filename]#/etc/wpa_supplicant.conf# to configure the EAP-PEAP related settings:
[.programlisting]
....
network={
ssid="freebsdap"
proto=RSN
key_mgmt=WPA-EAP
eap=PEAP <.>
identity="test" <.>
password="test" <.>
ca_cert="/etc/certs/cacert.pem" <.>
phase1="peaplabel=0" <.>
phase2="auth=MSCHAPV2" <.>
}
....
<.> This field specifies the EAP method for the connection.
<.> The `identity` field contains the identity string for EAP authentication inside the encrypted TLS tunnel.
<.> The `password` field contains the passphrase for the EAP authentication.
<.> The `ca_cert` field indicates the pathname of the CA certificate file. This file is needed to verify the server certificate.
<.> This field contains the parameters for the first phase of authentication, the TLS tunnel. According to the authentication server used, specify a specific label for authentication. Most of the time, the label will be "client EAP encryption" which is set by using `peaplabel=0`. More information can be found in man:wpa_supplicant.conf[5].
<.> This field specifies the authentication protocol used in the encrypted TLS tunnel. In the case of PEAP, it is `auth=MSCHAPV2`.
Add the following to [.filename]#/etc/rc.conf#:
[.programlisting]
....
wlans_ath0="wlan0"
ifconfig_wlan0="WPA DHCP"
....
Then, bring up the interface:
[source,shell]
....
# service netif start
Starting wpa_supplicant.
DHCPREQUEST on wlan0 to 255.255.255.255 port 67 interval 7
DHCPREQUEST on wlan0 to 255.255.255.255 port 67 interval 15
DHCPREQUEST on wlan0 to 255.255.255.255 port 67 interval 21
DHCPACK from 192.168.0.20
bound to 192.168.0.254 -- renewal in 300 seconds.
wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
ether 00:11:95:d5:43:62
inet 192.168.0.254 netmask 0xffffff00 broadcast 192.168.0.255
media: IEEE 802.11 Wireless Ethernet DS/11Mbps mode 11g
status: associated
ssid freebsdap channel 1 (2412 Mhz 11g) bssid 00:11:95:c3:0d:ac
country US ecm authmode WPA2/802.11i privacy ON deftxkey UNDEF
AES-CCM 3:128-bit txpower 21.5 bmiss 7 scanvalid 450 bgscan
bgscanintvl 300 bgscanidle 250 roam:rssi 7 roam:rate 5 protmode CTS
wme burst roaming MANUAL
....
[[network-wireless-wep]]
===== WEP
Wired Equivalent Privacy (WEP) is part of the original 802.11 standard. There is no authentication mechanism, only a weak form of access control which is easily cracked.
WEP can be set up using man:ifconfig[8]:
[source,shell]
....
# ifconfig wlan0 create wlandev ath0
# ifconfig wlan0 inet 192.168.1.100 netmask 255.255.255.0 \
ssid my_net wepmode on weptxkey 3 wepkey 3:0x3456789012
....
* The `weptxkey` specifies which WEP key will be used in the transmission. This example uses the third key. This must match the setting on the access point. When unsure which key is used by the access point, try `1` (the first key) for this value.
* The `wepkey` selects one of the WEP keys. It should be in the format _index:key_. Key `1` is used by default; the index only needs to be set when using a key other than the first key.
+
[NOTE]
====
Replace the `0x3456789012` with the key configured for use on the access point.
====
Refer to man:ifconfig[8] for further information.
The man:wpa_supplicant[8] facility can be used to configure a wireless interface with WEP. The example above can be set up by adding the following lines to [.filename]#/etc/wpa_supplicant.conf#:
[.programlisting]
....
network={
ssid="my_net"
key_mgmt=NONE
wep_key3=3456789012
wep_tx_keyidx=3
}
....
Then:
[source,shell]
....
# wpa_supplicant -i wlan0 -c /etc/wpa_supplicant.conf
Trying to associate with 00:13:46:49:41:76 (SSID='dlinkap' freq=2437 MHz)
Associated with 00:13:46:49:41:76
....
=== Ad-hoc Mode
IBSS mode, also called ad-hoc mode, is designed for point-to-point connections. For example, to establish an ad-hoc network between the machines `A` and `B`, choose two IP addresses and an SSID.
On `A`:
[source,shell]
....
# ifconfig wlan0 create wlandev ath0 wlanmode adhoc
# ifconfig wlan0 inet 192.168.0.1 netmask 255.255.255.0 ssid freebsdap
# ifconfig wlan0
wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
ether 00:11:95:c3:0d:ac
inet 192.168.0.1 netmask 0xffffff00 broadcast 192.168.0.255
media: IEEE 802.11 Wireless Ethernet autoselect mode 11g <adhoc>
status: running
ssid freebsdap channel 2 (2417 Mhz 11g) bssid 02:11:95:c3:0d:ac
country US ecm authmode OPEN privacy OFF txpower 21.5 scanvalid 60
protmode CTS wme burst
....
The `adhoc` parameter indicates that the interface is running in IBSS mode.
`B` should now be able to detect `A`:
[source,shell]
....
# ifconfig wlan0 create wlandev ath0 wlanmode adhoc
# ifconfig wlan0 up scan
SSID/MESH ID BSSID CHAN RATE S:N INT CAPS
freebsdap 02:11:95:c3:0d:ac 2 54M -64:-96 100 IS WME
....
The `I` in the output confirms that `A` is in ad-hoc mode. Now, configure `B` with a different IP address:
[source,shell]
....
# ifconfig wlan0 inet 192.168.0.2 netmask 255.255.255.0 ssid freebsdap
# ifconfig wlan0
wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
ether 00:11:95:d5:43:62
inet 192.168.0.2 netmask 0xffffff00 broadcast 192.168.0.255
media: IEEE 802.11 Wireless Ethernet autoselect mode 11g <adhoc>
status: running
ssid freebsdap channel 2 (2417 Mhz 11g) bssid 02:11:95:c3:0d:ac
country US ecm authmode OPEN privacy OFF txpower 21.5 scanvalid 60
protmode CTS wme burst
....
Both `A` and `B` are now ready to exchange information.
[[network-wireless-ap]]
=== FreeBSD Host Access Points
FreeBSD can act as an Access Point (AP) which eliminates the need to buy a hardware AP or run an ad-hoc network. This can be particularly useful when a FreeBSD machine is acting as a gateway to another network such as the Internet.
[[network-wireless-ap-basic]]
==== Basic Settings
Before configuring a FreeBSD machine as an AP, the kernel must be configured with the appropriate networking support for the wireless card as well as the security protocols being used. For more details, see <<network-wireless-basic>>.
[NOTE]
====
The NDIS driver wrapper for Windows(R) drivers does not currently support AP operation. Only native FreeBSD wireless drivers support AP mode.
====
Once wireless networking support is loaded, check if the wireless device supports the host-based access point mode, also known as hostap mode:
[source,shell]
....
# ifconfig wlan0 create wlandev ath0
# ifconfig wlan0 list caps
drivercaps=6f85edc1<STA,FF,TURBOP,IBSS,HOSTAP,AHDEMO,TXPMGT,SHSLOT,SHPREAMBLE,MONITOR,MBSS,WPA1,WPA2,BURST,WME,WDS,BGSCAN,TXFRAG>
cryptocaps=1f<WEP,TKIP,AES,AES_CCM,TKIPMIC>
....
This output displays the card's capabilities. The presence of `HOSTAP` confirms that this wireless card can act as an AP. Various supported ciphers are also listed: WEP, TKIP, and AES. This information indicates which security protocols can be used on the AP.
The wireless device can only be put into hostap mode during the creation of the network pseudo-device, so a previously created device must be destroyed first:
[source,shell]
....
# ifconfig wlan0 destroy
....
then regenerated with the correct option before setting the other parameters:
[source,shell]
....
# ifconfig wlan0 create wlandev ath0 wlanmode hostap
# ifconfig wlan0 inet 192.168.0.1 netmask 255.255.255.0 ssid freebsdap mode 11g channel 1
....
Use man:ifconfig[8] again to see the status of the [.filename]#wlan0# interface:
[source,shell]
....
# ifconfig wlan0
wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
ether 00:11:95:c3:0d:ac
inet 192.168.0.1 netmask 0xffffff00 broadcast 192.168.0.255
media: IEEE 802.11 Wireless Ethernet autoselect mode 11g <hostap>
status: running
ssid freebsdap channel 1 (2412 Mhz 11g) bssid 00:11:95:c3:0d:ac
country US ecm authmode OPEN privacy OFF txpower 21.5 scanvalid 60
protmode CTS wme burst dtimperiod 1 -dfs
....
The `hostap` parameter indicates the interface is running in the host-based access point mode.
The interface configuration can be done automatically at boot time by adding the following lines to [.filename]#/etc/rc.conf#:
[.programlisting]
....
wlans_ath0="wlan0"
create_args_wlan0="wlanmode hostap"
ifconfig_wlan0="inet 192.168.0.1 netmask 255.255.255.0 ssid freebsdap mode 11g channel 1"
....
==== Host-based Access Point Without Authentication or Encryption
Although it is not recommended to run an AP without any authentication or encryption, this is a simple way to check if the AP is working. This configuration is also important for debugging client issues.
Once the AP is configured, initiate a scan from another wireless machine to find the AP:
[source,shell]
....
# ifconfig wlan0 create wlandev ath0
# ifconfig wlan0 up scan
SSID/MESH ID BSSID CHAN RATE S:N INT CAPS
freebsdap 00:11:95:c3:0d:ac 1 54M -66:-96 100 ES WME
....
The client machine found the AP and can be associated with it:
[source,shell]
....
# ifconfig wlan0 inet 192.168.0.2 netmask 255.255.255.0 ssid freebsdap
# ifconfig wlan0
wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
ether 00:11:95:d5:43:62
inet 192.168.0.2 netmask 0xffffff00 broadcast 192.168.0.255
media: IEEE 802.11 Wireless Ethernet OFDM/54Mbps mode 11g
status: associated
ssid freebsdap channel 1 (2412 Mhz 11g) bssid 00:11:95:c3:0d:ac
country US ecm authmode OPEN privacy OFF txpower 21.5 bmiss 7
scanvalid 60 bgscan bgscanintvl 300 bgscanidle 250 roam:rssi 7
roam:rate 5 protmode CTS wme burst
....
[[network-wireless-ap-wpa]]
==== WPA2 Host-based Access Point
This section focuses on setting up a FreeBSD access point using the WPA2 security protocol. More details regarding WPA and the configuration of WPA-based wireless clients can be found in <<network-wireless-wpa>>.
The man:hostapd[8] daemon is used to deal with client authentication and key management on the WPA2-enabled AP.
The following configuration operations are performed on the FreeBSD machine acting as the AP. Once the AP is correctly working, man:hostapd[8] can be automatically started at boot with this line in [.filename]#/etc/rc.conf#:
[.programlisting]
....
hostapd_enable="YES"
....
Before trying to configure man:hostapd[8], first configure the basic settings introduced in <<network-wireless-ap-basic>>.
===== WPA2-PSK
WPA2-PSK is intended for small networks where the use of a backend authentication server is not possible or desired.
The configuration is done in [.filename]#/etc/hostapd.conf#:
[.programlisting]
....
interface=wlan0 <.>
debug=1 <.>
ctrl_interface=/var/run/hostapd <.>
ctrl_interface_group=wheel <.>
ssid=freebsdap <.>
wpa=2 <.>
wpa_passphrase=freebsdmall <.>
wpa_key_mgmt=WPA-PSK <.>
wpa_pairwise=CCMP <.>
....
<.> Wireless interface used for the access point.
<.> Level of verbosity used during the execution of man:hostapd[8]. A value of `1` represents the minimal level.
<.> Pathname of the directory used by man:hostapd[8] to store domain socket files for communication with external programs such as man:hostapd_cli[8]. The default value is used in this example.
<.> The group allowed to access the control interface files.
<.> The wireless network name, or SSID, that will appear in wireless scans.
<.> Enable WPA and specify which WPA authentication protocol will be required. A value of `2` configures the AP for WPA2 and is recommended. Set to `1` only if the obsolete WPA is required.
<.> ASCII passphrase for WPA authentication.
<.> The key management protocol to use. This example sets WPA-PSK.
<.> Encryption algorithms accepted by the access point. In this example, only the CCMP (AES) cipher is accepted. CCMP is an alternative to TKIP and is strongly preferred when possible. TKIP should be allowed only when there are stations incapable of using CCMP.
The next step is to start man:hostapd[8]:
[source,shell]
....
# service hostapd forcestart
....
[source,shell]
....
# ifconfig wlan0
wlan0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
ether 04:f0:21:16:8e:10
inet6 fe80::6f0:21ff:fe16:8e10%wlan0 prefixlen 64 scopeid 0x9
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
media: IEEE 802.11 Wireless Ethernet autoselect mode 11na <hostap>
status: running
ssid No5ignal channel 36 (5180 MHz 11a ht/40+) bssid 04:f0:21:16:8e:10
country US ecm authmode WPA2/802.11i privacy MIXED deftxkey 2
AES-CCM 2:128-bit AES-CCM 3:128-bit txpower 17 mcastrate 6 mgmtrate 6
scanvalid 60 ampdulimit 64k ampdudensity 8 shortgi wme burst
dtimperiod 1 -dfs
groups: wlan
....
Once the AP is running, the clients can associate with it. See <<network-wireless-wpa>> for more details. It is possible to see the stations associated with the AP using `ifconfig _wlan0_ list sta`.
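For example, to list the stations currently associated with the AP configured above (substitute the actual interface name if it differs):
[source,shell]
....
# ifconfig wlan0 list sta
....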
==== WEP Host-based Access Point
It is not recommended to use WEP for setting up an AP since there is no authentication mechanism and the encryption is easily cracked. Some legacy wireless cards only support WEP, and those cards can only be used for an AP without authentication or encryption.
The wireless device can now be put into hostap mode and configured with the correct SSID and IP address:
[source,shell]
....
# ifconfig wlan0 create wlandev ath0 wlanmode hostap
# ifconfig wlan0 inet 192.168.0.1 netmask 255.255.255.0 \
ssid freebsdap wepmode on weptxkey 3 wepkey 3:0x3456789012 mode 11g
....
* The `weptxkey` indicates which WEP key will be used in the transmission. This example uses the third key as key numbering starts with `1`. This parameter must be specified in order to encrypt the data.
* The `wepkey` sets the selected WEP key. It should be in the format _index:key_. If the index is not given, key `1` is set. The index needs to be set when using keys other than the first key.
Use man:ifconfig[8] to see the status of the [.filename]#wlan0# interface:
[source,shell]
....
# ifconfig wlan0
wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
ether 00:11:95:c3:0d:ac
inet 192.168.0.1 netmask 0xffffff00 broadcast 192.168.0.255
media: IEEE 802.11 Wireless Ethernet autoselect mode 11g <hostap>
status: running
ssid freebsdap channel 4 (2427 Mhz 11g) bssid 00:11:95:c3:0d:ac
country US ecm authmode OPEN privacy ON deftxkey 3 wepkey 3:40-bit
txpower 21.5 scanvalid 60 protmode CTS wme burst dtimperiod 1 -dfs
....
From another wireless machine, it is now possible to initiate a scan to find the AP:
[source,shell]
....
# ifconfig wlan0 create wlandev ath0
# ifconfig wlan0 up scan
SSID BSSID CHAN RATE S:N INT CAPS
freebsdap 00:11:95:c3:0d:ac 1 54M 22:1 100 EPS
....
In this example, the client machine found the AP and can associate with it using the correct parameters. See <<network-wireless-wep>> for more details.
=== Using Both Wired and Wireless Connections
A wired connection provides better performance and reliability, while a wireless connection provides flexibility and mobility. Laptop users typically want to roam seamlessly between the two types of connections.
On FreeBSD, it is possible to combine two or more network interfaces in a "failover" fashion. This type of configuration uses the most preferred and available connection from a group of network interfaces, and the operating system switches automatically when the link state changes.
Link aggregation and failover is covered in <<network-aggregation>> and an example for using both wired and wireless connections is provided at <<networking-lagg-wired-and-wireless>>.
=== Troubleshooting
This section describes a number of steps to help troubleshoot common wireless networking problems.
* If the access point is not listed when scanning, check that the configuration has not restricted the wireless device to a limited set of channels.
* If the device cannot associate with an access point, verify that the configuration matches the settings on the access point. This includes the authentication scheme and any security protocols. Simplify the configuration as much as possible. If using a security protocol such as WPA or WEP, configure the access point for open authentication and no security to see if traffic will pass.
+
Debugging support is provided by man:wpa_supplicant[8]. Try running this utility manually with `-dd` and look at the system logs; see the example after this list.
* Once the system can associate with the access point, diagnose the network configuration using tools like man:ping[8].
* There are many lower-level debugging tools. Debugging messages can be enabled in the 802.11 protocol support layer using man:wlandebug[8]. For example, to enable console messages related to scanning for access points and the 802.11 protocol handshakes required to arrange communication:
+
[source,shell]
....
# wlandebug -i wlan0 +scan+auth+debug+assoc
net.wlan.0.debug: 0 => 0xc80000<assoc,auth,scan>
....
+
Many useful statistics are maintained by the 802.11 layer and `wlanstats`, found in [.filename]#/usr/src/tools/tools/net80211#, will dump this information. These statistics should display all errors identified by the 802.11 layer. However, some errors are identified in the device drivers that lie below the 802.11 layer so they may not show up. To diagnose device-specific problems, refer to the drivers' documentation.
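As referenced in the association troubleshooting step above, a minimal manual invocation of man:wpa_supplicant[8] with extra debugging output, assuming the interface and configuration file used earlier in this chapter, might look like:
[source,shell]
....
# wpa_supplicant -i wlan0 -c /etc/wpa_supplicant.conf -dd
....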
If the above information does not help to clarify the problem, submit a problem report and include output from the above tools.
[[network-usb-tethering]]
== USB Tethering
Many cellphones provide the option to share their data connection over USB (often called "tethering"). This feature uses one of RNDIS, CDC, or a custom Apple(R) iPhone(R)/iPad(R) protocol.
* Android(TM) devices generally use the man:urndis[4] driver.
* Apple(R) devices use the man:ipheth[4] driver.
* Older devices will often use the man:cdce[4] driver.
Before attaching a device, load the appropriate driver into the kernel:
[source,shell]
....
# kldload if_urndis
# kldload if_cdce
# kldload if_ipheth
....
Once the device is attached ``ue``_0_ will be available for use like a normal network device. Be sure that the "USB tethering" option is enabled on the device.
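As a quick check, and assuming the device appears as [.filename]#ue0#, an address can be requested with man:dhclient[8]:
[source,shell]
....
# ifconfig ue0
# dhclient ue0
....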
To make this change permanent and load the driver as a module at boot time, place the appropriate line of the following in [.filename]#/boot/loader.conf#:
[.programlisting]
....
if_urndis_load="YES"
if_cdce_load="YES"
if_ipheth_load="YES"
....
[[network-bluetooth]]
== Bluetooth
Bluetooth is a wireless technology for creating personal networks operating in the 2.4 GHz unlicensed band, with a range of 10 meters. Networks are usually formed ad-hoc from portable devices such as cellular phones, handhelds, and laptops. Unlike Wi-Fi wireless technology, Bluetooth offers higher level service profiles, such as FTP-like file servers, file pushing, voice transport, serial line emulation, and more.
This section describes the use of a USB Bluetooth dongle on a FreeBSD system. It then describes the various Bluetooth protocols and utilities.
=== Loading Bluetooth Support
The Bluetooth stack in FreeBSD is implemented using the man:netgraph[4] framework. A broad variety of Bluetooth USB dongles is supported by man:ng_ubt[4]. Broadcom BCM2033 based Bluetooth devices are supported by the man:ubtbcmfw[4] and man:ng_ubt[4] drivers. The 3Com Bluetooth PC Card 3CRWB60-A is supported by the man:ng_bt3c[4] driver. Serial and UART based Bluetooth devices are supported by man:sio[4], man:ng_h4[4], and man:hcseriald[8].
Before attaching a device, determine which of the above drivers it uses, then load the driver. For example, if the device uses the man:ng_ubt[4] driver:
[source,shell]
....
# kldload ng_ubt
....
If the Bluetooth device will be attached to the system during system startup, the system can be configured to load the module at boot time by adding the driver to [.filename]#/boot/loader.conf#:
[.programlisting]
....
ng_ubt_load="YES"
....
Once the driver is loaded, plug in the USB dongle. If the driver load was successful, output similar to the following should appear on the console and in [.filename]#/var/log/messages#:
[source,shell]
....
ubt0: vendor 0x0a12 product 0x0001, rev 1.10/5.25, addr 2
ubt0: Interface 0 endpoints: interrupt=0x81, bulk-in=0x82, bulk-out=0x2
ubt0: Interface 1 (alt.config 5) endpoints: isoc-in=0x83, isoc-out=0x3,
wMaxPacketSize=49, nframes=6, buffer size=294
....
To start and stop the Bluetooth stack, use its startup script. It is a good idea to stop the stack before unplugging the device. Starting the Bluetooth stack might require man:hcsecd[8] to be started. When starting the stack, the output should be similar to the following:
[source,shell]
....
# service bluetooth start ubt0
BD_ADDR: 00:02:72:00:d4:1a
Features: 0xff 0xff 0xf 00 00 00 00 00
<3-Slot> <5-Slot> <Encryption> <Slot offset>
<Timing accuracy> <Switch> <Hold mode> <Sniff mode>
<Park mode> <RSSI> <Channel quality> <SCO link>
<HV2 packets> <HV3 packets> <u-law log> <A-law log> <CVSD>
<Paging scheme> <Power control> <Transparent SCO data>
Max. ACL packet size: 192 bytes
Number of ACL packets: 8
Max. SCO packet size: 64 bytes
Number of SCO packets: 8
....
=== Finding Other Bluetooth Devices
The Host Controller Interface (HCI) provides a uniform method for accessing Bluetooth baseband capabilities. In FreeBSD, a netgraph HCI node is created for each Bluetooth device. For more details, refer to man:ng_hci[4].
One of the most common tasks is discovery of Bluetooth devices within RF proximity. This operation is called _inquiry_. Inquiry and other HCI-related operations are done using man:hccontrol[8]. The example below shows how to find out which Bluetooth devices are in range. The list of devices should be displayed in a few seconds. Note that a remote device will only answer the inquiry if it is set to _discoverable_ mode.
[source,shell]
....
% hccontrol -n ubt0hci inquiry
Inquiry result, num_responses=1
Inquiry result #0
BD_ADDR: 00:80:37:29:19:a4
Page Scan Rep. Mode: 0x1
Page Scan Period Mode: 00
Page Scan Mode: 00
Class: 52:02:04
Clock offset: 0x78ef
Inquiry complete. Status: No error [00]
....
The `BD_ADDR` is the unique address of a Bluetooth device, similar to the MAC address of a network card. This address is needed for further communication with a device, and it is possible to assign a human-readable name to a `BD_ADDR`. Information regarding the known Bluetooth hosts is contained in [.filename]#/etc/bluetooth/hosts#. The following example shows how to obtain the human-readable name that was assigned to the remote device:
[source,shell]
....
% hccontrol -n ubt0hci remote_name_request 00:80:37:29:19:a4
BD_ADDR: 00:80:37:29:19:a4
Name: Pav's T39
....
If an inquiry is performed on a remote Bluetooth device, it will find the computer as "your.host.name (ubt0)". The name assigned to the local device can be changed at any time.
Remote devices can be assigned aliases in [.filename]#/etc/bluetooth/hosts#. More information about [.filename]#/etc/bluetooth/hosts# file might be found in man:bluetooth.hosts[5].
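A minimal sketch of such an entry, reusing the address from the examples above with an assumed alias (see man:bluetooth.hosts[5] for the exact format):
[.programlisting]
....
# BD_ADDR            name
00:80:37:29:19:a4    pav
....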
The Bluetooth system provides a point-to-point connection between two Bluetooth units, or a point-to-multipoint connection which is shared among several Bluetooth devices. The following example shows how to create a connection to a remote device:
[source,shell]
....
% hccontrol -n ubt0hci create_connection BT_ADDR
....
`create_connection` accepts `BT_ADDR` as well as host aliases in [.filename]#/etc/bluetooth/hosts#.
The following example shows how to obtain the list of active baseband connections for the local device:
[source,shell]
....
% hccontrol -n ubt0hci read_connection_list
Remote BD_ADDR Handle Type Mode Role Encrypt Pending Queue State
00:80:37:29:19:a4 41 ACL 0 MAST NONE 0 0 OPEN
....
A _connection handle_ is useful when termination of the baseband connection is required, though it is normally not required to do this by hand. The stack will automatically terminate inactive baseband connections.
[source,shell]
....
# hccontrol -n ubt0hci disconnect 41
Connection handle: 41
Reason: Connection terminated by local host [0x16]
....
Type `hccontrol help` for a complete listing of available HCI commands. Most of the HCI commands do not require superuser privileges.
=== Device Pairing
By default, Bluetooth communication is not authenticated, and any device can talk to any other device. A Bluetooth device, such as a cellular phone, may choose to require authentication to provide a particular service. Bluetooth authentication is normally done with a _PIN code_, an ASCII string up to 16 characters in length. The user is required to enter the same PIN code on both devices. Once the user has entered the PIN code, both devices will generate a _link key_. After that, the link key can be stored either in the devices or in persistent storage. The next time the devices connect, they will use the previously generated link key. This procedure is called _pairing_. Note that if the link key is lost by either device, the pairing must be repeated.
The man:hcsecd[8] daemon is responsible for handling Bluetooth authentication requests. The default configuration file is [.filename]#/etc/bluetooth/hcsecd.conf#. An example section for a cellular phone with the PIN code set to `1234` is shown below:
[.programlisting]
....
device {
bdaddr 00:80:37:29:19:a4;
name "Pav's T39";
key nokey;
pin "1234";
}
....
The only limitation on PIN codes is length. Some devices, such as Bluetooth headsets, may have a fixed PIN code built in. The `-d` switch forces man:hcsecd[8] to stay in the foreground, so it is easy to see what is happening. Set the remote device to receive pairing and initiate the Bluetooth connection to the remote device. The remote device should indicate that pairing was accepted and request the PIN code. Enter the same PIN code listed in [.filename]#hcsecd.conf#. Now the computer and the remote device are paired. Alternatively, pairing can be initiated on the remote device.
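For example, to watch a pairing exchange interactively, man:hcsecd[8] can be run in the foreground with the switch described above:
[source,shell]
....
# hcsecd -d
....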
The following line can be added to [.filename]#/etc/rc.conf# to configure man:hcsecd[8] to start automatically on system start:
[.programlisting]
....
hcsecd_enable="YES"
....
The following is a sample of the man:hcsecd[8] daemon output:
[.programlisting]
....
hcsecd[16484]: Got Link_Key_Request event from 'ubt0hci', remote bdaddr 0:80:37:29:19:a4
hcsecd[16484]: Found matching entry, remote bdaddr 0:80:37:29:19:a4, name 'Pav's T39', link key doesn't exist
hcsecd[16484]: Sending Link_Key_Negative_Reply to 'ubt0hci' for remote bdaddr 0:80:37:29:19:a4
hcsecd[16484]: Got PIN_Code_Request event from 'ubt0hci', remote bdaddr 0:80:37:29:19:a4
hcsecd[16484]: Found matching entry, remote bdaddr 0:80:37:29:19:a4, name 'Pav's T39', PIN code exists
hcsecd[16484]: Sending PIN_Code_Reply to 'ubt0hci' for remote bdaddr 0:80:37:29:19:a4
....
=== Network Access with PPP Profiles
A Dial-Up Networking (DUN) profile can be used to configure a cellular phone as a wireless modem for connecting to a dial-up Internet access server. It can also be used to configure a computer to receive data calls from a cellular phone.
Network access with a PPP profile can be used to provide LAN access for a single Bluetooth device or multiple Bluetooth devices. It can also provide PC to PC connection using PPP networking over serial cable emulation.
In FreeBSD, these profiles are implemented with man:ppp[8] and the man:rfcomm_pppd[8] wrapper which converts a Bluetooth connection into something PPP can use. Before a profile can be used, a new PPP label must be created in [.filename]#/etc/ppp/ppp.conf#. Consult man:rfcomm_pppd[8] for examples.
In this example, man:rfcomm_pppd[8] is used to open a connection to a remote device with a `BD_ADDR` of `00:80:37:29:19:a4` on a DUN RFCOMM channel:
[source,shell]
....
# rfcomm_pppd -a 00:80:37:29:19:a4 -c -C dun -l rfcomm-dialup
....
The actual channel number will be obtained from the remote device using the SDP protocol. It is possible to specify the RFCOMM channel by hand, and in this case man:rfcomm_pppd[8] will not perform the SDP query. Use man:sdpcontrol[8] to find out the RFCOMM channel on the remote device.
In order to provide network access with the PPP LAN service, man:sdpd[8] must be running and a new entry for LAN clients must be created in [.filename]#/etc/ppp/ppp.conf#. Consult man:rfcomm_pppd[8] for examples. Finally, start the RFCOMM PPP server on a valid RFCOMM channel number. The RFCOMM PPP server will automatically register the Bluetooth LAN service with the local SDP daemon. The example below shows how to start the RFCOMM PPP server.
[source,shell]
....
# rfcomm_pppd -s -C 7 -l rfcomm-server
....
=== Bluetooth Protocols
This section provides an overview of the various Bluetooth protocols, their function, and associated utilities.
==== Logical Link Control and Adaptation Protocol (L2CAP)
The Logical Link Control and Adaptation Protocol (L2CAP) provides connection-oriented and connectionless data services to upper layer protocols. L2CAP permits higher level protocols and applications to transmit and receive L2CAP data packets up to 64 kilobytes in length.
L2CAP is based around the concept of _channels_. A channel is a logical connection on top of a baseband connection, where each channel is bound to a single protocol in a many-to-one fashion. Multiple channels can be bound to the same protocol, but a channel cannot be bound to multiple protocols. Each L2CAP packet received on a channel is directed to the appropriate higher level protocol. Multiple channels can share the same baseband connection.
In FreeBSD, a netgraph L2CAP node is created for each Bluetooth device. This node is normally connected to the downstream Bluetooth HCI node and upstream Bluetooth socket nodes. The default name for the L2CAP node is "devicel2cap". For more details refer to man:ng_l2cap[4].
A useful command is man:l2ping[8], which can be used to ping other devices. Some Bluetooth implementations might not return all of the data sent to them, so `0 bytes` in the following example is normal.
[source,shell]
....
# l2ping -a 00:80:37:29:19:a4
0 bytes from 0:80:37:29:19:a4 seq_no=0 time=48.633 ms result=0
0 bytes from 0:80:37:29:19:a4 seq_no=1 time=37.551 ms result=0
0 bytes from 0:80:37:29:19:a4 seq_no=2 time=28.324 ms result=0
0 bytes from 0:80:37:29:19:a4 seq_no=3 time=46.150 ms result=0
....
The man:l2control[8] utility is used to perform various operations on L2CAP nodes. This example shows how to obtain the list of logical connections (channels) and the list of baseband connections for the local device:
[source,shell]
....
% l2control -a 00:02:72:00:d4:1a read_channel_list
L2CAP channels:
Remote BD_ADDR SCID/ DCID PSM IMTU/ OMTU State
00:07:e0:00:0b:ca 66/ 64 3 132/ 672 OPEN
% l2control -a 00:02:72:00:d4:1a read_connection_list
L2CAP connections:
Remote BD_ADDR Handle Flags Pending State
00:07:e0:00:0b:ca 41 O 0 OPEN
....
Another diagnostic tool is man:btsockstat[1]. It is similar to man:netstat[1], but for Bluetooth network-related data structures. The example below shows the same logical connection as man:l2control[8] above.
[source,shell]
....
% btsockstat
Active L2CAP sockets
PCB Recv-Q Send-Q Local address/PSM Foreign address CID State
c2afe900 0 0 00:02:72:00:d4:1a/3 00:07:e0:00:0b:ca 66 OPEN
Active RFCOMM sessions
L2PCB PCB Flag MTU Out-Q DLCs State
c2afe900 c2b53380 1 127 0 Yes OPEN
Active RFCOMM sockets
PCB Recv-Q Send-Q Local address Foreign address Chan DLCI State
c2e8bc80 0 250 00:02:72:00:d4:1a 00:07:e0:00:0b:ca 3 6 OPEN
....
==== Radio Frequency Communication (RFCOMM)
The RFCOMM protocol provides emulation of serial ports over the L2CAP protocol. RFCOMM is a simple transport protocol, with additional provisions for emulating the 9 circuits of RS-232 (EIA/TIA-232-E) serial ports. It supports up to 60 simultaneous connections (RFCOMM channels) between two Bluetooth devices.
For the purposes of RFCOMM, a complete communication path involves two applications running on the communication endpoints with a communication segment between them. RFCOMM is intended to cover applications that make use of the serial ports of the devices in which they reside. The communication segment is a direct connect Bluetooth link from one device to another.
RFCOMM is only concerned with the connection between the devices in the direct connect case, or between the device and a modem in the network case. RFCOMM can support other configurations, such as modules that communicate via Bluetooth wireless technology on one side and provide a wired interface on the other side.
In FreeBSD, RFCOMM is implemented at the Bluetooth sockets layer.
==== Service Discovery Protocol (SDP)
The Service Discovery Protocol (SDP) provides the means for client applications to discover the existence of services provided by server applications as well as the attributes of those services. The attributes of a service include the type or class of service offered and the mechanism or protocol information needed to utilize the service.
SDP involves communication between an SDP server and an SDP client. The server maintains a list of service records that describe the characteristics of services associated with the server. Each service record contains information about a single service. A client may retrieve information from a service record maintained by the SDP server by issuing an SDP request. If the client, or an application associated with the client, decides to use a service, it must open a separate connection to the service provider in order to utilize the service. SDP provides a mechanism for discovering services and their attributes, but it does not provide a mechanism for utilizing those services.
Normally, an SDP client searches for services based on some desired characteristics of the services. However, there are times when it is desirable to discover which types of services are described by an SDP server's service records without any prior information about the services. This process of looking for any offered services is called _browsing_.
The Bluetooth SDP server, man:sdpd[8], and command-line client, man:sdpcontrol[8], are included in the standard FreeBSD installation. The following example shows how to perform an SDP browse query.
[source,shell]
....
% sdpcontrol -a 00:01:03:fc:6e:ec browse
Record Handle: 00000000
Service Class ID List:
Service Discovery Server (0x1000)
Protocol Descriptor List:
L2CAP (0x0100)
Protocol specific parameter #1: u/int/uuid16 1
Protocol specific parameter #2: u/int/uuid16 1
Record Handle: 0x00000001
Service Class ID List:
Browse Group Descriptor (0x1001)
Record Handle: 0x00000002
Service Class ID List:
LAN Access Using PPP (0x1102)
Protocol Descriptor List:
L2CAP (0x0100)
RFCOMM (0x0003)
Protocol specific parameter #1: u/int8/bool 1
Bluetooth Profile Descriptor List:
LAN Access Using PPP (0x1102) ver. 1.0
....
Note that each service has a list of attributes, such as the RFCOMM channel. Depending on the service, the user might need to make note of some of the attributes. Some Bluetooth implementations do not support service browsing and may return an empty list. In this case, it is possible to search for the specific service. The example below shows how to search for the OBEX Object Push (OPUSH) service:
[source,shell]
....
% sdpcontrol -a 00:01:03:fc:6e:ec search OPUSH
....
Offering services on FreeBSD to Bluetooth clients is done with the man:sdpd[8] server. The following line can be added to [.filename]#/etc/rc.conf#:
[.programlisting]
....
sdpd_enable="YES"
....
Then the man:sdpd[8] daemon can be started with:
[source,shell]
....
# service sdpd start
....
The local server application that wants to provide a Bluetooth service to remote clients will register the service with the local SDP daemon. An example of such an application is man:rfcomm_pppd[8]. Once started, it will register the Bluetooth LAN service with the local SDP daemon.
The list of services registered with the local SDP server can be obtained by issuing an SDP browse query via the local control channel:
[source,shell]
....
# sdpcontrol -l browse
....
==== OBEX Object Push (OPUSH)
Object Exchange (OBEX) is a widely used protocol for simple file transfers between mobile devices. Its main use is in infrared communication, where it is used for generic file transfers between notebooks or PDAs, and for sending business cards or calendar entries between cellular phones and other devices with Personal Information Manager (PIM) applications.
The OBEX server and client are implemented by obexapp, which can be installed using the package:comms/obexapp[] package or port.
The OBEX client is used to push objects to and pull objects from the OBEX server. An example object is a business card or an appointment. The OBEX client can obtain the RFCOMM channel number from the remote device via SDP. This can be done by specifying the service name instead of the RFCOMM channel number. Supported service names are: `IrMC`, `FTRN`, and `OPUSH`. It is also possible to specify the RFCOMM channel as a number. Below is an example of an OBEX session where the device information object is pulled from the cellular phone, and a new object, the business card, is pushed into the phone's directory.
[source,shell]
....
% obexapp -a 00:80:37:29:19:a4 -C IrMC
obex> get telecom/devinfo.txt devinfo-t39.txt
Success, response: OK, Success (0x20)
obex> put new.vcf
Success, response: OK, Success (0x20)
obex> di
Success, response: OK, Success (0x20)
....
In order to provide the OPUSH service, man:sdpd[8] must be running and a root folder, where all incoming objects will be stored, must be created. The default path to the root folder is [.filename]#/var/spool/obex#. Finally, start the OBEX server on a valid RFCOMM channel number. The OBEX server will automatically register the OPUSH service with the local SDP daemon. The example below shows how to start the OBEX server.
[source,shell]
....
# obexapp -s -C 10
....
==== Serial Port Profile (SPP)
The Serial Port Profile (SPP) allows Bluetooth devices to perform serial cable emulation. This profile allows legacy applications to use Bluetooth as a cable replacement, through a virtual serial port abstraction.
In FreeBSD, man:rfcomm_sppd[1] implements SPP and a pseudo tty is used as a virtual serial port abstraction. The example below shows how to connect to a remote device's serial port service. An RFCOMM channel does not have to be specified as man:rfcomm_sppd[1] can obtain it from the remote device via SDP. To override this, specify an RFCOMM channel on the command line.
[source,shell]
....
# rfcomm_sppd -a 00:07:E0:00:0B:CA -t
rfcomm_sppd[94692]: Starting on /dev/pts/6...
/dev/pts/6
....
Once connected, the pseudo tty can be used as a serial port:
[source,shell]
....
# cu -l /dev/pts/6
....
The pseudo tty is printed on stdout and can be read by wrapper scripts:
[.programlisting]
....
PTS=`rfcomm_sppd -a 00:07:E0:00:0B:CA -t`
cu -l $PTS
....
=== Troubleshooting
By default, when FreeBSD is accepting a new connection, it tries to perform a role switch and become master. Some older Bluetooth devices which do not support role switching will not be able to connect. Since role switching is performed when a new connection is being established, it is not possible to ask the remote device if it supports role switching. However, there is an HCI option to disable role switching on the local side:
[source,shell]
....
# hccontrol -n ubt0hci write_node_role_switch 0
....
To display Bluetooth packets, use the third-party package hcidump, which can be installed using the package:comms/hcidump[] package or port. This utility is similar to man:tcpdump[1] and can be used to display the contents of Bluetooth packets on the terminal and to dump the Bluetooth packets to a file.
[[network-bridging]]
== Bridging
It is sometimes useful to divide a network, such as an Ethernet segment, into network segments without having to create IP subnets and use a router to connect the segments together. A device that connects two networks together in this fashion is called a "bridge".
A bridge works by learning the MAC addresses of the devices on each of its network interfaces. It forwards traffic between networks only when the source and destination MAC addresses are on different networks. In many respects, a bridge is like an Ethernet switch with very few ports. A FreeBSD system with multiple network interfaces can be configured to act as a bridge.
Bridging can be useful in the following situations:
Connecting Networks::
The basic operation of a bridge is to join two or more network segments. There are many reasons to use a host-based bridge instead of networking equipment, such as cabling constraints or firewalling. A bridge can also connect a wireless interface running in hostap mode to a wired network and act as an access point.
Filtering/Traffic Shaping Firewall::
A bridge can be used when firewall functionality is needed without routing or Network Address Translation (NAT).
+
An example is a small company that is connected via DSL or ISDN to an ISP. There are thirteen public IP addresses from the ISP and ten computers on the network. In this situation, using a router-based firewall is difficult because of subnetting issues. A bridge-based firewall can be configured without any IP addressing issues.
Network Tap::
A bridge can join two network segments in order to inspect all Ethernet frames that pass between them using man:bpf[4] and man:tcpdump[1] on the bridge interface or by sending a copy of all frames out an additional interface known as a span port.
Layer 2 VPN::
Two Ethernet networks can be joined across an IP link by bridging the networks to an EtherIP tunnel or a man:tap[4] based solution such as OpenVPN.
Layer 2 Redundancy::
A network can be connected with multiple links, using the Spanning Tree Protocol (STP) to block redundant paths.
This section describes how to configure a FreeBSD system as a bridge using man:if_bridge[4]. A netgraph bridging driver is also available, and is described in man:ng_bridge[4].
[NOTE]
====
Packet filtering can be used with any firewall package that hooks into the man:pfil[9] framework. The bridge can be used as a traffic shaper with man:altq[4] or man:dummynet[4].
====
=== Enabling the Bridge
In FreeBSD, man:if_bridge[4] is a kernel module which is automatically loaded by man:ifconfig[8] when creating a bridge interface. It is also possible to compile bridge support into a custom kernel by adding `device if_bridge` to the custom kernel configuration file.
The bridge is created using interface cloning. To create the bridge interface:
[source,shell]
....
# ifconfig bridge create
bridge0
# ifconfig bridge0
bridge0: flags=8802<BROADCAST,SIMPLEX,MULTICAST> metric 0 mtu 1500
ether 96:3d:4b:f1:79:7a
id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
maxage 20 holdcnt 6 proto rstp maxaddr 100 timeout 1200
root id 00:00:00:00:00:00 priority 0 ifcost 0 port 0
....
When a bridge interface is created, it is automatically assigned a randomly generated Ethernet address. The `maxaddr` and `timeout` parameters control how many MAC addresses the bridge will keep in its forwarding table and how many seconds before each entry is removed after it is last seen. The other parameters control how STP operates.
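For example, to raise the cache to 200 addresses and expire entries after 240 seconds (values chosen purely for illustration):
[source,shell]
....
# ifconfig bridge0 maxaddr 200 timeout 240
....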
Next, specify which network interfaces to add as members of the bridge. For the bridge to forward packets, all member interfaces and the bridge need to be up:
[source,shell]
....
# ifconfig bridge0 addm fxp0 addm fxp1 up
# ifconfig fxp0 up
# ifconfig fxp1 up
....
The bridge can now forward Ethernet frames between [.filename]#fxp0# and [.filename]#fxp1#. Add the following lines to [.filename]#/etc/rc.conf# so the bridge is created at startup:
[.programlisting]
....
cloned_interfaces="bridge0"
ifconfig_bridge0="addm fxp0 addm fxp1 up"
ifconfig_fxp0="up"
ifconfig_fxp1="up"
....
If the bridge host needs an IP address, set it on the bridge interface, not on the member interfaces. The address can be set statically or via DHCP. This example sets a static IP address:
[source,shell]
....
# ifconfig bridge0 inet 192.168.0.1/24
....
It is also possible to assign an IPv6 address to a bridge interface. To make the changes permanent, add the addressing information to [.filename]#/etc/rc.conf#.
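For example, the `ifconfig_bridge0` line shown earlier can be extended to include the address, and an IPv6 address added with an `_ipv6` line; the IPv6 prefix here is only a documentation placeholder:
[.programlisting]
....
ifconfig_bridge0="inet 192.168.0.1/24 addm fxp0 addm fxp1 up"
ifconfig_bridge0_ipv6="inet6 2001:db8::1/64"
....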
[NOTE]
====
When packet filtering is enabled, bridged packets will pass through the filter inbound on the originating interface, on the bridge interface, and outbound on the appropriate interfaces. Either stage can be disabled. When direction of the packet flow is important, it is best to firewall on the member interfaces rather than the bridge itself.
The bridge has several configurable settings for passing non-IP and IP packets, and layer2 firewalling with man:ipfw[8]. See man:if_bridge[4] for more information.
====
=== Enabling Spanning Tree
For an Ethernet network to function properly, only one active path can exist between two devices. STP detects loops and puts redundant links into a blocked state. Should one of the active links fail, STP calculates a different tree and enables one of the blocked paths to restore connectivity to all points in the network.
The Rapid Spanning Tree Protocol (RSTP or 802.1w) provides backwards compatibility with legacy STP. RSTP provides faster convergence and exchanges information with neighboring switches to quickly transition to forwarding mode without creating loops. FreeBSD supports RSTP and STP as operating modes, with RSTP being the default mode.
STP can be enabled on member interfaces using man:ifconfig[8]. For a bridge with [.filename]#fxp0# and [.filename]#fxp1# as the current interfaces, enable STP with:
[source,shell]
....
# ifconfig bridge0 stp fxp0 stp fxp1
bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
ether d6:cf:d5:a0:94:6d
id 00:01:02:4b:d4:50 priority 32768 hellotime 2 fwddelay 15
maxage 20 holdcnt 6 proto rstp maxaddr 100 timeout 1200
root id 00:01:02:4b:d4:50 priority 32768 ifcost 0 port 0
member: fxp0 flags=1c7<LEARNING,DISCOVER,STP,AUTOEDGE,PTP,AUTOPTP>
port 3 priority 128 path cost 200000 proto rstp
role designated state forwarding
member: fxp1 flags=1c7<LEARNING,DISCOVER,STP,AUTOEDGE,PTP,AUTOPTP>
port 4 priority 128 path cost 200000 proto rstp
role designated state forwarding
....
This bridge has a spanning tree ID of `00:01:02:4b:d4:50` and a priority of `32768`. Because the `root id` matches the bridge's own `id`, this is the root bridge for the tree.
Another bridge on the network also has STP enabled:
[source,shell]
....
bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
ether 96:3d:4b:f1:79:7a
id 00:13:d4:9a:06:7a priority 32768 hellotime 2 fwddelay 15
maxage 20 holdcnt 6 proto rstp maxaddr 100 timeout 1200
root id 00:01:02:4b:d4:50 priority 32768 ifcost 400000 port 4
member: fxp0 flags=1c7<LEARNING,DISCOVER,STP,AUTOEDGE,PTP,AUTOPTP>
port 4 priority 128 path cost 200000 proto rstp
role root state forwarding
member: fxp1 flags=1c7<LEARNING,DISCOVER,STP,AUTOEDGE,PTP,AUTOPTP>
port 5 priority 128 path cost 200000 proto rstp
role designated state forwarding
....
The line `root id 00:01:02:4b:d4:50 priority 32768 ifcost 400000 port 4` shows that the root bridge is `00:01:02:4b:d4:50` and has a path cost of `400000` from this bridge. The path to the root bridge is via `port 4` which is [.filename]#fxp0#.
=== Bridge Interface Parameters
Several `ifconfig` parameters are unique to bridge interfaces. This section summarizes some common uses for these parameters. The complete list of available parameters is described in man:ifconfig[8].
private::
A private interface does not forward any traffic to any other port that is also designated as a private interface. The traffic is blocked unconditionally so no Ethernet frames will be forwarded, including ARP packets. If traffic needs to be selectively blocked, a firewall should be used instead.
span::
A span port transmits a copy of every Ethernet frame received by the bridge. The number of span ports configured on a bridge is unlimited, but if an interface is designated as a span port, it cannot also be used as a regular bridge port. This is most useful for snooping a bridged network passively on another host connected to one of the span ports of the bridge. For example, to send a copy of all frames out the interface named [.filename]#fxp4#:
+
[source,shell]
....
# ifconfig bridge0 span fxp4
....
sticky::
If a bridge member interface is marked as sticky, dynamically learned address entries are treated as static entries in the forwarding cache. Sticky entries are never aged out of the cache or replaced, even if the address is seen on a different interface. This gives the benefit of static address entries without the need to pre-populate the forwarding table. Clients learned on a particular segment of the bridge cannot roam to another segment.
+
An example of using sticky addresses is to combine the bridge with VLANs in order to isolate customer networks without wasting IP address space. Consider that `CustomerA` is on `vlan100`, `CustomerB` is on `vlan101`, and the bridge has the address `192.168.0.1`:
+
[source,shell]
....
# ifconfig bridge0 addm vlan100 sticky vlan100 addm vlan101 sticky vlan101
# ifconfig bridge0 inet 192.168.0.1/24
....
+
In this example, both clients see `192.168.0.1` as their default gateway. Since the bridge cache is sticky, one host cannot spoof the MAC address of the other customer in order to intercept their traffic.
+
Any communication between the VLANs can be blocked using a firewall or, as seen in this example, private interfaces:
+
[source,shell]
....
# ifconfig bridge0 private vlan100 private vlan101
....
+
The customers are completely isolated from each other and the full `/24` address range can be allocated without subnetting.
+
The number of unique source MAC addresses behind an interface can be limited. Once the limit is reached, packets with unknown source addresses are dropped until an existing host cache entry expires or is removed.
+
The following example sets the maximum number of Ethernet devices for `CustomerA` on `vlan100` to 10:
+
[source,shell]
....
# ifconfig bridge0 ifmaxaddr vlan100 10
....
Bridge interfaces also support monitor mode, where the packets are discarded after man:bpf[4] processing and are not processed or forwarded further. This can be used to multiplex the input of two or more interfaces into a single man:bpf[4] stream. This is useful for reconstructing the traffic for network taps that transmit the RX/TX signals out through two separate interfaces. For example, to read the input from four network interfaces as one stream:
[source,shell]
....
# ifconfig bridge0 addm fxp0 addm fxp1 addm fxp2 addm fxp3 monitor up
# tcpdump -i bridge0
....
=== SNMP Monitoring
The bridge interface and STP parameters can be monitored via man:bsnmpd[1] which is included in the FreeBSD base system. The exported bridge MIBs conform to IETF standards so any SNMP client or monitoring package can be used to retrieve the data.
To enable monitoring on the bridge, uncomment this line in [.filename]#/etc/snmpd.config# by removing the beginning `#` symbol:
[.programlisting]
....
begemotSnmpdModulePath."bridge" = "/usr/lib/snmp_bridge.so"
....
Other configuration settings, such as community names and access lists, may need to be modified in this file. See man:bsnmpd[1] and man:snmp_bridge[3] for more information. Once these edits are saved, add this line to [.filename]#/etc/rc.conf#:
[.programlisting]
....
bsnmpd_enable="YES"
....
Then, start man:bsnmpd[1]:
[source,shell]
....
# service bsnmpd start
....
The following examples use the Net-SNMP software (package:net-mgmt/net-snmp[]) to query a bridge from a client system. The package:net-mgmt/bsnmptools[] port can also be used. From the SNMP client which is running Net-SNMP, add the following lines to [.filename]#$HOME/.snmp/snmp.conf# in order to import the bridge MIB definitions:
[.programlisting]
....
mibdirs +/usr/share/snmp/mibs
mibs +BRIDGE-MIB:RSTP-MIB:BEGEMOT-MIB:BEGEMOT-BRIDGE-MIB
....
To monitor a single bridge using the IETF BRIDGE-MIB (RFC4188):
[source,shell]
....
% snmpwalk -v 2c -c public bridge1.example.com mib-2.dot1dBridge
BRIDGE-MIB::dot1dBaseBridgeAddress.0 = STRING: 66:fb:9b:6e:5c:44
BRIDGE-MIB::dot1dBaseNumPorts.0 = INTEGER: 1 ports
BRIDGE-MIB::dot1dStpTimeSinceTopologyChange.0 = Timeticks: (189959) 0:31:39.59 centi-seconds
BRIDGE-MIB::dot1dStpTopChanges.0 = Counter32: 2
BRIDGE-MIB::dot1dStpDesignatedRoot.0 = Hex-STRING: 80 00 00 01 02 4B D4 50
...
BRIDGE-MIB::dot1dStpPortState.3 = INTEGER: forwarding(5)
BRIDGE-MIB::dot1dStpPortEnable.3 = INTEGER: enabled(1)
BRIDGE-MIB::dot1dStpPortPathCost.3 = INTEGER: 200000
BRIDGE-MIB::dot1dStpPortDesignatedRoot.3 = Hex-STRING: 80 00 00 01 02 4B D4 50
BRIDGE-MIB::dot1dStpPortDesignatedCost.3 = INTEGER: 0
BRIDGE-MIB::dot1dStpPortDesignatedBridge.3 = Hex-STRING: 80 00 00 01 02 4B D4 50
BRIDGE-MIB::dot1dStpPortDesignatedPort.3 = Hex-STRING: 03 80
BRIDGE-MIB::dot1dStpPortForwardTransitions.3 = Counter32: 1
RSTP-MIB::dot1dStpVersion.0 = INTEGER: rstp(2)
....
The `dot1dStpTopChanges.0` value is two, indicating that the STP bridge topology has changed twice. A topology change means that one or more links in the network have changed or failed and a new tree has been calculated. The `dot1dStpTimeSinceTopologyChange.0` value will show when this happened.
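For example, to retrieve just that timer with Net-SNMP, using the same community and host as above:
[source,shell]
....
% snmpget -v 2c -c public bridge1.example.com BRIDGE-MIB::dot1dStpTimeSinceTopologyChange.0
....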
To monitor multiple bridge interfaces, the private BEGEMOT-BRIDGE-MIB can be used:
[source,shell]
....
% snmpwalk -v 2c -c public bridge1.example.com
enterprises.fokus.begemot.begemotBridge
BEGEMOT-BRIDGE-MIB::begemotBridgeBaseName."bridge0" = STRING: bridge0
BEGEMOT-BRIDGE-MIB::begemotBridgeBaseName."bridge2" = STRING: bridge2
BEGEMOT-BRIDGE-MIB::begemotBridgeBaseAddress."bridge0" = STRING: e:ce:3b:5a:9e:13
BEGEMOT-BRIDGE-MIB::begemotBridgeBaseAddress."bridge2" = STRING: 12:5e:4d:74:d:fc
BEGEMOT-BRIDGE-MIB::begemotBridgeBaseNumPorts."bridge0" = INTEGER: 1
BEGEMOT-BRIDGE-MIB::begemotBridgeBaseNumPorts."bridge2" = INTEGER: 1
...
BEGEMOT-BRIDGE-MIB::begemotBridgeStpTimeSinceTopologyChange."bridge0" = Timeticks: (116927) 0:19:29.27 centi-seconds
BEGEMOT-BRIDGE-MIB::begemotBridgeStpTimeSinceTopologyChange."bridge2" = Timeticks: (82773) 0:13:47.73 centi-seconds
BEGEMOT-BRIDGE-MIB::begemotBridgeStpTopChanges."bridge0" = Counter32: 1
BEGEMOT-BRIDGE-MIB::begemotBridgeStpTopChanges."bridge2" = Counter32: 1
BEGEMOT-BRIDGE-MIB::begemotBridgeStpDesignatedRoot."bridge0" = Hex-STRING: 80 00 00 40 95 30 5E 31
BEGEMOT-BRIDGE-MIB::begemotBridgeStpDesignatedRoot."bridge2" = Hex-STRING: 80 00 00 50 8B B8 C6 A9
....
To change the bridge interface being monitored via the `mib-2.dot1dBridge` subtree:
[source,shell]
....
% snmpset -v 2c -c private bridge1.example.com
BEGEMOT-BRIDGE-MIB::begemotBridgeDefaultBridgeIf.0 s bridge2
....
[[network-aggregation]]
== Link Aggregation and Failover
FreeBSD provides the man:lagg[4] interface which can be used to aggregate multiple network interfaces into one virtual interface in order to provide failover and link aggregation. Failover allows traffic to continue to flow as long as at least one aggregated network interface has an established link. Link aggregation works best on switches which support LACP, as this protocol distributes traffic bi-directionally while responding to the failure of individual links.
The aggregation protocols supported by the lagg interface determine which ports are used for outgoing traffic and whether or not a specific port accepts incoming traffic. The following protocols are supported by man:lagg[4]:
failover::
This mode sends and receives traffic only through the master port. If the master port becomes unavailable, the next active port is used. The first interface added to the virtual interface is the master port and all subsequently added interfaces are used as failover devices. If failover to a non-master port occurs, the original port becomes master once it becomes available again.
fec / loadbalance::
Cisco(R) Fast EtherChannel(R) (FEC) is found on older Cisco(R) switches. It provides a static setup and does not negotiate aggregation with the peer or exchange frames to monitor the link. If the switch supports LACP, that should be used instead.
lacp::
The IEEE(R) 802.3ad Link Aggregation Control Protocol (LACP) negotiates a set of aggregable links with the peer into one or more Link Aggregated Groups (LAGs). Each LAG is composed of ports of the same speed, set to full-duplex operation, and traffic is balanced across the ports in the LAG with the greatest total speed. Typically, there is only one LAG which contains all the ports. In the event of changes in physical connectivity, LACP will quickly converge to a new configuration.
+
LACP balances outgoing traffic across the active ports based on hashed protocol header information and accepts incoming traffic from any active port. The hash includes the Ethernet source and destination address and, if available, the VLAN tag, and the IPv4 or IPv6 source and destination address.
roundrobin::
This mode distributes outgoing traffic using a round-robin scheduler through all active ports and accepts incoming traffic from any active port. Since this mode violates Ethernet frame ordering, it should be used with caution.
=== Configuration Examples
This section demonstrates how to configure a Cisco(R) switch and a FreeBSD system for LACP load balancing. It then shows how to configure two Ethernet interfaces in failover mode as well as how to configure failover mode between an Ethernet and a wireless interface.
[[networking-lacp-aggregation-cisco]]
.LACP Aggregation with a Cisco(R) Switch
[example]
====
This example connects two man:fxp[4] Ethernet interfaces on a FreeBSD machine to the first two Ethernet ports on a Cisco(R) switch as a single load balanced and fault tolerant link. More interfaces can be added to increase throughput and fault tolerance. Replace the names of the Cisco(R) ports, Ethernet devices, channel group number, and IP address shown in the example to match the local configuration.
Frame ordering is mandatory on Ethernet links and any traffic between two stations always flows over the same physical link, limiting the maximum speed to that of one interface. The transmit algorithm attempts to use as much information as it can to distinguish different traffic flows and balance the flows across the available interfaces.
On the Cisco(R) switch, add the _FastEthernet0/1_ and _FastEthernet0/2_ interfaces to channel group _1_:
[source,shell]
....
interface FastEthernet0/1
channel-group 1 mode active
channel-protocol lacp
!
interface FastEthernet0/2
channel-group 1 mode active
channel-protocol lacp
....
On the FreeBSD system, create the man:lagg[4] interface using the physical interfaces _fxp0_ and _fxp1_ and bring the interfaces up with an IP address of _10.0.0.3/24_:
[source,shell]
....
# ifconfig fxp0 up
# ifconfig fxp1 up
# ifconfig lagg0 create
# ifconfig lagg0 up laggproto lacp laggport fxp0 laggport fxp1 10.0.0.3/24
....
Next, verify the status of the virtual interface:
[source,shell]
....
# ifconfig lagg0
lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=8<VLAN_MTU>
ether 00:05:5d:71:8d:b8
inet 10.0.0.3 netmask 0xffffff00 broadcast 10.0.0.255
media: Ethernet autoselect
status: active
laggproto lacp
laggport: fxp1 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
laggport: fxp0 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
....
Ports marked as `ACTIVE` are part of the LAG that has been negotiated with the remote switch. Traffic will be transmitted and received through these active ports. Add `-v` to the above command to view the LAG identifiers.
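For example, to include the LAG identifiers in the output:
[source,shell]
....
# ifconfig -v lagg0
....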
To see the port status on the Cisco(R) switch:
[source,shell]
....
switch# show lacp neighbor
Flags: S - Device is requesting Slow LACPDUs
F - Device is requesting Fast LACPDUs
A - Device is in Active mode P - Device is in Passive mode
Channel group 1 neighbors
Partner's information:
LACP port Oper Port Port
Port Flags Priority Dev ID Age Key Number State
Fa0/1 SA 32768 0005.5d71.8db8 29s 0x146 0x3 0x3D
Fa0/2 SA 32768 0005.5d71.8db8 29s 0x146 0x4 0x3D
....
For more detail, type `show lacp neighbor detail`.
To retain this configuration across reboots, add the following entries to [.filename]#/etc/rc.conf# on the FreeBSD system:
[.programlisting]
....
ifconfig_fxp0="up"
ifconfig_fxp1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport fxp0 laggport fxp1 10.0.0.3/24"
....
====
[[networking-lagg-failover]]
.Failover Mode
[example]
====
Failover mode can be used to switch over to a secondary interface if the link is lost on the master interface. To configure failover, make sure that the underlying physical interfaces are up, then create the man:lagg[4] interface. In this example, _fxp0_ is the master interface, _fxp1_ is the secondary interface, and the virtual interface is assigned an IP address of _10.0.0.15/24_:
[source,shell]
....
# ifconfig fxp0 up
# ifconfig fxp1 up
# ifconfig lagg0 create
# ifconfig lagg0 up laggproto failover laggport fxp0 laggport fxp1 10.0.0.15/24
....
The virtual interface should look something like this:
[source,shell]
....
# ifconfig lagg0
lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=8<VLAN_MTU>
ether 00:05:5d:71:8d:b8
inet 10.0.0.15 netmask 0xffffff00 broadcast 10.0.0.255
media: Ethernet autoselect
status: active
laggproto failover
laggport: fxp1 flags=0<>
laggport: fxp0 flags=5<MASTER,ACTIVE>
....
Traffic will be transmitted and received on _fxp0_. If the link is lost on _fxp0_, _fxp1_ will become the active link. If the link is restored on the master interface, it will once again become the active link.
To retain this configuration across reboots, add the following entries to [.filename]#/etc/rc.conf#:
[.programlisting]
....
ifconfig_fxp0="up"
ifconfig_fxp1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto failover laggport fxp0 laggport fxp1 10.0.0.15/24"
....
====
[[networking-lagg-wired-and-wireless]]
.Failover Mode Between Ethernet and Wireless Interfaces
[example]
====
For laptop users, it is usually desirable to configure the wireless device as a secondary which is only used when the Ethernet connection is not available. With man:lagg[4], it is possible to configure a failover which prefers the Ethernet connection for both performance and security reasons, while maintaining the ability to transfer data over the wireless connection.
This is achieved by overriding the Ethernet interface's MAC address with that of the wireless interface.
[NOTE]
****
In theory, either the Ethernet or wireless MAC address can be changed to match the other. However, some popular wireless interfaces lack support for overriding the MAC address. We therefore recommend overriding the Ethernet MAC address for this purpose.
****
[NOTE]
****
If the driver for the wireless interface is not loaded in the `GENERIC` or custom kernel, and the computer is running FreeBSD {rel121-current}, load the corresponding [.filename]#.ko# module by adding `*driver_load="YES"*` to [.filename]#/boot/loader.conf# and rebooting. A better alternative is to load the driver by adding it to `kld_list` in [.filename]#/etc/rc.conf# (see man:rc.conf[5] for details) and rebooting. Either approach is needed because otherwise the driver is not yet loaded when the man:lagg[4] interface is set up.
****
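For example, to load the man:ath[4] driver used in the example below via `kld_list`, the [.filename]#/etc/rc.conf# entry could look like this (the module name depends on the driver in use):
[.programlisting]
....
kld_list="if_ath"
....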
In this example, the Ethernet interface, _re0_, is the master and the wireless interface, _wlan0_, is the failover. The _wlan0_ interface was created from the _ath0_ physical wireless interface, and the Ethernet interface will be configured with the MAC address of the wireless interface. First, determine the MAC address of the wireless interface:
[source,shell]
....
# ifconfig wlan0
wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
ether b8:ee:65:5b:32:59
groups: wlan
ssid Bbox-A3BD2403 channel 6 (2437 MHz 11g ht/20) bssid 00:37:b7:56:4b:60
regdomain ETSI country FR indoor ecm authmode WPA2/802.11i privacy ON
deftxkey UNDEF AES-CCM 2:128-bit txpower 30 bmiss 7 scanvalid 60
protmode CTS ampdulimit 64k ampdudensity 8 shortgi -stbctx stbcrx
-ldpc wme burst roaming MANUAL
media: IEEE 802.11 Wireless Ethernet MCS mode 11ng
status: associated
nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
....
Replace _wlan0_ to match the system's wireless interface name. The `ether` line will contain the MAC address of the specified interface. Now, change the MAC address of the Ethernet interface:
[source,shell]
....
# ifconfig re0 ether b8:ee:65:5b:32:59
....
Bring the wireless interface up (replacing _FR_ with your own 2-letter country code), but do not set an IP address:
[source,shell]
....
# ifconfig wlan0 create wlandev ath0 country FR ssid my_router up
....
Make sure the _re0_ interface is up, then create the man:lagg[4] interface with _re0_ as master with failover to _wlan0_:
[source,shell]
....
# ifconfig re0 up
# ifconfig lagg0 create
# ifconfig lagg0 up laggproto failover laggport re0 laggport wlan0
....
The virtual interface should look something like this:
[source,shell]
....
# ifconfig lagg0
lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=8<VLAN_MTU>
ether b8:ee:65:5b:32:59
laggproto failover lagghash l2,l3,l4
laggport: re0 flags=5<MASTER,ACTIVE>
laggport: wlan0 flags=0<>
groups: lagg
media: Ethernet autoselect
status: active
....
Then, start the DHCP client to obtain an IP address:
[source,shell]
....
# dhclient lagg0
....
To retain this configuration across reboots, add the following entries to [.filename]#/etc/rc.conf#:
[.programlisting]
....
ifconfig_re0="ether b8:ee:65:5b:32:59"
wlans_ath0="wlan0"
ifconfig_wlan0="WPA"
create_args_wlan0="country FR"
cloned_interfaces="lagg0"
ifconfig_lagg0="up laggproto failover laggport re0 laggport wlan0 DHCP"
....
====
[[network-diskless]]
== Diskless Operation with PXE
The Intel(R) Preboot eXecution Environment (PXE) allows an operating system to boot over the network. For example, a FreeBSD system can boot over the network and operate without a local disk, using file systems mounted from an NFS server. PXE support is usually available in the BIOS. To use PXE when the machine starts, select the `Boot from network` option in the BIOS setup or press a function key during system initialization.
In order to provide the files needed for an operating system to boot over the network, a PXE setup also requires properly configured DHCP, TFTP, and NFS servers, where:
* Initial parameters, such as an IP address, executable boot filename and location, server name, and root path are obtained from the DHCP server.
* The operating system loader file is booted using TFTP.
* The file systems are loaded using NFS.
When a computer PXE boots, it receives information over DHCP about where to obtain the initial boot loader file. After the host computer receives this information, it downloads the boot loader via TFTP and then executes the boot loader. In FreeBSD, the boot loader file is [.filename]#/boot/pxeboot#. After [.filename]#/boot/pxeboot# executes, the FreeBSD kernel is loaded and the rest of the FreeBSD bootup sequence proceeds, as described in crossref:boot[boot,The FreeBSD Booting Process].
This section describes how to configure these services on a FreeBSD system so that other systems can PXE boot into FreeBSD. Refer to man:diskless[8] for more information.
[CAUTION]
====
As described, the system providing these services is insecure. It should live in a protected area of a network and be untrusted by other hosts.
====
[[network-pxe-nfs]]
=== Setting Up the PXE Environment
The steps shown in this section configure the built-in NFS and TFTP servers. The next section demonstrates how to install and configure the DHCP server. In this example, the directory which will contain the files used by PXE users is [.filename]#/b/tftpboot/FreeBSD/install#. It is important that this directory exists and that the same directory name is set in both [.filename]#/etc/inetd.conf# and [.filename]#/usr/local/etc/dhcpd.conf#.
[NOTE]
====
The command examples below assume use of the man:sh[1] shell. man:csh[1] and man:tcsh[1] users will need to start a man:sh[1] shell or adapt the commands to man:csh[1] syntax.
====
[.procedure]
. Create the root directory which will contain a FreeBSD installation to be NFS mounted:
+
[source,shell]
....
# export NFSROOTDIR=/b/tftpboot/FreeBSD/install
# mkdir -p ${NFSROOTDIR}
....
. Enable the NFS server by adding this line to [.filename]#/etc/rc.conf#:
+
[.programlisting]
....
nfs_server_enable="YES"
....
. Export the diskless root directory via NFS by adding the following to [.filename]#/etc/exports#:
+
[.programlisting]
....
/b -ro -alldirs -maproot=root
....
. Start the NFS server:
+
[source,shell]
....
# service nfsd start
....
. Enable man:inetd[8] by adding the following line to [.filename]#/etc/rc.conf#:
+
[.programlisting]
....
inetd_enable="YES"
....
. Uncomment the following line in [.filename]#/etc/inetd.conf# by making sure it does not start with a `#` symbol:
+
[.programlisting]
....
tftp dgram udp wait root /usr/libexec/tftpd tftpd -l -s /b/tftpboot
....
+
[NOTE]
====
Some PXE versions require the TCP version of TFTP. In this case, uncomment the second `tftp` line which contains `stream tcp`.
====
. Start man:inetd[8]:
+
[source,shell]
....
# service inetd start
....
. Install the base system into [.filename]#${NFSROOTDIR}#, either by decompressing the official archives or by rebuilding the FreeBSD kernel and userland (refer to crossref:cutting-edge[makeworld,“Updating FreeBSD from Source”] for more detailed instructions), but do not forget to add `DESTDIR=_${NFSROOTDIR}_` when running the `make installkernel` and `make installworld` commands.
. Test that the TFTP server works and can download the boot loader which will be obtained via PXE:
+
[source,shell]
....
# tftp localhost
tftp> get FreeBSD/install/boot/pxeboot
Received 264951 bytes in 0.1 seconds
....
. Edit [.filename]#${NFSROOTDIR}/etc/fstab# and create an entry to mount the root file system over NFS:
+
[.programlisting]
....
# Device Mountpoint FSType Options Dump Pass
myhost.example.com:/b/tftpboot/FreeBSD/install / nfs ro 0 0
....
+
Replace _myhost.example.com_ with the hostname or IP address of the NFS server. In this example, the root file system is mounted read-only in order to prevent NFS clients from potentially deleting the contents of the root file system.
. Set the root password in the PXE environment for client machines which are PXE booting:
+
[source,shell]
....
# chroot ${NFSROOTDIR}
# passwd
....
. If needed, enable man:ssh[1] root logins for client machines which are PXE booting by editing [.filename]#${NFSROOTDIR}/etc/ssh/sshd_config# and enabling `PermitRootLogin`. This option is documented in man:sshd_config[5].
. Perform any other needed customizations of the PXE environment in [.filename]#${NFSROOTDIR}#. These customizations could include things like installing packages or editing the password file with man:vipw[8].
When booting from an NFS root volume, [.filename]#/etc/rc# detects the NFS boot and runs [.filename]#/etc/rc.initdiskless#. In this case, [.filename]#/etc# and [.filename]#/var# need to be memory backed file systems so that these directories are writable but the NFS root directory is read-only:
[source,shell]
....
# chroot ${NFSROOTDIR}
# mkdir -p conf/base
# tar -c -v -f conf/base/etc.cpio.gz --format cpio --gzip etc
# tar -c -v -f conf/base/var.cpio.gz --format cpio --gzip var
....
When the system boots, memory file systems for [.filename]#/etc# and [.filename]#/var# will be created and mounted and the contents of the [.filename]#cpio.gz# files will be copied into them. By default, these file systems have a maximum capacity of 5 megabytes. If your archives do not fit, which is usually the case for [.filename]#/var# when binary packages have been installed, request a larger size by putting the number of 512 byte sectors needed (e.g., 5 megabytes is 10240 sectors) in [.filename]#${NFSROOTDIR}/conf/base/etc/md_size# and [.filename]#${NFSROOTDIR}/conf/base/var/md_size# files for [.filename]#/etc# and [.filename]#/var# file systems respectively.
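For example, still using the man:sh[1] shell with `NFSROOTDIR` set as in the earlier steps, 5 megabyte memory file systems for both directories could be requested like this:
[source,shell]
....
# echo 10240 > ${NFSROOTDIR}/conf/base/etc/md_size
# echo 10240 > ${NFSROOTDIR}/conf/base/var/md_size
....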
[[network-pxe-setting-up-dhcp]]
=== Configuring the DHCP Server
The DHCP server does not need to be the same machine as the TFTP and NFS server, but it needs to be accessible in the network.
DHCP is not part of the FreeBSD base system but can be installed using the package:net/isc-dhcp43-server[] port or package.
Once installed, edit the configuration file, [.filename]#/usr/local/etc/dhcpd.conf#. Configure the `next-server`, `filename`, and `root-path` settings as seen in this example:
[.programlisting]
....
subnet 192.168.0.0 netmask 255.255.255.0 {
range 192.168.0.2 192.168.0.3 ;
option subnet-mask 255.255.255.0 ;
option routers 192.168.0.1 ;
option broadcast-address 192.168.0.255 ;
option domain-name-servers 192.168.35.35, 192.168.35.36 ;
option domain-name "example.com";
# IP address of TFTP server
next-server 192.168.0.1 ;
# path of boot loader obtained via tftp
filename "FreeBSD/install/boot/pxeboot" ;
# pxeboot boot loader will try to NFS mount this directory for root FS
option root-path "192.168.0.1:/b/tftpboot/FreeBSD/install/" ;
}
....
The `next-server` directive is used to specify the IP address of the TFTP server.
The `filename` directive defines the path to [.filename]#/boot/pxeboot#. A relative filename is used, meaning that [.filename]#/b/tftpboot# is not included in the path.
The `root-path` option defines the path to the NFS root file system.
Once the edits are saved, enable DHCP at boot time by adding the following line to [.filename]#/etc/rc.conf#:
[.programlisting]
....
dhcpd_enable="YES"
....
Then start the DHCP service:
[source,shell]
....
# service isc-dhcpd start
....
=== Debugging PXE Problems
Once all of the services are configured and started, PXE clients should be able to automatically load FreeBSD over the network. If a particular client is unable to connect, when that client machine boots up, enter the BIOS configuration menu and confirm that it is set to boot from the network.
This section describes some troubleshooting tips for isolating the source of the configuration problem should no clients be able to PXE boot.
[.procedure]
****
. Use the package:net/wireshark[] package or port to debug the network traffic involved during the PXE booting process, which is illustrated in the diagram below.
+
.PXE Booting Process with NFS Root Mount
image::pxe-nfs.png[]
+
1. Client broadcasts a DHCPDISCOVER message.
+
2. The DHCP server responds with the IP address, next-server, filename, and root-path values.
+
3. The client sends a TFTP request to next-server, asking to retrieve filename.
+
4. The TFTP server responds and sends filename to client.
+
5. The client executes filename, which is pxeboot(8), which then loads the kernel. When the kernel executes, the root file system specified by root-path is mounted over NFS.
+
. On the TFTP server, read [.filename]#/var/log/xferlog# to ensure that [.filename]#pxeboot# is being retrieved from the correct location. To test this example configuration:
+
[source,shell]
....
# tftp 192.168.0.1
tftp> get FreeBSD/install/boot/pxeboot
Received 264951 bytes in 0.1 seconds
....
+
The `BUGS` sections in man:tftpd[8] and man:tftp[1] document some limitations with TFTP.
. Make sure that the root file system can be mounted via NFS. To test this example configuration:
+
[source,shell]
....
# mount -t nfs 192.168.0.1:/b/tftpboot/FreeBSD/install /mnt
....
****
[[network-ipv6]]
== IPv6
IPv6 is the newest version of the well-known IP protocol, whose previous version is known as IPv4. IPv6 provides several advantages over IPv4 as well as many new features:
* Its 128-bit address space allows for 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses. This addresses the IPv4 address shortage and eventual IPv4 address exhaustion.
* Routers only store network aggregation addresses in their routing tables, thus reducing the average size of a routing table to 8192 entries. This addresses the scalability issues associated with IPv4, which required every allocated block of IPv4 addresses to be exchanged between Internet routers, causing their routing tables to become too large to allow efficient routing.
* Address autoconfiguration (http://www.ietf.org/rfc/rfc2462.txt[RFC2462]).
* Mandatory multicast addresses.
* Built-in IPsec (IP security).
* Simplified header structure.
* Support for mobile IP.
* IPv6-to-IPv4 transition mechanisms.
FreeBSD includes the http://www.kame.net/[http://www.kame.net/] IPv6 reference implementation and comes with everything needed to use IPv6. This section focuses on getting IPv6 configured and running.
=== Background on IPv6 Addresses
There are three different types of IPv6 addresses:
Unicast::
A packet sent to a unicast address arrives at the interface belonging to the address.
Anycast::
These addresses are syntactically indistinguishable from unicast addresses but they address a group of interfaces. The packet destined for an anycast address will arrive at the nearest router interface. Anycast addresses are only used by routers.
Multicast::
These addresses identify a group of interfaces. A packet destined for a multicast address will arrive at all interfaces belonging to the multicast group. The IPv4 broadcast address, usually `xxx.xxx.xxx.255`, is expressed by multicast addresses in IPv6.
When reading an IPv6 address, the canonical form is represented as `x:x:x:x:x:x:x:x`, where each `x` represents a 16 bit hex value. An example is `FEBC:A574:382B:23C1:AA49:4592:4EFE:9982`.
Often, an address will have long substrings of all zeros. A `::` (double colon) can be used to replace one substring per address. Also, up to three leading ``0``s per hex value can be omitted. For example, `fe80::1` corresponds to the canonical form `fe80:0000:0000:0000:0000:0000:0000:0001`.
A third form is to write the last 32 bits using the well known IPv4 notation. For example, `2002::10.0.0.1` corresponds to the hexadecimal canonical representation `2002:0000:0000:0000:0000:0000:0a00:0001`, which in turn is equivalent to `2002::a00:1`.
To view a FreeBSD system's IPv6 address, use man:ifconfig[8]:
[source,shell]
....
# ifconfig
....
[.programlisting]
....
rl0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
inet 10.0.0.10 netmask 0xffffff00 broadcast 10.0.0.255
inet6 fe80::200:21ff:fe03:8e1%rl0 prefixlen 64 scopeid 0x1
ether 00:00:21:03:08:e1
media: Ethernet autoselect (100baseTX )
status: active
....
In this example, the [.filename]#rl0# interface is using `fe80::200:21ff:fe03:8e1%rl0`, an auto-configured link-local address which was automatically generated from the MAC address.
Some IPv6 addresses are reserved. A summary of these reserved addresses is seen in <<reservedip6>>:
[[reservedip6]]
.Reserved IPv6 Addresses
[cols="1,1,1,1", frame="none", options="header"]
|===
| IPv6 address
| Prefixlength (Bits)
| Description
| Notes
|`::`
|128 bits
|unspecified
|Equivalent to `0.0.0.0` in IPv4.
|`::1`
|128 bits
|loopback address
|Equivalent to `127.0.0.1` in IPv4.
|`::00:xx:xx:xx:xx`
|96 bits
|embedded IPv4
|The lower 32 bits are the compatible IPv4 address.
|`::ff:xx:xx:xx:xx`
|96 bits
|IPv4 mapped IPv6 address
|The lower 32 bits are the IPv4 address for hosts which do not support IPv6.
|`fe80::/10`
|10 bits
|link-local
|Equivalent to `169.254.0.0/16` in IPv4.
|`fc00::/7`
|7 bits
|unique-local
|Unique local addresses are intended for local communication and are only routable within a set of cooperating sites.
|`ff00::`
|8 bits
|multicast
|
|``2000::-3fff::``
|3 bits
|global unicast
|All global unicast addresses are assigned from this pool. The first 3 bits are `001`.
|===
For further information on the structure of IPv6 addresses, refer to http://www.ietf.org/rfc/rfc3513.txt[RFC3513].
=== Configuring IPv6
To configure a FreeBSD system as an IPv6 client, add these two lines to [.filename]#rc.conf#:
[.programlisting]
....
ifconfig_rl0_ipv6="inet6 accept_rtadv"
rtsold_enable="YES"
....
The first line enables the specified interface to receive router advertisement messages. The second line enables the router solicitation daemon, man:rtsol[8].
If the interface needs a statically assigned IPv6 address, add an entry to specify the static address and associated prefix length:
[.programlisting]
....
ifconfig_rl0_ipv6="inet6 2001:db8:4672:6565:2026:5043:2d42:5344 prefixlen 64"
....
To assign a default router, specify its address:
[.programlisting]
....
ipv6_defaultrouter="2001:db8:4672:6565::1"
....
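Once the settings are in effect (after a reboot or after restarting the network), the assigned address and the default route can be checked with standard tools, for example:
[source,shell]
....
# ifconfig rl0 inet6
# netstat -rn -f inet6
....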
=== Connecting to a Provider
In order to connect to other IPv6 networks, one must have a provider or a tunnel that supports IPv6:
* Contact an Internet Service Provider to see if they offer IPv6.
* http://www.tunnelbroker.net[Hurricane Electric] offers tunnels with end-points all around the globe.
[NOTE]
====
Install the package:net/freenet6[] package or port for a dial-up connection.
====
This section demonstrates how to take the directions from a tunnel provider and convert them into [.filename]#/etc/rc.conf# settings that will persist through reboots.
The first [.filename]#/etc/rc.conf# entry creates the generic tunneling interface [.filename]#gif0#:
[.programlisting]
....
cloned_interfaces="gif0"
....
Next, configure that interface with the IPv4 addresses of the local and remote endpoints. Replace `_MY_IPv4_ADDR_` and `_REMOTE_IPv4_ADDR_` with the actual IPv4 addresses:
[.programlisting]
....
create_args_gif0="tunnel MY_IPv4_ADDR REMOTE_IPv4_ADDR"
....
To apply the IPv6 address that has been assigned for use as the IPv6 tunnel endpoint, add this line, replacing `_MY_ASSIGNED_IPv6_TUNNEL_ENDPOINT_ADDR_` with the assigned address:
[.programlisting]
....
ifconfig_gif0_ipv6="inet6 MY_ASSIGNED_IPv6_TUNNEL_ENDPOINT_ADDR"
....
Then, set the default route for the other side of the IPv6 tunnel. Replace `_MY_IPv6_REMOTE_TUNNEL_ENDPOINT_ADDR_` with the default gateway address assigned by the provider:
[.programlisting]
....
ipv6_defaultrouter="MY_IPv6_REMOTE_TUNNEL_ENDPOINT_ADDR"
....
If the FreeBSD system will route IPv6 packets between the rest of the network and the world, enable the gateway using this line:
[.programlisting]
....
ipv6_gateway_enable="YES"
....
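The [.filename]#rc.conf# entry takes effect at the next boot. To enable IPv6 forwarding immediately, the underlying man:sysctl[8] variable controlling IPv6 forwarding can be set by hand:
[source,shell]
....
# sysctl net.inet6.ip6.forwarding=1
....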
=== Router Advertisement and Host Auto Configuration
This section demonstrates how to setup man:rtadvd[8] to advertise the IPv6 default route.
To enable man:rtadvd[8], add the following to [.filename]#/etc/rc.conf#:
[.programlisting]
....
rtadvd_enable="YES"
....
It is important to specify the interface on which to do IPv6 router advertisement. For example, to tell man:rtadvd[8] to use [.filename]#rl0#:
[.programlisting]
....
rtadvd_interfaces="rl0"
....
Next, create the configuration file, [.filename]#/etc/rtadvd.conf# as seen in this example:
[.programlisting]
....
rl0:\
:addrs#1:addr="2001:db8:1f11:246::":prefixlen#64:tc=ether:
....
Replace [.filename]#rl0# with the interface to be used and `2001:db8:1f11:246::` with the prefix of the allocation.
For a dedicated `/64` subnet, nothing else needs to be changed. Otherwise, change the `prefixlen#` to the correct value.
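After saving [.filename]#/etc/rtadvd.conf#, the daemon can be started without a reboot:
[source,shell]
....
# service rtadvd start
....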
=== IPv6 and IPv4 Address Mapping
When IPv6 is enabled on a server, there may be a need to enable IPv4 mapped IPv6 address communication. This compatibility option allows for IPv4 addresses to be represented as IPv6 addresses. Permitting IPv6 applications to communicate with IPv4 and vice versa may be a security issue.
This option may not be required in most cases and is available only for compatibility. This option will allow IPv6-only applications to work with IPv4 in a dual stack environment. This is most useful for third party applications which may not support an IPv6-only environment. To enable this feature, add the following to [.filename]#/etc/rc.conf#:
[.programlisting]
....
ipv6_ipv4mapping="YES"
....
Reviewing the information in RFC 3493, sections 3.6 and 3.7, as well as RFC 4038, section 4.2, may be useful to some administrators.
[[carp]]
== Common Address Redundancy Protocol (CARP)
The Common Address Redundancy Protocol (CARP) allows multiple hosts to share the same IP address and Virtual Host ID (VHID) in order to provide _high availability_ for one or more services. This means that one or more hosts can fail, and the other hosts will transparently take over so that users do not see a service failure.
In addition to the shared IP address, each host has its own IP address for management and configuration. All of the machines that share an IP address have the same VHID. The VHID for each virtual IP address must be unique across the broadcast domain of the network interface.
High availability using CARP is built into FreeBSD, though the steps to configure it vary slightly depending upon the FreeBSD version. This section provides the same example configuration for versions before FreeBSD 10 and for FreeBSD 10 and later.
This example configures failover support with three hosts, all with unique IP addresses, but providing the same web content. It has two different masters named `hosta.example.org` and `hostb.example.org`, with a shared backup named `hostc.example.org`.
These machines are load balanced with a Round Robin DNS configuration. The master and backup machines are configured identically except for their hostnames and management IP addresses. These servers must have the same configuration and run the same services. When the failover occurs, requests to the service on the shared IP address can only be answered correctly if the backup server has access to the same content. The backup machine has two additional CARP interfaces, one for each of the master content server's IP addresses. When a failure occurs, the backup server will pick up the failed master machine's IP address.
[[carp-10x]]
=== Using CARP on FreeBSD 10 and Later
Enable boot-time support for CARP by adding an entry for the [.filename]#carp.ko# kernel module in [.filename]#/boot/loader.conf#:
[.programlisting]
....
carp_load="YES"
....
To load the module now without rebooting:
[source,shell]
....
# kldload carp
....
For users who prefer to use a custom kernel, include the following line in the custom kernel configuration file and compile the kernel as described in crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel]:
[.programlisting]
....
device carp
....
The hostname, management IP address and subnet mask, shared IP address, and VHID are all set by adding entries to [.filename]#/etc/rc.conf#. This example is for `hosta.example.org`:
[.programlisting]
....
hostname="hosta.example.org"
ifconfig_em0="inet 192.168.1.3 netmask 255.255.255.0"
ifconfig_em0_alias0="inet vhid 1 pass testpass alias 192.168.1.50/32"
....
The next set of entries are for `hostb.example.org`. Since it represents a second master, it uses a different shared IP address and VHID. However, the passwords specified with `pass` must be identical as CARP will only listen to and accept advertisements from machines with the correct password.
[.programlisting]
....
hostname="hostb.example.org"
ifconfig_em0="inet 192.168.1.4 netmask 255.255.255.0"
ifconfig_em0_alias0="inet vhid 2 pass testpass alias 192.168.1.51/32"
....
The third machine, `hostc.example.org`, is configured to handle failover from either master. This machine is configured with two CARPVHIDs, one to handle the virtual IP address for each of the master hosts. The CARP advertising skew, `advskew`, is set to ensure that the backup host advertises later than the master, since `advskew` controls the order of precedence when there are multiple backup servers.
[.programlisting]
....
hostname="hostc.example.org"
ifconfig_em0="inet 192.168.1.5 netmask 255.255.255.0"
ifconfig_em0_alias0="inet vhid 1 advskew 100 pass testpass alias 192.168.1.50/32"
ifconfig_em0_alias1="inet vhid 2 advskew 100 pass testpass alias 192.168.1.51/32"
....
Having two CARPVHIDs configured means that `hostc.example.org` will notice if either of the master servers becomes unavailable. If a master fails to advertise before the backup server, the backup server will pick up the shared IP address until the master becomes available again.
[NOTE]
====
If the original master server becomes available again, `hostc.example.org` will not release the virtual IP address back to it automatically. For this to happen, preemption has to be enabled. The feature is disabled by default; it is controlled via the man:sysctl[8] variable `net.inet.carp.preempt`. The administrator can also force the backup server to return the IP address to the master:
[source,shell]
....
# ifconfig em0 vhid 1 state backup
....
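To enable preemption at runtime, set the variable with man:sysctl[8]; add `net.inet.carp.preempt=1` to [.filename]#/etc/sysctl.conf# to make the setting persistent across reboots:
[source,shell]
....
# sysctl net.inet.carp.preempt=1
....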
====
Once the configuration is complete, either restart networking or reboot each system. High availability is now enabled.
CARP functionality can be controlled via several man:sysctl[8] variables documented in the man:carp[4] manual pages. Other actions can be triggered from CARP events by using man:devd[8].
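For example, a man:devd[8] rule can run a script whenever this host becomes master for a VHID. The following is only a sketch: the subsystem pattern (matching _vhid@interface_) and the script path are placeholders, and man:devd.conf[5] documents the rule syntax:
[.programlisting]
....
notify 0 {
    match "system"    "CARP";
    match "subsystem" "[0-9]+@[0-9a-z]+";
    match "type"      "MASTER";
    action "/usr/local/bin/carp-up.sh";
};
....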
[[carp-9x]]
=== Using CARP on FreeBSD 9 and Earlier
The configuration for these versions of FreeBSD is similar to the one described in the previous section, except that a CARP device must first be created and referred to in the configuration.
Enable boot-time support for CARP by loading the [.filename]#if_carp.ko# kernel module in [.filename]#/boot/loader.conf#:
[.programlisting]
....
if_carp_load="YES"
....
To load the module now without rebooting:
[source,shell]
....
# kldload carp
....
For users who prefer to use a custom kernel, include the following line in the custom kernel configuration file and compile the kernel as described in crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel]:
[.programlisting]
....
device carp
....
Next, on each host, create a CARP device:
[source,shell]
....
# ifconfig carp0 create
....
Set the hostname, management IP address, the shared IP address, and VHID by adding the required lines to [.filename]#/etc/rc.conf#. Since a virtual CARP device is used instead of an alias, the actual subnet mask of `/24` is used instead of `/32`. Here are the entries for `hosta.example.org`:
[.programlisting]
....
hostname="hosta.example.org"
ifconfig_fxp0="inet 192.168.1.3 netmask 255.255.255.0"
cloned_interfaces="carp0"
ifconfig_carp0="vhid 1 pass testpass 192.168.1.50/24"
....
On `hostb.example.org`:
[.programlisting]
....
hostname="hostb.example.org"
ifconfig_fxp0="inet 192.168.1.4 netmask 255.255.255.0"
cloned_interfaces="carp0"
ifconfig_carp0="vhid 2 pass testpass 192.168.1.51/24"
....
The third machine, `hostc.example.org`, is configured to handle failover from either of the master hosts:
[.programlisting]
....
hostname="hostc.example.org"
ifconfig_fxp0="inet 192.168.1.5 netmask 255.255.255.0"
cloned_interfaces="carp0 carp1"
ifconfig_carp0="vhid 1 advskew 100 pass testpass 192.168.1.50/24"
ifconfig_carp1="vhid 2 advskew 100 pass testpass 192.168.1.51/24"
....
[NOTE]
====
Preemption is disabled in the [.filename]#GENERIC# FreeBSD kernel. If preemption has been enabled with a custom kernel, `hostc.example.org` may not release the IP address back to the original content server. The administrator can force the backup server to return the IP address to the master with the command:
[source,shell]
....
# ifconfig carp0 down && ifconfig carp0 up
....
This should be done on the [.filename]#carp# interface which corresponds to the correct host.
====
Once the configuration is complete, either restart networking or reboot each system. High availability is now enabled.
[[network-vlan]]
== VLANs
VLANs are a way of virtually dividing up a network into many different subnetworks, also referred to as segmenting. Each segment will have its own broadcast domain and be isolated from other VLANs.
On FreeBSD, VLANs must be supported by the network card driver. To see which drivers support VLANs, refer to the man:vlan[4] manual page.
When configuring a VLAN, two pieces of information must be known: the network interface to attach the VLAN to, and the VLAN tag to use.
To configure VLANs at run time, with a NIC of `em0` and a VLAN tag of `5` the command would look like this:
[source,shell]
....
# ifconfig em0.5 create vlan 5 vlandev em0 inet 192.168.20.20/24
....
[NOTE]
====
See how the interface name includes the NIC driver name and the VLAN tag, separated by a period? This is a best practice to make maintaining the VLAN configuration easy when many VLANs are present on a machine.
====
To configure VLANs at boot time, [.filename]#/etc/rc.conf# must be updated. To duplicate the configuration above, the following will need to be added:
[.programlisting]
....
vlans_em0="5"
ifconfig_em0_5="inet 192.168.20.20/24"
....
Additional VLANs may be added by appending their tags to `vlans_em0` and adding a line configuring the network on each VLAN tag's interface, as shown in the example below.
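For example, to also carry a second, hypothetical VLAN with tag _10_ and its own address on the same NIC:
[.programlisting]
....
vlans_em0="5 10"
ifconfig_em0_5="inet 192.168.20.20/24"
ifconfig_em0_10="inet 192.168.30.20/24"
....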
It is useful to assign a symbolic name to an interface so that when the associated hardware is changed, only a few configuration variables need to be updated. For example, security cameras need to be run over VLAN 1 on `em0`. Later, if the `em0` card is replaced with a card that uses the man:ixgb[4] driver, all references to `em0.1` will not have to change to `ixgb0.1`.
To configure VLAN `5`, on the NIC `em0`, assign the interface name `cameras`, and assign the interface an IP address of `_192.168.20.20_` with a `24`-bit prefix, use this command:
[source,shell]
....
# ifconfig em0.5 create vlan 5 vlandev em0 name cameras inet 192.168.20.20/24
....
For an interface named `video`, use the following:
[source,shell]
....
# ifconfig video.5 create vlan 5 vlandev video name cameras inet 192.168.20.20/24
....
To apply the changes at boot time, add the following lines to [.filename]#/etc/rc.conf#:
[.programlisting]
....
vlans_video="cameras"
create_args_cameras="vlan 5"
ifconfig_cameras="inet 192.168.20.20/24"
....
diff --git a/documentation/content/en/books/handbook/audit/_index.adoc b/documentation/content/en/books/handbook/audit/_index.adoc
index 0b6ae95166..aa4a1bd746 100644
--- a/documentation/content/en/books/handbook/audit/_index.adoc
+++ b/documentation/content/en/books/handbook/audit/_index.adoc
@@ -1,401 +1,402 @@
---
title: Chapter 17. Security Event Auditing
part: Part III. System Administration
prev: books/handbook/mac
next: books/handbook/disks
+description: FreeBSD security event auditing supports reliable, fine-grained, and configurable logging of a variety of security-relevant system events, including logins, configuration changes, and file and network access
---
[[audit]]
= Security Event Auditing
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 17
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../../images/books/handbook/audit/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/books/handbook/audit/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/books/handbook/audit/
endif::[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[audit-synopsis]]
== Synopsis
The FreeBSD operating system includes support for security event auditing. Event auditing supports reliable, fine-grained, and configurable logging of a variety of security-relevant system events, including logins, configuration changes, and file and network access. These log records can be invaluable for live system monitoring, intrusion detection, and postmortem analysis. FreeBSD implements Sun(TM)'s published Basic Security Module (BSM) Application Programming Interface (API) and file format, and is interoperable with the Solaris(TM) and Mac OS(R) X audit implementations.
This chapter focuses on the installation and configuration of event auditing. It explains audit policies and provides an example audit configuration.
After reading this chapter, you will know:
* What event auditing is and how it works.
* How to configure event auditing on FreeBSD for users and processes.
* How to review the audit trail using the audit reduction and review tools.
Before reading this chapter, you should:
* Understand UNIX(R) and FreeBSD basics (crossref:basics[basics,FreeBSD Basics]).
* Be familiar with the basics of kernel configuration/compilation (crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel]).
* Have some familiarity with security and how it pertains to FreeBSD (crossref:security[security,Security]).
[WARNING]
====
The audit facility has some known limitations. Not all security-relevant system events are auditable and some login mechanisms, such as Xorg-based display managers and third-party daemons, do not properly configure auditing for user login sessions.
The security event auditing facility is able to generate very detailed logs of system activity. On a busy system, trail file data can be very large when configured for high detail, exceeding gigabytes a week in some configurations. Administrators should take into account the disk space requirements associated with high volume audit configurations. For example, it may be desirable to dedicate a file system to [.filename]#/var/audit# so that other file systems are not affected if the audit file system becomes full.
====
[[audit-inline-glossary]]
== Key Terms
The following terms are related to security event auditing:
* _event_: an auditable event is any event that can be logged using the audit subsystem. Examples of security-relevant events include the creation of a file, the building of a network connection, or a user logging in. Events are either "attributable", meaning that they can be traced to an authenticated user, or "non-attributable". Examples of non-attributable events are any events that occur before authentication in the login process, such as bad password attempts.
* _class_: a named set of related events which are used in selection expressions. Commonly used classes of events include "file creation" (fc), "exec" (ex), and "login_logout" (lo).
* _record_: an audit log entry describing a security event. Records contain a record event type, information on the subject (user) performing the action, date and time information, information on any objects or arguments, and a success or failure condition.
* _trail_: a log file consisting of a series of audit records describing security events. Trails are in roughly chronological order with respect to the time events completed. Only authorized processes are allowed to commit records to the audit trail.
* _selection expression_: a string containing a list of prefixes and audit event class names used to match events.
* _preselection_: the process by which the system identifies which events are of interest to the administrator. The preselection configuration uses a series of selection expressions to identify which classes of events to audit for which users, as well as global settings that apply to both authenticated and unauthenticated processes.
* _reduction_: the process by which records from existing audit trails are selected for preservation, printing, or analysis. Likewise, the process by which undesired audit records are removed from the audit trail. Using reduction, administrators can implement policies for the preservation of audit data. For example, detailed audit trails might be kept for one month, but after that, trails might be reduced in order to preserve only login information for archival purposes.
[[audit-config]]
== Audit Configuration
User space support for event auditing is installed as part of the base FreeBSD operating system. Kernel support is available in the [.filename]#GENERIC# kernel by default, and man:auditd[8] can be enabled by adding the following line to [.filename]#/etc/rc.conf#:
[.programlisting]
....
auditd_enable="YES"
....
Then, start the audit daemon:
[source,shell]
....
# service auditd start
....
Users who prefer to compile a custom kernel must include the following line in their custom kernel configuration file:
[.programlisting]
....
options AUDIT
....
=== Event Selection Expressions
Selection expressions are used in a number of places in the audit configuration to determine which events should be audited. Expressions contain a list of event classes to match. Selection expressions are evaluated from left to right, and two expressions are combined by appending one onto the other.
<<event-selection>> summarizes the default audit event classes:
[[event-selection]]
.Default Audit Event Classes
[cols="1,1,1", frame="none", options="header"]
|===
| Class Name
| Description
| Action
|all
|all
|Match all event classes.
|aa
|authentication and authorization
|
|ad
|administrative
|Administrative actions performed on the system as a whole.
|ap
|application
|Application defined action.
|cl
|file close
|Audit calls to the `close` system call.
|ex
|exec
|Audit program execution. Auditing of command line arguments and environmental variables is controlled via man:audit_control[5] using the `argv` and `envv` parameters to the `policy` setting.
|fa
|file attribute access
|Audit the access of object attributes such as man:stat[1] and man:pathconf[2].
|fc
|file create
|Audit events where a file is created as a result.
|fd
|file delete
|Audit events where file deletion occurs.
|fm
|file attribute modify
|Audit events where file attribute modification occurs, such as by man:chown[8], man:chflags[1], and man:flock[2].
|fr
|file read
|Audit events in which data is read or files are opened for reading.
|fw
|file write
|Audit events in which data is written or files are written or modified.
|io
|ioctl
|Audit use of the `ioctl` system call.
|ip
|ipc
|Audit various forms of Inter-Process Communication, including POSIX pipes and System V IPC operations.
|lo
|login_logout
|Audit man:login[1] and man:logout[1] events.
|na
|non attributable
|Audit non-attributable events.
|no
|invalid class
|Match no audit events.
|nt
|network
|Audit events related to network actions such as man:connect[2] and man:accept[2].
|ot
|other
|Audit miscellaneous events.
|pc
|process
|Audit process operations such as man:exec[3] and man:exit[3].
|===
These audit event classes may be customized by modifying the [.filename]#audit_class# and [.filename]#audit_event# configuration files.
Each audit event class may be combined with a prefix indicating whether successful/failed operations are matched, and whether the entry is adding or removing matching for the class and type. <<event-prefixes>> summarizes the available prefixes:
[[event-prefixes]]
.Prefixes for Audit Event Classes
[cols="1,1", frame="none", options="header"]
|===
| Prefix
| Action
|+
|Audit successful events in this class.
|-
|Audit failed events in this class.
|^
|Audit neither successful nor failed events in this class.
|^+
|Do not audit successful events in this class.
|^-
|Do not audit failed events in this class.
|===
If no prefix is present, both successful and failed instances of the event will be audited.
The following example selection string selects both successful and failed login/logout events, but only successful execution events:
[.programlisting]
....
lo,+ex
....
=== Configuration Files
The following configuration files for security event auditing are found in [.filename]#/etc/security#:
* [.filename]#audit_class#: contains the definitions of the audit classes.
* [.filename]#audit_control#: controls aspects of the audit subsystem, such as default audit classes, minimum disk space to leave on the audit log volume, and maximum audit trail size.
* [.filename]#audit_event#: textual names and descriptions of system audit events and a list of which classes each event is in.
* [.filename]#audit_user#: user-specific audit requirements to be combined with the global defaults at login.
* [.filename]#audit_warn#: a customizable shell script used by man:auditd[8] to generate warning messages in exceptional situations, such as when space for audit records is running low or when the audit trail file has been rotated.
[WARNING]
====
Audit configuration files should be edited and maintained carefully, as errors in configuration may result in improper logging of events.
====
In most cases, administrators will only need to modify [.filename]#audit_control# and [.filename]#audit_user#. The first file controls system-wide audit properties and policies and the second file may be used to fine-tune auditing by user.
[[audit-auditcontrol]]
==== The [.filename]#audit_control# File
A number of defaults for the audit subsystem are specified in [.filename]#audit_control#:
[.programlisting]
....
dir:/var/audit
dist:off
flags:lo,aa
minfree:5
naflags:lo,aa
policy:cnt,argv
filesz:2M
expire-after:10M
....
The `dir` entry is used to set one or more directories where audit logs will be stored. If more than one directory entry appears, they will be used in order as they fill. It is common to configure audit so that audit logs are stored on a dedicated file system, in order to prevent interference between the audit subsystem and other subsystems if the file system fills.
If the `dist` field is set to `on` or `yes`, hard links will be created to all trail files in [.filename]#/var/audit/dist#.
The `flags` field sets the system-wide default preselection mask for attributable events. In the example above, successful and failed login/logout events as well as authentication and authorization are audited for all users.
The `minfree` entry defines the minimum percentage of free space for the file system where the audit trail is stored.
The `naflags` entry specifies audit classes to be audited for non-attributed events, such as the login/logout process and authentication and authorization.
The `policy` entry specifies a comma-separated list of policy flags controlling various aspects of audit behavior. The `cnt` indicates that the system should continue running despite an auditing failure (this flag is highly recommended). The other flag, `argv`, causes command line arguments to the man:execve[2] system call to be audited as part of command execution.
The `filesz` entry specifies the maximum size for an audit trail before automatically terminating and rotating the trail file. A value of `0` disables automatic log rotation. If the requested file size is below the minimum of 512k, it will be ignored and a log message will be generated.
The `expire-after` field specifies when audit log files will expire and be removed.
[[audit-audituser]]
==== The [.filename]#audit_user# File
The administrator can specify further audit requirements for specific users in [.filename]#audit_user#. Each line configures auditing for a user via two fields: the `alwaysaudit` field specifies a set of events that should always be audited for the user, and the `neveraudit` field specifies a set of events that should never be audited for the user.
The following example entries audit login/logout events and successful command execution for `root` and file creation and successful command execution for `www`. If used with the default [.filename]#audit_control#, the `lo` entry for `root` is redundant, and login/logout events will also be audited for `www`.
[.programlisting]
....
root:lo,+ex:no
www:fc,+ex:no
....
[[audit-administration]]
== Working with Audit Trails
Since audit trails are stored in the BSM binary format, several built-in tools are available to modify or convert these trails to text. To convert trail files to a simple text format, use `praudit`. To reduce the audit trail file for analysis, archiving, or printing purposes, use `auditreduce`. This utility supports a variety of selection parameters, including event type, event class, user, date or time of the event, and the file path or object acted on.
For example, to dump the entire contents of a specified audit log in plain text:
[source,shell]
....
# praudit /var/audit/AUDITFILE
....
Where _AUDITFILE_ is the audit log to dump.
Audit trails consist of a series of audit records made up of tokens, which `praudit` prints sequentially, one per line. Each token is of a specific type, such as `header` (an audit record header) or `path` (a file path from a name lookup). The following is an example of an `execve` event:
[.programlisting]
....
header,133,10,execve(2),0,Mon Sep 25 15:58:03 2006, + 384 msec
exec arg,finger,doug
path,/usr/bin/finger
attribute,555,root,wheel,90,24918,104944
subject,robert,root,wheel,root,wheel,38439,38032,42086,128.232.9.100
return,success,0
trailer,133
....
This audit represents a successful `execve` call, in which the command `finger doug` has been run. The `exec arg` token contains the processed command line presented by the shell to the kernel. The `path` token holds the path to the executable as looked up by the kernel. The `attribute` token describes the binary and includes the file mode. The `subject` token stores the audit user ID, effective user ID and group ID, real user ID and group ID, process ID, session ID, port ID, and login address. Notice that the audit user ID and real user ID differ as the user `robert` switched to the `root` account before running this command, but it is audited using the original authenticated user. The `return` token indicates the successful execution and the `trailer` concludes the record.
XML output format is also supported and can be selected by including `-x`.
Since audit logs may be very large, a subset of records can be selected using `auditreduce`. This example selects all audit records produced for the user `trhodes` stored in [.filename]#AUDITFILE#:
[source,shell]
....
# auditreduce -u trhodes /var/audit/AUDITFILE | praudit
....
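Selection parameters may be combined. For example, to further restrict the output to records created on a single, hypothetical day, `-d` takes a date in _yyyymmdd_ form:
[source,shell]
....
# auditreduce -u trhodes -d 20201231 /var/audit/AUDITFILE | praudit
....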
Members of the `audit` group have permission to read audit trails in [.filename]#/var/audit#. By default, this group is empty, so only the `root` user can read audit trails. Users may be added to the `audit` group in order to delegate audit review rights. As the ability to track audit log contents provides significant insight into the behavior of users and processes, it is recommended that the delegation of audit review rights be performed with caution.
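For example, to add an existing user to the `audit` group with man:pw[8]:
[source,shell]
....
# pw groupmod audit -m trhodes
....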
=== Live Monitoring Using Audit Pipes
Audit pipes are cloning pseudo-devices which allow applications to tap the live audit record stream. This is primarily of interest to authors of intrusion detection and system monitoring applications. However, the audit pipe device is a convenient way for the administrator to allow live monitoring without running into problems with audit trail file ownership or log rotation interrupting the event stream. To track the live audit event stream:
[source,shell]
....
# praudit /dev/auditpipe
....
By default, audit pipe device nodes are accessible only to the `root` user. To make them accessible to the members of the `audit` group, add a `devfs` rule to [.filename]#/etc/devfs.rules#:
[.programlisting]
....
add path 'auditpipe*' mode 0440 group audit
....
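A rule in [.filename]#/etc/devfs.rules# must belong to a named ruleset, declared with a header such as `[localrules=10]`, and that ruleset must then be enabled in [.filename]#/etc/rc.conf#. The ruleset name here is only an example:
[.programlisting]
....
devfs_system_ruleset="localrules"
....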
See man:devfs.rules[5] for more information on configuring the devfs file system.
[WARNING]
====
It is easy to produce audit event feedback cycles, in which the viewing of each audit event results in the generation of more audit events. For example, if all network I/O is audited, and `praudit` is run from an SSH session, a continuous stream of audit events will be generated at a high rate, as each event being printed will generate another event. For this reason, it is advisable to run `praudit` on an audit pipe device from sessions without fine-grained I/O auditing.
====
=== Rotating and Compressing Audit Trail Files
Audit trails are written to by the kernel and managed by the audit daemon, man:auditd[8]. Administrators should not attempt to use man:newsyslog.conf[5] or other tools to directly rotate audit logs. Instead, `audit` should be used to shut down auditing, reconfigure the audit system, and perform log rotation. The following command causes the audit daemon to create a new audit log and signal the kernel to switch to using the new log. The old log will be terminated and renamed, at which point it may then be manipulated by the administrator:
[source,shell]
....
# audit -n
....
If man:auditd[8] is not currently running, this command will fail and an error message will be produced.
Adding the following line to [.filename]#/etc/crontab# will schedule this rotation every twelve hours:
[.programlisting]
....
0 */12 * * * root /usr/sbin/audit -n
....
The change will take effect once [.filename]#/etc/crontab# is saved.
Automatic rotation of the audit trail file based on file size is possible using `filesz` in [.filename]#audit_control# as described in <<audit-auditcontrol>>.
As audit trail files can become very large, it is often desirable to compress or otherwise archive trails once they have been closed by the audit daemon. The [.filename]#audit_warn# script can be used to perform customized operations for a variety of audit-related events, including the clean termination of audit trails when they are rotated. For example, the following may be added to [.filename]#/etc/security/audit_warn# to compress audit trails on close:
[.programlisting]
....
#
# Compress audit trail files on close.
#
if [ "$1" = closefile ]; then
gzip -9 $2
fi
....
Other archiving activities might include copying trail files to a centralized server, deleting old trail files, or reducing the audit trail to remove unneeded records. This script will be run only when audit trail files are cleanly terminated. It will not be run on trails left unterminated following an improper shutdown.
diff --git a/documentation/content/en/books/handbook/basics/_index.adoc b/documentation/content/en/books/handbook/basics/_index.adoc
index d425efa693..dd2e767f8f 100644
--- a/documentation/content/en/books/handbook/basics/_index.adoc
+++ b/documentation/content/en/books/handbook/basics/_index.adoc
@@ -1,1544 +1,1545 @@
---
title: Chapter 3. FreeBSD basics
part: Part I. Getting Started
prev: books/handbook/bsdinstall
next: books/handbook/ports
+description: Basic commands and functionality of the FreeBSD operating system
---
[[basics]]
= FreeBSD basics
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 3
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../../images/books/handbook/basics/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/books/handbook/basics/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/books/handbook/basics/
endif::[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[basics-synopsis]]
== Synopsis
This chapter covers the basic commands and functionality of the FreeBSD operating system. Much of this material is relevant for any UNIX(R)-like operating system. New FreeBSD users are encouraged to read through this chapter carefully.
After reading this chapter, you will know:
* How to use and configure virtual consoles.
* How to create and manage users and groups on FreeBSD.
* How UNIX(R) file permissions and FreeBSD file flags work.
* The default FreeBSD file system layout.
* The FreeBSD disk organization.
* How to mount and unmount file systems.
* What processes, daemons, and signals are.
* What a shell is, and how to change the default login environment.
* How to use basic text editors.
* What devices and device nodes are.
* How to read manual pages for more information.
[[consoles]]
== Virtual Consoles and Terminals
Unless FreeBSD has been configured to automatically start a graphical environment during startup, the system will boot into a command line login prompt, as seen in this example:
[source,shell]
....
FreeBSD/amd64 (pc3.example.org) (ttyv0)
login:
....
The first line contains some information about the system. The `amd64` indicates that the system in this example is running a 64-bit version of FreeBSD. The hostname is `pc3.example.org`, and [.filename]#ttyv0# indicates that this is the "system console". The second line is the login prompt.
Since FreeBSD is a multiuser system, it needs some way to distinguish between different users. This is accomplished by requiring every user to log into the system before gaining access to the programs on the system. Every user has a unique name, the "username", and a personal "password".
To log into the system console, type the username that was configured during system installation, as described in crossref:bsdinstall[bsdinstall-addusers,Add Users], and press kbd:[Enter]. Then enter the password associated with the username and press kbd:[Enter]. The password is _not echoed_ for security reasons.
Once the correct password is input, the message of the day (MOTD) will be displayed followed by a command prompt. Depending upon the shell that was selected when the user was created, this prompt will be a `#`, `$`, or `%` character. The prompt indicates that the user is now logged into the FreeBSD system console and ready to try the available commands.
[[consoles-virtual]]
=== Virtual Consoles
While the system console can be used to interact with the system, a user working from the command line at the keyboard of a FreeBSD system will typically instead log into a virtual console. This is because system messages are configured by default to display on the system console. These messages will appear over the command or file that the user is working on, making it difficult to concentrate on the work at hand.
By default, FreeBSD is configured to provide several virtual consoles for inputting commands. Each virtual console has its own login prompt and shell and it is easy to switch between virtual consoles. This essentially provides the command line equivalent of having several windows open at the same time in a graphical environment.
The key combinations kbd:[Alt+F1] through kbd:[Alt+F8] have been reserved by FreeBSD for switching between virtual consoles. Use kbd:[Alt+F1] to switch to the system console ([.filename]#ttyv0#), kbd:[Alt+F2] to access the first virtual console ([.filename]#ttyv1#), kbd:[Alt+F3] to access the second virtual console ([.filename]#ttyv2#), and so on. When using Xorg as a graphical console, the combination becomes kbd:[Ctrl+Alt+F1] to return to a text-based virtual console.
When switching from one console to the next, FreeBSD manages the screen output. The result is an illusion of having multiple virtual screens and keyboards that can be used to type commands for FreeBSD to run. The programs that are launched in one virtual console do not stop running when the user switches to a different virtual console.
Refer to man:kbdcontrol[1], man:vidcontrol[1], man:atkbd[4], man:syscons[4], and man:vt[4] for a more technical description of the FreeBSD console and its keyboard drivers.
In FreeBSD, the number of available virtual consoles is configured in this section of [.filename]#/etc/ttys#:
[.programlisting]
....
# name getty type status comments
#
ttyv0 "/usr/libexec/getty Pc" xterm on secure
# Virtual terminals
ttyv1 "/usr/libexec/getty Pc" xterm on secure
ttyv2 "/usr/libexec/getty Pc" xterm on secure
ttyv3 "/usr/libexec/getty Pc" xterm on secure
ttyv4 "/usr/libexec/getty Pc" xterm on secure
ttyv5 "/usr/libexec/getty Pc" xterm on secure
ttyv6 "/usr/libexec/getty Pc" xterm on secure
ttyv7 "/usr/libexec/getty Pc" xterm on secure
ttyv8 "/usr/X11R6/bin/xdm -nodaemon" xterm off secure
....
To disable a virtual console, put a comment symbol (`#`) at the beginning of the line representing that virtual console. For example, to reduce the number of available virtual consoles from eight to four, put a `#` in front of the last four lines representing virtual consoles [.filename]#ttyv5# through [.filename]#ttyv8#. _Do not_ comment out the line for the system console [.filename]#ttyv0#. Note that the last virtual console ([.filename]#ttyv8#) is used to access the graphical environment if Xorg has been installed and configured as described in crossref:x11[x11,The X Window System].
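For example, after commenting out [.filename]#ttyv5# through [.filename]#ttyv8#, the relevant portion of [.filename]#/etc/ttys# would look like this (shown only as an illustration of the change):
[.programlisting]
....
ttyv4 "/usr/libexec/getty Pc" xterm on secure
#ttyv5 "/usr/libexec/getty Pc" xterm on secure
#ttyv6 "/usr/libexec/getty Pc" xterm on secure
#ttyv7 "/usr/libexec/getty Pc" xterm on secure
#ttyv8 "/usr/X11R6/bin/xdm -nodaemon" xterm off secure
....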
For a detailed description of every column in this file and the available options for the virtual consoles, refer to man:ttys[5].
[[consoles-singleuser]]
=== Single User Mode
The FreeBSD boot menu provides an option labelled as "Boot Single User". If this option is selected, the system will boot into a special mode known as "single user mode". This mode is typically used to repair a system that will not boot or to reset the `root` password when it is not known. While in single user mode, networking and other virtual consoles are not available. However, full `root` access to the system is available, and by default, the `root` password is not needed. For these reasons, physical access to the keyboard is needed to boot into this mode, and determining who has physical access to the keyboard is something to consider when securing a FreeBSD system.
The settings which control single user mode are found in this section of [.filename]#/etc/ttys#:
[.programlisting]
....
# name getty type status comments
#
# If console is marked "insecure", then init will ask for the root password
# when going to single-user mode.
console none unknown off secure
....
By default, the status is set to `secure`. This assumes that physical access to the keyboard is either unimportant or controlled by a physical security policy. If this setting is changed to `insecure`, the assumption is that the environment itself is insecure because anyone can access the keyboard. When this line is changed to `insecure`, FreeBSD will prompt for the `root` password when a user selects to boot into single user mode.
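As an illustration, the changed line in [.filename]#/etc/ttys# would then read:
[.programlisting]
....
console none unknown off insecure
....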
[NOTE]
====
_Be careful when changing this setting to `insecure`!_ If the `root` password is forgotten, booting into single user mode is still possible, but may be difficult for someone who is not familiar with the FreeBSD booting process.
====
[[consoles-vidcontrol]]
=== Changing Console Video Modes
The FreeBSD console default video mode may be adjusted to 1024x768, 1280x1024, or any other size supported by the graphics chip and monitor. To use a different video mode, load the `VESA` module:
[source,shell]
....
# kldload vesa
....
To determine which video modes are supported by the hardware, use man:vidcontrol[1]. To get a list of supported video modes, issue the following:
[source,shell]
....
# vidcontrol -i mode
....
The output of this command lists the video modes that are supported by the hardware. To select a new video mode, specify the mode using man:vidcontrol[1] as the `root` user:
[source,shell]
....
# vidcontrol MODE_279
....
If the new video mode is acceptable, it can be permanently set on boot by adding it to [.filename]#/etc/rc.conf#:
[.programlisting]
....
allscreens_flags="MODE_279"
....
[[users-synopsis]]
== Users and Basic Account Management
FreeBSD allows multiple users to use the computer at the same time. While only one user can sit in front of the screen and use the keyboard at any one time, any number of users can log in to the system through the network. To use the system, each user should have their own user account.
This chapter describes:
* The different types of user accounts on a FreeBSD system.
* How to add, remove, and modify user accounts.
* How to set limits to control the resources that users and groups are allowed to access.
* How to create groups and add users as members of a group.
[[users-introduction]]
=== Account Types
Since all access to the FreeBSD system is achieved using accounts and all processes are run by users, user and account management is important.
There are three main types of accounts: system accounts, user accounts, and the superuser account.
[[users-system]]
==== System Accounts
System accounts are used to run services such as DNS, mail, and web servers. The reason for this is security; if all services ran as the superuser, they could act without restriction.
Examples of system accounts are `daemon`, `operator`, `bind`, `news`, and `www`.
[WARNING]
====
Care must be taken when using the `operator` group, as unintended superuser-like access privileges may be granted, including but not limited to shutdown, reboot, and access to all items in [.filename]#/dev#.
====
`nobody` is the generic unprivileged system account. However, the more services that use `nobody`, the more files and processes that user will become associated with, and hence the more privileged that user becomes.
[[users-user]]
==== User Accounts
User accounts are assigned to real people and are used to log in and use the system. Every person accessing the system should have a unique user account. This allows the administrator to find out who is doing what and prevents users from clobbering the settings of other users.
Each user can set up their own environment to accommodate their use of the system, by configuring their default shell, editor, key bindings, and language settings.
Every user account on a FreeBSD system has certain information associated with it:
User name::
The user name is typed at the `login:` prompt. Each user must have a unique user name. There are a number of rules for creating valid user names which are documented in man:passwd[5]. It is recommended to use user names of eight or fewer all-lowercase characters in order to maintain backwards compatibility with applications.
Password::
Each account has an associated password.
User ID (UID)::
The User ID (UID) is a number used to uniquely identify the user to the FreeBSD system. Commands that allow a user name to be specified will first convert it to the UID. It is recommended to use a UID less than 65535, since higher values may cause compatibility issues with some software.
Group ID (GID)::
The Group ID (GID) is a number used to uniquely identify the primary group that the user belongs to. Groups are a mechanism for controlling access to resources based on a user's GID rather than their UID. This can significantly reduce the size of some configuration files and allows users to be members of more than one group. It is recommended to use a GID of 65535 or lower as higher GIDs may break some software.
Login class::
Login classes are an extension to the group mechanism that provide additional flexibility when tailoring the system to different users. Login classes are discussed further in crossref:security[users-limiting,Configuring Login Classes].
Password change time::
By default, passwords do not expire. However, password expiration can be enabled on a per-user basis, forcing some or all users to change their passwords after a certain amount of time has elapsed.
Account expiration time::
By default, FreeBSD does not expire accounts. When creating accounts that need a limited lifespan, such as student accounts in a school, specify the account expiry date using man:pw[8]. After the expiry time has elapsed, the account cannot be used to log in to the system, although the account's directories and files will remain.
User's full name::
The user name uniquely identifies the account to FreeBSD, but does not necessarily reflect the user's real name. Similar to a comment, this information can contain spaces, uppercase characters, and be more than 8 characters long.
Home directory::
The home directory is the full path to a directory on the system. This is the user's starting directory when the user logs in. A common convention is to put all user home directories under [.filename]#/home/username# or [.filename]#/usr/home/username#. Each user stores their personal files and subdirectories in their own home directory.
User shell::
The shell provides the user's default environment for interacting with the system. There are many different kinds of shells and experienced users will have their own preferences, which can be reflected in their account settings.
[[users-superuser]]
==== The Superuser Account
The superuser account, usually called `root`, is used to manage the system with no limitations on privileges. For this reason, it should not be used for day-to-day tasks like sending and receiving mail, general exploration of the system, or programming.
The superuser, unlike other user accounts, can operate without limits, and misuse of the superuser account may result in spectacular disasters. User accounts are unable to destroy the operating system by mistake, so it is recommended to log in as a user account and to only become the superuser when a command requires extra privilege.
Always double and triple-check any commands issued as the superuser, since an extra space or missing character can mean irreparable data loss.
There are several ways to gain superuser privilege. While one can log in as `root`, this is highly discouraged.
Instead, use man:su[1] to become the superuser. If `-` is specified when running this command, the user will also inherit the root user's environment. The user running this command must be in the `wheel` group or else the command will fail. The user must also know the password for the `root` user account.
In this example, the user only becomes superuser in order to run `make install` as this step requires superuser privilege. Once the command completes, the user types `exit` to leave the superuser account and return to the privilege of their user account.
.Install a Program As the Superuser
[example]
====
[source,shell]
....
% configure
% make
% su -
Password:
# make install
# exit
%
....
====
The built-in man:su[1] framework works well for single systems or small networks with just one system administrator. An alternative is to install the package:security/sudo[] package or port. This software provides activity logging and allows the administrator to configure which users can run which commands as the superuser.
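As a brief illustration, after installing package:security/sudo[] the administrator could use man:visudo[8] to add a rule allowing members of the `wheel` group to run any command as the superuser. The following line is only a sketch of such a rule; consult the sudo documentation for the exact policy syntax:
[.programlisting]
....
%wheel ALL=(ALL) ALL
....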
[[users-modifying]]
=== Managing Accounts
FreeBSD provides a variety of different commands to manage user accounts. The most common commands are summarized in <<users-modifying-utilities>>, followed by some examples of their usage. See the manual page for each utility for more details and usage examples.
[[users-modifying-utilities]]
.Utilities for Managing User Accounts
[cols="1,1", frame="none", options="header"]
|===
| Command
| Summary
|man:adduser[8]
|The recommended command-line application for adding new users.
|man:rmuser[8]
|The recommended command-line application for removing users.
|man:chpass[1]
|A flexible tool for changing user database information.
|man:passwd[1]
|The command-line tool to change user passwords.
|man:pw[8]
|A powerful and flexible tool for modifying all aspects of user accounts.
|===
[[users-adduser]]
==== `adduser`
The recommended program for adding new users is man:adduser[8]. When a new user is added, this program automatically updates [.filename]#/etc/passwd# and [.filename]#/etc/group#. It also creates a home directory for the new user, copies in the default configuration files from [.filename]#/usr/share/skel#, and can optionally mail the new user a welcome message. This utility must be run as the superuser.
The man:adduser[8] utility is interactive and walks through the steps for creating a new user account. As seen in <<users-modifying-adduser>>, either input the required information or press kbd:[Return] to accept the default value shown in square brackets. In this example, the user has been invited into the `wheel` group, allowing them to become the superuser with man:su[1]. When finished, the utility will prompt to either create another user or to exit.
[[users-modifying-adduser]]
.Adding a User on FreeBSD
[example]
====
[source,shell]
....
# adduser
Username: jru
Full name: J. Random User
Uid (Leave empty for default):
Login group [jru]:
Login group is jru. Invite jru into other groups? []: wheel
Login class [default]:
Shell (sh csh tcsh zsh nologin) [sh]: zsh
Home directory [/home/jru]:
Home directory permissions (Leave empty for default):
Use password-based authentication? [yes]:
Use an empty password? (yes/no) [no]:
Use a random password? (yes/no) [no]:
Enter password:
Enter password again:
Lock out the account after creation? [no]:
Username : jru
Password : ****
Full Name : J. Random User
Uid : 1001
Class :
Groups : jru wheel
Home : /home/jru
Shell : /usr/local/bin/zsh
Locked : no
OK? (yes/no): yes
adduser: INFO: Successfully added (jru) to the user database.
Add another user? (yes/no): no
Goodbye!
#
....
====
[NOTE]
====
Since the password is not echoed when typed, be careful to not mistype the password when creating the user account.
====
[[users-rmuser]]
==== `rmuser`
To completely remove a user from the system, run man:rmuser[8] as the superuser. This command performs the following steps:
[.procedure]
. Removes the user's man:crontab[1] entry, if one exists.
. Removes any man:at[1] jobs belonging to the user.
. Kills all processes owned by the user.
. Removes the user from the system's local password file.
. Optionally removes the user's home directory, if it is owned by the user.
. Removes the incoming mail files belonging to the user from [.filename]#/var/mail#.
. Removes all files owned by the user from temporary file storage areas such as [.filename]#/tmp#.
. Finally, removes the username from all groups to which it belongs in [.filename]#/etc/group#. If a group becomes empty and the group name is the same as the username, the group is removed. This complements the per-user unique groups created by man:adduser[8].
man:rmuser[8] cannot be used to remove superuser accounts since that is almost always an indication of massive destruction.
By default, an interactive mode is used, as shown in the following example.
.`rmuser` Interactive Account Removal
[example]
====
[source,shell]
....
# rmuser jru
Matching password entry:
jru:*:1001:1001::0:0:J. Random User:/home/jru:/usr/local/bin/zsh
Is this the entry you wish to remove? y
Remove user's home directory (/home/jru)? y
Removing user (jru): mailspool home passwd.
#
....
====
[[users-chpass]]
==== `chpass`
Any user can use man:chpass[1] to change their default shell and personal information associated with their user account. The superuser can use this utility to change additional account information for any user.
When passed no options, aside from an optional username, man:chpass[1] displays an editor containing user information. When the user exits from the editor, the user database is updated with the new information.
[NOTE]
====
This utility will prompt for the user's password when exiting the editor, unless the utility is run as the superuser.
====
In <<users-modifying-chpass-su>>, the superuser has typed `chpass jru` and is now viewing the fields that can be changed for this user. If `jru` runs this command instead, only the last six fields will be displayed and available for editing. This is shown in <<users-modifying-chpass-ru>>.
[[users-modifying-chpass-su]]
.Using `chpass` as Superuser
[example]
====
[source,shell]
....
#Changing user database information for jru.
Login: jru
Password: *
Uid [#]: 1001
Gid [# or name]: 1001
Change [month day year]:
Expire [month day year]:
Class:
Home directory: /home/jru
Shell: /usr/local/bin/zsh
Full Name: J. Random User
Office Location:
Office Phone:
Home Phone:
Other information:
....
====
[[users-modifying-chpass-ru]]
.Using `chpass` as Regular User
[example]
====
[source,shell]
....
#Changing user database information for jru.
Shell: /usr/local/bin/zsh
Full Name: J. Random User
Office Location:
Office Phone:
Home Phone:
Other information:
....
====
[NOTE]
====
The commands man:chfn[1] and man:chsh[1] are links to man:chpass[1], as are man:ypchpass[1], man:ypchfn[1], and man:ypchsh[1]. Since NIS support is automatic, specifying the `yp` before the command is not necessary. How to configure NIS is covered in crossref:network-servers[network-servers,Network Servers].
====
[[users-passwd]]
==== `passwd`
Any user can easily change their password using man:passwd[1]. To prevent accidental or unauthorized changes, this command will prompt for the user's original password before a new password can be set:
.Changing Your Password
[example]
====
[source,shell]
....
% passwd
Changing local password for jru.
Old password:
New password:
Retype new password:
passwd: updating the database...
passwd: done
....
====
The superuser can change any user's password by specifying the username when running man:passwd[1]. When this utility is run as the superuser, it will not prompt for the user's current password. This allows the password to be changed when a user cannot remember the original password.
.Changing Another User's Password as the Superuser
[example]
====
[source,shell]
....
# passwd jru
Changing local password for jru.
New password:
Retype new password:
passwd: updating the database...
passwd: done
....
====
[NOTE]
====
As with man:chpass[1], man:yppasswd[1] is a link to man:passwd[1], so NIS works with either command.
====
[[users-pw]]
==== `pw`
The man:pw[8] utility can create, remove, modify, and display users and groups. It functions as a front end to the system user and group files. man:pw[8] has a very powerful set of command line options that make it suitable for use in shell scripts, but new users may find it more complicated than the other commands presented in this section.
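For example, a new account could be created non-interactively with a single man:pw[8] command. This is only a sketch with a hypothetical username; `-m` creates the home directory and `-s` sets the login shell, and a password can then be set with man:passwd[1]:
[source,shell]
....
# pw useradd -n jdoe -c "Jane Doe" -m -s /bin/sh
....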
[[users-groups]]
=== Managing Groups
A group is a list of users. A group is identified by its group name and GID. In FreeBSD, the kernel uses the UID of a process, and the list of groups it belongs to, to determine what the process is allowed to do. The GID of a user or process usually refers to the first group in the list.
The group name to GID mapping is listed in [.filename]#/etc/group#. This is a plain text file with four colon-delimited fields. The first field is the group name, the second is the encrypted password, the third the GID, and the fourth the comma-delimited list of members. For a more complete description of the syntax, refer to man:group[5].
The superuser can modify [.filename]#/etc/group# using a text editor. Alternatively, man:pw[8] can be used to add and edit groups. For example, to add a group called `teamtwo` and then confirm that it exists:
.Adding a Group Using man:pw[8]
[example]
====
[source,shell]
....
# pw groupadd teamtwo
# pw groupshow teamtwo
teamtwo:*:1100:
....
====
In this example, `1100` is the GID of `teamtwo`. Right now, `teamtwo` has no members. This command will add `jru` as a member of `teamtwo`.
.Adding User Accounts to a New Group Using man:pw[8]
[example]
====
[source,shell]
....
# pw groupmod teamtwo -M jru
# pw groupshow teamtwo
teamtwo:*:1100:jru
....
====
The argument to `-M` is a comma-delimited list of users to be added to a new (empty) group or to replace the members of an existing group. To the user, this group membership is different from (and in addition to) the user's primary group listed in the password file. This means that the user will not show up as a member when using `groupshow` with man:pw[8], but will show up when the information is queried via man:id[1] or a similar tool. When man:pw[8] is used to add a user to a group, it only manipulates [.filename]#/etc/group# and does not attempt to read additional data from [.filename]#/etc/passwd#.
.Adding a New Member to a Group Using man:pw[8]
[example]
====
[source,shell]
....
# pw groupmod teamtwo -m db
# pw groupshow teamtwo
teamtwo:*:1100:jru,db
....
====
In this example, the argument to `-m` is a comma-delimited list of users who are to be added to the group. Unlike the previous example, these users are appended to the group and do not replace existing users in the group.
.Using man:id[1] to Determine Group Membership
[example]
====
[source,shell]
....
% id jru
uid=1001(jru) gid=1001(jru) groups=1001(jru), 1100(teamtwo)
....
====
In this example, `jru` is a member of the groups `jru` and `teamtwo`.
For more information about this command and the format of [.filename]#/etc/group#, refer to man:pw[8] and man:group[5].
[[permissions]]
== Permissions
In FreeBSD, every file and directory has an associated set of permissions and several utilities are available for viewing and modifying these permissions. Understanding how permissions work is necessary to make sure that users are able to access the files that they need and are unable to improperly access the files used by the operating system or owned by other users.
This section discusses the traditional UNIX(R) permissions used in FreeBSD. For finer grained file system access control, refer to crossref:security[fs-acl,“Access Control Lists”].
In UNIX(R), basic permissions are assigned using three types of access: read, write, and execute. These access types are used to determine file access to the file's owner, group, and others (everyone else). The read, write, and execute permissions can be represented as the letters `r`, `w`, and `x`. They can also be represented as binary digits, as each permission is either on (`1`) or off (`0`). When represented as a number, the order is always read as `rwx`, where `r` has an on value of `4`, `w` has an on value of `2` and `x` has an on value of `1`.
The following table summarizes the possible numeric and alphabetic settings. When reading the "Directory Listing" column, a `-` is used to represent a permission that is set to off.
.UNIX(R) Permissions
[cols="1,1,1", frame="none", options="header"]
|===
| Value
| Permission
| Directory Listing
|0
|No read, no write, no execute
|`---`
|1
|No read, no write, execute
|`--x`
|2
|No read, write, no execute
|`-w-`
|3
|No read, write, execute
|`-wx`
|4
|Read, no write, no execute
|`r--`
|5
|Read, no write, execute
|`r-x`
|6
|Read, write, no execute
|`rw-`
|7
|Read, write, execute
|`rwx`
|===
Use the `-l` argument to man:ls[1] to view a long directory listing that includes a column of information about a file's permissions for the owner, group, and everyone else. For example, an `ls -l` in an arbitrary directory may show:
[source,shell]
....
% ls -l
total 530
-rw-r--r-- 1 root wheel 512 Sep 5 12:31 myfile
-rw-r--r-- 1 root wheel 512 Sep 5 12:31 otherfile
-rw-r--r-- 1 root wheel 7680 Sep 5 12:31 email.txt
....
The first (leftmost) character in the first column indicates whether this file is a regular file, a directory, a special character device, a socket, or any other special pseudo-file device. In this example, the `-` indicates a regular file. The next three characters, `rw-` in this example, give the permissions for the owner of the file. The next three characters, `r--`, give the permissions for the group that the file belongs to. The final three characters, `r--`, give the permissions for the rest of the world. A dash means that the permission is turned off. In this example, the permissions are set so the owner can read and write to the file, the group can read the file, and the rest of the world can only read the file. According to the table above, the permissions for this file would be `644`, where each digit represents the three parts of the file's permission.
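For example, to set those `644` permissions explicitly with man:chmod[1] and confirm the result (the filename is illustrative):
[source,shell]
....
% chmod 644 myfile
% ls -l myfile
-rw-r--r--  1 jru  jru  512 Sep  5 12:31 myfile
....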
How does the system control permissions on devices? FreeBSD treats most hardware devices as files that programs can open, read, and write data to. These special device files are stored in [.filename]#/dev/#.
Directories are also treated as files. They have read, write, and execute permissions. The executable bit for a directory has a slightly different meaning than that of files. When a directory is marked executable, it means it is possible to change into that directory using man:cd[1]. This also means that it is possible to access the files within that directory, subject to the permissions on the files themselves.
In order to perform a directory listing, the read permission must be set on the directory. In order to delete a file that one knows the name of, it is necessary to have write _and_ execute permissions to the directory containing the file.
There are more permission bits, but they are primarily used in special circumstances such as setuid binaries and sticky directories. For more information on file permissions and how to set them, refer to man:chmod[1].
=== Symbolic Permissions
Symbolic permissions use characters instead of octal values to assign permissions to files or directories. Symbolic permissions use the syntax of (who) (action) (permissions), where the following values are available:
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Option
| Letter
| Represents
|(who)
|u
|User
|(who)
|g
|Group owner
|(who)
|o
|Other
|(who)
|a
|All ("world")
|(action)
|+
|Adding permissions
|(action)
|-
|Removing permissions
|(action)
|=
|Explicitly set permissions
|(permissions)
|r
|Read
|(permissions)
|w
|Write
|(permissions)
|x
|Execute
|(permissions)
|t
|Sticky bit
|(permissions)
|s
|Set UID or GID
|===
These values are used with man:chmod[1], but with letters instead of numbers. For example, the following command would block other users from accessing _FILE_:
[source,shell]
....
% chmod go= FILE
....
A comma-separated list can be provided when more than one set of changes to a file must be made. For example, the following command removes the group and "world" write permission on _FILE_, and adds execute permission for everyone:
[source,shell]
....
% chmod go-w,a+x FILE
....
=== FreeBSD File Flags
In addition to file permissions, FreeBSD supports the use of "file flags". These flags add an additional level of security and control over files, but not directories. With file flags, even `root` can be prevented from removing or altering files.
File flags are modified using man:chflags[1]. For example, to enable the system undeletable flag on the file [.filename]#file1#, issue the following command:
[source,shell]
....
# chflags sunlink file1
....
To disable the system undeletable flag, put a "no" in front of the `sunlink`:
[source,shell]
....
# chflags nosunlink file1
....
To view the flags of a file, use `-lo` with man:ls[1]:
[source,shell]
....
# ls -lo file1
....
[.programlisting]
....
-rw-r--r-- 1 trhodes trhodes sunlnk 0 Mar 1 05:54 file1
....
Several file flags may only be added or removed by the `root` user. In other cases, the file owner may set its file flags. Refer to man:chflags[1] and man:chflags[2] for more information.
=== The `setuid`, `setgid`, and `sticky` Permissions
Other than the permissions already discussed, there are three other specific settings that all administrators should know about. They are the `setuid`, `setgid`, and `sticky` permissions.
These settings are important for some UNIX(R) operations as they provide functionality not normally granted to normal users. To understand them, the difference between the real user ID and effective user ID must be noted.
The real user ID is the UID of the user who owns or starts the process. The effective UID is the user ID the process runs as. As an example, man:passwd[1] runs with the real user ID when a user changes their password. However, in order to update the password database, the command runs as the effective ID of the `root` user. This allows users to change their passwords without seeing a `Permission Denied` error.
The setuid permission may be set by prefixing a permission set with the number four (4) as shown in the following example:
[source,shell]
....
# chmod 4755 suidexample.sh
....
The permissions on [.filename]#suidexample.sh# now look like the following:
[.programlisting]
....
-rwsr-xr-x 1 trhodes trhodes 63 Aug 29 06:36 suidexample.sh
....
Note that an `s` is now part of the permission set designated for the file owner, replacing the executable bit. This allows utilities which need elevated permissions, such as man:passwd[1], to operate as intended.
[NOTE]
====
The `nosuid` man:mount[8] option will cause such binaries to silently fail without alerting the user. That option is not completely reliable as a `nosuid` wrapper may be able to circumvent it.
====
To view this in real time, open two terminals. On one, type `passwd` as a normal user. While it waits for a new password, check the process table and look at the user information for man:passwd[1]:
In terminal A:
[source,shell]
....
Changing local password for trhodes
Old Password:
....
In terminal B:
[source,shell]
....
# ps aux | grep passwd
....
[source,shell]
....
trhodes 5232 0.0 0.2 3420 1608 0 R+ 2:10AM 0:00.00 grep passwd
root 5211 0.0 0.2 3620 1724 2 I+ 2:09AM 0:00.01 passwd
....
Although man:passwd[1] is run as a normal user, it is using the effective UID of `root`.
The `setgid` permission performs the same function as the `setuid` permission, except that it alters the group settings. When an application or utility executes with this setting, it will be granted the permissions based on the group that owns the file, not the user who started the process.
To set the `setgid` permission on a file, provide man:chmod[1] with a leading two (2):
[source,shell]
....
# chmod 2755 sgidexample.sh
....
In the following listing, notice that the `s` is now in the field designated for the group permission settings:
[source,shell]
....
-rwxr-sr-x 1 trhodes trhodes 44 Aug 31 01:49 sgidexample.sh
....
[NOTE]
====
In these examples, even though the shell script in question is an executable file, it will not run with a different EUID or effective user ID. This is because shell scripts may not access the man:setuid[2] system calls.
====
The `setuid` and `setgid` permission bits may lower system security by allowing elevated permissions. The third special permission, the `sticky bit`, can strengthen the security of a system.
When the `sticky bit` is set on a directory, it allows file deletion only by the file owner. This is useful to prevent file deletion in public directories, such as [.filename]#/tmp#, by users who do not own the file. To utilize this permission, prefix the permission set with a one (1):
[source,shell]
....
# chmod 1777 /tmp
....
The `sticky bit` permission will display as a `t` at the very end of the permission set:
[source,shell]
....
# ls -al / | grep tmp
....
[source,shell]
....
drwxrwxrwt 10 root wheel 512 Aug 31 01:49 tmp
....
[[dirstructure]]
== Directory Structure
The FreeBSD directory hierarchy is fundamental to obtaining an overall understanding of the system. The most important directory is root or, "/". This directory is the first one mounted at boot time and it contains the base system necessary to prepare the operating system for multi-user operation. The root directory also contains mount points for other file systems that are mounted during the transition to multi-user operation.
A mount point is a directory where additional file systems can be grafted onto a parent file system (usually the root file system). This is further described in <<disk-organization>>. Standard mount points include [.filename]#/usr/#, [.filename]#/var/#, [.filename]#/tmp/#, [.filename]#/mnt/#, and [.filename]#/cdrom/#. These directories are usually referenced to entries in [.filename]#/etc/fstab#. This file is a table of various file systems and mount points and is read by the system. Most of the file systems in [.filename]#/etc/fstab# are mounted automatically at boot time from the script man:rc[8] unless their entry includes `noauto`. Details can be found in <<disks-fstab>>.
A complete description of the file system hierarchy is available in man:hier[7]. The following table provides a brief overview of the most common directories.
[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Directory
| Description
|[.filename]#/#
|Root directory of the file system.
|[.filename]#/bin/#
|User utilities fundamental to both single-user and multi-user environments.
|[.filename]#/boot/#
|Programs and configuration files used during operating system bootstrap.
|[.filename]#/boot/defaults/#
|Default boot configuration files. Refer to man:loader.conf[5] for details.
|[.filename]#/dev/#
|Device nodes. Refer to man:intro[4] for details.
|[.filename]#/etc/#
|System configuration files and scripts.
|[.filename]#/etc/defaults/#
|Default system configuration files. Refer to man:rc[8] for details.
|[.filename]#/etc/mail/#
|Configuration files for mail transport agents such as man:sendmail[8].
|[.filename]#/etc/periodic/#
|Scripts that run daily, weekly, and monthly, via man:cron[8]. Refer to man:periodic[8] for details.
|[.filename]#/etc/ppp/#
|man:ppp[8] configuration files.
|[.filename]#/mnt/#
|Empty directory commonly used by system administrators as a temporary mount point.
|[.filename]#/proc/#
|Process file system. Refer to man:procfs[5], man:mount_procfs[8] for details.
|[.filename]#/rescue/#
|Statically linked programs for emergency recovery as described in man:rescue[8].
|[.filename]#/root/#
|Home directory for the `root` account.
|[.filename]#/sbin/#
|System programs and administration utilities fundamental to both single-user and multi-user environments.
|[.filename]#/tmp/#
|Temporary files which are usually _not_ preserved across a system reboot. A memory-based file system is often mounted at [.filename]#/tmp#. This can be automated using the tmpmfs-related variables of man:rc.conf[5] or with an entry in [.filename]#/etc/fstab#; refer to man:mdmfs[8] for details.
|[.filename]#/usr/#
|The majority of user utilities and applications.
|[.filename]#/usr/bin/#
|Common utilities, programming tools, and applications.
|[.filename]#/usr/include/#
|Standard C include files.
|[.filename]#/usr/lib/#
|Archive libraries.
|[.filename]#/usr/libdata/#
|Miscellaneous utility data files.
|[.filename]#/usr/libexec/#
|System daemons and system utilities executed by other programs.
|[.filename]#/usr/local/#
|Local executables and libraries. Also used as the default destination for the FreeBSD ports framework. Within [.filename]#/usr/local#, the general layout sketched out by man:hier[7] for [.filename]#/usr# should be used. Exceptions are the man directory, which is directly under [.filename]#/usr/local# rather than under [.filename]#/usr/local/share#, and the ports documentation, which is in [.filename]#share/doc/port#.
|[.filename]#/usr/obj/#
|Architecture-specific target tree produced by building the [.filename]#/usr/src# tree.
|[.filename]#/usr/ports/#
|The FreeBSD Ports Collection (optional).
|[.filename]#/usr/sbin/#
|System daemons and system utilities executed by users.
|[.filename]#/usr/share/#
|Architecture-independent files.
|[.filename]#/usr/src/#
|BSD and/or local source files.
|[.filename]#/var/#
|Multi-purpose log, temporary, transient, and spool files. A memory-based file system is sometimes mounted at [.filename]#/var#. This can be automated using the varmfs-related variables in man:rc.conf[5] or with an entry in [.filename]#/etc/fstab#; refer to man:mdmfs[8] for details.
|[.filename]#/var/log/#
|Miscellaneous system log files.
|[.filename]#/var/mail/#
|User mailbox files.
|[.filename]#/var/spool/#
|Miscellaneous printer and mail system spooling directories.
|[.filename]#/var/tmp/#
|Temporary files which are usually preserved across a system reboot, unless [.filename]#/var# is a memory-based file system.
|[.filename]#/var/yp/#
|NIS maps.
|===
[[disk-organization]]
== Disk Organization
The smallest unit of organization that FreeBSD uses to find files is the filename. Filenames are case-sensitive, which means that [.filename]#readme.txt# and [.filename]#README.TXT# are two separate files. FreeBSD does not use the extension of a file to determine whether the file is a program, document, or some other form of data.
Files are stored in directories. A directory may contain no files, or it may contain many hundreds of files. A directory can also contain other directories, allowing a hierarchy of directories within one another in order to organize data.
Files and directories are referenced by giving the file or directory name, followed by a forward slash, `/`, followed by any other directory names that are necessary. For example, if the directory [.filename]#foo# contains a directory [.filename]#bar# which contains the file [.filename]#readme.txt#, the full name, or _path_, to the file is [.filename]#foo/bar/readme.txt#. Note that this is different from Windows(R), which uses `\` to separate file and directory names. FreeBSD does not use drive letters or other drive names in the path. For example, one would not type [.filename]#c:\foo\bar\readme.txt# on FreeBSD.
Directories and files are stored in a file system. Each file system contains exactly one directory at the very top level, called the _root directory_ for that file system. This root directory can contain other directories. One file system is designated the _root file system_ or `/`. Every other file system is _mounted_ under the root file system. No matter how many disks are on the FreeBSD system, every directory appears to be part of the same disk.
Consider three file systems, called `A`, `B`, and `C`. Each file system has one root directory, which contains two other directories, called `A1`, `A2` (and likewise `B1`, `B2` and `C1`, `C2`).
Call `A` the root file system. If man:ls[1] is used to view the contents of this directory, it will show two subdirectories, `A1` and `A2`. The directory tree looks like this:
image::example-dir1.png[]
A file system must be mounted onto a directory in another file system. When mounting file system `B` onto the directory `A1`, the root directory of `B` replaces `A1`, and the directories in `B` appear accordingly:
image::example-dir2.png[]
Any files that are in the `B1` or `B2` directories can be reached with the path [.filename]#/A1/B1# or [.filename]#/A1/B2# as necessary. Any files that were in [.filename]#/A1# have been temporarily hidden. They will reappear if `B` is _unmounted_ from `A`.
If `B` had been mounted on `A2` then the diagram would look like this:
image::example-dir3.png[]
and the paths would be [.filename]#/A2/B1# and [.filename]#/A2/B2# respectively.
File systems can be mounted on top of one another. Continuing the last example, the `C` file system could be mounted on top of the `B1` directory in the `B` file system, leading to this arrangement:
image::example-dir4.png[]
Or `C` could be mounted directly on to the `A` file system, under the `A1` directory:
image::example-dir5.png[]
It is entirely possible to have one large root file system, and not need to create any others. There are some drawbacks to this approach, and one advantage.
.Benefits of Multiple File Systems
* Different file systems can have different _mount options_. For example, the root file system can be mounted read-only, making it impossible for users to inadvertently delete or edit a critical file. Separating user-writable file systems, such as [.filename]#/home#, from other file systems allows them to be mounted _nosuid_. This option prevents the _setuid_/_setgid_ bits on executables stored on the file system from taking effect, possibly improving security.
* FreeBSD automatically optimizes the layout of files on a file system, depending on how the file system is being used. So a file system that contains many small files that are written frequently will have a different optimization to one that contains fewer, larger files. By having one big file system this optimization breaks down.
* FreeBSD's file systems are robust if power is lost. However, a power loss at a critical point could still damage the structure of the file system. By splitting data over multiple file systems it is more likely that the system will still come up, making it easier to restore from backup as necessary.
.Benefit of a Single File System
* File systems are a fixed size. If you create a file system when you install FreeBSD and give it a specific size, you may later discover that you need to make the partition bigger. This is not easily accomplished without backing up, recreating the file system with the new size, and then restoring the backed up data.
+
[IMPORTANT]
====
FreeBSD features the man:growfs[8] command, which makes it possible to increase the size of a file system on the fly, removing this limitation.
====
File systems are contained in partitions. This does not have the same meaning as the common usage of the term partition (for example, MS-DOS(R) partition), because of FreeBSD's UNIX(R) heritage. Each partition is identified by a letter from `a` through to `h`. Each partition can contain only one file system, which means that file systems are often described by either their typical mount point in the file system hierarchy, or the letter of the partition they are contained in.
FreeBSD also uses disk space for _swap space_ to provide _virtual memory_. This allows your computer to behave as though it has much more memory than it actually does. When FreeBSD runs out of memory, it moves some of the data that is not currently being used to the swap space, and moves it back in (moving something else out) when it needs it.
Some partitions have certain conventions associated with them.
[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Partition
| Convention
|`a`
|Normally contains the root file system.
|`b`
|Normally contains swap space.
|`c`
|Normally the same size as the enclosing slice. This allows utilities that need to work on the entire slice, such as a bad block scanner, to work on the `c` partition. A file system would not normally be created on this partition.
|`d`
|Partition `d` used to have a special meaning associated with it, although that is now gone and `d` may work as any normal partition.
|===
Disks in FreeBSD are divided into slices, referred to in Windows(R) as partitions, which are numbered from 1 to 4. These are then divided into partitions, which contain file systems, and are labeled using letters.
Slice numbers follow the device name, prefixed with an `s`, starting at 1. So "da0__s1__" is the first slice on the first SCSI drive. There can only be four physical slices on a disk, but there can be logical slices inside physical slices of the appropriate type. These extended slices are numbered starting at 5, so "ada0__s5__" is the first extended slice on the first SATA disk. These devices are used by file systems that expect to occupy a slice.
Slices, "dangerously dedicated" physical drives, and other drives contain _partitions_, which are represented as letters from `a` to `h`. This letter is appended to the device name, so "da0__a__" is the `a` partition on the first `da` drive, which is "dangerously dedicated". "ada1s3__e__" is the fifth partition in the third slice of the second SATA disk drive.
Finally, each disk on the system is identified. A disk name starts with a code that indicates the type of disk, and then a number, indicating which disk it is. Unlike slices, disk numbering starts at 0. Common codes are listed in <<disks-naming>>.
When referring to a partition, include the disk name, `s`, the slice number, and then the partition letter. Examples are shown in <<basics-disk-slice-part>>.
<<basics-concept-disk-model>> shows a conceptual model of a disk layout.
When installing FreeBSD, configure the disk slices, create partitions within the slice to be used for FreeBSD, create a file system or swap space in each partition, and decide where each file system will be mounted.
[[disks-naming]]
.Disk Device Names
[cols="1,1", frame="none", options="header"]
|===
| Drive Type
| Drive Device Name
|SATA and IDE hard drives
|`ada` or `ad`
|SCSI hard drives and USB storage devices
|`da`
|SATA and IDE CD-ROM drives
|`cd` or `acd`
|SCSI CD-ROM drives
|`cd`
|Floppy drives
|`fd`
|Assorted non-standard CD-ROM drives
|`mcd` for Mitsumi CD-ROM and `scd` for Sony CD-ROM devices
|SCSI tape drives
|`sa`
|IDE tape drives
|`ast`
|RAID drives
|Examples include `aacd` for Adaptec(R) AdvancedRAID, `mlxd` and `mlyd` for Mylex(R), `amrd` for AMI MegaRAID(R), `idad` for Compaq Smart RAID, `twed` for 3ware(R) RAID.
|===
[example]
====
[[basics-disk-slice-part]]
.Sample Disk, Slice, and Partition Names
[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Name
| Meaning
|`ada0s1a`
|The first partition (`a`) on the first slice (`s1`) on the first SATA disk (`ada0`).
|`da1s2e`
|The fifth partition (`e`) on the second slice (`s2`) on the second SCSI disk (`da1`).
|===
====
[[basics-concept-disk-model]]
.Conceptual Model of a Disk
[example]
====
This diagram shows FreeBSD's view of the first SATA disk attached to the system. Assume that the disk is 250 GB in size, and contains an 80 GB slice and a 170 GB slice (MS-DOS(R) partitions). The first slice contains a Windows(R) NTFS file system, [.filename]#C:#, and the second slice contains a FreeBSD installation. This example FreeBSD installation has four data partitions and a swap partition.
The four partitions each hold a file system. Partition `a` is used for the root file system, `d` for [.filename]#/var/#, `e` for [.filename]#/tmp/#, and `f` for [.filename]#/usr/#. Partition letter `c` refers to the entire slice, and so is not used for ordinary partitions.
image::disk-layout.png[]
====
[[mount-unmount]]
== Mounting and Unmounting File Systems
The file system is best visualized as a tree, rooted, as it were, at [.filename]#/#. [.filename]#/dev#, [.filename]#/usr#, and the other directories in the root directory are branches, which may have their own branches, such as [.filename]#/usr/local#, and so on.
There are various reasons to house some of these directories on separate file systems. [.filename]#/var# contains the directories [.filename]#log/#, [.filename]#spool/#, and various types of temporary files, and as such, may get filled up. Filling up the root file system is not a good idea, so splitting [.filename]#/var# from [.filename]#/# is often favorable.
Another common reason to contain certain directory trees on other file systems is if they are to be housed on separate physical disks, or are separate virtual disks, such as Network File System mounts, described in crossref:network-servers[network-nfs,“Network File System (NFS)”], or CDROM drives.
[[disks-fstab]]
=== The [.filename]#fstab# File
During the boot process (crossref:boot[boot,The FreeBSD Booting Process]), file systems listed in [.filename]#/etc/fstab# are automatically mounted except for the entries containing `noauto`. This file contains entries in the following format:
[.programlisting]
....
device /mount-point fstype options dumpfreq passno
....
`device`::
An existing device name as explained in <<disks-naming>>.
`mount-point`::
An existing directory on which to mount the file system.
`fstype`::
The file system type to pass to man:mount[8]. The default FreeBSD file system is `ufs`.
`options`::
Either `rw` for read-write file systems, or `ro` for read-only file systems, followed by any other options that may be needed. A common option is `noauto` for file systems not normally mounted during the boot sequence. Other options are listed in man:mount[8].
`dumpfreq`::
Used by man:dump[8] to determine which file systems require dumping. If the field is missing, a value of zero is assumed.
`passno`::
Determines the order in which file systems should be checked. File systems that should be skipped should have their `passno` set to zero. The root file system needs to be checked before everything else and should have its `passno` set to one. The other file systems should be set to values greater than one. If more than one file system has the same `passno`, man:fsck[8] will attempt to check file systems in parallel if possible.
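As an illustration, a typical [.filename]#/etc/fstab# might contain entries like the following; the device names are examples only:
[.programlisting]
....
# Device        Mountpoint      FStype  Options         Dump    Pass#
/dev/ada0s1a    /               ufs     rw              1       1
/dev/ada0s1b    none            swap    sw              0       0
/dev/ada0s1d    /var            ufs     rw              2       2
/dev/cd0        /cdrom          cd9660  ro,noauto       0       0
....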
Refer to man:fstab[5] for more information on the format of [.filename]#/etc/fstab# and its options.
[[disks-mount]]
=== Using man:mount[8]
File systems are mounted using man:mount[8]. The most basic syntax is as follows:
[example]
====
[source,shell]
....
# mount device mountpoint
....
====
This command provides many options which are described in man:mount[8]. The most commonly used options include:
.Mount Options
`-a`::
Mount all the file systems listed in [.filename]#/etc/fstab#, except those marked as "noauto", excluded by the `-t` flag, or those that are already mounted.
`-d`::
Do everything except for the actual mount system call. This option is useful in conjunction with the `-v` flag to determine what man:mount[8] is actually trying to do.
`-f`::
Force the mount of an unclean file system (dangerous), or the revocation of write access when downgrading a file system's mount status from read-write to read-only.
`-r`::
Mount the file system read-only. This is identical to using `-o ro`.
``-t _fstype_``::
Mount the specified file system type or mount only file systems of the given type, if `-a` is included. "ufs" is the default file system type.
`-u`::
Update mount options on the file system.
`-v`::
Be verbose.
`-w`::
Mount the file system read-write.
The following options can be passed to `-o` as a comma-separated list:
nosuid::
Do not interpret setuid or setgid flags on the file system. This is also a useful security option.
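For example, to mount a partition from a second disk onto [.filename]#/mnt# read-only and with the `nosuid` option (the device name is illustrative):
[source,shell]
....
# mount -o ro,nosuid /dev/ada1s1e /mnt
....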
[[disks-umount]]
=== Using man:umount[8]
To unmount a file system use man:umount[8]. This command takes one parameter which can be a mountpoint, device name, `-a` or `-A`.
All forms take `-f` to force unmounting, and `-v` for verbosity. Be warned that `-f` is not generally a good idea as it might crash the computer or damage data on the file system.
To unmount all mounted file systems, or just the file system types listed after `-t`, use `-a` or `-A`. Note that `-A` does not attempt to unmount the root file system.
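For example, to unmount the file system mounted at [.filename]#/mnt#:
[source,shell]
....
# umount /mnt
....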
[[basics-processes]]
== Processes and Daemons
FreeBSD is a multi-tasking operating system. Each program running at any one time is called a _process_. Every running command starts at least one new process and there are a number of system processes that are run by FreeBSD.
Each process is uniquely identified by a number called a _process ID_ (PID). Similar to files, each process has one owner and group, and the owner and group permissions are used to determine which files and devices the process can open. Most processes also have a parent process that started them. For example, the shell is a process, and any command started in the shell is a process which has the shell as its parent process. The exception is a special process called man:init[8] which is always the first process to start at boot time and which always has a PID of `1`.
Some programs are not designed to be run with continuous user input and disconnect from the terminal at the first opportunity. For example, a web server responds to web requests, rather than user input. Mail servers are another example of this type of application. These types of programs are known as _daemons_. The term daemon comes from Greek mythology and represents an entity that is neither good nor evil, and which invisibly performs useful tasks. This is why the BSD mascot is the cheerful-looking daemon with sneakers and a pitchfork.
There is a convention to name programs that normally run as daemons with a trailing "d". For example, BIND is the Berkeley Internet Name Domain, but the actual program that executes is `named`. The Apache web server program is `httpd` and the line printer spooling daemon is `lpd`. This is only a naming convention. For example, the main mail daemon for the Sendmail application is `sendmail`, and not `maild`.
=== Viewing Processes
To see the processes running on the system, use man:ps[1] or man:top[1]. To display a static list of the currently running processes, their PIDs, how much memory they are using, and the command they were started with, use man:ps[1]. To display all the running processes and update the display every few seconds in order to interactively see what the computer is doing, use man:top[1].
By default, man:ps[1] only shows the commands that are running and owned by the user. For example:
[source,shell]
....
% ps
PID TT STAT TIME COMMAND
8203 0 Ss 0:00.59 /bin/csh
8895 0 R+ 0:00.00 ps
....
The output from man:ps[1] is organized into a number of columns. The `PID` column displays the process ID. PIDs are assigned starting at 1, go up to 99999, then wrap around back to the beginning. However, a PID is not reassigned if it is already in use. The `TT` column shows the tty the program is running on and `STAT` shows the program's state. `TIME` is the amount of time the program has been running on the CPU. This is usually not the elapsed time since the program was started, as most programs spend a lot of time waiting for things to happen before they need to spend time on the CPU. Finally, `COMMAND` is the command that was used to start the program.
A number of different options are available to change the information that is displayed. One of the most useful sets is `auxww`, where `a` displays information about all the running processes of all users, `u` displays the username and memory usage of the process' owner, `x` displays information about daemon processes, and `ww` causes man:ps[1] to display the full command line for each process, rather than truncating it once it gets too long to fit on the screen.
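For example, to list every process on the system with its full command line and filter the output for a particular program (the program name is illustrative):
[source,shell]
....
% ps auxww | grep sshd
....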
The output from man:top[1] is similar:
[source,shell]
....
% top
last pid: 9609; load averages: 0.56, 0.45, 0.36 up 0+00:20:03 10:21:46
107 processes: 2 running, 104 sleeping, 1 zombie
CPU: 6.2% user, 0.1% nice, 8.2% system, 0.4% interrupt, 85.1% idle
Mem: 541M Active, 450M Inact, 1333M Wired, 4064K Cache, 1498M Free
ARC: 992M Total, 377M MFU, 589M MRU, 250K Anon, 5280K Header, 21M Other
Swap: 2048M Total, 2048M Free
PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
557 root 1 -21 r31 136M 42296K select 0 2:20 9.96% Xorg
8198 dru 2 52 0 449M 82736K select 3 0:08 5.96% kdeinit4
8311 dru 27 30 0 1150M 187M uwait 1 1:37 0.98% firefox
431 root 1 20 0 14268K 1728K select 0 0:06 0.98% moused
9551 dru 1 21 0 16600K 2660K CPU3 3 0:01 0.98% top
2357 dru 4 37 0 718M 141M select 0 0:21 0.00% kdeinit4
8705 dru 4 35 0 480M 98M select 2 0:20 0.00% kdeinit4
8076 dru 6 20 0 552M 113M uwait 0 0:12 0.00% soffice.bin
2623 root 1 30 10 12088K 1636K select 3 0:09 0.00% powerd
2338 dru 1 20 0 440M 84532K select 1 0:06 0.00% kwin
1427 dru 5 22 0 605M 86412K select 1 0:05 0.00% kdeinit4
....
The output is split into two sections. The header (the first five or six lines) shows the PID of the last process to run, the system load averages (which are a measure of how busy the system is), the system uptime (time since the last reboot) and the current time. The other figures in the header relate to how many processes are running, how much memory and swap space has been used, and how much time the system is spending in different CPU states. If the ZFS file system module has been loaded, an `ARC` line indicates how much data was read from the memory cache instead of from disk.
Below the header is a series of columns containing similar information to the output from man:ps[1], such as the PID, username, amount of CPU time, and the command that started the process. By default, man:top[1] also displays the amount of memory space taken by the process. This is split into two columns: one for total size and one for resident size. Total size is how much memory the application has needed and the resident size is how much it is actually using now.
man:top[1] automatically updates the display every two seconds. A different interval can be specified with `-s`.
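For example, to update the display every five seconds instead:
[source,shell]
....
% top -s 5
....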
[[basics-daemons]]
=== Killing Processes
One way to communicate with any running process or daemon is to send a _signal_ using man:kill[1]. There are a number of different signals; some have a specific meaning while others are described in the application's documentation. A user can only send a signal to a process they own and sending a signal to someone else's process will result in a permission denied error. The exception is the `root` user, who can send signals to anyone's processes.
The operating system can also send a signal to a process. If an application is badly written and tries to access memory that it is not supposed to, FreeBSD will send the process the "Segmentation Violation" signal (`SIGSEGV`). If an application has been written to use the man:alarm[3] system call to be alerted after a period of time has elapsed, it will be sent the "Alarm" signal (`SIGALRM`).
Two signals can be used to stop a process: `SIGTERM` and `SIGKILL`. `SIGTERM` is the polite way to kill a process, as the process can catch the signal, close any log files it may have open, and attempt to finish what it is doing before shutting down. In some cases, a process may ignore `SIGTERM` if it is in the middle of some task that cannot be interrupted.
`SIGKILL` cannot be ignored by a process. Sending a `SIGKILL` to a process will usually stop that process there and then. footnote:[There are a few tasks that cannot be interrupted. For example, if the process is trying to read from a file that is on another computer on the network, and the other computer is unavailable, the process is said to be uninterruptible. Eventually the process will time out, typically after two minutes. As soon as this time out occurs the process will be killed.]
Other commonly used signals are `SIGHUP`, `SIGUSR1`, and `SIGUSR2`. Since these are general purpose signals, different applications will respond differently.
For example, after changing a web server's configuration file, the web server needs to be told to re-read its configuration. Restarting `httpd` would result in a brief outage period on the web server. Instead, send the daemon the `SIGHUP` signal. Be aware that different daemons will have different behavior, so refer to the documentation for the daemon to determine if `SIGHUP` will achieve the desired results.
[.procedure]
****
.Procedure: Sending a Signal to a Process
This example shows how to send a signal to man:inetd[8]. The man:inetd[8] configuration file is [.filename]#/etc/inetd.conf#, and man:inetd[8] will re-read this configuration file when it is sent a `SIGHUP`.
. Find the PID of the process to send the signal to using man:pgrep[1]. In this example, the PID for man:inetd[8] is 198:
+
[source,shell]
....
% pgrep -l inetd
198 inetd -wW
....
+
. Use man:kill[1] to send the signal. As man:inetd[8] is owned by `root`, use man:su[1] to become `root` first.
+
[source,shell]
....
% su
Password:
# /bin/kill -s HUP 198
....
Like most UNIX(R) commands, man:kill[1] will not print any output if it is successful. If a signal is sent to a process not owned by that user, the message `kill: _PID_: Operation not permitted` will be displayed. Mistyping the PID will either send the signal to the wrong process, which could have negative results, or will send the signal to a PID that is not currently in use, resulting in the error `kill: _PID_: No such process`.
[NOTE]
====
*Why Use `/bin/kill`?:* +
Many shells provide `kill` as a built in command, meaning that the shell will send the signal directly, rather than running [.filename]#/bin/kill#. Be aware that different shells have a different syntax for specifying the name of the signal to send. Rather than try to learn all of them, it can be simpler to specify `/bin/kill`.
====
****
When sending other signals, substitute `TERM` or `KILL` with the name of the signal.
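For instance, to ask the same man:inetd[8] process from the procedure above to terminate cleanly instead of re-reading its configuration (PID 198 is taken from that example and will differ on a real system):
[source,shell]
....
# /bin/kill -s TERM 198
....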
[IMPORTANT]
====
Killing a random process on the system is a bad idea. In particular, man:init[8], PID 1, is special. Running `/bin/kill -s KILL 1` is a quick, and unrecommended, way to shut down the system. _Always_ double check the arguments to man:kill[1] _before_ pressing kbd:[Return].
====
[[shells]]
== Shells
A _shell_ provides a command line interface for interacting with the operating system. A shell receives commands from the input channel and executes them. Many shells provide built in functions to help with everyday tasks such as file management, file globbing, command line editing, command macros, and environment variables. FreeBSD comes with several shells, including the Bourne shell (man:sh[1]) and the extended C shell (man:tcsh[1]). Other shells are available from the FreeBSD Ports Collection, such as `zsh` and `bash`.
The shell that is used is really a matter of taste. A C programmer might feel more comfortable with a C-like shell such as man:tcsh[1]. A Linux(R) user might prefer `bash`. Each shell has unique properties that may or may not work with a user's preferred working environment, which is why there is a choice of which shell to use.
One common shell feature is filename completion. After a user types the first few letters of a command or filename and presses kbd:[Tab], the shell completes the rest of the command or filename. Consider two files called [.filename]#foobar# and [.filename]#football#. To delete [.filename]#foobar#, the user might type `rm foo` and press kbd:[Tab] to complete the filename.
But the shell only shows `rm foo`. It was unable to complete the filename because both [.filename]#foobar# and [.filename]#football# start with `foo`. Some shells sound a beep or show all the choices if more than one name matches. The user must then type more characters to identify the desired filename. Typing a `t` and pressing kbd:[Tab] again is enough to let the shell determine which filename is desired and fill in the rest.
Another feature of the shell is the use of environment variables. Environment variables are key/value pairs stored in the shell's environment. This environment can be read by any program invoked by the shell, and thus contains a lot of program configuration. <<shell-env-vars>> provides a list of common environment variables and their meanings. Note that the names of environment variables are always in uppercase.
[[shell-env-vars]]
.Common Environment Variables
[cols="1,1", frame="none", options="header"]
|===
| Variable
| Description
|`USER`
|Current logged in user's name.
|`PATH`
|Colon-separated list of directories to search for binaries.
|`DISPLAY`
|Network name of the Xorg display to connect to, if available.
|`SHELL`
|The current shell.
|`TERM`
|The name of the user's type of terminal. Used to determine the capabilities of the terminal.
|`TERMCAP`
|Database entry of the terminal escape codes to perform various terminal functions.
|`OSTYPE`
|Type of operating system.
|`MACHTYPE`
|The system's CPU architecture.
|`EDITOR`
|The user's preferred text editor.
|`PAGER`
|The user's preferred utility for viewing text one page at a time.
|`MANPATH`
|Colon-separated list of directories to search for manual pages.
|===
How to set an environment variable differs between shells. In man:tcsh[1] and man:csh[1], use `setenv` to set environment variables. In man:sh[1] and `bash`, use `export` to set the current environment variables. This example sets the default `EDITOR` to [.filename]#/usr/local/bin/emacs# for the man:tcsh[1] shell:
[source,shell]
....
% setenv EDITOR /usr/local/bin/emacs
....
The equivalent command for `bash` would be:
[source,shell]
....
% export EDITOR="/usr/local/bin/emacs"
....
To expand an environment variable in order to see its current setting, type a `$` character in front of its name on the command line. For example, `echo $TERM` displays the current `$TERM` setting.
Shells treat certain characters, known as meta-characters, as special representations of data. The most common meta-character is `\*`, which represents any number of characters in a filename. Meta-characters can be used to perform filename globbing. For example, `echo *` produces much the same list as `ls` because the shell expands `*` to the names of all matching files and passes them to `echo`, which prints them on the command line.
To prevent the shell from interpreting a special character, escape it from the shell by starting it with a backslash (`\`). For example, `echo $TERM` prints the terminal setting whereas `echo \$TERM` literally prints the string `$TERM`.
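A short illustration, assuming `TERM` is set to `xterm` (the actual value depends on the terminal in use):
[source,shell]
....
% echo $TERM
xterm
% echo \$TERM
$TERM
....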
[[changing-shells]]
=== Changing the Shell
The easiest way to permanently change the default shell is to use `chsh`. Running this command will open the editor that is configured in the `EDITOR` environment variable, which by default is set to man:vi[1]. Change the `Shell:` line to the full path of the new shell.
Alternately, use `chsh -s` which will set the specified shell without opening an editor. For example, to change the shell to `bash`:
[source,shell]
....
% chsh -s /usr/local/bin/bash
....
[NOTE]
====
The new shell _must_ be present in [.filename]#/etc/shells#. If the shell was installed from the FreeBSD Ports Collection as described in crossref:ports[ports,Installing Applications: Packages and Ports], it should be automatically added to this file. If it is missing, add it using this command, replacing the path with the path of the shell:
[source,shell]
....
# echo /usr/local/bin/bash >> /etc/shells
....
Then, rerun man:chsh[1].
====
=== Advanced Shell Techniques
The UNIX(R) shell is not just a command interpreter; it acts as a powerful tool which allows users to execute commands, redirect their input and output, and chain commands together to refine the final output. When this functionality is mixed with built-in commands, the user is provided with an environment that can maximize efficiency.
Shell redirection is the action of sending the output or the input of a command into another command or into a file. To capture the output of the man:ls[1] command, for example, into a file, redirect the output:
[source,shell]
....
% ls > directory_listing.txt
....
The directory contents will now be listed in [.filename]#directory_listing.txt#. Some commands, such as man:sort[1], can read their input from a file. To sort this listing, redirect the input:
[source,shell]
....
% sort < directory_listing.txt
....
The input will be sorted and displayed on the screen. To send the sorted output to another file instead, redirect the output of man:sort[1] as well, mixing the two directions:
[source,shell]
....
% sort < directory_listing.txt > sorted.txt
....
In all of the previous examples, the commands are performing redirection using file descriptors. Every UNIX(R) system provides the file descriptors standard input (stdin), standard output (stdout), and standard error (stderr). Each one has a purpose: input may come from a keyboard or another input device, output may go to a screen or to paper in a printer, and error is used for diagnostic and error messages. All three are I/O file descriptors and are sometimes referred to as streams.
Through the use of these descriptors, the shell allows output and input to be passed around through various commands and redirected to or from a file. Another method of redirection is the pipe operator.
The UNIX(R) pipe operator, "|", allows the output of one command to be passed directly to another program. Basically, a pipe allows the standard output of a command to be passed as standard input to another command, for example:
[source,shell]
....
% cat directory_listing.txt | sort | less
....
In that example, the contents of [.filename]#directory_listing.txt# will be sorted and the output passed to man:less[1]. This allows the user to scroll through the output at their own pace, preventing it from scrolling off the screen.
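The same result can be obtained by combining the redirection and pipe operators shown above; this is just one of several equivalent ways to write it:
[source,shell]
....
% sort < directory_listing.txt | less
....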
[[editors]]
== Text Editors
Most FreeBSD configuration is done by editing text files, so it is a good idea to become familiar with a text editor. FreeBSD comes with a few as part of the base system, and many more are available in the Ports Collection.
A simple editor to learn is man:ee[1], which stands for easy editor. To start this editor, type `ee _filename_` where _filename_ is the name of the file to be edited. Once inside the editor, all of the commands for manipulating the editor's functions are listed at the top of the display. The caret (`^`) represents kbd:[Ctrl], so `^e` expands to kbd:[Ctrl+e]. To leave man:ee[1], press kbd:[Esc], then choose the "leave editor" option from the main menu. The editor will prompt to save any changes if the file has been modified.
FreeBSD also comes with more powerful text editors, such as man:vi[1], as part of the base system. Other editors, like package:editors/emacs[] and package:editors/vim[], are part of the FreeBSD Ports Collection. These editors offer more functionality at the expense of being more complicated to learn. Learning a more powerful editor such as vim or Emacs can save more time in the long run.
Many applications which modify files or require typed input will automatically open a text editor. To change the default editor, set the `EDITOR` environment variable as described in <<shells>>.
[[basics-devices]]
== Devices and Device Nodes
_Device_ is a term used mostly for hardware-related components in a system, such as disks, printers, graphics cards, and keyboards. When FreeBSD boots, the majority of the boot messages refer to devices being detected. A copy of the boot messages is saved to [.filename]#/var/run/dmesg.boot#.
Each device has a device name and number. For example, [.filename]#ada0# is the first SATA hard drive, while [.filename]#kbd0# represents the keyboard.
Most devices in FreeBSD must be accessed through special files called device nodes, which are located in [.filename]#/dev#.
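For example, on a machine whose first SATA disk was detected as [.filename]#ada0# (device names differ depending on the hardware present), the saved boot messages and the corresponding device node can be examined with:
[source,shell]
....
% grep ada0 /var/run/dmesg.boot
% ls -l /dev/ada0
....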
[[basics-more-information]]
== Manual Pages
The most comprehensive documentation on FreeBSD is in the form of manual pages. Nearly every program on the system comes with a short reference manual explaining the basic operation and available arguments. These manuals can be viewed using `man`:
[source,shell]
....
% man command
....
where _command_ is the name of the command to learn about. For example, to learn more about man:ls[1], type:
[source,shell]
....
% man ls
....
Manual pages are divided into sections which represent the type of topic. In FreeBSD, the following sections are available:
. User commands.
. System calls and error numbers.
. Functions in the C libraries.
. Device drivers.
. File formats.
. Games and other diversions.
. Miscellaneous information.
. System maintenance and operation commands.
. System kernel interfaces.
In some cases, the same topic may appear in more than one section of the online manual. For example, there is a `chmod` user command and a `chmod()` system call. To tell man:man[1] which section to display, specify the section number:
[source,shell]
....
% man 1 chmod
....
This will display the manual page for the user command man:chmod[1]. References to a particular section of the online manual are traditionally placed in parentheses in written documentation, so man:chmod[1] refers to the user command and man:chmod[2] refers to the system call.
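Similarly, to read about the system call instead, request section 2:
[source,shell]
....
% man 2 chmod
....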
If the name of the manual page is unknown, use `man -k` to search for keywords in the manual page descriptions:
[source,shell]
....
% man -k mail
....
This command displays a list of commands that have the keyword "mail" in their descriptions. This is equivalent to using man:apropos[1].
To read the descriptions for all of the commands in [.filename]#/usr/bin#, type:
[source,shell]
....
% cd /usr/bin
% man -f * | more
....
or
[source,shell]
....
% cd /usr/bin
% whatis * | more
....
[[basics-info]]
=== GNU Info Files
FreeBSD includes several applications and utilities produced by the Free Software Foundation (FSF). In addition to manual pages, these programs may include hypertext documents called `info` files. These can be viewed using man:info[1] or, if package:editors/emacs[] is installed, the info mode of emacs.
To use man:info[1], type:
[source,shell]
....
% info
....
For a brief introduction, type `h`. For a quick command reference, type `?`.
diff --git a/documentation/content/en/books/handbook/bibliography/_index.adoc b/documentation/content/en/books/handbook/bibliography/_index.adoc
index 3f82a9c331..c86daf08c0 100644
--- a/documentation/content/en/books/handbook/bibliography/_index.adoc
+++ b/documentation/content/en/books/handbook/bibliography/_index.adoc
@@ -1,148 +1,149 @@
---
title: Appendix B. Bibliography
part: Part V. Appendices
prev: books/handbook/mirrors
next: books/handbook/eresources
+description: FreeBSD Handbook Bibliography
---
[appendix]
[[bibliography]]
= Bibliography
:doctype: book
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: B
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
While manual pages provide a definitive reference for individual pieces of the FreeBSD operating system, they seldom illustrate how to put the pieces together to make the whole operating system run smoothly. For this, there is no substitute for a good book or users' manual on UNIX(R) system administration.
[[bibliography-freebsd]]
== Books Specific to FreeBSD
International books:
* http://jdli.tw.FreeBSD.org/publication/book/freebsd2/index.htm[Using FreeBSD] (in Traditional Chinese), published by http://www.drmaster.com.tw/[Drmaster], 1997. ISBN 9-578-39435-7.
* FreeBSD Unleashed (Simplified Chinese translation), published by http://www.hzbook.com/[China Machine Press]. ISBN 7-111-10201-0.
* FreeBSD From Scratch Second Edition (in Simplified Chinese), published by China Machine Press. ISBN 7-111-10286-X.
* FreeBSD Handbook Second Edition (Simplified Chinese translation), published by http://www.ptpress.com.cn/[Posts & Telecom Press]. ISBN 7-115-10541-3.
* FreeBSD & Windows (in Simplified Chinese), published by http://www.tdpress.com/[China Railway Publishing House]. ISBN 7-113-03845-X
* FreeBSD Internet Services HOWTO (in Simplified Chinese), published by China Railway Publishing House. ISBN 7-113-03423-3
* FreeBSD (in Japanese), published by CUTT. ISBN 4-906391-22-2 C3055 P2400E.
* http://www.shoeisha.com/book/Detail.asp?bid=650[Complete Introduction to FreeBSD] (in Japanese), published by http://www.shoeisha.co.jp/[Shoeisha Co., Ltd]. ISBN 4-88135-473-6 P3600E.
* http://www.ascii.co.jp/pb/book1/shinkan/detail/1322785.html[Personal UNIX Starter Kit FreeBSD] (in Japanese), published by http://www.ascii.co.jp/[ASCII]. ISBN 4-7561-1733-3 P3000E.
* FreeBSD Handbook (Japanese translation), published by http://www.ascii.co.jp/[ASCII]. ISBN 4-7561-1580-2 P3800E.
* FreeBSD mit Methode (in German), published by http://www.cul.de[Computer und Literatur Verlag]/Vertrieb Hanser, 1998. ISBN 3-932311-31-0.
* http://www.mitp.de/vmi/mitp/detail/pWert/1343/[FreeBSD de Luxe] (in German), published by http://www.mitp.de[Verlag Moderne Industrie], 2003. ISBN 3-8266-1343-0.
* http://www.pc.mycom.co.jp/FreeBSD/install-manual.html[FreeBSD Install and Utilization Manual] (in Japanese), published by http://www.pc.mycom.co.jp/[Mainichi Communications Inc.], 1998. ISBN 4-8399-0112-0.
* Onno W Purbo, Dodi Maryanto, Syahrial Hubbany, Widjil Widodo _http://maxwell.itb.ac.id/[Building Internet Server with FreeBSD]_ (in Indonesian), published by http://www.elexmedia.co.id/[Elex Media Komputindo].
* Absolute BSD: The Ultimate Guide to FreeBSD (Traditional Chinese translation), published by http://www.grandtech.com.tw/[GrandTech Press], 2003. ISBN 986-7944-92-5.
* http://www.twbsd.org/cht/book/[The FreeBSD 6.0 Book] (in Traditional Chinese), published by Drmaster, 2006. ISBN 9-575-27878-X.
English language books:
* http://www.absoluteFreeBSD.com/[Absolute FreeBSD, 2nd Edition: The Complete Guide to FreeBSD], published by http://www.nostarch.com/[No Starch Press], 2007. ISBN: 978-1-59327-151-0
* http://www.freebsdmall.com/cgi-bin/fm/bsdcomp[The Complete FreeBSD], published by http://www.oreilly.com/[O'Reilly], 2003. ISBN: 0596005164
* http://www.freebsd-corp-net-guide.com/[The FreeBSD Corporate Networker's Guide], published by http://www.awl.com/aw/[Addison-Wesley], 2000. ISBN: 0201704811
* http://andrsn.stanford.edu/FreeBSD/introbook/[FreeBSD: An Open-Source Operating System for Your Personal Computer], published by The Bit Tree Press, 2001. ISBN: 0971204500
* Teach Yourself FreeBSD in 24 Hours, published by http://www.samspublishing.com/[Sams], 2002. ISBN: 0672324245
* FreeBSD 6 Unleashed, published by http://www.samspublishing.com/[Sams], 2006. ISBN: 0672328755
* FreeBSD: The Complete Reference, published by http://books.mcgraw-hill.com[McGrawHill], 2003. ISBN: 0072224096
[[bibliography-userguides]]
== Users' Guides
* Ohio State University has written a http://www.cs.duke.edu/csl/docs/unix_course/[UNIX Introductory Course] which is available online in HTML and PostScript format.
+
An Italian https://www.FreeBSD.org/doc/it_IT.ISO8859-15/books/unix-introduction/[translation] of this document is available as part of the FreeBSD Italian Documentation Project.
* http://www.jp.FreeBSD.org/[Jpman Project, Japan FreeBSD Users Group]. FreeBSD User's Reference Manual (Japanese translation). http://www.pc.mycom.co.jp/[Mainichi Communications Inc.], 1998. ISBN4-8399-0088-4 P3800E.
* http://www.ed.ac.uk/[Edinburgh University] has written an http://www.ed.ac.uk/information-services/help-consultancy/is-skills/catalogue/program-op-sys-catalogue/unix1[Online Guide] for newcomers to the UNIX environment.
[[bibliography-adminguides]]
== Administrators' Guides
* http://www.jp.FreeBSD.org/[Jpman Project, Japan FreeBSD Users Group]. FreeBSD System Administrator's Manual (Japanese translation). http://www.pc.mycom.co.jp/[Mainichi Communications Inc.], 1998. ISBN4-8399-0109-0 P3300E.
* Dreyfus, Emmanuel. http://www.eyrolles.com/Informatique/Livre/9782212114638/[Cahiers de l'Admin: BSD] 2nd Ed. (in French), Eyrolles, 2004. ISBN 2-212-11463-X
[[bibliography-programmers]]
== Programmers' Guides
* Computer Systems Research Group, UC Berkeley. _4.4BSD Programmer's Reference Manual_. O'Reilly & Associates, Inc., 1994. ISBN 1-56592-078-3
* Computer Systems Research Group, UC Berkeley. _4.4BSD Programmer's Supplementary Documents_. O'Reilly & Associates, Inc., 1994. ISBN 1-56592-079-1
* Harbison, Samuel P. and Steele, Guy L. Jr. _C: A Reference Manual_. 4th Ed. Prentice Hall, 1995. ISBN 0-13-326224-3
* Kernighan, Brian and Dennis M. Ritchie. _The C Programming Language_. 2nd Ed. PTR Prentice Hall, 1988. ISBN 0-13-110362-8
* Lehey, Greg. _Porting UNIX Software_. O'Reilly & Associates, Inc., 1995. ISBN 1-56592-126-7
* Plauger, P. J. _The Standard C Library_. Prentice Hall, 1992. ISBN 0-13-131509-9
* Spinellis, Diomidis. http://www.spinellis.gr/codereading/[Code Reading: The Open Source Perspective]. Addison-Wesley, 2003. ISBN 0-201-79940-5
* Spinellis, Diomidis. http://www.spinellis.gr/codequality/[Code Quality: The Open Source Perspective]. Addison-Wesley, 2006. ISBN 0-321-16607-8
* Stevens, W. Richard and Stephen A. Rago. _Advanced Programming in the UNIX Environment_. 2nd Ed. Reading, Mass. : Addison-Wesley, 2005. ISBN 0-201-43307-9
* Stevens, W. Richard. _UNIX Network Programming_. 2nd Ed, PTR Prentice Hall, 1998. ISBN 0-13-490012-X
[[bibliography-osinternals]]
== Operating System Internals
* Andleigh, Prabhat K. _UNIX System Architecture_. Prentice-Hall, Inc., 1990. ISBN 0-13-949843-5
* Jolitz, William. "Porting UNIX to the 386". _Dr. Dobb's Journal_. January 1991-July 1992.
* Leffler, Samuel J., Marshall Kirk McKusick, Michael J Karels and John Quarterman _The Design and Implementation of the 4.3BSD UNIX Operating System_. Reading, Mass. : Addison-Wesley, 1989. ISBN 0-201-06196-1
* Leffler, Samuel J., Marshall Kirk McKusick, _The Design and Implementation of the 4.3BSD UNIX Operating System: Answer Book_. Reading, Mass. : Addison-Wesley, 1991. ISBN 0-201-54629-9
* McKusick, Marshall Kirk, Keith Bostic, Michael J Karels, and John Quarterman. _The Design and Implementation of the 4.4BSD Operating System_. Reading, Mass. : Addison-Wesley, 1996. ISBN 0-201-54979-4
+
(Chapter 2 of this book is available link:{design-44bsd}[online] as part of the FreeBSD Documentation Project.)
* Marshall Kirk McKusick, George V. Neville-Neil _The Design and Implementation of the FreeBSD Operating System_. Boston, Mass. : Addison-Wesley, 2004. ISBN 0-201-70245-2
* Marshall Kirk McKusick, George V. Neville-Neil, Robert N. M. Watson _The Design and Implementation of the FreeBSD Operating System, 2nd Ed._. Westford, Mass. : Pearson Education, Inc., 2014. ISBN 0-321-96897-2
* Stevens, W. Richard. _TCP/IP Illustrated, Volume 1: The Protocols_. Reading, Mass. : Addison-Wesley, 1996. ISBN 0-201-63346-9
* Schimmel, Curt. _Unix Systems for Modern Architectures_. Reading, Mass. : Addison-Wesley, 1994. ISBN 0-201-63338-8
* Stevens, W. Richard. _TCP/IP Illustrated, Volume 3: TCP for Transactions, HTTP, NNTP and the UNIX Domain Protocols_. Reading, Mass. : Addison-Wesley, 1996. ISBN 0-201-63495-3
* Vahalia, Uresh. _UNIX Internals -- The New Frontiers_. Prentice Hall, 1996. ISBN 0-13-101908-2
* Wright, Gary R. and W. Richard Stevens. _TCP/IP Illustrated, Volume 2: The Implementation_. Reading, Mass. : Addison-Wesley, 1995. ISBN 0-201-63354-X
[[bibliography-security]]
== Security Reference
* Cheswick, William R. and Steven M. Bellovin. _Firewalls and Internet Security: Repelling the Wily Hacker_. Reading, Mass. : Addison-Wesley, 1995. ISBN 0-201-63357-4
* Garfinkel, Simson. _PGP Pretty Good Privacy_ O'Reilly & Associates, Inc., 1995. ISBN 1-56592-098-8
[[bibliography-hardware]]
== Hardware Reference
* Anderson, Don and Tom Shanley. _Pentium Processor System Architecture_. 2nd Ed. Reading, Mass. : Addison-Wesley, 1995. ISBN 0-201-40992-5
* Ferraro, Richard F. _Programmer's Guide to the EGA, VGA, and Super VGA Cards_. 3rd ed. Reading, Mass. : Addison-Wesley, 1995. ISBN 0-201-62490-7
* Intel Corporation publishes documentation on their CPUs, chipsets and standards on their http://developer.intel.com/[developer web site], usually as PDF files.
* Shanley, Tom. _80486 System Architecture_. 3rd Ed. Reading, Mass. : Addison-Wesley, 1995. ISBN 0-201-40994-1
* Shanley, Tom. _ISA System Architecture_. 3rd Ed. Reading, Mass. : Addison-Wesley, 1995. ISBN 0-201-40996-8
* Shanley, Tom. _PCI System Architecture_. 4th Ed. Reading, Mass. : Addison-Wesley, 1999. ISBN 0-201-30974-2
* Van Gilluwe, Frank. _The Undocumented PC_, 2nd Ed. Reading, Mass: Addison-Wesley Pub. Co., 1996. ISBN 0-201-47950-8
* Messmer, Hans-Peter. _The Indispensable PC Hardware Book_, 4th Ed. Reading, Mass : Addison-Wesley Pub. Co., 2002. ISBN 0-201-59616-4
[[bibliography-history]]
== UNIX(R) History
* Lions, John _Lions' Commentary on UNIX, 6th Ed. With Source Code_. ITP Media Group, 1996. ISBN 1573980137
* Raymond, Eric S. _The New Hacker's Dictionary, 3rd edition_. MIT Press, 1996. ISBN 0-262-68092-0. Also known as the http://www.catb.org/~esr/jargon/html/index.html[Jargon File]
* Salus, Peter H. _A quarter century of UNIX_. Addison-Wesley Publishing Company, Inc., 1994. ISBN 0-201-54777-5
* Simson Garfinkel, Daniel Weise, Steven Strassmann. _The UNIX-HATERS Handbook_. IDG Books Worldwide, Inc., 1994. ISBN 1-56884-203-1. Out of print, but available http://www.simson.net/ref/ugh.pdf[online].
* Don Libes, Sandy Ressler _Life with UNIX_ - special edition. Prentice-Hall, Inc., 1989. ISBN 0-13-536657-7
* _The BSD family tree_. https://cgit.freebsd.org/src/tree/share/misc/bsd-family-tree[https://cgit.freebsd.org/src/tree/share/misc/bsd-family-tree] or link:file://localhost/usr/share/misc/bsd-family-tree[/usr/share/misc/bsd-family-tree] on a FreeBSD machine.
* _Networked Computer Science Technical Reports Library_.
* _Old BSD releases from the Computer Systems Research group (CSRG)_. http://www.mckusick.com/csrg/[http://www.mckusick.com/csrg/]: The 4CD set covers all BSD versions from 1BSD to 4.4BSD and 4.4BSD-Lite2 (but not 2.11BSD, unfortunately). The last disk also holds the final sources plus the SCCS files.
* Kernighan, Brian _Unix: A History and a Memoir_. Kindle Direct Publishing, 2020. ISBN 978-169597855-3
[[bibliography-journals]]
== Periodicals, Journals, and Magazines
* http://www.admin-magazin.de/[Admin Magazin] (in German), published by Medialinx AG. ISSN: 2190-1066
* http://www.bsdmag.org/[BSD Magazine], published by Software Press Sp. z o.o. SK. ISSN: 1898-9144
* http://www.bsdnow.tv/[BSD Now - Video Podcast], published by Jupiter Broadcasting LLC
* http://bsdtalk.blogspot.com/[BSD Talk Podcast], by Will Backman
* http://freebsdjournal.com/[FreeBSD Journal], published by S&W Publishing, sponsored by The FreeBSD Foundation. ISBN: 978-0-615-88479-0
diff --git a/documentation/content/en/books/handbook/book.adoc b/documentation/content/en/books/handbook/book.adoc
index e840e6704d..31eec3bed2 100644
--- a/documentation/content/en/books/handbook/book.adoc
+++ b/documentation/content/en/books/handbook/book.adoc
@@ -1,167 +1,167 @@
---
title: FreeBSD Handbook
authors:
- author: The FreeBSD Documentation Project
copyright: 1995-2021 The FreeBSD Documentation Project
-releaseinfo: "$FreeBSD$"
+description: FreeBSD Handbook
trademarks: ["freebsd", "ibm", "ieee", "redhat", "3com", "adobe", "apple", "intel", "linux", "microsoft", "opengroup", "sun", "realnetworks", "oracle", "3ware", "arm", "adaptec", "google", "heidelberger", "intuit", "lsilogic", "themathworks", "thomson", "vmware", "wolframresearch", "xiph", "xfree86", "general"]
---
= FreeBSD Handbook
:doctype: book
:toc: macro
:toclevels: 2
:icons: font
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnums:
:sectnumlevels: 6
:partnums:
:chapter-signifier: Chapter
:part-signifier: Part
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:book: true
:pdf: false
:pgpkeys-path: ../../../../../
ifeval::["{backend}" == "html5"]
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
:chapters-path: content/en/books/handbook/
endif::[]
ifeval::["{backend}" == "pdf"]
include::../../../../shared/mirrors.adoc[]
include::../../../../shared/authors.adoc[]
include::../../../../shared/releases.adoc[]
include::../../../../shared/en/mailing-lists.adoc[]
include::../../../../shared/en/teams.adoc[]
include::../../../../shared/en/urls.adoc[]
:chapters-path:
endif::[]
ifeval::["{backend}" == "epub3"]
include::../../../../shared/mirrors.adoc[]
include::../../../../shared/authors.adoc[]
include::../../../../shared/releases.adoc[]
include::../../../../shared/en/mailing-lists.adoc[]
include::../../../../shared/en/teams.adoc[]
include::../../../../shared/en/urls.adoc[]
:chapters-path:
endif::[]
[.abstract-title]
[abstract]
Abstract
Welcome to FreeBSD! This handbook covers the installation and day to day use of _FreeBSD {rel122-current}-RELEASE_, _FreeBSD {rel121-current}-RELEASE_ and _FreeBSD {rel114-current}-RELEASE_. This book is the result of ongoing work by many individuals. Some sections might be outdated. Those interested in helping to update and expand this document should send email to the {freebsd-doc}.
The latest version of this book is available from the https://www.FreeBSD.org/[FreeBSD web site]. Previous versions can be obtained from https://docs.FreeBSD.org/doc/[https://docs.FreeBSD.org/doc/]. The book can be downloaded in a variety of formats and compression options from the https://download.freebsd.org/ftp/doc/[FreeBSD FTP server] or one of the numerous crossref:mirrors[mirrors-ftp,mirror sites]. Printed copies can be purchased at the https://www.freebsdmall.com/[FreeBSD Mall]. Searches can be performed on the handbook and other documents on the link:https://www.FreeBSD.org/search/[search page].
'''
toc::[]
:sectnums!:
-include::{chapters-path}preface/_index.adoc[leveloffset=+1, lines=7..-1]
+include::{chapters-path}preface/_index.adoc[leveloffset=+1, lines=8..-1]
:sectnums:
// Section one
include::{chapters-path}parti.adoc[lines=7..18]
-include::{chapters-path}introduction/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}introduction/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
-include::{chapters-path}bsdinstall/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}bsdinstall/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
-include::{chapters-path}basics/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}basics/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
-include::{chapters-path}ports/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}ports/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
-include::{chapters-path}x11/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}x11/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
// Section two
include::{chapters-path}partii.adoc[lines=7..18]
-include::{chapters-path}desktop/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}desktop/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
-include::{chapters-path}multimedia/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}multimedia/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
-include::{chapters-path}kernelconfig/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}kernelconfig/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
-include::{chapters-path}printing/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}printing/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
-include::{chapters-path}linuxemu/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}linuxemu/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
-include::{chapters-path}wine/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}wine/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
// Section three
include::{chapters-path}partiii.adoc[lines=7..12]
-include::{chapters-path}config/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}config/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
-include::{chapters-path}boot/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}boot/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
-include::{chapters-path}security/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}security/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
-include::{chapters-path}jails/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}jails/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
-include::{chapters-path}mac/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}mac/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
-include::{chapters-path}audit/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}audit/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
-include::{chapters-path}disks/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}disks/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
-include::{chapters-path}geom/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}geom/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
-include::{chapters-path}zfs/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}zfs/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
-include::{chapters-path}filesystems/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}filesystems/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
-include::{chapters-path}virtualization/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}virtualization/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
-include::{chapters-path}l10n/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}l10n/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
-include::{chapters-path}cutting-edge/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}cutting-edge/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
-include::{chapters-path}dtrace/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}dtrace/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
-include::{chapters-path}usb-device-mode/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}usb-device-mode/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
// Section four
include::{chapters-path}partiv.adoc[lines=7..19]
-include::{chapters-path}serialcomms/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}serialcomms/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
-include::{chapters-path}ppp-and-slip/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}ppp-and-slip/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
-include::{chapters-path}mail/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}mail/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
-include::{chapters-path}network-servers/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}network-servers/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
-include::{chapters-path}firewalls/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}firewalls/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
-include::{chapters-path}advanced-networking/_index.adoc[leveloffset=+1, lines=8..34;44..-1]
+include::{chapters-path}advanced-networking/_index.adoc[leveloffset=+1, lines=8..35;45..-1]
// Section five
include::{chapters-path}partv.adoc[lines=7..8]
:sectnums!:
-include::{chapters-path}mirrors/_index.adoc[leveloffset=+1, lines=8..21;30..-1]
+include::{chapters-path}mirrors/_index.adoc[leveloffset=+1, lines=9..22;31..-1]
-include::{chapters-path}bibliography/_index.adoc[leveloffset=+1, lines=8..21;29..-1]
+include::{chapters-path}bibliography/_index.adoc[leveloffset=+1, lines=9..22;30..-1]
-include::{chapters-path}eresources/_index.adoc[leveloffset=+1, lines=8..21;30..-1]
+include::{chapters-path}eresources/_index.adoc[leveloffset=+1, lines=9..22;31..-1]
-include::{chapters-path}pgpkeys/_index.adoc[leveloffset=+1, lines=8..21;31..-1]
+include::{chapters-path}pgpkeys/_index.adoc[leveloffset=+1, lines=9..22;32..-1]
:sectnums:
diff --git a/documentation/content/en/books/handbook/boot/_index.adoc b/documentation/content/en/books/handbook/boot/_index.adoc
index 6e281e8ab3..35dc1d5156 100644
--- a/documentation/content/en/books/handbook/boot/_index.adoc
+++ b/documentation/content/en/books/handbook/boot/_index.adoc
@@ -1,368 +1,369 @@
---
title: Chapter 13. The FreeBSD Booting Process
part: Part III. System Administration
prev: books/handbook/config
next: books/handbook/security
+description: An introduction to the FreeBSD Booting Process, demonstrates how to customize the FreeBSD boot process, including everything that happens until the FreeBSD kernel has started, probed for devices, and started init
---
[[boot]]
= The FreeBSD Booting Process
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 13
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../../images/books/handbook/boot/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/books/handbook/boot/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/books/handbook/boot/
endif::[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[boot-synopsis]]
== Synopsis
The process of starting a computer and loading the operating system is referred to as "the bootstrap process", or "booting". FreeBSD's boot process provides a great deal of flexibility in customizing what happens when the system starts, including the ability to select from different operating systems installed on the same computer, different versions of the same operating system, or a different installed kernel.
This chapter details the configuration options that can be set. It demonstrates how to customize the FreeBSD boot process, including everything that happens until the FreeBSD kernel has started, probed for devices, and started man:init[8]. This occurs when the text color of the boot messages changes from bright white to grey.
After reading this chapter, you will recognize:
* The components of the FreeBSD bootstrap system and how they interact.
* The options that can be passed to the components in the FreeBSD bootstrap in order to control the boot process.
* The basics of setting device hints.
* How to boot into single- and multi-user mode and how to properly shut down a FreeBSD system.
[NOTE]
====
This chapter only describes the boot process for FreeBSD running on x86 and amd64 systems.
====
[[boot-introduction]]
== FreeBSD Boot Process
Turning on a computer and starting the operating system poses an interesting dilemma. By definition, the computer does not know how to do anything until the operating system is started. This includes running programs from the disk. If the computer can not run a program from the disk without the operating system, and the operating system programs are on the disk, how is the operating system started?
This problem parallels one in the book The Adventures of Baron Munchausen. A character had fallen part way down a manhole, and pulled himself out by grabbing his bootstraps and lifting. In the early days of computing, the term _bootstrap_ was applied to the mechanism used to load the operating system. It has since become shortened to "booting".
On x86 hardware, the Basic Input/Output System (BIOS) is responsible for loading the operating system. The BIOS looks on the hard disk for the Master Boot Record (MBR), which must be located in a specific place on the disk. The BIOS has enough knowledge to load and run the MBR, and assumes that the MBR can then carry out the rest of the tasks involved in loading the operating system, possibly with the help of the BIOS.
[NOTE]
====
FreeBSD provides for booting from both the older MBR standard, and the newer GUID Partition Table (GPT). GPT partitioning is often found on computers with the Unified Extensible Firmware Interface (UEFI). However, FreeBSD can boot from GPT partitions even on machines with only a legacy BIOS with man:gptboot[8]. Work is under way to provide direct UEFI booting.
====
The code within the MBR is typically referred to as a _boot manager_, especially when it interacts with the user. The boot manager usually has more code in the first track of the disk or within the file system. Examples of boot managers include the standard FreeBSD boot manager boot0, also called Boot Easy, and Grub, which is used by many Linux(R) distributions.
If only one operating system is installed, the MBR searches for the first bootable (active) slice on the disk, and then runs the code on that slice to load the remainder of the operating system. When multiple operating systems are present, a different boot manager can be installed to display a list of operating systems so the user can select one to boot.
The remainder of the FreeBSD bootstrap system is divided into three stages. The first stage knows just enough to get the computer into a specific state and run the second stage. The second stage can do a little bit more, before running the third stage. The third stage finishes the task of loading the operating system. The work is split into three stages because the MBR puts limits on the size of the programs that can be run at stages one and two. Chaining the tasks together allows FreeBSD to provide a more flexible loader.
The kernel is then started and begins to probe for devices and initialize them for use. Once the kernel boot process is finished, the kernel passes control to the user process man:init[8], which makes sure the disks are in a usable state, starts the user-level resource configuration which mounts file systems, sets up network cards to communicate on the network, and starts the processes which have been configured to run at startup.
This section describes these stages in more detail and demonstrates how to interact with the FreeBSD boot process.
[[boot-boot0]]
=== The Boot Manager
The boot manager code in the MBR is sometimes referred to as _stage zero_ of the boot process. By default, FreeBSD uses the boot0 boot manager.
The MBR installed by the FreeBSD installer is based on [.filename]#/boot/boot0#. The size and capability of boot0 is restricted to 446 bytes due to the slice table and `0x55AA` identifier at the end of the MBR. If boot0 and multiple operating systems are installed, a message similar to this example will be displayed at boot time:
[[boot-boot0-example]]
.[.filename]#boot0# Screenshot
[example]
====
[source,shell]
....
F1 Win
F2 FreeBSD
Default: F2
....
====
Other operating systems will overwrite an existing MBR if they are installed after FreeBSD. If this happens, or to replace the existing MBR with the FreeBSD MBR, use the following command:
[source,shell]
....
# fdisk -B -b /boot/boot0 device
....
where _device_ is the boot disk, such as [.filename]#ad0# for the first IDE disk, [.filename]#ad2# for the first IDE disk on a second IDE controller, or [.filename]#da0# for the first SCSI disk. To create a custom configuration of the MBR, refer to man:boot0cfg[8].
[[boot-boot1]]
=== Stage One and Stage Two
Conceptually, the first and second stages are part of the same program on the same area of the disk. Due to space constraints, they have been split into two, but are always installed together. They are copied from the combined [.filename]#/boot/boot# by the FreeBSD installer or `bsdlabel`.
These two stages are located outside file systems, in the first track of the boot slice, starting with the first sector. This is where boot0, or any other boot manager, expects to find a program to run which will continue the boot process.
The first stage, [.filename]#boot1#, is very simple, since it can only be 512 bytes in size. It knows just enough about the FreeBSD _bsdlabel_, which stores information about the slice, to find and execute [.filename]#boot2#.
Stage two, [.filename]#boot2#, is slightly more sophisticated, and understands the FreeBSD file system enough to find files. It can provide a simple interface to choose the kernel or loader to run. It runs loader, which is much more sophisticated and provides a boot configuration file. If the boot process is interrupted at stage two, the following interactive screen is displayed:
[[boot-boot2-example]]
.[.filename]#boot2# Screenshot
[example]
====
[source,shell]
....
>> FreeBSD/i386 BOOT
Default: 0:ad(0,a)/boot/loader
boot:
....
====
To replace the installed [.filename]#boot1# and [.filename]#boot2#, use `bsdlabel`, where _diskslice_ is the disk and slice to boot from, such as [.filename]#ad0s1# for the first slice on the first IDE disk:
[source,shell]
....
# bsdlabel -B diskslice
....
[WARNING]
====
If just the disk name is used, such as [.filename]#ad0#, `bsdlabel` will create the disk in "dangerously dedicated mode", without slices. This is probably not the desired action, so double check the _diskslice_ before pressing kbd:[Return].
====
[[boot-loader]]
=== Stage Three
The loader is the final stage of the three-stage bootstrap process. It is located on the file system, usually as [.filename]#/boot/loader#.
The loader is intended as an interactive method for configuration, using a built-in command set, backed up by a more powerful interpreter which has a more complex command set.
During initialization, loader will probe for a console and for disks, and figure out which disk it is booting from. It will set variables accordingly, and an interpreter is started where user commands can be passed from a script or interactively.
The loader will then read [.filename]#/boot/loader.rc#, which by default reads in [.filename]#/boot/defaults/loader.conf# which sets reasonable defaults for variables and reads [.filename]#/boot/loader.conf# for local changes to those variables. [.filename]#loader.rc# then acts on these variables, loading whichever modules and kernel are selected.
Finally, by default, loader issues a 10 second wait for key presses, and boots the kernel if it is not interrupted. If interrupted, the user is presented with a prompt which understands the command set, where the user may adjust variables, unload all modules, load modules, and then finally boot or reboot. <<boot-loader-commands>> lists the most commonly used loader commands. For a complete discussion of all available commands, refer to man:loader[8].
[[boot-loader-commands]]
.Loader Built-In Commands
[cols="20%,80%", frame="none", options="header"]
|===
| Variable
| Description
|autoboot _seconds_
|Proceeds to boot the kernel if not interrupted within the time span given, in seconds. It displays a countdown, and the default time span is 10 seconds.
|boot [`-options`] [`kernelname`]
|Immediately proceeds to boot the kernel, with any specified options or kernel name. Providing a kernel name on the command-line is only applicable after an `unload` has been issued. Otherwise, the previously-loaded kernel will be used. If _kernelname_ is not qualified, it will be searched under _/boot/kernel_ and _/boot/modules_.
|boot-conf
|Goes through the same automatic configuration of modules based on specified variables, most commonly `kernel`. This only makes sense if `unload` is used first, before changing some variables.
|help [`_topic_`]
|Shows help messages read from [.filename]#/boot/loader.help#. If the topic given is `index`, the list of available topics is displayed.
|include `_filename_` ...
|Reads the specified file and interprets it line by line. An error immediately stops the `include`.
|load [-t ``_type_``] `_filename_`
|Loads the kernel, kernel module, or file of the type given, with the specified filename. Any arguments after _filename_ are passed to the file. If _filename_ is not qualified, it will be searched under _/boot/kernel_ and _/boot/modules_.
|ls [-l] [``_path_``]
|Displays a listing of files in the given path, or the root directory, if the path is not specified. If `-l` is specified, file sizes will also be shown.
|lsdev [`-v`]
|Lists all of the devices from which it may be possible to load modules. If `-v` is specified, more details are printed.
|lsmod [`-v`]
|Displays loaded modules. If `-v` is specified, more details are shown.
|more `_filename_`
|Displays the files specified, with a pause at each `LINES` displayed.
|reboot
|Immediately reboots the system.
|set `_variable_`, set `_variable=value_`
|Sets the specified environment variables.
|unload
|Removes all loaded modules.
|===
Here are some practical examples of loader usage. To boot the usual kernel in single-user mode:
[source,shell]
....
boot -s
....
To unload the usual kernel and modules and then load the previous or another, specified kernel:
[source,shell]
....
unload
load /path/to/kernelfile
....
Use the qualified [.filename]#/boot/GENERIC/kernel# to refer to the default kernel that comes with an installation, or [.filename]#/boot/kernel.old/kernel#, to refer to the previously installed kernel before a system upgrade or before configuring a custom kernel.
Use the following to load the usual modules with another kernel. Note that in this case the qualified name is not necessary:
[source,shell]
....
unload
set kernel="mykernel"
boot-conf
....
To load an automated kernel configuration script:
[source,shell]
....
load -t userconfig_script /boot/kernel.conf
....
[[boot-init]]
=== Last Stage
Once the kernel is loaded by either loader or by boot2, which bypasses loader, it examines any boot flags and adjusts its behavior as necessary. <<boot-kernel>> lists the commonly used boot flags. Refer to man:boot[8] for more information on the other boot flags.
[[boot-kernel]]
.Kernel Interaction During Boot
[cols="1,1", frame="none", options="header"]
|===
| Option
| Description
|`-a`
|During kernel initialization, ask for the device to mount as the root file system.
|`-C`
|Boot the root file system from a CDROM.
|`-s`
|Boot into single-user mode.
|`-v`
|Be more verbose during kernel startup.
|===
Once the kernel has finished booting, it passes control to the user process man:init[8], which is located at [.filename]#/sbin/init#, or the program path specified in the `init_path` variable in `loader`. This is the last stage of the boot process.
The boot sequence makes sure that the file systems available on the system are consistent. If a UFS file system is not, and `fsck` cannot fix the inconsistencies, init drops the system into single-user mode so that the system administrator can resolve the problem directly. Otherwise, the system boots into multi-user mode.
[[boot-singleuser]]
==== Single-User Mode
A user can specify this mode by booting with `-s` or by setting the `boot_single` variable in loader. It can also be reached by running `shutdown now` from multi-user mode. Single-user mode begins with this message:
[.programlisting]
....
Enter full pathname of shell or RETURN for /bin/sh:
....
If the user presses kbd:[Enter], the system will enter the default Bourne shell. To specify a different shell, input the full path to the shell.
Single-user mode is usually used to repair a system that will not boot due to an inconsistent file system or an error in a boot configuration file. It can also be used to reset the `root` password when it is unknown. These actions are possible as the single-user mode prompt gives full, local access to the system and its configuration files. There is no networking in this mode.
While single-user mode is useful for repairing a system, it poses a security risk unless the system is in a physically secure location. By default, any user who can gain physical access to a system will have full control of that system after booting into single-user mode.
If the system `console` is changed to `insecure` in [.filename]#/etc/ttys#, the system will first prompt for the `root` password before initiating single-user mode. This adds a measure of security while removing the ability to reset the `root` password when it is unknown.
[[boot-insecure-console]]
.Configuring an Insecure Console in [.filename]#/etc/ttys#
[example]
====
[.programlisting]
....
# name getty type status comments
#
# If console is marked "insecure", then init will ask for the root password
# when going to single-user mode.
console none unknown off insecure
....
====
An `insecure` console means that physical security to the console is considered to be insecure, so only someone who knows the `root` password may use single-user mode.
[[boot-multiuser]]
==== Multi-User Mode
If init finds the file systems to be in order, or once the user has finished their commands in single-user mode and has typed `exit` to leave single-user mode, the system enters multi-user mode, in which it starts the resource configuration of the system.
The resource configuration system reads in configuration defaults from [.filename]#/etc/defaults/rc.conf# and system-specific details from [.filename]#/etc/rc.conf#. It then proceeds to mount the system file systems listed in [.filename]#/etc/fstab#. It starts up networking services, miscellaneous system daemons, then the startup scripts of locally installed packages.
To learn more about the resource configuration system, refer to man:rc[8] and examine the scripts located in [.filename]#/etc/rc.d#.
[[device-hints]]
== Device Hints
During initial system startup, the boot man:loader[8] reads man:device.hints[5]. This file stores kernel boot information known as variables, sometimes referred to as "device hints". These "device hints" are used by device drivers for device configuration.
Device hints may also be specified at the Stage 3 boot loader prompt, as demonstrated in <<boot-loader>>. Variables can be added using `set`, removed with `unset`, and viewed with `show`. Variables set in [.filename]#/boot/device.hints# can also be overridden. Device hints entered at the boot loader are not permanent and will not be applied on the next reboot.
Once the system is booted, man:kenv[1] can be used to dump all of the variables.
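For example, to list only the device hints from the kernel environment, the output of man:kenv[1] can be filtered with man:grep[1] (a sketch; the exact hints present depend on the system):
[source,shell]
....
% kenv | grep '^hint\.'
....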
The syntax for [.filename]#/boot/device.hints# is one variable per line, using the hash "#" as comment markers. Lines are constructed as follows:
[source,shell]
....
hint.driver.unit.keyword="value"
....
The syntax for the Stage 3 boot loader is:
[source,shell]
....
set hint.driver.unit.keyword=value
....
where `driver` is the device driver name, `unit` is the device driver unit number, and `keyword` is the hint keyword. The keyword may consist of the following options:
* `at`: specifies the bus which the device is attached to.
* `port`: specifies the start address of the I/O to be used.
* `irq`: specifies the interrupt request number to be used.
* `drq`: specifies the DMA channel number.
* `maddr`: specifies the physical memory address occupied by the device.
* `flags`: sets various flag bits for the device.
* `disabled`: if set to `1` the device is disabled.
Since device drivers may accept or require more hints not listed here, viewing a driver's manual page is recommended. For more information, refer to man:device.hints[5], man:kenv[1], man:loader.conf[5], and man:loader[8].
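As a purely illustrative sketch, a few lines in [.filename]#/boot/device.hints# using the keywords above might look like the following. The `foo` driver name and the values are hypothetical examples, not recommendations for any particular system:
[.programlisting]
....
# Attach the first unit of the (hypothetical) foo driver to the ISA bus
hint.foo.0.at="isa"
hint.foo.0.irq="5"
# Disable the second unit entirely
hint.foo.1.disabled="1"
....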
[[boot-shutdown]]
== Shutdown Sequence
Upon controlled shutdown using man:shutdown[8], man:init[8] will attempt to run the script [.filename]#/etc/rc.shutdown#, and then proceed to send all processes the `TERM` signal, and subsequently the `KILL` signal to any that do not terminate in a timely manner.
To power down a FreeBSD machine on architectures and systems that support power management, use `shutdown -p now` to turn the power off immediately. To reboot a FreeBSD system, use `shutdown -r now`. One must be `root` or a member of `operator` in order to run man:shutdown[8]. One can also use man:halt[8] and man:reboot[8]. Refer to their manual pages and to man:shutdown[8] for more information.
To modify group membership, refer to crossref:basics[users-synopsis,“Users and Basic Account Management”].
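For example, an existing user could be added to the `operator` group with man:pw[8] (the user name `jru` below is only a placeholder):
[source,shell]
....
# pw groupmod operator -m jru
....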
[NOTE]
====
Power management requires man:acpi[4] to be loaded as a module or statically compiled into a custom kernel.
====
diff --git a/documentation/content/en/books/handbook/bsdinstall/_index.adoc b/documentation/content/en/books/handbook/bsdinstall/_index.adoc
index dee524f938..f310306e02 100644
--- a/documentation/content/en/books/handbook/bsdinstall/_index.adoc
+++ b/documentation/content/en/books/handbook/bsdinstall/_index.adoc
@@ -1,1073 +1,1074 @@
---
title: Chapter 2. Installing FreeBSD
part: Part I. Getting Started
prev: books/handbook/introduction
next: books/handbook/basics
+description: Guide about how to install FreeBSD, the minimum hardware requirements and supported architectures, how to create the installation media, etc
---
[[bsdinstall]]
= Installing FreeBSD
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 2
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../../images/books/handbook/bsdinstall/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/books/handbook/bsdinstall/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/books/handbook/bsdinstall/
endif::[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[bsdinstall-synopsis]]
== Synopsis
There are several different ways of getting FreeBSD to run, depending on the environment. Those are:
* Virtual Machine images, to download and import on a virtual environment of choice. These can be downloaded from the https://www.freebsd.org/where/[Download FreeBSD] page. There are images for KVM ("qcow2"), VMWare ("vmdk"), Hyper-V ("vhd"), and raw device images that are universally supported. These are not installation images, but rather preconfigured ("already installed") instances, ready to run and perform post-installation tasks.
* Virtual Machine images available at Amazon's https://aws.amazon.com/marketplace/pp/B07L6QV354[AWS Marketplace], https://azuremarketplace.microsoft.com/en-us/marketplace/apps?search=freebsd&page=1[Microsoft Azure Marketplace], and https://console.cloud.google.com/marketplace/details/freebsd-cloud/freebsd-12[Google Cloud Platform], to run on their respective hosting services. For more information on deploying FreeBSD on Azure please consult the relevant chapter in the https://docs.microsoft.com/en-us/azure/virtual-machines/linux/freebsd-intro-on-azure[Azure Documentation].
* SD card images, for embedded systems such as Raspberry Pi or BeagleBone Black. These can be downloaded from the https://www.freebsd.org/where/[Download FreeBSD] page. These files must be uncompressed and written as a raw image to an SD card, from which the board will then boot.
* Installation images, to install FreeBSD on a hard drive for the usual desktop, laptop, or server systems.
The rest of this chapter describes the fourth case, explaining how to install FreeBSD using the text-based installation program named bsdinstall.
In general, the installation instructions in this chapter are written for the i386(TM) and AMD64 architectures. Where applicable, instructions specific to other platforms will be listed. There may be minor differences between the installer and what is shown here, so use this chapter as a general guide rather than as a set of literal instructions.
[NOTE]
====
Users who prefer to install FreeBSD using a graphical installer may be interested in https://ghostbsd.org[GhostBSD], https://www.midnightbsd.org[MidnightBSD] or https://nomadbsd.org[NomadBSD].
====
After reading this chapter, you will know:
* The minimum hardware requirements and FreeBSD supported architectures.
* How to create the FreeBSD installation media.
* How to start bsdinstall.
* The questions bsdinstall will ask, what they mean, and how to answer them.
* How to troubleshoot a failed installation.
* How to access a live version of FreeBSD before committing to an installation.
Before reading this chapter, you should:
* Read the supported hardware list that shipped with the version of FreeBSD to be installed and verify that the system's hardware is supported.
[[bsdinstall-hardware]]
== Minimum Hardware Requirements
The hardware requirements to install FreeBSD vary by architecture. Hardware architectures and devices supported by a FreeBSD release are listed on the link:https://www.FreeBSD.org/releases/[FreeBSD Release Information] page. The link:https://www.FreeBSD.org/where/[FreeBSD download page] also has recommendations for choosing the correct image for different architectures.
A FreeBSD installation requires a minimum of 96 MB of RAM and 1.5 GB of free hard drive space. However, such small amounts of memory and disk space are really only suitable for custom applications like embedded appliances. General-purpose desktop systems need more resources. 2-4 GB RAM and at least 8 GB hard drive space is a good starting point.
These are the processor requirements for each architecture:
amd64::
This is the most common desktop and laptop processor type, used in most modern systems. Intel(R) calls it Intel64. Other manufacturers sometimes call it x86-64.
+
Examples of amd64 compatible processors include: AMD Athlon(TM)64, AMD Opteron(TM), multi-core Intel(R) Xeon(TM), and Intel(R) Core(TM) 2 and later processors.
i386::
Older desktops and laptops often use this 32-bit, x86 architecture.
+
Almost all i386-compatible processors with a floating point unit are supported. All Intel(R) processors 486 or higher are supported.
However, binaries released by the project are compiled for the 686 processor, so a special build will be needed for 486 and 586 systems.
+
FreeBSD will take advantage of Physical Address Extensions (PAE) support on CPUs with this feature. A kernel with the PAE feature enabled will detect memory above 4 GB and allow it to be used by the system. However, using PAE places constraints on device drivers and other features of FreeBSD.
arm64::
Most embedded boards are 64-bit ARM computers.
A number of arm64 servers are supported.
arm::
Older armv7 boards are supported.
powerpc::
All New World ROM Apple(R) Mac(R) systems with built-in USB are supported. SMP is supported on machines with multiple CPUs.
+
A 32-bit kernel can only use the first 2 GB of RAM.
[[bsdinstall-pre]]
== Pre-Installation Tasks
Once it has been determined that the system meets the minimum hardware requirements for installing FreeBSD, the installation file should be downloaded and the installation media prepared. Before doing this, check that the system is ready for an installation by verifying the items in this checklist:
[.procedure]
. *Back Up Important Data*
+
Before installing any operating system, _always_ backup all important data first. Do not store the backup on the system being installed. Instead, save the data to a removable disk such as a USB drive, another system on the network, or an online backup service. Test the backup before starting the installation to make sure it contains all of the needed files. Once the installer formats the system's disk, all data stored on that disk will be lost.
. *Decide Where to Install FreeBSD*
+
If FreeBSD will be the only operating system installed, this step can be skipped. But if FreeBSD will share the disk with another operating system, decide which disk or partition will be used for FreeBSD.
+
In the i386 and amd64 architectures, disks can be divided into multiple partitions using one of two partitioning schemes. A traditional _Master Boot Record_ (MBR) holds a partition table defining up to four _primary partitions_. For historical reasons, FreeBSD calls these primary partition _slices_. One of these primary partitions can be made into an _extended partition_ containing multiple _logical partitions_. The _GUID Partition Table_ (GPT) is a newer and simpler method of partitioning a disk. Common GPT implementations allow up to 128 partitions per disk, eliminating the need for logical partitions.
+
The FreeBSD boot loader requires either a primary or GPT partition. If all of the primary or GPT partitions are already in use, one must be freed for FreeBSD. To create a partition without deleting existing data, use a partition resizing tool to shrink an existing partition and create a new partition using the freed space.
+
A variety of free and commercial partition resizing tools are listed at http://en.wikipedia.org/wiki/List_of_disk_partitioning_software[http://en.wikipedia.org/wiki/List_of_disk_partitioning_software]. GParted Live (http://gparted.sourceforge.net/livecd.php[http://gparted.sourceforge.net/livecd.php]) is a free live CD which includes the GParted partition editor. GParted is also included with many other Linux live CD distributions.
+
[WARNING]
====
When used properly, disk shrinking utilities can safely create space for creating a new partition. Since the possibility of selecting the wrong partition exists, always backup any important data and verify the integrity of the backup before modifying disk partitions.
====
+
Disk partitions containing different operating systems make it possible to install multiple operating systems on one computer. An alternative is to use virtualization (crossref:virtualization[virtualization,Virtualization]) which allows multiple operating systems to run at the same time without modifying any disk partitions.
. *Collect Network Information*
+
Some FreeBSD installation methods require a network connection in order to download the installation files. After any installation, the installer will offer to set up the system's network interfaces.
+
If the network has a DHCP server, it can be used to provide automatic network configuration. If DHCP is not available, the following network information for the system must be obtained from the local network administrator or Internet service provider:
+
[[bsdinstall-collect-network-information]]
Required Network Information
.. IP address
.. Subnet mask
.. IP address of default gateway
.. Domain name of the network
.. IP addresses of the network's DNS servers
. *Check for FreeBSD Errata*
+
Although the FreeBSD Project strives to ensure that each release of FreeBSD is as stable as possible, bugs occasionally creep into the process. On very rare occasions those bugs affect the installation process. As these problems are discovered and fixed, they are noted in the FreeBSD Errata (link:https://www.FreeBSD.org/releases/{rel121-current}R/errata/[https://www.freebsd.org/releases/{rel121-current}R/errata/]) on the FreeBSD web site. Check the errata before installing to make sure that there are no problems that might affect the installation.
+
Information and errata for all the releases can be found on the release information section of the FreeBSD web site (link:https://www.FreeBSD.org/releases/[https://www.freebsd.org/releases/]).
[[bsdinstall-installation-media]]
=== Prepare the Installation Media
The FreeBSD installer is not an application that can be run from within another operating system. Instead, download a FreeBSD installation file, burn it to the media associated with its file type and size (CD, DVD, or USB), and boot the system to install from the inserted media.
FreeBSD installation files are available at link:https://www.FreeBSD.org/where/[www.freebsd.org/where/]. Each installation file's name includes the release version of FreeBSD, the architecture, and the type of file. For example, to install FreeBSD 12.1 on an amd64 system from a DVD, download [.filename]#FreeBSD-12.1-RELEASE-amd64-dvd1.iso#, burn this file to a DVD, and boot the system with the DVD inserted.
Installation files are available in several formats. The formats vary depending on computer architecture and media type.
[[bsdinstall-installation-media-uefi]]
Additional installation files are included for computers that boot with UEFI (Unified Extensible Firmware Interface). The names of these files include the string [.filename]#uefi#.
File types:
* `-bootonly.iso`: This is the smallest installation file as it only contains the installer. A working Internet connection is required during installation as the installer will download the files it needs to complete the FreeBSD installation. This file should be burned to a CD using a CD burning application.
* `-disc1.iso`: This file contains all of the files needed to install FreeBSD, its source, and the Ports Collection. It should be burned to a CD using a CD burning application.
* `-dvd1.iso`: This file contains all of the files needed to install FreeBSD, its source, and the Ports Collection. It also contains a set of popular binary packages for installing a window manager and some applications so that a complete system can be installed from media without requiring a connection to the Internet. This file should be burned to a DVD using a DVD burning application.
* `-memstick.img`: This file contains all of the files needed to install FreeBSD, its source, and the Ports Collection. It should be burned to a USB stick using the instructions below.
* `-mini-memstick.img`: Like `-bootonly.iso`, this image does not include the installation files but downloads them as needed. A working internet connection is required during installation. Write this file to a USB stick as shown in <<bsdinstall-usb>>.
After downloading the image file, download [.filename]#CHECKSUM.SHA256# from the same directory. Calculate a _checksum_ for the image file. FreeBSD provides man:sha256[1] for this, used as `sha256 _imagefilename_`. Other operating systems have similar programs.
Compare the calculated checksum with the one shown in [.filename]#CHECKSUM.SHA256#. The checksums must match exactly. If the checksums do not match, the image file is corrupt and must be downloaded again.
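For example, on a FreeBSD system the checksum of a downloaded memory stick image could be calculated and then compared against the published value like this (substitute the name of the image that was actually downloaded):
[source,shell]
....
% sha256 FreeBSD-12.1-RELEASE-amd64-memstick.img
% grep memstick.img CHECKSUM.SHA256
....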
[[bsdinstall-usb]]
==== Writing an Image File to USB
The [.filename]#\*.img# file is an _image_ of the complete contents of a memory stick. It _cannot_ be copied to the target device as a file. Several applications are available for writing the [.filename]#*.img# to a USB stick. This section describes two of these utilities.
[IMPORTANT]
====
Before proceeding, back up any important data on the USB stick. This procedure will erase the existing data on the stick.
====
[[bsdinstall-usb-dd]]
[.procedure]
****
*Procedure. Using `dd` to Write the Image* +
[WARNING]
====
This example uses [.filename]#/dev/da0# as the target device where the image will be written. Be _very careful_ that the correct device is used as this command will destroy the existing data on the specified target device.
====
. The `dd` command-line utility is available on BSD, Linux(R), and Mac OS(R) systems. To burn the image using `dd`, insert the USB stick and determine its device name. Then, specify the name of the downloaded installation file and the device name for the USB stick. This example burns the amd64 installation image to the first USB device on an existing FreeBSD system.
+
[source,shell]
....
# dd if=FreeBSD-12.1-RELEASE-amd64-memstick.img of=/dev/da0 bs=1M conv=sync
....
+
If this command fails, verify that the USB stick is not mounted and that the device name is for the disk, not a partition.
Some operating systems might require this command to be run with man:sudo[8].
The man:dd[1] syntax varies slightly across different platforms; for example, Mac OS(R) requires a lower-case `bs=1m`.
Systems like Linux(R) might buffer writes.
To force all writes to complete, use man:sync[8].
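+
On a Linux(R) system, a roughly equivalent invocation might look like the following (the target device [.filename]#/dev/sdb# is only a placeholder; double-check the correct device name before running the command):
+
[source,shell]
....
$ sudo dd if=FreeBSD-12.1-RELEASE-amd64-memstick.img of=/dev/sdb bs=1M conv=sync
$ sync
....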
****
[.procedure]
****
*Procedure. Using Windows(R) to Write the Image* +
[WARNING]
====
Be sure to give the correct drive letter as the existing data on the specified drive will be overwritten and destroyed.
====
. *Obtaining Image Writer for Windows(R)*
+
Image Writer for Windows(R) is a free application that can correctly write an image file to a memory stick. Download it from https://sourceforge.net/projects/win32diskimager/[https://sourceforge.net/projects/win32diskimager/] and extract it into a folder.
. *Writing the Image with Image Writer*
+
Double-click the Win32DiskImager icon to start the program. Verify that the drive letter shown under `Device` is the drive with the memory stick. Click the folder icon and select the image to be written to the memory stick. Click btn:[Save] to accept the image file name. Verify that everything is correct, and that no folders on the memory stick are open in other windows. When everything is ready, click btn:[Write] to write the image file to the memory stick.
****
You are now ready to start installing FreeBSD.
[[bsdinstall-start]]
== Starting the Installation
[IMPORTANT]
====
By default, the installation will not make any changes to the disk(s) before the following message:
[.programlisting]
....
Your changes will now be written to disk. If you
have chosen to overwrite existing data, it will
be PERMANENTLY ERASED. Are you sure you want to
commit your changes?
....
The installation can be exited at any time prior to this warning. If there is a concern that something is incorrectly configured, just turn the computer off before this point and no changes will be made to the system's disks.
====
This section describes how to boot the system from the installation media which was prepared using the instructions in <<bsdinstall-installation-media>>. When using a bootable USB stick, plug in the USB stick before turning on the computer. When booting from CD or DVD, turn on the computer and insert the media at the first opportunity. How to configure the system to boot from the inserted media depends upon the architecture.
[[bsdinstall-starting-i386]]
=== Booting on i386(TM) and amd64
These architectures provide a BIOS menu for selecting the boot device. Depending upon the installation media being used, select the CD/DVD or USB device as the first boot device. Most systems also provide a key for selecting the boot device during startup without having to enter the BIOS. Typically, the key is either kbd:[F10], kbd:[F11], kbd:[F12], or kbd:[Escape].
If the computer loads the existing operating system instead of the FreeBSD installer, then either:
. The installation media was not inserted early enough in the boot process. Leave the media inserted and try restarting the computer.
. The BIOS changes were incorrect or not saved. Double-check that the right boot device is selected as the first boot device.
. This system is too old to support booting from the chosen media. In this case, the Plop Boot Manager (http://www.plop.at/en/bootmanagers.html[]) can be used to boot the system from the selected media.
=== Booting on PowerPC(R)
On most machines, holding kbd:[C] on the keyboard during boot will boot from the CD. Otherwise, hold kbd:[Command+Option+O+F], or kbd:[Windows+Alt+O+F] on non-Apple(R) keyboards. At the `0 >` prompt, enter
[source,shell]
....
boot cd:,\ppc\loader cd:0
....
[[bsdinstall-view-probe]]
=== FreeBSD Boot Menu
Once the system boots from the installation media, a menu similar to the following will be displayed:
[[bsdinstall-newboot-loader-menu]]
.FreeBSD Boot Loader Menu
image::bsdinstall-newboot-loader-menu.png[]
By default, the menu will wait ten seconds for user input before booting into the FreeBSD installer or, if FreeBSD is already installed, before booting into FreeBSD. To pause the boot timer in order to review the selections, press kbd:[Space]. To select an option, press its highlighted number, character, or key. The following options are available.
* `Boot Multi User`: This will continue the FreeBSD boot process. If the boot timer has been paused, press kbd:[1], upper- or lower-case kbd:[B], or kbd:[Enter].
* `Boot Single User`: This mode can be used to fix an existing FreeBSD installation as described in crossref:boot[boot-singleuser,“Single-User Mode”]. Press kbd:[2] or the upper- or lower-case kbd:[S] to enter this mode.
* `Escape to loader prompt`: This will boot the system into a repair prompt that contains a limited number of low-level commands. This prompt is described in crossref:boot[boot-loader,“Stage Three”]. Press kbd:[3] or kbd:[Esc] to boot into this prompt.
* `Reboot`: Reboots the system.
* `Kernel`: Loads a different kernel.
* `Configure Boot Options`: Opens the menu shown in <<bsdinstall-boot-options-menu>> and described below.
[[bsdinstall-boot-options-menu]]
.FreeBSD Boot Options Menu
image::bsdinstall-boot-options-menu.png[]
The boot options menu is divided into two sections. The first section can be used to either return to the main boot menu or to reset any toggled options back to their defaults.
The next section is used to toggle the available options to `On` or `Off` by pressing the option's highlighted number or character. The system will always boot using the settings for these options until they are modified. Several options can be toggled using this menu:
* `ACPI Support`: If the system hangs during boot, try toggling this option to `Off`.
* `Safe Mode`: If the system still hangs during boot even with `ACPI Support` set to `Off`, try setting this option to `On`.
* `Single User`: Toggle this option to `On` to fix an existing FreeBSD installation as described in crossref:boot[boot-singleuser,“Single-User Mode”]. Once the problem is fixed, set it back to `Off`.
* `Verbose`: Toggle this option to `On` to see more detailed messages during the boot process. This can be useful when troubleshooting a piece of hardware.
After making the needed selections, press kbd:[1] or kbd:[Backspace] to return to the main boot menu, then press kbd:[Enter] to continue booting into FreeBSD. A series of boot messages will appear as FreeBSD carries out its hardware device probes and loads the installation program. Once the boot is complete, the welcome menu shown in <<bsdinstall-choose-mode>> will be displayed.
[[bsdinstall-choose-mode]]
.Welcome Menu
image::bsdinstall-choose-mode.png[]
Press kbd:[Enter] to select the default of btn:[Install] to enter the installer. The rest of this chapter describes how to use this installer. Otherwise, use the right or left arrows or the colorized letter to select the desired menu item. The btn:[Shell] can be used to access a FreeBSD shell in order to use command line utilities to prepare the disks before installation. The btn:[Live CD] option can be used to try out FreeBSD before installing it. The live version is described in <<using-live-cd>>.
[TIP]
====
To review the boot messages, including the hardware device probe, press the upper- or lower-case kbd:[S] and then kbd:[Enter] to access a shell. At the shell prompt, type `more /var/run/dmesg.boot` and use the space bar to scroll through the messages. When finished, type `exit` to return to the welcome menu.
====
[[using-bsdinstall]]
== Using bsdinstall
This section shows the order of the bsdinstall menus and the type of information that will be asked before the system is installed. Use the arrow keys to highlight a menu option, then kbd:[Space] to select or deselect that menu item. When finished, press kbd:[Enter] to save the selection and move onto the next screen.
[[bsdinstall-keymap]]
=== Selecting the Keymap Menu
Before starting the process, bsdinstall will load the keymap files as shown in <<bsdinstall-keymap-loading>>.
[[bsdinstall-keymap-loading]]
.Keymap Loading
image::bsdinstall-keymap-loading.png[]
After the keymaps have been loaded, bsdinstall displays the menu shown in <<bsdinstall-keymap-10>>. Use the up and down arrows to select the keymap that most closely represents the mapping of the keyboard attached to the system. Press kbd:[Enter] to save the selection.
[[bsdinstall-keymap-10]]
.Keymap Selection Menu
image::bsdinstall-keymap-10.png[]
[NOTE]
====
Pressing kbd:[Esc] will exit this menu and use the default keymap. If the choice of keymap is not clear, [.guimenuitem]#United States of America ISO-8859-1# is also a safe option.
====
In addition, when selecting a different keymap, the user can try the keymap and ensure it is correct before proceeding as shown in <<bsdinstall-keymap-testing>>.
[[bsdinstall-keymap-testing]]
.Keymap Testing Menu
image::bsdinstall-keymap-testing.png[]
[[bsdinstall-hostname]]
=== Setting the Hostname
The next bsdinstall menu is used to set the hostname for the newly installed system.
[[bsdinstall-config-hostname]]
.Setting the Hostname
image::bsdinstall-config-hostname.png[]
Type in a hostname that is unique for the network. It should be a fully-qualified hostname, such as `machine3.example.com`.
[[bsdinstall-components]]
=== Selecting Components to Install
Next, bsdinstall will prompt to select optional components to install.
[[bsdinstall-config-components]]
.Selecting Components to Install
image::bsdinstall-config-components.png[]
Deciding which components to install will depend largely on the intended use of the system and the amount of disk space available. The FreeBSD kernel and userland, collectively known as the _base system_, are always installed. Depending on the architecture, some of these components may not appear:
* `base-dbg` - Base tools like cat and ls, among many others, with debug symbols activated.
* `kernel-dbg` - Kernel and modules with debug symbols activated.
* `lib32-dbg` - Compatibility libraries for running 32-bit applications on a 64-bit version of FreeBSD with debug symbols activated.
* `lib32` - Compatibility libraries for running 32-bit applications on a 64-bit version of FreeBSD.
* `ports` - The FreeBSD Ports Collection is a collection of files which automates the downloading, compiling and installation of third-party software packages. crossref:ports[ports,Installing Applications: Packages and Ports] discusses how to use the Ports Collection.
+
[WARNING]
====
The installation program does not check for adequate disk space. Select this option only if sufficient hard disk space is available. The FreeBSD Ports Collection takes up about {ports-size} of disk space.
====
* `src` - The complete FreeBSD source code for both the kernel and the userland. Although not required for the majority of applications, it may be required to build device drivers, kernel modules, or some applications from the Ports Collection. It is also used for developing FreeBSD itself. The full source tree requires 1 GB of disk space and recompiling the entire FreeBSD system requires an additional 5 GB of space.
* `tests` - FreeBSD Test Suite.
[[bsdinstall-netinstall]]
=== Installing from the Network
The menu shown in <<bsdinstall-netinstall-notify>> only appears when installing from a [.filename]#-bootonly.iso# or [.filename]#-mini-memstick.img#, as these installation media do not hold copies of the installation files. Since the installation files must be retrieved over a network connection, this menu indicates that the network interface must be configured first. If this menu is shown in any step of the process, remember to follow the instructions in <<bsdinstall-config-network-dev>>.
[[bsdinstall-netinstall-notify]]
.Installing from the Network
image::bsdinstall-netinstall-files.png[]
[[bsdinstall-partitioning]]
== Allocating Disk Space
The next menu is used to determine the method for allocating disk space.
[[bsdinstall-zfs-partmenu]]
.Partitioning Choices
image::bsdinstall-zfs-partmenu.png[]
bsdinstall gives the user four methods for allocating disk space:
* `Auto (UFS)` partitioning automatically sets up the disk partitions using the `UFS` file system.
* `Manual` partitioning allows advanced users to create customized partitions from menu options.
* `Shell` opens a shell prompt where advanced users can create customized partitions using command-line utilities like man:gpart[8], man:fdisk[8], and man:bsdlabel[8].
* `Auto (ZFS)` partitioning creates a root-on-ZFS system with optional GELI encryption support for _boot environments_.
This section describes what to consider when laying out the disk partitions. It then demonstrates how to use the different partitioning methods.
[[configtuning-initial]]
=== Designing the Partition Layout
When laying out file systems, remember that hard drives transfer data faster from the outer tracks than from the inner ones. Thus, smaller, heavily-accessed file systems should be closer to the outside of the drive, while larger partitions like [.filename]#/usr# should be placed toward the inner parts of the disk. It is a good idea to create partitions in an order similar to: [.filename]#/#, swap, [.filename]#/var#, and [.filename]#/usr#.
The size of the [.filename]#/var# partition reflects the intended machine's usage. This partition is used to hold mailboxes, log files, and printer spools. Mailboxes and log files can grow to unexpected sizes depending on the number of users and how long log files are kept. On average, most users rarely need more than about a gigabyte of free disk space in [.filename]#/var#.
[NOTE]
====
Sometimes, a lot of disk space is required in [.filename]#/var/tmp#. When new software is installed, the packaging tools extract a temporary copy of the packages under [.filename]#/var/tmp#. Large software packages, like Firefox or LibreOffice may be tricky to install if there is not enough disk space under [.filename]#/var/tmp#.
====
The [.filename]#/usr# partition holds many of the files which support the system, including the FreeBSD Ports Collection and system source code. At least 2 gigabytes of space is recommended for this partition.
When selecting partition sizes, keep the space requirements in mind. Running out of space in one partition while barely using another can be a hassle.
As a rule of thumb, the swap partition should be about double the size of physical memory (RAM). Systems with minimal RAM may perform better with more swap. Configuring too little swap can lead to inefficiencies in the VM page scanning code and might create issues later if more memory is added.
On larger systems with multiple SCSI disks or multiple IDE disks operating on different controllers, it is recommended that swap be configured on each drive, up to four drives. The swap partitions should be approximately the same size. The kernel can handle arbitrary sizes but internal data structures scale to 4 times the largest swap partition. Keeping the swap partitions near the same size will allow the kernel to optimally stripe swap space across disks. Large swap sizes are fine, even if swap is not used much, and may make it easier to recover from a runaway program before being forced to reboot.
By properly partitioning a system, fragmentation introduced in the smaller write heavy partitions will not bleed over into the mostly read partitions. Keeping the write loaded partitions closer to the disk's edge will increase I/O performance in the partitions where it occurs the most. While I/O performance in the larger partitions may be needed, shifting them more toward the edge of the disk will not lead to a significant performance improvement over moving [.filename]#/var# to the edge.
[[bsdinstall-part-guided]]
=== Guided Partitioning Using UFS
When this method is selected, a menu will display the available disk(s). If multiple disks are connected, choose the one where FreeBSD is to be installed.
[[bsdinstall-part-guided-disk]]
.Selecting from Multiple Disks
image::bsdinstall-part-guided-disk.png[]
Once the disk is selected, the next menu prompts to install to either the entire disk or to create a partition using free space. If btn:[Entire Disk] is chosen, a general partition layout filling the whole disk is automatically created. Selecting btn:[Partition] creates a partition layout from the unused space on the disk.
[[bsdinstall-part-entire-part]]
.Selecting Entire Disk or Partition
image::bsdinstall-part-entire-part.png[]
After btn:[Entire Disk] is chosen bsdinstall displays a dialog indicating that the disk will be erased.
[[bsdinstall-ufs-warning]]
.Confirmation
image::bsdinstall-ufs-warning.png[]
The next menu shows a list of available partition scheme types. GPT is usually the most appropriate choice for amd64 computers. Older computers that are not compatible with GPT should use MBR. The other partition schemes are generally used for uncommon or older computers. More information is available in <<partition-schemes>>.
[[bsdinstall-ufs-scheme]]
.Select Partition Scheme
image::bsdinstall-part-manual-partscheme.png[]
After the partition layout has been created, review it to ensure it meets the needs of the installation. Selecting btn:[Revert] will reset the partitions to their original values and pressing btn:[Auto] will recreate the automatic FreeBSD partitions. Partitions can also be manually created, modified, or deleted. When the partitioning is correct, select btn:[Finish] to continue with the installation.
[[bsdinstall-part-review]]
.Review Created Partitions
image::bsdinstall-part-review.png[]
Once the disks are configured, the next menu provides the last chance to make changes before the selected drives are formatted. If changes need to be made, select btn:[Back] to return to the main partitioning menu. btn:[Revert & Exit] exits the installer without making any changes to the drive. Select btn:[Commit] to start the installation process.
[[bsdinstall-ufs-final-confirmation]]
.Final Confirmation
image::bsdinstall-final-confirmation.png[]
To continue with the installation process go to <<bsdinstall-fetching-distribution>>.
[[bsdinstall-part-manual]]
=== Manual Partitioning
Selecting this method opens the partition editor:
[[bsdinstall-part-manual-create]]
.Manually Create Partitions
image::bsdinstall-part-manual-create.png[]
Highlight the installation drive ([.filename]#ada0# in this example) and select btn:[Create] to display a menu of available partition schemes:
[[bsdinstall-part-manual-partscheme]]
.Manually Create Partitions
image::bsdinstall-part-manual-partscheme.png[]
GPT is usually the most appropriate choice for amd64 computers. Older computers that are not compatible with GPT should use MBR. The other partition schemes are generally used for uncommon or older computers.
[[partition-schemes]]
.Partitioning Schemes
[cols="1,1", frame="none", options="header"]
|===
<| Abbreviation
<| Description
|APM
|Apple Partition Map, used by PowerPC(R).
|BSD
|BSD label without an MBR, sometimes called _dangerously dedicated mode_ as non-BSD disk utilities may not recognize it.
|GPT
|GUID Partition Table (http://en.wikipedia.org/wiki/GUID_Partition_Table[http://en.wikipedia.org/wiki/GUID_Partition_Table]).
|MBR
|Master Boot Record (http://en.wikipedia.org/wiki/Master_boot_record[http://en.wikipedia.org/wiki/Master_boot_record]).
|===
After the partitioning scheme has been selected and created, select btn:[Create] again to create the partitions. The kbd:[Tab] key is used to move the cursor between fields.
[[bsdinstall-part-manual-addpart]]
.Manually Create Partitions
image::bsdinstall-part-manual-addpart.png[]
A standard FreeBSD GPT installation uses at least three partitions:
* `freebsd-boot` - Holds the FreeBSD boot code.
* `freebsd-ufs` - A FreeBSD UFS file system.
* `freebsd-zfs` - A FreeBSD ZFS file system. More information about ZFS is available in crossref:zfs[zfs,The Z File System (ZFS)].
* `freebsd-swap` - FreeBSD swap space.
Refer to man:gpart[8] for descriptions of the available GPT partition types.
Multiple file system partitions can be created and some people prefer a traditional layout with separate partitions for [.filename]#/#, [.filename]#/var#, [.filename]#/tmp#, and [.filename]#/usr#. See <<bsdinstall-part-manual-splitfs>> for an example.
The `Size` may be entered with common abbreviations: _K_ for kilobytes, _M_ for megabytes, or _G_ for gigabytes.
[TIP]
====
Proper sector alignment provides the best performance, and making partition sizes even multiples of 4K bytes helps to ensure alignment on drives with either 512-byte or 4K-byte sectors. Generally, using partition sizes that are even multiples of 1M or 1G is the easiest way to make sure every partition starts at an even multiple of 4K. There is one exception: the _freebsd-boot_ partition should be no larger than 512K due to current boot code limitations.
====
A `Mountpoint` is needed if the partition will contain a file system. If only a single UFS partition will be created, the mountpoint should be [.filename]#/#.
The `Label` is a name by which the partition will be known. Drive names or numbers can change if the drive is connected to a different controller or port, but the partition label does not change. Referring to labels instead of drive names and partition numbers in files like [.filename]#/etc/fstab# makes the system more tolerant to hardware changes. GPT labels appear in [.filename]#/dev/gpt/# when a disk is attached. Other partitioning schemes have different label capabilities and their labels appear in different directories in [.filename]#/dev/#.
[TIP]
====
Use a unique label on every partition to avoid conflicts from identical labels. A few letters from the computer's name, use, or location can be added to the label. For instance, use `labroot` or `rootfslab` for the UFS root partition on the computer named `lab`.
====
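As an illustration, a root file system labeled `exrootfs` and a [.filename]#/var# file system labeled `exvarfs` (the labels used in the example below) could be referenced in [.filename]#/etc/fstab# by their GPT labels rather than by device names:
[.programlisting]
....
# Device             Mountpoint  FStype  Options  Dump  Pass#
/dev/gpt/exrootfs    /           ufs     rw       1     1
/dev/gpt/exvarfs     /var        ufs     rw       2     2
....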
[[bsdinstall-part-manual-splitfs]]
.Creating Traditional Split File System Partitions
[example]
====
For a traditional partition layout where the [.filename]#/#, [.filename]#/var#, [.filename]#/tmp#, and [.filename]#/usr# directories are separate file systems on their own partitions, create a GPT partitioning scheme, then create the partitions as shown. Partition sizes shown are typical for a 20G target disk. If more space is available on the target disk, larger swap or [.filename]#/var# partitions may be useful. Labels shown here are prefixed with `ex` for "example", but readers should use other unique label values as described above.
By default, FreeBSD's [.filename]#gptboot# expects the first UFS partition to be the [.filename]#/# partition.
[.informaltable]
[cols="1,1,1,1", frame="none", options="header"]
|===
| Partition Type
| Size
| Mountpoint
| Label
|`freebsd-boot`
|`512K`
|
|
|`freebsd-ufs`
|`2G`
|[.filename]#/#
|`exrootfs`
|`freebsd-swap`
|`4G`
|
|`exswap`
|`freebsd-ufs`
|`2G`
|[.filename]#/var#
|`exvarfs`
|`freebsd-ufs`
|`1G`
|[.filename]#/tmp#
|`extmpfs`
|`freebsd-ufs`
|accept the default (remainder of the disk)
|[.filename]#/usr#
|`exusrfs`
|===
====
After the custom partitions have been created, select btn:[Finish] to continue with the installation and go to <<bsdinstall-fetching-distribution>>.
[[bsdinstall-part-zfs]]
=== Guided Partitioning Using Root-on-ZFS
This partitioning mode only works with whole disks and will erase the contents of the entire disk. The main ZFS configuration menu offers a number of options to control the creation of the pool.
[[bsdinstall-zfs-menu]]
.ZFS Partitioning Menu
image::bsdinstall-zfs-menu.png[]
Here is a summary of the options which can be used in this menu:
* `Install` - Proceed with the installation using the selected options.
* `Pool Type/Disks` - Configure the `Pool Type` and the disk(s) that will constitute the pool. The automatic ZFS installer currently only supports the creation of a single top level vdev, except in stripe mode. To create more complex pools, use the instructions in <<bsdinstall-part-shell>> to create the pool.
* `Rescan Devices` - Repopulate the list of available disks.
* `Disk Info` - Inspect each disk, including its partition table and various other information such as the device model number and serial number, if available.
* `Pool Name` - Establish the name of the pool. The default name is _zroot_.
* `Force 4K Sectors?` - Force the use of 4K sectors. By default, the installer will automatically create partitions aligned to 4K boundaries and force ZFS to use 4K sectors. This is safe even with 512 byte sector disks, and has the added benefit of ensuring that pools created on 512 byte disks will be able to have 4K sector disks added in the future, either as additional storage space or as replacements for failed disks. Press kbd:[Enter] to choose whether to activate it.
* `Encrypt Disks?` - Encrypt the disks using GELI. More information about disk encryption is available in crossref:disks[disks-encrypting-geli,“Disk Encryption with geli”]. Press kbd:[Enter] to choose whether to activate it.
* `Partition Scheme` - Choose the partition scheme. GPT is the recommended option in most cases. Press kbd:[Enter] to choose between the different options.
* `Swap Size` - Establish the amount of swap space.
* `Mirror Swap?` - Mirror the swap between the disks. Be aware that enabling mirrored swap will break crash dumps. Press kbd:[Enter] to choose whether to activate it.
* `Encrypt Swap?` - Encrypt the swap with a temporary key each time the system boots and discard it on reboot. Press kbd:[Enter] to choose whether to activate it. More information about swap encryption is available in crossref:disks[swap-encrypting,“Encrypting Swap”].
Select kbd:[T] to configure the `Pool Type` and the disk(s) that will constitute the pool.
[[bsdinstall-zfs-vdev_type]]
.ZFS Pool Type
image::bsdinstall-zfs-vdev_type.png[]
Here is a summary of the `Pool Type` which can be selected in this menu:
* `stripe` - Striping provides maximum storage of all connected devices, but no redundancy. If just one disk fails, the data on the pool is lost irrevocably.
* `mirror` - Mirroring stores a complete copy of all data on every disk. Mirroring provides good read performance because data is read from all disks in parallel. Write performance is slower as the data must be written to all disks in the pool. Allows all but one disk to fail. This option requires at least two disks.
* `raid10` - Striped mirrors. Provides the best performance, but the least storage. This option needs at least an even number of disks and a minimum of four disks.
* `raidz1` - Single Redundant RAID. Allows one disk to fail without data loss. This option needs at least three disks.
* `raidz2` - Double Redundant RAID. Allows two disks to fail concurrently. This option needs at least four disks.
* `raidz3` - Triple Redundant RAID. Allows three disks to fail concurrently. This option needs at least five disks.
Once a `Pool Type` has been selected, a list of available disks is displayed, and the user is prompted to select one or more disks to make up the pool. The configuration is then validated, to ensure enough disks are selected. If not, select btn:[<Change Selection>] to return to the list of disks, or btn:[<Back>] to change the `Pool Type`.
[[bsdinstall-zfs-disk_select]]
.Disk Selection
image::bsdinstall-zfs-disk_select.png[]
[[bsdinstall-zfs-vdev_invalid]]
.Invalid Selection
image::bsdinstall-zfs-vdev_invalid.png[]
If one or more disks are missing from the list, or if disks were attached after the installer was started, select btn:[- Rescan Devices] to repopulate the list of available disks.
[[bsdinstall-zfs-rescan-devices]]
.Rescan Devices
image::bsdinstall-zfs-rescan-devices.png[]
To avoid accidentally erasing the wrong disk, the btn:[- Disk Info] menu can be used to inspect each disk, including its partition table and various other information such as the device model number and serial number, if available.
[[bsdinstall-zfs-disk_info]]
.Analyzing a Disk
image::bsdinstall-zfs-disk_info.png[]
Select kbd:[N] to configure the `Pool Name`. Enter the desired name, then select btn:[<OK>] to set it or btn:[<Cancel>] to return to the main menu and keep the default name.
[[bsdinstall-zfs-pool-name]]
.Pool Name
image::bsdinstall-zfs-pool-name.png[]
Select kbd:[S] to set the amount of swap. Enter the desired amount of swap and then select btn:[<OK>] to set it or btn:[<Cancel>] to return to the main menu and keep the default amount.
[[bsdinstall-zfs-swap-amount]]
.Swap Amount
image::bsdinstall-zfs-swap-amount.png[]
Once all options have been set to the desired values, select the btn:[>>> Install] option at the top of the menu. The installer then offers a last chance to cancel before the contents of the selected drives are destroyed to create the ZFS pool.
[[bsdinstall-zfs-warning]]
.Last Chance
image::bsdinstall-zfs-warning.png[]
If GELI disk encryption was enabled, the installer will prompt twice for the passphrase to be used to encrypt the disks. After that, the initialization of the encryption begins.
[[bsdinstall-zfs-geli_password]]
.Disk Encryption Password
image::bsdinstall-zfs-geli_password.png[]
[[bsdinstall-zfs-init-encription]]
.Initializing Encryption
image::bsdinstall-zfs-init-encription.png[]
The installation then proceeds normally. To continue with the installation go to <<bsdinstall-fetching-distribution>>.
[[bsdinstall-part-shell]]
=== Shell Mode Partitioning
When creating advanced installations, the bsdinstall partitioning menus may not provide the level of flexibility required. Advanced users can select the btn:[Shell] option from the partitioning menu in order to manually partition the drives, create the file system(s), populate [.filename]#/tmp/bsdinstall_etc/fstab#, and mount the file systems under [.filename]#/mnt#. Once this is done, type `exit` to return to bsdinstall and continue the installation.
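As a rough sketch only, a simple single-disk GPT layout with one UFS file system might be created at that shell prompt with commands like the following (the device name [.filename]#ada0#, the sizes, and the labels are illustrative and must be adapted to the actual hardware):
[source,shell]
....
# gpart create -s gpt ada0
# gpart add -t freebsd-boot -s 512k ada0
# gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada0
# gpart add -t freebsd-swap -s 4g -l exswap ada0
# gpart add -t freebsd-ufs -l exrootfs ada0
# newfs -U /dev/gpt/exrootfs
# mount /dev/gpt/exrootfs /mnt
# echo '/dev/gpt/exrootfs / ufs rw 1 1' >> /tmp/bsdinstall_etc/fstab
# echo '/dev/gpt/exswap none swap sw 0 0' >> /tmp/bsdinstall_etc/fstab
# exit
....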
[[bsdinstall-fetching-distribution]]
== Fetching Distribution Files
Installation time will vary depending on the distributions chosen, installation media, and speed of the computer. A series of messages will indicate the progress.
First, the installer formats the selected disk(s) and initializes the partitions. Next, in the case of a `bootonly` or `mini memstick` installation medium, it downloads the selected components:
[[bsdinstall-distfile-fetching]]
.Fetching Distribution Files
image::bsdinstall-distfile-fetching.png[]
Next, the integrity of the distribution files is verified to ensure they have not been corrupted during download or misread from the installation media:
[[bsdinstall-distfile-verify]]
.Verifying Distribution Files
image::bsdinstall-distfile-verifying.png[]
Finally, the verified distribution files are extracted to the disk:
[[bsdinstall-distfile-extract]]
.Extracting Distribution Files
image::bsdinstall-distfile-extracting.png[]
Once all requested distribution files have been extracted, bsdinstall displays the first post-installation configuration screen. The available post-configuration options are described in the next section.
[[bsdinstall-post]]
== Accounts, Time Zone, Services and Hardening
[[bsdinstall-post-root]]
=== Setting the `root` Password
First, the `root` password must be set. While entering the password, the characters being typed are not displayed on the screen. After the password has been entered, it must be entered again. This helps prevent typing errors.
[[bsdinstall-post-set-root-passwd]]
.Setting the `root` Password
image::bsdinstall-post-root-passwd.png[]
[[bsdinstall-timezone]]
=== Setting the Time Zone
The next series of menus are used to determine the correct local time by selecting the geographic region, country, and time zone. Setting the time zone allows the system to automatically correct for regional time changes, such as daylight saving time, and perform other time zone related functions properly.
The example shown here is for a machine located in the mainland time zone of Spain, Europe. The selections will vary according to the geographical location.
[[bsdinstall-timezone-region]]
.Select a Region
image::bsdinstall-timezone-region.png[]
The appropriate region is selected using the arrow keys and then pressing kbd:[Enter].
[[bsdinstall-timezone-country]]
.Select a Country
image::bsdinstall-timezone-country.png[]
Select the appropriate country using the arrow keys and press kbd:[Enter].
[[bsdinstall-timezone-zone]]
.Select a Time Zone
image::bsdinstall-timezone-zone.png[]
The appropriate time zone is selected using the arrow keys and pressing kbd:[Enter].
[[bsdinstall-timezone-confirmation]]
.Confirm Time Zone
image::bsdinstall-timezone-confirm.png[]
Confirm the abbreviation for the time zone is correct.
[[bsdinstall-timezone-date]]
.Select Date
image::bsdinstall-timezone-date.png[]
The appropriate date is selected using the arrow keys and then pressing btn:[Set Date]. Otherwise, the date selection can be skipped by pressing btn:[Skip].
[[bsdinstall-timezone-time]]
.Select Time
image::bsdinstall-timezone-time.png[]
The appropriate time is selected using the arrow keys and then pressing btn:[Set Time]. Otherwise, the time selection can be skipped by pressing btn:[Skip].
[[bsdinstall-sysconf]]
=== Enabling Services
The next menu is used to configure which system services will be started whenever the system boots. All of these services are optional. Only start the services that are needed for the system to function.
[[bsdinstall-config-serv]]
.Selecting Additional Services to Enable
image::bsdinstall-config-services.png[]
Here is a summary of the services which can be enabled in this menu:
* `local_unbound` - Enable the local DNS resolver unbound. Keep in mind that this is the unbound of the base system and is only meant for use as a local caching forwarding resolver. If the objective is to set up a resolver for the entire network, install package:dns/unbound[].
* `sshd` - The Secure Shell (SSH) daemon is used to remotely access a system over an encrypted connection. Only enable this service if the system should be available for remote logins.
* `moused` - Enable this service if the mouse will be used from the command-line system console.
* `ntpdate` - Enable the automatic clock synchronization at boot time. The functionality of this program is now available in the man:ntpd[8] daemon. After a suitable period of mourning, the man:ntpdate[8] utility will be retired.
* `ntpd` - The Network Time Protocol (NTP) daemon for automatic clock synchronization. Enable this service if there is a Windows(R), Kerberos, or LDAP server on the network.
* `powerd` - System power control utility for power control and energy saving.
* `dumpdev` - Enabling crash dumps is useful in debugging issues with the system, so users are encouraged to enable crash dumps.
[[bsdinstall-hardening]]
=== Enabling Hardening Security Options
The next menu is used to configure which security options will be enabled. All of these options are optional, but their use is encouraged.
[[bsdinstall-hardening-options]]
.Selecting Hardening Security Options
image::bsdinstall-hardening.png[]
Here is a summary of the options which can be enabled in this menu:
* `hide_uids` - Hide processes running as other users (UIDs) so that unprivileged users cannot see them, preventing information leakage.
* `hide_gids` - Hide processes running as other groups (GIDs) so that unprivileged users cannot see them, preventing information leakage.
* `hide_jail` - Hide processes running in jails so that unprivileged users cannot see processes running inside the jails.
* `read_msgbuf` - Disable reading of the kernel message buffer by unprivileged users, preventing them from using man:dmesg[8] to view messages from the kernel's log buffer.
* `proc_debug` - Disable process debugging facilities for unprivileged users. This disables a variety of unprivileged inter-process debugging services, including some procfs functionality, ptrace(), and ktrace(). Please note that this will also prevent debugging tools such as man:lldb[1], man:truss[1], and man:procstat[1], as well as some built-in debugging facilities in certain scripting languages like PHP, from working for unprivileged users.
* `random_pid` - Randomize the PID of newly created processes.
* `clear_tmp` - Clean [.filename]#/tmp# when the system starts up.
* `disable_syslogd` - Disable opening the syslogd network socket. By default, FreeBSD runs syslogd in a secure way with `-s`, which prevents the daemon from listening for incoming UDP requests at port 514. With this option enabled, syslogd will instead run with the flag `-ss`, which prevents syslogd from opening any port. For more information, consult man:syslogd[8].
* `disable_sendmail` - Disable the sendmail mail transport agent.
* `secure_console` - When this option is enabled, the prompt requests the `root` password when entering single-user mode.
* `disable_ddtrace` - DTrace can run in a mode that will actually affect the running kernel. Destructive actions may not be used unless they have been explicitly enabled. To enable this option when using DTrace, use `-w`. For more information, consult man:dtrace[1].
[[bsdinstall-addusers]]
=== Add Users
The next menu prompts to create at least one user account. It is recommended to log in to the system using a user account rather than as `root`. When logged in as `root`, there are essentially no limits or protection on what can be done. Logging in as a normal user is safer and more secure.
Select btn:[Yes] to add new users.
[[bsdinstall-add-user1]]
.Add User Accounts
image::bsdinstall-adduser1.png[]
Follow the prompts and input the requested information for the user account. The example shown in <<bsdinstall-add-user2>> creates the `asample` user account.
[[bsdinstall-add-user2]]
.Enter User Information
image::bsdinstall-adduser2.png[]
Here is a summary of the information to input:
* `Username` - The name the user will enter to log in. A common convention is to use the first letter of the first name combined with the last name, as long as each username is unique for the system. The username is case sensitive and should not contain any spaces.
* `Full name` - The user's full name. This can contain spaces and is used as a description for the user account.
* `Uid` - User ID. Typically, this is left blank so the system will assign a value.
* `Login group` - The user's group. Typically this is left blank to accept the default.
* `Invite _user_ into other groups?` - Additional groups to which the user will be added as a member. If the user needs administrative access, type `wheel` here.
* `Login class` - Typically left blank for the default.
* `Shell` - Type in one of the listed values to set the interactive shell for the user. Refer to crossref:basics[shells,“Shells”] for more information about shells.
* `Home directory` - The user's home directory. The default is usually correct.
* `Home directory permissions` - Permissions on the user's home directory. The default is usually correct.
* `Use password-based authentication?` - Typically `yes` so that the user is prompted to input their password at login.
* `Use an empty password?` - Typically `no` as it is insecure to have a blank password.
* `Use a random password?` - Typically `no` so that the user can set their own password in the next prompt.
* `Enter password` - The password for this user. Characters typed will not show on the screen.
* `Enter password again` - The password must be typed again for verification.
* `Lock out the account after creation?` - Typically `no` so that the user can log in.
After entering everything, a summary is shown for review. If a mistake was made, enter `no` and try again. If everything is correct, enter `yes` to create the new user.
[[bsdinstall-add-user3]]
.Exit User and Group Management
image::bsdinstall-adduser3.png[]
If there are more users to add, answer the `Add another user?` question with `yes`. Enter `no` to finish adding users and continue the installation.
For more information on adding users and user management, see crossref:basics[users-synopsis,“Users and Basic Account Management”].
[[bsdinstall-final-conf]]
=== Final Configuration
After everything has been installed and configured, a final chance is provided to modify settings.
[[bsdinstall-final-config]]
.Final Configuration
image::bsdinstall-finalconfiguration.png[]
Use this menu to make any changes or do any additional configuration before completing the installation.
* `Add User` - Described in <<bsdinstall-addusers>>.
* `Root Password` - Described in <<bsdinstall-post-root>>.
* `Hostname` - Described in <<bsdinstall-hostname>>.
* `Network` - Described in <<bsdinstall-config-network-dev>>.
* `Services` - Described in <<bsdinstall-sysconf>>.
* `System Hardening` - Described in <<bsdinstall-hardening>>.
* `Time Zone` - Described in <<bsdinstall-timezone>>.
* `Handbook` - Download and install the FreeBSD Handbook.
After any final configuration is complete, select btn:[Exit].
[[bsdinstall-final-modification-shell]]
.Manual Configuration
image::bsdinstall-final-modification-shell.png[]
bsdinstall will prompt if there is any additional configuration that needs to be done before rebooting into the new system. Select btn:[Yes] to exit to a shell within the new system or btn:[No] to proceed to the last step of the installation.
[[bsdinstall-final-main]]
.Complete the Installation
image::bsdinstall-mainexit.png[]
If further configuration or special setup is needed, select btn:[Live CD] to boot the install media into Live CD mode.
If the installation is complete, select btn:[Reboot] to reboot the computer and start the new FreeBSD system. Do not forget to remove the FreeBSD install media or the computer may boot from it again.
As FreeBSD boots, informational messages are displayed. After the system finishes booting, a login prompt is displayed. At the `login:` prompt, enter the username added during the installation. Avoid logging in as `root`. Refer to crossref:basics[users-superuser,“The Superuser Account”] for instructions on how to become the superuser when administrative access is needed.
The messages that appeared during boot can be reviewed by pressing kbd:[Scroll-Lock] to turn on the scroll-back buffer. The kbd:[PgUp], kbd:[PgDn], and arrow keys can be used to scroll back through the messages. When finished, press kbd:[Scroll-Lock] again to unlock the display and return to the console. To review these messages once the system has been up for some time, type `less /var/run/dmesg.boot` from a command prompt. Press kbd:[q] to return to the command line after viewing.
If sshd was enabled in <<bsdinstall-config-serv>>, the first boot may be a bit slower as the system will generate the RSA and DSA keys. Subsequent boots will be faster. The fingerprints of the keys will be displayed, as seen in this example:
[source,shell]
....
Generating public/private rsa1 key pair.
Your identification has been saved in /etc/ssh/ssh_host_key.
Your public key has been saved in /etc/ssh/ssh_host_key.pub.
The key fingerprint is:
10:a0:f5:af:93:ae:a3:1a:b2:bb:3c:35:d9:5a:b3:f3 root@machine3.example.com
The key's randomart image is:
+--[RSA1 1024]----+
| o.. |
| o . . |
| . o |
| o |
| o S |
| + + o |
|o . + * |
|o+ ..+ . |
|==o..o+E |
+-----------------+
Generating public/private dsa key pair.
Your identification has been saved in /etc/ssh/ssh_host_dsa_key.
Your public key has been saved in /etc/ssh/ssh_host_dsa_key.pub.
The key fingerprint is:
7e:1c:ce:dc:8a:3a:18:13:5b:34:b5:cf:d9:d1:47:b2 root@machine3.example.com
The key's randomart image is:
+--[ DSA 1024]----+
| .. . .|
| o . . + |
| . .. . E .|
| . . o o . . |
| + S = . |
| + . = o |
| + . * . |
| . . o . |
| .o. . |
+-----------------+
Starting sshd.
....
Refer to crossref:security[openssh,"OpenSSH"] for more information about fingerprints and SSH.
FreeBSD does not install a graphical environment by default. Refer to crossref:x11[x11,The X Window System] for more information about installing and configuring a graphical window manager.
Proper shutdown of a FreeBSD computer helps protect data and hardware from damage. _Do not turn off the power before the system has been properly shut down!_ If the user is a member of the `wheel` group, become the superuser by typing `su` at the command line and entering the `root` password. Then, type `shutdown -p now` and the system will shut down cleanly, and if the hardware supports it, turn itself off.
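As an illustration, a member of the `wheel` group might shut the system down with the following commands; the password prompt shown is typical:
[source,shell]
....
% su
Password:
# shutdown -p now
....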
[[bsdinstall-network]]
== Network Interfaces
[[bsdinstall-config-network-dev]]
=== Configuring Network Interfaces
Next, a list of the network interfaces found on the computer is shown. Select the interface to configure.
[[bsdinstall-configure-net-interface]]
.Choose a Network Interface
image::bsdinstall-configure-network-interface.png[]
If an Ethernet interface is selected, the installer will skip ahead to the menu shown in <<bsdinstall-configure-net-ipv4>>. If a wireless network interface is chosen, the system will instead scan for wireless access points:
[[bsdinstall-wireless-scan]]
.Scanning for Wireless Access Points
image::bsdinstall-configure-wireless-scan.png[]
Wireless networks are identified by a Service Set Identifier (SSID), a short, unique name given to each network. SSIDs found during the scan are listed, followed by a description of the encryption types available for that network. If the desired SSID does not appear in the list, select btn:[Rescan] to scan again. If the desired network still does not appear, check for problems with antenna connections or try moving the computer closer to the access point. Rescan after each change is made.
[[bsdinstall-wireless-accesspoints]]
.Choosing a Wireless Network
image::bsdinstall-configure-wireless-accesspoints.png[]
Next, enter the encryption information for connecting to the selected wireless network. WPA2 encryption is strongly recommended as older encryption types, like WEP, offer little security. If the network uses WPA2, input the password, also known as the Pre-Shared Key (PSK). For security reasons, the characters typed into the input box are displayed as asterisks.
[[bsdinstall-wireless-wpa2]]
.WPA2 Setup
image::bsdinstall-configure-wireless-wpa2setup.png[]
Next, choose whether or not an IPv4 address should be configured on the Ethernet or wireless interface:
[[bsdinstall-configure-net-ipv4]]
.Choose IPv4 Networking
image::bsdinstall-configure-network-interface-ipv4.png[]
There are two methods of IPv4 configuration. DHCP will automatically configure the network interface correctly and should be used if the network provides a DHCP server. Otherwise, the addressing information needs to be input manually as a static configuration.
[NOTE]
====
Do not enter random network information as it will not work. If a DHCP server is not available, obtain the information listed in <<bsdinstall-collect-network-information, Required Network Information>> from the network administrator or Internet service provider.
====
If a DHCP server is available, select btn:[Yes] in the next menu to automatically configure the network interface. The installer will appear to pause for a minute or so as it finds the DHCP server and obtains the addressing information for the system.
[[bsdinstall-net-ipv4-dhcp]]
.Choose IPv4 DHCP Configuration
image::bsdinstall-configure-network-interface-ipv4-dhcp.png[]
If a DHCP server is not available, select btn:[No] and input the following addressing information in this menu:
[[bsdinstall-net-ipv4-static]]
.IPv4 Static Configuration
image::bsdinstall-configure-network-interface-ipv4-static.png[]
* `IP Address` - The IPv4 address assigned to this computer. The address must be unique and not already in use by another piece of equipment on the local network.
* `Subnet Mask` - The subnet mask for the network.
* `Default Router` - The IP address of the network's default gateway.
The next screen will ask if the interface should be configured for IPv6. If IPv6 is available and desired, choose btn:[Yes] to select it.
[[bsdinstall-net-ipv6]]
.Choose IPv6 Networking
image::bsdinstall-configure-network-interface-ipv6.png[]
IPv6 also has two methods of configuration. StateLess Address AutoConfiguration (SLAAC) will automatically request the correct configuration information from a local router. Refer to http://tools.ietf.org/html/rfc4862[rfc4862] for more information. Static configuration requires manual entry of network information.
If an IPv6 router is available, select btn:[Yes] in the next menu to automatically configure the network interface. The installer will appear to pause for a minute or so as it finds the router and obtains the addressing information for the system.
[[bsdinstall-net-ipv6-slaac]]
.Choose IPv6 SLAAC Configuration
image::bsdinstall-configure-network-interface-slaac.png[]
If an IPv6 router is not available, select btn:[No] and input the following addressing information in this menu:
[[bsdinstall-net-ipv6-static]]
.IPv6 Static Configuration
image::bsdinstall-configure-network-interface-ipv6-static.png[]
* `IPv6 Address` - The IPv6 address assigned to this computer. The address must be unique and not already in use by another piece of equipment on the local network.
* `Default Router` - The IPv6 address of the network's default gateway.
The last network configuration menu is used to configure the Domain Name System (DNS) resolver, which converts hostnames to and from network addresses. If DHCP or SLAAC was used to autoconfigure the network interface, the `Resolver Configuration` values may already be filled in. Otherwise, enter the local network's domain name in the `Search` field. `DNS #1` and `DNS #2` are the IPv4 and/or IPv6 addresses of the DNS servers. At least one DNS server is required.
[[bsdinstall-net-dns-config]]
.DNS Configuration
image::bsdinstall-configure-network-ipv4-dns.png[]
Once the interface is configured, select a mirror site that is located in the same region of the world as the computer on which FreeBSD is being installed. Files can be retrieved more quickly when the mirror is close to the target computer, reducing installation time.
[[bsdinstall-netinstall-mirror]]
.Choosing a Mirror
image::bsdinstall-netinstall-mirrorselect.png[]
[[bsdinstall-install-trouble]]
== Troubleshooting
This section covers basic installation troubleshooting, such as common problems people have reported.
Check the Hardware Notes (link:https://www.FreeBSD.org/releases/[https://www.freebsd.org/releases/]) document for the version of FreeBSD to make sure the hardware is supported. If the hardware is supported and lock-ups or other problems occur, build a custom kernel using the instructions in crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel] to add support for devices which are not present in the [.filename]#GENERIC# kernel. The default kernel assumes that most hardware devices are in their factory default configuration in terms of IRQs, I/O addresses, and DMA channels. If the hardware has been reconfigured, a custom kernel configuration file can tell FreeBSD where to find things.
[NOTE]
====
Some installation problems can be avoided or alleviated by updating the firmware on various hardware components, most notably the motherboard. Motherboard firmware is usually referred to as the BIOS. Most motherboard and computer manufacturers have a website for upgrades and upgrade information.
Manufacturers generally advise against upgrading the motherboard BIOS unless there is a good reason for doing so, like a critical update. The upgrade process _can_ go wrong, leaving the BIOS incomplete and the computer inoperative.
====
If the system hangs while probing hardware during boot, or it behaves strangely during install, ACPI may be the culprit. FreeBSD makes extensive use of the system ACPI service on the i386 and amd64 platforms to aid in system configuration if it is detected during boot. Unfortunately, some bugs still exist in both the ACPI driver and within system motherboards and BIOS firmware. ACPI can be disabled by setting the `hint.acpi.0.disabled` hint in the third stage boot loader:
[source,shell]
....
set hint.acpi.0.disabled="1"
....
This is reset each time the system is booted, so it is necessary to add `hint.acpi.0.disabled="1"` to the file [.filename]#/boot/loader.conf#. More information about the boot loader can be found in crossref:boot[boot-synopsis,“Synopsis”].
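For example, to make the setting persistent, the following line would be added to [.filename]#/boot/loader.conf#:
[.programlisting]
....
hint.acpi.0.disabled="1"
....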
[[using-live-cd]]
== Using the Live CD
The welcome menu of bsdinstall, shown in <<bsdinstall-choose-mode>>, provides a btn:[Live CD] option. This is useful for those who are still wondering whether FreeBSD is the right operating system for them and want to test some of the features before installing.
The following points should be noted before using the btn:[Live CD]:
* To gain access to the system, authentication is required. The username is `root` and the password is blank.
* As the system runs directly from the installation media, performance will be significantly slower than that of a system installed on a hard disk.
* This option only provides a command prompt and not a graphical interface.
diff --git a/documentation/content/en/books/handbook/colophon.adoc b/documentation/content/en/books/handbook/colophon.adoc
index 95fad7eb4b..7904bb122f 100644
--- a/documentation/content/en/books/handbook/colophon.adoc
+++ b/documentation/content/en/books/handbook/colophon.adoc
@@ -1,20 +1,21 @@
---
title: Colophon
prev: books/handbook/glossary
+description: FreeBSD Handbook Colophon
---
[colophon]
[[colophon]]
= Colophon
:doctype: book
:icons: font
:!sectnums:
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
This book is the combined work of hundreds of contributors to "The FreeBSD Documentation Project".
The text is authored in AsciiDoc.
diff --git a/documentation/content/en/books/handbook/config/_index.adoc b/documentation/content/en/books/handbook/config/_index.adoc
index 66173c1ece..146b6a92c1 100644
--- a/documentation/content/en/books/handbook/config/_index.adoc
+++ b/documentation/content/en/books/handbook/config/_index.adoc
@@ -1,1550 +1,1551 @@
---
title: Chapter 12. Configuration and Tuning
part: Part III. System Administration
prev: books/handbook/partiii
next: books/handbook/boot
+description: This chapter explains much of the FreeBSD configuration process, including some of the parameters which can be set to tune a FreeBSD system.
---
[[config-tuning]]
= Configuration and Tuning
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 12
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../../images/books/handbook/x11/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/books/handbook/x11/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/books/handbook/x11/
endif::[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[config-synopsis]]
== Synopsis
One of the important aspects of FreeBSD is proper system configuration. This chapter explains much of the FreeBSD configuration process, including some of the parameters which can be set to tune a FreeBSD system.
After reading this chapter, you will know:
* The basics of [.filename]#rc.conf# configuration and [.filename]#/usr/local/etc/rc.d# startup scripts.
* How to configure and test a network card.
* How to configure virtual hosts on network devices.
* How to use the various configuration files in [.filename]#/etc#.
* How to tune FreeBSD using man:sysctl[8] variables.
* How to tune disk performance and modify kernel limitations.
Before reading this chapter, you should:
* Understand UNIX(R) and FreeBSD basics (crossref:basics[basics,FreeBSD Basics]).
* Be familiar with the basics of kernel configuration and compilation (crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel]).
[[configtuning-starting-services]]
== Starting Services
Many users install third party software on FreeBSD from the Ports Collection and require the installed services to be started upon system initialization. Services such as package:mail/postfix[] or package:www/apache22[] are just two of the many software packages which may be started during system initialization. This section explains the procedures available for starting third party software.
In FreeBSD, most included services, such as man:cron[8], are started through the system startup scripts.
=== Extended Application Configuration
Now that FreeBSD includes [.filename]#rc.d#, configuration of application startup is easier and provides more features. Using the key words discussed in <<configtuning-rcd>>, applications can be set to start after certain other services and extra flags can be passed through [.filename]#/etc/rc.conf# in place of hard coded flags in the startup script. A basic script may look similar to the following:
[.programlisting]
....
#!/bin/sh
#
# PROVIDE: utility
# REQUIRE: DAEMON
# KEYWORD: shutdown
. /etc/rc.subr
name=utility
rcvar=utility_enable
command="/usr/local/sbin/utility"
load_rc_config $name
#
# DO NOT CHANGE THESE DEFAULT VALUES HERE
# SET THEM IN THE /etc/rc.conf FILE
#
utility_enable=${utility_enable-"NO"}
pidfile=${utility_pidfile-"/var/run/utility.pid"}
run_rc_command "$1"
....
This script will ensure that the provided `utility` will be started after the `DAEMON` pseudo-service. It also provides a method for setting and tracking the process ID (PID).
This application could then have the following line placed in [.filename]#/etc/rc.conf#:
[.programlisting]
....
utility_enable="YES"
....
This method allows for easier manipulation of command line arguments, inclusion of the default functions provided in [.filename]#/etc/rc.subr#, compatibility with man:rcorder[8], and provides for easier configuration via [.filename]#rc.conf#.
=== Using Services to Start Services
Other services can be started using man:inetd[8]. Working with man:inetd[8] and its configuration is described in depth in crossref:network-servers[network-inetd,“The inetd Super-Server”].
In some cases, it may make more sense to use man:cron[8] to start system services. This approach has a number of advantages as man:cron[8] runs these processes as the owner of the man:crontab[5]. This allows regular users to start and maintain their own applications.
The `@reboot` feature of man:cron[8] may be used in place of the time specification. This causes the job to run when man:cron[8] is started, normally during system initialization.
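As a sketch, a user crontab entry using `@reboot` might look like the following; the script path is only an example:
[.programlisting]
....
@reboot /usr/local/bin/mystartupscript.sh
....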
[[configtuning-cron]]
== Configuring man:cron[8]
One of the most useful utilities in FreeBSD is cron. This utility runs in the background and regularly checks [.filename]#/etc/crontab# for tasks to execute and searches [.filename]#/var/cron/tabs# for custom crontab files. These files are used to schedule tasks which cron runs at the specified times. Each entry in a crontab defines a task to run and is known as a _cron job_.
Two different types of configuration files are used: the system crontab, which should not be modified, and user crontabs, which can be created and edited as needed. The format used by these files is documented in man:crontab[5]. The format of the system crontab, [.filename]#/etc/crontab#, includes a `who` column which does not exist in user crontabs. In the system crontab, cron runs the command as the user specified in this column. In a user crontab, all commands run as the user who created the crontab.
User crontabs allow individual users to schedule their own tasks. The `root` user can also have a user [.filename]#crontab# which can be used to schedule tasks that do not exist in the system [.filename]#crontab#.
Here is a sample entry from the system crontab, [.filename]#/etc/crontab#:
[.programlisting]
....
# /etc/crontab - root's crontab for FreeBSD
#
# $FreeBSD$
# <.>
SHELL=/bin/sh
PATH=/etc:/bin:/sbin:/usr/bin:/usr/sbin <.>
#
#minute hour mday month wday who command <.>
#
*/5 * * * * root /usr/libexec/atrun <.>
....
<.> Lines that begin with the `#` character are comments. A comment can be placed in the file as a reminder of what and why a desired action is performed. Comments cannot be on the same line as a command or else they will be interpreted as part of the command; they must be on a new line. Blank lines are ignored.
<.> The equals (`=`) character is used to define any environment settings. In this example, it is used to define the `SHELL` and `PATH`. If the `SHELL` is omitted, cron will use the default Bourne shell. If the `PATH` is omitted, the full path must be given to the command or script to run.
<.> This line defines the seven fields used in a system crontab: `minute`, `hour`, `mday`, `month`, `wday`, `who`, and `command`. The `minute` field is the time in minutes when the specified command will be run, the `hour` is the hour when the specified command will be run, the `mday` is the day of the month, `month` is the month, and `wday` is the day of the week. These fields must be numeric values, representing the twenty-four hour clock, or a `*`, representing all values for that field. The `who` field only exists in the system crontab and specifies which user the command should be run as. The last field is the command to be executed.
<.> This entry defines the values for this cron job. The `\*/5`, followed by several more `*` characters, specifies that `/usr/libexec/atrun` is invoked by `root` every five minutes of every hour, of every day and day of the week, of every month. Commands can include any number of switches. However, commands which extend to multiple lines need to be broken with the backslash "\" continuation character.
[[configtuning-installcrontab]]
=== Creating a User Crontab
To create a user crontab, invoke `crontab` in editor mode:
[source,shell]
....
% crontab -e
....
This will open the user's crontab using the default text editor. The first time a user runs this command, it will open an empty file. Once a user creates a crontab, this command will open that file for editing.
It is useful to add these lines to the top of the crontab file in order to set the environment variables and to remember the meanings of the fields in the crontab:
[.programlisting]
....
SHELL=/bin/sh
PATH=/etc:/bin:/sbin:/usr/bin:/usr/sbin
# Order of crontab fields
# minute hour mday month wday command
....
Then add a line for each command or script to run, specifying the time to run the command. This example runs the specified custom Bourne shell script every day at two in the afternoon. Since the path to the script is not specified in `PATH`, the full path to the script is given:
[.programlisting]
....
0 14 * * * /usr/home/dru/bin/mycustomscript.sh
....
[TIP]
====
Before using a custom script, make sure it is executable and test it with the limited set of environment variables set by cron. To replicate the environment that would be used to run the above cron entry, use:
[.programlisting]
....
env -i SHELL=/bin/sh PATH=/etc:/bin:/sbin:/usr/bin:/usr/sbin HOME=/home/dru LOGNAME=dru /usr/home/dru/bin/mycustomscript.sh
....
The environment set by cron is discussed in man:crontab[5]. Checking that scripts operate correctly in a cron environment is especially important if they include any commands that delete files using wildcards.
====
When finished editing the crontab, save the file. It will automatically be installed and cron will read the crontab and run its cron jobs at their specified times. To list the cron jobs in a crontab, use this command:
[source,shell]
....
% crontab -l
0 14 * * * /usr/home/dru/bin/mycustomscript.sh
....
To remove all of the cron jobs in a user crontab:
[source,shell]
....
% crontab -r
remove crontab for dru? y
....
[[configtuning-rcd]]
== Managing Services in FreeBSD
FreeBSD uses the man:rc[8] system of startup scripts during system initialization and for managing services. The scripts listed in [.filename]#/etc/rc.d# provide basic services which can be controlled with the `start`, `stop`, and `restart` options to man:service[8]. For instance, man:sshd[8] can be restarted with the following command:
[source,shell]
....
# service sshd restart
....
This procedure can be used to start services on a running system. Services will be started automatically at boot time as specified in man:rc.conf[5]. For example, to enable man:natd[8] at system startup, add the following line to [.filename]#/etc/rc.conf#:
[.programlisting]
....
natd_enable="YES"
....
If a `natd_enable="NO"` line is already present, change the `NO` to `YES`. The man:rc[8] scripts will automatically load any dependent services during the next boot, as described below.
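Instead of editing [.filename]#/etc/rc.conf# by hand, man:sysrc[8] can be used to set such variables. This example assumes the variable was previously set to `NO`:
[source,shell]
....
# sysrc natd_enable="YES"
natd_enable: NO -> YES
....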
Since the man:rc[8] system is primarily intended to start and stop services at system startup and shutdown time, the `start`, `stop` and `restart` options will only perform their action if the appropriate [.filename]#/etc/rc.conf# variable is set. For instance, `sshd restart` will only work if `sshd_enable` is set to `YES` in [.filename]#/etc/rc.conf#. To `start`, `stop` or `restart` a service regardless of the settings in [.filename]#/etc/rc.conf#, these commands should be prefixed with "one". For instance, to restart man:sshd[8] regardless of the current [.filename]#/etc/rc.conf# setting, execute the following command:
[source,shell]
....
# service sshd onerestart
....
To check if a service is enabled in [.filename]#/etc/rc.conf#, run the appropriate man:rc[8] script with `rcvar`. This example checks to see if man:sshd[8] is enabled in [.filename]#/etc/rc.conf#:
[source,shell]
....
# service sshd rcvar
# sshd
#
sshd_enable="YES"
# (default: "")
....
[NOTE]
====
The `# sshd` line is output from the above command, not a `root` console.
====
To determine whether or not a service is running, use `status`. For instance, to verify that man:sshd[8] is running:
[source,shell]
....
# service sshd status
sshd is running as pid 433.
....
In some cases, it is also possible to `reload` a service. This attempts to send a signal to an individual service, forcing the service to reload its configuration files. In most cases, this means sending the service a `SIGHUP` signal. Support for this feature is not included for every service.
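For a service whose man:rc[8] script supports it, a reload can be requested in the same way as the other commands. Using man:sshd[8] as an example:
[source,shell]
....
# service sshd reload
....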
The man:rc[8] system is used for network services and it also contributes to most of the system initialization. For instance, when the [.filename]#/etc/rc.d/bgfsck# script is executed, it prints out the following message:
[source,shell]
....
Starting background file system checks in 60 seconds.
....
This script is used for background file system checks, which occur only during system initialization.
Many system services depend on other services to function properly. For example, man:yp[8] and other RPC-based services may fail to start until after the man:rpcbind[8] service has started. To resolve this issue, information about dependencies and other meta-data is included in the comments at the top of each startup script. The man:rcorder[8] program is used to parse these comments during system initialization to determine the order in which system services should be invoked to satisfy the dependencies.
The following key word must be included in all startup scripts as it is required by man:rc.subr[8] to "enable" the startup script:
* `PROVIDE`: Specifies the services this file provides.
The following key words may be included at the top of each startup script. They are not strictly necessary, but are useful as hints to man:rcorder[8]:
* `REQUIRE`: Lists services which are required for this service. The script containing this key word will run _after_ the specified services.
* `BEFORE`: Lists services which depend on this service. The script containing this key word will run _before_ the specified services.
By carefully setting these keywords for each startup script, an administrator has a fine-grained level of control of the startup order of the scripts, without the need for "runlevels" used by some UNIX(R) operating systems.
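As a minimal sketch, the dependency block at the top of a hypothetical startup script might look like this; the service name and dependencies are only placeholders:
[.programlisting]
....
#!/bin/sh
#
# PROVIDE: myservice
# REQUIRE: NETWORKING DAEMON
# BEFORE: LOGIN
# KEYWORD: shutdown
....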
Additional information can be found in man:rc[8] and man:rc.subr[8]. Refer to link:{rc-scripting}[this article] for instructions on how to create custom man:rc[8] scripts.
[[configtuning-core-configuration]]
=== Managing System-Specific Configuration
The principal location for system configuration information is [.filename]#/etc/rc.conf#. This file contains a wide range of configuration information and it is read at system startup to configure the system. It provides the configuration information for the [.filename]#rc*# files.
The entries in [.filename]#/etc/rc.conf# override the default settings in [.filename]#/etc/defaults/rc.conf#. The file containing the default settings should not be edited. Instead, all system-specific changes should be made to [.filename]#/etc/rc.conf#.
A number of strategies may be applied in clustered applications to separate site-wide configuration from system-specific configuration in order to reduce administration overhead. The recommended approach is to place system-specific configuration into [.filename]#/etc/rc.conf.local#. For example, these entries in [.filename]#/etc/rc.conf# apply to all systems:
[.programlisting]
....
sshd_enable="YES"
keyrate="fast"
defaultrouter="10.1.1.254"
....
Whereas these entries in [.filename]#/etc/rc.conf.local# apply to this system only:
[.programlisting]
....
hostname="node1.example.org"
ifconfig_fxp0="inet 10.1.1.1/8"
....
Distribute [.filename]#/etc/rc.conf# to every system using an application such as rsync or puppet, while [.filename]#/etc/rc.conf.local# remains unique.
Upgrading the system will not overwrite [.filename]#/etc/rc.conf#, so system configuration information will not be lost.
[TIP]
====
Both [.filename]#/etc/rc.conf# and [.filename]#/etc/rc.conf.local# are parsed by man:sh[1]. This allows system operators to create complex configuration scenarios. Refer to man:rc.conf[5] for further information on this topic.
====
[[config-network-setup]]
== Setting Up Network Interface Cards
Adding and configuring a network interface card (NIC) is a common task for any FreeBSD administrator.
=== Locating the Correct Driver
First, determine the model of the NIC and the chip it uses. FreeBSD supports a wide variety of NICs. Check the Hardware Compatibility List for the FreeBSD release to see if the NIC is supported.
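One common way to identify the model and chip is man:pciconf[8]. The output below only illustrates the format; the device shown is an example:
[source,shell]
....
# pciconf -lv
em0@pci0:0:25:0:    class=0x020000 card=0x20108086 chip=0x15028086 rev=0x04 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = '82579LM Gigabit Network Connection'
    class      = network
    subclass   = ethernet
....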
If the NIC is supported, determine the name of the FreeBSD driver for the NIC. Refer to [.filename]#/usr/src/sys/conf/NOTES# and [.filename]#/usr/src/sys/arch/conf/NOTES# for the list of NIC drivers with some information about the supported chipsets. When in doubt, read the manual page of the driver as it will provide more information about the supported hardware and any known limitations of the driver.
The drivers for common NICs are already present in the [.filename]#GENERIC# kernel, meaning the NIC should be probed during boot. The system's boot messages can be viewed by typing `more /var/run/dmesg.boot` and using the spacebar to scroll through the text. In this example, two Ethernet NICs using the man:dc[4] driver are present on the system:
[source,shell]
....
dc0: <82c169 PNIC 10/100BaseTX> port 0xa000-0xa0ff mem 0xd3800000-0xd38
000ff irq 15 at device 11.0 on pci0
miibus0: <MII bus> on dc0
bmtphy0: <BCM5201 10/100baseTX PHY> PHY 1 on miibus0
bmtphy0: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, auto
dc0: Ethernet address: 00:a0:cc:da:da:da
dc0: [ITHREAD]
dc1: <82c169 PNIC 10/100BaseTX> port 0x9800-0x98ff mem 0xd3000000-0xd30
000ff irq 11 at device 12.0 on pci0
miibus1: <MII bus> on dc1
bmtphy1: <BCM5201 10/100baseTX PHY> PHY 1 on miibus1
bmtphy1: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, auto
dc1: Ethernet address: 00:a0:cc:da:da:db
dc1: [ITHREAD]
....
If the driver for the NIC is not present in [.filename]#GENERIC#, but a driver is available, the driver will need to be loaded before the NIC can be configured and used. This may be accomplished in one of two ways:
* The easiest way is to load a kernel module for the NIC using man:kldload[8]. To also automatically load the driver at boot time, add the appropriate line to [.filename]#/boot/loader.conf#. Not all NIC drivers are available as modules. A short sketch of this approach is shown after this list.
* Alternatively, statically compile support for the NIC into a custom kernel. Refer to [.filename]#/usr/src/sys/conf/NOTES#, [.filename]#/usr/src/sys/arch/conf/NOTES# and the manual page of the driver to determine which line to add to the custom kernel configuration file. For more information about recompiling the kernel, refer to crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel]. If the NIC was detected at boot, the kernel does not need to be recompiled.
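As a sketch of the first approach, this example loads the man:axe[4] USB Ethernet driver as a module; substitute the module name for the driver of the actual NIC:
[source,shell]
....
# kldload if_axe
....
To load the module at every boot, the corresponding line in [.filename]#/boot/loader.conf# would be:
[.programlisting]
....
if_axe_load="YES"
....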
[[config-network-ndis]]
==== Using Windows(R) NDIS Drivers
Unfortunately, there are still many vendors that do not provide schematics for their drivers to the open source community because they regard such information as trade secrets. Consequently, the developers of FreeBSD and other operating systems are left with two choices: develop the drivers through a long and painstaking process of reverse engineering, or use the existing driver binaries available for Microsoft(R) Windows(R) platforms.
FreeBSD provides "native" support for the Network Driver Interface Specification (NDIS). It includes man:ndisgen[8] which can be used to convert a Windows(R) XP driver into a format that can be used on FreeBSD. As the man:ndis[4] driver uses a Windows(R) XP binary, it only runs on i386(TM) and amd64 systems. PCI, CardBus, PCMCIA, and USB devices are supported.
To use man:ndisgen[8], three things are needed:
. FreeBSD kernel sources.
. A Windows(R) XP driver binary with a [.filename]#.SYS# extension.
. A Windows(R) XP driver configuration file with a [.filename]#.INF# extension.
Download the [.filename]#.SYS# and [.filename]#.INF# files for the specific NIC. Generally, these can be found on the driver CD or at the vendor's website. The following examples use [.filename]#W32DRIVER.SYS# and [.filename]#W32DRIVER.INF#.
The driver bit width must match the version of FreeBSD. For FreeBSD/i386, use a Windows(R) 32-bit driver. For FreeBSD/amd64, a Windows(R) 64-bit driver is needed.
The next step is to compile the driver binary into a loadable kernel module. As `root`, use man:ndisgen[8]:
[source,shell]
....
# ndisgen /path/to/W32DRIVER.INF /path/to/W32DRIVER.SYS
....
This command is interactive and prompts for any extra information it requires. A new kernel module will be generated in the current directory. Use man:kldload[8] to load the new module:
[source,shell]
....
# kldload ./W32DRIVER_SYS.ko
....
In addition to the generated kernel module, the [.filename]#ndis.ko# and [.filename]#if_ndis.ko# modules must be loaded. This should happen automatically when any module that depends on man:ndis[4] is loaded. If not, load them manually, using the following commands:
[source,shell]
....
# kldload ndis
# kldload if_ndis
....
The first command loads the man:ndis[4] miniport driver wrapper and the second loads the generated NIC driver.
Check man:dmesg[8] to see if there were any load errors. If all went well, the output should be similar to the following:
[source,shell]
....
ndis0: <Wireless-G PCI Adapter> mem 0xf4100000-0xf4101fff irq 3 at device 8.0 on pci1
ndis0: NDIS API version: 5.0
ndis0: Ethernet address: 0a:b1:2c:d3:4e:f5
ndis0: 11b rates: 1Mbps 2Mbps 5.5Mbps 11Mbps
ndis0: 11g rates: 6Mbps 9Mbps 12Mbps 18Mbps 36Mbps 48Mbps 54Mbps
....
From here, [.filename]#ndis0# can be configured like any other NIC.
To configure the system to load the man:ndis[4] modules at boot time, copy the generated module, [.filename]#W32DRIVER_SYS.ko#, to [.filename]#/boot/modules#. Then, add the following line to [.filename]#/boot/loader.conf#:
[.programlisting]
....
W32DRIVER_SYS_load="YES"
....
=== Configuring the Network Card
Once the right driver is loaded for the NIC, the card needs to be configured. It may have been configured at installation time by man:bsdinstall[8].
To display the NIC configuration, enter the following command:
[source,shell]
....
% ifconfig
dc0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=80008<VLAN_MTU,LINKSTATE>
ether 00:a0:cc:da:da:da
inet 192.168.1.3 netmask 0xffffff00 broadcast 192.168.1.255
media: Ethernet autoselect (100baseTX <full-duplex>)
status: active
dc1: flags=8802<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=80008<VLAN_MTU,LINKSTATE>
ether 00:a0:cc:da:da:db
inet 10.0.0.1 netmask 0xffffff00 broadcast 10.0.0.255
media: Ethernet 10baseT/UTP
status: no carrier
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
options=3<RXCSUM,TXCSUM>
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x4
inet6 ::1 prefixlen 128
inet 127.0.0.1 netmask 0xff000000
nd6 options=3<PERFORMNUD,ACCEPT_RTADV>
....
In this example, the following devices were displayed:
* [.filename]#dc0#: The first Ethernet interface.
* [.filename]#dc1#: The second Ethernet interface.
* [.filename]#lo0#: The loopback device.
FreeBSD uses the driver name followed by the order in which the card is detected at boot to name the NIC. For example, [.filename]#sis2# is the third NIC on the system using the man:sis[4] driver.
In this example, [.filename]#dc0# is up and running. The key indicators are:
. `UP` means that the card is configured and ready.
. The card has an Internet (`inet`) address, `192.168.1.3`.
. It has a valid subnet mask (`netmask`), where `0xffffff00` is the same as `255.255.255.0`.
. It has a valid broadcast address, `192.168.1.255`.
. The MAC address of the card (`ether`) is `00:a0:cc:da:da:da`.
. The physical media selection is on autoselection mode (`media: Ethernet autoselect (100baseTX <full-duplex>)`). In this example, [.filename]#dc1# is configured to run with `10baseT/UTP` media. For more information on available media types for a driver, refer to its manual page.
. The status of the link (`status`) is `active`, indicating that the carrier signal is detected. For [.filename]#dc1#, the `status: no carrier` status is normal when an Ethernet cable is not plugged into the card.
If the man:ifconfig[8] output had shown something similar to:
[source,shell]
....
dc0: flags=8843<BROADCAST,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=80008<VLAN_MTU,LINKSTATE>
ether 00:a0:cc:da:da:da
media: Ethernet autoselect (100baseTX <full-duplex>)
status: active
....
it would indicate the card has not been configured.
The card must be configured as `root`. The NIC configuration can be performed from the command line with man:ifconfig[8] but will not persist after a reboot unless the configuration is also added to [.filename]#/etc/rc.conf#. If a DHCP server is present on the LAN, just add this line:
[.programlisting]
....
ifconfig_dc0="DHCP"
....
Replace _dc0_ with the correct value for the system.
Once the line has been added, follow the instructions given in <<config-network-testing>>.
[NOTE]
====
If the network was configured during installation, some entries for the NIC(s) may be already present. Double check [.filename]#/etc/rc.conf# before adding any lines.
====
If there is no DHCP server, the NIC(s) must be configured manually. Add a line for each NIC present on the system, as seen in this example:
[.programlisting]
....
ifconfig_dc0="inet 192.168.1.3 netmask 255.255.255.0"
ifconfig_dc1="inet 10.0.0.1 netmask 255.255.255.0 media 10baseT/UTP"
....
Replace [.filename]#dc0# and [.filename]#dc1# and the IP address information with the correct values for the system. Refer to the man page for the driver, man:ifconfig[8], and man:rc.conf[5] for more details about the allowed options and the syntax of [.filename]#/etc/rc.conf#.
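To apply an address immediately without rebooting, man:ifconfig[8] can also be run directly; such a change does not persist across reboots. The interface and address here mirror the example above:
[source,shell]
....
# ifconfig dc0 inet 192.168.1.3 netmask 255.255.255.0
....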
If the network is not using DNS, edit [.filename]#/etc/hosts# to add the names and IP addresses of the hosts on the LAN, if they are not already there. For more information, refer to man:hosts[5] and to [.filename]#/usr/share/examples/etc/hosts#.
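For instance, an entry in [.filename]#/etc/hosts# could look like this; the address and hostnames are illustrative:
[.programlisting]
....
192.168.1.2    host2.example.com host2
....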
[NOTE]
====
If there is no DHCP server and access to the Internet is needed, manually configure the default gateway and the nameserver:
[source,shell]
....
# sysrc defaultrouter="your_default_router"
# echo 'nameserver your_DNS_server' >> /etc/resolv.conf
....
====
[[config-network-testing]]
=== Testing and Troubleshooting
Once the necessary changes to [.filename]#/etc/rc.conf# are saved, a reboot can be used to test the network configuration and to verify that the system restarts without any configuration errors. Alternatively, apply the settings to the networking system with this command:
[source,shell]
....
# service netif restart
....
[NOTE]
====
If a default gateway has been set in [.filename]#/etc/rc.conf#, also issue this command:
[source,shell]
....
# service routing restart
....
====
Once the networking system has been relaunched, test the NICs.
==== Testing the Ethernet Card
To verify that an Ethernet card is configured correctly, man:ping[8] the interface itself, and then man:ping[8] another machine on the LAN:
[source,shell]
....
% ping -c5 192.168.1.3
PING 192.168.1.3 (192.168.1.3): 56 data bytes
64 bytes from 192.168.1.3: icmp_seq=0 ttl=64 time=0.082 ms
64 bytes from 192.168.1.3: icmp_seq=1 ttl=64 time=0.074 ms
64 bytes from 192.168.1.3: icmp_seq=2 ttl=64 time=0.076 ms
64 bytes from 192.168.1.3: icmp_seq=3 ttl=64 time=0.108 ms
64 bytes from 192.168.1.3: icmp_seq=4 ttl=64 time=0.076 ms
--- 192.168.1.3 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.074/0.083/0.108/0.013 ms
....
[source,shell]
....
% ping -c5 192.168.1.2
PING 192.168.1.2 (192.168.1.2): 56 data bytes
64 bytes from 192.168.1.2: icmp_seq=0 ttl=64 time=0.726 ms
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=0.766 ms
64 bytes from 192.168.1.2: icmp_seq=2 ttl=64 time=0.700 ms
64 bytes from 192.168.1.2: icmp_seq=3 ttl=64 time=0.747 ms
64 bytes from 192.168.1.2: icmp_seq=4 ttl=64 time=0.704 ms
--- 192.168.1.2 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.700/0.729/0.766/0.025 ms
....
To test network resolution, use the host name instead of the IP address. If there is no DNS server on the network, [.filename]#/etc/hosts# must first be configured. To this purpose, edit [.filename]#/etc/hosts# to add the names and IP addresses of the hosts on the LAN, if they are not already there. For more information, refer to man:hosts[5] and to [.filename]#/usr/share/examples/etc/hosts#.
==== Troubleshooting
When troubleshooting hardware and software configurations, check the simple things first. Is the network cable plugged in? Are the network services properly configured? Is the firewall configured correctly? Is the NIC supported by FreeBSD? Before sending a bug report, always check the Hardware Notes, update the version of FreeBSD to the latest STABLE version, check the mailing list archives, and search the Internet.
If the card works, yet performance is poor, read through man:tuning[7]. Also, check the network configuration as incorrect network settings can cause slow connections.
Some users experience one or two `device timeout` messages, which is normal for some cards. If they continue, or are bothersome, determine if the device is conflicting with another device. Double check the cable connections. Consider trying another card.
To resolve `watchdog timeout` errors, first check the network cable. Many cards require a PCI slot which supports bus mastering. On some old motherboards, only one PCI slot allows it, usually slot 0. Check the NIC and the motherboard documentation to determine if that may be the problem.
`No route to host` messages occur if the system is unable to route a packet to the destination host. This can happen if no default route is specified or if a cable is unplugged. Check the output of `netstat -rn` and make sure there is a valid route to the host. If there is not, read crossref:advanced-networking[network-routing,“Gateways and Routes”].
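If the missing route is the default route, it can be added temporarily with man:route[8] while the permanent fix is made in [.filename]#/etc/rc.conf#; the gateway address here is only an example:
[source,shell]
....
# route add default 10.1.1.254
....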
`ping: sendto: Permission denied` error messages are often caused by a misconfigured firewall. If a firewall is enabled on FreeBSD but no rules have been defined, the default policy is to deny all traffic, even man:ping[8]. Refer to crossref:firewalls[firewalls,Firewalls] for more information.
Sometimes performance of the card is poor or below average. In these cases, try setting the media selection mode from `autoselect` to the correct media selection. While this works for most hardware, it may or may not resolve the issue. Again, check all the network settings, and refer to man:tuning[7].
[[configtuning-virtual-hosts]]
== Virtual Hosts
A common use of FreeBSD is virtual site hosting, where one server appears to the network as many servers. This is achieved by assigning multiple network addresses to a single interface.
A given network interface has one "real" address, and may have any number of "alias" addresses. These aliases are normally added by placing alias entries in [.filename]#/etc/rc.conf#, as seen in this example:
[.programlisting]
....
ifconfig_fxp0_alias0="inet xxx.xxx.xxx.xxx netmask xxx.xxx.xxx.xxx"
....
Alias entries must start with `alias__0__` using a sequential number such as `alias0`, `alias1`, and so on. The configuration process will stop at the first missing number.
The calculation of alias netmasks is important. For a given interface, there must be one address which correctly represents the network's netmask. Any other addresses which fall within this network must have a netmask of all ``1``s, expressed as either `255.255.255.255` or `0xffffffff`.
For example, consider the case where the [.filename]#fxp0# interface is connected to two networks: `10.1.1.0` with a netmask of `255.255.255.0` and `202.0.75.16` with a netmask of `255.255.255.240`. The system is to be configured to appear in the ranges `10.1.1.1` through `10.1.1.5` and `202.0.75.17` through `202.0.75.20`. Only the first address in a given network range should have a real netmask. All the rest (`10.1.1.2` through `10.1.1.5` and `202.0.75.18` through `202.0.75.20`) must be configured with a netmask of `255.255.255.255`.
The following [.filename]#/etc/rc.conf# entries configure the adapter correctly for this scenario:
[.programlisting]
....
ifconfig_fxp0="inet 10.1.1.1 netmask 255.255.255.0"
ifconfig_fxp0_alias0="inet 10.1.1.2 netmask 255.255.255.255"
ifconfig_fxp0_alias1="inet 10.1.1.3 netmask 255.255.255.255"
ifconfig_fxp0_alias2="inet 10.1.1.4 netmask 255.255.255.255"
ifconfig_fxp0_alias3="inet 10.1.1.5 netmask 255.255.255.255"
ifconfig_fxp0_alias4="inet 202.0.75.17 netmask 255.255.255.240"
ifconfig_fxp0_alias5="inet 202.0.75.18 netmask 255.255.255.255"
ifconfig_fxp0_alias6="inet 202.0.75.19 netmask 255.255.255.255"
ifconfig_fxp0_alias7="inet 202.0.75.20 netmask 255.255.255.255"
....
A simpler way to express this is with a space-separated list of IP address ranges. The first address will be given the indicated subnet mask and the additional addresses will have a subnet mask of `255.255.255.255`.
[.programlisting]
....
ifconfig_fxp0_aliases="inet 10.1.1.1-5/24 inet 202.0.75.17-20/28"
....
[[configtuning-syslog]]
== Configuring System Logging
Generating and reading system logs is an important aspect of system administration. The information in system logs can be used to detect hardware and software issues as well as application and system configuration errors. This information also plays an important role in security auditing and incident response. Most system daemons and applications will generate log entries.
FreeBSD provides a system logger, syslogd, to manage logging. By default, syslogd is started when the system boots. This is controlled by the variable `syslogd_enable` in [.filename]#/etc/rc.conf#. There are numerous application arguments that can be set using `syslogd_flags` in [.filename]#/etc/rc.conf#. Refer to man:syslogd[8] for more information on the available arguments.
This section describes how to configure the FreeBSD system logger for both local and remote logging and how to perform log rotation and log management.
=== Configuring Local Logging
The configuration file, [.filename]#/etc/syslog.conf#, controls what syslogd does with log entries as they are received. There are several parameters to control the handling of incoming events. The _facility_ describes which subsystem generated the message, such as the kernel or a daemon, and the _level_ describes the severity of the event that occurred. This makes it possible to configure if and where a log message is logged, depending on the facility and level. It is also possible to take action depending on the application that sent the message, and in the case of remote logging, the hostname of the machine generating the logging event.
This configuration file contains one line per action, where the syntax for each line is a selector field followed by an action field. The syntax of the selector field is _facility.level_ which will match log messages from _facility_ at level _level_ or higher. It is also possible to add an optional comparison flag before the level to specify more precisely what is logged. Multiple selector fields can be used for the same action, and are separated with a semicolon (`;`). Using `*` will match everything. The action field denotes where to send the log message, such as to a file or remote log host. As an example, here is the default [.filename]#syslog.conf# from FreeBSD:
[.programlisting]
....
# $FreeBSD$
#
# Spaces ARE valid field separators in this file. However,
# other *nix-like systems still insist on using tabs as field
# separators. If you are sharing this file between systems, you
# may want to use only tabs as field separators here.
# Consult the syslog.conf(5) manpage.
*.err;kern.warning;auth.notice;mail.crit /dev/console
*.notice;authpriv.none;kern.debug;lpr.info;mail.crit;news.err /var/log/messages
security.* /var/log/security
auth.info;authpriv.info /var/log/auth.log
mail.info /var/log/maillog
lpr.info /var/log/lpd-errs
ftp.info /var/log/xferlog
cron.* /var/log/cron
!-devd
*.=debug /var/log/debug.log
*.emerg *
# uncomment this to log all writes to /dev/console to /var/log/console.log
#console.info /var/log/console.log
# uncomment this to enable logging of all log messages to /var/log/all.log
# touch /var/log/all.log and chmod it to mode 600 before it will work
#*.* /var/log/all.log
# uncomment this to enable logging to a remote loghost named loghost
#*.* @loghost
# uncomment these if you're running inn
# news.crit /var/log/news/news.crit
# news.err /var/log/news/news.err
# news.notice /var/log/news/news.notice
# Uncomment this if you wish to see messages produced by devd
# !devd
# *.>=info
!ppp
*.* /var/log/ppp.log
!*
....
In this example:
* Line 8 matches all messages with a level of `err` or higher, as well as `kern.warning`, `auth.notice` and `mail.crit`, and sends these log messages to the console ([.filename]#/dev/console#).
* Line 12 matches all messages from the `mail` facility at level `info` or above and logs the messages to [.filename]#/var/log/maillog#.
* Line 17 uses a comparison flag (`=`) to only match messages at level `debug` and logs them to [.filename]#/var/log/debug.log#.
* Line 33 is an example usage of a program specification. This makes the rules following it only valid for the specified program. In this case, only the messages generated by ppp are logged to [.filename]#/var/log/ppp.log#.
The available levels, in order from most to least critical are `emerg`, `alert`, `crit`, `err`, `warning`, `notice`, `info`, and `debug`.
The facilities, in no particular order, are `auth`, `authpriv`, `console`, `cron`, `daemon`, `ftp`, `kern`, `lpr`, `mail`, `mark`, `news`, `security`, `syslog`, `user`, `uucp`, and `local0` through `local7`. Be aware that other operating systems might have different facilities.
To log everything of level `notice` and higher to [.filename]#/var/log/daemon.log#, add the following entry:
[.programlisting]
....
daemon.notice /var/log/daemon.log
....
For more information about the different levels and facilities, refer to man:syslog[3] and man:syslogd[8]. For more information about [.filename]#/etc/syslog.conf#, its syntax, and more advanced usage examples, see man:syslog.conf[5].
=== Log Management and Rotation
Log files can grow quickly, taking up disk space and making it more difficult to locate useful information. Log management attempts to mitigate this. In FreeBSD, newsyslog is used to manage log files. This built-in program periodically rotates and compresses log files, and optionally creates missing log files and signals programs when log files are moved. The log files may be generated by syslogd or by any other program which generates log files. While newsyslog is normally run from man:cron[8], it is not a system daemon. In the default configuration, it runs every hour.
To know which actions to take, newsyslog reads its configuration file, [.filename]#/etc/newsyslog.conf#. This file contains one line for each log file that newsyslog manages. Each line states the file owner, permissions, when to rotate that file, optional flags that affect log rotation, such as compression, and programs to signal when the log is rotated. Here is the default configuration in FreeBSD:
[.programlisting]
....
# configuration file for newsyslog
# $FreeBSD$
#
# Entries which do not specify the '/pid_file' field will cause the
# syslogd process to be signalled when that log file is rotated. This
# action is only appropriate for log files which are written to by the
# syslogd process (ie, files listed in /etc/syslog.conf). If there
# is no process which needs to be signalled when a given log file is
# rotated, then the entry for that file should include the 'N' flag.
#
# The 'flags' field is one or more of the letters: BCDGJNUXZ or a '-'.
#
# Note: some sites will want to select more restrictive protections than the
# defaults. In particular, it may be desirable to switch many of the 644
# entries to 640 or 600. For example, some sites will consider the
# contents of maillog, messages, and lpd-errs to be confidential. In the
# future, these defaults may change to more conservative ones.
#
# logfilename [owner:group] mode count size when flags [/pid_file] [sig_num]
/var/log/all.log 600 7 * @T00 J
/var/log/amd.log 644 7 100 * J
/var/log/auth.log 600 7 100 @0101T JC
/var/log/console.log 600 5 100 * J
/var/log/cron 600 3 100 * JC
/var/log/daily.log 640 7 * @T00 JN
/var/log/debug.log 600 7 100 * JC
/var/log/kerberos.log 600 7 100 * J
/var/log/lpd-errs 644 7 100 * JC
/var/log/maillog 640 7 * @T00 JC
/var/log/messages 644 5 100 @0101T JC
/var/log/monthly.log 640 12 * $M1D0 JN
/var/log/pflog 600 3 100 * JB /var/run/pflogd.pid
/var/log/ppp.log root:network 640 3 100 * JC
/var/log/devd.log 644 3 100 * JC
/var/log/security 600 10 100 * JC
/var/log/sendmail.st 640 10 * 168 B
/var/log/utx.log 644 3 * @01T05 B
/var/log/weekly.log 640 5 1 $W6D0 JN
/var/log/xferlog 600 7 100 * JC
....
Each line starts with the name of the log to be rotated, optionally followed by an owner and group for both rotated and newly created files. The `mode` field sets the permissions on the log file and `count` denotes how many rotated log files should be kept. The `size` and `when` fields tell newsyslog when to rotate the file. A log file is rotated when either its size is larger than the `size` field or when the time in the `when` field has passed. An asterisk (`*`) means that this field is ignored. The _flags_ field gives further instructions, such as how to compress the rotated file or to create the log file if it is missing. The last two fields are optional and specify the name of the Process ID (PID) file of a process and a signal number to send to that process when the file is rotated.
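As a sketch, a hypothetical entry for an application log that is rotated once it reaches 100 kB, compressed, created if missing, and kept for seven generations might look like this; the filename is only an example:
[.programlisting]
....
/var/log/myapp.log			600  7	   100	*     JC
....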
For more information on all fields, valid flags, and how to specify the rotation time, refer to man:newsyslog.conf[5]. Since newsyslog is run from man:cron[8], it cannot rotate files more often than it is scheduled to run from man:cron[8].
[[network-syslogd]]
=== Configuring Remote Logging
Monitoring the log files of multiple hosts can become unwieldy as the number of systems increases. Configuring centralized logging can reduce some of the administrative burden of log file administration.
In FreeBSD, centralized log file aggregation, merging, and rotation can be configured using syslogd and newsyslog. This section demonstrates an example configuration, where host `A`, named `logserv.example.com`, will collect logging information for the local network. Host `B`, named `logclient.example.com`, will be configured to pass logging information to the logging server.
==== Log Server Configuration
A log server is a system that has been configured to accept logging information from other hosts. Before configuring a log server, check the following:
* If there is a firewall between the logging server and any logging clients, ensure that the firewall ruleset allows UDP port 514 for both the clients and the server.
* The logging server and all client machines must have forward and reverse entries in the local DNS. If the network does not have a DNS server, create entries in each system's [.filename]#/etc/hosts#. Proper name resolution is required so that log entries are not rejected by the logging server.
On the log server, edit [.filename]#/etc/syslog.conf# to specify the name of the client to receive log entries from, the logging facility to be used, and the name of the log to store the host's log entries. This example adds the hostname of `B`, logs all facilities, and stores the log entries in [.filename]#/var/log/logclient.log#.
.Sample Log Server Configuration
[example]
====
[.programlisting]
....
+logclient.example.com
*.* /var/log/logclient.log
....
====
When adding multiple log clients, add a similar two-line entry for each client. More information about the available facilities may be found in man:syslog.conf[5].
Next, configure [.filename]#/etc/rc.conf#:
[.programlisting]
....
syslogd_enable="YES"
syslogd_flags="-a logclient.example.com -v -v"
....
The first entry starts syslogd at system boot. The second entry allows log entries from the specified client. The `-v -v` option increases the verbosity of logged messages. This is useful when tweaking facilities, as administrators can see which types of messages are being logged under each facility.
Multiple `-a` options may be specified to allow logging from multiple clients. IP addresses and whole netblocks may also be specified. Refer to man:syslogd[8] for a full list of possible options.
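As an illustration, the following `syslogd_flags` value accepts log entries from the original client plus an entire netblock (the netblock is hypothetical):
[.programlisting]
....
syslogd_flags="-a logclient.example.com -a 192.168.1.0/24 -v -v"
....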
Finally, create the log file:
[source,shell]
....
# touch /var/log/logclient.log
....
At this point, syslogd should be restarted and verified:
[source,shell]
....
# service syslogd restart
# pgrep syslog
....
If a PID is returned, the server restarted successfully, and client configuration can begin. If the server did not restart, consult [.filename]#/var/log/messages# for the error.
==== Log Client Configuration
A logging client sends log entries to a logging server on the network. The client also keeps a local copy of its own logs.
Once a logging server has been configured, edit [.filename]#/etc/rc.conf# on the logging client:
[.programlisting]
....
syslogd_enable="YES"
syslogd_flags="-s -v -v"
....
The first entry enables syslogd on boot up. The second entry prevents this client from accepting logs from other hosts (`-s`) and increases the verbosity of logged messages.
Next, define the logging server in the client's [.filename]#/etc/syslog.conf#. In this example, all logged facilities are sent to a remote system, denoted by the `@` symbol, with the specified hostname:
[.programlisting]
....
*.* @logserv.example.com
....
After saving the edit, restart syslogd for the changes to take effect:
[source,shell]
....
# service syslogd restart
....
To test that log messages are being sent across the network, use man:logger[1] on the client to send a message to syslogd:
[source,shell]
....
# logger "Test message from logclient"
....
This message should now exist both in [.filename]#/var/log/messages# on the client and [.filename]#/var/log/logclient.log# on the log server.
==== Debugging Log Servers
If no messages are being received on the log server, the cause is most likely a network connectivity issue, a hostname resolution issue, or a typo in a configuration file. To isolate the cause, ensure that both the logging server and the logging client are able to `ping` each other using the hostname specified in their [.filename]#/etc/rc.conf#. If this fails, check the network cabling, the firewall ruleset, and the hostname entries in the DNS server or [.filename]#/etc/hosts# on both the logging server and clients. Repeat until the `ping` is successful from both hosts.
If the `ping` succeeds on both hosts but log messages are still not being received, temporarily increase logging verbosity to narrow down the configuration issue. In the following example, [.filename]#/var/log/logclient.log# on the logging server is empty and [.filename]#/var/log/messages# on the logging client does not indicate a reason for the failure. To increase debugging output, edit the `syslogd_flags` entry on the logging server and issue a restart:
[.programlisting]
....
syslogd_flags="-d -a logclient.example.com -v -v"
....
[source,shell]
....
# service syslogd restart
....
Debugging data similar to the following will flash on the console immediately after the restart:
[source,shell]
....
logmsg: pri 56, flags 4, from logserv.example.com, msg syslogd: restart
syslogd: restarted
logmsg: pri 6, flags 4, from logserv.example.com, msg syslogd: kernel boot file is /boot/kernel/kernel
Logging to FILE /var/log/messages
syslogd: kernel boot file is /boot/kernel/kernel
cvthname(192.168.1.10)
validate: dgram from IP 192.168.1.10, port 514, name logclient.example.com;
rejected in rule 0 due to name mismatch.
....
In this example, the log messages are being rejected due to a typo which results in a hostname mismatch. The client's hostname should be `logclient`, not `logclien`. Fix the typo, issue a restart, and verify the results:
[source,shell]
....
# service syslogd restart
logmsg: pri 56, flags 4, from logserv.example.com, msg syslogd: restart
syslogd: restarted
logmsg: pri 6, flags 4, from logserv.example.com, msg syslogd: kernel boot file is /boot/kernel/kernel
syslogd: kernel boot file is /boot/kernel/kernel
logmsg: pri 166, flags 17, from logserv.example.com,
msg Dec 10 20:55:02 <syslog.err> logserv.example.com syslogd: exiting on signal 2
cvthname(192.168.1.10)
validate: dgram from IP 192.168.1.10, port 514, name logclient.example.com;
accepted in rule 0.
logmsg: pri 15, flags 0, from logclient.example.com, msg Dec 11 02:01:28 trhodes: Test message 2
Logging to FILE /var/log/logclient.log
Logging to FILE /var/log/messages
....
At this point, the messages are being properly received and placed in the correct file.
==== Security Considerations
As with any network service, security requirements should be considered before implementing a logging server. Log files may contain sensitive data about services enabled on the local host, user accounts, and configuration data. Network data sent from the client to the server will not be encrypted or password protected. If a need for encryption exists, consider using package:security/stunnel[], which will transmit the logging data over an encrypted tunnel.
Local security is also an issue. Log files are not encrypted during use or after log rotation. Local users may access log files to gain additional insight into system configuration. Setting proper permissions on log files is critical. The built-in log rotator, newsyslog, supports setting permissions on newly created and rotated log files. Setting log files to mode `600` should prevent unwanted access by local users. Refer to man:newsyslog.conf[5] for additional information.
[[configtuning-configfiles]]
== Configuration Files
=== [.filename]#/etc# Layout
There are a number of directories in which configuration information is kept. These include:
[.informaltable]
[cols="1,1", frame="none"]
|===
|[.filename]#/etc#
|Generic system-specific configuration information.
|[.filename]#/etc/defaults#
|Default versions of system configuration files.
|[.filename]#/etc/mail#
|Extra man:sendmail[8] configuration and other MTA configuration files.
|[.filename]#/etc/ppp#
|Configuration for both user- and kernel-ppp programs.
|[.filename]#/usr/local/etc#
|Configuration files for installed applications. May contain per-application subdirectories.
|[.filename]#/usr/local/etc/rc.d#
|man:rc[8] scripts for installed applications.
|[.filename]#/var/db#
|Automatically generated system-specific database files, such as the package database and the man:locate[1] database.
|===
=== Hostnames
==== [.filename]#/etc/resolv.conf#
How a FreeBSD system accesses the Internet Domain Name System (DNS) is controlled by man:resolv.conf[5].
The most common entries to [.filename]#/etc/resolv.conf# are:
[.informaltable]
[cols="1,1", frame="none"]
|===
|`nameserver`
|The IP address of a name server the resolver should query. The servers are queried in the order listed with a maximum of three.
|`search`
|Search list for hostname lookup. This is normally determined by the domain of the local hostname.
|`domain`
|The local domain name.
|===
A typical [.filename]#/etc/resolv.conf# looks like this:
[.programlisting]
....
search example.com
nameserver 147.11.1.11
nameserver 147.11.100.30
....
[NOTE]
====
Only one of the `search` and `domain` options should be used.
====
When using DHCP, man:dhclient[8] usually rewrites [.filename]#/etc/resolv.conf# with information received from the DHCP server.
==== [.filename]#/etc/hosts#
[.filename]#/etc/hosts# is a simple text database which works in conjunction with DNS and NIS to provide host name to IP address mappings. Entries for local computers connected via a LAN can be added to this file for simplistic naming purposes instead of setting up a man:named[8] server. Additionally, [.filename]#/etc/hosts# can be used to provide a local record of Internet names, reducing the need to query external DNS servers for commonly accessed names.
[.programlisting]
....
# $FreeBSD$
#
#
# Host Database
#
# This file should contain the addresses and aliases for local hosts that
# share this file. Replace 'my.domain' below with the domainname of your
# machine.
#
# In the presence of the domain name service or NIS, this file may
# not be consulted at all; see /etc/nsswitch.conf for the resolution order.
#
#
::1 localhost localhost.my.domain
127.0.0.1 localhost localhost.my.domain
#
# Imaginary network.
#10.0.0.2 myname.my.domain myname
#10.0.0.3 myfriend.my.domain myfriend
#
# According to RFC 1918, you can use the following IP networks for
# private nets which will never be connected to the Internet:
#
# 10.0.0.0 - 10.255.255.255
# 172.16.0.0 - 172.31.255.255
# 192.168.0.0 - 192.168.255.255
#
# In case you want to be able to connect to the Internet, you need
# real official assigned numbers. Do not try to invent your own network
# numbers but instead get one from your network provider (if any) or
# from your regional registry (ARIN, APNIC, LACNIC, RIPE NCC, or AfriNIC.)
#
....
The format of [.filename]#/etc/hosts# is as follows:
[.programlisting]
....
[Internet address] [official hostname] [alias1] [alias2] ...
....
For example:
[.programlisting]
....
10.0.0.1 myRealHostname.example.com myRealHostname foobar1 foobar2
....
Consult man:hosts[5] for more information.
[[configtuning-sysctl]]
== Tuning with man:sysctl[8]
man:sysctl[8] is used to make changes to a running FreeBSD system. This includes many advanced options of the TCP/IP stack and virtual memory system that can dramatically improve performance for an experienced system administrator. Over five hundred system variables can be read and set using man:sysctl[8].
At its core, man:sysctl[8] serves two functions: to read and to modify system settings.
To view all readable variables:
[source,shell]
....
% sysctl -a
....
To read a particular variable, specify its name:
[source,shell]
....
% sysctl kern.maxproc
kern.maxproc: 1044
....
To set a particular variable, use the _variable_=_value_ syntax:
[source,shell]
....
# sysctl kern.maxfiles=5000
kern.maxfiles: 2088 -> 5000
....
Settings of sysctl variables are usually either strings, numbers, or booleans, where a boolean is `1` for yes or `0` for no.
To automatically set some variables each time the machine boots, add them to [.filename]#/etc/sysctl.conf#. For more information, refer to man:sysctl.conf[5] and <<configtuning-sysctlconf>>.
[[configtuning-sysctlconf]]
=== [.filename]#sysctl.conf#
The configuration file for man:sysctl[8], [.filename]#/etc/sysctl.conf#, looks much like [.filename]#/etc/rc.conf#. Values are set in a `variable=value` form. The specified values are set after the system goes into multi-user mode. Not all variables are settable in this mode.
For example, to turn off logging of fatal signal exits and prevent users from seeing processes started by other users, the following tunables can be set in [.filename]#/etc/sysctl.conf#:
[.programlisting]
....
# Do not log fatal signal exits (e.g., sig 11)
kern.logsigexit=0
# Prevent users from seeing information about processes that
# are being run under another UID.
security.bsd.see_other_uids=0
....
[[sysctl-readonly]]
=== man:sysctl[8] Read-only
In some cases it may be desirable to modify read-only man:sysctl[8] values, which will require a reboot of the system.
For instance, on some laptop models the man:cardbus[4] device will not probe memory ranges and will fail with errors similar to:
[source,shell]
....
cbb0: Could not map register memory
device_probe_and_attach: cbb0 attach returned 12
....
The fix requires the modification of a read-only man:sysctl[8] setting. Add `hw.pci.allow_unsupported_io_range=1` to [.filename]#/boot/loader.conf# and reboot. Now man:cardbus[4] should work properly.
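For reference, the corresponding [.filename]#/boot/loader.conf# entry looks like this:
[.programlisting]
....
hw.pci.allow_unsupported_io_range=1
....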
[[configtuning-disk]]
== Tuning Disks
The following section discusses various tuning mechanisms and options which may be applied to disk devices. In many cases, disks with mechanical parts, such as SCSI drives, are the bottleneck that drags down overall system performance. While one solution is to install a drive without mechanical parts, such as a solid state drive, mechanical drives are not going away anytime soon. When tuning disks, it is advisable to use man:iostat[8] to test changes to the system, as it provides valuable information about system I/O.
=== Sysctl Variables
==== `vfs.vmiodirenable`
The `vfs.vmiodirenable` man:sysctl[8] variable may be set to either `0` (off) or `1` (on). It is set to `1` by default. This variable controls how directories are cached by the system. Most directories are small, using just a single fragment (typically 1 K) in the file system and typically 512 bytes in the buffer cache. With this variable turned off, the buffer cache will only cache a fixed number of directories, even if the system has a huge amount of memory. When turned on, this man:sysctl[8] allows the buffer cache to use the VM page cache to cache the directories, making all the memory available for caching directories. However, the minimum in-core memory used to cache a directory is the physical page size (typically 4 K) rather than 512 bytes. Keeping this option enabled is recommended if the system is running any services which manipulate large numbers of files. Such services can include web caches, large mail systems, and news systems. Keeping this option on will generally not reduce performance, even with the wasted memory, but one should experiment to find out.
==== `vfs.write_behind`
The `vfs.write_behind` man:sysctl[8] variable defaults to `1` (on). This tells the file system to issue media writes as full clusters are collected, which typically occurs when writing large sequential files. This avoids saturating the buffer cache with dirty buffers when it would not benefit I/O performance. However, this may stall processes and under certain circumstances should be turned off.
==== `vfs.hirunningspace`
The `vfs.hirunningspace` man:sysctl[8] variable determines how much outstanding write I/O may be queued to disk controllers system-wide at any given instance. The default is usually sufficient, but on machines with many disks, try bumping it up to four or five _megabytes_. Setting too high a value which exceeds the buffer cache's write threshold can lead to bad clustering performance. Do not set this value arbitrarily high as higher write values may add latency to reads occurring at the same time.
There are various other buffer cache and VM page cache related man:sysctl[8] values. Modifying these values is not recommended as the VM system does a good job of automatically tuning itself.
==== `vm.swap_idle_enabled`
The `vm.swap_idle_enabled` man:sysctl[8] variable is useful in large multi-user systems with many active login users and lots of idle processes. Such systems tend to generate continuous pressure on free memory reserves. Turning this feature on and tweaking the swapout hysteresis (in idle seconds) via `vm.swap_idle_threshold1` and `vm.swap_idle_threshold2` depresses the priority of memory pages associated with idle processes more quickly than the normal pageout algorithm. This gives a helping hand to the pageout daemon. Only turn this option on if needed, because the tradeoff is that memory is paged out sooner rather than later, which consumes more swap and disk bandwidth. In a small system this option will have a determinable effect, but in a large system that is already doing moderate paging, this option allows the VM system to stage whole processes into and out of memory easily.
==== `hw.ata.wc`
Turning off IDE write caching reduces write bandwidth to IDE disks, but may sometimes be necessary due to data consistency issues introduced by hard drive vendors. The problem is that some IDE drives lie about when a write completes. With IDE write caching turned on, IDE hard drives write data to disk out of order and will sometimes delay writing some blocks indefinitely when under heavy disk load. A crash or power failure may cause serious file system corruption. Check the default on the system by observing the `hw.ata.wc` man:sysctl[8] variable. If IDE write caching is turned off, one can set this read-only variable to `1` in [.filename]#/boot/loader.conf# in order to enable it at boot time.
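For example, to check the current setting and, if it is `0`, enable write caching at the next boot via [.filename]#/boot/loader.conf#:
[source,shell]
....
# sysctl hw.ata.wc
....
[.programlisting]
....
hw.ata.wc="1"
....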
For more information, refer to man:ata[4].
==== `SCSI_DELAY` (`kern.cam.scsi_delay`)
The `SCSI_DELAY` kernel configuration option may be used to reduce system boot times. The defaults are fairly high and can be responsible for `15` seconds of delay in the boot process. Reducing it to `5` seconds usually works with modern drives. The `kern.cam.scsi_delay` boot time tunable should be used. The tunable and kernel configuration option accept values in terms of _milliseconds_ and _not seconds_.
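For example, to reduce the delay to 5 seconds at boot time, add the following to [.filename]#/boot/loader.conf#, remembering that the value is given in milliseconds:
[.programlisting]
....
kern.cam.scsi_delay="5000"
....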
[[soft-updates]]
=== Soft Updates
To fine-tune a file system, use man:tunefs[8]. This program has many different options. To toggle Soft Updates on and off, use:
[source,shell]
....
# tunefs -n enable /filesystem
# tunefs -n disable /filesystem
....
A file system cannot be modified with man:tunefs[8] while it is mounted. A good time to enable Soft Updates is before any partitions have been mounted, in single-user mode.
Soft Updates is recommended for UFS file systems as it drastically improves meta-data performance, mainly file creation and deletion, through the use of a memory cache. There are two downsides to Soft Updates to be aware of. First, Soft Updates guarantee file system consistency in the case of a crash, but could easily be several seconds or even a minute behind updating the physical disk. If the system crashes, unwritten data may be lost. Secondly, Soft Updates delay the freeing of file system blocks. If the root file system is almost full, performing a major update, such as `make installworld`, can cause the file system to run out of space and the update to fail.
==== More Details About Soft Updates
Meta-data updates are updates to non-content data like inodes or directories. There are two traditional approaches to writing a file system's meta-data back to disk.
Historically, the default behavior was to write out meta-data updates synchronously. If a directory changed, the system waited until the change was actually written to disk. The file data buffers (file contents) were passed through the buffer cache and backed up to disk later on asynchronously. The advantage of this implementation is that it operates safely. If there is a failure during an update, meta-data is always in a consistent state. A file is either created completely or not at all. If the data blocks of a file did not find their way out of the buffer cache onto the disk by the time of the crash, man:fsck[8] recognizes this and repairs the file system by setting the file length to `0`. Additionally, the implementation is clear and simple. The disadvantage is that meta-data changes are slow. For example, `rm -r` touches all the files in a directory sequentially, but each directory change will be written synchronously to the disk. This includes updates to the directory itself, to the inode table, and possibly to indirect blocks allocated by the file. Similar considerations apply for unrolling large hierarchies using `tar -x`.
The second approach is to use asynchronous meta-data updates. This is the default for a UFS file system mounted with `mount -o async`. Since all meta-data updates are also passed through the buffer cache, they will be intermixed with the updates of the file content data. The advantage of this implementation is there is no need to wait until each meta-data update has been written to disk, so all operations which cause huge amounts of meta-data updates work much faster than in the synchronous case. This implementation is still clear and simple, so there is a low risk for bugs creeping into the code. The disadvantage is that there is no guarantee for a consistent state of the file system. If there is a failure during an operation that updated large amounts of meta-data, like a power failure or someone pressing the reset button, the file system will be left in an unpredictable state. There is no opportunity to examine the state of the file system when the system comes up again as the data blocks of a file could already have been written to the disk while the updates of the inode table or the associated directory were not. It is impossible to implement a man:fsck[8] which is able to clean up the resulting chaos because the necessary information is not available on the disk. If the file system has been damaged beyond repair, the only choice is to reformat it and restore from backup.
The usual solution for this problem is to implement _dirty region logging_, which is also referred to as _journaling_. Meta-data updates are still written synchronously, but only into a small region of the disk. Later on, they are moved to their proper location. Since the logging area is a small, contiguous region on the disk, there are no long distances for the disk heads to move, even during heavy operations, so these operations are quicker than synchronous updates. Additionally, the complexity of the implementation is limited, so the risk of bugs being present is low. A disadvantage is that all meta-data is written twice, once into the logging region and once to the proper location, so performance "pessimization" might result. On the other hand, in case of a crash, all pending meta-data operations can be either quickly rolled back or completed from the logging area after the system comes up again, resulting in a fast file system startup.
Kirk McKusick, the developer of Berkeley FFS, solved this problem with Soft Updates. All pending meta-data updates are kept in memory and written out to disk in a sorted sequence ("ordered meta-data updates"). This has the effect that, in case of heavy meta-data operations, later updates to an item "catch" the earlier ones which are still in memory and have not already been written to disk. All operations are generally performed in memory before the update is written to disk and the data blocks are sorted according to their position so that they will not be on the disk ahead of their meta-data. If the system crashes, an implicit "log rewind" causes all operations which were not written to the disk to appear as if they never happened. A consistent file system state is maintained that appears to be the one of 30 to 60 seconds earlier. The algorithm used guarantees that all resources in use are marked as such in their blocks and inodes. After a crash, the only resource allocation error that occurs is that resources are marked as "used" which are actually "free". man:fsck[8] recognizes this situation, and frees the resources that are no longer used. It is safe to ignore the dirty state of the file system after a crash by forcibly mounting it with `mount -f`. In order to free resources that may be unused, man:fsck[8] needs to be run at a later time. This is the idea behind the _background man:fsck[8]_: at system startup time, only a _snapshot_ of the file system is recorded and man:fsck[8] is run afterwards. All file systems can then be mounted "dirty", so the system startup proceeds in multi-user mode. Then, background man:fsck[8] is scheduled for all file systems where this is required, to free resources that may be unused. File systems that do not use Soft Updates still need the usual foreground man:fsck[8].
The advantage is that meta-data operations are nearly as fast as asynchronous updates and are faster than _logging_, which has to write the meta-data twice. The disadvantages are the complexity of the code, a higher memory consumption, and some idiosyncrasies. After a crash, the state of the file system appears to be somewhat "older". In situations where the standard synchronous approach would have caused some zero-length files to remain after the man:fsck[8], these files do not exist at all with Soft Updates because neither the meta-data nor the file contents have been written to disk. Disk space is not released until the updates have been written to disk, which may take place some time after running man:rm[1]. This may cause problems when installing large amounts of data on a file system that does not have enough free space to hold all the files twice.
[[configtuning-kernel-limits]]
== Tuning Kernel Limits
[[file-process-limits]]
=== File/Process Limits
[[kern-maxfiles]]
==== `kern.maxfiles`
The `kern.maxfiles` man:sysctl[8] variable can be raised or lowered based upon system requirements. This variable indicates the maximum number of file descriptors on the system. When the file descriptor table is full, `file: table is full` will show up repeatedly in the system message buffer, which can be viewed using man:dmesg[8].
Each open file, socket, or fifo uses one file descriptor. A large-scale production server may easily require many thousands of file descriptors, depending on the kind and number of services running concurrently.
In older FreeBSD releases, the default value of `kern.maxfiles` is derived from `maxusers` in the kernel configuration file. `kern.maxfiles` grows proportionally to the value of `maxusers`. When compiling a custom kernel, consider setting this kernel configuration option according to the use of the system. From this number, the kernel is given most of its pre-defined limits. Even though a production machine may not have 256 concurrent users, the resources needed may be similar to a high-scale web server.
The read-only man:sysctl[8] variable `kern.maxusers` is automatically sized at boot based on the amount of memory available in the system, and its current value can be inspected at run-time. Some systems require larger or smaller values of `kern.maxusers` and values of `64`, `128`, and `256` are not uncommon. Going above `256` is not recommended unless a huge number of file descriptors is needed. Many of the tunable values set to their defaults by `kern.maxusers` may be individually overridden at boot-time or run-time in [.filename]#/boot/loader.conf#. Refer to man:loader.conf[5] and [.filename]#/boot/defaults/loader.conf# for more details and some hints.
In older releases, the system will auto-tune `maxusers` if it is set to `0`.footnote:[The auto-tuning algorithm sets maxusers equal to the amount of memory in the system, with a minimum of 32, and a maximum of 384.] When setting this option, set `maxusers` to at least `4`, especially if the system runs Xorg or is used to compile software. The most important table set by `maxusers` is the maximum number of processes, which is set to `20 + 16 * maxusers`. If `maxusers` is set to `1`, there can only be `36` simultaneous processes, including the `18` or so that the system starts up at boot time and the `15` or so used by Xorg. Even a simple task like reading a manual page will start up nine processes to filter, decompress, and view it. Setting `maxusers` to `64` allows up to `1044` simultaneous processes, which should be enough for nearly all uses. If, however, the error is displayed when trying to start another program, or a server is running with a large number of simultaneous users, increase the number and rebuild.
[NOTE]
====
`maxusers` does _not_ limit the number of users which can log into the machine. It instead sets various table sizes to reasonable values considering the maximum number of users on the system and how many processes each user will be running.
====
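As a sketch, to override the automatically sized value at boot, a line such as the following (the value is only an example) can be added to [.filename]#/boot/loader.conf#:
[.programlisting]
....
kern.maxusers="256"
....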
==== `kern.ipc.soacceptqueue`
The `kern.ipc.soacceptqueue` man:sysctl[8] variable limits the size of the listen queue for accepting new `TCP` connections. The default value of `128` is typically too low for robust handling of new connections on a heavily loaded web server. For such environments, it is recommended to increase this value to `1024` or higher. A service such as man:sendmail[8], or Apache may itself limit the listen queue size, but will often have a directive in its configuration file to adjust the queue size. Large listen queues do a better job of avoiding Denial of Service (DoS) attacks.
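For example, to raise the limit at run time and make the change persistent across reboots, using the value recommended above:
[source,shell]
....
# sysctl kern.ipc.soacceptqueue=1024
# echo 'kern.ipc.soacceptqueue=1024' >> /etc/sysctl.conf
....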
[[nmbclusters]]
=== Network Limits
The `NMBCLUSTERS` kernel configuration option dictates the amount of network Mbufs available to the system. A heavily-trafficked server with a low number of Mbufs will hinder performance. Each cluster represents approximately 2 K of memory, so a value of `1024` represents `2` megabytes of kernel memory reserved for network buffers. A simple calculation can be done to figure out how many are needed. A web server which maxes out at `1000` simultaneous connections where each connection uses a 6 K receive and 16 K send buffer, requires approximately 32 MB worth of network buffers to cover the web server. A good rule of thumb is to multiply by `2`, so 2x32 MB / 2 KB = 64 MB / 2 KB = `32768`. Values between `4096` and `32768` are recommended for machines with greater amounts of memory. Never specify an arbitrarily high value for this parameter as it could lead to a boot time crash. To observe network cluster usage, use `-m` with man:netstat[1].
The `kern.ipc.nmbclusters` loader tunable should be used to tune this at boot time. Only older versions of FreeBSD will require the use of the `NMBCLUSTERS` kernel man:config[8] option.
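Based on the calculation above, a sketch of the corresponding [.filename]#/boot/loader.conf# entry would be:
[.programlisting]
....
kern.ipc.nmbclusters="32768"
....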
For busy servers that make extensive use of the man:sendfile[2] system call, it may be necessary to increase the number of man:sendfile[2] buffers via the `NSFBUFS` kernel configuration option or by setting its value in [.filename]#/boot/loader.conf# (see man:loader[8] for details). A common indicator that this parameter needs to be adjusted is when processes are seen in the `sfbufa` state. The man:sysctl[8] variable `kern.ipc.nsfbufs` is read-only. This parameter nominally scales with `kern.maxusers`; however, it may be necessary to tune it separately.
[IMPORTANT]
====
Even though a socket has been marked as non-blocking, calling man:sendfile[2] on the non-blocking socket may result in the man:sendfile[2] call blocking until enough ``struct sf_buf``'s are made available.
====
==== `net.inet.ip.portrange.*`
The `net.inet.ip.portrange.*` man:sysctl[8] variables control the port number ranges automatically bound to `TCP` and `UDP` sockets. There are three ranges: a low range, a default range, and a high range. Most network programs use the default range which is controlled by `net.inet.ip.portrange.first` and `net.inet.ip.portrange.last`, which default to `1024` and `5000`, respectively. Bound port ranges are used for outgoing connections and it is possible to run the system out of ports under certain circumstances. This most commonly occurs when running a heavily loaded web proxy. The port range is not an issue when running a server which handles mainly incoming connections, such as a web server, or has a limited number of outgoing connections, such as a mail relay. For situations where there is a shortage of ports, it is recommended to increase `net.inet.ip.portrange.last` modestly. A value of `10000`, `20000` or `30000` may be reasonable. Consider firewall effects when changing the port range. Some firewalls may block large ranges of ports, usually low-numbered ports, and expect systems to use higher ranges of ports for outgoing connections. For this reason, it is not recommended that the value of `net.inet.ip.portrange.first` be lowered.
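For example, to raise the upper end of the default range to one of the suggested values, the following could be added to [.filename]#/etc/sysctl.conf#:
[.programlisting]
....
net.inet.ip.portrange.last=20000
....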
==== `TCP` Bandwidth Delay Product
`TCP` bandwidth delay product limiting can be enabled by setting the `net.inet.tcp.inflight.enable` man:sysctl[8] variable to `1`. This instructs the system to attempt to calculate the bandwidth delay product for each connection and limit the amount of data queued to the network to just the amount required to maintain optimum throughput.
This feature is useful when serving data over modems, Gigabit Ethernet, high speed `WAN` links, or any other link with a high bandwidth delay product, especially when also using window scaling or when a large send window has been configured. When enabling this option, also set `net.inet.tcp.inflight.debug` to `0` to disable debugging. For production use, setting `net.inet.tcp.inflight.min` to at least `6144` may be beneficial. Setting high minimums may effectively disable bandwidth limiting, depending on the link. The limiting feature reduces the amount of data built up in intermediate route and switch packet queues and reduces the amount of data built up in the local host's interface queue. With fewer queued packets, interactive connections, especially over slow modems, will operate with lower _Round Trip Times_. This feature only affects server-side data transmission, such as uploading. It has no effect on data reception or downloading.
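As an illustration, the settings recommended above could be placed in [.filename]#/etc/sysctl.conf#:
[.programlisting]
....
net.inet.tcp.inflight.enable=1
net.inet.tcp.inflight.debug=0
net.inet.tcp.inflight.min=6144
....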
Adjusting `net.inet.tcp.inflight.stab` is _not_ recommended. This parameter defaults to `20`, representing 2 maximal packets added to the bandwidth delay product window calculation. The additional window is required to stabilize the algorithm and improve responsiveness to changing conditions, but it can also result in higher man:ping[8] times over slow links, though still much lower than without the inflight algorithm. In such cases, try reducing this parameter to `15`, `10`, or `5` and reducing `net.inet.tcp.inflight.min` to a value such as `3500` to get the desired effect. Reducing these parameters should be done as a last resort only.
=== Virtual Memory
==== `kern.maxvnodes`
A vnode is the internal representation of a file or directory. Increasing the number of vnodes available to the operating system reduces disk I/O. Normally, this is handled by the operating system and does not need to be changed. In some cases where disk I/O is a bottleneck and the system is running out of vnodes, this setting needs to be increased. The amount of inactive and free RAM will need to be taken into account.
To see the current number of vnodes in use:
[source,shell]
....
# sysctl vfs.numvnodes
vfs.numvnodes: 91349
....
To see the maximum vnodes:
[source,shell]
....
# sysctl kern.maxvnodes
kern.maxvnodes: 100000
....
If the current vnode usage is near the maximum, try increasing `kern.maxvnodes` by a value of `1000`. Keep an eye on the number of `vfs.numvnodes`. If it climbs up to the maximum again, `kern.maxvnodes` will need to be increased further. Otherwise, a shift in memory usage as reported by man:top[1] should be visible and more memory should be active.
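For example, starting from the maximum shown above, the limit could be raised in steps of `1000` (the exact value is illustrative):
[source,shell]
....
# sysctl kern.maxvnodes=101000
....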
[[adding-swap-space]]
== Adding Swap Space
Sometimes a system requires more swap space. This section describes two methods to increase swap space: adding swap to an existing partition or new hard drive, and creating a swap file on an existing partition.
For information on how to encrypt swap space, which options exist, and why it should be done, refer to crossref:disks[swap-encrypting,“Encrypting Swap”].
[[new-drive-swap]]
=== Swap on a New Hard Drive or Existing Partition
Adding a new hard drive for swap gives better performance than using a partition on an existing drive. Setting up partitions and hard drives is explained in crossref:disks[disks-adding,“Adding Disks”] while crossref:bsdinstall[configtuning-initial,“Designing the Partition Layout”] discusses partition layouts and swap partition size considerations.
Use `swapon` to add a swap partition to the system. For example:
[source,shell]
....
# swapon /dev/ada1s1b
....
[WARNING]
====
It is possible to use any partition not currently mounted, even if it already contains data. Using `swapon` on a partition that contains data will overwrite and destroy that data. Make sure that the partition to be added as swap is really the intended partition before running `swapon`.
====
To automatically add this swap partition on boot, add an entry to [.filename]#/etc/fstab#:
[.programlisting]
....
/dev/ada1s1b none swap sw 0 0
....
See man:fstab[5] for an explanation of the entries in [.filename]#/etc/fstab#. More information about `swapon` can be found in man:swapon[8].
[[create-swapfile]]
=== Creating a Swap File
This example creates a 512M swap file called [.filename]#/usr/swap0# instead of using a partition.
Using swap files requires that the module needed by man:md[4] has either been built into the kernel or has been loaded before swap is enabled. See crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel] for information about building a custom kernel.
[[swapfile-10-and-later]]
.Creating a Swap File
[example]
====
[.procedure]
. Create the swap file:
+
[source,shell]
....
# dd if=/dev/zero of=/usr/swap0 bs=1m count=512
....
. Set the proper permissions on the new file:
+
[source,shell]
....
# chmod 0600 /usr/swap0
....
. Inform the system about the swap file by adding a line to [.filename]#/etc/fstab#:
+
[.programlisting]
....
md99 none swap sw,file=/usr/swap0,late 0 0
....
+
The man:md[4] device [.filename]#md99# is used, leaving lower device numbers available for interactive use.
. Swap space will be added on system startup. To add swap space immediately, use man:swapon[8]:
+
[source,shell]
....
# swapon -aL
....
====
[[acpi-overview]]
== Power and Resource Management
It is important to utilize hardware resources in an efficient manner. Power and resource management allows the operating system to monitor system limits and to possibly provide an alert if the system temperature increases unexpectedly. An early specification for providing power management was the Advanced Power Management (APM) facility. APM controls the power usage of a system based on its activity. However, it was difficult and inflexible for operating systems to manage the power usage and thermal properties of a system. The hardware was managed by the BIOS and the user had limited configurability and visibility into the power management settings. The APMBIOS is supplied by the vendor and is specific to the hardware platform. An APM driver in the operating system mediates access to the APM Software Interface, which allows management of power levels.
There are four major problems in APM. First, power management is done by the vendor-specific BIOS, separate from the operating system. For example, the user can set idle-time values for a hard drive in the APMBIOS so that, when exceeded, the BIOS spins down the hard drive without the consent of the operating system. Second, the APM logic is embedded in the BIOS, and it operates outside the scope of the operating system. This means that users can only fix problems in the APMBIOS by flashing a new one into the ROM, which is a dangerous procedure with the potential to leave the system in an unrecoverable state if it fails. Third, APM is a vendor-specific technology, meaning that there is a lot of duplication of efforts and bugs found in one vendor's BIOS may not be solved in others. Lastly, the APMBIOS did not have enough room to implement a sophisticated power policy or one that can adapt well to the purpose of the machine.
The Plug and Play BIOS (PNPBIOS) was unreliable in many situations. PNPBIOS is 16-bit technology, so the operating system has to use 16-bit emulation in order to interface with PNPBIOS methods. FreeBSD provides an APM driver as APM should still be used for systems manufactured at or before the year 2000. The driver is documented in man:apm[4].
The successor to APM is the Advanced Configuration and Power Interface (ACPI). ACPI is a standard written by an alliance of vendors to provide an interface for hardware resources and power management. It is a key element in _Operating System-directed configuration and Power Management_ as it provides more control and flexibility to the operating system.
This chapter demonstrates how to configure ACPI on FreeBSD. It then offers some tips on how to debug ACPI and how to submit a problem report containing debugging information so that developers can diagnose and fix ACPI issues.
[[acpi-config]]
=== Configuring ACPI
In FreeBSD the man:acpi[4] driver is loaded by default at system boot and should _not_ be compiled into the kernel. This driver cannot be unloaded after boot because the system bus uses it for various hardware interactions. However, if the system is experiencing problems, ACPI can be disabled altogether by rebooting after setting `hint.acpi.0.disabled="1"` in [.filename]#/boot/loader.conf# or by setting this variable at the loader prompt, as described in crossref:boot[boot-loader,“Stage Three”].
[NOTE]
====
ACPI and APM cannot coexist and should be used separately. The last one to load will terminate if the driver notices the other is running.
====
ACPI can be used to put the system into a sleep mode with `acpiconf`, the `-s` flag, and a number from `1` to `5`. Most users only need `1` (quick suspend to RAM) or `3` (suspend to RAM). Option `5` performs a soft-off which is the same as running `halt -p`.
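For example, to suspend the system to RAM:
[source,shell]
....
# acpiconf -s 3
....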
The man:acpi_video[4] driver uses
link:https://uefi.org/specs/ACPI/6.4/Apx_B_Video_Extensions/Apx_B_Video_Extensions.html[ACPI
Video Extensions] to control display switching and backlight brightness. It must
be loaded after any of the DRM kernel modules. After loading the driver,
the kbd:[Fn] brightness keys will change the brightness of the screen. It is
possible to check the ACPI events by inspecting [.filename]#/var/run/devd.pipe#:
[source,shell]
....
# cat /var/run/devd.pipe
!system=ACPI subsystem=Video type=brightness notify=62
!system=ACPI subsystem=Video type=brightness notify=63
!system=ACPI subsystem=Video type=brightness notify=64
....
Other options are available using `sysctl`. Refer to man:acpi[4] and man:acpiconf[8] for more information.
[[ACPI-comprob]]
=== Common Problems
ACPI is present in all modern computers that conform to the ia32 (x86) and amd64 (AMD) architectures. The full standard has many features including CPU performance management, power planes control, thermal zones, various battery systems, embedded controllers, and bus enumeration. Most systems implement less than the full standard. For instance, a desktop system usually only implements bus enumeration while a laptop might have cooling and battery management support as well. Laptops also have suspend and resume, with their own associated complexity.
An ACPI-compliant system has various components. The BIOS and chipset vendors provide various fixed tables, such as FADT, in memory that specify things like the APIC map (used for SMP), config registers, and simple configuration values. Additionally, a bytecode table, the Differentiated System Description Table (DSDT), specifies a tree-like name space of devices and methods.
The ACPI driver must parse the fixed tables, implement an interpreter for the bytecode, and modify device drivers and the kernel to accept information from the ACPI subsystem. For FreeBSD, Intel(R) has provided an interpreter (ACPI-CA) that is shared with Linux(R) and NetBSD. The path to the ACPI-CA source code is [.filename]#src/sys/contrib/dev/acpica#. The glue code that allows ACPI-CA to work on FreeBSD is in [.filename]#src/sys/dev/acpica/Osd#. Finally, drivers that implement various ACPI devices are found in [.filename]#src/sys/dev/acpica#.
For ACPI to work correctly, all the parts have to work correctly. Here are some common problems, in order of frequency of appearance, and some possible workarounds or fixes. If a fix does not resolve the issue, refer to <<ACPI-submitdebug>> for instructions on how to submit a bug report.
==== Mouse Issues
In some cases, resuming from a suspend operation will cause the mouse to fail. A known work around is to add `hint.psm.0.flags="0x3000"` to [.filename]#/boot/loader.conf#.
==== Suspend/Resume
ACPI has three suspend to RAM (STR) states, `S1`-`S3`, and one suspend to disk state (STD), called `S4`. STD can be implemented in two separate ways. The ``S4``BIOS is a BIOS-assisted suspend to disk and ``S4``OS is implemented entirely by the operating system. The normal state the system is in when plugged in but not powered up is "soft off" (`S5`).
Use `sysctl hw.acpi` to check for the suspend-related items. These example results are from a Thinkpad:
[source,shell]
....
hw.acpi.supported_sleep_state: S3 S4 S5
hw.acpi.s4bios: 0
....
Use `acpiconf -s` to test `S3`, `S4`, and `S5`. An `s4bios` of one (`1`) indicates ``S4``BIOS support instead of `S4` operating system support.
When testing suspend/resume, start with `S1`, if supported. This state is most likely to work since it does not require much driver support. No one has implemented `S2`, which is similar to `S1`. Next, try `S3`. This is the deepest STR state and requires a lot of driver support to properly reinitialize the hardware.
A common problem with suspend/resume is that many device drivers do not save, restore, or reinitialize their firmware, registers, or device memory properly. As a first attempt at debugging the problem, try:
[source,shell]
....
# sysctl debug.bootverbose=1
# sysctl debug.acpi.suspend_bounce=1
# acpiconf -s 3
....
This test emulates the suspend/resume cycle of all device drivers without actually going into `S3` state. In some cases, problems such as losing firmware state, device watchdog timeouts, and endless retries can be captured with this method. Note that the system will not really enter `S3` state, which means devices may not lose power, and many will work fine even if suspend/resume methods are totally missing, unlike real `S3` state.
If the previous test worked, on a laptop it is possible to configure the system
to suspend into `S3` on lid close and resume when it is open back again:
[source,shell]
....
# sysctl hw.acpi.lid_switch_state=S3
....
This change can be made persistent across reboots:
[source,shell]
....
# echo 'hw.acpi.lid_switch_state=S3' >> /etc/sysctl.conf
....
Harder cases require additional hardware, such as a serial port and cable for debugging through a serial console, a Firewire port and cable for using man:dcons[4], and kernel debugging skills.
To help isolate the problem, unload as many drivers as possible. If it works, narrow down which driver is the problem by loading drivers until it fails again. Typically, binary drivers like [.filename]#nvidia.ko#, display drivers, and USB will have the most problems while Ethernet interfaces usually work fine. If drivers can be properly loaded and unloaded, automate this by putting the appropriate commands in [.filename]#/etc/rc.suspend# and [.filename]#/etc/rc.resume#. Try setting `hw.acpi.reset_video` to `1` if the display is messed up after resume. Try setting longer or shorter values for `hw.acpi.sleep_delay` to see if that helps.
Try loading a recent Linux(R) distribution to see if suspend/resume works on the same hardware. If it works on Linux(R), it is likely a FreeBSD driver problem. Narrowing down which driver causes the problem will assist developers in fixing the problem. Since the ACPI maintainers rarely maintain other drivers, such as sound or ATA, any driver problems should also be posted to the {freebsd-current} and mailed to the driver maintainer. Advanced users can include debugging man:printf[3]s in a problematic driver to track down where in its resume function it hangs.
Finally, try disabling ACPI and enabling APM instead. If suspend/resume works with APM, stick with APM, especially on older hardware (pre-2000). It took vendors a while to get ACPI support correct and older hardware is more likely to have BIOS problems with ACPI.
==== System Hangs
Most system hangs are a result of lost interrupts or an interrupt storm. Chipsets may have problems with how the BIOS configures interrupts before boot, the correctness of the APIC (MADT) table, and the routing of the System Control Interrupt (SCI).
Interrupt storms can be distinguished from lost interrupts by checking the output of `vmstat -i` and looking at the line that has `acpi0`. If the counter is increasing at more than a couple per second, there is an interrupt storm. If the system appears hung, try breaking to DDB (kbd:[CTRL+ALT+ESC] on console) and type `show interrupts`.
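For example, to watch only the ACPI interrupt counter, the output of `vmstat -i` can be filtered (this invocation is just one way to do it):
[source,shell]
....
# vmstat -i | grep acpi0
....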
When dealing with interrupt problems, try disabling APIC support with `hint.apic.0.disabled="1"` in [.filename]#/boot/loader.conf#.
==== Panics
Panics are relatively rare for ACPI and are the top priority to be fixed. The first step is to isolate the steps to reproduce the panic, if possible, and get a backtrace. Follow the advice for enabling `options DDB` and setting up a serial console in crossref:serialcomms[serialconsole-ddb,“Entering the DDB Debugger from the Serial Line”] or setting up a dump partition. To get a backtrace in DDB, use `tr`. When handwriting the backtrace, get at least the last five and the top five lines in the trace.
Then, try to isolate the problem by booting with ACPI disabled. If that works, isolate the ACPI subsystem by using various values of `debug.acpi.disable`. See man:acpi[4] for some examples.
==== System Powers Up After Suspend or Shutdown
First, try setting `hw.acpi.disable_on_poweroff="0"` in [.filename]#/boot/loader.conf#. This keeps ACPI from disabling various events during the shutdown process. Some systems need this value set to `1` (the default) for the same reason. This usually fixes the problem of a system powering up spontaneously after a suspend or poweroff.
[[ACPI-aslanddump]]
==== BIOS Contains Buggy Bytecode
Some BIOS vendors provide incorrect or buggy bytecode. This is usually manifested by kernel console messages like this:
[source,shell]
....
ACPI-1287: *** Error: Method execution failed [\\_SB_.PCI0.LPC0.FIGD._STA] \\
(Node 0xc3f6d160), AE_NOT_FOUND
....
Often, these problems may be resolved by updating the BIOS to the latest revision. Most console messages are harmless, but if there are other problems, like the battery status is not working, these messages are a good place to start looking for problems.
=== Overriding the Default AML
The BIOS bytecode, known as ACPI Machine Language (AML), is compiled from a source language called ACPI Source Language (ASL). The AML is found in the table known as the Differentiated System Description Table (DSDT).
The goal of FreeBSD is for everyone to have working ACPI without any user intervention. Workarounds are still being developed for common mistakes made by BIOS vendors. The Microsoft(R) interpreter ([.filename]#acpi.sys# and [.filename]#acpiec.sys#) does not strictly check for adherence to the standard, and thus many BIOS vendors who only test ACPI under Windows(R) never fix their ASL. FreeBSD developers continue to identify and document which non-standard behavior is allowed by Microsoft(R)'s interpreter and replicate it so that FreeBSD can work without forcing users to fix the ASL.
To help identify buggy behavior and possibly fix it manually, a copy can be made of the system's ASL. To copy the system's ASL to a specified file name, use `acpidump` with `-t`, to show the contents of the fixed tables, and `-d`, to disassemble the AML:
[source,shell]
....
# acpidump -td > my.asl
....
Some AML versions assume the user is running Windows(R). To override this, set `hw.acpi.osname="Windows 2009"` in [.filename]#/boot/loader.conf#, using the most recent Windows(R) version listed in the ASL.
Other workarounds may require [.filename]#my.asl# to be customized. If this file is edited, compile the new ASL using the following command. Warnings can usually be ignored, but errors are bugs that will usually prevent ACPI from working correctly.
[source,shell]
....
# iasl -f my.asl
....
Including `-f` forces creation of the AML, even if there are errors during compilation. Some errors, such as missing return statements, are automatically worked around by the FreeBSD interpreter.
The default output filename for `iasl` is [.filename]#DSDT.aml#. Load this file instead of the BIOS's buggy copy, which is still present in flash memory, by editing [.filename]#/boot/loader.conf# as follows:
[.programlisting]
....
acpi_dsdt_load="YES"
acpi_dsdt_name="/boot/DSDT.aml"
....
Be sure to copy [.filename]#DSDT.aml# to [.filename]#/boot#, then reboot the system. If this fixes the problem, send a man:diff[1] of the old and new ASL to {freebsd-acpi} so that developers can work around the buggy behavior in [.filename]#acpica#.
[[ACPI-submitdebug]]
=== Getting and Submitting Debugging Info
The ACPI driver has a flexible debugging facility. A set of subsystems and the level of verbosity can be specified. The subsystems to debug are specified as layers and are broken down into components (`ACPI_ALL_COMPONENTS`) and ACPI hardware support (`ACPI_ALL_DRIVERS`). The verbosity of debugging output is specified as the level and ranges from just report errors (`ACPI_LV_ERROR`) to everything (`ACPI_LV_VERBOSE`). The level is a bitmask so multiple options can be set at once, separated by spaces. In practice, a serial console should be used to log the output so it is not lost as the console message buffer flushes. A full list of the individual layers and levels is found in man:acpi[4].
Debugging output is not enabled by default. To enable it, add `options ACPI_DEBUG` to the custom kernel configuration file if ACPI is compiled into the kernel. Add `ACPI_DEBUG=1` to [.filename]#/etc/make.conf# to enable it globally. If a module is used instead of a custom kernel, recompile just the [.filename]#acpi.ko# module as follows:
[source,shell]
....
# cd /sys/modules/acpi/acpi && make clean && make ACPI_DEBUG=1
....
Copy the compiled [.filename]#acpi.ko# to [.filename]#/boot/kernel# and add the desired level and layer to [.filename]#/boot/loader.conf#. The entries in this example enable debug messages for all ACPI components and hardware drivers and output error messages at the least verbose level:
[.programlisting]
....
debug.acpi.layer="ACPI_ALL_COMPONENTS ACPI_ALL_DRIVERS"
debug.acpi.level="ACPI_LV_ERROR"
....
If the required information is triggered by a specific event, such as a suspend and then resume, do not modify [.filename]#/boot/loader.conf#. Instead, use `sysctl` to specify the layer and level after booting and preparing the system for the specific event. The variables which can be set using `sysctl` are named the same as the tunables in [.filename]#/boot/loader.conf#.
Once the debugging information is gathered, it can be sent to {freebsd-acpi} so that it can be used by the FreeBSD ACPI maintainers to identify the root cause of the problem and to develop a solution.
[NOTE]
====
Before submitting debugging information to this mailing list, ensure the latest BIOS version is installed and, if available, the embedded controller firmware version.
====
When submitting a problem report, include the following information:
* Description of the buggy behavior, including system type, model, and anything that causes the bug to appear. Note as accurately as possible when the bug began occurring if it is new.
* The output of `dmesg` after running `boot -v`, including any error messages generated by the bug.
* The `dmesg` output from `boot -v` with ACPI disabled, if disabling ACPI helps to fix the problem.
* Output from `sysctl hw.acpi`. This lists which features the system offers.
* The URL to a pasted version of the system's ASL. Do _not_ send the ASL directly to the list as it can be very large. Generate a copy of the ASL by running this command:
+
[source,shell]
....
# acpidump -dt > name-system.asl
....
+
Substitute the login name for _name_ and manufacturer/model for _system_. For example, use [.filename]#njl-FooCo6000.asl#.
Most FreeBSD developers watch the {freebsd-current}, but one should submit problems to {freebsd-acpi} to be sure it is seen. Be patient when waiting for a response. If the bug is not immediately apparent, submit a bug report. When entering a PR, include the same information as requested above. This helps developers to track the problem and resolve it. Do not send a PR without emailing {freebsd-acpi} first as it is likely that the problem has been reported before.
[[ACPI-References]]
=== References
More information about ACPI may be found in the following locations:
* The FreeBSD ACPI Mailing List Archives (https://lists.freebsd.org/pipermail/freebsd-acpi/[https://lists.freebsd.org/pipermail/freebsd-acpi/])
* The ACPI 2.0 Specification (http://acpi.info/spec.htm[http://acpi.info/spec.htm])
* man:acpi[4], man:acpi_thermal[4], man:acpidump[8], man:iasl[8], and man:acpidb[8]
diff --git a/documentation/content/en/books/handbook/cutting-edge/_index.adoc b/documentation/content/en/books/handbook/cutting-edge/_index.adoc
index 3cd371ee7c..f3eb818b61 100644
--- a/documentation/content/en/books/handbook/cutting-edge/_index.adoc
+++ b/documentation/content/en/books/handbook/cutting-edge/_index.adoc
@@ -1,1030 +1,1031 @@
---
title: Chapter 24. Updating and Upgrading FreeBSD
part: Part III. System Administration
prev: books/handbook/l10n
next: books/handbook/dtrace
+description: Information about how to keep a FreeBSD system up-to-date with freebsd-update or Git, how to rebuild and reinstall the entire base system, etc
---
[[updating-upgrading]]
= Updating and Upgrading FreeBSD
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 24
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../../images/books/handbook/cutting-edge/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/books/handbook/cutting-edge/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/books/handbook/cutting-edge/
endif::[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[updating-upgrading-synopsis]]
== Synopsis
FreeBSD is under constant development between releases.
Some people prefer to use the officially released versions, while others prefer to keep in sync with the latest developments.
However, even official releases are often updated with security and other critical fixes.
Regardless of the version used, FreeBSD provides all the necessary tools to keep the system updated, and allows for easy upgrades between versions.
This chapter describes how to track the development system and the basic tools for keeping a FreeBSD system up-to-date.
After reading this chapter, you will know:
* How to keep a FreeBSD system up-to-date with freebsd-update or Git.
* How to compare the state of an installed system against a known pristine copy.
* How to keep the installed documentation up-to-date with Git or documentation ports.
* The difference between the two development branches: FreeBSD-STABLE and FreeBSD-CURRENT.
* How to rebuild and reinstall the entire base system.
Before reading this chapter, you should:
* Properly set up the network connection (crossref:advanced-networking[advanced-networking,Advanced Networking]).
* Know how to install additional third-party software (crossref:ports[ports,Installing Applications: Packages and Ports]).
[NOTE]
====
Throughout this chapter, `git` is used to obtain and update FreeBSD sources. Optionally, the package:devel/git[] port or package may be used.
====
[[updating-upgrading-freebsdupdate]]
== FreeBSD Update
Applying security patches in a timely manner and upgrading to a newer release of an operating system are important aspects of ongoing system administration.
FreeBSD includes a utility called `freebsd-update` which can be used to perform both these tasks.
This utility supports binary security and errata updates to FreeBSD, without the need to manually compile and install the patch or a new kernel.
Binary updates are available for all architectures and releases currently supported by the security team.
The list of supported releases and their estimated end-of-life dates are listed at https://www.FreeBSD.org/security/[https://www.FreeBSD.org/security/].
This utility also supports operating system upgrades to minor point releases as well as upgrades to another release branch.
Before upgrading to a new release, review its release announcement as it contains important information pertinent to the release.
Release announcements are available from https://www.FreeBSD.org/releases/[https://www.FreeBSD.org/releases/].
[NOTE]
====
If a `crontab` utilizing the features of man:freebsd-update[8] exists, it must be disabled before upgrading the operating system.
====
This section describes the configuration file used by `freebsd-update`, demonstrates how to apply a security patch and how to upgrade to a minor or major operating system release, and discusses some of the considerations when upgrading the operating system.
[[freebsdupdate-config-file]]
=== The Configuration File
The default configuration file for `freebsd-update` works as-is.
Some users may wish to tweak the default configuration in [.filename]#/etc/freebsd-update.conf#, allowing better control of the process.
The comments in this file explain the available options, but the following may require a bit more explanation:
[.programlisting]
....
# Components of the base system which should be kept updated.
Components world kernel
....
This parameter controls which parts of FreeBSD will be kept up-to-date.
The default is to update the entire base system and the kernel.
Individual components can instead be specified, such as `src/base` or `src/sys`.
However, the best option is to leave this at the default as changing it to include specific items requires every needed item to be listed.
Over time, this could have disastrous consequences as source code and binaries may become out of sync.
[.programlisting]
....
# Paths which start with anything matching an entry in an IgnorePaths
# statement will be ignored.
IgnorePaths /boot/kernel/linker.hints
....
To leave specified directories, such as [.filename]#/bin# or [.filename]#/sbin#, untouched during the update process, add their paths to this statement.
This option may be used to prevent `freebsd-update` from overwriting local modifications.
[.programlisting]
....
# Paths which start with anything matching an entry in an UpdateIfUnmodified
# statement will only be updated if the contents of the file have not been
# modified by the user (unless changes are merged; see below).
UpdateIfUnmodified /etc/ /var/ /root/ /.cshrc /.profile
....
This option will only update unmodified configuration files in the specified directories.
Any changes made by the user will prevent the automatic updating of these files.
There is another option, `KeepModifiedMetadata`, which will instruct `freebsd-update` to save the changes during the merge.
[.programlisting]
....
# When upgrading to a new FreeBSD release, files which match MergeChanges
# will have any local changes merged into the version from the new release.
MergeChanges /etc/ /var/named/etc/ /boot/device.hints
....
List of directories with configuration files that `freebsd-update` should attempt to merge.
The file merge process is a series of man:diff[1] patches similar to man:mergemaster[8], but with fewer options.
Each merge is either accepted, opens an editor for manual resolution, or causes `freebsd-update` to abort.
When in doubt, back up [.filename]#/etc# and just accept the merges.
See man:mergemaster[8] for more information about `mergemaster`.
[.programlisting]
....
# Directory in which to store downloaded updates and temporary
# files used by FreeBSD Update.
# WorkDir /var/db/freebsd-update
....
This directory is where all patches and temporary files are placed.
In cases where the user is doing a version upgrade, this location should have at least a gigabyte of disk space available.
[.programlisting]
....
# When upgrading between releases, should the list of Components be
# read strictly (StrictComponents yes) or merely as a list of components
# which *might* be installed of which FreeBSD Update should figure out
# which actually are installed and upgrade those (StrictComponents no)?
# StrictComponents no
....
When this option is set to `yes`, `freebsd-update` will assume that the `Components` list is complete and will not attempt to make changes outside of the list.
Effectively, `freebsd-update` will attempt to update every file which belongs to the `Components` list.
[[freebsdupdate-security-patches]]
=== Applying Security Patches
The process of applying FreeBSD security patches has been simplified, allowing an administrator to keep a system fully patched using `freebsd-update`.
More information about FreeBSD security advisories can be found in crossref:security[security-advisories,"FreeBSD Security Advisories"].
FreeBSD security patches may be downloaded and installed using the following commands.
The first command will determine if any outstanding patches are available, and if so, will list the files that will be modified if the patches are applied.
The second command will apply the patches.
[source,shell]
....
# freebsd-update fetch
# freebsd-update install
....
If the update applies any kernel patches, the system will need a reboot in order to boot into the patched kernel.
If the patch was applied to any running binaries, the affected applications should be restarted so that the patched version of the binary is used.
[NOTE]
====
Usually, the user needs to be prepared to reboot the system.
To determine whether a kernel update requires a reboot, compare the output of `freebsd-version -k` and `uname -r`; if they differ, a reboot is required.
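For example, output like the following, where the version numbers are only illustrative, shows an installed kernel that is newer than the running one, so a reboot is needed:
[source,shell]
....
# freebsd-version -k
13.0-RELEASE-p4
# uname -r
13.0-RELEASE-p3
....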
====
The system can be configured to automatically check for updates once every day by adding this entry to [.filename]#/etc/crontab#:
[.programlisting]
....
@daily root freebsd-update cron
....
If patches exist, they will automatically be downloaded but will not be applied.
The `root` user will be sent an email so that the patches may be reviewed and manually installed with `freebsd-update install`.
If anything goes wrong, `freebsd-update` has the ability to roll back the last set of changes with the following command:
[source,shell]
....
# freebsd-update rollback
Uninstalling updates... done.
....
Again, the system should be restarted if the kernel or any kernel modules were modified and any affected binaries should be restarted.
Only the [.filename]#GENERIC# kernel can be automatically updated by `freebsd-update`.
If a custom kernel is installed, it will have to be rebuilt and reinstalled after `freebsd-update` finishes installing the updates.
The default kernel name is _GENERIC_.
The man:uname[1] command may be used to verify its installation.
[NOTE]
====
Always keep a copy of the [.filename]#GENERIC# kernel in [.filename]#/boot/GENERIC#.
It will be helpful in diagnosing a variety of problems and in performing version upgrades.
Refer to <<freebsd-update-custom-kernel-9x>> for instructions on how to get a copy of the [.filename]#GENERIC# kernel.
====
Unless the default configuration in [.filename]#/etc/freebsd-update.conf# has been changed,
`freebsd-update` will install the updated kernel sources along with the rest of the updates.
Rebuilding and reinstalling a new custom kernel can then be performed in the usual way.
The updates distributed by `freebsd-update` do not always involve the kernel.
It is not necessary to rebuild a custom kernel if the kernel sources have not been modified by `freebsd-update install`.
However, `freebsd-update` will always update [.filename]#/usr/src/sys/conf/newvers.sh#.
The current patch level, as indicated by the `-p` number reported by `uname -r`, is obtained from this file.
Rebuilding a custom kernel, even if nothing else changed, allows `uname` to accurately report the current patch level of the system.
This is particularly helpful when maintaining multiple systems, as it allows for a quick assessment of the updates installed in each one.
[[freebsdupdate-upgrade]]
=== Performing Major and Minor Version Upgrades
Upgrades from one minor version of FreeBSD to another, like from FreeBSD 9.0 to FreeBSD 9.1, are called _minor version_ upgrades.
_Major version_ upgrades occur when FreeBSD is upgraded from one major version to another, like from FreeBSD 9.X to FreeBSD 10.X.
Both types of upgrades can be performed by providing `freebsd-update` with a release version target.
[NOTE]
====
If the system is running a custom kernel, make sure that a copy of the [.filename]#GENERIC# kernel exists in [.filename]#/boot/GENERIC# before starting the upgrade.
Refer to <<freebsd-update-custom-kernel-9x>> for instructions on how to get a copy of the [.filename]#GENERIC# kernel.
====
The following command, when run on a FreeBSD 9.0 system, will upgrade it to FreeBSD 9.1:
[source,shell]
....
# freebsd-update -r 9.1-RELEASE upgrade
....
After the command has been received, `freebsd-update` will evaluate the configuration file and current system in an attempt to gather the information necessary to perform the upgrade.
A screen listing will display which components have and have not been detected.
For example:
[source,shell]
....
Looking up update.FreeBSD.org mirrors... 1 mirrors found.
Fetching metadata signature for 9.0-RELEASE from update1.FreeBSD.org... done.
Fetching metadata index... done.
Inspecting system... done.
The following components of FreeBSD seem to be installed:
kernel/smp src/base src/bin src/contrib src/crypto src/etc src/games
src/gnu src/include src/krb5 src/lib src/libexec src/release src/rescue
src/sbin src/secure src/share src/sys src/tools src/ubin src/usbin
world/base world/info world/lib32 world/manpages
The following components of FreeBSD do not seem to be installed:
kernel/generic world/catpages world/dict world/doc world/games
world/proflibs
Does this look reasonable (y/n)? y
....
At this point, `freebsd-update` will attempt to download all files required for the upgrade.
In some cases, the user may be prompted with questions regarding what to install or how to proceed.
When using a custom kernel, the above step will produce a warning similar to the following:
[source,shell]
....
WARNING: This system is running a "MYKERNEL" kernel, which is not a
kernel configuration distributed as part of FreeBSD 9.0-RELEASE.
This kernel will not be updated: you MUST update the kernel manually
before running "/usr/sbin/freebsd-update install"
....
This warning may be safely ignored at this point.
The updated [.filename]#GENERIC# kernel will be used as an intermediate step in the upgrade process.
Once all the patches have been downloaded to the local system, they will be applied.
This process may take a while, depending on the speed and workload of the machine.
Configuration files will then be merged.
The merging process requires some user intervention as a file may be merged or an editor may appear on screen for a manual merge.
The results of every successful merge will be shown to the user as the process continues.
A failed or ignored merge will cause the process to abort.
Users may wish to make a backup of [.filename]#/etc# and manually merge important files,
such as [.filename]#master.passwd# or [.filename]#group# at a later time.
[NOTE]
====
The system is not being altered yet as all patching and merging is happening in another directory.
Once all patches have been applied successfully,
all configuration files have been merged and it seems the process will go smoothly,
the changes can be committed to disk by the user using the following command:
[source,shell]
....
# freebsd-update install
....
====
The kernel and kernel modules will be patched first.
If the system is running with a custom kernel,
use man:nextboot[8] to set the kernel for the next boot to the updated [.filename]#/boot/GENERIC#:
[source,shell]
....
# nextboot -k GENERIC
....
[WARNING]
====
Before rebooting with the [.filename]#GENERIC# kernel,
make sure it contains all the drivers required for the system to boot properly and connect to the network,
if the machine being updated is accessed remotely.
In particular, if the running custom kernel contains built-in functionality usually provided by kernel modules, make sure to temporarily load these modules into the [.filename]#GENERIC# kernel using the [.filename]#/boot/loader.conf# facility.
It is recommended to disable non-essential services as well as any disk and network mounts until the upgrade process is complete.
====
The machine should now be restarted with the updated kernel:
[source,shell]
....
# shutdown -r now
....
Once the system has come back online, restart `freebsd-update` using the following command.
Since the state of the process has been saved, `freebsd-update` will not start from the beginning,
but will instead move on to the next phase and remove all old shared libraries and object files.
[source,shell]
....
# freebsd-update install
....
[NOTE]
====
Depending upon whether any library version numbers were bumped, there may only be two install phases instead of three.
====
The upgrade is now complete.
If this was a major version upgrade, reinstall all ports and packages as described in <<freebsdupdate-portsrebuild>>.
[[freebsd-update-custom-kernel-9x]]
==== Custom Kernels with FreeBSD 9.X and Later
Before using `freebsd-update`, ensure that a copy of the [.filename]#GENERIC# kernel exists in [.filename]#/boot/GENERIC#.
If a custom kernel has only been built once, the kernel in [.filename]#/boot/kernel.old# is the `GENERIC` kernel.
Simply rename this directory to [.filename]#/boot/GENERIC#.
If a custom kernel has been built more than once or if it is unknown how many times the custom kernel has been built,
obtain a copy of the `GENERIC` kernel that matches the current version of the operating system.
If physical access to the system is available, a copy of the `GENERIC` kernel can be installed from the installation media:
[source,shell]
....
# mount /cdrom
# cd /cdrom/usr/freebsd-dist
# tar -C/ -xvf kernel.txz boot/kernel/kernel
....
Alternately, the `GENERIC` kernel may be rebuilt and installed from source:
[source,shell]
....
# cd /usr/src
# make kernel __MAKE_CONF=/dev/null SRCCONF=/dev/null
....
For this kernel to be identified as the `GENERIC` kernel by `freebsd-update`,
the [.filename]#GENERIC# configuration file must not have been modified in any way.
It is also suggested that the kernel is built without any other special options.
Rebooting into the [.filename]#GENERIC# kernel is not required as `freebsd-update` only needs [.filename]#/boot/GENERIC# to exist.
[[freebsdupdate-portsrebuild]]
==== Upgrading Packages After a Major Version Upgrade
Generally, installed applications will continue to work without problems after minor version upgrades.
Major versions use different Application Binary Interfaces (ABIs), which will break most third-party applications.
After a major version upgrade, all installed packages and ports need to be upgraded.
Packages can be upgraded using `pkg upgrade`.
To upgrade installed ports, use a utility such as package:ports-mgmt/portmaster[].
A forced upgrade of all installed packages will replace the packages with fresh versions from the repository even if the version number has not increased.
This is required because of the ABI version change when upgrading between major versions of FreeBSD.
The forced upgrade can be accomplished by performing:
[source,shell]
....
# pkg-static upgrade -f
....
A rebuild of all installed applications can be accomplished with this command:
[source,shell]
....
# portmaster -af
....
This command will display the configuration screens for each application that has configurable options and wait for the user to interact with those screens.
To prevent this behavior, and use only the default options, include `-G` in the above command.
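For example, to rebuild all installed applications using only the default options for each port:
[source,shell]
....
# portmaster -afG
....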
Once the software upgrades are complete,
finish the upgrade process with a final call to `freebsd-update` in order to tie up all the loose ends in the upgrade process:
[source,shell]
....
# freebsd-update install
....
If the [.filename]#GENERIC# kernel was temporarily used,
this is the time to build and install a new custom kernel using the instructions in crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel].
Reboot the machine into the new FreeBSD version.
The upgrade process is now complete.
[[freebsdupdate-system-comparison]]
=== System State Comparison
The state of the installed FreeBSD version can be compared against a known good copy using `freebsd-update IDS`.
This command evaluates the current version of system utilities, libraries, and configuration files and can be used as a built-in Intrusion Detection System (IDS).
[WARNING]
====
This command is not a replacement for a real IDS such as package:security/snort[].
Since `freebsd-update` stores its data on disk, it is possible for that data to be tampered with.
While this possibility may be reduced using `kern.securelevel` and by storing the `freebsd-update` data on a read-only file system when not in use,
a better solution would be to compare the system against a secure disk, such as a DVD or securely stored external USB disk device.
An alternative method for providing IDS functionality using a built-in utility is described in crossref:security[security-ids,"Binary Verification"].
====
To begin the comparison, specify the output file to save the results to:
[source,shell]
....
# freebsd-update IDS >> outfile.ids
....
The system will now be inspected and a lengthy listing of files, along with the SHA256 hash values for both the known value in the release and the current installation, will be sent to the specified output file.
The entries in the listing are extremely long, but the output format may be easily parsed.
For instance, to obtain a list of all files which differ from those in the release, issue the following command:
[source,shell]
....
# cat outfile.ids | awk '{ print $1 }' | more
/etc/master.passwd
/etc/motd
/etc/passwd
/etc/pf.conf
....
This sample output has been truncated as many more files exist.
Some files have natural modifications.
For example, [.filename]#/etc/passwd# will be modified if users have been added to the system.
Kernel modules may differ as `freebsd-update` may have updated them.
To exclude specific files or directories, add them to the `IDSIgnorePaths` option in [.filename]#/etc/freebsd-update.conf#.
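As a sketch, an entry in [.filename]#/etc/freebsd-update.conf# might look like the following, where the listed paths are only examples of files expected to change during normal operation:
[.programlisting]
....
# Paths to ignore when comparing against a known good copy.
IDSIgnorePaths /usr/share/man /var/db/locate.database
....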
[[updating-upgrading-documentation]]
== Updating the Documentation Set
Documentation is an integral part of the FreeBSD operating system.
While an up-to-date version of the FreeBSD documentation is always available on the FreeBSD web site (link:https://docs.FreeBSD.org[Documentation Portal]), it can be handy to have an up-to-date, local copy of the FreeBSD website, handbooks, FAQ, and articles.
This section describes how to use either source or the FreeBSD Ports Collection to keep a local copy of the FreeBSD documentation up-to-date.
For information on editing and submitting corrections to the documentation,
refer to the FreeBSD Documentation Project Primer for New Contributors (link:{fdp-primer}[FreeBSD Documentation Project Primer for New Contributors]).
[[updating-installed-documentation]]
=== Updating Documentation from Source
Rebuilding the FreeBSD documentation from source requires a collection of tools which are not part of the FreeBSD base system.
The required tools can be installed following link:{fdp-primer}#overview-quick-start[these steps] from the FreeBSD Documentation Project Primer.
Once installed, use `git` to fetch a clean copy of the documentation source:
[source,shell]
....
# git clone https://git.FreeBSD.org/doc.git /usr/doc
....
The initial download of the documentation sources may take a while.
Let it run until it completes.
Future updates of the documentation sources may be fetched by running:
[source,shell]
....
# git pull
....
Once an up-to-date snapshot of the documentation sources has been fetched to [.filename]#/usr/doc#,
everything is ready for an update of the installed documentation.
A full update may be performed by typing:
[source,shell]
....
# cd /usr/doc
# make
....
[[current-stable]]
== Tracking a Development Branch
FreeBSD has two development branches: FreeBSD-CURRENT and FreeBSD-STABLE.
This section provides an explanation of each branch and its intended audience, as well as how to keep a system up-to-date with each respective branch.
[[current]]
=== Using FreeBSD-CURRENT
FreeBSD-CURRENT is the "bleeding edge" of FreeBSD development and FreeBSD-CURRENT users are expected to have a high degree of technical skill.
Less technical users who wish to track a development branch should track FreeBSD-STABLE instead.
FreeBSD-CURRENT is the very latest source code for FreeBSD and includes works in progress, experimental changes, and transitional mechanisms that might or might not be present in the next official release.
While many FreeBSD developers compile the FreeBSD-CURRENT source code daily, there are short periods of time when the source may not be buildable.
These problems are resolved as quickly as possible, but whether or not FreeBSD-CURRENT brings disaster or new functionality can be a matter of when the source code was synced.
FreeBSD-CURRENT is made available for three primary interest groups:
. Members of the FreeBSD community who are actively working on some part of the source tree.
. Members of the FreeBSD community who are active testers. They are willing to spend time solving problems, making topical suggestions on changes and the general direction of FreeBSD, and submitting patches.
. Users who wish to keep an eye on things, use the current source for reference purposes, or make the occasional comment or code contribution.
FreeBSD-CURRENT should _not_ be considered a fast-track to getting new features before the next release as pre-release features are not yet fully tested and most likely contain bugs.
It is not a quick way of getting bug fixes as any given commit is just as likely to introduce new bugs as to fix existing ones.
FreeBSD-CURRENT is not in any way "officially supported".
To track FreeBSD-CURRENT:
. Join the {freebsd-current} and the {dev-commits-src-main} lists. This is _essential_ in order to see the comments that people are making about the current state of the system and to receive important bulletins about the current state of FreeBSD-CURRENT.
+
The {dev-commits-src-main} list records the commit log entry for each change as it is made, along with any pertinent information on possible side effects.
+
To join these lists, go to {mailman-lists}, click on the list to subscribe to, and follow the instructions.
In order to track changes to the whole source tree, not just the changes to FreeBSD-CURRENT, subscribe to the {dev-commits-src-all}.
. Synchronize with the FreeBSD-CURRENT sources. Typically, `git` is used to check out the -CURRENT code from the `main` branch of the FreeBSD Git repository (see crossref:mirrors[git,“Using Git”] for details).
. Due to the size of the repository, some users choose to only synchronize the sections of source that interest them or which they are contributing patches to. However, users that plan to compile the operating system from source must download _all_ of FreeBSD-CURRENT, not just selected portions.
+
Before compiling FreeBSD-CURRENT, read [.filename]#/usr/src/Makefile# very carefully and follow the instructions in <<makeworld>>.
Read the {freebsd-current} and [.filename]#/usr/src/UPDATING# to stay up-to-date on other bootstrapping procedures that sometimes become necessary on the road to the next release.
. Be active! FreeBSD-CURRENT users are encouraged to submit their suggestions for enhancements or bug fixes. Suggestions with accompanying code are always welcome.
[[stable]]
=== Using FreeBSD-STABLE
FreeBSD-STABLE is the development branch from which major releases are made.
Changes go into this branch at a slower pace and with the general assumption that they have first been tested in FreeBSD-CURRENT.
This is _still_ a development branch and, at any given time, the sources for FreeBSD-STABLE may or may not be suitable for general use.
It is simply another engineering development track, not a resource for end-users.
Users who do not have the resources to perform testing should instead run the most recent release of FreeBSD.
Those interested in tracking or contributing to the FreeBSD development process, especially as it relates to the next release of FreeBSD, should consider following FreeBSD-STABLE.
While the FreeBSD-STABLE branch should compile and run at all times, this cannot be guaranteed.
Since more people run FreeBSD-STABLE than FreeBSD-CURRENT, it is inevitable that bugs and corner cases will sometimes be found in FreeBSD-STABLE that were not apparent in FreeBSD-CURRENT.
For this reason, one should not blindly track FreeBSD-STABLE.
It is particularly important _not_ to update any production servers to FreeBSD-STABLE without thoroughly testing the code in a development or testing environment.
To track FreeBSD-STABLE:
. Join the {freebsd-stable} in order to stay informed of build dependencies that may appear in FreeBSD-STABLE or any other issues requiring special attention. Developers will also make announcements in this mailing list when they are contemplating some controversial fix or update, giving the users a chance to respond if they have any issues to raise concerning the proposed change.
+
Join the relevant git list for the branch being tracked.
For example, users tracking the {betarel-current-major}-STABLE branch should join the {dev-commits-src-branches}.
This list records the commit log entry for each change as it is made, along with any pertinent information on possible side effects.
+
To join these lists, go to {mailman-lists}, click on the list to subscribe to, and follow the instructions.
In order to track changes for the whole source tree, subscribe to {dev-commits-src-all}.
. To install a new FreeBSD-STABLE system, install the most recent FreeBSD-STABLE release from the crossref:mirrors[mirrors,FreeBSD mirror sites] or use a monthly snapshot built from FreeBSD-STABLE. Refer to link:https://www.FreeBSD.org/snapshots/[www.freebsd.org/snapshots] for more information about snapshots.
+
To compile or upgrade an existing FreeBSD system to FreeBSD-STABLE, use `git` to check out the source for the desired branch, as shown in the sketch after this list. Branch names, such as `stable/9`, are listed at link:https://www.FreeBSD.org/releng/[www.freebsd.org/releng].
. Before compiling or upgrading to FreeBSD-STABLE, read [.filename]#/usr/src/Makefile# carefully and follow the instructions in <<makeworld>>. Read the {freebsd-stable} and [.filename]#/usr/src/UPDATING# to keep up-to-date on other bootstrapping procedures that sometimes become necessary on the road to the next release.
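For example, to check out the `stable/9` sources mentioned above into [.filename]#/usr/src#, one might run the following, substituting the desired branch name:
[source,shell]
....
# git clone --branch stable/9 https://git.FreeBSD.org/src.git /usr/src
....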
[[makeworld]]
== Updating FreeBSD from Source
Updating FreeBSD by compiling from source offers several advantages over binary updates.
Code can be built with options to take advantage of specific hardware.
Parts of the base system can be built with non-default settings, or left out entirely where they are not needed or desired.
The build process takes longer to update a system than just installing binary updates, but allows complete customization to produce a tailored version of FreeBSD.
[[updating-src-quick-start]]
=== Quick Start
This is a quick reference for the typical steps used to update FreeBSD by building from source. Later sections describe the process in more detail.
[.procedure]
====
* Update and Build
+
[source,shell]
....
# git -C /usr/src pull <.>
check /usr/src/UPDATING <.>
# cd /usr/src <.>
# make -j4 buildworld <.>
# make -j4 kernel <.>
# shutdown -r now <.>
# etcupdate -p <.>
# cd /usr/src <.>
# make installworld <.>
# etcupdate -B <.>
# shutdown -r now <.>
....
<.> Get the latest version of the source. See <<updating-src-obtaining-src>> for more information on obtaining and updating source.
<.> Check [.filename]#/usr/src/UPDATING# for any manual steps required before or after building from source.
<.> Go to the source directory.
<.> Compile the world, everything except the kernel.
<.> Compile and install the kernel. This is equivalent to `make buildkernel installkernel`.
<.> Reboot the system to the new kernel.
<.> Update and merge configuration files in [.filename]#/etc/# required before installworld.
<.> Go to the source directory.
<.> Install the world.
<.> Update and merge configuration files in [.filename]#/etc/#.
<.> Restart the system to use the newly-built world and kernel.
====
[[updating-src-preparing]]
=== Preparing for a Source Update
Read [.filename]#/usr/src/UPDATING#. Any manual steps that must be performed before or after an update are described in this file.
[[updating-src-obtaining-src]]
=== Updating the Source
FreeBSD source code is located in [.filename]#/usr/src/#.
The preferred method of updating this source is through the Git version control system.
Verify that the source code is under version control:
[source,shell]
....
# cd /usr/src
# git remote -v
origin https://git.freebsd.org/src.git (fetch)
origin https://git.freebsd.org/src.git (push)
....
This indicates that [.filename]#/usr/src/# is under version control and can be updated with man:git[1]:
[[synching]]
[source,shell]
....
# git -C /usr/src pull
....
The update process can take some time if the directory has not been updated recently.
After it finishes, the source code is up to date and the build process described in the next section can begin.
[NOTE]
====
*Obtaining the Source:* +
If the output says `fatal: not a git repository`, the files there are missing or were installed with a different method.
A new checkout of the source is required.
[[updating-src-obtaining-src-repopath]]
.FreeBSD Versions and Repository Branches
[cols="10%,10%,80%", options="header"]
|===
| uname -r Output
| Repository Path
| Description
|`_X.Y_-RELEASE`
|`releng/_X.Y_`
|The Release version plus only critical security and bug fix patches. This branch is recommended for most users.
|`_X.Y_-STABLE`
|`stable/_X_`
|
The Release version plus all additional development on that branch. _STABLE_ refers to the Application Binary Interface (ABI) not changing, so software compiled for earlier versions still runs. For example, software compiled to run on FreeBSD 10.1 will still run on FreeBSD 10-STABLE compiled later.
STABLE branches occasionally have bugs or incompatibilities which might affect users, although these are typically fixed quickly.
|`_X_-CURRENT`
|`main`
|The latest unreleased development version of FreeBSD. The CURRENT branch can have major bugs or incompatibilities and is recommended only for advanced users.
|===
Determine which version of FreeBSD is being used with man:uname[1]:
[source,shell]
....
# uname -r
10.3-RELEASE
....
Based on <<updating-src-obtaining-src-repopath>>, the source used to update `10.3-RELEASE` has a repository path of `releng/10.3`. That path is used when checking out the source:
[source,shell]
....
# mv /usr/src /usr/src.bak <.>
# git clone --branch releng/10.3 https://git.FreeBSD.org/src.git /usr/src <.>
....
<.> Move the old directory out of the way. If there are no local modifications in this directory, it can be deleted.
<.> The path from <<updating-src-obtaining-src-repopath>> is added to the repository URL. The third parameter is the destination directory for the source code on the local system.
====
[[updating-src-building]]
=== Building from Source
The _world_, or all of the operating system except the kernel, is compiled.
This is done first to provide up-to-date tools to build the kernel. Then the kernel itself is built:
[source,shell]
....
# cd /usr/src
# make buildworld
# make buildkernel
....
The compiled code is written to [.filename]#/usr/obj#.
These are the basic steps. Additional options to control the build are described below.
[[updating-src-building-clean-build]]
==== Performing a Clean Build
Some versions of the FreeBSD build system leave previously-compiled code in the temporary object directory, [.filename]#/usr/obj#.
This can speed up later builds by avoiding recompiling code that has not changed.
To force a clean rebuild of everything, use `cleanworld` before starting a build:
[source,shell]
....
# make cleanworld
....
[[updating-src-building-jobs]]
==== Setting the Number of Jobs
Increasing the number of build jobs on multi-core processors can improve build speed.
Determine the number of cores with `sysctl hw.ncpu`.
Processors vary, as do the build systems used with different versions of FreeBSD, so testing is the only sure method to tell how a different number of jobs affects the build speed.
For a starting point, consider values between half and double the number of cores.
The number of jobs is specified with `-j`.
[[updating-src-building-jobs-example]]
.Increasing the Number of Build Jobs
[example]
====
Building the world and kernel with four jobs:
[source,shell]
....
# make -j4 buildworld buildkernel
....
====
[[updating-src-building-only-kernel]]
==== Building Only the Kernel
A `buildworld` must be completed if the source code has changed.
After that, a `buildkernel` to build a kernel can be run at any time.
To build just the kernel:
[source,shell]
....
# cd /usr/src
# make buildkernel
....
[[updating-src-building-custom-kernel]]
==== Building a Custom Kernel
The standard FreeBSD kernel is based on a _kernel config file_ called [.filename]#GENERIC#.
The [.filename]#GENERIC# kernel includes the most commonly-needed device drivers and options.
Sometimes it is useful or necessary to build a custom kernel, adding or removing device drivers or options to fit a specific need.
For example, someone developing a small embedded computer with severely limited RAM could remove unneeded device drivers or options to make the kernel slightly smaller.
Kernel config files are located in [.filename]#/usr/src/sys/arch/conf/#, where _arch_ is the output from `uname -m`.
On most computers, that is `amd64`, giving a config file directory of [.filename]#/usr/src/sys/amd64/conf/#.
[TIP]
====
[.filename]#/usr/src# can be deleted or recreated, so it is preferable to keep custom kernel config files in a separate directory, like [.filename]#/root#.
Link the kernel config file into the [.filename]#conf# directory.
If that directory is deleted or overwritten, the kernel config can be re-linked into the new one.
====
A custom config file can be created by copying the [.filename]#GENERIC# config file.
In this example, the new custom kernel is for a storage server, so is named [.filename]#STORAGESERVER#:
[source,shell]
....
# cp /usr/src/sys/amd64/conf/GENERIC /root/STORAGESERVER
# cd /usr/src/sys/amd64/conf
# ln -s /root/STORAGESERVER .
....
[.filename]#/root/STORAGESERVER# is then edited, adding or removing devices or options as shown in man:config[5].
The custom kernel is built by setting `KERNCONF` to the kernel config file on the command line:
[source,shell]
....
# make buildkernel KERNCONF=STORAGESERVER
....
[[updating-src-installing]]
=== Installing the Compiled Code
After the `buildworld` and `buildkernel` steps have been completed, the new kernel and world are installed:
[source,shell]
....
# cd /usr/src
# make installkernel
# shutdown -r now
# cd /usr/src
# make installworld
# shutdown -r now
....
If a custom kernel was built, `KERNCONF` must also be set to use the new custom kernel:
[source,shell]
....
# cd /usr/src
# make installkernel KERNCONF=STORAGESERVER
# shutdown -r now
# cd /usr/src
# make installworld
# shutdown -r now
....
[[updating-src-completing]]
=== Completing the Update
A few final tasks complete the update.
Any modified configuration files are merged with the new versions, outdated libraries are located and removed, then the system is restarted.
[[updating-src-completing-merge-etcupdate]]
==== Merging Configuration Files with man:etcupdate[8]
man:etcupdate[8] is a tool for managing updates to files that are not updated as part of an installworld such as files located in [.filename]#/etc/#.
It manages updates by doing a three-way merge of changes made to these files against the local versions.
It is also designed to minimize the amount of user intervention, in contrast to man:mergemaster[8]'s interactive prompts.
[NOTE]
====
In general, man:etcupdate[8] does not need any specific arguments for its job.
However, there is a handy intermediate command for sanity-checking what will be done the first time man:etcupdate[8] is used:
[source,shell]
....
# etcupdate diff
....
This command allows the user to audit configuration changes.
====
If man:etcupdate[8] is not able to merge a file automatically, the merge conflicts can be resolved with manual interaction by issuing:
[source,shell]
....
# etcupdate resolve
....
[WARNING]
====
When switching from man:mergemaster[8] to man:etcupdate[8], the first run might merge changes incorrectly, generating spurious conflicts.
To prevent this, perform the following steps *before* updating sources and building the new world:
[source,shell]
....
# etcupdate bootstrap <.>
# etcupdate diff <.>
....
<.> Bootstrap the database of stock [.filename]#/etc# files, for more information see man:etcupdate[8].
<.> Check the diff after bootstrapping. Trim any local changes that are no longer needed to reduce the chance of conflicts in future updates.
====
[[updating-src-completing-merge-mergemaster]]
==== Merging Configuration Files with man:mergemaster[8]
man:mergemaster[8] provides a way to merge changes that have been made to system configuration files with new versions of those files.
man:mergemaster[8] is an alternative to the preferred man:etcupdate[8].
With `-Ui`, man:mergemaster[8] automatically updates files that have not been user-modified and installs new files that are not already present:
[source,shell]
....
# mergemaster -Ui
....
If a file must be manually merged, an interactive display allows the user to choose which portions of the files are kept.
See man:mergemaster[8] for more information.
[[updating-src-completing-check-old]]
==== Checking for Outdated Files and Libraries
Some obsolete files or directories can remain after an update.
These files can be located:
[source,shell]
....
# make check-old
....
and deleted:
[source,shell]
....
# make delete-old
....
Some obsolete libraries can also remain.
These can be detected with:
[source,shell]
....
# make check-old-libs
....
and deleted with
[source,shell]
....
# make delete-old-libs
....
Programs which are still using those old libraries will stop working when the libraries are deleted.
These programs must be rebuilt or replaced after deleting the old libraries.
[TIP]
====
When all the old files or directories are known to be safe to delete,
pressing kbd:[y] and kbd:[Enter] to delete each file can be avoided by setting `BATCH_DELETE_OLD_FILES` in the command.
For example:
[source,shell]
....
# make BATCH_DELETE_OLD_FILES=yes delete-old-libs
....
====
[[updating-src-completing-restart]]
==== Restarting After the Update
The last step after updating is to restart the computer so all the changes take effect:
[source,shell]
....
# shutdown -r now
....
[[small-lan]]
== Tracking for Multiple Machines
When multiple machines need to track the same source tree,
it is a waste of disk space, network bandwidth,
and CPU cycles to have each system download the sources and rebuild everything.
The solution is to have one machine do most of the work, while the rest of the machines mount that work via NFS.
This section outlines a method of doing so.
For more information about using NFS, refer to crossref:network-servers[network-nfs,"Network File System (NFS)"].
First, identify a set of machines which will run the same set of binaries, known as a _build set_.
Each machine can have a custom kernel, but will run the same userland binaries.
From that set, choose a machine to be the _build machine_ that the world and kernel are built on.
Ideally, this is a fast machine that has sufficient spare CPU to run `make buildworld` and `make buildkernel`.
Select a machine to be the _test machine_, which will test software updates before they are put into production.
This _must_ be a machine that can afford to be down for an extended period of time.
It can be the build machine, but need not be.
All the machines in this build set need to mount [.filename]#/usr/obj# and [.filename]#/usr/src# from the build machine via NFS.
For multiple build sets, [.filename]#/usr/src# should be on one build machine, and NFS mounted on the rest.
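As a sketch, assuming the build machine is reachable as _buildmachine_, the corresponding [.filename]#/etc/fstab# entries on the other machines might look like this:
[.programlisting]
....
buildmachine:/usr/src   /usr/src   nfs   ro   0   0
buildmachine:/usr/obj   /usr/obj   nfs   ro   0   0
....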
Ensure that [.filename]#/etc/make.conf# and [.filename]#/etc/src.conf# on all the machines in the build set agree with the build machine.
That means that the build machine must build all the parts of the base system that any machine in the build set is going to install.
Also, each machine in the build set should have its kernel name set with `KERNCONF` in [.filename]#/etc/make.conf#,
and the build machine should list all of these kernel names in its `KERNCONF`, listing its own kernel first.
The build machine must have the kernel configuration files for each machine in its [.filename]#/usr/src/sys/arch/conf#.
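As a sketch, using the hypothetical kernel names _BUILDER_, _WORKSTATION_, and _STORAGESERVER_, the relevant [.filename]#/etc/make.conf# lines might look like this:
[.programlisting]
....
# On the build machine, list every kernel in the build set, its own first:
KERNCONF=BUILDER WORKSTATION STORAGESERVER
# On the machine that runs the WORKSTATION kernel:
KERNCONF=WORKSTATION
....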
On the build machine, build the kernel and world as described in <<makeworld>>,
but do not install anything on the build machine.
Instead, install the built kernel on the test machine.
On the test machine, mount [.filename]#/usr/src# and [.filename]#/usr/obj# via NFS.
Then, run `shutdown now` to go to single-user mode in order to install the new kernel and world and run `mergemaster` as usual.
When done, reboot to return to normal multi-user operations.
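A minimal sketch of those steps on the test machine, assuming the NFS mounts described above are in place and `KERNCONF` is set in [.filename]#/etc/make.conf#:
[source,shell]
....
# shutdown now
# cd /usr/src
# make installkernel
# make installworld
# mergemaster -Ui
# reboot
....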
After verifying that everything on the test machine is working properly,
use the same procedure to install the new software on each of the other machines in the build set.
The same methodology can be used for the ports tree.
The first step is to share [.filename]#/usr/ports# via NFS to all the machines in the build set.
To configure [.filename]#/etc/make.conf# to share distfiles,
set `DISTDIR` to a common shared directory that is writable by whichever user `root` is mapped to by the NFS mount.
Each machine should set `WRKDIRPREFIX` to a local build directory, if ports are to be built locally.
Alternately, if the build system is to build and distribute packages to the machines in the build set,
set `PACKAGES` on the build system to a directory similar to `DISTDIR`.
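As a sketch, the relevant [.filename]#/etc/make.conf# entries might look like the following, where the directory paths are only examples:
[.programlisting]
....
# Shared distfiles on the NFS-mounted ports tree:
DISTDIR=/usr/ports/distfiles
# Local build directory on each machine that builds ports itself:
WRKDIRPREFIX=/var/tmp/ports
# On the build system only, if packages are distributed to the other machines:
PACKAGES=/usr/ports/packages
....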
diff --git a/documentation/content/en/books/handbook/desktop/_index.adoc b/documentation/content/en/books/handbook/desktop/_index.adoc
index 98532722bd..68245d943d 100644
--- a/documentation/content/en/books/handbook/desktop/_index.adoc
+++ b/documentation/content/en/books/handbook/desktop/_index.adoc
@@ -1,570 +1,571 @@
---
title: Chapter 6. Desktop Applications
part: Part II. Common Tasks
prev: books/handbook/partii
next: books/handbook/multimedia
+description: This chapter demonstrates how to install numerous desktop applications, including web browsers, productivity software, document viewers, and financial software
---
[[desktop]]
= Desktop Applications
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 6
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../../images/books/handbook/desktop/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/books/handbook/desktop/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/books/handbook/desktop/
endif::[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[desktop-synopsis]]
== Synopsis
While FreeBSD is popular as a server for its performance and stability, it is also suited for day-to-day use as a desktop. With over {numports} applications available as FreeBSD packages or ports, it is easy to build a customized desktop that runs a wide variety of desktop applications. This chapter demonstrates how to install numerous desktop applications, including web browsers, productivity software, document viewers, and financial software.
[NOTE]
====
Users who prefer to install a pre-built desktop version of FreeBSD rather than configuring one from scratch should refer to https://ghostbsd.org[GhostBSD], https://www.midnightbsd.org[MidnightBSD], or https://www.nomadbsd.org[NomadBSD].
====
Readers of this chapter should know how to:
* Install additional software using packages or ports as described in crossref:ports[ports,Installing Applications: Packages and Ports].
* Install X and a window manager as described in crossref:x11[x11,The X Window System].
For information on how to configure a multimedia environment, refer to crossref:multimedia[multimedia,Multimedia].
[[desktop-browsers]]
== Browsers
FreeBSD does not come with a pre-installed web browser. Instead, the https://www.FreeBSD.org/ports/[www] category of the Ports Collection contains many browsers which can be installed as a package or compiled from the Ports Collection.
The KDE and GNOME desktop environments include their own HTML browser. Refer to crossref:x11[x11-wm,“Desktop Environments”] for more information on how to set up these complete desktops.
Some lightweight browsers include package:www/dillo2[], package:www/links[], and package:www/w3m[].
This section demonstrates how to install the following popular web browsers and indicates if the application is resource-heavy, takes time to compile from ports, or has any major dependencies.
[.informaltable]
[cols="1,1,1,1", frame="none", options="header"]
|===
| Application Name
| Resources Needed
| Installation from Ports
| Notes
|Firefox
|medium
|heavy
|FreeBSD, Linux(R), and localized versions are available
|Konqueror
|medium
|heavy
|Requires KDE libraries
|Chromium
|medium
|heavy
|Requires Gtk+
|===
=== Firefox
Firefox is an open source browser that features a standards-compliant HTML display engine, tabbed browsing, popup blocking, extensions, improved security, and more. Firefox is based on the Mozilla codebase.
To install the package of the latest release version of Firefox, type:
[source,shell]
....
# pkg install firefox
....
To instead install Firefox Extended Support Release (ESR) version, use:
[source,shell]
....
# pkg install firefox-esr
....
The Ports Collection can instead be used to compile the desired version of Firefox from source code. This example builds package:www/firefox[], where `firefox` can be replaced with the ESR or localized version to install.
[source,shell]
....
# cd /usr/ports/www/firefox
# make install clean
....
=== Konqueror
Konqueror is more than a web browser; it is also a file manager and a multimedia viewer. It supports WebKit as well as its own KHTML rendering engine. WebKit is a rendering engine used by many modern browsers, including Chromium.
Konqueror can be installed as a package by typing:
[source,shell]
....
# pkg install konqueror
....
To install from the Ports Collection:
[source,shell]
....
# cd /usr/ports/x11-fm/konqueror/
# make install clean
....
=== Chromium
Chromium is an open source browser project that aims to build a safer, faster, and more stable web browsing experience. Chromium features tabbed browsing, popup blocking, extensions, and much more. Chromium is the open source project upon which the Google Chrome web browser is based.
Chromium can be installed as a package by typing:
[source,shell]
....
# pkg install chromium
....
Alternatively, Chromium can be compiled from source using the Ports Collection:
[source,shell]
....
# cd /usr/ports/www/chromium
# make install clean
....
[NOTE]
====
The executable for Chromium is [.filename]#/usr/local/bin/chrome#, not [.filename]#/usr/local/bin/chromium#.
====
[[desktop-productivity]]
== Productivity
When it comes to productivity, users often look for an office suite or an easy-to-use word processor. While some <<x11-wm,desktop environments>> like KDE provide an office suite, there is no default productivity package. Several office suites and graphical word processors are available for FreeBSD, regardless of the installed window manager.
This section demonstrates how to install the following popular productivity software and indicates if the application is resource-heavy, takes time to compile from ports, or has any major dependencies.
[.informaltable]
[cols="1,1,1,1", frame="none", options="header"]
|===
| Application Name
| Resources Needed
| Installation from Ports
| Major Dependencies
|Calligra
|light
|heavy
|KDE
|AbiWord
|light
|light
|Gtk+ or GNOME
|The Gimp
|light
|heavy
|Gtk+
|Apache OpenOffice
|heavy
|huge
|JDK(TM) and Mozilla
|LibreOffice
|somewhat heavy
|huge
|Gtk+, or KDE/ GNOME, or JDK(TM)
|===
=== Calligra
The KDE desktop environment includes an office suite which can be installed separately from KDE. Calligra includes standard components that can be found in other office suites. Words is the word processor, Sheets is the spreadsheet program, Stage manages slide presentations, and Karbon is used to draw graphical documents.
In FreeBSD, package:editors/calligra[] can be installed as a package or a port. To install the package:
[source,shell]
....
# pkg install calligra
....
If the package is not available, use the Ports Collection instead:
[source,shell]
....
# cd /usr/ports/editors/calligra
# make install clean
....
=== AbiWord
AbiWord is a free word processing program similar in look and feel to Microsoft(R) Word. It is fast, contains many features, and is user-friendly.
AbiWord can import or export many file formats, including some proprietary ones like Microsoft(R) [.filename]#.rtf#.
To install the AbiWord package:
[source,shell]
....
# pkg install abiword
....
If the package is not available, it can be compiled from the Ports Collection:
[source,shell]
....
# cd /usr/ports/editors/abiword
# make install clean
....
=== The GIMP
For image authoring or picture retouching, The GIMP provides a sophisticated image manipulation program. It can be used as a simple paint program or as a quality photo retouching suite. It supports a large number of plugins and features a scripting interface. The GIMP can read and write a wide range of file formats and supports interfaces with scanners and tablets.
To install the package:
[source,shell]
....
# pkg install gimp
....
Alternately, use the Ports Collection:
[source,shell]
....
# cd /usr/ports/graphics/gimp
# make install clean
....
The graphics category (https://www.FreeBSD.org/ports/graphics/[freebsd.org/ports/graphics/]) of the Ports Collection contains several GIMP-related plugins, help files, and user manuals.
=== Apache OpenOffice
Apache OpenOffice is an open source office suite which is developed under the wing of the Apache Software Foundation's Incubator. It includes all of the applications found in a complete office productivity suite: a word processor, spreadsheet, presentation manager, and drawing program. Its user interface is similar to other office suites, and it can import and export in various popular file formats. It is available in a number of different languages and internationalization has been extended to interfaces, spell checkers, and dictionaries.
The word processor of Apache OpenOffice uses a native XML file format for increased portability and flexibility. The spreadsheet program features a macro language which can be interfaced with external databases. Apache OpenOffice is stable and runs natively on Windows(R), Solaris(TM), Linux(R), FreeBSD, and Mac OS(R) X. More information about Apache OpenOffice can be found at http://openoffice.org/[openoffice.org]. For FreeBSD specific information refer to http://porting.openoffice.org/freebsd/[porting.openoffice.org/freebsd/].
To install the Apache OpenOffice package:
[source,shell]
....
# pkg install apache-openoffice
....
Once the package is installed, type the following command to launch Apache OpenOffice:
[source,shell]
....
% openoffice-X.Y.Z
....
where _X.Y.Z_ is the version number of the installed version of Apache OpenOffice. The first time Apache OpenOffice launches, some questions will be asked and a [.filename]#.openoffice.org# folder will be created in the user's home directory.
If the desired Apache OpenOffice package is not available, compiling the port is still an option. However, this requires a lot of disk space and a fairly long time to compile:
[source,shell]
....
# cd /usr/ports/editors/openoffice-4
# make install clean
....
[NOTE]
====
To build a localized version, replace the previous command with:
[source,shell]
....
# make LOCALIZED_LANG=your_language install clean
....
Replace _your_language_ with the correct language ISO-code. A list of supported language codes is available in [.filename]#files/Makefile.localized#, located in the port's directory.
====
=== LibreOffice
LibreOffice is a free software office suite developed by http://www.documentfoundation.org/[documentfoundation.org]. It is compatible with other major office suites and available on a variety of platforms. It is a rebranded fork of Apache OpenOffice and includes applications found in a complete office productivity suite: a word processor, spreadsheet, presentation manager, drawing program, database management program, and a tool for creating and editing mathematical formulæ. It is available in a number of different languages and internationalization has been extended to interfaces, spell checkers, and dictionaries.
The word processor of LibreOffice uses a native XML file format for increased portability and flexibility. The spreadsheet program features a macro language which can be interfaced with external databases. LibreOffice is stable and runs natively on Windows(R), Linux(R), FreeBSD, and Mac OS(R) X. More information about LibreOffice can be found at http://www.libreoffice.org/[libreoffice.org].
To install the English version of the LibreOffice package:
[source,shell]
....
# pkg install libreoffice
....
The editors category (https://www.FreeBSD.org/ports/editors/[freebsd.org/ports/editors/]) of the Ports Collection contains several localizations for LibreOffice. When installing a localized package, replace `libreoffice` with the name of the localized package.
Once the package is installed, type the following command to run LibreOffice:
[source,shell]
....
% libreoffice
....
During the first launch, some questions will be asked and a [.filename]#.libreoffice# folder will be created in the user's home directory.
If the desired LibreOffice package is not available, compiling the port is still an option. However, this requires a lot of disk space and a fairly long time to compile. This example compiles the English version:
[source,shell]
....
# cd /usr/ports/editors/libreoffice
# make install clean
....
[NOTE]
====
To build a localized version, `cd` into the port directory of the desired language. Supported languages can be found in the editors category (https://www.FreeBSD.org/ports/editors/[freebsd.org/ports/editors/]) of the Ports Collection.
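For instance, if the French localization lives in a port directory such as [.filename]#editors/libreoffice-fr# (the exact directory name may vary), it could be built with:
[source,shell]
....
# cd /usr/ports/editors/libreoffice-fr
# make install clean
....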
====
[[desktop-viewers]]
== Document Viewers
Some new document formats have gained popularity since the advent of UNIX(R) and the viewers they require may not be available in the base system. This section demonstrates how to install the following document viewers:
[.informaltable]
[cols="1,1,1,1", frame="none", options="header"]
|===
| Application Name
| Resources Needed
| Installation from Ports
| Major Dependencies
|Xpdf
|light
|light
|FreeType
|gv
|light
|light
|Xaw3d
|Geeqie
|light
|light
|Gtk+ or GNOME
|ePDFView
|light
|light
|Gtk+
|Okular
|light
|heavy
|KDE
|===
=== Xpdf
For users that prefer a small FreeBSD PDF viewer, Xpdf provides a light-weight and efficient viewer which requires few resources. It uses the standard X fonts and does not require any additional toolkits.
To install the Xpdf package:
[source,shell]
....
# pkg install xpdf
....
If the package is not available, use the Ports Collection:
[source,shell]
....
# cd /usr/ports/graphics/xpdf
# make install clean
....
Once the installation is complete, launch `xpdf` and use the right mouse button to activate the menu.
=== gv
gv is a PostScript(R) and PDF viewer. It is based on ghostview, but has a nicer look as it is based on the Xaw3d widget toolkit. gv has many configurable features, such as orientation, paper size, scale, and anti-aliasing. Almost any operation can be performed with either the keyboard or the mouse.
To install gv as a package:
[source,shell]
....
# pkg install gv
....
If a package is unavailable, use the Ports Collection:
[source,shell]
....
# cd /usr/ports/print/gv
# make install clean
....
=== Geeqie
Geeqie is a fork from the unmaintained GQView project, in an effort to move development forward and integrate the existing patches. Geeqie is an image manager which supports viewing a file with a single click, launching an external editor, and thumbnail previews. It also features a slideshow mode and some basic file operations, making it easy to manage image collections and to find duplicate files. Geeqie supports full screen viewing and internationalization.
To install the Geeqie package:
[source,shell]
....
# pkg install geeqie
....
If the package is not available, use the Ports Collection:
[source,shell]
....
# cd /usr/ports/graphics/geeqie
# make install clean
....
=== ePDFView
ePDFView is a lightweight `PDF` document viewer that only uses the Gtk+ and Poppler libraries. It is still under development, but it already opens most `PDF` files (even encrypted ones), saves copies of documents, and supports printing through CUPS.
To install ePDFView as a package:
[source,shell]
....
# pkg install epdfview
....
If a package is unavailable, use the Ports Collection:
[source,shell]
....
# cd /usr/ports/graphics/epdfview
# make install clean
....
=== Okular
Okular is a universal document viewer based on KPDF for KDE. It can open many document formats, including `PDF`, PostScript(R), DjVu, `CHM`, `XPS`, and ePub.
To install Okular as a package:
[source,shell]
....
# pkg install okular
....
If a package is unavailable, use the Ports Collection:
[source,shell]
....
# cd /usr/ports/graphics/okular
# make install clean
....
[[desktop-finance]]
== Finance
For managing personal finances on a FreeBSD desktop, some powerful and easy-to-use applications can be installed. Some are compatible with widespread file formats, such as the formats used by Quicken and Excel.
This section covers these programs:
[.informaltable]
[cols="1,1,1,1", frame="none", options="header"]
|===
| Application Name
| Resources Needed
| Installation from Ports
| Major Dependencies
|GnuCash
|light
|heavy
|GNOME
|Gnumeric
|light
|heavy
|GNOME
|KMyMoney
|light
|heavy
|KDE
|===
=== GnuCash
GnuCash is part of the GNOME effort to provide user-friendly, yet powerful, applications to end-users. GnuCash can be used to keep track of income and expenses, bank accounts, and stocks. It features an intuitive interface while remaining professional.
GnuCash provides a smart register, a hierarchical system of accounts, and many keyboard accelerators and auto-completion methods. It can split a single transaction into several more detailed pieces. GnuCash can import and merge Quicken QIF files. It also handles most international date and currency formats.
To install the GnuCash package:
[source,shell]
....
# pkg install gnucash
....
If the package is not available, use the Ports Collection:
[source,shell]
....
# cd /usr/ports/finance/gnucash
# make install clean
....
=== Gnumeric
Gnumeric is a spreadsheet program developed by the GNOME community. It features convenient automatic guessing of user input according to the cell format with an autofill system for many sequences. It can import files in a number of popular formats, including Excel, Lotus 1-2-3, and Quattro Pro. It has a large number of built-in functions and allows all of the usual cell formats such as number, currency, date, time, and much more.
To install Gnumeric as a package:
[source,shell]
....
# pkg install gnumeric
....
If the package is not available, use the Ports Collection:
[source,shell]
....
# cd /usr/ports/math/gnumeric
# make install clean
....
=== KMyMoney
KMyMoney is a personal finance application created by the KDE community. KMyMoney aims to provide the important features found in commercial personal finance manager applications. It also highlights ease-of-use and proper double-entry accounting among its features. KMyMoney imports from standard Quicken QIF files, tracks investments, handles multiple currencies, and provides a wealth of reports.
To install KMyMoney as a package:
[source,shell]
....
# pkg install kmymoney-kde4
....
If the package is not available, use the Ports Collection:
[source,shell]
....
# cd /usr/ports/finance/kmymoney-kde4
# make install clean
....
diff --git a/documentation/content/en/books/handbook/disks/_index.adoc b/documentation/content/en/books/handbook/disks/_index.adoc
index 4f22ff2772..956450f588 100644
--- a/documentation/content/en/books/handbook/disks/_index.adoc
+++ b/documentation/content/en/books/handbook/disks/_index.adoc
@@ -1,2237 +1,2238 @@
---
title: Chapter 18. Storage
part: Part III. System Administration
prev: books/handbook/audit
next: books/handbook/geom
+description: This chapter covers the use of disks and storage media in FreeBSD. This includes SCSI and IDE disks, CD and DVD media, memory-backed disks, and USB storage devices.
---
[[disks]]
= Storage
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 18
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../../images/books/handbook/disks/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/books/handbook/disks/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/books/handbook/disks/
endif::[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[disks-synopsis]]
== Synopsis
This chapter covers the use of disks and storage media in FreeBSD. This includes SCSI and IDE disks, CD and DVD media, memory-backed disks, and USB storage devices.
After reading this chapter, you will know:
* How to add additional hard disks to a FreeBSD system.
* How to grow the size of a disk's partition on FreeBSD.
* How to configure FreeBSD to use USB storage devices.
* How to use CD and DVD media on a FreeBSD system.
* How to use the backup programs available under FreeBSD.
* How to set up memory disks.
* What file system snapshots are and how to use them efficiently.
* How to use quotas to limit disk space usage.
* How to encrypt disks and swap to secure them against attackers.
* How to configure a highly available storage network.
Before reading this chapter, you should:
* Know how to crossref:kernelconfig[kernelconfig,configure and install a new FreeBSD kernel].
[[disks-adding]]
== Adding Disks
This section describes how to add a new SATA disk to a machine that currently only has a single drive. First, turn off the computer and install the drive in the computer following the instructions of the computer, controller, and drive manufacturers. Reboot the system and become `root`.
Inspect [.filename]#/var/run/dmesg.boot# to ensure the new disk was found. In this example, the newly added SATA drive will appear as [.filename]#ada1#.
For this example, a single large partition will be created on the new disk. The http://en.wikipedia.org/wiki/GUID_Partition_Table[GPT] partitioning scheme will be used in preference to the older and less versatile MBR scheme.
[NOTE]
====
If the disk to be added is not blank, old partition information can be removed with `gpart delete`. See man:gpart[8] for details.
====
The partition scheme is created, and then a single partition is added. To improve performance on newer disks with larger hardware block sizes, the partition is aligned to one megabyte boundaries:
[source,shell]
....
# gpart create -s GPT ada1
# gpart add -t freebsd-ufs -a 1M ada1
....
Depending on use, several smaller partitions may be desired. See man:gpart[8] for options to create partitions smaller than a whole disk.
The disk partition information can be viewed with `gpart show`:
[source,shell]
....
% gpart show ada1
=> 34 1465146988 ada1 GPT (699G)
34 2014 - free - (1.0M)
2048 1465143296 1 freebsd-ufs (699G)
1465145344 1678 - free - (839K)
....
A file system is created in the new partition on the new disk:
[source,shell]
....
# newfs -U /dev/ada1p1
....
An empty directory is created as a _mountpoint_, a location for mounting the new disk in the original disk's file system:
[source,shell]
....
# mkdir /newdisk
....
Finally, an entry is added to [.filename]#/etc/fstab# so the new disk will be mounted automatically at startup:
[.programlisting]
....
/dev/ada1p1 /newdisk ufs rw 2 2
....
The new disk can be mounted manually, without restarting the system:
[source,shell]
....
# mount /newdisk
....
[[disks-growing]]
== Resizing and Growing Disks
A disk's capacity can increase without any changes to the data already present. This happens commonly with virtual machines, when the virtual disk turns out to be too small and is enlarged. Sometimes a disk image is written to a USB memory stick, but does not use the full capacity. Here we describe how to resize or _grow_ disk contents to take advantage of increased capacity.
Determine the device name of the disk to be resized by inspecting [.filename]#/var/run/dmesg.boot#. In this example, there is only one SATA disk in the system, so the drive will appear as [.filename]#ada0#.
List the partitions on the disk to see the current configuration:
[source,shell]
....
# gpart show ada0
=> 34 83886013 ada0 GPT (48G) [CORRUPT]
34 128 1 freebsd-boot (64k)
162 79691648 2 freebsd-ufs (38G)
79691810 4194236 3 freebsd-swap (2G)
83886046 1 - free - (512B)
....
[NOTE]
====
If the disk was formatted with the http://en.wikipedia.org/wiki/GUID_Partition_Table[GPT] partitioning scheme, it may show as "corrupted" because the GPT backup partition table is no longer at the end of the drive. Fix the backup partition table with `gpart`:
[source,shell]
....
# gpart recover ada0
ada0 recovered
....
====
Now the additional space on the disk is available for use by a new partition, or an existing partition can be expanded:
[source,shell]
....
# gpart show ada0
=> 34 102399933 ada0 GPT (48G)
34 128 1 freebsd-boot (64k)
162 79691648 2 freebsd-ufs (38G)
79691810 4194236 3 freebsd-swap (2G)
83886046 18513921 - free - (8.8G)
....
Partitions can only be resized into contiguous free space. Here, the last partition on the disk is the swap partition, but the second partition is the one that needs to be resized. Since swap partitions only contain temporary data, the swap partition can safely be unmounted, deleted, and then recreated after the second partition has been resized.
Disable the swap partition:
[source,shell]
....
# swapoff /dev/ada0p3
....
Delete the third partition, specified by the `-i` flag, from the disk _ada0_:
[source,shell]
....
# gpart delete -i 3 ada0
ada0p3 deleted
# gpart show ada0
=> 34 102399933 ada0 GPT (48G)
34 128 1 freebsd-boot (64k)
162 79691648 2 freebsd-ufs (38G)
79691810 22708157 - free - (10G)
....
[WARNING]
====
There is risk of data loss when modifying the partition table of a mounted file system. It is best to perform the following steps on an unmounted file system while running from a live CD-ROM or USB device. However, if absolutely necessary, a mounted file system can be resized after disabling GEOM safety features:
[source,shell]
....
# sysctl kern.geom.debugflags=16
....
====
Resize the partition, leaving room to recreate a swap partition of the desired size. The partition to resize is specified with `-i`, and the new desired size with `-s`. Optionally, alignment of the partition is controlled with `-a`. This only modifies the size of the partition. The file system in the partition will be expanded in a separate step.
[source,shell]
....
# gpart resize -i 2 -s 47G -a 4k ada0
ada0p2 resized
# gpart show ada0
=> 34 102399933 ada0 GPT (48G)
34 128 1 freebsd-boot (64k)
162 98566144 2 freebsd-ufs (47G)
98566306 3833661 - free - (1.8G)
....
Recreate the swap partition and activate it. If no size is specified with `-s`, all remaining space is used:
[source,shell]
....
# gpart add -t freebsd-swap -a 4k ada0
ada0p3 added
# gpart show ada0
=> 34 102399933 ada0 GPT (48G)
34 128 1 freebsd-boot (64k)
162 98566144 2 freebsd-ufs (47G)
98566306 3833661 3 freebsd-swap (1.8G)
# swapon /dev/ada0p3
....
Grow the UFS file system to use the new capacity of the resized partition:
[source,shell]
....
# growfs /dev/ada0p2
Device is mounted read-write; resizing will result in temporary write suspension for /.
It's strongly recommended to make a backup before growing the file system.
OK to grow file system on /dev/ada0p2, mounted on /, from 38GB to 47GB? [Yes/No] Yes
super-block backups (for fsck -b #) at:
80781312, 82063552, 83345792, 84628032, 85910272, 87192512, 88474752,
89756992, 91039232, 92321472, 93603712, 94885952, 96168192, 97450432
....
If the file system is ZFS, the resize is triggered by running the `online` subcommand with `-e`:
[source,shell]
....
# zpool online -e zroot /dev/ada0p2
....
Both the partition and the file system on it have now been resized to use the newly-available disk space.
[[usb-disks]]
== USB Storage Devices
Many external storage solutions, such as hard drives, USB thumbdrives, and CD and DVD burners, use the Universal Serial Bus (USB). FreeBSD provides support for USB 1.x, 2.0, and 3.0 devices.
[NOTE]
====
USB 3.0 support is not compatible with some hardware, including Haswell (Lynx Point) chipsets. If FreeBSD boots with a `failed with error 19` message, disable xHCI/USB3 in the system BIOS.
====
Support for USB storage devices is built into the [.filename]#GENERIC# kernel. For a custom kernel, be sure that the following lines are present in the kernel configuration file:
[.programlisting]
....
device scbus # SCSI bus (required for ATA/SCSI)
device da # Direct Access (disks)
device pass # Passthrough device (direct ATA/SCSI access)
device uhci # provides USB 1.x support
device ohci # provides USB 1.x support
device ehci # provides USB 2.0 support
device xhci # provides USB 3.0 support
device usb # USB Bus (required)
device umass # Disks/Mass storage - Requires scbus and da
device cd # needed for CD and DVD burners
....
FreeBSD uses the man:umass[4] driver, which relies on the SCSI subsystem, to access USB storage devices. Since any USB storage device is seen as a SCSI device by the system, do _not_ include `device atapicam` in a custom kernel configuration file if the USB device is a CD or DVD burner.
The rest of this section demonstrates how to verify that a USB storage device is recognized by FreeBSD and how to configure the device so that it can be used.
=== Device Configuration
To test the USB configuration, plug in the USB device. Use `dmesg` to confirm that the drive appears in the system message buffer. It should look something like this:
[source,shell]
....
umass0: <STECH Simple Drive, class 0/0, rev 2.00/1.04, addr 3> on usbus0
umass0: SCSI over Bulk-Only; quirks = 0x0100
umass0:4:0:-1: Attached to scbus4
da0 at umass-sim0 bus 0 scbus4 target 0 lun 0
da0: <STECH Simple Drive 1.04> Fixed Direct Access SCSI-4 device
da0: Serial Number WD-WXE508CAN263
da0: 40.000MB/s transfers
da0: 152627MB (312581808 512 byte sectors: 255H 63S/T 19457C)
da0: quirks=0x2<NO_6_BYTE>
....
The brand, device node ([.filename]#da0#), speed, and size will differ according to the device.
Since the USB device is seen as a SCSI one, `camcontrol` can be used to list the USB storage devices attached to the system:
[source,shell]
....
# camcontrol devlist
<STECH Simple Drive 1.04> at scbus4 target 0 lun 0 (pass3,da0)
....
Alternately, `usbconfig` can be used to list the device. Refer to man:usbconfig[8] for more information about this command.
[source,shell]
....
# usbconfig
ugen0.3: <Simple Drive STECH> at usbus0, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=ON (2mA)
....
If the device has not been formatted, refer to <<disks-adding>> for instructions on how to format and create partitions on the USB drive. If the drive comes with a file system, it can be mounted by `root` using the instructions in crossref:basics[mount-unmount,“Mounting and Unmounting File Systems”].
[WARNING]
====
Allowing untrusted users to mount arbitrary media, by enabling `vfs.usermount` as described below, should not be considered safe from a security point of view. Most file systems were not built to safeguard against malicious devices.
====
To make the device mountable as a normal user, one solution is to make all users of the device members of the `operator` group using man:pw[8]. Next, ensure that `operator` is able to read and write the device by adding these lines to [.filename]#/etc/devfs.rules#:
[.programlisting]
....
[localrules=5]
add path 'da*' mode 0660 group operator
....
[NOTE]
====
If internal SCSI disks are also installed in the system, change the second line as follows:
[.programlisting]
....
add path 'da[3-9]*' mode 0660 group operator
....
This will exclude the first three SCSI disks ([.filename]#da0# to [.filename]#da2#) from belonging to the `operator` group. Replace _3_ with the number of internal SCSI disks. Refer to man:devfs.rules[5] for more information about this file.
====
Next, enable the ruleset in [.filename]#/etc/rc.conf#:
[.programlisting]
....
devfs_system_ruleset="localrules"
....
Then, instruct the system to allow regular users to mount file systems by adding the following line to [.filename]#/etc/sysctl.conf#:
[.programlisting]
....
vfs.usermount=1
....
Since this only takes effect after the next reboot, use `sysctl` to set this variable now:
[source,shell]
....
# sysctl vfs.usermount=1
vfs.usermount: 0 -> 1
....
The final step is to create a directory where the file system is to be mounted. This directory needs to be owned by the user that is to mount the file system. One way to do that is for `root` to create a subdirectory owned by that user as [.filename]#/mnt/username#. In the following example, replace _username_ with the login name of the user and _usergroup_ with the user's primary group:
[source,shell]
....
# mkdir /mnt/username
# chown username:usergroup /mnt/username
....
Suppose a USB thumbdrive is plugged in, and a device [.filename]#/dev/da0s1# appears. If the device is formatted with a FAT file system, the user can mount it using:
[source,shell]
....
% mount -t msdosfs -o -m=644,-M=755 /dev/da0s1 /mnt/username
....
Before the device can be unplugged, it _must_ be unmounted first:
[source,shell]
....
% umount /mnt/username
....
After device removal, the system message buffer will show messages similar to the following:
[source,shell]
....
umass0: at uhub3, port 2, addr 3 (disconnected)
da0 at umass-sim0 bus 0 scbus4 target 0 lun 0
da0: <STECH Simple Drive 1.04> s/n WD-WXE508CAN263 detached
(da0:umass-sim0:0:0:0): Periph destroyed
....
=== Automounting Removable Media
USB devices can be automatically mounted by uncommenting this line in [.filename]#/etc/auto_master#:
[source,shell]
....
/media -media -nosuid
....
Then add these lines to [.filename]#/etc/devd.conf#:
[source,shell]
....
notify 100 {
match "system" "GEOM";
match "subsystem" "DEV";
action "/usr/sbin/automount -c";
};
....
Reload the configuration if man:autofs[5] and man:devd[8] are already running:
[source,shell]
....
# service automount restart
# service devd restart
....
man:autofs[5] can be set to start at boot by adding this line to [.filename]#/etc/rc.conf#:
[.programlisting]
....
autofs_enable="YES"
....
man:autofs[5] requires man:devd[8] to be enabled, as it is by default.
Start the services immediately with:
[source,shell]
....
# service automount start
# service automountd start
# service autounmountd start
# service devd start
....
Each file system that can be automatically mounted appears as a directory in [.filename]#/media/#. The directory is named after the file system label. If the label is missing, the directory is named after the device node.
The file system is transparently mounted on the first access, and unmounted after a period of inactivity. Automounted drives can also be unmounted manually:
[source,shell]
....
# automount -fu
....
This mechanism is typically used for memory cards and USB memory sticks. It can be used with any block device, including optical drives or iSCSI LUNs.
[[creating-cds]]
== Creating and Using CD Media
Compact Disc (CD) media provide a number of features that differentiate them from conventional disks. They are designed so that they can be read continuously without delays to move the head between tracks. While CD media do have tracks, these refer to a section of data to be read continuously, and not a physical property of the disk. The ISO 9660 file system was designed to deal with these differences.
The FreeBSD Ports Collection provides several utilities for burning and duplicating audio and data CDs. This chapter demonstrates the use of several command line utilities. For CD burning software with a graphical utility, consider installing the package:sysutils/xcdroast[] or package:sysutils/k3b[] packages or ports.
[[atapicam]]
=== Supported Devices
The [.filename]#GENERIC# kernel provides support for SCSI, USB, and ATAPI CD readers and burners. If a custom kernel is used, the options that need to be present in the kernel configuration file vary by the type of device.
For a SCSI burner, make sure these options are present:
[.programlisting]
....
device scbus # SCSI bus (required for ATA/SCSI)
device da # Direct Access (disks)
device pass # Passthrough device (direct ATA/SCSI access)
device cd # needed for CD and DVD burners
....
For a USB burner, make sure these options are present:
[.programlisting]
....
device scbus # SCSI bus (required for ATA/SCSI)
device da # Direct Access (disks)
device pass # Passthrough device (direct ATA/SCSI access)
device cd # needed for CD and DVD burners
device uhci # provides USB 1.x support
device ohci # provides USB 1.x support
device ehci # provides USB 2.0 support
device xhci # provides USB 3.0 support
device usb # USB Bus (required)
device umass # Disks/Mass storage - Requires scbus and da
....
For an ATAPI burner, make sure these options are present:
[.programlisting]
....
device ata # Legacy ATA/SATA controllers
device scbus # SCSI bus (required for ATA/SCSI)
device pass # Passthrough device (direct ATA/SCSI access)
device cd # needed for CD and DVD burners
....
[NOTE]
====
On FreeBSD versions prior to 10.x, this line is also needed in the kernel configuration file if the burner is an ATAPI device:
[.programlisting]
....
device atapicam
....
Alternately, this driver can be loaded at boot time by adding the following line to [.filename]#/boot/loader.conf#:
[.programlisting]
....
atapicam_load="YES"
....
This will require a reboot of the system as this driver can only be loaded at boot time.
====
To verify that FreeBSD recognizes the device, run `dmesg` and look for an entry for the device. On systems prior to 10.x, the device name in the first line of the output will be [.filename]#acd0# instead of [.filename]#cd0#.
[source,shell]
....
% dmesg | grep cd
cd0 at ahcich1 bus 0 scbus1 target 0 lun 0
cd0: <HL-DT-ST DVDRAM GU70N LT20> Removable CD-ROM SCSI-0 device
cd0: Serial Number M3OD3S34152
cd0: 150.000MB/s transfers (SATA 1.x, UDMA6, ATAPI 12bytes, PIO 8192bytes)
cd0: Attempt to query device size failed: NOT READY, Medium not present - tray closed
....
[[cdrecord]]
=== Burning a CD
In FreeBSD, `cdrecord` can be used to burn CDs. This command is installed with the package:sysutils/cdrtools[] package or port.
While `cdrecord` has many options, basic usage is simple. Specify the name of the ISO file to burn and, if the system has multiple burner devices, specify the name of the device to use:
[source,shell]
....
# cdrecord dev=device imagefile.iso
....
To determine the device name of the burner, use `-scanbus` which might produce results like this:
[source,shell]
....
# cdrecord -scanbus
ProDVD-ProBD-Clone 3.00 (amd64-unknown-freebsd10.0) Copyright (C) 1995-2010 Jörg Schilling
Using libscg version 'schily-0.9'
scsibus0:
0,0,0 0) 'SEAGATE ' 'ST39236LW ' '0004' Disk
0,1,0 1) 'SEAGATE ' 'ST39173W ' '5958' Disk
0,2,0 2) *
0,3,0 3) 'iomega ' 'jaz 1GB ' 'J.86' Removable Disk
0,4,0 4) 'NEC ' 'CD-ROM DRIVE:466' '1.26' Removable CD-ROM
0,5,0 5) *
0,6,0 6) *
0,7,0 7) *
scsibus1:
1,0,0 100) *
1,1,0 101) *
1,2,0 102) *
1,3,0 103) *
1,4,0 104) *
1,5,0 105) 'YAMAHA ' 'CRW4260 ' '1.0q' Removable CD-ROM
1,6,0 106) 'ARTEC ' 'AM12S ' '1.06' Scanner
1,7,0 107) *
....
Locate the entry for the CD burner and use the three numbers separated by commas as the value for `dev`. In this case, the Yamaha burner device is `1,5,0`, so the appropriate input to specify that device is `dev=1,5,0`. Refer to the manual page for `cdrecord` for other ways to specify this value and for information on writing audio tracks and controlling the write speed.
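Putting this together with the generic command shown earlier, burning an image to the Yamaha drive from the sample output above would look like:
[source,shell]
....
# cdrecord dev=1,5,0 imagefile.iso
....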
Alternately, run the following command to get the device address of the burner:
[source,shell]
....
# camcontrol devlist
<MATSHITA CDRW/DVD UJDA740 1.00> at scbus1 target 0 lun 0 (cd0,pass0)
....
Use the numeric values for `scbus`, `target`, and `lun`. For this example, `1,0,0` is the device name to use.
[[mkisofs]]
=== Writing Data to an ISO File System
In order to produce a data CD, the data files that are going to make up the tracks on the CD must be prepared before they can be burned to the CD. In FreeBSD, package:sysutils/cdrtools[] installs `mkisofs`, which can be used to produce an ISO 9660 file system that is an image of a directory tree within a UNIX(R) file system. The simplest usage is to specify the name of the ISO file to create and the path to the files to place into the ISO 9660 file system:
[source,shell]
....
# mkisofs -o imagefile.iso /path/to/tree
....
This command maps the file names in the specified path to names that fit the limitations of the standard ISO 9660 file system, and will exclude files that do not meet the standard for ISO file systems.
A number of options are available to overcome the restrictions imposed by the standard. In particular, `-R` enables the Rock Ridge extensions common to UNIX(R) systems and `-J` enables Joliet extensions used by Microsoft(R) systems.
For CDs that are going to be used only on FreeBSD systems, `-U` can be used to disable all filename restrictions. When used with `-R`, it produces a file system image that is identical to the specified FreeBSD tree, even if it violates the ISO 9660 standard.
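As an illustration, combining `-U` and `-R` with the basic invocation shown above produces an image that mirrors the FreeBSD tree exactly:
[source,shell]
....
# mkisofs -U -R -o imagefile.iso /path/to/tree
....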
The last option of general use is `-b`. This is used to specify the location of a boot image for use in producing an "El Torito" bootable CD. This option takes an argument which is the path to a boot image from the top of the tree being written to the CD. By default, `mkisofs` creates an ISO image in "floppy disk emulation" mode, and thus expects the boot image to be exactly 1200, 1440 or 2880 KB in size. Some boot loaders, like the one used by the FreeBSD distribution media, do not use emulation mode. In this case, `-no-emul-boot` should be used. So, if [.filename]#/tmp/myboot# holds a bootable FreeBSD system with the boot image in [.filename]#/tmp/myboot/boot/cdboot#, this command would produce [.filename]#/tmp/bootable.iso#:
[source,shell]
....
# mkisofs -R -no-emul-boot -b boot/cdboot -o /tmp/bootable.iso /tmp/myboot
....
The resulting ISO image can be mounted as a memory disk with:
[source,shell]
....
# mdconfig -a -t vnode -f /tmp/bootable.iso -u 0
# mount -t cd9660 /dev/md0 /mnt
....
One can then verify that [.filename]#/mnt# and [.filename]#/tmp/myboot# are identical.
There are many other options available for `mkisofs` to fine-tune its behavior. Refer to man:mkisofs[8] for details.
[NOTE]
====
It is possible to copy a data CD to an image file that is functionally equivalent to the image file created with `mkisofs`. To do so, use `dd` with the device name as the input file and the name of the ISO to create as the output file:
[source,shell]
....
# dd if=/dev/cd0 of=file.iso bs=2048
....
The resulting image file can be burned to CD as described in <<cdrecord>>.
====
[[mounting-cd]]
=== Using Data CDs
Once an ISO has been burned to a CD, it can be mounted by specifying the file system type, the name of the device containing the CD, and an existing mount point:
[source,shell]
....
# mount -t cd9660 /dev/cd0 /mnt
....
Since `mount` assumes that a file system is of type `ufs`, an `Incorrect super block` error will occur if `-t cd9660` is not included when mounting a data CD.
While any data CD can be mounted this way, disks with certain ISO 9660 extensions might behave oddly. For example, Joliet disks store all filenames in two-byte Unicode characters. If some non-English characters show up as question marks, specify the local charset with `-C`. For more information, refer to man:mount_cd9660[8].
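For example, assuming a Russian KOI8-R locale (the charset name is only an illustration), the disc could be mounted with the charset specified explicitly; this conversion depends on the kernel module described in the note below:
[source,shell]
....
# mount_cd9660 -C koi8-r /dev/cd0 /mnt
....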
[NOTE]
====
In order to do this character conversion with the help of `-C`, the kernel requires the [.filename]#cd9660_iconv.ko# module to be loaded. This can be done either by adding this line to [.filename]#loader.conf#:
[.programlisting]
....
cd9660_iconv_load="YES"
....
and then rebooting the machine, or by directly loading the module with `kldload`.
====
Occasionally, `Device not configured` will be displayed when trying to mount a data CD. This usually means that the CD drive has not detected a disk in the tray, or that the drive is not visible on the bus. It can take a couple of seconds for a CD drive to detect media, so be patient.
Sometimes, a SCSI CD drive may be missed because it did not have enough time to answer the bus reset. To resolve this, a custom kernel can be created which increases the default SCSI delay. Add the following option to the custom kernel configuration file and rebuild the kernel using the instructions in crossref:kernelconfig[kernelconfig-building,“Building and Installing a Custom Kernel”]:
[.programlisting]
....
options SCSI_DELAY=15000
....
This tells the SCSI bus to pause 15 seconds during boot, to give the CD drive every possible chance to answer the bus reset.
[NOTE]
====
It is possible to burn a file directly to CD, without creating an ISO 9660 file system. This is known as burning a raw data CD and some people do this for backup purposes.
This type of disk can not be mounted as a normal data CD. In order to retrieve the data burned to such a CD, the data must be read from the raw device node. For example, this command will extract a compressed tar file located on the second CD device into the current working directory:
[source,shell]
....
# tar xzvf /dev/cd1
....
In order to mount a data CD, the data must be written using `mkisofs`.
====
[[duplicating-audiocds]]
=== Duplicating Audio CDs
To duplicate an audio CD, extract the audio data from the CD to a series of files, then write these files to a blank CD.
<<using-cdrecord>> describes how to duplicate and burn an audio CD. If the FreeBSD version is less than 10.0 and the device is ATAPI, the `atapicam` module must first be loaded using the instructions in <<atapicam>>.
[[using-cdrecord]]
[.procedure]
.Procedure: Duplicating an Audio CD
. The package:sysutils/cdrtools[] package or port installs `cdda2wav`. This command can be used to extract all of the audio tracks, with each track written to a separate WAV file in the current working directory:
+
[source,shell]
....
% cdda2wav -vall -B -Owav
....
+
A device name does not need to be specified if there is only one CD device on the system. Refer to the `cdda2wav` manual page for instructions on how to specify a device and to learn more about the other options available for this command.
. Use `cdrecord` to write the [.filename]#.wav# files:
+
[source,shell]
....
% cdrecord -v dev=2,0 -dao -useinfo *.wav
....
+
Make sure that _2,0_ is set appropriately, as described in <<cdrecord>>.
[[creating-dvds]]
== Creating and Using DVD Media
Compared to the CD, the DVD is the next generation of optical media storage technology. The DVD can hold more data than any CD and is the standard for video publishing.
Five physical recordable formats can be defined for a recordable DVD:
* DVD-R: This was the first DVD recordable format available. The DVD-R standard is defined by the http://www.dvdforum.org/forum.shtml[DVD Forum]. This format is write once.
* DVD-RW: This is the rewritable version of the DVD-R standard. A DVD-RW can be rewritten about 1000 times.
* DVD-RAM: This is a rewritable format which can be seen as a removable hard drive. However, this media is not compatible with most DVD-ROM drives and DVD-Video players as only a few DVD writers support the DVD-RAM format. Refer to <<creating-dvd-ram>> for more information on DVD-RAM use.
* DVD+RW: This is a rewritable format defined by the https://en.wikipedia.org/wiki/DVD%2BRW_Alliance[DVD+RW Alliance]. A DVD+RW can be rewritten about 1000 times.
* DVD+R: This format is the write once variation of the DVD+RW format.
A single layer recordable DVD can hold up to 4,700,000,000 bytes, which is actually 4.38 GB or 4485 MB, as 1 kilobyte is 1024 bytes.
[NOTE]
====
A distinction must be made between the physical media and the application. For example, a DVD-Video is a specific file layout that can be written on any recordable DVD physical media such as DVD-R, DVD+R, or DVD-RW. Before choosing the type of media, ensure that both the burner and the DVD-Video player are compatible with the media under consideration.
====
=== Configuration
To perform DVD recording, use man:growisofs[1]. This command is part of the package:sysutils/dvd+rw-tools[] utilities which support all DVD media types.
These tools use the SCSI subsystem to access the devices, therefore <<atapicam,ATAPI/CAM support>> must be loaded or statically compiled into the kernel. This support is not needed if the burner uses the USB interface. Refer to <<usb-disks>> for more details on USB device configuration.
DMA access must also be enabled for ATAPI devices, by adding the following line to [.filename]#/boot/loader.conf#:
[.programlisting]
....
hw.ata.atapi_dma="1"
....
Before attempting to use dvd+rw-tools, consult the http://fy.chalmers.se/~appro/linux/DVD+RW/hcn.html[Hardware Compatibility Notes].
[NOTE]
====
For a graphical user interface, consider using package:sysutils/k3b[] which provides a user friendly interface to man:growisofs[1] and many other burning tools.
====
=== Burning Data DVDs
Since man:growisofs[1] is a front-end to <<mkisofs,mkisofs>>, it will invoke man:mkisofs[8] to create the file system layout and perform the write on the DVD. This means that an image of the data does not need to be created before the burning process.
To burn to a DVD+R or a DVD-R the data in [.filename]#/path/to/data#, use the following command:
[source,shell]
....
# growisofs -dvd-compat -Z /dev/cd0 -J -R /path/to/data
....
In this example, `-J -R` is passed to man:mkisofs[8] to create an ISO 9660 file system with Joliet and Rock Ridge extensions. Refer to man:mkisofs[8] for more details.
For the initial session recording, `-Z` is used for both single and multiple sessions. Replace _/dev/cd0_, with the name of the DVD device. Using `-dvd-compat` indicates that the disk will be closed and that the recording will be unappendable. This should also provide better media compatibility with DVD-ROM drives.
To burn a pre-mastered image, such as _imagefile.iso_, use:
[source,shell]
....
# growisofs -dvd-compat -Z /dev/cd0=imagefile.iso
....
The write speed should be detected and automatically set according to the media and the drive being used. To force the write speed, use `-speed=`. Refer to man:growisofs[1] for example usage.
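A minimal sketch, assuming the drive and media support a 4x write speed, would be to force it while burning a pre-mastered image:
[source,shell]
....
# growisofs -dvd-compat -speed=4 -Z /dev/cd0=imagefile.iso
....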
[NOTE]
====
In order to support working files larger than 4.38GB, a UDF/ISO-9660 hybrid file system must be created by passing `-udf -iso-level 3` to man:mkisofs[8] and all related programs, such as man:growisofs[1]. This is required only when creating an ISO image file or when writing files directly to a disk. Since a disk created this way must be mounted as a UDF file system with man:mount_udf[8], it will be usable only on a UDF-aware operating system. Otherwise it will look as if it contains corrupted files.
To create this type of ISO file:
[source,shell]
....
% mkisofs -R -J -udf -iso-level 3 -o imagefile.iso /path/to/data
....
To burn files directly to a disk:
[source,shell]
....
# growisofs -dvd-compat -udf -iso-level 3 -Z /dev/cd0 -J -R /path/to/data
....
When an ISO image already contains large files, no additional options are required for man:growisofs[1] to burn that image on a disk.
Be sure to use an up-to-date version of package:sysutils/cdrtools[], which contains man:mkisofs[8], as an older version may not contain large files support. If the latest version does not work, install package:sysutils/cdrtools-devel[] and read its man:mkisofs[8].
====
=== Burning a DVD-Video
A DVD-Video is a specific file layout based on the ISO 9660 and micro-UDF (M-UDF) specifications. Since DVD-Video presents a specific data structure hierarchy, a particular program such as package:multimedia/dvdauthor[] is needed to author the DVD.
If an image of the DVD-Video file system already exists, it can be burned in the same way as any other image. If `dvdauthor` was used to make the DVD and the result is in [.filename]#/path/to/video#, the following command should be used to burn the DVD-Video:
[source,shell]
....
# growisofs -Z /dev/cd0 -dvd-video /path/to/video
....
`-dvd-video` is passed to man:mkisofs[8] to instruct it to create a DVD-Video file system layout. This option implies the `-dvd-compat` man:growisofs[1] option.
=== Using a DVD+RW
Unlike CD-RW, a virgin DVD+RW needs to be formatted before first use. It is _recommended_ to let man:growisofs[1] take care of this automatically whenever appropriate. However, it is possible to use `dvd+rw-format` to format the DVD+RW:
[source,shell]
....
# dvd+rw-format /dev/cd0
....
Only perform this operation once and keep in mind that only virgin DVD+RW media need to be formatted. Once formatted, the DVD+RW can be burned as usual.
To burn a totally new file system and not just append some data onto a DVD+RW, the media does not need to be blanked first. Instead, write over the previous recording like this:
[source,shell]
....
# growisofs -Z /dev/cd0 -J -R /path/to/newdata
....
The DVD+RW format supports appending data to a previous recording. This operation consists of merging a new session to the existing one as it is not considered to be multi-session writing. man:growisofs[1] will _grow_ the ISO 9660 file system present on the media.
For example, to append data to a DVD+RW, use the following:
[source,shell]
....
# growisofs -M /dev/cd0 -J -R /path/to/nextdata
....
The same man:mkisofs[8] options used to burn the initial session should be used during next writes.
[NOTE]
====
Use `-dvd-compat` for better media compatibility with DVD-ROM drives. When using DVD+RW, this option will not prevent the addition of data.
====
To blank the media, use:
[source,shell]
....
# growisofs -Z /dev/cd0=/dev/zero
....
=== Using a DVD-RW
A DVD-RW accepts two disc formats: incremental sequential and restricted overwrite. By default, DVD-RW discs are in sequential format.
A virgin DVD-RW can be directly written without being formatted. However, a non-virgin DVD-RW in sequential format needs to be blanked before writing a new initial session.
To blank a DVD-RW in sequential mode:
[source,shell]
....
# dvd+rw-format -blank=full /dev/cd0
....
[NOTE]
====
A full blanking using `-blank=full` will take about one hour on a 1x media. A fast blanking can be performed using `-blank`, if the DVD-RW will be recorded in Disk-At-Once (DAO) mode. To burn the DVD-RW in DAO mode, use the command:
[source,shell]
....
# growisofs -use-the-force-luke=dao -Z /dev/cd0=imagefile.iso
....
Since man:growisofs[1] automatically attempts to detect fast blanked media and engage DAO write, `-use-the-force-luke=dao` should not be required.
One should instead use restricted overwrite mode with any DVD-RW as this format is more flexible than the default of incremental sequential.
====
To write data on a sequential DVD-RW, use the same instructions as for the other DVD formats:
[source,shell]
....
# growisofs -Z /dev/cd0 -J -R /path/to/data
....
To append some data to a previous recording, use `-M` with man:growisofs[1]. However, if data is appended on a DVD-RW in incremental sequential mode, a new session will be created on the disc and the result will be a multi-session disc.
A DVD-RW in restricted overwrite format does not need to be blanked before a new initial session. Instead, overwrite the disc with `-Z`. It is also possible to grow an existing ISO 9660 file system written on the disc with `-M`. The result will be a one-session DVD.
To put a DVD-RW in restricted overwrite format, the following command must be used:
[source,shell]
....
# dvd+rw-format /dev/cd0
....
To change back to sequential format, use:
[source,shell]
....
# dvd+rw-format -blank=full /dev/cd0
....
=== Multi-Session
Few DVD-ROM drives support multi-session DVDs, and most of the time they only read the first session. DVD+R, DVD-R and DVD-RW in sequential format can accept multiple sessions. The notion of multiple sessions does not exist for the DVD+RW and the DVD-RW restricted overwrite formats.
Using the following command after an initial non-closed session on a DVD+R, DVD-R, or DVD-RW in sequential format, will add a new session to the disc:
[source,shell]
....
# growisofs -M /dev/cd0 -J -R /path/to/nextdata
....
Using this command with a DVD+RW or a DVD-RW in restricted overwrite mode will append data while merging the new session to the existing one. The result will be a single-session disc. Use this method to add data after an initial write on these types of media.
[NOTE]
====
Since some space on the media is used between each session to mark the end and start of sessions, one should add sessions with a large amount of data to optimize media space. The number of sessions is limited to 154 for a DVD+R, about 2000 for a DVD-R, and 127 for a DVD+R Double Layer.
====
=== For More Information
To obtain more information about a DVD, use `dvd+rw-mediainfo _/dev/cd0_` while the disc is in the specified drive.
More information about dvd+rw-tools can be found in man:growisofs[1], on the http://fy.chalmers.se/~appro/linux/DVD+RW/[dvd+rw-tools web site], and in the http://lists.debian.org/cdwrite/[cdwrite mailing list] archives.
[NOTE]
====
When creating a problem report related to the use of dvd+rw-tools, always include the output of `dvd+rw-mediainfo`.
====
[[creating-dvd-ram]]
=== Using a DVD-RAM
DVD-RAM writers can use either a SCSI or ATAPI interface. For ATAPI devices, DMA access has to be enabled by adding the following line to [.filename]#/boot/loader.conf#:
[.programlisting]
....
hw.ata.atapi_dma="1"
....
A DVD-RAM can be seen as a removable hard drive. Like any other hard drive, the DVD-RAM must be formatted before it can be used. In this example, the whole disk space will be formatted with a standard UFS2 file system:
[source,shell]
....
# dd if=/dev/zero of=/dev/acd0 bs=2k count=1
# bsdlabel -Bw acd0
# newfs /dev/acd0
....
The DVD device, [.filename]#acd0#, must be changed according to the configuration.
Once the DVD-RAM has been formatted, it can be mounted as a normal hard drive:
[source,shell]
....
# mount /dev/acd0 /mnt
....
Once mounted, the DVD-RAM will be both readable and writeable.
[[floppies]]
== Creating and Using Floppy Disks
This section explains how to format a 3.5 inch floppy disk in FreeBSD.
[.procedure]
====
*Procedure: Steps to Format a Floppy*
A floppy disk needs to be low-level formatted before it can be used. This is usually done by the vendor, but formatting is a good way to check media integrity. To low-level format the floppy disk on FreeBSD, use man:fdformat[1]. When using this utility, make note of any error messages, as these can help determine if the disk is good or bad.
. To format the floppy, insert a new 3.5 inch floppy disk into the first floppy drive and issue:
+
[source,shell]
....
# /usr/sbin/fdformat -f 1440 /dev/fd0
....
+
. After low-level formatting the disk, create a disk label as it is needed by the system to determine the size of the disk and its geometry. The supported geometry values are listed in [.filename]#/etc/disktab#.
+
To write the disk label, use man:bsdlabel[8]:
+
[source,shell]
....
# /sbin/bsdlabel -B -w /dev/fd0 fd1440
....
+
. The floppy is now ready to be high-level formatted with a file system. The floppy's file system can be either UFS or FAT, where FAT is generally a better choice for floppies.
+
To format the floppy with FAT, issue:
+
[source,shell]
....
# /sbin/newfs_msdos /dev/fd0
....
====
The disk is now ready for use. To use the floppy, mount it with man:mount_msdosfs[8]. One can also install and use package:emulators/mtools[] from the Ports Collection.
[[using-ntfs]]
== Using NTFS Disks
This section explains how to mount NTFS disks in FreeBSD.
NTFS (New Technology File System) is a proprietary journaling file system
developed by Microsoft(R). It has been the default file system in Microsoft
Windows(R) for many years. FreeBSD can mount NTFS volumes using a FUSE file
system. These file systems are implemented as user space programs which
interact with the man:fusefs[5] kernel module via a well defined interface.
[.procedure]
====
*Procedure: Steps to Mount a NTFS Disk*
. Before using a FUSE file system we need to load the man:fusefs[5] kernel
module:
+
[source,shell]
....
# kldload fusefs
....
+
Use man:sysrc[8] to load the module at startup:
+
[source,shell]
....
# sysrc kld_list+=fusefs
....
. Install the NTFS file system implementation from packages, as in this example (see
crossref:ports[pkgng-intro,Using pkg for Binary Package Management]), or from
ports (see crossref:ports[ports-using,Using the Ports Collection]):
+
[source,shell]
....
# pkg install fusefs-ntfs
....
. Last, we need to create a directory where the file system will be mounted:
+
[source,shell]
....
# mkdir /mnt/usb
....
. Suppose a USB disk is plugged in. The disk partition information can be
viewed with man:gpart[8]:
+
[source,shell]
....
# gpart show da0
=> 63 1953525105 da0 MBR (932G)
63 1953525105 1 ntfs (932G)
....
. We can mount the disk using the following command:
+
[source,shell]
....
# ntfs-3g /dev/da0s1 /mnt/usb/
....
The disk is now ready to use.
+
. Additionally, an entry can be added to [.filename]#/etc/fstab#:
+
[.programlisting]
....
/dev/da0s1 /mnt/usb ntfs mountprog=/usr/local/bin/ntfs-3g,noauto,rw 0 0
....
+
Now the disk can be mounted with:
+
[source,shell]
....
# mount /mnt/usb
....
. The disk can be unmounted with:
+
[source,shell]
....
# umount /mnt/usb/
....
====
[[backup-basics]]
== Backup Basics
Implementing a backup plan is essential in order to have the ability to recover from disk failure, accidental file deletion, random file corruption, or complete machine destruction, including destruction of on-site backups.
The backup type and schedule will vary, depending upon the importance of the data, the granularity needed for file restores, and the amount of acceptable downtime. Some possible backup techniques include:
* Archives of the whole system, backed up onto permanent, off-site media. This provides protection against all of the problems listed above, but is slow and inconvenient to restore from, especially for non-privileged users.
* File system snapshots, which are useful for restoring deleted files or previous versions of files.
* Copies of whole file systems or disks which are synchronized with another system on the network using a scheduled package:net/rsync[].
* Hardware or software RAID, which minimizes or avoids downtime when a disk fails.
Typically, a mix of backup techniques is used. For example, one could create a schedule to automate a weekly, full system backup that is stored off-site and to supplement this backup with hourly ZFS snapshots. In addition, one could make a manual backup of individual directories or files before making file edits or deletions.
This section describes some of the utilities which can be used to create and manage backups on a FreeBSD system.
=== File System Backups
The traditional UNIX(R) programs for backing up a file system are man:dump[8], which creates the backup, and man:restore[8], which restores the backup. These utilities work at the disk block level, below the abstractions of the files, links, and directories that are created by file systems. Unlike other backup software, `dump` backs up an entire file system and is unable to backup only part of a file system or a directory tree that spans multiple file systems. Instead of writing files and directories, `dump` writes the raw data blocks that comprise files and directories.
[NOTE]
====
If `dump` is used on the root directory, it will not back up [.filename]#/home#, [.filename]#/usr# or many other directories since these are typically mount points for other file systems or symbolic links into those file systems.
====
When used to restore data, `restore` stores temporary files in [.filename]#/tmp/# by default. When using a recovery disk with a small [.filename]#/tmp#, set `TMPDIR` to a directory with more free space in order for the restore to succeed.
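A minimal sketch of doing this, assuming [.filename]#/var/tmp# has enough free space and a dump file named [.filename]#backup.dump# (both only illustrations), is to set the variable for the duration of the command:
[source,shell]
....
# env TMPDIR=/var/tmp restore -i -f backup.dump
....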
When using `dump`, be aware that some quirks remain from its early days in Version 6 of AT&T UNIX(R), circa 1975. The default parameters assume a backup to a 9-track tape, rather than to another type of media or to the high-density tapes available today. These defaults must be overridden on the command line.
It is possible to backup a file system across the network to another system or to a tape drive attached to another computer. While the man:rdump[8] and man:rrestore[8] utilities can be used for this purpose, they are not considered to be secure.
Instead, one can use `dump` and `restore` in a more secure fashion over an SSH connection. This example creates a full, compressed backup of [.filename]#/usr# and sends the backup file to the specified host over a SSH connection.
.Using `dump` over ssh
[example]
====
[source,shell]
....
# /sbin/dump -0uan -f - /usr | gzip -2 | ssh -c blowfish \
targetuser@targetmachine.example.com dd of=/mybigfiles/dump-usr-l0.gz
....
====
This example sets `RSH` in order to write the backup to a tape drive on a remote system over a SSH connection:
.Using `dump` over ssh with `RSH` Set
[example]
====
[source,shell]
....
# env RSH=/usr/bin/ssh /sbin/dump -0uan -f targetuser@targetmachine.example.com:/dev/sa0 /usr
....
====
=== Directory Backups
Several built-in utilities are available for backing up and restoring specified files and directories as needed.
A good choice for making a backup of all of the files in a directory is man:tar[1]. This utility dates back to Version 6 of AT&T UNIX(R) and by default assumes a recursive backup to a local tape device. Switches can be used to instead specify the name of a backup file.
This example creates a compressed backup of the current directory and saves it to [.filename]#/tmp/mybackup.tgz#. When creating a backup file, make sure that the backup is not saved to the same directory that is being backed up.
.Backing Up the Current Directory with `tar`
[example]
====
[source,shell]
....
# tar czvf /tmp/mybackup.tgz .
....
====
To restore the entire backup, `cd` into the directory to restore into and specify the name of the backup. Note that this will overwrite any newer versions of files in the restore directory. When in doubt, restore to a temporary directory or specify the name of the file within the backup to restore.
.Restoring the Current Directory with `tar`
[example]
====
[source,shell]
....
# tar xzvf /tmp/mybackup.tgz
....
====
There are dozens of available switches which are described in man:tar[1]. This utility also supports the use of exclude patterns to specify which files should not be included when backing up the specified directory or restoring files from a backup.
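As an illustration, the backup from the earlier example could be repeated while excluding core dumps and object files (the patterns are only examples):
[source,shell]
....
# tar czvf /tmp/mybackup.tgz --exclude='*.core' --exclude='*.o' .
....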
To create a backup using a specified list of files and directories, man:cpio[1] is a good choice. Unlike `tar`, `cpio` does not know how to walk the directory tree and it must be provided the list of files to backup.
For example, a list of files can be created using `ls` or `find`. This example creates a recursive listing of the current directory which is then piped to `cpio` in order to create an output backup file named [.filename]#/tmp/mybackup.cpio#.
.Using `ls` and `cpio` to Make a Recursive Backup of the Current Directory
[example]
====
[source,shell]
....
# ls -R | cpio -ovF /tmp/mybackup.cpio
....
====
A backup utility which tries to bridge the features provided by `tar` and `cpio` is man:pax[1]. Over the years, the various versions of `tar` and `cpio` became slightly incompatible. POSIX(R) created `pax` which attempts to read and write many of the various `cpio` and `tar` formats, plus new formats of its own.
The `pax` equivalent to the previous examples would be:
.Backing Up the Current Directory with `pax`
[example]
====
[source,shell]
....
# pax -wf /tmp/mybackup.pax .
....
====
[[backups-tapebackups]]
=== Using Data Tapes for Backups
While tape technology has continued to evolve, modern backup systems tend to combine off-site backups with local removable media. FreeBSD supports any tape drive that uses SCSI, such as LTO or DAT. There is limited support for SATA and USB tape drives.
For SCSI tape devices, FreeBSD uses the man:sa[4] driver and the [.filename]#/dev/sa0#, [.filename]#/dev/nsa0#, and [.filename]#/dev/esa0# devices. The physical device name is [.filename]#/dev/sa0#. When [.filename]#/dev/nsa0# is used, the backup application will not rewind the tape after writing a file, which allows writing more than one file to a tape. Using [.filename]#/dev/esa0# ejects the tape after the device is closed.
In FreeBSD, `mt` is used to control operations of the tape drive, such as seeking through files on a tape or writing tape control marks to the tape. For example, the first three files on a tape can be preserved by skipping past them before writing a new file:
[source,shell]
....
# mt -f /dev/nsa0 fsf 3
....
This utility supports many operations. Refer to man:mt[1] for details.
To write a single file to tape using `tar`, specify the name of the tape device and the file to backup:
[source,shell]
....
# tar cvf /dev/sa0 file
....
To recover files from a `tar` archive on tape into the current directory:
[source,shell]
....
# tar xvf /dev/sa0
....
To backup a UFS file system, use `dump`. This example backs up [.filename]#/usr# without rewinding the tape when finished:
[source,shell]
....
# dump -0aL -b64 -f /dev/nsa0 /usr
....
To interactively restore files from a `dump` file on tape into the current directory:
[source,shell]
....
# restore -i -f /dev/nsa0
....
[[backups-programs-amanda]]
=== Third-Party Backup Utilities
The FreeBSD Ports Collection provides many third-party utilities which can be used to schedule the creation of backups, simplify tape backup, and make backups easier and more convenient. Many of these applications are client/server based and can be used to automate the backups of a single system or all of the computers in a network.
Popular utilities include Amanda, Bacula, rsync, and duplicity.
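As a small illustration of one of these tools, `rsync` can mirror a local directory to another machine over the network; the host name and paths in this sketch are placeholders:

[source,shell]
....
# rsync -av /home/ backup@backuphost:/backups/home/
....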
=== Emergency Recovery
In addition to regular backups, it is recommended to perform the following steps as part of an emergency preparedness plan.
Create a print copy of the output of the following commands:
* `gpart show`
* `more /etc/fstab`
* `dmesg`
Store this printout and a copy of the installation media in a secure location. Should an emergency restore be needed, boot into the installation media and select `Live CD` to access a rescue shell. This rescue mode can be used to view the current state of the system, and if needed, to reformat disks and restore data from backups.
[NOTE]
====
The installation media for FreeBSD/i386 {rel112-current}-RELEASE does not include a rescue shell. For this version, instead download and burn a Livefs CD image from link:ftp://ftp.FreeBSD.org/pub/FreeBSD/releases/i386/ISO-IMAGES/{rel112-current}/FreeBSD-{rel112-current}-RELEASE-i386-livefs.iso[ftp://ftp.FreeBSD.org/pub/FreeBSD/releases/i386/ISO-IMAGES/{rel112-current}/FreeBSD-{rel112-current}-RELEASE-i386-livefs.iso].
====
Next, test the rescue shell and the backups. Make notes of the procedure. Store these notes with the media, the printouts, and the backups. These notes may prevent the inadvertent destruction of the backups while under the stress of performing an emergency recovery.
For an added measure of security, store the latest backup at a remote location which is physically separated from the computers and disk drives by a significant distance.
[[disks-virtual]]
== Memory Disks
In addition to physical disks, FreeBSD also supports the creation and use of memory disks. One possible use for a memory disk is to access the contents of an ISO file system without the overhead of first burning it to a CD or DVD, then mounting the CD/DVD media.
In FreeBSD, the man:md[4] driver is used to provide support for memory disks. The [.filename]#GENERIC# kernel includes this driver. When using a custom kernel configuration file, ensure it includes this line:
[.programlisting]
....
device md
....
[[disks-mdconfig]]
=== Attaching and Detaching Existing Images
To mount an existing file system image, use `mdconfig` to specify the name of the ISO file and a free unit number. Then, refer to that unit number to mount it on an existing mount point. Once mounted, the files in the ISO will appear in the mount point. This example attaches _diskimage.iso_ to the memory device [.filename]#/dev/md0# then mounts that memory device on [.filename]#/mnt#:
[source,shell]
....
# mdconfig -f diskimage.iso -u 0
# mount -t cd9660 /dev/md0 /mnt
....
Notice that `-t cd9660` was used to mount an ISO 9660 file system. If a unit number is not specified with `-u`, `mdconfig` will automatically allocate an unused memory device and output the name of the allocated unit, such as [.filename]#md4#. Refer to man:mdconfig[8] for more details about this command and its options.
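For example, attaching an image without `-u` might look like the following, where the reported unit number will vary:

[source,shell]
....
# mdconfig -f diskimage.iso
md4
# mount -t cd9660 /dev/md4 /mnt
....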
When a memory disk is no longer in use, its resources should be released back to the system. First, unmount the file system, then use `mdconfig` to detach the disk from the system and release its resources. To continue this example:
[source,shell]
....
# umount /mnt
# mdconfig -d -u 0
....
To determine if any memory disks are still attached to the system, type `mdconfig -l`.
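The output is simply a list of the attached units; for example, with two memory disks attached it might look like:

[source,shell]
....
# mdconfig -l
md0 md1
....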
[[disks-md-freebsd5]]
=== Creating a File- or Memory-Backed Memory Disk
FreeBSD also supports memory disks where the storage to use is allocated from either a hard disk or an area of memory. The first method is commonly referred to as a file-backed file system and the second method as a memory-backed file system. Both types can be created using `mdconfig`.
To create a new memory-backed file system, specify a type of `swap` and the size of the memory disk to create. Then, format the memory disk with a file system and mount as usual. This example creates a 5M memory disk on unit `1`. That memory disk is then formatted with the UFS file system before it is mounted:
[source,shell]
....
# mdconfig -a -t swap -s 5m -u 1
# newfs -U md1
/dev/md1: 5.0MB (10240 sectors) block size 16384, fragment size 2048
using 4 cylinder groups of 1.27MB, 81 blks, 192 inodes.
with soft updates
super-block backups (for fsck -b #) at:
160, 2752, 5344, 7936
# mount /dev/md1 /mnt
# df /mnt
Filesystem 1K-blocks Used Avail Capacity Mounted on
/dev/md1 4718 4 4338 0% /mnt
....
To create a new file-backed memory disk, first allocate an area of disk to use. This example creates an empty 5MB file named [.filename]#newimage#:
[source,shell]
....
# dd if=/dev/zero of=newimage bs=1k count=5k
5120+0 records in
5120+0 records out
....
Next, attach that file to a memory disk, label the memory disk and format it with the UFS file system, mount the memory disk, and verify the size of the file-backed disk:
[source,shell]
....
# mdconfig -f newimage -u 0
# bsdlabel -w md0 auto
# newfs -U md0a
/dev/md0a: 5.0MB (10224 sectors) block size 16384, fragment size 2048
using 4 cylinder groups of 1.25MB, 80 blks, 192 inodes.
super-block backups (for fsck -b #) at:
160, 2720, 5280, 7840
# mount /dev/md0a /mnt
# df /mnt
Filesystem 1K-blocks Used Avail Capacity Mounted on
/dev/md0a 4710 4 4330 0% /mnt
....
It takes several commands to create a file- or memory-backed file system using `mdconfig`. FreeBSD also comes with `mdmfs` which automatically configures a memory disk, formats it with the UFS file system, and mounts it. For example, after creating _newimage_ with `dd`, this one command is equivalent to running the `bsdlabel`, `newfs`, and `mount` commands shown above:
[source,shell]
....
# mdmfs -F newimage -s 5m md0 /mnt
....
To instead create a new memory-backed memory disk with `mdmfs`, use this one command:
[source,shell]
....
# mdmfs -s 5m md1 /mnt
....
If the unit number is not specified, `mdmfs` will automatically select an unused memory device. For more details about `mdmfs`, refer to man:mdmfs[8].
[[snapshots]]
== File System Snapshots
FreeBSD offers a feature in conjunction with crossref:config[soft-updates,Soft Updates]: file system snapshots.
UFS snapshots allow a user to create images of specified file systems, and treat them as a file. Snapshot files must be created in the file system that the action is performed on, and a user may create no more than 20 snapshots per file system. Active snapshots are recorded in the superblock so they are persistent across unmount and remount operations along with system reboots. When a snapshot is no longer required, it can be removed using man:rm[1]. While snapshots may be removed in any order, all of the used space may not be reclaimed because another snapshot may still claim some of the released blocks.
The un-alterable `snapshot` file flag is set by man:mksnap_ffs[8] after initial creation of a snapshot file. man:unlink[1] makes an exception for snapshot files since it allows them to be removed.
Snapshots are created using man:mount[8]. To place a snapshot of [.filename]#/var# in the file [.filename]#/var/snapshot/snap#, use the following command:
[source,shell]
....
# mount -u -o snapshot /var/snapshot/snap /var
....
Alternatively, use man:mksnap_ffs[8] to create the snapshot:
[source,shell]
....
# mksnap_ffs /var /var/snapshot/snap
....
One can find snapshot files on a file system, such as [.filename]#/var#, using man:find[1]:
[source,shell]
....
# find /var -flags snapshot
....
Once a snapshot has been created, it has several uses:
* Some administrators will use a snapshot file for backup purposes, because the snapshot can be transferred to CDs or tape.
* The file system integrity checker, man:fsck[8], may be run on the snapshot. Assuming that the file system was clean when it was mounted, this should always provide a clean and unchanging result.
* Running man:dump[8] on the snapshot will produce a dump file that is consistent with the file system and the timestamp of the snapshot. man:dump[8] can also take a snapshot, create a dump image, and then remove the snapshot in one command by using `-L`.
* The snapshot can be mounted as a frozen image of the file system. To man:mount[8] the snapshot [.filename]#/var/snapshot/snap# run:
+
[source,shell]
....
# mdconfig -a -t vnode -o readonly -f /var/snapshot/snap -u 4
# mount -r /dev/md4 /mnt
....
The frozen [.filename]#/var# is now available through [.filename]#/mnt#. Everything will initially be in the same state it was during the snapshot creation time. The only exception is that any earlier snapshots will appear as zero length files. To unmount the snapshot, use:
[source,shell]
....
# umount /mnt
# mdconfig -d -u 4
....
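When the snapshot itself is no longer needed, it can be deleted like any other file:

[source,shell]
....
# rm /var/snapshot/snap
....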
For more information about `softupdates` and file system snapshots, including technical papers, visit Marshall Kirk McKusick's website at http://www.mckusick.com/[http://www.mckusick.com/].
[[quotas]]
== Disk Quotas
Disk quotas can be used to limit the amount of disk space or the number of files a user or members of a group may allocate on a per-file system basis. This prevents one user or group of users from consuming all of the available disk space.
This section describes how to configure disk quotas for the UFS file system. To configure quotas on the ZFS file system, refer to crossref:zfs[zfs-zfs-quota,"Dataset, User, and Group Quotas"].
=== Enabling Disk Quotas
To determine if the FreeBSD kernel provides support for disk quotas:
[source,shell]
....
% sysctl kern.features.ufs_quota
kern.features.ufs_quota: 1
....
In this example, the `1` indicates quota support. If the value is instead `0`, add the following line to a custom kernel configuration file and rebuild the kernel using the instructions in crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel]:
[.programlisting]
....
options QUOTA
....
Next, enable disk quotas in [.filename]#/etc/rc.conf#:
[.programlisting]
....
quota_enable="YES"
....
Normally on bootup, the quota integrity of each file system is checked by man:quotacheck[8]. This program ensures that the data in the quota database properly reflects the data on the file system. This is a time-consuming process that will significantly affect the time the system takes to boot. To skip this step, add this variable to [.filename]#/etc/rc.conf#:
[.programlisting]
....
check_quotas="NO"
....
Finally, edit [.filename]#/etc/fstab# to enable disk quotas on a per-file system basis. To enable per-user quotas on a file system, add `userquota` to the options field of that file system's entry in [.filename]#/etc/fstab#. For example:
[.programlisting]
....
/dev/da1s2g /home ufs rw,userquota 1 2
....
To enable group quotas, use `groupquota` instead. To enable both user and group quotas, separate the options with a comma:
[.programlisting]
....
/dev/da1s2g /home ufs rw,userquota,groupquota 1 2
....
By default, quota files are stored in the root directory of the file system as [.filename]#quota.user# and [.filename]#quota.group#. Refer to man:fstab[5] for more information. Specifying an alternate location for the quota files is not recommended.
Once the configuration is complete, reboot the system and [.filename]#/etc/rc# will automatically run the appropriate commands to create the initial quota files for all of the quotas enabled in [.filename]#/etc/fstab#.
In the normal course of operations, there should be no need to manually run man:quotacheck[8], man:quotaon[8], or man:quotaoff[8]. However, one should read these manual pages to be familiar with their operation.
=== Setting Quota Limits
To verify that quotas are enabled, run:
[source,shell]
....
# quota -v
....
There should be a one line summary of disk usage and current quota limits for each file system that quotas are enabled on.
The system is now ready to be assigned quota limits with `edquota`.
Several options are available to enforce limits on the amount of disk space a user or group may allocate, and how many files they may create. Allocations can be limited based on disk space (block quotas), number of files (inode quotas), or a combination of both. Each limit is further broken down into two categories: hard and soft limits.
A hard limit may not be exceeded. Once a user reaches a hard limit, no further allocations can be made on that file system by that user. For example, if the user has a hard limit of 500 kbytes on a file system and is currently using 490 kbytes, the user can only allocate an additional 10 kbytes. Attempting to allocate an additional 11 kbytes will fail.
Soft limits can be exceeded for a limited amount of time, known as the grace period, which is one week by default. If a user stays over their limit longer than the grace period, the soft limit turns into a hard limit and no further allocations are allowed. When the user drops back below the soft limit, the grace period is reset.
In the following example, the quota for the `test` account is being edited. When `edquota` is invoked, the editor specified by `EDITOR` is opened in order to edit the quota limits. The default editor is set to vi.
[source,shell]
....
# edquota -u test
Quotas for user test:
/usr: kbytes in use: 65, limits (soft = 50, hard = 75)
inodes in use: 7, limits (soft = 50, hard = 60)
/usr/var: kbytes in use: 0, limits (soft = 50, hard = 75)
inodes in use: 0, limits (soft = 50, hard = 60)
....
There are normally two lines for each file system that has quotas enabled. One line represents the block limits and the other represents the inode limits. Change the value to modify the quota limit. For example, to raise the block limit on [.filename]#/usr# to a soft limit of `500` and a hard limit of `600`, change the values in that line as follows:
[.programlisting]
....
/usr: kbytes in use: 65, limits (soft = 500, hard = 600)
....
The new quota limits take effect upon exiting the editor.
Sometimes it is desirable to set quota limits on a range of users. This can be done by first assigning the desired quota limit to a user. Then, use `-p` to duplicate that quota to a specified range of user IDs (UIDs). The following command will duplicate those quota limits for UIDs `10,000` through `19,999`:
[source,shell]
....
# edquota -p test 10000-19999
....
For more information, refer to man:edquota[8].
=== Checking Quota Limits and Disk Usage
To check individual user or group quotas and disk usage, use man:quota[1]. A user may only examine their own quota and the quota of a group they are a member of. Only the superuser may view all user and group quotas. To get a summary of all quotas and disk usage for file systems with quotas enabled, use man:repquota[8].
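For example, to print a report covering all file systems with quotas enabled in [.filename]#/etc/fstab#:

[source,shell]
....
# repquota -a
....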
Normally, file systems that the user is not using any disk space on will not show in the output of `quota`, even if the user has a quota limit assigned for that file system. Use `-v` to display those file systems. The following is sample output from `quota -v` for a user that has quota limits on two file systems.
[.programlisting]
....
Disk quotas for user test (uid 1002):
Filesystem usage quota limit grace files quota limit grace
/usr 65* 50 75 5days 7 50 60
/usr/var 0 50 75 0 50 60
....
In this example, the user is currently 15 kbytes over the soft limit of 50 kbytes on [.filename]#/usr# and has 5 days of grace period left. The asterisk `*` indicates that the user is currently over the quota limit.
=== Quotas over NFS
Quotas are enforced by the quota subsystem on the NFS server. The man:rpc.rquotad[8] daemon makes quota information available to `quota` on NFS clients, allowing users on those machines to see their quota statistics.
On the NFS server, enable `rpc.rquotad` by removing the `#` from this line in [.filename]#/etc/inetd.conf#:
[.programlisting]
....
rquotad/1 dgram rpc/udp wait root /usr/libexec/rpc.rquotad rpc.rquotad
....
Then, restart `inetd`:
[source,shell]
....
# service inetd restart
....
[[disks-encrypting]]
== Encrypting Disk Partitions
FreeBSD offers excellent online protections against unauthorized data access. File permissions and crossref:mac[mac,Mandatory Access Control] (MAC) help prevent unauthorized users from accessing data while the operating system is active and the computer is powered up. However, the permissions enforced by the operating system are irrelevant if an attacker has physical access to a computer and can move the computer's hard drive to another system to copy and analyze the data.
Regardless of how an attacker may have come into possession of a hard drive or powered-down computer, the GEOM-based cryptographic subsystems built into FreeBSD are able to protect the data on the computer's file systems against even highly-motivated attackers with significant resources. Unlike encryption methods that encrypt individual files, the built-in `gbde` and `geli` utilities can be used to transparently encrypt entire file systems. No cleartext ever touches the hard drive's platter.
This section demonstrates how to create an encrypted file system on FreeBSD. It first covers the process using `gbde` and then demonstrates the same example using `geli`.
=== Disk Encryption with gbde
The objective of the man:gbde[4] facility is to provide a formidable challenge for an attacker to gain access to the contents of a _cold_ storage device. However, if the computer is compromised while up and running and the storage device is actively attached, or the attacker has access to a valid passphrase, it offers no protection to the contents of the storage device. Thus, it is important to provide physical security while the system is running and to protect the passphrase used by the encryption mechanism.
This facility provides several barriers to protect the data stored in each disk sector. It encrypts the contents of a disk sector using 128-bit AES in CBC mode. Each sector on the disk is encrypted with a different AES key. For more information on the cryptographic design, including how the sector keys are derived from the user-supplied passphrase, refer to man:gbde[4].
FreeBSD provides a kernel module for gbde which can be loaded with this command:
[source,shell]
....
# kldload geom_bde
....
If using a custom kernel configuration file, ensure it contains this line:
[.programlisting]
....
options GEOM_BDE
....
The following example demonstrates adding a new hard drive to a system that will hold a single encrypted partition that will be mounted as [.filename]#/private#.
[.procedure]
.Procedure: Encrypting a Partition with gbde
. Add the New Hard Drive
+
Install the new drive to the system as explained in <<disks-adding>>. For the purposes of this example, a new hard drive partition has been added as [.filename]#/dev/ad4s1c# and [.filename]#/dev/ad0s1*# represents the existing standard FreeBSD partitions.
+
[source,shell]
....
# ls /dev/ad*
/dev/ad0 /dev/ad0s1b /dev/ad0s1e /dev/ad4s1
/dev/ad0s1 /dev/ad0s1c /dev/ad0s1f /dev/ad4s1c
/dev/ad0s1a /dev/ad0s1d /dev/ad4
....
. Create a Directory to Hold `gbde` Lock Files
+
[source,shell]
....
# mkdir /etc/gbde
....
+
The gbde lock file contains information that gbde requires to access encrypted partitions. Without access to the lock file, gbde will not be able to decrypt the data contained in the encrypted partition without significant manual intervention which is not supported by the software. Each encrypted partition uses a separate lock file.
. Initialize the `gbde` Partition
+
A gbde partition must be initialized before it can be used. This initialization needs to be performed only once. This command will open the default editor in order to set various configuration options in a template. For use with the UFS file system, set `sector_size` to 2048:
+
[source,shell]
....
# gbde init /dev/ad4s1c -i -L /etc/gbde/ad4s1c.lock
# $FreeBSD: src/sbin/gbde/template.txt,v 1.1.36.1 2009/08/03 08:13:06 kensmith Exp $
#
# Sector size is the smallest unit of data which can be read or written.
# Making it too small decreases performance and decreases available space.
# Making it too large may prevent filesystems from working. 512 is the
# minimum and always safe. For UFS, use the fragment size
#
sector_size = 2048
[...]
....
+
Once the edit is saved, the user will be asked twice to type the passphrase used to secure the data. The passphrase must be the same both times. The ability of gbde to protect data depends entirely on the quality of the passphrase. For tips on how to select a secure passphrase that is easy to remember, see http://world.std.com/\~reinhold/diceware.html[http://world.std.com/~reinhold/diceware.html].
+
This initialization creates a lock file for the gbde partition. In this example, it is stored as [.filename]#/etc/gbde/ad4s1c.lock#. Lock files must end in ".lock" in order to be correctly detected by the [.filename]#/etc/rc.d/gbde# start up script.
+
[CAUTION]
====
Lock files _must_ be backed up together with the contents of any encrypted partitions. Without the lock file, the legitimate owner will be unable to access the data on the encrypted partition.
====
. Attach the Encrypted Partition to the Kernel
+
[source,shell]
....
# gbde attach /dev/ad4s1c -l /etc/gbde/ad4s1c.lock
....
+
This command will prompt to input the passphrase that was selected during the initialization of the encrypted partition. The new encrypted device will appear in [.filename]#/dev# as [.filename]#/dev/device_name.bde#:
+
[source,shell]
....
# ls /dev/ad*
/dev/ad0 /dev/ad0s1b /dev/ad0s1e /dev/ad4s1
/dev/ad0s1 /dev/ad0s1c /dev/ad0s1f /dev/ad4s1c
/dev/ad0s1a /dev/ad0s1d /dev/ad4 /dev/ad4s1c.bde
....
. Create a File System on the Encrypted Device
+
Once the encrypted device has been attached to the kernel, a file system can be created on the device. This example creates a UFS file system with soft updates enabled. Be sure to specify the partition which has a [.filename]#*.bde# extension:
+
[source,shell]
....
# newfs -U /dev/ad4s1c.bde
....
. Mount the Encrypted Partition
+
Create a mount point and mount the encrypted file system:
+
[source,shell]
....
# mkdir /private
# mount /dev/ad4s1c.bde /private
....
. Verify That the Encrypted File System is Available
+
The encrypted file system should now be visible and available for use:
+
[source,shell]
....
% df -H
Filesystem Size Used Avail Capacity Mounted on
/dev/ad0s1a 1037M 72M 883M 8% /
/devfs 1.0K 1.0K 0B 100% /dev
/dev/ad0s1f 8.1G 55K 7.5G 0% /home
/dev/ad0s1e 1037M 1.1M 953M 0% /tmp
/dev/ad0s1d 6.1G 1.9G 3.7G 35% /usr
/dev/ad4s1c.bde 150G 4.1K 138G 0% /private
....
After each boot, any encrypted file systems must be manually re-attached to the kernel, checked for errors, and mounted, before the file systems can be used. To configure these steps, add the following lines to [.filename]#/etc/rc.conf#:
[.programlisting]
....
gbde_autoattach_all="YES"
gbde_devices="ad4s1c"
gbde_lockdir="/etc/gbde"
....
This requires that the passphrase be entered at the console at boot time. After typing the correct passphrase, the encrypted partition will be mounted automatically. Additional gbde boot options are available and listed in man:rc.conf[5].
[NOTE]
====
sysinstall is incompatible with gbde-encrypted devices. All [.filename]#*.bde# devices must be detached from the kernel before starting sysinstall or it will crash during its initial probing for devices. To detach the encrypted device used in the example, use the following command:
[source,shell]
....
# gbde detach /dev/ad4s1c
....
====
[[disks-encrypting-geli]]
=== Disk Encryption with `geli`
An alternative cryptographic GEOM class is available using `geli`. This control utility adds some features and uses a different scheme for doing cryptographic work. It provides the following features:
* Utilizes the man:crypto[9] framework and automatically uses cryptographic hardware when it is available.
* Supports multiple cryptographic algorithms such as AES, Blowfish, and 3DES.
* Allows the root partition to be encrypted. The passphrase used to access the encrypted root partition will be requested during system boot.
* Allows the use of two independent keys.
* It is fast as it performs simple sector-to-sector encryption.
* Allows backup and restore of master keys. If a user destroys their keys, it is still possible to get access to the data by restoring keys from the backup.
* Allows a disk to attach with a random, one-time key which is useful for swap partitions and temporary file systems.
More features and usage examples can be found in man:geli[8].
The following example describes how to generate a key file which will be used as part of the master key for the encrypted provider mounted under [.filename]#/private#. The key file will provide some random data used to encrypt the master key. The master key will also be protected by a passphrase. The provider's sector size will be 4kB. The example describes how to attach to the `geli` provider, create a file system on it, mount it, work with it, and finally, how to detach it.
[.procedure]
.Procedure: Encrypting a Partition with `geli`
. Load `geli` Support
+
Support for `geli` is available as a loadable kernel module. To configure the system to automatically load the module at boot time, add the following line to [.filename]#/boot/loader.conf#:
+
[.programlisting]
....
geom_eli_load="YES"
....
+
To load the kernel module now:
+
[source,shell]
....
# kldload geom_eli
....
+
For a custom kernel, ensure the kernel configuration file contains these lines:
+
[.programlisting]
....
options GEOM_ELI
device crypto
....
. Generate the Master Key
+
The following commands generate a master key that all data will be encrypted with. This key can never be changed. Rather than using it directly, it is encrypted with one or more user keys. The user keys are made up of an optional combination of random bytes from a file, [.filename]#/root/da2.key#, and/or a passphrase. In this case, the data source for the key file is [.filename]#/dev/random#. This command also configures the sector size of the provider ([.filename]#/dev/da2.eli#) as 4kB, for better performance:
+
[source,shell]
....
# dd if=/dev/random of=/root/da2.key bs=64 count=1
# geli init -K /root/da2.key -s 4096 /dev/da2
Enter new passphrase:
Reenter new passphrase:
....
+
It is not mandatory to use both a passphrase and a key file as either method of securing the master key can be used in isolation.
+
If the key file is given as "-", standard input will be used. For example, this command generates three key files:
+
[source,shell]
....
# cat keyfile1 keyfile2 keyfile3 | geli init -K - /dev/da2
....
. Attach the Provider with the Generated Key
+
To attach the provider, specify the key file, the name of the disk, and the passphrase:
+
[source,shell]
....
# geli attach -k /root/da2.key /dev/da2
Enter passphrase:
....
+
This creates a new device with an [.filename]#.eli# extension:
+
[source,shell]
....
# ls /dev/da2*
/dev/da2 /dev/da2.eli
....
. Create the New File System
+
Next, format the device with the UFS file system and mount it on an existing mount point:
+
[source,shell]
....
# dd if=/dev/random of=/dev/da2.eli bs=1m
# newfs /dev/da2.eli
# mount /dev/da2.eli /private
....
+
The encrypted file system should now be available for use:
+
[source,shell]
....
# df -H
Filesystem Size Used Avail Capacity Mounted on
/dev/ad0s1a 248M 89M 139M 38% /
/devfs 1.0K 1.0K 0B 100% /dev
/dev/ad0s1f 7.7G 2.3G 4.9G 32% /usr
/dev/ad0s1d 989M 1.5M 909M 0% /tmp
/dev/ad0s1e 3.9G 1.3G 2.3G 35% /var
/dev/da2.eli 150G 4.1K 138G 0% /private
....
Once the work on the encrypted partition is done, and the [.filename]#/private# partition is no longer needed, it is prudent to put the device into cold storage by unmounting and detaching the `geli` encrypted partition from the kernel:
[source,shell]
....
# umount /private
# geli detach da2.eli
....
An [.filename]#rc.d# script is provided to simplify the mounting of `geli`-encrypted devices at boot time. For this example, add these lines to [.filename]#/etc/rc.conf#:
[.programlisting]
....
geli_devices="da2"
geli_da2_flags="-k /root/da2.key"
....
This configures [.filename]#/dev/da2# as a `geli` provider with a master key of [.filename]#/root/da2.key#. The system will automatically detach the provider from the kernel before the system shuts down. During the startup process, the script will prompt for the passphrase before attaching the provider. Other kernel messages might be shown before and after the password prompt. If the boot process seems to stall, look carefully for the password prompt among the other messages. Once the correct passphrase is entered, the provider is attached. The file system is then mounted, typically by an entry in [.filename]#/etc/fstab#. Refer to crossref:basics[mount-unmount,“Mounting and Unmounting File Systems”] for instructions on how to configure a file system to mount at boot time.
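A hypothetical [.filename]#/etc/fstab# entry for the device in this example might look like the following; adjust the mount point and options as needed:

[.programlisting]
....
/dev/da2.eli   /private   ufs   rw   2   2
....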
[[swap-encrypting]]
== Encrypting Swap
Like the encryption of disk partitions, encryption of swap space is used to protect sensitive information. Consider an application that deals with passwords. As long as these passwords stay in physical memory, they are not written to disk and will be cleared after a reboot. However, if FreeBSD starts swapping out memory pages to free space, the passwords may be written to the disk unencrypted. Encrypting swap space can be a solution for this scenario.
This section demonstrates how to configure an encrypted swap partition using man:gbde[8] or man:geli[8] encryption. It assumes that [.filename]#/dev/ada0s1b# is the swap partition.
=== Configuring Encrypted Swap
Swap partitions are not encrypted by default and should be cleared of any sensitive data before continuing. To overwrite the current swap partition with random garbage, execute the following command:
[source,shell]
....
# dd if=/dev/random of=/dev/ada0s1b bs=1m
....
To encrypt the swap partition using man:gbde[8], add the `.bde` suffix to the swap line in [.filename]#/etc/fstab#:
[.programlisting]
....
# Device Mountpoint FStype Options Dump Pass#
/dev/ada0s1b.bde none swap sw 0 0
....
To instead encrypt the swap partition using man:geli[8], use the `.eli` suffix:
[.programlisting]
....
# Device Mountpoint FStype Options Dump Pass#
/dev/ada0s1b.eli none swap sw 0 0
....
By default, man:geli[8] uses the AES algorithm with a key length of 128 bits. Normally the default settings will suffice. If desired, these defaults can be altered in the options field in [.filename]#/etc/fstab#. The possible flags are:
aalgo::
Data integrity verification algorithm used to ensure that the encrypted data has not been tampered with. See man:geli[8] for a list of supported algorithms.
ealgo::
Encryption algorithm used to protect the data. See man:geli[8] for a list of supported algorithms.
keylen::
The length of the key used for the encryption algorithm. See man:geli[8] for the key lengths that are supported by each encryption algorithm.
sectorsize::
The size of the blocks data is broken into before it is encrypted. Larger sector sizes increase performance at the cost of higher storage overhead. The recommended size is 4096 bytes.
This example configures an encrypted swap partition using the Blowfish algorithm with a key length of 128 bits and a sectorsize of 4 kilobytes:
[.programlisting]
....
# Device Mountpoint FStype Options Dump Pass#
/dev/ada0s1b.eli none swap sw,ealgo=blowfish,keylen=128,sectorsize=4096 0 0
....
=== Encrypted Swap Verification
Once the system has rebooted, proper operation of the encrypted swap can be verified using `swapinfo`.
If man:gbde[8] is being used:
[source,shell]
....
% swapinfo
Device 1K-blocks Used Avail Capacity
/dev/ada0s1b.bde 542720 0 542720 0
....
If man:geli[8] is being used:
[source,shell]
....
% swapinfo
Device 1K-blocks Used Avail Capacity
/dev/ada0s1b.eli 542720 0 542720 0
....
[[disks-hast]]
== Highly Available Storage (HAST)
High availability is one of the main requirements in serious business applications and highly-available storage is a key component in such environments. In FreeBSD, the Highly Available STorage (HAST) framework allows transparent storage of the same data across several physically separated machines connected by a TCP/IP network. HAST can be understood as a network-based RAID1 (mirror), and is similar to the DRBD(R) storage system used in the GNU/Linux(R) platform. In combination with other high-availability features of FreeBSD like CARP, HAST makes it possible to build a highly-available storage cluster that is resistant to hardware failures.
The following are the main features of HAST:
* Can be used to mask I/O errors on local hard drives.
* File system agnostic as it works with any file system supported by FreeBSD.
* Efficient and quick resynchronization as only the blocks that were modified during the downtime of a node are synchronized.
* Can be used in an already deployed environment to add additional redundancy.
* Together with CARP, Heartbeat, or other tools, it can be used to build a robust and durable storage system.
After reading this section, you will know:
* What HAST is, how it works, and which features it provides.
* How to set up and use HAST on FreeBSD.
* How to integrate CARP and man:devd[8] to build a robust storage system.
Before reading this section, you should:
* Understand UNIX(R) and FreeBSD basics (crossref:basics[basics,FreeBSD Basics]).
* Know how to configure network interfaces and other core FreeBSD subsystems (crossref:config[config-tuning,Configuration and Tuning]).
* Have a good understanding of FreeBSD networking (crossref:partiv[network-communication,"Network Communication"]).
The HAST project was sponsored by The FreeBSD Foundation with support from http://www.omc.net/[http://www.omc.net/] and http://www.transip.nl/[http://www.transip.nl/].
=== HAST Operation
HAST provides synchronous block-level replication between two physical machines: the _primary_, also known as the _master_ node, and the _secondary_, or _slave_ node. These two machines together are referred to as a cluster.
Since HAST works in a primary-secondary configuration, it allows only one of the cluster nodes to be active at any given time. The primary node, also called _active_, is the one which will handle all the I/O requests to HAST-managed devices. The secondary node is automatically synchronized from the primary node.
The physical components of the HAST system are the local disk on primary node, and the disk on the remote, secondary node.
HAST operates synchronously on a block level, making it transparent to file systems and applications. HAST provides regular GEOM providers in [.filename]#/dev/hast/# for use by other tools or applications. There is no difference between using HAST-provided devices and raw disks or partitions.
Each write, delete, or flush operation is sent to both the local disk and to the remote disk over TCP/IP. Each read operation is served from the local disk, unless the local disk is not up-to-date or an I/O error occurs. In such cases, the read operation is sent to the secondary node.
HAST tries to provide fast failure recovery. For this reason, it is important to reduce synchronization time after a node's outage. To provide fast synchronization, HAST manages an on-disk bitmap of dirty extents and only synchronizes those during a regular synchronization, with an exception of the initial sync.
There are many ways to handle synchronization. HAST implements several replication modes to handle different synchronization methods:
* _memsync_: This mode reports a write operation as completed when the local write operation is finished and when the remote node acknowledges data arrival, but before actually storing the data. The data on the remote node will be stored directly after sending the acknowledgement. This mode is intended to reduce latency, but still provides good reliability. This mode is the default.
* _fullsync_: This mode reports a write operation as completed when both the local write and the remote write complete. This is the safest and the slowest replication mode.
* _async_: This mode reports a write operation as completed when the local write completes. This is the fastest and the most dangerous replication mode. It should only be used when replicating to a distant node where latency is too high for other modes.
=== HAST Configuration
The HAST framework consists of several components:
* The man:hastd[8] daemon which provides data synchronization. When this daemon is started, it will automatically load `geom_gate.ko`.
* The userland management utility, man:hastctl[8].
* The man:hast.conf[5] configuration file. This file must exist before starting hastd.
Users who prefer to statically build `GEOM_GATE` support into the kernel should add this line to the custom kernel configuration file, then rebuild the kernel using the instructions in crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel]:
[.programlisting]
....
options GEOM_GATE
....
The following example describes how to configure two nodes in master-slave/primary-secondary operation using HAST to replicate the data between the two. The nodes will be called `hasta`, with an IP address of `172.16.0.1`, and `hastb`, with an IP address of `172.16.0.2`. Both nodes will have a dedicated hard drive [.filename]#/dev/ad6# of the same size for HAST operation. The HAST pool, sometimes referred to as a resource or the GEOM provider in [.filename]#/dev/hast/#, will be called `test`.
Configuration of HAST is done using [.filename]#/etc/hast.conf#. This file should be identical on both nodes. The simplest configuration is:
[.programlisting]
....
resource test {
on hasta {
local /dev/ad6
remote 172.16.0.2
}
on hastb {
local /dev/ad6
remote 172.16.0.1
}
}
....
For more advanced configuration, refer to man:hast.conf[5].
[TIP]
====
It is also possible to use host names in the `remote` statements if the hosts are resolvable and defined either in [.filename]#/etc/hosts# or in the local DNS.
====
Once the configuration exists on both nodes, the HAST pool can be created. Run these commands on both nodes to place the initial metadata onto the local disk and to start man:hastd[8]:
[source,shell]
....
# hastctl create test
# service hastd onestart
....
[NOTE]
====
It is _not_ possible to use GEOM providers with an existing file system or to convert existing storage to a HAST-managed pool. HAST needs to store some metadata on the provider, and an existing provider will not have enough free space available for it.
====
A HAST node's `primary` or `secondary` role is selected by an administrator, or software like Heartbeat, using man:hastctl[8]. On the primary node, `hasta`, issue this command:
[source,shell]
....
# hastctl role primary test
....
Run this command on the secondary node, `hastb`:
[source,shell]
....
# hastctl role secondary test
....
Verify the result by running `hastctl` on each node:
[source,shell]
....
# hastctl status test
....
Check the `status` line in the output. If it says `degraded`, something is wrong with the configuration file. It should say `complete` on each node, meaning that the synchronization between the nodes has started. The synchronization completes when `hastctl status` reports 0 bytes of `dirty` extents.
The next step is to create a file system on the GEOM provider and mount it. This must be done on the `primary` node. Creating the file system can take a few minutes, depending on the size of the hard drive. This example creates a UFS file system on [.filename]#/dev/hast/test#:
[source,shell]
....
# newfs -U /dev/hast/test
# mkdir /hast/test
# mount /dev/hast/test /hast/test
....
Once the HAST framework is configured properly, the final step is to make sure that HAST is started automatically during system boot. Add this line to [.filename]#/etc/rc.conf#:
[.programlisting]
....
hastd_enable="YES"
....
==== Failover Configuration
The goal of this example is to build a robust storage system which is resistant to the failure of any given node. If the primary node fails, the secondary node is there to take over seamlessly, check and mount the file system, and continue to work without missing a single bit of data.
To accomplish this task, the Common Address Redundancy Protocol (CARP) is used to provide for automatic failover at the IP layer. CARP allows multiple hosts on the same network segment to share an IP address. Set up CARP on both nodes of the cluster according to the documentation available in crossref:advanced-networking[carp,“Common Address Redundancy Protocol (CARP)”]. In this example, each node will have its own management IP address and a shared IP address of _172.16.0.254_. The primary HAST node of the cluster must be the master CARP node.
The HAST pool created in the previous section is now ready to be exported to the other hosts on the network. This can be accomplished by exporting it through NFS or Samba, using the shared IP address _172.16.0.254_. The only problem which remains unresolved is an automatic failover should the primary node fail.
In the event of CARP interfaces going up or down, the FreeBSD operating system generates a man:devd[8] event, making it possible to watch for state changes on the CARP interfaces. A state change on the CARP interface is an indication that one of the nodes failed or came back online. These state change events make it possible to run a script which will automatically handle the HAST failover.
To catch state changes on the CARP interfaces, add this configuration to [.filename]#/etc/devd.conf# on each node:
[.programlisting]
....
notify 30 {
match "system" "IFNET";
match "subsystem" "carp0";
match "type" "LINK_UP";
action "/usr/local/sbin/carp-hast-switch master";
};
notify 30 {
match "system" "IFNET";
match "subsystem" "carp0";
match "type" "LINK_DOWN";
action "/usr/local/sbin/carp-hast-switch slave";
};
....
[NOTE]
====
If the systems are running FreeBSD 10 or higher, replace [.filename]#carp0# with the name of the CARP-configured interface.
====
Restart man:devd[8] on both nodes to put the new configuration into effect:
[source,shell]
....
# service devd restart
....
When the specified interface state changes by going up or down, the system generates a notification, allowing the man:devd[8] subsystem to run the specified automatic failover script, [.filename]#/usr/local/sbin/carp-hast-switch#. For further clarification about this configuration, refer to man:devd.conf[5].
Here is an example of an automated failover script:
[.programlisting]
....
#!/bin/sh
# Original script by Freddie Cash <fjwcash@gmail.com>
# Modified by Michael W. Lucas <mwlucas@BlackHelicopters.org>
# and Viktor Petersson <vpetersson@wireload.net>
# The names of the HAST resources, as listed in /etc/hast.conf
resources="test"
# delay in mounting HAST resource after becoming master
# make your best guess
delay=3
# logging
log="local0.debug"
name="carp-hast"
# end of user configurable stuff
case "$1" in
master)
logger -p $log -t $name "Switching to primary provider for ${resources}."
sleep ${delay}
# Wait for any "hastd secondary" processes to stop
for disk in ${resources}; do
while $( pgrep -lf "hastd: ${disk} \(secondary\)" > /dev/null 2>&1 ); do
sleep 1
done
# Switch role for each disk
hastctl role primary ${disk}
if [ $? -ne 0 ]; then
logger -p $log -t $name "Unable to change role to primary for resource ${disk}."
exit 1
fi
done
# Wait for the /dev/hast/* devices to appear
for disk in ${resources}; do
for I in $( jot 60 ); do
[ -c "/dev/hast/${disk}" ] && break
sleep 0.5
done
if [ ! -c "/dev/hast/${disk}" ]; then
logger -p $log -t $name "GEOM provider /dev/hast/${disk} did not appear."
exit 1
fi
done
logger -p $log -t $name "Role for HAST resources ${resources} switched to primary."
logger -p $log -t $name "Mounting disks."
for disk in ${resources}; do
mkdir -p /hast/${disk}
fsck -p -y -t ufs /dev/hast/${disk}
mount /dev/hast/${disk} /hast/${disk}
done
;;
slave)
logger -p $log -t $name "Switching to secondary provider for ${resources}."
# Switch roles for the HAST resources
for disk in ${resources}; do
if ! mount | grep -q "^/dev/hast/${disk} on "
then
else
umount -f /hast/${disk}
fi
sleep $delay
hastctl role secondary ${disk} 2>&1
if [ $? -ne 0 ]; then
logger -p $log -t $name "Unable to switch role to secondary for resource ${disk}."
exit 1
fi
logger -p $log -t $name "Role switched to secondary for resource ${disk}."
done
;;
esac
....
In a nutshell, the script takes these actions when a node becomes master:
* Promotes the HAST pool to primary on the other node.
* Checks the file system under the HAST pool.
* Mounts the pool.
When a node becomes secondary:
* Unmounts the HAST pool.
* Degrades the HAST pool to secondary.
[CAUTION]
====
This is just an example script which serves as a proof of concept. It does not handle all the possible scenarios and can be extended or altered in any way, for example, to start or stop required services.
====
[TIP]
====
For this example, a standard UFS file system was used. To reduce the time needed for recovery, a journal-enabled UFS or ZFS file system can be used instead.
====
More detailed information with additional examples can be found at http://wiki.FreeBSD.org/HAST[http://wiki.FreeBSD.org/HAST].
=== Troubleshooting
HAST should generally work without issues. However, as with any other software product, there may be times when it does not work as expected. The sources of the problems may be different, but the rule of thumb is to ensure that the time is synchronized between the nodes of the cluster.
When troubleshooting HAST, the debugging level of man:hastd[8] should be increased by starting `hastd` with `-d`. This argument may be specified multiple times to further increase the debugging level. Consider also using `-F`, which starts `hastd` in the foreground.
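For example, to run `hastd` in the foreground with an increased debugging level:

[source,shell]
....
# hastd -F -dd
....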
[[disks-hast-sb]]
==== Recovering from the Split-brain Condition
_Split-brain_ occurs when the nodes of the cluster are unable to communicate with each other, and both are configured as primary. This is a dangerous condition because it allows both nodes to make incompatible changes to the data. This problem must be corrected manually by the system administrator.
The administrator must either decide which node has more important changes, or perform the merge manually. Then, let HAST perform full synchronization of the node which has the broken data. To do this, issue these commands on the node which needs to be resynchronized:
[source,shell]
....
# hastctl role init test
# hastctl create test
# hastctl role secondary test
....
diff --git a/documentation/content/en/books/handbook/dtrace/_index.adoc b/documentation/content/en/books/handbook/dtrace/_index.adoc
index d278fa50f0..fad2d49fc5 100644
--- a/documentation/content/en/books/handbook/dtrace/_index.adoc
+++ b/documentation/content/en/books/handbook/dtrace/_index.adoc
@@ -1,229 +1,230 @@
---
title: Chapter 25. DTrace
part: Part III. System Administration
prev: books/handbook/cutting-edge
next: books/handbook/usb-device-mode
+description: This chapter explains how to use DTrace in FreeBSD
---
[[dtrace]]
= DTrace
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 25
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../../images/books/handbook/dtrace/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/books/handbook/dtrace/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/books/handbook/dtrace/
endif::[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[dtrace-synopsis]]
== Synopsis
DTrace, also known as Dynamic Tracing, was developed by Sun(TM) as a tool for locating performance bottlenecks in production and pre-production systems. In addition to diagnosing performance problems, DTrace can be used to help investigate and debug unexpected behavior in both the FreeBSD kernel and in userland programs.
DTrace is a remarkable profiling tool, with an impressive array of features for diagnosing system issues. It may also be used to run pre-written scripts to take advantage of its capabilities. Users can author their own utilities using the DTrace D Language, allowing them to customize their profiling based on specific needs.
The FreeBSD implementation provides full support for kernel DTrace and experimental support for userland DTrace. Userland DTrace allows users to perform function boundary tracing for userland programs using the `pid` provider, and to insert static probes into userland programs for later tracing. Some ports, such as package:databases/postgresql12-server[] and package:lang/php74[] have a DTrace option to enable static probes.
The official guide to DTrace is maintained by the Illumos project at http://dtrace.org/guide[DTrace Guide].
After reading this chapter, you will know:
* What DTrace is and what features it provides.
* Differences between the Solaris(TM) DTrace implementation and the one provided by FreeBSD.
* How to enable and use DTrace on FreeBSD.
Before reading this chapter, you should:
* Understand UNIX(R) and FreeBSD basics (crossref:basics[basics,FreeBSD Basics]).
* Have some familiarity with security and how it pertains to FreeBSD (crossref:security[security,Security]).
[[dtrace-implementation]]
== Implementation Differences
While the DTrace in FreeBSD is similar to that found in Solaris(TM), differences do exist. The primary difference is that in FreeBSD, DTrace is implemented as a set of kernel modules and DTrace cannot be used until the modules are loaded. To load all of the necessary modules:
[source,shell]
....
# kldload dtraceall
....
Beginning with FreeBSD 10.0-RELEASE, the modules are automatically loaded when `dtrace` is run.
FreeBSD uses the `DDB_CTF` kernel option to enable support for loading `CTF` data from kernel modules and the kernel itself. `CTF` is the Solaris(TM) Compact C Type Format which encapsulates a reduced form of debugging information similar to `DWARF` and the venerable stabs. `CTF` data is added to binaries by the `ctfconvert` and `ctfmerge` build tools. The `ctfconvert` utility parses `DWARF` `ELF` debug sections created by the compiler and `ctfmerge` merges `CTF` `ELF` sections from objects into either executables or shared libraries.
Some different providers exist for FreeBSD than for Solaris(TM). Most notable is the `dtmalloc` provider, which allows tracing `malloc()` by type in the FreeBSD kernel. Some of the providers found in Solaris(TM), such as `cpc` and `mib`, are not present in FreeBSD. These may appear in future versions of FreeBSD. Moreover, some of the providers available in both operating systems are not compatible, in the sense that their probes have different argument types. Thus, `D` scripts written on Solaris(TM) may or may not work unmodified on FreeBSD, and vice versa.
Due to security differences, only `root` may use DTrace on FreeBSD. Solaris(TM) has a few low-level security checks which do not yet exist in FreeBSD. As such, the [.filename]#/dev/dtrace/dtrace# device is strictly limited to `root`.
DTrace falls under the Common Development and Distribution License (`CDDL`) license. To view this license on FreeBSD, see [.filename]#/usr/src/cddl/contrib/opensolaris/OPENSOLARIS.LICENSE# or view it online at http://opensource.org/licenses/CDDL-1.0[http://opensource.org/licenses/CDDL-1.0]. While a FreeBSD kernel with DTrace support is `BSD` licensed, the `CDDL` is used when the modules are distributed in binary form or the binaries are loaded.
[[dtrace-enable]]
== Enabling DTrace Support
In FreeBSD 9.2 and 10.0, DTrace support is built into the [.filename]#GENERIC# kernel. Users of earlier versions of FreeBSD or who prefer to statically compile in DTrace support should add the following lines to a custom kernel configuration file and recompile the kernel using the instructions in crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel]:
[.programlisting]
....
options KDTRACE_HOOKS
options DDB_CTF
makeoptions DEBUG=-g
makeoptions WITH_CTF=1
....
Users of the AMD64 architecture should also add this line:
[.programlisting]
....
options KDTRACE_FRAME
....
This option provides support for `FBT`. While DTrace will work without this option, there will be limited support for function boundary tracing.
Once the FreeBSD system has rebooted into the new kernel, or the DTrace kernel modules have been loaded using `kldload dtraceall`, the system will need support for the Korn shell as the DTrace Toolkit has several utilities written in `ksh`. Make sure that the package:shells/ksh93[] package or port is installed. It is also possible to run these tools under package:shells/pdksh[] or package:shells/mksh[].
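For example, the package can be installed with:

[source,shell]
....
# pkg install ksh93
....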
Finally, install the current DTrace Toolkit, a collection of ready-made scripts for collecting system information. There are scripts to check open files, memory, `CPU` usage, and a lot more. FreeBSD 10 installs a few of these scripts into [.filename]#/usr/share/dtrace#. On other FreeBSD versions, or to install the full DTrace Toolkit, use the package:sysutils/DTraceToolkit[] package or port.
[NOTE]
====
The scripts found in [.filename]#/usr/share/dtrace# have been specifically ported to FreeBSD. Not all of the scripts found in the DTrace Toolkit will work as-is on FreeBSD and some scripts may require some effort in order for them to work on FreeBSD.
====
The DTrace Toolkit includes many scripts in the special language of DTrace. This language is called the D language and it is very similar to C++. An in-depth discussion of the language is beyond the scope of this document. It is covered extensively in the http://www.dtrace.org/guide[Illumos Dynamic Tracing Guide].
[[dtrace-using]]
== Using DTrace
DTrace scripts consist of a list of one or more _probes_, or instrumentation points, where each probe is associated with an action. Whenever the condition for a probe is met, the associated action is executed. For example, an action may occur when a file is opened, a process is started, or a line of code is executed. The action might be to log some information or to modify context variables. The reading and writing of context variables allows probes to share information and to cooperatively analyze the correlation of different events.
To view all probes, the administrator can execute the following command:
[source,shell]
....
# dtrace -l | more
....
Each probe has an `ID`, a `PROVIDER` (dtrace or fbt), a `MODULE`, and a `FUNCTION NAME`. Refer to man:dtrace[1] for more information about this command.
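As a simple illustration that is not part of the DTrace Toolkit, a one-line D program can count system calls by executable name until kbd:[Ctrl+C] is pressed:

[source,shell]
....
# dtrace -n 'syscall:::entry { @[execname] = count(); }'
....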
The examples in this section provide an overview of how to use two of the fully supported scripts from the DTrace Toolkit: the [.filename]#hotkernel# and [.filename]#procsystime# scripts.
The [.filename]#hotkernel# script is designed to identify which function is using the most kernel time. It will produce output similar to the following:
[source,shell]
....
# cd /usr/local/share/dtrace-toolkit
# ./hotkernel
Sampling... Hit Ctrl-C to end.
....
As instructed, use the kbd:[Ctrl+C] key combination to stop the process. Upon termination, the script will display a list of kernel functions and timing information, sorting the output in increasing order of time:
[source,shell]
....
kernel`_thread_lock_flags 2 0.0%
0xc1097063 2 0.0%
kernel`sched_userret 2 0.0%
kernel`kern_select 2 0.0%
kernel`generic_copyin 3 0.0%
kernel`_mtx_assert 3 0.0%
kernel`vm_fault 3 0.0%
kernel`sopoll_generic 3 0.0%
kernel`fixup_filename 4 0.0%
kernel`_isitmyx 4 0.0%
kernel`find_instance 4 0.0%
kernel`_mtx_unlock_flags 5 0.0%
kernel`syscall 5 0.0%
kernel`DELAY 5 0.0%
0xc108a253 6 0.0%
kernel`witness_lock 7 0.0%
kernel`read_aux_data_no_wait 7 0.0%
kernel`Xint0x80_syscall 7 0.0%
kernel`witness_checkorder 7 0.0%
kernel`sse2_pagezero 8 0.0%
kernel`strncmp 9 0.0%
kernel`spinlock_exit 10 0.0%
kernel`_mtx_lock_flags 11 0.0%
kernel`witness_unlock 15 0.0%
kernel`sched_idletd 137 0.3%
0xc10981a5 42139 99.3%
....
This script will also work with kernel modules. To use this feature, run the script with `-m`:
[source,shell]
....
# ./hotkernel -m
Sampling... Hit Ctrl-C to end.
^C
MODULE COUNT PCNT
0xc107882e 1 0.0%
0xc10e6aa4 1 0.0%
0xc1076983 1 0.0%
0xc109708a 1 0.0%
0xc1075a5d 1 0.0%
0xc1077325 1 0.0%
0xc108a245 1 0.0%
0xc107730d 1 0.0%
0xc1097063 2 0.0%
0xc108a253 73 0.0%
kernel 874 0.4%
0xc10981a5 213781 99.6%
....
The [.filename]#procsystime# script captures and prints the system call time usage for a given process `ID` (`PID`) or process name. In the following example, a new instance of [.filename]#/bin/csh# was spawned. Then, [.filename]#procsystime# was executed and remained waiting while a few commands were typed on the other incarnation of `csh`. These are the results of this test:
[source,shell]
....
# ./procsystime -n csh
Tracing... Hit Ctrl-C to end...
^C
Elapsed Times for processes csh,
SYSCALL TIME (ns)
getpid 6131
sigreturn 8121
close 19127
fcntl 19959
dup 26955
setpgid 28070
stat 31899
setitimer 40938
wait4 62717
sigaction 67372
sigprocmask 119091
gettimeofday 183710
write 263242
execve 492547
ioctl 770073
vfork 3258923
sigsuspend 6985124
read 3988049784
....
As shown, the `read()` system call used the most time in nanoseconds while the `getpid()` system call used the least amount of time.
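The script also accepts a process `ID` instead of a name, via `-p`. Assuming the target shell had PID 1234 (a placeholder value), the equivalent invocation would be:

[source,shell]
....
# ./procsystime -p 1234
....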
diff --git a/documentation/content/en/books/handbook/eresources/_index.adoc b/documentation/content/en/books/handbook/eresources/_index.adoc
index b41d30622d..d2edf0d66b 100644
--- a/documentation/content/en/books/handbook/eresources/_index.adoc
+++ b/documentation/content/en/books/handbook/eresources/_index.adoc
@@ -1,1194 +1,1195 @@
---
title: Appendix C. Resources on the Internet
part: Part V. Appendices
prev: books/handbook/bibliography
next: books/handbook/pgpkeys
+description: FreeBSD additional resources on internet like websites, mailing lists, mirrors, etc
---
[appendix]
[[eresources]]
= Resources on the Internet
:doctype: book
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: C
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
The rapid pace of FreeBSD progress makes print media impractical as a means of following the latest developments. Electronic resources are the best, if not often the only, way to stay informed of the latest advances. Since FreeBSD is a volunteer effort, the user community itself also generally serves as a "technical support department" of sorts, with electronic mail, web forums, and USENET news being the most effective ways of reaching that community.
The most important points of contact with the FreeBSD user community are outlined below. Please send other resources not mentioned here to the {freebsd-doc} so that they may also be included.
[[eresources-www]]
== Websites
* https://forums.FreeBSD.org/[The FreeBSD Forums] provide a web based discussion forum for FreeBSD questions and technical discussion.
* The http://www.youtube.com/bsdconferences[BSDConferences YouTube Channel] provides a collection of high quality videos from BSD conferences around the world. This is a great way to watch key developers give presentations about new work in FreeBSD.
[[eresources-mail]]
== Mailing Lists
The mailing lists are the most direct way of addressing questions or opening a technical discussion to a concentrated FreeBSD audience. There are a wide variety of lists on a number of different FreeBSD topics. Sending questions to the most appropriate mailing list will invariably ensure a faster and more accurate response.
The charters for the various lists are given at the bottom of this document. _Please read the charter before joining or sending mail to any list_. Most list subscribers receive many hundreds of FreeBSD related messages every day, and the charters and rules for use are meant to keep the signal-to-noise ratio of the lists high. To do less would see the mailing lists ultimately fail as an effective communications medium for the Project.
[NOTE]
====
_To test the ability to send email to FreeBSD lists, send a test message to {freebsd-test}._ Please do not send test messages to any other list.
====
When in doubt about what list to post a question to, see link:{freebsd-questions-article}[How to get best results from the FreeBSD-questions mailing list].
Before posting to any list, please learn about how to best use the mailing lists, such as how to help avoid frequently-repeated discussions, by reading the link:{mailing-list-faq}[Mailing List Frequently Asked Questions] (FAQ) document.
Archives are kept for all of the mailing lists and can be searched using the https://www.FreeBSD.org/search/[FreeBSD World Wide Web server]. The keyword searchable archive offers an excellent way of finding answers to frequently asked questions and should be consulted before posting a question. Note that this also means that messages sent to FreeBSD mailing lists are archived in perpetuity. When protecting privacy is a concern, consider using a disposable secondary email address and posting only public information.
[[eresources-summary]]
=== List Summary
_General lists:_ The following are general lists which anyone is free (and encouraged) to join:
[.informaltable]
[cols="20%,80%", frame="none", options="header"]
|===
| List
| Purpose
|link:{freebsd-advocacy-url}[freebsd-advocacy]
|FreeBSD Evangelism
|link:{freebsd-announce-url}[freebsd-announce]
|Important events and Project milestones (moderated)
|link:{freebsd-arch-url}[freebsd-arch]
|Architecture and design discussions
|link:{freebsd-bugbusters-url}[freebsd-bugbusters]
|Discussions pertaining to the maintenance of the FreeBSD problem report database and related tools
|link:{freebsd-bugs-url}[freebsd-bugs]
|Bug reports
|link:{freebsd-chat-url}[freebsd-chat]
|Non-technical items related to the FreeBSD community
|link:{freebsd-chromium-url}[freebsd-chromium]
|FreeBSD-specific Chromium issues
|link:{freebsd-current-url}[freebsd-current]
|Discussion concerning the use of FreeBSD-CURRENT
|link:{freebsd-isp-url}[freebsd-isp]
|Issues for Internet Service Providers using FreeBSD
|link:{freebsd-jobs-url}[freebsd-jobs]
|FreeBSD employment and consulting opportunities
|link:{freebsd-quarterly-calls-url}[freebsd-quarterly-calls]
|Calls for quarterly status reports (moderated)
|link:{freebsd-questions-url}[freebsd-questions]
|User questions and technical support
|link:{freebsd-security-notifications-url}[freebsd-security-notifications]
|Security notifications (moderated)
|link:{freebsd-stable-url}[freebsd-stable]
|Discussion concerning the use of FreeBSD-STABLE
|link:{freebsd-test-url}[freebsd-test]
|Where to send test messages instead of to one of the actual lists
|link:{freebsd-women-url}[freebsd-women]
|FreeBSD advocacy for women
|===
_Technical lists:_ The following lists are for technical discussion. Read the charter for each list carefully before joining or sending mail to one as there are firm guidelines for their use and content.
[.informaltable]
[cols="20%,80%", frame="none", options="header"]
|===
| List
| Purpose
|link:{freebsd-acpi-url}[freebsd-acpi]
|ACPI and power management development
|link:{freebsd-amd64-url}[freebsd-amd64]
|Porting FreeBSD to AMD64 systems (moderated)
|link:{freebsd-apache-url}[freebsd-apache]
|Discussion about Apache related ports
|link:{freebsd-arm-url}[freebsd-arm]
|Porting FreeBSD to ARM(R) processors
|link:{freebsd-atm-url}[freebsd-atm]
|Using ATM networking with FreeBSD
|link:{freebsd-bluetooth-url}[freebsd-bluetooth]
|Using Bluetooth(R) technology in FreeBSD
|link:{freebsd-cloud-url}[freebsd-cloud]
|FreeBSD on cloud platforms (EC2, GCE, Azure, etc.)
|link:{freebsd-cluster-url}[freebsd-cluster]
|Using FreeBSD in a clustered environment
|link:{freebsd-database-url}[freebsd-database]
|Discussing database use and development under FreeBSD
|link:{freebsd-desktop-url}[freebsd-desktop]
|Using and improving FreeBSD on the desktop
|link:{dev-ci-url}[dev-ci]
|Build and test reports from the Continuous Integration servers
|link:{dev-reviews-url}[dev-reviews]
|Notifications of the FreeBSD review system
|link:{freebsd-doc-url}[freebsd-doc]
|Creating FreeBSD related documents
|link:{freebsd-drivers-url}[freebsd-drivers]
|Writing device drivers for FreeBSD
|link:{freebsd-dtrace-url}[freebsd-dtrace]
|Using and working on DTrace in FreeBSD
|link:{freebsd-eclipse-url}[freebsd-eclipse]
|FreeBSD users of Eclipse IDE, tools, rich client applications and ports.
|link:{freebsd-elastic-url}[freebsd-elastic]
|FreeBSD-specific ElasticSearch discussions
|link:{freebsd-embedded-url}[freebsd-embedded]
|Using FreeBSD in embedded applications
|link:{freebsd-eol-url}[freebsd-eol]
|Peer support of FreeBSD-related software that is no longer supported by the FreeBSD Project.
|link:{freebsd-emulation-url}[freebsd-emulation]
|Emulation of other systems such as Linux/MS-DOS(R)/Windows(R)
|link:{freebsd-enlightenment-url}[freebsd-enlightenment]
|Porting Enlightenment and Enlightenment applications
|link:{freebsd-erlang-url}[freebsd-erlang]
|FreeBSD-specific Erlang discussions
|link:{freebsd-firewire-url}[freebsd-firewire]
|FreeBSD FireWire(R) (iLink, IEEE 1394) technical discussion
|link:{freebsd-fortran-url}[freebsd-fortran]
|Fortran on FreeBSD
|link:{freebsd-fs-url}[freebsd-fs]
|File systems
|link:{freebsd-games-url}[freebsd-games]
|Support for Games on FreeBSD
|link:{freebsd-gecko-url}[freebsd-gecko]
|Gecko Rendering Engine issues
|link:{freebsd-geom-url}[freebsd-geom]
|GEOM-specific discussions and implementations
|link:{freebsd-git-url}[freebsd-git]
|Discussion of git use in the FreeBSD project
|link:{freebsd-gnome-url}[freebsd-gnome]
|Porting GNOME and GNOME applications
|link:{freebsd-hackers-url}[freebsd-hackers]
|General technical discussion
|link:{freebsd-haskell-url}[freebsd-haskell]
|FreeBSD-specific Haskell issues and discussions
|link:{freebsd-hardware-url}[freebsd-hardware]
|General discussion of hardware for running FreeBSD
|link:{freebsd-i18n-url}[freebsd-i18n]
|FreeBSD Internationalization
|link:{freebsd-infiniband-url}[freebsd-infiniband]
|Infiniband on FreeBSD
|link:{freebsd-ipfw-url}[freebsd-ipfw]
|Technical discussion concerning the redesign of the IP firewall code
|link:{freebsd-isdn-url}[freebsd-isdn]
|ISDN developers
|link:{freebsd-jail-url}[freebsd-jail]
|Discussion about the man:jail[8] facility
|link:{freebsd-java-url}[freebsd-java]
|Java(TM) developers and people porting JDK(TM)s to FreeBSD
|link:{freebsd-kde-url}[freebsd-kde]
|Porting KDE and KDE applications
|link:{freebsd-lfs-url}[freebsd-lfs]
|Porting LFS to FreeBSD
|link:{freebsd-mips-url}[freebsd-mips]
|Porting FreeBSD to MIPS(R)
|link:{freebsd-mono-url}[freebsd-mono]
|Mono and C# applications on FreeBSD
|link:{freebsd-multimedia-url}[freebsd-multimedia]
|Multimedia applications
|link:{freebsd-new-bus-url}[freebsd-new-bus]
|Technical discussions about bus architecture
|link:{freebsd-net-url}[freebsd-net]
|Networking discussion and TCP/IP source code
|link:{freebsd-numerics-url}[freebsd-numerics]
|Discussions of high quality implementation of libm functions
|link:{freebsd-ocaml-url}[freebsd-ocaml]
|FreeBSD-specific OCaml discussions
|link:{freebsd-office-url}[freebsd-office]
|Office applications on FreeBSD
|link:{freebsd-performance-url}[freebsd-performance]
|Performance tuning questions for high performance/load installations
|link:{freebsd-perl-url}[freebsd-perl]
|Maintenance of a number of Perl-related ports
|link:{freebsd-pf-url}[freebsd-pf]
|Discussion and questions about the packet filter firewall system
|link:{freebsd-pkg-url}[freebsd-pkg]
|Binary package management and package tools discussion
|link:{freebsd-pkg-fallout-url}[freebsd-pkg-fallout]
|Fallout logs from package building
|link:{freebsd-pkgbase-url}[freebsd-pkgbase]
|Packaging the FreeBSD base system
|link:{freebsd-platforms-url}[freebsd-platforms]
|Concerning ports to non-Intel(R) architecture platforms
|link:{freebsd-ports-url}[freebsd-ports]
|Discussion of the Ports Collection
|link:{freebsd-ports-announce-url}[freebsd-ports-announce]
|Important news and instructions about the Ports Collection (moderated)
|link:{freebsd-ports-bugs-url}[freebsd-ports-bugs]
|Discussion of the ports bugs/PRs
|link:{freebsd-ppc-url}[freebsd-ppc]
|Porting FreeBSD to the PowerPC(R)
|link:{freebsd-proliant-url}[freebsd-proliant]
|Technical discussion of FreeBSD on HP ProLiant server platforms
|link:{freebsd-python-url}[freebsd-python]
|FreeBSD-specific Python issues
|link:{freebsd-rc-url}[freebsd-rc]
|Discussion related to the [.filename]#rc.d# system and its development
|link:{freebsd-realtime-url}[freebsd-realtime]
|Development of realtime extensions to FreeBSD
|link:{freebsd-risc-url}[freebsd-risc]
|Porting FreeBSD to RISC-V(R) systems
|link:{freebsd-ruby-url}[freebsd-ruby]
|FreeBSD-specific Ruby discussions
|link:{freebsd-scsi-url}[freebsd-scsi]
|The SCSI subsystem
|link:{freebsd-security-url}[freebsd-security]
|Security issues affecting FreeBSD
|link:{freebsd-snapshots-url}[freebsd-snapshots]
|FreeBSD Development Snapshot Announcements
|link:{freebsd-sparc64-url}[freebsd-sparc64]
|Porting FreeBSD to SPARC(R) based systems
|link:{freebsd-standards-url}[freebsd-standards]
|FreeBSD's conformance to the C99 and the POSIX(R) standards
|link:{freebsd-sysinstall-url}[freebsd-sysinstall]
|man:sysinstall[8] development
|link:{freebsd-tcltk-url}[freebsd-tcltk]
|FreeBSD-specific Tcl/Tk discussions
|link:{freebsd-testing-url}[freebsd-testing]
|Testing on FreeBSD
|link:{freebsd-tex-url}[freebsd-tex]
|Porting TeX and its applications to FreeBSD
|link:{freebsd-threads-url}[freebsd-threads]
|Threading in FreeBSD
|link:{freebsd-tilera-url}[freebsd-tilera]
|Porting FreeBSD to the Tilera family of CPUs
|link:{freebsd-tokenring-url}[freebsd-tokenring]
|Support Token Ring in FreeBSD
|link:{freebsd-toolchain-url}[freebsd-toolchain]
|Maintenance of FreeBSD's integrated toolchain
|link:{freebsd-translators-url}[freebsd-translators]
|Translating FreeBSD documents and programs
|link:{freebsd-transport-url}[freebsd-transport]
|Discussions of transport level network protocols in FreeBSD
|link:{freebsd-usb-url}[freebsd-usb]
|Discussing FreeBSD support for USB
|link:{freebsd-virtualization-url}[freebsd-virtualization]
|Discussion of various virtualization techniques supported by FreeBSD
|link:{freebsd-vuxml-url}[freebsd-vuxml]
|Discussion on VuXML infrastructure
|link:{freebsd-x11-url}[freebsd-x11]
|Maintenance and support of X11 on FreeBSD
|link:{freebsd-xen-url}[freebsd-xen]
|Discussion of the FreeBSD port to Xen(TM) - implementation and usage
|link:{freebsd-xfce-url}[freebsd-xfce]
|XFCE for FreeBSD - porting and maintaining
|link:{freebsd-zope-url}[freebsd-zope]
|Zope for FreeBSD - porting and maintaining
|===
_Limited lists:_ The following lists are for more specialized (and demanding) audiences and are probably not of interest to the general public. It is also a good idea to establish a presence in the technical lists before joining one of these limited lists in order to understand the communications etiquette involved.
[.informaltable]
[cols="20%,80%", frame="none", options="header"]
|===
| List
| Purpose
|link:{freebsd-hubs-url}[freebsd-hubs]
|People running mirror sites (infrastructural support)
|link:{freebsd-user-groups-url}[freebsd-user-groups]
|User group coordination
|link:{freebsd-wip-status-url}[freebsd-wip-status]
|FreeBSD Work-In-Progress Status
|link:{freebsd-wireless-url}[freebsd-wireless]
|Discussions of the 802.11 stack, tools, and device driver development
|===
_Digest lists:_ All of the above lists are available in a digest format. Once subscribed to a list, the digest options can be changed in the account options section.
_Commit message lists:_ The following lists are for people interested in seeing the log messages for changes to various areas of the source tree.
[NOTE]
====
SVN log messages are sent to SVN lists.
====
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| List
| Source area
| Area Description (source for)
|{dev-commits-doc-all-url}[dev-commits-doc-all]
|[.filename]#/usr/doc#
|All changes to the doc repository
|{dev-commits-ports-all-url}[dev-commits-ports-all]
|[.filename]#/usr/ports#
|All changes to the ports repository
|{dev-commits-ports-main-url}[dev-commits-ports-main]
|[.filename]#/usr/ports#
|All changes to the "main" branch of the ports repository
|{dev-commits-ports-branches-url}[dev-commits-ports-branches]
|[.filename]#/usr/ports#
|All changes to the quarterly branches of the ports repository
|{dev-commits-src-all-url}[dev-commits-src-all]
|[.filename]#/usr/src#
|All changes to the src repository
|{dev-commits-src-main-url}[dev-commits-src-main]
|[.filename]#/usr/src#
|All changes to the "main" branch of the src repository (the FreeBSD-CURRENT branch)
|{dev-commits-src-branches-url}[dev-commits-src-branches]
|[.filename]#/usr/src#
|All changes to all stable branches of the src repository
|===
_SVN lists:_ The following lists are for people interested in seeing the SVN log messages for changes to various areas of the source tree.
[NOTE]
====
Only SVN log messages are sent to the SVN lists. After the SVN to Git migration, the following lists no longer receive new commit messages.
====
[.informaltable]
[cols="20%,20%,60%", frame="none", options="header"]
|===
| List
| Source area
| Area Description (source for)
|link:{svn-doc-all-url}[svn-doc-all]
|[.filename]#/usr/doc#
|All changes to the doc Subversion repository (except for [.filename]#user#, [.filename]#projects# and [.filename]#translations#)
|link:{svn-doc-head-url}[svn-doc-head]
|[.filename]#/usr/doc#
|All changes to the "head" branch of the doc Subversion repository
|link:{svn-doc-projects-url}[svn-doc-projects]
|[.filename]#/usr/doc/projects#
|All changes to the [.filename]#projects# area of the doc Subversion repository
|link:{svn-doc-svnadmin-url}[svn-doc-svnadmin]
|[.filename]#/usr/doc#
|All changes to the administrative scripts, hooks, and other configuration data of the doc Subversion repository
|link:{svn-ports-all-url}[svn-ports-all]
|[.filename]#/usr/ports#
|All changes to the ports Subversion repository
|link:{svn-ports-head-url}[svn-ports-head]
|[.filename]#/usr/ports#
|All changes to the "head" branch of the ports Subversion repository
|link:{svn-ports-svnadmin-url}[svn-ports-svnadmin]
|[.filename]#/usr/ports#
|All changes to the administrative scripts, hooks, and other configuration data of the ports Subversion repository
|link:{svn-src-all-url}[svn-src-all]
|[.filename]#/usr/src#
|All changes to the src Subversion repository (except for [.filename]#user# and [.filename]#projects#)
|link:{svn-src-head-url}[svn-src-head]
|[.filename]#/usr/src#
|All changes to the "head" branch of the src Subversion repository (the FreeBSD-CURRENT branch)
|link:{svn-src-projects-url}[svn-src-projects]
|[.filename]#/usr/projects#
|All changes to the [.filename]#projects# area of the src Subversion repository
|link:{svn-src-release-url}[svn-src-release]
|[.filename]#/usr/src#
|All changes to the [.filename]#releases# area of the src Subversion repository
|link:{svn-src-releng-url}[svn-src-releng]
|[.filename]#/usr/src#
|All changes to the [.filename]#releng# branches of the src Subversion repository (the security / release engineering branches)
|link:{svn-src-stable-url}[svn-src-stable]
|[.filename]#/usr/src#
|All changes to all stable branches of the src Subversion repository
|link:{svn-src-stable-6-url}[svn-src-stable-6]
|[.filename]#/usr/src#
|All changes to the [.filename]#stable/6# branch of the src Subversion repository
|link:{svn-src-stable-7-url}[svn-src-stable-7]
|[.filename]#/usr/src#
|All changes to the [.filename]#stable/7# branch of the src Subversion repository
|link:{svn-src-stable-8-url}[svn-src-stable-8]
|[.filename]#/usr/src#
|All changes to the [.filename]#stable/8# branch of the src Subversion repository
|link:{svn-src-stable-9-url}[svn-src-stable-9]
|[.filename]#/usr/src#
|All changes to the [.filename]#stable/9# branch of the src Subversion repository
|link:{svn-src-stable-10-url}[svn-src-stable-10]
|[.filename]#/usr/src#
|All changes to the [.filename]#stable/10# branch of the src Subversion repository
|link:{svn-src-stable-11-url}[svn-src-stable-11]
|[.filename]#/usr/src#
|All changes to the [.filename]#stable/11# branch of the src Subversion repository
|link:{svn-src-stable-12-url}[svn-src-stable-12]
|[.filename]#/usr/src#
|All changes to the [.filename]#stable/12# branch of the src Subversion repository
|link:{svn-src-stable-other-url}[svn-src-stable-other]
|[.filename]#/usr/src#
|All changes to the older [.filename]#stable# branches of the src Subversion repository
|link:{svn-src-svnadmin-url}[svn-src-svnadmin]
|[.filename]#/usr/src#
|All changes to the administrative scripts, hooks, and other configuration data of the src Subversion repository
|link:{svn-src-user-url}[svn-src-user]
|[.filename]#/usr/src#
|All changes to the experimental [.filename]#user# area of the src Subversion repository
|link:{svn-src-vendor-url}[svn-src-vendor]
|[.filename]#/usr/src#
|All changes to the vendor work area of the src Subversion repository
|===
[[eresources-subscribe]]
=== How to Subscribe
To subscribe to a list, click the list name at {mailman-lists-url}. The page that is displayed should contain all of the necessary subscription instructions for that list.
To actually post to a given list, send mail to mailto:listname@FreeBSD.org[listname@FreeBSD.org]. It will then be redistributed to mailing list members world-wide.
To unsubscribe from a list, click on the URL found at the bottom of every email received from the list. It is also possible to send an email to mailto:listname-unsubscribe@FreeBSD.org[listname-unsubscribe@FreeBSD.org] to unsubscribe.
It is important to keep discussion in the technical mailing lists on a technical track. To receive only important announcements, join the {freebsd-announce} instead, which is intended for infrequent traffic.
[[eresources-charters]]
=== List Charters
_All_ FreeBSD mailing lists have certain basic rules which must be adhered to by anyone using them. Failure to comply with these guidelines will result in two (2) written warnings from the FreeBSD Postmaster mailto:postmaster@FreeBSD.org[postmaster@FreeBSD.org], after which, on a third offense, the poster will be removed from all FreeBSD mailing lists and filtered from further posting to them. We regret that such rules and measures are necessary at all, but today's Internet is a pretty harsh environment, it would seem, and many fail to appreciate just how fragile some of its mechanisms are.
Rules of the road:
* The topic of any posting should adhere to the basic charter of the list it is posted to. If the list is about technical issues, the posting should contain technical discussion. Ongoing irrelevant chatter or flaming only detracts from the value of the mailing list for everyone on it and will not be tolerated. For free-form discussion on no particular topic, the {freebsd-chat} is freely available and should be used instead.
* No posting should be made to more than 2 mailing lists, and only to 2 when a clear and obvious need to post to both lists exists. For most lists, there is already a great deal of subscriber overlap and except for the most esoteric mixes (say "-stable & -scsi"), there really is no reason to post to more than one list at a time. If a message is received with multiple mailing lists on the `Cc` line, trim the `Cc` line before replying. _The person who replies is still responsible for cross-posting, no matter who the originator might have been._
* Personal attacks and profanity (in the context of an argument) are not allowed, and that includes users and developers alike. Gross breaches of netiquette, like excerpting or reposting private mail when permission to do so was not and would not be forthcoming, are frowned upon but not specifically enforced. _However_, there are also very few cases where such content would fit within the charter of a list and it would therefore probably rate a warning (or ban) on that basis alone.
* Advertising of non-FreeBSD related products or services is strictly prohibited and will result in an immediate ban if it is clear that the offender is advertising by spam.
_Individual list charters:_
link:{freebsd-acpi-url}[freebsd-acpi]::
_ACPI and power management development_
link:{freebsd-announce-url}[freebsd-announce]::
_Important events / milestones_
+
This is the mailing list for people interested only in occasional announcements of significant FreeBSD events. This includes announcements about snapshots and other releases. It contains announcements of new FreeBSD capabilities. It may contain calls for volunteers etc. This is a low volume, strictly moderated mailing list.
link:{freebsd-arch-url}[freebsd-arch]::
_Architecture and design discussions_
+
This list is for discussion of the FreeBSD architecture. Messages will mostly be kept strictly technical in nature. Examples of suitable topics are:
** How to re-vamp the build system to have several customized builds running at the same time.
** What needs to be fixed with VFS to make Heidemann layers work.
** How to change the device driver interface so that the same drivers can be used cleanly on many buses and architectures.
** How to write a network driver.
link:{freebsd-bluetooth-url}[freebsd-bluetooth]::
_Bluetooth(R) in FreeBSD_
+
This is the forum where FreeBSD's Bluetooth(R) users congregate. Design issues, implementation details, patches, bug reports, status reports, feature requests, and all matters related to Bluetooth(R) are fair game.
link:{freebsd-bugbusters-url}[freebsd-bugbusters]::
_Coordination of the Problem Report handling effort_
+
The purpose of this list is to serve as a coordination and discussion forum for the Bugmeister, his Bugbusters, and any other parties who have a genuine interest in the PR database. This list is not for discussions about specific bugs, patches or PRs.
link:{freebsd-bugs-url}[freebsd-bugs]::
_Bug reports_
+
This is the mailing list for reporting bugs in FreeBSD. Whenever possible, bugs should be submitted using the https://bugs.freebsd.org/bugzilla/enter_bug.cgi[web interface] to it.
link:{freebsd-chat-url}[freebsd-chat]::
_Non technical items related to the FreeBSD community_
+
This list contains the overflow from the other lists about non-technical, social information. It includes discussion about whether Jordan looks like a toon ferret or not, whether or not to type in capitals, who is drinking too much coffee, where the best beer is brewed, who is brewing beer in their basement, and so on. Occasional announcements of important events (such as upcoming parties, weddings, births, new jobs, etc) can be made to the technical lists, but the follow ups should be directed to this -chat list.
link:{freebsd-chromium-url}[freebsd-chromium]::
_FreeBSD-specific Chromium issues_
+
This is a list for the discussion of Chromium support for FreeBSD. This is a technical list to discuss development and installation of Chromium.
link:{freebsd-cloud-url}[freebsd-cloud]::
_Running FreeBSD on various cloud platforms_
+
This list discusses running FreeBSD on Amazon EC2, Google Compute Engine, Microsoft Azure, and other cloud computing platforms.
_FreeBSD core team_::
This is an internal mailing list for use by the core members. Messages can be sent to it when a serious FreeBSD-related matter requires arbitration or high-level scrutiny.
link:{freebsd-current-url}[freebsd-current]::
_Discussions about the use of FreeBSD-CURRENT_
+
This is the mailing list for users of FreeBSD-CURRENT. It includes warnings about new features coming out in -CURRENT that will affect the users, and instructions on steps that must be taken to remain -CURRENT. Anyone running "CURRENT" must subscribe to this list. This is a technical mailing list for which strictly technical content is expected.
link:{freebsd-desktop-url}[freebsd-desktop]::
_Using and improving FreeBSD on the desktop_
+
This is a forum for discussion of FreeBSD on the desktop. It is primarily a place for desktop porters and users to discuss issues and improve FreeBSD's desktop support.
link:{dev-ci-url}[dev-ci]::
_Continuous Integration reports of build and test results_
+
All Continuous Integration reports of build and test results
link:{dev-reviews-url}[dev-reviews]::
_Notifications of work in progress in FreeBSD's review tool_
+
Automated notifications of work in progress for review in FreeBSD's review tools, including patches.
link:{freebsd-doc-url}[freebsd-doc]::
_Documentation Project_
+
This mailing list is for the discussion of issues and projects related to the creation of documentation for FreeBSD. The members of this mailing list are collectively referred to as "The FreeBSD Documentation Project". It is an open list; feel free to join and contribute!
link:{freebsd-drivers-url}[freebsd-drivers]::
_Writing device drivers for FreeBSD_
+
This is a forum for technical discussions related to device drivers on FreeBSD. It is primarily a place for device driver writers to ask questions about how to write device drivers using the APIs in the FreeBSD kernel.
link:{freebsd-dtrace-url}[freebsd-dtrace]::
_Using and working on DTrace in FreeBSD_
+
DTrace is an integrated component of FreeBSD that provides a framework for understanding the kernel as well as user space programs at run time. The mailing list is an archived discussion for developers of the code as well as those using it.
link:{freebsd-eclipse-url}[freebsd-eclipse]::
_FreeBSD users of Eclipse IDE, tools, rich client applications and ports._
+
The intention of this list is to provide mutual support for everything involved in choosing, installing, using, developing, and maintaining the Eclipse IDE, tools, and rich client applications on the FreeBSD platform, and to assist with porting the Eclipse IDE and its plugins to the FreeBSD environment.
+
The intention is also to facilitate exchange of information between the Eclipse community and the FreeBSD community to the mutual benefit of both.
+
Although this list is focused primarily on the needs of Eclipse users it will also provide a forum for those who would like to develop FreeBSD specific applications using the Eclipse framework.
link:{freebsd-embedded-url}[freebsd-embedded]::
_Using FreeBSD in embedded applications_
+
This list discusses topics related to using FreeBSD in embedded systems. This is a technical mailing list for which strictly technical content is expected. For the purpose of this list, embedded systems are those computing devices which are not desktops and which usually serve a single purpose as opposed to being general computing environments. Examples include, but are not limited to, all kinds of phone handsets, network equipment such as routers, switches and PBXs, remote measuring equipment, PDAs, Point Of Sale systems, and so on.
link:{freebsd-emulation-url}[freebsd-emulation]::
_Emulation of other systems such as Linux/MS-DOS(R)/Windows(R)_
+
This is a forum for technical discussions related to running programs written for other operating systems on FreeBSD.
link:{freebsd-enlightenment-url}[freebsd-enlightenment]::
_Enlightenment_
+
Discussions concerning the Enlightenment Desktop Environment for FreeBSD systems. This is a technical mailing list for which strictly technical content is expected.
link:{freebsd-eol-url}[freebsd-eol]::
_Peer support of FreeBSD-related software that is no longer supported by the FreeBSD Project._
+
This list is for those interested in providing or making use of peer support of FreeBSD-related software for which the FreeBSD Project no longer provides official support in the form of security advisories and patches.
link:{freebsd-firewire-url}[freebsd-firewire]::
_FireWire(R) (iLink, IEEE 1394)_
+
This is a mailing list for discussion of the design and implementation of a FireWire(R) (aka IEEE 1394 aka iLink) subsystem for FreeBSD. Relevant topics specifically include the standards, bus devices and their protocols, adapter boards/cards/chips sets, and the architecture and implementation of code for their proper support.
link:{freebsd-fortran-url}[freebsd-fortran]::
_Fortran on FreeBSD_
+
This is the mailing list for discussion of Fortran related ports on FreeBSD: compilers, libraries, scientific and engineering applications from laptops to HPC clusters.
link:{freebsd-fs-url}[freebsd-fs]::
_File systems_
+
Discussions concerning FreeBSD filesystems. This is a technical mailing list for which strictly technical content is expected.
link:{freebsd-games-url}[freebsd-games]::
_Games on FreeBSD_
+
This is a technical list for discussions related to bringing games to FreeBSD. It is for individuals actively working on porting games to FreeBSD, to bring up problems or discuss alternative solutions. Individuals interested in following the technical discussion are also welcome.
link:{freebsd-gecko-url}[freebsd-gecko]::
_Gecko Rendering Engine_
+
This is a forum about Gecko applications using FreeBSD.
+
Discussion centers around Gecko Ports applications, their installation, their development and their support within FreeBSD.
link:{freebsd-geom-url}[freebsd-geom]::
_GEOM_
+
Discussions specific to GEOM and related implementations. This is a technical mailing list for which strictly technical content is expected.
link:{freebsd-git-url}[freebsd-git]::
_Use of git in the FreeBSD project_
+
Discussions of how to use git in FreeBSD infrastructure including the github mirror and other uses of git for project collaboration. Discussion area for people using git against the FreeBSD github mirror. People wanting to get started with the mirror or git in general on FreeBSD can ask here.
link:{freebsd-gnome-url}[freebsd-gnome]::
_GNOME_
+
Discussions concerning The GNOME Desktop Environment for FreeBSD systems. This is a technical mailing list for which strictly technical content is expected.
link:{freebsd-infiniband-url}[freebsd-infiniband]::
_Infiniband on FreeBSD_
+
Technical mailing list discussing Infiniband, OFED, and OpenSM on FreeBSD.
link:{freebsd-ipfw-url}[freebsd-ipfw]::
_IP Firewall_
+
This is the forum for technical discussions concerning the redesign of the IP firewall code in FreeBSD. This is a technical mailing list for which strictly technical content is expected.
link:{freebsd-isdn-url}[freebsd-isdn]::
_ISDN Communications_
+
This is the mailing list for people discussing the development of ISDN support for FreeBSD.
link:{freebsd-java-url}[freebsd-java]::
_Java(TM) Development_
+
This is the mailing list for people discussing the development of significant Java(TM) applications for FreeBSD and the porting and maintenance of JDK(TM)s.
[[eresources-charters-jobs]]
link:{freebsd-jobs-url}[freebsd-jobs]::
_Jobs offered and sought_
+
This is a forum for posting employment notices specifically related to FreeBSD and resumes from those seeking FreeBSD-related employment. This is _not_ a mailing list for general employment issues since adequate forums for that already exist elsewhere.
+
Note that this list, like other `FreeBSD.org` mailing lists, is distributed worldwide. Be clear about the geographic location and the extent to which telecommuting or assistance with relocation is available.
+
Email should use open formats only - preferably plain text, but basic Portable Document Format (PDF), HTML, and a few others are acceptable to many readers. Closed formats such as Microsoft(R) Word ([.filename]#.doc#) will be rejected by the mailing list server.
link:{freebsd-kde-url}[freebsd-kde]::
_KDE_
+
Discussions concerning KDE on FreeBSD systems. This is a technical mailing list for which strictly technical content is expected.
link:{freebsd-hackers-url}[freebsd-hackers]::
_Technical discussions_
+
This is a forum for technical discussions related to FreeBSD. This is the primary technical mailing list. It is for individuals actively working on FreeBSD, to bring up problems or discuss alternative solutions. Individuals interested in following the technical discussion are also welcome. This is a technical mailing list for which strictly technical content is expected.
link:{freebsd-hardware-url}[freebsd-hardware]::
_General discussion of FreeBSD hardware_
+
General discussion about the types of hardware that FreeBSD runs on, various problems and suggestions concerning what to buy or avoid.
link:{freebsd-hubs-url}[freebsd-hubs]::
_Mirror sites_
+
Announcements and discussion for people who run FreeBSD mirror sites.
link:{freebsd-isp-url}[freebsd-isp]::
_Issues for Internet Service Providers_
+
This mailing list is for discussing topics relevant to Internet Service Providers (ISPs) using FreeBSD. This is a technical mailing list for which strictly technical content is expected.
link:{freebsd-mono-url}[freebsd-mono]::
_Mono and C# applications on FreeBSD_
+
This is a list for discussions related to the Mono development framework on FreeBSD. This is a technical mailing list. It is for individuals actively working on porting Mono or C# applications to FreeBSD, to bring up problems or discuss alternative solutions. Individuals interested in following the technical discussion are also welcome.
link:{freebsd-ocaml-url}[freebsd-ocaml]::
_FreeBSD-specific OCaml discussions_
+
This is a list for discussions related to the OCaml support on FreeBSD. This is a technical mailing list. It is for individuals working on OCaml ports, 3rd party libraries and frameworks. Individuals interested in the technical discussion are also welcome.
link:{freebsd-office-url}[freebsd-office]::
_Office applications on FreeBSD_
+
Discussion centers around office applications, their installation, their development and their support within FreeBSD.
link:{freebsd-ops-announce-url}[freebsd-ops-announce]::
_Project Infrastructure Announcements_
+
This is the mailing list for people interested in changes and issues related to the FreeBSD.org Project infrastructure.
+
This moderated list is strictly for announcements: no replies, requests, discussions, or opinions.
link:{freebsd-performance-url}[freebsd-performance]::
_Discussions about tuning or speeding up FreeBSD_
+
This mailing list exists to provide a place for hackers, administrators, and/or concerned parties to discuss performance-related topics pertaining to FreeBSD. Acceptable topics include FreeBSD installations that are either under high load, are experiencing performance problems, or are pushing the limits of FreeBSD. Concerned parties that are willing to work toward improving the performance of FreeBSD are highly encouraged to subscribe to this list. This is a highly technical list ideally suited for experienced FreeBSD users, hackers, or administrators interested in keeping FreeBSD fast, robust, and scalable. This list is not a question-and-answer list that replaces reading through documentation, but it is a place to make contributions or inquire about unanswered performance-related topics.
link:{freebsd-pf-url}[freebsd-pf]::
_Discussion and questions about the packet filter firewall system_
+
Discussion concerning the packet filter (pf) firewall system in terms of FreeBSD. Technical discussion and user questions are both welcome. This list is also a place to discuss the ALTQ QoS framework.
link:{freebsd-pkg-url}[freebsd-pkg]::
_Binary package management and package tools discussion_
+
Discussion of all aspects of managing FreeBSD systems by using binary packages to install software, including binary package toolkits and formats, their development and support within FreeBSD, package repository management, and third party packages.
+
Note that discussion of ports which fail to generate packages correctly should generally be considered as ports problems, and so inappropriate for this list.
link:{freebsd-pkg-fallout-url}[freebsd-pkg-fallout]::
_Fallout logs from package building_
+
All package build failure logs from the package-building clusters
link:{freebsd-pkgbase-url}[freebsd-pkgbase]::
_Packaging the FreeBSD base system._
+
Discussions surrounding implementation and issues regarding packaging the FreeBSD base system.
link:{freebsd-platforms-url}[freebsd-platforms]::
_Porting to non-Intel(R) platforms_
+
Cross-platform FreeBSD issues, general discussion, and proposals for non-Intel(R) FreeBSD ports. This is a technical mailing list for which strictly technical content is expected.
link:{freebsd-ports-url}[freebsd-ports]::
_Discussion of "ports"_
+
Discussions concerning FreeBSD's "ports collection" ([.filename]#/usr/ports#), ports infrastructure, and general ports coordination efforts. This is a technical mailing list for which strictly technical content is expected.
link:{freebsd-ports-announce-url}[freebsd-ports-announce]::
_Important news and instructions about the FreeBSD "Ports Collection"_
+
Important news for developers, porters, and users of the "Ports Collection" ([.filename]#/usr/ports#), including architecture/infrastructure changes, new capabilities, critical upgrade instructions, and release engineering information. This is a low-volume mailing list, intended for announcements.
link:{freebsd-ports-bugs-url}[freebsd-ports-bugs]::
_Discussion of "ports" bugs_
+
Discussions concerning problem reports for FreeBSD's "ports collection" ([.filename]#/usr/ports#), proposed ports, or modifications to ports. This is a technical mailing list for which strictly technical content is expected.
link:{freebsd-proliant-url}[freebsd-proliant]::
_Technical discussion of FreeBSD on HP ProLiant server platforms_
+
This mailing list is to be used for the technical discussion of the usage of FreeBSD on HP ProLiant servers, including the discussion of ProLiant-specific drivers, management software, configuration tools, and BIOS updates. As such, this is the primary place to discuss the hpasmd, hpasmcli, and hpacucli modules.
link:{freebsd-python-url}[freebsd-python]::
_Python on FreeBSD_
+
This is a list for discussions related to improving Python support on FreeBSD. This is a technical mailing list. It is for individuals working on porting Python, its third-party modules, and Zope to FreeBSD. Individuals interested in following the technical discussion are also welcome.
link:{freebsd-questions-url}[freebsd-questions]::
_User questions_
+
This is the mailing list for questions about FreeBSD. Do not send "how to" questions to the technical lists unless the question is quite technical.
link:{freebsd-ruby-url}[freebsd-ruby]::
_FreeBSD-specific Ruby discussions_
+
This is a list for discussions related to the Ruby support on FreeBSD. This is a technical mailing list. It is for individuals working on Ruby ports, third party libraries and frameworks.
+
Individuals interested in the technical discussion are also welcome.
link:{freebsd-scsi-url}[freebsd-scsi]::
_SCSI subsystem_
+
This is the mailing list for people working on the SCSI subsystem for FreeBSD. This is a technical mailing list for which strictly technical content is expected.
link:{freebsd-security-url}[freebsd-security]::
_Security issues_
+
FreeBSD computer security issues (DES, Kerberos, known security holes and fixes, etc). This is a technical mailing list for which strictly technical discussion is expected. Note that this is not a question-and-answer list, but that contributions (BOTH question AND answer) to the FAQ are welcome.
link:{freebsd-security-notifications-url}[freebsd-security-notifications]::
_Security Notifications_
+
Notifications of FreeBSD security problems and fixes. This is not a discussion list. The discussion list is FreeBSD-security.
link:{freebsd-snapshots-url}[freebsd-snapshots]::
_FreeBSD Development Snapshot Announcements_
+
This list provides notifications about the availability of new FreeBSD development snapshots for the head/ and stable/ branches.
link:{freebsd-stable-url}[freebsd-stable]::
_Discussions about the use of FreeBSD-STABLE_
+
This is the mailing list for users of FreeBSD-STABLE. "STABLE" is the branch where development continues after a RELEASE, including bug fixes and new features. The ABI is kept stable for binary compatibility. It includes warnings about new features coming out in -STABLE that will affect the users, and instructions on steps that must be taken to remain -STABLE. Anyone running "STABLE" should subscribe to this list. This is a technical mailing list for which strictly technical content is expected.
link:{freebsd-standards-url}[freebsd-standards]::
_C99 POSIX Conformance_
+
This is a forum for technical discussions related to FreeBSD's conformance to the C99 and POSIX standards.
link:{freebsd-teaching-url}[freebsd-teaching]::
_Teaching with FreeBSD_
+
Non-technical mailing list discussing teaching with FreeBSD.
link:{freebsd-testing-url}[freebsd-testing]::
_Testing on FreeBSD_
+
Technical mailing list discussing testing on FreeBSD, including ATF/Kyua, test build infrastructure, port tests to FreeBSD from other operating systems (NetBSD, ...), etc.
link:{freebsd-tex-url}[freebsd-tex]::
_Porting TeX and its applications to FreeBSD_
+
This is a technical mailing list for discussions related to TeX and its applications on FreeBSD. It is for individuals actively working on porting TeX to FreeBSD, to bring up problems or discuss alternative solutions. Individuals interested in following the technical discussion are also welcome.
link:{freebsd-toolchain-url}[freebsd-toolchain]::
_Maintenance of FreeBSD's integrated toolchain_
+
This is the mailing list for discussions related to the maintenance of the toolchain shipped with FreeBSD. This could include the state of Clang and GCC, but also pieces of software such as assemblers, linkers and debuggers.
link:{freebsd-transport-url}[freebsd-transport]::
_Discussions of transport level network protocols in FreeBSD_
+
The transport mailing list exists for the discussion of issues and designs around the transport-level protocols in the FreeBSD network stack, including TCP, SCTP, and UDP. Other networking topics, including driver-specific and network protocol issues, should be discussed on the {freebsd-net}.
link:{freebsd-translators-url}[freebsd-translators]::
_Translating FreeBSD documents and programs_
+
A discussion list where translators of FreeBSD documents from English into other languages can talk about translation methods and tools. New members are asked to introduce themselves and mention the languages they are interested in translating.
link:{freebsd-usb-url}[freebsd-usb]::
_Discussing FreeBSD support for USB_
+
This is a mailing list for technical discussions related to FreeBSD support for USB.
link:{freebsd-user-groups-url}[freebsd-user-groups]::
_User Group Coordination List_
+
This is the mailing list for the coordinators from each of the local area User Groups to discuss matters with each other and a designated individual from the Core Team. This mailing list should be limited to meeting synopses and the coordination of projects that span User Groups.
link:{freebsd-virtualization-url}[freebsd-virtualization]::
_Discussion of various virtualization techniques supported by FreeBSD_
+
A list to discuss the various virtualization techniques supported by FreeBSD. On one hand the focus will be on the implementation of the basic functionality as well as adding new features. On the other hand users will have a forum to ask for help in case of problems or to discuss their use cases.
link:{freebsd-wip-status-url}[freebsd-wip-status]::
_FreeBSD Work-In-Progress Status_
+
This mailing list can be used by developers to announce the creation and progress of FreeBSD related work. Messages will be moderated. It is suggested to send the message "To:" a more topical FreeBSD list and only "BCC:" this list. This way the WIP can also be discussed on the topical list, as no discussion is allowed on this list.
+
Look inside the archives for examples of suitable messages.
+
An editorial digest of the messages to this list might be posted to the FreeBSD website every few months as part of the Status Reports footnote:[https://www.freebsd.org/news/status/]. Past reports are archived.
link:{freebsd-wireless-url}[freebsd-wireless]::
_Discussions of the 802.11 stack, tools, and device driver development_
+
The FreeBSD-wireless list focuses on 802.11 stack (sys/net80211), device driver and tools development. This includes bugs, new features and maintenance.
link:{freebsd-xen-url}[freebsd-xen]::
_Discussion of the FreeBSD port to Xen(TM) - implementation and usage_
+
A list that focuses on the FreeBSD Xen(TM) port. The anticipated traffic level is small enough that it is intended as a forum for both technical discussions of the implementation and design details as well as administrative deployment issues.
link:{freebsd-xfce-url}[freebsd-xfce]::
_XFCE_
+
This is a forum for discussions related to bringing the XFCE environment to FreeBSD. This is a technical mailing list. It is for individuals actively working on porting XFCE to FreeBSD, to bring up problems or discuss alternative solutions. Individuals interested in following the technical discussion are also welcome.
link:{freebsd-zope-url}[freebsd-zope]::
_Zope_
+
This is a forum for discussions related to bringing the Zope environment to FreeBSD. This is a technical mailing list. It is for individuals actively working on porting Zope to FreeBSD, to bring up problems or discuss alternative solutions. Individuals interested in following the technical discussion are also welcome.
[[eresources-mailfiltering]]
=== Filtering on the Mailing Lists
The FreeBSD mailing lists are filtered in multiple ways to avoid the distribution of spam, viruses, and other unwanted emails. The filtering actions described in this section do not include all those used to protect the mailing lists.
Only certain types of attachments are allowed on the mailing lists. All attachments with a MIME content type not found in the list below will be stripped before an email is distributed on the mailing lists.
* application/octet-stream
* application/pdf
* application/pgp-signature
* application/x-pkcs7-signature
* message/rfc822
* multipart/alternative
* multipart/related
* multipart/signed
* text/html
* text/plain
* text/x-diff
* text/x-patch
[NOTE]
====
Some of the mailing lists might allow attachments of other MIME content types, but the above list should be applicable for most of the mailing lists.
====
If an email contains both an HTML and a plain text version, the HTML version will be removed. If an email contains only an HTML version, it will be converted to plain text.
[[eresources-news]]
== Usenet Newsgroups
In addition to two FreeBSD-specific newsgroups, there are many others in which FreeBSD is discussed or which are otherwise relevant to FreeBSD users.
=== BSD Specific Newsgroups
* link:news:comp.unix.bsd.freebsd.announce[comp.unix.bsd.freebsd.announce]
* link:news:comp.unix.bsd.freebsd.misc[comp.unix.bsd.freebsd.misc]
* link:news:de.comp.os.unix.bsd[de.comp.os.unix.bsd] (German)
* link:news:fr.comp.os.bsd[fr.comp.os.bsd] (French)
=== Other UNIX(R) Newsgroups of Interest
* link:news:comp.unix[comp.unix]
* link:news:comp.unix.questions[comp.unix.questions]
* link:news:comp.unix.admin[comp.unix.admin]
* link:news:comp.unix.programmer[comp.unix.programmer]
* link:news:comp.unix.shell[comp.unix.shell]
* link:news:comp.unix.misc[comp.unix.misc]
* link:news:comp.unix.bsd[comp.unix.bsd]
=== X Window System
* link:news:comp.windows.x[comp.windows.x]
[[eresources-web]]
== Official Mirrors
<<central-mirrors, {central}>>, <<armenia-mirrors, {mirrors-armenia}>>, <<australia-mirrors, {mirrors-australia}>>, <<austria-mirrors, {mirrors-austria}>>, <<czech-republic-mirrors, {mirrors-czech}>>, <<denmark-mirrors, {mirrors-denmark}>>, <<finland-mirrors, {mirrors-finland}>>, <<france-mirrors, {mirrors-france}>>, <<germany-mirrors, {mirrors-germany}>>, <<hong-kong-mirrors, {mirrors-hongkong}>>, <<ireland-mirrors, {mirrors-ireland}>>, <<japan-mirrors, {mirrors-japan}>>, <<latvia-mirrors, {mirrors-latvia}>>, <<lithuania-mirrors, {mirrors-lithuania}>>, <<netherlands-mirrors, {mirrors-netherlands}>>, <<norway-mirrors, {mirrors-norway}>>, <<russia-mirrors, {mirrors-russia}>>, <<slovenia-mirrors, {mirrors-slovenia}>>, <<south-africa-mirrors, {mirrors-south-africa}>>, <<spain-mirrors, {mirrors-spain}>>, <<sweden-mirrors, {mirrors-sweden}>>, <<switzerland-mirrors, {mirrors-switzerland}>>, <<taiwan-mirrors, {mirrors-taiwan}>>, <<uk-mirrors, {mirrors-uk}>>, <<usa-mirrors, {mirrors-us}>>.
(as of UTC)
[[central-mirrors]]
*{central}*
* {central-www}
[[armenia-mirrors]]
*{mirrors-armenia}*
* {mirrors-armenia-www-httpv6} (IPv6)
[[australia-mirrors]]
*{mirrors-australia}*
* {mirrors-australia-www-http}
* {mirrors-australia-www2-http}
[[austria-mirrors]]
*{mirrors-austria}*
* {mirrors-austria-www-httpv6} (IPv6)
[[czech-republic-mirrors]]
*{mirrors-czech}*
* {mirrors-czech-www-httpv6} (IPv6)
[[denmark-mirrors]]
*{mirrors-denmark}*
* {mirrors-denmark-www-httpv6} (IPv6)
[[finland-mirrors]]
*{mirrors-finland}*
* {mirrors-finland-www-http}
[[france-mirrors]]
*{mirrors-france}*
* {mirrors-france-www-http}
[[germany-mirrors]]
*{mirrors-germany}*
* {mirrors-germany-www-http}
[[hong-kong-mirrors]]
*{mirrors-hongkong}*
* {mirrors-hongkong-www}
[[ireland-mirrors]]
*{mirrors-ireland}*
* {mirrors-ireland-www}
[[japan-mirrors]]
*{mirrors-japan}*
* {mirrors-japan-www-httpv6} (IPv6)
[[latvia-mirrors]]
*{mirrors-latvia}*
* {mirrors-latvia-www}
[[lithuania-mirrors]]
*{mirrors-lithuania}*
* {mirrors-lithuania-www}
[[netherlands-mirrors]]
*{mirrors-netherlands}*
* {mirrors-netherlands-www}
[[norway-mirrors]]
*{mirrors-norway}*
* {mirrors-norway-www}
[[russia-mirrors]]
*{mirrors-russia}*
* {mirrors-russia-www-httpv6} (IPv6)
[[slovenia-mirrors]]
*{mirrors-slovenia}*
* {mirrors-slovenia-www}
[[south-africa-mirrors]]
*{mirrors-south-africa}*
* {mirrors-south-africa-www}
[[spain-mirrors]]
*{mirrors-spain}*
* {mirrors-spain-www}
* {mirrors-spain-www2}
[[sweden-mirrors]]
*{mirrors-sweden}*
* {mirrors-sweden-www}
[[switzerland-mirrors]]
*{mirrors-switzerland}*
* {mirrors-switzerland-www-httpv6} (IPv6)
* {mirrors-switzerland-www2-httpv6} (IPv6)
[[taiwan-mirrors]]
*{mirrors-taiwan}*
* {mirrors-taiwan-www}
* {mirrors-taiwan-www2}
* {mirrors-taiwan-www4}
* {mirrors-taiwan-www5-httpv6} (IPv6)
[[uk-mirrors]]
*{mirrors-uk}*
* {mirrors-uk-www}
* {mirrors-uk-www3}
[[usa-mirrors]]
*{mirrors-us}*
* {mirrors-us-www5-httpv6} (IPv6)
:sectnums:
:sectnumlevels: 6
diff --git a/documentation/content/en/books/handbook/filesystems/_index.adoc b/documentation/content/en/books/handbook/filesystems/_index.adoc
index b517d6458a..3fa8a2cd64 100644
--- a/documentation/content/en/books/handbook/filesystems/_index.adoc
+++ b/documentation/content/en/books/handbook/filesystems/_index.adoc
@@ -1,94 +1,95 @@
---
title: Chapter 21. Other File Systems
part: Part III. System Administration
prev: books/handbook/zfs
next: books/handbook/virtualization
+description: This chapter shows the other filesystems supported by FreeBSD
---
[[filesystems]]
= Other File Systems
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 21
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../../images/books/handbook/filesystems/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/books/handbook/filesystems/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/books/handbook/filesystems/
endif::[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[filesystems-synopsis]]
== Synopsis
File systems are an integral part of any operating system. They allow users to upload and store files, provide access to data, and make hard drives useful. Different operating systems differ in their native file system. Traditionally, the native FreeBSD file system has been the Unix File System UFS which has been modernized as UFS2. Since FreeBSD 7.0, the Z File System (ZFS) is also available as a native file system. See crossref:zfs[zfs,The Z File System (ZFS)] for more information.
In addition to its native file systems, FreeBSD supports a multitude of other file systems so that data from other operating systems can be accessed locally, such as data stored on locally attached USB storage devices, flash drives, and hard disks. This includes support for the Linux(R) Extended File System (EXT).
There are different levels of FreeBSD support for the various file systems. Some require a kernel module to be loaded and others may require a toolset to be installed. Some non-native file system support is full read-write while others are read-only.
After reading this chapter, you will know:
* The difference between native and supported file systems.
* Which file systems are supported by FreeBSD.
* How to enable, configure, access, and make use of non-native file systems.
Before reading this chapter, you should:
* Understand UNIX(R) and crossref:basics[basics,FreeBSD basics].
* Be familiar with the basics of crossref:kernelconfig[kernelconfig,kernel configuration and compilation].
* Feel comfortable crossref:ports[ports,installing software] in FreeBSD.
* Have some familiarity with crossref:disks[disks,disks], storage, and device names in FreeBSD.
[[filesystems-linux]]
== Linux(R) File Systems
FreeBSD provides built-in support for several Linux(R) file systems. This section demonstrates how to load support for and how to mount the supported Linux(R) file systems.
=== ext2
Kernel support for ext2 file systems has been available since FreeBSD 2.2. In FreeBSD 8.x and earlier, the code was licensed under the GPL. Since FreeBSD 9.0, the code has been rewritten and is now BSD licensed.
The man:ext2fs[5] driver allows the FreeBSD kernel to both read and write to ext2 file systems.
[NOTE]
====
This driver can also be used to access ext3 and ext4 file systems. The man:ext2fs[5] filesystem has full read and write support for ext4 as of FreeBSD 12.0-RELEASE. Additionally, extended attributes and ACLs are also supported, while journalling and encryption are not. Starting with FreeBSD 12.1-RELEASE, a DTrace provider is available as well. Prior versions of FreeBSD can access ext4 in read and write mode using package:sysutils/fusefs-ext2[].
====
To access an ext file system, first load the kernel loadable module:
[source,shell]
....
# kldload ext2fs
....
Then, mount the ext volume by specifying its FreeBSD partition name and an existing mount point. This example mounts [.filename]#/dev/ad1s1# on [.filename]#/mnt#:
[source,shell]
....
# mount -t ext2fs /dev/ad1s1 /mnt
....
diff --git a/documentation/content/en/books/handbook/firewalls/_index.adoc b/documentation/content/en/books/handbook/firewalls/_index.adoc
index 0343c2b003..eda3d742cb 100644
--- a/documentation/content/en/books/handbook/firewalls/_index.adoc
+++ b/documentation/content/en/books/handbook/firewalls/_index.adoc
@@ -1,2225 +1,2226 @@
---
title: Chapter 31. Firewalls
part: IV. Network Communication
prev: books/handbook/network-servers
next: books/handbook/advanced-networking
+description: "FreeBSD has three firewalls built into the base system: PF, IPFW, and IPFILTER. This chapter covers how to define packet filtering rules, the differences between the firewalls built into FreeBSD and how to use them"
---
[[firewalls]]
= Firewalls
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 31
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../../images/books/handbook/firewalls/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/books/handbook/firewalls/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/books/handbook/firewalls/
endif::[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[firewalls-intro]]
== Synopsis
Firewalls make it possible to filter the incoming and outgoing traffic that flows through a system. A firewall can use one or more sets of "rules" to inspect network packets as they come in or go out of network connections and either allows the traffic through or blocks it. The rules of a firewall can inspect one or more characteristics of the packets such as the protocol type, source or destination host address, and source or destination port.
Firewalls can enhance the security of a host or a network. They can be used to do one or more of the following:
* Protect and insulate the applications, services, and machines of an internal network from unwanted traffic from the public Internet.
* Limit or disable access from hosts of the internal network to services of the public Internet.
* Support network address translation (NAT), which allows an internal network to use private IP addresses and share a single connection to the public Internet using either a single IP address or a shared pool of automatically assigned public addresses.
FreeBSD has three firewalls built into the base system: PF, IPFW, and IPFILTER, also known as IPF. FreeBSD also provides two traffic shapers for controlling bandwidth usage: man:altq[4] and man:dummynet[4]. ALTQ has traditionally been closely tied with PF and dummynet with IPFW. Each firewall uses rules to control the access of packets to and from a FreeBSD system, although they go about it in different ways and each has a different rule syntax.
FreeBSD provides multiple firewalls in order to meet the different requirements and preferences for a wide variety of users. Each user should evaluate which firewall best meets their needs.
After reading this chapter, you will know:
* How to define packet filtering rules.
* The differences between the firewalls built into FreeBSD.
* How to use and configure the PF firewall.
* How to use and configure the IPFW firewall.
* How to use and configure the IPFILTER firewall.
Before reading this chapter, you should:
* Understand basic FreeBSD and Internet concepts.
[NOTE]
====
Since all firewalls are based on inspecting the values of selected packet control fields, the creator of the firewall ruleset must have an understanding of how TCP/IP works, what the different values in the packet control fields are, and how these values are used in a normal session conversation. For a good introduction, refer to http://www.ipprimer.com[Daryl's TCP/IP Primer].
====
[[firewalls-concepts]]
== Firewall Concepts
A ruleset contains a group of rules which pass or block packets based on the values contained in the packet. The bi-directional exchange of packets between hosts comprises a session conversation. The firewall ruleset processes both the packets arriving from the public Internet, as well as the packets produced by the system as a response to them. Each TCP/IP service is predefined by its protocol and listening port. Packets destined for a specific service originate from the source address using an unprivileged port and target the specific service port on the destination address. All the above parameters can be used as selection criteria to create rules which will pass or block services.
To look up unknown port numbers, refer to [.filename]#/etc/services#. Alternatively, visit http://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers[http://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers] and look up a particular port number to find its purpose.
Check out this link for http://web.archive.org/web/20150803024617/http://www.sans.org/security-resources/idfaq/oddports.php[port numbers used by Trojans].
FTP has two modes: active mode and passive mode. The difference is in how the data channel is acquired. Passive mode is more secure, as the data channel is acquired by the original FTP session requester. For a good explanation of FTP and the different modes, see http://www.slacksite.com/other/ftp.html[http://www.slacksite.com/other/ftp.html].
A firewall ruleset can be either "exclusive" or "inclusive". An exclusive firewall allows all traffic through except for the traffic matching the ruleset. An inclusive firewall does the reverse as it only allows traffic matching the rules through and blocks everything else.
An inclusive firewall offers better control of the outgoing traffic, making it a better choice for systems that offer services to the public Internet. It also controls the type of traffic originating from the public Internet that can gain access to a private network. All traffic that does not match the rules is blocked and logged. Inclusive firewalls are generally safer than exclusive firewalls because they significantly reduce the risk of allowing unwanted traffic.
[NOTE]
====
Unless noted otherwise, all configuration and example rulesets in this chapter create inclusive firewall rulesets.
====
Security can be tightened further using a "stateful firewall". This type of firewall keeps track of open connections and only allows traffic which either matches an existing connection or opens a new, allowed connection.
Stateful filtering treats traffic as a bi-directional exchange of packets comprising a session. When state is specified on a matching rule the firewall dynamically generates internal rules for each anticipated packet being exchanged during the session. It has sufficient matching capabilities to determine if a packet is valid for a session. Any packets that do not properly fit the session template are automatically rejected.
When the session completes, it is removed from the dynamic state table.
Stateful filtering allows one to focus on blocking/passing new sessions. If the new session is passed, all its subsequent packets are allowed automatically and any impostor packets are automatically rejected. If a new session is blocked, none of its subsequent packets are allowed. Stateful filtering provides advanced matching abilities capable of defending against the flood of different attack methods employed by attackers.
NAT stands for _Network Address Translation_. The NAT function enables the private LAN behind the firewall to share a single ISP-assigned IP address, even if that address is dynamically assigned. NAT allows each computer in the LAN to have Internet access, without having to pay the ISP for multiple Internet accounts or IP addresses.
NAT will automatically translate the private LAN IP address for each system on the LAN to the single public IP address as packets exit the firewall bound for the public Internet. It also performs the reverse translation for returning packets.
According to RFC 1918, the following IP address ranges are reserved for private networks which will never be routed directly to the public Internet, and therefore are available for use with NAT:
* `10.0.0.0/8`.
* `172.16.0.0/12`.
* `192.168.0.0/16`.
[WARNING]
====
When working with the firewall rules, be _very careful_. Some configurations _can lock the administrator out_ of the server. To be on the safe side, consider performing the initial firewall configuration from the local console rather than doing it remotely over ssh.
====
[[firewalls-pf]]
== PF
Since FreeBSD 5.3, a ported version of OpenBSD's PF firewall has been included as an integrated part of the base system. PF is a complete, full-featured firewall that has optional support for ALTQ (Alternate Queuing), which provides Quality of Service (QoS).
The OpenBSD Project maintains the definitive reference for PF in the http://www.openbsd.org/faq/pf/[PF FAQ]. Peter Hansteen maintains a thorough PF tutorial at http://home.nuug.no/\~peter/pf/[http://home.nuug.no/~peter/pf/].
[WARNING]
====
When reading the http://www.openbsd.org/faq/pf/[PF FAQ], keep in mind that FreeBSD's version of PF has diverged substantially from the upstream OpenBSD version over the years. Not all features work the same way on FreeBSD as they do in OpenBSD and vice versa.
====
The {freebsd-pf} is a good place to ask questions about configuring and running the PF firewall. Check the mailing list archives before asking a question as it may have already been answered.
This section of the Handbook focuses on PF as it pertains to FreeBSD. It demonstrates how to enable PF and ALTQ. It also provides several examples for creating rulesets on a FreeBSD system.
=== Enabling PF
To use PF, its kernel module must first be loaded. This section describes the entries that can be added to [.filename]#/etc/rc.conf# to enable PF.
Start by adding `pf_enable=yes` to [.filename]#/etc/rc.conf#:
[source,shell]
....
# sysrc pf_enable=yes
....
Additional options, described in man:pfctl[8], can be passed to PF when it is started. Add or change this entry in [.filename]#/etc/rc.conf# and specify any required flags between the two quotes (`""`):
[.programlisting]
....
pf_flags="" # additional flags for pfctl startup
....
PF will not start if it cannot find its ruleset configuration file. By default, FreeBSD does not ship with a ruleset and there is no [.filename]#/etc/pf.conf#. Example rulesets can be found in [.filename]#/usr/share/examples/pf/#. If a custom ruleset has been saved somewhere else, add a line to [.filename]#/etc/rc.conf# which specifies the full path to the file:
[.programlisting]
....
pf_rules="/path/to/pf.conf"
....
Logging support for PF is provided by man:pflog[4]. To enable logging support, add `pflog_enable=yes` to [.filename]#/etc/rc.conf#:
[source,shell]
....
# sysrc pflog_enable=yes
....
The following lines can also be added to change the default location of the log file or to specify any additional flags to pass to man:pflog[4] when it is started:
[.programlisting]
....
pflog_logfile="/var/log/pflog" # where pflogd should store the logfile
pflog_flags="" # additional flags for pflogd startup
....
Finally, if there is a LAN behind the firewall and packets need to be forwarded for the computers on the LAN, or NAT is required, enable the following option:
[.programlisting]
....
gateway_enable="YES" # Enable as LAN gateway
....
After saving the needed edits, PF can be started with logging support by typing:
[source,shell]
....
# service pf start
# service pflog start
....
By default, PF reads its configuration rules from [.filename]#/etc/pf.conf# and modifies, drops, or passes packets according to the rules or definitions specified in this file. The FreeBSD installation includes several sample files located in [.filename]#/usr/share/examples/pf/#. Refer to the http://www.openbsd.org/faq/pf/[PF FAQ] for complete coverage of PF rulesets.
To control PF, use `pfctl`. <<pfctl>> summarizes some useful options to this command. Refer to man:pfctl[8] for a description of all available options:
[[pfctl]]
.Useful `pfctl` Options
[cols="1,1", frame="none", options="header"]
|===
| Command
| Purpose
|`pfctl -e`
|Enable PF.
|`pfctl -d`
|Disable PF.
|`pfctl -F all -f /etc/pf.conf`
|Flush all NAT, filter, state, and table rules and reload [.filename]#/etc/pf.conf#.
|`pfctl -s [ rules \| nat \| states ]`
|Report on the filter rules, NAT rules, or state table.
|`pfctl -vnf /etc/pf.conf`
|Check [.filename]#/etc/pf.conf# for errors, but do not load ruleset.
|===
[TIP]
====
package:security/sudo[] is useful for running commands like `pfctl` that require elevated privileges. It can be installed from the Ports Collection.
====
To keep an eye on the traffic that passes through the PF firewall, consider installing the package:sysutils/pftop[] package or port. Once installed, pftop can be run to view a running snapshot of traffic in a format which is similar to man:top[1].
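A minimal sketch of installing and starting it, assuming binary packages are used rather than the Ports Collection:
[source,shell]
....
# pkg install pftop
# pftop
....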
[[pf-tutorial]]
=== PF Rulesets
This section demonstrates how to create a customized ruleset. It starts with the simplest of rulesets and builds upon its concepts using several examples to demonstrate real-world usage of PF's many features.
The simplest possible ruleset is for a single machine that does not run any services and which needs access to one network, which may be the Internet. To create this minimal ruleset, edit [.filename]#/etc/pf.conf# so it looks like this:
[.programlisting]
....
block in all
pass out all keep state
....
The first rule denies all incoming traffic by default. The second rule allows connections created by this system to pass out, while retaining state information on those connections. This state information allows return traffic for those connections to pass back and should only be used on machines that can be trusted. The ruleset can be loaded with:
[source,shell]
....
# pfctl -e ; pfctl -f /etc/pf.conf
....
In addition to keeping state, PF provides _lists_ and _macros_ which can be defined for use when creating rules. Macros can include lists and need to be defined before use. As an example, insert these lines at the very top of the ruleset:
[.programlisting]
....
tcp_services = "{ ssh, smtp, domain, www, pop3, auth, pop3s }"
udp_services = "{ domain }"
....
PF understands port names as well as port numbers, as long as the names are listed in [.filename]#/etc/services#. This example creates two macros. The first is a list of seven TCP port names and the second is one UDP port name. Once defined, macros can be used in rules. In this example, all traffic is blocked except for the connections initiated by this system for the seven specified TCP services and the one specified UDP service:
[.programlisting]
....
tcp_services = "{ ssh, smtp, domain, www, pop3, auth, pop3s }"
udp_services = "{ domain }"
block all
pass out proto tcp to any port $tcp_services keep state
pass proto udp to any port $udp_services keep state
....
Even though UDP is considered to be a stateless protocol, PF is able to track some state information. For example, when a UDP request is passed which asks a name server about a domain name, PF will watch for the response to pass it back.
Whenever an edit is made to a ruleset, the new rules must be loaded so they can be used:
[source,shell]
....
# pfctl -f /etc/pf.conf
....
If there are no syntax errors, `pfctl` will not output any messages during the rule load. Rules can also be tested before attempting to load them:
[source,shell]
....
# pfctl -nf /etc/pf.conf
....
Including `-n` causes the rules to be interpreted only, but not loaded. This provides an opportunity to correct any errors. At all times, the last valid ruleset loaded will be enforced until either PF is disabled or a new ruleset is loaded.
[TIP]
====
Adding `-v` to a `pfctl` ruleset verify or load will display the fully parsed rules exactly the way they will be loaded. This is extremely useful when debugging rules.
====
[[pftut-gateway]]
==== A Simple Gateway with NAT
This section demonstrates how to configure a FreeBSD system running PF to act as a gateway for at least one other machine. The gateway needs at least two network interfaces, each connected to a separate network. In this example, [.filename]#xl0# is connected to the Internet and [.filename]#xl1# is connected to the internal network.
First, enable gateway functionality so that the machine can forward the network traffic it receives on one interface to another. This sysctl setting enables forwarding of IPv4 packets:
[source,shell]
....
# sysctl net.inet.ip.forwarding=1
....
To forward IPv6 traffic, use:
[source,shell]
....
# sysctl net.inet6.ip6.forwarding=1
....
To enable these settings at system boot, use man:sysrc[8] to add them to [.filename]#/etc/rc.conf#:
[source,shell]
....
# sysrc gateway_enable=yes
# sysrc ipv6_gateway_enable=yes
....
Verify with `ifconfig` that both of the interfaces are up and running.
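For example, using the interface names assumed in this section:
[source,shell]
....
# ifconfig xl0
# ifconfig xl1
....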
Next, create the PF rules to allow the gateway to pass traffic. While the following rule allows stateful traffic from hosts of the internal network to pass to the gateway, the `to` keyword does not guarantee passage all the way from source to destination:
[.programlisting]
....
pass in on xl1 from xl1:network to xl0:network port $ports keep state
....
That rule only lets the traffic pass in to the gateway on the internal interface. To let the packets go further, a matching rule is needed:
[.programlisting]
....
pass out on xl0 from xl1:network to xl0:network port $ports keep state
....
While these two rules will work, rules this specific are rarely needed. For a busy network admin, a readable ruleset is a safer ruleset. The remainder of this section demonstrates how to keep the rules as simple as possible for readability. For example, those two rules could be replaced with one rule:
[.programlisting]
....
pass from xl1:network to any port $ports keep state
....
The `interface:network` notation can be replaced with a macro to make the ruleset even more readable. For example, a `$localnet` macro could be defined as the network directly attached to the internal interface (`xl1:network`). Alternatively, the definition of `$localnet` could be changed to an _IP address/netmask_ notation to denote a network, such as `192.168.100.0/24` for a subnet of private addresses.
If required, `$localnet` could even be defined as a list of networks. Whatever the specific needs, a sensible `$localnet` definition could be used in a typical pass rule as follows:
[.programlisting]
....
pass from $localnet to any port $ports keep state
....
The following sample ruleset allows all traffic initiated by machines on the internal network. It first defines two macros to represent the external and internal 3COM interfaces of the gateway.
[NOTE]
====
For dialup users, the external interface will use [.filename]#tun0#. For an ADSL connection, specifically those using PPP over Ethernet (PPPoE), the correct external interface is [.filename]#tun0#, not the physical Ethernet interface.
====
[.programlisting]
....
ext_if = "xl0" # macro for external interface - use tun0 for PPPoE
int_if = "xl1" # macro for internal interface
localnet = $int_if:network
# ext_if IP address could be dynamic, hence ($ext_if)
nat on $ext_if from $localnet to any -> ($ext_if)
block all
pass from { lo0, $localnet } to any keep state
....
This ruleset introduces the `nat` rule which is used to handle the network address translation from the non-routable addresses inside the internal network to the IP address assigned to the external interface. The parentheses surrounding the last part of the nat rule, `($ext_if)`, are included when the IP address of the external interface is dynamically assigned. This ensures that network traffic runs without serious interruptions even if the external IP address changes.
Note that this ruleset probably allows more traffic to pass out of the network than is needed. One reasonable setup could create this macro:
[.programlisting]
....
client_out = "{ ftp-data, ftp, ssh, domain, pop3, auth, nntp, http, \
https, cvspserver, 2628, 5999, 8000, 8080 }"
....
to use in the main pass rule:
[.programlisting]
....
pass inet proto tcp from $localnet to any port $client_out \
flags S/SA keep state
....
A few other pass rules may be needed. This one enables SSH on the external interface:
[.programlisting]
....
pass in inet proto tcp to $ext_if port ssh
....
This macro definition and rule allows DNS and NTP for internal clients:
[.programlisting]
....
udp_services = "{ domain, ntp }"
pass quick inet proto { tcp, udp } to any port $udp_services keep state
....
Note the `quick` keyword in this rule. Since the ruleset consists of several rules, it is important to understand the relationships between the rules in a ruleset. Rules are evaluated from top to bottom, in the sequence they are written. For each packet or connection evaluated by PF, _the last matching rule_ in the ruleset is the one which is applied. However, when a packet matches a rule which contains the `quick` keyword, the rule processing stops and the packet is treated according to that rule. This is very useful when an exception to the general rules is needed.
[[pftut-ftp]]
==== Creating an FTP Proxy
Configuring working FTP rules can be problematic due to the nature of the FTP protocol. FTP pre-dates firewalls by several decades and is insecure in its design. The most common points against using FTP include:
* Passwords are transferred in the clear.
* The protocol demands the use of at least two TCP connections (control and data) on separate ports.
* When a session is established, data is communicated using randomly selected ports.
All of these points present security challenges, even before considering any potential security weaknesses in client or server software. More secure alternatives for file transfer exist, such as man:sftp[1] or man:scp[1], which both feature authentication and data transfer over encrypted connections.
For those situations when FTP is required, PF provides redirection of FTP traffic to a small proxy program called man:ftp-proxy[8], which is included in the base system of FreeBSD. The role of the proxy is to dynamically insert and delete rules in the ruleset, using a set of anchors, to correctly handle FTP traffic.
To enable the FTP proxy, add this line to [.filename]#/etc/rc.conf#:
[.programlisting]
....
ftpproxy_enable="YES"
....
Then start the proxy by running `service ftp-proxy start`.
For a basic configuration, three elements need to be added to [.filename]#/etc/pf.conf#. First, the anchors which the proxy will use to insert the rules it generates for the FTP sessions:
[.programlisting]
....
nat-anchor "ftp-proxy/*"
rdr-anchor "ftp-proxy/*"
....
Second, a pass rule is needed to allow FTP traffic in to the proxy.
Third, redirection and NAT rules need to be defined before the filtering rules. Insert this `rdr` rule immediately after the `nat` rule:
[.programlisting]
....
rdr pass on $int_if proto tcp from any to any port ftp -> 127.0.0.1 port 8021
....
Finally, allow the redirected traffic to pass:
[.programlisting]
....
pass out proto tcp from $proxy to any port ftp
....
where `$proxy` expands to the address the proxy daemon is bound to.
Save [.filename]#/etc/pf.conf#, load the new rules, and verify from a client that FTP connections are working:
[source,shell]
....
# pfctl -f /etc/pf.conf
....
This example covers a basic setup where the clients in the local network need to contact FTP servers elsewhere. This basic configuration should work well with most combinations of FTP clients and servers. As shown in man:ftp-proxy[8], the proxy's behavior can be changed in various ways by adding options to the `ftpproxy_flags=` line. Some clients or servers may have specific quirks that must be compensated for in the configuration, or there may be a need to integrate the proxy in specific ways such as assigning FTP traffic to a specific queue.
To run an FTP server protected by PF and man:ftp-proxy[8], configure a separate `ftp-proxy` instance in reverse mode, using `-R`, on a separate port with its own redirecting pass rule, as sketched below.
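A minimal sketch of such a reverse-mode setup, assuming an internal FTP server at 10.0.0.10 and a proxy listening port of 8022 (both values are placeholders):
[source,shell]
....
# /usr/sbin/ftp-proxy -R 10.0.0.10 -p 8022 -b 127.0.0.1
....
together with a redirecting pass rule for inbound FTP connections in [.filename]#/etc/pf.conf#:
[.programlisting]
....
rdr pass on $ext_if proto tcp from any to $ext_if port ftp -> 127.0.0.1 port 8022
....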
[[pftut-icmp]]
==== Managing ICMP
Many of the tools used for debugging or troubleshooting a TCP/IP network rely on the Internet Control Message Protocol (ICMP), which was designed specifically with debugging in mind.
The ICMP protocol sends and receives _control messages_ between hosts and gateways, mainly to provide feedback to a sender about any unusual or difficult conditions enroute to the target host. Routers use ICMP to negotiate packet sizes and other transmission parameters in a process often referred to as _path MTU discovery_.
From a firewall perspective, some ICMP control messages are vulnerable to known attack vectors. Also, letting all diagnostic traffic pass unconditionally makes debugging easier, but it also makes it easier for others to extract information about the network. For these reasons, the following rule may not be optimal:
[.programlisting]
....
pass inet proto icmp from any to any
....
One solution is to let all ICMP traffic from the local network through while stopping all probes from outside the network:
[.programlisting]
....
pass inet proto icmp from $localnet to any keep state
pass inet proto icmp from any to $ext_if keep state
....
Additional options are available which demonstrate some of PF's flexibility. For example, rather than allowing all ICMP messages, one can specify the messages used by man:ping[8] and man:traceroute[8]. Start by defining a macro for that type of message:
[.programlisting]
....
icmp_types = "echoreq"
....
and a rule which uses the macro:
[.programlisting]
....
pass inet proto icmp all icmp-type $icmp_types keep state
....
If other types of ICMP packets are needed, expand `icmp_types` to a list of those packet types. Type `more /usr/src/sbin/pfctl/pfctl_parser.c` to see the list of ICMP message types supported by PF. Refer to http://www.iana.org/assignments/icmp-parameters/icmp-parameters.xhtml[http://www.iana.org/assignments/icmp-parameters/icmp-parameters.xhtml] for an explanation of each message type.
Since Unix `traceroute` uses UDP by default, another rule is needed to allow it:
[.programlisting]
....
# allow out the default range for traceroute(8):
pass out on $ext_if inet proto udp from any to any port 33433 >< 33626 keep state
....
Since `TRACERT.EXE` on Microsoft Windows systems uses ICMP echo request messages, only the first rule is needed to allow network traces from those systems. Unix `traceroute` can be instructed to use other protocols as well, and will use ICMP echo request messages if `-I` is used. Check the man:traceroute[8] man page for details.
[[pftut-pathmtudisc]]
===== Path MTU Discovery
Internet protocols are designed to be device independent, and one consequence of device independence is that the optimal packet size for a given connection cannot always be predicted reliably. The main constraint on packet size is the _Maximum Transmission Unit_ (MTU) which sets the upper limit on the packet size for an interface. Type `ifconfig` to view the MTUs for a system's network interfaces.
TCP/IP uses a process known as path MTU discovery to determine the right packet size for a connection. This process sends packets of varying sizes with the "Do not fragment" flag set, expecting an ICMP return packet of "type 3, code 4" when the upper limit has been reached. Type 3 means "destination unreachable", and code 4 is short for "fragmentation needed, but the do-not-fragment flag is set". To allow path MTU discovery in order to support connections to other MTUs, add the `destination unreachable` type to the `icmp_types` macro:
[.programlisting]
....
icmp_types = "{ echoreq, unreach }"
....
Since the pass rule already uses that macro, it does not need to be modified to support the new ICMP type:
[.programlisting]
....
pass inet proto icmp all icmp-type $icmp_types keep state
....
PF allows filtering on all variations of ICMP types and codes. The list of possible types and codes is documented in man:icmp[4] and man:icmp6[4].
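For example, a rule can match a specific type and code combination, or an ICMPv6 message type. The following lines are illustrative only:
[.programlisting]
....
# match only "destination unreachable, fragmentation needed" messages
pass inet proto icmp all icmp-type unreach code needfrag keep state
# allow ICMPv6 echo requests
pass inet6 proto icmp6 all icmp6-type echoreq keep state
....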
[[pftut-tables]]
==== Using Tables
Some types of data are relevant to filtering and redirection at a given time, but their definition is too long to be included in the ruleset file. PF supports the use of tables, which are defined lists that can be manipulated without needing to reload the entire ruleset, and which can provide fast lookups. Table names are always enclosed within `< >`, like this:
[.programlisting]
....
table <clients> { 192.168.2.0/24, !192.168.2.5 }
....
In this example, the `192.168.2.0/24` network is part of the table, except for the address `192.168.2.5`, which is excluded using the `!` operator. It is also possible to load tables from files where each item is on a separate line, as seen in this example [.filename]#/etc/clients#:
[.programlisting]
....
192.168.2.0/24
!192.168.2.5
....
To refer to the file, define the table like this:
[.programlisting]
....
table <clients> persist file "/etc/clients"
....
Once the table is defined, it can be referenced by a rule:
[.programlisting]
....
pass inet proto tcp from <clients> to any port $client_out flags S/SA keep state
....
A table's contents can be manipulated live, using `pfctl`. This example adds another network to the table:
[source,shell]
....
# pfctl -t clients -T add 192.168.1.0/16
....
Note that any changes made this way take effect immediately, making them ideal for testing, but will not survive a power failure or reboot. To make the changes permanent, modify the definition of the table in the ruleset or edit the file that the table refers to. One can maintain the on-disk copy of the table using a man:cron[8] job which dumps the table's contents to disk at regular intervals, using a command such as `pfctl -t clients -T show >/etc/clients`. Alternatively, [.filename]#/etc/clients# can be updated with the in-memory table contents:
[source,shell]
....
# pfctl -t clients -T replace -f /etc/clients
....
[[pftut-overload]]
==== Using Overload Tables to Protect SSH
Those who run SSH on an external interface have probably seen something like this in the authentication logs:
[.programlisting]
....
Sep 26 03:12:34 skapet sshd[25771]: Failed password for root from 200.72.41.31 port 40992 ssh2
Sep 26 03:12:34 skapet sshd[5279]: Failed password for root from 200.72.41.31 port 40992 ssh2
Sep 26 03:12:35 skapet sshd[5279]: Received disconnect from 200.72.41.31: 11: Bye Bye
Sep 26 03:12:44 skapet sshd[29635]: Invalid user admin from 200.72.41.31
Sep 26 03:12:44 skapet sshd[24703]: input_userauth_request: invalid user admin
Sep 26 03:12:44 skapet sshd[24703]: Failed password for invalid user admin from 200.72.41.31 port 41484 ssh2
....
This is indicative of a brute force attack where somebody or some program is trying to discover the user name and password which will let them into the system.
If external SSH access is needed for legitimate users, changing the default port used by SSH can offer some protection. However, PF provides a more elegant solution. Pass rules can contain limits on what connecting hosts can do and violators can be banished to a table of addresses which are denied some or all access. It is even possible to drop all existing connections from machines which overreach the limits.
To configure this, create this table in the tables section of the ruleset:
[.programlisting]
....
table <bruteforce> persist
....
Then, somewhere early in the ruleset, add rules to block brute force access while allowing legitimate access:
[.programlisting]
....
block quick from <bruteforce>
pass inet proto tcp from any to $localnet port $tcp_services \
flags S/SA keep state \
(max-src-conn 100, max-src-conn-rate 15/5, \
overload <bruteforce> flush global)
....
The part in parentheses defines the limits and the numbers should be changed to meet local requirements. It can be read as follows:
`max-src-conn` is the number of simultaneous connections allowed from one host.
`max-src-conn-rate` is the rate of new connections allowed from any single host (_15_) per number of seconds (_5_).
`overload <bruteforce>` means that any host which exceeds these limits gets its address added to the `bruteforce` table. The ruleset blocks all traffic from addresses in the `bruteforce` table.
Finally, `flush global` says that when a host reaches the limit, that all (`global`) of that host's connections will be terminated (`flush`).
[NOTE]
====
These rules will _not_ block slow bruteforcers, as described in http://home.nuug.no/\~peter/hailmary2013/[http://home.nuug.no/~peter/hailmary2013/].
====
This example ruleset is intended mainly as an illustration. For example, if a generous number of connections is wanted in general, but the desire is to be more restrictive when it comes to SSH, supplement the rule above with something like the one below, early on in the ruleset:
[.programlisting]
....
pass quick proto { tcp, udp } from any to any port ssh \
flags S/SA keep state \
(max-src-conn 15, max-src-conn-rate 5/3, \
overload <bruteforce> flush global)
....
[NOTE]
====
*It May Not be Necessary to Block All Overloaders:* +
It is worth noting that the overload mechanism is a general technique which does not apply exclusively to SSH, and it is not always optimal to entirely block all traffic from offenders.
For example, an overload rule could be used to protect a mail service or a web service, and the overload table could be used in a rule to assign offenders to a queue with a minimal bandwidth allocation or to redirect to a specific web page.
====
Over time, tables will be filled by overload rules and their size will grow incrementally, taking up more memory. Sometimes an IP address that is blocked is a dynamically assigned one, which has since been assigned to a host which has a legitimate reason to communicate with hosts in the local network.
For situations like these, pfctl provides the ability to expire table entries. For example, this command will remove `<bruteforce>` table entries which have not been referenced for `86400` seconds:
[source,shell]
....
# pfctl -t bruteforce -T expire 86400
....
Similar functionality is provided by package:security/expiretable[], which removes table entries which have not been accessed for a specified period of time.
Once installed, expiretable can be run to remove `<bruteforce>` table entries older than a specified age. This example removes all entries older than 24 hours:
[.programlisting]
....
/usr/local/sbin/expiretable -v -d -t 24h bruteforce
....
[[pftut-spamd]]
==== Protecting Against SPAM
Not to be confused with the spamd daemon which comes bundled with spamassassin, package:mail/spamd[] can be configured with PF to provide an outer defense against SPAM. This spamd hooks into the PF configuration using a set of redirections.
Spammers tend to send a large number of messages, and SPAM is mainly sent from a few spammer friendly networks and a large number of hijacked machines, both of which are reported to _blacklists_ fairly quickly.
When an SMTP connection from an address in a blacklist is received, spamd presents its banner and immediately switches to a mode where it answers SMTP traffic one byte at a time. This technique, which is intended to waste as much time as possible on the spammer's end, is called _tarpitting_. The specific implementation which uses one byte SMTP replies is often referred to as _stuttering_.
This example demonstrates the basic procedure for setting up spamd with automatically updated blacklists. Refer to the man pages which are installed with package:mail/spamd[] for more information.
[.procedure]
****
.Procedure: Configuring spamd
. Install the package:mail/spamd[] package or port. To use spamd's greylisting features, man:fdescfs[5] must be mounted at [.filename]#/dev/fd#. Add the following line to [.filename]#/etc/fstab#:
+
[.programlisting]
....
fdescfs /dev/fd fdescfs rw 0 0
....
+
Then, mount the filesystem:
+
[source,shell]
....
# mount fdescfs
....
. Next, edit the PF ruleset to include:
+
[.programlisting]
....
table <spamd> persist
table <spamd-white> persist
rdr pass on $ext_if inet proto tcp from <spamd> to \
{ $ext_if, $localnet } port smtp -> 127.0.0.1 port 8025
rdr pass on $ext_if inet proto tcp from !<spamd-white> to \
{ $ext_if, $localnet } port smtp -> 127.0.0.1 port 8025
....
+
The two tables `<spamd>` and `<spamd-white>` are essential. SMTP traffic from an address listed in `<spamd>` but not in `<spamd-white>` is redirected to the spamd daemon listening at port 8025.
. The next step is to configure spamd in [.filename]#/usr/local/etc/spamd.conf# and to add some [.filename]#rc.conf# parameters.
+
The installation of package:mail/spamd[] includes a sample configuration file ([.filename]#/usr/local/etc/spamd.conf.sample#) and a man page for [.filename]#spamd.conf#. Refer to these for additional configuration options beyond those shown in this example.
+
One of the first lines in the configuration file that does not begin with a `#` comment sign is the block which defines the `all` list, specifying the lists to use:
+
[.programlisting]
....
all:\
:traplist:whitelist:
....
+
This entry adds the desired blacklists, separated by colons (`:`). To use a whitelist to subtract addresses from a blacklist, add the name of the whitelist _immediately_ after the name of that blacklist. For example: `:blacklist:whitelist:`.
+
This is followed by the specified blacklist's definition:
+
[.programlisting]
....
traplist:\
:black:\
:msg="SPAM. Your address %A has sent spam within the last 24 hours":\
:method=http:\
:file=www.openbsd.org/spamd/traplist.gz
....
+
where the first line is the name of the blacklist and the second line specifies the list type. The `msg` field contains the message to display to blacklisted senders during the SMTP dialogue. The `method` field specifies how spamd-setup fetches the list data; supported methods are `http`, `ftp`, from a `file` in a mounted file system, and via `exec` of an external program. Finally, the `file` field specifies the name of the file spamd expects to receive.
+
The definition of the specified whitelist is similar, but omits the `msg` field since a message is not needed:
+
[.programlisting]
....
whitelist:\
:white:\
:method=file:\
:file=/var/mail/whitelist.txt
....
+
[TIP]
====
*Choose Data Sources with Care:* +
Using all the blacklists in the sample [.filename]#spamd.conf# will blacklist large blocks of the Internet. Administrators need to edit the file to create an optimal configuration which uses applicable data sources and, when necessary, uses custom lists.
====
+
Next, add this entry to [.filename]#/etc/rc.conf#. Additional flags are described in the man page specified by the comment:
+
[.programlisting]
....
spamd_flags="-v" # use "" and see spamd-setup(8) for flags
....
+
When finished, reload the ruleset, start spamd by typing `service obspamd start`, and complete the configuration using `spamd-setup`. Finally, create a man:cron[8] job which calls `spamd-setup` to update the tables at reasonable intervals, as in the example crontab entry after this procedure.
****
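As a sketch, a crontab entry like the following refreshes the blacklists once an hour; the schedule and the installation path of `spamd-setup` are assumptions and should be adjusted to the local setup:
[.programlisting]
....
# refresh the spamd blacklists once an hour (adjust the path to where
# the port installed spamd-setup)
0 * * * * /usr/local/sbin/spamd-setup
....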
On a typical gateway in front of a mail server, hosts will start getting trapped within a few seconds to several minutes.
PF also supports _greylisting_, which temporarily rejects messages from unknown hosts with _45n_ codes. Messages from greylisted hosts which try again within a reasonable time are let through. Traffic from senders which are set up to behave within the limits set by RFC 1123 and RFC 2821 is immediately let through.
More information about greylisting as a technique can be found at the http://www.greylisting.org/[greylisting.org] web site. The most amazing thing about greylisting, apart from its simplicity, is that it still works. Spammers and malware writers have been very slow to adapt to bypass this technique.
The basic procedure for configuring greylisting is as follows:
[.procedure]
.Procedure: Configuring Greylisting
. Make sure that man:fdescfs[5] is mounted as described in Step 1 of the previous Procedure.
. To run spamd in greylisting mode, add this line to [.filename]#/etc/rc.conf#:
+
[.programlisting]
....
spamd_grey="YES" # use spamd greylisting if YES
....
+
Refer to the spamd man page for descriptions of additional related parameters.
. To complete the greylisting setup:
+
[source,shell]
....
# service obspamd restart
# service obspamlogd start
....
Behind the scenes, the spamdb database tool and the spamlogd whitelist updater perform essential functions for the greylisting feature. spamdb is the administrator's main interface to managing the black, grey, and white lists via the contents of the [.filename]#/var/db/spamdb# database.
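For example, running `spamdb` with no arguments dumps the current database, and `-a` adds a whitelist entry. The grep pattern assumes the usual `TRAPPED` entry prefix in the output, and the address below is a placeholder:
[source,shell]
....
# spamdb | grep TRAPPED
# spamdb -a 203.0.113.5
....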
[[pftut-hygiene]]
==== Network Hygiene
This section describes how `block-policy`, `scrub`, and `antispoof` can be used to make the ruleset behave sanely.
The `block-policy` is an option which can be set in the `options` part of the ruleset, which precedes the redirection and filtering rules. This option determines which feedback, if any, PF sends to hosts that are blocked by a rule. The option has two possible values: `drop` drops blocked packets with no feedback, and `return` returns a status code such as `Connection refused`.
If not set, the default policy is `drop`. To change the `block-policy`, specify the desired value:
[.programlisting]
....
set block-policy return
....
In PF, `scrub` is a keyword which enables network packet normalization. This process reassembles fragmented packets and drops TCP packets that have invalid flag combinations. Enabling `scrub` provides a measure of protection against certain kinds of attacks based on incorrect handling of packet fragments. A number of options are available, but the simplest form is suitable for most configurations:
[.programlisting]
....
scrub in all
....
Some services, such as NFS, require specific fragment handling options. Refer to https://home.nuug.no/\~peter/pf/en/scrub.html[https://home.nuug.no/~peter/pf/en/scrub.html] for more information.
This example reassembles fragments, clears the "do not fragment" bit, and sets the maximum segment size to 1440 bytes:
[.programlisting]
....
scrub in all fragment reassemble no-df max-mss 1440
....
The `antispoof` mechanism protects against activity from spoofed or forged IP addresses, mainly by blocking packets appearing on interfaces and in directions which are logically not possible.
These rules weed out spoofed traffic coming in from the rest of the world as well as any spoofed packets which originate in the local network:
[.programlisting]
....
antispoof for $ext_if
antispoof for $int_if
....
[[pftut-unrouteables]]
==== Handling Non-Routable Addresses
Even with a properly configured gateway to handle network address translation, one may have to compensate for other people's misconfigurations. A common misconfiguration is to let traffic with non-routable addresses out to the Internet. Since traffic from non-routable addresses can play a part in several DoS attack techniques, consider explicitly blocking traffic from non-routable addresses from entering the network through the external interface.
In this example, a macro containing non-routable addresses is defined, then used in blocking rules. Traffic to and from these addresses is quietly dropped on the gateway's external interface.
[.programlisting]
....
martians = "{ 127.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12, \
10.0.0.0/8, 169.254.0.0/16, 192.0.2.0/24, \
0.0.0.0/8, 240.0.0.0/4 }"
block drop in quick on $ext_if from $martians to any
block drop out quick on $ext_if from any to $martians
....
=== Enabling ALTQ
On FreeBSD, ALTQ can be used with PF to provide Quality of Service (QoS). Once ALTQ is enabled, queues can be defined in the ruleset to determine the processing priority of outbound packets.
Before enabling ALTQ, refer to man:altq[4] to determine if the drivers for the network cards installed on the system support it.
ALTQ is not available as a loadable kernel module. If the system's interfaces support ALTQ, create a custom kernel using the instructions in crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel]. The following kernel options are available. The first is needed to enable ALTQ. At least one of the other options is necessary to specify the queueing scheduler algorithm:
[.programlisting]
....
options ALTQ
options ALTQ_CBQ # Class Based Queuing (CBQ)
options ALTQ_RED # Random Early Detection (RED)
options ALTQ_RIO # RED In/Out
options ALTQ_HFSC # Hierarchical Packet Scheduler (HFSC)
options ALTQ_PRIQ # Priority Queuing (PRIQ)
....
The following scheduler algorithms are available:
CBQ::
Class Based Queuing (CBQ) is used to divide a connection's bandwidth into different classes or queues to prioritize traffic based on filter rules.
RED::
Random Early Detection (RED) is used to avoid network congestion by measuring the length of the queue and comparing it to the minimum and maximum thresholds for the queue. When the queue is over the maximum, all new packets are randomly dropped.
RIO::
In Random Early Detection In and Out (RIO) mode, RED maintains multiple average queue lengths and multiple threshold values, one for each QoS level.
HFSC::
Hierarchical Fair Service Curve Packet Scheduler (HFSC) is described in http://www-2.cs.cmu.edu/\~hzhang/HFSC/main.html[http://www-2.cs.cmu.edu/~hzhang/HFSC/main.html].
PRIQ::
Priority Queuing (PRIQ) always passes traffic that is in a higher queue first.
More information about the scheduling algorithms and example rulesets are available at the https://web.archive.org/web/20151109213426/http://www.openbsd.org/faq/pf/queueing.html[OpenBSD's web archive].
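As a hedged illustration of what a queue definition looks like, a simple CBQ setup might divide outbound bandwidth between bulk traffic and SSH; the interface macro and bandwidth figures below are placeholders:
[.programlisting]
....
altq on $ext_if cbq bandwidth 2Mb queue { q_std, q_ssh }
queue q_std bandwidth 80% cbq(default)
queue q_ssh bandwidth 20% cbq(borrow)
# assign interactive SSH traffic to the smaller, borrowing queue
pass out on $ext_if proto tcp to any port ssh keep state queue q_ssh
....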
[[firewalls-ipfw]]
== IPFW
IPFW is a stateful firewall written for FreeBSD which supports both IPv4 and IPv6. It comprises several components: the kernel firewall filter rule processor and its integrated packet accounting facility, the logging facility, NAT, the man:dummynet[4] traffic shaper, a forward facility, a bridge facility, and an ipstealth facility.
FreeBSD provides a sample ruleset in [.filename]#/etc/rc.firewall# which defines several firewall types for common scenarios to assist novice users in generating an appropriate ruleset. IPFW provides a powerful syntax which advanced users can use to craft customized rulesets that meet the security requirements of a given environment.
This section describes how to enable IPFW, provides an overview of its rule syntax, and demonstrates several rulesets for common configuration scenarios.
[[firewalls-ipfw-enable]]
=== Enabling IPFW
IPFW is included in the basic FreeBSD install as a kernel loadable module, meaning that a custom kernel is not needed in order to enable IPFW.
For those users who wish to statically compile IPFW support into a custom kernel, see <<firewalls-ipfw-kernelconfig>>.
To configure the system to enable IPFW at boot time, add `firewall_enable="YES"` to [.filename]#/etc/rc.conf#:
[source,shell]
....
# sysrc firewall_enable="YES"
....
To use one of the default firewall types provided by FreeBSD, add another line which specifies the type:
[source,shell]
....
# sysrc firewall_type="open"
....
The available types are:
* `open`: passes all traffic.
* `client`: protects only this machine.
* `simple`: protects the whole network.
* `closed`: entirely disables IP traffic except for the loopback interface.
* `workstation`: protects only this machine using stateful rules.
* `UNKNOWN`: disables the loading of firewall rules.
* [.filename]#filename#: full path of the file containing the firewall ruleset.
If `firewall_type` is set to either `client` or `simple`, modify the default rules found in [.filename]#/etc/rc.firewall# to fit the configuration of the system.
Note that the `filename` type is used to load a custom ruleset.
An alternate way to load a custom ruleset is to set the `firewall_script` variable to the absolute path of an _executable script_ that includes IPFW commands. The examples used in this section assume that the `firewall_script` is set to [.filename]#/etc/ipfw.rules#:
[source,shell]
....
# sysrc firewall_script="/etc/ipfw.rules"
....
To enable logging through man:syslogd[8], include this line:
[source,shell]
....
# sysrc firewall_logging="YES"
....
[WARNING]
====
Only firewall rules with the `log` option will be logged. The default rules do not include this option, so it must be added manually. Therefore, it is advisable to edit the default ruleset to enable logging. In addition, log rotation may be desired if the logs are stored in a separate file.
====
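For example, a hypothetical catch-all rule that denies and logs all remaining IP traffic (the rule number is a placeholder) shows where the `log` keyword goes:
[source,shell]
....
# ipfw add 00999 deny log ip from any to any
....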
There is no [.filename]#/etc/rc.conf# variable to set logging limits. To limit the number of times a rule is logged per connection attempt, specify the number using this line in [.filename]#/etc/sysctl.conf#:
[source,shell]
....
# echo "net.inet.ip.fw.verbose_limit=5" >> /etc/sysctl.conf
....
To enable logging through a dedicated interface named `ipfw0`, add this line to [.filename]#/etc/rc.conf# instead:
[source,shell]
....
# sysrc firewall_logif="YES"
....
Then use tcpdump to see what is being logged:
[source,shell]
....
# tcpdump -t -n -i ipfw0
....
[TIP]
====
There is no overhead due to logging unless tcpdump is attached.
====
After saving the needed edits, start the firewall. To enable logging limits now, also set the `sysctl` value specified above:
[source,shell]
....
# service ipfw start
# sysctl net.inet.ip.fw.verbose_limit=5
....
[[firewalls-ipfw-rules]]
=== IPFW Rule Syntax
When a packet enters the IPFW firewall, it is compared against the first rule in the ruleset and progresses one rule at a time, moving from top to bottom in sequence. When the packet matches the selection parameters of a rule, the rule's action is executed and the search of the ruleset terminates for that packet. This is referred to as "first match wins". If the packet does not match any of the rules, it gets caught by the mandatory IPFW default rule number 65535, which denies all packets and silently discards them. However, if the packet matches a rule that contains the `count`, `skipto`, or `tee` keywords, the search continues. Refer to man:ipfw[8] for details on how these keywords affect rule processing.
When creating an IPFW rule, keywords must be written in the following order. Some keywords are mandatory while others are optional. The words shown in uppercase represent variables and the words shown in lowercase must precede the variable that follows them. The `#` symbol marks the start of a comment and may appear at the end of a rule or on its own line. Blank lines are ignored.
`_CMD RULE_NUMBER set SET_NUMBER ACTION log LOG_AMOUNT PROTO from SRC SRC_PORT to DST DST_PORT OPTIONS_`
This section provides an overview of these keywords and their options. It is not an exhaustive list of every possible option. Refer to man:ipfw[8] for a complete description of the rule syntax that can be used when creating IPFW rules.
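As an illustration of this ordering, the following hypothetical rule (the rule number, interface name, and port are placeholders) allows inbound SSH to this host and logs each match:
[source,shell]
....
# ipfw add 00500 set 0 allow log tcp from any to me 22 in via em0
....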
CMD::
Every rule must start with `ipfw add`.
RULE_NUMBER::
Each rule is associated with a number from `1` to `65534`. The number is used to indicate the order of rule processing. Multiple rules can have the same number, in which case they are applied according to the order in which they have been added.
SET_NUMBER::
Each rule is associated with a set number from `0` to `31`. Sets can be individually disabled or enabled, making it possible to quickly add or delete a set of rules. If a SET_NUMBER is not specified, the rule will be added to set `0`.
ACTION::
A rule can be associated with one of the following actions. The specified action will be executed when the packet matches the selection criterion of the rule.
+
`allow | accept | pass | permit`: these keywords are equivalent and allow packets that match the rule.
+
`check-state`: checks the packet against the dynamic state table. If a match is found, the action associated with the rule which generated this dynamic rule is executed, otherwise evaluation moves to the next rule. A `check-state` rule does not have selection criteria. If no `check-state` rule is present in the ruleset, the dynamic rules table is checked at the first `keep-state` or `limit` rule.
+
`count`: updates counters for all packets that match the rule. The search continues with the next rule.
+
`deny | drop`: either word silently discards packets that match this rule.
+
Additional actions are available. Refer to man:ipfw[8] for details.
LOG_AMOUNT::
When a packet matches a rule with the `log` keyword, a message will be logged to man:syslogd[8] with a facility name of `SECURITY`. Logging only occurs if the number of packets logged for that particular rule does not exceed a specified LOG_AMOUNT. If no LOG_AMOUNT is specified, the limit is taken from the value of `net.inet.ip.fw.verbose_limit`. A value of zero removes the logging limit. Once the limit is reached, logging can be re-enabled by clearing the logging counter or the packet counter for that rule, using `ipfw resetlog`.
+
[NOTE]
====
Logging is done after all other packet matching conditions have been met, and before performing the final action on the packet. The administrator decides which rules to enable logging on.
====
PROTO::
This optional value can be used to specify any protocol name or number found in [.filename]#/etc/protocols#.
SRC::
The `from` keyword must be followed by the source address or a keyword that represents the source address. An address can be represented by `any`, `me` (any address configured on an interface on this system), `me6` (any IPv6 address configured on an interface on this system), or `table` followed by the number of a lookup table which contains a list of addresses. When specifying an IP address, it can be optionally followed by its CIDR mask or subnet mask. For example, `1.2.3.4/25` or `1.2.3.4:255.255.255.128`.
SRC_PORT::
An optional source port can be specified using the port number or name from [.filename]#/etc/services#.
DST::
The `to` keyword must be followed by the destination address or a keyword that represents the destination address. The same keywords and addresses described in the SRC section can be used to describe the destination.
DST_PORT::
An optional destination port can be specified using the port number or name from [.filename]#/etc/services#.
OPTIONS::
Several keywords can follow the source and destination. As the name suggests, OPTIONS are optional. Commonly used options include `in` or `out`, which specify the direction of packet flow, `icmptypes` followed by the type of ICMP message, and `keep-state`.
+
When a `keep-state` rule is matched, the firewall will create a dynamic rule which matches bidirectional traffic between the source and destination addresses and ports using the same protocol.
+
The dynamic rules facility is vulnerable to resource depletion from a SYN-flood attack which would open a huge number of dynamic rules. To counter this type of attack with IPFW, use `limit`. This option limits the number of simultaneous sessions by checking the open dynamic rules, counting the number of times this rule and IP address combination occurred. If this count is greater than the value specified by `limit`, the packet is discarded. An example appears after this list.
+
Dozens of OPTIONS are available. Refer to man:ipfw[8] for a description of each available option.
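For example, a hypothetical inbound rule (the rule number, interface, and limit value are placeholders) that allows SSH to this host while capping each remote address at five dynamic sessions could look like:
[source,shell]
....
# ipfw add 00400 allow tcp from any to me 22 in via em0 setup limit src-addr 5
....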
=== Example Ruleset
This section demonstrates how to create an example stateful firewall ruleset script named [.filename]#/etc/ipfw.rules#. In this example, all connection rules use `in` or `out` to clarify the direction. They also use `via` _interface-name_ to specify the interface the packet is traveling over.
[NOTE]
====
When first creating or testing a firewall ruleset, consider temporarily setting this tunable in [.filename]#/boot/loader.conf#:
[.programlisting]
....
net.inet.ip.fw.default_to_accept="1"
....
This sets the default policy of man:ipfw[8] to be more permissive than the default `deny ip from any to any`, making it slightly more difficult to get locked out of the system right after a reboot.
====
The firewall script begins by indicating that it is a Bourne shell script and flushes any existing rules. It then creates the `cmd` variable so that `ipfw add` does not have to be typed at the beginning of every rule. It also defines the `pif` variable which represents the name of the interface that is attached to the Internet.
[.programlisting]
....
#!/bin/sh
# Flush out the list before we begin.
ipfw -q -f flush
# Set rules command prefix
cmd="ipfw -q add"
pif="dc0" # interface name of NIC attached to Internet
....
The first two rules allow all traffic on the trusted internal interface and on the loopback interface:
[.programlisting]
....
# Change xl0 to LAN NIC interface name
$cmd 00005 allow all from any to any via xl0
# No restrictions on Loopback Interface
$cmd 00010 allow all from any to any via lo0
....
The next rule allows the packet through if it matches an existing entry in the dynamic rules table:
[.programlisting]
....
$cmd 00101 check-state
....
The next set of rules defines which stateful connections internal systems can create to hosts on the Internet:
[.programlisting]
....
# Allow access to public DNS
# Replace x.x.x.x with the IP address of a public DNS server
# and repeat for each DNS server in /etc/resolv.conf
$cmd 00110 allow tcp from any to x.x.x.x 53 out via $pif setup keep-state
$cmd 00111 allow udp from any to x.x.x.x 53 out via $pif keep-state
# Allow access to ISP's DHCP server for cable/DSL configurations.
# Use the first rule and check log for IP address.
# Then, uncomment the second rule, input the IP address, and delete the first rule
$cmd 00120 allow log udp from any to any 67 out via $pif keep-state
#$cmd 00120 allow udp from any to x.x.x.x 67 out via $pif keep-state
# Allow outbound HTTP and HTTPS connections
$cmd 00200 allow tcp from any to any 80 out via $pif setup keep-state
$cmd 00220 allow tcp from any to any 443 out via $pif setup keep-state
# Allow outbound email connections
$cmd 00230 allow tcp from any to any 25 out via $pif setup keep-state
$cmd 00231 allow tcp from any to any 110 out via $pif setup keep-state
# Allow outbound ping
$cmd 00250 allow icmp from any to any out via $pif keep-state
# Allow outbound NTP
$cmd 00260 allow udp from any to any 123 out via $pif keep-state
# Allow outbound SSH
$cmd 00280 allow tcp from any to any 22 out via $pif setup keep-state
# deny and log all other outbound connections
$cmd 00299 deny log all from any to any out via $pif
....
The next set of rules controls connections from Internet hosts to the internal network. It starts by denying packets typically associated with attacks and then explicitly allows specific types of connections. All the authorized services that originate from the Internet use `limit` to prevent flooding.
[.programlisting]
....
# Deny all inbound traffic from non-routable reserved address spaces
$cmd 00300 deny all from 192.168.0.0/16 to any in via $pif #RFC 1918 private IP
$cmd 00301 deny all from 172.16.0.0/12 to any in via $pif #RFC 1918 private IP
$cmd 00302 deny all from 10.0.0.0/8 to any in via $pif #RFC 1918 private IP
$cmd 00303 deny all from 127.0.0.0/8 to any in via $pif #loopback
$cmd 00304 deny all from 0.0.0.0/8 to any in via $pif #loopback
$cmd 00305 deny all from 169.254.0.0/16 to any in via $pif #DHCP auto-config
$cmd 00306 deny all from 192.0.2.0/24 to any in via $pif #reserved for docs
$cmd 00307 deny all from 204.152.64.0/23 to any in via $pif #Sun cluster interconnect
$cmd 00308 deny all from 224.0.0.0/3 to any in via $pif #Class D & E multicast
# Deny public pings
$cmd 00310 deny icmp from any to any in via $pif
# Deny ident
$cmd 00315 deny tcp from any to any 113 in via $pif
# Deny all Netbios services.
$cmd 00320 deny tcp from any to any 137 in via $pif
$cmd 00321 deny tcp from any to any 138 in via $pif
$cmd 00322 deny tcp from any to any 139 in via $pif
$cmd 00323 deny tcp from any to any 81 in via $pif
# Deny fragments
$cmd 00330 deny all from any to any frag in via $pif
# Deny ACK packets that did not match the dynamic rule table
$cmd 00332 deny tcp from any to any established in via $pif
# Allow traffic from ISP's DHCP server.
# Replace x.x.x.x with the same IP address used in rule 00120.
#$cmd 00360 allow udp from any to x.x.x.x 67 in via $pif keep-state
# Allow HTTP connections to internal web server
$cmd 00400 allow tcp from any to me 80 in via $pif setup limit src-addr 2
# Allow inbound SSH connections
$cmd 00410 allow tcp from any to me 22 in via $pif setup limit src-addr 2
# Reject and log all other incoming connections
$cmd 00499 deny log all from any to any in via $pif
....
The last rule logs all packets that do not match any of the rules in the ruleset:
[.programlisting]
....
# Everything else is denied and logged
$cmd 00999 deny log all from any to any
....
[[in-kernel-nat]]
=== In-kernel NAT
FreeBSD's IPFW firewall has two implementations of NAT: the userland implementation man:natd[8], and the more recent in-kernel NAT implementation. Both work in conjunction with IPFW to provide network address translation. This can be used to provide an Internet Connection Sharing solution so that several internal computers can connect to the Internet using a single public IP address.
To do this, the FreeBSD machine connected to the Internet must act as a gateway. This system must have two NICs, where one is connected to the Internet and the other is connected to the internal LAN. Each machine connected to the LAN should be assigned an IP address in the private network space, as defined by https://www.ietf.org/rfc/rfc1918.txt[RFC 1918].
Some additional configuration is needed in order to enable the in-kernel NAT facility of IPFW. To enable in-kernel NAT support at boot time, the following must be set in [.filename]#/etc/rc.conf#:
[.programlisting]
....
gateway_enable="YES"
firewall_enable="YES"
firewall_nat_enable="YES"
....
[NOTE]
====
Setting `firewall_nat_enable` without also setting `firewall_enable` has no effect, because the in-kernel NAT implementation only works in conjunction with IPFW.
====
When the ruleset contains stateful rules, the positioning of the NAT rule is critical and the `skipto` action is used. The `skipto` action requires a rule number so that it knows which rule to jump to. The example below builds upon the firewall ruleset shown in the previous section. It adds some additional entries and modifies some existing rules in order to configure the firewall for in-kernel NAT. It starts by adding some additional variables which represent the rule number to skip to, the `keep-state` option, and a list of TCP ports which will be used to reduce the number of rules.
[.programlisting]
....
#!/bin/sh
ipfw -q -f flush
cmd="ipfw -q add"
skip="skipto 1000"
pif=dc0
ks="keep-state"
good_tcpo="22,25,37,53,80,443,110"
....
With in-kernel NAT it is necessary to disable TCP segmentation offloading (TSO) due to the architecture of man:libalias[3], a library implemented as a kernel module to provide the in-kernel NAT facility of IPFW. TSO can be disabled on a per-interface basis using man:ifconfig[8] or system-wide using man:sysctl[8]. To disable TSO system-wide, the following must be set in [.filename]#/etc/sysctl.conf#:
[.programlisting]
....
net.inet.tcp.tso="0"
....
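To instead disable TSO only on the public interface used in this example, [.filename]#dc0#, a command along these lines can be used:
[source,shell]
....
# ifconfig dc0 -tso
....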
A NAT instance must also be configured. It is possible to have multiple NAT instances, each with its own configuration, but this example only needs one: NAT instance number 1. The configuration can take a few options such as: `if`, which indicates the public interface; `same_ports`, which ensures that aliased ports and local port numbers are mapped the same; `unreg_only`, which causes only unregistered (private) address spaces to be processed by the NAT instance; and `reset`, which helps to keep a functioning NAT instance even when the public IP address of the IPFW machine changes. For all the options that can be passed to a single NAT instance configuration, consult man:ipfw[8]. When configuring a stateful NAT firewall, it is necessary to allow translated packets to be reinjected into the firewall for further processing. This can be achieved by disabling `one_pass` behavior at the start of the firewall script.
[.programlisting]
....
ipfw disable one_pass
ipfw -q nat 1 config if $pif same_ports unreg_only reset
....
The inbound NAT rule is inserted _after_ the two rules which allow all traffic on the trusted and loopback interfaces and after the reassemble rule, but _before_ the `check-state` rule. It is important that the rule number selected for this NAT rule, in this example `100`, is higher than the first three rules and lower than the `check-state` rule. Furthermore, because of the behavior of in-kernel NAT it is advised to place a reassemble rule just before the first NAT rule and after the rules that allow traffic on the trusted interfaces. Normally, IP fragmentation should not happen, but when dealing with IPSEC/ESP/GRE tunneling traffic it might, and reassembling the fragments is necessary before handing the complete packet over to the in-kernel NAT facility.
[NOTE]
====
The reassemble rule was not needed with userland man:natd[8] because the internal workings of the IPFW `divert` action already take care of reassembling packets before delivery to the socket, as stated in man:ipfw[8].
The NAT instance and rule number used in this example do not match the default NAT instance and rule number created by [.filename]#rc.firewall#, the script that sets up the default firewall rules included in FreeBSD.
====
[.programlisting]
....
$cmd 005 allow all from any to any via xl0 # exclude LAN traffic
$cmd 010 allow all from any to any via lo0 # exclude loopback traffic
$cmd 099 reass all from any to any in # reassemble inbound packets
$cmd 100 nat 1 ip from any to any in via $pif # NAT any inbound packets
# Allow the packet through if it has an existing entry in the dynamic rules table
$cmd 101 check-state
....
The outbound rules are modified to replace the `allow` action with the `$skip` variable, indicating that rule processing will continue at rule `1000`. The seven `tcp` rules have been replaced by rule `125` as the `$good_tcpo` variable contains the seven allowed outbound ports.
[NOTE]
====
Remember that IPFW's performance is largely determined by the number of rules present in the ruleset.
====
[.programlisting]
....
# Authorized outbound packets
$cmd 120 $skip udp from any to x.x.x.x 53 out via $pif $ks
$cmd 121 $skip udp from any to x.x.x.x 67 out via $pif $ks
$cmd 125 $skip tcp from any to any $good_tcpo out via $pif setup $ks
$cmd 130 $skip icmp from any to any out via $pif $ks
....
The inbound rules remain the same, except for the very last rule which removes the `via $pif` in order to catch both inbound and outbound traffic. The NAT rule must follow this last outbound rule, must have a higher number than that last rule, and its rule number must be referenced by the `skipto` action. In this ruleset, rule number `1000` passes all packets to the configured instance for NAT processing. The next rule allows any packet which has undergone NAT processing to pass.
[.programlisting]
....
$cmd 999 deny log all from any to any
$cmd 1000 nat 1 ip from any to any out via $pif # skipto location for outbound stateful rules
$cmd 1001 allow ip from any to any
....
In this example, rules `100`, `101`, `125`, `1000`, and `1001` control the address translation of the outbound and inbound packets so that the entries in the dynamic state table always register the private LAN IP address.
Consider an internal web browser which initiates a new outbound HTTP session over port 80. When the first outbound packet enters the firewall, it does not match rule `100` because it is headed out rather than in. It passes rule `101` because this is the first packet and it has not been posted to the dynamic state table yet. The packet finally matches rule `125` as it is outbound on an allowed port and has a source IP address from the internal LAN. On matching this rule, two actions take place. First, the `keep-state` action adds an entry to the dynamic state table and the specified action, `skipto rule 1000`, is executed. Next, the packet undergoes NAT and is sent out to the Internet. This packet makes its way to the destination web server, where a response packet is generated and sent back. This new packet enters the top of the ruleset. It matches rule `100` and has its destination IP address mapped back to the original internal address. It is then processed by the `check-state` rule, found in the table as an existing session, and released to the LAN.
On the inbound side, the ruleset has to deny bad packets and allow only authorized services. A packet which matches an inbound rule is posted to the dynamic state table and the packet is released to the LAN. The packet generated as a response is recognized by the `check-state` rule as belonging to an existing session. It is then sent to rule `1000` to undergo NAT before being released to the outbound interface.
[NOTE]
====
Transitioning from userland man:natd[8] to in-kernel NAT might appear seamless at first, but there is a small catch. When using the GENERIC kernel, IPFW will load the [.filename]#libalias.ko# kernel module when `firewall_nat_enable` is enabled in [.filename]#/etc/rc.conf#. The [.filename]#libalias.ko# kernel module only provides basic NAT functionality, whereas the userland implementation man:natd[8] has all NAT functionality available in its userland library without any extra configuration. All functionality refers to the following kernel modules that can additionally be loaded when needed besides the standard [.filename]#libalias.ko# kernel module: [.filename]#alias_ftp.ko#, [.filename]#alias_nbt.ko#, [.filename]#alias_skinny.ko#, [.filename]#alias_irc.ko#, [.filename]#alias_pptp.ko#, and [.filename]#alias_smedia.ko#, using the `kld_list` directive in [.filename]#/etc/rc.conf#. If a custom kernel is used, the full functionality of the userland library can be compiled into the kernel using `options LIBALIAS`.
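For example, a sketch of loading two of these helper modules at boot time via [.filename]#/etc/rc.conf# (the choice of modules here is only an illustration):
[.programlisting]
....
kld_list="alias_ftp alias_pptp"
....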
====
==== Port Redirection
The drawback with NAT in general is that the LAN clients are not accessible from the Internet. Clients on the LAN can make outgoing connections to the world but cannot receive incoming ones. This presents a problem if trying to run Internet services on one of the LAN client machines. A simple way around this is to redirect selected Internet ports on the NAT providing machine to a LAN client.
For example, an IRC server runs on client `A` and a web server runs on client `B`. For this to work properly, connections received on ports 6667 (IRC) and 80 (HTTP) must be redirected to the respective machines.
With in-kernel NAT all configuration is done in the NAT instance configuration. For a full list of options that an in-kernel NAT instance can use, consult man:ipfw[8]. The IPFW syntax follows the syntax of natd. The syntax for `redirect_port` is as follows:
[.programlisting]
....
redirect_port proto targetIP:targetPORT[-targetPORT]
[aliasIP:]aliasPORT[-aliasPORT]
[remoteIP[:remotePORT[-remotePORT]]]
....
To configure the above example setup, the arguments should be:
[.programlisting]
....
redirect_port tcp 192.168.0.2:6667 6667
redirect_port tcp 192.168.0.3:80 80
....
After adding these arguments to the configuration of NAT instance 1 in the above ruleset, the TCP ports will be port forwarded to the LAN client machines running the IRC and HTTP services.
[.programlisting]
....
ipfw -q nat 1 config if $pif same_ports unreg_only reset \
redirect_port tcp 192.168.0.2:6667 6667 \
redirect_port tcp 192.168.0.3:80 80
....
Port ranges, rather than individual ports, can also be specified with `redirect_port`. For example, _tcp 192.168.0.2:2000-3000 2000-3000_ would redirect all connections received on ports 2000 to 3000 to ports 2000 to 3000 on client `A`.
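Written as a configuration argument, that range redirection would look like this:
[.programlisting]
....
redirect_port tcp 192.168.0.2:2000-3000 2000-3000
....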
==== Address Redirection
Address redirection is useful if more than one IP address is available. Each LAN client can be assigned its own external IP address by man:ipfw[8], which will then rewrite outgoing packets from the LAN clients with the proper external IP address and redirects all traffic incoming on that particular IP address back to the specific LAN client. This is also known as static NAT. For example, if IP addresses `128.1.1.1`, `128.1.1.2`, and `128.1.1.3` are available, `128.1.1.1` can be used as the man:ipfw[8] machine's external IP address, while `128.1.1.2` and `128.1.1.3` are forwarded back to LAN clients `A` and `B`.
The `redirect_address` syntax is as below, where `localIP` is the internal IP address of the LAN client, and `publicIP` the external IP address corresponding to the LAN client.
[.programlisting]
....
redirect_address localIP publicIP
....
In the example, the arguments would read:
[.programlisting]
....
redirect_address 192.168.0.2 128.1.1.2
redirect_address 192.168.0.3 128.1.1.3
....
Like `redirect_port`, these arguments are placed in a NAT instance configuration. With address redirection, there is no need for port redirection, as all data received on a particular IP address is redirected.
The external IP addresses on the man:ipfw[8] machine must be active and aliased to the external interface. Refer to man:rc.conf[5] for details.
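A minimal sketch of such aliases in [.filename]#/etc/rc.conf#, assuming [.filename]#dc0# is the external interface and using the addresses from this example:
[.programlisting]
....
ifconfig_dc0_alias0="inet 128.1.1.2 netmask 255.255.255.255"
ifconfig_dc0_alias1="inet 128.1.1.3 netmask 255.255.255.255"
....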
==== Userspace NAT
To begin with, the userspace NAT implementation, man:natd[8], has more overhead than in-kernel NAT. For man:natd[8] to translate packets, the packets have to be copied from the kernel to userspace and back, which brings in extra overhead that is not present with in-kernel NAT.
To enable the userspace NAT daemon man:natd[8] at boot time, the following is a minimum configuration in [.filename]#/etc/rc.conf#, where `natd_interface` is set to the name of the NIC attached to the Internet. The man:rc[8] script of man:natd[8] will automatically check if a dynamic IP address is used and configure itself to handle that.
[.programlisting]
....
gateway_enable="YES"
natd_enable="YES"
natd_interface="rl0"
....
In general, the ruleset explained above for in-kernel NAT can also be used together with man:natd[8]. The exceptions are that the in-kernel NAT instance configuration (`ipfw -q nat 1 config ...`) is not needed, and neither is reassemble rule 99, because its functionality is included in the `divert` action. Rule numbers 100 and 1000 have to change slightly, as shown below.
[.programlisting]
....
$cmd 100 divert natd ip from any to any in via $pif
$cmd 1000 divert natd ip from any to any out via $pif
....
To configure port or address redirection, a similar syntax as with in-kernel NAT is used. Although, now, instead of specifying the configuration in our ruleset script like with in-kernel NAT, configuration of man:natd[8] is best done in a configuration file. To do this, an extra flag must be passed via [.filename]#/etc/rc.conf# which specifies the path of the configuration file.
[.programlisting]
....
natd_flags="-f /etc/natd.conf"
....
[NOTE]
====
The specified file must contain a list of configuration options, one per line. For more information about the configuration file and possible variables, consult man:natd[8]. Below are two example entries, one per line:
[.programlisting]
....
redirect_port tcp 192.168.0.2:6667 6667
redirect_address 192.168.0.3 128.1.1.3
....
====
[[firewalls-ipfw-cmd]]
=== The IPFW Command
`ipfw` can be used to make manual, single rule additions or deletions to the active firewall while it is running. The problem with this method is that all the changes are lost when the system reboots. It is recommended to instead write all the rules in a file, and to use that file both to load the rules at boot time and to replace the currently running firewall rules whenever that file changes.
`ipfw` is a useful way to display the running firewall rules to the console screen. The IPFW accounting facility dynamically creates a counter for each rule that counts each packet that matches the rule. During the process of testing a rule, listing the rule with its counter is one way to determine if the rule is functioning as expected.
To list all the running rules in sequence:
[source,shell]
....
# ipfw list
....
To list all the running rules with a timestamp of the last time each rule was matched:
[source,shell]
....
# ipfw -t list
....
The next example lists accounting information and the packet count for matched rules along with the rules themselves. The first column is the rule number, followed by the number of matched packets and bytes, followed by the rule itself.
[source,shell]
....
# ipfw -a list
....
To list dynamic rules in addition to static rules:
[source,shell]
....
# ipfw -d list
....
To also show the expired dynamic rules:
[source,shell]
....
# ipfw -d -e list
....
To zero the counters:
[source,shell]
....
# ipfw zero
....
To zero the counters for just the rule with number _NUM_:
[source,shell]
....
# ipfw zero NUM
....
==== Logging Firewall Messages
Even with the logging facility enabled, IPFW will not generate any rule logging on its own. The firewall administrator decides which rules in the ruleset will be logged, and adds the `log` keyword to those rules. Normally only deny rules are logged. It is customary to duplicate the "ipfw default deny everything" rule with the `log` keyword included as the last rule in the ruleset. This way, it is possible to see all the packets that did not match any of the rules in the ruleset.
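For example, the example rulesets earlier in this chapter end with such a rule. Added by hand it might look like this, where the rule number is arbitrary as long as it precedes the built-in default rule `65535`:
[source,shell]
....
# ipfw add 65534 deny log ip from any to any
....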
Logging is a two-edged sword. If one is not careful, an overabundance of log data or a DoS attack can fill the disk with log files. Log messages are not only written to syslogd, but are also displayed on the root console screen and soon become annoying.
The `IPFIREWALL_VERBOSE_LIMIT=5` kernel option limits the number of consecutive messages sent to man:syslogd[8] concerning the packet matching of a given rule. When this option is enabled in the kernel, the number of consecutive messages concerning a particular rule is capped at the number specified. There is nothing to be gained from 200 identical log messages. With this option set to five, five consecutive messages concerning a particular rule would be logged to syslogd and the remaining identical consecutive messages would be counted and posted to syslogd with a phrase like the following:
[.programlisting]
....
last message repeated 45 times
....
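The limit can also be adjusted at run time, without rebuilding the kernel, through the corresponding sysctl:
[source,shell]
....
# sysctl net.inet.ip.fw.verbose_limit=5
....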
All logged packet messages are written by default to [.filename]#/var/log/security#, which is defined in [.filename]#/etc/syslog.conf#.
[[firewalls-ipfw-rules-script]]
==== Building a Rule Script
Most experienced IPFW users create a file containing the rules and code them in a manner compatible with running them as a script. The major benefit of doing this is that the firewall rules can be refreshed en masse without rebooting the system to activate them. This method is convenient when testing new rules as the procedure can be executed as many times as needed. Because it is a script, symbolic substitution can be used for frequently used values which are then substituted into multiple rules.
This example script is compatible with the syntax used by the man:sh[1], man:csh[1], and man:tcsh[1] shells. Symbolic substitution fields are prefixed with a dollar sign ($). When a symbolic field is assigned its value, the field name is written without the $ prefix and the value must be enclosed in double quotes ("").
Start the rules file like this:
[.programlisting]
....
############### start of example ipfw rules script #############
#
ipfw -q -f flush # Delete all rules
# Set defaults
oif="tun0" # out interface
odns="192.0.2.11" # ISP's DNS server IP address
cmd="ipfw -q add " # build rule prefix
ks="keep-state" # just too lazy to key this each time
$cmd 00500 check-state
$cmd 00502 deny all from any to any frag
$cmd 00501 deny tcp from any to any established
$cmd 00600 allow tcp from any to any 80 out via $oif setup $ks
$cmd 00610 allow tcp from any to $odns 53 out via $oif setup $ks
$cmd 00611 allow udp from any to $odns 53 out via $oif $ks
################### End of example ipfw rules script ############
....
The rules are not important as the focus of this example is how the symbolic substitution fields are populated.
If the above example was in [.filename]#/etc/ipfw.rules#, the rules could be reloaded by the following command:
[source,shell]
....
# sh /etc/ipfw.rules
....
[.filename]#/etc/ipfw.rules# can be located anywhere and the file can have any name.
The same thing could be accomplished by running these commands by hand:
[source,shell]
....
# ipfw -q -f flush
# ipfw -q add check-state
# ipfw -q add deny all from any to any frag
# ipfw -q add deny tcp from any to any established
# ipfw -q add allow tcp from any to any 80 out via tun0 setup keep-state
# ipfw -q add allow tcp from any to 192.0.2.11 53 out via tun0 setup keep-state
# ipfw -q add 00611 allow udp from any to 192.0.2.11 53 out via tun0 keep-state
....
[[firewalls-ipfw-kernelconfig]]
=== IPFW Kernel Options
In order to statically compile IPFW support into a custom kernel, refer to the instructions in crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel]. The following options are available for the custom kernel configuration file:
[.programlisting]
....
options IPFIREWALL # enables IPFW
options IPFIREWALL_VERBOSE # enables logging for rules with log keyword to syslogd(8)
options IPFIREWALL_VERBOSE_LIMIT=5 # limits number of logged packets per-entry
options IPFIREWALL_DEFAULT_TO_ACCEPT # sets default policy to pass what is not explicitly denied
options IPFIREWALL_NAT # enables basic in-kernel NAT support
options LIBALIAS # enables full in-kernel NAT support
options IPFIREWALL_NAT64 # enables in-kernel NAT64 support
options IPFIREWALL_NPTV6 # enables in-kernel IPv6 NPT support
options IPFIREWALL_PMOD # enables protocols modification module support
options IPDIVERT # enables NAT through natd(8)
....
[NOTE]
====
IPFW can also be loaded as a kernel module: the options above are built as modules by default, or their behavior can be set at run time using tunables.
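For example, a sketch of loading IPFW as a module and setting one of these tunables from [.filename]#/boot/loader.conf#:
[.programlisting]
....
ipfw_load="YES"
net.inet.ip.fw.default_to_accept="1"
....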
====
[[firewalls-ipf]]
== IPFILTER (IPF)
IPFILTER, also known as IPF, is a cross-platform, open source firewall which has been ported to several operating systems, including FreeBSD, NetBSD, OpenBSD, and Solaris(TM).
IPFILTER is a kernel-side firewall and NAT mechanism that can be controlled and monitored by userland programs. Firewall rules can be set or deleted using ipf, NAT rules can be set or deleted using ipnat, run-time statistics for the kernel parts of IPFILTER can be printed using ipfstat, and ipmon can be used to log IPFILTER actions to the system log files.
IPF was originally written using a rule processing logic of "the last matching rule wins" and only used stateless rules. Since then, IPF has been enhanced to include the `quick` and `keep state` options.
The IPF FAQ is at http://www.phildev.net/ipf/index.html[http://www.phildev.net/ipf/index.html]. A searchable archive of the IPFilter mailing list is available at http://marc.info/?l=ipfilter[http://marc.info/?l=ipfilter].
This section of the Handbook focuses on IPF as it pertains to FreeBSD. It provides examples of rules that contain the `quick` and `keep state` options.
=== Enabling IPF
IPF is included in the basic FreeBSD install as a kernel loadable module, meaning that a custom kernel is not needed in order to enable IPF.
For users who prefer to statically compile IPF support into a custom kernel, refer to the instructions in crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel]. The following kernel options are available:
[.programlisting]
....
options IPFILTER
options IPFILTER_LOG
options IPFILTER_LOOKUP
options IPFILTER_DEFAULT_BLOCK
....
where `options IPFILTER` enables support for IPFILTER, `options IPFILTER_LOG` enables IPF logging using the [.filename]#ipl# packet logging pseudo-device for every rule that has the `log` keyword, `IPFILTER_LOOKUP` enables IP pools in order to speed up IP lookups, and `options IPFILTER_DEFAULT_BLOCK` changes the default behavior so that any packet not matching a firewall `pass` rule gets blocked.
To configure the system to enable IPF at boot time, add the following entries to [.filename]#/etc/rc.conf#. These entries will also enable logging and `default pass all`. To change the default policy to `block all` without compiling a custom kernel, remember to add a `block all` rule at the end of the ruleset.
[.programlisting]
....
ipfilter_enable="YES" # Start ipf firewall
ipfilter_rules="/etc/ipf.rules" # loads rules definition text file
ipv6_ipfilter_rules="/etc/ipf6.rules" # loads rules definition text file for IPv6
ipmon_enable="YES" # Start IP monitor log
ipmon_flags="-Ds" # D = start as daemon
# s = log to syslog
# v = log tcp window, ack, seq
# n = map IP & port to names
....
If NAT functionality is needed, also add these lines:
[.programlisting]
....
gateway_enable="YES" # Enable as LAN gateway
ipnat_enable="YES" # Start ipnat function
ipnat_rules="/etc/ipnat.rules" # rules definition file for ipnat
....
Then, to start IPF now:
[source,shell]
....
# service ipfilter start
....
To load the firewall rules, specify the name of the ruleset file using `ipf`. The following command can be used to replace the currently running firewall rules:
[source,shell]
....
# ipf -Fa -f /etc/ipf.rules
....
where `-Fa` flushes all the internal rules tables and `-f` specifies the file containing the rules to load.
This provides the ability to make changes to a custom ruleset and update the running firewall with a fresh copy of the rules without having to reboot the system. This method is convenient for testing new rules as the procedure can be executed as many times as needed.
Refer to man:ipf[8] for details on the other flags available with this command.
=== IPF Rule Syntax
This section describes the IPF rule syntax used to create stateful rules. When creating rules, keep in mind that unless the `quick` keyword appears in a rule, every rule is read in order, with the _last matching rule_ being the one that is applied. This means that even if the first rule to match a packet is a `pass`, if there is a later matching rule that is a `block`, the packet will be dropped. Sample rulesets can be found in [.filename]#/usr/share/examples/ipfilter#.
When creating rules, a `#` character is used to mark the start of a comment and may appear at the end of a rule, to explain that rule's function, or on its own line. Any blank lines are ignored.
The keywords which are used in rules must be written in a specific order, from left to right. Some keywords are mandatory while others are optional. Some keywords have sub-options which may be keywords themselves and also include more sub-options. The keyword order is as follows, where the words shown in uppercase represent a variable and the words shown in lowercase must precede the variable that follows it:
`_ACTION DIRECTION OPTIONS proto PROTO_TYPE from SRC_ADDR SRC_PORT to DST_ADDR DST_PORT TCP_FLAG|ICMP_TYPE keep state STATE_`
This section describes each of these keywords and their options. It is not an exhaustive list of every possible option. Refer to man:ipf[5] for a complete description of the rule syntax that can be used when creating IPF rules and examples for using each keyword.
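For example, a rule following this order which passes inbound SSH connections and keeps state could look like this, assuming the public interface is [.filename]#dc0#:
[.programlisting]
....
pass in quick on dc0 proto tcp from any to any port = 22 flags S keep state
....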
ACTION::
The action keyword indicates what to do with the packet if it matches that rule. Every rule _must_ have an action. The following actions are recognized:
+
`block`: drops the packet.
+
`pass`: allows the packet.
+
`log`: generates a log record.
+
`count`: counts the number of packets and bytes which can provide an indication of how often a rule is used.
+
`auth`: queues the packet for further processing by another program.
+
`call`: provides access to functions built into IPF that allow more complex actions.
+
`decapsulate`: removes any headers in order to process the contents of the packet.
DIRECTION::
Next, each rule must explicitly state the direction of traffic using one of these keywords:
+
`in`: the rule is applied against an inbound packet.
+
`out`: the rule is applied against an outbound packet.
+
`all`: the rule applies to either direction.
+
If the system has multiple interfaces, the interface can be specified along with the direction. An example would be `in on fxp0`.
OPTIONS::
Options are optional. However, if multiple options are specified, they must be used in the order shown here.
+
`log`: when performing the specified ACTION, the contents of the packet's headers will be written to the man:ipl[4] packet log pseudo-device.
+
`quick`: if a packet matches this rule, the ACTION specified by the rule occurs and no further processing of any following rules will occur for this packet.
+
`on`: must be followed by the interface name as displayed by man:ifconfig[8]. The rule will only match if the packet is going through the specified interface in the specified direction.
+
When using the `log` keyword, the following qualifiers may be used in this order:
+
`body`: indicates that the first 128 bytes of the packet contents will be logged after the headers.
+
`first`: if the `log` keyword is being used in conjunction with a `keep state` option, this option is recommended so that only the triggering packet is logged and not every packet which matches the stateful connection.
+
Additional options are available to specify error return messages. Refer to man:ipf[5] for more details.
PROTO_TYPE::
The protocol type is optional. However, it is mandatory if the rule needs to specify a SRC_PORT or a DST_PORT as it defines the type of protocol. When specifying the type of protocol, use the `proto` keyword followed by either a protocol number or name from [.filename]#/etc/protocols#. Example protocol names include `tcp`, `udp`, or `icmp`. If PROTO_TYPE is specified but no SRC_PORT or DST_PORT is specified, all port numbers for that protocol will match that rule.
SRC_ADDR::
The `from` keyword is mandatory and is followed by a keyword which represents the source of the packet. The source can be a hostname, an IP address followed by the CIDR mask, an address pool, or the keyword `all`. Refer to man:ipf[5] for examples.
+
There is no way to match ranges of IP addresses which do not express themselves easily using the dotted numeric form / mask-length notation. The package:net-mgmt/ipcalc[] package or port may be used to ease the calculation of the CIDR mask. Additional information is available at the utility's web page: http://jodies.de/ipcalc[http://jodies.de/ipcalc].
SRC_PORT::
The port number of the source is optional. However, if it is used, it requires PROTO_TYPE to be first defined in the rule. The port number must also be preceded by the `port` keyword.
+
A number of different comparison operators are supported: `=` (equal to), `!=` (not equal to), `<` (less than), `>` (greater than), `<=` (less than or equal to), and `>=` (greater than or equal to).
+
To specify port ranges, place the two port numbers between `<>` (less than and greater than), `><` (greater than and less than), or `:` (greater than or equal to and less than or equal to). An example appears after this list.
DST_ADDR::
The `to` keyword is mandatory and is followed by a keyword which represents the destination of the packet. Similar to SRC_ADDR, it can be a hostname, an IP address followed by the CIDR mask, an address pool, or the keyword `all`.
DST_PORT::
Similar to SRC_PORT, the port number of the destination is optional. However, if it is used, it requires PROTO_TYPE to be first defined in the rule. The port number must also be preceded by the `port` keyword.
TCP_FLAG|ICMP_TYPE::
If `tcp` is specified as the PROTO_TYPE, flags can be specified as letters, where each letter represents one of the possible TCP flags used to determine the state of a connection. Possible values are: `S` (SYN), `A` (ACK), `P` (PSH), `F` (FIN), `U` (URG), `R` (RST), `C` (CWR), and `E` (ECN).
+
If `icmp` is specified as the PROTO_TYPE, the ICMP type to match can be specified. Refer to man:ipf[5] for the allowable types.
STATE::
If a `pass` rule contains `keep state`, IPF will add an entry to its dynamic state table and allow subsequent packets that match the connection. IPF can track state for TCP, UDP, and ICMP sessions. Any packet that IPF can be certain is part of an active session, even if it is a different protocol, will be allowed.
+
In IPF, packets destined to go out through the interface connected to the public Internet are first checked against the dynamic state table. If the packet matches the next expected packet comprising an active session conversation, it exits the firewall and the state of the session conversation flow is updated in the dynamic state table. Packets that do not belong to an already active session are checked against the outbound ruleset. Packets coming in from the interface connected to the public Internet are first checked against the dynamic state table. If the packet matches the next expected packet comprising an active session, it exits the firewall and the state of the session conversation flow is updated in the dynamic state table. Packets that do not belong to an already active session are checked against the inbound ruleset.
+
Several keywords can be added after `keep state`. If used, these keywords set various options that control stateful filtering, such as setting connection limits or connection age. Refer to man:ipf[5] for the list of available options and their descriptions.
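As an illustration of the port range syntax described under SRC_PORT and DST_PORT above, a sketch of a rule matching a range of destination ports might look like this, where the interface name and port numbers are only examples:
[.programlisting]
....
pass in quick on dc0 proto tcp from any to any port 1999 >< 3000 flags S keep state
....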
=== Example Ruleset
This section demonstrates how to create an example ruleset which only allows services matching `pass` rules and blocks all others.
FreeBSD uses the loopback interface ([.filename]#lo0#) and the IP address `127.0.0.1` for internal communication. The firewall ruleset must contain rules to allow free movement of these internally used packets:
[.programlisting]
....
# no restrictions on loopback interface
pass in quick on lo0 all
pass out quick on lo0 all
....
The public interface connected to the Internet is used to authorize and control access of all outbound and inbound connections. If one or more interfaces are cabled to private networks, those internal interfaces may require rules to allow packets originating from the LAN to flow between the internal networks or to the interface attached to the Internet. The ruleset should be organized into three major sections: any trusted internal interfaces, outbound connections through the public interface, and inbound connections through the public interface.
These two rules allow all traffic to pass through a trusted LAN interface named [.filename]#xl0#:
[.programlisting]
....
# no restrictions on inside LAN interface for private network
pass out quick on xl0 all
pass in quick on xl0 all
....
The rules for the public interface's outbound and inbound sections should have the most frequently matched rules placed before less commonly matched rules, with the last rule in the section blocking and logging all packets for that interface and direction.
This set of rules defines the outbound section of the public interface named [.filename]#dc0#. These rules keep state and identify the specific services that internal systems are authorized for public Internet access. All the rules use `quick` and specify the appropriate port numbers and, where applicable, destination addresses.
[.programlisting]
....
# interface facing Internet (outbound)
# Matches session start requests originating from or behind the
# firewall, destined for the Internet.
# Allow outbound access to public DNS servers.
# Replace x.x.x.x with the address listed in /etc/resolv.conf.
# Repeat for each DNS server.
pass out quick on dc0 proto tcp from any to x.x.x.x port = 53 flags S keep state
pass out quick on dc0 proto udp from any to x.x.x.x port = 53 keep state
# Allow access to ISP's specified DHCP server for cable or DSL networks.
# Use the first rule, then check log for the IP address of DHCP server.
# Then, uncomment the second rule, replace z.z.z.z with the IP address,
# and comment out the first rule
pass out log quick on dc0 proto udp from any to any port = 67 keep state
#pass out quick on dc0 proto udp from any to z.z.z.z port = 67 keep state
# Allow HTTP and HTTPS
pass out quick on dc0 proto tcp from any to any port = 80 flags S keep state
pass out quick on dc0 proto tcp from any to any port = 443 flags S keep state
# Allow email
pass out quick on dc0 proto tcp from any to any port = 110 flags S keep state
pass out quick on dc0 proto tcp from any to any port = 25 flags S keep state
# Allow the time service (TCP port 37)
pass out quick on dc0 proto tcp from any to any port = 37 flags S keep state
# Allow FTP
pass out quick on dc0 proto tcp from any to any port = 21 flags S keep state
# Allow SSH
pass out quick on dc0 proto tcp from any to any port = 22 flags S keep state
# Allow ping
pass out quick on dc0 proto icmp from any to any icmp-type 8 keep state
# Block and log everything else
block out log first quick on dc0 all
....
This example of the rules in the inbound section of the public interface blocks all undesirable packets first. This reduces the number of packets that are logged by the last rule.
[.programlisting]
....
# interface facing Internet (inbound)
# Block all inbound traffic from non-routable or reserved address spaces
block in quick on dc0 from 192.168.0.0/16 to any #RFC 1918 private IP
block in quick on dc0 from 172.16.0.0/12 to any #RFC 1918 private IP
block in quick on dc0 from 10.0.0.0/8 to any #RFC 1918 private IP
block in quick on dc0 from 127.0.0.0/8 to any #loopback
block in quick on dc0 from 0.0.0.0/8 to any #loopback
block in quick on dc0 from 169.254.0.0/16 to any #DHCP auto-config
block in quick on dc0 from 192.0.2.0/24 to any #reserved for docs
block in quick on dc0 from 204.152.64.0/23 to any #Sun cluster interconnect
block in quick on dc0 from 224.0.0.0/3 to any #Class D & E multicast
# Block fragments and too short tcp packets
block in quick on dc0 all with frags
block in quick on dc0 proto tcp all with short
# block source routed packets
block in quick on dc0 all with opt lsrr
block in quick on dc0 all with opt ssrr
# Block OS fingerprint attempts and log first occurrence
block in log first quick on dc0 proto tcp from any to any flags FUP
# Block anything with special options
block in quick on dc0 all with ipopts
# Block public pings and ident
block in quick on dc0 proto icmp all icmp-type 8
block in quick on dc0 proto tcp from any to any port = 113
# Block incoming Netbios services
block in log first quick on dc0 proto tcp/udp from any to any port = 137
block in log first quick on dc0 proto tcp/udp from any to any port = 138
block in log first quick on dc0 proto tcp/udp from any to any port = 139
block in log first quick on dc0 proto tcp/udp from any to any port = 81
....
Any time there are logged messages on a rule with the `log first` option, run `ipfstat -hio` to evaluate how many times the rule has been matched. A large number of matches may indicate that the system is under attack.
The rest of the rules in the inbound section define which connections are allowed to be initiated from the Internet. The last rule denies all connections which were not explicitly allowed by previous rules in this section.
[.programlisting]
....
# Allow traffic in from ISP's DHCP server. Replace z.z.z.z with
# the same IP address used in the outbound section.
pass in quick on dc0 proto udp from z.z.z.z to any port = 68 keep state
# Allow public connections to specified internal web server
pass in quick on dc0 proto tcp from any to x.x.x.x port = 80 flags S keep state
# Block and log only first occurrence of all remaining traffic.
block in log first quick on dc0 all
....
=== Configuring NAT
To enable NAT, add these statements to [.filename]#/etc/rc.conf# and specify the name of the file containing the NAT rules:
[.programlisting]
....
gateway_enable="YES"
ipnat_enable="YES"
ipnat_rules="/etc/ipnat.rules"
....
NAT rules are flexible and can accomplish many different things to fit the needs of both commercial and home users. The rule syntax presented here has been simplified to demonstrate common usage. For a complete rule syntax description, refer to man:ipnat[5].
The basic syntax for a NAT rule is as follows, where `map` starts the rule and _IF_ should be replaced with the name of the external interface:
[.programlisting]
....
map IF LAN_IP_RANGE -> PUBLIC_ADDRESS
....
The _LAN_IP_RANGE_ is the range of IP addresses used by internal clients. Usually, it is a private address range such as `192.168.1.0/24`. The _PUBLIC_ADDRESS_ can either be the static external IP address or the keyword `0/32` which represents the IP address assigned to _IF_.
In IPF, when a packet arrives at the firewall from the LAN with a public destination, it first passes through the outbound rules of the firewall ruleset. Then, the packet is passed to the NAT ruleset which is read from the top down, where the first matching rule wins. IPF tests each NAT rule against the packet's interface name and source IP address. When a packet's interface name matches a NAT rule, the packet's source IP address in the private LAN is checked to see if it falls within the IP address range specified in _LAN_IP_RANGE_. On a match, the packet has its source IP address rewritten with the public IP address specified by _PUBLIC_ADDRESS_. IPF posts an entry in its internal NAT table so that when the packet returns from the Internet, it can be mapped back to its original private IP address before being passed to the firewall rules for further processing.
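For example, a minimal rule that maps everything from the internal `192.168.1.0/24` network to whatever address is assigned to the external interface [.filename]#dc0# looks like this:
[.programlisting]
....
map dc0 192.168.1.0/24 -> 0/32
....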
For networks that have large numbers of internal systems or multiple subnets, the process of funneling every private IP address into a single public IP address becomes a resource problem. Two methods are available to relieve this issue.
The first method is to assign a range of ports to use as source ports. By adding the `portmap` keyword, NAT can be directed to only use source ports in the specified range:
[.programlisting]
....
map dc0 192.168.1.0/24 -> 0/32 portmap tcp/udp 20000:60000
....
Alternately, use the `auto` keyword which tells NAT to determine the ports that are available for use:
[.programlisting]
....
map dc0 192.168.1.0/24 -> 0/32 portmap tcp/udp auto
....
The second method is to use a pool of public addresses. This is useful when there are too many LAN addresses to fit into a single public address and a block of public IP addresses is available. These public addresses can be used as a pool from which NAT selects an IP address as a packet's address is mapped on its way out.
The range of public IP addresses can be specified using a netmask or CIDR notation. These two rules are equivalent:
[.programlisting]
....
map dc0 192.168.1.0/24 -> 204.134.75.0/255.255.255.0
map dc0 192.168.1.0/24 -> 204.134.75.0/24
....
A common practice is to have a publicly accessible web server or mail server segregated to an internal network segment. The traffic from these servers still has to undergo NAT, but port redirection is needed to direct inbound traffic to the correct server. For example, to map a web server using the internal address `10.0.10.25` to its public IP address of `20.20.20.5`, use this rule:
[.programlisting]
....
rdr dc0 20.20.20.5/32 port 80 -> 10.0.10.25 port 80
....
If it is the only web server, this rule would also work as it redirects all external HTTP requests to `10.0.10.25`:
[.programlisting]
....
rdr dc0 0.0.0.0/0 port 80 -> 10.0.10.25 port 80
....
IPF has a built-in FTP proxy which can be used with NAT. It monitors all outbound traffic for active or passive FTP connection requests and dynamically creates temporary filter rules containing the port number used by the FTP data channel. This eliminates the need to open large ranges of high order ports for FTP connections.
In this example, the first rule calls the proxy for outbound FTP traffic from the internal LAN. The second rule passes the FTP traffic from the firewall to the Internet, and the third rule handles all non-FTP traffic from the internal LAN:
[.programlisting]
....
map dc0 10.0.10.0/29 -> 0/32 proxy port 21 ftp/tcp
map dc0 0.0.0.0/0 -> 0/32 proxy port 21 ftp/tcp
map dc0 10.0.10.0/29 -> 0/32
....
The FTP `map` rules go before the NAT rule so that when a packet matches an FTP rule, the FTP proxy creates temporary filter rules to let the FTP session packets pass and undergo NAT. All LAN packets that are not FTP will not match the FTP rules but will undergo NAT if they match the third rule.
Without the FTP proxy, the following firewall rules would instead be needed. Note that without the proxy, all ports above `1024` need to be allowed:
[.programlisting]
....
# Allow out LAN PC client FTP to public Internet
# Active and passive modes
pass out quick on rl0 proto tcp from any to any port = 21 flags S keep state
# Allow out passive mode data channel high order port numbers
pass out quick on rl0 proto tcp from any to any port > 1024 flags S keep state
# Active mode let data channel in from FTP server
pass in quick on rl0 proto tcp from any to any port = 20 flags S keep state
....
Whenever the file containing the NAT rules is edited, run `ipnat` with `-CF` to delete the current NAT rules and flush the contents of the dynamic translation table. Include `-f` and specify the name of the NAT ruleset to load:
[source,shell]
....
# ipnat -CF -f /etc/ipnat.rules
....
To display the NAT statistics:
[source,shell]
....
# ipnat -s
....
To list the NAT table's current mappings:
[source,shell]
....
# ipnat -l
....
To turn verbose mode on and display information relating to rule processing and active rules and table entries:
[source,shell]
....
# ipnat -v
....
=== Viewing IPF Statistics
IPF includes man:ipfstat[8] which can be used to retrieve and display statistics which are gathered as packets match rules as they go through the firewall. Statistics are accumulated since the firewall was last started or since the last time they were reset to zero using `ipf -Z`.
The default `ipfstat` output looks like this:
[source,shell]
....
input packets: blocked 99286 passed 1255609 nomatch 14686 counted 0
output packets: blocked 4200 passed 1284345 nomatch 14687 counted 0
input packets logged: blocked 99286 passed 0
output packets logged: blocked 0 passed 0
packets logged: input 0 output 0
log failures: input 3898 output 0
fragment state(in): kept 0 lost 0
fragment state(out): kept 0 lost 0
packet state(in): kept 169364 lost 0
packet state(out): kept 431395 lost 0
ICMP replies: 0 TCP RSTs sent: 0
Result cache hits(in): 1215208 (out): 1098963
IN Pullups succeeded: 2 failed: 0
OUT Pullups succeeded: 0 failed: 0
Fastroute successes: 0 failures: 0
TCP cksum fails(in): 0 (out): 0
Packet log flags set: (0)
....
Several options are available. When supplied with either `-i` for inbound or `-o` for outbound, the command will retrieve and display the appropriate list of filter rules currently installed and in use by the kernel. To also see the rule numbers, include `-n`. For example, `ipfstat -on` displays the outbound rules table with rule numbers:
[source,shell]
....
@1 pass out on xl0 from any to any
@2 block out on dc0 from any to any
@3 pass out quick on dc0 proto tcp/udp from any to any keep state
....
Include `-h` to prefix each rule with a count of how many times the rule was matched. For example, `ipfstat -oh` displays the outbound internal rules table, prefixing each rule with its usage count:
[source,shell]
....
2451423 pass out on xl0 from any to any
354727 block out on dc0 from any to any
430918 pass out quick on dc0 proto tcp/udp from any to any keep state
....
To display the state table in a format similar to man:top[1], use `ipfstat -t`. When the firewall is under attack, this option provides the ability to identify and see the attacking packets. The optional sub-flags give the ability to select the destination or source IP, port, or protocol to be monitored in real time. Refer to man:ipfstat[8] for details.
=== IPF Logging
IPF provides `ipmon`, which can be used to write the firewall's logging information in a human readable format. It requires that `options IPFILTER_LOG` be first added to a custom kernel using the instructions in crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel].
This command is typically run in daemon mode in order to provide a continuous system log file so that logging of past events may be reviewed. Since FreeBSD has a built-in man:syslogd[8] facility to automatically rotate system logs, the default [.filename]#rc.conf# `ipmon_flags` statement uses `-Ds`:
[.programlisting]
....
ipmon_flags="-Ds" # D = start as daemon
# s = log to syslog
# v = log tcp window, ack, seq
# n = map IP & port to names
....
Logging provides the ability to review, after the fact, information such as which packets were dropped, what addresses they came from, and where they were going. This information is useful in tracking down attackers.
Once the logging facility is enabled in [.filename]#rc.conf# and started with `service ipmon start`, IPF will only log the rules which contain the `log` keyword. The firewall administrator decides which rules in the ruleset should be logged and normally only deny rules are logged. It is customary to include the `log` keyword in the last rule in the ruleset. This makes it possible to see all the packets that did not match any of the rules in the ruleset.
By default, `ipmon -Ds` mode uses `local0` as the logging facility. The following logging levels can be used to further segregate the logged data:
[source,shell]
....
LOG_INFO - packets logged using the "log" keyword as the action rather than pass or block.
LOG_NOTICE - packets logged which are also passed
LOG_WARNING - packets logged which are also blocked
LOG_ERR - packets which have been logged and which can be considered short due to an incomplete header
....
In order to setup IPF to log all data to [.filename]#/var/log/ipfilter.log#, first create the empty file:
[source,shell]
....
# touch /var/log/ipfilter.log
....
Then, to write all logged messages to the specified file, add the following statement to [.filename]#/etc/syslog.conf#:
[.programlisting]
....
local0.* /var/log/ipfilter.log
....
To activate the changes and instruct man:syslogd[8] to read the modified [.filename]#/etc/syslog.conf#, run `service syslogd reload`.
Do not forget to edit [.filename]#/etc/newsyslog.conf# to rotate the new log file.
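A sketch of a possible man:newsyslog.conf[5] entry, where the mode, rotation count, size limit, and compression flag are chosen only as examples:
[.programlisting]
....
/var/log/ipfilter.log	640  10	100  *	J
....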
Messages generated by `ipmon` consist of data fields separated by white space. Fields common to all messages are:
. The date of packet receipt.
. The time of packet receipt. This is in the form HH:MM:SS.F, for hours, minutes, seconds, and fractions of a second.
. The name of the interface that processed the packet.
. The group and rule number of the rule in the format `@0:17`.
. The action: `p` for passed, `b` for blocked, `S` for a short packet, `n` for a packet that did not match any rules, and `L` for a log rule.
. The addresses written as three fields: the source address and port separated by a comma, the -> symbol, and the destination address and port. For example: `209.53.17.22,80 -> 198.73.220.17,1722`.
. `PR` followed by the protocol name or number: for example, `PR tcp`.
. `len` followed by the header length and total length of the packet: for example, `len 20 40`.
If the packet is a TCP packet, there will be an additional field starting with a hyphen followed by letters corresponding to any flags that were set. Refer to man:ipf[5] for a list of letters and their flags.
If the packet is an ICMP packet, there will be two fields at the end: the first always being "icmp" and the next being the ICMP message and sub-message type, separated by a slash. For example: `icmp 3/3` for a port unreachable message.
[[firewalls-blacklistd]]
== Blacklistd
Blacklistd is a daemon listening to sockets to receive notifications from other daemons about connection attempts that failed or were successful. It is most widely used to block excessive connection attempts on open ports. A prime example is SSH exposed to the Internet, which receives a lot of requests from bots or scripts trying to guess passwords and gain access. Using blacklistd, the daemon can notify the firewall to create a filter rule to block excessive connection attempts from a single source after a number of tries. Blacklistd was first developed on NetBSD and appeared there in version 7. FreeBSD 11 imported blacklistd from NetBSD.
This chapter describes how to set up blacklistd, configure it, and provides examples on how to use it. Readers should be familiar with basic firewall concepts like rules. For details, refer to the firewall chapter. PF is used in the examples, but other firewalls available on FreeBSD should be able to work with blacklistd, too.
=== Enabling Blacklistd
The main configuration for blacklistd is stored in man:blacklistd.conf[5]. Various command line options are also available to change blacklistd's run-time behavior. Persistent configuration across reboots should be stored in [.filename]#/etc/blacklistd.conf#. To enable the daemon during system boot, add a `blacklistd_enable` line to [.filename]#/etc/rc.conf# like this:
[source,shell]
....
# sysrc blacklistd_enable=yes
....
To start the service manually, run this command:
[source,shell]
....
# service blacklistd start
....
=== Creating a Blacklistd Ruleset
Rules for blacklistd are configured in man:blacklistd.conf[5] with one entry per line. Each rule contains a tuple separated by spaces or tabs. A rule is either `local`, applying to the machine where blacklistd is running, or `remote`, applying to an outside source.
==== Local Rules
An example blacklistd.conf entry for a local rule looks like this:
[.programlisting]
....
[local]
ssh stream * * * 3 24h
....
All rules that follow the `[local]` section are treated as local rules (which is the default), applying to the local machine. When a `[remote]` section is encountered, all rules that follow it are handled as remote machine rules.
Seven fields, separated by either tabs or spaces, define a rule. The first four fields identify the traffic that should be blacklisted. The three fields that follow define blacklistd's behavior. Wildcards are denoted as asterisks (`*`), matching anything in this field. The first field defines the location. In local rules, these are the network ports. The syntax for the location field is as follows:
[.programlisting]
....
[address|interface][/mask][:port]
....
Addresses can be specified as IPv4 in numeric format or IPv6 in square brackets. An interface name like `_em0_` can also be used.
The socket type is defined by the second field. TCP sockets are of type `stream`, whereas UDP is denoted as `dgram`. The example above uses TCP, since SSH is using that protocol.
A protocol can be used in the third field of a blacklistd rule. The following protocols can be used: `tcp`, `udp`, `tcp6`, `udp6`, or numeric. A wildcard, like in the example, is typically used to match all protocols unless there is a reason to distinguish traffic by a certain protocol.
In the fourth field, the effective user or owner of the daemon process that is reporting the event is defined. The username or UID can be used here, as well as a wildcard (see example rule above).
The packet filter rule name is declared by the fifth field, which starts the behavior part of the rule. By default, blacklistd puts all blocks under a pf anchor called `blacklistd` in [.filename]#pf.conf# like this:
[.programlisting]
....
anchor "blacklistd/*" in on $ext_if
block in
pass out
....
For separate blacklists, an anchor name can be used in this field. In other cases, the wildcard will suffice. When a name starts with a hyphen (`-`) it means that an anchor with the default rule name prepended should be used. A modified example from the above using the hyphen would look like this:
[.programlisting]
....
ssh stream * * -ssh 3 24h
....
With such a rule, any new blacklist rules are added to an anchor called `blacklistd-ssh`.
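For the blocks to take effect, [.filename]#pf.conf# then needs an anchor matching that name. A minimal sketch, modeled on the default anchor shown above rather than taken from a shipped configuration, might look like this:
[.programlisting]
....
anchor "blacklistd-ssh/*" in on $ext_if
block in
pass out
....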
To block whole subnets for a single rule violation, a `/` in the rule name can be used. This causes the remaining portion of the name to be interpreted as the mask to be applied to the address specified in the rule. For example, this rule would block the entire `/24` subnet containing the offending address.
[.programlisting]
....
22 stream tcp * */24 3 24h
....
[NOTE]
====
It is important to specify the proper protocol here. IPv4 and IPv6 treat /24 differently, which is why `*` cannot be used in the third field for this rule.
====
This rule defines that if any one host in that network is misbehaving, everything else on that network will be blocked, too.
The sixth field, called `nfail`, sets the number of login failures required to blacklist the remote IP in question. When a wildcard is used at this position, blocks will never happen. In the example rule above, a limit of three is defined, meaning that after three failed attempts to log in to SSH on one connection, the IP is blocked.
The last field in a blacklistd rule definition specifies how long a host is blacklisted. The default unit is seconds, but suffixes like `m`, `h`, and `d` can also be specified for minutes, hours, and days, respectively.
Taken as a whole, the example rule means that after three failed attempts to authenticate to SSH, a new PF block rule is created for that host. Rule matches are performed by first checking local rules one after another, from most specific to least specific. When a match occurs, the `remote` rules are applied and the name, `nfail`, and disable fields are changed by the `remote` rule that matched.
==== Remote Rules
Remote rules are used to specify how blacklistd changes its behavior depending on the remote host currently being evaluated. Each field in a remote rule is the same as in a local rule. The only difference is in the way blacklistd uses them. To explain it, this example rule is used:
[.programlisting]
....
[remote]
203.0.113.128/25 * * * =/25 = 48h
....
The address field can be an IP address (either v4 or v6), a port, or both. This allows setting special rules for a specific remote address range, as in this example. The type, protocol, and owner fields are interpreted in the same way as in local rules.
The name field is different, though: the equal sign (`=`) in a remote rule tells blacklistd to use the value from the matching local rule. It means that the firewall rule entry is taken and the `/25` prefix (a netmask of `255.255.255.128`) is added. When a connection from that address range is blacklisted, the entire subnet is affected. A PF anchor name can also be used here, in which case blacklistd will add rules for this address block to the anchor of that name. The default table is used when a wildcard is specified.
A custom number of failures in the `nfail` column can be defined for an address. This is useful for exceptions to a specific rule, for example, to apply the rules less strictly to a trusted source or to allow a bit more leniency in login attempts. Blocking is disabled when an asterisk is used in this sixth field.
Remote rules make it possible to enforce limits on login attempts from the internet more strictly than on attempts coming from a local network such as an office.
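For example, a hypothetical remote rule could loosen the failure limit for a trusted office network while all other sources keep the stricter local limit (the address range below is purely illustrative):
[.programlisting]
....
[remote]
192.0.2.0/24 * * * = 10 24h
....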
=== Blacklistd Client Configuration
There are a few software packages in FreeBSD that can utilize blacklistd's functionality. The two most prominent ones are man:ftpd[8] and man:sshd[8] to block excessive connection attempts. To activate blacklistd in the SSH daemon, add the following line to [.filename]#/etc/ssh/sshd_config#:
[.programlisting]
....
UseBlacklist yes
....
Restart sshd afterwards to make these changes take effect.
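For example:
[source,shell]
....
# service sshd restart
....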
Blacklisting for man:ftpd[8] is enabled using `-B`, either in [.filename]#/etc/inetd.conf# or as a flag in [.filename]#/etc/rc.conf# like this:
[.programlisting]
....
ftpd_flags="-B"
....
That is all that is needed to make these programs talk to blacklistd.
=== Blacklistd Management
Blacklistd provides the user with a management utility called man:blacklistctl[8]. It displays the addresses and networks that are currently blacklisted by the rules defined in man:blacklistd.conf[5]. To see the list of currently blocked hosts, use `dump` combined with `-b`:
[source,shell]
....
# blacklistctl dump -b
address/ma:port id nfail last access
213.0.123.128/25:22 OK 6/3 2019/06/08 14:30:19
....
This example shows that there were six attempts, out of the three permitted, on port 22 coming from the address range `213.0.123.128/25`. There are more attempts listed than are allowed because SSH allows a client to try multiple logins on a single TCP connection. A connection that is currently in progress is not stopped by blacklistd. The last connection attempt is listed in the `last access` column of the output.
To see the remaining time that this host will be on the blacklist, add `-r` to the previous command.
[source,shell]
....
# blacklistctl dump -br
address/ma:port id nfail remaining time
213.0.123.128/25:22 OK 6/3 36s
....
In this example, there are 36 seconds left until the block on this host expires.
=== Removing Hosts from the Block List
Sometimes it is necessary to remove a host from the block list before the remaining time expires. Unfortunately, there is no functionality in blacklistd to do that. However, it is possible to remove the address from the PF table using pfctl. For each blocked port, there is a child anchor inside the blacklistd anchor defined in [.filename]#/etc/pf.conf#. For example, if there is a child anchor for blocking port 22 it is called `blacklistd/22`. There is a table inside that child anchor that contains the blocked addresses. This table is called port followed by the port number. In this example, it would be called `port22`. With that information at hand, it is now possible to use man:pfctl[8] to display all addresses listed like this:
[source,shell]
....
# pfctl -a blacklistd/22 -t port22 -T show
...
213.0.123.128/25
...
....
After identifying the address to be unblocked from the list, the following command removes it from the list:
[source,shell]
....
# pfctl -a blacklistd/22 -t port22 -T delete 213.0.123.128/25
....
The address is now removed from PF, but will still show up in the blacklistctl list, since blacklistctl does not know about changes made directly in PF. The entry in blacklistd's database will eventually expire and be removed from its output. The entry will be added again if the host matches one of the block rules in blacklistd again.
diff --git a/documentation/content/en/books/handbook/geom/_index.adoc b/documentation/content/en/books/handbook/geom/_index.adoc
index c09077d153..9d1f99dfe3 100644
--- a/documentation/content/en/books/handbook/geom/_index.adoc
+++ b/documentation/content/en/books/handbook/geom/_index.adoc
@@ -1,1129 +1,1130 @@
---
title: "Chapter 19. GEOM: Modular Disk Transformation Framework"
part: Part III. System Administration
prev: books/handbook/disks
next: books/handbook/zfs
+description: In FreeBSD, the GEOM framework permits access and control to classes, such as Master Boot Records and BSD labels, through the use of providers, or the disk devices in /dev.
---
[[geom]]
= GEOM: Modular Disk Transformation Framework
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 19
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../../images/books/handbook/geom/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/books/handbook/geom/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/books/handbook/geom/
endif::[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[geom-synopsis]]
== Synopsis
In FreeBSD, the GEOM framework permits access and control to classes, such as Master Boot Records and BSD labels, through the use of providers, or the disk devices in [.filename]#/dev#. By supporting various software RAID configurations, GEOM transparently provides access to the operating system and operating system utilities.
This chapter covers the use of disks under the GEOM framework in FreeBSD. This includes the major RAID control utilities which use the framework for configuration. This chapter is not a definitive guide to RAID configurations and only GEOM-supported RAID classifications are discussed.
After reading this chapter, you will know:
* What type of RAID support is available through GEOM.
* How to use the base utilities to configure, maintain, and manipulate the various RAID levels.
* How to mirror, stripe, encrypt, and remotely connect disk devices through GEOM.
* How to troubleshoot disks attached to the GEOM framework.
Before reading this chapter, you should:
* Understand how FreeBSD treats disk devices (crossref:disks[disks,Storage]).
* Know how to configure and install a new kernel (crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel]).
[[geom-striping]]
== RAID0 - Striping
Striping combines several disk drives into a single volume. Striping can be performed through the use of hardware RAID controllers. The GEOM disk subsystem provides software support for disk striping, also known as RAID0, without the need for a RAID disk controller.
In RAID0, data is split into blocks that are written across all the drives in the array. As seen in the following illustration, instead of having to wait on the system to write 256k to one disk, RAID0 can simultaneously write 64k to each of the four disks in the array, offering superior I/O performance. This performance can be enhanced further by using multiple disk controllers.
image::striping.png[Disk Striping Illustration]
Each disk in a RAID0 stripe must be of the same size, since I/O requests are interleaved to read or write to multiple disks in parallel.
[NOTE]
====
RAID0 does _not_ provide any redundancy. This means that if one disk in the array fails, all of the data on the disks is lost. If the data is important, implement a backup strategy that regularly saves backups to a remote system or device.
====
The process for creating a software, GEOM-based RAID0 on a FreeBSD system using commodity disks is as follows. Once the stripe is created, refer to man:gstripe[8] for more information on how to control an existing stripe.
[.procedure]
****
*Procedure: Creating a Stripe of Unformatted ATA Disks*
. Load the [.filename]#geom_stripe.ko# module:
+
[source,shell]
....
# kldload geom_stripe
....
. Ensure that a suitable mount point exists. If this volume will become a root partition, then temporarily use another mount point such as [.filename]#/mnt#.
. Determine the device names for the disks which will be striped, and create the new stripe device. For example, to stripe two unused and unpartitioned ATA disks with device names of [.filename]#/dev/ad2# and [.filename]#/dev/ad3#:
+
[source,shell]
....
# gstripe label -v st0 /dev/ad2 /dev/ad3
Metadata value stored on /dev/ad2.
Metadata value stored on /dev/ad3.
Done.
....
. Write a standard label, also known as a partition table, on the new volume and install the default bootstrap code:
+
[source,shell]
....
# bsdlabel -wB /dev/stripe/st0
....
. This process should create two other devices in [.filename]#/dev/stripe# in addition to [.filename]#st0#. Those include [.filename]#st0a# and [.filename]#st0c#. At this point, a UFS file system can be created on [.filename]#st0a# using `newfs`:
+
[source,shell]
....
# newfs -U /dev/stripe/st0a
....
+
Many numbers will glide across the screen, and after a few seconds, the process will be complete. The volume has been created and is ready to be mounted.
. To manually mount the created disk stripe:
+
[source,shell]
....
# mount /dev/stripe/st0a /mnt
....
. To mount this striped file system automatically during the boot process, place the volume information in [.filename]#/etc/fstab#. In this example, a permanent mount point, named [.filename]#stripe#, is created:
+
[source,shell]
....
# mkdir /stripe
# echo "/dev/stripe/st0a /stripe ufs rw 2 2" \
>> /etc/fstab
....
. The [.filename]#geom_stripe.ko# module must also be automatically loaded during system initialization, by adding a line to [.filename]#/boot/loader.conf#:
+
[source,shell]
....
# echo 'geom_stripe_load="YES"' >> /boot/loader.conf
....
****
[[geom-mirror]]
== RAID1 - Mirroring
RAID1, or _mirroring_, is the technique of writing the same data to more than one disk drive. Mirrors are usually used to guard against data loss due to drive failure. Each drive in a mirror contains an identical copy of the data. When an individual drive fails, the mirror continues to work, providing data from the drives that are still functioning. The computer keeps running, and the administrator has time to replace the failed drive without user interruption.
Two common situations are illustrated in these examples. The first creates a mirror out of two new drives and uses it as a replacement for an existing single drive. The second example creates a mirror on a single new drive, copies the old drive's data to it, then inserts the old drive into the mirror. While this procedure is slightly more complicated, it only requires one new drive.
Traditionally, the two drives in a mirror are identical in model and capacity, but man:gmirror[8] does not require that. Mirrors created with dissimilar drives will have a capacity equal to that of the smallest drive in the mirror. Extra space on larger drives will be unused. Drives inserted into the mirror later must have at least as much capacity as the smallest drive already in the mirror.
[WARNING]
====
The mirroring procedures shown here are non-destructive, but as with any major disk operation, make a full backup first.
====
[WARNING]
====
While man:dump[8] is used in these procedures to copy file systems, it does not work on file systems with soft updates journaling. See man:tunefs[8] for information on detecting and disabling soft updates journaling.
====
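As a quick check before dumping, man:tunefs[8] can print the current file system flags and, if necessary, turn off soft updates journaling on an unmounted file system. The device name below is only an example:
[source,shell]
....
# tunefs -p /dev/ada0s1a
# tunefs -j disable /dev/ada0s1a
....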
[[geom-mirror-metadata]]
=== Metadata Issues
Many disk systems store metadata at the end of each disk. Old metadata should be erased before reusing the disk for a mirror. Most problems are caused by two particular types of leftover metadata: GPT partition tables and old metadata from a previous mirror.
GPT metadata can be erased with man:gpart[8]. This example erases both primary and backup GPT partition tables from disk [.filename]#ada8#:
[source,shell]
....
# gpart destroy -F ada8
....
A disk can be removed from an active mirror and the metadata erased in one step using man:gmirror[8]. Here, the example disk [.filename]#ada8# is removed from the active mirror [.filename]#gm4#:
[source,shell]
....
# gmirror remove gm4 ada8
....
If the mirror is not running, but old mirror metadata is still on the disk, use `gmirror clear` to remove it:
[source,shell]
....
# gmirror clear ada8
....
man:gmirror[8] stores one block of metadata at the end of the disk. As GPT partition schemes also store metadata at the end of the disk, mirroring entire GPT disks with man:gmirror[8] is not recommended. MBR partitioning is used here because it only stores a partition table at the start of the disk and does not conflict with the mirror metadata.
[[geom-mirror-two-new-disks]]
=== Creating a Mirror with Two New Disks
In this example, FreeBSD has already been installed on a single disk, [.filename]#ada0#. Two new disks, [.filename]#ada1# and [.filename]#ada2#, have been connected to the system. A new mirror will be created on these two disks and used to replace the old single disk.
The [.filename]#geom_mirror.ko# kernel module must either be built into the kernel or loaded at boot- or run-time. Manually load the kernel module now:
[source,shell]
....
# gmirror load
....
Create the mirror with the two new drives:
[source,shell]
....
# gmirror label -v gm0 /dev/ada1 /dev/ada2
....
[.filename]#gm0# is a user-chosen device name assigned to the new mirror. After the mirror has been started, this device name appears in [.filename]#/dev/mirror/#.
MBR and bsdlabel partition tables can now be created on the mirror with man:gpart[8]. This example uses a traditional file system layout, with partitions for [.filename]#/#, swap, [.filename]#/var#, [.filename]#/tmp#, and [.filename]#/usr#. A single [.filename]#/# and a swap partition will also work.
Partitions on the mirror do not have to be the same size as those on the existing disk, but they must be large enough to hold all the data already present on [.filename]#ada0#.
[source,shell]
....
# gpart create -s MBR mirror/gm0
# gpart add -t freebsd -a 4k mirror/gm0
# gpart show mirror/gm0
=> 63 156301423 mirror/gm0 MBR (74G)
63 63 - free - (31k)
126 156301299 1 freebsd (74G)
156301425 61 - free - (30k)
....
[source,shell]
....
# gpart create -s BSD mirror/gm0s1
# gpart add -t freebsd-ufs -a 4k -s 2g mirror/gm0s1
# gpart add -t freebsd-swap -a 4k -s 4g mirror/gm0s1
# gpart add -t freebsd-ufs -a 4k -s 2g mirror/gm0s1
# gpart add -t freebsd-ufs -a 4k -s 1g mirror/gm0s1
# gpart add -t freebsd-ufs -a 4k mirror/gm0s1
# gpart show mirror/gm0s1
=> 0 156301299 mirror/gm0s1 BSD (74G)
0 2 - free - (1.0k)
2 4194304 1 freebsd-ufs (2.0G)
4194306 8388608 2 freebsd-swap (4.0G)
12582914 4194304 4 freebsd-ufs (2.0G)
16777218 2097152 5 freebsd-ufs (1.0G)
18874370 137426928 6 freebsd-ufs (65G)
156301298 1 - free - (512B)
....
Make the mirror bootable by installing bootcode in the MBR and bsdlabel and setting the active slice:
[source,shell]
....
# gpart bootcode -b /boot/mbr mirror/gm0
# gpart set -a active -i 1 mirror/gm0
# gpart bootcode -b /boot/boot mirror/gm0s1
....
Format the file systems on the new mirror, enabling soft-updates.
[source,shell]
....
# newfs -U /dev/mirror/gm0s1a
# newfs -U /dev/mirror/gm0s1d
# newfs -U /dev/mirror/gm0s1e
# newfs -U /dev/mirror/gm0s1f
....
File systems from the original [.filename]#ada0# disk can now be copied onto the mirror with man:dump[8] and man:restore[8].
[source,shell]
....
# mount /dev/mirror/gm0s1a /mnt
# dump -C16 -b64 -0aL -f - / | (cd /mnt && restore -rf -)
# mount /dev/mirror/gm0s1d /mnt/var
# mount /dev/mirror/gm0s1e /mnt/tmp
# mount /dev/mirror/gm0s1f /mnt/usr
# dump -C16 -b64 -0aL -f - /var | (cd /mnt/var && restore -rf -)
# dump -C16 -b64 -0aL -f - /tmp | (cd /mnt/tmp && restore -rf -)
# dump -C16 -b64 -0aL -f - /usr | (cd /mnt/usr && restore -rf -)
....
Edit [.filename]#/mnt/etc/fstab# to point to the new mirror file systems:
[.programlisting]
....
# Device Mountpoint FStype Options Dump Pass#
/dev/mirror/gm0s1a / ufs rw 1 1
/dev/mirror/gm0s1b none swap sw 0 0
/dev/mirror/gm0s1d /var ufs rw 2 2
/dev/mirror/gm0s1e /tmp ufs rw 2 2
/dev/mirror/gm0s1f /usr ufs rw 2 2
....
If the [.filename]#geom_mirror.ko# kernel module has not been built into the kernel, [.filename]#/mnt/boot/loader.conf# is edited to load the module at boot:
[.programlisting]
....
geom_mirror_load="YES"
....
Reboot the system to test the new mirror and verify that all data has been copied. The BIOS will see the mirror as two individual drives rather than a mirror. Since the drives are identical, it does not matter which is selected to boot.
See <<gmirror-troubleshooting>> if there are problems booting. Powering down and disconnecting the original [.filename]#ada0# disk will allow it to be kept as an offline backup.
In use, the mirror will behave just like the original single drive.
[[geom-mirror-existing-drive]]
=== Creating a Mirror with an Existing Drive
In this example, FreeBSD has already been installed on a single disk, [.filename]#ada0#. A new disk, [.filename]#ada1#, has been connected to the system. A one-disk mirror will be created on the new disk, the existing system copied onto it, and then the old disk will be inserted into the mirror. This slightly complex procedure is required because `gmirror` needs to put a 512-byte block of metadata at the end of each disk, and the existing [.filename]#ada0# has usually had all of its space already allocated.
Load the [.filename]#geom_mirror.ko# kernel module:
[source,shell]
....
# gmirror load
....
Check the media size of the original disk with `diskinfo`:
[source,shell]
....
# diskinfo -v ada0 | head -n3
/dev/ada0
512 # sectorsize
1000204821504 # mediasize in bytes (931G)
....
Create a mirror on the new disk. To make certain that the mirror capacity is not any larger than the original [.filename]#ada0# drive, man:gnop[8] is used to create a fake drive of the exact same size. This drive does not store any data, but is used only to limit the size of the mirror. When man:gmirror[8] creates the mirror, it will restrict the capacity to the size of [.filename]#gzero.nop#, even if the new [.filename]#ada1# drive has more space. Note that the _1000204821504_ in the second line is equal to [.filename]#ada0#'s media size as shown by `diskinfo` above.
[source,shell]
....
# geom zero load
# gnop create -s 1000204821504 gzero
# gmirror label -v gm0 gzero.nop ada1
# gmirror forget gm0
....
Since [.filename]#gzero.nop# does not store any data, the mirror does not see it as connected. The mirror is told to "forget" unconnected components, removing references to [.filename]#gzero.nop#. The result is a mirror device containing only a single disk, [.filename]#ada1#.
After creating [.filename]#gm0#, view the partition table on [.filename]#ada0#. This output is from a 1 TB drive. If there is some unallocated space at the end of the drive, the contents may be copied directly from [.filename]#ada0# to the new mirror.
However, if the output shows that all of the space on the disk is allocated, as in the following listing, there is no space available for the 512-byte mirror metadata at the end of the disk.
[source,shell]
....
# gpart show ada0
=> 63 1953525105 ada0 MBR (931G)
63 1953525105 1 freebsd [active] (931G)
....
In this case, the partition table must be edited to reduce the capacity by one sector on [.filename]#mirror/gm0#. The procedure will be explained later.
In either case, partition tables on the primary disk should be first copied using `gpart backup` and `gpart restore`.
[source,shell]
....
# gpart backup ada0 > table.ada0
# gpart backup ada0s1 > table.ada0s1
....
These commands create two files, [.filename]#table.ada0# and [.filename]#table.ada0s1#. This example is from a 1 TB drive:
[source,shell]
....
# cat table.ada0
MBR 4
1 freebsd 63 1953525105 [active]
....
[source,shell]
....
# cat table.ada0s1
BSD 8
1 freebsd-ufs 0 4194304
2 freebsd-swap 4194304 33554432
4 freebsd-ufs 37748736 50331648
5 freebsd-ufs 88080384 41943040
6 freebsd-ufs 130023424 838860800
7 freebsd-ufs 968884224 984640881
....
If no free space is shown at the end of the disk, the size of both the slice and the last partition must be reduced by one sector. Edit the two files, reducing the size of both the slice and last partition by one. These are the last numbers in each listing.
[source,shell]
....
# cat table.ada0
MBR 4
1 freebsd 63 1953525104 [active]
....
[source,shell]
....
# cat table.ada0s1
BSD 8
1 freebsd-ufs 0 4194304
2 freebsd-swap 4194304 33554432
4 freebsd-ufs 37748736 50331648
5 freebsd-ufs 88080384 41943040
6 freebsd-ufs 130023424 838860800
7 freebsd-ufs 968884224 984640880
....
If at least one sector was unallocated at the end of the disk, these two files can be used without modification.
Now restore the partition table into [.filename]#mirror/gm0#:
[source,shell]
....
# gpart restore mirror/gm0 < table.ada0
# gpart restore mirror/gm0s1 < table.ada0s1
....
Check the partition table with `gpart show`. This example has [.filename]#gm0s1a# for [.filename]#/#, [.filename]#gm0s1d# for [.filename]#/var#, [.filename]#gm0s1e# for [.filename]#/usr#, [.filename]#gm0s1f# for [.filename]#/data1#, and [.filename]#gm0s1g# for [.filename]#/data2#.
[source,shell]
....
# gpart show mirror/gm0
=> 63 1953525104 mirror/gm0 MBR (931G)
63 1953525042 1 freebsd [active] (931G)
1953525105 62 - free - (31k)
# gpart show mirror/gm0s1
=> 0 1953525042 mirror/gm0s1 BSD (931G)
0 2097152 1 freebsd-ufs (1.0G)
2097152 16777216 2 freebsd-swap (8.0G)
18874368 41943040 4 freebsd-ufs (20G)
60817408 20971520 5 freebsd-ufs (10G)
81788928 629145600 6 freebsd-ufs (300G)
710934528 1242590514 7 freebsd-ufs (592G)
1953525042 63 - free - (31k)
....
Both the slice and the last partition must have at least one free block at the end of the disk.
Create file systems on these new partitions. The number of partitions will vary to match the original disk, [.filename]#ada0#.
[source,shell]
....
# newfs -U /dev/mirror/gm0s1a
# newfs -U /dev/mirror/gm0s1d
# newfs -U /dev/mirror/gm0s1e
# newfs -U /dev/mirror/gm0s1f
# newfs -U /dev/mirror/gm0s1g
....
Make the mirror bootable by installing bootcode in the MBR and bsdlabel and setting the active slice:
[source,shell]
....
# gpart bootcode -b /boot/mbr mirror/gm0
# gpart set -a active -i 1 mirror/gm0
# gpart bootcode -b /boot/boot mirror/gm0s1
....
Adjust [.filename]#/etc/fstab# to use the new partitions on the mirror. Back up this file first by copying it to [.filename]#/etc/fstab.orig#.
[source,shell]
....
# cp /etc/fstab /etc/fstab.orig
....
Edit [.filename]#/etc/fstab#, replacing [.filename]#/dev/ada0# with [.filename]#mirror/gm0#.
[.programlisting]
....
# Device Mountpoint FStype Options Dump Pass#
/dev/mirror/gm0s1a / ufs rw 1 1
/dev/mirror/gm0s1b none swap sw 0 0
/dev/mirror/gm0s1d /var ufs rw 2 2
/dev/mirror/gm0s1e /usr ufs rw 2 2
/dev/mirror/gm0s1f /data1 ufs rw 2 2
/dev/mirror/gm0s1g /data2 ufs rw 2 2
....
If the [.filename]#geom_mirror.ko# kernel module has not been built into the kernel, edit [.filename]#/boot/loader.conf# to load it at boot:
[.programlisting]
....
geom_mirror_load="YES"
....
File systems from the original disk can now be copied onto the mirror with man:dump[8] and man:restore[8]. Each file system dumped with `dump -L` will create a snapshot first, which can take some time.
[source,shell]
....
# mount /dev/mirror/gm0s1a /mnt
# dump -C16 -b64 -0aL -f - / | (cd /mnt && restore -rf -)
# mount /dev/mirror/gm0s1d /mnt/var
# mount /dev/mirror/gm0s1e /mnt/usr
# mount /dev/mirror/gm0s1f /mnt/data1
# mount /dev/mirror/gm0s1g /mnt/data2
# dump -C16 -b64 -0aL -f - /usr | (cd /mnt/usr && restore -rf -)
# dump -C16 -b64 -0aL -f - /var | (cd /mnt/var && restore -rf -)
# dump -C16 -b64 -0aL -f - /data1 | (cd /mnt/data1 && restore -rf -)
# dump -C16 -b64 -0aL -f - /data2 | (cd /mnt/data2 && restore -rf -)
....
Restart the system, booting from [.filename]#ada1#. If everything is working, the system will boot from [.filename]#mirror/gm0#, which now contains the same data as [.filename]#ada0# had previously. See <<gmirror-troubleshooting>> if there are problems booting.
At this point, the mirror still consists of only the single [.filename]#ada1# disk.
After booting from [.filename]#mirror/gm0# successfully, the final step is inserting [.filename]#ada0# into the mirror.
[IMPORTANT]
====
When [.filename]#ada0# is inserted into the mirror, its former contents will be overwritten by data from the mirror. Make certain that [.filename]#mirror/gm0# has the same contents as [.filename]#ada0# before adding [.filename]#ada0# to the mirror. If the contents previously copied by man:dump[8] and man:restore[8] are not identical to what was on [.filename]#ada0#, revert [.filename]#/etc/fstab# to mount the file systems on [.filename]#ada0#, reboot, and start the whole procedure again.
====
[source,shell]
....
# gmirror insert gm0 ada0
GEOM_MIRROR: Device gm0: rebuilding provider ada0
....
Synchronization between the two disks will start immediately. Use `gmirror status` to view the progress.
[source,shell]
....
# gmirror status
Name Status Components
mirror/gm0 DEGRADED ada1 (ACTIVE)
ada0 (SYNCHRONIZING, 64%)
....
After a while, synchronization will finish.
[source,shell]
....
GEOM_MIRROR: Device gm0: rebuilding provider ada0 finished.
# gmirror status
Name Status Components
mirror/gm0 COMPLETE ada1 (ACTIVE)
ada0 (ACTIVE)
....
[.filename]#mirror/gm0# now consists of the two disks [.filename]#ada0# and [.filename]#ada1#, and the contents are automatically synchronized with each other. In use, [.filename]#mirror/gm0# will behave just like the original single drive.
[[gmirror-troubleshooting]]
=== Troubleshooting
If the system no longer boots, BIOS settings may have to be changed to boot from one of the new mirrored drives. Either mirror drive can be used for booting, as they contain identical data.
If the boot stops with this message, something is wrong with the mirror device:
[source,shell]
....
Mounting from ufs:/dev/mirror/gm0s1a failed with error 19.
Loader variables:
vfs.root.mountfrom=ufs:/dev/mirror/gm0s1a
vfs.root.mountfrom.options=rw
Manual root filesystem specification:
<fstype>:<device> [options]
Mount <device> using filesystem <fstype>
and with the specified (optional) option list.
eg. ufs:/dev/da0s1a
zfs:tank
cd9660:/dev/acd0 ro
(which is equivalent to: mount -t cd9660 -o ro /dev/acd0 /)
? List valid disk boot devices
. Yield 1 second (for background tasks)
<empty line> Abort manual input
mountroot>
....
Forgetting to load the [.filename]#geom_mirror.ko# module in [.filename]#/boot/loader.conf# can cause this problem. To fix it, boot from a FreeBSD installation media and choose `Shell` at the first prompt. Then load the mirror module and mount the mirror device:
[source,shell]
....
# gmirror load
# mount /dev/mirror/gm0s1a /mnt
....
Edit [.filename]#/mnt/boot/loader.conf#, adding a line to load the mirror module:
[.programlisting]
....
geom_mirror_load="YES"
....
Save the file and reboot.
Other problems that cause `error 19` require more effort to fix. Although the system should boot from [.filename]#ada0#, another prompt to select a shell will appear if [.filename]#/etc/fstab# is incorrect. Enter `ufs:/dev/ada0s1a` at the boot loader prompt and press kbd:[Enter]. Undo the edits in [.filename]#/etc/fstab# then mount the file systems from the original disk ([.filename]#ada0#) instead of the mirror. Reboot the system and try the procedure again.
[source,shell]
....
Enter full pathname of shell or RETURN for /bin/sh:
# cp /etc/fstab.orig /etc/fstab
# reboot
....
=== Recovering from Disk Failure
The benefit of disk mirroring is that an individual disk can fail without causing the mirror to lose any data. In the above example, if [.filename]#ada0# fails, the mirror will continue to work, providing data from the remaining working drive, [.filename]#ada1#.
To replace the failed drive, shut down the system and physically replace the failed drive with a new drive of equal or greater capacity. Manufacturers use somewhat arbitrary values when rating drives in gigabytes, and the only way to really be sure is to compare the total count of sectors shown by `diskinfo -v`. A drive with larger capacity than the mirror will work, although the extra space on the new drive will not be used.
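For example, a quick way to compare candidate drives is to look at the sector counts reported by man:diskinfo[8] (device names here are illustrative):
[source,shell]
....
# diskinfo -v ada1 ada4 | grep "mediasize in sectors"
....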
After the computer is powered back up, the mirror will be running in a "degraded" mode with only one drive. The mirror is told to forget drives that are not currently connected:
[source,shell]
....
# gmirror forget gm0
....
Any old metadata should be cleared from the replacement disk using the instructions in <<geom-mirror-metadata>>. Then the replacement disk, [.filename]#ada4# for this example, is inserted into the mirror:
[source,shell]
....
# gmirror insert gm0 /dev/ada4
....
Resynchronization begins when the new drive is inserted into the mirror. This process of copying mirror data to a new drive can take a while. Performance of the mirror will be greatly reduced during the copy, so inserting new drives is best done when there is low demand on the computer.
Progress can be monitored with `gmirror status`, which shows drives that are being synchronized and the percentage of completion. During resynchronization, the status will be `DEGRADED`, changing to `COMPLETE` when the process is finished.
[[geom-raid3]]
== RAID3 - Byte-level Striping with Dedicated Parity
RAID3 is a method used to combine several disk drives into a single volume with a dedicated parity disk. In a RAID3 system, data is split up into a number of bytes that are written across all the drives in the array except for one disk which acts as a dedicated parity disk. This means that disk reads from a RAID3 implementation access all disks in the array. Performance can be enhanced by using multiple disk controllers. The RAID3 array provides a fault tolerance of 1 drive, while providing a capacity of 1 - 1/n times the total capacity of all drives in the array, where n is the number of hard drives in the array. Such a configuration is mostly suitable for storing data of larger sizes such as multimedia files.
At least 3 physical hard drives are required to build a RAID3 array. Each disk must be of the same size, since I/O requests are interleaved to read or write to multiple disks in parallel. Also, due to the nature of RAID3, the number of drives must be equal to 3, 5, 9, 17, and so on, or 2^n + 1.
This section demonstrates how to create a software RAID3 on a FreeBSD system.
[NOTE]
====
While it is theoretically possible to boot from a RAID3 array on FreeBSD, that configuration is uncommon and is not advised.
====
=== Creating a Dedicated RAID3 Array
In FreeBSD, support for RAID3 is implemented by the man:graid3[8] GEOM class. Creating a dedicated RAID3 array on FreeBSD requires the following steps.
[.procedure]
. First, load the [.filename]#geom_raid3.ko# kernel module by issuing one of the following commands:
+
[source,shell]
....
# graid3 load
....
+
or:
+
[source,shell]
....
# kldload geom_raid3
....
. Ensure that a suitable mount point exists. This command creates a new directory to use as the mount point:
+
[source,shell]
....
# mkdir /multimedia
....
. Determine the device names for the disks which will be added to the array, and create the new RAID3 device. The final device listed will act as the dedicated parity disk. This example uses three unpartitioned ATA drives: [.filename]#ada1# and [.filename]#ada2# for data, and [.filename]#ada3# for parity.
+
[source,shell]
....
# graid3 label -v gr0 /dev/ada1 /dev/ada2 /dev/ada3
Metadata value stored on /dev/ada1.
Metadata value stored on /dev/ada2.
Metadata value stored on /dev/ada3.
Done.
....
. Partition the newly created [.filename]#gr0# device and put a UFS file system on it:
+
[source,shell]
....
# gpart create -s GPT /dev/raid3/gr0
# gpart add -t freebsd-ufs /dev/raid3/gr0
# newfs -j /dev/raid3/gr0p1
....
+
Many numbers will glide across the screen, and after a bit of time, the process will be complete. The volume has been created and is ready to be mounted:
+
[source,shell]
....
# mount /dev/raid3/gr0p1 /multimedia/
....
+
The RAID3 array is now ready to use.
Additional configuration is needed to retain this setup across system reboots.
[.procedure]
. The [.filename]#geom_raid3.ko# module must be loaded before the array can be mounted. To automatically load the kernel module during system initialization, add the following line to [.filename]#/boot/loader.conf#:
+
[.programlisting]
....
geom_raid3_load="YES"
....
. The following volume information must be added to [.filename]#/etc/fstab# in order to automatically mount the array's file system during the system boot process:
+
[.programlisting]
....
/dev/raid3/gr0p1 /multimedia ufs rw 2 2
....
[[geom-graid]]
== Software RAID Devices
Some motherboards and expansion cards add some simple hardware, usually just a ROM, that allows the computer to boot from a RAID array. After booting, access to the RAID array is handled by software running on the computer's main processor. This "hardware-assisted software RAID" provides RAID arrays that do not depend on any particular operating system and are functional even before an operating system is loaded.
Several levels of RAID are supported, depending on the hardware in use. See man:graid[8] for a complete list.
man:graid[8] requires the [.filename]#geom_raid.ko# kernel module, which is included in the [.filename]#GENERIC# kernel starting with FreeBSD 9.1. If needed, it can be loaded manually with `graid load`.
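To have the module loaded automatically at boot instead, a line like the following can be added to [.filename]#/boot/loader.conf#, mirroring the approach used for the other GEOM classes in this chapter:
[.programlisting]
....
geom_raid_load="YES"
....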
[[geom-graid-creating]]
=== Creating an Array
Software RAID devices often have a menu that can be entered by pressing special keys when the computer is booting. The menu can be used to create and delete RAID arrays. man:graid[8] can also create arrays directly from the command line.
`graid label` is used to create a new array. The motherboard used for this example has an Intel software RAID chipset, so the Intel metadata format is specified. The new array is given a label of [.filename]#gm0#, it is a mirror (RAID1), and uses drives [.filename]#ada0# and [.filename]#ada1#.
[CAUTION]
====
Some space on the drives will be overwritten when they are made into a new array. Back up existing data first!
====
[source,shell]
....
# graid label Intel gm0 RAID1 ada0 ada1
GEOM_RAID: Intel-a29ea104: Array Intel-a29ea104 created.
GEOM_RAID: Intel-a29ea104: Disk ada0 state changed from NONE to ACTIVE.
GEOM_RAID: Intel-a29ea104: Subdisk gm0:0-ada0 state changed from NONE to ACTIVE.
GEOM_RAID: Intel-a29ea104: Disk ada1 state changed from NONE to ACTIVE.
GEOM_RAID: Intel-a29ea104: Subdisk gm0:1-ada1 state changed from NONE to ACTIVE.
GEOM_RAID: Intel-a29ea104: Array started.
GEOM_RAID: Intel-a29ea104: Volume gm0 state changed from STARTING to OPTIMAL.
Intel-a29ea104 created
GEOM_RAID: Intel-a29ea104: Provider raid/r0 for volume gm0 created.
....
A status check shows the new mirror is ready for use:
[source,shell]
....
# graid status
Name Status Components
raid/r0 OPTIMAL ada0 (ACTIVE (ACTIVE))
ada1 (ACTIVE (ACTIVE))
....
The array device appears in [.filename]#/dev/raid/#. The first array is called [.filename]#r0#. Additional arrays, if present, will be [.filename]#r1#, [.filename]#r2#, and so on.
The BIOS menu on some of these devices can create arrays with special characters in their names. To avoid problems with those special characters, arrays are given simple numbered names like [.filename]#r0#. To show the actual labels, like [.filename]#gm0# in the example above, use man:sysctl[8]:
[source,shell]
....
# sysctl kern.geom.raid.name_format=1
....
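To make this setting persist across reboots, it can be placed in [.filename]#/etc/sysctl.conf#:
[.programlisting]
....
kern.geom.raid.name_format=1
....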
[[geom-graid-volumes]]
=== Multiple Volumes
Some software RAID devices support more than one _volume_ on an array. Volumes work like partitions, allowing space on the physical drives to be split and used in different ways. For example, Intel software RAID devices support two volumes. This example creates a 40 G mirror for safely storing the operating system, followed by a 20 G RAID0 (stripe) volume for fast temporary storage:
[source,shell]
....
# graid label -S 40G Intel gm0 RAID1 ada0 ada1
# graid add -S 20G gm0 RAID0
....
Volumes appear as additional [.filename]#rX# entries in [.filename]#/dev/raid/#. An array with two volumes will show [.filename]#r0# and [.filename]#r1#.
See man:graid[8] for the number of volumes supported by different software RAID devices.
[[geom-graid-converting]]
=== Converting a Single Drive to a Mirror
Under certain specific conditions, it is possible to convert an existing single drive to a man:graid[8] array without reformatting. To avoid data loss during the conversion, the existing drive must meet these minimum requirements:
* The drive must be partitioned with the MBR partitioning scheme. GPT or other partitioning schemes with metadata at the end of the drive will be overwritten and corrupted by the man:graid[8] metadata.
* There must be enough unpartitioned and unused space at the end of the drive to hold the man:graid[8] metadata. This metadata varies in size, but the largest occupies 64 M, so at least that much free space is recommended.
If the drive meets these requirements, start by making a full backup. Then create a single-drive mirror with that drive:
[source,shell]
....
# graid label Intel gm0 RAID1 ada0 NONE
....
man:graid[8] metadata was written to the end of the drive in the unused space. A second drive can now be inserted into the mirror:
[source,shell]
....
# graid insert raid/r0 ada1
....
Data from the original drive will immediately begin to be copied to the second drive. The mirror will operate in degraded status until the copy is complete.
[[geom-graid-inserting]]
=== Inserting New Drives into the Array
Drives can be inserted into an array as replacements for drives that have failed or are missing. If there are no failed or missing drives, the new drive becomes a spare. For example, inserting a new drive into a working two-drive mirror results in a two-drive mirror with one spare drive, not a three-drive mirror.
In the example mirror array, data immediately begins to be copied to the newly-inserted drive. Any existing information on the new drive will be overwritten.
[source,shell]
....
# graid insert raid/r0 ada1
GEOM_RAID: Intel-a29ea104: Disk ada1 state changed from NONE to ACTIVE.
GEOM_RAID: Intel-a29ea104: Subdisk gm0:1-ada1 state changed from NONE to NEW.
GEOM_RAID: Intel-a29ea104: Subdisk gm0:1-ada1 state changed from NEW to REBUILD.
GEOM_RAID: Intel-a29ea104: Subdisk gm0:1-ada1 rebuild start at 0.
....
[[geom-graid-removing]]
=== Removing Drives from the Array
Individual drives can be permanently removed from an array and their metadata erased:
[source,shell]
....
# graid remove raid/r0 ada1
GEOM_RAID: Intel-a29ea104: Disk ada1 state changed from ACTIVE to OFFLINE.
GEOM_RAID: Intel-a29ea104: Subdisk gm0:1-[unknown] state changed from ACTIVE to NONE.
GEOM_RAID: Intel-a29ea104: Volume gm0 state changed from OPTIMAL to DEGRADED.
....
[[geom-graid-stopping]]
=== Stopping the Array
An array can be stopped without removing metadata from the drives. The array will be restarted when the system is booted.
[source,shell]
....
# graid stop raid/r0
....
[[geom-graid-status]]
=== Checking Array Status
Array status can be checked at any time. After a drive was added to the mirror in the example above, data is being copied from the original drive to the new drive:
[source,shell]
....
# graid status
Name Status Components
raid/r0 DEGRADED ada0 (ACTIVE (ACTIVE))
ada1 (ACTIVE (REBUILD 28%))
....
Some types of arrays, like `RAID0` or `CONCAT`, may not be shown in the status report if disks have failed. To see these partially-failed arrays, add `-ga`:
[source,shell]
....
# graid status -ga
Name Status Components
Intel-e2d07d9a BROKEN ada6 (ACTIVE (ACTIVE))
....
[[geom-graid-deleting]]
=== Deleting Arrays
Arrays are destroyed by deleting all of the volumes from them. When the last volume present is deleted, the array is stopped and metadata is removed from the drives:
[source,shell]
....
# graid delete raid/r0
....
[[geom-graid-unexpected]]
=== Deleting Unexpected Arrays
Drives may unexpectedly contain man:graid[8] metadata, either from previous use or manufacturer testing. man:graid[8] will detect these drives and create an array, interfering with access to the individual drive. To remove the unwanted metadata:
[.procedure]
. Boot the system. At the boot menu, select `2` for the loader prompt. Enter:
+
[source,shell]
....
OK set kern.geom.raid.enable=0
OK boot
....
+
The system will boot with man:graid[8] disabled.
. Back up all data on the affected drive.
. As a workaround, man:graid[8] array detection can be disabled by adding
+
[.programlisting]
....
kern.geom.raid.enable=0
....
+
to [.filename]#/boot/loader.conf#.
+
To permanently remove the man:graid[8] metadata from the affected drive, boot a FreeBSD installation CD-ROM or memory stick, and select `Shell`. Use `status` to find the name of the array, typically `raid/r0`:
+
[source,shell]
....
# graid status
Name Status Components
raid/r0 OPTIMAL ada0 (ACTIVE (ACTIVE))
ada1 (ACTIVE (ACTIVE))
....
+
Delete the volume by name:
+
[source,shell]
....
# graid delete raid/r0
....
+
If there is more than one volume shown, repeat the process for each volume. After the last volume has been deleted, the array will be destroyed.
+
Reboot and verify data, restoring from backup if necessary. After the metadata has been removed, the `kern.geom.raid.enable=0` entry in [.filename]#/boot/loader.conf# can also be removed.
[[geom-ggate]]
== GEOM Gate Network
GEOM provides a simple mechanism for remote access to devices such as disks, CDs, and file systems through the GEOM Gate network daemon, ggated. The system with the device runs the server daemon, which handles requests made by clients using ggatec. The devices should not contain any sensitive data, as the connection between the client and the server is not encrypted.
Similar to NFS, which is discussed in crossref:network-servers[network-nfs,"Network File System (NFS)"], ggated is configured using an exports file. This file specifies which systems are permitted to access the exported resources and what level of access they are offered. For example, to give the client `192.168.1.5` read and write access to the fourth slice on the first SCSI disk, create [.filename]#/etc/gg.exports# with this line:
[.programlisting]
....
192.168.1.5 RW /dev/da0s4d
....
Before exporting the device, ensure it is not currently mounted. Then, start ggated:
[source,shell]
....
# ggated
....
Several options are available for specifying an alternate listening port or changing the default location of the exports file. Refer to man:ggated[8] for details.
To access the exported device on the client machine, first use `ggatec` to specify the IP address of the server and the device name of the exported device. If successful, this command will display a `ggate` device name to mount. Mount that specified device name on a free mount point. This example connects to the [.filename]#/dev/da0s4d# partition on `192.168.1.1`, then mounts [.filename]#/dev/ggate0# on [.filename]#/mnt#:
[source,shell]
....
# ggatec create -o rw 192.168.1.1 /dev/da0s4d
ggate0
# mount /dev/ggate0 /mnt
....
The device on the server may now be accessed through [.filename]#/mnt# on the client. For more details about `ggatec` and a few usage examples, refer to man:ggatec[8].
[NOTE]
====
The mount will fail if the device is currently mounted on either the server or any other client on the network. If simultaneous access is needed to network resources, use NFS instead.
====
When the device is no longer needed, unmount it with `umount` so that the resource is available to other clients.
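A hedged example of releasing the device on the client, assuming the `ggate0` unit created above:
[source,shell]
....
# umount /mnt
# ggatec destroy -u 0
....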
[[geom-glabel]]
== Labeling Disk Devices
During system initialization, the FreeBSD kernel creates device nodes as devices are found. This method of probing for devices raises some issues. For instance, what if a new disk device is added via USB? It is likely that a flash device may be handed the device name of [.filename]#da0# and the original [.filename]#da0# shifted to [.filename]#da1#. This will cause issues mounting file systems if they are listed in [.filename]#/etc/fstab# which may also prevent the system from booting.
One solution is to chain SCSI devices in order so a new device added to the SCSI card will be issued unused device numbers. But what about USB devices which may replace the primary SCSI disk? This happens because USB devices are usually probed before the SCSI card. One solution is to only insert these devices after the system has been booted. Another method is to use only a single ATA drive and never list the SCSI devices in [.filename]#/etc/fstab#.
A better solution is to use `glabel` to label the disk devices and use the labels in [.filename]#/etc/fstab#. Since `glabel` stores the label in the last sector of a given provider, the label will remain persistent across reboots. By using this label as a device, the file-system may always be mounted regardless of what device node it is accessed through.
[NOTE]
====
`glabel` can create both transient and permanent labels. Only permanent labels are consistent across reboots. Refer to man:glabel[8] for more information on the differences between labels.
====
=== Label Types and Examples
Permanent labels can be a generic or a file system label. Permanent file system labels can be created with man:tunefs[8] or man:newfs[8]. These types of labels are created in a sub-directory of [.filename]#/dev#, and will be named according to the file system type. For example, UFS2 file system labels will be created in [.filename]#/dev/ufs#. Generic permanent labels can be created with `glabel label`. These are not file system specific and will be created in [.filename]#/dev/label#.
Temporary labels are destroyed at the next reboot. These labels are created in [.filename]#/dev/label# and are suited to experimentation. A temporary label can be created using `glabel create`.
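For example, a temporary label named `tmp0` could be attached to a hypothetical provider like this:
[source,shell]
....
# glabel create tmp0 /dev/da4
....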
To create a permanent label for a UFS2 file system without destroying any data, issue the following command:
[source,shell]
....
# tunefs -L home /dev/da3
....
A label should now exist in [.filename]#/dev/ufs# which may be added to [.filename]#/etc/fstab#:
[.programlisting]
....
/dev/ufs/home /home ufs rw 2 2
....
[NOTE]
====
The file system must not be mounted while attempting to run `tunefs`.
====
Now the file system may be mounted:
[source,shell]
....
# mount /home
....
From this point on, so long as the [.filename]#geom_label.ko# kernel module is loaded at boot with [.filename]#/boot/loader.conf# or the `GEOM_LABEL` kernel option is present, the device node may change without any ill effect on the system.
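Loading the module at boot is done with a line like the following in [.filename]#/boot/loader.conf#:
[.programlisting]
....
geom_label_load="YES"
....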
File systems may also be created with a default label by using the `-L` flag with `newfs`. Refer to man:newfs[8] for more information.
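For example, a new file system with a default label could be created on a hypothetical partition like this:
[source,shell]
....
# newfs -L data /dev/da4p1
....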
The following command can be used to destroy the label:
[source,shell]
....
# glabel destroy home
....
The following example shows how to label the partitions of a boot disk.
.Labeling Partitions on the Boot Disk
[example]
====
By permanently labeling the partitions on the boot disk, the system should be able to continue to boot normally, even if the disk is moved to another controller or transferred to a different system. For this example, it is assumed that a single ATA disk is used, which is currently recognized by the system as [.filename]#ad0#. It is also assumed that the standard FreeBSD partition scheme is used, with [.filename]#/#, [.filename]#/var#, [.filename]#/usr# and [.filename]#/tmp#, as well as a swap partition.
Reboot the system, and at the man:loader[8] prompt, press kbd:[4] to boot into single user mode. Then enter the following commands:
[source,shell]
....
# glabel label rootfs /dev/ad0s1a
GEOM_LABEL: Label for provider /dev/ad0s1a is label/rootfs
# glabel label var /dev/ad0s1d
GEOM_LABEL: Label for provider /dev/ad0s1d is label/var
# glabel label usr /dev/ad0s1f
GEOM_LABEL: Label for provider /dev/ad0s1f is label/usr
# glabel label tmp /dev/ad0s1e
GEOM_LABEL: Label for provider /dev/ad0s1e is label/tmp
# glabel label swap /dev/ad0s1b
GEOM_LABEL: Label for provider /dev/ad0s1b is label/swap
# exit
....
The system will continue with multi-user boot. After the boot completes, edit [.filename]#/etc/fstab# and replace the conventional device names with their respective labels. The final [.filename]#/etc/fstab# will look like this:
[.programlisting]
....
# Device Mountpoint FStype Options Dump Pass#
/dev/label/swap none swap sw 0 0
/dev/label/rootfs / ufs rw 1 1
/dev/label/tmp /tmp ufs rw 2 2
/dev/label/usr /usr ufs rw 2 2
/dev/label/var /var ufs rw 2 2
....
The system can now be rebooted. If everything went well, it will come up normally and `mount` will show:
[source,shell]
....
# mount
/dev/label/rootfs on / (ufs, local)
devfs on /dev (devfs, local)
/dev/label/tmp on /tmp (ufs, local, soft-updates)
/dev/label/usr on /usr (ufs, local, soft-updates)
/dev/label/var on /var (ufs, local, soft-updates)
....
====
The man:glabel[8] class supports a label type for UFS file systems, based on the unique file system id, `ufsid`. These labels may be found in [.filename]#/dev/ufsid# and are created automatically during system startup. It is possible to use `ufsid` labels to mount partitions using [.filename]#/etc/fstab#. Use `glabel status` to receive a list of file systems and their corresponding `ufsid` labels:
[source,shell]
....
% glabel status
Name Status Components
ufsid/486b6fc38d330916 N/A ad4s1d
ufsid/486b6fc16926168e N/A ad4s1f
....
In the above example, [.filename]#ad4s1d# represents [.filename]#/var#, while [.filename]#ad4s1f# represents [.filename]#/usr#. Using the `ufsid` values shown, these partitions may now be mounted with the following entries in [.filename]#/etc/fstab#:
[.programlisting]
....
/dev/ufsid/486b6fc38d330916 /var ufs rw 2 2
/dev/ufsid/486b6fc16926168e /usr ufs rw 2 2
....
Any partitions with `ufsid` labels can be mounted in this way, eliminating the need to manually create permanent labels, while still enjoying the benefits of device name independent mounting.
[[geom-gjournal]]
== UFS Journaling Through GEOM
Support for journals on UFS file systems is available on FreeBSD. The implementation is provided through the GEOM subsystem and is configured using `gjournal`. Unlike other file system journaling implementations, the `gjournal` method is block based and not implemented as part of the file system. It is a GEOM extension.
Journaling stores a log of file system transactions, such as changes that make up a complete disk write operation, before meta-data and file writes are committed to the disk. This transaction log can later be replayed to redo file system transactions, preventing file system inconsistencies.
This method provides another mechanism to protect against data loss and inconsistencies of the file system. Unlike Soft Updates, which tracks and enforces meta-data updates, and snapshots, which create an image of the file system, a log is stored in disk space specifically reserved for this task. For better performance, the journal may be stored on another disk. In this configuration, the journal provider or storage device is listed after the device on which to enable journaling.
The [.filename]#GENERIC# kernel provides support for `gjournal`. To automatically load the [.filename]#geom_journal.ko# kernel module at boot time, add the following line to [.filename]#/boot/loader.conf#:
[.programlisting]
....
geom_journal_load="YES"
....
If a custom kernel is used, ensure the following line is in the kernel configuration file:
[.programlisting]
....
options GEOM_JOURNAL
....
Once the module is loaded, a journal can be created on a new file system using the following steps. In this example, [.filename]#da4# is a new SCSI disk:
[source,shell]
....
# gjournal load
# gjournal label /dev/da4
....
This will load the module and create a [.filename]#/dev/da4.journal# device node on [.filename]#/dev/da4#.
A UFS file system may now be created on the journaled device, then mounted on an existing mount point:
[source,shell]
....
# newfs -O 2 -J /dev/da4.journal
# mount /dev/da4.journal /mnt
....
[NOTE]
====
In the case of several slices, a journal will be created for each individual slice. For instance, if [.filename]#ad4s1# and [.filename]#ad4s2# are both slices, then `gjournal` will create [.filename]#ad4s1.journal# and [.filename]#ad4s2.journal#.
====
Journaling may also be enabled on current file systems by using `tunefs`. However, _always_ make a backup before attempting to alter an existing file system. In most cases, `gjournal` will fail if it is unable to create the journal, but this does not protect against data loss incurred as a result of misusing `tunefs`. Refer to man:gjournal[8] and man:tunefs[8] for more information about these commands.
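As a rough, hypothetical sketch only (the device name [.filename]#da4s1d# is invented, the journal provider is assumed to already exist, the file system must be unmounted, and a verified backup should be in place), toggling the gjournal flag with `tunefs` might look like this:
[source,shell]
....
# umount /mnt
# tunefs -J enable /dev/da4s1d.journal
# mount -o async /dev/da4s1d.journal /mnt
....
The `async` mount option is commonly suggested for gjournaled file systems because the journal itself provides the consistency guarantees; consult man:gjournal[8] and man:tunefs[8] before applying this to real data.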
It is possible to journal the boot disk of a FreeBSD system. Refer to the article link:{gjournal-desktop}[Implementing UFS Journaling on a Desktop PC] for detailed instructions.
diff --git a/documentation/content/en/books/handbook/glossary.adoc b/documentation/content/en/books/handbook/glossary.adoc
index 20b6323a1c..2a6c755a35 100644
--- a/documentation/content/en/books/handbook/glossary.adoc
+++ b/documentation/content/en/books/handbook/glossary.adoc
@@ -1,1042 +1,1043 @@
---
title: FreeBSD Glossary
prev: books/handbook/pgpkeys
next: books/handbook/colophon
+description: FreeBSD Handbook Glossary
---
[glossary]
[[freebsd-glossary]]
= FreeBSD Glossary
:doctype: book
:icons: font
:!sectnums:
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
include::shared/en/urls.adoc[]
This glossary contains terms and acronyms used within the FreeBSD community and documentation.
[discrete]
== A
ACL::
See <<acl-glossary,Access Control List>>.
ACPI::
See <<acpi-glossary,Advanced Configuration and Power Interface>>.
AMD::
See <<amd-glossary,Automatic Mount Daemon>>.
AML::
See <<aml-glossary,ACPI Machine Language>>.
API::
See <<api-glossary,Application Programming Interface>>.
APIC::
See <<apic-glossary,Advanced Programmable Interrupt Controller>>.
APM::
See <<apm-glossary,Advanced Power Management>>.
APOP::
See <<apop-glossary,Authenticated Post Office Protocol>>.
ASL::
See <<asl-glossary,ACPI Source Language>>.
ATA::
See <<ata-glossary,Advanced Technology Attachment>>.
ATM::
See <<atm-glossary,Asynchronous Transfer Mode>>.
[[aml-glossary]]
ACPI Machine Language::
Pseudocode, interpreted by a virtual machine within an ACPI-compliant operating system, providing a layer between the underlying hardware and the documented interface presented to the OS.
[[asl-glossary]]
ACPI Source Language::
The programming language AML is written in.
[[acl-glossary]]
Access Control List::
A list of permissions attached to an object, usually either a file or a network device.
[[acpi-glossary]]
Advanced Configuration and Power Interface::
A specification which provides an abstraction of the interface the hardware presents to the operating system, so that the operating system should need to know nothing about the underlying hardware to make the most of it.
ACPI evolves and supersedes the functionality provided previously by APM, PNPBIOS and other technologies, and provides facilities for controlling power consumption, machine suspension, device enabling and disabling, etc.
[[api-glossary]]
Application Programming Interface::
A set of procedures, protocols and tools that specify the canonical interaction of one or more program parts;
how, when and why they do work together, and what data they share or operate on.
[[apm-glossary]]
Advanced Power Management::
An API enabling the operating system to work in conjunction with the BIOS in order to achieve power management.
APM has been superseded by the much more generic and powerful ACPI specification for most applications.
[[apic-glossary]]
Advanced Programmable Interrupt Controller::
{empty}
[[ata-glossary]]
Advanced Technology Attachment::
{empty}
[[atm-glossary]]
Asynchronous Transfer Mode::
{empty}
[[apop-glossary]]
Authenticated Post Office Protocol::
{empty}
[[amd-glossary]]
Automatic Mount Daemon::
A daemon that automatically mounts a filesystem when a file or directory within that filesystem is accessed.
[discrete]
== B
BAR::
See <<bar-glossary,Base Address Register>>.
BIND::
See <<bind-glossary,Berkeley Internet Name Domain>>.
BIOS::
See <<bios-glossary,Basic Input/Output System>>.
BSD::
See <<bsd-glossary,Berkeley Software Distribution>>.
[[bar-glossary]]
Base Address Register::
The registers that determine which address range a PCI device will respond to.
[[bios-glossary]]
Basic Input/Output System::
The definition of BIOS depends a bit on the context.
Some people refer to it as the ROM chip with a basic set of routines to provide an interface between software and hardware.
Others refer to it as the set of routines contained in the chip that help in bootstrapping the system.
Some might also refer to it as the screen used to configure the bootstrapping process.
The BIOS is PC-specific but other systems have something similar.
[[bind-glossary]]
Berkeley Internet Name Domain::
An implementation of the DNS protocols.
[[bsd-glossary]]
Berkeley Software Distribution::
This is the name that the Computer Systems Research Group (CSRG) at link:http://www.berkeley.edu[The University of California at Berkeley] gave to their improvements and modifications to AT&T's 32V UNIX(R).
FreeBSD is a descendant of the CSRG work.
[[bikeshed-glossary]]
Bikeshed Building::
A phenomenon whereby many people will give an opinion on an uncomplicated topic, whilst a complex topic receives little or no discussion.
See the link:{faq}#bikeshed-painting[FAQ] for the origin of the term.
[discrete]
== C
CD::
See <<cd-glossary,Carrier Detect>>.
CHAP::
See <<chap-glossary,Challenge Handshake Authentication Protocol>>.
CLIP::
See <<clip-glossary,Classical IP over ATM>>.
COFF::
See <<coff-glossary,Common Object File Format>>.
CPU::
See <<cpu-glossary,Central Processing Unit>>.
CTS::
See <<cts-glossary,Clear To Send>>.
[[cd-glossary]]
Carrier Detect::
An RS232C signal indicating that a carrier has been detected.
[[cpu-glossary]]
Central Processing Unit::
Also known as the processor.
This is the brain of the computer where all calculations take place.
There are a number of different architectures with different instruction sets.
Among the more well-known are the Intel-x86 and derivatives, Arm, and PowerPC.
[[chap-glossary]]
Challenge Handshake Authentication Protocol::
A method of authenticating a user, based on a secret shared between client and server.
[[clip-glossary]]
Classical IP over ATM::
{empty}
[[cts-glossary]]
Clear To Send::
An RS232C signal giving the remote system permission to send data.
+
See also <<rts-glossary,Request To Send>>.
[[coff-glossary]]
Common Object File Format::
{empty}
[discrete]
== D
DAC::
See <<dac-glossary,Discretionary Access Control>>.
DDB::
See <<ddb-glossary,Debugger>>.
DES::
See <<des-glossary,Data Encryption Standard>>.
DHCP::
See <<dhcp-glossary,Dynamic Host Configuration Protocol>>.
DNS::
See <<dns-glossary,Domain Name System>>.
DSDT::
See <<dsdt-glossary,Differentiated System Description Table>>.
DSR::
See <<dsr-glossary,Data Set Ready>>.
DTR::
See <<dtr-glossary,Data Terminal Ready>>.
DVMRP::
See <<dvmrp-glossary,Distance-Vector Multicast Routing Protocol>>.
[[dac-glossary]]
Discretionary Access Control::
{empty}
[[des-glossary]]
Data Encryption Standard::
A method of encrypting information, traditionally used as the method of encryption for UNIX(R) passwords and the man:crypt[3] function.
[[dsr-glossary]]
Data Set Ready::
An RS232C signal sent from the modem to the computer or terminal indicating a readiness to send and receive data.
+
See also <<dtr-glossary,Data Terminal Ready>>.
[[dtr-glossary]]
Data Terminal Ready::
An RS232C signal sent from the computer or terminal to the modem indicating a readiness to send and receive data.
[[ddb-glossary]]
Debugger::
An interactive in-kernel facility for examining the status of a system, often used after a system has crashed to establish the events surrounding the failure.
[[dsdt-glossary]]
Differentiated System Description Table::
An ACPI table, supplying basic configuration information about the base system.
[[dvmrp-glossary]]
Distance-Vector Multicast Routing Protocol::
{empty}
[[dns-glossary]]
Domain Name System::
The system that converts human-readable hostnames (e.g., mail.example.net) to Internet addresses and vice versa.
[[dhcp-glossary]]
Dynamic Host Configuration Protocol::
A protocol that dynamically assigns IP addresses to a computer (host) when it requests one from the server.
The address assignment is called a “lease”.
[discrete]
== E
ECOFF::
See <<ecoff-glossary,Extended COFF>>.
ELF::
See <<elf-glossary,Executable and Linking Format>>.
ESP::
See <<esp-glossary,Encapsulated Security Payload>>.
Encapsulated Security Payload::
{empty}
[[elf-glossary]]
Executable and Linking Format::
{empty}
[[ecoff-glossary]]
Extended COFF::
{empty}
[discrete]
== F
FADT::
See <<fadt-glossary,Fixed ACPI Description Table>>.
FAT::
See <<fat-glossary,File Allocation Table>>.
FAT16::
See <<fat16-glossary,File Allocation Table (16-bit)>>.
FTP::
See <<ftp-glossary,File Transfer Protocol>>.
[[fat-glossary]]
File Allocation Table::
{empty}
[[fat16-glossary]]
File Allocation Table (16-bit)::
{empty}
[[ftp-glossary]]
File Transfer Protocol::
A member of the family of high-level protocols implemented on top of TCP which can be used to transfer files over a TCP/IP network.
[[fadt-glossary]]
Fixed ACPI Description Table::
{empty}
[discrete]
== G
GUI::
See <<gui-glossary,Graphical User Interface>>.
[[giant-glossary]]
Giant::
The name of a mutual exclusion mechanism (a sleep `mutex`) that protects a large set of kernel resources.
Although a simple locking mechanism was adequate in the days when a machine might have only a few dozen processes, one networking card, and certainly only one processor, in current times it is an unacceptable performance bottleneck.
FreeBSD developers are actively working to replace it with locks that protect individual resources, which will allow a much greater degree of parallelism for both single-processor and multi-processor machines.
[[gui-glossary]]
Graphical User Interface::
A system in which the user and computer interact using graphics.
[discrete]
== H
HTML::
See <<html-glossary,HyperText Markup Language>>.
HUP::
See <<hup-glossary,HangUp>>.
[[hup-glossary]]
HangUp::
{empty}
[[html-glossary]]
HyperText Markup Language::
The markup language used to create web pages.
[discrete]
== I
I/O::
See <<io-glossary,Input/Output>>.
IASL::
See <<iasl-glossary,Intel’s ASL compiler>>.
IMAP::
See <<imap-glossary,Internet Message Access Protocol>>.
IP::
See <<ip-glossary,Internet Protocol>>.
IPFW::
See <<ipfw-glossary,IP Firewall>>.
IPP::
See <<ipp-glossary,Internet Printing Protocol>>.
IPv4::
See <<ipv4-glossary,IP Version 4>>.
IPv6::
See <<ipv6-glossary,IP Version 6>>.
ISP::
See <<isp-glossary,Internet Service Provider>>.
[[ipfw-glossary]]
IP Firewall::
{empty}
[[ipv4-glossary]]
IP Version 4::
The IP protocol version 4, which uses 32 bits for addressing.
This version is still the most widely used, but it is slowly being replaced with IPv6.
+
See also <<ipv6-glossary,IP Version 6>>.
[[ipv6-glossary]]
IP Version 6::
The new IP protocol.
Invented because the address space in IPv4 is running out.
Uses 128 bits for addressing.
[[io-glossary]]
Input/Output::
{empty}
[[iasl-glossary]]
Intel’s ASL compiler::
Intel’s compiler for converting ASL into AML.
[[imap-glossary]]
Internet Message Access Protocol::
A protocol for accessing email messages on a mail server, characterised by the messages usually being kept on the server as opposed to being downloaded to the mail reader client.
+
See also <<pop3-glossary,Post Office Protocol Version 3>>.
[[ipp-glossary]]
Internet Printing Protocol::
{empty}
[[ip-glossary]]
Internet Protocol::
The packet transmitting protocol that is the basic protocol on the Internet.
Originally developed at the U.S. Department of Defense and an extremely important part of the TCP/IP stack.
Without the Internet Protocol, the Internet would not have become what it is today.
For more information, see link:ftp://ftp.rfc-editor.org/in-notes/rfc791.txt[RFC 791].
[[isp-glossary]]
Internet Service Provider::
A company that provides access to the Internet.
[discrete]
== K
[[kame-glossary]]
KAME::
Japanese for “turtle”, the term KAME is used in computing circles to refer to the link:http://www.kame.net/[KAME Project], who work on an implementation of IPv6.
KDC::
See <<kdc-glossary,Key Distribution Center>>.
KLD::
See <<kld-glossary,Kernel ld(1)>>.
KSE::
See <<kse-glossary,Kernel Scheduler Entities>>.
KVA::
See <<kva-glossary,Kernel Virtual Address>>.
Kbps::
See <<kbps-glossary,Kilo Bits Per Second>>.
[[kld-glossary]]
Kernel man:ld[1]::
A method of dynamically loading functionality into a FreeBSD kernel without rebooting the system.
[[kse-glossary]]
Kernel Scheduler Entities::
A kernel-supported threading system.
See the link:http://www.freebsd.org/kse[project home page] for further details.
[[kva-glossary]]
Kernel Virtual Address::
{empty}
[[kdc-glossary]]
Key Distribution Center::
{empty}
[[kbps-glossary]]
Kilo Bits Per Second::
Used to measure bandwidth (how much data can pass a given point in a specified amount of time).
Alternatives to the Kilo prefix include Mega, Giga, Tera, and so forth.
[discrete]
== L
LAN::
See <<lan-glossary,Local Area Network>>.
LOR::
See <<lor-glossary,Lock Order Reversal>>.
LPD::
See <<lpd-glossary,Line Printer Daemon>>.
[[lpd-glossary]]
Line Printer Daemon::
{empty}
[[lan-glossary]]
Local Area Network::
A network used in a local area, such as an office or home.
[[lor-glossary]]
Lock Order Reversal::
The FreeBSD kernel uses a number of resource locks to arbitrate contention for those resources.
A run-time lock diagnostic system found in FreeBSD-CURRENT kernels (but removed for releases), called man:witness[4], detects the potential for deadlocks due to locking errors.
(man:witness[4] is actually slightly conservative, so it is possible to get false positives.)
A true positive report indicates that “if you were unlucky, a deadlock would have happened here”.
+
True positive LORs tend to get fixed quickly, so check http://lists.FreeBSD.org/mailman/listinfo/freebsd-current and the link:http://sources.zabbadoz.net/freebsd/lor.html[LORs Seen] page before posting to the mailing lists.
[discrete]
== M
MAC::
See <<mac-glossary,Mandatory Access Control>>.
MADT::
See <<madt-glossary,Multiple APIC Description Table>>.
MFC::
See <<mfc-glossary,Merge From Current>>.
MFH::
See <<mfh-glossary,Merge From Head>>.
MFS::
See <<mfs-glossary,Merge From Stable>>.
MFV::
See <<mfv-glossary,Merge From Vendor>>.
MIT::
See <<mit-glossary,Massachusetts Institute of Technology>>.
MLS::
See <<mls-glossary,Multi-Level Security>>.
MOTD::
See <<motd-glossary,Message Of The Day>>.
MTA::
See <<mta-glossary,Mail Transfer Agent>>.
MUA::
See <<mua-glossary,Mail User Agent>>.
[[mta-glossary]]
Mail Transfer Agent::
An application used to transfer email.
An MTA has traditionally been part of the BSD base system.
Today Sendmail is included in the base system, but there are many other MTAs, such as postfix, qmail and Exim.
[[mua-glossary]]
Mail User Agent::
An application used by users to display and write email.
[[mac-glossary]]
Mandatory Access Control::
{empty}
[[mit-glossary]]
Massachusetts Institute of Technology::
{empty}
[[mfc-glossary]]
Merge From Current::
To merge functionality or a patch from the -CURRENT branch to another, most often -STABLE.
[[mfh-glossary]]
Merge From Head::
To merge functionality or a patch from a repository HEAD to an earlier branch.
[[mfs-glossary]]
Merge From Stable::
In the normal course of FreeBSD development, a change will be committed to the -CURRENT branch for testing before being merged to -STABLE.
On rare occasions, a change will go into -STABLE first and then be merged to -CURRENT.
+
This term is also used when a patch is merged from -STABLE to a security branch.
+
See also <<mfc-glossary,Merge From Current>>.
[[mfv-glossary]]
Merge From Vendor::
{empty}
[[motd-glossary]]
Message Of The Day::
A message, usually shown on login, often used to distribute information to users of the system.
[[mls-glossary]]
Multi-Level Security::
{empty}
[[madt-glossary]]
Multiple APIC Description Table::
{empty}
[discrete]
== N
NAT::
See <<nat-glossary,Network Address Translation>>.
NDISulator::
See <<projectevil-glossary,Project Evil>>.
NFS::
See <<nfs-glossary,Network File System>>.
NTFS::
See <<ntfs-glossary,New Technology File System>>.
NTP::
See <<ntp-glossary,Network Time Protocol>>.
[[nat-glossary]]
Network Address Translation::
A technique where IP packets are rewritten on the way through a gateway, enabling many machines behind the gateway to effectively share a single IP address.
[[nfs-glossary]]
Network File System::
{empty}
[[ntfs-glossary]]
New Technology File System::
A filesystem developed by Microsoft and available in its “New Technology” operating systems, such as Windows(R) 2000, Windows NT(R) and Windows(R) XP.
[[ntp-glossary]]
Network Time Protocol::
A means of synchronizing clocks over a network.
[discrete]
== O
OBE::
See <<obe-glossary,Overtaken By Events>>.
ODMR::
See <<odmr-glossary,On-Demand Mail Relay>>.
OS::
See <<os-glossary,Operating System>>.
[[odmr-glossary]]
On-Demand Mail Relay::
{empty}
[[os-glossary]]
Operating System::
A set of programs, libraries and tools that provide access to the hardware resources of a computer.
Operating systems range today from simplistic designs that support only one program running at a time, accessing only one device, to fully multi-user, multi-tasking and multi-process systems that can serve thousands of users simultaneously, each of them running dozens of different applications.
[[obe-glossary]]
Overtaken By Events::
Indicates a suggested change (such as a Problem Report or a feature request) which is no longer relevant or applicable due to such things as later changes to FreeBSD, changes in networking standards, the affected hardware having since become obsolete, and so forth.
[discrete]
== P
PAE::
See <<pae-glossary,Physical Address Extensions>>.
PAM::
See <<pam-glossary,Pluggable Authentication Modules>>.
PAP::
See <<pap-glossary,Password Authentication Protocol>>.
PC::
See <<pc-glossary,Personal Computer>>.
PCNFSD::
See <<pcnfsd-glossary,Personal Computer Network File System Daemon>>.
PDF::
See <<pdf-glossary,Portable Document Format>>.
PID::
See <<pid-glossary,Process ID>>.
POLA::
See <<pola-glossary,Principle Of Least Astonishment>>.
POP::
See <<pop-glossary,Post Office Protocol>>.
POP3::
See <<pop3-glossary,Post Office Protocol Version 3>>.
PPD::
See <<ppd-glossary,PostScript Printer Description>>.
PPP::
See <<ppp-glossary,Point-to-Point Protocol>>.
PPPoA::
See <<pppoa-glossary,PPP over ATM>>.
PPPoE::
See <<pppoe-glossary,PPP over Ethernet>>.
[[pppoa-glossary]]
PPP over ATM::
{empty}
[[pppoe-glossary]]
PPP over Ethernet::
{empty}
PR::
See <<pr-glossary,Problem Report>>.
PXE::
See <<pxe-glossary,Preboot eXecution Environment>>.
[[pap-glossary]]
Password Authentication Protocol::
{empty}
[[pc-glossary]]
Personal Computer::
{empty}
[[pcnfsd-glossary]]
Personal Computer Network File System Daemon::
{empty}
[[pae-glossary]]
Physical Address Extensions::
A method of enabling access to up to 64 GB of RAM on systems which only physically have a 32-bit wide address space (and would therefore be limited to 4 GB without PAE).
[[pam-glossary]]
Pluggable Authentication Modules::
{empty}
[[ppp-glossary]]
Point-to-Point Protocol::
{empty}
[[pointyhat]]
Pointy Hat::
A mythical piece of headgear, much like a dunce cap, awarded to any FreeBSD committer who breaks the build, makes revision numbers go backwards, or creates any other kind of havoc in the source base.
Any committer worth his or her salt will soon accumulate a large collection.
The usage is (almost always?) humorous.
[[pdf-glossary]]
Portable Document Format::
{empty}
[[pop-glossary]]
Post Office Protocol::
See also <<pop3-glossary,Post Office Protocol Version 3>>.
[[pop3-glossary]]
Post Office Protocol Version 3::
A protocol for accessing email messages on a mail server, characterised by the messages usually being downloaded from the server to the client, as opposed to remaining on the server.
+
See also <<imap-glossary,Internet Message Access Protocol>>.
[[ppd-glossary]]
PostScript Printer Description::
{empty}
[[pxe-glossary]]
Preboot eXecution Environment::
{empty}
[[pola-glossary]]
Principle Of Least Astonishment::
As FreeBSD evolves, changes visible to the user should be kept as unsurprising as possible.
For example, arbitrarily rearranging system startup variables in [.filename]#/etc/defaults/rc.conf# violates POLA.
Developers consider POLA when contemplating user-visible system changes.
[[pr-glossary]]
Problem Report::
A description of some kind of problem that has been found in either the FreeBSD source or documentation.
See link:{problem-reports}[Writing FreeBSD Problem Reports].
[[pid-glossary]]
Process ID::
A number, unique to a particular process on a system, which identifies it and allows actions to be taken against it.
[[projectevil-glossary]]
Project Evil::
The working title for the NDISulator, written by Bill Paul, who named it referring to how awful it is (from a philosophical standpoint) to need to have something like this in the first place.
The NDISulator is a special compatibility module to allow Microsoft Windows(TM) NDIS miniport network drivers to be used with FreeBSD/i386.
This is usually the only way to use cards where the driver is closed-source.
See [.filename]#src/sys/compat/ndis/subr_ndis.c#.
[discrete]
== R
RA::
See <<ra-glossary,Router Advertisement>>.
RAID::
See <<raid-glossary,Redundant Array of Inexpensive Disks>>.
RAM::
See <<ram-glossary,Random Access Memory>>.
RD::
See <<rd-glossary,Received Data>>.
RFC::
See <<rfc-glossary,Request For Comments>>.
RISC::
See <<risc-glossary,Reduced Instruction Set Computer>>.
RPC::
See <<rpc-glossary,Remote Procedure Call>>.
RS232C::
See <<rs232c-glossary,Recommended Standard 232C>>.
RTS::
See <<rts-glossary,Request To Send>>.
[[ram-glossary]]
Random Access Memory::
{empty}
[[rcs-glossary]]
Revision Control System::
The _Revision Control System (RCS)_ is one of the oldest software suites that implement “revision control” for plain files.
It allows the storage, retrieval, archival, logging, identification and merging of multiple revisions for each file.
RCS consists of many small tools that work together.
It lacks some of the features found in more modern revision control systems, like Git, but it is very simple to install, configure, and start using for a small set of files.
+
See also <<svn-glossary,Subversion>>.
[[rd-glossary]]
Received Data::
An RS232C pin or wire that data is received on.
+
See also <<td-glossary,Transmitted Data>>.
[[rs232c-glossary]]
Recommended Standard 232C::
A standard for communications between serial devices.
[[risc-glossary]]
Reduced Instruction Set Computer::
An approach to processor design where the operations the hardware can perform are simplified but made as general purpose as possible.
This can lead to lower power consumption, fewer transistors and in some cases, better performance and increased code density.
Examples of RISC processors include the Alpha, SPARC(R), ARM(R) and PowerPC(R).
[[raid-glossary]]
Redundant Array of Inexpensive Disks::
{empty}
[[rpc-glossary]]
Remote Procedure Call::
{empty}
[[rfc-glossary]]
Request For Comments::
A set of documents defining Internet standards, protocols, and so forth.
See www.rfc-editor.org.
+
Also used as a general term when someone has a suggested change and wants feedback.
[[rts-glossary]]
Request To Send::
An RS232C signal requesting that the remote system commence transmission of data.
+
See also <<cts-glossary,Clear To Send>>.
[[ra-glossary]]
Router Advertisement::
{empty}
[discrete]
== S
SCI::
See <<sci-glossary,System Control Interrupt>>.
SCSI::
See <<scsi-glossary,Small Computer System Interface>>.
SG::
See <<sg-glossary,Signal Ground>>.
SMB::
See <<smb-glossary,Server Message Block>>.
SMP::
See <<smp-glossary,Symmetric MultiProcessor>>.
SMTP::
See <<smtp-glossary,Simple Mail Transfer Protocol>>.
SMTP AUTH::
See <<smtpauth-glossary,SMTP Authentication>>.
SSH::
See <<ssh-glossary,Secure Shell>>.
STR::
See <<str-glossary,Suspend To RAM>>.
SVN::
See <<svn-glossary,Subversion>>.
[[smtpauth-glossary]]
SMTP Authentication::
{empty}
[[smb-glossary]]
Server Message Block::
{empty}
[[sg-glossary]]
Signal Ground::
An RS232 pin or wire that is the ground reference for the signal.
[[smtp-glossary]]
Simple Mail Transfer Protocol::
{empty}
[[ssh-glossary]]
Secure Shell::
{empty}
[[scsi-glossary]]
Small Computer System Interface::
{empty}
[[svn-glossary]]
Subversion::
Subversion is a version control system currently used by the FreeBSD project.
[[str-glossary]]
Suspend To RAM::
{empty}
[[smp-glossary]]
Symmetric MultiProcessor::
{empty}
[[sci-glossary]]
System Control Interrupt::
{empty}
[discrete]
== T
TCP::
See <<tcp-glossary,Transmission Control Protocol>>.
TCP/IP::
See <<tcpip-glossary,Transmission Control Protocol/Internet Protocol>>.
TD::
See <<td-glossary,Transmitted Data>>.
TFTP::
See <<tftp-glossary,Trivial FTP>>.
TGT::
See <<tgt-glossary,Ticket-Granting Ticket>>.
TSC::
See <<tsc-glossary,Time Stamp Counter>>.
[[tgt-glossary]]
Ticket-Granting Ticket::
{empty}
[[tsc-glossary]]
Time Stamp Counter::
A profiling counter internal to modern Pentium(R) processors that counts core frequency clock ticks.
[[tcp-glossary]]
Transmission Control Protocol::
A protocol that sits on top of (e.g.) the IP protocol and guarantees that packets are delivered in a reliable, ordered, fashion.
[[tcpip-glossary]]
Transmission Control Protocol/Internet Protocol::
The term for the combination of the TCP protocol running over the IP protocol.
Much of the Internet runs over TCP/IP.
[[td-glossary]]
Transmitted Data::
An RS232C pin or wire that data is transmitted on.
+
See also <<rd-glossary,Received Data>>.
[[tftp-glossary]]
Trivial FTP::
{empty}
[discrete]
== U
UDP::
See <<udp-glossary,User Datagram Protocol>>.
UFS1::
See <<ufs1-glossary,Unix File System Version 1>>.
UFS2::
See <<ufs2-glossary,Unix File System Version 2>>.
UID::
See <<uid-glossary,User ID>>.
URL::
See <<url-glossary,Uniform Resource Locator>>.
USB::
See <<usb-glossary,Universal Serial Bus>>.
[[url-glossary]]
Uniform Resource Locator::
A method of locating a resource, such as a document on the Internet, and a means to identify that resource.
[[ufs1-glossary]]
Unix File System Version 1::
The original UNIX(R) file system, sometimes called the Berkeley Fast File System.
[[ufs2-glossary]]
Unix File System Version 2::
An extension to UFS1, introduced in FreeBSD 5-CURRENT.
UFS2 adds 64-bit block pointers (breaking the 1 TB barrier), support for extended file storage, and other features.
[[usb-glossary]]
Universal Serial Bus::
A hardware standard used to connect a wide variety of computer peripherals to a universal interface.
[[uid-glossary]]
User ID::
A unique number assigned to each user of a computer, by which the resources and permissions assigned to that user can be identified.
[[udp-glossary]]
User Datagram Protocol::
A simple, unreliable datagram protocol which is used for exchanging data on a TCP/IP network.
UDP does not provide error checking and correction like TCP.
[discrete]
== V
VPN::
See <<vpn-glossary,Virtual Private Network>>.
[[vpn-glossary]]
Virtual Private Network::
A method of using a public telecommunication network, such as the Internet, to provide remote access to a localized network, such as a corporate LAN.
diff --git a/documentation/content/en/books/handbook/introduction/_index.adoc b/documentation/content/en/books/handbook/introduction/_index.adoc
index 06ba49148b..423c6e428f 100644
--- a/documentation/content/en/books/handbook/introduction/_index.adoc
+++ b/documentation/content/en/books/handbook/introduction/_index.adoc
@@ -1,223 +1,224 @@
---
title: Chapter 1. Introduction
part: Part I. Getting Started
prev: books/handbook/parti
next: books/handbook/bsdinstall
+description: This chapter covers various aspects of the FreeBSD Project, such as its history, goals, development model, and so on
---
[[introduction]]
= Introduction
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 1
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../../images/books/handbook/introduction/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/books/handbook/introduction/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/books/handbook/introduction/
endif::[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[introduction-synopsis]]
== Synopsis
Thank you for your interest in FreeBSD! The following chapter covers various aspects of the FreeBSD Project, such as its history, goals, development model, and so on.
After reading this chapter you will know:
* How FreeBSD relates to other computer operating systems.
* The history of the FreeBSD Project.
* The goals of the FreeBSD Project.
* The basics of the FreeBSD open-source development model.
* And of course: where the name "FreeBSD" comes from.
[[nutshell]]
== Welcome to FreeBSD!
FreeBSD is an Open Source, standards-compliant Unix-like operating system for x86 (both 32 and 64 bit), ARM(R), AArch64, RISC-V(R), MIPS(R), POWER(R), PowerPC(R), and Sun UltraSPARC(R) computers. It provides all the features that are nowadays taken for granted, such as preemptive multitasking, memory protection, virtual memory, multi-user facilities, SMP support, all the Open Source development tools for different languages and frameworks, and desktop features centered around X Window System, KDE, or GNOME. Its particular strengths are:
* _Liberal Open Source license_, which grants you rights to freely modify and extend its source code and incorporate it in both Open Source projects and closed products without imposing restrictions typical of copyleft licenses, as well as avoiding potential license incompatibility problems.
* _Strong TCP/IP networking_ - FreeBSD implements industry standard protocols with ever increasing performance and scalability. This makes it a good match in both server and routing/firewalling roles - and indeed many companies and vendors use it precisely for that purpose.
* _Fully integrated OpenZFS support_, including root-on-ZFS, ZFS Boot Environments, fault management, administrative delegation, support for jails, FreeBSD specific documentation, and system installer support.
* _Extensive security features_, from the Mandatory Access Control framework to Capsicum capability and sandbox mechanisms.
* _Over 30 thousand prebuilt packages_ for all supported architectures, and the Ports Collection which makes it easy to build your own, customized ones.
* _Documentation_ - in addition to Handbook and books from different authors that cover topics ranging from system administration to kernel internals, there are also the man:man[1] pages, not only for userspace daemons, utilities, and configuration files, but also for kernel driver APIs (section 9) and individual drivers (section 4).
* _Simple and consistent repository structure and build system_ - FreeBSD uses a single repository for all of its components, both kernel and userspace. This, along with a unified and easy to customize build system and a well thought out development process, makes it easy to integrate FreeBSD with build infrastructure for your own product.
* _Staying true to Unix philosophy_, preferring composability instead of monolithic "all in one" daemons with hardcoded behavior.
* _Binary compatibility_ with Linux, which makes it possible to run many Linux binaries without the need for virtualisation.
FreeBSD is based on the 4.4BSD-Lite release from Computer Systems Research Group (CSRG) at the University of California at Berkeley, and carries on the distinguished tradition of BSD systems development. In addition to the fine work provided by CSRG, the FreeBSD Project has put many thousands of man-hours into extending the functionality and fine-tuning the system for maximum performance and reliability in real-life load situations. FreeBSD offers performance and reliability on par with other Open Source and commercial offerings, combined with cutting-edge features not available anywhere else.
[[os-overview]]
=== What Can FreeBSD Do?
The applications to which FreeBSD can be put are truly limited only by your own imagination. From software development to factory automation, inventory control to azimuth correction of remote satellite antennae; if it can be done with a commercial UNIX(R) product then it is more than likely that you can do it with FreeBSD too! FreeBSD also benefits significantly from literally thousands of high quality applications developed by research centers and universities around the world, often available at little to no cost.
Because the source code for FreeBSD itself is freely available, the system can also be customized to an almost unheard of degree for special applications or projects, and in ways not generally possible with operating systems from most major commercial vendors. Here is just a sampling of some of the applications in which people are currently using FreeBSD:
* _Internet Services:_ The robust TCP/IP networking built into FreeBSD makes it an ideal platform for a variety of Internet services such as:
** Web servers
** IPv4 and IPv6 routing
** Firewalls and NAT ("IP masquerading") gateways
** FTP servers
** Email servers
** And more...
* _Education:_ Are you a student of computer science or a related engineering field? There is no better way of learning about operating systems, computer architecture and networking than the hands on, under the hood experience that FreeBSD can provide. A number of freely available CAD, mathematical and graphic design packages also make it highly useful to those whose primary interest in a computer is to get _other_ work done!
* _Research:_ With source code for the entire system available, FreeBSD is an excellent platform for research in operating systems as well as other branches of computer science. FreeBSD's freely available nature also makes it possible for remote groups to collaborate on ideas or shared development without having to worry about special licensing agreements or limitations on what may be discussed in open forums.
* _Networking:_ Need a new router? A name server (DNS)? A firewall to keep people out of your internal network? FreeBSD can easily turn that unused PC sitting in the corner into an advanced router with sophisticated packet-filtering capabilities.
* _Embedded:_ FreeBSD makes an excellent platform to build embedded systems upon. With support for the ARM(R), MIPS(R) and PowerPC(R) platforms, coupled with a robust network stack, cutting edge features and the permissive link:{faq}#bsd-license-restrictions[BSD license] FreeBSD makes an excellent foundation for building embedded routers, firewalls, and other devices.
* _Desktop:_ FreeBSD makes a fine choice for an inexpensive desktop solution using the freely available X11 server. FreeBSD offers a choice from many open-source desktop environments, including the standard GNOME and KDE graphical user interfaces. FreeBSD can even boot "diskless" from a central server, making individual workstations even cheaper and easier to administer.
* _Software Development:_ The basic FreeBSD system comes with a full suite of development tools including a full C/C++ compiler and debugger suite. Support for many other languages is also available through the ports and packages collection.
FreeBSD is available to download free of charge, or can be obtained on either CD-ROM or DVD. Please see crossref:mirrors[mirrors, Obtaining FreeBSD] for more information about obtaining FreeBSD.
[[introduction-nutshell-users]]
=== Who Uses FreeBSD?
FreeBSD has been known for its web serving capabilities - sites that run on FreeBSD include https://news.ycombinator.com/[Hacker News], http://www.netcraft.com/[Netcraft], http://www.163.com/[NetEase], https://signup.netflix.com/openconnect[Netflix], http://www.sina.com/[Sina], http://www.sony.co.jp/[Sony Japan], http://www.rambler.ru/[Rambler], http://www.yahoo.com/[Yahoo!], and http://www.yandex.ru/[Yandex].
FreeBSD's advanced features, proven security, predictable release cycle, and permissive license have led to its use as a platform for building many commercial and open source appliances, devices, and products. Many of the world's largest IT companies use FreeBSD:
* http://www.apache.org/[Apache] - The Apache Software Foundation runs most of its public facing infrastructure, including possibly one of the largest SVN repositories in the world with over 1.4 million commits, on FreeBSD.
* http://www.apple.com/[Apple] - OS X borrows heavily from FreeBSD for the network stack, virtual file system, and many userland components. Apple iOS also contains elements borrowed from FreeBSD.
* http://www.cisco.com/[Cisco] - IronPort network security and anti-spam appliances run a modified FreeBSD kernel.
* http://www.citrix.com/[Citrix] - The NetScaler line of security appliances provide layer 4-7 load balancing, content caching, application firewall, secure VPN, and mobile cloud network access, along with the power of a FreeBSD shell.
* https://www.emc.com/isilon[Dell EMC Isilon] - Isilon's enterprise storage appliances are based on FreeBSD. The extremely liberal FreeBSD license allowed Isilon to integrate their intellectual property throughout the kernel and focus on building their product instead of an operating system.
* http://www.quest.com/KACE[Quest KACE] - The KACE system management appliances run FreeBSD because of its reliability, scalability, and the community that supports its continued development.
* http://www.ixsystems.com/[iXsystems] - The TrueNAS line of unified storage appliances is based on FreeBSD. In addition to their commercial products, iXsystems also manages development of the open source projects TrueOS and FreeNAS.
* http://www.juniper.net/[Juniper] - The JunOS operating system that powers all Juniper networking gear (including routers, switches, security, and networking appliances) is based on FreeBSD. Juniper is one of many vendors that showcases the symbiotic relationship between the project and vendors of commercial products. Improvements generated at Juniper are upstreamed into FreeBSD to reduce the complexity of integrating new features from FreeBSD back into JunOS in the future.
* http://www.mcafee.com/[McAfee] - SecurOS, the basis of McAfee enterprise firewall products including Sidewinder, is based on FreeBSD.
* http://www.netapp.com/[NetApp] - The Data ONTAP GX line of storage appliances are based on FreeBSD. In addition, NetApp has contributed back many features, including the new BSD licensed hypervisor, bhyve.
* http://www.netflix.com/[Netflix] - The OpenConnect appliance that Netflix uses to stream movies to its customers is based on FreeBSD. Netflix has made extensive contributions to the codebase and works to maintain a zero delta from mainline FreeBSD. Netflix OpenConnect appliances are responsible for delivering more than 32% of all Internet traffic in North America.
* http://www.sandvine.com/[Sandvine] - Sandvine uses FreeBSD as the basis of their high performance real-time network processing platforms that make up their intelligent network policy control products.
* http://www.sony.com/[Sony] - The PlayStation 4 gaming console runs a modified version of FreeBSD.
* http://www.sophos.com/[Sophos] - The Sophos Email Appliance product is based on a hardened FreeBSD and scans inbound mail for spam and viruses, while also monitoring outbound mail for malware as well as the accidental loss of sensitive information.
* http://www.spectralogic.com/[Spectra Logic] - The nTier line of archive grade storage appliances run FreeBSD and OpenZFS.
* https://www.stormshield.com[Stormshield] - Stormshield Network Security appliances are based on a hardened version of FreeBSD. The BSD license allows them to integrate their own intellectual property with the system while returning a great deal of interesting development to the community.
* http://www.weather.com/[The Weather Channel] - The IntelliStar appliance that is installed at each local cable provider's headend and is responsible for injecting local weather forecasts into the cable TV network's programming runs FreeBSD.
* http://www.verisign.com/[Verisign] - Verisign is responsible for operating the .com and .net root domain registries as well as the accompanying DNS infrastructure. They rely on a number of different network operating systems including FreeBSD to ensure there is no common point of failure in their infrastructure.
* http://www.voxer.com/[Voxer] - Voxer powers their mobile voice messaging platform with ZFS on FreeBSD. Voxer switched from a Solaris derivative to FreeBSD because of its superior documentation, larger and more active community, and more developer friendly environment. In addition to critical features like ZFS and DTrace, FreeBSD also offers TRIM support for ZFS.
* https://fudosecurity.com/en/[Fudo Security] - The FUDO security appliance allows enterprises to monitor, control, record, and audit contractors and administrators who work on their systems. It is based on all of the best security features of FreeBSD, including ZFS, GELI, Capsicum, HAST, and auditdistd.
FreeBSD has also spawned a number of related open source projects:
* http://bsdrp.net/[BSD Router] - A FreeBSD based replacement for large enterprise routers designed to run on standard PC hardware.
* http://www.freenas.org/[FreeNAS] - A customized FreeBSD designed to be used as a network file server appliance. Provides a python based web interface to simplify the management of both the UFS and ZFS file systems. Includes support for NFS, SMB/CIFS, AFP, FTP, and iSCSI. Includes an extensible plugin system based on FreeBSD jails.
* https://ghostbsd.org/[GhostBSD] - is derived from FreeBSD and uses the GTK environment to provide beautiful looks and a comfortable experience on the modern BSD platform, offering a natural and native UNIX(R) work environment.
* http://mfsbsd.vx.sk/[mfsBSD] - A toolkit for building a FreeBSD system image that runs entirely from memory.
* http://www.nas4free.org/[NAS4Free] - A file server distribution based on FreeBSD with a PHP powered web interface.
* http://www.opnsense.org/[OPNSense] - OPNsense is an open source, easy-to-use and easy-to-build FreeBSD based firewall and routing platform. OPNsense includes most of the features available in expensive commercial firewalls, and more in many cases. It brings the rich feature set of commercial offerings with the benefits of open and verifiable sources.
* https://www.trueos.org[TrueOS] - TrueOS is based on the legendary security and stability of FreeBSD. TrueOS follows FreeBSD-CURRENT, with the latest drivers, security updates, and packages available.
* https://www.midnightbsd.org[MidnightBSD] - is a FreeBSD derived operating system developed with desktop users in mind. It includes all the software you'd expect for your daily tasks: mail, web browsing, word processing, gaming, and much more.
* https://www.nomadbsd.org[NomadBSD] - is a persistent live system for USB flash drives, based on FreeBSD. Together with automatic hardware detection and setup, it is configured to be used as a desktop system that works out of the box, but can also be used for data recovery, for educational purposes, or to test FreeBSD's hardware compatibility.
* http://www.pfsense.org/[pfSense] - A firewall distribution based on FreeBSD with a huge array of features and extensive IPv6 support.
* http://zrouter.org/[ZRouter] - An open source alternative firmware for embedded devices based on FreeBSD. Designed to replace the proprietary firmware on off-the-shelf routers.
A list of https://www.freebsdfoundation.org/about/testimonials/[testimonials from companies basing their products and services on FreeBSD] can be found at the FreeBSD Foundation website. Wikipedia also maintains a https://en.wikipedia.org/wiki/List_of_products_based_on_FreeBSD[list of products based on FreeBSD].
[[history]]
== About the FreeBSD Project
The following section provides some background information on the project, including a brief history, project goals, and the development model of the project.
[[intro-history]]
=== A Brief History of FreeBSD
The FreeBSD Project had its genesis in the early part of 1993, partially as the brainchild of the Unofficial 386BSDPatchkit's last 3 coordinators: Nate Williams, Rod Grimes and Jordan Hubbard.
The original goal was to produce an intermediate snapshot of 386BSD in order to fix a number of problems that the patchkit mechanism was just not capable of solving. The early working title for the project was 386BSD 0.5 or 386BSD Interim in reference to that fact.
386BSD was Bill Jolitz's operating system, which had been up to that point suffering rather severely from almost a year's worth of neglect. As the patchkit swelled ever more uncomfortably with each passing day, they decided to assist Bill by providing this interim "cleanup" snapshot. Those plans came to a rude halt when Bill Jolitz suddenly decided to withdraw his sanction from the project without any clear indication of what would be done instead.
The trio thought that the goal remained worthwhile, even without Bill's support, and so they adopted the name "FreeBSD" coined by David Greenman. The initial objectives were set after consulting with the system's current users and, once it became clear that the project was on the road to perhaps even becoming a reality, Jordan contacted Walnut Creek CDROM with an eye toward improving FreeBSD's distribution channels for those many unfortunates without easy access to the Internet. Walnut Creek CDROM not only supported the idea of distributing FreeBSD on CD but also went so far as to provide the project with a machine to work on and a fast Internet connection. Without Walnut Creek CDROM's almost unprecedented degree of faith in what was, at the time, a completely unknown project, it is quite unlikely that FreeBSD would have gotten as far, as fast, as it has today.
The first CD-ROM (and general net-wide) distribution was FreeBSD 1.0, released in December of 1993. This was based on the 4.3BSD-Lite ("Net/2") tape from U.C. Berkeley, with many components also provided by 386BSD and the Free Software Foundation. It was a fairly reasonable success for a first offering, and they followed it with the highly successful FreeBSD 1.1 release in May of 1994.
Around this time, some rather unexpected storm clouds formed on the horizon as Novell and U.C. Berkeley settled their long-running lawsuit over the legal status of the Berkeley Net/2 tape. A condition of that settlement was U.C. Berkeley's concession that large parts of Net/2 were "encumbered" code and the property of Novell, who had in turn acquired it from AT&T some time previously. What Berkeley got in return was Novell's "blessing" that the 4.4BSD-Lite release, when it was finally released, would be declared unencumbered and all existing Net/2 users would be strongly encouraged to switch. This included FreeBSD, and the project was given until the end of July 1994 to stop shipping its own Net/2 based product. Under the terms of that agreement, the project was allowed one last release before the deadline, that release being FreeBSD 1.1.5.1.
FreeBSD then set about the arduous task of literally re-inventing itself from a completely new and rather incomplete set of 4.4BSD-Lite bits. The "Lite" releases were light in part because Berkeley's CSRG had removed large chunks of code required for actually constructing a bootable running system (due to various legal requirements) and because the Intel port of 4.4 was highly incomplete. It took the project until November of 1994 to make this transition, and in December it released FreeBSD 2.0 to the world. Despite being still more than a little rough around the edges, the release was a significant success and was followed by the more robust and easier to install FreeBSD 2.0.5 release in June of 1995.
Since that time, FreeBSD has made a series of releases each time improving the stability, speed, and feature set of the previous version.
For now, long-term development projects continue to take place in the 10.X-CURRENT (trunk) branch, and snapshot releases of 10.X are continually made available from link:ftp://ftp.FreeBSD.org/pub/FreeBSD/snapshots/[the snapshot server] as work progresses.
[[goals]]
=== FreeBSD Project Goals
The goals of the FreeBSD Project are to provide software that may be used for any purpose and without strings attached. Many of us have a significant investment in the code (and project) and would certainly not mind a little financial compensation now and then, but we are definitely not prepared to insist on it. We believe that our first and foremost "mission" is to provide code to any and all comers, and for whatever purpose, so that the code gets the widest possible use and provides the widest possible benefit. This is, I believe, one of the most fundamental goals of Free Software and one that we enthusiastically support.
That code in our source tree which falls under the GNU General Public License (GPL) or Library General Public License (LGPL) comes with slightly more strings attached, though at least on the side of enforced access rather than the usual opposite. Due to the additional complexities that can evolve in the commercial use of GPL software we do, however, prefer software submitted under the more relaxed BSD license when it is a reasonable option to do so.
[[development]]
=== The FreeBSD Development Model
The development of FreeBSD is a very open and flexible process, being literally built from the contributions of thousands of people around the world, as can be seen from our link:{contributors}[list of contributors]. FreeBSD's development infrastructure allows these thousands of contributors to collaborate over the Internet. We are constantly on the lookout for new developers and ideas, and those interested in becoming more closely involved with the project need simply contact us at the {freebsd-hackers}. The {freebsd-announce} is also available to those wishing to make other FreeBSD users aware of major areas of work.
Useful things to know about the FreeBSD Project and its development process, whether working independently or in close cooperation:
The SVN repositories[[development-cvs-repository]]::
For several years, the central source tree for FreeBSD was maintained by http://www.nongnu.org/cvs/[CVS] (Concurrent Versions System), a freely available source code control tool. In June 2008, the Project switched to using http://subversion.tigris.org[SVN] (Subversion). The switch was deemed necessary, as the technical limitations imposed by CVS were becoming obvious due to the rapid expansion of the source tree and the amount of history already stored. The Documentation Project and Ports Collection repositories also moved from CVS to SVN in May 2012 and July 2012, respectively. Please refer to the crossref:cutting-edge[synching, Obtaining the Source] section for more information on obtaining the FreeBSD `src/` repository and crossref:ports[ports-using, Using the Ports Collection] for details on obtaining the FreeBSD Ports Collection.
The committers list[[development-committers]]::
The _committers_ are the people who have _write_ access to the Subversion tree, and are authorized to make modifications to the FreeBSD source (the term "committer" comes from `commit`, the source control command which is used to bring new changes into the repository). Anyone can submit a bug to the https://bugs.FreeBSD.org/submit/[Bug Database]. Before submitting a bug report, the FreeBSD mailing lists, IRC channels, or forums can be used to help verify that an issue is actually a bug.
The FreeBSD core team[[development-core]]::
The _FreeBSD core team_ would be equivalent to the board of directors if the FreeBSD Project were a company. The primary task of the core team is to make sure the project, as a whole, is in good shape and is heading in the right directions. Inviting dedicated and responsible developers to join our group of committers is one of the functions of the core team, as is the recruitment of new core team members as others move on. The current core team was elected from a pool of committer candidates in June 2020. Elections are held every 2 years.
+
[NOTE]
====
Like most developers, most members of the core team are also volunteers when it comes to FreeBSD development and do not benefit from the project financially, so "commitment" should also not be misconstrued as meaning "guaranteed support." The "board of directors" analogy above is not very accurate, and it may be more suitable to say that these are the people who gave up their lives in favor of FreeBSD against their better judgement!
====
Outside contributors::
Last, but definitely not least, the largest group of developers are the users themselves who provide feedback and bug fixes to us on an almost constant basis. The primary way of keeping in touch with FreeBSD's more non-centralized development is to subscribe to the {freebsd-hackers} where such things are discussed. See crossref:eresources[eresources, Resources on the Internet] for more information about the various FreeBSD mailing lists.
+
link:{contributors}[The FreeBSD Contributors List] is a long and growing one, so why not join it by contributing something back to FreeBSD today?
+
Providing code is not the only way of contributing to the project; for a more complete list of things that need doing, please refer to the link:https://www.FreeBSD.org/[FreeBSD Project web site].
In summary, our development model is organized as a loose set of concentric circles. The centralized model is designed for the convenience of the _users_ of FreeBSD, who are provided with an easy way of tracking one central code base, not to keep potential contributors out! Our desire is to present a stable operating system with a large set of coherent crossref:ports[ports,application programs] that the users can easily install and use - this model works very well in accomplishing that.
All we ask of those who would join us as FreeBSD developers is some of the same dedication its current people have to its continued success!
[[third-party-programs]]
=== Third Party Programs
In addition to the base distributions, FreeBSD offers a ported software collection with thousands of commonly sought-after programs. At the time of this writing, there were over {numports} ports! The list of ports ranges from HTTP servers to games, languages, editors, and almost everything in between. The entire Ports Collection requires approximately {ports-size}. To compile a port, you simply change to the directory of the program you wish to install, type `make install`, and let the system do the rest. The full original distribution for each port you build is retrieved dynamically, so you need only enough disk space to build the ports you want. Almost every port is also provided as a pre-compiled "package", which can be installed with a simple command (`pkg install`) by those who do not wish to compile their own ports from source. More information on packages and ports can be found in crossref:ports[ports,Installing Applications: Packages and Ports].
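As a brief, hypothetical illustration of the port-building approach described above, using the www/nginx port as an arbitrary example:
[source,shell]
....
# cd /usr/ports/www/nginx
# make install
....
Installing the corresponding pre-compiled package instead would be a single `pkg install nginx` command.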
=== Additional Documentation
All supported FreeBSD versions provide an option in the installer to install additional documentation under [.filename]#/usr/local/share/doc/freebsd# during the initial system setup. Documentation may also be installed at any later time using packages as described in crossref:cutting-edge[doc-ports-install-package,“Updating Documentation from Ports”]. You may view the locally installed manuals with any HTML capable browser using the following URLs:
The FreeBSD Handbook::
[.filename]#link:file://localhost/usr/local/share/doc/freebsd/handbook/index.html[/usr/local/share/doc/freebsd/handbook/index.html]#
The FreeBSD FAQ::
[.filename]#link:file://localhost/usr/local/share/doc/freebsd/faq/index.html[/usr/local/share/doc/freebsd/faq/index.html]#
You can also view the master (and most frequently updated) copies at https://www.FreeBSD.org/[https://www.FreeBSD.org/].
diff --git a/documentation/content/en/books/handbook/jails/_index.adoc b/documentation/content/en/books/handbook/jails/_index.adoc
index 6c636f4176..d55839a785 100644
--- a/documentation/content/en/books/handbook/jails/_index.adoc
+++ b/documentation/content/en/books/handbook/jails/_index.adoc
@@ -1,1101 +1,1102 @@
---
title: Chapter 15. Jails
part: Part III. System Administration
prev: books/handbook/security
next: books/handbook/mac
+description: Jails improve on the concept of the traditional chroot environment in several ways
---
[[jails]]
= Jails
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 15
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../../images/books/handbook/jails/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/books/handbook/jails/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/books/handbook/jails/
endif::[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[jails-synopsis]]
== Synopsis
Since system administration is a difficult task, many tools have been developed to make life easier for the administrator. These tools often enhance the way systems are installed, configured, and maintained. One of the tools which can be used to enhance the security of a FreeBSD system is _jails_. Jails have been available since FreeBSD 4.X and continue to be enhanced in their usefulness, performance, reliability, and security.
Jails build upon the man:chroot[2] concept, which is used to change the root directory of a set of processes. This creates a safe environment, separate from the rest of the system. Processes created in the chrooted environment can not access files or resources outside of it. For that reason, compromising a service running in a chrooted environment should not allow the attacker to compromise the entire system. However, a chroot has several limitations. It is suited to easy tasks which do not require much flexibility or complex, advanced features. Over time, many ways have been found to escape from a chrooted environment, making it a less than ideal solution for securing services.
Jails improve on the concept of the traditional chroot environment in several ways. In a traditional chroot environment, processes are only limited in the part of the file system they can access. The rest of the system resources, system users, running processes, and the networking subsystem are shared by the chrooted processes and the processes of the host system. Jails expand this model by virtualizing access to the file system, the set of users, and the networking subsystem. More fine-grained controls are available for tuning the access of a jailed environment. Jails can be considered as a type of operating system-level virtualization.
A jail is characterized by four elements:
* A directory subtree: the starting point from which a jail is entered. Once inside the jail, a process is not permitted to escape outside of this subtree.
* A hostname: which will be used by the jail.
* An IP address: which is assigned to the jail. The IP address of a jail is often an alias address for an existing network interface.
* A command: the path name of an executable to run inside the jail. The path is relative to the root directory of the jail environment.
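A minimal sketch of combining these four elements on the man:jail[8] command line; all of the values below are hypothetical:

[source,shell]
....
# jail -c path=/usr/jail/testjail mount.devfs \
    host.hostname=testjail.example.org ip4.addr=192.168.0.10 \
    command=/bin/sh
....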
Jails have their own set of users and their own `root` account which are limited to the jail environment. The `root` account of a jail is not allowed to perform operations to the system outside of the associated jail environment.
This chapter provides an overview of the terminology and commands for managing FreeBSD jails. Jails are a powerful tool for both system administrators and advanced users.
After reading this chapter, you will know:
* What a jail is and what purpose it may serve in FreeBSD installations.
* How to build, start, and stop a jail.
* The basics of jail administration, both from inside and outside the jail.
[IMPORTANT]
====
Jails are a powerful tool, but they are not a security panacea. While it is not possible for a jailed process to break out on its own, there are several ways in which an unprivileged user outside the jail can cooperate with a privileged user inside the jail to obtain elevated privileges in the host environment.
Most of these attacks can be mitigated by ensuring that the jail root is not accessible to unprivileged users in the host environment. As a general rule, untrusted users with privileged access to a jail should not be given access to the host environment.
====
[[jails-terms]]
== Terms Related to Jails
To facilitate better understanding of parts of the FreeBSD system related to jails, their internals and the way they interact with the rest of FreeBSD, the following terms are used further in this chapter:
man:chroot[8] (command)::
The utility which uses the man:chroot[2] FreeBSD system call to change the root directory of a process and all of its descendants.
man:chroot[2] (environment)::
The environment of processes running in a "chroot". This includes resources such as the part of the file system which is visible, user and group IDs which are available, network interfaces and other IPC mechanisms, etc.
man:jail[8] (command)::
The system administration utility which allows launching of processes within a jail environment.
host (system, process, user, etc.)::
The controlling system of a jail environment. The host system has access to all the hardware resources available, and can control processes both outside of and inside a jail environment. One of the important differences of the host system from a jail is that the limitations which apply to superuser processes inside a jail are not enforced for processes of the host system.
hosted (system, process, user, etc.)::
A process, user or other entity, whose access to resources is restricted by a FreeBSD jail.
[[jails-build]]
== Creating and Controlling Jails
Some administrators divide jails into the following two types: "complete" jails, which resemble a real FreeBSD system, and "service" jails, dedicated to one application or service, possibly running with privileges. This is only a conceptual division and the process of building a jail is not affected by it. When creating a "complete" jail there are two options for the source of the userland: use prebuilt binaries (such as those supplied on install media) or build from source.
=== Installing a Jail
[[jails-install-internet]]
==== To install a Jail from the Internet
The man:bsdinstall[8] tool can be used to fetch and install the binaries needed for a jail. This will walk through the picking of a mirror, which distributions will be installed into the destination directory, and some basic configuration of the jail:
[source,shell]
....
# bsdinstall jail /here/is/the/jail
....
Once the command is complete, the next step is configuring the host to run the jail.
[[jails-install-iso]]
==== To install a Jail from an ISO
To install the userland from installation media, first create the root directory for the jail. This can be done by setting the `DESTDIR` variable to the proper location.
Start a shell and define `DESTDIR`:
[source,shell]
....
# sh
# export DESTDIR=/here/is/the/jail
....
Mount the install media as covered in man:mdconfig[8] when using the install ISO:
[source,shell]
....
# mount -t cd9660 /dev/`mdconfig -f cdimage.iso` /mnt
# cd /mnt/usr/freebsd-dist/
....
Extract the binaries from the tarballs on the install media into the declared destination. Minimally, only the base set needs to be extracted, but a complete install can be performed when preferred.
To install just the base system:
[source,shell]
....
# tar -xf base.txz -C $DESTDIR
....
To install everything except the kernel:
[source,shell]
....
# for set in base ports; do tar -xf $set.txz -C $DESTDIR ; done
....
[[jails-install-source]]
==== To build and install a Jail from source
The man:jail[8] manual page explains the procedure for building a jail:
[source,shell]
....
# setenv D /here/is/the/jail
# mkdir -p $D <.>
# cd /usr/src
# make buildworld <.>
# make installworld DESTDIR=$D <.>
# make distribution DESTDIR=$D <.>
# mount -t devfs devfs $D/dev <.>
....
<.> Selecting a location for a jail is the best starting point. This is where the jail will physically reside within the file system of the jail's host. A good choice can be [.filename]#/usr/jail/jailname#, where _jailname_ is the hostname identifying the jail. Usually, [.filename]#/usr/# has enough space for the jail file system, which for "complete" jails is, essentially, a replication of every file present in a default installation of the FreeBSD base system.
<.> If you have already rebuilt your userland using `make world` or `make buildworld`, you can skip this step and install your existing userland into the new jail.
<.> This command will populate the directory subtree chosen as jail's physical location on the file system with the necessary binaries, libraries, manual pages and so on.
<.> The `distribution` target for make installs every needed configuration file. In simple words, it installs every installable file of [.filename]#/usr/src/etc/# to the [.filename]#/etc# directory of the jail environment: [.filename]#$D/etc/#.
<.> Mounting the man:devfs[8] file system inside a jail is not required. On the other hand, almost every application requires access to at least one device, depending on its purpose. It is very important to control access to devices from inside a jail, as improper settings could permit an attacker to do nasty things in the jail. Control over man:devfs[8] is managed through rulesets which are described in the man:devfs[8], man:devfs.conf[5], and man:devfs.rules[5] manual pages.
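As a hedged illustration, a jail's man:devfs[8] ruleset could be defined in [.filename]#/etc/devfs.rules# on the host by hiding all devices and then unhiding only the basic, safe subsets shipped in [.filename]#/etc/defaults/devfs.rules#; the ruleset name and number below are assumptions:

[.programlisting]
....
[devfsrules_jail_www=100]
add include $devfsrules_hide_all
add include $devfsrules_unhide_basic
add include $devfsrules_unhide_login
....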
=== Configuring the Host
Once a jail is installed, it can be started by using the man:jail[8] utility. The man:jail[8] utility takes four mandatory arguments which are described in the <<jails-synopsis>>. Other arguments may be specified too, e.g., to run the jailed process with the credentials of a specific user. The `_command_` argument depends on the type of the jail; for a _virtual system_, [.filename]#/etc/rc# is a good choice, since it will replicate the startup sequence of a real FreeBSD system. For a _service_ jail, it depends on the service or application that will run within the jail.
Jails are often started at boot time and the FreeBSD [.filename]#rc# mechanism provides an easy way to do this.
[.procedure]
* Configure jail parameters in [.filename]#jail.conf#:
+
[.programlisting]
....
www {
host.hostname = www.example.org; # Hostname
ip4.addr = 192.168.0.10; # IP address of the jail
path = "/usr/jail/www"; # Path to the jail
devfs_ruleset = "www_ruleset"; # devfs ruleset
mount.devfs; # Mount devfs inside the jail
exec.start = "/bin/sh /etc/rc"; # Start command
exec.stop = "/bin/sh /etc/rc.shutdown"; # Stop command
}
....
+
Configure jails to start at boot time in [.filename]#rc.conf#:
+
[.programlisting]
....
jail_enable="YES" # Set to NO to disable starting of any jails
....
+
The default startup of jails configured in man:jail.conf[5] will run the [.filename]#/etc/rc# script of the jail, which assumes the jail is a complete virtual system. For service jails, the default startup command of the jail should be changed by setting the `exec.start` option appropriately.
+
[NOTE]
====
For a full list of available options, please see the man:jail.conf[5] manual page.
====
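For a service jail, a hedged sketch of such an entry might point `exec.start` at a single daemon's rc script rather than [.filename]#/etc/rc#; the jail name, addresses, and the choice of nginx below are assumptions for illustration only:

[.programlisting]
....
wwwsvc {
    host.hostname = wwwsvc.example.org;   # Hostname
    ip4.addr = 192.168.0.20;              # IP address of the jail
    path = "/usr/jail/wwwsvc";            # Path to the jail
    mount.devfs;                          # Mount devfs inside the jail
    exec.start = "/usr/local/etc/rc.d/nginx onestart";
    exec.stop = "/usr/local/etc/rc.d/nginx onestop";
}
....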
man:service[8] can be used to start or stop a jail by hand, if an entry for it exists in [.filename]#jail.conf#:
[source,shell]
....
# service jail start www
# service jail stop www
....
Jails can be shut down with man:jexec[8]. Use man:jls[8] to identify the jail's `JID`, then use man:jexec[8] to run the shutdown script in that jail.
[source,shell]
....
# jls
JID IP Address Hostname Path
3 192.168.0.10 www /usr/jail/www
# jexec 3 /etc/rc.shutdown
....
More information about this can be found in the man:jail[8] manual page.
[[jails-tuning]]
== Fine Tuning and Administration
There are several options which can be set for any jail, and various ways of combining a host FreeBSD system with jails, to produce higher level applications. This section presents:
* Some of the options available for tuning the behavior and security restrictions implemented by a jail installation.
* Some of the high-level applications for jail management, which are available through the FreeBSD Ports Collection, and can be used to implement overall jail-based solutions.
[[jails-tuning-utilities]]
=== System Tools for Jail Tuning in FreeBSD
Fine tuning of a jail's configuration is mostly done by setting man:sysctl[8] variables. A special subtree of sysctl exists as a basis for organizing all the relevant options: the `security.jail.*` hierarchy of FreeBSD kernel options. Here is a list of the main jail-related sysctls, complete with their default value. Names should be self-explanatory, but for more information about them, please refer to the man:jail[8] and man:sysctl[8] manual pages.
* `security.jail.set_hostname_allowed: 1`
* `security.jail.socket_unixiproute_only: 1`
* `security.jail.sysvipc_allowed: 0`
* `security.jail.enforce_statfs: 2`
* `security.jail.allow_raw_sockets: 0`
* `security.jail.chflags_allowed: 0`
* `security.jail.jailed: 0`
These variables can be used by the system administrator of the _host system_ to add or remove some of the limitations imposed by default on the `root` user. Note that there are some limitations which cannot be removed. The `root` user is not allowed to mount or unmount file systems from within a man:jail[8]. The `root` inside a jail may not load or unload man:devfs[8] rulesets, set firewall rules, or do many other administrative tasks which require modifications of in-kernel data, such as setting the `securelevel` of the kernel.
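For example, the administrator of the host system might inspect one of these variables and then change it with man:sysctl[8]; the variable chosen here is purely illustrative:

[source,shell]
....
# sysctl security.jail.sysvipc_allowed
security.jail.sysvipc_allowed: 0
# sysctl security.jail.sysvipc_allowed=1
security.jail.sysvipc_allowed: 0 -> 1
....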
The base system of FreeBSD contains a basic set of tools for viewing information about the active jails, and attaching to a jail to run administrative commands. The man:jls[8] and man:jexec[8] commands are part of the base FreeBSD system, and can be used to perform the following simple tasks:
* Print a list of active jails and their corresponding jail identifier (JID), IP address, hostname and path.
* Attach to a running jail, from its host system, and run a command inside the jail or perform administrative tasks inside the jail itself. This is especially useful when the `root` user wants to cleanly shut down a jail. The man:jexec[8] utility can also be used to start a shell in a jail to do administration in it; for example:
+
[source,shell]
....
# jexec 1 tcsh
....
[[jails-tuning-admintools]]
=== High-Level Administrative Tools in the FreeBSD Ports Collection
Among the many third-party utilities for jail administration, one of the most complete and useful is package:sysutils/ezjail[]. It is a set of scripts that contribute to man:jail[8] management. Please refer to <<jails-ezjail,the handbook section on ezjail>> for more information.
[[jails-updating]]
=== Keeping Jails Patched and up to Date
Jails should be kept up to date from the host operating system: attempting to patch the userland from within the jail is likely to fail, because the default behavior in FreeBSD is to disallow the use of man:chflags[1] in a jail, which prevents the replacement of some files. It is possible to change this behavior, but it is recommended to use man:freebsd-update[8] to maintain jails instead. Use `-b` to specify the path of the jail to be updated.
To update the jail to the latest patch release of the version of FreeBSD it is already running, execute the following commands on the host:
[source,shell]
....
# freebsd-update -b /here/is/the/jail fetch
# freebsd-update -b /here/is/the/jail install
....
To upgrade the jail to a new major or minor version, first upgrade the host system as described in crossref:cutting-edge[freebsdupdate-upgrade,“Performing Major and Minor Version Upgrades”]. Once the host has been upgraded and rebooted, the jail can then be upgraded. For example to upgrade from 12.0-RELEASE to 12.1-RELEASE, on the host run:
[source,shell]
....
# freebsd-update -b /here/is/the/jail --currently-running 12.0-RELEASE -r 12.1-RELEASE upgrade
# freebsd-update -b /here/is/the/jail install
# service jail restart myjail
# freebsd-update -b /here/is/the/jail install
....
Then, if it was a major version upgrade, reinstall all installed packages and restart the jail again. This is required because the ABI version changes when upgrading between major versions of FreeBSD. From the host:
[source,shell]
....
# pkg -j myjail upgrade -f
# service jail restart myjail
....
[[jails-application]]
== Updating Multiple Jails
The management of multiple jails can become problematic because every jail has to be rebuilt from scratch whenever it is upgraded. This can be time consuming and tedious if a lot of jails are created and manually updated.
This section demonstrates one method to resolve this issue by safely sharing as much as is possible between jails using read-only man:mount_nullfs[8] mounts, so that updating is simpler. This makes it more attractive to put single services, such as HTTP, DNS, and SMTP, into individual jails. Additionally, it provides a simple way to add, remove, and upgrade jails.
[NOTE]
====
Simpler solutions exist, such as ezjail, which provides an easier method of administering FreeBSD jails but is less versatile than this setup. ezjail is covered in more detail in <<jails-ezjail>>.
====
The goals of the setup described in this section are:
* Create a simple and easy to understand jail structure that does not require running a full installworld on each and every jail.
* Make it easy to add new jails or remove existing ones.
* Make it easy to update or upgrade existing jails.
* Make it possible to run a customized FreeBSD branch.
* Be paranoid about security, reducing as much as possible the possibility of compromise.
* Save space and inodes, as much as possible.
This design relies on a single, read-only master template which is mounted into each jail and one read-write device per jail. A device can be a separate physical disc, a partition, or a vnode backed memory device. This example uses read-write nullfs mounts.
The file system layout is as follows:
* The jails are based under the [.filename]#/home# partition.
* Each jail will be mounted under the [.filename]#/home/j# directory.
* The template for each jail and the read-only partition for all of the jails is [.filename]#/home/j/mroot#.
* A blank directory will be created for each jail under the [.filename]#/home/j# directory.
* Each jail will have a [.filename]#/s# directory that will be linked to the read-write portion of the system.
* Each jail will have its own read-write system that is based upon [.filename]#/home/j/skel#.
* The read-write portion of each jail will be created in [.filename]#/home/js#.
[[jails-service-jails-template]]
=== Creating the Template
This section describes the steps needed to create the master template.
It is recommended to first update the host FreeBSD system to the latest -RELEASE branch using the instructions in crossref:cutting-edge[makeworld,“Updating FreeBSD from Source”]. Additionally, this template uses the package:sysutils/cpdup[] package or port, and portsnap will be used to download the FreeBSD Ports Collection.
[.procedure]
. First, create a directory structure for the read-only file system which will contain the FreeBSD binaries for the jails. Then, change directory to the FreeBSD source tree and install the read-only file system to the jail template:
+
[source,shell]
....
# mkdir /home/j /home/j/mroot
# cd /usr/src
# make installworld DESTDIR=/home/j/mroot
....
. Next, prepare a FreeBSD Ports Collection for the jails as well as a FreeBSD source tree, which is required for mergemaster:
+
[source,shell]
....
# cd /home/j/mroot
# mkdir usr/ports
# portsnap -p /home/j/mroot/usr/ports fetch extract
# cpdup /usr/src /home/j/mroot/usr/src
....
. Create a skeleton for the read-write portion of the system:
+
[source,shell]
....
# mkdir /home/j/skel /home/j/skel/home /home/j/skel/usr-X11R6 /home/j/skel/distfiles
# mv etc /home/j/skel
# mv usr/local /home/j/skel/usr-local
# mv tmp /home/j/skel
# mv var /home/j/skel
# mv root /home/j/skel
....
. Use mergemaster to install missing configuration files. Then, remove the extra directories that mergemaster creates:
+
[source,shell]
....
# mergemaster -t /home/j/skel/var/tmp/temproot -D /home/j/skel -i
# cd /home/j/skel
# rm -R bin boot lib libexec mnt proc rescue sbin sys usr dev
....
. Now, symlink the read-write file system to the read-only file system. Ensure that the symlinks are created in the correct [.filename]#s/# locations as the creation of directories in the wrong locations will cause the installation to fail.
+
[source,shell]
....
# cd /home/j/mroot
# mkdir s
# ln -s s/etc etc
# ln -s s/home home
# ln -s s/root root
# ln -s ../s/usr-local usr/local
# ln -s ../s/usr-X11R6 usr/X11R6
# ln -s ../../s/distfiles usr/ports/distfiles
# ln -s s/tmp tmp
# ln -s s/var var
....
. As a last step, create a generic [.filename]#/home/j/skel/etc/make.conf# containing this line:
+
[.programlisting]
....
WRKDIRPREFIX?= /s/portbuild
....
+
This makes it possible to compile FreeBSD ports inside each jail. Remember that the ports directory is part of the read-only system. The custom path for `WRKDIRPREFIX` allows builds to be done in the read-write portion of every jail.
[[jails-service-jails-creating]]
=== Creating Jails
The jail template can now be used to set up and configure the jails in [.filename]#/etc/rc.conf#. This example demonstrates the creation of 3 jails: `NS`, `MAIL` and `WWW`.
[.procedure]
. Add the following lines to [.filename]#/etc/fstab#, so that the read-only template for the jails and the read-write space will be available in the respective jails:
+
[.programlisting]
....
/home/j/mroot /home/j/ns nullfs ro 0 0
/home/j/mroot /home/j/mail nullfs ro 0 0
/home/j/mroot /home/j/www nullfs ro 0 0
/home/js/ns /home/j/ns/s nullfs rw 0 0
/home/js/mail /home/j/mail/s nullfs rw 0 0
/home/js/www /home/j/www/s nullfs rw 0 0
....
+
To prevent fsck from checking nullfs mounts during boot and dump from backing up the read-only nullfs mounts of the jails, the last two columns are both set to `0`.
. Configure the jails in [.filename]#/etc/rc.conf#:
+
[.programlisting]
....
jail_enable="YES"
jail_set_hostname_allow="NO"
jail_list="ns mail www"
jail_ns_hostname="ns.example.org"
jail_ns_ip="192.168.3.17"
jail_ns_rootdir="/usr/home/j/ns"
jail_ns_devfs_enable="YES"
jail_mail_hostname="mail.example.org"
jail_mail_ip="192.168.3.18"
jail_mail_rootdir="/usr/home/j/mail"
jail_mail_devfs_enable="YES"
jail_www_hostname="www.example.org"
jail_www_ip="62.123.43.14"
jail_www_rootdir="/usr/home/j/www"
jail_www_devfs_enable="YES"
....
+
The `jail__name__rootdir` variable is set to [.filename]#/usr/home# instead of [.filename]#/home# because the physical path of [.filename]#/home# on a default FreeBSD installation is [.filename]#/usr/home#. The `jail__name__rootdir` variable must _not_ be set to a path which includes a symbolic link, otherwise the jails will refuse to start.
. Create the required mount points for the read-only file system of each jail:
+
[source,shell]
....
# mkdir /home/j/ns /home/j/mail /home/j/www
....
. Install the read-write template into each jail using package:sysutils/cpdup[]:
+
[source,shell]
....
# mkdir /home/js
# cpdup /home/j/skel /home/js/ns
# cpdup /home/j/skel /home/js/mail
# cpdup /home/j/skel /home/js/www
....
. In this phase, the jails are built and prepared to run. First, mount the required file systems for each jail, and then start them:
+
[source,shell]
....
# mount -a
# service jail start
....
The jails should be running now. To check if they have started correctly, use `jls`. Its output should be similar to the following:
[source,shell]
....
# jls
JID IP Address Hostname Path
3 192.168.3.17 ns.example.org /home/j/ns
2 192.168.3.18 mail.example.org /home/j/mail
1 62.123.43.14 www.example.org /home/j/www
....
At this point, it should be possible to log onto each jail, add new users, or configure daemons. The `JID` column indicates the jail identification number of each running jail. Use the following command to perform administrative tasks in the jail whose JID is `3`:
[source,shell]
....
# jexec 3 tcsh
....
[[jails-service-jails-upgrading]]
=== Upgrading
The design of this setup provides an easy way to upgrade existing jails while minimizing their downtime. Also, it provides a way to roll back to the older version should a problem occur.
[.procedure]
. The first step is to upgrade the host system. Then, create a new temporary read-only template in [.filename]#/home/j/mroot2#.
+
[source,shell]
....
# mkdir /home/j/mroot2
# cd /usr/src
# make installworld DESTDIR=/home/j/mroot2
# cd /home/j/mroot2
# cpdup /usr/src usr/src
# mkdir s
....
+
The `installworld` step creates a few unnecessary directories, which should be removed:
+
[source,shell]
....
# chflags -R 0 var
# rm -R etc var root usr/local tmp
....
. Recreate the read-write symlinks for the master file system:
+
[source,shell]
....
# ln -s s/etc etc
# ln -s s/root root
# ln -s s/home home
# ln -s ../s/usr-local usr/local
# ln -s ../s/usr-X11R6 usr/X11R6
# ln -s s/tmp tmp
# ln -s s/var var
....
. Next, stop the jails:
+
[source,shell]
....
# service jail stop
....
. Unmount the original file systems as the read-write systems are attached to the read-only system ([.filename]#/s#):
+
[source,shell]
....
# umount /home/j/ns/s
# umount /home/j/ns
# umount /home/j/mail/s
# umount /home/j/mail
# umount /home/j/www/s
# umount /home/j/www
....
. Move the old read-only file system and replace it with the new one. This will serve as a backup and archive of the old read-only file system should something go wrong. The naming convention used here corresponds to when a new read-only file system has been created. Move the original FreeBSD Ports Collection over to the new file system to save some space and inodes:
+
[source,shell]
....
# cd /home/j
# mv mroot mroot.20060601
# mv mroot2 mroot
# mv mroot.20060601/usr/ports mroot/usr
....
. At this point the new read-only template is ready, so the only remaining task is to remount the file systems and start the jails:
+
[source,shell]
....
# mount -a
# service jail start
....
Use `jls` to check if the jails started correctly. Run `mergemaster` in each jail to update the configuration files.
[[jails-ezjail]]
== Managing Jails with ezjail
Creating and managing multiple jails can quickly become tedious and error-prone. Dirk Engling's ezjail automates and greatly simplifies many jail tasks. A _basejail_ is created as a template. Additional jails use man:mount_nullfs[8] to share many of the basejail directories without using additional disk space. Each additional jail takes only a few megabytes of disk space before applications are installed. Upgrading the copy of the userland in the basejail automatically upgrades all of the other jails.
Additional benefits and features are described in detail on the ezjail web site, https://erdgeist.org/arts/software/ezjail/[].
[[jails-ezjail-install]]
=== Installing ezjail
Installing ezjail consists of adding a loopback interface for use in jails, installing the port or package, and enabling the service.
[[jails-ezjail-install-procedure]]
[.procedure]
. To keep jail loopback traffic off the host's loopback network interface `lo0`, a second loopback interface is created by adding an entry to [.filename]#/etc/rc.conf#:
+
[.programlisting]
....
cloned_interfaces="lo1"
....
+
The second loopback interface `lo1` will be created when the system starts. It can also be created manually without a restart:
+
[source,shell]
....
# service netif cloneup
Created clone interfaces: lo1.
....
+
Jails can be allowed to use aliases of this secondary loopback interface without interfering with the host.
+
Inside a jail, access to the loopback address `127.0.0.1` is redirected to the first IP address assigned to the jail. To make the jail loopback correspond with the new `lo1` interface, that interface must be specified first in the list of interfaces and IP addresses given when creating a new jail.
+
Give each jail a unique loopback address in the `127.0.0.0/8` netblock.
. Install package:sysutils/ezjail[]:
+
[source,shell]
....
# cd /usr/ports/sysutils/ezjail
# make install clean
....
. Enable ezjail by adding this line to [.filename]#/etc/rc.conf#:
+
[.programlisting]
....
ezjail_enable="YES"
....
. The service will automatically start on system boot. It can be started immediately for the current session:
+
[source,shell]
....
# service ezjail start
....
[[jails-ezjail-initialsetup]]
=== Initial Setup
With ezjail installed, the basejail directory structure can be created and populated. This step is only needed once on the jail host computer.
In both of these examples, `-p` causes the ports tree to be retrieved with man:portsnap[8] into the basejail. That single copy of the ports directory will be shared by all the jails. Using a separate copy of the ports directory for jails isolates them from the host. The ezjail FAQ explains this in more detail: http://erdgeist.org/arts/software/ezjail/#FAQ[].
[[jails-ezjail-initialsetup-procedure]]
[.procedure]
. To Populate the Jail with FreeBSD-RELEASE
+
For a basejail based on the FreeBSD RELEASE matching that of the host computer, use `install`. For example, on a host computer running FreeBSD 10-STABLE, the latest RELEASE version of FreeBSD 10 will be installed in the jail:
+
[source,shell]
....
# ezjail-admin install -p
....
. To Populate the Jail with `installworld`
+
The basejail can be installed from binaries created by `buildworld` on the host with `ezjail-admin update`.
+
In this example, FreeBSD 10-STABLE has been built from source. The jail directories are created. Then `installworld` is executed, installing the host's [.filename]#/usr/obj# into the basejail.
+
[source,shell]
....
# ezjail-admin update -i -p
....
+
The host's [.filename]#/usr/src# is used by default. A different source directory on the host can be specified with `-s` and a path, or set with `ezjail_sourcetree` in [.filename]#/usr/local/etc/ezjail.conf#.
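For example, a custom source tree location could be recorded once in [.filename]#/usr/local/etc/ezjail.conf# instead of being passed with `-s` on every run; the path below is hypothetical:

[.programlisting]
....
ezjail_sourcetree=/usr/local/src/freebsd
....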
[TIP]
====
The basejail's ports tree is shared by other jails. However, downloaded distfiles are stored in the jail that downloaded them. By default, these files are stored in [.filename]#/var/ports/distfiles# within each jail. [.filename]#/var/ports# inside each jail is also used as a work directory when building ports.
====
[TIP]
====
The FTP protocol is used by default to download packages for the installation of the basejail. Firewall or proxy configurations can prevent or interfere with FTP transfers. The HTTP protocol works differently and avoids these problems. It can be chosen by specifying a full URL for a particular download mirror in [.filename]#/usr/local/etc/ezjail.conf#:
[.programlisting]
....
ezjail_ftphost=http://ftp.FreeBSD.org
....
See crossref:mirrors[mirrors-ftp,“FTP Sites”] for a list of sites.
====
[[jails-ezjail-create]]
=== Creating and Starting a New Jail
New jails are created with `ezjail-admin create`. In these examples, the `lo1` loopback interface is used as described above.
[[jails-ezjail-create-steps]]
[.procedure]
.Procedure: Create and Start a New Jail
. Create the jail, specifying a name and the loopback and network interfaces to use, along with their IP addresses. In this example, the jail is named `dnsjail`.
+
[source,shell]
....
# ezjail-admin create dnsjail 'lo1|127.0.1.1,em0|192.168.1.50'
....
+
[TIP]
====
Most network services run in jails without problems. A few network services, most notably man:ping[8], use _raw network sockets_. In jails, raw network sockets are disabled by default for security. Services that require them will not work.
Occasionally, a jail genuinely needs raw sockets. For example, network monitoring applications often use man:ping[8] to check the availability of other computers. When raw network sockets are actually needed in a jail, they can be enabled by editing the ezjail configuration file for the individual jail, [.filename]#/usr/local/etc/ezjail/jailname#. Modify the `parameters` entry:
[.programlisting]
....
export jail_jailname_parameters="allow.raw_sockets=1"
....
Do not enable raw network sockets unless services in the jail actually require them.
====
. Start the jail:
+
[source,shell]
....
# ezjail-admin start dnsjail
....
. Use a console on the jail:
+
[source,shell]
....
# ezjail-admin console dnsjail
....
The jail is operating and additional configuration can be completed. Typical settings added at this point include:
[.procedure]
. Set the `root` Password
+
Connect to the jail and set the `root` user's password:
+
[source,shell]
....
# ezjail-admin console dnsjail
# passwd
Changing local password for root
New Password:
Retype New Password:
....
. Time Zone Configuration
+
The jail's time zone can be set with man:tzsetup[8]. To avoid spurious error messages, the man:adjkerntz[8] entry in [.filename]#/etc/crontab# can be commented or removed. This job attempts to update the computer's hardware clock with time zone changes, but jails are not allowed to access that hardware.
. DNS Servers
+
Enter domain name server lines in [.filename]#/etc/resolv.conf# so DNS works in the jail.
. Edit [.filename]#/etc/hosts#
+
Change the address and add the jail name to the `localhost` entries in [.filename]#/etc/hosts#.
. Configure [.filename]#/etc/rc.conf#
+
Enter configuration settings in [.filename]#/etc/rc.conf#. This is much like configuring a full computer. The host name and IP address are not set here. Those values are already provided by the jail configuration.
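A hedged sketch of service settings that are often placed in a jail's [.filename]#/etc/rc.conf#; which daemons to enable depends entirely on the jail's purpose, so these lines are illustrative only:

[.programlisting]
....
sshd_enable="YES"       # allow SSH logins to the jail
syslogd_flags="-ss"     # keep syslogd from opening network sockets
sendmail_enable="NO"    # no mail transfer agent in this jail
....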
With the jail configured, the applications for which the jail was created can be installed.
[TIP]
====
Some ports must be built with special options to be used in a jail. For example, both of the network monitoring plugin packages package:net-mgmt/nagios-plugins[] and package:net-mgmt/monitoring-plugins[] have a `JAIL` option which must be enabled for them to work correctly inside a jail.
====
[[jails-ezjail-update]]
=== Updating Jails
[[jails-ezjail-update-os]]
==== Updating the Operating System
Because the basejail's copy of the userland is shared by the other jails, updating the basejail automatically updates all of the other jails. Either source or binary updates can be used.
To build the world from source on the host, then install it in the basejail, use:
[source,shell]
....
# ezjail-admin update -b
....
If the world has already been compiled on the host, install it in the basejail with:
[source,shell]
....
# ezjail-admin update -i
....
Binary updates use man:freebsd-update[8]. These updates have the same limitations as if man:freebsd-update[8] were being run directly. The most important one is that only -RELEASE versions of FreeBSD are available with this method.
Update the basejail to the latest patched release of the version of FreeBSD on the host. For example, updating from RELEASE-p1 to RELEASE-p2.
[source,shell]
....
# ezjail-admin update -u
....
To upgrade the basejail to a new version, first upgrade the host system as described in crossref:cutting-edge[freebsdupdate-upgrade,“Performing Major and Minor Version Upgrades”]. Once the host has been upgraded and rebooted, the basejail can then be upgraded. man:freebsd-update[8] has no way of determining which version is currently installed in the basejail, so the original version must be specified. Use man:file[1] to determine the original version in the basejail:
[source,shell]
....
# file /usr/jails/basejail/bin/sh
/usr/jails/basejail/bin/sh: ELF 64-bit LSB executable, x86-64, version 1 (FreeBSD), dynamically linked (uses shared libs), for FreeBSD 9.3, stripped
....
Now use this information to perform the upgrade from `9.3-RELEASE` to the current version of the host system:
[source,shell]
....
# ezjail-admin update -U -s 9.3-RELEASE
....
After updating the basejail, man:mergemaster[8] must be run to update each jail's configuration files.
How to use man:mergemaster[8] depends on the purpose and trustworthiness of a jail. If a jail's services or users are not trusted, then man:mergemaster[8] should only be run from within that jail:
[[jails-ezjail-update-mergemaster-untrusted]]
.man:mergemaster[8] on Untrusted Jail
[example]
====
Delete the link from the jail's [.filename]#/usr/src# into the basejail and create a new [.filename]#/usr/src# in the jail as a mountpoint. Mount the host computer's [.filename]#/usr/src# read-only on the jail's new [.filename]#/usr/src# mountpoint:
[source,shell]
....
# rm /usr/jails/jailname/usr/src
# mkdir /usr/jails/jailname/usr/src
# mount -t nullfs -o ro /usr/src /usr/jails/jailname/usr/src
....
Get a console in the jail:
[source,shell]
....
# ezjail-admin console jailname
....
Inside the jail, run `mergemaster`. Then exit the jail console:
[source,shell]
....
# cd /usr/src
# mergemaster -U
# exit
....
Finally, unmount the jail's [.filename]#/usr/src#:
[source,shell]
....
# umount /usr/jails/jailname/usr/src
....
====
[[jails-ezjail-update-mergemaster-trusted]]
.man:mergemaster[8] on Trusted Jail
[example]
====
If the users and services in a jail are trusted, man:mergemaster[8] can be run from the host:
[source,shell]
....
# mergemaster -U -D /usr/jails/jailname
....
====
[TIP]
====
After a major version update, package:sysutils/ezjail[] recommends making sure that `pkg` is of the correct version. Therefore enter:
[source,shell]
....
# pkg-static upgrade -f pkg
....
to upgrade or downgrade to the appropriate version.
====
[[jails-ezjail-update-ports]]
==== Updating Ports
The ports tree in the basejail is shared by the other jails. Updating that copy of the ports tree gives the other jails the updated version also.
The basejail ports tree is updated with man:portsnap[8]:
[source,shell]
....
# ezjail-admin update -P
....
[[jails-ezjail-control]]
=== Controlling Jails
[[jails-ezjail-control-stop-start]]
==== Stopping and Starting Jails
ezjail automatically starts jails when the computer is started. Jails can be manually stopped and restarted with `stop` and `start`:
[source,shell]
....
# ezjail-admin stop sambajail
Stopping jails: sambajail.
....
By default, jails are started automatically when the host computer starts. Autostarting can be disabled with `config`:
[source,shell]
....
# ezjail-admin config -r norun seldomjail
....
This takes effect the next time the host computer is started. A jail that is already running will not be stopped.
Enabling autostart is very similar:
[source,shell]
....
# ezjail-admin config -r run oftenjail
....
[[jails-ezjail-control-backup]]
==== Archiving and Restoring Jails
Use `archive` to create a [.filename]#.tar.gz# archive of a jail. The file name is composed from the name of the jail and the current date. Archive files are written to the archive directory, [.filename]#/usr/jails/ezjail_archives#. A different archive directory can be chosen by setting `ezjail_archivedir` in the configuration file.
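For example, archives could be kept on a separate file system by setting the variable in [.filename]#/usr/local/etc/ezjail.conf#; the path below is only an example:

[.programlisting]
....
ezjail_archivedir="/backup/jail-archives"
....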
The archive file can be copied elsewhere as a backup, or an existing jail can be restored from it with `restore`. A new jail can be created from the archive, providing a convenient way to clone existing jails.
Stop and archive a jail named `wwwserver`:
[source,shell]
....
# ezjail-admin stop wwwserver
Stopping jails: wwwserver.
# ezjail-admin archive wwwserver
# ls /usr/jails/ezjail-archives/
wwwserver-201407271153.13.tar.gz
....
Create a new jail named `wwwserver-clone` from the archive created in the previous step. Use the [.filename]#em1# interface and assign a new IP address to avoid conflict with the original:
[source,shell]
....
# ezjail-admin create -a /usr/jails/ezjail_archives/wwwserver-201407271153.13.tar.gz wwwserver-clone 'lo1|127.0.3.1,em1|192.168.1.51'
....
[[jails-ezjail-example-bind]]
=== Full Example: BIND in a Jail
Putting the BIND DNS server in a jail improves security by isolating it. This example creates a simple caching-only name server.
* The jail will be called `dns1`.
* The jail will use IP address `192.168.1.240` on the host's `re0` interface.
* The upstream ISP's DNS servers are at `10.0.0.62` and `10.0.0.61`.
* The basejail has already been created and a ports tree installed as shown in <<jails-ezjail-initialsetup>>.
[[jails-ezjail-example-bind-steps]]
.Running BIND in a Jail
[example]
====
Create a cloned loopback interface by adding a line to [.filename]#/etc/rc.conf#:
[.programlisting]
....
cloned_interfaces="lo1"
....
Immediately create the new loopback interface:
[source,shell]
....
# service netif cloneup
Created clone interfaces: lo1.
....
Create the jail:
[source,shell]
....
# ezjail-admin create dns1 'lo1|127.0.2.1,re0|192.168.1.240'
....
Start the jail, connect to a console running on it, and perform some basic configuration:
[source,shell]
....
# ezjail-admin start dns1
# ezjail-admin console dns1
# passwd
Changing local password for root
New Password:
Retype New Password:
# tzsetup
# sed -i .bak -e '/adjkerntz/ s/^/#/' /etc/crontab
# sed -i .bak -e 's/127.0.0.1/127.0.2.1/g; s/localhost.my.domain/dns1.my.domain dns1/' /etc/hosts
....
Temporarily set the upstream DNS servers in [.filename]#/etc/resolv.conf# so ports can be downloaded:
[.programlisting]
....
nameserver 10.0.0.62
nameserver 10.0.0.61
....
Still using the jail console, install package:dns/bind99[].
[source,shell]
....
# make -C /usr/ports/dns/bind99 install clean
....
Configure the name server by editing [.filename]#/usr/local/etc/namedb/named.conf#.
Create an Access Control List (ACL) of addresses and networks that are permitted to send DNS queries to this name server. This section is added just before the `options` section already in the file:
[.programlisting]
....
...
// or cause huge amounts of useless Internet traffic.
acl "trusted" {
192.168.1.0/24;
localhost;
localnets;
};
options {
...
....
Use the jail IP address in the `listen-on` setting to accept DNS queries from other computers on the network:
[.programlisting]
....
listen-on { 192.168.1.240; };
....
A simple caching-only DNS name server is created by changing the `forwarders` section. The original file contains:
[.programlisting]
....
/*
forwarders {
127.0.0.1;
};
*/
....
Uncomment the section by removing the `/\*` and `*/` lines. Enter the IP addresses of the upstream DNS servers. Immediately after the `forwarders` section, add references to the `trusted` ACL defined earlier:
[.programlisting]
....
forwarders {
10.0.0.62;
10.0.0.61;
};
allow-query { any; };
allow-recursion { trusted; };
allow-query-cache { trusted; };
....
Enable the service in [.filename]#/etc/rc.conf#:
[.programlisting]
....
named_enable="YES"
....
Start and test the name server:
[source,shell]
....
# service named start
wrote key file "/usr/local/etc/namedb/rndc.key"
Starting named.
# /usr/local/bin/dig @192.168.1.240 freebsd.org
....
A response that includes
[source,shell]
....
;; Got answer;
....
shows that the new DNS server is working. A long delay followed by a response including
[source,shell]
....
;; connection timed out; no servers could be reached
....
shows a problem. Check the configuration settings and make sure any local firewalls allow the new DNS server access to the upstream DNS servers.
The new DNS server can use itself for local name resolution, just like other local computers. Set the address of the DNS server in the client computer's [.filename]#/etc/resolv.conf#:
[.programlisting]
....
nameserver 192.168.1.240
....
A local DHCP server can be configured to provide this address for a local DNS server, providing automatic configuration on DHCP clients.
====
diff --git a/documentation/content/en/books/handbook/kernelconfig/_index.adoc b/documentation/content/en/books/handbook/kernelconfig/_index.adoc
index 61e173ed84..5be666a641 100644
--- a/documentation/content/en/books/handbook/kernelconfig/_index.adoc
+++ b/documentation/content/en/books/handbook/kernelconfig/_index.adoc
@@ -1,294 +1,295 @@
---
title: Chapter 8. Configuring the FreeBSD Kernel
part: Part II. Common Tasks
prev: books/handbook/multimedia
next: books/handbook/printing
+description: This chapter covers how to configure the FreeBSD Kernel. When to build a custom kernel, how to take a hardware inventory, how to customize a kernel configuration file, etc
---
[[kernelconfig]]
= Configuring the FreeBSD Kernel
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 8
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../../images/books/handbook/kernelconfig/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/books/handbook/kernelconfig/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/books/handbook/kernelconfig/
endif::[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[kernelconfig-synopsis]]
== Synopsis
The kernel is the core of the FreeBSD operating system. It is responsible for managing memory, enforcing security controls, networking, disk access, and much more. While much of FreeBSD is dynamically configurable, it is still occasionally necessary to configure and compile a custom kernel.
After reading this chapter, you will know:
* When to build a custom kernel.
* How to take a hardware inventory.
* How to customize a kernel configuration file.
* How to use the kernel configuration file to create and build a new kernel.
* How to install the new kernel.
* How to troubleshoot if things go wrong.
All of the commands listed in the examples in this chapter should be executed as `root`.
[[kernelconfig-custom-kernel]]
== Why Build a Custom Kernel?
Traditionally, FreeBSD used a monolithic kernel. The kernel was one large program, supported a fixed list of devices, and in order to change the kernel's behavior, one had to compile and then reboot into a new kernel.
Today, most of the functionality in the FreeBSD kernel is contained in modules which can be dynamically loaded and unloaded from the kernel as necessary. This allows the running kernel to adapt immediately to new hardware and allows new functionality to be brought into the kernel. This is known as a modular kernel.
Occasionally, it is still necessary to perform static kernel configuration. Sometimes the needed functionality is so tied to the kernel that it can not be made dynamically loadable. Some security environments prevent the loading and unloading of kernel modules and require that only needed functionality is statically compiled into the kernel.
Building a custom kernel is often a rite of passage for advanced BSD users. This process, while time consuming, can provide benefits to the FreeBSD system. Unlike the [.filename]#GENERIC# kernel, which must support a wide range of hardware, a custom kernel can be stripped down to only provide support for that computer's hardware. This has a number of benefits, such as:
* Faster boot time. Since the kernel will only probe the hardware on the system, the time it takes the system to boot can decrease.
* Lower memory usage. A custom kernel often uses less memory than the [.filename]#GENERIC# kernel by omitting unused features and device drivers. This is important because the kernel code remains resident in physical memory at all times, preventing that memory from being used by applications. For this reason, a custom kernel is useful on a system with a small amount of RAM.
* Additional hardware support. A custom kernel can add support for devices which are not present in the [.filename]#GENERIC# kernel.
Before building a custom kernel, consider the reason for doing so. If there is a need for specific hardware support, it may already exist as a module.
Kernel modules exist in [.filename]#/boot/kernel# and may be dynamically loaded into the running kernel using man:kldload[8]. Most kernel drivers have a loadable module and manual page. For example, the man:ath[4] wireless Ethernet driver has the following information in its manual page:
[source,shell,subs="macros"]
....
Alternatively, to load the driver as a module at boot time, place the
following line in man:loader.conf[5]:
if_ath_load="YES"
....
Adding `if_ath_load="YES"` to [.filename]#/boot/loader.conf# will load this module dynamically at boot time.
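When a reboot is not desired, man:kldload[8] can load the same module into the running kernel right away, and man:kldstat[8] lists the files that are currently loaded:

[source,shell]
....
# kldload if_ath
# kldstat | grep if_ath
....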
In some cases, there is no associated module in [.filename]#/boot/kernel#. This is mostly true for certain subsystems.
[[kernelconfig-devices]]
== Finding the System Hardware
Before editing the kernel configuration file, it is recommended to perform an inventory of the machine's hardware. On a dual-boot system, the inventory can be created from the other operating system. For example, Microsoft(R)'s Device Manager contains information about installed devices.
[NOTE]
====
Some versions of Microsoft(R) Windows(R) have a System icon which can be used to access Device Manager.
====
If FreeBSD is the only installed operating system, use man:dmesg[8] to determine the hardware that was found and listed during the boot probe. Most device drivers on FreeBSD have a manual page which lists the hardware supported by that driver. For example, the following lines indicate that the man:psm[4] driver found a mouse:
[source,shell]
....
psm0: <PS/2 Mouse> irq 12 on atkbdc0
psm0: [GIANT-LOCKED]
psm0: [ITHREAD]
psm0: model Generic PS/2 mouse, device ID 0
....
Since this hardware exists, this driver should not be removed from a custom kernel configuration file.
If the output of `dmesg` does not display the results of the boot probe, instead read the contents of [.filename]#/var/run/dmesg.boot#.
Another tool for finding hardware is man:pciconf[8], which provides more verbose output. For example:
[source,shell]
....
% pciconf -lv
ath0@pci0:3:0:0: class=0x020000 card=0x058a1014 chip=0x1014168c rev=0x01 hdr=0x00
vendor = 'Atheros Communications Inc.'
device = 'AR5212 Atheros AR5212 802.11abg wireless'
class = network
subclass = ethernet
....
This output shows that the [.filename]#ath# driver located a wireless Ethernet device.
The `-k` flag of man:man[1] can be used to provide useful information. For example, it can be used to display a list of manual pages which contain a particular device brand or name:
[source,shell]
....
# man -k Atheros
ath(4) - Atheros IEEE 802.11 wireless network driver
ath_hal(4) - Atheros Hardware Access Layer (HAL)
....
Once the hardware inventory list is created, refer to it to ensure that drivers for installed hardware are not removed as the custom kernel configuration is edited.
[[kernelconfig-config]]
== The Configuration File
In order to create a custom kernel configuration file and build a custom kernel, the full FreeBSD source tree must first be installed.
If [.filename]#/usr/src/# does not exist or it is empty, source has not been installed. Source can be installed using Subversion and the instructions in crossref:mirrors[svn,“Using Subversion”].
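A hedged sketch of checking out the source tree with the man:svnlite[1] client included in the base system; the branch must match the installed FreeBSD version, so `releng/12.1` below is only an example:

[source,shell]
....
# svnlite checkout https://svn.FreeBSD.org/base/releng/12.1 /usr/src
....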
Once source is installed, review the contents of [.filename]#/usr/src/sys#. This directory contains a number of subdirectories, including those which represent the following supported architectures: [.filename]#amd64#, [.filename]#i386#, [.filename]#powerpc#, and [.filename]#sparc64#. Everything inside a particular architecture's directory deals with that architecture only and the rest of the code is machine independent code common to all platforms. Each supported architecture has a [.filename]#conf# subdirectory which contains the [.filename]#GENERIC# kernel configuration file for that architecture.
Do not make edits to [.filename]#GENERIC#. Instead, copy the file to a different name and make edits to the copy. The convention is to use a name with all capital letters. When maintaining multiple FreeBSD machines with different hardware, it is a good idea to name it after the machine's hostname. This example creates a copy, named [.filename]#MYKERNEL#, of the [.filename]#GENERIC# configuration file for the `amd64` architecture:
[source,shell]
....
# cd /usr/src/sys/amd64/conf
# cp GENERIC MYKERNEL
....
[.filename]#MYKERNEL# can now be customized with any `ASCII` text editor. The default editor is vi, though an easier editor for beginners, called ee, is also installed with FreeBSD.
The format of the kernel configuration file is simple. Each line contains a keyword that represents a device or subsystem, an argument, and a brief description. Any text after a `#` is considered a comment and ignored. To remove kernel support for a device or subsystem, put a `#` at the beginning of the line representing that device or subsystem. Do not add or remove a `#` for any line that you do not understand.
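A few representative lines, paraphrased from the kind of entries found in [.filename]#GENERIC#; the commented-out device line shows how support for a device would be removed:

[.programlisting]
....
ident      MYKERNEL
options    SCHED_ULE     # ULE scheduler
device     ahci          # AHCI-compatible SATA controllers
#device    sound         # Generic sound driver (support removed)
....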
[WARNING]
====
It is easy to remove support for a device or option and end up with a broken kernel. For example, if the man:ata[4] driver is removed from the kernel configuration file, a system using `ATA` disk drivers may not boot. When in doubt, just leave support in the kernel.
====
In addition to the brief descriptions provided in this file, additional descriptions are contained in [.filename]#NOTES#, which can be found in the same directory as [.filename]#GENERIC# for that architecture. For architecture independent options, refer to [.filename]#/usr/src/sys/conf/NOTES#.
[TIP]
====
When finished customizing the kernel configuration file, save a backup copy to a location outside of [.filename]#/usr/src#.
Alternately, keep the kernel configuration file elsewhere and create a symbolic link to the file:
[source,shell]
....
# cd /usr/src/sys/amd64/conf
# mkdir /root/kernels
# cp GENERIC /root/kernels/MYKERNEL
# ln -s /root/kernels/MYKERNEL
....
====
An `include` directive is available for use in configuration files. This allows another configuration file to be included in the current one, making it easy to maintain small changes relative to an existing file. If only a small number of additional options or drivers are required, this allows a delta to be maintained with respect to [.filename]#GENERIC#, as seen in this example:
[.programlisting]
....
include GENERIC
ident MYKERNEL
options IPFIREWALL
options DUMMYNET
options IPFIREWALL_DEFAULT_TO_ACCEPT
options IPDIVERT
....
Using this method, the local configuration file expresses local differences from a [.filename]#GENERIC# kernel. As upgrades are performed, new features added to [.filename]#GENERIC# will also be added to the local kernel unless they are specifically prevented using `nooptions` or `nodevice`. A comprehensive list of configuration directives and their descriptions may be found in man:config[5].
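As a hedged illustration of the reverse direction, options and devices that are present in [.filename]#GENERIC# can be excluded from the local delta with `nooptions` and `nodevice`; the particular choices below are arbitrary:

[.programlisting]
....
include GENERIC
ident MYKERNEL
nooptions  INET6     # exclude IPv6 support
nodevice   sound     # exclude the generic sound driver
....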
[NOTE]
====
To build a file which contains all available options, run the following command as `root`:
[source,shell]
....
# cd /usr/src/sys/arch/conf && make LINT
....
====
[[kernelconfig-building]]
== Building and Installing a Custom Kernel
Once the edits to the custom configuration file have been saved, the source code for the kernel can be compiled using the following steps:
[.procedure]
*Procedure: Building a Kernel*
. Change to this directory:
+
[source,shell]
....
# cd /usr/src
....
. Compile the new kernel by specifying the name of the custom kernel configuration file:
+
[source,shell]
....
# make buildkernel KERNCONF=MYKERNEL
....
. Install the new kernel associated with the specified kernel configuration file. This command will copy the new kernel to [.filename]#/boot/kernel/kernel# and save the old kernel to [.filename]#/boot/kernel.old/kernel#:
+
[source,shell]
....
# make installkernel KERNCONF=MYKERNEL
....
. Shutdown the system and reboot into the new kernel. If something goes wrong, refer to <<kernelconfig-noboot, The kernel does not boot>>.
By default, when a custom kernel is compiled, all kernel modules are rebuilt. To update a kernel faster or to build only custom modules, edit [.filename]#/etc/make.conf# before starting to build the kernel.
For example, this variable specifies the list of modules to build instead of using the default of building all modules:
[.programlisting]
....
MODULES_OVERRIDE = linux acpi
....
Alternately, this variable lists which modules to exclude from the build process:
[.programlisting]
....
WITHOUT_MODULES = linux acpi sound
....
Additional variables are available. Refer to man:make.conf[5] for details.
[[kernelconfig-trouble]]
== If Something Goes Wrong
There are four categories of trouble that can occur when building a custom kernel:
`config` fails::
If `config` fails, it will print the line number that is incorrect. As an example, for the following message, make sure that line 17 is typed correctly by comparing it to [.filename]#GENERIC# or [.filename]#NOTES#:
+
[source,shell]
....
config: line 17: syntax error
....
`make` fails::
If `make` fails, it is usually due to an error in the kernel configuration file which is not severe enough for `config` to catch. Review the configuration, and if the problem is not apparent, send an email to the {freebsd-questions} which contains the kernel configuration file.
[[kernelconfig-noboot]]
The kernel does not boot::
If the new kernel does not boot or fails to recognize devices, do not panic! Fortunately, FreeBSD has an excellent mechanism for recovering from incompatible kernels. Simply choose the kernel to boot from at the FreeBSD boot loader. This can be accessed when the system boot menu appears by selecting the "Escape to a loader prompt" option. At the prompt, type `boot _kernel.old_`, or the name of any other kernel that is known to boot properly.
+
After booting with a good kernel, check over the configuration file and try to build it again. One helpful resource is [.filename]#/var/log/messages# which records the kernel messages from every successful boot. Also, man:dmesg[8] will print the kernel messages from the current boot.
+
[NOTE]
====
When troubleshooting a kernel, make sure to keep a copy of [.filename]#GENERIC#, or some other kernel that is known to work, under a different name that will not get erased on the next build. This is important because every time a new kernel is installed, [.filename]#kernel.old# is overwritten with the last installed kernel, which may or may not be bootable. As soon as possible, move the working kernel by renaming the directory containing the good kernel:
[source,shell]
....
# mv /boot/kernel /boot/kernel.bad
# mv /boot/kernel.good /boot/kernel
....
====
The kernel works, but man:ps[1] does not::
If the kernel version differs from the one that the system utilities have been built with, for example, a kernel built from -CURRENT sources is installed on a -RELEASE system, many system status commands like man:ps[1] and man:vmstat[8] will not work. To fix this, crossref:cutting-edge[makeworld,recompile and install a world] built with the same version of the source tree as the kernel. It is never a good idea to use a different version of the kernel than the rest of the operating system.
diff --git a/documentation/content/en/books/handbook/l10n/_index.adoc b/documentation/content/en/books/handbook/l10n/_index.adoc
index 17e19a2393..d422996c82 100644
--- a/documentation/content/en/books/handbook/l10n/_index.adoc
+++ b/documentation/content/en/books/handbook/l10n/_index.adoc
@@ -1,583 +1,584 @@
---
title: Chapter 23. Localization - i18n/L10n Usage and Setup
part: Part III. System Administration
prev: books/handbook/virtualization
next: books/handbook/cutting-edge
+description: FreeBSD supports localization into many languages, allowing users to view, input, or process data in non-English languages
---
[[l10n]]
= Localization - i18n/L10n Usage and Setup
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 23
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../../images/books/handbook/l10n/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/books/handbook/l10n/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/books/handbook/l10n/
endif::[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[l10n-synopsis]]
== Synopsis
FreeBSD is a distributed project with users and contributors located all over the world. As such, FreeBSD supports localization into many languages, allowing users to view, input, or process data in non-English languages. One can choose from most of the major languages, including, but not limited to: Chinese, German, Japanese, Korean, French, Russian, and Vietnamese.
The term internationalization has been shortened to i18n, which represents the number of letters between the first and the last letters of `internationalization`. L10n uses the same naming scheme, but from `localization`. The i18n/L10n methods, protocols, and applications allow users to use languages of their choice.
This chapter discusses the internationalization and localization features of FreeBSD. After reading this chapter, you will know:
* How locale names are constructed.
* How to set the locale for a login shell.
* How to configure the console for non-English languages.
* How to configure Xorg for different languages.
* How to find i18n-compliant applications.
* Where to find more information for configuring specific languages.
Before reading this chapter, you should:
* Know how to crossref:ports[ports,install additional third-party applications].
[[using-localization]]
== Using Localization
Localization settings are based on three components: the language code, country code, and encoding. Locale names are constructed from these parts as follows:
[.programlisting]
....
LanguageCode_CountryCode.Encoding
....
The _LanguageCode_ and _CountryCode_ are used to determine the country and the specific language variation. <<locale-lang-country>> provides some examples of __LanguageCode_CountryCode__:
[[locale-lang-country]]
.Common Language and Country Codes
[cols="1,1", frame="none", options="header"]
|===
| LanguageCode_CountryCode
| Description
|en_US
|English, United States
|ru_RU
|Russian, Russia
|zh_TW
|Traditional Chinese, Taiwan
|===
A complete listing of available locales can be found by typing:
[source,shell]
....
% locale -a | more
....
To determine the current locale setting:
[source,shell]
....
% locale
....
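Typical output on a system which has not yet been localized looks similar to the following; the exact values depend on the system's configuration:

[source,shell]
....
LANG=C
LC_CTYPE="C"
LC_COLLATE="C"
LC_TIME="C"
LC_NUMERIC="C"
LC_MONETARY="C"
LC_MESSAGES="C"
LC_ALL=
....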
Language specific character sets, such as ISO8859-1, ISO8859-15, KOI8-R, and CP437, are described in man:multibyte[3]. The active list of character sets can be found at the http://www.iana.org/assignments/character-sets[IANA Registry].
Some languages, such as Chinese or Japanese, cannot be represented using ASCII characters and require an extended language encoding using either wide or multibyte characters. Examples of wide or multibyte encodings include EUC and Big5. Older applications may mistake these encodings for control characters while newer applications usually recognize these characters. Depending on the implementation, users may be required to compile an application with wide or multibyte character support, or to configure it correctly.
[NOTE]
====
FreeBSD uses Xorg-compatible locale encodings.
====
The rest of this section describes the various methods for configuring the locale on a FreeBSD system. The next section will discuss the considerations for finding and compiling applications with i18n support.
[[setting-locale]]
=== Setting Locale for Login Shell
Locale settings are configured either in a user's [.filename]#~/.login_conf# or in the startup file of the user's shell: [.filename]#~/.profile#, [.filename]#~/.bashrc#, or [.filename]#~/.cshrc#.
Two environment variables should be set:
* `LANG`, which sets the locale
* `MM_CHARSET`, which sets the MIME character set used by applications
In addition to the user's shell configuration, these variables should also be set for specific application configuration and Xorg configuration.
Two methods are available for making the needed variable assignments: the <<login-class,login class>> method, which is the recommended method, and the <<startup-file,startup file>> method. The next two sections demonstrate how to use both methods.
[[login-class]]
==== Login Classes Method
This first method is the recommended method as it assigns the required environment variables for locale name and MIME character sets for every possible shell. This setup can either be performed by each user or it can be configured for all users by the superuser.
This minimal example sets both variables for Latin-1 encoding in the [.filename]#.login_conf# of an individual user's home directory:
[.programlisting]
....
me:\
:charset=ISO-8859-1:\
:lang=de_DE.ISO8859-1:
....
Here is an example of a user's [.filename]#~/.login_conf# that sets the variables for Traditional Chinese in BIG-5 encoding. More variables are needed because some applications do not correctly respect locale variables for Chinese, Japanese, and Korean:
[.programlisting]
....
#Users who do not wish to use monetary units or time formats
#of Taiwan can manually change each variable
me:\
:lang=zh_TW.Big5:\
:setenv=LC_ALL=zh_TW.Big5,LC_COLLATE=zh_TW.Big5,LC_CTYPE=zh_TW.Big5,LC_MESSAGES=zh_TW.Big5,LC_MONETARY=zh_TW.Big5,LC_NUMERIC=zh_TW.Big5,LC_TIME=zh_TW.Big5:\
:charset=big5:\
:xmodifiers="@im=gcin": #Set gcin as the XIM Input Server
....
Alternately, the superuser can configure all users of the system for localization. The following variables in [.filename]#/etc/login.conf# are used to set the locale and MIME character set:
[.programlisting]
....
language_name|Account Type Description:\
:charset=MIME_charset:\
:lang=locale_name:\
:tc=default:
....
So, the previous Latin-1 example would look like this:
[.programlisting]
....
german|German Users Accounts:\
:charset=ISO-8859-1:\
:lang=de_DE.ISO8859-1:\
:tc=default:
....
See man:login.conf[5] for more details about these variables. Note that [.filename]#/etc/login.conf# already contains a pre-defined _russian_ class.
Whenever [.filename]#/etc/login.conf# is edited, remember to execute the following command to update the capability database:
[source,shell]
....
# cap_mkdb /etc/login.conf
....
[NOTE]
====
For an end user, the `cap_mkdb` command will need to be run on their [.filename]#~/.login_conf# for any changes to take effect.
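For example:

[source,shell]
....
% cap_mkdb ~/.login_conf
....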
====
===== Utilities Which Change Login Classes
In addition to manually editing [.filename]#/etc/login.conf#, several utilities are available for setting the locale for newly created users.
When using `vipw` to add new users, specify the _language_ to set the locale:
[.programlisting]
....
user:password:1111:11:language:0:0:User Name:/home/user:/bin/sh
....
When using `adduser` to add new users, the default language can be pre-configured for all new users or specified for an individual user.
If all new users use the same language, set `defaultclass=_language_` in [.filename]#/etc/adduser.conf#.
To override this setting when creating a user, either input the required locale at this prompt:
[source,shell]
....
Enter login class: default []:
....
or specify the locale to set when invoking `adduser`:
[source,shell]
....
# adduser -class language
....
If `pw` is used to add new users, specify the locale as follows:
[source,shell]
....
# pw useradd user_name -L language
....
To change the login class of an existing user, `chpass` can be used. Invoke it as superuser and provide the username to edit as the argument.
[source,shell]
....
# chpass user_name
....
[[startup-file]]
==== Shell Startup File Method
This second method is not recommended as each shell that is used requires manual configuration, where each shell has a different configuration file and differing syntax. As an example, to set the German language for the `sh` shell, these lines could be added to [.filename]#~/.profile# to set the shell for that user only. These lines could also be added to [.filename]#/etc/profile# or [.filename]#/usr/share/skel/dot.profile# to set that shell for all users:
[.programlisting]
....
LANG=de_DE.ISO8859-1; export LANG
MM_CHARSET=ISO-8859-1; export MM_CHARSET
....
However, the name of the configuration file and the syntax used differs for the `csh` shell. These are the equivalent settings for [.filename]#~/.login#, [.filename]#/etc/csh.login#, or [.filename]#/usr/share/skel/dot.login#:
[.programlisting]
....
setenv LANG de_DE.ISO8859-1
setenv MM_CHARSET ISO-8859-1
....
To complicate matters, the syntax needed to configure Xorg in [.filename]#~/.xinitrc# also depends upon the shell. The first example is for the `sh` shell and the second is for the `csh` shell:
[.programlisting]
....
LANG=de_DE.ISO8859-1; export LANG
....
[.programlisting]
....
setenv LANG de_DE.ISO8859-1
....
[[setting-console]]
=== Console Setup
Several localized fonts are available for the console. To see a listing of available fonts, type `ls /usr/share/syscons/fonts`. To configure the console font, specify the _font_name_, without the [.filename]#.fnt# suffix, in [.filename]#/etc/rc.conf#:
[.programlisting]
....
font8x16=font_name
font8x14=font_name
font8x8=font_name
....
The keymap and screenmap can be set by adding the following to [.filename]#/etc/rc.conf#:
[.programlisting]
....
scrnmap=screenmap_name
keymap=keymap_name
keychange="fkey_number sequence"
....
To see the list of available screenmaps, type `ls /usr/share/syscons/scrnmaps`. Do not include the [.filename]#.scm# suffix when specifying _screenmap_name_. A screenmap with a corresponding mapped font is usually needed as a workaround: it expands bit 8 to bit 9 on a VGA adapter's font character matrix so that letters are moved out of the pseudographics area when the screen font uses a bit 8 column.
To see the list of available keymaps, type `ls /usr/share/syscons/keymaps`. When specifying the _keymap_name_, do not include the [.filename]#.kbd# suffix. To test keymaps without rebooting, use man:kbdmap[1].
The `keychange` entry is usually needed to program function keys to match the selected terminal type because function key sequences cannot be defined in the keymap.
Next, set the correct console terminal type in [.filename]#/etc/ttys# for all virtual terminal entries. <<locale-charset>> summarizes the available terminal types:
[[locale-charset]]
.Defined Terminal Types for Character Sets
[cols="1,1", frame="none", options="header"]
|===
| Character Set
| Terminal Type
|ISO8859-1 or ISO8859-15
|`cons25l1`
|ISO8859-2
|`cons25l2`
|ISO8859-7
|`cons25l7`
|KOI8-R
|`cons25r`
|KOI8-U
|`cons25u`
|CP437 (VGA default)
|`cons25`
|US-ASCII
|`cons25w`
|===
For languages with wide or multibyte characters, install a console for that language from the FreeBSD Ports Collection. The available ports are summarized in <<locale-console>>. Once installed, refer to the port's [.filename]#pkg-message# or man pages for configuration and usage instructions.
[[locale-console]]
.Available Consoles from the Ports Collection
[cols="1,1", frame="none", options="header"]
|===
| Language
| Port Location
|Traditional Chinese (BIG-5)
|package:chinese/big5con[]
|Chinese/Japanese/Korean
|package:chinese/cce[]
|Chinese/Japanese/Korean
|package:chinese/zhcon[]
|Japanese
|package:chinese/kon2[]
|Japanese
|package:japanese/kon2-14dot[]
|Japanese
|package:japanese/kon2-16dot[]
|===
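For example, one of the consoles listed above can be installed with man:pkg[8]; the port chosen here is only illustrative:

[source,shell]
....
# pkg install zhcon
....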
If moused is enabled in [.filename]#/etc/rc.conf#, additional configuration may be required. By default, the mouse cursor of the man:syscons[4] driver occupies the `0xd0`-`0xd3` range in the character set. If the language uses this range, move the cursor's range by adding the following line to [.filename]#/etc/rc.conf#:
[.programlisting]
....
mousechar_start=3
....
=== Xorg Setup
crossref:x11[x11,The X Window System] describes how to install and configure Xorg. When configuring Xorg for localization, additional fonts and input methods are available from the FreeBSD Ports Collection. Application specific i18n settings such as fonts and menus can be tuned in [.filename]#~/.Xresources# and should allow users to view their selected language in graphical application menus.
The X Input Method (XIM) protocol is an Xorg standard for inputting non-English characters. <<locale-xim>> summarizes the input method applications which are available in the FreeBSD Ports Collection. Additional Fcitx and Uim applications are also available.
[[locale-xim]]
.Available Input Methods
[cols="1,1", frame="none", options="header"]
|===
| Language
| Input Method
|Chinese
|package:chinese/gcin[]
|Chinese
|package:chinese/ibus-chewing[]
|Chinese
|package:chinese/ibus-pinyin[]
|Chinese
|package:chinese/oxim[]
|Chinese
|package:chinese/scim-fcitx[]
|Chinese
|package:chinese/scim-pinyin[]
|Chinese
|package:chinese/scim-tables[]
|Japanese
|package:japanese/ibus-anthy[]
|Japanese
|package:japanese/ibus-mozc[]
|Japanese
|package:japanese/ibus-skk[]
|Japanese
|package:japanese/im-ja[]
|Japanese
|package:japanese/kinput2[]
|Japanese
|package:japanese/scim-anthy[]
|Japanese
|package:japanese/scim-canna[]
|Japanese
|package:japanese/scim-honoka[]
|Japanese
|package:japanese/scim-honoka-plugin-romkan[]
|Japanese
|package:japanese/scim-honoka-plugin-wnn[]
|Japanese
|package:japanese/scim-prime[]
|Japanese
|package:japanese/scim-skk[]
|Japanese
|package:japanese/scim-tables[]
|Japanese
|package:japanese/scim-tomoe[]
|Japanese
|package:japanese/scim-uim[]
|Japanese
|package:japanese/skkinput[]
|Japanese
|package:japanese/skkinput3[]
|Japanese
|package:japanese/uim-anthy[]
|Korean
|package:korean/ibus-hangul[]
|Korean
|package:korean/imhangul[]
|Korean
|package:korean/nabi[]
|Korean
|package:korean/scim-hangul[]
|Korean
|package:korean/scim-tables[]
|Vietnamese
|package:vietnamese/xvnkb[]
|Vietnamese
|package:vietnamese/x-unikey[]
|===
[[l10n-compiling]]
== Finding i18n Applications
i18n applications are built using i18n kits and libraries. These allow developers to keep displayed menus and text in a simple file which can then be translated into each language.
The link:https://www.FreeBSD.org/ports/[FreeBSD Ports Collection] contains many applications with built-in support for wide or multibyte characters for several languages. Such applications include `i18n` in their names for easy identification. However, they do not always support the language needed.
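As one possible way of locating such applications, search the package repository or the Ports Collection for the `i18n` marker:

[source,shell]
....
% pkg search i18n
% cd /usr/ports && make search name=i18n
....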
Some applications can be compiled with a specific charset. This is usually done in the port's [.filename]#Makefile# or by passing a value to `configure`. Refer to the i18n documentation in the respective FreeBSD port's source to determine the needed `configure` value, or check the port's [.filename]#Makefile# to determine which compile options to use when building the port.
[[lang-setup]]
== Locale Configuration for Specific Languages
This section provides configuration examples for localizing a FreeBSD system for the Russian language. It then provides some additional resources for localizing other languages.
[[ru-localize]]
=== Russian Language (KOI8-R Encoding)
This section shows the specific settings needed to localize a FreeBSD system for the Russian language. Refer to <<using-localization,Using Localization>> for a more complete description of each type of setting.
To set this locale for the login shell, add the following lines to each user's [.filename]#~/.login_conf#:
[.programlisting]
....
me:My Account:\
:charset=KOI8-R:\
:lang=ru_RU.KOI8-R:
....
To configure the console, add the following lines to [.filename]#/etc/rc.conf#:
[.programlisting]
....
keymap="ru.utf-8"
scrnmap="utf-82cp866"
font8x16="cp866b-8x16"
font8x14="cp866-8x14"
font8x8="cp866-8x8"
mousechar_start=3
....
For each `ttyv` entry in [.filename]#/etc/ttys#, use `cons25r` as the terminal type.
To configure printing, a special output filter is needed to convert from KOI8-R to CP866 since most printers with Russian characters come with hardware code page CP866. FreeBSD includes a default filter for this purpose, [.filename]#/usr/libexec/lpr/ru/koi2alt#. To use this filter, add this entry to [.filename]#/etc/printcap#:
[.programlisting]
....
lp|Russian local line printer:\
:sh:of=/usr/libexec/lpr/ru/koi2alt:\
:lp=/dev/lpt0:sd=/var/spool/output/lpd:lf=/var/log/lpd-errs:
....
Refer to man:printcap[5] for a more detailed explanation.
To configure support for Russian filenames in mounted MS-DOS(R) file systems, include `-L` and the locale name when adding an entry to [.filename]#/etc/fstab#:
[.programlisting]
....
/dev/ad0s2 /dos/c msdos rw,-Lru_RU.KOI8-R 0 0
....
Refer to man:mount_msdosfs[8] for more details.
To configure Russian fonts for Xorg, install the package:x11-fonts/xorg-fonts-cyrillic[] package. Then, check the `"Files"` section in [.filename]#/etc/X11/xorg.conf#. The following line must be added _before_ any other `FontPath` entries:
[.programlisting]
....
FontPath "/usr/local/lib/X11/fonts/cyrillic"
....
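If the font package is not yet present, it can be installed with man:pkg[8]:

[source,shell]
....
# pkg install xorg-fonts-cyrillic
....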
Additional Cyrillic fonts are available in the Ports Collection.
To activate a Russian keyboard, add the following to the `"Keyboard"` section of [.filename]#/etc/X11/xorg.conf#:
[.programlisting]
....
Option "XkbLayout" "us,ru"
Option "XkbOptions" "grp:toggle"
....
Make sure that `XkbDisable` is commented out in that file.
For `grp:toggle` use kbd:[Right Alt], for `grp:ctrl_shift_toggle` use kbd:[Ctrl+Shift]. For `grp:caps_toggle` use kbd:[CapsLock]. The old kbd:[CapsLock] function is still available in LAT mode only using kbd:[Shift+CapsLock]. `grp:caps_toggle` does not work in Xorg for some unknown reason.
If the keyboard has "Windows(R)" keys, and some non-alphabetical keys are mapped incorrectly, add the following line to [.filename]#/etc/X11/xorg.conf#:
[.programlisting]
....
Option "XkbVariant" ",winkeys"
....
[NOTE]
====
The Russian XKB keyboard may not work with non-localized applications. Minimally localized applications should call `XtSetLanguageProc(NULL, NULL, NULL);` early in the program.
====
See http://koi8.pp.ru/xwin.html[http://koi8.pp.ru/xwin.html] for more instructions on localizing Xorg applications. For more general information about KOI8-R encoding, refer to http://koi8.pp.ru/[http://koi8.pp.ru/].
=== Additional Language-Specific Resources
This section lists some additional resources for configuring other locales.
Traditional Chinese for Taiwan::
The FreeBSD-Taiwan Project has a Chinese HOWTO for FreeBSD at http://netlab.cse.yzu.edu.tw/\~statue/freebsd/zh-tut/[http://netlab.cse.yzu.edu.tw/~statue/freebsd/zh-tut/].
Greek Language Localization::
A complete article on Greek support in FreeBSD is available https://www.FreeBSD.org/doc/gr/articles/greek-language-support/[here], in Greek only, as part of the official FreeBSD Greek documentation.
Japanese and Korean Language Localization::
For Japanese, refer to http://www.jp.FreeBSD.org/[http://www.jp.FreeBSD.org/], and for Korean, refer to http://www.kr.FreeBSD.org/[http://www.kr.FreeBSD.org/].
Non-English FreeBSD Documentation::
Some FreeBSD contributors have translated parts of the FreeBSD documentation to other languages. They are available through links on the link:https://www.FreeBSD.org/[FreeBSD web site] or in [.filename]#/usr/share/doc#.
diff --git a/documentation/content/en/books/handbook/linuxemu/_index.adoc b/documentation/content/en/books/handbook/linuxemu/_index.adoc
index d82ee1858f..8ce8274cd5 100644
--- a/documentation/content/en/books/handbook/linuxemu/_index.adoc
+++ b/documentation/content/en/books/handbook/linuxemu/_index.adoc
@@ -1,271 +1,272 @@
---
title: Chapter 10. Linux® Binary Compatibility
part: Part II. Common Tasks
prev: books/handbook/printing
next: books/handbook/wine
+description: FreeBSD provides binary compatibility with Linux®, allowing users to install and run most Linux® binaries on a FreeBSD system without having to first modify the binary
---
[[linuxemu]]
= Linux(R) Binary Compatibility
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 10
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../../images/books/handbook/linuxemu/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/books/handbook/linuxemu/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/books/handbook/linuxemu/
endif::[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[linuxemu-synopsis]]
== Synopsis
FreeBSD provides binary compatibility with Linux(R), allowing users to install and run most Linux(R) binaries on a FreeBSD system without having to first modify the binary. It has even been reported that, in some situations, Linux(R) binaries perform better on FreeBSD than they do on Linux(R).
However, some Linux(R)-specific operating system features are not supported under FreeBSD. For example, Linux(R) binaries will not work on FreeBSD if they make heavy use of i386(TM)-specific calls, such as enabling virtual 8086 mode.
[NOTE]
====
Support for 64-bit binary compatibility with Linux(R) was added in FreeBSD 10.3.
====
After reading this chapter, you will know:
* How to enable Linux(R) binary compatibility on a FreeBSD system.
* How to install additional Linux(R) shared libraries.
* How to install Linux(R) applications on a FreeBSD system.
* The implementation details of Linux(R) compatibility in FreeBSD.
Before reading this chapter, you should:
* Know how to install crossref:ports[ports,additional third-party software].
[[linuxemu-lbc-install]]
== Configuring Linux(R) Binary Compatibility
By default, Linux(R) libraries are not installed and Linux(R) binary compatibility is not enabled. Linux(R) libraries can either be installed manually or from the FreeBSD Ports Collection.
Before attempting to build the port, load the Linux(R) kernel module, otherwise the build will fail:
[source,shell]
....
# kldload linux
....
For 64-bit compatibility:
[source,shell]
....
# kldload linux64
....
To verify that the module is loaded:
[source,shell]
....
% kldstat
Id Refs Address Size Name
1 2 0xc0100000 16bdb8 kernel
7 1 0xc24db000 d000 linux.ko
....
The package:emulators/linux_base-c7[] package or port is the easiest way to install a base set of Linux(R) libraries and binaries on a FreeBSD system. To install the port:
[source,shell]
....
# pkg install emulators/linux_base-c7
....
For Linux(R) compatibility to be enabled at boot time, add this line to [.filename]#/etc/rc.conf#:
[.programlisting]
....
linux_enable="YES"
....
On 64-bit machines, [.filename]#/etc/rc.d/abi# will automatically load the module for 64-bit emulation.
Since the Linux(R) binary compatibility layer has gained support for running both 32- and 64-bit Linux(R) binaries (on 64-bit x86 hosts), it is no longer possible to link the emulation functionality statically into a custom kernel.
For some applications, [.filename]#/compat/linux/proc#, [.filename]#/compat/linux/sys#, and [.filename]#/compat/linux/dev/shm# may need to be mounted. Add this line to [.filename]#/etc/fstab#:
[.programlisting]
....
linprocfs /compat/linux/proc linprocfs rw 0 0
linsysfs /compat/linux/sys linsysfs rw 0 0
tmpfs /compat/linux/dev/shm tmpfs rw,mode=1777 0 0
....
Then mount the filesystem accordingly:
[source,shell]
....
# mount /compat/linux/sys
# mount /compat/linux/proc
# mount /compat/linux/dev/shm
....
[[linuxemu-libs-manually]]
=== Installing Additional Libraries Manually
If a Linux(R) application complains about missing shared libraries after configuring Linux(R) binary compatibility, determine which shared libraries the Linux(R) binary needs and install them manually.
From a Linux(R) system, `ldd` can be used to determine which shared libraries the application needs. For example, to check which shared libraries `linuxdoom` needs, run this command from a Linux(R) system that has Doom installed:
[source,shell]
....
% ldd linuxdoom
libXt.so.3 (DLL Jump 3.1) => /usr/X11/lib/libXt.so.3.1.0
libX11.so.3 (DLL Jump 3.1) => /usr/X11/lib/libX11.so.3.1.0
libc.so.4 (DLL Jump 4.5pl26) => /lib/libc.so.4.6.29
....
Then, copy all the files in the last column of the output from the Linux(R) system into [.filename]#/compat/linux# on the FreeBSD system. Once copied, create symbolic links to the names in the first column. This example will result in the following files on the FreeBSD system:
[source,shell]
....
/compat/linux/usr/X11/lib/libXt.so.3.1.0
/compat/linux/usr/X11/lib/libXt.so.3 -> libXt.so.3.1.0
/compat/linux/usr/X11/lib/libX11.so.3.1.0
/compat/linux/usr/X11/lib/libX11.so.3 -> libX11.so.3.1.0
/compat/linux/lib/libc.so.4.6.29
/compat/linux/lib/libc.so.4 -> libc.so.4.6.29
....
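The copy and link steps themselves might look like the following sketch, assuming the libraries from the example above were first transferred to [.filename]#/tmp# on the FreeBSD system:

[source,shell]
....
# cp /tmp/libc.so.4.6.29 /compat/linux/lib/
# cd /compat/linux/lib && ln -sf libc.so.4.6.29 libc.so.4
....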
If a Linux(R) shared library already exists with a matching major revision number to the first column of the `ldd` output, it does not need to be copied to the file named in the last column, as the existing library should work. It is advisable to copy the shared library if it is a newer version, though. The old one can be removed, as long as the symbolic link points to the new one.
For example, these libraries already exist on the FreeBSD system:
[source,shell]
....
/compat/linux/lib/libc.so.4.6.27
/compat/linux/lib/libc.so.4 -> libc.so.4.6.27
....
and `ldd` indicates that a binary requires a later version:
[source,shell]
....
libc.so.4 (DLL Jump 4.5pl26) -> libc.so.4.6.29
....
Since the existing library is only one or two versions out of date in the last digit, the program should still work with the slightly older version. However, it is safe to replace the existing [.filename]#libc.so# with the newer version:
[source,shell]
....
/compat/linux/lib/libc.so.4.6.29
/compat/linux/lib/libc.so.4 -> libc.so.4.6.29
....
Generally, one will need to look for the shared libraries that Linux(R) binaries depend on only the first few times that a Linux(R) program is installed on FreeBSD. After a while, there will be a sufficient set of Linux(R) shared libraries on the system to be able to run newly installed Linux(R) binaries without any extra work.
=== Installing Linux(R) ELF Binaries
ELF binaries sometimes require an extra step. When an unbranded ELF binary is executed, it will generate an error message:
[source,shell]
....
% ./my-linux-elf-binary
ELF binary type not known
Abort
....
To help the FreeBSD kernel distinguish between a FreeBSD ELF binary and a Linux(R) binary, use man:brandelf[1]:
[source,shell]
....
% brandelf -t Linux my-linux-elf-binary
....
Since the GNU toolchain places the appropriate branding information into ELF binaries automatically, this step is usually not necessary.
=== Installing a Linux(R) RPM Based Application
To install a Linux(R) RPM-based application, first install the package:archivers/rpm4[] package or port. Once installed, `root` can use this command to install a [.filename]#.rpm#:
[source,shell]
....
# cd /compat/linux
# rpm2cpio < /path/to/linux.archive.rpm | cpio -id
....
If necessary, `brandelf` the installed ELF binaries. Note that this will prevent a clean uninstall.
=== Configuring the Hostname Resolver
If DNS does not work or this error appears:
[source,shell]
....
resolv+: "bind" is an invalid keyword resolv+:
"hosts" is an invalid keyword
....
configure [.filename]#/compat/linux/etc/host.conf# as follows:
[.programlisting]
....
order hosts, bind
multi on
....
This specifies that [.filename]#/etc/hosts# is searched first and DNS is searched second. When [.filename]#/compat/linux/etc/host.conf# does not exist, Linux(R) applications use [.filename]#/etc/host.conf# and complain about the incompatible FreeBSD syntax. Remove `bind` if a name server is not configured using [.filename]#/etc/resolv.conf#.
[[linuxemu-advanced]]
== Advanced Topics
This section describes how Linux(R) binary compatibility works and is based on an email written to {freebsd-chat} by Terry Lambert mailto:tlambert@primenet.com[tlambert@primenet.com] (Message ID: `<199906020108.SAA07001@usr09.primenet.com>`).
FreeBSD has an abstraction called an "execution class loader". This is a wedge into the man:execve[2] system call.
Historically, the UNIX(R) loader examined the magic number (generally the first 4 or 8 bytes of the file) to see if it was a binary known to the system, and if so, invoked the binary loader.
If it was not the binary type for the system, the man:execve[2] call returned a failure, and the shell attempted to start executing it as shell commands. The assumption was a default of "whatever the current shell is".
Later, a hack was made for man:sh[1] to examine the first two characters, and if they were `:\n`, it invoked the man:csh[1] shell instead.
FreeBSD has a list of loaders, instead of a single loader, with a fallback to the `#!` loader for running shell interpreters or shell scripts.
For the Linux(R) ABI support, FreeBSD sees the magic number as an ELF binary. The ELF loader looks for a specialized _brand_, which is a comment section in the ELF image, and which is not present on SVR4/Solaris(TM) ELF binaries.
For Linux(R) binaries to function, they must be _branded_ as type `Linux` using man:brandelf[1]:
[source,shell]
....
# brandelf -t Linux file
....
When the ELF loader sees the `Linux` brand, the loader replaces a pointer in the `proc` structure. All system calls are indexed through this pointer. In addition, the process is flagged for special handling of the trap vector for the signal trampoline code, and several other (minor) fix-ups that are handled by the Linux(R) kernel module.
The Linux(R) system call vector contains, among other things, a list of `sysent[]` entries whose addresses reside in the kernel module.
When a system call is called by the Linux(R) binary, the trap code dereferences the system call function pointer off the `proc` structure, and gets the Linux(R), not the FreeBSD, system call entry points.
Linux(R) mode dynamically _reroots_ lookups. This is, in effect, equivalent to `union` to file system mounts. First, an attempt is made to lookup the file in [.filename]#/compat/linux/original-path#. If that fails, the lookup is done in [.filename]#/original-path#. This makes sure that binaries that require other binaries can run. For example, the Linux(R) toolchain can all run under Linux(R) ABI support. It also means that the Linux(R) binaries can load and execute FreeBSD binaries, if there are no corresponding Linux(R) binaries present, and that a man:uname[1] command can be placed in the [.filename]#/compat/linux# directory tree to ensure that the Linux(R) binaries cannot tell they are not running on Linux(R).
In effect, there is a Linux(R) kernel in the FreeBSD kernel. The various underlying functions that implement all of the services provided by the kernel are identical to both the FreeBSD system call table entries, and the Linux(R) system call table entries: file system operations, virtual memory operations, signal delivery, and System V IPC. The only difference is that FreeBSD binaries get the FreeBSD _glue_ functions, and Linux(R) binaries get the Linux(R) _glue_ functions. The FreeBSD _glue_ functions are statically linked into the kernel, and the Linux(R) _glue_ functions can be statically linked, or they can be accessed via a kernel module.
Technically, this is not really emulation, it is an ABI implementation. It is sometimes called "Linux(R) emulation" because the implementation was done at a time when there was no other word to describe what was going on. Saying that FreeBSD ran Linux(R) binaries was not true, since the code was not compiled in.
diff --git a/documentation/content/en/books/handbook/mail/_index.adoc b/documentation/content/en/books/handbook/mail/_index.adoc
index d603424c36..e7d1d8f0e1 100644
--- a/documentation/content/en/books/handbook/mail/_index.adoc
+++ b/documentation/content/en/books/handbook/mail/_index.adoc
@@ -1,933 +1,934 @@
---
title: Chapter 29. Electronic Mail
part: IV. Network Communication
prev: books/handbook/ppp-and-slip
next: books/handbook/network-servers
+description: This chapter provides a basic introduction to running a mail server on FreeBSD, as well as an introduction to sending and receiving email using FreeBSD
---
[[mail]]
= Electronic Mail
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 29
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../../images/books/handbook/mail/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/books/handbook/mail/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/books/handbook/mail/
endif::[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[mail-synopsis]]
== Synopsis
"Electronic Mail", better known as email, is one of the most widely used forms of communication today. This chapter provides a basic introduction to running a mail server on FreeBSD, as well as an introduction to sending and receiving email using FreeBSD. For more complete coverage of this subject, refer to the books listed in crossref:bibliography[bibliography,Bibliography].
After reading this chapter, you will know:
* Which software components are involved in sending and receiving electronic mail.
* Where basic Sendmail configuration files are located in FreeBSD.
* The difference between remote and local mailboxes.
* How to block spammers from illegally using a mail server as a relay.
* How to install and configure an alternate Mail Transfer Agent, replacing Sendmail.
* How to troubleshoot common mail server problems.
* How to set up the system to send mail only.
* How to use mail with a dialup connection.
* How to configure SMTP authentication for added security.
* How to install and use a Mail User Agent, such as mutt, to send and receive email.
* How to download mail from a remote POP or IMAP server.
* How to automatically apply filters and rules to incoming email.
Before reading this chapter, you should:
* Properly set up a network connection (crossref:advanced-networking[advanced-networking,Advanced Networking]).
* Properly set up the DNS information for a mail host (crossref:network-servers[network-servers,Network Servers]).
* Know how to install additional third-party software (crossref:ports[ports,Installing Applications: Packages and Ports]).
[[mail-using]]
== Mail Components
There are five major parts involved in an email exchange: the Mail User Agent (MUA), the Mail Transfer Agent (MTA), a mail host, a remote or local mailbox, and DNS. This section provides an overview of these components.
Mail User Agent (MUA)::
The Mail User Agent (MUA) is an application which is used to compose, send, and receive emails. This application can be a command line program, such as the built-in `mail` utility or a third-party application from the Ports Collection, such as mutt, alpine, or elm. Dozens of graphical programs are also available in the Ports Collection, including Claws Mail, Evolution, and Thunderbird. Some organizations provide a web mail program which can be accessed through a web browser. More information about installing and using a MUA on FreeBSD can be found in <<mail-agents>>.
Mail Transfer Agent (MTA)::
The Mail Transfer Agent (MTA) is responsible for receiving incoming mail and delivering outgoing mail. FreeBSD ships with Sendmail as the default MTA, but it also supports numerous other mail server daemons, including Exim, Postfix, and qmail. Sendmail configuration is described in <<sendmail>>. If another MTA is installed using the Ports Collection, refer to its post-installation message for FreeBSD-specific configuration details and the application's website for more general configuration instructions.
Mail Host and Mailboxes::
The mail host is a server that is responsible for delivering and receiving mail for a host or a network. The mail host collects all mail sent to the domain and stores it either in the default [.filename]#mbox# or the alternative Maildir format, depending on the configuration. Once mail has been stored, it may either be read locally using a MUA or remotely accessed and collected using protocols such as POP or IMAP. If mail is read locally, a POP or IMAP server does not need to be installed.
+
To access mailboxes remotely, a POP or IMAP server is required as these protocols allow users to connect to their mailboxes from remote locations. IMAP offers several advantages over POP. These include the ability to store a copy of messages on the remote server after they are downloaded, and support for concurrent updates. IMAP can be useful over low-speed links as it allows users to fetch the structure of messages without downloading them. It can also perform tasks such as searching on the server in order to minimize data transfer between clients and servers.
+
Several POP and IMAP servers are available in the Ports Collection. These include package:mail/qpopper[], package:mail/imap-uw[], package:mail/courier-imap[], and package:mail/dovecot2[].
+
[WARNING]
====
It should be noted that both POP and IMAP transmit information, including username and password credentials, in clear-text. To secure the transmission of information across these protocols, consider tunneling sessions over man:ssh[1] (crossref:security[security-ssh-tunneling,"SSH Tunneling"]) or using SSL (crossref:security[openssl,"OpenSSL"]).
====
Domain Name System (DNS)::
The Domain Name System (DNS) and its daemon `named` play a large role in the delivery of email. In order to deliver mail from one site to another, the MTA will look up the remote site in DNS to determine which host will receive mail for the destination. This process also occurs when mail is sent from a remote host to the MTA.
+
In addition to mapping hostnames to IP addresses, DNS is responsible for storing information specific to mail delivery, known as Mail eXchanger (MX) records. The MX record specifies which hosts will receive mail for a particular domain.
+
To view the MX records for a domain, specify the type of record. Refer to man:host[1] for more details about this command:
+
[source,shell]
....
% host -t mx FreeBSD.org
FreeBSD.org mail is handled by 10 mx1.FreeBSD.org
....
+
Refer to crossref:network-servers[network-dns,"Domain Name System (DNS)"] for more information about DNS and its configuration.
[[sendmail]]
== Sendmail Configuration Files
Sendmail is the default MTA installed with FreeBSD. It accepts mail from MUAs and delivers it to the appropriate mail host, as defined by its configuration. Sendmail can also accept network connections and deliver mail to local mailboxes or to another program.
The configuration files for Sendmail are located in [.filename]#/etc/mail#. This section describes these files in more detail.
[.filename]#/etc/mail/access#::
This access database file defines which hosts or IP addresses have access to the local mail server and what kind of access they have. Hosts listed as `OK`, which is the default option, are allowed to send mail to this host as long as the mail's final destination is the local machine. Hosts listed as `REJECT` are rejected for all mail connections. Hosts listed as `RELAY` are allowed to send mail for any destination using this mail server. Hosts listed as `ERROR` will have their mail returned with the specified mail error. If a host is listed as `SKIP`, Sendmail will abort the current search for this entry without accepting or rejecting the mail. Hosts listed as `QUARANTINE` will have their messages held and will receive the specified text as the reason for the hold.
+
Examples of using these options for both IPv4 and IPv6 addresses can be found in the FreeBSD sample configuration, [.filename]#/etc/mail/access.sample#:
+
[.programlisting]
....
# $FreeBSD$
#
# Mail relay access control list. Default is to reject mail unless the
# destination is local, or listed in /etc/mail/local-host-names
#
## Examples (commented out for safety)
#From:cyberspammer.com ERROR:"550 We don't accept mail from spammers"
#From:okay.cyberspammer.com OK
#Connect:sendmail.org RELAY
#To:sendmail.org RELAY
#Connect:128.32 RELAY
#Connect:128.32.2 SKIP
#Connect:IPv6:1:2:3:4:5:6:7 RELAY
#Connect:suspicious.example.com QUARANTINE:Mail from suspicious host
#Connect:[127.0.0.3] OK
#Connect:[IPv6:1:2:3:4:5:6:7:8] OK
....
+
To configure the access database, use the format shown in the sample to make entries in [.filename]#/etc/mail/access#, but do not put a comment symbol (`#`) in front of the entries. Create an entry for each host or network whose access should be configured. Mail senders that match the left side of the table are affected by the action on the right side of the table.
+
Whenever this file is updated, update its database and restart Sendmail:
+
[source,shell]
....
# makemap hash /etc/mail/access < /etc/mail/access
# service sendmail restart
....
[.filename]#/etc/mail/aliases#::
This database file contains a list of virtual mailboxes that are expanded to users, files, programs, or other aliases. Here are a few entries to illustrate the file format:
+
[.programlisting]
....
root: localuser
ftp-bugs: joe,eric,paul
bit.bucket: /dev/null
procmail: "|/usr/local/bin/procmail"
....
+
The mailbox name on the left side of the colon is expanded to the target(s) on the right. The first entry expands the `root` mailbox to the `localuser` mailbox, which is then looked up in the [.filename]#/etc/mail/aliases# database. If no match is found, the message is delivered to `localuser`. The second entry shows a mail list. Mail to `ftp-bugs` is expanded to the three local mailboxes `joe`, `eric`, and `paul`. A remote mailbox could be specified as _user@example.com_. The third entry shows how to write mail to a file, in this case [.filename]#/dev/null#. The last entry demonstrates how to send mail to a program, [.filename]#/usr/local/bin/procmail#, through a UNIX(R) pipe. Refer to man:aliases[5] for more information about the format of this file.
+
Whenever this file is updated, run `newaliases` to update and initialize the aliases database.
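+
For example:
+
[source,shell]
....
# newaliases
....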
[.filename]#/etc/mail/sendmail.cf#::
This is the master configuration file for Sendmail. It controls the overall behavior of Sendmail, including everything from rewriting email addresses to printing rejection messages to remote mail servers. Accordingly, this configuration file is quite complex. Fortunately, this file rarely needs to be changed for standard mail servers.
+
The master Sendmail configuration file can be built from man:m4[1] macros that define the features and behavior of Sendmail. Refer to [.filename]#/usr/src/contrib/sendmail/cf/README# for some of the details.
+
Whenever changes to this file are made, Sendmail needs to be restarted for the changes to take effect.
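+
On FreeBSD, the [.filename]#Makefile# in [.filename]#/etc/mail# can be used to regenerate the [.filename]#.cf# file and restart Sendmail. A typical sequence, assuming a local [.filename]#.mc# file has already been prepared in that directory, is:
+
[source,shell]
....
# cd /etc/mail
# make
# make install restart
....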
[.filename]#/etc/mail/virtusertable#::
This database file maps mail addresses for virtual domains and users to real mailboxes. These mailboxes can be local, remote, aliases defined in [.filename]#/etc/mail/aliases#, or files. This allows multiple virtual domains to be hosted on one machine.
+
FreeBSD provides a sample configuration file in [.filename]#/etc/mail/virtusertable.sample# to further demonstrate its format. The following example demonstrates how to create custom entries using that format:
+
[.programlisting]
....
root@example.com root
postmaster@example.com postmaster@noc.example.net
@example.com joe
....
+
This file is processed in a first match order. When an email address matches the address on the left, it is mapped to the local mailbox listed on the right. The format of the first entry in this example maps a specific email address to a local mailbox, whereas the format of the second entry maps a specific email address to a remote mailbox. Finally, any email address from `example.com` which has not matched any of the previous entries will match the last mapping and be sent to the local mailbox `joe`. When creating custom entries, use this format and add them to [.filename]#/etc/mail/virtusertable#. Whenever this file is edited, update its database and restart Sendmail:
+
[source,shell]
....
# makemap hash /etc/mail/virtusertable < /etc/mail/virtusertable
# service sendmail restart
....
[.filename]#/etc/mail/relay-domains#::
In a default FreeBSD installation, Sendmail is configured to only send mail from the host it is running on. For example, if a POP server is available, users will be able to check mail from remote locations but they will not be able to send outgoing emails from outside locations. Typically, a few moments after the attempt, an email will be sent from `MAILER-DAEMON` with a `5.7 Relaying Denied` message.
+
The most straightforward solution is to add the ISP's FQDN to [.filename]#/etc/mail/relay-domains#. If multiple addresses are needed, add them one per line:
+
[.programlisting]
....
your.isp.example.com
other.isp.example.net
users-isp.example.org
www.example.org
....
+
After creating or editing this file, restart Sendmail with `service sendmail restart`.
+
Now any mail sent through the system by any host in this list, provided the user has an account on the system, will succeed. This allows users to send mail from the system remotely without opening the system up to relaying SPAM from the Internet.
[[mail-changingmta]]
== Changing the Mail Transfer Agent
FreeBSD comes with Sendmail already installed as the MTA which is in charge of outgoing and incoming mail. However, the system administrator can change the system's MTA. A wide choice of alternative MTAs is available from the `mail` category of the FreeBSD Ports Collection.
Once a new MTA is installed, configure and test the new software before replacing Sendmail. Refer to the documentation of the new MTA for information on how to configure the software.
Once the new MTA is working, use the instructions in this section to disable Sendmail and configure FreeBSD to use the replacement MTA.
[[mail-disable-sendmail]]
=== Disable Sendmail
[WARNING]
====
If Sendmail's outgoing mail service is disabled, it is important that it is replaced with an alternative mail delivery system. Otherwise, system functions such as man:periodic[8] will be unable to deliver their results by email. Many parts of the system expect a functional MTA. If applications continue to use Sendmail's binaries to try to send email after they are disabled, mail could go into an inactive Sendmail queue and never be delivered.
====
In order to completely disable Sendmail, add or edit the following lines in [.filename]#/etc/rc.conf#:
[.programlisting]
....
sendmail_enable="NO"
sendmail_submit_enable="NO"
sendmail_outbound_enable="NO"
sendmail_msp_queue_enable="NO"
....
To only disable Sendmail's incoming mail service, use only this entry in [.filename]#/etc/rc.conf#:
[.programlisting]
....
sendmail_enable="NO"
....
More information on Sendmail's startup options is available in man:rc.sendmail[8].
=== Replace the Default MTA
When a new MTA is installed using the Ports Collection, its startup script is also installed and startup instructions are mentioned in its package message. Before starting the new MTA, stop the running Sendmail processes. This example stops all of these services, then starts the Postfix service:
[source,shell]
....
# service sendmail stop
# service postfix start
....
To start the replacement MTA at system boot, add its configuration line to [.filename]#/etc/rc.conf#. This entry enables the Postfix MTA:
[.programlisting]
....
postfix_enable="YES"
....
Some extra configuration is needed as Sendmail is so ubiquitous that some software assumes it is already installed and configured. Check [.filename]#/etc/periodic.conf# and make sure that these values are set to `NO`. If this file does not exist, create it with these entries:
[.programlisting]
....
daily_clean_hoststat_enable="NO"
daily_status_mail_rejects_enable="NO"
daily_status_include_submit_mailq="NO"
daily_submit_queuerun="NO"
....
Some alternative MTAs provide their own compatible implementations of the Sendmail command-line interface in order to facilitate using them as drop-in replacements for Sendmail. However, some MUAs may try to execute standard Sendmail binaries instead of the new MTA's binaries. FreeBSD uses [.filename]#/etc/mail/mailer.conf# to map the expected Sendmail binaries to the location of the new binaries. More information about this mapping can be found in man:mailwrapper[8].
The default [.filename]#/etc/mail/mailer.conf# looks like this:
[.programlisting]
....
# $FreeBSD$
#
# Execute the "real" sendmail program, named /usr/libexec/sendmail/sendmail
#
sendmail /usr/libexec/sendmail/sendmail
send-mail /usr/libexec/sendmail/sendmail
mailq /usr/libexec/sendmail/sendmail
newaliases /usr/libexec/sendmail/sendmail
hoststat /usr/libexec/sendmail/sendmail
purgestat /usr/libexec/sendmail/sendmail
....
When any of the commands listed on the left are run, the system actually executes the associated command shown on the right. This system makes it easy to change what binaries are executed when these default binaries are invoked.
Some MTAs, when installed using the Ports Collection, will prompt to update this file for the new binaries. For example, Postfix will update the file like this:
[.programlisting]
....
#
# Execute the Postfix sendmail program, named /usr/local/sbin/sendmail
#
sendmail /usr/local/sbin/sendmail
send-mail /usr/local/sbin/sendmail
mailq /usr/local/sbin/sendmail
newaliases /usr/local/sbin/sendmail
....
If the installation of the MTA does not automatically update [.filename]#/etc/mail/mailer.conf#, edit this file in a text editor so that it points to the new binaries. This example points to the binaries installed by package:mail/ssmtp[]:
[.programlisting]
....
sendmail /usr/local/sbin/ssmtp
send-mail /usr/local/sbin/ssmtp
mailq /usr/local/sbin/ssmtp
newaliases /usr/local/sbin/ssmtp
hoststat /usr/bin/true
purgestat /usr/bin/true
....
Once everything is configured, it is recommended to reboot the system. Rebooting provides the opportunity to ensure that the system is correctly configured to start the new MTA automatically on boot.
[[mail-trouble]]
== Troubleshooting
=== Why do I have to use the FQDN for hosts on my site?
The host may actually be in a different domain. For example, in order for a host in `foo.bar.edu` to reach a host called `mumble` in the `bar.edu` domain, refer to it by its Fully-Qualified Domain Name (FQDN), `mumble.bar.edu`, instead of just `mumble`.
This is because the version of BIND which ships with FreeBSD no longer provides default abbreviations for non-FQDNs other than the local domain. An unqualified host such as `mumble` must either be found as `mumble.foo.bar.edu`, or it will be searched for in the root domain.
In older versions of BIND, the search continued across `mumble.bar.edu`, and `mumble.edu`. RFC 1535 details why this is considered bad practice or even a security hole.
As a good workaround, place the line:
[.programlisting]
....
search foo.bar.edu bar.edu
....
instead of the previous:
[.programlisting]
....
domain foo.bar.edu
....
into [.filename]#/etc/resolv.conf#. However, make sure that the search order does not go beyond the "boundary between local and public administration", as RFC 1535 calls it.
=== How can I run a mail server on a dial-up PPP host?
Connect to a FreeBSD mail gateway on the LAN. The PPP connection is non-dedicated.
One way to do this is to get a full-time Internet server to provide secondary MX services for the domain. In this example, the domain is `example.com` and the ISP has configured `example.net` to provide secondary MX services to the domain:
[.programlisting]
....
example.com. MX 10 example.com.
MX 20 example.net.
....
Only one host should be specified as the final recipient. For Sendmail, add `Cw example.com` in [.filename]#/etc/mail/sendmail.cf# on `example.com`.
When the sending MTA attempts to deliver mail, it will try to connect to the system, `example.com`, over the PPP link. This will time out if the destination is offline. The MTA will automatically deliver it to the secondary MX site at the Internet Service Provider (ISP), `example.net`. The secondary MX site will periodically try to connect to the primary MX host, `example.com`.
Use something like this as a login script:
[.programlisting]
....
#!/bin/sh
# Put me in /usr/local/bin/pppmyisp
( sleep 60 ; /usr/sbin/sendmail -q ) &
/usr/sbin/ppp -direct pppmyisp
....
When creating a separate login script for users, instead use `sendmail -qRexample.com` in the script above. This will force all mail in the queue for `example.com` to be processed immediately.
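A per-user variant of the script above might then look like this sketch; the script name is only illustrative:

[.programlisting]
....
#!/bin/sh
# Put me in /usr/local/bin/pppmycustomer
( sleep 60 ; /usr/sbin/sendmail -qRexample.com ) &
/usr/sbin/ppp -direct pppmyisp
....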
A further refinement of the situation can be seen from this example from the {freebsd-isp}:
[.programlisting]
....
> we provide the secondary MX for a customer. The customer connects to
> our services several times a day automatically to get the mails to
> his primary MX (We do not call his site when a mail for his domains
> arrived). Our sendmail sends the mailqueue every 30 minutes. At the
> moment he has to stay 30 minutes online to be sure that all mail is
> gone to the primary MX.
>
> Is there a command that would initiate sendmail to send all the mails
> now? The user has not root-privileges on our machine of course.
In the privacy flags section of sendmail.cf, there is a
definition Opgoaway,restrictqrun
Remove restrictqrun to allow non-root users to start the queue processing.
You might also like to rearrange the MXs. We are the 1st MX for our
customers like this, and we have defined:
# If we are the best MX for a host, try directly instead of generating
# local config error.
OwTrue
That way a remote site will deliver straight to you, without trying
the customer connection. You then send to your customer. Only works for
hosts, so you need to get your customer to name their mail
machine customer.com as well as
hostname.customer.com in the DNS. Just put an A record in
the DNS for customer.com.
....
[[mail-advanced]]
== Advanced Topics
This section covers more involved topics such as mail configuration and setting up mail for an entire domain.
[[mail-config]]
=== Basic Configuration
Out of the box, one can send email to external hosts as long as [.filename]#/etc/resolv.conf# is configured or the network has access to a configured DNS server. To have email delivered to the MTA on the FreeBSD host, do one of the following:
* Run a DNS server for the domain.
* Get mail delivered directly to the FQDN for the machine.
In order to have mail delivered directly to a host, it must have a permanent static IP address, not a dynamic IP address. If the system is behind a firewall, it must be configured to allow SMTP traffic. To receive mail directly at a host, one of these two must be configured:
* Make sure that the lowest-numbered MX record in DNS points to the host's static IP address.
* Make sure there is no MX entry in the DNS for the host.
Either of the above will allow mail to be received directly at the host.
Try this:
[source,shell]
....
# hostname
example.FreeBSD.org
# host example.FreeBSD.org
example.FreeBSD.org has address 204.216.27.XX
....
In this example, mail sent directly to mailto:yourlogin@example.FreeBSD.org[yourlogin@example.FreeBSD.org] should work without problems, assuming Sendmail is running correctly on `example.FreeBSD.org`.
For this example:
[source,shell]
....
# host example.FreeBSD.org
example.FreeBSD.org has address 204.216.27.XX
example.FreeBSD.org mail is handled (pri=10) by nevdull.FreeBSD.org
....
All mail sent to `example.FreeBSD.org` will be collected on `nevdull` under the same username instead of being sent directly to your host.
The above information is handled by the DNS server. The DNS record that carries mail routing information is the MX entry. If no MX record exists, mail will be delivered directly to the host by way of its IP address.
The MX entry for `freefall.FreeBSD.org` at one time looked like this:
[.programlisting]
....
freefall MX 30 mail.crl.net
freefall MX 40 agora.rdrop.com
freefall MX 10 freefall.FreeBSD.org
freefall MX 20 who.cdrom.com
....
`freefall` had many MX entries. The host with the lowest MX number receives mail directly, if it is available. If it is not accessible for some reason, the host with the next lowest priority (the next higher MX number) will accept messages temporarily, and pass them along when the preferred host becomes available.
Alternate MX sites should have separate Internet connections in order to be most useful. Your ISP can provide this service.
[[mail-domain]]
=== Mail for a Domain
When configuring an MTA for a network, any mail sent to hosts in its domain should be diverted to the MTA so that users can receive their mail on the master mail server.
To make life easiest, a user account with the same _username_ should exist on both the MTA and the system with the MUA. Use man:adduser[8] to create the user accounts.
The MTA must be the designated mail exchanger for each workstation on the network. This is done in the DNS configuration with an MX record:
[.programlisting]
....
example.FreeBSD.org A 204.216.27.XX ; Workstation
MX 10 nevdull.FreeBSD.org ; Mailhost
....
This will redirect mail for the workstation to the MTA no matter where the A record points. The mail is sent to the MX host.
This must be configured on a DNS server. If the network does not run its own DNS server, talk to the ISP or DNS provider.
The following is an example of virtual email hosting. Consider a customer with the domain `customer1.org`, where all the mail for `customer1.org` should be sent to `mail.myhost.com`. The DNS entry should look like this:
[.programlisting]
....
customer1.org MX 10 mail.myhost.com
....
An `A` record is _not_ needed for `customer1.org` in order to handle only email for that domain. However, running `ping` against `customer1.org` will not work unless an `A` record exists for it.
Tell the MTA which domains and/or hostnames it should accept mail for. Either of the following will work for Sendmail:
* Add the hosts to [.filename]#/etc/mail/local-host-names# when using `FEATURE(use_cw_file)`, as in the sketch after this list.
* Add a `Cwyour.host.com` line to [.filename]#/etc/mail/sendmail.cf#.
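For example, a minimal sketch of [.filename]#/etc/mail/local-host-names# listing the hosts and domains to accept mail for (the names shown are illustrative):
[.programlisting]
....
example.FreeBSD.org
customer1.org
mail.myhost.com
....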
[[outgoing-only]]
== Setting Up to Send Only
There are many instances where one may only want to send mail through a relay. Some examples are:
* The computer is a desktop machine that needs to use programs such as man:mail[1], using the ISP's mail relay.
* The computer is a server that does not handle mail locally, but needs to pass off all mail to a relay for processing.
While any MTA is capable of filling this particular niche, it can be difficult to properly configure a full-featured MTA just to handle offloading mail. Programs such as Sendmail and Postfix are overkill for this use.
Additionally, a typical Internet access service agreement may forbid one from running a "mail server".
The easiest way to fulfill those needs is to install the package:mail/ssmtp[] port:
[source,shell]
....
# cd /usr/ports/mail/ssmtp
# make install replace clean
....
Once installed, package:mail/ssmtp[] can be configured with [.filename]#/usr/local/etc/ssmtp/ssmtp.conf#:
[.programlisting]
....
root=yourrealemail@example.com
mailhub=mail.example.com
rewriteDomain=example.com
hostname=_HOSTNAME_
....
Use the real email address for `root`. Enter the ISP's outgoing mail relay in place of `mail.example.com`. Some ISPs call this the "outgoing mail server" or "SMTP server".
Make sure to disable Sendmail, including the outgoing mail service. See <<mail-disable-sendmail>> for details.
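For reference, fully disabling Sendmail is typically done with entries like these in [.filename]#/etc/rc.conf#; see the linked section for the authoritative list:
[.programlisting]
....
sendmail_enable="NO"
sendmail_submit_enable="NO"
sendmail_outbound_enable="NO"
sendmail_msp_queue_enable="NO"
....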
package:mail/ssmtp[] has some other options available. Refer to the examples in [.filename]#/usr/local/etc/ssmtp# or the manual page of ssmtp for more information.
Setting up ssmtp in this manner allows any software on the computer that needs to send mail to function properly, while not violating the ISP's usage policy or allowing the computer to be hijacked for spamming.
[[SMTP-dialup]]
== Using Mail with a Dialup Connection
When using a static IP address, one should not need to adjust the default configuration. Set the hostname to the assigned Internet name and Sendmail will do the rest.
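For example, the hostname is normally set in [.filename]#/etc/rc.conf#; the name shown here is illustrative:
[.programlisting]
....
hostname="example.FreeBSD.org"
....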
When using a dynamically assigned IP address and a dialup PPP connection to the Internet, one usually has a mailbox on the ISP's mail server. In this example, the ISP's domain is `example.net`, the user name is `user`, the hostname is `bsd.home`, and the ISP has allowed `relay.example.net` as a mail relay.
In order to retrieve mail from the ISP's mailbox, install a retrieval agent from the Ports Collection. package:mail/fetchmail[] is a good choice as it supports many different protocols. Usually, the ISP will provide POP. When using user PPP, email can be automatically fetched when an Internet connection is established with the following entry in [.filename]#/etc/ppp/ppp.linkup#:
[.programlisting]
....
MYADDR:
!bg su user -c fetchmail
....
When using Sendmail to deliver mail to non-local accounts, configure Sendmail to process the mail queue as soon as the Internet connection is established. To do this, add this line after the above `fetchmail` entry in [.filename]#/etc/ppp/ppp.linkup#:
[.programlisting]
....
!bg su user -c "sendmail -q"
....
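Putting both entries together, the relevant part of [.filename]#/etc/ppp/ppp.linkup# would look like this (in man:ppp[8] configuration files, command lines under a label are indented):
[.programlisting]
....
MYADDR:
 !bg su user -c fetchmail
 !bg su user -c "sendmail -q"
....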
In this example, there is an account for `user` on `bsd.home`. In the home directory of `user` on `bsd.home`, create a [.filename]#.fetchmailrc# which contains this line:
[.programlisting]
....
poll example.net protocol pop3 fetchall pass MySecret
....
This file should not be readable by anyone except `user` as it contains the password `MySecret`.
In order to send mail with the correct `from:` header, configure Sendmail to use mailto:user@example.net[user@example.net] rather than mailto:user@bsd.home[user@bsd.home] and to send all mail via `relay.example.net`, allowing quicker mail transmission.
The following [.filename]#.mc# should suffice:
[.programlisting]
....
VERSIONID(`bsd.home.mc version 1.0')
OSTYPE(bsd4.4)dnl
FEATURE(nouucp)dnl
MAILER(local)dnl
MAILER(smtp)dnl
Cwlocalhost
Cwbsd.home
MASQUERADE_AS(`example.net')dnl
FEATURE(allmasquerade)dnl
FEATURE(masquerade_envelope)dnl
FEATURE(nocanonify)dnl
FEATURE(nodns)dnl
define(`SMART_HOST', `relay.example.net')
Dmbsd.home
define(`confDOMAIN_NAME',`bsd.home')dnl
define(`confDELIVERY_MODE',`deferred')dnl
....
Refer to the previous section for details of how to convert this file into the [.filename]#sendmail.cf# format. Do not forget to restart Sendmail after updating [.filename]#sendmail.cf#.
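A minimal sketch of that conversion, assuming the [.filename]#.mc# has been placed in [.filename]#/etc/mail# and using the targets provided by [.filename]#/etc/mail/Makefile#:
[source,shell]
....
# cd /etc/mail
# make
# make install restart
....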
[[SMTP-Auth]]
== SMTP Authentication
Configuring SMTP authentication on the MTA provides a number of benefits. SMTP authentication adds a layer of security to Sendmail, and provides mobile users who switch hosts the ability to use the same MTA without the need to reconfigure their mail client's settings each time.
[.procedure]
. Install package:security/cyrus-sasl2[] from the Ports Collection. This port supports a number of compile-time options. For the SMTP authentication method demonstrated in this example, make sure that `LOGIN` is not disabled.
. After installing package:security/cyrus-sasl2[], edit [.filename]#/usr/local/lib/sasl2/Sendmail.conf#, or create it if it does not exist, and add the following line:
+
[.programlisting]
....
pwcheck_method: saslauthd
....
. Next, install package:security/cyrus-sasl2-saslauthd[] and add the following line to [.filename]#/etc/rc.conf#:
+
[.programlisting]
....
saslauthd_enable="YES"
....
+
Finally, start the saslauthd daemon:
+
[source,shell]
....
# service saslauthd start
....
+
This daemon serves as a broker for Sendmail to authenticate against the FreeBSD man:passwd[5] database. This saves the trouble of creating a new set of usernames and passwords for each user that needs to use SMTP authentication, and keeps the login and mail password the same.
. Next, edit [.filename]#/etc/make.conf# and add the following lines:
+
[.programlisting]
....
SENDMAIL_CFLAGS=-I/usr/local/include/sasl -DSASL
SENDMAIL_LDADD=/usr/local/lib/libsasl2.so
....
+
These lines provide Sendmail the proper configuration options for linking to package:cyrus-sasl2[] at compile time. Make sure that package:cyrus-sasl2[] has been installed before recompiling Sendmail.
. Recompile Sendmail by executing the following commands:
+
[source,shell]
....
# cd /usr/src/lib/libsmutil
# make cleandir && make obj && make
# cd /usr/src/lib/libsm
# make cleandir && make obj && make
# cd /usr/src/usr.sbin/sendmail
# make cleandir && make obj && make && make install
....
+
This compile should not have any problems if [.filename]#/usr/src# has not changed extensively and the shared libraries it needs are available.
. After Sendmail has been compiled and reinstalled, edit [.filename]#/etc/mail/freebsd.mc# or the local [.filename]#.mc#. Many administrators choose to use the output from man:hostname[1] as the name of [.filename]#.mc# for uniqueness. Add these lines:
+
[.programlisting]
....
dnl set SASL options
TRUST_AUTH_MECH(`GSSAPI DIGEST-MD5 CRAM-MD5 LOGIN')dnl
define(`confAUTH_MECHANISMS', `GSSAPI DIGEST-MD5 CRAM-MD5 LOGIN')dnl
....
+
These options configure the different methods available to Sendmail for authenticating users. To use a method other than pwcheck, refer to the Sendmail documentation.
. Finally, run man:make[1] while in [.filename]#/etc/mail#. That will process the new [.filename]#.mc# and create a [.filename]#.cf# named either [.filename]#freebsd.cf# or the name used for the local [.filename]#.mc#. Then, run `make install restart`, which will copy the file to [.filename]#sendmail.cf#, and properly restart Sendmail. For more information about this process, refer to [.filename]#/etc/mail/Makefile#.
To test the configuration, use an MUA to send a test message. For further investigation, set the `LogLevel` of Sendmail to `13` and watch [.filename]#/var/log/maillog# for any errors.
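For example, the log level can be raised in the local [.filename]#.mc# (a sketch; rebuild [.filename]#sendmail.cf# and restart Sendmail afterwards):
[.programlisting]
....
define(`confLOG_LEVEL', `13')dnl
....
Then watch the log with `tail -f /var/log/maillog` while sending a test message.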
For more information, refer to http://www.sendmail.org/~ca/email/auth.html[SMTP authentication].
[[mail-agents]]
== Mail User Agents
An MUA is an application that is used to send and receive email. As email "evolves" and becomes more complex, MUAs are becoming increasingly powerful and provide users increased functionality and flexibility. The `mail` category of the FreeBSD Ports Collection contains numerous MUAs. These include graphical email clients such as Evolution or Balsa and console-based clients such as mutt or alpine.
[[mail-command]]
=== `mail`
man:mail[1] is the default MUA installed with FreeBSD. It is a console-based MUA that offers the basic functionality required to send and receive text-based email. It provides limited attachment support and can only access local mailboxes.
Although `mail` does not natively support interaction with POP or IMAP servers, these mailboxes may be downloaded to a local [.filename]#mbox# using an application such as fetchmail.
In order to send and receive email, run `mail`:
[source,shell]
....
% mail
....
The contents of the user's mailbox in [.filename]#/var/mail# are automatically read by `mail`. Should the mailbox be empty, the utility exits with a message indicating that no mail could be found. If mail exists, the application interface starts, and a list of messages will be displayed. Messages are automatically numbered, as can be seen in the following example:
[source,shell]
....
Mail version 8.1 6/6/93. Type ? for help.
"/var/mail/marcs": 3 messages 3 new
>N 1 root@localhost Mon Mar 8 14:05 14/510 "test"
N 2 root@localhost Mon Mar 8 14:05 14/509 "user account"
N 3 root@localhost Mon Mar 8 14:05 14/509 "sample"
....
Messages can now be read by typing kbd:[t] followed by the message number. This example reads the first email:
[source,shell]
....
& t 1
Message 1:
From root@localhost Mon Mar 8 14:05:52 2004
X-Original-To: marcs@localhost
Delivered-To: marcs@localhost
To: marcs@localhost
Subject: test
Date: Mon, 8 Mar 2004 14:05:52 +0200 (SAST)
From: root@localhost (Charlie Root)
This is a test message, please reply if you receive it.
....
As seen in this example, the message will be displayed with full headers. To display the list of messages again, press kbd:[h].
If the email requires a reply, press either the kbd:[R] or kbd:[r] key. kbd:[R] instructs `mail` to reply only to the sender of the email, while kbd:[r] replies to the sender as well as all other recipients of the message. These commands can be suffixed with the mail number of the message to reply to. After typing the response, the end of the message should be marked by a single kbd:[.] on its own line. An example can be seen below:
[source,shell]
....
& R 1
To: root@localhost
Subject: Re: test
Thank you, I did get your email.
.
EOT
....
In order to send a new email, press kbd:[m], followed by the recipient email address. Multiple recipients may be specified by separating each address with the kbd:[,] delimiter. The subject of the message may then be entered, followed by the message contents. The end of the message should be specified by putting a single kbd:[.] on its own line.
[source,shell]
....
& mail root@localhost
Subject: I mastered mail
Now I can send and receive email using mail ... :)
.
EOT
....
While using `mail`, press kbd:[?] to display help at any time. Refer to man:mail[1] for more help on how to use `mail`.
[NOTE]
====
man:mail[1] was not designed to handle attachments and thus deals with them poorly. Newer MUAs handle attachments in a more intelligent way. Users who prefer to use `mail` may find the package:converters/mpack[] port to be of considerable use.
====
[[mutt-command]]
=== mutt
mutt is a powerful MUA, with many features, including:
* The ability to thread messages.
* PGP support for digital signing and encryption of email.
* MIME support.
* Maildir support.
* Highly customizable.
Refer to http://www.mutt.org[http://www.mutt.org] for more information on mutt.
mutt may be installed using the package:mail/mutt[] port. After the port has been installed, mutt can be started by issuing the following command:
[source,shell]
....
% mutt
....
mutt will automatically read and display the contents of the user mailbox in [.filename]#/var/mail#. If no mail is found, mutt will wait for commands from the user. The example below shows mutt displaying a list of messages:
image::mutt1.png[]
To read an email, select it using the cursor keys and press kbd:[Enter]. An example of mutt displaying email can be seen below:
image::mutt2.png[]
Similar to man:mail[1], mutt can be used to reply only to the sender of the message as well as to all recipients. To reply only to the sender of the email, press kbd:[r]. To send a group reply to the original sender as well as all the message recipients, press kbd:[g].
[NOTE]
====
By default, mutt uses the man:vi[1] editor for creating and replying to emails. Each user can customize this by creating or editing the [.filename]#.muttrc# in their home directory and setting the `editor` variable or by setting the `EDITOR` environment variable. Refer to http://www.mutt.org/[http://www.mutt.org/] for more information about configuring mutt.
====
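For example, a minimal sketch of a [.filename]#.muttrc# entry selecting a different editor (the editor path shown is illustrative):
[.programlisting]
....
set editor="/usr/local/bin/nano"
....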
To compose a new mail message, press kbd:[m]. After a valid subject has been given, mutt will start man:vi[1] so the email can be written. Once the contents of the email are complete, save and quit from `vi`. mutt will resume, displaying a summary screen of the mail that is to be delivered. In order to send the mail, press kbd:[y]. An example of the summary screen can be seen below:
image::mutt3.png[]
mutt contains extensive help which can be accessed from most of the menus by pressing kbd:[?]. The top line also displays the keyboard shortcuts where appropriate.
[[alpine-command]]
=== alpine
alpine is aimed at beginners, but also includes some advanced features.
[WARNING]
====
alpine has had several remote vulnerabilities discovered in the past, which allowed remote attackers to execute arbitrary code as users on the local system, by the action of sending a specially-prepared email. While _known_ problems have been fixed, alpine code is written in an insecure style and the FreeBSD Security Officer believes there are likely to be other undiscovered vulnerabilities. Users install alpine at their own risk.
====
The current version of alpine may be installed using the package:mail/alpine[] port. Once the port has been installed, alpine can be started by issuing the following command:
[source,shell]
....
% alpine
....
The first time alpine runs, it displays a greeting page with a brief introduction, as well as a request from the alpine development team to send an anonymous email message allowing them to judge how many users are using their client. To send this anonymous message, press kbd:[Enter]. Alternatively, press kbd:[E] to exit the greeting without sending an anonymous message. An example of the greeting page is shown below:
image::pine1.png[]
The main menu is then presented, which can be navigated using the cursor keys. This main menu provides shortcuts for composing new mail, browsing mail directories, and administering address book entries. Below the main menu, relevant keyboard shortcuts to perform functions specific to the task at hand are shown.
The default directory opened by alpine is [.filename]#inbox#. To view the message index, press kbd:[I], or select the [.guimenuitem]#MESSAGE INDEX# option shown below:
image::pine2.png[]
The message index shows messages in the current directory and can be navigated by using the cursor keys. Highlighted messages can be read by pressing kbd:[Enter].
image::pine3.png[]
In the screenshot below, a sample message is displayed by alpine. Contextual keyboard shortcuts are displayed at the bottom of the screen. An example of one of these shortcuts is kbd:[r], which tells the MUA to reply to the current message being displayed.
image::pine4.png[]
Replying to an email in alpine is done using the pico editor, which is installed by default with alpine. pico makes it easy to navigate the message and is easier for novice users to use than man:vi[1] or man:mail[1]. Once the reply is complete, the message can be sent by pressing kbd:[Ctrl+X]. alpine will ask for confirmation before sending the message.
image::pine5.png[]
alpine can be customized using the [.guimenuitem]#SETUP# option from the main menu. Consult http://www.washington.edu/alpine/[http://www.washington.edu/alpine/] for more information.
[[mail-fetchmail]]
== Using fetchmail
fetchmail is a full-featured IMAP and POP client. It allows users to automatically download mail from remote IMAP and POP servers and save it into local mailboxes where it can be accessed more easily. fetchmail can be installed using the package:mail/fetchmail[] port, and offers various features, including:
* Support for the POP3, APOP, KPOP, IMAP, ETRN and ODMR protocols.
* Ability to forward mail using SMTP, which allows filtering, forwarding, and aliasing to function normally.
* May be run in daemon mode to check periodically for new messages.
* Can retrieve multiple mailboxes and forward them, based on configuration, to different local users.
This section explains some of the basic features of fetchmail. This utility requires a [.filename]#.fetchmailrc# configuration in the user's home directory in order to run correctly. This file includes server information as well as login credentials. Due to the sensitive nature of the contents of this file, it is advisable to make it readable only by the user, with the following command:
[source,shell]
....
% chmod 600 .fetchmailrc
....
The following [.filename]#.fetchmailrc# serves as an example for downloading a single user mailbox using POP. It tells fetchmail to connect to `example.com` using a username of `joesoap` and a password of `XXX`. This example assumes that the user `joesoap` exists on the local system.
[.programlisting]
....
poll example.com protocol pop3 username "joesoap" password "XXX"
....
The next example connects to multiple POP and IMAP servers and redirects to different local usernames where applicable:
[.programlisting]
....
poll example.com proto pop3:
user "joesoap", with password "XXX", is "jsoap" here;
user "andrea", with password "XXXX";
poll example2.net proto imap:
user "john", with password "XXXXX", is "myth" here;
....
fetchmail can be run in daemon mode by running it with `-d`, followed by the interval (in seconds) that fetchmail should poll servers listed in [.filename]#.fetchmailrc#. The following example configures fetchmail to poll every 600 seconds:
[source,shell]
....
% fetchmail -d 600
....
More information on fetchmail can be found at http://www.fetchmail.info/[http://www.fetchmail.info/].
[[mail-procmail]]
== Using procmail
procmail is a powerful application used to filter incoming mail. It allows users to define "rules" which can be matched to incoming mails to perform specific functions or to reroute mail to alternative mailboxes or email addresses. procmail can be installed using the package:mail/procmail[] port. Once installed, it can be directly integrated into most MTAs. Consult the MTA documentation for more information. Alternatively, procmail can be integrated by adding the following line to a [.filename]#.forward# in the home directory of the user:
[.programlisting]
....
"|exec /usr/local/bin/procmail || exit 75"
....
The following section displays some basic procmail rules, as well as brief descriptions of what they do. Rules must be inserted into a [.filename]#.procmailrc#, which must reside in the user's home directory.
The majority of these rules can be found in man:procmailex[5].
To forward all mail from mailto:user@example.com[user@example.com] to an external address of mailto:goodmail@example2.com[goodmail@example2.com]:
[.programlisting]
....
:0
* ^From.*user@example.com
! goodmail@example2.com
....
To forward all mails shorter than 1000 bytes to an external address of mailto:goodmail@example2.com[goodmail@example2.com]:
[.programlisting]
....
:0
* < 1000
! goodmail@example2.com
....
To send all mail sent to mailto:alternate@example.com[alternate@example.com] to a mailbox called [.filename]#alternate#:
[.programlisting]
....
:0
* ^TOalternate@example.com
alternate
....
To send all mail with a subject of "Spam" to [.filename]#/dev/null#:
[.programlisting]
....
:0
* ^Subject:.*Spam
/dev/null
....
A useful recipe that parses incoming `FreeBSD.org` mailing lists and places each list in its own mailbox:
[.programlisting]
....
:0
* ^Sender:.owner-freebsd-\/[^@]+@FreeBSD.ORG
{
LISTNAME=${MATCH}
:0
* LISTNAME??^\/[^@]+
FreeBSD-${MATCH}
}
....
diff --git a/documentation/content/en/books/handbook/mirrors/_index.adoc b/documentation/content/en/books/handbook/mirrors/_index.adoc
index b4187c2139..2f1f895a81 100644
--- a/documentation/content/en/books/handbook/mirrors/_index.adoc
+++ b/documentation/content/en/books/handbook/mirrors/_index.adoc
@@ -1,908 +1,909 @@
---
title: Appendix A. Obtaining FreeBSD
part: Part V. Appendices
prev: books/handbook/partv
next: books/handbook/bibliography
+description: "How to get FreeBSD: CD and DVD sets, FTP sites and how to install and use Git"
---
[appendix]
[[mirrors]]
= Obtaining FreeBSD
:doctype: book
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: A
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
[[mirrors-cdrom]]
== CD and DVD Sets
FreeBSD CD and DVD sets are available from several online retailers:
* FreeBSD Mall, Inc. +
2420 Sand Creek Rd C-1 #347 +
Brentwood, CA +
94513 +
USA +
Phone: +1 925 240-6652 +
Fax: +1 925 674-0821 +
Email: <info@freebsdmall.com> +
WWW: https://www.freebsdmall.com
* Getlinux +
78 Rue de la Croix Rochopt +
Épinay-sous-Sénart +
91860 +
France +
Email: <contact@getlinux.fr> +
WWW: http://www.getlinux.fr/
* Dr. Hinner EDV +
Kochelseestr. 11 +
D-81371 München +
Germany +
Phone: (0177) 428 419 0 +
Email: <infow@hinner.de> +
WWW: http://www.hinner.de/linux/freebsd.html
[[mirrors-ftp]]
== FTP Sites
The official sources for FreeBSD are available via anonymous FTP from a worldwide set of mirror sites. The site link:ftp://ftp.FreeBSD.org/pub/FreeBSD/[ftp://ftp.FreeBSD.org/pub/FreeBSD/] is available via HTTP and FTP. It is made up of many machines operated by the project cluster administrators and sits behind GeoDNS to direct users to the closest available mirror.
Additionally, FreeBSD is available via anonymous FTP from the following mirror sites. When obtaining FreeBSD via anonymous FTP, please try to use a nearby site. The mirror sites listed as "Primary Mirror Sites" typically have the entire FreeBSD archive (all the currently available versions for each of the architectures) but faster download speeds are probably available from a site that is in your country or region. The regional sites carry the most recent versions for the most popular architecture(s) but might not carry the entire FreeBSD archive. All sites provide access via anonymous FTP but some sites also provide access via other methods. The access methods available for each site are provided in parentheses after the hostname.
<<central, {central}>>, <<primary, {mirrors-primary}>>, <<armenia, {mirrors-armenia}>>, <<australia, {mirrors-australia}>>, <<austria, {mirrors-austria}>>, <<brazil, {mirrors-brazil}>>, <<czech-republic, {mirrors-czech}>>, <<denmark, {mirrors-denmark}>>, <<estonia, {mirrors-estonia}>>, <<finland, {mirrors-finland}>>, <<france, {mirrors-france}>>, <<germany, {mirrors-germany}>>, <<greece, {mirrors-greece}>>, <<hong-kong, {mirrors-hongkong}>>, <<ireland, {mirrors-ireland}>>, <<japan, {mirrors-japan}>>, <<korea, {mirrors-korea}>>, <<latvia, {mirrors-latvia}>>, <<lithuania, {mirrors-lithuania}>>, <<netherlands, {mirrors-netherlands}>>, <<new-zealand, {mirrors-new-zealand}>>, <<norway, {mirrors-norway}>>, <<poland, {mirrors-poland}>>, <<russia, {mirrors-russia}>>, <<saudi-arabia, {mirrors-saudi-arabia}>>, <<slovenia, {mirrors-slovenia}>>, <<south-africa, {mirrors-south-africa}>>, <<spain, {mirrors-spain}>>, <<sweden, {mirrors-sweden}>>, <<switzerland, {mirrors-switzerland}>>, <<taiwan, {mirrors-taiwan}>>, <<ukraine, {mirrors-ukraine}>>, <<uk, {mirrors-uk}>>, <<usa, {mirrors-us}>>.
(as of UTC)
[[central]]
*{central}*
{central-ftp} (ftp / ftpv6 / {central-http} / {central-httpv6})
[[primary]]
*{mirrors-primary}*
In case of problems, please contact the hostmaster `<{mirrors-primary-email}>` for this domain.
* {mirrors-primary-ftp1} (ftp)
* {mirrors-primary-ftp2} (ftp)
* {mirrors-primary-ftp3} (ftp)
* {mirrors-primary-ftp4} (ftp / ftpv6 / {mirrors-primary-ftp4-http} / {mirrors-primary-ftp4-httpv6})
* {mirrors-primary-ftp5} (ftp)
* {mirrors-primary-ftp6} (ftp)
* {mirrors-primary-ftp7} (ftp)
* {mirrors-primary-ftp10} (ftp / ftpv6 / {mirrors-primary-ftp10-http} / {mirrors-primary-ftp10-httpv6})
* {mirrors-primary-ftp11} (ftp)
* {mirrors-primary-ftp13} (ftp)
* {mirrors-primary-ftp14} (ftp / {mirrors-primary-ftp14-http})
[[armenia]]
*{mirrors-armenia}*
In case of problems, please contact the hostmaster `<{mirrors-armenia-email}>` for this domain.
* {mirrors-armenia-ftp} (ftp / {mirrors-armenia-ftp-http} / rsync)
[[australia]]
*{mirrors-australia}*
In case of problems, please contact the hostmaster `<{mirrors-australia-email}>` for this domain.
* {mirrors-australia-ftp} (ftp)
* {mirrors-australia-ftp2} (ftp)
* {mirrors-australia-ftp3} (ftp)
[[austria]]
*{mirrors-austria}*
In case of problems, please contact the hostmaster `<{mirrors-austria-email}>` for this domain.
* {mirrors-austria-ftp} (ftp / ftpv6 / {mirrors-austria-ftp-http} / {mirrors-austria-ftp-httpv6})
[[brazil]]
*{mirrors-brazil}*
In case of problems, please contact the hostmaster `<{mirrors-brazil-email}>` for this domain.
* {mirrors-brazil-ftp2} (ftp / {mirrors-brazil-ftp2-http})
* {mirrors-brazil-ftp3} (ftp / rsync)
* {mirrors-brazil-ftp4} (ftp)
[[czech-republic]]
*{mirrors-czech}*
In case of problems, please contact the hostmaster `<{mirrors-czech-email}>` for this domain.
* {mirrors-czech-ftp} (ftp / {mirrors-czech-ftpv6} / {mirrors-czech-ftp-http} / {mirrors-czech-ftp-httpv6} / rsync / rsyncv6)
* {mirrors-czech-ftp2} (ftp / {mirrors-czech-ftp2-http})
[[denmark]]
*{mirrors-denmark}*
In case of problems, please contact the hostmaster `<{mirrors-denmark-email}>` for this domain.
* {mirrors-denmark-ftp} (ftp / ftpv6 / {mirrors-denmark-ftp-http} / {mirrors-denmark-ftp-httpv6})
[[estonia]]
*{mirrors-estonia}*
In case of problems, please contact the hostmaster `<{mirrors-estonia-email}>` for this domain.
* {mirrors-estonia-ftp} (ftp)
[[finland]]
*{mirrors-finland}*
In case of problems, please contact the hostmaster `<{mirrors-finland-email}>` for this domain.
* {mirrors-finland-ftp} (ftp)
[[france]]
*{mirrors-france}*
In case of problems, please contact the hostmaster `<{mirrors-france-email}>` for this domain.
* {mirrors-france-ftp} (ftp)
* {mirrors-france-ftp1} (ftp / {mirrors-france-ftp1-http} / rsync)
* {mirrors-france-ftp3} (ftp)
* {mirrors-france-ftp5} (ftp)
* {mirrors-france-ftp6} (ftp / rsync)
* {mirrors-france-ftp7} (ftp)
* {mirrors-france-ftp8} (ftp)
[[germany]]
*{mirrors-germany}*
In case of problems, please contact the hostmaster `<{mirrors-germany-email}>` for this domain.
* ftp://ftp.de.FreeBSD.org/pub/FreeBSD/ (ftp)
* ftp://ftp1.de.FreeBSD.org/freebsd/ (ftp / http://www1.de.FreeBSD.org/freebsd/ / rsync://rsync3.de.FreeBSD.org/freebsd/)
* ftp://ftp2.de.FreeBSD.org/pub/FreeBSD/ (ftp / http://ftp2.de.FreeBSD.org/pub/FreeBSD/ / rsync)
* ftp://ftp4.de.FreeBSD.org/FreeBSD/ (ftp / http://ftp4.de.FreeBSD.org/pub/FreeBSD/)
* ftp://ftp5.de.FreeBSD.org/pub/FreeBSD/ (ftp)
* ftp://ftp7.de.FreeBSD.org/pub/FreeBSD/ (ftp / http://ftp7.de.FreeBSD.org/pub/FreeBSD/)
* ftp://ftp8.de.FreeBSD.org/pub/FreeBSD/ (ftp)
[[greece]]
*{mirrors-greece}*
In case of problems, please contact the hostmaster `<{mirrors-greece-email}>` for this domain.
* {mirrors-greece-ftp} (ftp)
* {mirrors-greece-ftp2} (ftp)
[[hong-kong]]
*{mirrors-hongkong}*
{mirrors-hongkong-ftp} (ftp)
[[ireland]]
*{mirrors-ireland}*
In case of problems, please contact the hostmaster `<{mirrors-ireland-email}>` for this domain.
* {mirrors-ireland-ftp} (ftp / rsync)
[[japan]]
*{mirrors-japan}*
In case of problems, please contact the hostmaster `<{mirrors-japan-email}>` for this domain.
* {mirrors-japan-ftp} (ftp)
* {mirrors-japan-ftp2} (ftp)
* {mirrors-japan-ftp3} (ftp)
* {mirrors-japan-ftp4} (ftp)
* {mirrors-japan-ftp5} (ftp)
* {mirrors-japan-ftp6} (ftp)
* {mirrors-japan-ftp7} (ftp)
* {mirrors-japan-ftp8} (ftp)
* {mirrors-japan-ftp9} (ftp)
[[korea]]
*{mirrors-korea}*
In case of problems, please contact the hostmaster `<{mirrors-korea-email}>` for this domain.
* {mirrors-korea-ftp} (ftp / rsync)
* {mirrors-korea-ftp2} (ftp / {mirrors-korea-ftp2-http})
[[latvia]]
*{mirrors-latvia}*
In case of problems, please contact the hostmaster `<{mirrors-latvia-email}>` for this domain.
* {mirrors-latvia-ftp} (ftp / {mirrors-latvia-ftp-http})
[[lithuania]]
*{mirrors-lithuania}*
In case of problems, please contact the hostmaster `<{mirrors-lithuania-email}>` for this domain.
* {mirrors-lithuania-ftp} (ftp / {mirrors-lithuania-ftp-http})
[[netherlands]]
*{mirrors-netherlands}*
In case of problems, please contact the hostmaster `<{mirrors-netherlands-email}>` for this domain.
* {mirrors-netherlands-ftp} (ftp / {mirrors-netherlands-ftp-http} / rsync)
* {mirrors-netherlands-ftp2} (ftp)
[[new-zealand]]
*{mirrors-new-zealand}*
* {mirrors-new-zealand-ftp} (ftp / {mirrors-new-zealand-ftp-http})
[[norway]]
*{mirrors-norway}*
In case of problems, please contact the hostmaster `<{mirrors-norway-email}>` for this domain.
* {mirrors-norway-ftp} (ftp / rsync)
[[poland]]
*{mirrors-poland}*
In case of problems, please contact the hostmaster `<{mirrors-poland-email}>` for this domain.
* {mirrors-poland-ftp} (ftp)
* ftp2.pl.FreeBSD.org
[[russia]]
*{mirrors-russia}*
In case of problems, please contact the hostmaster `<{mirrors-russia-email}>` for this domain.
* {mirrors-russia-ftp} (ftp / {mirrors-russia-ftp-http} / rsync)
* {mirrors-russia-ftp2} (ftp / {mirrors-russia-ftp2-http} / rsync)
* {mirrors-russia-ftp4} (ftp)
* {mirrors-russia-ftp5} (ftp / {mirrors-russia-ftp5-http} / rsync)
* {mirrors-russia-ftp6} (ftp)
[[saudi-arabia]]
*{mirrors-saudi-arabia}*
In case of problems, please contact the hostmaster `<{mirrors-saudi-arabia-email}>` for this domain.
* {mirrors-saudi-arabia-ftp} (ftp)
[[slovenia]]
*{mirrors-slovenia}*
In case of problems, please contact the hostmaster `<{mirrors-slovenia-email}>` for this domain.
* {mirrors-slovenia-ftp} (ftp)
[[south-africa]]
*{mirrors-south-africa}*
In case of problems, please contact the hostmaster `<{mirrors-south-africa-email}>` for this domain.
* {mirrors-south-africa-ftp} (ftp)
* {mirrors-south-africa-ftp2} (ftp)
* {mirrors-south-africa-ftp4} (ftp)
[[spain]]
*{mirrors-spain}*
In case of problems, please contact the hostmaster `<{mirrors-spain-email}>` for this domain.
* {mirrors-spain-ftp} (ftp / {mirrors-spain-ftp-http})
* {mirrors-spain-ftp3} (ftp)
[[sweden]]
*{mirrors-sweden}*
In case of problems, please contact the hostmaster `<{mirrors-sweden-email}>` for this domain.
* {mirrors-sweden-ftp} (ftp)
* {mirrors-sweden-ftp2} (ftp / {mirrors-sweden-ftp2-rsync})
* {mirrors-sweden-ftp3} (ftp)
* {mirrors-sweden-ftp4} (ftp / {mirrors-sweden-ftp4v6} / {mirrors-sweden-ftp4-http} / {mirrors-sweden-ftp4-httpv6} / {mirrors-sweden-ftp4-rsync} / {mirrors-sweden-ftp4-rsyncv6})
* {mirrors-sweden-ftp6} (ftp / {mirrors-sweden-ftp6-http})
[[switzerland]]
*{mirrors-switzerland}*
In case of problems, please contact the hostmaster `<{mirrors-switzerland-email}>` for this domain.
* {mirrors-switzerland-ftp} (ftp / {mirrors-switzerland-ftp-http})
[[taiwan]]
*{mirrors-taiwan}*
In case of problems, please contact the hostmaster `<{mirrors-taiwan-email}>` for this domain.
* {mirrors-taiwan-ftp} (ftp / {mirrors-taiwan-ftpv6} / rsync / rsyncv6)
* {mirrors-taiwan-ftp2} (ftp / {mirrors-taiwan-ftp2v6} / {mirrors-taiwan-ftp2-http} / {mirrors-taiwan-ftp2-httpv6} / rsync / rsyncv6)
* {mirrors-taiwan-ftp4} (ftp)
* {mirrors-taiwan-ftp5} (ftp)
* {mirrors-taiwan-ftp6} (ftp / {mirrors-taiwan-ftp6v6} / rsync)
* {mirrors-taiwan-ftp7} (ftp)
* {mirrors-taiwan-ftp8} (ftp)
* {mirrors-taiwan-ftp11} (ftp / {mirrors-taiwan-ftp11-http})
* {mirrors-taiwan-ftp12} (ftp)
* {mirrors-taiwan-ftp13} (ftp)
* {mirrors-taiwan-ftp14} (ftp)
* {mirrors-taiwan-ftp15} (ftp)
[[ukraine]]
*{mirrors-ukraine}*
* {mirrors-ukraine-ftp} (ftp / {mirrors-ukraine-ftp-http})
* {mirrors-ukraine-ftp6} (ftp / {mirrors-ukraine-ftp6-http} / {mirrors-ukraine-ftp6-rsync})
* {mirrors-ukraine-ftp7} (ftp)
[[uk]]
*{mirrors-uk}*
In case of problems, please contact the hostmaster `<{mirrors-uk-email}>` for this domain.
* {mirrors-uk-ftp} (ftp)
* {mirrors-uk-ftp2} (ftp / {mirrors-uk-ftp2-rsync})
* {mirrors-uk-ftp3} (ftp)
* {mirrors-uk-ftp4} (ftp)
* {mirrors-uk-ftp5} (ftp)
[[usa]]
*{mirrors-us}*
In case of problems, please contact the hostmaster `<{mirrors-us-email}>` for this domain.
* {mirrors-us-ftp} (ftp)
* {mirrors-us-ftp2} (ftp)
* {mirrors-us-ftp3} (ftp)
* {mirrors-us-ftp4} (ftp / ftpv6 / {mirrors-us-ftp4-http} / {mirrors-us-ftp4-httpv6})
* {mirrors-us-ftp5} (ftp)
* {mirrors-us-ftp6} (ftp)
* {mirrors-us-ftp8} (ftp)
* {mirrors-us-ftp10} (ftp)
* {mirrors-us-ftp11} (ftp)
* {mirrors-us-ftp13} (ftp / {mirrors-us-ftp13-http} / rsync)
* {mirrors-us-ftp14} (ftp / {mirrors-us-ftp14-http})
* {mirrors-us-ftp15} (ftp)
[[git]]
== Using Git
[[git-intro]]
=== Introduction
As of December 2020, FreeBSD uses git as the primary version control system for storing all of FreeBSD's base source code and documentation.
[NOTE]
====
Git is generally a developer tool.
Users may prefer to use `freebsd-update` (crossref:cutting-edge[updating-upgrading-freebsdupdate,“FreeBSD Update”]) to update the FreeBSD base system, and `portsnap` (crossref:ports[ports-using,“Using the Ports Collection”]) to update the FreeBSD Ports Collection.
====
This section demonstrates how to install Git on a FreeBSD system and use it to create a local copy of a FreeBSD repository.
Additional information on the use of Git is included.
[[git-ssl-certificates]]
=== Root SSL Certificates
FreeBSD systems older than 12._x_ do not have proper root certificates.
Installing package:security/ca_root_nss[] on these systems allows Git to verify the identity of HTTPS repository servers.
The root SSL certificates can be installed from a port:
[source,shell]
....
# cd /usr/ports/security/ca_root_nss
# make install clean
....
or as a package:
[source,shell]
....
# pkg install ca_root_nss
....
[[git-install]]
=== Installation
Git can be installed as a package:
[source,shell]
....
# pkg install git
....
Git can also be installed from the Ports Collection:
[source,shell]
....
# cd /usr/ports/devel/git
# make install clean
....
[[git-usage]]
=== Running Git
To fetch a clean copy of the sources into a local directory, use `git`.
This directory of files is called the _working tree_.
[WARNING]
====
Move or delete an existing destination directory before using `git clone` for the first time.
Cloning over an existing non-git directory will fail.
====
Git uses URLs to designate a repository, taking the form of _protocol://hostname/path_.
The first component of the path is the FreeBSD repository to access.
There are three different repositories, `src` for the FreeBSD system source code, `doc` for documentation, and `ports` for the FreeBSD Ports Collection.
For example, the URL `https://git.FreeBSD.org/src.git` specifies the src repository, accessed over the `https` protocol.
[[git-url-table]]
.FreeBSD Git Repository URL Table
[options="header,footer"]
|=======================================================
|Item | Git URL
| Web-based repository browser src | `https://cgit.freebsd.org/src`
| Read-only src repo via HTTPS | `https://git.freebsd.org/src.git`
| Read-only src repo via anon-ssh | `ssh://anongit@git.freebsd.org/src.git`
| Read/write src repo for committers | `ssh://git@gitrepo.freebsd.org/src.git` (*)
| Web-based repository browser doc | `https://cgit.freebsd.org/doc`
| Read-only doc repo via HTTPS | `https://git.freebsd.org/doc.git`
| Read-only doc repo via anon-ssh | `ssh://anongit@git.freebsd.org/doc.git`
| Read/write doc repo for committers | `ssh://git@gitrepo.freebsd.org/doc.git` (*)
| Web-based repository browser ports | `https://cgit.freebsd.org/ports`
| Read-only ports repo via HTTPS | `https://git.freebsd.org/ports.git`
| Read-only ports repo via anon-ssh | `ssh://anongit@git.freebsd.org/ports.git`
| Read/write ports repo for committers | `ssh://git@gitrepo.freebsd.org/ports.git` (*)
|=======================================================
- (*) `git` is a special user on the repository server that maps your registered FreeBSD.org ssh key to your identity; there is no need to change it.
[WARNING]
====
Sometime after the switch to git is complete, `gitrepo.freebsd.org` will change to simply `repo.freebsd.org`.
====
To get started, clone a copy of the FreeBSD repository:
[source,shell]
....
# git clone -o freebsd [ -b branch ] https://git.FreeBSD.org/repo.git wcdir
....
where:
* _repo_ is one of the Project repositories: `src`, `ports`, or `doc`.
* _branch_ depends on the repository used.
`ports` and `doc` are mostly updated in the `main` branch, while `src` maintains the latest version of -CURRENT under `main` and the respective latest versions of the -STABLE branches under `stable/12` (12._x_) and `stable/13` (13._x_).
* _wcdir_ is the target directory where the contents of the specified branch should be placed.
This is usually [.filename]#/usr/ports# for `ports`, [.filename]#/usr/src# for `src`, and [.filename]#/usr/doc# for `doc`.
* _freebsd_ is the name of the origin to use.
By convention in the FreeBSD documentation, the origin is assumed to be `freebsd`.
This example checks out the `main` branch of the system sources from the FreeBSD repository using the HTTPS protocol, placing the local working copy in [.filename]#/usr/src#.
If [.filename]#/usr/src# is already present but was not created by `git`, remember to rename or delete it before the checkout.
Git will refuse to do anything otherwise.
[source,shell]
....
# git clone -o freebsd https://git.FreeBSD.org/src.git /usr/src
....
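To track a -STABLE branch instead, pass the branch name with `-b`. A sketch using `stable/13`:
[source,shell]
....
# git clone -o freebsd -b stable/13 https://git.FreeBSD.org/src.git /usr/src
....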
Because the initial checkout must download the full branch of the remote repository, it can take a while.
Please be patient.
After the initial checkout, the local working copy can be updated by running:
[source,shell]
....
# cd wcdir
# git pull --rebase
....
To update [.filename]#/usr/src# created in the example above, use:
[source,shell]
....
# cd /usr/src
# git pull --rebase
....
The update is much quicker than a checkout, only transferring files that have changed.
There are also external mirrors maintained by project members; please refer to the <<external-mirrors>> section.
=== SSH related information
* `ssh://${user}@${url}/${repo}.git` can be written as `${user}@${url}:${repo}.git`, i.e., the following two URLs are both valid for passing to `git`:
--
** `ssh://anongit@git.freebsd.org/${repo}.git`
** `anongit@git.freebsd.org:${repo}.git`
As well as the read-write repo:
** `ssh://git@(git)repo.freebsd.org/${repo}.git`
** `git@(git)repo.freebsd.org:${repo}.git`
--
* gitrepo.FreeBSD.org host key fingerprints:
** ECDSA key fingerprint is `SHA256:seWO5D27ySURcx4bknTNKlC1mgai0whP443PAKEvvZA`
** ED25519 key fingerprint is `SHA256:lNR6i4BEOaaUhmDHBA1WJsO7H3KtvjE2r5q4sOxtIWo`
** RSA key fingerprint is `SHA256:f453CUEFXEJAXlKeEHV+ajJfeEfx9MdKQUD7lIscnQI`
* git.FreeBSD.org host key fingerprints:
** ECDSA key fingerprint is `SHA256:/UlirUAsGiitupxmtsn7f9b7zCWd0vCs4Yo/tpVWP9w`
** ED25519 key fingerprint is `SHA256:y1ljKrKMD3lDObRUG3xJ9gXwEIuqnh306tSyFd1tuZE`
** RSA key fingerprint is `SHA256:jBe6FQGoH4HjvrIVM23dcnLZk9kmpdezR/CvQzm7rJM`
These are also published as SSHFP records in DNS.
=== Web-based repository browser
The FreeBSD project currently uses cgit as the web-based repository browser: https://cgit.freebsd.org/.
The URLs of individual repositories are listed in <<git-url-table>>.
=== For Users
Using `git clone` and `git pull` from the official distributed mirrors is recommended.
GeoDNS should direct you to the nearest available mirror.
=== For Developers
This section describes read-write access, which committers use to push commits from developers or contributors.
For read-only access, please refer to the users section above.
==== Daily use
* Clone the repository:
+
[source,shell]
....
% git clone -o freebsd --config remote.freebsd.fetch='+refs/notes/*:refs/notes/*' https://git.freebsd.org/${repo}.git
....
+
Then you should have the official mirrors as your remote:
+
[source,shell]
....
% git remote -v
freebsd https://git.freebsd.org/${repo}.git (fetch)
freebsd https://git.freebsd.org/${repo}.git (push)
....
* Configure the FreeBSD committer data:
+
The commit hook on repo.freebsd.org checks that the "Commit" field matches the
committer's information in FreeBSD.org. The easiest way to get the suggested
config is to run the `/usr/local/bin/gen-gitconfig.sh` script on freefall:
+
[source,shell]
....
% gen-gitconfig.sh
[...]
% git config user.name (your name in gecos)
% git config user.email (your login)@FreeBSD.org
....
* Set the push URL:
+
[source,shell]
....
% git remote set-url --push freebsd git@gitrepo.freebsd.org:${repo}.git
....
+
Then you should have separate fetch and push URLs, which is the most efficient setup:
+
[source,shell]
....
% git remote -v
freebsd https://git.freebsd.org/${repo}.git (fetch)
freebsd git@gitrepo.freebsd.org:${repo}.git (push)
....
+
Again, note that `gitrepo.freebsd.org` will be canonicalized to `repo.freebsd.org` in the future.
* Install commit message template hook:
+
[source,shell]
....
% fetch https://cgit.freebsd.org/src/plain/tools/tools/git/hooks/prepare-commit-msg -o .git/hooks
% chmod 755 .git/hooks/prepare-commit-msg
....
==== "admin" branch
The `access` and `mentors` files are stored in an orphan branch, `internal/admin`, in each repository.
The following example shows how to check out the `internal/admin` branch into a local branch named `admin`:
[source,shell]
....
% git config --add remote.freebsd.fetch '+refs/internal/*:refs/internal/*'
% git fetch
% git checkout -b admin internal/admin
....
Alternatively, you can add a worktree for the `admin` branch:
[source,shell]
....
% git worktree add -b admin ../${repo}-admin internal/admin
....
To browse the `internal/admin` branch on the web:
https://cgit.freebsd.org/${repo}/log/?h=internal/admin
For pushing, either specify the full refspec:
[source,shell]
....
% git push freebsd HEAD:refs/internal/admin
....
Or set `push.default` to `upstream`, which makes `git push` push the current branch back to its upstream by default; this is more suitable for our workflow:
[source,shell]
....
% git config push.default upstream
....
[WARNING]
====
These internal details may change often.
====
[[external-mirrors]]
=== External mirrors
These mirrors are not hosted on FreeBSD.org, but are still maintained by project members.
Users and developers are welcome to pull or browse repositories on these mirrors.
The project workflow with these mirrors is still under discussion.
==== Codeberg
- doc: https://codeberg.org/FreeBSD/freebsd-doc
- ports: https://codeberg.org/FreeBSD/freebsd-ports
- src: https://codeberg.org/FreeBSD/freebsd-src
==== GitHub
- doc: https://github.com/freebsd/freebsd-doc
- ports: https://github.com/freebsd/freebsd-ports
- src: https://github.com/freebsd/freebsd-src
==== GitLab
- doc: https://gitlab.com/FreeBSD/freebsd-doc
- ports: https://gitlab.com/FreeBSD/freebsd-ports
- src: https://gitlab.com/FreeBSD/freebsd-src
=== Mailing lists
General usage and questions about git in the FreeBSD project: https://lists.freebsd.org/mailman/listinfo/freebsd-git[freebsd-git]
Commit messages will be sent to the following mailing lists:
- https://lists.freebsd.org/mailman/listinfo/dev-commits-doc-all[dev-commits-doc-all]: All changes to the doc repository
- https://lists.freebsd.org/mailman/listinfo/dev-commits-ports-all[dev-commits-ports-all]: All changes to the ports repository
- https://lists.freebsd.org/mailman/listinfo/dev-commits-ports-main[dev-commits-ports-main]: All changes to the "main" branch of the ports repository
- https://lists.freebsd.org/mailman/listinfo/dev-commits-ports-branches[dev-commits-ports-branches]: All changes to the quarterly branches of the ports repository
- https://lists.freebsd.org/mailman/listinfo/dev-commits-src-all[dev-commits-src-all]: All changes to the src repository
- https://lists.freebsd.org/mailman/listinfo/dev-commits-src-main[dev-commits-src-main]: All changes to the "main" branch of the src repository (the FreeBSD-CURRENT branch)
- https://lists.freebsd.org/mailman/listinfo/dev-commits-src-branches[dev-commits-src-branches]: All changes to all stable branches of the src repository
For more information, please refer to the "Commit message lists" section of C.2. "Mailing Lists" in the Handbook: https://www.freebsd.org/doc/en/books/handbook/eresources-mail.html
[[svn]]
== Using Subversion
[[svn-intro]]
=== Introduction
As of December 2020, FreeBSD uses git as the primary version control system for storing all of FreeBSD's source code and documentation.
Changes from the git repo on the `stable/11`, `stable/12` and related releng branches are exported to the subversion repository.
This export will continue through the life of these branches.
From July 2012 to March 2021, FreeBSD used Subversion as the only version control system for storing all of FreeBSD's Ports Collection.
As of April 2021, FreeBSD uses git as the only version control system for storing all of FreeBSD's Ports Collection.
[NOTE]
====
Subversion is generally a developer tool.
Users may prefer to use `freebsd-update` (crossref:cutting-edge[updating-upgrading-freebsdupdate,“FreeBSD Update”]) to update the FreeBSD base system, and `portsnap` (crossref:ports[ports-using,“Using the Ports Collection”]) to update the FreeBSD Ports Collection.
After March 2021, subversion use is only for legacy branches (`stable/11` and `stable/12`).
====
This section demonstrates how to install Subversion on a FreeBSD system and use it to create a local copy of a FreeBSD repository. Additional information on the use of Subversion is included.
[[svn-ssl-certificates]]
=== Root SSL Certificates
FreeBSD systems older than 12._x_ do not have proper root certificates.
Those certificates allow Subversion to verify the identity of HTTPS repository servers.
Installation instructions are described in <<git-ssl-certificates>>.
[[svn-svnlite]]
=== Svnlite
A lightweight version of Subversion is already installed on FreeBSD as `svnlite`. The port or package version of Subversion is only required when the Python or Perl API is needed, or when a later version of Subversion is desired.
The only difference from normal Subversion use is that the command name is `svnlite`.
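For example, the checkout described below can be performed with `svnlite` without installing anything extra (a sketch using a legacy branch):
[source,shell]
....
# svnlite checkout https://svn.FreeBSD.org/base/stable/12 /usr/src
....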
[[svn-install]]
=== Installation
If `svnlite` is unavailable or the full version of Subversion is needed, then it must be installed.
Subversion can be installed from the Ports Collection:
[source,shell]
....
# cd /usr/ports/devel/subversion
# make install clean
....
Subversion can also be installed as a package:
[source,shell]
....
# pkg install subversion
....
[[svn-usage]]
=== Running Subversion
To fetch a clean copy of the sources into a local directory, use `svn`. The files in this directory are called a _local working copy_.
[WARNING]
====
Move or delete an existing destination directory before using `checkout` for the first time.
Checkout over an existing non-`svn` directory can cause conflicts between the existing files and those brought in from the repository.
====
Subversion uses URLs to designate a repository, taking the form of _protocol://hostname/path_. The first component of the path is the FreeBSD repository to access. There are three different repositories, `base` for the FreeBSD base system source code, `ports` for the Ports Collection, and `doc` for documentation. For example, the URL `https://svn.FreeBSD.org/base/head/` specifies the main branch of the base repository, using the `https` protocol.
A checkout from a given repository is performed with a command like this:
[source,shell]
....
# svn checkout https://svn.FreeBSD.org/repository/branch lwcdir
....
where:
* _repository_ is one of the Project repositories: `base`, `ports`, or `doc`.
* _branch_ depends on the repository used. `ports` and `doc` are mostly updated in the `head` branch, while `base` maintains the latest version of -CURRENT under `head` and the respective latest versions of the -STABLE branches under `stable/9` (9._x_) and `stable/10` (10._x_).
* _lwcdir_ is the target directory where the contents of the specified branch should be placed. This is usually [.filename]#/usr/ports# for `ports`, [.filename]#/usr/src# for `base`, and [.filename]#/usr/doc# for `doc`.
This example checks out the Source Tree from the FreeBSD repository using the HTTPS protocol, placing the local working copy in [.filename]#/usr/src#. If [.filename]#/usr/src# is already present but was not created by `svn`, remember to rename or delete it before the checkout.
[source,shell]
....
# svn checkout https://svn.FreeBSD.org/base/head /usr/src
....
Because the initial checkout must download the full branch of the remote repository, it can take a while. Please be patient.
After the initial checkout, the local working copy can be updated by running:
[source,shell]
....
# svn update lwcdir
....
To update [.filename]#/usr/src# created in the example above, use:
[source,shell]
....
# svn update /usr/src
....
The update is much quicker than a checkout, only transferring files that have changed.
An alternate way of updating the local working copy after checkout is provided by the [.filename]#Makefile# in the [.filename]#/usr/ports#, [.filename]#/usr/src#, and [.filename]#/usr/doc# directories. Set `SVN_UPDATE` and use the `update` target. For example, to update [.filename]#/usr/src#:
[source,shell]
....
# cd /usr/src
# make update SVN_UPDATE=yes
....
[[svn-mirrors]]
=== Subversion Mirror Sites
The FreeBSD Subversion repository is:
[.programlisting]
....
svn.FreeBSD.org
....
This is a publicly accessible mirror network that uses GeoDNS to select an appropriate back end server. To view the FreeBSD Subversion repositories through a browser, use https://svnweb.FreeBSD.org/[https://svnweb.FreeBSD.org/].
HTTPS is the preferred protocol, but the package:security/ca_root_nss[] package will need to be installed in order to automatically validate certificates.
=== For More Information
For other information about using Subversion, please see the "Subversion Book", titled http://svnbook.red-bean.com/[Version Control with Subversion], or the http://subversion.apache.org/docs/[Subversion Documentation].
[[mirrors-rsync]]
== Using rsync
These sites make FreeBSD available through the rsync protocol. The rsync utility transfers only the differences between two sets of files, which is useful for mirror sites of the FreeBSD FTP server. The rsync suite is available for many operating systems; on FreeBSD, see the package:net/rsync[] port or use the package.
Czech Republic::
rsync://ftp.cz.FreeBSD.org/
+
Available collections:
** ftp: A partial mirror of the FreeBSD FTP server.
** FreeBSD: A full mirror of the FreeBSD FTP server.
Netherlands::
rsync://ftp.nl.FreeBSD.org/
+
Available collections:
** FreeBSD: A full mirror of the FreeBSD FTP server.
Russia::
rsync://ftp.mtu.ru/
+
Available collections:
** FreeBSD: A full mirror of the FreeBSD FTP server.
** FreeBSD-Archive: The mirror of FreeBSD Archive FTP server.
Sweden::
rsync://ftp4.se.freebsd.org/
+
Available collections:
** FreeBSD: A full mirror of the FreeBSD FTP server.
Taiwan::
rsync://ftp.tw.FreeBSD.org/
+
rsync://ftp2.tw.FreeBSD.org/
+
rsync://ftp6.tw.FreeBSD.org/
+
Available collections:
** FreeBSD: A full mirror of the FreeBSD FTP server.
United Kingdom::
rsync://rsync.mirrorservice.org/
+
Available collections:
** ftp.freebsd.org: A full mirror of the FreeBSD FTP server.
United States of America::
rsync://ftp-master.FreeBSD.org/
+
This server may only be used by FreeBSD primary mirror sites.
+
Available collections:
+
--
** FreeBSD: The master archive of the FreeBSD FTP server.
** acl: The FreeBSD master ACL list.
--
+
rsync://ftp13.FreeBSD.org/
+
Available collections:
** FreeBSD: A full mirror of the FreeBSD FTP server.
diff --git a/documentation/content/en/books/handbook/multimedia/_index.adoc b/documentation/content/en/books/handbook/multimedia/_index.adoc
index a586b57e68..847f2d800e 100644
--- a/documentation/content/en/books/handbook/multimedia/_index.adoc
+++ b/documentation/content/en/books/handbook/multimedia/_index.adoc
@@ -1,1069 +1,1070 @@
---
title: Chapter 7. Multimedia
part: Part II. Common Tasks
prev: books/handbook/desktop
next: books/handbook/kernelconfig
+description: FreeBSD supports a wide variety of sound cards, allowing users to enjoy high fidelity output from a FreeBSD system
---
[[multimedia]]
= Multimedia
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 7
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../../images/books/handbook/multimedia/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/books/handbook/multimedia/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/books/handbook/multimedia/
endif::[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[multimedia-synopsis]]
== Synopsis
FreeBSD supports a wide variety of sound cards, allowing users to enjoy high fidelity output from a FreeBSD system. This includes the ability to record and play back audio in the MPEG Audio Layer 3 (`MP3`), Waveform Audio File (`WAV`), Ogg Vorbis, and other formats. The FreeBSD Ports Collection contains many applications for editing recorded audio, adding sound effects, and controlling attached MIDI devices.
FreeBSD also supports the playback of video files and ``DVD``s. The FreeBSD Ports Collection contains applications to encode, convert, and playback various video media.
This chapter describes how to configure sound cards, video playback, TV tuner cards, and scanners on FreeBSD. It also describes some of the applications which are available for using these devices.
After reading this chapter, you will know how to:
* Configure a sound card on FreeBSD.
* Troubleshoot the sound setup.
* Play back and encode MP3s and other audio.
* Prepare a FreeBSD system for video playback.
* Play ``DVD``s, [.filename]#.mpg#, and [.filename]#.avi# files.
* Rip `CD` and `DVD` content into files.
* Configure a TV card.
* Install and set up MythTV on FreeBSD.
* Configure an image scanner.
* Configure a Bluetooth headset.
Before reading this chapter, you should:
* Know how to install applications as described in crossref:ports[ports,Installing Applications: Packages and Ports].
[[sound-setup]]
== Setting Up the Sound Card
Before beginning the configuration, determine the model of the sound card and the chip it uses. FreeBSD supports a wide variety of sound cards. Check the supported audio devices list of the link:{u-rel120-hardware}[Hardware Notes] to see if the card is supported and which FreeBSD driver it uses.
In order to use the sound device, its device driver must be loaded. The easiest way is to load a kernel module for the sound card with man:kldload[8]. This example loads the driver for a built-in audio chipset based on the Intel specification:
[source,shell]
....
# kldload snd_hda
....
To automate the loading of this driver at boot time, add the driver to [.filename]#/boot/loader.conf#. The line for this driver is:
[.programlisting]
....
snd_hda_load="YES"
....
Other available sound modules are listed in [.filename]#/boot/defaults/loader.conf#. When unsure which driver to use, load the [.filename]#snd_driver# module:
[source,shell]
....
# kldload snd_driver
....
This is a metadriver which loads all of the most common sound drivers and can be used to speed up the search for the correct driver. It is also possible to load all sound drivers by adding the metadriver to [.filename]#/boot/loader.conf#.
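For example, the following line loads the metadriver at boot:
[.programlisting]
....
snd_driver_load="YES"
....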
To determine which driver was selected for the sound card after loading the [.filename]#snd_driver# metadriver, type `cat /dev/sndstat`.
=== Configuring a Custom Kernel with Sound Support
This section is for users who prefer to statically compile in support for the sound card in a custom kernel. For more information about recompiling a kernel, refer to crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel].
When using a custom kernel to provide sound support, make sure that the audio framework driver exists in the custom kernel configuration file:
[.programlisting]
....
device sound
....
Next, add support for the sound card. To continue the example of the built-in audio chipset based on the Intel specification from the previous section, use the following line in the custom kernel configuration file:
[.programlisting]
....
device snd_hda
....
Be sure to read the manual page of the driver for the device name to use for the driver.
Non-PnP ISA sound cards may require the IRQ and I/O port settings of the card to be added to [.filename]#/boot/device.hints#. During the boot process, man:loader[8] reads this file and passes the settings to the kernel. For example, an old Creative SoundBlaster(R) 16 ISA non-PnP card will use the man:snd_sbc[4] driver in conjunction with `snd_sb16`. For this card, the following lines must be added to the kernel configuration file:
[.programlisting]
....
device snd_sbc
device snd_sb16
....
If the card uses the `0x220` I/O port and IRQ `5`, these lines must also be added to [.filename]#/boot/device.hints#:
[.programlisting]
....
hint.sbc.0.at="isa"
hint.sbc.0.port="0x220"
hint.sbc.0.irq="5"
hint.sbc.0.drq="1"
hint.sbc.0.flags="0x15"
....
The syntax used in [.filename]#/boot/device.hints# is described in man:sound[4] and the manual page for the driver of the sound card.
The settings shown above are the defaults. In some cases, the IRQ or other settings may need to be changed to match the card. Refer to man:snd_sbc[4] for more information about this card.
[[sound-testing]]
=== Testing Sound
After loading the required module or rebooting into the custom kernel, the sound card should be detected. To confirm, run `dmesg | grep pcm`. This example is from a system with a built-in Conexant CX20590 chipset:
[source,shell]
....
pcm0: <NVIDIA (0x001c) (HDMI/DP 8ch)> at nid 5 on hdaa0
pcm1: <NVIDIA (0x001c) (HDMI/DP 8ch)> at nid 6 on hdaa0
pcm2: <Conexant CX20590 (Analog 2.0+HP/2.0)> at nid 31,25 and 35,27 on hdaa1
....
The status of the sound card may also be checked using this command:
[source,shell]
....
# cat /dev/sndstat
FreeBSD Audio Driver (newpcm: 64bit 2009061500/amd64)
Installed devices:
pcm0: <NVIDIA (0x001c) (HDMI/DP 8ch)> (play)
pcm1: <NVIDIA (0x001c) (HDMI/DP 8ch)> (play)
pcm2: <Conexant CX20590 (Analog 2.0+HP/2.0)> (play/rec) default
....
The output will vary depending upon the sound card. If no [.filename]#pcm# devices are listed, double-check that the correct device driver was loaded or compiled into the kernel. The next section lists some common problems and their solutions.
If all goes well, the sound card should now work in FreeBSD. If the `CD` or `DVD` drive is properly connected to the sound card, one can insert an audio `CD` in the drive and play it with man:cdcontrol[1]:
[source,shell]
....
% cdcontrol -f /dev/acd0 play 1
....
[WARNING]
====
Audio ``CD``s have specialized encodings which means that they should not be mounted using man:mount[8].
====
Various applications, such as package:audio/workman[], provide a friendlier interface. The package:audio/mpg123[] port can be installed to listen to MP3 audio files.
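For example, to install mpg123 from binary packages:
[source,shell]
....
# pkg install mpg123
....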
Another quick way to test the card is to send data to [.filename]#/dev/dsp#:
[source,shell]
....
% cat filename > /dev/dsp
....
where [.filename]#filename# can be any type of file. This command should produce some noise, confirming that the sound card is working.
[NOTE]
====
The [.filename]#/dev/dsp*# device nodes will be created automatically as needed. When not in use, they do not exist and will not appear in the output of man:ls[1].
====
[[bluetooth-headset]]
=== Setting up Bluetooth Sound Devices
Connecting to a Bluetooth device is out of scope for this chapter. Refer to crossref:advanced-networking[network-bluetooth,“Bluetooth”] for more information.
To get a Bluetooth sound sink working with FreeBSD's sound system, users have to install package:audio/virtual_oss[] first:
[source,shell]
....
# pkg install virtual_oss
....
package:audio/virtual_oss[] requires `cuse` to be loaded into the kernel:
[source,shell]
....
# kldload cuse
....
To load `cuse` during system startup, run this command:
[source,shell]
....
# echo 'cuse_load=yes' >> /boot/loader.conf
....
To use headphones as a sound sink with package:audio/virtual_oss[], users need to create a virtual device after connecting to a Bluetooth audio device:
[source,shell]
....
# virtual_oss -C 2 -c 2 -r 48000 -b 16 -s 768 -R /dev/null -P /dev/bluetooth/headphones -d dsp
....
[NOTE]
====
_headphones_ in this example is a hostname from [.filename]#/etc/bluetooth/hosts#. `BT_ADDR` could be used instead.
====
Refer to man:virtual_oss[8] for more information.
[[troubleshooting]]
=== Troubleshooting Sound
<<multimedia-sound-common-error-messages>> lists some common error messages and their solutions:
[[multimedia-sound-common-error-messages]]
.Common Error Messages
[cols="1,1", frame="none", options="header"]
|===
| Error
| Solution
|`sb_dspwr(XX) timed out`
|
The I/O port is not set correctly.
|`bad irq XX`
|
The IRQ is set incorrectly. Make sure that the configured IRQ matches the IRQ used by the sound card.
|`xxx: gus pcm not attached, out of memory`
|
There is not enough available memory to use the device.
|`xxx: can't open /dev/dsp!`
|
Type `fstat \| grep dsp` to check if another application is holding the device open. Noteworthy troublemakers are esound and KDE's sound support.
|===
Modern graphics cards often come with their own sound driver for use with `HDMI`. This sound device is sometimes enumerated before the sound card, meaning that the sound card will not be used as the default playback device. To check if this is the case, run `dmesg` and look for `pcm`. The output looks something like this:
[.programlisting]
....
...
hdac0: HDA Driver Revision: 20100226_0142
hdac1: HDA Driver Revision: 20100226_0142
hdac0: HDA Codec #0: NVidia (Unknown)
hdac0: HDA Codec #1: NVidia (Unknown)
hdac0: HDA Codec #2: NVidia (Unknown)
hdac0: HDA Codec #3: NVidia (Unknown)
pcm0: <HDA NVidia (Unknown) PCM #0 DisplayPort> at cad 0 nid 1 on hdac0
pcm1: <HDA NVidia (Unknown) PCM #0 DisplayPort> at cad 1 nid 1 on hdac0
pcm2: <HDA NVidia (Unknown) PCM #0 DisplayPort> at cad 2 nid 1 on hdac0
pcm3: <HDA NVidia (Unknown) PCM #0 DisplayPort> at cad 3 nid 1 on hdac0
hdac1: HDA Codec #2: Realtek ALC889
pcm4: <HDA Realtek ALC889 PCM #0 Analog> at cad 2 nid 1 on hdac1
pcm5: <HDA Realtek ALC889 PCM #1 Analog> at cad 2 nid 1 on hdac1
pcm6: <HDA Realtek ALC889 PCM #2 Digital> at cad 2 nid 1 on hdac1
pcm7: <HDA Realtek ALC889 PCM #3 Digital> at cad 2 nid 1 on hdac1
...
....
In this example, the graphics card (`NVidia`) has been enumerated before the sound card (`Realtek ALC889`). To use the sound card as the default playback device, change `hw.snd.default_unit` to the unit that should be used for playback:
[source,shell]
....
# sysctl hw.snd.default_unit=n
....
where `n` is the number of the sound device to use. In this example, it should be `4`. Make this change permanent by adding the following line to [.filename]#/etc/sysctl.conf#:
[.programlisting]
....
hw.snd.default_unit=4
....
[[sound-multiple-sources]]
=== Utilizing Multiple Sound Sources
It is often desirable to have multiple sources of sound that are able to play simultaneously. FreeBSD uses "Virtual Sound Channels" to multiplex the sound card's playback by mixing sound in the kernel.
Three man:sysctl[8] knobs are available for configuring virtual channels:
[source,shell]
....
# sysctl dev.pcm.0.play.vchans=4
# sysctl dev.pcm.0.rec.vchans=4
# sysctl hw.snd.maxautovchans=4
....
This example allocates four virtual channels, which is a practical number for everyday use. Both `dev.pcm.0.play.vchans=4` and `dev.pcm.0.rec.vchans=4` are configurable after a device has been attached and represent the number of virtual channels [.filename]#pcm0# has for playback and recording. Since the [.filename]#pcm# module can be loaded independently of the hardware drivers, `hw.snd.maxautovchans` indicates how many virtual channels will be given to an audio device when it is attached. Refer to man:pcm[4] for more information.
[NOTE]
====
The number of virtual channels for a device cannot be changed while it is in use. First, close any programs using the device, such as music players or sound daemons.
====
The correct [.filename]#pcm# device is allocated transparently to any program that requests [.filename]#/dev/dsp0#.
=== Setting Default Values for Mixer Channels
The default values for the different mixer channels are hardcoded in the source code of the man:pcm[4] driver. While sound card mixer levels can be changed using man:mixer[8] or third-party applications and daemons, this is not a permanent solution. To instead set default mixer values at the driver level, define the appropriate values in [.filename]#/boot/device.hints#, as seen in this example:
[.programlisting]
....
hint.pcm.0.vol="50"
....
This will set the volume channel to a default value of `50` when the man:pcm[4] module is loaded.
[[sound-mp3]]
== MP3 Audio
This section describes some `MP3` players available for FreeBSD, how to rip audio `CD` tracks, and how to encode and decode ``MP3``s.
[[mp3-players]]
=== MP3 Players
A popular graphical `MP3` player is Audacious. It supports Winamp skins and additional plugins. The interface is intuitive, with a playlist, graphic equalizer, and more. Those familiar with Winamp will find Audacious simple to use. On FreeBSD, Audacious can be installed from the package:multimedia/audacious[] port or package. Audacious is a descendant of XMMS.
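For example, Audacious can be installed from binary packages:
[source,shell]
....
# pkg install audacious
....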
The package:audio/mpg123[] package or port provides an alternative, command-line `MP3` player. Once installed, specify the `MP3` file to play on the command line. If the system has multiple audio devices, the sound device can also be specified:
[source,shell]
....
# mpg123 -a /dev/dsp1.0 Foobar-GreatestHits.mp3
High Performance MPEG 1.0/2.0/2.5 Audio Player for Layers 1, 2 and 3
version 1.18.1; written and copyright by Michael Hipp and others
free software (LGPL) without any warranty but with best wishes
Playing MPEG stream from Foobar-GreatestHits.mp3 ...
MPEG 1.0 layer III, 128 kbit/s, 44100 Hz joint-stereo
....
Additional `MP3` players are available in the FreeBSD Ports Collection.
[[rip-cd]]
=== Ripping `CD` Audio Tracks
Before encoding a `CD` or `CD` track to `MP3`, the audio data on the `CD` must be ripped to the hard drive. This is done by copying the raw `CD` Digital Audio (`CDDA`) data to `WAV` files.
The `cdda2wav` tool, which is installed with the package:sysutils/cdrtools[] suite, can be used to rip audio information from ``CD``s.
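If the suite is not yet installed, it can be added, for example, from binary packages:
[source,shell]
....
# pkg install cdrtools
....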
With the audio `CD` in the drive, the following command can be issued as `root` to rip an entire `CD` into individual, per track, `WAV` files:
[source,shell]
....
# cdda2wav -D 0,1,0 -B
....
In this example, the `-D _0,1,0_` indicates the `SCSI` device [.filename]#0,1,0# containing the `CD` to rip. Use `cdrecord -scanbus` to determine the correct device parameters for the system.
To rip individual tracks, use `-t` to specify the track:
[source,shell]
....
# cdda2wav -D 0,1,0 -t 7
....
To rip a range of tracks, such as track one to seven, specify a range:
[source,shell]
....
# cdda2wav -D 0,1,0 -t 1+7
....
To rip from an `ATAPI` (`IDE`) `CDROM` drive, specify the device name in place of the `SCSI` unit numbers. For example, to rip track 7 from an IDE drive:
[source,shell]
....
# cdda2wav -D /dev/acd0 -t 7
....
Alternately, `dd` can be used to extract audio tracks on `ATAPI` drives, as described in crossref:disks[duplicating-audiocds,“Duplicating Audio CDs”].
[[mp3-encoding]]
=== Encoding and Decoding MP3s
Lame is a popular `MP3` encoder which can be installed from the package:audio/lame[] port. Due to patent issues, a package is not available.
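To build it from the Ports Collection:
[source,shell]
....
# cd /usr/ports/audio/lame
# make install clean
....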
The following command will convert the ripped `WAV` file [.filename]#audio01.wav# to [.filename]#audio01.mp3#:
[source,shell]
....
# lame -h -b 128 --tt "Foo Song Title" --ta "FooBar Artist" --tl "FooBar Album" \
--ty "2014" --tc "Ripped and encoded by Foo" --tg "Genre" audio01.wav audio01.mp3
....
The specified 128 kbit/s is a standard `MP3` bitrate, while 160 and 192 kbit/s provide higher quality. The higher the bitrate, the larger the size of the resulting `MP3`. The `-h` turns on the "higher quality but a little slower" mode. The options beginning with `--t` indicate `ID3` tags, which usually contain song information, to be embedded within the `MP3` file. Additional encoding options can be found in the lame manual page.
In order to burn an audio `CD` from ``MP3``s, they must first be converted to a non-compressed file format. XMMS can be used to convert to the `WAV` format, while mpg123 can be used to convert to the raw Pulse-Code Modulation (`PCM`) audio data format.
To convert [.filename]#audio01.mp3# using mpg123, specify the name of the `PCM` file:
[source,shell]
....
# mpg123 -s audio01.mp3 > audio01.pcm
....
To use XMMS to convert an `MP3` to `WAV` format, use these steps:
[.procedure]
.Procedure: Converting to `WAV` Format in XMMS
. Launch XMMS.
. Right-click the window to bring up the XMMS menu.
. Select `Preferences` under `Options`.
. Change the Output Plugin to "Disk Writer Plugin".
. Press `Configure`.
. Enter or browse to a directory to write the uncompressed files to.
. Load the `MP3` file into XMMS as usual, with volume at 100% and EQ settings turned off.
. Press `Play`. XMMS will appear to be playing the `MP3`, but no music will be heard. It is actually writing the `MP3` to a file.
. When finished, be sure to set the default Output Plugin back to what it was before in order to listen to ``MP3``s again.
Both the `WAV` and `PCM` formats can be used with cdrecord. When using `WAV` files, there will be a small tick sound at the beginning of each track. This sound is the header of the `WAV` file. The package:audio/sox[] port or package can be used to remove the header:
[source,shell]
....
% sox -t wav -r 44100 -s -w -c 2 track.wav track.raw
....
Refer to crossref:disks[creating-cds,“Creating and Using CD Media”] for more information on using a `CD` burner in FreeBSD.
[[video-playback]]
== Video Playback
Before configuring video playback, determine the model and chipset of the video card. While Xorg supports a wide variety of video cards, not all provide good playback performance. To obtain a list of extensions supported by the Xorg server using the card, run `xdpyinfo` while Xorg is running.
It is a good idea to have a short MPEG test file for evaluating various players and options. Since some `DVD` applications look for `DVD` media in [.filename]#/dev/dvd# by default, or have this device name hardcoded in them, it might be useful to make a symbolic link to the proper device:
[source,shell]
....
# ln -sf /dev/cd0 /dev/dvd
....
Due to the nature of man:devfs[5], manually created links will not persist after a system reboot. In order to recreate the symbolic link automatically when the system boots, add the following line to [.filename]#/etc/devfs.conf#:
[.programlisting]
....
link cd0 dvd
....
`DVD` decryption invokes certain functions that require write permission to the `DVD` device.
To enhance the shared memory Xorg interface, it is recommended to increase the values of these man:sysctl[8] variables:
[.programlisting]
....
kern.ipc.shmmax=67108864
kern.ipc.shmall=32768
....
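These values can also be set at runtime with man:sysctl[8]; for example:
[source,shell]
....
# sysctl kern.ipc.shmmax=67108864
# sysctl kern.ipc.shmall=32768
....
To make the change permanent, add the lines shown above to [.filename]#/etc/sysctl.conf#.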
[[video-interface]]
=== Determining Video Capabilities
There are several possible ways to display video under Xorg and what works is largely hardware dependent. Each method described below will have varying quality across different hardware.
Common video interfaces include:
. Xorg: normal output using shared memory.
. XVideo: an extension to the Xorg interface which allows video to be directly displayed in drawable objects through a special acceleration. This extension provides good quality playback even on low-end machines. The next section describes how to determine if this extension is running.
. `SDL`: the Simple Directmedia Layer is a porting layer for many operating systems, allowing cross-platform applications to be developed which make efficient use of sound and graphics. `SDL` provides a low-level abstraction to the hardware which can sometimes be more efficient than the Xorg interface. On FreeBSD, `SDL` can be installed using the package:devel/sdl20[] package or port.
. `DGA`: the Direct Graphics Access is an Xorg extension which allows a program to bypass the Xorg server and directly alter the framebuffer. As it relies on a low-level memory mapping, programs using it must be run as `root`. The `DGA` extension can be tested and benchmarked using man:dga[1]. When `dga` is running, it changes the colors of the display whenever a key is pressed. To quit, press kbd:[q].
. SVGAlib: a low level console graphics layer.
[[video-interface-xvideo]]
==== XVideo
To check whether this extension is running, use `xvinfo`:
[source,shell]
....
% xvinfo
....
XVideo is supported for the card if the result is similar to:
[source,shell]
....
X-Video Extension version 2.2
screen #0
Adaptor #0: "Savage Streams Engine"
number of ports: 1
port base: 43
operations supported: PutImage
supported visuals:
depth 16, visualID 0x22
depth 16, visualID 0x23
number of attributes: 5
"XV_COLORKEY" (range 0 to 16777215)
client settable attribute
client gettable attribute (current value is 2110)
"XV_BRIGHTNESS" (range -128 to 127)
client settable attribute
client gettable attribute (current value is 0)
"XV_CONTRAST" (range 0 to 255)
client settable attribute
client gettable attribute (current value is 128)
"XV_SATURATION" (range 0 to 255)
client settable attribute
client gettable attribute (current value is 128)
"XV_HUE" (range -180 to 180)
client settable attribute
client gettable attribute (current value is 0)
maximum XvImage size: 1024 x 1024
Number of image formats: 7
id: 0x32595559 (YUY2)
guid: 59555932-0000-0010-8000-00aa00389b71
bits per pixel: 16
number of planes: 1
type: YUV (packed)
id: 0x32315659 (YV12)
guid: 59563132-0000-0010-8000-00aa00389b71
bits per pixel: 12
number of planes: 3
type: YUV (planar)
id: 0x30323449 (I420)
guid: 49343230-0000-0010-8000-00aa00389b71
bits per pixel: 12
number of planes: 3
type: YUV (planar)
id: 0x36315652 (RV16)
guid: 52563135-0000-0000-0000-000000000000
bits per pixel: 16
number of planes: 1
type: RGB (packed)
depth: 0
red, green, blue masks: 0x1f, 0x3e0, 0x7c00
id: 0x35315652 (RV15)
guid: 52563136-0000-0000-0000-000000000000
bits per pixel: 16
number of planes: 1
type: RGB (packed)
depth: 0
red, green, blue masks: 0x1f, 0x7e0, 0xf800
id: 0x31313259 (Y211)
guid: 59323131-0000-0010-8000-00aa00389b71
bits per pixel: 6
number of planes: 3
type: YUV (packed)
id: 0x0
guid: 00000000-0000-0000-0000-000000000000
bits per pixel: 0
number of planes: 0
type: RGB (packed)
depth: 1
red, green, blue masks: 0x0, 0x0, 0x0
....
The formats listed, such as YUY2 and YV12, are not present with every implementation of XVideo and their absence may hinder some players.
If the result instead looks like:
[source,shell]
....
X-Video Extension version 2.2
screen #0
no adaptors present
....
XVideo is probably not supported for the card. This means that it will be more difficult for the display to meet the computational demands of rendering video, depending on the video card and processor.
[[video-ports]]
=== Ports and Packages Dealing with Video
This section introduces some of the software available from the FreeBSD Ports Collection which can be used for video playback.
[[video-mplayer]]
==== MPlayer and MEncoder
MPlayer is a command-line video player with an optional graphical interface which aims to provide speed and flexibility. Other graphical front-ends to MPlayer are available from the FreeBSD Ports Collection.
MPlayer can be installed using the package:multimedia/mplayer[] package or port. Several compile options are available and a variety of hardware checks occur during the build process. For these reasons, some users prefer to build the port rather than install the package.
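For those building from ports, a typical invocation looks like:
[source,shell]
....
# cd /usr/ports/multimedia/mplayer
# make install clean
....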
When compiling the port, the menu options should be reviewed to determine the type of support to compile into the port. If an option is not selected, MPlayer will not be able to display that type of video format. Use the arrow keys and spacebar to select the required formats. When finished, press kbd:[Enter] to continue the port compile and installation.
By default, the package or port will build the `mplayer` command line utility and the `gmplayer` graphical utility. To encode videos, compile the package:multimedia/mencoder[] port. Due to licensing restrictions, a package is not available for MEncoder.
The first time MPlayer is run, it will create [.filename]#~/.mplayer# in the user's home directory. This subdirectory contains default versions of the user-specific configuration files.
This section describes only a few common uses. Refer to man:mplayer[1] for a complete description of its numerous options.
To play the file [.filename]#testfile.avi#, specify the video interfaces with `-vo`, as seen in the following examples:
[source,shell]
....
% mplayer -vo xv testfile.avi
....
[source,shell]
....
% mplayer -vo sdl testfile.avi
....
[source,shell]
....
% mplayer -vo x11 testfile.avi
....
[source,shell]
....
# mplayer -vo dga testfile.avi
....
[source,shell]
....
# mplayer -vo 'sdl:dga' testfile.avi
....
It is worth trying all of these options, as their relative performance depends on many factors and will vary significantly with hardware.
To play a `DVD`, replace [.filename]#testfile.avi# with `dvd://_N_ -dvd-device _DEVICE_`, where _N_ is the title number to play and _DEVICE_ is the device node for the `DVD`. For example, to play title 3 from [.filename]#/dev/dvd#:
[source,shell]
....
# mplayer -vo xv dvd://3 -dvd-device /dev/dvd
....
[NOTE]
====
The default `DVD` device can be defined during the build of the MPlayer port by including the `WITH_DVD_DEVICE=/path/to/desired/device` option. By default, the device is [.filename]#/dev/cd0#. More details can be found in the port's [.filename]#Makefile.options#.
====
To stop, pause, advance, and so on, use a keybinding. To see the list of keybindings, run `mplayer -h` or read man:mplayer[1].
Additional playback options include `-fs -zoom`, which engages fullscreen mode, and `-framedrop`, which helps performance.
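For example, combining these options with the XVideo interface:
[source,shell]
....
% mplayer -vo xv -fs -zoom -framedrop testfile.avi
....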
Each user can add commonly used options to their [.filename]#~/.mplayer/config# like so:
[.programlisting]
....
vo=xv
fs=yes
zoom=yes
....
`mplayer` can be used to rip a `DVD` title to a [.filename]#.vob#. To dump the second title from a `DVD`:
[source,shell]
....
# mplayer -dumpstream -dumpfile out.vob dvd://2 -dvd-device /dev/dvd
....
The output file, [.filename]#out.vob#, will be in `MPEG` format.
Anyone wishing to obtain a high level of expertise with UNIX(R) video should consult http://www.mplayerhq.hu/DOCS/[mplayerhq.hu/DOCS] as it is technically informative. This documentation should be considered as required reading before submitting any bug reports.
Before using `mencoder`, it is a good idea to become familiar with the options described at http://www.mplayerhq.hu/DOCS/HTML/en/mencoder.html[mplayerhq.hu/DOCS/HTML/en/mencoder.html]. There are innumerable ways to improve quality, lower bitrate, and change formats, and some of these options may make the difference between good or bad performance. Improper combinations of command line options can yield output files that are unplayable even by `mplayer`.
Here is an example of a simple copy:
[source,shell]
....
% mencoder input.avi -oac copy -ovc copy -o output.avi
....
To rip to a file, use `-dumpfile` with `mplayer`.
To convert [.filename]#input.avi# to the MPEG4 codec with MP3 audio encoding, first install the package:audio/lame[] port. Due to licensing restrictions, a package is not available. Once installed, type:
[source,shell]
....
% mencoder input.avi -oac mp3lame -lameopts br=192 \
-ovc lavc -lavcopts vcodec=mpeg4:vhq -o output.avi
....
This will produce output playable by applications such as `mplayer` and `xine`.
[.filename]#input.avi# can be replaced with `dvd://1 -dvd-device /dev/dvd` and run as `root` to re-encode a `DVD` title directly. Since it may take a few tries to get the desired result, it is recommended to instead dump the title to a file and to work on the file.
[[video-xine]]
==== The xine Video Player
xine is a video player with a reusable base library and a modular executable which can be extended with plugins. It can be installed using the package:multimedia/xine[] package or port.
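For example, to install the binary package:
[source,shell]
....
# pkg install xine
....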
In practice, xine requires either a fast CPU with a fast video card, or support for the XVideo extension. The xine video player performs best on XVideo interfaces.
By default, the xine player starts a graphical user interface. The menus can then be used to open a specific file.
Alternatively, xine may be invoked from the command line by specifying the name of the file to play:
[source,shell]
....
% xine -g -p mymovie.avi
....
Refer to http://www.xine-project.org/faq[xine-project.org/faq] for more information and troubleshooting tips.
[[video-ports-transcode]]
==== The Transcode Utilities
Transcode provides a suite of tools for re-encoding video and audio files. Transcode can be used to merge video files or repair broken files using command line tools with stdin/stdout stream interfaces.
In FreeBSD, Transcode can be installed using the package:multimedia/transcode[] package or port. Many users prefer to compile the port as it provides a menu of compile options for specifying the support and codecs to compile in. If an option is not selected, Transcode will not be able to encode that format. Use the arrow keys and spacebar to select the required formats. When finished, press kbd:[Enter] to continue the port compile and installation.
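To build the port with a custom set of options:
[source,shell]
....
# cd /usr/ports/multimedia/transcode
# make install clean
....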
This example demonstrates how to convert a DivX file into a PAL MPEG-1 file (PAL VCD):
[source,shell]
....
% transcode -i input.avi -V --export_prof vcd-pal -o output_vcd
% mplex -f 1 -o output_vcd.mpg output_vcd.m1v output_vcd.mpa
....
The resulting `MPEG` file, [.filename]#output_vcd.mpg#, is ready to be played with MPlayer. The file can be burned on a `CD` media to create a video `CD` using a utility such as package:multimedia/vcdimager[] or package:sysutils/cdrdao[].
In addition to the manual page for `transcode`, refer to http://www.transcoding.org/cgi-bin/transcode[transcoding.org/cgi-bin/transcode] for further information and examples.
[[tvcard]]
== TV Cards
TV cards can be used to watch broadcast or cable TV on a computer. Most cards accept composite video via an `RCA` or S-video input, and some cards include an `FM` radio tuner.
FreeBSD provides support for PCI-based TV cards using a Brooktree Bt848/849/878/879 video capture chip with the man:bktr[4] driver. This driver supports most Pinnacle PCTV video cards. Before purchasing a TV card, consult man:bktr[4] for a list of supported tuners.
=== Loading the Driver
In order to use the card, the man:bktr[4] driver must be loaded. To automate this at boot time, add the following line to [.filename]#/boot/loader.conf#:
[.programlisting]
....
bktr_load="YES"
....
Alternatively, one can statically compile support for the TV card into a custom kernel. In that case, add the following lines to the custom kernel configuration file:
[.programlisting]
....
device bktr
device iicbus
device iicbb
device smbus
....
These additional devices are necessary as the card components are interconnected via an I2C bus. Then, build and install a new kernel.
To test that the tuner is correctly detected, reboot the system. The TV card should appear in the boot messages, as seen in this example:
[.programlisting]
....
bktr0: <BrookTree 848A> mem 0xd7000000-0xd7000fff irq 10 at device 10.0 on pci0
iicbb0: <I2C bit-banging driver> on bti2c0
iicbus0: <Philips I2C bus> on iicbb0 master-only
iicbus1: <Philips I2C bus> on iicbb0 master-only
smbus0: <System Management Bus> on bti2c0
bktr0: Pinnacle/Miro TV, Philips SECAM tuner.
....
The messages will differ according to the hardware. If necessary, it is possible to override some of the detected parameters using man:sysctl[8] or custom kernel configuration options. For example, to force the tuner to a Philips SECAM tuner, add the following line to a custom kernel configuration file:
[.programlisting]
....
options OVERRIDE_TUNER=6
....
or, use man:sysctl[8]:
[source,shell]
....
# sysctl hw.bt848.tuner=6
....
Refer to man:bktr[4] for a description of the available man:sysctl[8] parameters and kernel options.
=== Useful Applications
To use the TV card, install one of the following applications:
* package:multimedia/fxtv[] provides TV-in-a-window and image/audio/video capture capabilities.
* package:multimedia/xawtv[] is another TV application with similar features.
* package:audio/xmradio[] provides an application for using the FM radio tuner of a TV card.
More applications are available in the FreeBSD Ports Collection.
=== Troubleshooting
If any problems are encountered with the TV card, check that the video capture chip and the tuner are supported by man:bktr[4] and that the right configuration options were used. For more support or to ask questions about supported TV cards, refer to the {freebsd-multimedia} mailing list.
[[mythtv]]
== MythTV
MythTV is a popular, open source Personal Video Recorder (`PVR`) application. This section demonstrates how to install and set up MythTV on FreeBSD. Refer to http://www.mythtv.org/wiki/[mythtv.org/wiki] for more information on how to use MythTV.
MythTV requires a frontend and a backend. These components can either be installed on the same system or on different machines.
The frontend can be installed on FreeBSD using the package:multimedia/mythtv-frontend[] package or port. Xorg must also be installed and configured as described in crossref:x11[x11,The X Window System]. Ideally, this system has a video card that supports X-Video Motion Compensation (`XvMC`) and, optionally, a Linux Infrared Remote Control (`LIRC`)-compatible remote.
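For a frontend-only machine, the package, if available, can be installed with:
[source,shell]
....
# pkg install mythtv-frontend
....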
To install both the backend and the frontend on FreeBSD, use the package:multimedia/mythtv[] package or port. A MySQL(TM) database server is also required and should automatically be installed as a dependency. Optionally, this system should have a tuner card and sufficient storage to hold recorded data.
=== Hardware
MythTV uses Video for Linux (`V4L`) to access video input devices such as encoders and tuners. In FreeBSD, MythTV works best with `USB` DVB-S/C/T cards as they are well supported by the package:multimedia/webcamd[] package or port which provides a `V4L` userland application. Any Digital Video Broadcasting (`DVB`) card supported by webcamd should work with MythTV. A list of known working cards can be found at https://wiki.freebsd.org/WebcamCompat[wiki.freebsd.org/WebcamCompat]. Drivers are also available for Hauppauge cards in the package:multimedia/pvr250[] and package:multimedia/pvrxxx[] ports, but they provide a non-standard driver interface that does not work with versions of MythTV greater than 0.23. Due to licensing restrictions, no packages are available and these two ports must be compiled.
The https://wiki.freebsd.org/HTPC[wiki.freebsd.org/HTPC] page contains a list of all available `DVB` drivers.
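To install webcamd and enable it at boot (the rc.conf variable shown assumes the default provided by the port):
[source,shell]
....
# pkg install webcamd
# sysrc webcamd_enable=yes
....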
=== Setting up the MythTV Backend
To install MythTV using binary packages:
[source,shell]
....
# pkg install mythtv
....
Alternatively, to install from the Ports Collection:
[source,shell]
....
# cd /usr/ports/multimedia/mythtv
# make install
....
Once installed, set up the MythTV database:
[source,shell]
....
# mysql -uroot -p < /usr/local/share/mythtv/database/mc.sql
....
Then, configure the backend:
[source,shell]
....
# mythtv-setup
....
Finally, start the backend:
[source,shell]
....
# sysrc mythbackend_enable=yes
# service mythbackend start
....
[[scanners]]
== Image Scanners
In FreeBSD, access to image scanners is provided by SANE (Scanner Access Now Easy), which is available in the FreeBSD Ports Collection. SANE will also use some FreeBSD device drivers to provide access to the scanner hardware.
FreeBSD supports both `SCSI` and `USB` scanners. Depending upon the scanner interface, different device drivers are required. Be sure the scanner is supported by SANE prior to performing any configuration. Refer to http://www.sane-project.org/sane-supported-devices.html[http://www.sane-project.org/sane-supported-devices.html] for more information about supported scanners.
This chapter describes how to determine if the scanner has been detected by FreeBSD. It then provides an overview of how to configure and use SANE on a FreeBSD system.
[[scanners-kernel-usb]]
=== Checking the Scanner
The [.filename]#GENERIC# kernel includes the device drivers needed to support `USB` scanners. Users with a custom kernel should ensure that the following lines are present in the custom kernel configuration file:
[.programlisting]
....
device usb
device uhci
device ohci
device ehci
device xhci
....
To determine if the `USB` scanner is detected, plug it in and use `dmesg` to determine whether the scanner appears in the system message buffer. If it does, it should display a message similar to this:
[source,shell]
....
ugen0.2: <EPSON> at usbus0
....
In this example, an EPSON Perfection(R) 1650 `USB` scanner was detected on [.filename]#/dev/ugen0.2#.
If the scanner uses a `SCSI` interface, it is important to know which `SCSI` controller board it will use. Depending upon the `SCSI` chipset, a custom kernel configuration file may be needed. The [.filename]#GENERIC# kernel supports the most common `SCSI` controllers. Refer to [.filename]#/usr/src/sys/conf/NOTES# to determine the correct line to add to a custom kernel configuration file. In addition to the `SCSI` adapter driver, the following lines are needed in a custom kernel configuration file:
[.programlisting]
....
device scbus
device pass
....
Verify that the device is displayed in the system message buffer:
[source,shell]
....
pass2 at aic0 bus 0 target 2 lun 0
pass2: <AGFA SNAPSCAN 600 1.10> Fixed Scanner SCSI-2 device
pass2: 3.300MB/s transfers
....
If the scanner was not powered-on at system boot, it is still possible to manually force detection by performing a `SCSI` bus scan with `camcontrol`:
[source,shell]
....
# camcontrol rescan all
Re-scan of bus 0 was successful
Re-scan of bus 1 was successful
Re-scan of bus 2 was successful
Re-scan of bus 3 was successful
....
The scanner should now appear in the `SCSI` devices list:
[source,shell]
....
# camcontrol devlist
<IBM DDRS-34560 S97B> at scbus0 target 5 lun 0 (pass0,da0)
<IBM DDRS-34560 S97B> at scbus0 target 6 lun 0 (pass1,da1)
<AGFA SNAPSCAN 600 1.10> at scbus1 target 2 lun 0 (pass3)
<PHILIPS CDD3610 CD-R/RW 1.00> at scbus2 target 0 lun 0 (pass2,cd0)
....
Refer to man:scsi[4] and man:camcontrol[8] for more details about `SCSI` devices on FreeBSD.
=== SANE Configuration
The SANE system provides access to the scanner via backends (package:graphics/sane-backends[]). Refer to http://www.sane-project.org/sane-supported-devices.html[http://www.sane-project.org/sane-supported-devices.html] to determine which backend supports the scanner. A graphical scanning interface is provided by third party applications like Kooka (package:graphics/kooka[]) or XSane (package:graphics/xsane[]). SANE's backends are enough to test the scanner.
To install the backends from binary package:
[source,shell]
....
# pkg install sane-backends
....
Alternatively, to install from the Ports Collection:
[source,shell]
....
# cd /usr/ports/graphics/sane-backends
# make install clean
....
After installing the package:graphics/sane-backends[] port or package, use `sane-find-scanner` to check the scanner detection by the SANE system:
[source,shell]
....
# sane-find-scanner -q
found SCSI scanner "AGFA SNAPSCAN 600 1.10" at /dev/pass3
....
The output should show the interface type of the scanner and the device node used to attach the scanner to the system. The vendor and the product model may or may not appear.
[NOTE]
====
Some `USB` scanners require firmware to be loaded. Refer to man:sane-find-scanner[1] and man:sane[7] for details.
====
Next, check if the scanner will be identified by a scanning frontend. The SANE backends include `scanimage` which can be used to list the devices and perform an image acquisition. Use `-L` to list the scanner devices. The first example is for a `SCSI` scanner and the second is for a `USB` scanner:
[source,shell]
....
# scanimage -L
device `snapscan:/dev/pass3' is a AGFA SNAPSCAN 600 flatbed scanner
# scanimage -L
device 'epson2:libusb:000:002' is a Epson GT-8200 flatbed scanner
....
In this second example, `epson2` is the backend name and `libusb:000:002` means [.filename]#/dev/ugen0.2# is the device node used by the scanner.
If `scanimage` is unable to identify the scanner, this message will appear:
[source,shell]
....
# scanimage -L
No scanners were identified. If you were expecting something different,
check that the scanner is plugged in, turned on and detected by the
sane-find-scanner tool (if appropriate). Please read the documentation
which came with this software (README, FAQ, manpages).
....
If this happens, edit the backend configuration file in [.filename]#/usr/local/etc/sane.d/# and define the scanner device used. For example, if the undetected scanner model is an EPSON Perfection(R) 1650 and it uses the `epson2` backend, edit [.filename]#/usr/local/etc/sane.d/epson2.conf#. When editing, add a line specifying the interface and the device node used. In this case, add the following line:
[.programlisting]
....
usb /dev/ugen0.2
....
Save the edits and verify that the scanner is identified with the right backend name and the device node:
[source,shell]
....
# scanimage -L
device 'epson2:libusb:000:002' is a Epson GT-8200 flatbed scanner
....
Once `scanimage -L` sees the scanner, the configuration is complete and the scanner is now ready to use.
While `scanimage` can be used to perform an image acquisition from the command line, it is often preferable to use a graphical interface to perform image scanning. Applications like Kooka or XSane are popular scanning frontends. They offer advanced features such as various scanning modes, color correction, and batch scans. XSane is also usable as a GIMP plugin.
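For example, XSane can be installed from binary packages:
[source,shell]
....
# pkg install xsane
....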
=== Scanner Permissions
In order to have access to the scanner, a user needs read and write permissions to the device node used by the scanner. In the previous example, the `USB` scanner uses the device node [.filename]#/dev/ugen0.2#, which is a symlink to the actual device node [.filename]#/dev/usb/0.2.0#. The symlink and the device node are owned, respectively, by the `wheel` and `operator` groups. While adding the user to these groups will allow access to the scanner, it is considered insecure to add a user to `wheel`. A better solution is to create a group and make the scanner device accessible to members of this group.
This example creates a group called `_usb_`:
[source,shell]
....
# pw groupadd usb
....
Then, make the [.filename]#/dev/ugen0.2# symlink and the [.filename]#/dev/usb/0.2.0# device node accessible to the `usb` group with write permissions of `0660` or `0664` by adding the following lines to [.filename]#/etc/devfs.rules#:
[.programlisting]
....
[system=5]
add path ugen0.2 mode 0660 group usb
add path usb/0.2.0 mode 0666 group usb
....
[NOTE]
====
The device node may change when devices are added or removed, so it may be preferable to give access to all USB devices using this ruleset instead:
[.programlisting]
....
[system=5]
add path 'ugen*' mode 0660 group usb
add path 'usb/*' mode 0666 group usb
....
====
Refer to man:devfs.rules[5] for more information about this file.
Next, enable the ruleset in [.filename]#/etc/rc.conf#:
[.programlisting]
....
devfs_system_ruleset="system"
....
And, restart the man:devfs[8] system:
[source,shell]
....
# service devfs restart
....
Finally, add users to the `_usb_` group in order to allow access to the scanner:
[source,shell]
....
# pw groupmod usb -m joe
....
For more details refer to man:pw[8].
diff --git a/documentation/content/en/books/handbook/network-servers/_index.adoc b/documentation/content/en/books/handbook/network-servers/_index.adoc
index 092271bf56..a3fdacf0b8 100644
--- a/documentation/content/en/books/handbook/network-servers/_index.adoc
+++ b/documentation/content/en/books/handbook/network-servers/_index.adoc
@@ -1,2563 +1,2564 @@
---
title: Chapter 30. Network Servers
part: IV. Network Communication
prev: books/handbook/mail
next: books/handbook/firewalls
+description: This chapter covers some of the more frequently used network services on UNIX systems
---
[[network-servers]]
= Network Servers
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 30
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../../images/books/handbook/network-servers/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/books/handbook/network-servers/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/books/handbook/network-servers/
endif::[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[network-servers-synopsis]]
== Synopsis
This chapter covers some of the more frequently used network services on UNIX(R) systems. This includes installing, configuring, testing, and maintaining many different types of network services. Example configuration files are included throughout this chapter for reference.
By the end of this chapter, readers will know:
* How to manage the inetd daemon.
* How to set up the Network File System (NFS).
* How to set up the Network Information Server (NIS) for centralizing and sharing user accounts.
* How to set FreeBSD up to act as an LDAP server or client.
* How to set up automatic network settings using DHCP.
* How to set up a Domain Name Server (DNS).
* How to set up the Apache HTTP Server.
* How to set up a File Transfer Protocol (FTP) server.
* How to set up a file and print server for Windows(R) clients using Samba.
* How to synchronize the time and date, and set up a time server using the Network Time Protocol (NTP).
* How to set up iSCSI.
This chapter assumes a basic knowledge of:
* [.filename]#/etc/rc# scripts.
* Network terminology.
* Installation of additional third-party software (crossref:ports[ports,Installing Applications: Packages and Ports]).
[[network-inetd]]
== The inetd Super-Server
The man:inetd[8] daemon is sometimes referred to as a Super-Server because it manages connections for many services. Instead of starting multiple applications, only the inetd service needs to be started. When a connection is received for a service that is managed by inetd, it determines which program the connection is destined for, spawns a process for that program, and passes it a socket. Using inetd for services that are not heavily used can reduce system load, when compared to running each daemon individually in stand-alone mode.
Primarily, inetd is used to spawn other daemons, but several trivial protocols are handled internally, such as chargen, auth, time, echo, discard, and daytime.
This section covers the basics of configuring inetd.
[[network-inetd-conf]]
=== Configuration File
Configuration of inetd is done by editing [.filename]#/etc/inetd.conf#. Each line of this configuration file represents an application which can be started by inetd. By default, every line starts with a comment (`#`), meaning that inetd is not listening for any applications. To configure inetd to listen for an application's connections, remove the `#` at the beginning of the line for that application.
After saving your edits, configure inetd to start at system boot by editing [.filename]#/etc/rc.conf#:
[.programlisting]
....
inetd_enable="YES"
....
To start inetd now, so that it listens for the service you configured, type:
[source,shell]
....
# service inetd start
....
Once inetd is started, it needs to be notified whenever a modification is made to [.filename]#/etc/inetd.conf#:
[[network-inetd-reread]]
.Reloading the inetd Configuration File
[example]
====
[source,shell]
....
# service inetd reload
....
====
Typically, the default entry for an application does not need to be edited beyond removing the `#`. In some situations, it may be appropriate to edit the default entry.
As an example, this is the default entry for man:ftpd[8] over IPv4:
[.programlisting]
....
ftp stream tcp nowait root /usr/libexec/ftpd ftpd -l
....
The seven columns in an entry are as follows:
[.programlisting]
....
service-name
socket-type
protocol
{wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]]
user[:group][/login-class]
server-program
server-program-arguments
....
where:
service-name::
The service name of the daemon to start. It must correspond to a service listed in [.filename]#/etc/services#. This determines which port inetd listens on for incoming connections to that service. When using a custom service, it must first be added to [.filename]#/etc/services#.
socket-type::
Either `stream`, `dgram`, `raw`, or `seqpacket`. Use `stream` for TCP connections and `dgram` for UDP services.
protocol::
Use one of the following protocol names:
+
[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Protocol Name
| Explanation
|tcp or tcp4
|TCP IPv4
|udp or udp4
|UDP IPv4
|tcp6
|TCP IPv6
|udp6
|UDP IPv6
|tcp46
|Both TCP IPv4 and IPv6
|udp46
|Both UDP IPv4 and IPv6
|===
{wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]]::
In this field, `wait` or `nowait` must be specified. `max-child`, `max-connections-per-ip-per-minute` and `max-child-per-ip` are optional.
+
`wait|nowait` indicates whether or not the service is able to handle its own socket. `dgram` socket types must use `wait` while `stream` daemons, which are usually multi-threaded, should use `nowait`. `wait` usually hands off multiple sockets to a single daemon, while `nowait` spawns a child daemon for each new socket.
+
The maximum number of child daemons inetd may spawn is set by `max-child`. For example, to limit ten instances of the daemon, place a `/10` after `nowait`. Specifying `/0` allows an unlimited number of children.
+
`max-connections-per-ip-per-minute` limits the number of connections from any particular IP address per minute. Once the limit is reached, further connections from this IP address will be dropped until the end of the minute. For example, a value of `/10` would limit any particular IP address to ten connection attempts per minute. `max-child-per-ip` limits the number of child processes that can be started on behalf of any single IP address at any moment. These options can limit excessive resource consumption and help to prevent Denial of Service attacks.
+
An example can be seen in the default settings for man:fingerd[8]:
+
[.programlisting]
....
finger stream tcp nowait/3/10 nobody /usr/libexec/fingerd fingerd -k -s
....
user::
The username the daemon will run as. Daemons typically run as `root`, `daemon`, or `nobody`.
server-program::
The full path to the daemon. If the daemon is a service provided by inetd internally, use `internal`.
server-program-arguments::
Used to specify any command arguments to be passed to the daemon on invocation. If the daemon is an internal service, use `internal`.
[[network-inetd-cmdline]]
=== Command-Line Options
Like most server daemons, inetd has a number of options that can be used to modify its behavior. By default, inetd is started with `-wW -C 60`. These options enable TCP wrappers for all services, including internal services, and prevent any IP address from requesting any service more than 60 times per minute.
To change the default options which are passed to inetd, add an entry for `inetd_flags` in [.filename]#/etc/rc.conf#. If inetd is already running, restart it with `service inetd restart`.
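For example, to keep TCP wrappers enabled but allow up to 120 connections per IP address per minute, set (a sketch; adjust the values as needed):
[.programlisting]
....
inetd_flags="-wW -C 120"
....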
The available rate limiting options are:
-c maximum::
Specify the default maximum number of simultaneous invocations of each service, where the default is unlimited. May be overridden on a per-service basis by using `max-child` in [.filename]#/etc/inetd.conf#.
-C rate::
Specify the default maximum number of times a service can be invoked from a single IP address per minute. May be overridden on a per-service basis by using `max-connections-per-ip-per-minute` in [.filename]#/etc/inetd.conf#.
-R rate::
Specify the maximum number of times a service can be invoked in one minute, where the default is `256`. A rate of `0` allows an unlimited number.
-s maximum::
Specify the maximum number of times a service can be invoked from a single IP address at any one time, where the default is unlimited. May be overridden on a per-service basis by using `max-child-per-ip` in [.filename]#/etc/inetd.conf#.
Additional options are available. Refer to man:inetd[8] for the full list of options.
[[network-inetd-security]]
=== Security Considerations
Many of the daemons which can be managed by inetd are not security-conscious. Some daemons, such as fingerd, can provide information that may be useful to an attacker. Only enable the services which are needed and monitor the system for excessive connection attempts. `max-connections-per-ip-per-minute`, `max-child` and `max-child-per-ip` can be used to limit such attacks.
By default, TCP wrappers is enabled. Consult man:hosts_access[5] for more information on placing TCP restrictions on various inetd invoked daemons.
[[network-nfs]]
== Network File System (NFS)
FreeBSD supports the Network File System (NFS), which allows a server to share directories and files with clients over a network. With NFS, users and programs can access files on remote systems as if they were stored locally.
NFS has many practical uses. Some of the more common uses include:
* Data that would otherwise be duplicated on each client can be kept in a single location and accessed by clients on the network.
* Several clients may need access to the [.filename]#/usr/ports/distfiles# directory. Sharing that directory allows for quick access to the source files without having to download them to each client.
* On large networks, it is often more convenient to configure a central NFS server on which all user home directories are stored. Users can log into a client anywhere on the network and have access to their home directories.
* Administration of NFS exports is simplified. For example, there is only one file system where security or backup policies must be set.
* Removable media storage devices can be used by other machines on the network. This reduces the number of devices throughout the network and provides a centralized location to manage their security. It is often more convenient to install software on multiple machines from a centralized installation media.
NFS consists of a server and one or more clients. The client remotely accesses the data that is stored on the server machine. In order for this to function properly, a few processes have to be configured and running.
These daemons must be running on the server:
[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Daemon
| Description
|nfsd
|The NFS daemon which services requests from NFS clients.
|mountd
|The NFS mount daemon which carries out requests received from nfsd.
|rpcbind
|This daemon allows NFS clients to discover which port the NFS server is using.
|===
Running man:nfsiod[8] on the client can improve performance, but is not required.
[[network-configuring-nfs]]
=== Configuring the Server
The file systems which the NFS server will share are specified in [.filename]#/etc/exports#. Each line in this file specifies a file system to be exported, which clients have access to that file system, and any access options. When adding entries to this file, each exported file system, its properties, and allowed hosts must occur on a single line. If no clients are listed in the entry, then any client on the network can mount that file system.
The following [.filename]#/etc/exports# entries demonstrate how to export file systems. The examples can be modified to match the file systems and client names on the reader's network. There are many options that can be used in this file, but only a few will be mentioned here. See man:exports[5] for the full list of options.
This example shows how to export [.filename]#/cdrom# to three hosts named _alpha_, _bravo_, and _charlie_:
[.programlisting]
....
/cdrom -ro alpha bravo charlie
....
The `-ro` flag makes the file system read-only, preventing clients from making any changes to the exported file system. This example assumes that the host names are either in DNS or in [.filename]#/etc/hosts#. Refer to man:hosts[5] if the network does not have a DNS server.
The next example exports [.filename]#/usr/home# to three clients by IP address. This can be useful for networks without DNS or [.filename]#/etc/hosts# entries. The `-alldirs` flag allows subdirectories to be mount points. In other words, it will not automatically mount the subdirectories, but will permit the client to mount the directories that are required as needed.
[.programlisting]
....
/usr/home -alldirs 10.0.0.2 10.0.0.3 10.0.0.4
....
This next example exports [.filename]#/a# so that two clients from different domains may access that file system. The `-maproot=root` option allows `root` on the remote system to write data on the exported file system as `root`. If `-maproot=root` is not specified, the client's `root` user will be mapped to the server's `nobody` account and will be subject to the access limitations defined for `nobody`.
[.programlisting]
....
/a -maproot=root host.example.com box.example.org
....
A client can only be specified once per file system. For example, if [.filename]#/usr# is a single file system, these entries would be invalid as both entries specify the same host:
[.programlisting]
....
# Invalid when /usr is one file system
/usr/src client
/usr/ports client
....
The correct format for this situation is to use one entry:
[.programlisting]
....
/usr/src /usr/ports client
....
The following is an example of a valid export list, where [.filename]#/usr# and [.filename]#/exports# are local file systems:
[.programlisting]
....
# Export src and ports to client01 and client02, but only
# client01 has root privileges on it
/usr/src /usr/ports -maproot=root client01
/usr/src /usr/ports client02
# The client machines have root and can mount anywhere
# on /exports. Anyone in the world can mount /exports/obj read-only
/exports -alldirs -maproot=root client01 client02
/exports/obj -ro
....
To enable the processes required by the NFS server at boot time, add these options to [.filename]#/etc/rc.conf#:
[.programlisting]
....
rpcbind_enable="YES"
nfs_server_enable="YES"
mountd_enable="YES"
....
The server can be started now by running this command:
[source,shell]
....
# service nfsd start
....
Whenever the NFS server is started, mountd also starts automatically. However, mountd only reads [.filename]#/etc/exports# when it is started. To make subsequent [.filename]#/etc/exports# edits take effect immediately, force mountd to reread it:
[source,shell]
....
# service mountd reload
....
=== Configuring the Client
To enable NFS clients, set this option in each client's [.filename]#/etc/rc.conf#:
[.programlisting]
....
nfs_client_enable="YES"
....
Then, run this command on each NFS client:
[source,shell]
....
# service nfsclient start
....
The client now has everything it needs to mount a remote file system. In these examples, the server's name is `server` and the client's name is `client`. To mount [.filename]#/home# on `server` to the [.filename]#/mnt# mount point on `client`:
[source,shell]
....
# mount server:/home /mnt
....
The files and directories in [.filename]#/home# will now be available on `client`, in the [.filename]#/mnt# directory.
To mount a remote file system each time the client boots, add it to [.filename]#/etc/fstab#:
[.programlisting]
....
server:/home /mnt nfs rw 0 0
....
Refer to man:fstab[5] for a description of all available options.
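For example, a share that should be mounted on demand rather than at every boot can use the standard `noauto` keyword; the host and paths here are examples only:

[.programlisting]
....
# Mounted manually with "mount /usr/ports" instead of at boot
server:/usr/ports    /usr/ports    nfs    rw,noauto    0    0
....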
=== Locking
Some applications require file locking to operate correctly. To enable locking, add these lines to [.filename]#/etc/rc.conf# on both the client and server:
[.programlisting]
....
rpc_lockd_enable="YES"
rpc_statd_enable="YES"
....
Then start both daemons:
[source,shell]
....
# service lockd start
# service statd start
....
If locking is not required on the server, the NFS client can be configured to lock locally by including `-L` when running mount. Refer to man:mount_nfs[8] for further details.
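A minimal sketch, invoking man:mount_nfs[8] directly with that flag; the server name and mount point are examples only:

[source,shell]
....
# mount_nfs -L server:/home /mnt
....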
[[network-autofs]]
=== Automating Mounts with man:autofs[5]
[NOTE]
====
The man:autofs[5] automount facility is supported starting with FreeBSD 10.1-RELEASE. To use the automounter functionality in older versions of FreeBSD, use man:amd[8] instead. This chapter only describes the man:autofs[5] automounter.
====
The man:autofs[5] facility is a common name for several components that, together, allow for automatic mounting of remote and local filesystems whenever a file or directory within that file system is accessed. It consists of the kernel component, man:autofs[5], and several userspace applications: man:automount[8], man:automountd[8] and man:autounmountd[8]. It serves as an alternative to man:amd[8] from previous FreeBSD releases. Amd is still provided for backward compatibility purposes, as the two use different map formats; the one used by autofs is the same as with other SVR4 automounters, such as the ones in Solaris, Mac OS X, and Linux.
The man:autofs[5] virtual filesystem is mounted on specified mountpoints by man:automount[8], usually invoked during boot.
Whenever a process attempts to access a file within the man:autofs[5] mountpoint, the kernel notifies the man:automountd[8] daemon and pauses the triggering process. The man:automountd[8] daemon handles kernel requests by finding the proper map and mounting the filesystem according to it, then signals the kernel to release the blocked process. The man:autounmountd[8] daemon automatically unmounts automounted filesystems after some time, unless they are still being used.
The primary autofs configuration file is [.filename]#/etc/auto_master#. It assigns individual maps to top-level mounts. For an explanation of [.filename]#auto_master# and the map syntax, refer to man:auto_master[5].
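For illustration, a hypothetical indirect map can be added to [.filename]#/etc/auto_master# and described in its own map file; all names and paths below are assumptions:

[.programlisting]
....
# /etc/auto_master entry: mount the map below on /nfs
/nfs    /etc/auto_nfs

# /etc/auto_nfs: "key [-options] location" entries;
# accessing /nfs/data mounts server:/export/data
data    server:/export/data
....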
There is a special automounter map mounted on [.filename]#/net#. When a file is accessed within this directory, man:autofs[5] looks up the corresponding remote mount and automatically mounts it. For instance, an attempt to access a file within [.filename]#/net/foobar/usr# would tell man:automountd[8] to mount the [.filename]#/usr# export from the host `foobar`.
.Mounting an Export with man:autofs[5]
[example]
====
In this example, `showmount -e` shows the exported file systems that can be mounted from the NFS server, `foobar`:
[source,shell]
....
% showmount -e foobar
Exports list on foobar:
/usr 10.10.10.0
/a 10.10.10.0
% cd /net/foobar/usr
....
====
The output from `showmount` shows [.filename]#/usr# as an export. When changing directories to [.filename]#/net/foobar/usr#, man:automountd[8] intercepts the request and attempts to resolve the hostname `foobar`. If successful, man:automountd[8] automatically mounts the source export.
To enable man:autofs[5] at boot time, add this line to [.filename]#/etc/rc.conf#:
[.programlisting]
....
autofs_enable="YES"
....
Then man:autofs[5] can be started by running:
[source,shell]
....
# service automount start
# service automountd start
# service autounmountd start
....
The man:autofs[5] map format is the same as in other operating systems. Information about this format from other sources can be useful, like the http://web.archive.org/web/20160813071113/http://images.apple.com/business/docs/Autofs.pdf[Mac OS X document].
Consult the man:automount[8], man:automountd[8], man:autounmountd[8], and man:auto_master[5] manual pages for more information.
[[network-nis]]
== Network Information System (NIS)
Network Information System (NIS) is designed to centralize administration of UNIX(R)-like systems such as Solaris(TM), HP-UX, AIX(R), Linux, NetBSD, OpenBSD, and FreeBSD. NIS was originally known as Yellow Pages but the name was changed due to trademark issues. This is the reason why NIS commands begin with `yp`.
NIS is a Remote Procedure Call (RPC)-based client/server system that allows a group of machines within an NIS domain to share a common set of configuration files. This permits a system administrator to set up NIS client systems with only minimal configuration data and to add, remove, or modify configuration data from a single location.
FreeBSD uses version 2 of the NIS protocol.
=== NIS Terms and Processes
The following table summarizes the terms and important processes used by NIS:
.NIS Terminology
[cols="1,1", frame="none", options="header"]
|===
| Term
| Description
|NIS domain name
|NIS servers and clients share an NIS domain name. Typically, this name does not have anything to do with DNS.
|man:rpcbind[8]
|This service enables RPC and must be running in order to run an NIS server or act as an NIS client.
|man:ypbind[8]
|This service binds an NIS client to its NIS server. It will take the NIS domain name and use RPC to connect to the server. It is the core of client/server communication in an NIS environment. If this service is not running on a client machine, it will not be able to access the NIS server.
|man:ypserv[8]
|This is the process for the NIS server. If this service stops running, the server will no longer be able to respond to NIS requests so hopefully, there is a slave server to take over. Some non-FreeBSD clients will not try to reconnect using a slave server and the ypbind process may need to be restarted on these clients.
|man:rpc.yppasswdd[8]
|This process only runs on NIS master servers. This daemon allows NIS clients to change their NIS passwords. If this daemon is not running, users will have to log in to the NIS master server and change their passwords there.
|===
=== Machine Types
There are three types of hosts in an NIS environment:
* NIS master server
+
This server acts as a central repository for host configuration information and maintains the authoritative copy of the files used by all of the NIS clients. The [.filename]#passwd#, [.filename]#group#, and other various files used by NIS clients are stored on the master server. While it is possible for one machine to be an NIS master server for more than one NIS domain, this type of configuration will not be covered in this chapter as it assumes a relatively small-scale NIS environment.
* NIS slave servers
+
NIS slave servers maintain copies of the NIS master's data files in order to provide redundancy. Slave servers also help to balance the load of the master server as NIS clients always attach to the NIS server which responds first.
* NIS clients
+
NIS clients authenticate against the NIS server during log on.
Information in many files can be shared using NIS. The [.filename]#master.passwd#, [.filename]#group#, and [.filename]#hosts# files are commonly shared via NIS. Whenever a process on a client needs information that would normally be found in these files locally, it makes a query to the NIS server that it is bound to instead.
=== Planning Considerations
This section describes a sample NIS environment which consists of 15 FreeBSD machines with no centralized point of administration. Each machine has its own [.filename]#/etc/passwd# and [.filename]#/etc/master.passwd#. These files are kept in sync with each other only through manual intervention. Currently, when a user is added to the lab, the process must be repeated on all 15 machines.
The configuration of the lab will be as follows:
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Machine name
| IP address
| Machine role
|`ellington`
|`10.0.0.2`
|NIS master
|`coltrane`
|`10.0.0.3`
|NIS slave
|`basie`
|`10.0.0.4`
|Faculty workstation
|`bird`
|`10.0.0.5`
|Client machine
|`cli[1-11]`
|`10.0.0.[6-17]`
|Other client machines
|===
If this is the first time an NIS scheme is being developed, it should be thoroughly planned ahead of time. Regardless of network size, several decisions need to be made as part of the planning process.
==== Choosing an NIS Domain Name
When a client broadcasts its requests for information, it includes the name of the NIS domain that it is part of. This is how multiple servers on one network can tell which server should answer which request. Think of the NIS domain name as the name for a group of hosts.
Some organizations choose to use their Internet domain name for their NIS domain name. This is not recommended as it can cause confusion when trying to debug network problems. The NIS domain name should be unique within the network and it is helpful if it describes the group of machines it represents. For example, the Art department at Acme Inc. might be in the "acme-art" NIS domain. This example will use the domain name `test-domain`.
However, some non-FreeBSD operating systems require the NIS domain name to be the same as the Internet domain name. If one or more machines on the network have this restriction, the Internet domain name _must_ be used as the NIS domain name.
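The domain name in effect on a running system can be checked or set for the current session with man:domainname[1]; the [.filename]#/etc/rc.conf# entries shown later in this section make the setting persistent across reboots:

[source,shell]
....
# domainname test-domain
# domainname
test-domain
....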
==== Physical Server Requirements
There are several things to keep in mind when choosing a machine to use as an NIS server. Since NIS clients depend upon the availability of the server, choose a machine that is not rebooted frequently. The NIS server should ideally be a standalone machine whose sole purpose is to be an NIS server. If the network is not heavily used, it is acceptable to put the NIS server on a machine running other services. However, if the NIS server becomes unavailable, it will adversely affect all NIS clients.
=== Configuring the NIS Master Server
The canonical copies of all NIS files are stored on the master server. The databases used to store the information are called NIS maps. In FreeBSD, these maps are stored in [.filename]#/var/yp/[domainname]# where [.filename]#[domainname]# is the name of the NIS domain. Since multiple domains are supported, it is possible to have several directories, one for each domain. Each domain will have its own independent set of maps.
NIS master and slave servers handle all NIS requests through man:ypserv[8]. This daemon is responsible for receiving incoming requests from NIS clients, translating the requested domain and map name to a path to the corresponding database file, and transmitting data from the database back to the client.
Setting up a master NIS server can be relatively straightforward, depending on environmental needs. Since FreeBSD provides built-in NIS support, it only needs to be enabled by adding the following lines to [.filename]#/etc/rc.conf#:
[.programlisting]
....
nisdomainname="test-domain" <.>
nis_server_enable="YES" <.>
nis_yppasswdd_enable="YES" <.>
....
<.> This line sets the NIS domain name to `test-domain`.
<.> This automates the start up of the NIS server processes when the system boots.
<.> This enables the man:rpc.yppasswdd[8] daemon so that users can change their NIS password from a client machine.
Care must be taken in a multi-server domain where the server machines are also NIS clients. It is generally a good idea to force the servers to bind to themselves rather than allowing them to broadcast bind requests and possibly become bound to each other. Strange failure modes can result if one server goes down and others are dependent upon it. Eventually, all the clients will time out and attempt to bind to other servers, but the delay involved can be considerable and the failure mode is still present since the servers might bind to each other all over again.
A server that is also a client can be forced to bind to a particular server by adding these additional lines to [.filename]#/etc/rc.conf#:
[.programlisting]
....
nis_client_enable="YES" <.>
nis_client_flags="-S test-domain,server" <.>
....
<.> This enables the machine to also run as an NIS client.
<.> This makes ypbind connect only to the listed server for the `test-domain` domain instead of broadcasting; here, the server binds to itself.
After saving the edits, type `/etc/netstart` to restart the network and apply the values defined in [.filename]#/etc/rc.conf#. Before initializing the NIS maps, start man:ypserv[8]:
[source,shell]
....
# service ypserv start
....
==== Initializing the NIS Maps
NIS maps are generated from the configuration files in [.filename]#/etc# on the NIS master, with one exception: [.filename]#/etc/master.passwd#. This is to prevent the propagation of passwords to all the servers in the NIS domain. Therefore, before the NIS maps are initialized, configure the primary password files:
[source,shell]
....
# cp /etc/master.passwd /var/yp/master.passwd
# cd /var/yp
# vi master.passwd
....
It is advisable to remove all entries for system accounts as well as any user accounts that do not need to be propagated to the NIS clients, such as `root` and other administrative accounts.
[NOTE]
====
Ensure that [.filename]#/var/yp/master.passwd# is neither group nor world readable by setting its permissions to `600`.
====
After completing this task, initialize the NIS maps. FreeBSD includes the man:ypinit[8] script to do this. When generating maps for the master server, include `-m` and specify the NIS domain name:
[source,shell]
....
ellington# ypinit -m test-domain
Server Type: MASTER Domain: test-domain
Creating an YP server will require that you answer a few questions.
Questions will all be asked at the beginning of the procedure.
Do you want this procedure to quit on non-fatal errors? [y/n: n] n
Ok, please remember to go back and redo manually whatever fails.
If not, something might not work.
At this point, we have to construct a list of this domains YP servers.
ellington is already known as master server.
Please continue to add any slave servers, one per line. When you are
done with the list, type a <control D>.
master server : ellington
next host to add: coltrane
next host to add: ^D
The current list of NIS servers looks like this:
ellington
coltrane
Is this correct? [y/n: y] y
[..output from map generation..]
NIS Map update completed.
ellington has been setup as an YP master server without any errors.
....
This will create [.filename]#/var/yp/Makefile# from [.filename]#/var/yp/Makefile.dist#. By default, this file assumes that the environment has a single NIS server with only FreeBSD clients. Since `test-domain` has a slave server, edit this line in [.filename]#/var/yp/Makefile# so that it begins with a comment (`#`):
[.programlisting]
....
NOPUSH = "True"
....
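After the edit, the line should look like this:

[.programlisting]
....
#NOPUSH = "True"
....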
==== Adding New Users
Every time a new user is created, the user account must be added to the master NIS server and the NIS maps rebuilt. Until this occurs, the new user will not be able to log in anywhere except on the NIS master. For example, to add the new user `jsmith` to the `test-domain` domain, run these commands on the master server:
[source,shell]
....
# pw useradd jsmith
# cd /var/yp
# make test-domain
....
The user could also be added using `adduser jsmith` instead of `pw useradd jsmith`.
=== Setting Up an NIS Slave Server
To set up an NIS slave server, log on to the slave server and edit [.filename]#/etc/rc.conf# as for the master server. Do not generate any NIS maps, as these already exist on the master server. When running `ypinit` on the slave server, use `-s` (for slave) instead of `-m` (for master). This option requires the name of the NIS master in addition to the domain name, as seen in this example:
[source,shell]
....
coltrane# ypinit -s ellington test-domain
Server Type: SLAVE Domain: test-domain Master: ellington
Creating an YP server will require that you answer a few questions.
Questions will all be asked at the beginning of the procedure.
Do you want this procedure to quit on non-fatal errors? [y/n: n] n
Ok, please remember to go back and redo manually whatever fails.
If not, something might not work.
There will be no further questions. The remainder of the procedure
should take a few minutes, to copy the databases from ellington.
Transferring netgroup...
ypxfr: Exiting: Map successfully transferred
Transferring netgroup.byuser...
ypxfr: Exiting: Map successfully transferred
Transferring netgroup.byhost...
ypxfr: Exiting: Map successfully transferred
Transferring master.passwd.byuid...
ypxfr: Exiting: Map successfully transferred
Transferring passwd.byuid...
ypxfr: Exiting: Map successfully transferred
Transferring passwd.byname...
ypxfr: Exiting: Map successfully transferred
Transferring group.bygid...
ypxfr: Exiting: Map successfully transferred
Transferring group.byname...
ypxfr: Exiting: Map successfully transferred
Transferring services.byname...
ypxfr: Exiting: Map successfully transferred
Transferring rpc.bynumber...
ypxfr: Exiting: Map successfully transferred
Transferring rpc.byname...
ypxfr: Exiting: Map successfully transferred
Transferring protocols.byname...
ypxfr: Exiting: Map successfully transferred
Transferring master.passwd.byname...
ypxfr: Exiting: Map successfully transferred
Transferring networks.byname...
ypxfr: Exiting: Map successfully transferred
Transferring networks.byaddr...
ypxfr: Exiting: Map successfully transferred
Transferring netid.byname...
ypxfr: Exiting: Map successfully transferred
Transferring hosts.byaddr...
ypxfr: Exiting: Map successfully transferred
Transferring protocols.bynumber...
ypxfr: Exiting: Map successfully transferred
Transferring ypservers...
ypxfr: Exiting: Map successfully transferred
Transferring hosts.byname...
ypxfr: Exiting: Map successfully transferred
coltrane has been setup as an YP slave server without any errors.
Remember to update map ypservers on ellington.
....
This will generate a directory on the slave server called [.filename]#/var/yp/test-domain# which contains copies of the NIS master server's maps. Adding these [.filename]#/etc/crontab# entries on each slave server will force the slaves to sync their maps with the maps on the master server:
[.programlisting]
....
20 * * * * root /usr/libexec/ypxfr passwd.byname
21 * * * * root /usr/libexec/ypxfr passwd.byuid
....
These entries are not mandatory because the master server automatically attempts to push any map changes to its slaves. However, since clients may depend upon the slave server to provide correct password information, it is recommended to force frequent password map updates. This is especially important on busy networks where map updates might not always complete.
To finish the configuration, run `/etc/netstart` on the slave server in order to start the NIS services.
=== Setting Up an NIS Client
An NIS client binds to an NIS server using man:ypbind[8]. This daemon broadcasts RPC requests on the local network. These requests specify the domain name configured on the client. If an NIS server in the same domain receives one of the broadcasts, it will respond to ypbind, which will record the server's address. If there are several servers available, the client will use the address of the first server to respond and will direct all of its NIS requests to that server. The client will automatically ping the server on a regular basis to make sure it is still available. If it fails to receive a reply within a reasonable amount of time, ypbind will mark the domain as unbound and begin broadcasting again in the hopes of locating another server.
To configure a FreeBSD machine to be an NIS client:
[.procedure]
====
. Edit [.filename]#/etc/rc.conf# and add the following lines in order to set the NIS domain name and start man:ypbind[8] during network startup:
+
[.programlisting]
....
nisdomainname="test-domain"
nis_client_enable="YES"
....
. To import all possible password entries from the NIS server, use `vipw` to remove all user accounts except one from [.filename]#/etc/master.passwd#. When removing the accounts, keep in mind that at least one local account should remain and this account should be a member of `wheel`. If there is a problem with NIS, this local account can be used to log in remotely, become the superuser, and fix the problem. Before saving the edits, add the following line to the end of the file:
+
[.programlisting]
....
+:::::::::
....
+
This line configures the client to provide anyone with a valid account in the NIS server's password maps an account on the client. There are many ways to configure the NIS client by modifying this line. One method is described in <<network-netgroups>>. For more detailed reading, refer to the book `Managing NFS and NIS`, published by O'Reilly Media.
. To import all possible group entries from the NIS server, add this line to [.filename]#/etc/group#:
+
[.programlisting]
....
+:*::
....
====
To start the NIS client immediately, execute the following commands as the superuser:
[source,shell]
....
# /etc/netstart
# service ypbind start
....
After completing these steps, running `ypcat passwd` on the client should show the server's [.filename]#passwd# map.
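The exact output depends on the accounts in the map, but it should resemble the following; the entries here are hypothetical:

[source,shell]
....
% ypcat passwd
jsmith:*:1001:1001:Joe Smith:/home/jsmith:/bin/sh
jdoe:*:1002:1002:Jane Doe:/home/jdoe:/bin/tcsh
....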
=== NIS Security
Since RPC is a broadcast-based service, any system running ypbind within the same domain can retrieve the contents of the NIS maps. To prevent unauthorized transactions, man:ypserv[8] supports a feature called "securenets" which can be used to restrict access to a given set of hosts. By default, this information is stored in [.filename]#/var/yp/securenets#, unless man:ypserv[8] is started with `-p` and an alternate path. This file contains entries that consist of a network specification and a network mask separated by white space. Lines starting with `#` are considered to be comments. A sample [.filename]#securenets# might look like this:
[.programlisting]
....
# allow connections from local host -- mandatory
127.0.0.1 255.255.255.255
# allow connections from any host
# on the 192.168.128.0 network
192.168.128.0 255.255.255.0
# allow connections from any host
# between 10.0.0.0 to 10.0.15.255
# this includes the machines in the testlab
10.0.0.0 255.255.240.0
....
If man:ypserv[8] receives a request from an address that matches one of these rules, it will process the request normally. If the address fails to match a rule, the request will be ignored and a warning message will be logged. If the [.filename]#securenets# file does not exist, `ypserv` will allow connections from any host.
crossref:security[tcpwrappers,"TCP Wrapper"] is an alternate mechanism for providing access control instead of [.filename]#securenets#. While either access control mechanism adds some security, they are both vulnerable to "IP spoofing" attacks. All NIS-related traffic should be blocked at the firewall.
Servers using [.filename]#securenets# may fail to serve legitimate NIS clients with archaic TCP/IP implementations. Some of these implementations set all host bits to zero when doing broadcasts or fail to observe the subnet mask when calculating the broadcast address. While some of these problems can be fixed by changing the client configuration, other problems may force the retirement of these client systems or the abandonment of [.filename]#securenets#.
The use of TCP Wrapper increases the latency of the NIS server. The additional delay may be long enough to cause timeouts in client programs, especially in busy networks with slow NIS servers. If one or more clients suffer from latency, convert those clients into NIS slave servers and force them to bind to themselves.
==== Barring Some Users
In this example, the `basie` system is a faculty workstation within the NIS domain. The [.filename]#passwd# map on the master NIS server contains accounts for both faculty and students. This section demonstrates how to allow faculty logins on this system while refusing student logins.
To prevent specified users from logging on to a system, even if they are present in the NIS database, use `vipw` to add `-_username_` with the correct number of colons towards the end of [.filename]#/etc/master.passwd# on the client, where _username_ is the username of a user to bar from logging in. The line with the blocked user must be before the `+` line that allows NIS users. In this example, `bill` is barred from logging on to `basie`:
[source,shell]
....
basie# cat /etc/master.passwd
root:[password]:0:0::0:0:The super-user:/root:/bin/csh
toor:[password]:0:0::0:0:The other super-user:/root:/bin/sh
daemon:*:1:1::0:0:Owner of many system processes:/root:/usr/sbin/nologin
operator:*:2:5::0:0:System &:/:/usr/sbin/nologin
bin:*:3:7::0:0:Binaries Commands and Source,,,:/:/usr/sbin/nologin
tty:*:4:65533::0:0:Tty Sandbox:/:/usr/sbin/nologin
kmem:*:5:65533::0:0:KMem Sandbox:/:/usr/sbin/nologin
games:*:7:13::0:0:Games pseudo-user:/usr/games:/usr/sbin/nologin
news:*:8:8::0:0:News Subsystem:/:/usr/sbin/nologin
man:*:9:9::0:0:Mister Man Pages:/usr/share/man:/usr/sbin/nologin
bind:*:53:53::0:0:Bind Sandbox:/:/usr/sbin/nologin
uucp:*:66:66::0:0:UUCP pseudo-user:/var/spool/uucppublic:/usr/libexec/uucp/uucico
xten:*:67:67::0:0:X-10 daemon:/usr/local/xten:/usr/sbin/nologin
pop:*:68:6::0:0:Post Office Owner:/nonexistent:/usr/sbin/nologin
nobody:*:65534:65534::0:0:Unprivileged user:/nonexistent:/usr/sbin/nologin
-bill:::::::::
+:::::::::
basie#
....
[[network-netgroups]]
=== Using Netgroups
Barring specified users from logging on to individual systems becomes unscalable on larger networks and quickly loses the main benefit of NIS: _centralized_ administration.
Netgroups were developed to handle large, complex networks with hundreds of users and machines. Their use is comparable to UNIX(R) groups, where the main difference is the lack of a numeric ID and the ability to define a netgroup by including both user accounts and other netgroups.
To expand on the example used in this chapter, the NIS domain will be extended to add the users and systems shown in the following two tables:
.Additional Users
[cols="1,1", frame="none", options="header"]
|===
| User Name(s)
| Description
|`alpha`, `beta`
|IT department employees
|`charlie`, `delta`
|IT department apprentices
|`echo`, `foxtrott`, `golf`, ...
|employees
|`able`, `baker`, ...
|interns
|===
.Additional Systems
[cols="1,1", frame="none", options="header"]
|===
| Machine Name(s)
| Description
|`war`, `death`, `famine`, `pollution`
|Only IT employees are allowed to log in to these servers.
|`pride`, `greed`, `envy`, `wrath`, `lust`, `sloth`
|All members of the IT department are allowed to log in to these servers.
|`one`, `two`, `three`, `four`, ...
|Ordinary workstations used by employees.
|`trashcan`
|A very old machine without any critical data. Even interns are allowed to use this system.
|===
When using netgroups to configure this scenario, each user is assigned to one or more netgroups and logins are then allowed or forbidden for all members of the netgroup. When adding a new machine, login restrictions must be defined for all netgroups. When a new user is added, the account must be added to one or more netgroups. If the NIS setup is planned carefully, only one central configuration file needs modification to grant or deny access to machines.
The first step is the initialization of the NIS `netgroup` map. In FreeBSD, this map is not created by default. On the NIS master server, use an editor to create a map named [.filename]#/var/yp/netgroup#.
This example creates four netgroups to represent IT employees, IT apprentices, employees, and interns:
[.programlisting]
....
IT_EMP (,alpha,test-domain) (,beta,test-domain)
IT_APP (,charlie,test-domain) (,delta,test-domain)
USERS (,echo,test-domain) (,foxtrott,test-domain) \
(,golf,test-domain)
INTERNS (,able,test-domain) (,baker,test-domain)
....
Each entry configures a netgroup. The first column in an entry is the name of the netgroup. Each set of parentheses represents either a group of one or more users or the name of another netgroup. When specifying a user, the three comma-delimited fields inside each set represent:
. The name of the host(s) where the other fields representing the user are valid. If a hostname is not specified, the entry is valid on all hosts.
. The name of the account that belongs to this netgroup.
. The NIS domain for the account. Accounts may be imported from other NIS domains into a netgroup.
If a group contains multiple users, separate each user with whitespace. Additionally, each field may contain wildcards. See man:netgroup[5] for details.
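For example, a hypothetical netgroup whose members are valid only on the host `war` could be written as:

[.programlisting]
....
# alpha and beta, valid only on host war (illustrative group)
WARADM (war,alpha,test-domain) (war,beta,test-domain)
....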
Netgroup names longer than 8 characters should not be used. The names are case sensitive and using capital letters for netgroup names is an easy way to distinguish between user, machine and netgroup names.
Some non-FreeBSD NIS clients cannot handle netgroups containing more than 15 entries. This limit may be circumvented by creating several sub-netgroups with 15 users or fewer and a real netgroup consisting of the sub-netgroups, as seen in this example:
[.programlisting]
....
BIGGRP1 (,joe1,domain) (,joe2,domain) (,joe3,domain) [...]
BIGGRP2 (,joe16,domain) (,joe17,domain) [...]
BIGGRP3 (,joe31,domain) (,joe32,domain)
BIGGROUP BIGGRP1 BIGGRP2 BIGGRP3
....
Repeat this process if more than 225 (15 times 15) users exist within a single netgroup.
To activate and distribute the new NIS map:
[source,shell]
....
ellington# cd /var/yp
ellington# make
....
This will generate the three NIS maps [.filename]#netgroup#, [.filename]#netgroup.byhost# and [.filename]#netgroup.byuser#. Use the map key option of man:ypcat[1] to check if the new NIS maps are available:
[source,shell]
....
ellington% ypcat -k netgroup
ellington% ypcat -k netgroup.byhost
ellington% ypcat -k netgroup.byuser
....
The output of the first command should resemble the contents of [.filename]#/var/yp/netgroup#. The second command only produces output if host-specific netgroups were created. The third command is used to get the list of netgroups for a user.
To configure a client, use man:vipw[8] to specify the name of the netgroup. For example, on the server named `war`, replace this line:
[.programlisting]
....
+:::::::::
....
with
[.programlisting]
....
+@IT_EMP:::::::::
....
This specifies that only the users defined in the netgroup `IT_EMP` will be imported into this system's password database, and only those users are allowed to log in to this system.
This configuration also applies to the `~` function of the shell and all routines which convert between user names and numerical user IDs. In other words, `cd ~_user_` will not work, `ls -l` will show the numerical ID instead of the username, and `find . -user joe -print` will fail with the message `No such user`. To fix this, import all user entries without allowing them to log in to the servers. This can be achieved by adding an extra line:
[.programlisting]
....
+:::::::::/usr/sbin/nologin
....
This line configures the client to import all entries but to replace the shell in those entries with [.filename]#/usr/sbin/nologin#.
Make sure that the extra line is placed _after_ `+@IT_EMP:::::::::`. Otherwise, all user accounts imported from NIS will have [.filename]#/usr/sbin/nologin# as their login shell and no one will be able to log in to the system.
To configure the less important servers, replace the old `+:::::::::` on the servers with these lines:
[.programlisting]
....
+@IT_EMP:::::::::
+@IT_APP:::::::::
+:::::::::/usr/sbin/nologin
....
The corresponding lines for the workstations would be:
[.programlisting]
....
+@IT_EMP:::::::::
+@USERS:::::::::
+:::::::::/usr/sbin/nologin
....
NIS supports the creation of netgroups from other netgroups which can be useful if the policy regarding user access changes. One possibility is the creation of role-based netgroups. For example, one might create a netgroup called `BIGSRV` to define the login restrictions for the important servers, another netgroup called `SMALLSRV` for the less important servers, and a third netgroup called `USERBOX` for the workstations. Each of these netgroups contains the netgroups that are allowed to log in to these machines. The new entries for the NIS `netgroup` map would look like this:
[.programlisting]
....
BIGSRV IT_EMP IT_APP
SMALLSRV IT_EMP IT_APP ITINTERN
USERBOX IT_EMP ITINTERN USERS
....
This method of defining login restrictions works reasonably well when it is possible to define groups of machines with identical restrictions. Unfortunately, this is the exception and not the rule. Most of the time, the ability to define login restrictions on a per-machine basis is required.
Machine-specific netgroup definitions are another way to deal with policy changes. In this scenario, the [.filename]#/etc/master.passwd# of each system contains two lines starting with "+". The first line adds a netgroup with the accounts allowed to log in to this machine and the second line adds all other accounts with [.filename]#/usr/sbin/nologin# as the shell. It is recommended to use the "ALL-CAPS" version of the hostname as the name of the netgroup:
[.programlisting]
....
+@BOXNAME:::::::::
+:::::::::/usr/sbin/nologin
....
Once this task is completed on all the machines, there is no longer a need to modify the local versions of [.filename]#/etc/master.passwd# ever again. All further changes can be handled by modifying the NIS map. Here is an example of a possible `netgroup` map for this scenario:
[.programlisting]
....
# Define groups of users first
IT_EMP (,alpha,test-domain) (,beta,test-domain)
IT_APP (,charlie,test-domain) (,delta,test-domain)
DEPT1 (,echo,test-domain) (,foxtrott,test-domain)
DEPT2 (,golf,test-domain) (,hotel,test-domain)
DEPT3 (,india,test-domain) (,juliet,test-domain)
ITINTERN (,kilo,test-domain) (,lima,test-domain)
D_INTERNS (,able,test-domain) (,baker,test-domain)
#
# Now, define some groups based on roles
USERS DEPT1 DEPT2 DEPT3
BIGSRV IT_EMP IT_APP
SMALLSRV IT_EMP IT_APP ITINTERN
USERBOX IT_EMP ITINTERN USERS
#
# And a group for special tasks
# Allow echo and golf to access our anti-virus-machine
SECURITY IT_EMP (,echo,test-domain) (,golf,test-domain)
#
# machine-based netgroups
# Our main servers
WAR BIGSRV
FAMINE BIGSRV
# User india needs access to this server
POLLUTION BIGSRV (,india,test-domain)
#
# This one is really important and needs more access restrictions
DEATH IT_EMP
#
# The anti-virus-machine mentioned above
ONE SECURITY
#
# Restrict a machine to a single user
TWO (,hotel,test-domain)
# [...more groups to follow]
....
It may not always be advisable to use machine-based netgroups. When deploying dozens or hundreds of systems, role-based netgroups instead of machine-based netgroups may be used to keep the size of the NIS map within reasonable limits.
=== Password Formats
NIS requires that all hosts within an NIS domain use the same format for encrypting passwords. If users have trouble authenticating on an NIS client, it may be due to a differing password format. In a heterogeneous network, the format must be supported by all operating systems, where DES is the lowest common standard.
To check which format a server or client is using, look at this section of [.filename]#/etc/login.conf#:
[.programlisting]
....
default:\
:passwd_format=des:\
:copyright=/etc/COPYRIGHT:\
[Further entries elided]
....
In this example, the system is using the DES format. Other possible values are `blf` for Blowfish and `md5` for MD5 encrypted passwords.
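For example, a host switching to MD5 passwords would change that line as follows before rebuilding the login capability database as described next:

[.programlisting]
....
	:passwd_format=md5:\
....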
If the format on a host needs to be edited to match the one being used in the NIS domain, the login capability database must be rebuilt after saving the change:
[source,shell]
....
# cap_mkdb /etc/login.conf
....
[NOTE]
====
The format of passwords for existing user accounts will not be updated until each user changes their password _after_ the login capability database is rebuilt.
====
[[network-ldap]]
== Lightweight Directory Access Protocol (LDAP)
The Lightweight Directory Access Protocol (LDAP) is an application layer protocol used to access, modify, and authenticate objects using a distributed directory information service. Think of it as a phone or record book which stores several levels of hierarchical, homogeneous information. It is used in Active Directory and OpenLDAP networks and allows users to access several levels of internal information utilizing a single account. For example, email authentication, pulling employee contact information, and internal website authentication might all make use of a single user account in the LDAP server's record base.
This section provides a quick start guide for configuring an LDAP server on a FreeBSD system. It assumes that the administrator already has a design plan which includes the type of information to store, what that information will be used for, which users should have access to that information, and how to secure this information from unauthorized access.
=== LDAP Terminology and Structure
LDAP uses several terms which should be understood before starting the configuration. All directory entries consist of a group of _attributes_. Each of these attribute sets contains a unique identifier known as a _Distinguished Name_ (DN) which is normally built from several other attributes such as the common or _Relative Distinguished Name_ (RDN). Similar to how directories have absolute and relative paths, consider a DN as an absolute path and the RDN as the relative path.
An example LDAP entry looks like the following. This example searches for the entry for the specified user account (`uid`), organizational unit (`ou`), and organization (`o`):
[source,shell]
....
% ldapsearch -xb "uid=trhodes,ou=users,o=example.com"
# extended LDIF
#
# LDAPv3
# base <uid=trhodes,ou=users,o=example.com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# trhodes, users, example.com
dn: uid=trhodes,ou=users,o=example.com
mail: trhodes@example.com
cn: Tom Rhodes
uid: trhodes
telephoneNumber: (123) 456-7890
# search result
search: 2
result: 0 Success
# numResponses: 2
# numEntries: 1
....
This example entry shows the values for the `dn`, `mail`, `cn`, `uid`, and `telephoneNumber` attributes. The `cn` attribute is the RDN.
More information about LDAP and its terminology can be found at http://www.openldap.org/doc/admin24/intro.html[http://www.openldap.org/doc/admin24/intro.html].
[[ldap-config]]
=== Configuring an LDAP Server
FreeBSD does not provide a built-in LDAP server. Begin the configuration by installing the package:net/openldap-server[] package or port:
[source,shell]
....
# pkg install openldap-server
....
There is a large set of default options enabled in the link:{linux-users}#software[package]. Review them by running `pkg info openldap-server`. If they are not sufficient (for example if SQL support is needed), please consider recompiling the port using the appropriate crossref:ports[ports-using,framework].
The installation creates the directory [.filename]#/var/db/openldap-data# to hold the data. The directory to store the certificates must be created:
[source,shell]
....
# mkdir /usr/local/etc/openldap/private
....
The next phase is to configure the Certificate Authority. The following commands must be executed from [.filename]#/usr/local/etc/openldap/private#. This is important as the file permissions need to be restrictive and users should not have access to these files. More detailed information about certificates and their parameters can be found in crossref:security[openssl,"OpenSSL"]. To create the Certificate Authority, start with this command and follow the prompts:
[source,shell]
....
# openssl req -days 365 -nodes -new -x509 -keyout ca.key -out ../ca.crt
....
The entries for the prompts may be generic _except_ for the `Common Name`. This entry must be _different_ from the system hostname. If this will be a self-signed certificate, prefix the hostname with `CA` for Certificate Authority.
The next task is to create a certificate signing request and a private key. Input this command and follow the prompts:
[source,shell]
....
# openssl req -days 365 -nodes -new -keyout server.key -out server.csr
....
During the certificate generation process, be sure to correctly set the `Common Name` attribute. The Certificate Signing Request must be signed with the Certificate Authority in order to be used as a valid certificate:
[source,shell]
....
# openssl x509 -req -days 365 -in server.csr -out ../server.crt -CA ../ca.crt -CAkey ca.key -CAcreateserial
....
The final part of the certificate generation process is to generate and sign the client certificates:
[source,shell]
....
# openssl req -days 365 -nodes -new -keyout client.key -out client.csr
# openssl x509 -req -days 3650 -in client.csr -out ../client.crt -CA ../ca.crt -CAkey ca.key
....
Remember to use the same `Common Name` attribute when prompted. When finished, ensure that a total of eight (8) new files have been generated by the preceding commands.
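The files produced by the commands above, with paths relative to [.filename]#/usr/local/etc/openldap#, should be:

[.programlisting]
....
ca.crt   server.crt   client.crt
private/ca.key   private/server.key   private/server.csr
private/client.key   private/client.csr
....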
The daemon running the OpenLDAP server is [.filename]#slapd#. Its configuration is performed through [.filename]#slapd.ldif#: the old [.filename]#slapd.conf# has been deprecated by OpenLDAP.
http://www.openldap.org/doc/admin24/slapdconf2.html[Configuration examples] for [.filename]#slapd.ldif# are available and can also be found in [.filename]#/usr/local/etc/openldap/slapd.ldif.sample#. Options are documented in slapd-config(5). Each section of [.filename]#slapd.ldif#, like all the other LDAP attribute sets, is uniquely identified through a DN. Be sure that no blank lines are left between the `dn:` statement and the desired end of the section. In the following example, TLS will be used to implement a secure channel. The first section represents the global configuration:
[.programlisting]
....
#
# See slapd-config(5) for details on configuration options.
# This file should NOT be world readable.
#
dn: cn=config
objectClass: olcGlobal
cn: config
#
#
# Define global ACLs to disable default read access.
#
olcArgsFile: /var/run/openldap/slapd.args
olcPidFile: /var/run/openldap/slapd.pid
olcTLSCertificateFile: /usr/local/etc/openldap/server.crt
olcTLSCertificateKeyFile: /usr/local/etc/openldap/private/server.key
olcTLSCACertificateFile: /usr/local/etc/openldap/ca.crt
#olcTLSCipherSuite: HIGH
olcTLSProtocolMin: 3.1
olcTLSVerifyClient: never
....
The Certificate Authority, server certificate and server private key files must be specified here. It is recommended to let the clients choose the security cipher, and therefore to omit the `olcTLSCipherSuite` option (which is incompatible with TLS clients other than [.filename]#openssl#). The `olcTLSProtocolMin` option lets the server require a minimum security level and is recommended. While verification is mandatory for the server, it is not for the client: `olcTLSVerifyClient: never`.
The second section is about the backend modules and can be configured as follows:
[.programlisting]
....
#
# Load dynamic backend modules:
#
dn: cn=module,cn=config
objectClass: olcModuleList
cn: module
olcModulepath: /usr/local/libexec/openldap
olcModuleload: back_mdb.la
#olcModuleload: back_bdb.la
#olcModuleload: back_hdb.la
#olcModuleload: back_ldap.la
#olcModuleload: back_passwd.la
#olcModuleload: back_shell.la
....
The third section is devoted to loading the needed LDIF schemas to be used by the databases; they are essential.
[.programlisting]
....
dn: cn=schema,cn=config
objectClass: olcSchemaConfig
cn: schema
include: file:///usr/local/etc/openldap/schema/core.ldif
include: file:///usr/local/etc/openldap/schema/cosine.ldif
include: file:///usr/local/etc/openldap/schema/inetorgperson.ldif
include: file:///usr/local/etc/openldap/schema/nis.ldif
....
Next, the frontend configuration section:
[.programlisting]
....
# Frontend settings
#
dn: olcDatabase={-1}frontend,cn=config
objectClass: olcDatabaseConfig
objectClass: olcFrontendConfig
olcDatabase: {-1}frontend
olcAccess: to * by * read
#
# Sample global access control policy:
# Root DSE: allow anyone to read it
# Subschema (sub)entry DSE: allow anyone to read it
# Other DSEs:
# Allow self write access
# Allow authenticated users read access
# Allow anonymous users to authenticate
#
#olcAccess: to dn.base="" by * read
#olcAccess: to dn.base="cn=Subschema" by * read
#olcAccess: to *
# by self write
# by users read
# by anonymous auth
#
# if no access controls are present, the default policy
# allows anyone and everyone to read anything but restricts
# updates to rootdn. (e.g., "access to * by * read")
#
# rootdn can always read and write EVERYTHING!
#
olcPasswordHash: {SSHA}
# {SSHA} is already the default for olcPasswordHash
....
Another section is devoted to the _configuration backend_; the only way to later access the OpenLDAP server configuration is as a global super-user.
[.programlisting]
....
dn: olcDatabase={0}config,cn=config
objectClass: olcDatabaseConfig
olcDatabase: {0}config
olcAccess: to * by * none
olcRootPW: {SSHA}iae+lrQZILpiUdf16Z9KmDmSwT77Dj4U
....
The default administrator username is `cn=config`. Type [.filename]#slappasswd# in a shell, choose a password and use its hash in `olcRootPW`. If this option is not specified now, before [.filename]#slapd.ldif# is imported, no one will later be able to modify the _global configuration_ section.
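A typical [.filename]#slappasswd# session looks like this; the hash printed on the last line (shown here as a placeholder) is the value to use for `olcRootPW`:

[source,shell]
....
% slappasswd
New password:
Re-enter new password:
{SSHA}xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
....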
The last section is about the database backend:
[.programlisting]
....
#######################################################################
# LMDB database definitions
#######################################################################
#
dn: olcDatabase=mdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcMdbConfig
olcDatabase: mdb
olcDbMaxSize: 1073741824
olcSuffix: dc=domain,dc=example
olcRootDN: cn=mdbadmin,dc=domain,dc=example
# Cleartext passwords, especially for the rootdn, should
# be avoided. See slappasswd(8) and slapd-config(5) for details.
# Use of strong authentication encouraged.
olcRootPW: {SSHA}X2wHvIWDk6G76CQyCMS1vDCvtICWgn0+
# The database directory MUST exist prior to running slapd AND
# should only be accessible by the slapd and slap tools.
# Mode 700 recommended.
olcDbDirectory: /var/db/openldap-data
# Indices to maintain
olcDbIndex: objectClass eq
....
This database hosts the _actual contents_ of the LDAP directory. Types other than `mdb` are available. Its super-user, not to be confused with the global one, is configured here: a (possibly custom) username in `olcRootDN` and the password hash in `olcRootPW`; [.filename]#slappasswd# can be used as before.
This http://www.openldap.org/devel/gitweb.cgi?p=openldap.git;a=tree;f=tests/data/regressions/its8444;h=8a5e808e63b0de3d2bdaf2cf34fecca8577ca7fd;hb=HEAD[repository] contains four examples of [.filename]#slapd.ldif#. To convert an existing [.filename]#slapd.conf# into [.filename]#slapd.ldif#, refer to http://www.openldap.org/doc/admin24/slapdconf2.html[this page] (please note that this may introduce some unneeded options).
When the configuration is completed, [.filename]#slapd.ldif# must be placed in an empty directory. It is recommended to create it as:
[source,shell]
....
# mkdir /usr/local/etc/openldap/slapd.d/
....
Import the configuration database:
[source,shell]
....
# /usr/local/sbin/slapadd -n0 -F /usr/local/etc/openldap/slapd.d/ -l /usr/local/etc/openldap/slapd.ldif
....
Start the [.filename]#slapd# daemon:
[source,shell]
....
# /usr/local/libexec/slapd -F /usr/local/etc/openldap/slapd.d/
....
Option `-d` can be used for debugging, as specified in slapd(8). To verify that the server is running and working:
[source,shell]
....
# ldapsearch -x -b '' -s base '(objectclass=*)' namingContexts
# extended LDIF
#
# LDAPv3
# base <> with scope baseObject
# filter: (objectclass=*)
# requesting: namingContexts
#
#
dn:
namingContexts: dc=domain,dc=example
# search result
search: 2
result: 0 Success
# numResponses: 2
# numEntries: 1
....
The server certificate must still be trusted. If this has not been done before, follow these instructions. Install the OpenSSL package or port:
[source,shell]
....
# pkg install openssl
....
From the directory where [.filename]#ca.crt# is stored (in this example, [.filename]#/usr/local/etc/openldap#), run:
[source,shell]
....
# c_rehash .
....
Both the CA and the server certificate are now correctly recognized in their respective roles. To verify this, run this command from the [.filename]#server.crt# directory:
[source,shell]
....
# openssl verify -verbose -CApath . server.crt
....
If [.filename]#slapd# was running, restart it. As stated in [.filename]#/usr/local/etc/rc.d/slapd#, to properly run [.filename]#slapd# at boot the following lines must be added to [.filename]#/etc/rc.conf#:
[.programlisting]
....
slapd_enable="YES"
slapd_flags='-h "ldapi://%2fvar%2frun%2fopenldap%2fldapi/
ldap://0.0.0.0/"'
slapd_sockets="/var/run/openldap/ldapi"
slapd_cn_config="YES"
....
[.filename]#slapd# does not provide debugging at boot. Check [.filename]#/var/log/debug.log#, the output of `dmesg -a`, and [.filename]#/var/log/messages# for this purpose.
The following example adds the group `team` and the user `john` to the `domain.example` LDAP database, which is still empty. First, create the file [.filename]#domain.ldif#:
[source,shell]
....
# cat domain.ldif
dn: dc=domain,dc=example
objectClass: dcObject
objectClass: organization
o: domain.example
dc: domain
dn: ou=groups,dc=domain,dc=example
objectClass: top
objectClass: organizationalunit
ou: groups
dn: ou=users,dc=domain,dc=example
objectClass: top
objectClass: organizationalunit
ou: users
dn: cn=team,ou=groups,dc=domain,dc=example
objectClass: top
objectClass: posixGroup
cn: team
gidNumber: 10001
dn: uid=john,ou=users,dc=domain,dc=example
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: John McUser
uid: john
uidNumber: 10001
gidNumber: 10001
homeDirectory: /home/john/
loginShell: /usr/bin/bash
userPassword: secret
....
See the OpenLDAP documentation for more details. Use [.filename]#slappasswd# to replace the plain text password `secret` with a hash in `userPassword`. The path specified as `loginShell` must exist in all the systems where `john` is allowed to log in. Finally, use the `mdb` administrator to modify the database:
[source,shell]
....
# ldapadd -W -D "cn=mdbadmin,dc=domain,dc=example" -f domain.ldif
....
Modifications to the _global configuration_ section can only be performed by the global super-user. For example, assume that the option `olcTLSCipherSuite: HIGH:MEDIUM:SSLv3` was initially specified and must now be deleted. First, create a file that contains the following:
[source,shell]
....
# cat global_mod
dn: cn=config
changetype: modify
delete: olcTLSCipherSuite
....
Then, apply the modifications:
[source,shell]
....
# ldapmodify -f global_mod -x -D "cn=config" -W
....
When asked, provide the password chosen in the _configuration backend_ section. The username is not required: here, `cn=config` represents the DN of the database section to be modified. Alternatively, use `ldapmodify` to delete a single line of the database, or `ldapdelete` to delete a whole entry.
If something goes wrong, or if the global super-user cannot access the configuration backend, it is possible to delete and re-write the whole configuration:
[source,shell]
....
# rm -rf /usr/local/etc/openldap/slapd.d/
....
[.filename]#slapd.ldif# can then be edited and imported again. Follow this procedure only when no other solution is available.
This is the configuration of the server only. The same machine can also host an LDAP client, with its own separate configuration.
[[network-dhcp]]
== Dynamic Host Configuration Protocol (DHCP)
The Dynamic Host Configuration Protocol (DHCP) allows a system to connect to a network in order to be assigned the necessary addressing information for communication on that network. FreeBSD includes the OpenBSD version of `dhclient` which is used by the client to obtain the addressing information. FreeBSD does not install a DHCP server, but several servers are available in the FreeBSD Ports Collection. The DHCP protocol is fully described in http://www.freesoft.org/CIE/RFC/2131/[RFC 2131]. Informational resources are also available at http://www.isc.org/downloads/dhcp/[isc.org/downloads/dhcp/].
This section describes how to use the built-in DHCP client. It then describes how to install and configure a DHCP server.
[NOTE]
====
In FreeBSD, the man:bpf[4] device is needed by both the DHCP server and DHCP client. This device is included in the [.filename]#GENERIC# kernel that is installed with FreeBSD. Users who prefer to create a custom kernel need to keep this device if DHCP is used.
It should be noted that [.filename]#bpf# also allows privileged users to run network packet sniffers on that system.
====
=== Configuring a DHCP Client
DHCP client support is included in the FreeBSD installer, making it easy to configure a newly installed system to automatically receive its networking addressing information from an existing DHCP server. Refer to crossref:bsdinstall[bsdinstall-post,"Accounts, Time Zone, Services and Hardening"] for examples of network configuration.
When `dhclient` is executed on the client machine, it begins broadcasting requests for configuration information. By default, these requests use UDP port 68. The server replies on UDP port 67, giving the client an IP address and other relevant network information such as a subnet mask, default gateway, and DNS server addresses. This information is in the form of a DHCP "lease" and is valid for a configurable time. This allows stale IP addresses for clients no longer connected to the network to automatically be reused. DHCP clients can obtain a great deal of information from the server. An exhaustive list may be found in man:dhcp-options[5].
By default, when a FreeBSD system boots, its DHCP client runs in the background, or _asynchronously_. Other startup scripts continue to run while the DHCP process completes, which speeds up system startup.
Background DHCP works well when the DHCP server responds quickly to the client's requests. However, DHCP may take a long time to complete on some systems. If network services attempt to run before DHCP has assigned the network addressing information, they will fail. Using DHCP in _synchronous_ mode prevents this problem as it pauses startup until the DHCP configuration has completed.
This line in [.filename]#/etc/rc.conf# is used to configure background or asynchronous mode:
[.programlisting]
....
ifconfig_fxp0="DHCP"
....
This line may already exist if the system was configured to use DHCP during installation. Replace the _fxp0_ shown in these examples with the name of the interface to be dynamically configured, as described in crossref:config[config-network-setup,“Setting Up Network Interface Cards”].
To instead configure the system to use synchronous mode, and to pause during startup while DHCP completes, use "`SYNCDHCP`":
[.programlisting]
....
ifconfig_fxp0="SYNCDHCP"
....
Additional client options are available. Search for `dhclient` in man:rc.conf[5] for details.
The DHCP client uses the following files:
* [.filename]#/etc/dhclient.conf#
+
The configuration file used by `dhclient`. Typically, this file contains only comments as the defaults are suitable for most clients. This configuration file is described in man:dhclient.conf[5].
* [.filename]#/sbin/dhclient#
+
More information about the command itself can be found in man:dhclient[8].
* [.filename]#/sbin/dhclient-script#
+
The FreeBSD-specific DHCP client configuration script. It is described in man:dhclient-script[8], but should not need any user modification to function properly.
* [.filename]#/var/db/dhclient.leases.interface#
+
The DHCP client keeps a database of valid leases in this file, which is written as a log and is described in man:dhclient.leases[5].
[[network-dhcp-server]]
=== Installing and Configuring a DHCP Server
This section demonstrates how to configure a FreeBSD system to act as a DHCP server using the Internet Systems Consortium (ISC) implementation of the DHCP server. This implementation and its documentation can be installed using the package:net/isc-dhcp43-server[] package or port.
The installation of package:net/isc-dhcp43-server[] installs a sample configuration file. Copy [.filename]#/usr/local/etc/dhcpd.conf.example# to [.filename]#/usr/local/etc/dhcpd.conf# and make any edits to this new file.
The configuration file is composed of declarations for subnets and hosts which define the information that is provided to DHCP clients. For example, these lines configure the following:
[.programlisting]
....
option domain-name "example.org";<.>
option domain-name-servers ns1.example.org;<.>
option subnet-mask 255.255.255.0;<.>
default-lease-time 600;<.>
max-lease-time 72400;<.>
ddns-update-style none;<.>
subnet 10.254.239.0 netmask 255.255.255.224 {
range 10.254.239.10 10.254.239.20;<.>
option routers rtr-239-0-1.example.org, rtr-239-0-2.example.org;<.>
}
host fantasia {
hardware ethernet 08:00:07:26:c0:a5;<.>
fixed-address fantasia.fugue.com;<.>
}
....
<.> This option specifies the default search domain that will be provided to clients. Refer to man:resolv.conf[5] for more information.
<.> This option specifies a comma separated list of DNS servers that the client should use. They can be listed by their Fully Qualified Domain Names (FQDN), as seen in the example, or by their IP addresses.
<.> The subnet mask that will be provided to clients.
<.> The default lease expiry time in seconds. A client can be configured to override this value.
<.> The maximum allowed length of time, in seconds, for a lease. Should a client request a longer lease, a lease will still be issued, but it will only be valid for `max-lease-time`.
<.> The default of `none` disables dynamic DNS updates. Changing this to `interim` configures the DHCP server to update a DNS server whenever it hands out a lease so that the DNS server knows which IP addresses are associated with which computers in the network. Do not change the default setting unless the DNS server has been configured to support dynamic DNS.
<.> This line creates a pool of available IP addresses which are reserved for allocation to DHCP clients. The range of addresses must be valid for the network or subnet specified in the previous line.
<.> Declares the default gateway that is valid for the network or subnet specified before the opening `{` bracket.
<.> Specifies the hardware MAC address of a client so that the DHCP server can recognize the client when it makes a request.
<.> Specifies that this host should always be given the same IP address. Using the hostname is correct, since the DHCP server will resolve the hostname before returning the lease information.
This configuration file supports many more options. Refer to dhcpd.conf(5), installed with the server, for details and examples.
Once the configuration of [.filename]#dhcpd.conf# is complete, enable the DHCP server in [.filename]#/etc/rc.conf#:
[.programlisting]
....
dhcpd_enable="YES"
dhcpd_ifaces="dc0"
....
Replace the `dc0` with the interface (or interfaces, separated by whitespace) that the DHCP server should listen on for DHCP client requests.
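For example, to listen for client requests on two hypothetical interfaces:
[.programlisting]
....
dhcpd_ifaces="em0 igb0"
....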
Start the server by issuing the following command:
[source,shell]
....
# service isc-dhcpd start
....
Any future changes to the configuration of the server will require the dhcpd service to be stopped and then started using man:service[8].
The DHCP server uses the following files. Note that the manual pages are installed with the server software.
* [.filename]#/usr/local/sbin/dhcpd#
+
More information about the dhcpd server can be found in dhcpd(8).
* [.filename]#/usr/local/etc/dhcpd.conf#
+
The server configuration file needs to contain all the information that should be provided to clients, along with information regarding the operation of the server. This configuration file is described in dhcpd.conf(5).
* [.filename]#/var/db/dhcpd.leases#
+
The DHCP server keeps a database of leases it has issued in this file, which is written as a log. Refer to dhcpd.leases(5), which gives a slightly longer description.
* [.filename]#/usr/local/sbin/dhcrelay#
+
This daemon is used in advanced environments where one DHCP server forwards a request from a client to another DHCP server on a separate network. If this functionality is required, install the package:net/isc-dhcp43-relay[] package or port. The installation includes dhcrelay(8) which provides more detail.
[[network-dns]]
== Domain Name System (DNS)
Domain Name System (DNS) is the protocol through which domain names are mapped to IP addresses, and vice versa. DNS is coordinated across the Internet through a somewhat complex system of authoritative root, Top Level Domain (TLD), and other smaller-scale name servers, which host and cache individual domain information. It is not necessary to run a name server to perform DNS lookups on a system.
The following table describes some of the terms associated with DNS:
.DNS Terminology
[cols="1,1", frame="none", options="header"]
|===
| Term
| Definition
|Forward DNS
|Mapping of hostnames to IP addresses.
|Origin
|Refers to the domain covered in a particular zone file.
|Resolver
|A system process through which a machine queries a name server for zone information.
|Reverse DNS
|Mapping of IP addresses to hostnames.
|Root zone
|The beginning of the Internet zone hierarchy. All zones fall under the root zone, similar to how all files in a file system fall under the root directory.
|Zone
|An individual domain, subdomain, or portion of the DNS administered by the same authority.
|===
Examples of zones:
* `.` is how the root zone is usually referred to in documentation.
* `org.` is a Top Level Domain (TLD) under the root zone.
* `example.org.` is a zone under the `org.` TLD.
* `1.168.192.in-addr.arpa` is a zone referencing all IP addresses which fall under the `192.168.1.*` IP address space.
As one can see, the more specific part of a hostname appears to its left. For example, `example.org.` is more specific than `org.`, as `org.` is more specific than the root zone. The layout of each part of a hostname is much like a file system: the [.filename]#/dev# directory falls within the root, and so on.
=== Reasons to Run a Name Server
Name servers generally come in two forms: authoritative name servers, and caching (also known as resolving) name servers.
An authoritative name server is needed when:
* One wants to serve DNS information to the world, replying authoritatively to queries.
* A domain, such as `example.org`, is registered and IP addresses need to be assigned to hostnames under it.
* An IP address block requires reverse DNS entries (IP to hostname).
* A backup or second name server, called a slave, will reply to queries.
A caching name server is needed when:
* A local DNS server may cache and respond more quickly than querying an outside name server.
When one queries for `www.FreeBSD.org`, the resolver usually queries the uplink ISP's name server, and retrieves the reply. With a local, caching DNS server, the query only has to be made once to the outside world by the caching DNS server. Additional queries will not have to go outside the local network, since the information is cached locally.
=== DNS Server Configuration
Unbound is provided in the FreeBSD base system. By default, it will provide DNS resolution to the local machine only. While the base system package can be configured to provide resolution services beyond the local machine, it is recommended that such requirements be addressed by installing Unbound from the FreeBSD Ports Collection.
To enable Unbound, add the following to [.filename]#/etc/rc.conf#:
[.programlisting]
....
local_unbound_enable="YES"
....
Any existing nameservers in [.filename]#/etc/resolv.conf# will be configured as forwarders in the new Unbound configuration.
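As a sketch, assuming a single existing nameserver at 192.0.2.1, the generated forwarder configuration (typically written to [.filename]#/var/unbound/forward.conf#) would contain an entry along these lines:
[.programlisting]
....
# Forward all queries to the nameserver taken from /etc/resolv.conf (illustrative)
forward-zone:
        name: "."
        forward-addr: 192.0.2.1
....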
[NOTE]
====
If any of the listed nameservers do not support DNSSEC, local DNS resolution will fail. Be sure to test each nameserver and remove any that fail the test. The following command will show the trust tree or a failure for a nameserver running on `192.168.1.1`:
====
[source,shell]
....
% drill -S FreeBSD.org @192.168.1.1
....
Once each nameserver is confirmed to support DNSSEC, start Unbound:
[source,shell]
....
# service local_unbound onestart
....
This will take care of updating [.filename]#/etc/resolv.conf# so that queries for DNSSEC secured domains will now work. For example, run the following to validate the FreeBSD.org DNSSEC trust tree:
[source,shell]
....
% drill -S FreeBSD.org
;; Number of trusted keys: 1
;; Chasing: freebsd.org. A
DNSSEC Trust tree:
freebsd.org. (A)
|---freebsd.org. (DNSKEY keytag: 36786 alg: 8 flags: 256)
|---freebsd.org. (DNSKEY keytag: 32659 alg: 8 flags: 257)
|---freebsd.org. (DS keytag: 32659 digest type: 2)
|---org. (DNSKEY keytag: 49587 alg: 7 flags: 256)
|---org. (DNSKEY keytag: 9795 alg: 7 flags: 257)
|---org. (DNSKEY keytag: 21366 alg: 7 flags: 257)
|---org. (DS keytag: 21366 digest type: 1)
| |---. (DNSKEY keytag: 40926 alg: 8 flags: 256)
| |---. (DNSKEY keytag: 19036 alg: 8 flags: 257)
|---org. (DS keytag: 21366 digest type: 2)
|---. (DNSKEY keytag: 40926 alg: 8 flags: 256)
|---. (DNSKEY keytag: 19036 alg: 8 flags: 257)
;; Chase successful
....
[[network-apache]]
== Apache HTTP Server
The open source Apache HTTP Server is the most widely used web server. FreeBSD does not install this web server by default, but it can be installed from the package:www/apache24[] package or port.
This section summarizes how to configure and start version 2._x_ of the Apache HTTP Server on FreeBSD. For more detailed information about Apache 2.X and its configuration directives, refer to http://httpd.apache.org/[httpd.apache.org].
=== Configuring and Starting Apache
In FreeBSD, the main Apache HTTP Server configuration file is installed as [.filename]#/usr/local/etc/apache2x/httpd.conf#, where _x_ represents the version number. This ASCII text file begins comment lines with a `#`. The most frequently modified directives are:
`ServerRoot "/usr/local"`::
Specifies the default directory hierarchy for the Apache installation. Binaries are stored in the [.filename]#bin# and [.filename]#sbin# subdirectories of the server root and configuration files are stored in the [.filename]#etc/apache2x# subdirectory.
`ServerAdmin you@example.com`::
Change this to the email address that should receive reports of problems with the server. This address also appears on some server-generated pages, such as error documents.
`ServerName www.example.com:80`::
Allows an administrator to set a hostname which is sent back to clients for the server. For example, `www` can be used instead of the actual hostname. If the system does not have a registered DNS name, enter its IP address instead. If the server will listen on an alternate port, change `80` to the alternate port number.
`DocumentRoot "/usr/local/www/apache2_x_/data"`::
The directory where documents will be served from. By default, all requests are taken from this directory, but symbolic links and aliases may be used to point to other locations.
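For instance, a minimal set of edits to these directives, using hypothetical values, might read:
[.programlisting]
....
ServerAdmin webmaster@example.org
ServerName www.example.org:80
DocumentRoot "/usr/local/www/apache24/data"
....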
It is always a good idea to make a backup copy of the default Apache configuration file before making changes. When the configuration of Apache is complete, save the file and verify the configuration using `apachectl`. Running `apachectl configtest` should return `Syntax OK`.
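For example:
[source,shell]
....
# apachectl configtest
Syntax OK
....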
To launch Apache at system startup, add the following line to [.filename]#/etc/rc.conf#:
[.programlisting]
....
apache24_enable="YES"
....
If Apache should be started with non-default options, the following line may be added to [.filename]#/etc/rc.conf# to specify the needed flags:
[.programlisting]
....
apache24_flags=""
....
If apachectl does not report configuration errors, start `httpd` now:
[source,shell]
....
# service apache24 start
....
The `httpd` service can be tested by entering `http://_localhost_` in a web browser, replacing _localhost_ with the fully-qualified domain name of the machine running `httpd`. The default web page that is displayed is [.filename]#/usr/local/www/apache24/data/index.html#.
The Apache configuration can be tested for errors after making subsequent configuration changes while `httpd` is running using the following command:
[source,shell]
....
# service apache24 configtest
....
[NOTE]
====
It is important to note that `configtest` is not an man:rc[8] standard, and should not be expected to work for all startup scripts.
====
=== Virtual Hosting
Virtual hosting allows multiple websites to run on one Apache server. The virtual hosts can be _IP-based_ or _name-based_. IP-based virtual hosting uses a different IP address for each website. Name-based virtual hosting uses the client's HTTP/1.1 headers to figure out the hostname, which allows the websites to share the same IP address.
To set up Apache to use name-based virtual hosting, add a `VirtualHost` block for each website. For example, for the webserver named `www.domain.tld` with a virtual domain of `www.someotherdomain.tld`, add the following entries to [.filename]#httpd.conf#:
[.programlisting]
....
<VirtualHost *>
ServerName www.domain.tld
DocumentRoot /www/domain.tld
</VirtualHost>
<VirtualHost *>
ServerName www.someotherdomain.tld
DocumentRoot /www/someotherdomain.tld
</VirtualHost>
....
For each virtual host, replace the values for `ServerName` and `DocumentRoot` with the values to be used.
For more information about setting up virtual hosts, consult the official Apache documentation at: http://httpd.apache.org/docs/vhosts/[http://httpd.apache.org/docs/vhosts/].
=== Apache Modules
Apache uses modules to augment the functionality provided by the basic server. Refer to http://httpd.apache.org/docs/current/mod/[http://httpd.apache.org/docs/current/mod/] for a complete listing of and the configuration details for the available modules.
In FreeBSD, some modules can be compiled with the package:www/apache24[] port. Type `make config` within [.filename]#/usr/ports/www/apache24# to see which modules are available and which are enabled by default. If the module is not compiled with the port, the FreeBSD Ports Collection provides an easy way to install many modules. This section describes three of the most commonly used modules.
==== SSL support
At one point in time, support for SSL inside of Apache required a secondary module called [.filename]#mod_ssl#. This is no longer the case and the default install of Apache comes with SSL built into the web server. An example of how to enable support for SSL websites is available in the installed file [.filename]#httpd-ssl.conf# inside of the [.filename]#/usr/local/etc/apache24/extra# directory. Inside this directory is also a sample file named [.filename]#ssl.conf-sample#. It is recommended that both files be evaluated to properly set up secure websites in the Apache web server.
After the configuration of SSL is complete, the following line must be uncommented in the main [.filename]#httpd.conf# to activate the changes on the next restart or reload of Apache:
[.programlisting]
....
#Include etc/apache24/extra/httpd-ssl.conf
....
[WARNING]
====
SSL version two and version three have known vulnerability issues. It is highly recommended that TLS versions 1.2 and 1.3 be enabled in place of the older SSL options. This can be accomplished by setting the following options in the [.filename]#ssl.conf#:
====
[.programlisting]
....
SSLProtocol all -SSLv3 -SSLv2 +TLSv1.2 +TLSv1.3
SSLProxyProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
....
To complete the configuration of SSL in the web server, uncomment the following line to ensure that the configuration will be pulled into Apache during restart or reload:
[.programlisting]
....
# Secure (SSL/TLS) connections
Include etc/apache24/extra/httpd-ssl.conf
....
The following lines must also be uncommented in the [.filename]#httpd.conf# to fully support SSL in Apache:
[.programlisting]
....
LoadModule authn_socache_module libexec/apache24/mod_authn_socache.so
LoadModule socache_shmcb_module libexec/apache24/mod_socache_shmcb.so
LoadModule ssl_module libexec/apache24/mod_ssl.so
....
The next step is to work with a certificate authority to have the appropriate certificates installed on the system. This will set up a chain of trust for the site and prevent any warnings of self-signed certificates.
==== [.filename]#mod_perl#
The [.filename]#mod_perl# module makes it possible to write Apache modules in Perl. In addition, the persistent interpreter embedded in the server avoids the overhead of starting an external interpreter and the penalty of Perl start-up time.
[.filename]#mod_perl# can be installed using the package:www/mod_perl2[] package or port. Documentation for using this module can be found at http://perl.apache.org/docs/2.0/index.html[http://perl.apache.org/docs/2.0/index.html].
==== [.filename]#mod_php#
_PHP: Hypertext Preprocessor_ (PHP) is a general-purpose scripting language that is especially suited for web development. Capable of being embedded into HTML, its syntax draws upon C, Java(TM), and Perl with the intention of allowing web developers to write dynamically generated webpages quickly.
Support for PHP for Apache, and any other feature written in the language, can be added by installing the appropriate port.
For all supported versions, search the package database using `pkg`:
[source,shell]
....
# pkg search php
....
A list will be displayed including the versions and additional features they provide. The components are completely modular, meaning features are enabled by installing the appropriate port. To install PHP version 7.4 for Apache, issue the following command:
[source,shell]
....
# pkg install mod_php74
....
If any dependency packages need to be installed, they will be installed as well.
By default, PHP will not be enabled. The following lines will need to be added to the Apache configuration file located in [.filename]#/usr/local/etc/apache24# to make it active:
[.programlisting]
....
<FilesMatch "\.php$">
SetHandler application/x-httpd-php
</FilesMatch>
<FilesMatch "\.phps$">
SetHandler application/x-httpd-php-source
</FilesMatch>
....
In addition, the `DirectoryIndex` in the configuration file will also need to be updated and Apache will either need to be restarted or reloaded for the changes to take effect.
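For instance, a hypothetical `DirectoryIndex` entry that serves [.filename]#index.php# before [.filename]#index.html# might read:
[.programlisting]
....
<IfModule dir_module>
    DirectoryIndex index.php index.html
</IfModule>
....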
Support for many of the PHP features may also be installed by using `pkg`. For example, to install support for XML or SSL, install their respective ports:
[source,shell]
....
# pkg install php74-xml php74-openssl
....
As before, the Apache configuration will need to be reloaded for the changes to take effect, even in cases where it was just a module install.
To perform a graceful restart to reload the configuration, issue the following command:
[source,shell]
....
# apachectl graceful
....
Once the install is complete, there are two methods of obtaining the installed PHP support modules and the environmental information of the build. The first is to install the full PHP binary and run the command to gain the information:
[source,shell]
....
# pkg install php74
....
[source,shell]
....
# php -i |less
....
It is necessary to pass the output to a pager, such as `more` or `less`, to more easily digest the amount of output.
Finally, to make any changes to the global configuration of PHP there is a well-documented file installed as [.filename]#/usr/local/etc/php.ini#. At the time of installation, this file will not exist; instead, two versions are provided to choose from: [.filename]#php.ini-development# and [.filename]#php.ini-production#. These are starting points to assist administrators in their deployment.
==== HTTP2 Support
Apache support for the HTTP2 protocol is included by default when installing the port with `pkg`. The new version of HTTP includes many improvements over the previous version, including utilizing a single connection to a website, reducing overall roundtrips of TCP connections. Also, packet header data is compressed and HTTP2 requires encryption by default.
When Apache is configured to only use HTTP2, web browsers will require secure, encrypted HTTPS connections. When Apache is configured to use both versions, HTTP1.1 will be considered a fallback option if any issues arise during the connection.
While this change does require administrators to make changes, they are positive and equate to a more secure Internet for everyone. The changes are only required for sites not currently implementing SSL and TLS.
[NOTE]
====
This configuration depends on the previous sections, including TLS support. It is recommended those instructions be followed before continuing with this configuration.
====
Start the process by enabling the http2 module: uncomment the line in [.filename]#/usr/local/etc/apache24/httpd.conf# and replace the mpm_prefork module with mpm_event, as the former does not support HTTP2:
[.programlisting]
....
LoadModule http2_module libexec/apache24/mod_http2.so
LoadModule mpm_event_module libexec/apache24/mod_mpm_event.so
....
[NOTE]
====
There is a separate [.filename]#mod_http2# port that is available. It exists to deliver security and bug fixes quicker than the module installed with the bundled [.filename]#apache24# port. It is not required for HTTP2 support but is available. When installed, the [.filename]#mod_h2.so# should be used in place of [.filename]#mod_http2.so# in the Apache configuration.
====
There are two methods to implement HTTP2 in Apache: globally, for all sites and every VirtualHost running on the system, or individually, for specific VirtualHosts. To enable HTTP2 globally, add the following line under the ServerName directive:
[.programlisting]
....
Protocols h2 http/1.1
....
[NOTE]
====
To enable HTTP2 over plaintext, use `h2 h2c http/1.1` in [.filename]#httpd.conf#.
====
Having the h2c here will allow plaintext HTTP2 data to pass on the system but is not recommended. In addition, using the http/1.1 here will allow fallback to the HTTP1.1 version of the protocol should it be needed by the system.
To enable HTTP2 for individual VirtualHosts, add the same line within the VirtualHost directive in either [.filename]#httpd.conf# or [.filename]#httpd-ssl.conf#.
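As a sketch, reusing the hostname from the virtual hosting example above, an HTTPS virtual host with HTTP2 enabled might look like this (certificate directives omitted):
[.programlisting]
....
<VirtualHost *:443>
    ServerName www.domain.tld
    DocumentRoot /www/domain.tld
    # SSL certificate directives omitted for brevity
    Protocols h2 http/1.1
</VirtualHost>
....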
Reload the configuration using the `apachectl` [parameter]#reload# command and test the configuration by using either of the following methods after visiting one of the hosted pages:
[source,shell]
....
# grep "HTTP/2.0" /var/log/httpd-access.log
....
This should return something similar to the following:
[.programlisting]
....
192.168.1.205 - - [18/Oct/2020:18:34:36 -0400] "GET / HTTP/2.0" 304 -
192.0.2.205 - - [18/Oct/2020:19:19:57 -0400] "GET / HTTP/2.0" 304 -
192.0.0.205 - - [18/Oct/2020:19:20:52 -0400] "GET / HTTP/2.0" 304 -
192.0.2.205 - - [18/Oct/2020:19:23:10 -0400] "GET / HTTP/2.0" 304 -
....
The other method is using the web browser's built-in site debugger or `tcpdump`; however, using either method is beyond the scope of this document.
Support for HTTP2 reverse proxy connections is provided by the [.filename]#mod_proxy_http2.so# module. When configuring ProxyPass or RewriteRule [P] statements, they should use h2:// for the connection.
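For example, a hypothetical reverse proxy entry might read:
[.programlisting]
....
ProxyPass "/app/" "h2://backend.example.org/app/"
....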
=== Dynamic Websites
In addition to mod_perl and mod_php, other languages are available for creating dynamic web content. These include Django and Ruby on Rails.
==== Django
Django is a BSD-licensed framework designed to allow developers to write high performance, elegant web applications quickly. It provides an object-relational mapper so that data types are developed as Python objects. A rich dynamic database-access API is provided for those objects without the developer ever having to write SQL. It also provides an extensible template system so that the logic of the application is separated from the HTML presentation.
Django depends on [.filename]#mod_python# and an SQL database engine. In FreeBSD, the package:www/py-django[] port automatically installs [.filename]#mod_python# and supports the PostgreSQL, MySQL, or SQLite databases, with the default being SQLite. To change the database engine, type `make config` within [.filename]#/usr/ports/www/py-django#, then install the port.
Once Django is installed, the application will need a project directory along with the Apache configuration in order to use the embedded Python interpreter. This interpreter is used to call the application for specific URLs on the site.
To configure Apache to pass requests for certain URLs to the web application, add the following to [.filename]#httpd.conf#, specifying the full path to the project directory:
[.programlisting]
....
<Location "/">
SetHandler python-program
PythonPath "['/dir/to/the/django/packages/'] + sys.path"
PythonHandler django.core.handlers.modpython
SetEnv DJANGO_SETTINGS_MODULE mysite.settings
PythonAutoReload On
PythonDebug On
</Location>
....
Refer to https://docs.djangoproject.com[https://docs.djangoproject.com] for more information on how to use Django.
==== Ruby on Rails
Ruby on Rails is another open source web framework that provides a full development stack. It is optimized to make web developers more productive and capable of writing powerful applications quickly. On FreeBSD, it can be installed using the package:www/rubygem-rails[] package or port.
Refer to http://guides.rubyonrails.org[http://guides.rubyonrails.org] for more information on how to use Ruby on Rails.
[[network-ftp]]
== File Transfer Protocol (FTP)
The File Transfer Protocol (FTP) provides users with a simple way to transfer files to and from an FTP server. FreeBSD includes FTP server software, ftpd, in the base system.
FreeBSD provides several configuration files for controlling access to the FTP server. This section summarizes these files. Refer to man:ftpd[8] for more details about the built-in FTP server.
=== Configuration
The most important configuration step is deciding which accounts will be allowed access to the FTP server. A FreeBSD system has a number of system accounts which should not be allowed FTP access. The list of users disallowed any FTP access can be found in [.filename]#/etc/ftpusers#. By default, it includes system accounts. Additional users that should not be allowed access to FTP can be added.
In some cases it may be desirable to restrict the access of some users without preventing them completely from using FTP. This can be accomplished by creating [.filename]#/etc/ftpchroot# as described in man:ftpchroot[5]. This file lists users and groups subject to FTP access restrictions.
To enable anonymous FTP access to the server, create a user named `ftp` on the FreeBSD system. Users will then be able to log on to the FTP server with a username of `ftp` or `anonymous`. When prompted for the password, any input will be accepted, but by convention, an email address should be used as the password. The FTP server will call man:chroot[2] when an anonymous user logs in, to restrict access to only the home directory of the `ftp` user.
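A minimal sketch of creating such a user with man:pw[8], assuming [.filename]#/var/ftp# as the anonymous FTP area, might be:
[source,shell]
....
# pw useradd ftp -c "Anonymous FTP" -d /var/ftp -s /usr/sbin/nologin
....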
There are two text files that can be created to specify welcome messages to be displayed to FTP clients. The contents of [.filename]#/etc/ftpwelcome# will be displayed to users before they reach the login prompt. After a successful login, the contents of [.filename]#/etc/ftpmotd# will be displayed. Note that the path to this file is relative to the login environment, so the contents of [.filename]#~ftp/etc/ftpmotd# would be displayed for anonymous users.
Once the FTP server has been configured, set the appropriate variable in [.filename]#/etc/rc.conf# to start the service during boot:
[.programlisting]
....
ftpd_enable="YES"
....
To start the service now:
[source,shell]
....
# service ftpd start
....
Test the connection to the FTP server by typing:
[source,shell]
....
% ftp localhost
....
The ftpd daemon uses man:syslog[3] to log messages. By default, the system log daemon will write messages related to FTP in [.filename]#/var/log/xferlog#. The location of the FTP log can be modified by changing the following line in [.filename]#/etc/syslog.conf#:
[.programlisting]
....
ftp.info /var/log/xferlog
....
[NOTE]
====
Be aware of the potential problems involved with running an anonymous FTP server. In particular, think twice about allowing anonymous users to upload files. It may turn out that the FTP site becomes a forum for the trade of unlicensed commercial software or worse. If anonymous FTP uploads are required, then verify the permissions so that these files cannot be read by other anonymous users until they have been reviewed by an administrator.
====
[[network-samba]]
== File and Print Services for Microsoft(R) Windows(R) Clients (Samba)
Samba is a popular open source software package that provides file and print services using the SMB/CIFS protocol. This protocol is built into Microsoft(R) Windows(R) systems. It can be added to non-Microsoft(R) Windows(R) systems by installing the Samba client libraries. The protocol allows clients to access shared data and printers. These shares can be mapped as a local disk drive and shared printers can be used as if they were local printers.
On FreeBSD, the Samba client libraries can be installed using the package:net/samba413[] port or package. The client provides the ability for a FreeBSD system to access SMB/CIFS shares in a Microsoft(R) Windows(R) network.
A FreeBSD system can also be configured to act as a Samba server by installing the same package:net/samba413[] port or package. This allows the administrator to create SMB/CIFS shares on the FreeBSD system which can be accessed by clients running Microsoft(R) Windows(R) or the Samba client libraries.
=== Server Configuration
Samba is configured in [.filename]#/usr/local/etc/smb4.conf#. This file must be created before Samba can be used.
A simple [.filename]#smb4.conf# to share directories and printers with Windows(R) clients in a workgroup is shown here. For more complex setups involving LDAP or Active Directory, it is easier to use man:samba-tool[8] to create the initial [.filename]#smb4.conf#.
[.programlisting]
....
[global]
workgroup = WORKGROUP
server string = Samba Server Version %v
netbios name = ExampleMachine
wins support = Yes
security = user
passdb backend = tdbsam
# Example: share /usr/src accessible only to 'developer' user
[src]
path = /usr/src
valid users = developer
writable = yes
browsable = yes
read only = no
guest ok = no
public = no
create mask = 0666
directory mask = 0755
....
==== Global Settings
Settings that describe the network are added in [.filename]#/usr/local/etc/smb4.conf#:
`workgroup`::
The name of the workgroup to be served.
`netbios name`::
The NetBIOS name by which a Samba server is known. By default, it is the same as the first component of the host's DNS name.
`server string`::
The string that will be displayed in the output of `net view` and some other networking tools that seek to display descriptive text about the server.
`wins support`::
Whether Samba will act as a WINS server. Do not enable support for WINS on more than one server on the network.
==== Security Settings
The most important settings in [.filename]#/usr/local/etc/smb4.conf# are the security model and the backend password format. These directives control the options:
`security`::
The most common settings are `security = share` and `security = user`. If the clients use usernames that are the same as their usernames on the FreeBSD machine, user level security should be used. This is the default security policy and it requires clients to first log on before they can access shared resources.
+
In share level security, clients do not need to log onto the server with a valid username and password before attempting to connect to a shared resource. This was the default security model for older versions of Samba.
`passdb backend`::
Samba has several different backend authentication models. Clients may be authenticated with LDAP, NIS+, an SQL database, or a modified password file. The recommended authentication method, `tdbsam`, is ideal for simple networks and is covered here. For larger or more complex networks, `ldapsam` is recommended. `smbpasswd` was the former default and is now obsolete.
==== Samba Users
FreeBSD user accounts must be mapped to the `SambaSAMAccount` database for Windows(R) clients to access the share. Map existing FreeBSD user accounts using man:pdbedit[8]:
[source,shell]
....
# pdbedit -a username
....
This section has only mentioned the most commonly used settings. Refer to the https://wiki.samba.org[Official Samba Wiki] for additional information about the available configuration options.
=== Starting Samba
To enable Samba at boot time, add the following line to [.filename]#/etc/rc.conf#:
[.programlisting]
....
samba_server_enable="YES"
....
To start Samba now:
[source,shell]
....
# service samba_server start
Performing sanity check on Samba configuration: OK
Starting nmbd.
Starting smbd.
....
Samba consists of three separate daemons. Both the nmbd and smbd daemons are started by `samba_server_enable`. If winbind name resolution is also required, set:
[.programlisting]
....
winbindd_enable="YES"
....
Samba can be stopped at any time by typing:
[source,shell]
....
# service samba_server stop
....
Samba is a complex software suite with functionality that allows broad integration with Microsoft(R) Windows(R) networks. For more information about functionality beyond the basic configuration described here, refer to https://www.samba.org[https://www.samba.org].
[[network-ntp]]
== Clock Synchronization with NTP
Over time, a computer's clock is prone to drift. This is problematic as many network services require the computers on a network to share the same accurate time. Accurate time is also needed to ensure that file timestamps stay consistent. The Network Time Protocol (NTP) is one way to provide clock accuracy in a network.
FreeBSD includes man:ntpd[8] which can be configured to query other NTP servers to synchronize the clock on that machine or to provide time services to other computers in the network.
This section describes how to configure ntpd on FreeBSD. Further documentation can be found in [.filename]#/usr/share/doc/ntp/# in HTML format.
=== NTP Configuration
On FreeBSD, the built-in ntpd can be used to synchronize a system's clock. Ntpd is configured using man:rc.conf[5] variables and [.filename]#/etc/ntp.conf#, as detailed in the following sections.
Ntpd communicates with its network peers using UDP packets. Any firewalls between your machine and its NTP peers must be configured to allow UDP packets in and out on port 123.
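For example, if man:pf[4] happens to be the firewall in use, a rule along these lines (a sketch, not a complete ruleset) would permit the exchange:
[.programlisting]
....
pass out proto udp from any to any port 123 keep state
....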
==== The [.filename]#/etc/ntp.conf# file
Ntpd reads [.filename]#/etc/ntp.conf# to determine which NTP servers to query. Choosing several NTP servers is recommended in case one of the servers becomes unreachable or its clock proves unreliable. As ntpd receives responses, it favors reliable servers over the less reliable ones. The servers which are queried can be local to the network, provided by an ISP, or selected from an http://support.ntp.org/bin/view/Servers/WebHome[ online list of publicly accessible NTP servers]. When choosing a public NTP server, select one that is geographically close and review its usage policy. The `pool` configuration keyword selects one or more servers from a pool of servers. An http://support.ntp.org/bin/view/Servers/NTPPoolServers[ online list of publicly accessible NTP pools] is available, organized by geographic area. In addition, FreeBSD provides a project-sponsored pool, `0.freebsd.pool.ntp.org`.
.Sample [.filename]#/etc/ntp.conf#
[example]
====
This is a simple example of an [.filename]#ntp.conf# file. It can safely be used as-is; it contains the recommended `restrict` options for operation on a publicly-accessible network connection.
[.programlisting]
....
# Disallow ntpq control/query access. Allow peers to be added only
# based on pool and server statements in this file.
restrict default limited kod nomodify notrap noquery nopeer
restrict source limited kod nomodify notrap noquery
# Allow unrestricted access from localhost for queries and control.
restrict 127.0.0.1
restrict ::1
# Add a specific server.
server ntplocal.example.com iburst
# Add FreeBSD pool servers until 3-6 good servers are available.
tos minclock 3 maxclock 6
pool 0.freebsd.pool.ntp.org iburst
# Use a local leap-seconds file.
leapfile "/var/db/ntpd.leap-seconds.list"
....
====
The format of this file is described in man:ntp.conf[5]. The descriptions below provide a quick overview of just the keywords used in the sample file above.
By default, an NTP server is accessible to any network host. The `restrict` keyword controls which systems can access the server. Multiple `restrict` entries are supported, each one refining the restrictions given in previous statements. The values shown in the example grant the local system full query and control access, while allowing remote systems only the ability to query the time. For more details, refer to the `Access Control Support` subsection of man:ntp.conf[5].
The `server` keyword specifies a single server to query. The file can contain multiple server keywords, with one server listed on each line. The `pool` keyword specifies a pool of servers. Ntpd will add one or more servers from this pool as needed to reach the number of peers specified using the `tos minclock` value. The `iburst` keyword directs ntpd to perform a burst of eight quick packet exchanges with a server when contact is first established, to help quickly synchronize system time.
The `leapfile` keyword specifies the location of a file containing information about leap seconds. The file is updated automatically by man:periodic[8]. The file location specified by this keyword must match the location set in the `ntp_db_leapfile` variable in [.filename]#/etc/rc.conf#.
==== NTP entries in [.filename]#/etc/rc.conf#
Set `ntpd_enable=YES` to start ntpd at boot time. Once `ntpd_enable=YES` has been added to [.filename]#/etc/rc.conf#, ntpd can be started immediately without rebooting the system by typing:
[source,shell]
....
# service ntpd start
....
Only `ntpd_enable` must be set to use ntpd. The [.filename]#rc.conf# variables listed below may also be set as needed.
Set `ntpd_sync_on_start=YES` to allow ntpd to step the clock any amount, one time at startup. Normally ntpd will log an error message and exit if the clock is off by more than 1000 seconds. This option is especially useful on systems without a battery-backed realtime clock.
Set `ntpd_oomprotect=YES` to protect the ntpd daemon from being killed by the system attempting to recover from an Out Of Memory (OOM) condition.
Set `ntpd_config=` to the location of an alternate [.filename]#ntp.conf# file.
Set `ntpd_flags=` to contain any other ntpd flags as needed, but avoid using these flags which are managed internally by [.filename]#/etc/rc.d/ntpd#:
* `-p` (pid file location)
* `-c` (set `ntpd_config=` instead)
==== Ntpd and the unprivileged `ntpd` user
Ntpd on FreeBSD can start and run as an unprivileged user. Doing so requires the man:mac_ntpd[4] policy module. The [.filename]#/etc/rc.d/ntpd# startup script first examines the NTP configuration. If possible, it loads the `mac_ntpd` module, then starts ntpd as the unprivileged user `ntpd` (user id 123). To avoid problems with file and directory access, the startup script will not automatically start ntpd as `ntpd` when the configuration contains any file-related options.
The presence of any of the following in `ntpd_flags` requires manual configuration as described below to run as the `ntpd` user:
* `-f` or `--driftfile`
* `-i` or `--jaildir`
* `-k` or `--keyfile`
* `-l` or `--logfile`
* `-s` or `--statsdir`
The presence of any of the following keywords in [.filename]#ntp.conf# requires manual configuration as described below to run as the `ntpd` user:
* `crypto`
* `driftfile`
* `key`
* `logdir`
* `statsdir`
To manually configure ntpd to run as user `ntpd` you must:
* Ensure that the `ntpd` user has access to all the files and directories specified in the configuration.
* Arrange for the `mac_ntpd` module to be loaded or compiled into the kernel. See man:mac_ntpd[4] for details.
* Set `ntpd_user="ntpd"` in [.filename]#/etc/rc.conf#
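With those prerequisites in place, the relevant [.filename]#/etc/rc.conf# entries might be:
[.programlisting]
....
ntpd_enable="YES"
ntpd_user="ntpd"
....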
=== Using NTP with a PPP Connection
ntpd does not need a permanent connection to the Internet to function properly. However, if a PPP connection is configured to dial out on demand, NTP traffic should be prevented from triggering a dial out or keeping the connection alive. This can be configured with `filter` directives in [.filename]#/etc/ppp/ppp.conf#. For example:
[.programlisting]
....
set filter dial 0 deny udp src eq 123
# Prevent NTP traffic from initiating dial out
set filter dial 1 permit 0 0
set filter alive 0 deny udp src eq 123
# Prevent incoming NTP traffic from keeping the connection open
set filter alive 1 deny udp dst eq 123
# Prevent outgoing NTP traffic from keeping the connection open
set filter alive 2 permit 0/0 0/0
....
For more details, refer to the `PACKET FILTERING` section in man:ppp[8] and the examples in [.filename]#/usr/share/examples/ppp/#.
[NOTE]
====
Some Internet access providers block low-numbered ports, preventing NTP from functioning since replies never reach the machine.
====
[[network-iscsi]]
== iSCSI Initiator and Target Configuration
iSCSI is a way to share storage over a network. Unlike NFS, which works at the file system level, iSCSI works at the block device level.
In iSCSI terminology, the system that shares the storage is known as the _target_. The storage can be a physical disk, or an area representing multiple disks or a portion of a physical disk. For example, if the disk(s) are formatted with ZFS, a zvol can be created to use as the iSCSI storage.
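For example, assuming a hypothetical pool named `zroot`, a 4 GB zvol for use as iSCSI storage could be created with:
[source,shell]
....
# zfs create -V 4G zroot/target0
....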
The clients which access the iSCSI storage are called _initiators_. To initiators, the storage available through iSCSI appears as a raw, unformatted disk known as a LUN. Device nodes for the disk appear in [.filename]#/dev/# and the device must be separately formatted and mounted.
FreeBSD provides a native, kernel-based iSCSI target and initiator. This section describes how to configure a FreeBSD system as a target or an initiator.
[[network-iscsi-target]]
=== Configuring an iSCSI Target
To configure an iSCSI target, create the [.filename]#/etc/ctl.conf# configuration file, add a line to [.filename]#/etc/rc.conf# to make sure the man:ctld[8] daemon is automatically started at boot, and then start the daemon.
The following is an example of a simple [.filename]#/etc/ctl.conf# configuration file. Refer to man:ctl.conf[5] for a more complete description of this file's available options.
[.programlisting]
....
portal-group pg0 {
discovery-auth-group no-authentication
listen 0.0.0.0
listen [::]
}
target iqn.2012-06.com.example:target0 {
auth-group no-authentication
portal-group pg0
lun 0 {
path /data/target0-0
size 4G
}
}
....
The first entry defines the `pg0` portal group. Portal groups define which network addresses the man:ctld[8] daemon will listen on. The `discovery-auth-group no-authentication` entry indicates that any initiator is allowed to perform iSCSI target discovery without authentication. Lines three and four configure man:ctld[8] to listen on all IPv4 (`listen 0.0.0.0`) and IPv6 (`listen [::]`) addresses on the default port of 3260.
It is not necessary to define a portal group as there is a built-in portal group called `default`. In this case, the difference between `default` and `pg0` is that with `default`, target discovery is always denied, while with `pg0`, it is always allowed.
The second entry defines a single target. Target has two possible meanings: a machine serving iSCSI or a named group of LUNs. This example uses the latter meaning, where `iqn.2012-06.com.example:target0` is the target name. This target name is suitable for testing purposes. For actual use, change `com.example` to the real domain name, reversed. The `2012-06` represents the year and month of acquiring control of that domain name, and `target0` can be any value. Any number of targets can be defined in this configuration file.
The `auth-group no-authentication` line allows all initiators to connect to the specified target and `portal-group pg0` makes the target reachable through the `pg0` portal group.
The next section defines the LUN. To the initiator, each LUN will be visible as a separate disk device. Multiple LUNs can be defined for each target. Each LUN is identified by a number, where LUN 0 is mandatory. The `path /data/target0-0` line defines the full path to a file or zvol backing the LUN. That path must exist before starting man:ctld[8]. The second line is optional and specifies the size of the LUN.
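If a plain file is used as the backing store, it can be created beforehand with man:truncate[1], matching the path and size used in the example above:
[source,shell]
....
# truncate -s 4G /data/target0-0
....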
Next, to make sure the man:ctld[8] daemon is started at boot, add this line to [.filename]#/etc/rc.conf#:
[.programlisting]
....
ctld_enable="YES"
....
To start man:ctld[8] now, run this command:
[source,shell]
....
# service ctld start
....
As the man:ctld[8] daemon is started, it reads [.filename]#/etc/ctl.conf#. If this file is edited after the daemon starts, use this command so that the changes take effect immediately:
[source,shell]
....
# service ctld reload
....
==== Authentication
The previous example is inherently insecure as it uses no authentication, granting anyone full access to all targets. To require a username and password to access targets, modify the configuration as follows:
[.programlisting]
....
auth-group ag0 {
chap username1 secretsecret
chap username2 anothersecret
}
portal-group pg0 {
discovery-auth-group no-authentication
listen 0.0.0.0
listen [::]
}
target iqn.2012-06.com.example:target0 {
auth-group ag0
portal-group pg0
lun 0 {
path /data/target0-0
size 4G
}
}
....
The `auth-group` section defines username and password pairs. An initiator trying to connect to `iqn.2012-06.com.example:target0` must first specify a defined username and secret. However, target discovery is still permitted without authentication. To require target discovery authentication, set `discovery-auth-group` to a defined `auth-group` name instead of `no-authentication`.
It is common to define a single exported target for every initiator. As a shorthand for the syntax above, the username and password can be specified directly in the target entry:
[.programlisting]
....
target iqn.2012-06.com.example:target0 {
portal-group pg0
chap username1 secretsecret
lun 0 {
path /data/target0-0
size 4G
}
}
....
[[network-iscsi-initiator]]
=== Configuring an iSCSI Initiator
[NOTE]
====
The iSCSI initiator described in this section is supported starting with FreeBSD 10.0-RELEASE. To use the iSCSI initiator available in older versions, refer to man:iscontrol[8].
====
The iSCSI initiator requires that the man:iscsid[8] daemon is running. This daemon does not use a configuration file. To start it automatically at boot, add this line to [.filename]#/etc/rc.conf#:
[.programlisting]
....
iscsid_enable="YES"
....
To start man:iscsid[8] now, run this command:
[source,shell]
....
# service iscsid start
....
Connecting to a target can be done with or without an [.filename]#/etc/iscsi.conf# configuration file. This section demonstrates both types of connections.
==== Connecting to a Target Without a Configuration File
To connect an initiator to a single target, specify the IP address of the portal and the name of the target:
[source,shell]
....
# iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0
....
To verify if the connection succeeded, run `iscsictl` without any arguments. The output should look similar to this:
[.programlisting]
....
Target name Target portal State
iqn.2012-06.com.example:target0 10.10.10.10 Connected: da0
....
In this example, the iSCSI session was successfully established, with [.filename]#/dev/da0# representing the attached LUN. If the `iqn.2012-06.com.example:target0` target exports more than one LUN, multiple device nodes will be shown in that section of the output:
[source,shell]
....
Connected: da0 da1 da2.
....
Any errors will be reported in the output, as well as the system logs. For example, this message usually means that the man:iscsid[8] daemon is not running:
[.programlisting]
....
Target name Target portal State
iqn.2012-06.com.example:target0 10.10.10.10 Waiting for iscsid(8)
....
The following message suggests a networking problem, such as a wrong IP address or port:
[.programlisting]
....
Target name Target portal State
iqn.2012-06.com.example:target0 10.10.10.11 Connection refused
....
This message means that the specified target name is wrong:
[.programlisting]
....
Target name Target portal State
iqn.2012-06.com.example:target0 10.10.10.10 Not found
....
This message means that the target requires authentication:
[.programlisting]
....
Target name Target portal State
iqn.2012-06.com.example:target0 10.10.10.10 Authentication failed
....
To specify a CHAP username and secret, use this syntax:
[source,shell]
....
# iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0 -u user -s secretsecret
....
==== Connecting to a Target with a Configuration File
To connect using a configuration file, create [.filename]#/etc/iscsi.conf# with contents like this:
[.programlisting]
....
t0 {
TargetAddress = 10.10.10.10
TargetName = iqn.2012-06.com.example:target0
AuthMethod = CHAP
chapIName = user
chapSecret = secretsecret
}
....
The `t0` specifies a nickname for the configuration file section. It will be used by the initiator to specify which configuration to use. The other lines specify the parameters to use during connection. The `TargetAddress` and `TargetName` are mandatory, whereas the other options are optional. In this example, the CHAP username and secret are shown.
To connect to the defined target, specify the nickname:
[source,shell]
....
# iscsictl -An t0
....
Alternately, to connect to all targets defined in the configuration file, use:
[source,shell]
....
# iscsictl -Aa
....
To make the initiator automatically connect to all targets in [.filename]#/etc/iscsi.conf#, add the following to [.filename]#/etc/rc.conf#:
[.programlisting]
....
iscsictl_enable="YES"
iscsictl_flags="-Aa"
....
diff --git a/documentation/content/en/books/handbook/pgpkeys/_index.adoc b/documentation/content/en/books/handbook/pgpkeys/_index.adoc
index e569fc5d9a..07c45357cb 100644
--- a/documentation/content/en/books/handbook/pgpkeys/_index.adoc
+++ b/documentation/content/en/books/handbook/pgpkeys/_index.adoc
@@ -1,52 +1,53 @@
---
title: Appendix D. OpenPGP Keys
part: Part V. Appendices
prev: books/handbook/eresources
next: books/handbook/glossary
+description: List of OpenPGP keys of the FreeBSD officers are shown here
---
[appendix]
[[pgpkeys]]
= OpenPGP Keys
:doctype: book
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: D
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
:pgpkeys-path:
The OpenPGP keys of the `FreeBSD.org` officers are shown here. These keys can be used to verify a signature or send encrypted email to one of the officers. A full list of FreeBSD OpenPGP keys is available in the link:{pgpkeys}[PGP Keys] article. The complete keyring can be downloaded at https://www.FreeBSD.org/doc/pgpkeyring.txt[https://www.FreeBSD.org/doc/pgpkeyring.txt].
[[pgpkeys-officers]]
== Officers
=== {security-officer-name} `<{security-officer-email}>`
include::{pgpkeys-path}static/pgpkeys/security-officer.key[]
=== {secteam-secretary-name} `<{secteam-secretary-email}>`
include::{pgpkeys-path}static/pgpkeys/secteam-secretary.key[]
=== {core-secretary-name} `<{core-secretary-email}>`
include::{pgpkeys-path}static/pgpkeys/core-secretary.key[]
=== {portmgr-secretary-name} `<{portmgr-secretary-email}>`
include::{pgpkeys-path}static/pgpkeys/portmgr-secretary.key[]
=== `{doceng-secretary-email}`
include::{pgpkeys-path}static/pgpkeys/doceng-secretary.key[]
:sectnums:
:sectnumlevels: 6
diff --git a/documentation/content/en/books/handbook/ports/_index.adoc b/documentation/content/en/books/handbook/ports/_index.adoc
index cd49ab7eb9..26af9c7292 100644
--- a/documentation/content/en/books/handbook/ports/_index.adoc
+++ b/documentation/content/en/books/handbook/ports/_index.adoc
@@ -1,1172 +1,1173 @@
---
title: "Chapter 4. Installing Applications: Packages and Ports"
part: Part I. Getting Started
prev: books/handbook/basics
next: books/handbook/x11
+description: "FreeBSD provides two complementary technologies for installing third-party software: the FreeBSD Ports Collection, for installing from source, and packages, for installing from pre-built binaries"
---
[[ports]]
= Installing Applications: Packages and Ports
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 4
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../../images/books/handbook/ports/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/books/handbook/ports/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/books/handbook/ports/
endif::[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[ports-synopsis]]
== Synopsis
FreeBSD is bundled with a rich collection of system tools as part of the base system. In addition, FreeBSD provides two complementary technologies for installing third-party software: the FreeBSD Ports Collection, for installing from source, and packages, for installing from pre-built binaries. Either method may be used to install software from local media or from the network.
After reading this chapter, you will know:
* The difference between binary packages and ports.
* How to find third-party software that has been ported to FreeBSD.
* How to manage binary packages using pkg.
* How to build third-party software from source using the Ports Collection.
* How to find the files installed with the application for post-installation configuration.
* What to do if a software installation fails.
[[ports-overview]]
== Overview of Software Installation
The typical steps for installing third-party software on a UNIX(R) system include:
[.procedure]
. Find and download the software, which might be distributed in source code format or as a binary.
. Unpack the software from its distribution format. This is typically a tarball compressed with a program such as man:compress[1], man:gzip[1], man:bzip2[1] or man:xz[1].
. Locate the documentation in [.filename]#INSTALL#, [.filename]#README# or some file in a [.filename]#doc/# subdirectory and read up on how to install the software.
. If the software was distributed in source format, compile it. This may involve editing a [.filename]#Makefile# or running a `configure` script.
. Test and install the software.
A FreeBSD _port_ is a collection of files designed to automate the process of compiling an application from source code. The files that comprise a port contain all the necessary information to automatically download, extract, patch, compile, and install the application.
If the software has not already been adapted and tested on FreeBSD, the source code might need editing in order for it to install and run properly.
However, over link:https://www.FreeBSD.org/ports/[{numports}] third-party applications have already been ported to FreeBSD. When feasible, these applications are made available for download as pre-compiled _packages_.
Packages can be manipulated with the FreeBSD package management commands.
Both packages and ports understand dependencies. If a package or port is used to install an application and a dependent library is not already installed, the library will automatically be installed first.
A FreeBSD package contains pre-compiled copies of all the commands for an application, as well as any configuration files and documentation. A package can be manipulated with the man:pkg[8] commands, such as `pkg install`.
While the two technologies are similar, packages and ports each have their own strengths. Select the technology that meets your requirements for installing a particular application.
.Package Benefits
* A compressed package tarball is typically smaller than the compressed tarball containing the source code for the application.
* Packages do not require compilation time. For large applications, such as Mozilla, KDE, or GNOME, this can be important on a slow system.
* Packages do not require any understanding of the process involved in compiling software on FreeBSD.
.Port Benefits
* Packages are normally compiled with conservative options because they have to run on the maximum number of systems. By compiling from the port, one can change the compilation options.
* Some applications have compile-time options relating to which features are installed. For example, Apache can be configured with a wide variety of different built-in options.
+
In some cases, multiple packages will exist for the same application to specify certain settings. For example, Ghostscript is available as a [.filename]#ghostscript# package and a [.filename]#ghostscript-nox11# package, depending on whether or not Xorg is installed. Creating multiple packages rapidly becomes impossible if an application has more than one or two different compile-time options.
* The licensing conditions of some software forbid binary distribution. Such software must be distributed as source code which must be compiled by the end-user.
* Some people do not trust binary distributions or prefer to read through source code in order to look for potential problems.
* Source code is needed in order to apply custom patches.
To keep track of updated ports, subscribe to the {freebsd-ports} and the {freebsd-ports-bugs}.
[WARNING]
====
Before installing any application, check https://vuxml.freebsd.org/[] for security issues related to the application or type `pkg audit -F` to check all installed applications for known vulnerabilities.
====
The remainder of this chapter explains how to use packages and ports to install and manage third-party software on FreeBSD.
[[ports-finding-applications]]
== Finding Software
FreeBSD's list of available applications is growing all the time. There are a number of ways to find software to install:
* The FreeBSD web site maintains an up-to-date searchable list of all the available applications, at link:https://www.FreeBSD.org/ports/[https://www.FreeBSD.org/ports/]. The ports can be searched by application name or by software category.
* Dan Langille maintains http://www.FreshPorts.org/[FreshPorts.org] which provides a comprehensive search utility and also tracks changes to the applications in the Ports Collection. Registered users can create a customized watch list in order to receive an automated email when their watched ports are updated.
* If finding a particular application becomes challenging, try searching a site like http://www.sourceforge.net/[SourceForge.net] or http://www.github.com/[GitHub.com] then check back at the link:https://www.FreeBSD.org/ports/[FreeBSD site] to see if the application has been ported.
* To search the binary package repository for an application:
+
[source,shell]
....
# pkg search subversion
git-subversion-1.9.2
java-subversion-1.8.8_2
p5-subversion-1.8.8_2
py27-hgsubversion-1.6
py27-subversion-1.8.8_2
ruby-subversion-1.8.8_2
subversion-1.8.8_2
subversion-book-4515
subversion-static-1.8.8_2
subversion16-1.6.23_4
subversion17-1.7.16_2
....
+
Package names include the version number and, for ports based on Python, the version of Python the package was built with. Some ports also have multiple versions available. In the case of Subversion, there are several versions available, as well as different compile options, such as a statically linked version of Subversion. When indicating which package to install, it is best to specify the application by the port origin, which is the path in the ports tree. Repeat the `pkg search` with `-o` to list the origin of each package:
+
[source,shell]
....
# pkg search -o subversion
devel/git-subversion
java/java-subversion
devel/p5-subversion
devel/py-hgsubversion
devel/py-subversion
devel/ruby-subversion
devel/subversion16
devel/subversion17
devel/subversion
devel/subversion-book
devel/subversion-static
....
+
Searching by shell globs, regular expressions, exact match, description, or any other field in the repository database is also supported by `pkg search`. After installing package:ports-mgmt/pkg[] or package:ports-mgmt/pkg-devel[], see man:pkg-search[8] for more details.
* If the Ports Collection is already installed, there are several methods to query the local version of the ports tree. To find out which category a port is in, type `whereis _file_`, where _file_ is the program to be installed:
+
[source,shell]
....
# whereis lsof
lsof: /usr/ports/sysutils/lsof
....
+
Alternately, an man:echo[1] statement can be used:
+
[source,shell]
....
# echo /usr/ports/*/*lsof*
/usr/ports/sysutils/lsof
....
+
Note that this will also return any matched files downloaded into the [.filename]#/usr/ports/distfiles# directory.
* Another way to find software is by using the Ports Collection's built-in search mechanism. To use the search feature, cd to [.filename]#/usr/ports# then run `make search name=program-name` where _program-name_ is the name of the software. For example, to search for `lsof`:
+
[source,shell]
....
# cd /usr/ports
# make search name=lsof
Port: lsof-4.88.d,8
Path: /usr/ports/sysutils/lsof
Info: Lists information about open files (similar to fstat(1))
Maint: ler@lerctr.org
Index: sysutils
B-deps:
R-deps:
....
+
[TIP]
====
The built-in search mechanism uses a file of index information. If a message indicates that the [.filename]#INDEX# is required, run `make fetchindex` to download the current index file. With the [.filename]#INDEX# present, `make search` will be able to perform the requested search.
====
+
The "Path:" line indicates where to find the port.
+
To receive less information, use the `quicksearch` feature:
+
[source,shell]
....
# cd /usr/ports
# make quicksearch name=lsof
Port: lsof-4.88.d,8
Path: /usr/ports/sysutils/lsof
Info: Lists information about open files (similar to fstat(1))
....
+
For more in-depth searching, use `make search key=_string_` or `make quicksearch key=_string_`, where _string_ is some text to search for. The text can be in comments, descriptions, or dependencies in order to find ports which relate to a particular subject when the name of the program is unknown.
+
When using `search` or `quicksearch`, the search string is case-insensitive. Searching for "LSOF" will yield the same results as searching for "lsof".
[[pkgng-intro]]
== Using pkg for Binary Package Management
pkg is the next generation replacement for the traditional FreeBSD package management tools, offering many features that make dealing with binary packages faster and easier.
For sites wishing to only use prebuilt binary packages from the FreeBSD mirrors, managing packages with pkg can be sufficient.
However, for those sites building from source or using their own repositories, a separate <<ports-upgrading-tools,port management tool>> will be needed.
Since pkg only works with binary packages, it is not a replacement for such tools, which can install software from both binary packages and the Ports Collection.
[[pkgng-initial-setup]]
=== Getting Started with pkg
FreeBSD includes a bootstrap utility which can be used to download and install pkg and its manual pages. This utility is designed to work with versions of FreeBSD starting with 10._X_.
[NOTE]
====
Not all FreeBSD versions and architectures support this bootstrap process. The current list is at https://pkg.freebsd.org/[]. For other cases, pkg must instead be installed from the Ports Collection or as a binary package.
====
To bootstrap the system, run:
[source,shell]
....
# /usr/sbin/pkg
....
You must have a working Internet connection for the bootstrap process to succeed.
Alternatively, to install pkg from the Ports Collection, run:
[source,shell]
....
# cd /usr/ports/ports-mgmt/pkg
# make
# make install clean
....
When upgrading an existing system that originally used the older pkg_* tools, the database must be converted to the new format, so that the new tools are aware of the already installed packages. Once pkg has been installed, the package database must be converted from the traditional format to the new format by running this command:
[source,shell]
....
# pkg2ng
....
[NOTE]
====
This step is not required for new installations that do not yet have any third-party software installed.
====
[IMPORTANT]
====
This step is not reversible. Once the package database has been converted to the pkg format, the traditional `pkg_*` tools should no longer be used.
====
[NOTE]
====
The package database conversion may emit errors as the contents are converted to the new version. Generally, these errors can be safely ignored. However, a list of software that was not successfully converted is shown after `pkg2ng` finishes. These applications must be manually reinstalled.
====
To ensure that the Ports Collection registers new software with pkg instead of the traditional packages database, FreeBSD versions earlier than 10._X_ require this line in [.filename]#/etc/make.conf#:
[.programlisting]
....
WITH_PKGNG= yes
....
By default, pkg uses the binary packages from the FreeBSD package mirrors (the _repository_). For information about building a custom package repository, see <<ports-poudriere>>.
Additional pkg configuration options are described in man:pkg.conf[5].
Usage information for pkg is available in the man:pkg[8] manual page or by running `pkg` without additional arguments.
Each pkg command argument is documented in a command-specific manual page. To read the manual page for `pkg install`, for example, run either of these commands:
[source,shell]
....
# pkg help install
....
[source,shell]
....
# man pkg-install
....
The rest of this section demonstrates common binary package management tasks which can be performed using pkg. Each demonstrated command provides many switches to customize its use. Refer to a command's help or man page for details and more examples.
[[quarterly-latest-branch]]
=== Quarterly and Latest Ports Branches
The `Quarterly` branch provides users with a more predictable and stable experience for port and package installation and upgrades. This is done essentially by only allowing non-feature updates. Quarterly branches aim to receive security fixes (which may be version updates or backports of commits), bug fixes, and ports compliance or framework changes. The Quarterly branch is cut from HEAD at the beginning of every quarter in January, April, July, and October. Branches are named according to the year (YYYY) and quarter (Q1-4) in which they are created. For example, the quarterly branch created in January 2016 is named 2016Q1. The `Latest` branch instead provides the latest versions of the packages.
To switch from quarterly to latest run the following commands:
[source,shell]
....
# mkdir -p /usr/local/etc/pkg/repos
# cp /etc/pkg/FreeBSD.conf /usr/local/etc/pkg/repos/FreeBSD.conf
....
Edit the file [.filename]#/usr/local/etc/pkg/repos/FreeBSD.conf# and change the string _quarterly_ to _latest_ in the `url:` line.
The result should be similar to the following:
[.programlisting]
....
FreeBSD: {
url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest",
mirror_type: "srv",
signature_type: "fingerprints",
fingerprints: "/usr/share/keys/pkg",
enabled: yes
}
....
Finally, run this command to update the repository metadata from the new (latest) branch:
[source,shell]
....
# pkg update -f
....
[[pkgng-pkg-info]]
=== Obtaining Information About Installed Packages
Information about the packages installed on a system can be viewed by running `pkg info` which, when run without any switches, will list the package version for either all installed packages or the specified package.
For example, to see which version of pkg is installed, run:
[source,shell]
....
# pkg info pkg
pkg-1.1.4_1
....
[[pkgng-installing-deinstalling]]
=== Installing and Removing Packages
To install a binary package use the following command, where _packagename_ is the name of the package to install:
[source,shell]
....
# pkg install packagename
....
This command uses repository data to determine which version of the software to install and if it has any uninstalled dependencies. For example, to install curl:
[source,shell]
....
# pkg install curl
Updating repository catalogue
/usr/local/tmp/All/curl-7.31.0_1.txz 100% of 1181 kB 1380 kBps 00m01s
/usr/local/tmp/All/ca_root_nss-3.15.1_1.txz 100% of 288 kB 1700 kBps 00m00s
Updating repository catalogue
The following 2 packages will be installed:
Installing ca_root_nss: 3.15.1_1
Installing curl: 7.31.0_1
The installation will require 3 MB more space
0 B to be downloaded
Proceed with installing packages [y/N]: y
Checking integrity... done
[1/2] Installing ca_root_nss-3.15.1_1... done
[2/2] Installing curl-7.31.0_1... done
Cleaning up cache files...Done
....
The new package and any additional packages that were installed as dependencies can be seen in the installed packages list:
[source,shell]
....
# pkg info
ca_root_nss-3.15.1_1 The root certificate bundle from the Mozilla Project
curl-7.31.0_1 Non-interactive tool to get files from FTP, GOPHER, HTTP(S) servers
pkg-1.1.4_6 New generation package manager
....
Packages that are no longer needed can be removed with `pkg delete`. For example:
[source,shell]
....
# pkg delete curl
The following packages will be deleted:
curl-7.31.0_1
The deletion will free 3 MB
Proceed with deleting packages [y/N]: y
[1/1] Deleting curl-7.31.0_1... done
....
[[pkgng-upgrading]]
=== Upgrading Installed Packages
Installed packages can be upgraded to their latest versions by running:
[source,shell]
....
# pkg upgrade
....
This command will compare the installed versions with those available in the repository catalogue and upgrade them from the repository.
[[pkgng-auditing]]
=== Auditing Installed Packages
Software vulnerabilities are regularly discovered in third-party applications. To address this, pkg includes a built-in auditing mechanism. To determine if there are any known vulnerabilities for the software installed on the system, run:
[source,shell]
....
# pkg audit -F
....
[[pkgng-autoremove]]
=== Automatically Removing Unused Packages
Removing a package may leave behind dependencies which are no longer required. Unneeded packages that were installed as dependencies (leaf packages) can be automatically detected and removed using:
[source,shell]
....
# pkg autoremove
Packages to be autoremoved:
ca_root_nss-3.15.1_1
The autoremoval will free 723 kB
Proceed with autoremoval of packages [y/N]: y
Deinstalling ca_root_nss-3.15.1_1... done
....
Packages installed as dependencies are called _automatic_ packages. Non-automatic packages, i.e., the packages that were explicitly installed rather than pulled in as a dependency of another package, can be listed using:
[source,shell]
....
# pkg prime-list
nginx
openvpn
sudo
....
`pkg prime-list` is an alias command declared in [.filename]#/usr/local/etc/pkg.conf#. There are many others that can be used to query the package database of the system. For instance, `pkg prime-origins` can be used to get the origin port directory of the list mentioned above:
[source,shell]
....
# pkg prime-origins
www/nginx
security/openvpn
security/sudo
....
This list can be used to rebuild all packages installed on a system using build tools such as package:ports-mgmt/poudriere[] or package:ports-mgmt/synth[].
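As a sketch of how such aliases may be declared, an `ALIAS` block similar to the following can appear in [.filename]#/usr/local/etc/pkg.conf#; the exact queries shipped with pkg may differ:
[.programlisting]
....
ALIAS: {
    prime-list: "query -e '%a = 0' '%n'",
    prime-origins: "query -e '%a = 0' '%o'",
}
....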
Marking an installed package as automatic can be done using:
[source,shell]
....
# pkg set -A 1 devel/cmake
....
Once a package is a leaf package and is marked as automatic, it gets selected by `pkg autoremove`.
Marking an installed package as _not_ automatic can be done using:
[source,shell]
....
# pkg set -A 0 devel/cmake
....
[[pkgng-backup]]
=== Restoring the Package Database
Unlike the traditional package management system, pkg includes its own package database backup mechanism. This functionality is enabled by default.
[TIP]
====
To disable the periodic script from backing up the package database, set `daily_backup_pkgdb_enable="NO"` in man:periodic.conf[5].
====
To restore the contents of a previous package database backup, run the following command replacing _/path/to/pkg.sql_ with the location of the backup:
[source,shell]
....
# pkg backup -r /path/to/pkg.sql
....
[NOTE]
====
If restoring a backup taken by the periodic script, it must be decompressed prior to being restored.
====
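For example, assuming the periodic script saved the compressed backup as [.filename]#/var/backups/pkg.sql.xz# (the exact path may differ), it can be decompressed and then restored:
[source,shell]
....
# xzcat /var/backups/pkg.sql.xz > /tmp/pkg.sql
# pkg backup -r /tmp/pkg.sql
....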
To run a manual backup of the pkg database, run the following command, replacing _/path/to/pkg.sql_ with a suitable file name and location:
[source,shell]
....
# pkg backup -d /path/to/pkg.sql
....
[[pkgng-clean]]
=== Removing Stale Packages
By default, pkg stores binary packages in a cache directory defined by `PKG_CACHEDIR` in man:pkg.conf[5]. Only copies of the latest installed packages are kept. Older versions of pkg kept all previous packages. To remove these outdated binary packages, run:
[source,shell]
....
# pkg clean
....
The entire cache may be cleared by running:
[source,shell]
....
# pkg clean -a
....
[[pkgng-set]]
=== Modifying Package Metadata
Software within the FreeBSD Ports Collection can undergo major version number changes. To address this, pkg has a built-in command to update package origins. This can be useful, for example, if package:lang/php5[] is renamed to package:lang/php53[] so that package:lang/php5[] can now represent version `5.4`.
To change the package origin for the above example, run:
[source,shell]
....
# pkg set -o lang/php5:lang/php53
....
As another example, to update package:lang/ruby18[] to package:lang/ruby19[], run:
[source,shell]
....
# pkg set -o lang/ruby18:lang/ruby19
....
As a final example, to change the origin of the [.filename]#libglut# shared libraries from package:graphics/libglut[] to package:graphics/freeglut[], run:
[source,shell]
....
# pkg set -o graphics/libglut:graphics/freeglut
....
[NOTE]
====
When changing package origins, it is important to reinstall packages that are dependent on the package with the modified origin. To force a reinstallation of dependent packages, run:
[source,shell]
....
# pkg install -Rf graphics/freeglut
....
====
[[ports-using]]
== Using the Ports Collection
The Ports Collection is a set of [.filename]##Makefile##s, patches, and description files. Each set of these files is used to compile and install an individual application on FreeBSD, and is called a _port_.
By default, the Ports Collection itself is stored as a subdirectory of [.filename]#/usr/ports#.
[WARNING]
====
Before installing and using the Ports Collection, please be aware that it is generally ill-advised to use the Ports Collection in conjunction with the binary packages provided via pkg to install software.
pkg, by default, tracks quarterly branch-releases of the ports tree and not HEAD. Dependencies could be different for a port in HEAD compared to its counterpart in a quarterly branch release and this could result in conflicts between dependencies installed by pkg and those from the Ports Collection.
If the Ports Collection and pkg must be used in conjunction, then be sure that your Ports Collection and pkg are on the same branch release of the ports tree.
====
The Ports Collection contains directories for software categories. Inside each category are subdirectories for individual applications. Each application subdirectory contains a set of files that tells FreeBSD how to compile and install that program, called a _ports skeleton_. Each port skeleton includes these files and directories:
* [.filename]#Makefile#: contains statements that specify how the application should be compiled and where its components should be installed.
* [.filename]#distinfo#: contains the names and checksums of the files that must be downloaded to build the port.
* [.filename]#files/#: this directory contains any patches needed for the program to compile and install on FreeBSD. This directory may also contain other files used to build the port.
* [.filename]#pkg-descr#: provides a more detailed description of the program.
* [.filename]#pkg-plist#: a list of all the files that will be installed by the port. It also tells the ports system which files to remove upon deinstallation.
Some ports include [.filename]#pkg-message# or other files to handle special situations. For more details on these files, and on ports in general, refer to the link:{porters-handbook}[FreeBSD Porter's Handbook].
The port does not include the actual source code, also known as a [.filename]#distfile#. The extract portion of building a port will automatically save the downloaded source to [.filename]#/usr/ports/distfiles#.
[[ports-using-installation-methods]]
=== Installing the Ports Collection
Before an application can be compiled using a port, the Ports Collection must first be installed. If it was not installed during the installation of FreeBSD, use one of the following methods to install it:
[[ports-using-portsnap-method]]
[.procedure]
****
*Procedure: Portsnap Method*
The base system of FreeBSD includes Portsnap. This is a fast and user-friendly tool for retrieving the Ports Collection and is the recommended choice for most users not running FreeBSD-CURRENT. This utility connects to a FreeBSD site, verifies the secure key, and downloads a new copy of the Ports Collection. The key is used to verify the integrity of all downloaded files.
. To download a compressed snapshot of the Ports Collection into [.filename]#/var/db/portsnap#:
+
[source,shell]
....
# portsnap fetch
....
+
. When running Portsnap for the first time, extract the snapshot into [.filename]#/usr/ports#:
+
[source,shell]
....
# portsnap extract
....
+
. After the first use of Portsnap has been completed as shown above, [.filename]#/usr/ports# can be updated as needed by running:
+
[source,shell]
....
# portsnap fetch
# portsnap update
....
+
When using `fetch`, the `extract` or `update` operations may be run consecutively, like so:
+
[source,shell]
....
# portsnap fetch update
....
****
[[ports-using-git-method]]
[.procedure]
****
*Procedure: Git Method*
If more control over the ports tree is needed or if local changes need to be maintained, or if running FreeBSD-CURRENT, Git can be used to obtain the Ports Collection. Refer to link:{committers-guide}#git-primer[the Git Primer] for a detailed description of Git.
. Git must be installed before it can be used to check out the ports tree. If a copy of the ports tree is already present, install Git like this:
+
[source,shell]
....
# cd /usr/ports/devel/git
# make install clean
....
+
If the ports tree is not available, or pkg is being used to manage packages, Git can be installed as a package:
+
[source,shell]
....
# pkg install git
....
+
. Check out a copy of the HEAD branch of the ports tree:
+
[source,shell]
....
# git clone https://git.FreeBSD.org/ports.git /usr/ports
....
+
. Or, check out a copy of a quarterly branch:
+
[source,shell]
....
# git clone https://git.FreeBSD.org/ports.git -b 2020Q3 /usr/ports
....
+
. As needed, update [.filename]#/usr/ports# after the initial Git checkout:
+
[source,shell]
....
# git -C /usr/ports pull
....
+
. As needed, switch [.filename]#/usr/ports# to a different quarterly branch:
+
[source,shell]
....
# git -C /usr/ports switch 2020Q4
....
****
=== Installing Ports
This section provides basic instructions on using the Ports Collection to install or remove software. The detailed description of available `make` targets and environment variables is available in man:ports[7].
[WARNING]
====
Before compiling any port, be sure to update the Ports Collection as described in the previous section. Since the installation of any third-party software can introduce security vulnerabilities, it is recommended to first check https://vuxml.freebsd.org/[] for known security issues related to the port. Alternately, run `pkg audit -F` before installing a new port. This command can be configured to automatically perform a security audit and an update of the vulnerability database during the daily security system check. For more information, refer to man:pkg-audit[8] and man:periodic[8].
====
Using the Ports Collection assumes a working Internet connection. It also requires superuser privilege.
To compile and install the port, change to the directory of the port to be installed, then type `make install` at the prompt. Messages will indicate the progress:
[source,shell]
....
# cd /usr/ports/sysutils/lsof
# make install
>> lsof_4.88D.freebsd.tar.gz doesn't seem to exist in /usr/ports/distfiles/.
>> Attempting to fetch from ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/.
===> Extracting for lsof-4.88
...
[extraction output snipped]
...
>> Checksum OK for lsof_4.88D.freebsd.tar.gz.
===> Patching for lsof-4.88.d,8
===> Applying FreeBSD patches for lsof-4.88.d,8
===> Configuring for lsof-4.88.d,8
...
[configure output snipped]
...
===> Building for lsof-4.88.d,8
...
[compilation output snipped]
...
===> Installing for lsof-4.88.d,8
...
[installation output snipped]
...
===> Generating temporary packing list
===> Compressing manual pages for lsof-4.88.d,8
===> Registering installation for lsof-4.88.d,8
===> SECURITY NOTE:
This port has installed the following binaries which execute with
increased privileges.
/usr/local/sbin/lsof
#
....
Since `lsof` is a program that runs with increased privileges, a security warning is displayed as it is installed. Once the installation is complete, the prompt will be returned.
Some shells keep a cache of the commands that are available in the directories listed in the `PATH` environment variable, to speed up lookup operations for the executable file of these commands. Users of the `tcsh` shell should type `rehash` so that a newly installed command can be used without specifying its full path. Use `hash -r` instead for the `sh` shell. Refer to the documentation for the shell for more information.
During installation, a working subdirectory is created which contains all the temporary files used during compilation. Removing this directory saves disk space and minimizes the chance of problems later when upgrading to the newer version of the port:
[source,shell]
....
# make clean
===> Cleaning for lsof-88.d,8
#
....
[NOTE]
====
To save this extra step, instead use `make install clean` when compiling the port.
====
==== Customizing Ports Installation
Some ports provide build options which can be used to enable or disable application components, provide security options, or allow for other customizations. Examples include package:www/firefox[], package:security/gpgme[], and package:mail/sylpheed-claws[]. If the port depends upon other ports which have configurable options, it may pause several times for user interaction as the default behavior is to prompt the user to select options from a menu. To avoid this and do all of the configuration in one batch, run `make config-recursive` within the port skeleton. Then, run `make install [clean]` to compile and install the port.
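For example, combining these steps for one of the ports mentioned above might look like this:
[source,shell]
....
# cd /usr/ports/www/firefox
# make config-recursive
# make install clean
....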
[TIP]
====
When using `config-recursive`, the list of ports to configure is gathered by the `all-depends-list` target. It is recommended to run `make config-recursive` until all dependent ports options have been defined and the ports options screens no longer appear, to be certain that all dependency options have been configured.
====
There are several ways to revisit a port's build options menu in order to add, remove, or change these options after a port has been built. One method is to `cd` into the directory containing the port and type `make config`. Another is to use `make showconfig`. A third option is to execute `make rmconfig`, which removes all selected options and allows you to start over. All of these options, and others, are explained in great detail in man:ports[7].
The ports system uses man:fetch[1] to download the source files, which supports various environment variables. The `FTP_PASSIVE_MODE`, `FTP_PROXY`, and `FTP_PASSWORD` variables may need to be set if the FreeBSD system is behind a firewall or FTP/HTTP proxy. See man:fetch[3] for the complete list of supported variables.
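For example, on a system behind an FTP proxy, the relevant variables can be set for a single invocation of `make`; the proxy URL shown here is only a placeholder:
[source,shell]
....
# cd /usr/ports/sysutils/lsof
# env FTP_PASSIVE_MODE=YES FTP_PROXY=ftp://proxy.example.com:8021/ make fetch
....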
For users who cannot be connected to the Internet all the time, `make fetch` can be run within [.filename]#/usr/ports#, to fetch all distfiles, or within a category, such as [.filename]#/usr/ports/net#, or within the specific port skeleton. Note that if a port has any dependencies, running this command in a category or ports skeleton will _not_ fetch the distfiles of ports from another category. Instead, use `make fetch-recursive` to also fetch the distfiles for all the dependencies of a port.
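For instance, to pre-fetch the distfiles for a single port together with all of its dependencies:
[source,shell]
....
# cd /usr/ports/www/firefox
# make fetch-recursive
....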
In rare cases, such as when an organization has a local distfiles repository, the `MASTER_SITES` variable can be used to override the download locations specified in the [.filename]#Makefile#. When using this override, specify the alternate location:
[source,shell]
....
# cd /usr/ports/directory
# make MASTER_SITE_OVERRIDE= \
ftp://ftp.organization.org/pub/FreeBSD/ports/distfiles/ fetch
....
The `WRKDIRPREFIX` and `PREFIX` variables can override the default working and target directories. For example:
[source,shell]
....
# make WRKDIRPREFIX=/usr/home/example/ports install
....
will compile the port in [.filename]#/usr/home/example/ports# and install everything under [.filename]#/usr/local#.
[source,shell]
....
# make PREFIX=/usr/home/example/local install
....
will compile the port in [.filename]#/usr/ports# and install it in [.filename]#/usr/home/example/local#. And:
[source,shell]
....
# make WRKDIRPREFIX=../ports PREFIX=../local install
....
will combine the two.
These can also be set as environment variables. Refer to the manual page for your shell for instructions on how to set an environment variable.
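For instance, in man:sh[1] the variable can be exported for the session before building; the directory shown matches the earlier example:
[source,shell]
....
# export WRKDIRPREFIX=/usr/home/example/ports
# cd /usr/ports/sysutils/lsof
# make install clean
....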
[[ports-removing]]
=== Removing Installed Ports
Installed ports can be uninstalled using `pkg delete`. Examples for using this command can be found in the man:pkg-delete[8] manual page.
Alternately, `make deinstall` can be run in the port's directory:
[source,shell]
....
# cd /usr/ports/sysutils/lsof
# make deinstall
===> Deinstalling for sysutils/lsof
===> Deinstalling
Deinstallation has been requested for the following 1 packages:
lsof-4.88.d,8
The deinstallation will free 229 kB
[1/1] Deleting lsof-4.88.d,8... done
....
It is recommended to read the messages as the port is uninstalled. If the port has any applications that depend upon it, this information will be displayed but the uninstallation will proceed. In such cases, it may be better to reinstall the application in order to prevent broken dependencies.
[[ports-upgrading]]
=== Upgrading Ports
Over time, newer versions of software become available in the Ports Collection. This section describes how to determine which software can be upgraded and how to perform the upgrade.
To determine if newer versions of installed ports are available, ensure that the latest version of the ports tree is installed, using the updating command described in either <<ports-using-portsnap-method, “Portsnap Method”>> or <<ports-using-git-method, “Git Method”>>. On FreeBSD 10 and later, or if the system has been converted to pkg, the following command will list the installed ports which are out of date:
[source,shell]
....
# pkg version -l "<"
....
For FreeBSD 9._X_ and lower, the following command will list the installed ports that are out of date:
[source,shell]
....
# pkg_version -l "<"
....
[IMPORTANT]
====
Before attempting an upgrade, read [.filename]#/usr/ports/UPDATING# from the top of the file to the date closest to the last time ports were upgraded or the system was installed. This file describes various issues and additional steps users may encounter and need to perform when updating a port, including such things as file format changes, changes in locations of configuration files, or any incompatibilities with previous versions. Make note of any instructions which match any of the ports that need upgrading and follow these instructions when performing the upgrade.
====
[[ports-upgrading-tools]]
==== Tools to Upgrade and Manage Ports
The Ports Collection contains several utilities to perform the actual upgrade. Each has its strengths and weaknesses.
Historically, most installations used either Portmaster or Portupgrade. Synth is a newer alternative.
[NOTE]
====
The choice of which tool is best for a particular system is up to the system administrator. It is recommended practice to back up your data before using any of these tools.
====
[[portmaster]]
==== Upgrading Ports Using Portmaster
package:ports-mgmt/portmaster[] is a very small utility for upgrading installed ports. It is designed to use the tools installed with the FreeBSD base system without depending on other ports or databases. To install this utility as a port:
[source,shell]
....
# cd /usr/ports/ports-mgmt/portmaster
# make install clean
....
Portmaster defines four categories of ports:
* Root port: has no dependencies and is not a dependency of any other ports.
* Trunk port: has no dependencies, but other ports depend upon it.
* Branch port: has dependencies and other ports depend upon it.
* Leaf port: has dependencies but no other ports depend upon it.
To list these categories and search for updates:
[source,shell]
....
# portmaster -L
===>>> Root ports (No dependencies, not depended on)
===>>> ispell-3.2.06_18
===>>> screen-4.0.3
===>>> New version available: screen-4.0.3_1
===>>> tcpflow-0.21_1
===>>> 7 root ports
...
===>>> Branch ports (Have dependencies, are depended on)
===>>> apache22-2.2.3
===>>> New version available: apache22-2.2.8
...
===>>> Leaf ports (Have dependencies, not depended on)
===>>> automake-1.9.6_2
===>>> bash-3.1.17
===>>> New version available: bash-3.2.33
...
===>>> 32 leaf ports
===>>> 137 total installed ports
===>>> 83 have new versions available
....
This command is used to upgrade all outdated ports:
[source,shell]
....
# portmaster -a
....
[NOTE]
====
By default, Portmaster makes a backup package before deleting the existing port. If the installation of the new version is successful, Portmaster deletes the backup. Using `-b` instructs Portmaster not to automatically delete the backup. Adding `-i` starts Portmaster in interactive mode, prompting for confirmation before upgrading each port. Many other options are available. Read through the manual page for man:portmaster[8] for details regarding their usage.
====
If errors are encountered during the upgrade process, add `-f` to upgrade and rebuild all ports:
[source,shell]
....
# portmaster -af
....
Portmaster can also be used to install new ports on the system, upgrading all dependencies before building and installing the new port. To use this function, specify the location of the port in the Ports Collection:
[source,shell]
....
# portmaster shells/bash
....
More information about package:ports-mgmt/portmaster[] may be found in its [.filename]#pkg-descr#.
[[portupgrade]]
==== Upgrading Ports Using Portupgrade
package:ports-mgmt/portupgrade[] is another utility that can be used to upgrade ports. It installs a suite of applications which can be used to manage ports. However, it is dependent upon Ruby. To install the port:
[source,shell]
....
# cd /usr/ports/ports-mgmt/portupgrade
# make install clean
....
Before performing an upgrade using this utility, it is recommended to scan the list of installed ports using `pkgdb -F` and to fix all the inconsistencies it reports.
To upgrade all the outdated ports installed on the system, use `portupgrade -a`. Alternately, include `-i` to be asked for confirmation of every individual upgrade:
[source,shell]
....
# portupgrade -ai
....
To upgrade only a specified application instead of all available ports, use `portupgrade _pkgname_`. It is very important to include `-R` to first upgrade all the ports required by the given application:
[source,shell]
....
# portupgrade -R firefox
....
If `-P` is included, Portupgrade searches for available packages in the local directories listed in `PKG_PATH`. If none are available locally, it then fetches packages from a remote site. If packages can not be found locally or fetched remotely, Portupgrade will use ports. To avoid using ports entirely, specify `-PP`. This last set of options tells Portupgrade to abort if no packages are available:
[source,shell]
....
# portupgrade -PP gnome3
....
To just fetch the port distfiles, or packages, if `-P` is specified, without building or installing anything, use `-F`. For further information on all of the available switches, refer to the manual page for `portupgrade`.
More information about package:ports-mgmt/portupgrade[] may be found in its [.filename]#pkg-descr#.
[[ports-disk-space]]
=== Ports and Disk Space
Using the Ports Collection will use up disk space over time. After building and installing a port, running `make clean` within the ports skeleton will clean up the temporary [.filename]#work# directory. If Portmaster is used to install a port, it will automatically remove this directory unless `-K` is specified. If Portupgrade is installed, this command will remove all [.filename]#work# directories found within the local copy of the Ports Collection:
[source,shell]
....
# portsclean -C
....
In addition, outdated source distribution files accumulate in [.filename]#/usr/ports/distfiles# over time. To use Portupgrade to delete all the distfiles that are no longer referenced by any ports:
[source,shell]
....
# portsclean -D
....
Portupgrade can remove all distfiles not referenced by any port currently installed on the system:
[source,shell]
....
# portsclean -DD
....
If Portmaster is installed, use:
[source,shell]
....
# portmaster --clean-distfiles
....
By default, this command is interactive and prompts the user to confirm if a distfile should be deleted.
In addition to these commands, package:ports-mgmt/pkg_cutleaves[] automates the task of removing installed ports that are no longer needed.
[[ports-poudriere]]
== Building Packages with Poudriere
Poudriere is a `BSD`-licensed utility for creating and testing FreeBSD packages. It uses FreeBSD jails to set up isolated compilation environments. These jails can be used to build packages for versions of FreeBSD that are different from the system on which it is installed, and also to build packages for i386 if the host is an amd64 system. Once the packages are built, they are in a layout identical to the official mirrors. These packages are usable by man:pkg[8] and other package management tools.
Poudriere is installed using the package:ports-mgmt/poudriere[] package or port. The installation includes a sample configuration file [.filename]#/usr/local/etc/poudriere.conf.sample#. Copy this file to [.filename]#/usr/local/etc/poudriere.conf#. Edit the copied file to suit the local configuration.
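For example, to install the pre-built package and create the configuration file from the supplied sample:
[source,shell]
....
# pkg install poudriere
# cp /usr/local/etc/poudriere.conf.sample /usr/local/etc/poudriere.conf
....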
While `ZFS` is not required on the system running poudriere, it is beneficial. When `ZFS` is used, `ZPOOL` must be specified in [.filename]#/usr/local/etc/poudriere.conf# and `FREEBSD_HOST` should be set to a nearby mirror. Defining `CCACHE_DIR` enables the use of package:devel/ccache[] to cache compilation and reduce build times for frequently-compiled code. It may be convenient to put poudriere datasets in an isolated tree mounted at [.filename]#/poudriere#. Defaults for the other configuration values are adequate.
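As a sketch, a minimal [.filename]#/usr/local/etc/poudriere.conf# reflecting these suggestions might contain entries like the following; the pool name, mirror, and cache directory are placeholders to adjust for the local system:
[.programlisting]
....
ZPOOL=zroot
ZROOTFS=/poudriere
FREEBSD_HOST=https://download.FreeBSD.org
CCACHE_DIR=/var/cache/ccache
....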
The number of processor cores detected is used to define how many builds will run in parallel. Supply enough virtual memory, either with RAM or swap space. If virtual memory runs out, the compilation jails will stop and be torn down, resulting in confusing error messages.
[[poudriere-initialization]]
=== Initialize Jails and Port Trees
After configuration, initialize poudriere so that it installs a jail with the required FreeBSD tree and a ports tree. Specify a name for the jail using `-j` and the FreeBSD version with `-v`. On systems running FreeBSD/amd64, the architecture can be set with `-a` to either `i386` or `amd64`. The default is the architecture shown by `uname`.
[source,shell]
....
# poudriere jail -c -j 11amd64 -v 11.4-RELEASE
[00:00:00] Creating 11amd64 fs at /poudriere/jails/11amd64... done
[00:00:00] Using pre-distributed MANIFEST for FreeBSD 11.4-RELEASE amd64
[00:00:00] Fetching base for FreeBSD 11.4-RELEASE amd64
/poudriere/jails/11amd64/fromftp/base.txz 125 MB 4110 kBps 31s
[00:00:33] Extracting base... done
[00:00:54] Fetching src for FreeBSD 11.4-RELEASE amd64
/poudriere/jails/11amd64/fromftp/src.txz 154 MB 4178 kBps 38s
[00:01:33] Extracting src... done
[00:02:31] Fetching lib32 for FreeBSD 11.4-RELEASE amd64
/poudriere/jails/11amd64/fromftp/lib32.txz 24 MB 3969 kBps 06s
[00:02:38] Extracting lib32... done
[00:02:42] Cleaning up... done
[00:02:42] Recording filesystem state for clean... done
[00:02:42] Upgrading using ftp
/etc/resolv.conf -> /poudriere/jails/11amd64/etc/resolv.conf
Looking up update.FreeBSD.org mirrors... 3 mirrors found.
Fetching public key from update4.freebsd.org... done.
Fetching metadata signature for 11.4-RELEASE from update4.freebsd.org... done.
Fetching metadata index... done.
Fetching 2 metadata files... done.
Inspecting system... done.
Preparing to download files... done.
Fetching 124 patches.....10....20....30....40....50....60....70....80....90....100....110....120.. done.
Applying patches... done.
Fetching 6 files... done.
The following files will be added as part of updating to
11.4-RELEASE-p1:
/usr/src/contrib/unbound/.github
/usr/src/contrib/unbound/.github/FUNDING.yml
/usr/src/contrib/unbound/contrib/drop2rpz
/usr/src/contrib/unbound/contrib/unbound_portable.service.in
/usr/src/contrib/unbound/services/rpz.c
/usr/src/contrib/unbound/services/rpz.h
/usr/src/lib/libc/tests/gen/spawnp_enoexec.sh
The following files will be updated as part of updating to
11.4-RELEASE-p1:
[…]
Installing updates...Scanning //usr/share/certs/blacklisted for certificates...
Scanning //usr/share/certs/trusted for certificates...
done.
11.4-RELEASE-p1
[00:04:06] Recording filesystem state for clean... done
[00:04:07] Jail 11amd64 11.4-RELEASE-p1 amd64 is ready to be used
....
[source,shell]
....
# poudriere ports -c -p local -m git+https
[00:00:00] Creating local fs at /poudriere/ports/local... done
[00:00:00] Checking out the ports tree... done
....
On a single computer, poudriere can build ports with multiple configurations, in multiple jails, and from different port trees. Custom configurations for these combinations are called _sets_. See the CUSTOMIZATION section of man:poudriere[8] for details after package:ports-mgmt/poudriere[] or package:ports-mgmt/poudriere-devel[] is installed.
The basic configuration shown here puts a single jail-, ports tree-, and set-specific [.filename]#make.conf# in [.filename]#/usr/local/etc/poudriere.d#. The filename in this example is created by combining the jail name, ports tree name, and set name: [.filename]#11amd64-local-workstation-make.conf#. The system [.filename]#make.conf# and this new file are combined at build time to create the [.filename]#make.conf# used by the build jail.
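As an illustration, [.filename]#11amd64-local-workstation-make.conf# might contain a few global build knobs; the settings shown here are only examples:
[.programlisting]
....
OPTIONS_UNSET=X11 DOCS
MAKE_JOBS_NUMBER=4
....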
Packages to be built are entered in [.filename]#11amd64-local-workstation-pkglist#:
[.programlisting]
....
editors/emacs
devel/git
ports-mgmt/pkg
...
....
Options and dependencies for the specified ports are configured:
[source,shell]
....
# poudriere options -j 11amd64 -p local -z workstation -f 11amd64-local-workstation-pkglist
....
Finally, packages are built and a package repository is created:
[source,shell]
....
# poudriere bulk -j 11amd64 -p local -z workstation -f 11amd64-local-workstation-pkglist
....
While running, pressing kbd:[Ctrl+t] displays the current state of the build. Poudriere also builds files in [.filename]#/poudriere/logs/bulk/jailname# that can be used with a web server to display build information.
After completion, the new packages are now available for installation from the poudriere repository.
For more information on using poudriere, see man:poudriere[8] and the main web site, https://github.com/freebsd/poudriere/wiki[].
=== Configuring pkg Clients to Use a Poudriere Repository
While it is possible to use a custom repository alongside the official repository, sometimes it is useful to disable the official repository. This is done by creating a configuration file that overrides and disables the official configuration file. Create [.filename]#/usr/local/etc/pkg/repos/FreeBSD.conf# that contains the following:
[.programlisting]
....
FreeBSD: {
enabled: no
}
....
Usually it is easiest to serve a poudriere repository to the client machines via HTTP. Set up a webserver to serve up the package directory, for instance: [.filename]#/usr/local/poudriere/data/packages/11amd64#, where [.filename]#11amd64# is the name of the build.
If the URL to the package repository is: `http://pkg.example.com/11amd64`, then the repository configuration file in [.filename]#/usr/local/etc/pkg/repos/custom.conf# would look like:
[.programlisting]
....
custom: {
url: "http://pkg.example.com/11amd64",
enabled: yes,
}
....
[[ports-nextsteps]]
== Post-Installation Considerations
Regardless of whether the software was installed from a binary package or port, most third-party applications require some level of configuration after installation. The following commands and locations can be used to help determine what was installed with the application.
* Most applications install at least one default configuration file in [.filename]#/usr/local/etc#. In cases where an application has a large number of configuration files, a subdirectory will be created to hold them. Often, sample configuration files are installed which end with a suffix such as [.filename]#.sample#. The configuration files should be reviewed and possibly edited to meet the system's needs. To edit a sample file, first copy it without the [.filename]#.sample# extension.
* Applications which provide documentation will install it into [.filename]#/usr/local/share/doc# and many applications also install manual pages. This documentation should be consulted before continuing.
* Some applications run services which must be added to [.filename]#/etc/rc.conf# before starting the application. These applications usually install a startup script in [.filename]#/usr/local/etc/rc.d#. See crossref:config[configtuning-starting-services,Starting Services] for more information.
+
[NOTE]
====
By design, applications do not run their startup script upon installation, nor do they run their stop script upon deinstallation or upgrade. This decision is left to the individual system administrator.
====
* Users of man:csh[1] should run `rehash` to rebuild the known binary list in the shell's `PATH`.
* Use `pkg info` to determine which files, man pages, and binaries were installed with the application.
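For example, to list every file installed by a package, using the curl package from the earlier examples:
[source,shell]
....
# pkg info -l curl
....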
[[ports-broken]]
== Dealing with Broken Ports
When a port does not build or install, try the following:
. Search to see if there is a fix pending for the port in the link:https://www.FreeBSD.org/support/[Problem Report database]. If so, implementing the proposed fix may fix the issue.
. Ask the maintainer of the port for help. Type `make maintainer` in the ports skeleton or read the port's [.filename]#Makefile# to find the maintainer's email address. Remember to include the `$FreeBSD:` line from the port's [.filename]#Makefile# and the output leading up to the error in the email to the maintainer.
+
[NOTE]
====
Some ports are not maintained by an individual but instead by a group maintainer represented by a link:{mailing-list-faq}[mailing list]. Many, but not all, of these addresses look like mailto:freebsd-listname@FreeBSD.org[freebsd-listname@FreeBSD.org]. Please take this into account when sending an email.
In particular, ports maintained by mailto:ports@FreeBSD.org[ports@FreeBSD.org] are not maintained by a specific individual. Instead, any fixes and support come from the general community who subscribe to that mailing list. More volunteers are always needed!
====
+
If there is no response to the email, use Bugzilla to submit a bug report using the instructions in link:{problem-reports}[Writing FreeBSD Problem Reports].
. Fix it! The link:{porters-handbook}[Porter's Handbook] includes detailed information on the ports infrastructure so that you can fix the occasional broken port or even submit your own!
. Install the package instead of the port using the instructions in <<pkgng-intro>>.
diff --git a/documentation/content/en/books/handbook/ppp-and-slip/_index.adoc b/documentation/content/en/books/handbook/ppp-and-slip/_index.adoc
index 0292741014..95ec634319 100644
--- a/documentation/content/en/books/handbook/ppp-and-slip/_index.adoc
+++ b/documentation/content/en/books/handbook/ppp-and-slip/_index.adoc
@@ -1,827 +1,828 @@
---
title: Chapter 28. PPP
part: IV. Network Communication
prev: books/handbook/serialcomms
next: books/handbook/mail
+description: FreeBSD supports the Point-to-Point (PPP) protocol which can be used to establish a network or Internet connection using a dial-up modem
---
[[ppp-and-slip]]
= PPP
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 28
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../../images/books/handbook/ppp-and-slip/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/books/handbook/ppp-and-slip/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/books/handbook/ppp-and-slip/
endif::[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[ppp-and-slip-synopsis]]
== Synopsis
FreeBSD supports the Point-to-Point Protocol (PPP), which can be used to establish a network or Internet connection using a dial-up modem. This chapter describes how to configure modem-based communication services in FreeBSD.
After reading this chapter, you will know:
* How to configure, use, and troubleshoot a PPP connection.
* How to set up PPP over Ethernet (PPPoE).
* How to set up PPP over ATM (PPPoA).
Before reading this chapter, you should:
* Be familiar with basic network terminology.
* Understand the basics and purpose of a dial-up connection and PPP.
[[userppp]]
== Configuring PPP
FreeBSD provides built-in support for managing dial-up PPP connections using man:ppp[8]. The default FreeBSD kernel provides support for [.filename]#tun# which is used to interact with the modem hardware. Configuration is performed by editing at least one configuration file, and configuration files containing examples are provided. Finally, `ppp` is used to start and manage connections.
In order to use a PPP connection, the following items are needed:
* A dial-up account with an Internet Service Provider (ISP).
* A dial-up modem.
* The dial-up number for the ISP.
* The login name and password assigned by the ISP.
* The IP address of one or more DNS servers. Normally, the ISP provides these addresses. If it did not, FreeBSD can be configured to use DNS negotiation.
If any of the required information is missing, contact the ISP.
The following information may be supplied by the ISP, but is not necessary:
* The IP address of the default gateway. If this information is unknown, the ISP will automatically provide the correct value during connection setup. When configuring PPP on FreeBSD, this address is referred to as `HISADDR`.
* The subnet mask. If the ISP has not provided one, `255.255.255.255` will be used in the man:ppp[8] configuration file.
* If the ISP has assigned a static IP address and hostname, it should be input into the configuration file. Otherwise, this information will be automatically provided during connection setup.
The rest of this section demonstrates how to configure FreeBSD for common PPP connection scenarios. The required configuration file is [.filename]#/etc/ppp/ppp.conf# and additional files and examples are available in [.filename]#/usr/share/examples/ppp/#.
[NOTE]
====
Throughout this section, many of the file examples display line numbers. These line numbers have been added to make it easier to follow the discussion and are not meant to be placed in the actual file.
When editing a configuration file, proper indentation is important. Lines that end in a `:` start in the first column (beginning of the line) while all other lines should be indented as shown using spaces or tabs.
====
[[userppp-staticIP]]
=== Basic Configuration
In order to configure a PPP connection, first edit [.filename]#/etc/ppp/ppp.conf# with the dial-in information for the ISP. This file is described as follows:
[.programlisting]
....
1 default:
2 set log Phase Chat LCP IPCP CCP tun command
3 ident user-ppp VERSION
4 set device /dev/cuau0
5 set speed 115200
6 set dial "ABORT BUSY ABORT NO\\sCARRIER TIMEOUT 5 \
7 \"\" AT OK-AT-OK ATE1Q0 OK \\dATDT\\T TIMEOUT 40 CONNECT"
8 set timeout 180
9 enable dns
10
11 provider:
12 set phone "(123) 456 7890"
13 set authname foo
14 set authkey bar
15 set timeout 300
16 set ifaddr x.x.x.x/0 y.y.y.y/0 255.255.255.255 0.0.0.0
17 add default HISADDR
....
Line 1:::
Identifies the `default` entry. Commands in this entry (lines 2 through 9) are executed automatically when `ppp` is run.
Line 2:::
Enables verbose logging parameters for testing the connection. Once the configuration is working satisfactorily, this line should be reduced to:
+
[.programlisting]
....
set log phase tun
....
Line 3:::
Displays the version of man:ppp[8] to the PPP software running on the other side of the connection.
Line 4:::
Identifies the device to which the modem is connected, where [.filename]#COM1# is [.filename]#/dev/cuau0# and [.filename]#COM2# is [.filename]#/dev/cuau1#.
Line 5:::
Sets the connection speed. If `115200` does not work on an older modem, try `38400` instead.
Lines 6 & 7:::
The dial string written as an expect-send syntax. Refer to man:chat[8] for more information.
+
Note that this command continues onto the next line for readability. Any command in [.filename]#ppp.conf# may do this if the last character on the line is `\`.
Line 8:::
Sets the idle timeout for the link in seconds.
Line 9:::
Instructs the peer to confirm the DNS settings. If the local network is running its own DNS server, this line should be commented out by adding a `#` at the beginning of the line, or removed.
Line 10:::
A blank line for readability. Blank lines are ignored by man:ppp[8].
Line 11:::
Identifies an entry called `provider`. This could be changed to the name of the ISP so that `load _ISP_` can be used to start the connection.
Line 12:::
Use the phone number for the ISP. Multiple phone numbers may be specified using the colon (`:`) or pipe character (`|`) as a separator. To rotate through the numbers, use a colon. To always attempt to dial the first number first and only use the other numbers if the first number fails, use the pipe character. Always enclose the entire set of phone numbers between quotation marks (`"`) to prevent dialing failures.
Lines 13 & 14:::
Use the user name and password for the ISP.
Line 15:::
Sets the default idle timeout in seconds for the connection. In this example, the connection will be closed automatically after 300 seconds of inactivity. To prevent a timeout, set this value to zero.
Line 16:::
Sets the interface addresses. The values used depend upon whether a static IP address has been obtained from the ISP or if it instead negotiates a dynamic IP address during connection.
+
If the ISP has allocated a static IP address and default gateway, replace _x.x.x.x_ with the static IP address and replace _y.y.y.y_ with the IP address of the default gateway. If the ISP has only provided a static IP address without a gateway address, replace _y.y.y.y_ with `10.0.0.2/0`.
+
If the IP address changes whenever a connection is made, change this line to the following value. This tells man:ppp[8] to use the Internet Protocol Control Protocol (IPCP) to negotiate a dynamic IP address:
+
[.programlisting]
....
set ifaddr 10.0.0.1/0 10.0.0.2/0 255.255.255.255 0.0.0.0
....
Line 17:::
Keep this line as-is as it adds a default route to the gateway. The `HISADDR` will automatically be replaced with the gateway address specified on line 16. It is important that this line appears after line 16.
Depending upon whether man:ppp[8] is started manually or automatically, an [.filename]#/etc/ppp/ppp.linkup# file may also need to be created, containing the following lines. This file is required when running `ppp` in `-auto` mode. It is used after the connection has been established: at this point, the IP address will have been assigned and it is now possible to add the routing table entries. When creating this file, make sure that _provider_ matches the value used in line 11 of [.filename]#ppp.conf#.
[.programlisting]
....
provider:
  add default HISADDR
....
This file is also needed when the default gateway address is "guessed" in a static IP address configuration. In this case, remove line 17 from [.filename]#ppp.conf# and create [.filename]#/etc/ppp/ppp.linkup# with the above two lines. More examples for this file can be found in [.filename]#/usr/share/examples/ppp/#.
By default, `ppp` must be run as `root`. To change this default, add the account of the user who should run `ppp` to the `network` group in [.filename]#/etc/group#.
Then, give the user access to one or more entries in [.filename]#/etc/ppp/ppp.conf# with `allow`. For example, to give `fred` and `mary` permission to only the `provider:` entry, add this line to the `provider:` section:
[.programlisting]
....
allow users fred mary
....
To give the specified users access to all entries, put that line in the `default` section instead.
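As a sketch of the group change described above, the user accounts (using the same example users, `fred` and `mary`) can be added to the `network` group with man:pw[8]:
[source,shell]
....
# pw groupmod network -m fred,mary
....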
=== Advanced Configuration
It is possible to configure PPP to supply DNS and NetBIOS nameserver addresses on demand.
To enable these extensions with PPP version 1.x, add the following lines to the relevant section of [.filename]#/etc/ppp/ppp.conf#:
[.programlisting]
....
enable msext
set ns 203.14.100.1 203.14.100.2
set nbns 203.14.100.5
....
And for PPP version 2 and above:
[.programlisting]
....
accept dns
set dns 203.14.100.1 203.14.100.2
set nbns 203.14.100.5
....
This will tell the clients the primary and secondary name server addresses, and a NetBIOS nameserver host.
In version 2 and above, if the `set dns` line is omitted, PPP will use the values found in [.filename]#/etc/resolv.conf#.
[[userppp-PAPnCHAP]]
==== PAP and CHAP Authentication
Some ISPs set their system up so that the authentication part of the connection is done using either the PAP or the CHAP authentication mechanism. If this is the case, the ISP will not give a `login:` prompt at connection, but will start talking PPP immediately.
PAP is less secure than CHAP, but security is not normally an issue here as passwords, although being sent as plain text with PAP, are being transmitted down a serial line only. There is not much room for crackers to "eavesdrop".
The following alterations must be made:
[.programlisting]
....
13 set authname MyUserName
14 set authkey MyPassword
15 set login
....
Line 13:::
This line specifies the PAP/CHAP user name. Insert the correct value for _MyUserName_.
Line 14:::
This line specifies the PAP/CHAP password. Insert the correct value for _MyPassword_. You may want to add an additional line, such as:
+
[.programlisting]
....
16 accept PAP
....
+
or
+
[.programlisting]
....
16 accept CHAP
....
+
to make it obvious that this is the intention, but PAP and CHAP are both accepted by default.
Line 15:::
The ISP will not normally require a login to the server when using PAP or CHAP. Therefore, disable the "set login" string.
[[userppp-nat]]
==== Using PPP Network Address Translation Capability
PPP has the ability to use internal NAT without kernel diverting capabilities. This functionality may be enabled by the following line in [.filename]#/etc/ppp/ppp.conf#:
[.programlisting]
....
nat enable yes
....
Alternatively, NAT may be enabled with the command-line option `-nat`. There is also an [.filename]#/etc/rc.conf# knob named `ppp_nat`, which is enabled by default.
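For example, to state that default explicitly, the knob can be set in [.filename]#/etc/rc.conf# like this:
[.programlisting]
....
ppp_nat="YES"
....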
When using this feature, it may be useful to include the following [.filename]#/etc/ppp/ppp.conf# options to enable incoming connections forwarding:
[.programlisting]
....
nat port tcp 10.0.0.2:ftp ftp
nat port tcp 10.0.0.2:http http
....
Alternatively, if the outside world is not to be trusted at all:
[.programlisting]
....
nat deny_incoming yes
....
[[userppp-final]]
=== Final System Configuration
While `ppp` is now configured, some edits still need to be made to [.filename]#/etc/rc.conf#.
Working from the top down in this file, make sure the `hostname=` line is set:
[.programlisting]
....
hostname="foo.example.com"
....
If the ISP has supplied a static IP address and name, use this name as the host name.
Look for the `network_interfaces` variable. To configure the system to dial the ISP on demand, make sure the [.filename]#tun0# device is added to the list, otherwise remove it.
[.programlisting]
....
network_interfaces="lo0 tun0"
ifconfig_tun0=
....
[NOTE]
====
The `ifconfig_tun0` variable should be empty, and a file called [.filename]#/etc/start_if.tun0# should be created. This file should contain the line:
[.programlisting]
....
ppp -auto mysystem
....
This script is executed at network configuration time, starting the ppp daemon in automatic mode. If this machine acts as a gateway, consider including `-alias`. Refer to the manual page for further details.
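For a machine acting as a gateway, the line might instead read as follows (a sketch; `mysystem` is the [.filename]#ppp.conf# label assumed above):
[.programlisting]
....
ppp -auto -alias mysystem
....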
====
Make sure that the router program is set to `NO` with the following line in [.filename]#/etc/rc.conf#:
[.programlisting]
....
router_enable="NO"
....
It is important that the `routed` daemon is not started, as `routed` tends to delete the default routing table entries created by `ppp`.
It is probably a good idea to ensure that the `sendmail_flags` line does not include the `-q` option, otherwise `sendmail` will attempt to do a network lookup every now and then, possibly causing your machine to dial out. You may try:
[.programlisting]
....
sendmail_flags="-bd"
....
The downside is that `sendmail` is forced to re-examine the mail queue whenever the PPP link comes up. To automate this, include `!bg` in [.filename]#ppp.linkup#:
[.programlisting]
....
provider:
  delete ALL
  add 0 0 HISADDR
  !bg sendmail -bd -q30m
....
An alternative is to set up a "dfilter" to block SMTP traffic. Refer to the sample files for further details.
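As a rough sketch only, such a dial filter might use the man:ppp[8] packet filter syntax to keep SMTP traffic (TCP port 25) from bringing the link up; verify the exact syntax against the sample files before relying on it:
[.programlisting]
....
set filter dial 0 deny tcp dst eq 25
set filter dial 1 permit 0/0 0/0
....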
=== Using `ppp`
All that is left is to reboot the machine. After rebooting, either type:
[source,shell]
....
# ppp
....
and then `dial provider` to start the PPP session, or, to configure `ppp` to establish sessions automatically when there is outbound traffic and [.filename]#start_if.tun0# does not exist, type:
[source,shell]
....
# ppp -auto provider
....
It is possible to talk to the `ppp` program while it is running in the background, but only if a suitable diagnostic port has been set up. To do this, add the following line to the configuration:
[.programlisting]
....
set server /var/run/ppp-tun%d DiagnosticPassword 0177
....
This will tell PPP to listen to the specified UNIX(R) domain socket, asking clients for the specified password before allowing access. The `%d` in the name is replaced with the [.filename]#tun# device number that is in use.
Once a socket has been set up, the man:pppctl[8] program may be used in scripts that wish to manipulate the running program.
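For instance, assuming the server socket and password from the example above, a quick status check from the command line might look like this:
[source,shell]
....
# pppctl -p DiagnosticPassword /var/run/ppp-tun0 show ipcp
....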
[[userppp-mgetty]]
=== Configuring Dial-in Services
crossref:serialcomms[dialup,"Dial-in Service"] provides a good description of enabling dial-up services using man:getty[8].
An alternative to `getty` is package:comms/mgetty+sendfax[], a smarter version of `getty` designed with dial-up lines in mind.
The advantage of using `mgetty` is that it actively _talks_ to modems, meaning that if a port is turned off in [.filename]#/etc/ttys#, then the modem will not answer the phone.
Later versions of `mgetty` (from 0.99beta onwards) also support the automatic detection of PPP streams, allowing clients scriptless access to the server.
Refer to http://mgetty.greenie.net/doc/mgetty_toc.html[http://mgetty.greenie.net/doc/mgetty_toc.html] for more information on `mgetty`.
By default the package:comms/mgetty+sendfax[] port comes with the `AUTO_PPP` option enabled allowing `mgetty` to detect the LCP phase of PPP connections and automatically spawn off a ppp shell. However, since the default login/password sequence does not occur it is necessary to authenticate users using either PAP or CHAP.
This section assumes the user has successfully compiled and installed the package:comms/mgetty+sendfax[] port.
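For reference, a dial-up line is typically enabled in [.filename]#/etc/ttys# with an entry along these lines (the device name and speed are assumptions, not values from this section):
[.programlisting]
....
ttyu0   "/usr/local/sbin/mgetty -s 57600"       dialup  on
....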
Ensure that [.filename]#/usr/local/etc/mgetty+sendfax/login.config# has the following:
[.programlisting]
....
/AutoPPP/ - - /etc/ppp/ppp-pap-dialup
....
This tells `mgetty` to run [.filename]#ppp-pap-dialup# for detected PPP connections.
Create an executable file called [.filename]#/etc/ppp/ppp-pap-dialup# containing the following:
[.programlisting]
....
#!/bin/sh
exec /usr/sbin/ppp -direct pap$IDENT
....
For each dial-up line enabled in [.filename]#/etc/ttys#, create a corresponding entry in [.filename]#/etc/ppp/ppp.conf#. This will happily co-exist with the definitions we created above.
[.programlisting]
....
pap:
  enable pap
  set ifaddr 203.14.100.1 203.14.100.20-203.14.100.40
  enable proxy
....
Each user logging in with this method will need to have a username/password entry in [.filename]#/etc/ppp/ppp.secret#, or, alternatively, add the following option to authenticate users via PAP against [.filename]#/etc/passwd#:
[.programlisting]
....
enable passwdauth
....
To assign some users a static IP number, specify the number as the third argument in [.filename]#/etc/ppp/ppp.secret#. See [.filename]#/usr/share/examples/ppp/ppp.secret.sample# for examples.
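A hypothetical [.filename]#ppp.secret# entry assigning a static address from the range above might look like this (the user name and password are placeholders):
[.programlisting]
....
# username   password        IP address
fred         fredSecretWord  203.14.100.22
....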
[[ppp-troubleshoot]]
== Troubleshooting PPP Connections
This section covers a few issues which may arise when using PPP over a modem connection. Some ISPs present the `ssword` prompt while others present `password`. If the `ppp` script is not written accordingly, the login attempt will fail. The most common way to debug `ppp` connections is by connecting manually as described in this section.
=== Check the Device Nodes
When using a custom kernel, make sure to include the following line in the kernel configuration file:
[.programlisting]
....
device uart
....
The [.filename]#uart# device is already included in the `GENERIC` kernel, so no additional steps are necessary in this case. Just check the `dmesg` output for the modem device with:
[source,shell]
....
# dmesg | grep uart
....
This should display some pertinent output about the [.filename]#uart# devices. These are the COM ports we need. If the modem acts like a standard serial port, it should be listed on [.filename]#uart1#, or [.filename]#COM2#. If so, a kernel rebuild is not required. When matching up, if the modem is on [.filename]#uart1#, the modem device would be [.filename]#/dev/cuau1#.
=== Connecting Manually
Connecting to the Internet by manually controlling `ppp` is quick, easy, and a great way to debug a connection or just get information on how the ISP treats `ppp` client connections. Let's start PPP from the command line. Note that in all of our examples, _example_ will be used as the hostname of the machine running PPP. To start `ppp`:
[source,shell]
....
# ppp
....
[source,shell]
....
ppp ON example> set device /dev/cuau1
....
This second command sets the modem device to [.filename]#cuau1#.
[source,shell]
....
ppp ON example> set speed 115200
....
This sets the connection speed to 115,200 bps.
[source,shell]
....
ppp ON example> enable dns
....
This tells `ppp` to configure the resolver and add the nameserver lines to [.filename]#/etc/resolv.conf#. If `ppp` cannot determine the hostname, it can manually be set later.
[source,shell]
....
ppp ON example> term
....
This switches to "terminal" mode in order to manually control the modem.
[.programlisting]
....
deflink: Entering terminal mode on /dev/cuau1
type '~h' for help
....
[source,shell]
....
at
OK
atdt123456789
....
Use `at` to initialize the modem, then use `atdt` and the number for the ISP to begin the dial in process.
[source,shell]
....
CONNECT
....
Confirmation of the connection. Any connection problems unrelated to hardware will show up here and must be resolved before continuing.
[source,shell]
....
ISP Login:myusername
....
At this prompt, reply with the username that was provided by the ISP.
[source,shell]
....
ISP Pass:mypassword
....
At this prompt, reply with the password that was provided by the ISP. Just like logging into FreeBSD, the password will not echo.
[source,shell]
....
Shell or PPP:ppp
....
Depending on the ISP, this prompt might not appear. If it does, it is asking whether to use a shell on the provider or to start `ppp`. In this example, `ppp` was selected in order to establish an Internet connection.
[source,shell]
....
Ppp ON example>
....
Notice that in this example the first `p` has been capitalized. This shows that we have successfully connected to the ISP.
[source,shell]
....
Ppp ON example>
....
We have successfully authenticated with our ISP and are waiting for the assigned IP address.
[source,shell]
....
PPP ON example>
....
We have made an agreement on an IP address and successfully completed our connection.
[source,shell]
....
PPP ON example>add default HISADDR
....
Here we add our default route. This must be done before we can talk to the outside world, as currently the only established connection is with the peer. If this fails due to existing routes, put a bang character (`!`) in front of `add`. Alternatively, set this before making the actual connection and it will negotiate a new route accordingly.
If everything went well, we should now have an active connection to the Internet, which can be thrown into the background using kbd:[Ctrl+Z]. If `PPP` returns to `ppp`, the connection has been lost. This is good to know because it shows the connection status: capital P's represent a connection to the ISP, and lowercase p's show that the connection has been lost.
=== Debugging
If a connection cannot be established, turn hardware CTS/RTS flow control off using `set ctsrts off`. This is mainly needed when connecting to some PPP-capable terminal servers, where PPP hangs when it tries to write data to the communication link, waiting for a Clear To Send (CTS) signal which may never come. When using this option, include `set accmap`, as it may be required to defeat hardware that depends on passing certain characters from end to end, most of the time XON/XOFF. Refer to man:ppp[8] for more information on this option and how it is used.
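For example, to escape the XON/XOFF characters (Ctrl+Q and Ctrl+S), the value commonly suggested is:
[.programlisting]
....
set accmap 000a0000
....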
An older modem may need `set parity even`. Parity is set to none by default, but is used for error checking (with a large increase in traffic) on older modems.
PPP may not return to command mode, which is usually a negotiation error where the ISP is waiting for negotiation to begin. At this point, using `~p` will force `ppp` to start sending the configuration information.
If a login prompt never appears, PAP or CHAP authentication is most likely required. To use PAP or CHAP, add the following options to PPP before going into terminal mode:
[source,shell]
....
ppp ON example> set authname myusername
....
Where _myusername_ should be replaced with the username that was assigned by the ISP.
[source,shell]
....
ppp ON example> set authkey mypassword
....
Where _mypassword_ should be replaced with the password that was assigned by the ISP.
If a connection is established but domain names cannot be resolved, try using man:ping[8] with an IP address. If there is 100 percent (100%) packet loss, it is likely that a default route was not assigned. Double check that `add default HISADDR` was set during the connection. If a connection can be made to a remote IP address, it is possible that a resolver address has not been added to [.filename]#/etc/resolv.conf#. This file should look like:
[.programlisting]
....
domain example.com
nameserver x.x.x.x
nameserver y.y.y.y
....
Where _x.x.x.x_ and _y.y.y.y_ should be replaced with the IP address of the ISP's DNS servers.
To configure man:syslog[3] to provide logging for the PPP connection, make sure this line exists in [.filename]#/etc/syslog.conf#:
[.programlisting]
....
!ppp
*.* /var/log/ppp.log
....
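If man:syslogd[8] is already running, restart it so the change takes effect:
[source,shell]
....
# service syslogd restart
....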
[[pppoe]]
== Using PPP over Ethernet (PPPoE)
This section describes how to set up PPP over Ethernet (PPPoE).
Here is an example of a working [.filename]#ppp.conf#:
[.programlisting]
....
default:
  set log Phase tun command # you can add more detailed logging if you wish
  set ifaddr 10.0.0.1/0 10.0.0.2/0

name_of_service_provider:
  set device PPPoE:xl1 # replace xl1 with your Ethernet device
  set authname YOURLOGINNAME
  set authkey YOURPASSWORD
  set dial
  set login
  add default HISADDR
....
As `root`, run:
[source,shell]
....
# ppp -ddial name_of_service_provider
....
Add the following to [.filename]#/etc/rc.conf#:
[.programlisting]
....
ppp_enable="YES"
ppp_mode="ddial"
ppp_nat="YES" # if you want to enable nat for your local network, otherwise NO
ppp_profile="name_of_service_provider"
....
=== Using a PPPoE Service Tag
Sometimes it will be necessary to use a service tag to establish the connection. Service tags are used to distinguish between different PPPoE servers attached to a given network.
Any required service tag information should be in the documentation provided by the ISP.
As a last resort, one could try installing the package:net/rr-pppoe[] package or port. Bear in mind, however, that this may de-program the modem and render it useless, so think twice before doing it. Simply install the program shipped with the modem. Then, access the menu:System[] menu from the program. The name of the profile should be listed there. It is usually _ISP_.
The profile name (service tag) will be used in the PPPoE configuration entry in [.filename]#ppp.conf# as the provider part for `set device`. Refer to man:ppp[8] for full details. It should look like this:
[.programlisting]
....
set device PPPoE:xl1:ISP
....
Do not forget to change _xl1_ to the proper device for the Ethernet card.
Do not forget to change _ISP_ to the profile name found above.
For additional information, refer to http://renaud.waldura.com/doc/freebsd/pppoe/[Cheaper Broadband with FreeBSD on DSL] by Renaud Waldura.
[[ppp-3com]]
=== PPPoE with a 3Com(R) HomeConnect(TM) ADSL Modem Dual Link
This modem does not follow the PPPoE specification defined in http://www.faqs.org/rfcs/rfc2516.html[RFC 2516].
In order to make FreeBSD capable of communicating with this device, a sysctl must be set. This can be done automatically at boot time by updating [.filename]#/etc/sysctl.conf#:
[.programlisting]
....
net.graph.nonstandard_pppoe=1
....
or can be done immediately with the command:
[source,shell]
....
# sysctl net.graph.nonstandard_pppoe=1
....
Unfortunately, because this is a system-wide setting, it is not possible to talk to a normal PPPoE client or server and a 3Com(R) HomeConnect(TM) ADSL Modem at the same time.
[[pppoa]]
== Using PPP over ATM (PPPoA)
The following describes how to set up PPP over ATM (PPPoA). PPPoA is a popular choice among European DSL providers.
=== Using mpd
The mpd application can be used to connect to a variety of services, in particular PPTP services. It can be installed using the package:net/mpd5[] package or port. Many ADSL modems require that a PPTP tunnel is created between the modem and computer.
Once installed, configure mpd to suit the provider's settings. The port installs a set of well-documented sample configuration files in [.filename]#/usr/local/etc/mpd/#. A complete guide to configuring mpd is available in HTML format in [.filename]#/usr/ports/shared/doc/mpd/#. Here is a sample configuration for connecting to an ADSL service with mpd. The configuration is spread over two files, first the [.filename]#mpd.conf#:
[NOTE]
====
This example [.filename]#mpd.conf# only works with mpd 4.x.
====
[.programlisting]
....
default:
    load adsl

adsl:
    new -i ng0 adsl adsl
    set bundle authname username <.>
    set bundle password password <.>
    set bundle disable multilink
    set link no pap acfcomp protocomp
    set link disable chap
    set link accept chap
    set link keep-alive 30 10
    set ipcp no vjcomp
    set ipcp ranges 0.0.0.0/0 0.0.0.0/0
    set iface route default
    set iface disable on-demand
    set iface enable proxy-arp
    set iface idle 0
    open
....
<.> The username used to authenticate with your ISP.
<.> The password used to authenticate with your ISP.
Information about the link, or links, to establish is found in [.filename]#mpd.links#. An example [.filename]#mpd.links# to accompany the above example is given beneath:
[.programlisting]
....
adsl:
    set link type pptp
    set pptp mode active
    set pptp enable originate outcall
    set pptp self 10.0.0.1 <.>
    set pptp peer 10.0.0.138 <.>
....
<.> The IP address of the FreeBSD computer running mpd.
<.> The IP address of the ADSL modem. The Alcatel SpeedTouch(TM) Home defaults to `10.0.0.138`.
It is possible to initialize the connection easily by issuing the following command as `root`:
[source,shell]
....
# mpd -b adsl
....
To view the status of the connection:
[source,shell]
....
% ifconfig ng0
ng0: flags=88d1<UP,POINTOPOINT,RUNNING,NOARP,SIMPLEX,MULTICAST> mtu 1500
inet 216.136.204.117 --> 204.152.186.171 netmask 0xffffffff
....
Using mpd is the recommended way to connect to an ADSL service with FreeBSD.
=== Using pptpclient
It is also possible to use FreeBSD to connect to other PPPoA services using package:net/pptpclient[].
To use package:net/pptpclient[] to connect to a DSL service, install the port or package, then edit [.filename]#/etc/ppp/ppp.conf#. An example section of [.filename]#ppp.conf# is given below. For further information on [.filename]#ppp.conf# options consult man:ppp[8].
[.programlisting]
....
adsl:
  set log phase chat lcp ipcp ccp tun command
  set timeout 0
  enable dns
  set authname username <.>
  set authkey password <.>
  set ifaddr 0 0
  add default HISADDR
....
<.> The username for the DSL provider.
<.> The password for your account.
[WARNING]
====
Since the account's password is added to [.filename]#ppp.conf# in plain text form, make sure nobody can read the contents of this file:
[source,shell]
....
# chown root:wheel /etc/ppp/ppp.conf
# chmod 600 /etc/ppp/ppp.conf
....
====
This will open a tunnel for a PPP session to the DSL router. Ethernet DSL modems have a preconfigured LAN IP address to connect to. In the case of the Alcatel SpeedTouch(TM) Home, this address is `10.0.0.138`. The router's documentation should list the address the device uses. To open the tunnel and start a PPP session:
[source,shell]
....
# pptp address adsl
....
[TIP]
====
If an ampersand ("&") is added to the end of this command, pptp will return the prompt.
====
A [.filename]#tun# virtual tunnel device will be created for interaction between the pptp and ppp processes. Once the prompt is returned, or the pptp process has confirmed a connection, examine the tunnel:
[source,shell]
....
% ifconfig tun0
tun0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1500
inet 216.136.204.21 --> 204.152.186.171 netmask 0xffffff00
Opened by PID 918
....
If the connection fails, check the configuration of the router, which is usually accessible using a web browser. Also, examine the output of `pptp` and the contents of the log file, [.filename]#/var/log/ppp.log# for clues.
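For example, the log can be watched while retrying the connection:
[source,shell]
....
# tail -f /var/log/ppp.log
....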
diff --git a/documentation/content/en/books/handbook/preface/_index.adoc b/documentation/content/en/books/handbook/preface/_index.adoc
index 1c0216159b..0458e59d58 100644
--- a/documentation/content/en/books/handbook/preface/_index.adoc
+++ b/documentation/content/en/books/handbook/preface/_index.adoc
@@ -1,246 +1,247 @@
---
title: Preface
prev: books/handbook/
next: books/handbook/parti
+description: The FreeBSD newcomer will find that the first section of this book guides the user through the FreeBSD installation process and gently introduces the concepts and conventions that underpin UNIX
---
[preface]
[[book-preface]]
= Preface
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
[[preface-audience]]
== Intended Audience
The FreeBSD newcomer will find that the first section of this book guides the user through the FreeBSD installation process and gently introduces the concepts and conventions that underpin UNIX(R). Working through this section requires little more than the desire to explore, and the ability to take on board new concepts as they are introduced.
Once you have traveled this far, the second, far larger, section of the Handbook is a comprehensive reference to all manner of topics of interest to FreeBSD system administrators. Some of these chapters may recommend that you do some prior reading, and this is noted in the synopsis at the beginning of each chapter.
For a list of additional sources of information, please see crossref:bibliography[bibliography,Bibliography].
[[preface-changes-from3]]
== Changes from the Third Edition
The current online version of the Handbook represents the cumulative effort of many hundreds of contributors over the past 10 years. The following are some of the significant changes since the two volume third edition was published in 2004:
* crossref:dtrace[dtrace,DTrace] has been added with information about the powerful DTrace performance analysis tool.
* crossref:filesystems[filesystems,Other File Systems] has been added with information about non-native file systems in FreeBSD, such as ZFS from Sun(TM).
* crossref:audit[audit,Security Event Auditing] has been added to cover the new auditing capabilities in FreeBSD and explain its use.
* crossref:virtualization[virtualization,Virtualization] has been added with information about installing FreeBSD on virtualization software.
* crossref:bsdinstall[bsdinstall,Installing FreeBSD] has been added to cover installation of FreeBSD using the new installation utility, bsdinstall.
[[preface-changes-from2]]
== Changes from the Second Edition (2004)
The third edition was the culmination of over two years of work by the dedicated members of the FreeBSD Documentation Project. The printed edition grew to such a size that it was necessary to publish as two separate volumes. The following are the major changes in this new edition:
* crossref:config[config-tuning,Configuration and Tuning] has been expanded with new information about the ACPI power and resource management, the `cron` system utility, and more kernel tuning options.
* crossref:security[security,Security] has been expanded with new information about virtual private networks (VPNs), file system access control lists (ACLs), and security advisories.
* crossref:mac[mac,Mandatory Access Control] is a new chapter with this edition. It explains what MAC is and how this mechanism can be used to secure a FreeBSD system.
* crossref:disks[disks,Storage] has been expanded with new information about USB storage devices, file system snapshots, file system quotas, file and network backed filesystems, and encrypted disk partitions.
* A troubleshooting section has been added to crossref:ppp-and-slip[ppp-and-slip,PPP].
* crossref:mail[mail,Electronic Mail] has been expanded with new information about using alternative transport agents, SMTP authentication, UUCP, fetchmail, procmail, and other advanced topics.
* crossref:network-servers[network-servers,Network Servers] is all new with this edition. This chapter includes information about setting up the Apache HTTP Server, ftpd, and setting up a server for Microsoft(R) Windows(R) clients with Samba. Some sections from crossref:advanced-networking[advanced-networking,Advanced Networking] were moved here to improve the presentation.
* crossref:advanced-networking[advanced-networking,Advanced Networking] has been expanded with new information about using Bluetooth(R) devices with FreeBSD, setting up wireless networks, and Asynchronous Transfer Mode (ATM) networking.
* A glossary has been added to provide a central location for the definitions of technical terms used throughout the book.
* A number of aesthetic improvements have been made to the tables and figures throughout the book.
[[preface-changes]]
== Changes from the First Edition (2001)
The second edition was the culmination of over two years of work by the dedicated members of the FreeBSD Documentation Project. The following were the major changes in this edition:
* A complete Index has been added.
* All ASCII figures have been replaced by graphical diagrams.
* A standard synopsis has been added to each chapter to give a quick summary of what information the chapter contains, and what the reader is expected to know.
* The content has been logically reorganized into three parts: "Getting Started", "System Administration", and "Appendices".
* crossref:basics[basics,FreeBSD Basics] has been expanded to contain additional information about processes, daemons, and signals.
* crossref:ports[ports,Installing Applications: Packages and Ports] has been expanded to contain additional information about binary package management.
* crossref:x11[x11,The X Window System] has been completely rewritten with an emphasis on using modern desktop technologies such as KDE and GNOME on XFree86(TM) 4.X.
* crossref:boot[boot,The FreeBSD Booting Process] has been expanded.
* crossref:disks[disks,Storage] has been written from what used to be two separate chapters on "Disks" and "Backups". We feel that the topics are easier to comprehend when presented as a single chapter. A section on RAID (both hardware and software) has also been added.
* crossref:serialcomms[serialcomms,Serial Communications] has been completely reorganized and updated for FreeBSD 4.X/5.X.
* crossref:ppp-and-slip[ppp-and-slip,PPP] has been substantially updated.
* Many new sections have been added to crossref:advanced-networking[advanced-networking,Advanced Networking].
* crossref:mail[mail,Electronic Mail] has been expanded to include more information about configuring sendmail.
* crossref:linuxemu[linuxemu,Linux® Binary Compatibility] has been expanded to include information about installing Oracle(R) and SAP(R) R/3(R).
* The following new topics are covered in this second edition:
** crossref:config[config-tuning,Configuration and Tuning].
** crossref:multimedia[multimedia,Multimedia].
[[preface-overview]]
== Organization of This Book
This book is split into five logically distinct sections. The first section, _Getting Started_, covers the installation and basic usage of FreeBSD. It is expected that the reader will follow these chapters in sequence, possibly skipping chapters covering familiar topics. The second section, _Common Tasks_, covers some frequently used features of FreeBSD. This section, and all subsequent sections, can be read out of order. Each chapter begins with a succinct synopsis that describes what the chapter covers and what the reader is expected to already know. This is meant to allow the casual reader to skip around to find chapters of interest. The third section, _System Administration_, covers administration topics. The fourth section, _Network Communication_, covers networking and server topics. The fifth section contains appendices of reference information.
_crossref:introduction[introduction,Introduction]_::
Introduces FreeBSD to a new user. It describes the history of the FreeBSD Project, its goals and development model.
_crossref:bsdinstall[bsdinstall,Installing FreeBSD]_::
Walks a user through the entire installation process of FreeBSD 9._x_ and later using bsdinstall.
_crossref:basics[basics,FreeBSD Basics]_::
Covers the basic commands and functionality of the FreeBSD operating system. If you are familiar with Linux(R) or another flavor of UNIX(R) then you can probably skip this chapter.
_crossref:ports[ports,Installing Applications: Packages and Ports]_::
Covers the installation of third-party software with both FreeBSD's innovative "Ports Collection" and standard binary packages.
_crossref:x11[x11,The X Window System]_::
Describes the X Window System in general and using X11 on FreeBSD in particular. Also describes common desktop environments such as KDE and GNOME.
_crossref:desktop[desktop,Desktop Applications]_::
Lists some common desktop applications, such as web browsers and productivity suites, and describes how to install them on FreeBSD.
_crossref:multimedia[multimedia,Multimedia]_::
Shows how to set up sound and video playback support for your system. Also describes some sample audio and video applications.
_crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel]_::
Explains why you might need to configure a new kernel and provides detailed instructions for configuring, building, and installing a custom kernel.
_crossref:printing[printing,Printing]_::
Describes managing printers on FreeBSD, including information about banner pages, printer accounting, and initial setup.
_crossref:linuxemu[linuxemu,Linux® Binary Compatibility]_::
Describes the Linux(R) compatibility features of FreeBSD. Also provides detailed installation instructions for many popular Linux(R) applications such as Oracle(R) and Mathematica(R).
_crossref:config[config-tuning,Configuration and Tuning]_::
Describes the parameters available for system administrators to tune a FreeBSD system for optimum performance. Also describes the various configuration files used in FreeBSD and where to find them.
_crossref:boot[boot,The FreeBSD Booting Process]_::
Describes the FreeBSD boot process and explains how to control this process with configuration options.
_crossref:security[security,Security]_::
Describes many different tools available to help keep your FreeBSD system secure, including Kerberos, IPsec and OpenSSH.
_crossref:jails[jails,Jails]_::
Describes the jails framework, and the improvements of jails over the traditional chroot support of FreeBSD.
_crossref:mac[mac,Mandatory Access Control]_::
Explains what Mandatory Access Control (MAC) is and how this mechanism can be used to secure a FreeBSD system.
_crossref:audit[audit,Security Event Auditing]_::
Describes what FreeBSD Event Auditing is, how it can be installed, configured, and how audit trails can be inspected or monitored.
_crossref:disks[disks,Storage]_::
Describes how to manage storage media and filesystems with FreeBSD. This includes physical disks, RAID arrays, optical and tape media, memory-backed disks, and network filesystems.
_crossref:geom[geom,GEOM: Modular Disk Transformation Framework]_::
Describes what the GEOM framework in FreeBSD is and how to configure various supported RAID levels.
_crossref:filesystems[filesystems,Other File Systems]_::
Examines support of non-native file systems in FreeBSD, like the Z File System from Sun(TM).
_crossref:virtualization[virtualization,Virtualization]_::
Describes what virtualization systems offer, and how they can be used with FreeBSD.
_crossref:l10n[l10n,Localization - i18n/L10n Usage and Setup]_::
Describes how to use FreeBSD in languages other than English. Covers both system and application level localization.
_crossref:cutting-edge[updating-upgrading,Updating and Upgrading FreeBSD]_::
Explains the differences between FreeBSD-STABLE, FreeBSD-CURRENT, and FreeBSD releases. Describes which users would benefit from tracking a development system and outlines that process. Covers the methods users may take to update their system to the latest security release.
_crossref:dtrace[dtrace,DTrace]_::
Describes how to configure and use the DTrace tool from Sun(TM) in FreeBSD. Dynamic tracing can help locate performance issues, by performing real time system analysis.
_crossref:serialcomms[serialcomms,Serial Communications]_::
Explains how to connect terminals and modems to your FreeBSD system for both dial in and dial out connections.
_crossref:ppp-and-slip[ppp-and-slip,PPP]_::
Describes how to use PPP to connect to remote systems with FreeBSD.
_crossref:mail[mail,Electronic Mail]_::
Explains the different components of an email server and dives into simple configuration topics for the most popular mail server software: sendmail.
_crossref:network-servers[network-servers,Network Servers]_::
Provides detailed instructions and example configuration files to set up your FreeBSD machine as a network filesystem server, domain name server, network information system server, or time synchronization server.
_crossref:firewalls[firewalls,Firewalls]_::
Explains the philosophy behind software-based firewalls and provides detailed information about the configuration of the different firewalls available for FreeBSD.
_crossref:advanced-networking[advanced-networking,Advanced Networking]_::
Describes many networking topics, including sharing an Internet connection with other computers on your LAN, advanced routing topics, wireless networking, Bluetooth(R), ATM, IPv6, and much more.
_crossref:mirrors[mirrors,Obtaining FreeBSD]_::
Lists different sources for obtaining FreeBSD media on CDROM or DVD as well as different sites on the Internet that allow you to download and install FreeBSD.
_crossref:bibliography[bibliography,Bibliography]_::
This book touches on many different subjects that may leave you hungry for a more detailed explanation. The bibliography lists many excellent books that are referenced in the text.
_crossref:eresources[eresources,Resources on the Internet]_::
Describes the many forums available for FreeBSD users to post questions and engage in technical conversations about FreeBSD.
_crossref:pgpkeys[pgpkeys,OpenPGP Keys]_::
Lists the PGP fingerprints of several FreeBSD Developers.
[[preface-conv]]
== Conventions used in this book
To provide a consistent and easy to read text, several conventions are followed throughout the book.
[[preface-conv-typographic]]
=== Typographic Conventions
_Italic_::
An _italic_ font is used for filenames, URLs, emphasized text, and the first usage of technical terms.
`Monospace`::
A `monospaced` font is used for error messages, commands, environment variables, names of ports, hostnames, user names, group names, device names, variables, and code fragments.
Bold::
A *bold* font is used for applications, commands, and keys.
[[preface-conv-commands]]
=== User Input
Keys are shown in *bold* to stand out from other text. Key combinations that are meant to be typed simultaneously are shown with `+` between the keys, such as:
kbd:[Ctrl+Alt+Del]
Meaning the user should type the kbd:[Ctrl], kbd:[Alt], and kbd:[Del] keys at the same time.
Keys that are meant to be typed in sequence will be separated with commas, for example:
kbd:[Ctrl+X], kbd:[Ctrl+S]
Would mean that the user is expected to type the kbd:[Ctrl] and kbd:[X] keys simultaneously and then to type the kbd:[Ctrl] and kbd:[S] keys simultaneously.
[[preface-conv-examples]]
=== Examples
Examples starting with [.filename]#C:\># indicate an MS-DOS(R) command. Unless otherwise noted, these commands may be executed from a "Command Prompt" window in a modern Microsoft(R) Windows(R) environment.
[source,shell]
....
C:\> tools\fdimage floppies\kern.flp A:
....
Examples starting with # indicate a command that must be invoked as the superuser in FreeBSD. You can login as `root` to type the command, or login as your normal account and use man:su[1] to gain superuser privileges.
[source,shell]
....
# dd if=kern.flp of=/dev/fd0
....
Examples starting with % indicate a command that should be invoked from a normal user account. Unless otherwise noted, C-shell syntax is used for setting environment variables and other shell commands.
[source,shell]
....
% top
....
[[preface-acknowledgements]]
== Acknowledgments
The book you are holding represents the efforts of many hundreds of people around the world. Whether they sent in fixes for typos, or submitted complete chapters, all the contributions have been useful.
Several companies have supported the development of this document by paying authors to work on it full-time, paying for publication, etc. In particular, BSDi (subsequently acquired by http://www.windriver.com[Wind River Systems]) paid members of the FreeBSD Documentation Project to work on improving this book full time leading up to the publication of the first printed edition in March 2000 (ISBN 1-57176-241-8). Wind River Systems then paid several additional authors to make a number of improvements to the print-output infrastructure and to add additional chapters to the text. This work culminated in the publication of the second printed edition in November 2001 (ISBN 1-57176-303-1). In 2003-2004, http://www.freebsdmall.com[FreeBSD Mall, Inc], paid several contributors to improve the Handbook in preparation for the third printed edition.
diff --git a/documentation/content/en/books/handbook/printing/_index.adoc b/documentation/content/en/books/handbook/printing/_index.adoc
index d475e637ce..3f2cf4bb17 100644
--- a/documentation/content/en/books/handbook/printing/_index.adoc
+++ b/documentation/content/en/books/handbook/printing/_index.adoc
@@ -1,768 +1,769 @@
---
title: Chapter 9. Printing
part: Part II. Common Tasks
prev: books/handbook/kernelconfig
next: books/handbook/linuxemu
+description: This chapter covers the printing system in FreeBSD
---
[[printing]]
= Printing
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 9
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../../images/books/handbook/printing/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/books/handbook/printing/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/books/handbook/printing/
endif::[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
Putting information on paper is a vital function, despite many attempts to eliminate it. Printing has two basic components. The data must be delivered to the printer, and must be in a form that the printer can understand.
[[printing-quick-start]]
== Quick Start
Basic printing can be set up quickly. The printer must be capable of printing plain `ASCII` text. For printing to other types of files, see <<printing-lpd-filters>>.
[.procedure]
****
. Create a directory to store files while they are being printed:
+
[source,shell]
....
# mkdir -p /var/spool/lpd/lp
# chown daemon:daemon /var/spool/lpd/lp
# chmod 770 /var/spool/lpd/lp
....
+
. As `root`, create [.filename]#/etc/printcap# with these contents:
+
[.programlisting]
....
lp:\
:lp=/dev/unlpt0:\ <.>
:sh:\
:mx#0:\
:sd=/var/spool/lpd/lp:\
:lf=/var/log/lpd-errs:
....
+
<.> This line is for a printer connected to a `USB` port.
+
For a printer connected to a parallel or "printer" port, use:
+
[.programlisting]
....
:lp=/dev/lpt0:\
....
+
For a printer connected directly to a network, use:
+
[.programlisting]
....
:lp=:rm=network-printer-name:rp=raw:\
....
+
Replace _network-printer-name_ with the `DNS` host name of the network printer.
+
. Enable LPD by editing [.filename]#/etc/rc.conf#, adding this line:
+
[.programlisting]
....
lpd_enable="YES"
....
+
Start the service:
+
[source,shell]
....
# service lpd start
Starting lpd.
....
+
. Print a test:
+
[source,shell]
....
# printf "1. This printer can print.\n2. This is the second line.\n" | lpr
....
+
[TIP]
====
If both lines do not start at the left border, but "stairstep" instead, see <<printing-lpd-filters-stairstep>>.
====
+
Text files can now be printed with `lpr`. Give the filename on the command line, or pipe output directly into `lpr`.
+
[source,shell]
....
% lpr textfile.txt
% ls -lh | lpr
....
****
[[printing-connections]]
== Printer Connections
Printers are connected to computer systems in a variety of ways. Small desktop printers are usually connected directly to a computer's `USB` port. Older printers are connected to a parallel or "printer" port. Some printers are directly connected to a network, making it easy for multiple computers to share them. A few printers use a rare serial port connection.
FreeBSD can communicate with all of these types of printers.
[[printing-connections-usb]]
`USB`::
`USB` printers can be connected to any available `USB` port on the computer.
+
When FreeBSD detects a `USB` printer, two device entries are created: [.filename]#/dev/ulpt0# and [.filename]#/dev/unlpt0#. Data sent to either device will be relayed to the printer. After each print job, [.filename]#ulpt0# resets the `USB` port. Resetting the port can cause problems with some printers, so the [.filename]#unlpt0# device is usually used instead. [.filename]#unlpt0# does not reset the USB port at all.
[[printing-connections-parallel]]
Parallel (`IEEE`-1284)::
The parallel port device is [.filename]#/dev/lpt0#. This device appears whether or not a printer is attached; it is not autodetected.
+
Vendors have largely moved away from these "legacy" ports, and many computers no longer have them. Adapters can be used to connect a parallel printer to a `USB` port. With such an adapter, the printer can be treated as if it were actually a `USB` printer. Devices called _print servers_ can also be used to connect parallel printers directly to a network.
[[printing-connections-serial]]
Serial (RS-232)::
Serial ports are another legacy port, rarely used for printers except in certain niche applications. Cables, connectors, and required wiring vary widely.
+
For serial ports built into a motherboard, the serial device name is [.filename]#/dev/cuau0# or [.filename]#/dev/cuau1#. Serial `USB` adapters can also be used, and these will appear as [.filename]#/dev/cuaU0#.
+
Several communication parameters must be known to communicate with a serial printer. The most important are _baud rate_ or `BPS` (Bits Per Second) and _parity_. Values vary, but typical serial printers use a baud rate of 9600 and no parity.
[[printing-connections-network]]
Network::
Network printers are connected directly to the local computer network.
+
The `DNS` hostname of the printer must be known. If the printer is assigned a dynamic address by `DHCP`, `DNS` should be dynamically updated so that the host name always has the correct `IP` address. Network printers are often given static `IP` addresses to avoid this problem.
+
Most network printers understand print jobs sent with the LPD protocol. A print queue name can also be specified. Some printers process data differently depending on which queue is used. For example, a `raw` queue prints the data unchanged, while the `text` queue adds carriage returns to plain text.
+
Many network printers can also print data sent directly to port 9100.
[[printing-connections-summary]]
=== Summary
Wired network connections are usually the easiest to set up and give the fastest printing. For direct connection to the computer, `USB` is preferred for speed and simplicity. Parallel connections work but have limitations on cable length and speed. Serial connections are more difficult to configure. Cable wiring differs between models, and communication parameters like baud rate and parity add to the complexity. Fortunately, serial printers are rare.
[[printing-pdls]]
== Common Page Description Languages
Data sent to a printer must be in a language that the printer can understand. These languages are called Page Description Languages, or PDLs.
[[print-pdls-ascii]]
`ASCII`::
Plain `ASCII` text is the simplest way to send data to a printer. Characters correspond one to one with what will be printed: an `A` in the data prints an `A` on the page. Very little formatting is available. There is no way to select a font or proportional spacing. The forced simplicity of plain `ASCII` means that text can be printed straight from the computer with little or no encoding or translation. The printed output corresponds directly with what was sent.
+
Some inexpensive printers cannot print plain `ASCII` text. This makes them more difficult to set up, but it is usually still possible.
[[print-pdls-postscript]]
PostScript(R)::
PostScript(R) is almost the opposite of `ASCII`. Rather than simple text, a PostScript(R) program is a set of instructions that draw the final document. Different fonts and graphics can be used. However, this power comes at a price. The program that draws the page must be written. Usually this program is generated by application software, so the process is invisible to the user.
+
Inexpensive printers sometimes leave out PostScript(R) compatibility as a cost-saving measure.
[[print-pdls-pcl]]
`PCL` (Printer Command Language)::
`PCL` is an extension of `ASCII`, adding escape sequences for formatting, font selection, and printing graphics. Many printers provide `PCL5` support. Some support the newer `PCL6` or `PCLXL`. These later versions are supersets of `PCL5` and can provide faster printing.
[[print-pdls-host-based]]
Host-Based::
Manufacturers can reduce the cost of a printer by giving it a simple processor and very little memory. These printers are not capable of printing plain text. Instead, bitmaps of text and graphics are drawn by a driver on the host computer and then sent to the printer. These are called _host-based_ printers.
+
Communication between the driver and a host-based printer is often through proprietary or undocumented protocols, making them functional only on the most common operating systems.
[[print-pdls-table]]
=== Converting PostScript(R) to Other PDLs
Many applications from the Ports Collection and FreeBSD utilities produce PostScript(R) output. This table shows the utilities available to convert that into other common PDLs:
[[print-pdls-ps-to-other-tbl]]
.Output PDLs
[cols="1,1,1", frame="none", options="header"]
|===
<| Output PDL
<| Generated By
<| Notes
|`PCL` or `PCL5`
|package:print/ghostscript9-base[]
|`-sDEVICE=ljet4` for monochrome, `-sDEVICE=cljet5` for color
|`PCLXL` or `PCL6`
|package:print/ghostscript9-base[]
|`-sDEVICE=pxlmono` for monochrome, `-sDEVICE=pxlcolor` for color
|`ESC/P2`
|package:print/ghostscript9-base[]
|`-sDEVICE=uniprint`
|`XQX`
|package:print/foo2zjs[]
|
|===
[[print-pdls-summary]]
=== Summary
For the easiest printing, choose a printer that supports PostScript(R). Printers that support `PCL` are the next preferred. With package:print/ghostscript9-base[], these printers can be used as if they understood PostScript(R) natively. Printers that support PostScript(R) or `PCL` directly almost always support direct printing of plain `ASCII` text files also.
Line-based printers like typical inkjets usually do not support PostScript(R) or `PCL`. They often can print plain `ASCII` text files. package:print/ghostscript9-base[] supports the PDLs used by some of these printers. However, printing an entire graphic-based page on these printers is often very slow due to the large amount of data to be transferred and printed.
Host-based printers are often more difficult to set up. Some cannot be used at all because of proprietary PDLs. Avoid these printers when possible.
Descriptions of many PDLs can be found at http://www.undocprint.org/formats/page_description_languages[]. The particular `PDL` used by various models of printers can be found at http://www.openprinting.org/printers[].
[[printing-direct]]
== Direct Printing
For occasional printing, files can be sent directly to a printer device without any setup. For example, a file called [.filename]#sample.txt# can be sent to a `USB` printer:
[source,shell]
....
# cp sample.txt /dev/unlpt0
....
Direct printing to network printers depends on the abilities of the printer, but most accept print jobs on port 9100, and man:nc[1] can be used with them. To print the same file to a printer with the `DNS` hostname of _netlaser_:
[source,shell]
....
# nc netlaser 9100 < sample.txt
....
[[printing-lpd]]
== LPD (Line Printer Daemon)
Printing a file in the background is called _spooling_. A spooler allows the user to continue with other programs on the computer without waiting for the printer to slowly complete the print job.
FreeBSD includes a spooler called man:lpd[8]. Print jobs are submitted with man:lpr[1].
[[printing-lpd-setup]]
=== Initial Setup
A directory for storing print jobs is created, ownership is set, and the permissions are set to prevent other users from viewing the contents of those files:
[source,shell]
....
# mkdir -p /var/spool/lpd/lp
# chown daemon:daemon /var/spool/lpd/lp
# chmod 770 /var/spool/lpd/lp
....
Printers are defined in [.filename]#/etc/printcap#. An entry for each printer includes details like a name, the port where it is attached, and various other settings. Create [.filename]#/etc/printcap# with these contents:
[.programlisting]
....
lp:\ <.>
:lp=/dev/unlpt0:\ <.>
:sh:\ <.>
:mx#0:\ <.>
:sd=/var/spool/lpd/lp:\ <.>
:lf=/var/log/lpd-errs: <.>
....
<.> The name of this printer. man:lpr[1] sends print jobs to the `lp` printer unless another printer is specified with `-P`, so the default printer should be named `lp`.
<.> The device where the printer is connected. Replace this line with the appropriate one for the connection type shown here.
<.> Suppress the printing of a header page at the start of a print job.
<.> Do not limit the maximum size of a print job.
<.> The path to the spooling directory for this printer. Each printer uses its own spooling directory.
<.> The log file where errors on this printer will be reported.
After creating [.filename]#/etc/printcap#, use man:chkprintcap[8] to test it for errors:
[source,shell]
....
# chkprintcap
....
Fix any reported problems before continuing.
Enable man:lpd[8] in [.filename]#/etc/rc.conf#:
[.programlisting]
....
lpd_enable="YES"
....
Start the service:
[source,shell]
....
# service lpd start
....
[[printing-lpd-lpr]]
=== Printing with man:lpr[1]
Documents are sent to the printer with `lpr`. A file to be printed can be named on the command line or piped into `lpr`. These two commands are equivalent, sending the contents of [.filename]#doc.txt# to the default printer:
[source,shell]
....
% lpr doc.txt
% cat doc.txt | lpr
....
Printers can be selected with `-P`. To print to a printer called _laser_:
[source,shell]
....
% lpr -Plaser doc.txt
....
[[printing-lpd-filters]]
=== Filters
The examples shown so far have sent the contents of a text file directly to the printer. As long as the printer understands the content of those files, output will be printed correctly.
Some printers are not capable of printing plain text, and the input file might not even be plain text.
_Filters_ allow files to be translated or processed. The typical use is to translate one type of input, like plain text, into a form that the printer can understand, like PostScript(R) or `PCL`. Filters can also be used to provide additional features, like adding page numbers or highlighting source code to make it easier to read.
The filters discussed here are _input filters_ or _text filters_. These filters convert the incoming file into different forms. Use man:su[1] to become `root` before creating the files.
Filters are specified in [.filename]#/etc/printcap# with the `if=` identifier. To use [.filename]#/usr/local/libexec/lf2crlf# as a filter, modify [.filename]#/etc/printcap# like this:
[.programlisting]
....
lp:\
:lp=/dev/unlpt0:\
:sh:\
:mx#0:\
:sd=/var/spool/lpd/lp:\
:if=/usr/local/libexec/lf2crlf:\ <.>
:lf=/var/log/lpd-errs:
....
<.> `if=` identifies the _input filter_ that will be used on incoming text.
[TIP]
====
The backslash _line continuation_ characters at the end of the lines in [.filename]#printcap# entries reveal that an entry for a printer is really just one long line with entries delimited by colon characters. An earlier example can be rewritten as a single less-readable line:
[.programlisting]
....
lp:lp=/dev/unlpt0:sh:mx#0:sd=/var/spool/lpd/lp:if=/usr/local/libexec/lf2crlf:lf=/var/log/lpd-errs:
....
====
[[printing-lpd-filters-stairstep]]
==== Preventing Stairstepping on Plain Text Printers
Typical FreeBSD text files contain only a single line feed character at the end of each line. These lines will "stairstep" on a standard printer:
[.programlisting]
....
A printed file looks
like the steps of a staircase
scattered by the wind
....
A filter can convert the newline characters into carriage returns and newlines. The carriage returns make the printer return to the left after each line. Create [.filename]#/usr/local/libexec/lf2crlf# with these contents:
[.programlisting]
....
#!/bin/sh
CR=$'\r'
/usr/bin/sed -e "s/$/${CR}/g"
....
Set the permissions and make it executable:
[source,shell]
....
# chmod 555 /usr/local/libexec/lf2crlf
....
Modify [.filename]#/etc/printcap# to use the new filter:
[.programlisting]
....
:if=/usr/local/libexec/lf2crlf:\
....
Test the filter by printing the same plain text file. The carriage returns will cause each line to start at the left side of the page.
[[printing-lpd-filters-enscript]]
==== Fancy Plain Text on PostScript(R) Printers with package:print/enscript[]
GNU Enscript converts plain text files into nicely-formatted PostScript(R) for printing on PostScript(R) printers. It adds page numbers, wraps long lines, and provides numerous other features to make printed text files easier to read. Depending on the local paper size, install either package:print/enscript-letter[] or package:print/enscript-a4[] from the Ports Collection.
Create [.filename]#/usr/local/libexec/enscript# with these contents:
[.programlisting]
....
#!/bin/sh
/usr/local/bin/enscript -o -
....
Set the permissions and make it executable:
[source,shell]
....
# chmod 555 /usr/local/libexec/enscript
....
Modify [.filename]#/etc/printcap# to use the new filter:
[.programlisting]
....
:if=/usr/local/libexec/enscript:\
....
Test the filter by printing a plain text file.
[[printing-lpd-filters-ps2pcl]]
==== Printing PostScript(R) to `PCL` Printers
Many programs produce PostScript(R) documents. However, inexpensive printers often only understand plain text or `PCL`. This filter converts PostScript(R) files to `PCL` before sending them to the printer.
Install the Ghostscript PostScript(R) interpreter, package:print/ghostscript9-base[], from the Ports Collection.
Create [.filename]#/usr/local/libexec/ps2pcl# with these contents:
[.programlisting]
....
#!/bin/sh
/usr/local/bin/gs -dSAFER -dNOPAUSE -dBATCH -q -sDEVICE=ljet4 -sOutputFile=- -
....
Set the permissions and make it executable:
[source,shell]
....
# chmod 555 /usr/local/libexec/ps2pcl
....
PostScript(R) input sent to this script will be rendered and converted to `PCL` before being sent on to the printer.
Modify [.filename]#/etc/printcap# to use this new input filter:
[.programlisting]
....
:if=/usr/local/libexec/ps2pcl:\
....
Test the filter by sending a small PostScript(R) program to it:
[source,shell]
....
% printf "%%\!PS \n /Helvetica findfont 18 scalefont setfont \
72 432 moveto (PostScript printing successful.) show showpage \004" | lpr
....
[[printing-lpd-filters-smart]]
==== Smart Filters
A filter that detects the type of input and automatically converts it to the correct format for the printer can be very convenient. The first two characters of a PostScript(R) file are usually `%!`. A filter can detect those two characters. PostScript(R) files can be sent on to a PostScript(R) printer unchanged. Text files can be converted to PostScript(R) with Enscript as shown earlier. Create [.filename]#/usr/local/libexec/psif# with these contents:
[.programlisting]
....
#!/bin/sh
#
# psif - Print PostScript or plain text on a PostScript printer
#
IFS="" read -r first_line
first_two_chars=`expr "$first_line" : '\(..\)'`
case "$first_two_chars" in
%!)
# %! : PostScript job, print it.
echo "$first_line" && cat && exit 0
exit 2
;;
*)
# otherwise, format with enscript
( echo "$first_line"; cat ) | /usr/local/bin/enscript -o - && exit 0
exit 2
;;
esac
....
Set the permissions and make it executable:
[source,shell]
....
# chmod 555 /usr/local/libexec/psif
....
Modify [.filename]#/etc/printcap# to use this new input filter:
[.programlisting]
....
:if=/usr/local/libexec/psif:\
....
Test the filter by printing PostScript(R) and plain text files.
[[printing-lpd-filters-othersmart]]
==== Other Smart Filters
Writing a filter that detects many different types of input and formats them correctly is challenging. package:print/apsfilter[] from the Ports Collection is a smart "magic" filter that detects dozens of file types and automatically converts them to the `PDL` understood by the printer. See http://www.apsfilter.org[] for more details.
[[printing-lpd-queues]]
=== Multiple Queues
The entries in [.filename]#/etc/printcap# are really definitions of _queues_. There can be more than one queue for a single printer. When combined with filters, multiple queues provide users more control over how their jobs are printed.
As an example, consider a networked PostScript(R) laser printer in an office. Most users want to print plain text, but a few advanced users want to be able to print PostScript(R) files directly. Two entries can be created for the same printer in [.filename]#/etc/printcap#:
[.programlisting]
....
textprinter:\
:lp=9100@officelaser:\
:sh:\
:mx#0:\
:sd=/var/spool/lpd/textprinter:\
:if=/usr/local/libexec/enscript:\
:lf=/var/log/lpd-errs:
psprinter:\
:lp=9100@officelaser:\
:sh:\
:mx#0:\
:sd=/var/spool/lpd/psprinter:\
:lf=/var/log/lpd-errs:
....
Documents sent to `textprinter` will be formatted by the [.filename]#/usr/local/libexec/enscript# filter shown in an earlier example. Advanced users can print PostScript(R) files on `psprinter`, where no filtering is done.
This multiple queue technique can be used to provide direct access to all kinds of printer features. A printer with a duplexer could use two queues, one for ordinary single-sided printing, and one with a filter that sends the command sequence to enable double-sided printing and then sends the incoming file.
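As a sketch of such a filter, assuming a printer that accepts the common `PCL` duplex escape sequence (the exact command sequence varies by printer model and is an assumption here), the double-sided queue's filter could simply prepend the command and pass the job through:
[.programlisting]
....
#!/bin/sh
# Hypothetical duplex filter: send the PCL sequence that enables
# long-edge double-sided printing, then pass the job through unchanged.
printf "\033&l1S"
cat
....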
[[printing-lpd-monitor]]
=== Monitoring and Controlling Printing
Several utilities are available to monitor print jobs and check and control printer operation.
[[printing-lpd-monitor-lpq]]
==== man:lpq[1]
man:lpq[1] shows the status of a user's print jobs. Print jobs from other users are not shown.
Show the current user's pending jobs on a single printer:
[source,shell]
....
% lpq -Plp
Rank Owner Job Files Total Size
1st jsmith 0 (standard input) 12792 bytes
....
Show the current user's pending jobs on all printers:
[source,shell]
....
% lpq -a
lp:
Rank Owner Job Files Total Size
1st jsmith 1 (standard input) 27320 bytes
laser:
Rank Owner Job Files Total Size
1st jsmith 287 (standard input) 22443 bytes
....
[[printing-lpd-monitor-lprm]]
==== man:lprm[1]
man:lprm[1] is used to remove print jobs. Normal users are only allowed to remove their own jobs. `root` can remove any or all jobs.
Remove all pending jobs from a printer:
[source,shell]
....
# lprm -Plp -
dfA002smithy dequeued
cfA002smithy dequeued
dfA003smithy dequeued
cfA003smithy dequeued
dfA004smithy dequeued
cfA004smithy dequeued
....
Remove a single job from a printer. man:lpq[1] is used to find the job number.
[source,shell]
....
% lpq
Rank Owner Job Files Total Size
1st jsmith 5 (standard input) 12188 bytes
% lprm -Plp 5
dfA005smithy dequeued
cfA005smithy dequeued
....
[[printing-lpd-monitor-lpc]]
==== man:lpc[8]
man:lpc[8] is used to check and modify printer status. `lpc` is followed by a command and an optional printer name. `all` can be used instead of a specific printer name, and the command will be applied to all printers. Normal users can view status with man:lpc[8]. Only `root` can use commands which modify printer status.
Show the status of all printers:
[source,shell]
....
% lpc status all
lp:
queuing is enabled
printing is enabled
1 entry in spool area
printer idle
laser:
queuing is enabled
printing is enabled
1 entry in spool area
waiting for laser to come up
....
Prevent a printer from accepting new jobs, then begin accepting new jobs again:
[source,shell]
....
# lpc disable lp
lp:
queuing disabled
# lpc enable lp
lp:
queuing enabled
....
Stop printing, but continue to accept new jobs. Then begin printing again:
[source,shell]
....
# lpc stop lp
lp:
printing disabled
# lpc start lp
lp:
printing enabled
daemon started
....
Restart a printer after some error condition:
[source,shell]
....
# lpc restart lp
lp:
no daemon to abort
printing enabled
daemon restarted
....
Turn the print queue off and disable printing, with a message to explain the problem to users:
[source,shell]
....
# lpc down lp Repair parts will arrive on Monday
lp:
printer and queuing disabled
status message is now: Repair parts will arrive on Monday
....
Re-enable a printer that is down:
[source,shell]
....
# lpc up lp
lp:
printing enabled
daemon started
....
See man:lpc[8] for more commands and options.
[[printing-lpd-shared]]
=== Shared Printers
Printers are often shared by multiple users in businesses and schools. Additional features are provided to make sharing printers more convenient.
[[printing-shared-aliases]]
==== Aliases
The printer name is set in the first line of the entry in [.filename]#/etc/printcap#. Additional names, or _aliases_, can be added after that name. Aliases are separated from the name and each other by vertical bars:
[.programlisting]
....
lp|repairsprinter|salesprinter:\
....
Aliases can be used in place of the printer name. For example, users in the Sales department print to their printer with
[source,shell]
....
% lpr -Psalesprinter sales-report.txt
....
Users in the Repairs department print to _their_ printer with
[source,shell]
....
% lpr -Prepairsprinter repairs-report.txt
....
All of the documents print on that single printer. When the Sales department grows enough to need their own printer, the alias can be removed from the shared printer entry and used as the name of a new printer. Users in both departments continue to use the same commands, but the Sales documents are sent to the new printer.
[[printing-shared-headers]]
==== Header Pages
It can be difficult for users to locate their documents in the stack of pages produced by a busy shared printer. _Header pages_ were created to solve this problem. A header page with the user name and document name is printed before each print job. These pages are also sometimes called _banner_ or _separator_ pages.
Enabling header pages differs depending on whether the printer is connected directly to the computer with a `USB`, parallel, or serial cable, or is connected remotely over a network.
Header pages on directly-connected printers are enabled by removing the `:sh:\` (Suppress Header) line from the entry in [.filename]#/etc/printcap#. These header pages only use line feed characters for new lines. Some printers will need the [.filename]#/usr/share/examples/printing/hpif# filter to prevent stairstepped text. The filter configures `PCL` printers to print both carriage returns and line feeds when a line feed is received.
Header pages for network printers must be configured on the printer itself. Header page entries in [.filename]#/etc/printcap# are ignored. Settings are usually available from the printer front panel or a configuration web page accessible with a web browser.
[[printing-lpd-references]]
=== References
Example files: [.filename]#/usr/share/examples/printing/#.
The _4.3BSD Line Printer Spooler Manual_, [.filename]#/usr/share/doc/smm/07.lpd/paper.ascii.gz#.
Manual pages: man:printcap[5], man:lpd[8], man:lpr[1], man:lpc[8], man:lprm[1], man:lpq[1].
[[printing-other]]
== Other Printing Systems
Several other printing systems are available in addition to the built-in man:lpd[8]. These systems offer support for other protocols or additional features.
[[printing-other-cups]]
=== CUPS (Common UNIX(R) Printing System)
CUPS is a popular printing system available on many operating systems. Using CUPS on FreeBSD is documented in a separate article: link:{cups}[CUPS].
[[printing-other-hplip]]
=== HPLIP
Hewlett Packard provides a printing system that supports many of their inkjet and laser printers. The port is package:print/hplip[]. The main web page is at http://hplipopensource.com/hplip-web/index.html[]. The port handles all the installation details on FreeBSD. Configuration information is shown at http://hplipopensource.com/hplip-web/install/manual/hp_setup.html[].
[[printing-other-lprng]]
=== LPRng
LPRng was developed as an enhanced alternative to man:lpd[8]. The port is package:sysutils/LPRng[]. For details and documentation, see http://www.lprng.com/[].
diff --git a/documentation/content/en/books/handbook/security/_index.adoc b/documentation/content/en/books/handbook/security/_index.adoc
index 8c194efdac..461f532239 100644
--- a/documentation/content/en/books/handbook/security/_index.adoc
+++ b/documentation/content/en/books/handbook/security/_index.adoc
@@ -1,2152 +1,2153 @@
---
title: Chapter 14. Security
part: Part III. System Administration
prev: books/handbook/boot
next: books/handbook/jails
+description: Hundreds of standard practices have been authored about how to secure systems and networks, and as a user of FreeBSD, understanding how to protect against attacks and intruders is a must
---
[[security]]
= Security
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 14
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../../images/books/handbook/security/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/books/handbook/security/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/books/handbook/security/
endif::[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[security-synopsis]]
== Synopsis
Security, whether physical or virtual, is a topic so broad that an entire industry has evolved around it. Hundreds of standard practices have been authored about how to secure systems and networks, and as a user of FreeBSD, understanding how to protect against attacks and intruders is a must.
In this chapter, several fundamentals and techniques will be discussed. The FreeBSD system comes with multiple layers of security, and many more third party utilities may be added to enhance security.
After reading this chapter, you will know:
* Basic FreeBSD system security concepts.
* The various crypt mechanisms available in FreeBSD.
* How to set up one-time password authentication.
* How to configure TCP Wrapper for use with man:inetd[8].
* How to set up Kerberos on FreeBSD.
* How to configure IPsec and create a VPN.
* How to configure and use OpenSSH on FreeBSD.
* How to use file system ACLs.
* How to use pkg to audit third party software packages installed from the Ports Collection.
* How to utilize FreeBSD security advisories.
* What Process Accounting is and how to enable it on FreeBSD.
* How to control user resources using login classes or the resource limits database.
Before reading this chapter, you should:
* Understand basic FreeBSD and Internet concepts.
Additional security topics are covered elsewhere in this Handbook. For example, Mandatory Access Control is discussed in crossref:mac[mac,Mandatory Access Control] and Internet firewalls are discussed in crossref:firewalls[firewalls,Firewalls].
[[security-intro]]
== Introduction
Security is everyone's responsibility. A weak entry point in any system could allow intruders to gain access to critical information and cause havoc on an entire network. One of the core principles of information security is the CIA triad, which stands for the Confidentiality, Integrity, and Availability of information systems.
The CIA triad is a bedrock concept of computer security as customers and users expect their data to be protected. For example, a customer expects that their credit card information is securely stored (confidentiality), that their orders are not changed behind the scenes (integrity), and that they have access to their order information at all times (availability).
To provide CIA, security professionals apply a defense in depth strategy. The idea of defense in depth is to add several layers of security to prevent one single layer failing and the entire security system collapsing. For example, a system administrator cannot simply turn on a firewall and consider the network or system secure. One must also audit accounts, check the integrity of binaries, and ensure malicious tools are not installed. To implement an effective security strategy, one must understand threats and how to defend against them.
What is a threat as it pertains to computer security? Threats are not limited to remote attackers who attempt to access a system without permission from a remote location. Threats also include employees, malicious software, unauthorized network devices, natural disasters, security vulnerabilities, and even competing corporations.
Systems and networks can be accessed without permission, sometimes by accident, or by remote attackers, and in some cases, via corporate espionage or former employees. As a user, it is important to prepare for and admit when a mistake has led to a security breach and report possible issues to the security team. As an administrator, it is important to know of the threats and be prepared to mitigate them.
When applying security to systems, it is recommended to start by securing the basic accounts and system configuration, and then to secure the network layer so that it adheres to the system policy and the organization's security procedures. Many organizations already have a security policy that covers the configuration of technology devices. The policy should include the security configuration of workstations, desktops, mobile devices, phones, production servers, and development servers. In many cases, standard operating procedures (SOPs) already exist. When in doubt, ask the security team.
The rest of this introduction describes how some of these basic security configurations are performed on a FreeBSD system. The rest of this chapter describes some specific tools which can be used when implementing a security policy on a FreeBSD system.
[[security-accounts]]
=== Preventing Logins
In securing a system, a good starting point is an audit of accounts. Ensure that `root` has a strong password and that this password is not shared. Disable any accounts that do not need login access.
To deny login access to accounts, two methods exist. The first is to lock the account. This example locks the `toor` account:
[source,shell]
....
# pw lock toor
....
The second method is to prevent login access by changing the shell to [.filename]#/usr/sbin/nologin#. Only the superuser can change the shell for other users:
[source,shell]
....
# chsh -s /usr/sbin/nologin toor
....
The [.filename]#/usr/sbin/nologin# shell prevents the system from assigning a shell to the user when they attempt to log in.
[[security-accountmgmt]]
=== Permitted Account Escalation
In some cases, system administration needs to be shared with other users. FreeBSD has two methods to handle this. The first one, which is not recommended, is a shared root password used by members of the `wheel` group. With this method, a user types `su` and enters the password for `wheel` whenever superuser access is needed. The user should then type `exit` to leave privileged access after finishing the commands that required administrative access. To add a user to this group, edit [.filename]#/etc/group# and add the user to the end of the `wheel` entry. The user name must be separated from the previous entry by a comma, with no space.
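A minimal illustration of this workflow, where the changing prompt shows the shift to and from superuser privileges:
[source,shell]
....
% su -
Password:
# whoami
root
# exit
%
....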
The second, and recommended, method to permit privilege escalation is to install the package:security/sudo[] package or port. This software provides additional auditing, more fine-grained user control, and can be configured to lock users into running only the specified privileged commands.
After installation, use `visudo` to edit [.filename]#/usr/local/etc/sudoers#. This example creates a new `webadmin` group, adds the `trhodes` account to that group, and configures that group access to restart package:apache24[]:
[source,shell]
....
# pw groupadd webadmin -M trhodes -g 6000
# visudo
%webadmin ALL=(ALL) /usr/sbin/service apache24 *
....
[[security-passwords]]
=== Password Hashes
Passwords are a necessary evil of technology. When they must be used, they should be complex and a powerful hash mechanism should be used to encrypt the version that is stored in the password database. FreeBSD supports the DES, MD5, SHA256, SHA512, and Blowfish hash algorithms in its `crypt()` library. The default of SHA512 should not be changed to a less secure hashing algorithm, but can be changed to the more secure Blowfish algorithm.
[NOTE]
====
Blowfish is not part of AES and is not considered compliant with any Federal Information Processing Standards (FIPS). Its use may not be permitted in some environments.
====
To determine which hash algorithm is used to encrypt a user's password, the superuser can view the hash for the user in the FreeBSD password database. Each hash starts with a symbol which indicates the type of hash mechanism used to encrypt the password. If DES is used, there is no beginning symbol. For MD5, the symbol is `$1$`. For SHA256, the symbol is `$5$`, and for SHA512, it is `$6$`. For Blowfish, the symbol is `$2a$`. In this example, the password for `dru` is hashed using the default SHA512 algorithm as the hash starts with `$6$`. Note that the encrypted hash, not the password itself, is stored in the password database:
[source,shell]
....
# grep dru /etc/master.passwd
dru:$6$pzIjSvCAn.PBYQBA$PXpSeWPx3g5kscj3IMiM7tUEUSPmGexxta.8Lt9TGSi2lNQqYGKszsBPuGME0:1001:1001::0:0:dru:/usr/home/dru:/bin/csh
....
The hash mechanism is set in the user's login class. For this example, the user is in the `default` login class and the hash algorithm is set with this line in [.filename]#/etc/login.conf#:
[.programlisting]
....
:passwd_format=sha512:\
....
To change the algorithm to Blowfish, modify that line to look like this:
[.programlisting]
....
:passwd_format=blf:\
....
Then run `cap_mkdb /etc/login.conf` as described in <<users-limiting>>. Note that this change will not affect any existing password hashes. This means that all passwords should be re-hashed by asking users to run `passwd` in order to change their password.
For remote logins, two-factor authentication should be used. An example of two-factor authentication is "something you have", such as a key, and "something you know", such as the passphrase for that key. Since OpenSSH is part of the FreeBSD base system, all network logins should be over an encrypted connection and use key-based authentication instead of passwords. For more information, refer to <<openssh>>. Kerberos users may need to make additional changes to implement OpenSSH in their network. These changes are described in <<kerberos5>>.
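As an illustrative sketch only (the user and host names are assumptions, and <<openssh>> covers this in detail), a key pair can be generated on the client and its public half appended to the server account's [.filename]#authorized_keys#; once key logins work, password authentication can be refused by setting `PasswordAuthentication no` in the server's [.filename]#/etc/ssh/sshd_config#:
[source,shell]
....
% ssh-keygen -t ed25519
% ssh user@server.example.org 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys' < ~/.ssh/id_ed25519.pub
....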
[[security-pwpolicy]]
=== Password Policy Enforcement
Enforcing a strong password policy for local accounts is a fundamental aspect of system security. In FreeBSD, password length, password strength, and password complexity can be implemented using built-in Pluggable Authentication Modules (PAM).
This section demonstrates how to configure the minimum and maximum password length and the enforcement of mixed characters using the [.filename]#pam_passwdqc.so# module. This module is enforced when a user changes their password.
To configure this module, become the superuser and uncomment the line containing `pam_passwdqc.so` in [.filename]#/etc/pam.d/passwd#. Then, edit that line to match the password policy:
[.programlisting]
....
password requisite pam_passwdqc.so min=disabled,disabled,disabled,12,10 similar=deny retry=3 enforce=users
....
This example sets several requirements for new passwords. The `min` setting controls the minimum password length. It has five values because this module defines five different types of passwords based on their complexity. Complexity is defined by the type of characters that must exist in a password, such as letters, numbers, symbols, and case. The types of passwords are described in man:pam_passwdqc[8]. In this example, the first three types of passwords are disabled, meaning that passwords that meet those complexity requirements will not be accepted, regardless of their length. The `12` sets a minimum password policy of at least twelve characters, if the password also contains characters with three types of complexity. The `10` sets the password policy to also allow passwords of at least ten characters, if the password contains characters with four types of complexity.
The `similar` setting denies passwords that are similar to the user's previous password. The `retry` setting provides a user with three opportunities to enter a new password.
Once this file is saved, a user changing their password will see a message similar to the following:
[source,shell]
....
% passwd
Changing local password for trhodes
Old Password:
You can now choose the new password.
A valid password should be a mix of upper and lower case letters,
digits and other characters. You can use a 12 character long
password with characters from at least 3 of these 4 classes, or
a 10 character long password containing characters from all the
classes. Characters that form a common pattern are discarded by
the check.
Alternatively, if no one else can see your terminal now, you can
pick this as your password: "trait-useful&knob".
Enter new password:
....
If a password that does not match the policy is entered, it will be rejected with a warning and the user will have an opportunity to try again, up to the configured number of retries.
Most password policies require passwords to expire after so many days. To set a password age time in FreeBSD, set `passwordtime` for the user's login class in [.filename]#/etc/login.conf#. The `default` login class contains an example:
[.programlisting]
....
# :passwordtime=90d:\
....
So, to set an expiry of 90 days for this login class, remove the comment symbol (`#`), save the edit, and run `cap_mkdb /etc/login.conf`.
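For example, with the comment removed the line reads `:passwordtime=90d:\`, and the login capability database is then rebuilt with:
[source,shell]
....
# cap_mkdb /etc/login.conf
....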
To set the expiration on individual users, pass an expiration date or the number of days to expiry and a username to `pw`:
[source,shell]
....
# pw usermod -p 30-apr-2015 -n trhodes
....
As seen here, an expiration date is set in the form of day, month, and year. For more information, see man:pw[8].
[[security-rkhunter]]
=== Detecting Rootkits
A _rootkit_ is any unauthorized software that attempts to gain `root` access to a system. Once installed, this malicious software will normally open up another avenue of entry for an attacker. Realistically, once a system has been compromised by a rootkit and an investigation has been performed, the system should be reinstalled from scratch. There is tremendous risk that even the most prudent security or systems engineer will miss something an attacker left behind.
A rootkit does do one thing useful for administrators: once detected, it is a sign that a compromise happened at some point. But, these types of applications tend to be very well hidden. This section demonstrates a tool that can be used to detect rootkits, package:security/rkhunter[].
After installation of this package or port, the system may be checked using the following command. It will produce a lot of information and will require some manual pressing of kbd:[ENTER]:
[source,shell]
....
# rkhunter -c
....
After the process completes, a status message will be printed to the screen. This message will include the number of files checked, suspect files, possible rootkits, and more. During the check, some generic security warnings may be produced about hidden files, the OpenSSH protocol selection, and known vulnerable versions of installed software. These can be handled now or after a more detailed analysis has been performed.
Every administrator should know what is running on the systems they are responsible for. Third-party tools like rkhunter and package:sysutils/lsof[], and native commands such as `netstat` and `ps`, can show a great deal of information on the system. Take notes on what is normal, ask questions when something seems out of place, and be paranoid. While preventing a compromise is ideal, detecting a compromise is a must.
[[security-ids]]
=== Binary Verification
Verification of system files and binaries is important because it provides the system administration and security teams with information about system changes. A software application that monitors the system for changes is called an Intrusion Detection System (IDS).
FreeBSD provides native support for a basic IDS system. While the nightly security emails will notify an administrator of changes, the information is stored locally and there is a chance that a malicious user could modify this information in order to hide their changes to the system. As such, it is recommended to create a separate set of binary signatures and store them on a read-only, root-owned directory or, preferably, on a removable USB disk or remote rsync server.
The built-in `mtree` utility can be used to generate a specification of the contents of a directory. A seed, or a numeric constant, is used to generate the specification and is required to check that the specification has not changed. This makes it possible to determine if a file or binary has been modified. Since the seed value is unknown to an attacker, faking the checksum values of files will be difficult or impossible. The following example generates a set of SHA256 hashes, one for each system binary in [.filename]#/bin#, and saves those values to a hidden file in ``root``'s home directory, [.filename]#/root/.bin_chksum_mtree#:
[source,shell]
....
# mtree -s 3483151339707503 -c -K cksum,sha256digest -p /bin > /root/.bin_chksum_mtree
# mtree: /bin checksum: 3427012225
....
The _3483151339707503_ represents the seed. This value should be remembered, but not shared.
Viewing [.filename]#/root/.bin_chksum_mtree# should yield output similar to the following:
[.programlisting]
....
# user: root
# machine: dreadnaught
# tree: /bin
# date: Mon Feb 3 10:19:53 2014
# .
/set type=file uid=0 gid=0 mode=0555 nlink=1 flags=none
. type=dir mode=0755 nlink=2 size=1024 \
time=1380277977.000000000
\133 nlink=2 size=11704 time=1380277977.000000000 \
cksum=484492447 \
sha256digest=6207490fbdb5ed1904441fbfa941279055c3e24d3a4049aeb45094596400662a
cat size=12096 time=1380277975.000000000 cksum=3909216944 \
sha256digest=65ea347b9418760b247ab10244f47a7ca2a569c9836d77f074e7a306900c1e69
chflags size=8168 time=1380277975.000000000 cksum=3949425175 \
sha256digest=c99eb6fc1c92cac335c08be004a0a5b4c24a0c0ef3712017b12c89a978b2dac3
chio size=18520 time=1380277975.000000000 cksum=2208263309 \
sha256digest=ddf7c8cb92a58750a675328345560d8cc7fe14fb3ccd3690c34954cbe69fc964
chmod size=8640 time=1380277975.000000000 cksum=2214429708 \
sha256digest=a435972263bf814ad8df082c0752aa2a7bdd8b74ff01431ccbd52ed1e490bbe7
....
The machine's hostname, the date and time the specification was created, and the name of the user who created the specification are included in this report. There is a checksum, size, time, and SHA256 digest for each binary in the directory.
To verify that the binary signatures have not changed, compare the current contents of the directory to the previously generated specification, and save the results to a file. This command requires the seed that was used to generate the original specification:
[source,shell]
....
# mtree -s 3483151339707503 -p /bin < /root/.bin_chksum_mtree >> /root/.bin_chksum_output
# mtree: /bin checksum: 3427012225
....
This should produce the same checksum for [.filename]#/bin# that was produced when the specification was created. If no changes have occurred to the binaries in this directory, the [.filename]#/root/.bin_chksum_output# output file will be empty. To simulate a change, change the date on [.filename]#/bin/cat# using `touch` and run the verification command again:
[source,shell]
....
# touch /bin/cat
# mtree -s 3483151339707503 -p /bin < /root/.bin_chksum_mtree >> /root/.bin_chksum_output
# more /root/.bin_chksum_output
cat changed
modification time expected Fri Sep 27 06:32:55 2013 found Mon Feb 3 10:28:43 2014
....
It is recommended to create specifications for the directories which contain binaries and configuration files, as well as any directories containing sensitive data. Typically, specifications are created for [.filename]#/bin#, [.filename]#/sbin#, [.filename]#/usr/bin#, [.filename]#/usr/sbin#, [.filename]#/usr/local/bin#, [.filename]#/etc#, and [.filename]#/usr/local/etc#.
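A sketch of creating additional specifications, reusing the command syntax and seed from the earlier example (in practice, choose a private seed and whatever file names suit the local policy):
[source,shell]
....
# mtree -s 3483151339707503 -c -K cksum,sha256digest -p /sbin > /root/.sbin_chksum_mtree
# mtree -s 3483151339707503 -c -K cksum,sha256digest -p /etc > /root/.etc_chksum_mtree
....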
More advanced IDS systems exist, such as package:security/aide[]. In most cases, `mtree` provides the functionality administrators need. It is important to keep the seed value and the checksum output hidden from malicious users. More information about `mtree` can be found in man:mtree[8].
[[security-tuning]]
=== System Tuning for Security
In FreeBSD, many system features can be tuned using `sysctl`. A few of the security features which can be tuned to prevent Denial of Service (DoS) attacks will be covered in this section. More information about using `sysctl`, including how to temporarily change values and how to make the changes permanent after testing, can be found in crossref:config[configtuning-sysctl,“Tuning with sysctl(8)”].
[NOTE]
====
Any time a setting is changed with `sysctl`, the chance to cause undesired harm is increased, affecting the availability of the system. All changes should be monitored and, if possible, tried on a testing system before being used on a production system.
====
By default, the FreeBSD kernel boots with a security level of `-1`. This is called "insecure mode" because immutable file flags may be turned off and all devices may be read from or written to. The security level will remain at `-1` unless it is altered through `sysctl` or by a setting in the startup scripts. The security level may be increased during system startup by setting `kern_securelevel_enable` to `YES` in [.filename]#/etc/rc.conf#, and the value of `kern_securelevel` to the desired security level. See man:security[7] and man:init[8] for more information on these settings and the available security levels.
[WARNING]
====
Increasing the `securelevel` can break Xorg and cause other issues. Be prepared to do some debugging.
====
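A minimal [.filename]#/etc/rc.conf# sketch, assuming a security level of `1` is desired:
[.programlisting]
....
kern_securelevel_enable="YES"
kern_securelevel="1"
....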
The `net.inet.tcp.blackhole` and `net.inet.udp.blackhole` settings can be used to drop incoming SYN packets on closed ports without sending a return RST response. The default behavior is to return an RST to show a port is closed. Changing the default provides some level of protection against ports scans, which are used to determine which applications are running on a system. Set `net.inet.tcp.blackhole` to `2` and `net.inet.udp.blackhole` to `1`. Refer to man:blackhole[4] for more information about these settings.
The `net.inet.icmp.drop_redirect` and `net.inet.ip.redirect` settings help protect against _redirect attacks_. A redirect attack is a type of DoS which sends large numbers of ICMP type 5 (redirect) packets. Since these packets are not required, set `net.inet.icmp.drop_redirect` to `1` and set `net.inet.ip.redirect` to `0`.
Source routing is a method for detecting and accessing non-routable addresses on the internal network. It should be disabled, as non-routable addresses are normally unreachable by design. To disable this feature, set `net.inet.ip.sourceroute` and `net.inet.ip.accept_sourceroute` to `0`.
When a machine on the network needs to send messages to all hosts on a subnet, an ICMP echo request message is sent to the broadcast address. However, there is no reason for an external host to perform such an action. To reject all external broadcast requests, set `net.inet.icmp.bmcastecho` to `0`.
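Collected in one place, a sketch of the settings discussed above as they would appear in [.filename]#/etc/sysctl.conf# (test each change before making it permanent):
[.programlisting]
....
net.inet.tcp.blackhole=2
net.inet.udp.blackhole=1
net.inet.icmp.drop_redirect=1
net.inet.ip.redirect=0
net.inet.ip.sourceroute=0
net.inet.ip.accept_sourceroute=0
net.inet.icmp.bmcastecho=0
....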
Some additional settings are documented in man:security[7].
[[one-time-passwords]]
== One-time Passwords
By default, FreeBSD includes support for One-time Passwords In Everything (OPIE). OPIE is designed to prevent replay attacks, in which an attacker discovers a user's password and uses it to access a system. Since a password is only used once in OPIE, a discovered password is of little use to an attacker. OPIE uses a secure hash and a challenge/response system to manage passwords. The FreeBSD implementation uses the MD5 hash by default.
OPIE uses three different types of passwords. The first is the usual UNIX(R) or Kerberos password. The second is the one-time password which is generated by `opiekey`. The third type of password is the "secret password" which is used to generate one-time passwords. The secret password has nothing to do with, and should be different from, the UNIX(R) password.
There are two other pieces of data that are important to OPIE. One is the "seed" or "key", consisting of two letters and five digits. The other is the "iteration count", a number between 1 and 100. OPIE creates the one-time password by concatenating the seed and the secret password, applying the MD5 hash as many times as specified by the iteration count, and turning the result into six short English words which represent the one-time password. The authentication system keeps track of the last one-time password used, and the user is authenticated if the hash of the user-provided password is equal to the previous password. Since a one-way hash is used, it is impossible to generate future one-time passwords if a successfully used password is captured. The iteration count is decremented after each successful login to keep the user and the login program in sync. When the iteration count gets down to `1`, OPIE must be reinitialized.
There are a few programs involved in this process. A one-time password, or a consecutive list of one-time passwords, is generated by passing an iteration count, a seed, and a secret password to man:opiekey[1]. In addition to initializing OPIE, man:opiepasswd[1] is used to change passwords, iteration counts, or seeds. The credentials stored in [.filename]#/etc/opiekeys# are examined by man:opieinfo[1], which prints out the invoking user's current iteration count and seed.
This section describes four different sorts of operations. The first is how to set up one-time-passwords for the first time over a secure connection. The second is how to use `opiepasswd` over an insecure connection. The third is how to log in over an insecure connection. The fourth is how to generate a number of keys which can be written down or printed out to use at insecure locations.
=== Initializing OPIE
To initialize OPIE for the first time, run this command from a secure location:
[source,shell]
....
% opiepasswd -c
Adding unfurl:
Only use this method from the console; NEVER from remote. If you are using
telnet, xterm, or a dial-in, type ^C now or exit with no password.
Then run opiepasswd without the -c parameter.
Using MD5 to compute responses.
Enter new secret pass phrase:
Again new secret pass phrase:
ID unfurl OTP key is 499 to4268
MOS MALL GOAT ARM AVID COED
....
The `-c` sets console mode which assumes that the command is being run from a secure location, such as a computer under the user's control or an SSH session to a computer under the user's control.
When prompted, enter the secret password which will be used to generate the one-time login keys. This password should be difficult to guess and should be different than the password which is associated with the user's login account. It must be between 10 and 127 characters long. Remember this password.
The `ID` line lists the login name (`unfurl`), default iteration count (`499`), and default seed (`to4268`). When logging in, the system will remember these parameters and display them, meaning that they do not have to be memorized. The last line lists the generated one-time password which corresponds to those parameters and the secret password. At the next login, use this one-time password.
=== Insecure Connection Initialization
To initialize or change the secret password on an insecure system, a secure connection is needed to some place where `opiekey` can be run. This might be a shell prompt on a trusted machine. An iteration count is needed, where 100 is probably a good value, and the seed can either be specified or the randomly-generated one used. On the insecure connection, the machine being initialized, use man:opiepasswd[1]:
[source,shell]
....
% opiepasswd
Updating unfurl:
You need the response from an OTP generator.
Old secret pass phrase:
otp-md5 498 to4268 ext
Response: GAME GAG WELT OUT DOWN CHAT
New secret pass phrase:
otp-md5 499 to4269
Response: LINE PAP MILK NELL BUOY TROY
ID unfurl OTP key is 499 to4269
LINE PAP MILK NELL BUOY TROY
....
To accept the default seed, press kbd:[Return]. Before entering an access password, move over to the secure connection and give it the same parameters:
[source,shell]
....
% opiekey 498 to4268
Using the MD5 algorithm to compute response.
Reminder: Do not use opiekey from telnet or dial-in sessions.
Enter secret pass phrase:
GAME GAG WELT OUT DOWN CHAT
....
Switch back over to the insecure connection, and copy the generated one-time password over to the relevant program.
=== Generating a Single One-time Password
After initializing OPIE and logging in, a prompt like this will be displayed:
[source,shell]
....
% telnet example.com
Trying 10.0.0.1...
Connected to example.com
Escape character is '^]'.
FreeBSD/i386 (example.com) (ttypa)
login: <username>
otp-md5 498 gr4269 ext
Password:
....
The OPIE prompt provides a useful feature. If kbd:[Return] is pressed at the password prompt, the prompt will turn echo on and display what is typed. This can be useful when attempting to type in a password by hand from a printout.
At this point, generate the one-time password to answer this login prompt. This must be done on a trusted system where it is safe to run man:opiekey[1]. There are versions of this command for Windows(R), Mac OS(R) and FreeBSD. This command needs the iteration count and the seed as command line options. Use cut-and-paste from the login prompt on the machine being logged in to.
On the trusted system:
[source,shell]
....
% opiekey 498 to4268
Using the MD5 algorithm to compute response.
Reminder: Do not use opiekey from telnet or dial-in sessions.
Enter secret pass phrase:
GAME GAG WELT OUT DOWN CHAT
....
Once the one-time password is generated, continue to log in.
=== Generating Multiple One-time Passwords
Sometimes there is no access to a trusted machine or secure connection. In this case, it is possible to use man:opiekey[1] to generate a number of one-time passwords beforehand. For example:
[source,shell]
....
% opiekey -n 5 30 zz99999
Using the MD5 algorithm to compute response.
Reminder: Do not use opiekey from telnet or dial-in sessions.
Enter secret pass phrase: <secret password>
26: JOAN BORE FOSS DES NAY QUIT
27: LATE BIAS SLAY FOLK MUCH TRIG
28: SALT TIN ANTI LOON NEAL USE
29: RIO ODIN GO BYE FURY TIC
30: GREW JIVE SAN GIRD BOIL PHI
....
The `-n 5` requests five keys in sequence, and `30` specifies what the last iteration number should be. Note that these are printed out in _reverse_ order of use. The really paranoid might want to write the results down by hand; otherwise, print the list. Each line shows both the iteration count and the one-time password. Scratch off the passwords as they are used.
=== Restricting Use of UNIX(R) Passwords
OPIE can restrict the use of UNIX(R) passwords based on the IP address of a login session. The relevant file is [.filename]#/etc/opieaccess#, which is present by default. Refer to man:opieaccess[5] for more information on this file and which security considerations to be aware of when using it.
Here is a sample [.filename]#opieaccess#:
[.programlisting]
....
permit 192.168.0.0 255.255.0.0
....
This line allows users whose IP source address (which is vulnerable to spoofing) matches the specified value and mask to use UNIX(R) passwords at any time.
If no rules in [.filename]#opieaccess# are matched, the default is to deny non-OPIE logins.
[[tcpwrappers]]
== TCP Wrapper
TCP Wrapper is a host-based access control system which extends the abilities of crossref:network-servers[network-inetd,“The inetd Super-Server”]. It can be configured to provide logging support, return messages, and connection restrictions for the server daemons under the control of inetd. Refer to man:tcpd[8] for more information about TCP Wrapper and its features.
TCP Wrapper should not be considered a replacement for a properly configured firewall. Instead, TCP Wrapper should be used in conjunction with a firewall and other security enhancements in order to provide another layer of protection in the implementation of a security policy.
=== Initial Configuration
To enable TCP Wrapper in FreeBSD, add the following lines to [.filename]#/etc/rc.conf#:
[.programlisting]
....
inetd_enable="YES"
inetd_flags="-Ww"
....
Then, properly configure [.filename]#/etc/hosts.allow#.
[NOTE]
====
Unlike other implementations of TCP Wrapper, the use of [.filename]#hosts.deny# is deprecated in FreeBSD. All configuration options should be placed in [.filename]#/etc/hosts.allow#.
====
In the simplest configuration, daemon connection policies are set to either permit or block, depending on the options in [.filename]#/etc/hosts.allow#. The default configuration in FreeBSD is to allow all connections to the daemons started with inetd.
Basic configuration usually takes the form of `daemon : address : action`, where `daemon` is the daemon which inetd started, `address` is a valid hostname, IP address, or an IPv6 address enclosed in brackets ([ ]), and `action` is either `allow` or `deny`. TCP Wrapper uses a first rule match semantic, meaning that the configuration file is scanned from the beginning for a matching rule. When a match is found, the rule is applied and the search process stops.
For example, to allow POP3 connections via the package:mail/qpopper[] daemon, the following lines should be appended to [.filename]#hosts.allow#:
[.programlisting]
....
# This line is required for POP3 connections:
qpopper : ALL : allow
....
Whenever this file is edited, restart inetd:
[source,shell]
....
# service inetd restart
....
=== Advanced Configuration
TCP Wrapper provides advanced options to allow more control over the way connections are handled. In some cases, it may be appropriate to return a comment to certain hosts or daemon connections. In other cases, a log entry should be recorded or an email sent to the administrator. Other situations may require the use of a service for local connections only. This is all possible through the use of configuration options known as wildcards, expansion characters, and external command execution.
Suppose that a situation occurs where a connection should be denied yet a reason should be sent to the host who attempted to establish that connection. That action is possible with `twist`. When a connection attempt is made, `twist` executes a shell command or script. An example exists in [.filename]#hosts.allow#:
[.programlisting]
....
# The rest of the daemons are protected.
ALL : ALL \
: severity auth.info \
: twist /bin/echo "You are not welcome to use %d from %h."
....
In this example, the message "You are not welcome to use _daemon name_ from _hostname_." will be returned for any daemon not configured in [.filename]#hosts.allow#. This is useful for sending a reply back to the connection initiator right after the established connection is dropped. Any message returned _must_ be wrapped in quote (`"`) characters.
[WARNING]
====
It may be possible to launch a denial of service attack on the server if an attacker floods these daemons with connection requests.
====
Another possibility is to use `spawn`. Like `twist`, `spawn` implicitly denies the connection and may be used to run external shell commands or scripts. Unlike `twist`, `spawn` will not send a reply back to the host who established the connection. For example, consider the following configuration:
[.programlisting]
....
# We do not allow connections from example.com:
ALL : .example.com \
: spawn (/bin/echo %a from %h attempted to access %d >> \
/var/log/connections.log) \
: deny
....
This will deny all connection attempts from `*.example.com` and log the hostname, IP address, and the daemon to which access was attempted to [.filename]#/var/log/connections.log#. This example uses the substitution characters `%a` and `%h`. Refer to man:hosts_access[5] for the complete list.
To match every instance of a daemon, domain, or IP address, use `ALL`. Another wildcard is `PARANOID` which may be used to match any host which provides an IP address that may be forged because the IP address differs from its resolved hostname. In this example, all connection requests to Sendmail which have an IP address that varies from its hostname will be denied:
[.programlisting]
....
# Block possibly spoofed requests to sendmail:
sendmail : PARANOID : deny
....
[CAUTION]
====
Using the `PARANOID` wildcard will result in denied connections if the client or server has a broken DNS setup.
====
To learn more about wildcards and their associated functionality, refer to man:hosts_access[5].
[NOTE]
====
When adding new configuration lines, make sure that any unneeded entries for that daemon are commented out in [.filename]#hosts.allow#.
====
[[kerberos5]]
== Kerberos
Kerberos is a network authentication protocol which was originally created by the Massachusetts Institute of Technology (MIT) as a way to securely provide authentication across a potentially hostile network. The Kerberos protocol uses strong cryptography so that both a client and server can prove their identity without sending any unencrypted secrets over the network. Kerberos can be described as an identity-verifying proxy system and as a trusted third-party authentication system. After a user authenticates with Kerberos, their communications can be encrypted to assure privacy and data integrity.
The only function of Kerberos is to provide the secure authentication of users and servers on the network. It does not provide authorization or auditing functions. It is recommended that Kerberos be used with other security methods which provide authorization and audit services.
The current version of the protocol is version 5, described in RFC 4120. Several free implementations of this protocol are available, covering a wide range of operating systems. MIT continues to develop their Kerberos package. It is commonly used in the US as a cryptography product, and has historically been subject to US export regulations. In FreeBSD, MIT Kerberos is available as the package:security/krb5[] package or port. The Heimdal Kerberos implementation was explicitly developed outside of the US to avoid export regulations. The Heimdal Kerberos distribution is included in the base FreeBSD installation, and another distribution with more configurable options is available as package:security/heimdal[] in the Ports Collection.
In Kerberos, users and services are identified as "principals" which are contained within an administrative grouping called a "realm". A typical user principal would be of the form `_user_@_REALM_` (realms are traditionally uppercase).
This section provides a guide on how to set up Kerberos using the Heimdal distribution included in FreeBSD.
For purposes of demonstrating a Kerberos installation, the name spaces will be as follows:
* The DNS domain (zone) will be `example.org`.
* The Kerberos realm will be `EXAMPLE.ORG`.
[NOTE]
====
Use real domain names when setting up Kerberos, even if it will run internally. This avoids DNS problems and assures inter-operation with other Kerberos realms.
====
=== Setting up a Heimdal KDC
The Key Distribution Center (KDC) is the centralized authentication service that Kerberos provides, the "trusted third party" of the system. It is the computer that issues Kerberos tickets, which are used for clients to authenticate to servers. As the KDC is considered trusted by all other computers in the Kerberos realm, it has heightened security concerns. Direct access to the KDC should be limited.
While running a KDC requires few computing resources, a dedicated machine acting only as a KDC is recommended for security reasons.
To begin, install the package:security/heimdal[] package as follows:
[source,shell]
....
# pkg install heimdal
....
Next, update [.filename]#/etc/rc.conf# using `sysrc` as follows:
[source,shell]
....
# sysrc kdc_enable=yes
# sysrc kadmind_enable=yes
....
Next, edit [.filename]#/etc/krb5.conf# as follows:
[.programlisting]
....
[libdefaults]
default_realm = EXAMPLE.ORG
[realms]
EXAMPLE.ORG = {
kdc = kerberos.example.org
admin_server = kerberos.example.org
}
[domain_realm]
.example.org = EXAMPLE.ORG
....
In this example, the KDC will use the fully-qualified hostname `kerberos.example.org`. The hostname of the KDC must be resolvable in the DNS.
Kerberos can also use the DNS to locate KDCs, instead of a `[realms]` section in [.filename]#/etc/krb5.conf#. For large organizations that have their own DNS servers, the above example could be trimmed to:
[.programlisting]
....
[libdefaults]
default_realm = EXAMPLE.ORG
[domain_realm]
.example.org = EXAMPLE.ORG
....
With the following lines being included in the `example.org` zone file:
[.programlisting]
....
_kerberos._udp IN SRV 01 00 88 kerberos.example.org.
_kerberos._tcp IN SRV 01 00 88 kerberos.example.org.
_kpasswd._udp IN SRV 01 00 464 kerberos.example.org.
_kerberos-adm._tcp IN SRV 01 00 749 kerberos.example.org.
_kerberos IN TXT EXAMPLE.ORG
....
[NOTE]
====
In order for clients to be able to find the Kerberos services, they _must_ have either a fully configured [.filename]#/etc/krb5.conf# or a minimally configured [.filename]#/etc/krb5.conf# _and_ a properly configured DNS server.
====
Next, create the Kerberos database which contains the keys of all principals (users and hosts) encrypted with a master password. It is not required to remember this password as it will be stored in [.filename]#/var/heimdal/m-key#; it would be reasonable to use a 45-character random password for this purpose. To create the master key, run `kstash` and enter a password:
[source,shell]
....
# kstash
Master key: xxxxxxxxxxxxxxxxxxxxxxx
Verifying password - Master key: xxxxxxxxxxxxxxxxxxxxxxx
....
Once the master key has been created, the database should be initialized. The Kerberos administrative tool man:kadmin[8] can be used on the KDC in a mode that operates directly on the database, without using the man:kadmind[8] network service, as `kadmin -l`. This resolves the chicken-and-egg problem of trying to connect to the database before it is created. At the `kadmin` prompt, use `init` to create the realm's initial database:
[source,shell]
....
# kadmin -l
kadmin> init EXAMPLE.ORG
Realm max ticket life [unlimited]:
....
Lastly, while still in `kadmin`, create the first principal using `add`. Stick to the default options for the principal for now, as these can be changed later with `modify`. Type `?` at the prompt to see the available options.
[source,shell]
....
kadmin> add tillman
Max ticket life [unlimited]:
Max renewable life [unlimited]:
Principal expiration time [never]:
Password expiration time [never]:
Attributes []:
Password: xxxxxxxx
Verifying password - Password: xxxxxxxx
....
Next, start the KDC services by running:
[source,shell]
....
# service kdc start
# service kadmind start
....
While there will not be any kerberized daemons running at this point, it is possible to confirm that the KDC is functioning by obtaining a ticket for the principal that was just created:
[source,shell]
....
% kinit tillman
tillman@EXAMPLE.ORG's Password:
....
Confirm that a ticket was successfully obtained using `klist`:
[source,shell]
....
% klist
Credentials cache: FILE:/tmp/krb5cc_1001
Principal: tillman@EXAMPLE.ORG
Issued Expires Principal
Aug 27 15:37:58 2013 Aug 28 01:37:58 2013 krbtgt/EXAMPLE.ORG@EXAMPLE.ORG
....
The temporary ticket can be destroyed when the test is finished:
[source,shell]
....
% kdestroy
....
=== Configuring a Server to Use Kerberos
The first step in configuring a server to use Kerberos authentication is to ensure that it has the correct configuration in [.filename]#/etc/krb5.conf#. The version from the KDC can be used as-is, or it can be regenerated on the new system.
Next, create [.filename]#/etc/krb5.keytab# on the server. This is the main part of "Kerberizing" a service - it corresponds to generating a secret shared between the service and the KDC. The secret is a cryptographic key, stored in a "keytab". The keytab contains the server's host key, which allows it and the KDC to verify each others' identity. It must be transmitted to the server in a secure fashion, as the security of the server can be broken if the key is made public. Typically, the [.filename]#keytab# is generated on an administrator's trusted machine using `kadmin`, then securely transferred to the server, e.g., with man:scp[1]; it can also be created directly on the server if that is consistent with the desired security policy. It is very important that the keytab is transmitted to the server in a secure fashion: if the key is known by some other party, that party can impersonate any user to the server! Using `kadmin` on the server directly is convenient, because the entry for the host principal in the KDC database is also created using `kadmin`.
Of course, `kadmin` is a kerberized service; a Kerberos ticket is needed to authenticate to the network service, but to ensure that the user running `kadmin` is actually present (and their session has not been hijacked), `kadmin` will prompt for the password to get a fresh ticket. The principal authenticating to the kadmin service must be permitted to use the `kadmin` interface, as specified in [.filename]#/var/heimdal/kadmind.acl#. See the section titled "Remote administration" in `info heimdal` for details on designing access control lists. Instead of enabling remote `kadmin` access, the administrator could securely connect to the KDC via the local console or man:ssh[1], and perform administration locally using `kadmin -l`.
After installing [.filename]#/etc/krb5.conf#, use `add --random-key` in `kadmin`. This adds the server's host principal to the database, but does not extract a copy of the host principal key to a keytab. To generate the keytab, use `ext_keytab` to extract the server's host principal key to its own keytab:
[source,shell]
....
# kadmin
kadmin> add --random-key host/myserver.example.org
Max ticket life [unlimited]:
Max renewable life [unlimited]:
Principal expiration time [never]:
Password expiration time [never]:
Attributes []:
kadmin> ext_keytab host/myserver.example.org
kadmin> exit
....
Note that `ext_keytab` stores the extracted key in [.filename]#/etc/krb5.keytab# by default. This is good when being run on the server being kerberized, but the `--keytab _path/to/file_` argument should be used when the keytab is being extracted elsewhere:
[source,shell]
....
# kadmin
kadmin> ext_keytab --keytab=/tmp/example.keytab host/myserver.example.org
kadmin> exit
....
The keytab can then be securely copied to the server using man:scp[1] or removable media. Be sure to specify a non-default keytab name to avoid inserting unneeded keys into the system's keytab.
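For example, assuming the keytab was extracted to [.filename]#/tmp/example.keytab# as above, it could be copied into place on the server and the local copy removed afterwards; the hostname follows the earlier example:

[source,shell]
....
# scp /tmp/example.keytab root@myserver.example.org:/etc/krb5.keytab
# rm /tmp/example.keytab
....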
At this point, the server can read encrypted messages from the KDC using its shared key, stored in [.filename]#krb5.keytab#. It is now ready for the Kerberos-using services to be enabled. One of the most common such services is man:sshd[8], which supports Kerberos via the GSS-API. In [.filename]#/etc/ssh/sshd_config#, add the line:
[.programlisting]
....
GSSAPIAuthentication yes
....
After making this change, man:sshd[8] must be restarted for the new configuration to take effect: `service sshd restart`.
=== Configuring a Client to Use Kerberos
As it was for the server, the client requires configuration in [.filename]#/etc/krb5.conf#. Copy the file in place (securely) or re-enter it as needed.
Test the client by using `kinit`, `klist`, and `kdestroy` from the client to obtain, show, and then delete a ticket for an existing principal. Kerberos applications should also be able to connect to Kerberos-enabled servers. If that does not work but obtaining a ticket does, the problem is likely with the server and not with the client or the KDC. In the case of kerberized man:ssh[1], GSS-API is disabled by default, so test using `ssh -o GSSAPIAuthentication=yes _hostname_`.
When testing a Kerberized application, try using a packet sniffer such as `tcpdump` to confirm that no sensitive information is sent in the clear.
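As a rough sketch, assuming the interface is `em0` and the server is the `myserver.example.org` host from the earlier examples, the following prints packet payloads in ASCII so that any credentials sent in the clear would be visible:

[source,shell]
....
# tcpdump -i em0 -A host myserver.example.org
....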
Various Kerberos client applications are available. With the advent of a bridge so that applications using SASL for authentication can use GSS-API mechanisms as well, large classes of client applications can use Kerberos for authentication, from Jabber clients to IMAP clients.
Users within a realm typically have their Kerberos principal mapped to a local user account. Occasionally, one needs to grant access to a local user account to someone who does not have a matching Kerberos principal. For example, `tillman@EXAMPLE.ORG` may need access to the local user account `webdevelopers`. Other principals may also need access to that local account.
The [.filename]#.k5login# and [.filename]#.k5users# files, placed in a user's home directory, can be used to solve this problem. For example, if the following [.filename]#.k5login# is placed in the home directory of `webdevelopers`, both principals listed will have access to that account without requiring a shared password:
[.programlisting]
....
tillman@example.org
jdoe@example.org
....
Refer to man:ksu[1] for more information about [.filename]#.k5users#.
=== MIT Differences
The major difference between the MIT and Heimdal implementations is that `kadmin` has a different, but equivalent, set of commands and uses a different protocol. If the KDC is MIT, the Heimdal version of `kadmin` cannot be used to administer the KDC remotely, and vice versa.
Client applications may also use slightly different command line options to accomplish the same tasks. Following the instructions at http://web.mit.edu/Kerberos/www/[http://web.mit.edu/Kerberos/www/] is recommended. Be careful of path issues: the MIT port installs into [.filename]#/usr/local/# by default, and the FreeBSD system applications run instead of the MIT versions if `PATH` lists the system directories first.
When using MIT Kerberos as a KDC on FreeBSD, the following edits should also be made to [.filename]#rc.conf#:
[.programlisting]
....
kdc_program="/usr/local/sbin/kdc"
kadmind_program="/usr/local/sbin/kadmind"
kdc_flags=""
kdc_enable="YES"
kadmind_enable="YES"
....
=== Kerberos Tips, Tricks, and Troubleshooting
When configuring and troubleshooting Kerberos, keep the following points in mind:
* When using either Heimdal or MIT Kerberos from ports, ensure that the `PATH` lists the port's versions of the client applications before the system versions.
* If all the computers in the realm do not have synchronized time settings, authentication may fail. crossref:network-servers[network-ntp,“Clock Synchronization with NTP”] describes how to synchronize clocks using NTP.
* If the hostname is changed, the `host/` principal must be changed and the keytab updated. This also applies to special keytab entries like the `HTTP/` principal used for Apache's package:www/mod_auth_kerb[].
* All hosts in the realm must be both forward and reverse resolvable in DNS or, at a minimum, exist in [.filename]#/etc/hosts#. CNAMEs will work, but the A and PTR records must be correct and in place. The error message for unresolvable hosts is not intuitive: `Kerberos5 refuses authentication because Read req failed: Key table entry not found`.
* Some operating systems that act as clients to the KDC do not set the permissions for `ksu` to be setuid `root`. This means that `ksu` does not work. This is a permissions problem, not a KDC error.
* With MIT Kerberos, to allow a principal to have a ticket life longer than the default lifetime of ten hours, use `modify_principal` at the man:kadmin[8] prompt to change the `maxlife` of both the principal in question and the `krbtgt` principal. The principal can then use `kinit -l` to request a ticket with a longer lifetime. See the example after this list.
* When running a packet sniffer on the KDC to aid in troubleshooting while running `kinit` from a workstation, the Ticket Granting Ticket (TGT) is sent immediately, even before the password is typed. This is because the Kerberos server freely transmits a TGT to any unauthorized request. However, every TGT is encrypted in a key derived from the user's password. When a user types their password, it is not sent to the KDC, it is instead used to decrypt the TGT that `kinit` already obtained. If the decryption process results in a valid ticket with a valid time stamp, the user has valid Kerberos credentials. These credentials include a session key for establishing secure communications with the Kerberos server in the future, as well as the actual TGT, which is encrypted with the Kerberos server's own key. This second layer of encryption allows the Kerberos server to verify the authenticity of each TGT.
* Host principals can have a longer ticket lifetime. If the user principal has a lifetime of a week but the host being connected to has a lifetime of nine hours, the user cache will have an expired host principal and the ticket cache will not work as expected.
* When setting up [.filename]#krb5.dict# to prevent specific bad passwords from being used as described in man:kadmind[8], remember that it only applies to principals that have a password policy assigned to them. The format used in [.filename]#krb5.dict# is one string per line. Creating a symbolic link to [.filename]#/usr/share/dict/words# might be useful.
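As mentioned in the ticket lifetime item above, a minimal MIT `kadmin` session for raising the maximum lifetime might look like the following; the principal names follow the earlier examples and the one-week value is only illustrative:

[source,shell]
....
# kadmin
kadmin: modify_principal -maxlife "1 week" tillman@EXAMPLE.ORG
kadmin: modify_principal -maxlife "1 week" krbtgt/EXAMPLE.ORG@EXAMPLE.ORG
kadmin: exit
% kinit -l 7d tillman
....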
=== Mitigating Kerberos Limitations
Since Kerberos is an all-or-nothing approach, every service enabled on the network must either be modified to work with Kerberos or be otherwise secured against network attacks. This is to prevent user credentials from being stolen and re-used. An example is when Kerberos is enabled on all remote shells but the non-Kerberized POP3 mail server sends passwords in plain text.
The KDC is a single point of failure. By design, the KDC must be as secure as its master password database. The KDC should have absolutely no other services running on it and should be physically secure. The danger is high because Kerberos stores all passwords encrypted with the same master key which is stored as a file on the KDC.
A compromised master key is not quite as bad as one might fear. The master key is only used to encrypt the Kerberos database and as a seed for the random number generator. As long as access to the KDC is secure, an attacker cannot do much with the master key.
If the KDC is unavailable, network services are unusable as authentication cannot be performed. This can be alleviated with a single master KDC and one or more slaves, and with careful implementation of secondary or fall-back authentication using PAM.
Kerberos allows users, hosts and services to authenticate between themselves. It does not have a mechanism to authenticate the KDC to the users, hosts, or services. This means that a trojaned `kinit` could record all user names and passwords. File system integrity checking tools like package:security/tripwire[] can alleviate this.
=== Resources and Further Information
* http://www.faqs.org/faqs/Kerberos-faq/general/preamble.html[The Kerberos FAQ]
* http://web.mit.edu/Kerberos/www/dialogue.html[Designing an Authentication System: a Dialog in Four Scenes]
* https://www.ietf.org/rfc/rfc4120.txt[RFC 4120, The Kerberos Network Authentication Service (V5)]
* http://web.mit.edu/Kerberos/www/[MIT Kerberos home page]
* https://github.com/heimdal/heimdal/wiki[Heimdal Kerberos project wiki page]
[[openssl]]
== OpenSSL
OpenSSL is an open source implementation of the SSL and TLS protocols. It provides an encryption transport layer on top of the normal communications layer, allowing it to be intertwined with many network applications and services.
The version of OpenSSL included in FreeBSD supports the Secure Sockets Layer 3.0 (SSLv3) and Transport Layer Security 1.0/1.1/1.2 (TLSv1/TLSv1.1/TLSv1.2) network security protocols and can be used as a general cryptographic library. In FreeBSD 12.0-RELEASE and above, OpenSSL also supports Transport Layer Security 1.3 (TLSv1.3).
OpenSSL is often used to encrypt authentication of mail clients and to secure web based transactions such as credit card payments. Some ports, such as package:www/apache24[] and package:databases/postgresql11-server[], include a compile option for building with OpenSSL. If selected, the port will add support using OpenSSL from the base system. To instead have the port compile against OpenSSL from the package:security/openssl[] port, add the following to [.filename]#/etc/make.conf#:
[.programlisting]
....
DEFAULT_VERSIONS+= ssl=openssl
....
Another common use of OpenSSL is to provide certificates for use with software applications. Certificates can be used to verify the credentials of a company or individual. If a certificate has not been signed by an external _Certificate Authority_ (CA), such as http://www.verisign.com[http://www.verisign.com], the application that uses the certificate will produce a warning. There is a cost associated with obtaining a signed certificate and using a signed certificate is not mandatory as certificates can be self-signed. However, using an external authority will prevent warnings and can put users at ease.
This section demonstrates how to create and use certificates on a FreeBSD system. Refer to crossref:network-servers[ldap-config,“Configuring an LDAP Server”] for an example of how to create a CA for signing one's own certificates.
For more information about SSL, read the free https://www.feistyduck.com/books/openssl-cookbook/[OpenSSL Cookbook].
=== Generating Certificates
To generate a certificate that will be signed by an external CA, issue the following command and input the information requested at the prompts. This input information will be written to the certificate. At the `Common Name` prompt, input the fully qualified name for the system that will use the certificate. If this name does not match the server, the application verifying the certificate will issue a warning to the user, rendering the verification provided by the certificate useless.
[source,shell]
....
# openssl req -new -nodes -out req.pem -keyout cert.key -sha256 -newkey rsa:2048
Generating a 2048 bit RSA private key
..................+++
.............................................................+++
writing new private key to 'cert.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:PA
Locality Name (eg, city) []:Pittsburgh
Organization Name (eg, company) [Internet Widgits Pty Ltd]:My Company
Organizational Unit Name (eg, section) []:Systems Administrator
Common Name (eg, YOUR name) []:localhost.example.org
Email Address []:trhodes@FreeBSD.org
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:Another Name
....
Other options, such as the expire time and alternate encryption algorithms, are available when creating a certificate. A complete list of options is described in man:openssl[1].
This command will create two files in the current directory. The certificate request, [.filename]#req.pem#, can be sent to a CA who will validate the entered credentials, sign the request, and return the signed certificate. The second file, [.filename]#cert.key#, is the private key for the certificate and should be stored in a secure location. If this falls in the hands of others, it can be used to impersonate the user or the server.
Alternately, if a signature from a CA is not required, a self-signed certificate can be created. First, generate the RSA key:
[source,shell]
....
# openssl genrsa -rand -genkey -out cert.key 2048
0 semi-random bytes loaded
Generating RSA private key, 2048 bit long modulus
.............................................+++
.................................................................................................................+++
e is 65537 (0x10001)
....
Use this key to create a self-signed certificate. Follow the usual prompts for creating a certificate:
[source,shell]
....
# openssl req -new -x509 -days 365 -key cert.key -out cert.crt -sha256
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:PA
Locality Name (eg, city) []:Pittsburgh
Organization Name (eg, company) [Internet Widgits Pty Ltd]:My Company
Organizational Unit Name (eg, section) []:Systems Administrator
Common Name (e.g. server FQDN or YOUR name) []:localhost.example.org
Email Address []:trhodes@FreeBSD.org
....
This will create two new files in the current directory: a private key file [.filename]#cert.key#, and the certificate itself, [.filename]#cert.crt#. These should be placed in a directory, preferably under [.filename]#/etc/ssl/#, which is readable only by `root`. Permissions of `0700` are appropriate for these files and can be set using `chmod`.
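For example, assuming the files are kept directly under [.filename]#/etc/ssl/#:

[source,shell]
....
# cp cert.crt cert.key /etc/ssl/
# chmod 0700 /etc/ssl/cert.crt /etc/ssl/cert.key
....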
=== Using Certificates
One use for a certificate is to encrypt connections to the Sendmail mail server in order to prevent the use of clear text authentication.
[NOTE]
====
Some mail clients will display an error if the user has not installed a local copy of the certificate. Refer to the documentation included with the software for more information on certificate installation.
====
In FreeBSD 10.0-RELEASE and above, it is possible to create a self-signed certificate for Sendmail automatically. To enable this, add the following lines to [.filename]#/etc/rc.conf#:
[.programlisting]
....
sendmail_enable="YES"
sendmail_cert_create="YES"
sendmail_cert_cn="localhost.example.org"
....
This will automatically create a self-signed certificate, [.filename]#/etc/mail/certs/host.cert#, a signing key, [.filename]#/etc/mail/certs/host.key#, and a CA certificate, [.filename]#/etc/mail/certs/cacert.pem#. The certificate will use the `Common Name` specified in `sendmail_cert_cn`. After saving the edits, restart Sendmail:
[source,shell]
....
# service sendmail restart
....
If all went well, there will be no error messages in [.filename]#/var/log/maillog#. For a simple test, connect to the mail server's listening port using `telnet`:
[source,shell]
....
# telnet example.com 25
Trying 192.0.34.166...
Connected to example.com.
Escape character is '^]'.
220 example.com ESMTP Sendmail 8.14.7/8.14.7; Fri, 18 Apr 2014 11:50:32 -0400 (EDT)
ehlo example.com
250-example.com Hello example.com [192.0.34.166], pleased to meet you
250-ENHANCEDSTATUSCODES
250-PIPELINING
250-8BITMIME
250-SIZE
250-DSN
250-ETRN
250-AUTH LOGIN PLAIN
250-STARTTLS
250-DELIVERBY
250 HELP
quit
221 2.0.0 example.com closing connection
Connection closed by foreign host.
....
If the `STARTTLS` line appears in the output, everything is working correctly.
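The TLS handshake itself can also be checked with `s_client` from man:openssl[1], which negotiates `STARTTLS` and prints the certificate presented by the server. This is only a quick sketch run against the local Sendmail instance:

[source,shell]
....
# openssl s_client -connect localhost:25 -starttls smtp
....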
[[ipsec]]
== VPN over IPsec
Internet Protocol Security (IPsec) is a set of protocols which sit on top of the Internet Protocol (IP) layer. It allows two or more hosts to communicate in a secure manner by authenticating and encrypting each IP packet of a communication session. The FreeBSD IPsec network stack is based on the http://www.kame.net/[http://www.kame.net/] implementation and supports both IPv4 and IPv6 sessions.
IPsec is comprised of the following sub-protocols:
* _Encapsulated Security Payload (ESP)_: this protocol protects the IP packet data from third party interference by encrypting the contents using symmetric cryptography algorithms such as Blowfish and 3DES.
* _Authentication Header (AH)_: this protocol protects the IP packet header from third party interference and spoofing by computing a cryptographic checksum and hashing the IP packet header fields with a secure hashing function. This is then followed by an additional header that contains the hash, to allow the information in the packet to be authenticated.
* _IP Payload Compression Protocol (IPComp)_: this protocol tries to increase communication performance by compressing the IP payload in order to reduce the amount of data sent.
These protocols can either be used together or separately, depending on the environment.
IPsec supports two modes of operation. The first mode, _Transport Mode_, protects communications between two hosts. The second mode, _Tunnel Mode_, is used to build virtual tunnels, commonly known as Virtual Private Networks (VPNs). Consult man:ipsec[4] for detailed information on the IPsec subsystem in FreeBSD.
IPsec support is enabled by default on FreeBSD 11 and later. For previous versions of FreeBSD, add these options to a custom kernel configuration file and rebuild the kernel using the instructions in crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel]:
[source,shell]
....
options   IPSEC        #IP security
device crypto
....
If IPsec debugging support is desired, the following kernel option should also be added:
[source,shell]
....
options   IPSEC_DEBUG  #debug for IP security
....
The rest of this chapter demonstrates the process of setting up an IPsec VPN between a home network and a corporate network. In the example scenario:
* Both sites are connected to the Internet through a gateway that is running FreeBSD.
* The gateway on each network has at least one external IP address. In this example, the corporate LAN's external IP address is `172.16.5.4` and the home LAN's external IP address is `192.168.1.12`.
* The internal addresses of the two networks can be either public or private IP addresses. However, the address space must not collide. For example, both networks cannot use `192.168.1.x`. In this example, the corporate LAN's internal IP address is `10.246.38.1` and the home LAN's internal IP address is `10.0.0.5`.
=== Configuring a VPN on FreeBSD
To begin, package:security/ipsec-tools[] must be installed from the Ports Collection. This software provides a number of applications which support the configuration.
The next requirement is to create two man:gif[4] pseudo-devices which will be used to tunnel packets and allow both networks to communicate properly. As `root`, run the following commands, replacing _internal_ and _external_ with the real IP addresses of the internal and external interfaces of the two gateways:
[source,shell]
....
# ifconfig gif0 create
# ifconfig gif0 internal1 internal2
# ifconfig gif0 tunnel external1 external2
....
Verify the setup on each gateway, using `ifconfig`. Here is the output from Gateway 1:
[.programlisting]
....
gif0: flags=8051 mtu 1280
tunnel inet 172.16.5.4 --> 192.168.1.12
inet6 fe80::2e0:81ff:fe02:5881%gif0 prefixlen 64 scopeid 0x6
inet 10.246.38.1 --> 10.0.0.5 netmask 0xffffff00
....
Here is the output from Gateway 2:
[.programlisting]
....
gif0: flags=8051 mtu 1280
tunnel inet 192.168.1.12 --> 172.16.5.4
inet 10.0.0.5 --> 10.246.38.1 netmask 0xffffff00
inet6 fe80::250:bfff:fe3a:c1f%gif0 prefixlen 64 scopeid 0x4
....
Once complete, both internal IP addresses should be reachable using man:ping[8]:
[source,shell]
....
priv-net# ping 10.0.0.5
PING 10.0.0.5 (10.0.0.5): 56 data bytes
64 bytes from 10.0.0.5: icmp_seq=0 ttl=64 time=42.786 ms
64 bytes from 10.0.0.5: icmp_seq=1 ttl=64 time=19.255 ms
64 bytes from 10.0.0.5: icmp_seq=2 ttl=64 time=20.440 ms
64 bytes from 10.0.0.5: icmp_seq=3 ttl=64 time=21.036 ms
--- 10.0.0.5 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max/stddev = 19.255/25.879/42.786/9.782 ms
corp-net# ping 10.246.38.1
PING 10.246.38.1 (10.246.38.1): 56 data bytes
64 bytes from 10.246.38.1: icmp_seq=0 ttl=64 time=28.106 ms
64 bytes from 10.246.38.1: icmp_seq=1 ttl=64 time=42.917 ms
64 bytes from 10.246.38.1: icmp_seq=2 ttl=64 time=127.525 ms
64 bytes from 10.246.38.1: icmp_seq=3 ttl=64 time=119.896 ms
64 bytes from 10.246.38.1: icmp_seq=4 ttl=64 time=154.524 ms
--- 10.246.38.1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 28.106/94.594/154.524/49.814 ms
....
As expected, both sides have the ability to send and receive ICMP packets from the privately configured addresses. Next, both gateways must be told how to route packets in order to correctly send traffic from either network. The following commands will achieve this goal:
[source,shell]
....
corp-net# route add 10.0.0.0 10.0.0.5 255.255.255.0
corp-net# route add net 10.0.0.0: gateway 10.0.0.5
priv-net# route add 10.246.38.0 10.246.38.1 255.255.255.0
priv-net# route add host 10.246.38.0: gateway 10.246.38.1
....
At this point, internal machines should be reachable from each gateway as well as from machines behind the gateways. Again, use man:ping[8] to confirm:
[source,shell]
....
corp-net# ping 10.0.0.8
PING 10.0.0.8 (10.0.0.8): 56 data bytes
64 bytes from 10.0.0.8: icmp_seq=0 ttl=63 time=92.391 ms
64 bytes from 10.0.0.8: icmp_seq=1 ttl=63 time=21.870 ms
64 bytes from 10.0.0.8: icmp_seq=2 ttl=63 time=198.022 ms
64 bytes from 10.0.0.8: icmp_seq=3 ttl=63 time=22.241 ms
64 bytes from 10.0.0.8: icmp_seq=4 ttl=63 time=174.705 ms
--- 10.0.0.8 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 21.870/101.846/198.022/74.001 ms
priv-net# ping 10.246.38.107
PING 10.246.38.1 (10.246.38.107): 56 data bytes
64 bytes from 10.246.38.107: icmp_seq=0 ttl=64 time=53.491 ms
64 bytes from 10.246.38.107: icmp_seq=1 ttl=64 time=23.395 ms
64 bytes from 10.246.38.107: icmp_seq=2 ttl=64 time=23.865 ms
64 bytes from 10.246.38.107: icmp_seq=3 ttl=64 time=21.145 ms
64 bytes from 10.246.38.107: icmp_seq=4 ttl=64 time=36.708 ms
--- 10.246.38.107 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 21.145/31.721/53.491/12.179 ms
....
Setting up the tunnels is the easy part. Configuring a secure link is a more in-depth process. The following configuration uses pre-shared keys (PSK). Other than the IP addresses, the [.filename]#/usr/local/etc/racoon/racoon.conf# on both gateways will be identical and look similar to:
[.programlisting]
....
path    pre_shared_key  "/usr/local/etc/racoon/psk.txt"; # location of pre-shared key file
log     debug;  # log verbosity setting: set to 'notify' when testing and debugging is complete

padding # options are not to be changed
{
        maximum_length  20;
        randomize       off;
        strict_check    off;
        exclusive_tail  off;
}

timer   # timing options. change as needed
{
        counter         5;
        interval        20 sec;
        persend         1;
        # natt_keepalive 15 sec;
        phase1          30 sec;
        phase2          15 sec;
}

listen  # address [port] that racoon will listen on
{
        isakmp          172.16.5.4 [500];
        isakmp_natt     172.16.5.4 [4500];
}

remote  192.168.1.12 [500]
{
        exchange_mode   main,aggressive;
        doi             ipsec_doi;
        situation       identity_only;
        my_identifier   address 172.16.5.4;
        peers_identifier        address 192.168.1.12;
        lifetime        time 8 hour;
        passive         off;
        proposal_check  obey;
        # nat_traversal off;
        generate_policy off;

        proposal {
                encryption_algorithm    blowfish;
                hash_algorithm          md5;
                authentication_method   pre_shared_key;
                lifetime time           30 sec;
                dh_group                1;
        }
}

sainfo (address 10.246.38.0/24 any address 10.0.0.0/24 any) # address $network/$netmask $type address $network/$netmask $type ($type being any or esp)
{       # $network must be the two internal networks you are joining.
        pfs_group       1;
        lifetime        time 36000 sec;
        encryption_algorithm    blowfish,3des;
        authentication_algorithm        hmac_md5,hmac_sha1;
        compression_algorithm   deflate;
}
....
For descriptions of each available option, refer to the manual page for [.filename]#racoon.conf#.
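The configuration above references a pre-shared key file. On the corporate gateway, a minimal [.filename]#/usr/local/etc/racoon/psk.txt# pairs the remote gateway's address with the shared secret on a single line; the secret shown here is only a placeholder, and the file should be readable by `root` only:

[source,shell]
....
# echo '192.168.1.12    secret' > /usr/local/etc/racoon/psk.txt
# chmod 0600 /usr/local/etc/racoon/psk.txt
....

On the home gateway, the entry would instead list `172.16.5.4` together with the same secret.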
The Security Policy Database (SPD) needs to be configured so that FreeBSD and racoon are able to encrypt and decrypt network traffic between the hosts.
This can be achieved with a man:setkey[8] configuration file, similar to the following, on the corporate gateway. This file will be used during system initialization and should be saved as [.filename]#/usr/local/etc/racoon/setkey.conf#.
[.programlisting]
....
flush;
spdflush;
# To the home network
spdadd 10.246.38.0/24 10.0.0.0/24 any -P out ipsec esp/tunnel/172.16.5.4-192.168.1.12/use;
spdadd 10.0.0.0/24 10.246.38.0/24 any -P in ipsec esp/tunnel/192.168.1.12-172.16.5.4/use;
....
Once in place, racoon may be started on both gateways using the following command:
[source,shell]
....
# /usr/local/sbin/racoon -F -f /usr/local/etc/racoon/racoon.conf -l /var/log/racoon.log
....
The output should be similar to the following:
[source,shell]
....
corp-net# /usr/local/sbin/racoon -F -f /usr/local/etc/racoon/racoon.conf
Foreground mode.
2006-01-30 01:35:47: INFO: begin Identity Protection mode.
2006-01-30 01:35:48: INFO: received Vendor ID: KAME/racoon
2006-01-30 01:35:55: INFO: received Vendor ID: KAME/racoon
2006-01-30 01:36:04: INFO: ISAKMP-SA established 172.16.5.4[500]-192.168.1.12[500] spi:623b9b3bd2492452:7deab82d54ff704a
2006-01-30 01:36:05: INFO: initiate new phase 2 negotiation: 172.16.5.4[0]192.168.1.12[0]
2006-01-30 01:36:09: INFO: IPsec-SA established: ESP/Tunnel 192.168.1.12[0]->172.16.5.4[0] spi=28496098(0x1b2d0e2)
2006-01-30 01:36:09: INFO: IPsec-SA established: ESP/Tunnel 172.16.5.4[0]->192.168.1.12[0] spi=47784998(0x2d92426)
2006-01-30 01:36:13: INFO: respond new phase 2 negotiation: 172.16.5.4[0]192.168.1.12[0]
2006-01-30 01:36:18: INFO: IPsec-SA established: ESP/Tunnel 192.168.1.12[0]->172.16.5.4[0] spi=124397467(0x76a279b)
2006-01-30 01:36:18: INFO: IPsec-SA established: ESP/Tunnel 172.16.5.4[0]->192.168.1.12[0] spi=175852902(0xa7b4d66)
....
To ensure the tunnel is working properly, switch to another console and use man:tcpdump[1] to view network traffic using the following command. Replace `em0` with the network interface card as required:
[source,shell]
....
# tcpdump -i em0 host 172.16.5.4 and dst 192.168.1.12
....
Data similar to the following should appear on the console. If not, there is an issue and debugging the returned data will be required.
[.programlisting]
....
01:47:32.021683 IP corporatenetwork.com > 192.168.1.12.privatenetwork.com: ESP(spi=0x02acbf9f,seq=0xa)
01:47:33.022442 IP corporatenetwork.com > 192.168.1.12.privatenetwork.com: ESP(spi=0x02acbf9f,seq=0xb)
01:47:34.024218 IP corporatenetwork.com > 192.168.1.12.privatenetwork.com: ESP(spi=0x02acbf9f,seq=0xc)
....
At this point, both networks should be available and seem to be part of the same network. Most likely both networks are protected by a firewall. To allow traffic to flow between them, rules need to be added to pass packets. For the man:ipfw[8] firewall, add the following lines to the firewall configuration file:
[.programlisting]
....
ipfw add 00201 allow log esp from any to any
ipfw add 00202 allow log ah from any to any
ipfw add 00203 allow log ipencap from any to any
ipfw add 00204 allow log udp from any 500 to any
....
[NOTE]
====
The rule numbers may need to be altered depending on the current host configuration.
====
For users of man:pf[4] or man:ipf[8], the following rules should do the trick:
[.programlisting]
....
pass in quick proto esp from any to any
pass in quick proto ah from any to any
pass in quick proto ipencap from any to any
pass in quick proto udp from any port = 500 to any port = 500
pass in quick on gif0 from any to any
pass out quick proto esp from any to any
pass out quick proto ah from any to any
pass out quick proto ipencap from any to any
pass out quick proto udp from any port = 500 to any port = 500
pass out quick on gif0 from any to any
....
Finally, to allow the machine to start support for the VPN during system initialization, add the following lines to [.filename]#/etc/rc.conf#:
[.programlisting]
....
ipsec_enable="YES"
ipsec_program="/usr/local/sbin/setkey"
ipsec_file="/usr/local/etc/racoon/setkey.conf" # allows setting up spd policies on boot
racoon_enable="yes"
....
[[openssh]]
== OpenSSH
OpenSSH is a set of network connectivity tools used to provide secure access to remote machines. Additionally, TCP/IP connections can be tunneled or forwarded securely through SSH connections. OpenSSH encrypts all traffic to effectively eliminate eavesdropping, connection hijacking, and other network-level attacks.
OpenSSH is maintained by the OpenBSD project and is installed by default in FreeBSD. It is compatible with both SSH version 1 and 2 protocols.
When data is sent over the network in an unencrypted form, network sniffers anywhere in between the client and server can steal user/password information or data transferred during the session. OpenSSH offers a variety of authentication and encryption methods to prevent this from happening. More information about OpenSSH is available from http://www.openssh.com/[http://www.openssh.com/].
This section provides an overview of the built-in client utilities to securely access other systems and securely transfer files from a FreeBSD system. It then describes how to configure a SSH server on a FreeBSD system. More information is available in the man pages mentioned in this chapter.
=== Using the SSH Client Utilities
To log into a SSH server, use `ssh` and specify a username that exists on that server and the IP address or hostname of the server. If this is the first time a connection has been made to the specified server, the user will be prompted to first verify the server's fingerprint:
[source,shell]
....
# ssh user@example.com
The authenticity of host 'example.com (10.0.0.1)' can't be established.
ECDSA key fingerprint is 25:cc:73:b5:b3:96:75:3d:56:19:49:d2:5c:1f:91:3b.
Are you sure you want to continue connecting (yes/no)? yes
Permanently added 'example.com' (ECDSA) to the list of known hosts.
Password for user@example.com: user_password
....
SSH utilizes a key fingerprint system to verify the authenticity of the server when the client connects. When the user accepts the key's fingerprint by typing `yes` when connecting for the first time, a copy of the key is saved to [.filename]#.ssh/known_hosts# in the user's home directory. Future attempts to login are verified against the saved key and `ssh` will display an alert if the server's key does not match the saved key. If this occurs, the user should first verify why the key has changed before continuing with the connection.
By default, recent versions of OpenSSH only accept SSHv2 connections. The client will use version 2 if possible and will fall back to version 1 if the server does not support version 2. To force `ssh` to only use the specified protocol, include `-1` or `-2`. Additional options are described in man:ssh[1].
Use man:scp[1] to securely copy a file to or from a remote machine. This example copies [.filename]#COPYRIGHT# on the remote system to a file of the same name in the current directory of the local system:
[source,shell]
....
# scp user@example.com:/COPYRIGHT COPYRIGHT
Password for user@example.com: *******
COPYRIGHT 100% |*****************************| 4735
00:00
#
....
Since the fingerprint was already verified for this host, the server's key is automatically checked before prompting for the user's password.
The arguments passed to `scp` are similar to `cp`. The file or files to copy make up the first argument and the destination to copy to is the second. Since the file is fetched over the network, one or more of the file arguments takes the form `user@host:<path_to_remote_file>`. Be aware when copying directories recursively that `scp` uses `-r`, whereas `cp` uses `-R`.
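For example, a recursive copy of a hypothetical remote directory would look like:

[source,shell]
....
% scp -r user@example.com:/home/user/documents ./documents
....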
To open an interactive session for copying files, use `sftp`. Refer to man:sftp[1] for a list of available commands while in an `sftp` session.
[[security-ssh-keygen]]
==== Key-based Authentication
Instead of using passwords, a client can be configured to connect to the remote machine using keys. To generate RSA authentication keys, use `ssh-keygen`. To generate a public and private key pair, specify the type of key and follow the prompts. It is recommended to protect the keys with a memorable, but hard to guess passphrase.
[source,shell]
....
% ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/user/.ssh/id_rsa):
Enter passphrase (empty for no passphrase): <.>
Enter same passphrase again: <.>
Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:54Xm9Uvtv6H4NOo6yjP/YCfODryvUU7yWHzMqeXwhq8 user@host.example.com
The key's randomart image is:
+---[RSA 2048]----+
| |
| |
| |
| . o.. |
| .S*+*o |
| . O=Oo . . |
| = Oo= oo..|
| .oB.* +.oo.|
| =OE**.o..=|
+----[SHA256]-----+
....
<.> Type a passphrase here. It can contain spaces and symbols.
<.> Retype the passphrase to verify it.
The private key is stored in [.filename]#~/.ssh/id_rsa# and the public key is stored in [.filename]#~/.ssh/id_rsa.pub#. The _public_ key must be copied to [.filename]#~/.ssh/authorized_keys# on the remote machine for key-based authentication to work.
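One way to do this, assuming password authentication to the remote account is still possible, is to append the public key over an SSH connection; the account and host names follow the earlier examples:

[source,shell]
....
% cat ~/.ssh/id_rsa.pub | ssh user@example.com 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
....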
[WARNING]
====
Many users believe that keys are secure by design and will use a key without a passphrase. This is _dangerous_ behavior. An administrator can verify that a key pair is protected by a passphrase by viewing the private key manually. If the private key file contains the word `ENCRYPTED`, the key owner is using a passphrase. In addition, to better secure end users, `from` may be placed in the public key file. For example, adding `from="192.168.10.5"` in front of the `ssh-rsa` prefix will only allow that specific user to log in from that IP address.
====
The options and files vary with different versions of OpenSSH. To avoid problems, consult man:ssh-keygen[1].
If a passphrase is used, the user is prompted for the passphrase each time a connection is made to the server. To load SSH keys into memory and remove the need to type the passphrase each time, use man:ssh-agent[1] and man:ssh-add[1].
Authentication is handled by `ssh-agent`, using the private keys that are loaded into it. `ssh-agent` can be used to launch another application like a shell or a window manager.
To use `ssh-agent` in a shell, start it with a shell as an argument. Add the identity by running `ssh-add` and entering the passphrase for the private key. The user will then be able to `ssh` to any host that has the corresponding public key installed. For example:
[source,shell]
....
% ssh-agent csh
% ssh-add
Enter passphrase for key '/usr/home/user/.ssh/id_rsa': <.>
Identity added: /usr/home/user/.ssh/id_rsa (/usr/home/user/.ssh/id_rsa)
%
....
<.> Enter the passphrase for the key.
To use `ssh-agent` in Xorg, add an entry for it in [.filename]#~/.xinitrc#. This provides the `ssh-agent` services to all programs launched in Xorg. An example [.filename]#~/.xinitrc# might look like this:
[.programlisting]
....
exec ssh-agent startxfce4
....
This launches `ssh-agent`, which in turn launches XFCE, every time Xorg starts. Once Xorg has been restarted so that the changes can take effect, run `ssh-add` to load all of the SSH keys.
[[security-ssh-tunneling]]
==== SSH Tunneling
OpenSSH has the ability to create a tunnel to encapsulate another protocol in an encrypted session.
The following command tells `ssh` to create a tunnel for telnet:
[source,shell]
....
% ssh -2 -N -f -L 5023:localhost:23 user@foo.example.com
%
....
This example uses the following options:
`-2`::
Forces `ssh` to use version 2 to connect to the server.
`-N`::
Indicates no command, or tunnel only. If omitted, `ssh` initiates a normal session.
`-f`::
Forces `ssh` to run in the background.
`-L`::
Indicates a local tunnel in _localport:remotehost:remoteport_ format.
`user@foo.example.com`::
The login name to use on the specified remote SSH server.
An SSH tunnel works by creating a listen socket on `localhost` on the specified `localport`. It then forwards any connections received on `localport` via the SSH connection to the specified `remotehost:remoteport`. In the example, port `5023` on the client is forwarded to port `23` on the remote machine. Since port 23 is used by telnet, this creates an encrypted telnet session through an SSH tunnel.
This method can be used to wrap any number of insecure TCP protocols such as SMTP, POP3, and FTP, as seen in the following examples.
.Create a Secure Tunnel for SMTP
[example]
====
[source,shell]
....
% ssh -2 -N -f -L 5025:localhost:25 user@mailserver.example.com
user@mailserver.example.com's password: *****
% telnet localhost 5025
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
220 mailserver.example.com ESMTP
....
This can be used in conjunction with `ssh-keygen` and additional user accounts to create a more seamless SSH tunneling environment. Keys can be used in place of typing a password, and the tunnels can be run as a separate user.
====
.Secure Access of a POP3 Server
[example]
====
In this example, there is an SSH server that accepts connections from the outside. On the same network resides a mail server running a POP3 server. To check email in a secure manner, create an SSH connection to the SSH server and tunnel through to the mail server:
[source,shell]
....
% ssh -2 -N -f -L 2110:mail.example.com:110 user@ssh-server.example.com
user@ssh-server.example.com's password: ******
....
Once the tunnel is up and running, point the email client to send POP3 requests to `localhost` on port 2110. This connection will be forwarded securely across the tunnel to `mail.example.com`.
====
.Bypassing a Firewall
[example]
====
Some firewalls filter both incoming and outgoing connections. For example, a firewall might limit access from remote machines to ports 22 and 80 to only allow SSH and web surfing. This prevents access to any other service which uses a port other than 22 or 80.
The solution is to create an SSH connection to a machine outside of the network's firewall and use it to tunnel to the desired service:
[source,shell]
....
% ssh -2 -N -f -L 8888:music.example.com:8000 user@unfirewalled-system.example.org
user@unfirewalled-system.example.org's password: *******
....
In this example, a streaming Ogg Vorbis client can now be pointed to `localhost` port 8888, which will be forwarded over to `music.example.com` on port 8000, successfully bypassing the firewall.
====
=== Enabling the SSH Server
In addition to providing built-in SSH client utilities, a FreeBSD system can be configured as an SSH server, accepting connections from other SSH clients.
To see if sshd is operating, use the man:service[8] command:
[source,shell]
....
# service sshd status
....
If the service is not running, add the following line to [.filename]#/etc/rc.conf#.
[.programlisting]
....
sshd_enable="YES"
....
This will start sshd, the daemon program for OpenSSH, the next time the system boots. To start it now:
[source,shell]
....
# service sshd start
....
The first time sshd starts on a FreeBSD system, the system's host keys will be automatically created and the fingerprint will be displayed on the console. Provide users with the fingerprint so that they can verify it the first time they connect to the server.
Refer to man:sshd[8] for the list of available options when starting sshd and a more complete discussion about authentication, the login process, and the various configuration files.
At this point, the sshd should be available to all users with a username and password on the system.
=== SSH Server Security
While sshd is the most widely used remote administration facility for FreeBSD, brute-force and drive-by attacks are common to any system exposed to public networks. Several additional parameters are available to prevent the success of these attacks and are described in this section.
It is a good idea to limit which users can log into the SSH server and from where using the `AllowUsers` keyword in the OpenSSH server configuration file. For example, to only allow `root` to log in from `192.168.1.32`, add this line to [.filename]#/etc/ssh/sshd_config#:
[.programlisting]
....
AllowUsers root@192.168.1.32
....
To allow `admin` to log in from anywhere, list that user without specifying an IP address:
[.programlisting]
....
AllowUsers admin
....
Multiple users should be listed on the same line, like so:
[.programlisting]
....
AllowUsers root@192.168.1.32 admin
....
After making changes to [.filename]#/etc/ssh/sshd_config#, tell sshd to reload its configuration file by running:
[source,shell]
....
# service sshd reload
....
[NOTE]
====
When this keyword is used, it is important to list each user that needs to log into this machine. Any user that is not specified in that line will be locked out. Also, the keywords used in the OpenSSH server configuration file are case-sensitive. If the keyword is not spelled correctly, including its case, it will be ignored. Always test changes to this file to make sure that the edits are working as expected. Refer to man:sshd_config[5] to verify the spelling and use of the available keywords.
====
In addition, users may be required to use two-factor authentication via a public and private key. When required, the user may generate a key pair using man:ssh-keygen[1] and send the administrator the public key. This key will be placed in [.filename]#authorized_keys# as described above in the client section. To force users to use keys only, the following option may be configured:
[.programlisting]
....
AuthenticationMethods publickey
....
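To go further and refuse password logins entirely, the following keywords can also be set; verify the exact keyword names against man:sshd_config[5] for the installed OpenSSH version:

[.programlisting]
....
PasswordAuthentication no
ChallengeResponseAuthentication no
....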
[TIP]
====
Do not confuse [.filename]#/etc/ssh/sshd_config# with [.filename]#/etc/ssh/ssh_config# (note the extra `d` in the first filename). The first file configures the server and the second file configures the client. Refer to man:ssh_config[5] for a listing of the available client settings.
====
[[fs-acl]]
== Access Control Lists
Access Control Lists (ACLs) extend the standard UNIX(R) permission model in a POSIX(R).1e compatible way. This permits an administrator to take advantage of a more fine-grained permissions model.
The FreeBSD [.filename]#GENERIC# kernel provides ACL support for UFS file systems. Users who prefer to compile a custom kernel must include the following option in their custom kernel configuration file:
[.programlisting]
....
options UFS_ACL
....
If this option is not compiled in, a warning message will be displayed when attempting to mount a file system with ACL support. ACLs rely on extended attributes which are natively supported in UFS2.
This chapter describes how to enable ACL support and provides some usage examples.
=== Enabling ACL Support
ACLs are enabled by the mount-time administrative flag, `acls`, which may be added to [.filename]#/etc/fstab#. The mount-time flag can also be automatically set in a persistent manner using man:tunefs[8] to modify a superblock ACLs flag in the file system header. In general, it is preferred to use the superblock flag for several reasons:
* The superblock flag cannot be changed by a remount using `mount -u` as it requires a complete `umount` and fresh `mount`. This means that ACLs cannot be enabled on the root file system after boot. It also means that ACL support on a file system cannot be changed while the system is in use.
* Setting the superblock flag causes the file system to always be mounted with ACLs enabled, even if there is not an [.filename]#fstab# entry or if the devices re-order. This prevents accidental mounting of the file system without ACL support.
[NOTE]
====
It is desirable to discourage accidental mounting without ACLs enabled because nasty things can happen if ACLs are enabled, then disabled, then re-enabled without flushing the extended attributes. In general, once ACLs are enabled on a file system, they should not be disabled, as the resulting file protections may not be compatible with those intended by the users of the system, and re-enabling ACLs may re-attach the previous ACLs to files that have since had their permissions changed, resulting in unpredictable behavior.
====
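As a sketch, the superblock flag can be set with man:tunefs[8] while the file system is unmounted or mounted read-only; the device name here is only an example:

[source,shell]
....
# tunefs -a enable /dev/ada0p2
....

Alternatively, add `acls` to the options field of the file system's entry in [.filename]#/etc/fstab#.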
File systems with ACLs enabled will show a plus (`+`) sign in their permission settings:
[.programlisting]
....
drwx------ 2 robert robert 512 Dec 27 11:54 private
drwxrwx---+ 2 robert robert 512 Dec 23 10:57 directory1
drwxrwx---+ 2 robert robert 512 Dec 22 10:20 directory2
drwxrwx---+ 2 robert robert 512 Dec 27 11:57 directory3
drwxr-xr-x 2 robert robert 512 Nov 10 11:54 public_html
....
In this example, [.filename]#directory1#, [.filename]#directory2#, and [.filename]#directory3# are all taking advantage of ACLs, whereas [.filename]#private# and [.filename]#public_html# are not.
=== Using ACLs
File system ACLs can be viewed using `getfacl`. For instance, to view the ACL settings on [.filename]#test#:
[source,shell]
....
% getfacl test
#file:test
#owner:1001
#group:1001
user::rw-
group::r--
other::r--
....
To change the ACL settings on this file, use `setfacl`. To remove all of the currently defined ACLs from a file or file system, include `-k`. However, the preferred method is to use `-b` as it leaves the basic fields required for ACLs to work.
[source,shell]
....
% setfacl -k test
....
To modify the default ACL entries, use `-m`:
[source,shell]
....
% setfacl -m u:trhodes:rwx,group:web:r--,o::--- test
....
In this example, there were no pre-defined entries, as they were removed by the previous command. This command restores the default options and assigns the options listed. If a user or group is added which does not exist on the system, an `Invalid argument` error will be displayed.
Refer to man:getfacl[1] and man:setfacl[1] for more information about the options available for these commands.
[[security-pkg]]
== Monitoring Third Party Security Issues
In recent years, the security world has made many improvements to how vulnerability assessment is handled. The threat of system intrusion increases as third party utilities are installed and configured for virtually any operating system available today.
Vulnerability assessment is a key factor in security. While FreeBSD releases advisories for the base system, doing so for every third party utility is beyond the FreeBSD Project's capability. There is a way to mitigate third party vulnerabilities and warn administrators of known security issues. A FreeBSD add-on utility known as pkg includes options explicitly for this purpose.
pkg polls a database for security issues. The database is updated and maintained by the FreeBSD Security Team and ports developers.
Please refer to crossref:ports[pkgng-intro,instructions] for installing pkg.
Installation provides man:periodic[8] configuration files for maintaining the pkg audit database, and provides a programmatic method of keeping it updated. This functionality is enabled if `daily_status_security_pkgaudit_enable` is set to `YES` in man:periodic.conf[5]. Ensure that daily security run emails, which are sent to ``root``'s email account, are being read.
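For example, the setting can be added with man:sysrc[8], or edited into the file by hand:

[source,shell]
....
# sysrc -f /etc/periodic.conf daily_status_security_pkgaudit_enable="YES"
....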
After installation, and to audit third party utilities as part of the Ports Collection at any time, an administrator may choose to update the database and view known vulnerabilities of installed packages by invoking:
[source,shell]
....
# pkg audit -F
....
pkg displays messages about any published vulnerabilities in installed packages:
[.programlisting]
....
Affected package: cups-base-1.1.22.0_1
Type of problem: cups-base -- HPGL buffer overflow vulnerability.
Reference: <https://www.FreeBSD.org/ports/portaudit/40a3bca2-6809-11d9-a9e7-0001020eed82.html>
1 problem(s) in your installed packages found.
You are advised to update or deinstall the affected package(s) immediately.
....
By pointing a web browser to the displayed URL, an administrator may obtain more information about the vulnerability. This includes the affected versions, by FreeBSD port version, along with other web sites which may contain security advisories.
pkg is a powerful utility and is extremely useful when coupled with package:ports-mgmt/portmaster[].
[[security-advisories]]
== FreeBSD Security Advisories
Like many producers of quality operating systems, the FreeBSD Project has a security team which is responsible for determining the End-of-Life (EoL) date for each FreeBSD release and for providing security updates for supported releases which have not yet reached their EoL. More information about the FreeBSD security team and the supported releases is available on the link:https://www.FreeBSD.org/security[FreeBSD security page].
One task of the security team is to respond to reported security vulnerabilities in the FreeBSD operating system. Once a vulnerability is confirmed, the security team verifies the steps necessary to fix the vulnerability and updates the source code with the fix. It then publishes the details as a "Security Advisory". Security advisories are published on the link:https://www.FreeBSD.org/security/advisories/[FreeBSD website] and mailed to the {freebsd-security-notifications}, {freebsd-security}, and {freebsd-announce} mailing lists.
This section describes the format of a FreeBSD security advisory.
=== Format of a Security Advisory
Here is an example of a FreeBSD security advisory:
[.programlisting]
....
=============================================================================
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
=============================================================================
FreeBSD-SA-14:04.bind Security Advisory
The FreeBSD Project
Topic: BIND remote denial of service vulnerability
Category: contrib
Module: bind
Announced: 2014-01-14
Credits: ISC
Affects: FreeBSD 8.x and FreeBSD 9.x
Corrected: 2014-01-14 19:38:37 UTC (stable/9, 9.2-STABLE)
2014-01-14 19:42:28 UTC (releng/9.2, 9.2-RELEASE-p3)
2014-01-14 19:42:28 UTC (releng/9.1, 9.1-RELEASE-p10)
2014-01-14 19:38:37 UTC (stable/8, 8.4-STABLE)
2014-01-14 19:42:28 UTC (releng/8.4, 8.4-RELEASE-p7)
2014-01-14 19:42:28 UTC (releng/8.3, 8.3-RELEASE-p14)
CVE Name: CVE-2014-0591
For general information regarding FreeBSD Security Advisories,
including descriptions of the fields above, security branches, and the
following sections, please visit <URL:http://security.FreeBSD.org/>.
I. Background
BIND 9 is an implementation of the Domain Name System (DNS) protocols.
The named(8) daemon is an Internet Domain Name Server.
II. Problem Description
Because of a defect in handling queries for NSEC3-signed zones, BIND can
crash with an "INSIST" failure in name.c when processing queries possessing
certain properties. This issue only affects authoritative nameservers with
at least one NSEC3-signed zone. Recursive-only servers are not at risk.
III. Impact
An attacker who can send a specially crafted query could cause named(8)
to crash, resulting in a denial of service.
IV. Workaround
No workaround is available, but systems not running authoritative DNS service
with at least one NSEC3-signed zone using named(8) are not vulnerable.
V. Solution
Perform one of the following:
1) Upgrade your vulnerable system to a supported FreeBSD stable or
release / security branch (releng) dated after the correction date.
2) To update your vulnerable system via a source code patch:
The following patches have been verified to apply to the applicable
FreeBSD release branches.
a) Download the relevant patch from the location below, and verify the
detached PGP signature using your PGP utility.
[FreeBSD 8.3, 8.4, 9.1, 9.2-RELEASE and 8.4-STABLE]
# fetch http://security.FreeBSD.org/patches/SA-14:04/bind-release.patch
# fetch http://security.FreeBSD.org/patches/SA-14:04/bind-release.patch.asc
# gpg --verify bind-release.patch.asc
[FreeBSD 9.2-STABLE]
# fetch http://security.FreeBSD.org/patches/SA-14:04/bind-stable-9.patch
# fetch http://security.FreeBSD.org/patches/SA-14:04/bind-stable-9.patch.asc
# gpg --verify bind-stable-9.patch.asc
b) Execute the following commands as root:
# cd /usr/src
# patch < /path/to/patch
Recompile the operating system using buildworld and installworld as
described in <URL:https://www.FreeBSD.org/handbook/makeworld.html>.
Restart the applicable daemons, or reboot the system.
3) To update your vulnerable system via a binary patch:
Systems running a RELEASE version of FreeBSD on the i386 or amd64
platforms can be updated via the freebsd-update(8) utility:
# freebsd-update fetch
# freebsd-update install
VI. Correction details
The following list contains the correction revision numbers for each
affected branch.
Branch/path Revision
- -------------------------------------------------------------------------
stable/8/ r260646
releng/8.3/ r260647
releng/8.4/ r260647
stable/9/ r260646
releng/9.1/ r260647
releng/9.2/ r260647
- -------------------------------------------------------------------------
To see which files were modified by a particular revision, run the
following command, replacing NNNNNN with the revision number, on a
machine with Subversion installed:
# svn diff -cNNNNNN --summarize svn://svn.freebsd.org/base
Or visit the following URL, replacing NNNNNN with the revision number:
<URL:https://svnweb.freebsd.org/base?view=revision&revision=NNNNNN>
VII. References
<URL:https://kb.isc.org/article/AA-01078>
<URL:http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0591>
The latest revision of this advisory is available at
<URL:http://security.FreeBSD.org/advisories/FreeBSD-SA-14:04.bind.asc>
-----BEGIN PGP SIGNATURE-----
iQIcBAEBCgAGBQJS1ZTYAAoJEO1n7NZdz2rnOvQP/2/68/s9Cu35PmqNtSZVVxVG
ZSQP5EGWx/lramNf9566iKxOrLRMq/h3XWcC4goVd+gZFrvITJSVOWSa7ntDQ7TO
XcinfRZ/iyiJbs/Rg2wLHc/t5oVSyeouyccqODYFbOwOlk35JjOTMUG1YcX+Zasg
ax8RV+7Zt1QSBkMlOz/myBLXUjlTZ3Xg2FXVsfFQW5/g2CjuHpRSFx1bVNX6ysoG
9DT58EQcYxIS8WfkHRbbXKh9I1nSfZ7/Hky/kTafRdRMrjAgbqFgHkYTYsBZeav5
fYWKGQRJulYfeZQ90yMTvlpF42DjCC3uJYamJnwDIu8OhS1WRBI8fQfr9DRzmRua
OK3BK9hUiScDZOJB6OqeVzUTfe7MAA4/UwrDtTYQ+PqAenv1PK8DZqwXyxA9ThHb
zKO3OwuKOVHJnKvpOcr+eNwo7jbnHlis0oBksj/mrq2P9m2ueF9gzCiq5Ri5Syag
Wssb1HUoMGwqU0roS8+pRpNC8YgsWpsttvUWSZ8u6Vj/FLeHpiV3mYXPVMaKRhVm
067BA2uj4Th1JKtGleox+Em0R7OFbCc/9aWC67wiqI6KRyit9pYiF3npph+7D5Eq
7zPsUdDd+qc+UTiLp3liCRp5w6484wWdhZO6wRtmUgxGjNkxFoNnX8CitzF8AaqO
UWWemqWuz3lAZuORQ9KX
=OQzQ
-----END PGP SIGNATURE-----
....
Every security advisory uses the following format:
* Each security advisory is signed by the PGP key of the Security Officer. The public key for the Security Officer can be verified at crossref:pgpkeys[pgpkeys,OpenPGP Keys].
* The name of the security advisory always begins with `FreeBSD-SA-` (for FreeBSD Security Advisory), followed by the year in two digit format (`14:`), followed by the advisory number for that year (`04.`), followed by the name of the affected application or subsystem (`bind`). The advisory shown here is the fourth advisory for 2014 and it affects BIND.
* The `Topic` field summarizes the vulnerability.
* The `Category` refers to the affected part of the system which may be one of `core`, `contrib`, or `ports`. The `core` category means that the vulnerability affects a core component of the FreeBSD operating system. The `contrib` category means that the vulnerability affects software included with FreeBSD, such as BIND. The `ports` category indicates that the vulnerability affects software available through the Ports Collection.
* The `Module` field refers to the component location. In this example, the `bind` module is affected; therefore, this vulnerability affects an application installed with the operating system.
* The `Announced` field reflects the date the security advisory was published. This means that the security team has verified that the problem exists and that a patch has been committed to the FreeBSD source code repository.
* The `Credits` field gives credit to the individual or organization who noticed the vulnerability and reported it.
* The `Affects` field explains which releases of FreeBSD are affected by this vulnerability.
* The `Corrected` field indicates the date, time, time offset, and releases that were corrected. The section in parentheses shows each branch for which the fix has been merged, and the version number of the corresponding release from that branch. The release identifier itself includes the version number and, if appropriate, the patch level. The patch level is the letter `p` followed by a number, indicating the sequence number of the patch, allowing users to track which patches have already been applied to the system.
* The `CVE Name` field lists the advisory number, if one exists, in the public http://cve.mitre.org[cve.mitre.org] security vulnerabilities database.
* The `Background` field provides a description of the affected module.
* The `Problem Description` field explains the vulnerability. This can include information about the flawed code and how the utility could be maliciously used.
* The `Impact` field describes what type of impact the problem could have on a system.
* The `Workaround` field indicates if a workaround is available to system administrators who cannot immediately patch the system.
* The `Solution` field provides the instructions for patching the affected system. This is a step by step tested and verified method for getting a system patched and working securely.
* The `Correction Details` field displays each affected Subversion branch with the revision number that contains the corrected code.
* The `References` field offers sources of additional information regarding the vulnerability.
[[security-accounting]]
== Process Accounting
Process accounting is a security method in which an administrator may keep track of system resources used and their allocation among users, provide for system monitoring, and minimally track a user's commands.
Process accounting has both positive and negative points. One of the positives is that an intrusion may be narrowed down to the point of entry. A negative is the amount of logs generated by process accounting, and the disk space they may require. This section walks an administrator through the basics of process accounting.
[NOTE]
====
If more fine-grained accounting is needed, refer to crossref:audit[audit,Security Event Auditing].
====
=== Enabling and Utilizing Process Accounting
Before using process accounting, it must be enabled using the following commands:
[source,shell]
....
# sysrc accounting_enable=yes
# service accounting start
....
The accounting information is stored in files located in [.filename]#/var/account#, which is automatically created, if necessary, the first time the accounting service starts. These files contain sensitive information, including all the commands issued by all users. Write access to the files is limited to `root`, and read access is limited to `root` and members of the `wheel` group. To also prevent members of `wheel` from reading the files, change the mode of the [.filename]#/var/account# directory to allow access only by `root`.
Once enabled, accounting will begin to track information such as CPU statistics and executed commands. All accounting logs are in a non-human readable format which can be viewed using `sa`. If issued without any options, `sa` prints information relating to the number of per-user calls, the total elapsed time in minutes, total CPU and user time in minutes, and the average number of I/O operations. Refer to man:sa[8] for the list of available options which control the output.
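For example, to print this default per-user summary:
[source,shell]
....
# sa
....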
To display the commands issued by users, use `lastcomm`. For example, this command prints out all usage of `ls` by `trhodes` on the `ttyp1` terminal:
[source,shell]
....
# lastcomm ls trhodes ttyp1
....
Many other useful options exist and are explained in man:lastcomm[1], man:acct[5], and man:sa[8].
[[security-resourcelimits]]
== Resource Limits
FreeBSD provides several methods for an administrator to limit the amount of system resources an individual may use. Disk quotas limit the amount of disk space available to users. Quotas are discussed in crossref:disks[quotas,"Disk Quotas"].
Limits to other resources, such as CPU and memory, can be set using either a flat file or a command to configure a resource limits database. The traditional method defines login classes by editing [.filename]#/etc/login.conf#. While this method is still supported, any changes require a multi-step process of editing this file, rebuilding the resource database, making necessary changes to [.filename]#/etc/master.passwd#, and rebuilding the password database. This can become time consuming, depending upon the number of users to configure.
`rctl` can be used to provide a more fine-grained method for controlling resource limits. This command supports more than user limits as it can also be used to set resource constraints on processes and jails.
This section demonstrates both methods for controlling resources, beginning with the traditional method.
[[users-limiting]]
=== Configuring Login Classes
In the traditional method, login classes and the resource limits to apply to a login class are defined in [.filename]#/etc/login.conf#. Each user account can be assigned to a login class, where `default` is the default login class. Each login class has a set of login capabilities associated with it. A login capability is a `_name_=_value_` pair, where _name_ is a well-known identifier and _value_ is an arbitrary string which is processed accordingly depending on the _name_.
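As an illustration only (the class name and capability values here are hypothetical examples, not taken from the default file), a login class entry in [.filename]#/etc/login.conf# might look like:
[.programlisting]
....
staff:\
	:cputime=unlimited:\
	:maxproc=128:\
	:openfiles=256:\
	:tc=default:
....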
[NOTE]
====
Whenever [.filename]#/etc/login.conf# is edited, [.filename]#/etc/login.conf.db# must be updated by executing the following command:
[source,shell]
....
# cap_mkdb /etc/login.conf
....
====
Resource limits differ from the default login capabilities in two ways. First, for every limit, there is a _soft_ and _hard_ limit. A soft limit may be adjusted by the user or application, but may not be set higher than the hard limit. The hard limit may be lowered by the user, but can only be raised by the superuser. Second, most resource limits apply per process to a specific user.
<<resource-limits>> lists the most commonly used resource limits. All of the available resource limits and capabilities are described in detail in man:login.conf[5].
[[resource-limits]]
.Login Class Resource Limits
[cols="20%,80%", frame="none", options="header"]
|===
| Resource Limit
| Description
|coredumpsize
|The limit on the size of a core file generated by a program is subordinate to other limits on disk usage, such as `filesize` or disk quotas. This limit is often used as a less severe method of controlling disk space consumption. Since users do not generate core files themselves and often do not delete them, this setting may save them from running out of disk space should a large program crash.
|cputime
|The maximum amount of CPU time a user's process may consume. Offending processes will be killed by the kernel. This is a limit on CPU _time_ consumed, not the percentage of the CPU as displayed in some of the fields generated by `top` and `ps`.
|filesize
|The maximum size of a file the user may own. Unlike disk quotas (crossref:disks[quotas,"Disk Quotas"]), this limit is enforced on individual files, not the set of all files a user owns.
|maxproc
|The maximum number of foreground and background processes a user can run. This limit may not be larger than the system limit specified by `kern.maxproc`. Setting this limit too small may hinder a user's productivity as some tasks, such as compiling a large program, start lots of processes.
|memorylocked
|The maximum amount of memory a process may request to be locked into main memory using man:mlock[2]. Some system-critical programs, such as man:amd[8], lock into main memory so that if the system begins to swap, they do not contribute to disk thrashing.
|memoryuse
|The maximum amount of memory a process may consume at any given time. It includes both core memory and swap usage. This is not a catch-all limit for restricting memory consumption, but is a good start.
|openfiles
|The maximum number of files a process may have open. In FreeBSD, files are used to represent sockets and IPC channels, so be careful not to set this too low. The system-wide limit for this is defined by `kern.maxfiles`.
|sbsize
|The limit on the amount of network memory a user may consume. This can be generally used to limit network communications.
|stacksize
|The maximum size of a process stack. This alone is not sufficient to limit the amount of memory a program may use, so it should be used in conjunction with other limits.
|===
There are a few other things to remember when setting resource limits:
* Processes started at system startup by [.filename]#/etc/rc# are assigned to the `daemon` login class.
* Although the default [.filename]#/etc/login.conf# is a good source of reasonable values for most limits, they may not be appropriate for every system. Setting a limit too high may open the system up to abuse, while setting it too low may put a strain on productivity.
* Xorg takes a lot of resources and encourages users to run more programs simultaneously.
* Many limits apply to individual processes, not the user as a whole. For example, setting `openfiles` to `50` means that each process the user runs may open up to `50` files. The total amount of files a user may open is the value of `openfiles` multiplied by the value of `maxproc`. This also applies to memory consumption.
For further information on resource limits and login classes and capabilities in general, refer to man:cap.mkdb[1], man:getrlimit[2], and man:login.conf[5].
=== Enabling and Configuring Resource Limits
The `kern.racct.enable` tunable must be set to a non-zero value. Custom kernels require specific configuration:
[.programlisting]
....
options RACCT
options RCTL
....
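If the kernel already includes these options, as recent GENERIC kernels typically do, the tunable can be set at boot time in [.filename]#/boot/loader.conf#. A minimal sketch:
[.programlisting]
....
kern.racct.enable=1
....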
Once the system has rebooted into the new kernel, `rctl` may be used to set rules for the system.
Rule syntax is controlled through the use of a subject, subject-id, resource, and action, as seen in this example rule:
[.programlisting]
....
user:trhodes:maxproc:deny=10/user
....
In this rule, the subject is `user`, the subject-id is `trhodes`, the resource, `maxproc`, is the maximum number of processes, and the action is `deny`, which blocks any new processes from being created. This means that the user, `trhodes`, will be constrained to no greater than `10` processes. Other possible actions include logging to the console, passing a notification to man:devd[8], or sending a SIGTERM signal to the process.
Some care must be taken when adding rules. Since this user is constrained to `10` processes, this example will prevent the user from performing other tasks after logging in and executing a `screen` session. Once a resource limit has been hit, an error will be printed, as in this example:
[source,shell]
....
% man test
/usr/bin/man: Cannot fork: Resource temporarily unavailable
eval: Cannot fork: Resource temporarily unavailable
....
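The other actions mentioned above follow the same rule syntax. As a sketch only, reusing the same hypothetical user, a rule that logs a warning instead of denying new processes might look like:
[.programlisting]
....
user:trhodes:maxproc:log=10/user
....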
As another example, a jail can be prevented from exceeding a memory limit. This rule could be written as:
[source,shell]
....
# rctl -a jail:httpd:memoryuse:deny=2G/jail
....
Rules will persist across reboots if they have been added to [.filename]#/etc/rctl.conf#. The format is a rule, without the preceding command. For example, the previous rule could be added as:
[.programlisting]
....
# Block jail from using more than 2G memory:
jail:httpd:memoryuse:deny=2G/jail
....
To remove a rule, use `rctl` to remove it from the list:
[source,shell]
....
# rctl -r user:trhodes:maxproc:deny=10/user
....
A method for removing all rules is documented in man:rctl[8]. However, if removing all rules for a single user is required, this command may be issued:
[source,shell]
....
# rctl -r user:trhodes
....
Many other resources exist which can be used to exert additional control over various `subjects`. See man:rctl[8] to learn about them.
[[security-sudo]]
== Shared Administration with Sudo
System administrators often need the ability to grant enhanced permissions to users so they may perform privileged tasks. Providing team members with access to a FreeBSD system so they can perform their specific tasks presents unique challenges to every administrator. These team members usually need only a subset of access beyond normal end user levels; however, they almost always tell management they are unable to perform their tasks without superuser access. Thankfully, there is no reason to provide such access to end users because tools exist to manage this exact requirement.
Up to this point, the security chapter has covered permitting access to authorized users and attempting to prevent unauthorized access. Another problem arises once authorized users have access to the system resources. In many cases, some users may need access to application startup scripts, or a team of administrators needs to maintain the system. Traditionally, standard users and groups, file permissions, and even the man:su[1] command managed this access. As applications required more access and more users needed to use system resources, a better solution was required. The most widely used application for this purpose is currently Sudo.
Sudo allows administrators to configure more rigid access to system commands and provide for some advanced logging features. As a tool, it is available from the Ports Collection as package:security/sudo[] or by use of the man:pkg[8] utility. To use the man:pkg[8] tool:
[source,shell]
....
# pkg install sudo
....
After the installation is complete, use the installed `visudo` to open the configuration file with a text editor. Using `visudo` is highly recommended as it comes with a built-in syntax checker to verify there are no errors before the file is saved.
The configuration file is made up of several small sections which allow for extensive configuration. In the following example, the web application maintainer, `user1`, needs to start, stop, and restart the web application known as _webservice_. To grant this user permission to perform these tasks, add this line to the end of [.filename]#/usr/local/etc/sudoers#:
[.programlisting]
....
user1 ALL=(ALL) /usr/sbin/service webservice *
....
The user may now start _webservice_ using this command:
[source,shell]
....
% sudo /usr/sbin/service webservice start
....
This configuration allows a single user access to the webservice service; however, in most organizations, there is an entire web team in charge of managing the service. A single line can also give access to an entire group. These steps will create a web group, add a user to this group, and allow all members of the group to manage the service:
[source,shell]
....
# pw groupadd -g 6001 -n webteam
....
Using the same man:pw[8] command, the user is added to the webteam group:
[source,shell]
....
# pw groupmod -m user1 -n webteam
....
Finally, this line in [.filename]#/usr/local/etc/sudoers# allows any member of the webteam group to manage _webservice_:
[.programlisting]
....
%webteam ALL=(ALL) /usr/sbin/service webservice *
....
Unlike man:su[1], Sudo only requires the end user's password. This is an advantage because users do not need to share passwords; shared passwords are a finding in most security audits and bad practice all the way around.
Users permitted to run applications with Sudo only enter their own passwords. This is more secure and gives better control than man:su[1], where the `root` password is entered and the user acquires all `root` permissions.
[TIP]
====
Most organizations are moving or have moved toward a two factor authentication model. In these cases, the user may not have a password to enter. Sudo provides for these cases with the `NOPASSWD` variable. Adding it to the configuration above will allow all members of the _webteam_ group to manage the service without the password requirement:
[.programlisting]
....
%webteam ALL=(ALL) NOPASSWD: /usr/sbin/service webservice *
....
====
[[security-sudo-loggin]]
=== Logging Output
An advantage of implementing Sudo is the ability to enable session logging. Using the built-in log mechanisms and the included `sudoreplay` command, all commands initiated through Sudo are logged for later verification. To enable this feature, add a default log directory entry; this example uses a user variable. Several other log filename conventions exist; consult the manual page for `sudoreplay` for additional information.
[.programlisting]
....
Defaults iolog_dir=/var/log/sudo-io/%{user}
....
[TIP]
====
This directory will be created automatically after the logging is configured. It is best to let the system create the directory with default permissions just to be safe. In addition, this entry will also log administrators who use the `sudoreplay` command. To change this behavior, read and uncomment the logging options inside [.filename]#sudoers#.
====
Once this directive has been added to the [.filename]#sudoers# file, any user configuration can be updated with the request to log access. In the example shown, the updated _webteam_ entry would have the following additional changes:
[.programlisting]
....
%webteam ALL=(ALL) NOPASSWD: LOG_INPUT: LOG_OUTPUT: /usr/sbin/service webservice *
....
From this point on, all _webteam_ members altering the status of the _webservice_ application will be logged. The list of previous and current sessions can be displayed with:
[source,shell]
....
# sudoreplay -l
....
To replay a specific session, search the output for the `TSID=` entry, and pass that value to `sudoreplay` with no other options to replay the session at normal speed. For example:
[source,shell]
....
# sudoreplay user1/00/00/02
....
[WARNING]
====
While sessions are logged, any administrator is also able to remove session logs, leaving only the question of why they did so. It is worthwhile to add a daily check through an intrusion detection system (IDS) or similar software so that other administrators are alerted to manual alterations.
====
The `sudoreplay` command is extremely extensible. Consult the documentation for more information.
[[security-doas]]
== Using doas as an alternative to sudo
As an alternative to package:security/sudo[], package:security/doas[] can be used to provide users with enhanced privileges.
The `doas` utility is available via the Ports Collection in package:security/doas[] or via the man:pkg[8] utility.
After the installation, [.filename]#/usr/local/etc/doas.conf# must be configured to grant users access to specific commands or roles.
The simplest entry could be the following, which grants `local_user` root permissions without asking for a password when executing the `doas` command.
[source,shell]
....
permit nopass local_user as root
....
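Access can also be narrowed to a specific command. As a sketch only (the group, command, and service names are hypothetical, echoing the webteam example from the Sudo section), an entry permitting group members to restart a service without a password might look like:
[source,shell]
....
permit nopass :webteam as root cmd /usr/sbin/service args webservice restart
....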
For more configuration examples, please read man:doas.conf[5].
After the installation and configuration of the `doas` utility, a command can now be executed with enhanced privileges, for example:
[source,shell]
....
$ doas vi /etc/rc.conf
....
diff --git a/documentation/content/en/books/handbook/serialcomms/_index.adoc b/documentation/content/en/books/handbook/serialcomms/_index.adoc
index f461ff723c..4c5ca164df 100644
--- a/documentation/content/en/books/handbook/serialcomms/_index.adoc
+++ b/documentation/content/en/books/handbook/serialcomms/_index.adoc
@@ -1,1008 +1,1009 @@
---
title: Chapter 27. Serial Communications
part: Part IV. Network Communication
prev: books/handbook/partiv
next: books/handbook/ppp-and-slip
+description: This chapter covers some of the ways serial communications can be used on FreeBSD
---
[[serialcomms]]
= Serial Communications
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 27
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../../images/books/handbook/serialcomms/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/books/handbook/serialcomms/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/books/handbook/serialcomms/
endif::[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[serial-synopsis]]
== Synopsis
UNIX(R) has always had support for serial communications as the very first UNIX(R) machines relied on serial lines for user input and output. Things have changed a lot from the days when the average terminal consisted of a 10-character-per-second serial printer and a keyboard. This chapter covers some of the ways serial communications can be used on FreeBSD.
After reading this chapter, you will know:
* How to connect terminals to a FreeBSD system.
* How to use a modem to dial out to remote hosts.
* How to allow remote users to login to a FreeBSD system with a modem.
* How to boot a FreeBSD system from a serial console.
Before reading this chapter, you should:
* Know how to crossref:kernelconfig[kernelconfig, configure and install a custom kernel].
* Understand crossref:basics[basics, FreeBSD permissions and processes].
* Have access to the technical manual for the serial hardware to be used with FreeBSD.
[[serial]]
== Serial Terminology and Hardware
The following terms are often used in serial communications:
bps::
Bits per Second (bps) is the rate at which data is transmitted.
DTE::
Data Terminal Equipment (DTE) is one of two endpoints in a serial communication. An example would be a computer.
DCE::
Data Communications Equipment (DCE) is the other endpoint in a serial communication. Typically, it is a modem or serial terminal.
RS-232::
The original standard which defined hardware serial communications. It has since been renamed to TIA-232.
When referring to communication data rates, this section does not use the term _baud_. Baud refers to the number of electrical state transitions made in a period of time, while bps is the correct term to use.
To connect a serial terminal to a FreeBSD system, a serial port on the computer and the proper cable to connect to the serial device are needed. Users who are already familiar with serial hardware and cabling can safely skip this section.
[[term-cables-null]]
=== Serial Cables and Ports
There are several different kinds of serial cables. The two most common types are null-modem cables and standard RS-232 cables. The documentation for the hardware should describe the type of cable required.
These two types of cables differ in how the wires are connected to the connector. Each wire represents a signal, with the defined signals summarized in <<serialcomms-signal-names>>. A standard serial cable passes all of the RS-232C signals straight through. For example, the "Transmitted Data" pin on one end of the cable goes to the "Transmitted Data" pin on the other end. This is the type of cable used to connect a modem to the FreeBSD system, and is also appropriate for some terminals.
A null-modem cable switches the "Transmitted Data" pin of the connector on one end with the "Received Data" pin on the other end. The connector can be either a DB-25 or a DB-9.
A null-modem cable can be constructed using the pin connections summarized in <<nullmodem-db25>>, <<nullmodem-db9>>, and <<nullmodem-db9-25>>. While the standard calls for a straight-through pin 1 to pin 1 "Protective Ground" line, it is often omitted. Some terminals work using only pins 2, 3, and 7, while others require different configurations. When in doubt, refer to the documentation for the hardware.
[[serialcomms-signal-names]]
.RS-232C Signal Names
[cols="1,1", frame="none", options="header"]
|===
<| Acronyms
<| Names
|RD
|Received Data
|TD
|Transmitted Data
|DTR
|Data Terminal Ready
|DSR
|Data Set Ready
|DCD
|Data Carrier Detect
|SG
|Signal Ground
|RTS
|Request to Send
|CTS
|Clear to Send
|===
[[nullmodem-db25]]
.DB-25 to DB-25 Null-Modem Cable
[cols="1,1,1,1,1", frame="none", options="header"]
|===
<| Signal
<| Pin #
|
<| Pin #
<| Signal
|SG
|7
|connects to
|7
|SG
|TD
|2
|connects to
|3
|RD
|RD
|3
|connects to
|2
|TD
|RTS
|4
|connects to
|5
|CTS
|CTS
|5
|connects to
|4
|RTS
|DTR
|20
|connects to
|6
|DSR
|DTR
|20
|connects to
|8
|DCD
|DSR
|6
|connects to
|20
|DTR
|DCD
|8
|connects to
|20
|DTR
|===
[[nullmodem-db9]]
.DB-9 to DB-9 Null-Modem Cable
[cols="1,1,1,1,1", frame="none", options="header"]
|===
<| Signal
<| Pin #
|
<| Pin #
<| Signal
|RD
|2
|connects to
|3
|TD
|TD
|3
|connects to
|2
|RD
|DTR
|4
|connects to
|6
|DSR
|DTR
|4
|connects to
|1
|DCD
|SG
|5
|connects to
|5
|SG
|DSR
|6
|connects to
|4
|DTR
|DCD
|1
|connects to
|4
|DTR
|RTS
|7
|connects to
|8
|CTS
|CTS
|8
|connects to
|7
|RTS
|===
[[nullmodem-db9-25]]
.DB-9 to DB-25 Null-Modem Cable
[cols="1,1,1,1,1", frame="none", options="header"]
|===
<| Signal
<| Pin #
|
<| Pin #
<| Signal
|RD
|2
|connects to
|2
|TD
|TD
|3
|connects to
|3
|RD
|DTR
|4
|connects to
|6
|DSR
|DTR
|4
|connects to
|8
|DCD
|SG
|5
|connects to
|7
|SG
|DSR
|6
|connects to
|20
|DTR
|DCD
|1
|connects to
|20
|DTR
|RTS
|7
|connects to
|5
|CTS
|CTS
|8
|connects to
|4
|RTS
|===
[NOTE]
====
When one pin at one end connects to a pair of pins at the other end, it is usually implemented with one short wire between the pair of pins in their connector and a long wire to the other single pin.
====
Serial ports are the devices through which data is transferred between the FreeBSD host computer and the terminal. Several kinds of serial ports exist. Before purchasing or constructing a cable, make sure it will fit the ports on the terminal and on the FreeBSD system.
Most terminals have DB-25 ports. Personal computers may have DB-25 or DB-9 ports. A multiport serial card may have RJ-12 or RJ-45 ports. See the documentation that accompanied the hardware for specifications on the kind of port or visually verify the type of port.
In FreeBSD, each serial port is accessed through an entry in [.filename]#/dev#. There are two different kinds of entries:
* Call-in ports are named [.filename]#/dev/ttyuN# where _N_ is the port number, starting from zero. If a terminal is connected to the first serial port ([.filename]#COM1#), use [.filename]#/dev/ttyu0# to refer to the terminal. If the terminal is on the second serial port ([.filename]#COM2#), use [.filename]#/dev/ttyu1#, and so forth. Generally, the call-in port is used for terminals. Call-in ports require that the serial line assert the "Data Carrier Detect" signal to work correctly.
* Call-out ports are named [.filename]#/dev/cuauN# on FreeBSD versions 8.X and higher and [.filename]#/dev/cuadN# on FreeBSD versions 7.X and lower. Call-out ports are usually not used for terminals, but are used for modems. The call-out port can be used if the serial cable or the terminal does not support the "Data Carrier Detect" signal.
FreeBSD also provides initialization devices ([.filename]#/dev/ttyuN.init# and [.filename]#/dev/cuauN.init# or [.filename]#/dev/cuadN.init#) and locking devices ([.filename]#/dev/ttyuN.lock# and [.filename]#/dev/cuauN.lock# or [.filename]#/dev/cuadN.lock#). The initialization devices are used to initialize communications port parameters each time a port is opened, such as `crtscts` for modems which use `RTS/CTS` signaling for flow control. The locking devices are used to lock flags on ports to prevent users or programs changing certain parameters. Refer to man:termios[4], man:sio[4], and man:stty[1] for information on terminal settings, locking and initializing devices, and setting terminal options, respectively.
[[serial-hw-config]]
=== Serial Port Configuration
By default, FreeBSD supports four serial ports which are commonly known as [.filename]#COM1#, [.filename]#COM2#, [.filename]#COM3#, and [.filename]#COM4#. FreeBSD also supports dumb multi-port serial interface cards, such as the BocaBoard 1008 and 2016, as well as more intelligent multi-port cards such as those made by Digiboard. However, the default kernel only looks for the standard [.filename]#COM# ports.
To see if the system recognizes the serial ports, look for system boot messages that start with `uart`:
[source,shell]
....
# grep uart /var/run/dmesg.boot
....
If the system does not recognize all of the needed serial ports, additional entries can be added to [.filename]#/boot/device.hints#. This file already contains `hint.uart.0.\*` entries for [.filename]#COM1# and `hint.uart.1.*` entries for [.filename]#COM2#. When adding a port entry for [.filename]#COM3# use `0x3E8`, and for [.filename]#COM4# use `0x2E8`. Common IRQ addresses are `5` for [.filename]#COM3# and `9` for [.filename]#COM4#.
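For illustration, hints for [.filename]#COM3# (the third port, [.filename]#uart.2#) could be added following the format of the existing entries, using the address and IRQ mentioned above:
[.programlisting]
....
hint.uart.2.at="isa"
hint.uart.2.port="0x3E8"
hint.uart.2.irq="5"
....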
To determine the default set of terminal I/O settings used by the port, specify its device name. This example determines the settings for the call-in port on [.filename]#COM2#:
[source,shell]
....
# stty -a -f /dev/ttyu1
....
System-wide initialization of serial devices is controlled by [.filename]#/etc/rc.d/serial#. This file affects the default settings of serial devices. To change the settings for a device, use `stty`. By default, the changed settings are in effect until the device is closed and when the device is reopened, it goes back to the default set. To permanently change the default set, open and adjust the settings of the initialization device. For example, to turn on `CLOCAL` mode, 8 bit communication, and `XON/XOFF` flow control for [.filename]#ttyu5#, type:
[source,shell]
....
# stty -f /dev/ttyu5.init clocal cs8 ixon ixoff
....
To prevent certain settings from being changed by an application, make adjustments to the locking device. For example, to lock the speed of [.filename]#ttyu5# to 57600 bps, type:
[source,shell]
....
# stty -f /dev/ttyu5.lock 57600
....
Now, any application that opens [.filename]#ttyu5# and tries to change the speed of the port will be stuck with 57600 bps.
[[term]]
== Terminals
Terminals provide a convenient and low-cost way to access a FreeBSD system when not at the computer's console or on a connected network. This section describes how to use terminals with FreeBSD.
The original UNIX(R) systems did not have consoles. Instead, users logged in and ran programs through terminals that were connected to the computer's serial ports.
The ability to establish a login session on a serial port still exists in nearly every UNIX(R)-like operating system today, including FreeBSD. By using a terminal attached to an unused serial port, a user can log in and run any text program that can normally be run on the console or in an `xterm` window.
Many terminals can be attached to a FreeBSD system. An older spare computer can be used as a terminal wired into a more powerful computer running FreeBSD. This can turn what might otherwise be a single-user computer into a powerful multiple-user system.
FreeBSD supports three types of terminals:
Dumb terminals::
Dumb terminals are specialized hardware that connect to computers over serial lines. They are called "dumb" because they have only enough computational power to display, send, and receive text. No programs can be run on these devices. Instead, dumb terminals connect to a computer that runs the needed programs.
+
There are hundreds of kinds of dumb terminals made by many manufacturers, and just about any kind will work with FreeBSD. Some high-end terminals can even display graphics, but only certain software packages can take advantage of these advanced features.
+
Dumb terminals are popular in work environments where workers do not need access to graphical applications.
Computers Acting as Terminals::
Since a dumb terminal has just enough ability to display, send, and receive text, any spare computer can be a dumb terminal. All that is needed is the proper cable and some _terminal emulation_ software to run on the computer.
+
This configuration can be useful. For example, if one user is busy working at the FreeBSD system's console, another user can do some text-only work at the same time from a less powerful personal computer hooked up as a terminal to the FreeBSD system.
+
There are at least two utilities in the base system of FreeBSD that can be used to work through a serial connection: man:cu[1] and man:tip[1].
+
For example, to connect from a client system that runs FreeBSD to the serial connection of another system:
+
[source,shell]
....
# cu -l /dev/cuauN
....
+
Ports are numbered starting from zero. This means that [.filename]#COM1# is [.filename]#/dev/cuau0#.
+
Additional programs are available through the Ports Collection, such as package:comms/minicom[].
X Terminals::
X terminals are the most sophisticated kind of terminal available. Instead of connecting to a serial port, they usually connect to a network like Ethernet. Instead of being relegated to text-only applications, they can display any Xorg application.
+
This chapter does not cover the setup, configuration, or use of X terminals.
[[term-config]]
=== Terminal Configuration
This section describes how to configure a FreeBSD system to enable a login session on a serial terminal. It assumes that the system recognizes the serial port to which the terminal is connected and that the terminal is connected with the correct cable.
In FreeBSD, `init` reads [.filename]#/etc/ttys# and starts a `getty` process on the available terminals. The `getty` process is responsible for reading a login name and starting the `login` program. The ports on the FreeBSD system which allow logins are listed in [.filename]#/etc/ttys#. For example, the first virtual console, [.filename]#ttyv0#, has an entry in this file, allowing logins on the console. This file also contains entries for the other virtual consoles, serial ports, and pseudo-ttys. For a hardwired terminal, the serial port's [.filename]#/dev# entry is listed without the `/dev` part. For example, [.filename]#/dev/ttyv0# is listed as `ttyv0`.
The default [.filename]#/etc/ttys# configures support for the first four serial ports, [.filename]#ttyu0# through [.filename]#ttyu3#:
[.programlisting]
....
ttyu0 "/usr/libexec/getty std.9600" dialup off secure
ttyu1 "/usr/libexec/getty std.9600" dialup off secure
ttyu2 "/usr/libexec/getty std.9600" dialup off secure
ttyu3 "/usr/libexec/getty std.9600" dialup off secure
....
When attaching a terminal to one of those ports, modify the default entry to set the required speed and terminal type, to turn the device `on` and, if needed, to change the port's `secure` setting. If the terminal is connected to another port, add an entry for the port.
<<ex-etc-ttys>> configures two terminals in [.filename]#/etc/ttys#. The first entry configures a Wyse-50 connected to [.filename]#COM2#. The second entry configures an old computer running Procomm terminal software emulating a VT-100 terminal. The computer is connected to the sixth serial port on a multi-port serial card.
[example]
[[ex-etc-ttys]]
.Configuring Terminal Entries
====
[.programlisting]
....
ttyu1 "/usr/libexec/getty std.38400" wy50 on insecure
ttyu5 "/usr/libexec/getty std.19200" vt100 on insecure
....
The first field specifies the device name of the serial terminal.
The second field tells `getty` to initialize and open the line, set the line speed, prompt for a user name, and then execute the `login` program. The optional _getty type_ configures characteristics on the terminal line, like bps rate and parity. The available getty types are listed in [.filename]#/etc/gettytab#. In almost all cases, the getty types that start with `std` will work for hardwired terminals as these entries ignore parity. There is a `std` entry for each bps rate from 110 to 115200. Refer to man:gettytab[5] for more information. When setting the getty type, make sure to match the communications settings used by the terminal. For this example, the Wyse-50 uses no parity and connects at 38400 bps. The computer uses no parity and connects at 19200 bps.
The third field is the type of terminal. For dial-up ports, `unknown` or `dialup` is typically used since users may dial up with practically any type of terminal or software. Since the terminal type does not change for hardwired terminals, a real terminal type from [.filename]#/etc/termcap# can be specified. For this example, the Wyse-50 uses the real terminal type while the computer running Procomm is set to emulate a VT-100.
The fourth field specifies if the port should be enabled. To enable logins on this port, this field must be set to `on`.
The final field is used to specify whether the port is secure. Marking a port as `secure` means that it is trusted enough to allow `root` to login from that port. Insecure ports do not allow `root` logins. On an insecure port, users must login from unprivileged accounts and then use `su` or a similar mechanism to gain superuser privileges, as described in crossref:basics[users-superuser,“The Superuser Account”]. For security reasons, it is recommended to change this setting to `insecure`.
====
After making any changes to [.filename]#/etc/ttys#, send a SIGHUP (hangup) signal to the `init` process to force it to re-read its configuration file:
[source,shell]
....
# kill -HUP 1
....
Since `init` is always the first process run on a system, it always has a process ID of `1`.
If everything is set up correctly, all cables are in place, and the terminals are powered up, a `getty` process should now be running on each terminal and login prompts should be available on each terminal.
[[term-debug]]
=== Troubleshooting the Connection
Even with the most meticulous attention to detail, something could still go wrong while setting up a terminal. Here is a list of common symptoms and some suggested fixes.
If no login prompt appears, make sure the terminal is plugged in and powered up. If it is a personal computer acting as a terminal, make sure it is running terminal emulation software on the correct serial port.
Make sure the cable is connected firmly to both the terminal and the FreeBSD computer. Make sure it is the right kind of cable.
Make sure the terminal and FreeBSD agree on the bps rate and parity settings. For a video display terminal, make sure the contrast and brightness controls are turned up. If it is a printing terminal, make sure paper and ink are in good supply.
Use `ps` to make sure that a `getty` process is running and serving the terminal. For example, the following listing shows that a `getty` is running on the second serial port, [.filename]#ttyu1#, and is using the `std.38400` entry in [.filename]#/etc/gettytab#:
[source,shell]
....
# ps -axww|grep ttyu
22189 d1 Is+ 0:00.03 /usr/libexec/getty std.38400 ttyu1
....
If no `getty` process is running, make sure the port is enabled in [.filename]#/etc/ttys#. Remember to run `kill -HUP 1` after modifying [.filename]#/etc/ttys#.
If the `getty` process is running but the terminal still does not display a login prompt, or if it displays a prompt but will not accept typed input, the terminal or cable may not support hardware handshaking. Try changing the entry in [.filename]#/etc/ttys# from `std.38400` to `3wire.38400`, then run `kill -HUP 1` after modifying [.filename]#/etc/ttys#. The `3wire` entry is similar to `std`, but ignores hardware handshaking. The bps may also need to be reduced or software flow control enabled when using `3wire` to prevent buffer overflows.
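Continuing the earlier Wyse-50 example, the modified entry would look like this illustrative sketch:
[.programlisting]
....
ttyu1 "/usr/libexec/getty 3wire.38400" wy50 on insecure
....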
If garbage appears instead of a login prompt, make sure the terminal and FreeBSD agree on the bps rate and parity settings. Check the `getty` processes to make sure the correct _getty_ type is in use. If not, edit [.filename]#/etc/ttys# and run `kill -HUP 1`.
If characters appear doubled and the password appears when typed, switch the terminal, or the terminal emulation software, from "half duplex" or "local echo" to "full duplex."
[[dialup]]
== Dial-in Service
Configuring a FreeBSD system for dial-in service is similar to configuring terminals, except that modems are used instead of terminal devices. FreeBSD supports both external and internal modems.
External modems are more convenient because they often can be configured via parameters stored in non-volatile RAM and they usually provide lighted indicators that display the state of important RS-232 signals, indicating whether the modem is operating properly.
Internal modems usually lack non-volatile RAM, so their configuration may be limited to setting DIP switches. If the internal modem has any signal indicator lights, they are difficult to view when the system's cover is in place.
When using an external modem, a proper cable is needed. A standard RS-232C serial cable should suffice.
FreeBSD needs the RTS and CTS signals for flow control at speeds above 2400 bps, the CD signal to detect when a call has been answered or the line has been hung up, and the DTR signal to reset the modem after a session is complete. Some cables are wired without all of the needed signals, so if a login session does not go away when the line hangs up, there may be a problem with the cable. Refer to <<term-cables-null>> for more information about these signals.
Like other UNIX(R)-like operating systems, FreeBSD uses the hardware signals to find out when a call has been answered or a line has been hung up and to hang up and reset the modem after a call. FreeBSD avoids sending commands to the modem or watching for status reports from the modem.
FreeBSD supports the NS8250, NS16450, NS16550, and NS16550A-based RS-232C (CCITT V.24) communications interfaces. The 8250 and 16450 devices have single-character buffers. The 16550 device provides a 16-character buffer, which allows for better system performance. Bugs in plain 16550 devices prevent the use of the 16-character buffer, so use 16550A devices if possible. As single-character-buffer devices require more work by the operating system than the 16-character-buffer devices, 16550A-based serial interface cards are preferred. If the system has many active serial ports or will have a heavy load, 16550A-based cards are better for low-error-rate communications.
The rest of this section demonstrates how to configure a modem to receive incoming connections, how to communicate with the modem, and offers some troubleshooting tips.
[[dialup-ttys]]
=== Modem Configuration
As with terminals, `init` spawns a `getty` process for each configured serial port used for dial-in connections. When a user dials the modem's line and the modems connect, the "Carrier Detect" signal is reported by the modem. The kernel notices that the carrier has been detected and instructs `getty` to open the port and display a `login:` prompt at the specified initial line speed. In a typical configuration, if garbage characters are received, usually due to the modem's connection speed being different than the configured speed, `getty` tries adjusting the line speeds until it receives reasonable characters. After the user enters their login name, `getty` executes `login`, which completes the login process by asking for the user's password and then starting the user's shell.
There are two schools of thought regarding dial-up modems. One configuration method is to set the modems and systems so that no matter at what speed a remote user dials in, the dial-in RS-232 interface runs at a locked speed. The benefit of this configuration is that the remote user always sees a system login prompt immediately. The downside is that the system does not know what a user's true data rate is, so full-screen programs like Emacs will not adjust their screen-painting methods to make their response better for slower connections.
The second method is to configure the RS-232 interface to vary its speed based on the remote user's connection speed. As `getty` does not understand any particular modem's connection speed reporting, it gives a `login:` message at an initial speed and watches the characters that come back in response. If the user sees junk, they should press kbd:[Enter] until they see a recognizable prompt. If the data rates do not match, `getty` sees anything the user types as junk, tries the next speed, and gives the `login:` prompt again. This procedure normally only takes a keystroke or two before the user sees a good prompt. This login sequence does not look as clean as the locked-speed method, but a user on a low-speed connection should receive better interactive response from full-screen programs.
When locking a modem's data communications rate at a particular speed, no changes to [.filename]#/etc/gettytab# should be needed. However, for a matching-speed configuration, additional entries may be required in order to define the speeds to use for the modem. This example configures a 14.4 Kbps modem with a top interface speed of 19.2 Kbps using 8-bit, no parity connections. It configures `getty` to start the communications rate for a V.32bis connection at 19.2 Kbps, then cycles through 9600 bps, 2400 bps, 1200 bps, 300 bps, and back to 19.2 Kbps. Communications rate cycling is implemented with the `nx=` (next table) capability. Each line uses a `tc=` (table continuation) entry to pick up the rest of the settings for a particular data rate.
[.programlisting]
....
#
# Additions for a V.32bis Modem
#
um|V300|High Speed Modem at 300,8-bit:\
:nx=V19200:tc=std.300:
un|V1200|High Speed Modem at 1200,8-bit:\
:nx=V300:tc=std.1200:
uo|V2400|High Speed Modem at 2400,8-bit:\
:nx=V1200:tc=std.2400:
up|V9600|High Speed Modem at 9600,8-bit:\
:nx=V2400:tc=std.9600:
uq|V19200|High Speed Modem at 19200,8-bit:\
:nx=V9600:tc=std.19200:
....
For a 28.8 Kbps modem, or to take advantage of compression on a 14.4 Kbps modem, use a higher communications rate, as seen in this example:
[.programlisting]
....
#
# Additions for a V.32bis or V.34 Modem
# Starting at 57.6 Kbps
#
vm|VH300|Very High Speed Modem at 300,8-bit:\
:nx=VH57600:tc=std.300:
vn|VH1200|Very High Speed Modem at 1200,8-bit:\
:nx=VH300:tc=std.1200:
vo|VH2400|Very High Speed Modem at 2400,8-bit:\
:nx=VH1200:tc=std.2400:
vp|VH9600|Very High Speed Modem at 9600,8-bit:\
:nx=VH2400:tc=std.9600:
vq|VH57600|Very High Speed Modem at 57600,8-bit:\
:nx=VH9600:tc=std.57600:
....
For a slow CPU or a heavily loaded system without 16550A-based serial ports, this configuration may produce `sio` "silo" errors at 57.6 Kbps.
The configuration of [.filename]#/etc/ttys# is similar to <<ex-etc-ttys>>, but a different argument is passed to `getty` and `dialup` is used for the terminal type. Replace _xxx_ with the process `init` will run on the device:
[.programlisting]
....
ttyu0 "/usr/libexec/getty xxx" dialup on
....
The `dialup` terminal type can be changed. For example, setting `vt102` as the default terminal type allows users to use VT102 emulation on their remote systems.
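A sketch of such an entry, with _xxx_ still standing for the getty type chosen below:
[.programlisting]
....
ttyu0 "/usr/libexec/getty xxx" vt102 on
....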
For a locked-speed configuration, specify the speed with a valid type listed in [.filename]#/etc/gettytab#. This example is for a modem whose port speed is locked at 19.2 Kbps:
[.programlisting]
....
ttyu0 "/usr/libexec/getty std.19200" dialup on
....
In a matching-speed configuration, the entry needs to reference the appropriate beginning "auto-baud" entry in [.filename]#/etc/gettytab#. To continue the example for a matching-speed modem that starts at 19.2 Kbps, use this entry:
[.programlisting]
....
ttyu0 "/usr/libexec/getty V19200" dialup on
....
After editing [.filename]#/etc/ttys#, wait until the modem is properly configured and connected before signaling `init`:
[source,shell]
....
# kill -HUP 1
....
High-speed modems, like V.32, V.32bis, and V.34 modems, use hardware (`RTS/CTS`) flow control. Use `stty` to set the hardware flow control flag for the modem port. This example sets the `crtscts` flag on [.filename]#COM2#'s dial-in and dial-out initialization devices:
[source,shell]
....
# stty -f /dev/ttyu1.init crtscts
# stty -f /dev/cuau1.init crtscts
....
=== Troubleshooting
This section provides a few tips for troubleshooting a dial-up modem that will not connect to a FreeBSD system.
Hook up the modem to the FreeBSD system and boot the system. If the modem has status indication lights, watch to see whether the modem's DTR indicator lights when the `login:` prompt appears on the system's console. If it lights up, that should mean that FreeBSD has started a `getty` process on the appropriate communications port and is waiting for the modem to accept a call.
If the DTR indicator does not light, login to the FreeBSD system through the console and type `ps ax` to see if FreeBSD is running a `getty` process on the correct port:
[source,shell]
....
114 ?? I 0:00.10 /usr/libexec/getty V19200 ttyu0
....
If the second column contains a `d0` instead of a `??` and the modem has not accepted a call yet, this means that `getty` has completed its open on the communications port. This could indicate a problem with the cabling or a misconfigured modem because `getty` should not be able to open the communications port until the carrier detect signal has been asserted by the modem.
If no `getty` processes are waiting to open the port, double-check that the entry for the port is correct in [.filename]#/etc/ttys#. Also, check [.filename]#/var/log/messages# to see if there are any log messages from `init` or `getty`.
Next, try dialing into the system. Be sure to use 8 bits, no parity, and 1 stop bit on the remote system. If a prompt does not appear right away, or the prompt shows garbage, try pressing kbd:[Enter] about once per second. If there is still no `login:` prompt, try sending a `BREAK`. When using a high-speed modem, try dialing again after locking the dialing modem's interface speed.
If there is still no `login:` prompt, check [.filename]#/etc/gettytab# again and double-check that:
* The initial capability name specified in the entry in [.filename]#/etc/ttys# matches the name of a capability in [.filename]#/etc/gettytab#.
* Each `nx=` entry matches another [.filename]#gettytab# capability name.
* Each `tc=` entry matches another [.filename]#gettytab# capability name.
If the modem on the FreeBSD system will not answer, make sure that the modem is configured to answer the phone when DTR is asserted. If the modem seems to be configured correctly, verify that the DTR line is asserted by checking the modem's indicator lights.
If it still does not work, try sending an email to the {freebsd-questions} describing the modem and the problem.
[[dialout]]
== Dial-out Service
The following are tips for getting the host to connect over the modem to another computer. This is appropriate for establishing a terminal session with a remote host.
This kind of connection can be helpful to get a file on the Internet if there are problems using PPP. If PPP is not working, use the terminal session to FTP the needed file. Then use zmodem to transfer it to the machine.
[[hayes-unsupported]]
=== Using a Stock Hayes Modem
A generic Hayes dialer is built into `tip`. Use `at=hayes` in [.filename]#/etc/remote#.
The Hayes driver is not smart enough to recognize some of the advanced features of newer modems, and messages like `BUSY`, `NO DIALTONE`, or `CONNECT 115200` may confuse it. Turn those messages off when using `tip` by sending `ATX0&W` to the modem.
The dial timeout for `tip` is 60 seconds. The modem should use something less, or else `tip` will think there is a communication problem. Try `ATS7=45&W`.
[[direct-at]]
=== Using `AT` Commands
Create a "direct" entry in [.filename]#/etc/remote#. For example, if the modem is hooked up to the first serial port, [.filename]#/dev/cuau0#, use the following line:
[.programlisting]
....
cuau0:dv=/dev/cuau0:br#19200:pa=none
....
Use the highest bps rate the modem supports in the `br` capability. Then, type `tip cuau0` to connect to the modem.
Or, use `cu` as `root` with the following command:
[source,shell]
....
# cu -lline -sspeed
....
_line_ is the serial port, such as [.filename]#/dev/cuau0#, and _speed_ is the speed, such as `57600`. When finished entering the AT commands, type `~.` to exit.
[[gt-failure]]
=== The `@` Sign Does Not Work
The `@` sign in the phone number capability tells `tip` to look in [.filename]#/etc/phones# for a phone number. But, the `@` sign is also a special character in capability files like [.filename]#/etc/remote#, so it needs to be escaped with a backslash:
[.programlisting]
....
pn=\@
....
[[dial-command-line]]
=== Dialing from the Command Line
Put a "generic" entry in [.filename]#/etc/remote#. For example:
[.programlisting]
....
tip115200|Dial any phone number at 115200 bps:\
:dv=/dev/cuau0:br#115200:at=hayes:pa=none:du:
tip57600|Dial any phone number at 57600 bps:\
:dv=/dev/cuau0:br#57600:at=hayes:pa=none:du:
....
This should now work:
[source,shell]
....
# tip -115200 5551234
....
Users who prefer `cu` over `tip` can use a generic `cu` entry:
[.programlisting]
....
cu115200|Use cu to dial any number at 115200bps:\
:dv=/dev/cuau1:br#115200:at=hayes:pa=none:du:
....
and type:
[source,shell]
....
# cu 5551234 -s 115200
....
[[set-bps]]
=== Setting the bps Rate
Put in an entry for `tip1200` or `cu1200`, but go ahead and use whatever bps rate is appropriate with the `br` capability. `tip` thinks a good default is 1200 bps which is why it looks for a `tip1200` entry. 1200 bps does not have to be used, though.
[[terminal-server]]
=== Accessing a Number of Hosts Through a Terminal Server
Rather than waiting until connected and typing `CONNECT _host_` each time, use ``tip``'s `cm` capability. For example, these entries in [.filename]#/etc/remote# will let you type `tip pain` or `tip muffin` to connect to the hosts `pain` or `muffin`, and `tip deep13` to connect to the terminal server.
[.programlisting]
....
pain|pain.deep13.com|Forrester's machine:\
:cm=CONNECT pain\n:tc=deep13:
muffin|muffin.deep13.com|Frank's machine:\
:cm=CONNECT muffin\n:tc=deep13:
deep13:Gizmonics Institute terminal server:\
:dv=/dev/cuau2:br#38400:at=hayes:du:pa=none:pn=5551234:
....
[[tip-multiline]]
=== Using More Than One Line with `tip`
This is often a problem where a university has several modem lines and several thousand students trying to use them.
Make an entry in [.filename]#/etc/remote# and use `@` for the `pn` capability:
[.programlisting]
....
big-university:\
:pn=\@:tc=dialout
dialout:\
:dv=/dev/cuau3:br#9600:at=courier:du:pa=none:
....
Then, list the phone numbers in [.filename]#/etc/phones#:
[.programlisting]
....
big-university 5551111
big-university 5551112
big-university 5551113
big-university 5551114
....
`tip` will try each number in the listed order, then give up. To keep retrying, run `tip` in a `while` loop.
[[multi-controlp]]
=== Using the Force Character
kbd:[Ctrl+P] is the default "force" character, used to tell `tip` that the next character is literal data. The force character can be set to any other character with the `~s` escape, which means "set a variable."
Type `~sforce=_single-char_` followed by a newline. _single-char_ is any single character. If _single-char_ is left out, then the force character is the null character, which is accessed by typing kbd:[Ctrl+2] or kbd:[Ctrl+Space]. A pretty good value for _single-char_ is kbd:[Shift+Ctrl+6], which is only used on some terminal servers.
To change the force character, specify the following in [.filename]#~/.tiprc#:
[.programlisting]
....
force=single-char
....
[[uppercase]]
=== Upper Case Characters
This happens when kbd:[Ctrl+A] is pressed, which is ``tip``'s "raise character", specially designed for people with broken caps-lock keys. Use `~s` to set `raisechar` to something reasonable. It can be set to be the same as the force character, if neither feature is used.
Here is a sample [.filename]#~/.tiprc# for Emacs users who need to type kbd:[Ctrl+2] and kbd:[Ctrl+A]:
[.programlisting]
....
force=^^
raisechar=^^
....
The `^^` is kbd:[Shift+Ctrl+6].
[[tip-filetransfer]]
=== File Transfers with `tip`
When talking to another UNIX(R)-like operating system, files can be sent and received using `~p` (put) and `~t` (take). These commands run `cat` and `echo` on the remote system to accept and send files. The syntax is:
`~p` local-file [ remote-file ]
`~t` remote-file [ local-file ]
There is no error checking, so another protocol, like zmodem, should probably be used.
[[zmodem-tip]]
=== Using zmodem with `tip`?
To receive files, start the sending program on the remote end. Then, type `~C rz` to begin receiving them locally.
To send files, start the receiving program on the remote end. Then, type `~C sz _files_` to send them to the remote system.
[[serialconsole-setup]]
== Setting Up the Serial Console
FreeBSD has the ability to boot a system with a dumb terminal on a serial port as a console. This configuration is useful for system administrators who wish to install FreeBSD on machines that have no keyboard or monitor attached, and developers who want to debug the kernel or device drivers.
As described in crossref:boot[boot,The FreeBSD Booting Process], FreeBSD employs a three stage bootstrap. The first two stages are in the boot block code which is stored at the beginning of the FreeBSD slice on the boot disk. The boot block then loads and runs the boot loader as the third stage code.
In order to set up booting from a serial console, the boot block code, the boot loader code, and the kernel need to be configured.
[[serialconsole-howto-fast]]
=== Quick Serial Console Configuration
This section provides a fast overview of setting up the serial console. This procedure can be used when the dumb terminal is connected to [.filename]#COM1#.
[.procedure]
.Procedure: Configuring a Serial Console on [.filename]#COM1#
. Connect the serial cable to [.filename]#COM1# and the controlling terminal.
. To configure boot messages to display on the serial console, issue the following command as the superuser:
+
[source,shell]
....
# echo 'console="comconsole"' >> /boot/loader.conf
....
. Edit [.filename]#/etc/ttys# and change `off` to `on` and `dialup` to `vt100` for the [.filename]#ttyu0# entry, as shown in the sample entry after this procedure. Otherwise, a password will not be required to connect via the serial console, resulting in a potential security hole.
. Reboot the system to see if the changes took effect.
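For reference, the modified [.filename]#ttyu0# line in [.filename]#/etc/ttys# would look roughly like this, assuming the default `std.9600` getty type:
[.programlisting]
....
ttyu0   "/usr/libexec/getty std.9600"   vt100   on secure
....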
If a different configuration is required, see the next section for a more in-depth configuration explanation.
[[serialconsole-howto]]
=== In-Depth Serial Console Configuration
This section provides a more detailed explanation of the steps needed to set up a serial console in FreeBSD.
[.procedure]
.Procedure: Configuring a Serial Console
. Prepare a serial cable.
+
Use either a null-modem cable or a standard serial cable and a null-modem adapter. See <<term-cables-null>> for a discussion on serial cables.
. Unplug the keyboard.
+
Many systems probe for the keyboard during the Power-On Self-Test (POST) and will generate an error if the keyboard is not detected. Some machines will refuse to boot until the keyboard is plugged in.
+
If the computer complains about the error, but boots anyway, no further configuration is needed.
+
If the computer refuses to boot without a keyboard attached, configure the BIOS so that it ignores this error. Consult the motherboard's manual for details on how to do this.
+
[TIP]
====
Try setting the keyboard to "Not installed" in the BIOS. This setting tells the BIOS not to probe for a keyboard at power-on, so it should not complain if the keyboard is absent. If that option is not present in the BIOS, look for a "Halt on Error" option instead. Setting this to "All but Keyboard" or to "No Errors" will have the same effect.
====
+
If the system has a PS/2(R) mouse, unplug it as well. PS/2(R) mice share some hardware with the keyboard and leaving the mouse plugged in can fool the keyboard probe into thinking the keyboard is still there.
+
[NOTE]
====
While most systems will boot without a keyboard, quite a few will not boot without a graphics adapter. Some systems can be configured to boot with no graphics adapter by changing the "graphics adapter" setting in the BIOS configuration to "Not installed". Other systems do not support this option and will refuse to boot if there is no display hardware in the system. With these machines, leave some kind of graphics card plugged in, even if it is just a junky mono board. A monitor does not need to be attached.
====
. Plug a dumb terminal, an old computer with a modem program, or the serial port on another UNIX(R) box into the serial port.
. Add the appropriate `hint.sio.*` entries to [.filename]#/boot/device.hints# for the serial port. Some multi-port cards also require kernel configuration options. Refer to man:sio[4] for the required options and device hints for each supported serial port.
. Create [.filename]#boot.config# in the root directory of the `a` partition on the boot drive.
+
This file instructs the boot block code how to boot the system. In order to activate the serial console, one or more of the following options are needed. When using multiple options, include them all on the same line:
+
`-h`:::
Toggles between the internal and serial consoles. Use this to switch console devices. For instance, to boot from the internal (video) console, use `-h` to direct the boot loader and the kernel to use the serial port as its console device. Alternatively, to boot from the serial port, use `-h` to tell the boot loader and the kernel to use the video display as the console instead.
`-D`:::
Toggles between the single and dual console configurations. In the single configuration, the console will be either the internal console (video display) or the serial port, depending on the state of `-h`. In the dual console configuration, both the video display and the serial port will become the console at the same time, regardless of the state of `-h`. However, the dual console configuration takes effect only while the boot block is running. Once the boot loader gets control, the console specified by `-h` becomes the only console.
`-P`:::
Makes the boot block probe the keyboard. If no keyboard is found, the `-D` and `-h` options are automatically set.
+
[NOTE]
====
Due to space constraints in the current version of the boot blocks, `-P` is capable of detecting extended keyboards only. Keyboards with fewer than 101 keys and without F11 and F12 keys may not be detected. Keyboards on some laptops may not be properly found because of this limitation. If this is the case, do not use `-P`.
====
+
Use either `-P` to select the console automatically or `-h` to activate the serial console. Refer to man:boot[8] and man:boot.config[5] for more details.
+
The options, except for `-P`, are passed to the boot loader. The boot loader will determine whether the internal video or the serial port should become the console by examining the state of `-h`. This means that if `-D` is specified but `-h` is not specified in [.filename]#/boot.config#, the serial port can be used as the console only during the boot block as the boot loader will use the internal video display as the console.
. Boot the machine.
+
When FreeBSD starts, the boot blocks echo the contents of [.filename]#/boot.config# to the console. For example:
+
[source,shell]
....
/boot.config: -P
Keyboard: no
....
+
The second line appears only if `-P` is in [.filename]#/boot.config# and indicates the presence or absence of the keyboard. These messages go to either the serial or internal console, or both, depending on the option in [.filename]#/boot.config#:
+
[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
<| Options
<| Message goes to
|none
|internal console
|`-h`
|serial console
|`-D`
|serial and internal consoles
|`-Dh`
|serial and internal consoles
|`-P`, keyboard present
|internal console
|`-P`, keyboard absent
|serial console
|===
+
After the message, there will be a small pause before the boot blocks continue loading the boot loader and before any further messages are printed to the console. Under normal circumstances, there is no need to interrupt the boot blocks, but one can do so in order to make sure things are set up correctly.
+
Press any key, other than kbd:[Enter], at the console to interrupt the boot process. The boot blocks will then prompt for further action:
+
[source,shell]
....
>> FreeBSD/i386 BOOT
Default: 0:ad(0,a)/boot/loader
boot:
....
+
Verify that the above message appears on either the serial or internal console, or both, according to the options in [.filename]#/boot.config#. If the message appears in the correct console, press kbd:[Enter] to continue the boot process.
+
If there is no prompt on the serial terminal, something is wrong with the settings. Enter `-h` then kbd:[Enter] or kbd:[Return] to tell the boot block (and then the boot loader and the kernel) to choose the serial port for the console. Once the system is up, go back and check what went wrong.
During the third stage of the boot process, one can still switch between the internal console and the serial console by setting appropriate environment variables in the boot loader. See man:loader[8] for more information.
[NOTE]
====
This line in [.filename]#/boot/loader.conf# or [.filename]#/boot/loader.conf.local# configures the boot loader and the kernel to send their boot messages to the serial console, regardless of the options in [.filename]#/boot.config#:
[.programlisting]
....
console="comconsole"
....
That line should be the first line of [.filename]#/boot/loader.conf# so that boot messages are displayed on the serial console as early as possible.
If that line does not exist, or if it is set to `console="vidconsole"`, the boot loader and the kernel will use whichever console is indicated by `-h` in the boot block. See man:loader.conf[5] for more information.
At the moment, the boot loader has no option equivalent to `-P` in the boot block, and there is no provision to automatically select the internal console and the serial console based on the presence of the keyboard.
====
[TIP]
====
While it is not required, it is possible to provide a `login` prompt over the serial line. To configure this, edit the entry for the serial port in [.filename]#/etc/ttys# using the instructions in <<term-config>>. If the speed of the serial port has been changed, change `std.9600` to match the new setting.
====
=== Setting a Faster Serial Port Speed
By default, the serial port settings are 9600 baud, 8 bits, no parity, and 1 stop bit. To change the default console speed, use one of the following options:
* Edit [.filename]#/etc/make.conf# and set `BOOT_COMCONSOLE_SPEED` to the new console speed (a sample setting is shown after this list). Then, recompile and install the boot blocks and the boot loader:
+
[source,shell]
....
# cd /sys/boot
# make clean
# make
# make install
....
+
If the serial console is configured in some other way than by booting with `-h`, or if the serial console used by the kernel is different from the one used by the boot blocks, add the following option, with the desired speed, to a custom kernel configuration file and compile a new kernel:
+
[.programlisting]
....
options CONSPEED=19200
....
* Add the `-S__19200__` boot option to [.filename]#/boot.config#, replacing `_19200_` with the speed to use.
* Add the following options to [.filename]#/boot/loader.conf#. Replace `_115200_` with the speed to use.
+
[.programlisting]
....
boot_multicons="YES"
boot_serial="YES"
comconsole_speed="115200"
console="comconsole,vidconsole"
....
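As noted in the first option above, the [.filename]#/etc/make.conf# entry could look like this sketch, where _115200_ is an assumed speed:
[.programlisting]
....
BOOT_COMCONSOLE_SPEED=115200
....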
[[serialconsole-ddb]]
=== Entering the DDB Debugger from the Serial Line
To configure the ability to drop into the kernel debugger from the serial console, add the following options to a custom kernel configuration file and compile the kernel using the instructions in crossref:kernelconfig[kernelconfig,Configuring the FreeBSD Kernel]. Note that while this is useful for remote diagnostics, it is also dangerous if a spurious BREAK is generated on the serial port. Refer to man:ddb[4] and man:ddb[8] for more information about the kernel debugger.
[.programlisting]
....
options BREAK_TO_DEBUGGER
options DDB
....
diff --git a/documentation/content/en/books/handbook/usb-device-mode/_index.adoc b/documentation/content/en/books/handbook/usb-device-mode/_index.adoc
index a0a2d2baba..938425a6b0 100644
--- a/documentation/content/en/books/handbook/usb-device-mode/_index.adoc
+++ b/documentation/content/en/books/handbook/usb-device-mode/_index.adoc
@@ -1,260 +1,261 @@
---
title: Chapter 26. USB Device Mode / USB OTG
part: Part III. System Administration
prev: books/handbook/dtrace
next: books/handbook/partiv
+description: This chapter covers the use of USB Device Mode and USB On The Go (USB OTG) in FreeBSD
---
[[usb-device-mode]]
= USB Device Mode / USB OTG
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 26
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../../images/books/handbook/usb-device-mode/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/books/handbook/usb-device-mode/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/books/handbook/usb-device-mode/
endif::[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[usb-device-mode-synopsis]]
== Synopsis
This chapter covers the use of USB Device Mode and USB On The Go (USB OTG) in FreeBSD. This includes virtual serial consoles, virtual network interfaces, and virtual USB drives.
When running on hardware that supports USB device mode or USB OTG, like that built into many embedded boards, the FreeBSD USB stack can run in _device mode_. Device mode makes it possible for the computer to present itself as different kinds of USB device classes, including serial ports, network adapters, and mass storage, or a combination thereof. A USB host like a laptop or desktop computer is able to access them just like physical USB devices. Device mode is sometimes called the "USB gadget mode".
There are two basic ways the hardware can provide the device mode functionality: with a separate "client port", which only supports the device mode, and with a USB OTG port, which can provide both device and host mode. For USB OTG ports, the USB stack switches between host-side and device-side automatically, depending on what is connected to the port. Connecting a USB device like a memory stick to the port causes FreeBSD to switch to host mode. Connecting a USB host like a computer causes FreeBSD to switch to device mode. Single purpose "client ports" always work in device mode.
What FreeBSD presents to the USB host depends on the `hw.usb.template` sysctl. Some templates provide a single device, such as a serial terminal; others provide multiple ones, which can all be used at the same time. An example is the template 10, which provides a mass storage device, a serial console, and a network interface. See man:usb_template[4] for the list of available values.
Note that in some cases, depending on the hardware and the host's operating system, the host may not notice the configuration change until the device is physically disconnected and reconnected, or the host is forced to rescan the USB bus in a system-specific way. When FreeBSD is running on the host, man:usbconfig[8] `reset` can be used. This also must be done after loading [.filename]#usb_template.ko# if the USB host was already connected to the USB OTG socket.
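For example, assuming the device shows up as _ugen0.2_ in the output of running `usbconfig` without arguments, it can be reset with:
[source,shell]
....
# usbconfig -d ugen0.2 reset
....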
After reading this chapter, you will know:
* How to set up USB Device Mode functionality on FreeBSD.
* How to configure the virtual serial port on FreeBSD.
* How to connect to the virtual serial port from various operating systems.
* How to configure FreeBSD to provide a virtual USB network interface.
* How to configure FreeBSD to provide a virtual USB storage device.
[[usb-device-mode-terminals]]
== USB Virtual Serial Ports
=== Configuring USB Device Mode Serial Ports
Virtual serial port support is provided by templates number 3, 8, and 10. Note that template 3 works with Microsoft Windows 10 without the need for special drivers and INF files. Other host operating systems work with all three templates. Both man:usb_template[4] and man:umodem[4] kernel modules must be loaded.
To enable USB device mode serial ports, add these lines to [.filename]#/etc/ttys#:
[.programlisting]
....
ttyU0 "/usr/libexec/getty 3wire" vt100 onifconsole secure
ttyU1 "/usr/libexec/getty 3wire" vt100 onifconsole secure
....
Then add these lines to [.filename]#/etc/devd.conf#:
[.programlisting]
....
notify 100 {
        match "system" "DEVFS";
        match "subsystem" "CDEV";
        match "type" "CREATE";
        match "cdev" "ttyU[0-9]+";
        action "/sbin/init q";
};
....
Reload the configuration if man:devd[8] is already running:
[source,shell]
....
# service devd restart
....
Make sure the necessary modules are loaded and the correct template is set at boot by adding these lines to [.filename]#/boot/loader.conf#, creating it if it does not already exist:
[.programlisting]
....
umodem_load="YES"
hw.usb.template=3
....
To load the module and set the template without rebooting use:
[source,shell]
....
# kldload umodem
# sysctl hw.usb.template=3
....
=== Connecting to USB Device Mode Serial Ports from FreeBSD
To connect to a board configured to provide USB device mode serial ports, connect the USB host, such as a laptop, to the board's USB OTG or USB client port. Use `pstat -t` on the host to list the terminal lines. Near the end of the list you should see a USB serial port, for example "ttyU0". To open the connection, use:
[source,shell]
....
# cu -l /dev/ttyU0
....
After pressing the kbd:[Enter] key a few times you will see a login prompt.
=== Connecting to USB Device Mode Serial Ports from macOS
To connect to a board configured to provide USB device mode serial ports, connect the USB host, such as a laptop, to the board's USB OTG or USB client port. To open the connection, use:
[source,shell]
....
# cu -l /dev/cu.usbmodemFreeBSD1
....
=== Connecting to USB Device Mode Serial Ports from Linux
To connect to a board configured to provide USB device mode serial ports, connect the USB host, such as a laptop, to the board's USB OTG or USB client port. To open the connection, use:
[source,shell]
....
# minicom -D /dev/ttyACM0
....
=== Connecting to USB Device Mode Serial Ports from Microsoft Windows 10
To connect to a board configured to provide USB device mode serial ports, connect the USB host, such as a laptop, to the board's USB OTG or USB client port. To open a connection you will need a serial terminal program, such as PuTTY. To check the COM port name used by Windows, run Device Manager and expand "Ports (COM & LPT)". You will see a name similar to "USB Serial Device (COM4)". Run the serial terminal program of your choice, for example PuTTY. In the PuTTY dialog set "Connection type" to "Serial", type the COM port name obtained from Device Manager in the "Serial line" dialog box and click Open.
[[usb-device-mode-network]]
== USB Device Mode Network Interfaces
Virtual network interface support is provided by templates number 1, 8, and 10. Note that none of them works with Microsoft Windows. Other host operating systems work with all three templates. Both man:usb_template[4] and man:if_cdce[4] kernel modules must be loaded.
Make sure the necessary modules are loaded and the correct template is set at boot by adding these lines to [.filename]#/boot/loader.conf#, creating it if it does not already exist:
[.programlisting]
....
if_cdce_load="YES"
hw.usb.template=1
....
To load the module and set the template without rebooting use:
[source,shell]
....
# kldload if_cdce
# sysctl hw.usb.template=1
....
[[usb-device-mode-storage]]
== USB Virtual Storage Device
[NOTE]
====
The man:cfumass[4] driver is a USB device mode driver first available in FreeBSD 12.0.
====
Mass Storage target is provided by templates 0 and 10. Both man:usb_template[4] and man:cfumass[4] kernel modules must be loaded. man:cfumass[4] interfaces to the CTL subsystem, the same one that is used for iSCSI or Fibre Channel targets. On the host side, USB Mass Storage initiators can only access a single LUN, LUN 0.
=== Configuring USB Mass Storage Target Using the cfumass Startup Script
The simplest way to set up a read-only USB storage target is to use the [.filename]#cfumass# rc script. To configure it this way, copy the files to be presented to the USB host machine into the `/var/cfumass` directory, and add this line to [.filename]#/etc/rc.conf#:
[.programlisting]
....
cfumass_enable="YES"
....
To configure the target without restarting, run this command:
[source,shell]
....
# service cfumass start
....
Unlike the serial and network functionality, the template should not be set to 0 or 10 in [.filename]#/boot/loader.conf#. This is because the LUN must be set up before setting the template. The cfumass startup script sets the correct template number automatically when started.
=== Configuring USB Mass Storage Using Other Means
The rest of this chapter provides a detailed description of setting up the target without using the cfumass rc file. This is necessary if, for example, one wants to provide a writable LUN.
USB Mass Storage does not require the man:ctld[8] daemon to be running, although it can be used if desired. This is different from iSCSI. Thus, there are two ways to configure the target: man:ctladm[8] or man:ctld[8]. Both require the [.filename]#cfumass.ko# kernel module to be loaded. The module can be loaded manually:
[source,shell]
....
# kldload cfumass
....
If [.filename]#cfumass.ko# has not been built into the kernel, [.filename]#/boot/loader.conf# can be set to load the module at boot:
[.programlisting]
....
cfumass_load="YES"
....
A LUN can be created without the man:ctld[8] daemon:
[source,shell]
....
# ctladm create -b block -o file=/data/target0
....
This presents the contents of the image file [.filename]#/data/target0# as a LUN to the USB host. The file must exist before executing the command. To configure the LUN at system startup, add the command to [.filename]#/etc/rc.local#.
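A minimal [.filename]#/etc/rc.local# sketch, assuming the image file [.filename]#/data/target0# already exists, could be:
[.programlisting]
....
#!/bin/sh
# Create the LUN backed by /data/target0 at boot
ctladm create -b block -o file=/data/target0
....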
man:ctld[8] can also be used to manage LUNs. Create [.filename]#/etc/ctl.conf#, add a line to [.filename]#/etc/rc.conf# to make sure man:ctld[8] is automatically started at boot, and then start the daemon.
This is an example of a simple [.filename]#/etc/ctl.conf# configuration file. Refer to man:ctl.conf[5] for a more complete description of the options.
[.programlisting]
....
target naa.50015178f369f092 {
        lun 0 {
                path /data/target0
                size 4G
        }
}
....
The example creates a single target with a single LUN. The `naa.50015178f369f092` is a device identifier composed of 16 random hexadecimal digits. The `path` line defines the full path to a file or zvol backing the LUN. That file must exist before starting man:ctld[8]. The second line is optional and specifies the size of the LUN.
To make sure the man:ctld[8] daemon is started at boot, add this line to [.filename]#/etc/rc.conf#:
[.programlisting]
....
ctld_enable="YES"
....
To start man:ctld[8] now, run this command:
[source,shell]
....
# service ctld start
....
As the man:ctld[8] daemon is started, it reads [.filename]#/etc/ctl.conf#. If this file is edited after the daemon starts, reload the changes so they take effect immediately:
[source,shell]
....
# service ctld reload
....
diff --git a/documentation/content/en/books/handbook/virtualization/_index.adoc b/documentation/content/en/books/handbook/virtualization/_index.adoc
index 235485fbc4..a36bc3550e 100644
--- a/documentation/content/en/books/handbook/virtualization/_index.adoc
+++ b/documentation/content/en/books/handbook/virtualization/_index.adoc
@@ -1,1069 +1,1070 @@
---
title: Chapter 22. Virtualization
part: Part III. System Administration
prev: books/handbook/filesystems
next: books/handbook/l10n
+description: Virtualization software allows multiple operating systems to run simultaneously on the same computer
---
[[virtualization]]
= Virtualization
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 22
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../../images/books/handbook/virtualization/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/books/handbook/virtualization/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/books/handbook/virtualization/
endif::[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[virtualization-synopsis]]
== Synopsis
Virtualization software allows multiple operating systems to run simultaneously on the same computer. Such software systems for PCs often involve a host operating system which runs the virtualization software and supports any number of guest operating systems.
After reading this chapter, you will know:
* The difference between a host operating system and a guest operating system.
* How to install FreeBSD on an Intel(R)-based Apple(R) Mac(R) computer.
* How to install FreeBSD on Microsoft(R) Windows(R) with Virtual PC.
* How to install FreeBSD as a guest in bhyve.
* How to tune a FreeBSD system for best performance under virtualization.
Before reading this chapter, you should:
* Understand the crossref:basics[basics,basics of UNIX(R) and FreeBSD].
* Know how to crossref:bsdinstall[bsdinstall,install FreeBSD].
* Know how to crossref:advanced-networking[advanced-networking,set up a network connection].
* Know how to crossref:ports[ports,install additional third-party software].
[[virtualization-guest-parallels]]
== FreeBSD as a Guest on Parallels for Mac OS(R) X
Parallels Desktop for Mac(R) is a commercial software product available for Intel(R) based Apple(R) Mac(R) computers running Mac OS(R) 10.4.6 or higher. FreeBSD is a fully supported guest operating system. Once Parallels has been installed on Mac OS(R) X, the user must configure a virtual machine and then install the desired guest operating system.
[[virtualization-guest-parallels-install]]
=== Installing FreeBSD on Parallels/Mac OS(R) X
The first step in installing FreeBSD on Parallels is to create a new virtual machine for installing FreeBSD. Select [.guimenuitem]#FreeBSD# as the menu:Guest OS Type[] when prompted:
image::parallels-freebsd1.png[]
Choose a reasonable amount of disk and memory depending on the plans for this virtual FreeBSD instance. 4GB of disk space and 512MB of RAM work well for most uses of FreeBSD under Parallels:
image::parallels-freebsd2.png[]
image::parallels-freebsd3.png[]
image::parallels-freebsd4.png[]
image::parallels-freebsd5.png[]
Select the type of networking and a network interface:
image::parallels-freebsd6.png[]
image::parallels-freebsd7.png[]
Save and finish the configuration:
image::parallels-freebsd8.png[]
image::parallels-freebsd9.png[]
After the FreeBSD virtual machine has been created, FreeBSD can be installed on it. This is best done with an official FreeBSD CD/DVD or with an ISO image downloaded from an official FTP site. Copy the appropriate ISO image to the local Mac(R) filesystem or insert a CD/DVD in the Mac(R)'s CD-ROM drive. Click on the disc icon in the bottom right corner of the FreeBSD Parallels window. This will bring up a window that can be used to associate the CD-ROM drive in the virtual machine with the ISO file on disk or with the real CD-ROM drive.
image::parallels-freebsd11.png[]
Once this association with the CD-ROM source has been made, reboot the FreeBSD virtual machine by clicking the reboot icon. Parallels will reboot with a special BIOS that first checks if there is a CD-ROM.
image::parallels-freebsd10.png[]
In this case it will find the FreeBSD installation media and begin a normal FreeBSD installation. Perform the installation, but do not attempt to configure Xorg at this time.
image::parallels-freebsd12.png[]
When the installation is finished, reboot into the newly installed FreeBSD virtual machine.
image::parallels-freebsd13.png[]
[[virtualization-guest-parallels-configure]]
=== Configuring FreeBSD on Parallels
After FreeBSD has been successfully installed on Mac OS(R) X with Parallels, there are a number of configuration steps that can be taken to optimize the system for virtualized operation.
[.procedure]
. Set Boot Loader Variables
+
The most important step is to reduce the `kern.hz` tunable to reduce the CPU utilization of FreeBSD under the Parallels environment. This is accomplished by adding the following line to [.filename]#/boot/loader.conf#:
+
[.programlisting]
....
kern.hz=100
....
+
Without this setting, an idle FreeBSD Parallels guest will use roughly 15% of the CPU of a single processor iMac(R). After this change the usage will be closer to 5%.
. Create a New Kernel Configuration File
+
All of the SCSI, FireWire, and USB device drivers can be removed from a custom kernel configuration file. Parallels provides a virtual network adapter used by the man:ed[4] driver, so all network devices except for man:ed[4] and man:miibus[4] can be removed from the kernel.
. Configure Networking
+
The most basic networking setup uses DHCP to connect the virtual machine to the same local area network as the host Mac(R). This can be accomplished by adding `ifconfig_ed0="DHCP"` to [.filename]#/etc/rc.conf#. More advanced networking setups are described in crossref:advanced-networking[advanced-networking,Advanced Networking].
[[virtualization-guest-virtualpc]]
== FreeBSD as a Guest on Virtual PC for Windows(R)
Virtual PC for Windows(R) is a Microsoft(R) software product available for free download. See this website for the http://www.microsoft.com/windows/downloads/virtualpc/sysreq.mspx[system requirements]. Once Virtual PC has been installed on Microsoft(R) Windows(R), the user can configure a virtual machine and then install the desired guest operating system.
[[virtualization-guest-virtualpc-install]]
=== Installing FreeBSD on Virtual PC
The first step in installing FreeBSD on Virtual PC is to create a new virtual machine for installing FreeBSD. Select [.guimenuitem]#Create a virtual machine# when prompted:
image::virtualpc-freebsd1.png[]
image::virtualpc-freebsd2.png[]
Select [.guimenuitem]#Other# as the [.guimenuitem]#Operating system# when prompted:
image::virtualpc-freebsd3.png[]
Then, choose a reasonable amount of disk and memory depending on the plans for this virtual FreeBSD instance. 4GB of disk space and 512MB of RAM work well for most uses of FreeBSD under Virtual PC:
image::virtualpc-freebsd4.png[]
image::virtualpc-freebsd5.png[]
Save and finish the configuration:
image::virtualpc-freebsd6.png[]
Select the FreeBSD virtual machine and click menu:Settings[], then set the type of networking and a network interface:
image::virtualpc-freebsd7.png[]
image::virtualpc-freebsd8.png[]
After the FreeBSD virtual machine has been created, FreeBSD can be installed on it. This is best done with an official FreeBSD CD/DVD or with an ISO image downloaded from an official FTP site. Copy the appropriate ISO image to the local Windows(R) filesystem or insert a CD/DVD in the CD drive, then double click on the FreeBSD virtual machine to boot. Then, click menu:CD[] and choose menu:Capture ISO Image...[] on the Virtual PC window. This will bring up a window where the CD-ROM drive in the virtual machine can be associated with an ISO file on disk or with the real CD-ROM drive.
image::virtualpc-freebsd9.png[]
image::virtualpc-freebsd10.png[]
Once this association with the CD-ROM source has been made, reboot the FreeBSD virtual machine by clicking menu:Action[] and menu:Reset[]. Virtual PC will reboot with a special BIOS that first checks for a CD-ROM.
image::virtualpc-freebsd11.png[]
In this case it will find the FreeBSD installation media and begin a normal FreeBSD installation. Continue with the installation, but do not attempt to configure Xorg at this time.
image::virtualpc-freebsd12.png[]
When the installation is finished, remember to eject the CD/DVD or release the ISO image. Finally, reboot into the newly installed FreeBSD virtual machine.
image::virtualpc-freebsd13.png[]
[[virtualization-guest-virtualpc-configure]]
=== Configuring FreeBSD on Virtual PC
After FreeBSD has been successfully installed on Microsoft(R) Windows(R) with Virtual PC, there are a number of configuration steps that can be taken to optimize the system for virtualized operation.
[.procedure]
. Set Boot Loader Variables
+
The most important step is to reduce the `kern.hz` tunable to reduce the CPU utilization of FreeBSD under the Virtual PC environment. This is accomplished by adding the following line to [.filename]#/boot/loader.conf#:
+
[.programlisting]
....
kern.hz=100
....
+
Without this setting, an idle FreeBSD Virtual PC guest OS will use roughly 40% of the CPU of a single processor computer. After this change, the usage will be closer to 3%.
. Create a New Kernel Configuration File
+
All of the SCSI, FireWire, and USB device drivers can be removed from a custom kernel configuration file. Virtual PC provides a virtual network adapter used by the man:de[4] driver, so all network devices except for man:de[4] and man:miibus[4] can be removed from the kernel.
. Configure Networking
+
The most basic networking setup uses DHCP to connect the virtual machine to the same local area network as the Microsoft(R) Windows(R) host. This can be accomplished by adding `ifconfig_de0="DHCP"` to [.filename]#/etc/rc.conf#. More advanced networking setups are described in crossref:advanced-networking[advanced-networking,Advanced Networking].
[[virtualization-guest-vmware]]
== FreeBSD as a Guest on VMware Fusion for Mac OS(R)
VMware Fusion for Mac(R) is a commercial software product available for Intel(R) based Apple(R) Mac(R) computers running Mac OS(R) 10.4.9 or higher. FreeBSD is a fully supported guest operating system. Once VMware Fusion has been installed on Mac OS(R) X, the user can configure a virtual machine and then install the desired guest operating system.
[[virtualization-guest-vmware-install]]
=== Installing FreeBSD on VMware Fusion
The first step is to start VMware Fusion which will load the Virtual Machine Library. Click [.guimenuitem]#New# to create the virtual machine:
image::vmware-freebsd01.png[]
This will load the New Virtual Machine Assistant. Click [.guimenuitem]#Continue# to proceed:
image::vmware-freebsd02.png[]
Select [.guimenuitem]#Other# as the [.guimenuitem]#Operating System# and either [.guimenuitem]#FreeBSD# or [.guimenuitem]#FreeBSD 64-bit#, as the menu:Version[] when prompted:
image::vmware-freebsd03.png[]
Choose the name of the virtual machine and the directory where it should be saved:
image::vmware-freebsd04.png[]
Choose the size of the Virtual Hard Disk for the virtual machine:
image::vmware-freebsd05.png[]
Choose the method to install the virtual machine, either from an ISO image or from a CD/DVD:
image::vmware-freebsd06.png[]
Click [.guimenuitem]#Finish# and the virtual machine will boot:
image::vmware-freebsd07.png[]
Install FreeBSD as usual:
image::vmware-freebsd08.png[]
Once the install is complete, the settings of the virtual machine can be modified, such as memory usage:
[NOTE]
====
The System Hardware settings of the virtual machine cannot be modified while the virtual machine is running.
====
image::vmware-freebsd09.png[]
The number of CPUs the virtual machine will have access to:
image::vmware-freebsd10.png[]
The status of the CD-ROM device. Normally the CD/DVD/ISO is disconnected from the virtual machine when it is no longer needed.
image::vmware-freebsd11.png[]
The last thing to change is how the virtual machine will connect to the network. To allow connections to the virtual machine from other machines besides the host, choose [.guimenuitem]#Connect directly to the physical network (Bridged)#. Otherwise, [.guimenuitem]#Share the host's internet connection (NAT)# is preferred so that the virtual machine can have access to the Internet, but the network cannot access the virtual machine.
image::vmware-freebsd12.png[]
After modifying the settings, boot the newly installed FreeBSD virtual machine.
[[virtualization-guest-vmware-configure]]
=== Configuring FreeBSD on VMware Fusion
After FreeBSD has been successfully installed on Mac OS(R) X with VMware Fusion, there are a number of configuration steps that can be taken to optimize the system for virtualized operation.
[.procedure]
. Set Boot Loader Variables
+
The most important step is to reduce the `kern.hz` tunable to reduce the CPU utilization of FreeBSD under the VMware Fusion environment. This is accomplished by adding the following line to [.filename]#/boot/loader.conf#:
+
[.programlisting]
....
kern.hz=100
....
+
Without this setting, an idle FreeBSD VMware Fusion guest will use roughly 15% of the CPU of a single processor iMac(R). After this change, the usage will be closer to 5%.
. Create a New Kernel Configuration File
+
All of the FireWire, and USB device drivers can be removed from a custom kernel configuration file. VMware Fusion provides a virtual network adapter used by the man:em[4] driver, so all network devices except for man:em[4] can be removed from the kernel.
. Configure Networking
+
The most basic networking setup uses DHCP to connect the virtual machine to the same local area network as the host Mac(R). This can be accomplished by adding `ifconfig_em0="DHCP"` to [.filename]#/etc/rc.conf#. More advanced networking setups are described in crossref:advanced-networking[advanced-networking,Advanced Networking].
[[virtualization-guest-virtualbox]]
== FreeBSD as a Guest on VirtualBox(TM)
FreeBSD works well as a guest in VirtualBox(TM). The virtualization software is available for most common operating systems, including FreeBSD itself.
The VirtualBox(TM) guest additions provide support for:
* Clipboard sharing.
* Mouse pointer integration.
* Host time synchronization.
* Window scaling.
* Seamless mode.
[NOTE]
====
These commands are run in the FreeBSD guest.
====
First, install the package:emulators/virtualbox-ose-additions[] package or port in the FreeBSD guest. To build and install from the ports collection, run:
[source,shell]
....
# cd /usr/ports/emulators/virtualbox-ose-additions && make install clean
....
Add these lines to [.filename]#/etc/rc.conf#:
[.programlisting]
....
vboxguest_enable="YES"
vboxservice_enable="YES"
....
If man:ntpd[8] or man:ntpdate[8] is used, disable host time synchronization:
[.programlisting]
....
vboxservice_flags="--disable-timesync"
....
Xorg will automatically recognize the `vboxvideo` driver. It can also be manually entered in [.filename]#/etc/X11/xorg.conf#:
[.programlisting]
....
Section "Device"
Identifier "Card0"
Driver "vboxvideo"
VendorName "InnoTek Systemberatung GmbH"
BoardName "VirtualBox Graphics Adapter"
EndSection
....
To use the `vboxmouse` driver, adjust the mouse section in [.filename]#/etc/X11/xorg.conf#:
[.programlisting]
....
Section "InputDevice"
Identifier "Mouse0"
Driver "vboxmouse"
EndSection
....
HAL users should create the following [.filename]#/usr/local/etc/hal/fdi/policy/90-vboxguest.fdi# or copy it from [.filename]#/usr/local/share/hal/fdi/policy/10osvendor/90-vboxguest.fdi#:
[.programlisting]
....
<?xml version="1.0" encoding="utf-8"?>
<!--
# Sun VirtualBox
# Hal driver description for the vboxmouse driver
# $Id: chapter.xml,v 1.33 2012-03-17 04:53:52 eadler Exp $
Copyright (C) 2008-2009 Sun Microsystems, Inc.
This file is part of VirtualBox Open Source Edition (OSE, as
available from http://www.virtualbox.org. This file is free software;
you can redistribute it and/or modify it under the terms of the GNU
General Public License (GPL) as published by the Free Software
Foundation, in version 2 as it comes in the "COPYING" file of the
VirtualBox OSE distribution. VirtualBox OSE is distributed in the
hope that it will be useful, but WITHOUT ANY WARRANTY of any kind.
Please contact Sun Microsystems, Inc., 4150 Network Circle, Santa
Clara, CA 95054 USA or visit http://www.sun.com if you need
additional information or have any questions.
-->
<deviceinfo version="0.2">
  <device>
    <match key="info.subsystem" string="pci">
      <match key="info.product" string="VirtualBox guest Service">
        <append key="info.capabilities" type="strlist">input</append>
        <append key="info.capabilities" type="strlist">input.mouse</append>
        <merge key="input.x11_driver" type="string">vboxmouse</merge>
        <merge key="input.device" type="string">/dev/vboxguest</merge>
      </match>
    </match>
  </device>
</deviceinfo>
....
Shared folders for file transfers between host and VM are accessible by mounting them using `mount_vboxvfs`. A shared folder can be created on the host using the VirtualBox GUI or via `vboxmanage`. For example, to create a shared folder called _myshare_ under [.filename]#/mnt/bsdboxshare# for the VM named _BSDBox_, run:
[source,shell]
....
# vboxmanage sharedfolder add 'BSDBox' --name myshare --hostpath /mnt/bsdboxshare
....
Note that the shared folder name must not contain spaces. Mount the shared folder from within the guest system like this:
[source,shell]
....
# mount_vboxvfs -w myshare /mnt
....
[[virtualization-host-virtualbox]]
== FreeBSD as a Host with VirtualBox(TM)
VirtualBox(TM) is an actively developed, complete virtualization package that is available for most operating systems, including Windows(R), Mac OS(R), Linux(R), and FreeBSD. It is equally capable of running Windows(R) or UNIX(R)-like guests. It is released as open source software, but with closed-source components available in a separate extension pack. These components include support for USB 2.0 devices. More information may be found on the http://www.virtualbox.org/wiki/Downloads[Downloads page of the VirtualBox(TM) wiki]. Currently, these extensions are not available for FreeBSD.
[[virtualization-virtualbox-install]]
=== Installing VirtualBox(TM)
VirtualBox(TM) is available as a FreeBSD package or port in package:emulators/virtualbox-ose[]. The port can be installed using these commands:
[source,shell]
....
# cd /usr/ports/emulators/virtualbox-ose
# make install clean
....
One useful option in the port's configuration menu is the `GuestAdditions` suite of programs. These provide a number of useful features in guest operating systems, like mouse pointer integration (allowing the mouse to be shared between host and guest without the need to press a special keyboard shortcut to switch) and faster video rendering, especially in Windows(R) guests. The guest additions are available in the menu:Devices[] menu, after the installation of the guest is finished.
A few configuration changes are needed before VirtualBox(TM) is started for the first time. The port installs a kernel module in [.filename]#/boot/modules# which must be loaded into the running kernel:
[source,shell]
....
# kldload vboxdrv
....
To ensure the module is always loaded after a reboot, add this line to [.filename]#/boot/loader.conf#:
[.programlisting]
....
vboxdrv_load="YES"
....
To use the kernel modules that allow bridged or host-only networking, add this line to [.filename]#/etc/rc.conf# and reboot the computer:
[.programlisting]
....
vboxnet_enable="YES"
....
The `vboxusers` group is created during installation of VirtualBox(TM). All users that need access to VirtualBox(TM) will have to be added as members of this group. `pw` can be used to add new members:
[source,shell]
....
# pw groupmod vboxusers -m yourusername
....
The default permissions for [.filename]#/dev/vboxnetctl# are restrictive and need to be changed for bridged networking:
[source,shell]
....
# chown root:vboxusers /dev/vboxnetctl
# chmod 0660 /dev/vboxnetctl
....
To make this permissions change permanent, add these lines to [.filename]#/etc/devfs.conf#:
[.programlisting]
....
own vboxnetctl root:vboxusers
perm vboxnetctl 0660
....
To launch VirtualBox(TM), type from an Xorg session:
[source,shell]
....
% VirtualBox
....
For more information on configuring and using VirtualBox(TM), refer to the http://www.virtualbox.org[official website]. For FreeBSD-specific information and troubleshooting instructions, refer to the http://wiki.FreeBSD.org/VirtualBox[relevant page in the FreeBSD wiki].
[[virtualization-virtualbox-usb-support]]
=== VirtualBox(TM) USB Support
VirtualBox(TM) can be configured to pass USB devices through to the guest operating system. The host controller of the OSE version is limited to emulating USB 1.1 devices until the extension pack supporting USB 2.0 and 3.0 devices becomes available on FreeBSD.
For VirtualBox(TM) to be aware of USB devices attached to the machine, the user needs to be a member of the `operator` group.
[source,shell]
....
# pw groupmod operator -m yourusername
....
Then, add the following to [.filename]#/etc/devfs.rules#, or create this file if it does not exist yet:
[.programlisting]
....
[system=10]
add path 'usb/*' mode 0660 group operator
....
To load these new rules, add the following to [.filename]#/etc/rc.conf#:
[.programlisting]
....
devfs_system_ruleset="system"
....
Then, restart devfs:
[source,shell]
....
# service devfs restart
....
Restart the login session and VirtualBox(TM) for these changes to take effect, and create USB filters as necessary.
[[virtualization-virtualbox-host-dvd-cd-access]]
=== VirtualBox(TM) Host DVD/CD Access
Access to the host DVD/CD drives from guests is achieved through the sharing of the physical drives. Within VirtualBox(TM), this is set up from the Storage window in the Settings of the virtual machine. If needed, create an empty IDE CD/DVD device first. Then choose the Host Drive from the popup menu for the virtual CD/DVD drive selection. A checkbox labeled `Passthrough` will appear. This allows the virtual machine to use the hardware directly. For example, audio CDs or the burner will only function if this option is selected.
HAL needs to run for VirtualBox(TM) DVD/CD functions to work, so enable it in [.filename]#/etc/rc.conf# and start it if it is not already running:
[.programlisting]
....
hald_enable="YES"
....
[source,shell]
....
# service hald start
....
In order for users to be able to use VirtualBox(TM) DVD/CD functions, they need access to [.filename]#/dev/xpt0#, [.filename]#/dev/cdN#, and [.filename]#/dev/passN#. This is usually achieved by making the user a member of `operator`. Permissions to these devices have to be corrected by adding these lines to [.filename]#/etc/devfs.conf#:
[.programlisting]
....
perm cd* 0660
perm xpt0 0660
perm pass* 0660
....
[source,shell]
....
# service devfs restart
....
[[virtualization-host-bhyve]]
== FreeBSD as a Host with bhyve
The BSD-licensed bhyve hypervisor became part of the base system with FreeBSD 10.0-RELEASE. This hypervisor supports a number of guests, including FreeBSD, OpenBSD, and many Linux(R) distributions. By default, bhyve provides access to a serial console and does not emulate a graphical console. Virtualization offload features of newer CPUs are used to avoid the legacy methods of translating instructions and manually managing memory mappings.
The bhyve design requires a processor that supports Intel(R) Extended Page Tables (EPT) or AMD(R) Rapid Virtualization Indexing (RVI) or Nested Page Tables (NPT). Hosting Linux(R) guests or FreeBSD guests with more than one vCPU requires VMX unrestricted mode support (UG). Most newer processors, specifically the Intel(R) Core(TM) i3/i5/i7 and Intel(R) Xeon(TM) E3/E5/E7, support these features. UG support was introduced with Intel's Westmere micro-architecture. For a complete list of Intel(R) processors that support EPT, refer to https://ark.intel.com/content/www/us/en/ark/search/featurefilter.html?productType=873&0_ExtendedPageTables=True[]. RVI is found on the third generation and later of the AMD Opteron(TM) (Barcelona) processors. The easiest way to tell if a processor supports bhyve is to run `dmesg` or look in [.filename]#/var/run/dmesg.boot# for the `POPCNT` processor feature flag on the `Features2` line for AMD(R) processors or `EPT` and `UG` on the `VT-x` line for Intel(R) processors.
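For example, these commands search the boot messages for those flags; the first applies to AMD(R) processors and the second to Intel(R) processors:
[source,shell]
....
# grep Features2 /var/run/dmesg.boot
# grep VT-x /var/run/dmesg.boot
....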
[[virtualization-bhyve-prep]]
=== Preparing the Host
The first step to creating a virtual machine in bhyve is configuring the host system. First, load the bhyve kernel module:
[source,shell]
....
# kldload vmm
....
Then, create a [.filename]#tap# interface for the network device in the virtual machine to attach to. In order for the network device to participate in the network, also create a bridge interface containing the [.filename]#tap# interface and the physical interface as members. In this example, the physical interface is _igb0_:
[source,shell]
....
# ifconfig tap0 create
# sysctl net.link.tap.up_on_open=1
net.link.tap.up_on_open: 0 -> 1
# ifconfig bridge0 create
# ifconfig bridge0 addm igb0 addm tap0
# ifconfig bridge0 up
....
[[virtualization-bhyve-freebsd]]
=== Creating a FreeBSD Guest
Create a file to use as the virtual disk for the guest machine. Specify the size and name of the virtual disk:
[source,shell]
....
# truncate -s 16G guest.img
....
Download an installation image of FreeBSD to install:
[source,shell]
....
# fetch ftp://ftp.freebsd.org/pub/FreeBSD/releases/ISO-IMAGES/12.2/FreeBSD-12.2-RELEASE-amd64-bootonly.iso
FreeBSD-12.2-RELEASE-amd64-bootonly.iso 100% of 230 MB 570 kBps 06m17s
....
FreeBSD comes with an example script for running a virtual machine in bhyve. The script will start the virtual machine and run it in a loop, so it will automatically restart if it crashes. The script takes a number of options to control the configuration of the machine: `-c` controls the number of virtual CPUs, `-m` limits the amount of memory available to the guest, `-t` defines which [.filename]#tap# device to use, `-d` indicates which disk image to use, `-i` tells bhyve to boot from the CD image instead of the disk, and `-I` defines which CD image to use. The last parameter is the name of the virtual machine, used to track the running machines. This example starts the virtual machine in installation mode:
[source,shell]
....
# sh /usr/share/examples/bhyve/vmrun.sh -c 1 -m 1024M -t tap0 -d guest.img -i -I FreeBSD-12.2-RELEASE-amd64-bootonly.iso guestname
....
The virtual machine will boot and start the installer. After installing a system in the virtual machine, when the system asks about dropping in to a shell at the end of the installation, choose btn:[Yes].
Reboot the virtual machine. While rebooting the virtual machine causes bhyve to exit, the [.filename]#vmrun.sh# script runs `bhyve` in a loop and will automatically restart it. When this happens, choose the reboot option from the boot loader menu in order to escape the loop. Now the guest can be started from the virtual disk:
[source,shell]
....
# sh /usr/share/examples/bhyve/vmrun.sh -c 4 -m 1024M -t tap0 -d guest.img guestname
....
[[virtualization-bhyve-linux]]
=== Creating a Linux(R) Guest
In order to boot operating systems other than FreeBSD, the package:sysutils/grub2-bhyve[] port must be first installed.
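Assuming a binary package is available, it can be installed with man:pkg[8]:
[source,shell]
....
# pkg install grub2-bhyve
....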
Next, create a file to use as the virtual disk for the guest machine:
[source,shell]
....
# truncate -s 16G linux.img
....
Starting a virtual machine with bhyve is a two step process. First a kernel must be loaded, then the guest can be started. The Linux(R) kernel is loaded with package:sysutils/grub2-bhyve[]. Create a [.filename]#device.map# that grub will use to map the virtual devices to the files on the host system:
[.programlisting]
....
(hd0) ./linux.img
(cd0) ./somelinux.iso
....
Use package:sysutils/grub2-bhyve[] to load the Linux(R) kernel from the ISO image:
[source,shell]
....
# grub-bhyve -m device.map -r cd0 -M 1024M linuxguest
....
This will start grub. If the installation CD contains a [.filename]#grub.cfg#, a menu will be displayed. If not, the `vmlinuz` and `initrd` files must be located and loaded manually:
[source,shell]
....
grub> ls
(hd0) (cd0) (cd0,msdos1) (host)
grub> ls (cd0)/isolinux
boot.cat boot.msg grub.conf initrd.img isolinux.bin isolinux.cfg memtest
splash.jpg TRANS.TBL vesamenu.c32 vmlinuz
grub> linux (cd0)/isolinux/vmlinuz
grub> initrd (cd0)/isolinux/initrd.img
grub> boot
....
Now that the Linux(R) kernel is loaded, the guest can be started:
[source,shell]
....
# bhyve -A -H -P -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap0 -s 3:0,virtio-blk,./linux.img \
-s 4:0,ahci-cd,./somelinux.iso -l com1,stdio -c 4 -m 1024M linuxguest
....
The system will boot and start the installer. After installing a system in the virtual machine, reboot the virtual machine. This will cause bhyve to exit. The instance of the virtual machine needs to be destroyed before it can be started again:
[source,shell]
....
# bhyvectl --destroy --vm=linuxguest
....
Now the guest can be started directly from the virtual disk. Load the kernel:
[source,shell]
....
# grub-bhyve -m device.map -r hd0,msdos1 -M 1024M linuxguest
grub> ls
(hd0) (hd0,msdos2) (hd0,msdos1) (cd0) (cd0,msdos1) (host)
(lvm/VolGroup-lv_swap) (lvm/VolGroup-lv_root)
grub> ls (hd0,msdos1)/
lost+found/ grub/ efi/ System.map-2.6.32-431.el6.x86_64 config-2.6.32-431.el6.x
86_64 symvers-2.6.32-431.el6.x86_64.gz vmlinuz-2.6.32-431.el6.x86_64
initramfs-2.6.32-431.el6.x86_64.img
grub> linux (hd0,msdos1)/vmlinuz-2.6.32-431.el6.x86_64 root=/dev/mapper/VolGroup-lv_root
grub> initrd (hd0,msdos1)/initramfs-2.6.32-431.el6.x86_64.img
grub> boot
....
Boot the virtual machine:
[source,shell]
....
# bhyve -A -H -P -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap0 \
-s 3:0,virtio-blk,./linux.img -l com1,stdio -c 4 -m 1024M linuxguest
....
Linux(R) will now boot in the virtual machine and eventually present you with the login prompt. Log in and use the virtual machine. When you are finished, reboot the virtual machine to exit bhyve. Destroy the virtual machine instance:
[source,shell]
....
# bhyvectl --destroy --vm=linuxguest
....
[[virtualization-bhyve-uefi]]
=== Booting bhyve Virtual Machines with UEFI Firmware
In addition to bhyveload and grub-bhyve, the bhyve hypervisor can also boot virtual machines using the UEFI userspace firmware. This option may support guest operating systems that are not supported by the other loaders.
In order to make use of the UEFI support in bhyve, first obtain the UEFI firmware images. This can be done by installing the package:sysutils/bhyve-firmware[] port or package.
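For example, assuming a binary package is available:
[source,shell]
....
# pkg install bhyve-firmware
....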
With the firmware in place, add the flags `-l bootrom,_/path/to/firmware_` to your bhyve command line. The actual bhyve command may look like this:
[source,shell]
....
# bhyve -AHP -s 0:0,hostbridge -s 1:0,lpc \
-s 2:0,virtio-net,tap1 -s 3:0,virtio-blk,./disk.img \
-s 4:0,ahci-cd,./install.iso -c 4 -m 1024M \
-l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
guest
....
package:sysutils/bhyve-firmware[] also contains a CSM-enabled firmware, to boot guests with no UEFI support in legacy BIOS mode:
[source,shell]
....
# bhyve -AHP -s 0:0,hostbridge -s 1:0,lpc \
-s 2:0,virtio-net,tap1 -s 3:0,virtio-blk,./disk.img \
-s 4:0,ahci-cd,./install.iso -c 4 -m 1024M \
-l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI_CSM.fd \
guest
....
[[virtualization-bhyve-framebuffer]]
=== Graphical UEFI Framebuffer for bhyve Guests
The UEFI firmware support is particularly useful with predominantly graphical guest operating systems such as Microsoft Windows(R).
Support for the UEFI-GOP framebuffer may also be enabled with the `-s 29,fbuf,tcp=_0.0.0.0:5900_` flags. The framebuffer resolution may be configured with `w=_800_` and `h=_600_`, and bhyve can be instructed to wait for a VNC connection before booting the guest by adding `wait`. The framebuffer may be accessed from the host or over the network via the VNC protocol. Additionally, `-s 30,xhci,tablet` can be added to achieve precise mouse cursor synchronization with the host.
The resulting bhyve command would look like this:
[source,shell]
....
# bhyve -AHP -s 0:0,hostbridge -s 31:0,lpc \
-s 2:0,virtio-net,tap1 -s 3:0,virtio-blk,./disk.img \
-s 4:0,ahci-cd,./install.iso -c 4 -m 1024M \
-s 29,fbuf,tcp=0.0.0.0:5900,w=800,h=600,wait \
-s 30,xhci,tablet \
-l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
guest
....
Note that in BIOS emulation mode, the framebuffer will cease receiving updates once control is passed from the firmware to the guest operating system.
[[virtualization-bhyve-zfs]]
=== Using ZFS with bhyve Guests
If ZFS is available on the host machine, using ZFS volumes instead of disk image files can provide significant performance benefits for the guest VMs. A ZFS volume can be created by:
[source,shell]
....
# zfs create -V16G -o volmode=dev zroot/linuxdisk0
....
When starting the VM, specify the ZFS volume as the disk drive:
[source,shell]
....
# bhyve -A -H -P -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap0 -s 3:0,virtio-blk,/dev/zvol/zroot/linuxdisk0 \
-l com1,stdio -c 4 -m 1024M linuxguest
....
[[virtualization-bhyve-nmdm]]
=== Virtual Machine Consoles
It is advantageous to wrap the bhyve console in a session management tool such as package:sysutils/tmux[] or package:sysutils/screen[] in order to detach and reattach to the console. It is also possible to have the console of bhyve be a null modem device that can be accessed with `cu`. To do this, load the [.filename]#nmdm# kernel module and replace `-l com1,stdio` with `-l com1,/dev/nmdm0A`. The [.filename]#/dev/nmdm# devices are created automatically as needed, where each is a pair, corresponding to the two ends of the null modem cable ([.filename]#/dev/nmdm0A# and [.filename]#/dev/nmdm0B#). See man:nmdm[4] for more information.
[source,shell]
....
# kldload nmdm
# bhyve -A -H -P -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap0 -s 3:0,virtio-blk,./linux.img \
-l com1,/dev/nmdm0A -c 4 -m 1024M linuxguest
# cu -l /dev/nmdm0B
Connected
Ubuntu 13.10 handbook ttyS0
handbook login:
....
[[virtualization-bhyve-managing]]
=== Managing Virtual Machines
A device node is created in [.filename]#/dev/vmm# for each virtual machine. This allows the administrator to easily see a list of the running virtual machines:
[source,shell]
....
# ls -al /dev/vmm
total 1
dr-xr-xr-x 2 root wheel 512 Mar 17 12:19 ./
dr-xr-xr-x 14 root wheel 512 Mar 17 06:38 ../
crw------- 1 root wheel 0x1a2 Mar 17 12:20 guestname
crw------- 1 root wheel 0x19f Mar 17 12:19 linuxguest
crw------- 1 root wheel 0x1a1 Mar 17 12:19 otherguest
....
A specified virtual machine can be destroyed using `bhyvectl`:
[source,shell]
....
# bhyvectl --destroy --vm=guestname
....
[[virtualization-bhyve-onboot]]
=== Persistent Configuration
In order to configure the system to start bhyve guests at boot time, the following configurations must be made in the specified files:
[.procedure]
. [.filename]#/etc/sysctl.conf#
+
[.programlisting]
....
net.link.tap.up_on_open=1
....
. [.filename]#/etc/rc.conf#
+
[.programlisting]
....
cloned_interfaces="bridge0 tap0"
ifconfig_bridge0="addm igb0 addm tap0"
kld_list="nmdm vmm"
....
[[virtualization-host-xen]]
== FreeBSD as a Xen(TM)-Host
Xen(TM) is a GPLv2-licensed https://en.wikipedia.org/wiki/Hypervisor#Classification[type 1 hypervisor] for Intel(R) and ARM(R) architectures. FreeBSD has included i386(TM) and AMD(R) 64-Bit https://wiki.xenproject.org/wiki/DomU[DomU] and https://en.wikipedia.org/wiki/Amazon_Elastic_Compute_Cloud[Amazon EC2] unprivileged domain (virtual machine) support since FreeBSD 8.0 and includes Dom0 control domain (host) support in FreeBSD 11.0. Support for para-virtualized (PV) domains has been removed from FreeBSD 11 in favor of hardware virtualized (HVM) domains, which provide better performance.
Xen(TM) is a bare-metal hypervisor, which means that it is the first program loaded after the BIOS. A special privileged guest called the Domain-0 (`Dom0` for short) is then started. The Dom0 uses its special privileges to directly access the underlying physical hardware, making it a high-performance solution. It is able to access the disk controllers and network adapters directly. The Xen(TM) management tools to manage and control the Xen(TM) hypervisor are also used by the Dom0 to create, list, and destroy VMs. Dom0 provides virtual disks and networking for unprivileged domains, often called `DomU`. Xen(TM) Dom0 can be compared to the service console of other hypervisor solutions, while the DomU is where individual guest VMs are run.
Xen(TM) can migrate VMs between different Xen(TM) servers. When the two Xen(TM) hosts share the same underlying storage, the migration can be done without having to shut the VM down first. Instead, the migration is performed live while the DomU is running, with no need to restart it or plan downtime. This is useful in maintenance scenarios or upgrade windows to ensure that the services provided by the DomU remain available. Many more features of Xen(TM) are listed on the https://wiki.xenproject.org/wiki/Category:Overview[Xen Wiki Overview page]. Note that not all features are supported on FreeBSD yet.
[[virtualization-host-xen-requirements]]
=== Hardware Requirements for Xen(TM) Dom0
To run the Xen(TM) hypervisor on a host, certain hardware functionality is required. Hardware virtualized domains require Extended Page Table (http://en.wikipedia.org/wiki/Extended_Page_Table[EPT]) and Input/Output Memory Management Unit (http://en.wikipedia.org/wiki/List_of_IOMMU-supporting_hardware[IOMMU]) support in the host processor.
[NOTE]
====
In order to run a FreeBSD Xen(TM) Dom0, the system must be booted using legacy boot (BIOS).
====
[[virtualization-host-xen-dom0-setup]]
=== Xen(TM) Dom0 Control Domain Setup
Users of FreeBSD 11 should install the package:emulators/xen-kernel47[] and package:sysutils/xen-tools47[] packages, which are based on Xen version 4.7. Systems running FreeBSD 12.0 or newer can use Xen 4.11 provided by package:emulators/xen-kernel411[] and package:sysutils/xen-tools411[].
Configuration files must be edited to prepare the host for the Dom0 integration after the Xen packages are installed. An entry to [.filename]#/etc/sysctl.conf# disables the limit on how many pages of memory are allowed to be wired. Otherwise, DomU VMs with higher memory requirements will not run.
[source,shell]
....
# echo 'vm.max_wired=-1' >> /etc/sysctl.conf
....
Another memory-related setting involves changing [.filename]#/etc/login.conf#, setting the `memorylocked` option to `unlimited`. Otherwise, creating DomU domains may fail with `Cannot allocate memory` errors. After making the change to [.filename]#/etc/login.conf#, run `cap_mkdb` to update the capability database. See crossref:security[security-resourcelimits,"Resource Limits"] for details.
[source,shell]
....
# sed -i '' -e 's/memorylocked=64K/memorylocked=unlimited/' /etc/login.conf
# cap_mkdb /etc/login.conf
....
Add an entry for the Xen(TM) console to [.filename]#/etc/ttys#:
[source,shell]
....
# echo 'xc0 "/usr/libexec/getty Pc" xterm onifconsole secure' >> /etc/ttys
....
Selecting a Xen(TM) kernel in [.filename]#/boot/loader.conf# activates the Dom0. Xen(TM) also requires resources like CPU and memory from the host machine for itself and other DomU domains. How much CPU and memory depends on the individual requirements and hardware capabilities. In this example, 8 GB of memory and 4 virtual CPUs are made available for the Dom0. The serial console is also activated and logging options are defined.
The following command is used for Xen 4.7 packages:
[source,shell]
....
# echo 'hw.pci.mcfg=0' >> /boot/loader.conf
# echo 'if_tap_load="YES"' >> /boot/loader.conf
# echo 'xen_kernel="/boot/xen"' >> /boot/loader.conf
# echo 'xen_cmdline="dom0_mem=8192M dom0_max_vcpus=4 dom0pvh=1 console=com1,vga com1=115200,8n1 guest_loglvl=all loglvl=all"' >> /boot/loader.conf
....
For Xen versions 4.11 and higher, the following command should be used instead:
[source,shell]
....
# echo 'if_tap_load="YES"' >> /boot/loader.conf
# echo 'xen_kernel="/boot/xen"' >> /boot/loader.conf
# echo 'xen_cmdline="dom0_mem=8192M dom0_max_vcpus=4 dom0=pvh console=com1,vga com1=115200,8n1 guest_loglvl=all loglvl=all"' >> /boot/loader.conf
....
[TIP]
====
Log files that Xen(TM) creates for the DomU VMs are stored in [.filename]#/var/log/xen#. Please be sure to check the contents of that directory if experiencing issues.
====
Activate the xencommons service during system startup:
[source,shell]
....
# sysrc xencommons_enable=yes
....
These settings are enough to start a Dom0-enabled system. However, it lacks network functionality for the DomU machines. To fix that, define a bridged interface with the main NIC of the system which the DomU VMs can use to connect to the network. Replace _em0_ with the host network interface name.
[source,shell]
....
# sysrc cloned_interfaces="bridge0"
# sysrc ifconfig_bridge0="addm em0 SYNCDHCP"
# sysrc ifconfig_em0="up"
....
Restart the host to load the Xen(TM) kernel and start the Dom0.
[source,shell]
....
# reboot
....
After successfully booting the Xen(TM) kernel and logging into the system again, the Xen(TM) management tool `xl` is used to show information about the domains.
[source,shell]
....
# xl list
Name ID Mem VCPUs State Time(s)
Domain-0 0 8192 4 r----- 962.0
....
The output confirms that the Dom0 (called `Domain-0`) has the ID `0` and is running. It also has the memory and virtual CPUs that were defined in [.filename]#/boot/loader.conf# earlier. More information can be found in the https://www.xenproject.org/help/documentation.html[Xen(TM) Documentation]. DomU guest VMs can now be created.
[[virtualization-host-xen-domu-setup]]
=== Xen(TM) DomU Guest VM Configuration
Unprivileged domains consist of a configuration file and virtual or physical hard disks. Virtual disk storage for the DomU can be files created by man:truncate[1] or ZFS volumes as described in crossref:zfs[zfs-zfs-volume,“Creating and Destroying Volumes”]. In this example, a 20 GB volume is used. A VM is created with the ZFS volume, a FreeBSD ISO image, 1 GB of RAM and two virtual CPUs. The ISO installation file is retrieved with man:fetch[1] and saved locally in a file called [.filename]#freebsd.iso#.
[source,shell]
....
# fetch ftp://ftp.freebsd.org/pub/FreeBSD/releases/ISO-IMAGES/12.0/FreeBSD-12.0-RELEASE-amd64-bootonly.iso -o freebsd.iso
....
A ZFS volume of 20 GB called [.filename]#xendisk0# is created to serve as the disk space for the VM.
[source,shell]
....
# zfs create -V20G -o volmode=dev zroot/xendisk0
....
The new DomU guest VM is defined in a file. Some specific definitions like name, keymap, and VNC connection details are also defined. The following [.filename]#freebsd.cfg# contains a minimum DomU configuration for this example:
[source,shell]
....
# cat freebsd.cfg
builder = "hvm" <.>
name = "freebsd" <.>
memory = 1024 <.>
vcpus = 2 <.>
vif = [ 'mac=00:16:3E:74:34:32,bridge=bridge0' ] <.>
disk = [
'/dev/zvol/tank/xendisk0,raw,hda,rw', <.>
'/root/freebsd.iso,raw,hdc:cdrom,r' <.>
]
vnc = 1 <.>
vnclisten = "0.0.0.0"
serial = "pty"
usbdevice = "tablet"
....
These lines are explained in more detail:
<.> This defines what kind of virtualization to use. `hvm` refers to hardware-assisted virtualization or hardware virtual machine. Guest operating systems can run unmodified on CPUs with virtualization extensions, providing nearly the same performance as running on physical hardware. `generic` is the default value and creates a PV domain.
<.> Name of this virtual machine to distinguish it from others running on the same Dom0. Required.
<.> Quantity of RAM in megabytes to make available to the VM. This amount is subtracted from the hypervisor's total available memory, not the memory of the Dom0.
<.> Number of virtual CPUs available to the guest VM. For best performance, do not create guests with more virtual CPUs than the number of physical CPUs on the host.
<.> Virtual network adapter. This is the bridge connected to the network interface of the host. The `mac` parameter is the MAC address set on the virtual network interface. This parameter is optional; if no MAC is provided, Xen(TM) will generate a random one.
<.> Full path to the disk, file, or ZFS volume of the disk storage for this VM. Options and multiple disk definitions are separated by commas.
<.> Defines the boot medium from which the initial operating system is installed. In this example, it is the ISO image downloaded earlier. Consult the Xen(TM) documentation for other kinds of devices and options to set.
<.> Options controlling VNC connectivity to the serial console of the DomU. In order, these are: active VNC support, define IP address on which to listen, device node for the serial console, and the input method for precise positioning of the mouse and other input methods. `keymap` defines which keymap to use, and is `english` by default.
After the file has been created with all the necessary options, the DomU is created by passing it to `xl create` as a parameter.
[source,shell]
....
# xl create freebsd.cfg
....
[NOTE]
====
Each time the Dom0 is restarted, the configuration file must be passed to `xl create` again to re-create the DomU. By default, only the Dom0 is created after a reboot, not the individual VMs. The VMs can continue where they left off as they stored the operating system on the virtual disk. The virtual machine configuration can change over time (for example, when adding more memory). The virtual machine configuration files must be properly backed up and kept available to be able to re-create the guest VM when needed.
====
The output of `xl list` confirms that the DomU has been created.
[source,shell]
....
# xl list
Name ID Mem VCPUs State Time(s)
Domain-0 0 8192 4 r----- 1653.4
freebsd 1 1024 1 -b---- 663.9
....
To begin the installation of the base operating system, start the VNC client, directing it to the main network address of the host or to the IP address defined on the `vnclisten` line of [.filename]#freebsd.cfg#. After the operating system has been installed, shut down the DomU and disconnect the VNC viewer. Edit [.filename]#freebsd.cfg#, removing the line with the `cdrom` definition or commenting it out by inserting a `#` character at the beginning of the line. To load this new configuration, it is necessary to remove the old DomU with `xl destroy`, passing either the name or the id as the parameter. Afterwards, recreate it using the modified [.filename]#freebsd.cfg#.
[source,shell]
....
# xl destroy freebsd
# xl create freebsd.cfg
....
The machine can then be accessed again using the VNC viewer. This time, it will boot from the virtual disk where the operating system has been installed and can be used as a virtual machine.
[[virtualization-host-xen-troubleshooting]]
=== Troubleshooting
This section contains basic information in order to help troubleshoot issues found when using FreeBSD as a Xen(TM) host or guest.
[[virtualization-host-xen-troubleshooting-host]]
==== Host Boot Troubleshooting
Please note that the following troubleshooting tips are intended for Xen(TM) 4.11 or newer. If you are still using Xen(TM) 4.7 and having issues, consider migrating to a newer version of Xen(TM).
In order to troubleshoot host boot issues you will likely need a serial cable, or a debug USB cable. Verbose Xen(TM) boot output can be obtained by adding options to the `xen_cmdline` option found in [.filename]#loader.conf#. A couple of relevant debug options are:
* `iommu=debug`: can be used to print additional diagnostic information about the iommu.
* `dom0=verbose`: can be used to print additional diagnostic information about the dom0 build process.
* `sync_console`: flag to force synchronous console output. Useful for debugging to avoid losing messages due to rate limiting. Never use this option in production environments since it can allow malicious guests to perform DoS attacks against Xen(TM) using the console.
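As a sketch (based on the Xen 4.11 settings shown earlier; adjust to the existing configuration), a [.filename]#loader.conf# entry combining these debug options might look like:
[.programlisting]
....
xen_cmdline="dom0_mem=8192M dom0_max_vcpus=4 dom0=pvh,verbose iommu=debug sync_console console=com1,vga com1=115200,8n1 guest_loglvl=all loglvl=all"
....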
FreeBSD should also be booted in verbose mode in order to identify any issues. To activate verbose booting, run this command:
[source,shell]
....
# echo 'boot_verbose="YES"' >> /boot/loader.conf
....
If none of these options helps solve the problem, please send the serial boot log to mailto:freebsd-xen@FreeBSD.org[freebsd-xen@FreeBSD.org] and mailto:xen-devel@lists.xenproject.org[xen-devel@lists.xenproject.org] for further analysis.
[[virtualization-host-xen-troubleshooting-guest]]
==== Guest Creation Troubleshooting
Issues can also arise when creating guests; the following tips attempt to help those trying to diagnose guest creation issues.
The most common cause of guest creation failures is the `xl` command printing an error and exiting with a non-zero return code. If the error provided is not enough to help identify the issue, more verbose output can be obtained from `xl` by repeating the `v` option.
[source,shell]
....
# xl -vvv create freebsd.cfg
Parsing config from freebsd.cfg
libxl: debug: libxl_create.c:1693:do_domain_create: Domain 0:ao 0x800d750a0: create: how=0x0 callback=0x0 poller=0x800d6f0f0
libxl: debug: libxl_device.c:397:libxl__device_disk_set_backend: Disk vdev=xvda spec.backend=unknown
libxl: debug: libxl_device.c:432:libxl__device_disk_set_backend: Disk vdev=xvda, using backend phy
libxl: debug: libxl_create.c:1018:initiate_domain_create: Domain 1:running bootloader
libxl: debug: libxl_bootloader.c:328:libxl__bootloader_run: Domain 1:not a PV/PVH domain, skipping bootloader
libxl: debug: libxl_event.c:689:libxl__ev_xswatch_deregister: watch w=0x800d96b98: deregister unregistered
domainbuilder: detail: xc_dom_allocate: cmdline="", features=""
domainbuilder: detail: xc_dom_kernel_file: filename="/usr/local/lib/xen/boot/hvmloader"
domainbuilder: detail: xc_dom_malloc_filemap : 326 kB
libxl: debug: libxl_dom.c:988:libxl__load_hvm_firmware_module: Loading BIOS: /usr/local/share/seabios/bios.bin
...
....
If the verbose output does not help diagnose the issue there are also QEMU and Xen(TM) toolstack logs in [.filename]#/var/log/xen#. Note that the name of the domain is appended to the log name, so if the domain is named `freebsd` you should find a [.filename]#/var/log/xen/xl-freebsd.log# and likely a [.filename]#/var/log/xen/qemu-dm-freebsd.log#. Both log files can contain useful information for debugging. If none of this helps solve the issue, please send the description of the issue you are facing and as much information as possible to mailto:freebsd-xen@FreeBSD.org[freebsd-xen@FreeBSD.org] and mailto:xen-devel@lists.xenproject.org[xen-devel@lists.xenproject.org] in order to get help.
diff --git a/documentation/content/en/books/handbook/wine/_index.adoc b/documentation/content/en/books/handbook/wine/_index.adoc
index 9cf6a58e7c..39d3d8cab3 100644
--- a/documentation/content/en/books/handbook/wine/_index.adoc
+++ b/documentation/content/en/books/handbook/wine/_index.adoc
@@ -1,785 +1,786 @@
---
title: Chapter 11. WINE
part: Part II. Common Tasks
prev: books/handbook/linuxemu
next: books/handbook/partiii
+description: This chapter will describe how to install WINE on a FreeBSD system and how to configure WINE
---
[[wine]]
= WINE
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 11
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../../images/books/handbook/wine/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/books/handbook/wine/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/books/handbook/wine/
endif::[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[wine-synopsis]]
== Synopsis
https://www.winehq.org/[WINE], which stands for Wine Is Not an Emulator, is technically a software translation layer. It enables users to install and run some software written for Windows(R) on FreeBSD (and other) systems.
It operates by intercepting system calls, or requests from the software to the operating system, and translating them from Windows(R) calls to calls that FreeBSD understands. It will also translate any responses as needed into what the Windows(R) software is expecting. So in some ways, it _emulates_ a Windows(R) environment, in that it provides many of the resources Windows(R) applications are expecting.
However, it is not an emulator in the traditional sense. Many of these solutions operate by constructing an entire other computer using software processes in place of hardware. Virtualization (such as that provided by the package:emulators/qemu[] port) operates in this way. One of the benefits of this approach is the ability to install a full version of the OS in question to the emulator. It means that the environment will not look any different to applications than a real machine, and chances are good that everything will work on it. The downside to this approach is the fact that software acting as hardware is inherently slower than actual hardware. The computer built in software (called the _guest_) requires resources from the real machine (the _host_), and holds on to those resources for as long as it is running.
The WINE Project, on the other hand, is much lighter on system resources. It will translate system calls on the fly, so while it is difficult to be as fast as a real Windows(R) computer, it can come very close. On the other hand, WINE is trying to keep up with a moving target in terms of all the different system calls and other functionality it needs to support. As a result there may be applications that do not work as expected on WINE, will not work at all, or will not even install to begin with.
At the end of the day, WINE provides another option to try to get a particular Windows(R) software program running on FreeBSD. It can always serve as the first option which, if successful, offers a good experience without unnecessarily depleting the host FreeBSD system's resources.
This chapter will describe:
* How to install WINE on a FreeBSD system.
* How WINE operates, and how it is different from other alternatives like virtualization.
* How to fine-tune WINE to the specific needs of some applications.
* How to install GUI helpers for WINE.
* Common tips, tricks, and solutions for running WINE on FreeBSD.
* Considerations for WINE on FreeBSD in terms of the multi-user environment.
Before reading this chapter, it will be useful to:
* Understand the crossref:basics[basics,basics of UNIX(R) and FreeBSD].
* Know how to crossref:bsdinstall[bsdinstall,install FreeBSD].
* Know how to crossref:advanced-networking[advanced-networking,set up a network connection].
* Know how to crossref:ports[ports,install additional third-party software].
[[wine-overview-concepts]]
== WINE Overview & Concepts
WINE is a complex system, so before running it on a FreeBSD system it is worth gaining an understanding of what it is and how it works.
[[what-is-wine]]
=== What is WINE?
As mentioned in the <<wine-synopsis,Synopsis>> for this chapter, WINE is a compatibility layer that allows Windows(R) applications to run on other operating systems. In theory, it means these programs should run on systems like FreeBSD, macOS, and Android.
When WINE runs a Windows(R) executable, two things occur:
* Firstly, WINE implements an environment that mimics that of various versions of Windows(R). For example, if an application requests access to a resource such as RAM, WINE has a memory interface that looks and acts (as far as the application is concerned) like Windows(R).
* Then, once that application makes use of that interface, WINE takes the incoming request for space in memory and translates it to something compatible with the host system. In the same way when the application retrieves that data, WINE facilitates fetching it from the host system and passing it back to the Windows(R) application.
[[wine-and-the-os-system]]
=== WINE and the FreeBSD System
Installing WINE on a FreeBSD system will entail a few different components:
* FreeBSD applications for tasks such as running the Windows(R) executables, configuring the WINE sub-system, or compiling programs with WINE support.
* A large number of libraries that implement the core functions of Windows(R) (for example [.filename]#/lib/wine/api-ms-core-memory-l1-1-1.dll.so#, which is part of the aforementioned memory interface).
* A number of Windows(R) executables, which are (or mimic) common utilities (such as [.filename]#/lib/wine/notepad.exe.so#, which provides the standard Windows(R) text editor).
* Additional Windows(R) assets, in particular fonts (like the Tahoma font, which is stored in [.filename]#share/wine/fonts/tahoma.ttf# in the install root).
[[graphical-versus-text-modeterminal-programs-in-wine]]
=== Graphical Versus Text Mode/Terminal Programs in WINE
As an operating system where terminal utilities are "first-class citizens," it is natural to assume that WINE will contain extensive support for text-mode programs. However, the majority of applications for Windows(R), especially the most popular ones, are designed with a graphical user interface (GUI) in mind. Therefore, WINE's utilities are designed by default to launch graphical programs.
However, there are three methods available to run these so-called Console User Interface (CUI) programs:
* The _Bare Streams_ approach will display the output directly to standard output.
* The _wineconsole_ utility can be used with either the _user_ or _curses_ backend to utilize some of the enhancements the WINE system provides for CUI applications.
These approaches are described in greater detail on the https://wiki.winehq.org/Wine_User%27s_Guide#Text_mode_programs_.28CUI:_Console_User_Interface.29[WINE Wiki].
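As a hedged illustration, assuming a console program such as the built-in [.filename]#cmd.exe#, the two approaches might be invoked as follows:
[source,shell]
....
% wine cmd.exe          # Bare Streams: output goes straight to the terminal
% wineconsole cmd.exe   # wineconsole: uses the WINE console enhancements
....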
[[wine-derivative-projects]]
=== WINE Derivative Projects
WINE itself is a mature open source project, so it is little surprise it is used as the foundation of more complex solutions.
[[commercial-wine-implementations]]
==== Commercial WINE Implementations
A number of companies have taken WINE and made it a core of their own, proprietary products (WINE's LGPL license permits this). Two of the most famous of these are as follows:
* Codeweavers CrossOver
This solution provides a simplified "one-click" installation of WINE, which contains additional enhancements and optimizations (although the company contributes many of these back upstream to the WINE project). One area of focus for Codeweavers is to make the most popular applications install and run smoothly.
While the company once produced a native FreeBSD version of their CrossOver solution, it appears to have long been abandoned. While some resources (such as a https://www.codeweavers.com/compatibility/crossover/forum/freebsd[dedicated forum]) are still present, they also have seen no activity for some time.
* Steam Proton
Valve, the company behind the Steam gaming platform, also uses WINE to enable Windows(R) games to install and run on other systems. Its primary target is Linux-based systems, though some support exists for macOS as well.
While Steam does not offer a native FreeBSD client, there are several options for using the Linux(R) client via FreeBSD's Linux Compatibility Layer.
[[wine-companion-programs]]
==== WINE Companion Programs
In addition to proprietary offerings, other projects have released applications designed to work in tandem with the standard, open source version of WINE. The goals for these can range from making installation easier to offering easy ways to get popular software installed.
These solutions are covered in greater detail in the later section on <<wine-management-guis,GUI frontends>>, and include the following:
* winetricks
* Homura
[[alternatives-to-wine]]
=== Alternatives to WINE
For FreeBSD users, some alternatives to using WINE are as follows:
* Dual-Booting: A straightforward option is to run desired Windows(R) applications natively on that OS. This of course means exiting FreeBSD in order to boot Windows(R), so this method is not feasible if access to programs on both systems is required simultaneously.
* Virtual Machines: Virtual Machines (VMs), as mentioned earlier in this chapter, are software processes that emulate full sets of hardware, on which additional operating systems (including Windows(R)) can be installed and run. Modern tools make VMs easy to create and manage, but this method comes at a cost. A good portion of the host system's resources must be allocated to each VM, and those resources cannot be reclaimed by the host as long as the VM is running. A few examples of VM managers include the open source solutions qemu, bhyve, and VirtualBox. See the chapter on <<virtualization,Virtualization>> for more detail.
* Remote Access: Like many other UNIX(R)-like systems, FreeBSD can run a variety of applications enabling users to remotely access Windows(R) computers and use their programs or data. In addition to clients such as xrdp that connect to the standard Windows(R) Remote Desktop Protocol, other open source standards such as vnc can also be used (provided a compatible server is present on the other side).
[[installing-wine-on-freebsd]]
== Installing WINE on FreeBSD
WINE can be installed via the pkg tool, or by compiling the port(s).
[[wine-prerequistes]]
=== WINE Prerequisites
Before installing WINE itself, it is useful to have the following prerequisites installed.
* A GUI
Most Windows(R) programs are expecting to have a graphical user interface available. If WINE is installed without one present, its dependencies will include the Wayland compositor, and so a GUI will be installed along with WINE. But it is useful to have the GUI of choice installed, configured, and working correctly before installing WINE.
* wine-gecko
The Windows(R) operating system has for some time had a default web browser pre-installed: Internet Explorer. As a result, some applications work under the assumption that there will always be something capable of displaying web pages. In order to provide this functionality, the WINE layer includes a web browser component using the Mozilla project's Gecko engine. When WINE is first launched it will offer to download and install this, and there are reasons users might want it to do so (these will be covered in a later chapter). But they can also install it prior to installing WINE, or alongside the install of WINE proper.
Install this package with the following:
[source,shell]
....
# pkg install wine-gecko
....
Alternately, compile the port with the following:
[source,shell]
....
# cd /usr/ports/emulators/wine-gecko
# make install
....
* wine-mono
This port installs the MONO framework, an open source implementation of Microsoft's .NET. Including this with the WINE installation will make it that much more likely that any applications written in .NET will install and run on the system.
To install the package:
[source,shell]
....
# pkg install wine-mono
....
To compile from the ports collection:
[source,shell]
....
# cd /usr/ports/emulators/wine-mono
# make install
....
[[installing-wine]]
=== Installing WINE via FreeBSD Package Repositories
With the prerequisites in place, install WINE via package with the following command:
[source,shell]
....
# pkg install wine
....
Alternately compile the WINE sub-system from source with the following:
[source,shell]
....
# cd /usr/ports/emulators/wine
# make install
....
[[thirtytwo-vs-sixtyfour-bit-wine]]
=== Concerns of 32- Versus 64-Bit in WINE Installations
Like most software, Windows(R) applications made the upgrade from the older 32-bit architecture to 64 bits. And most recent software is written for 64-bit operating systems, although modern OSes can sometimes continue to run older 32-bit programs as well. FreeBSD is no different, having had support for 64-bit since the 5.x series.
However, using old software no longer supported by default is a common use for emulators, and users commonly turn to WINE to play games and use other programs that do not run properly on modern hardware. Fortunately, FreeBSD can support all three scenarios:
* On a modern, 64-bit machine, to run 64-bit Windows(R) software, simply install the ports mentioned in the above sections. The ports system will automatically install the 64-bit version.
* Alternately, users might have an older 32-bit machine that they do not want to run with its original, now non-supported software. They can install the 32-bit (i386) version of FreeBSD, then install the ports in the above sections. Again, on a 32-bit machine the ports system will install the corresponding 32-bit version of WINE by default.
However, on a 64-bit version of FreeBSD where *32-bit* Windows(R) applications need to run, installing a different port is required to enable 32-bit compatibility. To install the pre-compiled package, use the following:
[source,shell]
....
# pkg install i386-wine
....
Or compile the port with the following:
[source,shell]
....
# cd /usr/ports/emulators/i386-wine
# make install
....
[[running-first-wine-program]]
== Running a First WINE Program on FreeBSD
Now that WINE is installed, the next step is to try it out by running a simple program. An easy way to do this is to download a self-contained application, i.e., one that can simply be unpacked and run without any complex installation process.
So-called "portable" versions of applications are good choices for this test, as are programs that run with only a single executable file.
[[running-a-program-from-the-command-line]]
=== Running a Program from the Command Line
There are two different methods to launch a Windows(R) program from the terminal. The first, and most straightforward, is to navigate to the directory containing the program's executable ([.filename]#.EXE#) and issue the following:
[source,shell]
....
% wine program.exe
....
For applications that take command-line arguments, add them after the executable as usual:
[source,shell]
....
% wine program2.exe -file file.txt
....
Alternately, supply the full path to the executable to use it in a script, for example:
[source,shell]
....
% wine /home/user/bin/program.exe
....
[[running-a-program-from-a-gui]]
=== Running a Program from a GUI
After installation, graphical shells should be updated with new associations for Windows(R) executable ([.filename]#.EXE#) files. It will now be possible to browse the system using a file manager, and launch the Windows(R) application in the same way as other files and programs (either a single- or double-click, depending on the desktop's settings).
On most desktops, check to make sure this association is correct by right-clicking on the file, and looking for an entry in the context menu to open the file. One of the options (hopefully the default one) will be with the *Wine Windows Program Loader*, as shown in the below screenshot:
image::wine-run-np++-1.png[]
In the event the program does not run as expected, try launching it from the command line and review any messages displayed in the terminal to troubleshoot.
In the event WINE is not the default application for [.filename]#.EXE# files after install, check the MIME association for this extension in the current desktop environment, graphical shell, or file manager.
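On desktops that follow the freedesktop.org standards, the association can be inspected and set from the command line with the xdg-utils tools; the MIME type shown below and the [.filename]#wine.desktop# entry name are assumptions that may differ on a given system:
[source,shell]
....
% xdg-mime query default application/x-ms-dos-executable
% xdg-mime default wine.desktop application/x-ms-dos-executable
....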
[[configuring-wine-installation]]
== Configuring WINE Installation
With an understanding of what WINE is and how it works at a high level, the next step to effectively using it on FreeBSD is becoming familiar with its configuration. The following sections will describe the key concept of the _WINE prefix_, and illustrate how it is used to control the behavior of applications run through WINE.
[[wine-prefixes]]
=== WINE Prefixes
A WINE _prefix_ is a directory, usually created in the default location of [.filename]#$HOME/.wine#, though it can be located elsewhere. The prefix is a set of configurations and support files used by wine to configure and run the Windows(R) environment a given application needs. By default, a brand new WINE installation will create the following structure when first launched by a user:
* [.filename]#.update-timestamp#: contains the last modified date of the file [.filename]#/usr/share/wine/wine.inf#. It is used by WINE to determine if a prefix is out of date, and automatically update it if needed.
* [.filename]#dosdevices/#: contains information on mappings of Windows(R) resources to resources on the host FreeBSD system. For example, after a new WINE installation, this should contain at least two entries which enable access to the FreeBSD filesystem using Windows(R)-style drive letters:
** [.filename]#c:@#: A link to [.filename]#drive_c# described below.
** [.filename]#z:@#: A link to the root directory of the system.
* [.filename]#drive_c/#: emulates the main (i.e., [.filename]#C:#) drive of a Windows(R) system. It contains a directory structure and associated files mirroring that of standard Windows(R) systems. A fresh WINE prefix will contain Windows(R) 10 directories such as _Users_ and _Windows_ that hold the OS itself. Furthermore, applications installed within a prefix will be located in either _Program Files_ or _Program Files (x86)_, depending on their architecture.
* [.filename]#system.reg#: This Registry file contains information on the Windows(R) installation, which in the case of WINE is the environment in [.filename]#drive_c#.
* [.filename]#user.reg#: This Registry file contains the current user's personal configurations, made either by various software or through the use of the Registry Editor.
* [.filename]#userdef.reg#: This Registry file is a default set of configurations for newly-created users.
[[creating-and-using-wine-prefixes]]
=== Creating and Using WINE Prefixes
While WINE will create a default prefix in the user's [.filename]#$HOME/.wine/#, it is possible to set up multiple prefixes. There are a few reasons to do this:
* The most common reason is to emulate different versions of Windows(R), according to the compatibility needs of the software in question.
* In addition, it is common to encounter software that does not work correctly in the default environment, and requires special configuration. It is useful to isolate these in their own, custom prefixes, so the changes do not impact other applications.
* Similarly, copying the default or "main" prefix into a separate "testing" one in order to evaluate an application's compatibility can reduce the chance of corruption.
Creating a prefix from the terminal requires the following command:
[source,shell]
....
% WINEPREFIX="/home/username/.wine-new" winecfg
....
This will run the winecfg program, which can be used to configure wine prefixes (more on this in a later section). But by providing a directory path value for the `WINEPREFIX` environment variable, a new prefix is created at that location if one does not already exist.
Supplying the same variable to the wine program will similarly cause the selected program to be run with the specified prefix:
[source,shell]
....
% WINEPREFIX="/home/username/.wine-new" wine program.exe
....
[[configuring-wine-prefixes-with-winecfg]]
=== Configuring WINE Prefixes with winecfg
As described above WINE includes a tool called winecfg to configure prefixes from within a GUI. It contains a variety of functions, which are detailed in the sections below. When winecfg is run from within a prefix, or provided the location of a prefix within the `WINEPREFIX` variable, it enables the configuration of the selected prefix as described in the below sections.
Selections made on the _Applications_ tab will affect the scope of changes made in the _Libraries_ and _Graphics_ tabs, which will be limited to the application selected. See the section on https://wiki.winehq.org/Wine_User%27s_Guide#Using_Winecfg[Using Winecfg] in the WINE Wiki for more details.
[[applications]]
==== Applications
image::wine-config-1.png[]
The _Applications_ tab contains controls enabling the association of programs with a particular version of Windows(R). On first start-up the _Application settings_ section will contain a single entry: _Default Settings_. This corresponds to all the default configurations of the prefix, which (as the disabled _Remove application_ button implies) cannot be deleted.
But additional applications can be added with the following process:
. Click the _Add application_ button.
. Use the provided dialog to select the desired program's executable.
. Select the version of Windows(R) to be used with the selected program.
[[libraries]]
==== Libraries
image::wine-config-2.png[]
WINE provides a set of open source library files as part of its distribution that provide the same functions as their Windows(R) counterparts. However, as noted earlier in this chapter, the WINE project is always trying to keep pace with new updates to these libraries. As a result, the versions that ship with WINE may be missing functionality that the latest Windows(R) programs are expecting.
However, winecfg makes it possible to specify overrides for the built-in libraries, particularly if there is a version of Windows(R) available on the same machine as the host FreeBSD installation. For each library to be overridden, do the following:
. Open the _New override for library_ drop-down and select the library to be replaced.
. Click the _Add_ button.
. The new override will appear in the _Existing overrides_ list; notice the _native, builtin_ designation in parentheses.
. Click to select the library.
. Click the _Edit_ button.
. Use the provided dialog to select a corresponding library to be used in place of the built-in one.
Be sure to select a file that is truly the corresponding version of the built-in one, otherwise there may be unexpected behavior.
[[graphics]]
==== Graphics
image::wine-config-3.png[]
The _Graphics_ tab provides some options to make the windows of programs run via WINE operate smoothly with FreeBSD:
* Automatic mouse capture when windows are full-screen.
* Allowing the FreeBSD window manager to decorate the windows, such as their title bars, for programs running via WINE.
* Allowing the window manager to control windows for programs running via WINE, such as running resizing functions on them.
* Create an emulated virtual desktop, within which all WINE programs will run. If this item is selected, the size of the virtual desktop can be specified using the _Desktop size_ input boxes.
* Setting the screen resolution for programs running via WINE.
[[desktop-integration]]
==== Desktop Integration
image::wine-config-4.png[]
This tab allows configuration of the following items:
* The theme and related visual settings to be used for programs running via WINE.
* Whether the WINE sub-system should manage MIME types (used to determine which application opens a particular file type) internally.
* Mappings of directories in the host FreeBSD system to useful folders within the Windows(R) environment. To change an existing association, select the desired item and click _Browse_, then use the provided dialog to select a directory.
[[drives]]
==== Drives
image::wine-config-5.png[]
The _Drives_ tab allows linking of directories in the host FreeBSD system to drive letters in the Windows(R) environment. The default values in this tab should look familiar, as they're displaying the contents of [.filename]#dosdevices/# in the current WINE prefix. Changes made via this dialog will reflect in [.filename]#dosdevices#, and properly-formatted links created in that directory will display in this tab.
To create a new entry, such as for a CD-ROM (mounted at [.filename]#/mnt/cdrom#), take the following steps:
. Click the _Add_ button.
. In the provided dialog, choose a free drive letter.
. Click _OK_.
. Fill in the _Path_ input box by either typing the path to the resource, or click _Browse_ and use the provided dialog to select it.
By default WINE will autodetect the type of resource linked, but this can be manually overridden. See https://wiki.winehq.org/Wine_User%27s_Guide#Drive_Settings[the section in the WINE Wiki] for more detail on advanced options.
[[audio]]
==== Audio
image::wine-config-6.png[]
This tab contains some configurable options for routing sound from Windows(R) programs to the native FreeBSD sound system, including:
* Driver selection
* Default device selection
* Sound test
[[about]]
==== About
image::wine-config-7.png[]
The final tab contains information on the WINE project, including a link to the website. It also allows entry of (entirely optional) user information, although this is not sent anywhere as it is in other operating systems.
[[wine-management-guis]]
== WINE Management GUIs
While the base install of WINE comes with a GUI configuration tool in winecfg, its main purpose is just that: strictly configuring an existing WINE prefix. There are, however, more advanced applications that will assist in the initial installation of applications as well as optimizing their WINE environments. The below sections include a selection of the most popular.
[[winetricks]]
=== Winetricks
winetricks is a cross-platform, general purpose helper program for WINE. It is not developed by the WINE project proper, but rather maintained on https://github.com/Winetricks/winetricks[Github] by a group of contributors. It contains some automated "recipes" for getting common applications to work on WINE, both by optimizing the settings as well as acquiring some DLL libraries automatically.
[[installing-winetricks]]
==== Installing winetricks
To install winetricks on a FreeBSD system using binary packages, use the following commands (note winetricks requires either the i386-wine or i386-wine-devel package, and is therefore not installed automatically with other dependencies):
[source,shell]
....
# pkg install i386-wine winetricks
....
To compile it from source, issue the following in the terminal:
[source,shell]
....
# cd /usr/ports/emulators/i386-wine
# make install
# cd /usr/ports/emulators/winetricks
# make install
....
If a manual installation is required, refer to the https://github.com/Winetricks/winetricks[Github] account for instructions.
[[using-winetricks]]
==== Using winetricks
Run winetricks with the following command:
[source,shell]
....
% winetricks
....
Note: winetricks should be run within a 32-bit prefix. Launching winetricks displays a window with a number of choices, as follows:
image::winetricks-run-1.png[]
Selecting either _Install an application_, _Install a benchmark_, or _Install a game_ shows a list with supported options, such as the one below for applications:
image::winetricks-run-2.png[]
Selecting one or more items and clicking _OK_ will start their installation process(es). Initially, some messages that appear to be errors may show up, but they're actually informational alerts as winetricks configures the WINE environment to get around known issues for the application:
image::winetricks-app-install-1.png[]
Once these are circumvented, the actual installer for the application will be run:
image::winetricks-app-install-2.png[]
Once the installation completes, the new Windows application should be available from the desktop environment's standard menu (shown in the screenshot below for the LXQT desktop environment):
image::winetricks-menu-1.png[]
In order to remove the application, run winetricks again, and select _Run an uninstaller_.
image::winetricks-uninstall-1.png[]
A Windows(R)-style dialog will appear with a list of installed programs and components. Select the application to be removed, then click the _Modify/Remove_ button.
image::winetricks-uninstall-2.png[]
This will run the application's built-in installer, which should also have the option to uninstall.
image::winetricks-uninstall-3.png[]
[[homura]]
=== Homura
Homura is an application similar to winetricks, although it was inspired by the https://lutris.net/[Lutris] gaming system for Linux. But while it is focused on games, there are also non-gaming applications available for install through Homura.
[[installing-homura]]
==== Installing Homura
To install Homura's binary package, issue the following command:
[source,shell]
....
# pkg install homura
....
Homura is also available in the FreeBSD Ports system. However, rather than the _emulators_ section of Ports or binary packages, look for it in the _games_ section.
[source,shell]
....
# cd /usr/ports/games/homura
# make install
....
[[using-homura]]
==== Using Homura
Homura's usage is quite similar to that of winetricks. When using it for the first time, launch it from the command line (or a desktop environment runner applet) with:
[source,shell]
....
% Homura
....
This should result in a friendly welcome message. Click _OK_ to continue.
image::homura-launch-1.png[]
The program will also offer to place a link in the application menu of compatible environments:
image::homura-run-2.png[]
Depending on the setup of the FreeBSD machine, Homura may display a message urging the install of native graphics drivers.
image::homura-run-3.png[]
The application's window should then appear, which amounts to a "main menu" with all its options. Many of the items are the same as winetricks, although Homura offers some additional, helpful options such as opening its data folder (_Open Homura Folder_) or running a specified program (_Run a executable in prefix_).
image::homura-install-1.png[]
To select one of Homura's supported applications to install, select _Installation_, and click _OK_. This will display a list of applications Homura can install automatically. Select one, and click _OK_ to start the process.
image::homura-install-2.png[]
As a first step Homura will download the selected program. A notification may appear in supported desktop environments.
image::homura-install-3.png[]
The program will also create a new prefix for the application. A standard WINE dialog with this message will display.
image::homura-install-4.png[]
Next, Homura will install any prerequisites for the selected program. This may involve downloading and extracting a fair number of files, the details of which will show in dialogs.
image::homura-install-5.png[]
Downloaded packages are automatically opened and run as required.
image::homura-install-6.png[]
The installation may end with a simple desktop notification or message in the terminal, depending on how Homura was launched. But in either case Homura should return to the main screen. To confirm the installation was successful, select _Launcher_, and click _OK_.
image::homura-install-7.png[]
This will display a list of installed applications.
image::homura-install-8.png[]
To run the new program, select it from the list, and click _OK_. To uninstall the application, select _Uninstallation_ from the main screen, which will display a similar list. Select the program to be removed, and click _OK_.
image::homura-uninstall-1.png[]
[[running-multiple-management-guis]]
=== Running Multiple Management GUIs
It is worth noting that the above solutions are not mutually exclusive. It is perfectly acceptable, even advantageous, to have both installed at the same time, as they support different sets of programs.
However, it is wise to ensure that they do not access any of the same WINE prefixes. Each of these solutions applies workarounds and makes changes to the registries based on known issues with existing WINE releases in order to make a given application run smoothly. Allowing both winetricks and Homura to access the same prefix could lead to some of these changes being overwritten, with the result being that some or all applications do not work as expected.
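One way to keep the tools separated, sketched here with an arbitrary directory name, is to point winetricks at a prefix dedicated to it, so it never modifies prefixes maintained by another tool:
[source,shell]
....
% WINEPREFIX="$HOME/.wine-winetricks" winetricks
....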
[[wine-in-multi-user-os-installations]]
== WINE in Multi-User FreeBSD Installations
[[issues-with-using-a-common-wine-prefix]]
=== Issues with Using a Common WINE Prefix
Like most UNIX(R)-like operating systems, FreeBSD is designed for multiple users to be logged in and working at the same time. On the other hand, Windows(R) is multi-user in the sense that there can be multiple user accounts set up on one system. But the expectation is that only one will be using the physical machine (a desktop or laptop PC) at any given moment.
More recent consumer versions of Windows(R) have taken some steps to improve the OS in multi-user scenarios. But it is still largely structured around a single-user experience. Furthermore, the measures the WINE project has taken to create a compatible environment mean that, unlike FreeBSD applications (including WINE itself), it will resemble this single-user environment.
So it follows that each user will have to maintain their own set of configurations, which is potentially good. Yet it is advantageous to install applications, particularly large ones like office suites or games, only once. Two examples of reasons to do this are maintenance (software updates need only be applied once) and efficiency in storage (no duplicated files).
There are two strategies to minimize the impact of multiple WINE users in the system.
[[installing-applications-to-a-common-drivesettings]]
=== Installing Applications to a Common Drive
As shown in the section on WINE Configuration, WINE provides the ability to attach additional drives to a given prefix. In this way, applications can be installed to a common location, while each user will still have a prefix where individual settings may be kept (depending on the program). This is a good setup if there are relatively few applications to be shared between users, and they are programs that require few custom tweaks to the prefix in order to function.
The steps to install applications in this way are as follows:
. First, set up a shared location on the system where the files will be stored, such as [.filename]#/mnt/windows-drive_d/#. Creating new directories is described in man page for the mkdir command.
. Next, set permissions for this new directory to allow only desired users to access it. One approach to this is to create a new group such as "windows," add the desired users to that group (see the sub-section on groups in the Handbook's Users and Basic Account Management section), and set the permissions on the directory to `770` (the section on Permissions in the FreeBSD Basics chapter of the Handbook illustrates this process). A sketch of these commands appears after this list.
. Finally, add the location as a drive to the user's prefix using the winecfg as described in the above section on WINE Configuration in this chapter.
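As a minimal sketch of the first two steps, assuming the shared directory [.filename]#/mnt/windows-drive_d#, a group named _windows_, and hypothetical users _user1_ and _user2_:
[source,shell]
....
# mkdir -p /mnt/windows-drive_d
# pw groupadd windows
# pw groupmod windows -m user1,user2
# chown root:windows /mnt/windows-drive_d
# chmod 770 /mnt/windows-drive_d
....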
Once complete, applications can be installed to this location, and subsequently run using the assigned drive letter (or the standard UNIX(R)-style directory path). However, as noted above, only one user should be running these applications (which may be accessing files within their installation directory) at the same time. Some applications may also exhibit unexpected behavior when run by a user who is not the owner, despite being a member of the group that should have full "read/write/execute" permissions for the entire directory.
[[using-a-common-installation-of-wine]]
=== Using a Common Installation of WINE
If, on the other hand, there are many applications to be shared, or they require specific tuning in order to work correctly, a different approach may be required. In this method, a completely separate user is created specifically for the purposes of storing the WINE prefix and all its installed applications. Individual users are then granted permission to run programs as this user using the su command. The result is that these users can launch a WINE application as they normally would, only it will act as though launched by the newly-created user, and therefore use the centrally-maintained prefix containing both settings and programs. To accomplish this, take the following steps.
Create a new user with the following command (as root), which will step through the required details:
[source,shell]
....
# adduser
....
Enter the username (e.g., _windows_) and Full name ("Microsoft Windows"). Then accept the defaults for the remainder of the questions. Next, install the sudo utility using binary packages with the following:
[source,shell]
....
# pkg install sudo
....
Once installed, edit [.filename]#/etc/sudoers# as follows:
[.programlisting]
....
# User alias specification
# define which users can run the wine/windows programs
User_Alias WINDOWS_USERS = user1,user2
# define which users can administrate (become root)
User_Alias ADMIN = user1
# Cmnd alias specification
# define which commands the WINDOWS_USERS may run
Cmnd_Alias WINDOWS = /usr/bin/wine,/usr/bin/winecfg
# Defaults
Defaults:WINDOWS_USERS env_reset
Defaults:WINDOWS_USERS env_keep += DISPLAY
Defaults:WINDOWS_USERS env_keep += XAUTHORITY
Defaults !lecture,tty_tickets,!fqdn
# User privilege specification
root ALL=(ALL) ALL
# Members of the admin user_alias, defined above, may gain root privileges
ADMIN ALL=(ALL) ALL
# The WINDOWS_USERS may run WINDOWS programs as user windows without a password
WINDOWS_USERS ALL = (windows) NOPASSWD: WINDOWS
....
The result of these changes is that the users named in the _User_Alias_ section are permitted to run the programs listed in the _Cmnd_Alias_ section using the resources listed in the _Defaults_ section (the current display) as if they were the user listed in the final line of the file. In other words, users designated as _WINDOWS_USERS_ can run the wine and winecfg applications as user _windows_. As a bonus, the configuration here means they will not be required to enter the password for the _windows_ user.
Next provide access to the display back to the _windows_ user, as whom the WINE programs will be running:
[source,shell]
....
% xhost +local:windows
....
This should be added to the list of commands run either at login or when the default graphical environment starts. Once all the above are complete, a user configured as one of the `WINDOWS_USERS` in [.filename]#sudoers# can run programs using the shared prefix with the following command:
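[source,shell]
....
% sudo -u windows /usr/bin/wine program.exe
....
Here [.filename]#program.exe# is a placeholder for the actual Windows(R) executable; because of the `NOPASSWD` entry in [.filename]#sudoers#, no password prompt should appear.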
It is worth noting that multiple users accessing this shared environment at the same time is still risky. However, consider also that the shared environment can itself contain multiple prefixes. In this way an administrator can create a tested and verified set of programs, each with its own prefix. At the same time, one user can play a game while another works with office programs without the need for redundant software installations.
[[wine-on-os-faq]]
== WINE on FreeBSD FAQ
The following section describes some frequently asked questions, tips/tricks, or common issues in running WINE on FreeBSD, along with their respective answers.
[[basic-installation-and-usage]]
=== Basic Installation and Usage
[[how-to-install-32-bit-and-64-bit-wine-on-the-same-system]]
==== How to Install 32-bit and 64-bit WINE on the Same System?
As described earlier in this section, the wine and i386-wine packages conflict with one another, and therefore cannot be installed on the same system in the normal way. However, multiple installs can be achieved using mechanisms like chroots/jails, or by building WINE from source (note this does _not_ mean building the port).
[[can-dos-programs-be-run-on-wine]]
==== Can DOS Programs Be Run on WINE?
They can, as "Console User Interface" applications as mentioned earlier in this section. However, there is an arguably better method for running DOS software: DOSBox. On the other hand, there is little reason not to at least try it with WINE. Simply create a new prefix, install the software, and if it does not work, delete the prefix.
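If DOSBox is preferred, it can be installed as a binary package (an illustrative command, assuming the package:emulators/dosbox[] package):
[source,shell]
....
# pkg install emulators/dosbox
....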
[[should-the-wine-devel-packageport-be-installed-to-use-the-development-version-of-wine-instead-of-stable]]
==== Should the "wine-devel" Package/Port be Installed to Use the Development Version of WINE Instead of Stable?
Yes, installing this package or port will install the "development" version of WINE. As with the 32- and 64-bit versions, it cannot be installed together with the stable version unless additional measures are taken.
Note that WINE also has a "Staging" version, which contains the most recent updates. This was at one time available as a FreeBSD port; however, it has since been removed. It can still be compiled directly from source.
[[install-optimization]]
=== Install Optimization
[[how-should-windows-hardware-graphics-drivers-be-handled]]
==== How Should Windows(R) Hardware (e.g., Graphics) Drivers be Handled?
Operating system drivers transfer commands between applications and hardware. WINE emulates a Windows(R) environment, including the drivers, which in turn use FreeBSD's native drivers for this transfer. It is not advisable to install Windows(R) drivers, as the WINE system is designed to use the host system's drivers. If, for example, a graphics card benefits from dedicated drivers, install them using the standard FreeBSD methods, not Windows(R) installers.
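For example, on many recent Intel(R) and AMD(R) graphics cards the native FreeBSD kernel drivers can be installed as a binary package (an illustrative command; the correct driver depends on the card):
[source,shell]
....
# pkg install graphics/drm-kmod
....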
[[is-there-a-way-to-make-windows-fonts-look-better]]
==== Is There a way to Make Windows(R) Fonts Look Better?
A user on the FreeBSD forums suggests this configuration to fix the out-of-the-box look of WINE fonts, which can be slightly pixelated.
According to https://forums.freebsd.org/threads/make-wine-ui-fonts-look-good.68273/[a post in the FreeBSD Forums], adding the following to [.filename]#.config/fontconfig/fonts.conf# will add anti-aliasing and make text more readable.
[.programlisting]
....
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
<!-- antialias all fonts -->
<match target="font">
<edit name="antialias" mode="assign"><bool>true</bool></edit>>
<edit name="hinting" mode="assign"><bool>true</bool></edit>>
<edit name="hintstyle" mode="assign"><const>hintslight</const></edit>>
<edit name="rgba" mode="assign"><const>rgb</const></edit>>
</match>
</fontconfig>
....
[[does-having-windows-installed-elsewhere-on-a-system-help-wine-operate]]
==== Does Having Windows(R) Installed Elsewhere on a System Help WINE Operate?
It may, depending on the application being run. As mentioned in the section describing winecfg, some built-in WINE DLLs and other libraries can be overridden by providing a path to an alternate version. Provided the Windows(R) partition or drive is mounted to the FreeBSD system and accessible to the user, configuring some of these overrides will use native Windows(R) libraries and may decrease the chance of unexpected behavior.
[[application-specific]]
=== Application-Specific
[[where-is-the-best-place-to-see-if-application-x-works-on-wine]]
==== Where is the Best Place to see if Application X Works on WINE?
The first stop in determining compatibility should be the https://appdb.winehq.org/[WINE AppDB]. This is a compilation of reports of programs working (or not) on all supported platforms, although (as previously mentioned), solutions for one platform are often applicable to others.
[[is-there-anything-that-will-help-games-run-better]]
==== Is There Anything That Will Help Games Run Better?
Perhaps. Many Windows(R) games rely on DirectX, a proprietary Microsoft graphics layer. However there are projects in the open source community attempting to implement support for this technology.
The _dxvk_ project, which is an attempt to implement DirectX using the FreeBSD-compatible Vulkan graphics sub-system, is one such project. Although its primary target is WINE on Linux, https://forums.freebsd.org/threads/what-about-gaming-on-freebsd.723/page-9[some FreeBSD users report] compiling and using dxvk.
In addition, work is under way on a https://www.freshports.org/emulators/wine-proton/[wine-proton port]. This will bring the work of Valve, developer of the Steam gaming platform, to FreeBSD. Proton is a distribution of WINE designed to allow many Windows(R) games to run on other operating systems with minimal setup.
[[is-there-anywhere-freebsd-wine-users-gather-to-exchange-tips-and-tricks]]
==== Is There Anywhere FreeBSD WINE Users Gather to Exchange Tips and Tricks?
There are plenty of places FreeBSD users discuss issues related to WINE that can be searched for solutions:
* https://forums.freebsd.org/[The FreeBSD forums], particularly the _Installation and Maintenance of Ports or Packages_ or _Emulation and virtualization_ forums.
* https://wiki.freebsd.org/IRC/Channels[FreeBSD IRC channels] including #freebsd (for general support), #freebsd-games, and others.
* https://discord.gg/2CCuhCt[The BSD World Discord server's] channels including _bsd-desktop_, _bsd-gaming_, _bsd-wine_, and others.
[[other-os-resources]]
=== Other OS Resources
There are a number of resources focused on other operating systems that may be useful for FreeBSD users:
* https://wiki.winehq.org/[The WINE Wiki] has a wealth of information on using WINE, much of which is applicable across many of WINE's supported operating systems.
* Similarly, the documentation available from other OS projects can also be of good value. https://wiki.archlinux.org/index.php/wine[The WINE page] on the Arch Linux Wiki is a particularly good example, although some of the "Third-party applications" (i.e., "companion applications") are obviously not available on FreeBSD.
* Finally, Codeweavers (a developer of a commercial version of WINE) is an active upstream contributor. Oftentimes answers to questions in https://www.codeweavers.com/support/forums[their support forum] can be of aid in troubleshooting problems with the open source version of WINE.
diff --git a/documentation/content/en/books/handbook/x11/_index.adoc b/documentation/content/en/books/handbook/x11/_index.adoc
index 81365c5356..a925660070 100644
--- a/documentation/content/en/books/handbook/x11/_index.adoc
+++ b/documentation/content/en/books/handbook/x11/_index.adoc
@@ -1,1427 +1,1428 @@
---
title: Chapter 5. The X Window System
part: Part I. Getting Started
prev: books/handbook/ports
next: books/handbook/partii
+description: This chapter describes how to install and configure Xorg on FreeBSD, which provides the open source X Window System used to provide a graphical environment
---
[[x11]]
= The X Window System
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 5
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../../images/books/handbook/x11/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/books/handbook/x11/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/books/handbook/x11/
endif::[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[x11-synopsis]]
== Synopsis
An installation of FreeBSD using bsdinstall does not automatically install a graphical user interface. This chapter describes how to install and configure Xorg, which provides the open source X Window System used to provide a graphical environment. It then describes how to find and install a desktop environment or window manager.
[NOTE]
====
Users who prefer an installation method that automatically configures Xorg should refer to https://ghostbsd.org[GhostBSD], https://www.midnightbsd.org[MidnightBSD], or https://www.nomadbsd.org[NomadBSD].
====
For more information on the video hardware that Xorg supports, refer to the http://www.x.org/[x.org] website.
After reading this chapter, you will know:
* The various components of the X Window System, and how they interoperate.
* How to install and configure Xorg.
* How to install and configure several window managers and desktop environments.
* How to use TrueType(R) fonts in Xorg.
* How to set up your system for graphical logins (XDM).
Before reading this chapter, you should:
* Know how to install additional third-party software as described in crossref:ports[ports,Installing Applications: Packages and Ports].
[[x-understanding]]
== Terminology
While it is not necessary to understand all of the details of the various components in the X Window System and how they interact, some basic knowledge of these components can be useful.
X server::
X was designed from the beginning to be network-centric, and adopts a "client-server" model. In this model, the "X server" runs on the computer that has the keyboard, monitor, and mouse attached. The server's responsibility includes tasks such as managing the display, handling input from the keyboard and mouse, and handling input or output from other devices such as a tablet or a video projector. This confuses some people, because the X terminology is exactly backward to what they expect. They expect the "X server" to be the big powerful machine down the hall, and the "X client" to be the machine on their desk.
X client::
Each X application, such as XTerm or Firefox, is a "client". A client sends messages to the server such as "Please draw a window at these coordinates", and the server sends back messages such as "The user just clicked on the OK button".
+
In a home or small office environment, the X server and the X clients commonly run on the same computer. It is also possible to run the X server on a less powerful computer and to run the X applications on a more powerful system. In this scenario, the communication between the X client and server takes place over the network.
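+
As an illustrative sketch of that remote scenario (the address _10.0.0.5_ is hypothetical, and the X server must allow the connection), an X client such as man:xterm[1] can be pointed at a remote display explicitly:
+
[source,shell]
....
% xterm -display 10.0.0.5:0
....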
window manager::
X does not dictate what windows should look like on-screen, how to move them around with the mouse, which keystrokes should be used to move between windows, what the title bars on each window should look like, whether or not they have close buttons on them, and so on. Instead, X delegates this responsibility to a separate window manager application. There are http://www.xwinman.org/[dozens of window managers] available. Each window manager provides a different look and feel: some support virtual desktops, some allow customized keystrokes to manage the desktop, some have a "Start" button, and some are themeable, allowing a complete change of the desktop's look-and-feel. Window managers are available in the [.filename]#x11-wm# category of the Ports Collection.
+
Each window manager uses a different configuration mechanism. Some expect a configuration file written by hand while others provide graphical tools for most configuration tasks.
desktop environment::
KDE and GNOME are considered to be desktop environments as they include an entire suite of applications for performing common desktop tasks. These may include office suites, web browsers, and games.
focus policy::
The window manager is responsible for the mouse focus policy. This policy provides some means for choosing which window is actively receiving keystrokes and it should also visibly indicate which window is currently active.
+
One focus policy is called "click-to-focus". In this model, a window becomes active upon receiving a mouse click. In the "focus-follows-mouse" policy, the window that is under the mouse pointer has focus and the focus is changed by pointing at another window. If the mouse is over the root window, then this window is focused. In the "sloppy-focus" model, if the mouse is moved over the root window, the most recently used window still has the focus. With sloppy-focus, focus is only changed when the cursor enters a new window, and not when exiting the current window. In the "click-to-focus" policy, the active window is selected by mouse click. The window may then be raised and appear in front of all other windows. All keystrokes will now be directed to this window, even if the cursor is moved to another window.
+
Different window managers support different focus models. All of them support click-to-focus, and the majority of them also support other policies. Consult the documentation for the window manager to determine which focus models are available.
widgets::
Widget is a term for all of the items in the user interface that can be clicked or manipulated in some way. This includes buttons, check boxes, radio buttons, icons, and lists. A widget toolkit is a set of widgets used to create graphical applications. There are several popular widget toolkits, including Qt, used by KDE, and GTK+, used by GNOME. As a result, applications will have a different look and feel, depending upon which widget toolkit was used to create the application.
[[x-install]]
== Installing Xorg
On FreeBSD, Xorg can be installed as a package or port.
The binary package can be installed quickly but with fewer options for customization:
[source,shell]
....
# pkg install xorg
....
To build and install from the Ports Collection:
[source,shell]
....
# cd /usr/ports/x11/xorg
# make install clean
....
Either of these installations results in the complete Xorg system being installed. Binary packages are the best option for most users.
A smaller version of the X system suitable for experienced users is available in package:x11/xorg-minimal[]. Most of the documents, libraries, and applications will not be installed. Some applications require these additional components to function.
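If the minimal distribution is preferred, it can be installed in the same way (an illustrative command using the binary package):
[source,shell]
....
# pkg install xorg-minimal
....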
[[x-config]]
== Xorg Configuration
[[x-config-quick-start]]
=== Quick Start
Xorg supports most common video cards, keyboards, and pointing devices.
[TIP]
====
Video cards, monitors, and input devices are automatically detected and do not require any manual configuration. Do not create [.filename]#xorg.conf# or run a `-configure` step unless automatic configuration fails.
====
[.procedure]
. If Xorg has been used on this computer before, move or remove any existing configuration files:
+
[source,shell]
....
# mv /etc/X11/xorg.conf ~/xorg.conf.etc
# mv /usr/local/etc/X11/xorg.conf ~/xorg.conf.localetc
....
. Add the user who will run Xorg to the `video` or `wheel` group to enable 3D acceleration when available. To add user _jru_ to whichever group is available:
+
[source,shell]
....
# pw groupmod video -m jru || pw groupmod wheel -m jru
....
. The `TWM` window manager is included by default. It is started when Xorg starts:
+
[source,shell]
....
% startx
....
. On some older versions of FreeBSD, the system console must be set to man:vt[4] before switching back to the text console will work properly. See <<x-config-kms>>.
[[x-config-user-group]]
=== User Group for Accelerated Video
Access to [.filename]#/dev/dri# is needed to allow 3D acceleration on video cards. It is usually simplest to add the user who will be running X to either the `video` or `wheel` group. Here, man:pw[8] is used to add user _slurms_ to the `video` group, or to the `wheel` group if there is no `video` group:
[source,shell]
....
# pw groupmod video -m slurms || pw groupmod wheel -m slurms
....
[[x-config-kms]]
=== Kernel Mode Setting (`KMS`)
When the computer switches from displaying the console to a higher screen resolution for X, it must set the video output _mode_. Recent versions of `Xorg` use a system inside the kernel to do these mode changes more efficiently. Older versions of FreeBSD use man:sc[4], which is not aware of the `KMS` system. The end result is that after closing X, the system console is blank, even though it is still working. The newer man:vt[4] console avoids this problem.
Add this line to [.filename]#/boot/loader.conf# to enable man:vt[4]:
[.programlisting]
....
kern.vty=vt
....
[[x-config-files]]
=== Configuration Files
Manual configuration is usually not necessary. Please do not manually create configuration files unless autoconfiguration does not work.
[[x-config-files-directory]]
==== Directory
Xorg looks in several directories for configuration files. [.filename]#/usr/local/etc/X11/# is the recommended directory for these files on FreeBSD. Using this directory helps keep application files separate from operating system files.
Storing configuration files in the legacy [.filename]#/etc/X11/# still works. However, this mixes application files with the base FreeBSD files and is not recommended.
[[x-config-files-single-or-multi]]
==== Single or Multiple Files
It is easier to use multiple files that each configure a specific setting than the traditional single [.filename]#xorg.conf#. These files are stored in the [.filename]#xorg.conf.d/# subdirectory of the main configuration file directory. The full path is typically [.filename]#/usr/local/etc/X11/xorg.conf.d/#.
Examples of these files are shown later in this section.
The traditional single [.filename]#xorg.conf# still works, but is neither as clear nor as flexible as multiple files in the [.filename]#xorg.conf.d/# subdirectory.
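If the [.filename]#xorg.conf.d/# directory does not yet exist, it can be created in the usual way (a minimal sketch):
[source,shell]
....
# mkdir -p /usr/local/etc/X11/xorg.conf.d
....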
[[x-config-video-cards]]
=== Video Cards
The Ports framework provides the drm graphics drivers necessary for X11 operation on recent hardware.
Users can use one of the following drivers available from package:graphics/drm-kmod[].
These drivers use interfaces in the kernel that are normally private.
As such, it is strongly recommended that the drivers be built from the Ports Collection using the `PORTS_MODULES` variable.
With `PORTS_MODULES`, every time you build the kernel, the corresponding port(s) containing kernel modules are re-built against the updated sources.
This ensures the kernel module stays in-sync with the kernel itself.
The kernel and ports trees should be updated together for maximum compatibility.
You can add `PORTS_MODULES` to your [.filename]#/etc/make.conf# file to ensure all kernels you build rebuild this module.
Advanced users can add it to their kernel config files with the `makeoptions` directive.
If you run GENERIC and use freebsd-update, you can just build the [.filename]#graphics/drm-kmod# or [.filename]#x11/nvidia-driver# port after each `freebsd-update install` invocation.
[example]
====
[.filename]#/etc/make.conf#
[.programlisting]
....
SYSDIR=path/to/src/sys
PORTS_MODULES=graphics/drm-kmod x11/nvidia-driver
....
This will rebuild both ports; select one or the other depending on which GPU / graphics card you have.
====
[[x-config-video-cards-ports]]
Intel KMS driver, Radeon KMS driver, AMD KMS driver::
2D and 3D acceleration is supported on most graphics cards provided by Intel, using the Intel KMS driver.
+
Driver name: `i915kms`
+
2D and 3D acceleration is supported on most older Radeon graphics cards provided by AMD, using the Radeon KMS driver.
+
Driver name: `radeonkms`
+
2D and 3D acceleration is supported on most newer graphics cards provided by AMD, using the AMD KMS driver.
+
Driver name: `amdgpu`
+
For reference, please see https://en.wikipedia.org/wiki/List_of_Intel_graphics_processing_units[] or https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units[] for a list of supported GPUs.
[[x-config-video-cards-intel]]
Intel(R)::
3D acceleration is supported on most Intel(R) graphics up to Ivy Bridge (HD Graphics 2500, 4000, and P4000), including Iron Lake (HD Graphics) and Sandy Bridge (HD Graphics 2000).
+
Driver name: `intel`
+
For reference, see https://en.wikipedia.org/wiki/List_of_Intel_graphics_processing_units[].
[[x-config-video-cards-radeon]]
AMD(R) Radeon::
2D and 3D acceleration is supported on Radeon cards up to and including the HD6000 series.
+
Driver name: `radeon`
+
For reference, see https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units[].
[[x-config-video-cards-nvidia]]
NVIDIA::
Several NVIDIA drivers are available in the [.filename]#x11# category of the Ports Collection. Install the driver that matches the video card.
+
For reference, see https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units[].
+
Kernel support for NVIDIA cards is found in either the [.filename]#x11/nvidia-driver# port or the [.filename]#x11/nvidia-driver-xxx# port.
Modern cards use the former.
Legacy cards use the -xxx ports, where xxx is one of 304, 340 or 390 indicating the version of the driver.
For those, fill in the `-xxx` using the http://download.nvidia.com/XFree86/FreeBSD-x86_64/465.19.01/README/[Supported NVIDIA GPU Products] page.
This page lists the devices supported by different versions of the driver.
Legacy drivers run on both i386 and amd64.
The current driver only supports amd64.
Read http://download.nvidia.com/XFree86/FreeBSD-x86_64/465.19.01/README/[installation and configuration of NVIDIA driver] for details.
While we recommend this driver be rebuilt with each kernel rebuild for maximum safety, it uses almost no private kernel interfaces and is usually safe across kernel updates.
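+
For instance, the current driver can be installed as a binary package (an illustrative command; legacy cards need the matching -xxx port instead):
+
[source,shell]
....
# pkg install x11/nvidia-driver
....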
[[x-config-video-cards-hybrid]]
Hybrid Combination Graphics::
Some notebook computers add additional graphics processing units to those built into the chipset or processor. _Optimus_ combines Intel(R) and NVIDIA hardware. _Switchable Graphics_ or _Hybrid Graphics_ are a combination of an Intel(R) or AMD(R) processor and an AMD(R) Radeon `GPU`.
+
Implementations of these hybrid graphics systems vary, and Xorg on FreeBSD is not able to drive all versions of them.
+
Some computers provide a `BIOS` option to disable one of the graphics adapters or select a _discrete_ mode which can be used with one of the standard video card drivers. For example, it is sometimes possible to disable the NVIDIA `GPU` in an Optimus system. The Intel(R) video can then be used with an Intel(R) driver.
+
`BIOS` settings depend on the model of computer. In some situations, both ``GPU``s can be left enabled, but creating a configuration file that only uses the main `GPU` in the `Device` section is enough to make such a system functional.
[[x-config-video-cards-other]]
Other Video Cards::
Drivers for some less-common video cards can be found in the [.filename]#x11-drivers# directory of the Ports Collection.
+
Cards that are not supported by a specific driver might still be usable with the package:x11-drivers/xf86-video-vesa[] driver. This driver is installed by package:x11/xorg[]. It can also be installed manually as package:x11-drivers/xf86-video-vesa[]. Xorg attempts to use this driver when a specific driver is not found for the video card.
+
package:x11-drivers/xf86-video-scfb[] is a similar nonspecialized video driver that works on many `UEFI` and ARM(R) computers.
[[x-config-video-cards-file]]
Setting the Video Driver in a File::
To set the Intel(R) driver in a configuration file:
+
[[x-config-video-cards-file-intel]]
.Select Intel(R) Video Driver in a File
[example]
====
[.filename]#/usr/local/etc/X11/xorg.conf.d/driver-intel.conf#
[.programlisting]
....
Section "Device"
Identifier "Card0"
Driver "intel"
# BusID "PCI:1:0:0"
EndSection
....
If more than one video card is present, the `BusID` identifier can be uncommented and set to select the desired card. A list of video card bus ``ID``s can be displayed with `pciconf -lv | grep -B3 display`.
====
+
To set the Radeon driver in a configuration file:
+
[[x-config-video-cards-file-radeon]]
.Select Radeon Video Driver in a File
[example]
====
[.filename]#/usr/local/etc/X11/xorg.conf.d/driver-radeon.conf#
[.programlisting]
....
Section "Device"
Identifier "Card0"
Driver "radeon"
EndSection
....
====
+
To set the `VESA` driver in a configuration file:
+
[[x-config-video-cards-file-vesa]]
.Select `VESA` Video Driver in a File
[example]
====
[.filename]#/usr/local/etc/X11/xorg.conf.d/driver-vesa.conf#
[.programlisting]
....
Section "Device"
Identifier "Card0"
Driver "vesa"
EndSection
....
====
+
To set the `scfb` driver for use with a `UEFI` or ARM(R) computer:
+
[[x-config-video-cards-file-scfb]]
.Select `scfb` Video Driver in a File
[example]
====
[.filename]#/usr/local/etc/X11/xorg.conf.d/driver-scfb.conf#
[.programlisting]
....
Section "Device"
Identifier "Card0"
Driver "scfb"
EndSection
....
====
[[x-config-monitors]]
=== Monitors
Almost all monitors support the Extended Display Identification Data standard (`EDID`). Xorg uses `EDID` to communicate with the monitor and detect the supported resolutions and refresh rates. Then it selects the most appropriate combination of settings to use with that monitor.
Other resolutions supported by the monitor can be chosen by setting the desired resolution in configuration files, or after the X server has been started with man:xrandr[1].
[[x-config-monitors-xrandr]]
Using man:xrandr[1]::
Run man:xrandr[1] without any parameters to see a list of video outputs and detected monitor modes:
+
[source,shell]
....
% xrandr
Screen 0: minimum 320 x 200, current 3000 x 1920, maximum 8192 x 8192
DVI-0 connected primary 1920x1200+1080+0 (normal left inverted right x axis y axis) 495mm x 310mm
1920x1200 59.95*+
1600x1200 60.00
1280x1024 85.02 75.02 60.02
1280x960 60.00
1152x864 75.00
1024x768 85.00 75.08 70.07 60.00
832x624 74.55
800x600 75.00 60.32
640x480 75.00 60.00
720x400 70.08
DisplayPort-0 disconnected (normal left inverted right x axis y axis)
HDMI-0 disconnected (normal left inverted right x axis y axis)
....
+
This shows that the `DVI-0` output is being used to display a screen resolution of 1920x1200 pixels at a refresh rate of about 60 Hz. Monitors are not attached to the `DisplayPort-0` and `HDMI-0` connectors.
+
Any of the other display modes can be selected with man:xrandr[1]. For example, to switch to 1280x1024 at 60 Hz:
+
[source,shell]
....
% xrandr --output DVI-0 --mode 1280x1024 --rate 60
....
+
A common task is using the external video output on a notebook computer for a video projector.
+
The type and quantity of output connectors varies between devices, and the name given to each output varies from driver to driver. What one driver calls `HDMI-1`, another might call `HDMI1`. So the first step is to run man:xrandr[1] to list all the available outputs:
+
[source,shell]
....
% xrandr
Screen 0: minimum 320 x 200, current 1366 x 768, maximum 8192 x 8192
LVDS1 connected 1366x768+0+0 (normal left inverted right x axis y axis) 344mm x 193mm
1366x768 60.04*+
1024x768 60.00
800x600 60.32 56.25
640x480 59.94
VGA1 connected (normal left inverted right x axis y axis)
1280x1024 60.02 + 75.02
1280x960 60.00
1152x864 75.00
1024x768 75.08 70.07 60.00
832x624 74.55
800x600 72.19 75.00 60.32 56.25
640x480 75.00 72.81 66.67 60.00
720x400 70.08
HDMI1 disconnected (normal left inverted right x axis y axis)
DP1 disconnected (normal left inverted right x axis y axis)
....
+
Four outputs were found: the built-in panel `LVDS1`, and external `VGA1`, `HDMI1`, and `DP1` connectors.
+
The projector has been connected to the `VGA1` output. man:xrandr[1] is now used to set that output to the native resolution of the projector and add the additional space to the right side of the desktop:
+
[source,shell]
....
% xrandr --output VGA1 --auto --right-of LVDS1
....
+
`--auto` chooses the resolution and refresh rate detected by `EDID`. If the resolution is not correctly detected, a fixed value can be given with `--mode` instead of the `--auto` statement. For example, most projectors can be used with a 1024x768 resolution, which is set with `--mode 1024x768`.
+
man:xrandr[1] is often run from [.filename]#.xinitrc# to set the appropriate mode when X starts.
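+
A minimal sketch of such an [.filename]#~/.xinitrc#, reusing the outputs from the example above (the window manager line is only illustrative):
+
[.programlisting]
....
xrandr --output VGA1 --auto --right-of LVDS1
exec twm
....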
[[x-config-monitors-files]]
Setting Monitor Resolution in a File::
To set a screen resolution of 1024x768 in a configuration file:
+
.Set Screen Resolution in a File
[example]
====
[.filename]#/usr/local/etc/X11/xorg.conf.d/screen-resolution.conf#
[.programlisting]
....
Section "Screen"
Identifier "Screen0"
Device "Card0"
SubSection "Display"
Modes "1024x768"
EndSubSection
EndSection
....
====
+
The few monitors that do not have `EDID` can be configured by setting `HorizSync` and `VertRefresh` to the range of frequencies supported by the monitor.
+
.Manually Setting Monitor Frequencies
[example]
====
[.filename]#/usr/local/etc/X11/xorg.conf.d/monitor0-freq.conf#
[.programlisting]
....
Section "Monitor"
Identifier "Monitor0"
HorizSync 30-83 # kHz
VertRefresh 50-76 # Hz
EndSection
....
====
[[x-config-input]]
=== Input Devices
[[x-config-input-keyboard]]
==== Keyboards
[[x-config-input-keyboard-layout]]
Keyboard Layout::
The standardized location of keys on a keyboard is called a _layout_. Layouts and other adjustable parameters are listed in man:xkeyboard-config[7].
+
A United States layout is the default. To select an alternate layout, set the `XkbLayout` and `XkbVariant` options in an `InputClass`. This will be applied to all input devices that match the class.
+
This example selects a French keyboard layout.
+
.Setting a Keyboard Layout
[example]
====
[.filename]#/usr/local/etc/X11/xorg.conf.d/keyboard-fr.conf#
[.programlisting]
....
Section "InputClass"
Identifier "KeyboardDefaults"
MatchIsKeyboard "on"
Option "XkbLayout" "fr"
EndSection
....
====
+
.Setting Multiple Keyboard Layouts
[example]
====
Set United States, Spanish, and Ukrainian keyboard layouts. Cycle through these layouts by pressing kbd:[Alt+Shift]. package:x11/xxkb[] or package:x11/sbxkb[] can be used for improved layout switching control and current layout indicators.
[.filename]#/usr/local/etc/X11/xorg.conf.d/kbd-layout-multi.conf#
[.programlisting]
....
Section "InputClass"
Identifier "All Keyboards"
MatchIsKeyboard "yes"
Option "XkbLayout" "us, es, ua"
EndSection
....
====
[[x-config-input-keyboard-zap]]
Closing Xorg From the Keyboard::
X can be closed with a combination of keys. By default, that key combination is not set because it conflicts with keyboard commands for some applications. Enabling this option requires changes to the keyboard `InputDevice` section:
+
.Enabling Keyboard Exit from X
[example]
====
[.filename]#/usr/local/etc/X11/xorg.conf.d/keyboard-zap.conf#
[.programlisting]
....
Section "InputClass"
Identifier "KeyboardDefaults"
MatchIsKeyboard "on"
Option "XkbOptions" "terminate:ctrl_alt_bksp"
EndSection
....
====
[[x11-input-mice]]
==== Mice and Pointing Devices
[IMPORTANT]
====
If using package:xorg-server[] 1.20.8 or later under FreeBSD {rel121-current} and not using man:moused[8], add `kern.evdev.rcpt_mask=12` to [.filename]#/etc/sysctl.conf#.
====
Many mouse parameters can be adjusted with configuration options. See man:mousedrv[4] for a full list.
[[x11-input-mice-buttons]]
Mouse Buttons::
The number of buttons on a mouse can be set in the mouse `InputDevice` section of [.filename]#xorg.conf#. To set the number of buttons to 7:
+
.Setting the Number of Mouse Buttons
[example]
====
[.filename]#/usr/local/etc/X11/xorg.conf.d/mouse0-buttons.conf#
[.programlisting]
....
Section "InputDevice"
Identifier "Mouse0"
Option "Buttons" "7"
EndSection
....
====
[[x-config-manual-configuration]]
=== Manual Configuration
In some cases, Xorg autoconfiguration does not work with particular hardware, or a different configuration is desired. For these cases, a custom configuration file can be created.
[WARNING]
====
Do not create manual configuration files unless required. Unnecessary manual configuration can prevent proper operation.
====
A configuration file can be generated by Xorg based on the detected hardware. This file is often a useful starting point for custom configurations.
Generating an [.filename]#xorg.conf#:
[source,shell]
....
# Xorg -configure
....
The configuration file is saved to [.filename]#/root/xorg.conf.new#. Make any changes desired, then test that file (using `-retro` so there is a visible background) with:
[source,shell]
....
# Xorg -retro -config /root/xorg.conf.new
....
After the new configuration has been adjusted and tested, it can be split into smaller files in the normal location, [.filename]#/usr/local/etc/X11/xorg.conf.d/#.
[[x-fonts]]
== Using Fonts in Xorg
[[type1]]
=== Type1 Fonts
The default fonts that ship with Xorg are less than ideal for typical desktop publishing applications. Large presentation fonts show up jagged and unprofessional looking, and small fonts are almost completely unintelligible. However, there are several free, high quality Type1 (PostScript(R)) fonts available which can be readily used with Xorg. For instance, the URW font collection (package:x11-fonts/urwfonts[]) includes high quality versions of standard type1 fonts (Times Roman(TM), Helvetica(TM), Palatino(TM) and others). The Freefonts collection (package:x11-fonts/freefonts[]) includes many more fonts, but most of them are intended for use in graphics software such as the Gimp, and are not complete enough to serve as screen fonts. In addition, Xorg can be configured to use TrueType(R) fonts with a minimum of effort. For more details on this, see the man:X[7] manual page or <<truetype>>.
To install the above Type1 font collections from binary packages, run the following commands:
[source,shell]
....
# pkg install urwfonts
....
Alternatively, to build from the Ports Collection, run the following commands:
[source,shell]
....
# cd /usr/ports/x11-fonts/urwfonts
# make install clean
....
And likewise with the freefont or other collections. To have the X server detect these fonts, add an appropriate line to the X server configuration file ([.filename]#/etc/X11/xorg.conf#), which reads:
[.programlisting]
....
FontPath "/usr/local/share/fonts/urwfonts/"
....
Alternatively, at the command line in the X session run:
[source,shell]
....
% xset fp+ /usr/local/share/fonts/urwfonts
% xset fp rehash
....
This will work but will be lost when the X session is closed, unless it is added to the startup file ([.filename]#~/.xinitrc# for a normal `startx` session, or [.filename]#~/.xsession# when logging in through a graphical login manager like XDM). A third way is to use the new [.filename]#/usr/local/etc/fonts/local.conf# as demonstrated in <<antialias>>.
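A minimal sketch of the relevant [.filename]#~/.xinitrc# lines, assuming the URW fonts installed above:
[.programlisting]
....
xset fp+ /usr/local/share/fonts/urwfonts
xset fp rehash
....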
[[truetype]]
=== TrueType(R) Fonts
Xorg has built in support for rendering TrueType(R) fonts. There are two different modules that can enable this functionality. The freetype module is used in this example because it is more consistent with the other font rendering back-ends. To enable the freetype module just add the following line to the `"Module"` section of [.filename]#/etc/X11/xorg.conf#.
[.programlisting]
....
Load "freetype"
....
Now make a directory for the TrueType(R) fonts (for example, [.filename]#/usr/local/share/fonts/TrueType#) and copy all of the TrueType(R) fonts into this directory. Keep in mind that TrueType(R) fonts cannot be directly taken from an Apple(R) Mac(R); they must be in UNIX(R)/MS-DOS(R)/Windows(R) format for use by Xorg. Once the files have been copied into this directory, use mkfontscale to create a [.filename]#fonts.dir#, so that the X font renderer knows that these new files have been installed. `mkfontscale` can be installed as a package:
[source,shell]
....
# pkg install mkfontscale
....
Then create an index of X font files in a directory:
[source,shell]
....
# cd /usr/local/share/fonts/TrueType
# mkfontscale
....
Now add the TrueType(R) directory to the font path. This is just the same as described in <<type1>>:
[source,shell]
....
% xset fp+ /usr/local/share/fonts/TrueType
% xset fp rehash
....
or add a `FontPath` line to [.filename]#xorg.conf#.
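For example, an illustrative `FontPath` entry for the directory created above:
[.programlisting]
....
FontPath "/usr/local/share/fonts/TrueType/"
....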
Gimp, LibreOffice, and all of the other X applications should now recognize the installed TrueType(R) fonts. Extremely small fonts (as with text in a high resolution display on a web page) and extremely large fonts (within LibreOffice) will look much better now.
[[antialias]]
=== Anti-Aliased Fonts
All fonts in Xorg that are found in [.filename]#/usr/local/share/fonts/# and [.filename]#~/.fonts/# are automatically made available for anti-aliasing to Xft-aware applications. Most recent applications are Xft-aware, including KDE, GNOME, and Firefox.
To control which fonts are anti-aliased, or to configure anti-aliasing properties, create (or edit, if it already exists) the file [.filename]#/usr/local/etc/fonts/local.conf#. Several advanced features of the Xft font system can be tuned using this file; this section describes only some simple possibilities. For more details, please see man:fonts-conf[5].
This file must be in XML format. Pay careful attention to case, and make sure all tags are properly closed. The file begins with the usual XML header followed by a DOCTYPE definition, and then the `<fontconfig>` tag:
[.programlisting]
....
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
....
As previously stated, all fonts in [.filename]#/usr/local/share/fonts/# as well as [.filename]#~/.fonts/# are already made available to Xft-aware applications. To add another directory outside of these two directory trees, add a line like this to [.filename]#/usr/local/etc/fonts/local.conf#:
[.programlisting]
....
<dir>/path/to/my/fonts</dir>
....
After adding new fonts, and especially new font directories, rebuild the font caches:
[source,shell]
....
# fc-cache -f
....
Anti-aliasing makes borders slightly fuzzy, which makes very small text more readable and removes "staircases" from large text, but can cause eyestrain if applied to normal text. To exclude font sizes smaller than 14 point from anti-aliasing, include these lines:
[.programlisting]
....
<match target="font">
<test name="size" compare="less">
<double>14</double>
</test>
<edit name="antialias" mode="assign">
<bool>false</bool>
</edit>
</match>
<match target="font">
<test name="pixelsize" compare="less" qual="any">
<double>14</double>
</test>
<edit mode="assign" name="antialias">
<bool>false</bool>
</edit>
</match>
....
Spacing for some monospaced fonts might also be inappropriate with anti-aliasing. This seems to be an issue with KDE, in particular. One possible fix is to force the spacing for such fonts to be 100. Add these lines:
[.programlisting]
....
<match target="pattern" name="family">
<test qual="any" name="family">
<string>fixed</string>
</test>
<edit name="family" mode="assign">
<string>mono</string>
</edit>
</match>
<match target="pattern" name="family">
<test qual="any" name="family">
<string>console</string>
</test>
<edit name="family" mode="assign">
<string>mono</string>
</edit>
</match>
....
(this aliases the other common names for fixed fonts as `"mono"`), and then add:
[.programlisting]
....
<match target="pattern" name="family">
<test qual="any" name="family">
<string>mono</string>
</test>
<edit name="spacing" mode="assign">
<int>100</int>
</edit>
</match>
....
Certain fonts, such as Helvetica, may have a problem when anti-aliased. Usually this manifests itself as a font that seems cut in half vertically. At worst, it may cause applications to crash. To avoid this, consider adding the following to [.filename]#local.conf#:
[.programlisting]
....
<match target="pattern" name="family">
<test qual="any" name="family">
<string>Helvetica</string>
</test>
<edit name="family" mode="assign">
<string>sans-serif</string>
</edit>
</match>
....
After editing [.filename]#local.conf#, make certain to end the file with the `</fontconfig>` tag. Not doing this will cause changes to be ignored.
Users can add personalized settings by creating their own [.filename]#~/.config/fontconfig/fonts.conf#. This file uses the same `XML` format described above.
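A minimal sketch of such a per-user file, assuming only one extra font directory (the path [.filename]#~/my-fonts# is hypothetical) needs to be added:
[.programlisting]
....
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <dir>~/my-fonts</dir>
</fontconfig>
....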
One last point: with an LCD screen, sub-pixel sampling may be desired. This basically treats the (horizontally separated) red, green and blue components separately to improve the horizontal resolution; the results can be dramatic. To enable this, add these lines somewhere in [.filename]#local.conf#:
[.programlisting]
....
<match target="font">
<test qual="all" name="rgba">
<const>unknown</const>
</test>
<edit name="rgba" mode="assign">
<const>rgb</const>
</edit>
</match>
....
[NOTE]
====
Depending on the sort of display, `rgb` may need to be changed to `bgr`, `vrgb` or `vbgr`: experiment and see which works best.
====
[[x-xdm]]
== The X Display Manager
Xorg provides an X Display Manager, XDM, which can be used for login session management. XDM provides a graphical interface for choosing which display server to connect to and for entering authorization information such as a login and password combination.
This section demonstrates how to configure the X Display Manager on FreeBSD. Some desktop environments provide their own graphical login manager. Refer to <<x11-wm-gnome>> for instructions on how to configure the GNOME Display Manager and <<x11-wm-kde>> for instructions on how to configure the KDE Display Manager.
=== Configuring XDM
To install XDM, use the package:x11/xdm[] package or port. Once installed, XDM can be configured to run when the machine boots up by adding the following line to [.filename]#/etc/rc.conf#:
[.programlisting]
....
xdm_enable="YES"
....
XDM will run on the ninth virtual terminal by default.
The XDM configuration directory is located in [.filename]#/usr/local/etc/X11/xdm#. This directory contains several files used to change the behavior and appearance of XDM, as well as a few scripts and programs used to set up the desktop when XDM is running. <<xdm-config-files>> summarizes the function of each of these files. The exact syntax and usage of these files is described in man:xdm[1].
[[xdm-config-files]]
.XDM Configuration Files
[cols="1,1", frame="none", options="header"]
|===
| File
| Description
|[.filename]#Xaccess#
|The protocol for connecting to XDM is called the X Display Manager Connection Protocol (`XDMCP`). This file is a client authorization ruleset for controlling `XDMCP` connections from remote machines. By default, this file does not allow any remote clients to connect.
|[.filename]#Xresources#
|This file controls the look and feel of the XDM display chooser and login screens. The default configuration is a simple rectangular login window with the hostname of the machine displayed at the top in a large font and "Login:" and "Password:" prompts below. The format of this file is identical to the app-defaults file described in the Xorg documentation.
|[.filename]#Xservers#
|The list of local and remote displays the chooser should provide as login choices.
|[.filename]#Xsession#
|Default session script for logins which is run by XDM after a user has logged in. This points to a customized session script in [.filename]#~/.xsession#.
|[.filename]#Xsetup_#*
|Script to automatically launch applications before displaying the chooser or login interfaces. There is a script for each display being used, named [.filename]#Xsetup_*#, where `*` is the local display number. Typically these scripts run one or two programs in the background such as `xconsole`.
|[.filename]#xdm-config#
|Global configuration for all displays running on this machine.
|[.filename]#xdm-errors#
|Contains errors generated by the server program. If a display that XDM is trying to start hangs, look at this file for error messages. These messages are also written to the user's [.filename]#~/.xsession-errors# on a per-session basis.
|[.filename]#xdm-pid#
|The running process `ID` of XDM.
|===
=== Configuring Remote Access
By default, only users on the same system can login using XDM. To enable users on other systems to connect to the display server, edit the access control rules and enable the connection listener.
To configure XDM to listen for any remote connection, comment out the `DisplayManager.requestPort` line in [.filename]#/usr/local/etc/X11/xdm/xdm-config# by putting a `!` in front of it:
[source,shell]
....
! SECURITY: do not listen for XDMCP or Chooser requests
! Comment out this line if you want to manage X terminals with xdm
DisplayManager.requestPort: 0
....
Save the edits and restart XDM. To restrict remote access, look at the example entries in [.filename]#/usr/local/etc/X11/xdm/Xaccess# and refer to man:xdm[1] for further information.
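Assuming XDM was enabled in [.filename]#/etc/rc.conf# as shown above, it can be restarted with its man:service[8] script:
[source,shell]
....
# service xdm restart
....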
[[x11-wm]]
== Desktop Environments
This section describes how to install three popular desktop environments on a FreeBSD system. A desktop environment can range from a simple window manager to a complete suite of desktop applications. Over a hundred desktop environments are available in the [.filename]#x11-wm# category of the Ports Collection.
[[x11-wm-gnome]]
=== GNOME
GNOME is a user-friendly desktop environment. It includes a panel for starting applications and displaying status, a desktop, a set of tools and applications, and a set of conventions that make it easy for applications to cooperate and be consistent with each other. More information regarding GNOME on FreeBSD can be found at https://www.FreeBSD.org/gnome[https://www.FreeBSD.org/gnome]. That web site contains additional documentation about installing, configuring, and managing GNOME on FreeBSD.
This desktop environment can be installed from a package:
[source,shell]
....
# pkg install gnome3
....
To instead build GNOME from ports, use the following command. GNOME is a large application and will take some time to compile, even on a fast computer.
[source,shell]
....
# cd /usr/ports/x11/gnome3
# make install clean
....
GNOME requires [.filename]#/proc# to be mounted. Add this line to [.filename]#/etc/fstab# to mount this file system automatically during system startup:
[.programlisting]
....
proc /proc procfs rw 0 0
....
GNOME uses D-Bus for a message bus and hardware abstraction. These applications are automatically installed as dependencies of GNOME. Enable them in [.filename]#/etc/rc.conf# so they will be started when the system boots:
[.programlisting]
....
dbus_enable="YES"
....
After installation, configure Xorg to start GNOME. The easiest way to do this is to enable the GNOME Display Manager, GDM, which is installed as part of the GNOME package or port. It can be enabled by adding this line to [.filename]#/etc/rc.conf#:
[.programlisting]
....
gdm_enable="YES"
....
It is often desirable to also start all GNOME services. To achieve this, add a second line to [.filename]#/etc/rc.conf#:
[.programlisting]
....
gnome_enable="YES"
....
GDM will start automatically when the system boots.
A second method for starting GNOME is to type `startx` from the command-line after configuring [.filename]#~/.xinitrc#. If this file already exists, replace the line that starts the current window manager with one that starts [.filename]#/usr/local/bin/gnome-session#. If this file does not exist, create it with this command:
[source,shell]
....
% echo "exec /usr/local/bin/gnome-session" > ~/.xinitrc
....
A third method is to use XDM as the display manager. In this case, create an executable [.filename]#~/.xsession#:
[source,shell]
....
% echo "exec /usr/local/bin/gnome-session" > ~/.xsession
....
[[x11-wm-kde]]
=== KDE
KDE is another easy-to-use desktop environment. This desktop provides a suite of applications with a consistent look and feel, a standardized menu and toolbars, keybindings, color-schemes, internationalization, and a centralized, dialog-driven desktop configuration. More information on KDE can be found at http://www.kde.org/[http://www.kde.org/]. For FreeBSD-specific information, consult http://freebsd.kde.org/[http://freebsd.kde.org].
To install the KDE package, type:
[source,shell]
....
# pkg install x11/kde5
....
To instead build the KDE port, use the following command. Installing the port will provide a menu for selecting which components to install. KDE is a large application and will take some time to compile, even on a fast computer.
[source,shell]
....
# cd /usr/ports/x11/kde5
# make install clean
....
KDE requires [.filename]#/proc# to be mounted. Add this line to [.filename]#/etc/fstab# to mount this file system automatically during system startup:
[.programlisting]
....
proc /proc procfs rw 0 0
....
KDE uses D-Bus for a message bus and hardware abstraction. These applications are automatically installed as dependencies of KDE. Enable them in [.filename]#/etc/rc.conf# so they will be started when the system boots:
[.programlisting]
....
dbus_enable="YES"
....
Since KDE Plasma 5, the KDE Display Manager, KDM, is no longer developed. A possible replacement is SDDM. To install it, type:
[source,shell]
....
# pkg install x11/sddm
....
Add this line to [.filename]#/etc/rc.conf#:
[.programlisting]
....
sddm_enable="YES"
....
A second method for launching KDE Plasma is to type `startx` from the command line. For this to work, the following line is needed in [.filename]#~/.xinitrc#:
[.programlisting]
....
exec ck-launch-session startplasma-x11
....
A third method for starting KDE Plasma is through XDM. To do so, create an executable [.filename]#~/.xsession# as follows:
[source,shell]
....
% echo "exec ck-launch-session startplasma-x11" > ~/.xsession
....
Once KDE Plasma is started, refer to its built-in help system for more information on how to use its various menus and applications.
[[x11-wm-xfce]]
=== Xfce
Xfce is a desktop environment based on the GTK+ toolkit used by GNOME. However, it is more lightweight and provides a simple, efficient, easy-to-use desktop. It is fully configurable, has a main panel with menus, applets, and application launchers, provides a file manager and sound manager, and is themeable. Since it is fast, light, and efficient, it is ideal for older or slower machines with memory limitations. More information on Xfce can be found at http://www.xfce.org/[http://www.xfce.org].
To install the Xfce package:
[source,shell]
....
# pkg install xfce
....
Alternatively, to build the port:
[source,shell]
....
# cd /usr/ports/x11-wm/xfce4
# make install clean
....
Xfce uses D-Bus for a message bus. This application is automatically installed as a dependency of Xfce. Enable it in [.filename]#/etc/rc.conf# so it will be started when the system boots:
[.programlisting]
....
dbus_enable="YES"
....
Unlike GNOME or KDE, Xfce does not provide its own login manager. In order to start Xfce from the command line by typing `startx`, first create [.filename]#~/.xinitrc# with this command:
[source,shell]
....
% echo ". /usr/local/etc/xdg/xfce4/xinitrc" > ~/.xinitrc
....
An alternate method is to use XDM. To configure this method, create an executable [.filename]#~/.xsession#:
[source,shell]
....
% echo ". /usr/local/etc/xdg/xfce4/xinitrc" > ~/.xsession
....
[[x-compiz-fusion]]
== Installing Compiz Fusion
One way to make using a desktop computer more pleasant is with nice 3D effects.
Installing the Compiz Fusion package is easy, but configuring it requires a few steps that are not described in the port's documentation.
[[x-compiz-video-card]]
=== Setting up the FreeBSD nVidia Driver
Desktop effects can cause quite a load on the graphics card. For an nVidia-based graphics card, the proprietary driver is required for good performance. Users of other graphics cards can skip this section and continue with the [.filename]#xorg.conf# configuration.
To determine which nVidia driver is needed see the link:{faq}#idp59950544[FAQ question on the subject].
Having determined the correct driver to use for your card, installation is as simple as installing any other package.
For example, to install the latest driver:
[source,shell]
....
# pkg install x11/nvidia-driver
....
The driver will create a kernel module, which needs to be loaded at system startup. Add the following line to [.filename]#/boot/loader.conf#:
[.programlisting]
....
nvidia_load="YES"
....
[NOTE]
====
To immediately load the kernel module into the running kernel issue a command like `kldload nvidia`. However, it has been noted that some versions of Xorg will not function properly if the driver is not loaded at boot time. After editing [.filename]#/boot/loader.conf#, a reboot is recommended.
====
With the kernel module loaded, you normally only need to change a single line in [.filename]#xorg.conf# to enable the proprietary driver:
Find the following line in [.filename]#/etc/X11/xorg.conf#:
[.programlisting]
....
Driver "nv"
....
and change it to:
[.programlisting]
....
Driver "nvidia"
....
Start the GUI as usual, and you should be greeted by the nVidia splash. Everything should work as usual.
[[xorg-configuration]]
=== Configuring `xorg.conf` for Desktop Effects
To enable Compiz Fusion, [.filename]#/etc/X11/xorg.conf# needs to be modified:
Add the following section to enable composite effects:
[.programlisting]
....
Section "Extensions"
Option "Composite" "Enable"
EndSection
....
Locate the "Screen" section which should look similar to the one below:
[.programlisting]
....
Section "Screen"
Identifier "Screen0"
Device "Card0"
Monitor "Monitor0"
...
....
and add the following two lines (after "Monitor" will do):
[.programlisting]
....
DefaultDepth 24
Option "AddARGBGLXVisuals" "True"
....
Locate the "Subsection" that refers to the screen resolution that you wish to use. For example, if you wish to use 1280x1024, locate the section that follows. If the desired resolution does not appear in any subsection, you may add the relevant entry by hand:
[.programlisting]
....
SubSection "Display"
Viewport 0 0
Modes "1280x1024"
EndSubSection
....
A color depth of 24 bits is needed for desktop composition; change the above subsection to:
[.programlisting]
....
SubSection "Display"
Viewport 0 0
Depth 24
Modes "1280x1024"
EndSubSection
....
Finally, confirm that the "glx" and "extmod" modules are loaded in the "Module" section:
[.programlisting]
....
Section "Module"
Load "extmod"
Load "glx"
...
....
The preceding can be done automatically with package:x11/nvidia-xconfig[] by running (as root):
[source,shell]
....
# nvidia-xconfig --add-argb-glx-visuals
# nvidia-xconfig --composite
# nvidia-xconfig --depth=24
....
[[compiz-fusion]]
=== Installing and Configuring Compiz Fusion
Installing Compiz Fusion is as simple as any other package:
[source,shell]
....
# pkg install x11-wm/compiz-fusion
....
When the installation is finished, start your graphical desktop and, at a terminal, enter the following commands (as a normal user):
[source,shell]
....
% compiz --replace --sm-disable --ignore-desktop-hints ccp &
% emerald --replace &
....
Your screen will flicker for a few seconds, as your window manager (e.g., Metacity if you are using GNOME) is replaced by Compiz Fusion. Emerald takes care of the window decorations (i.e., close, minimize, maximize buttons, title bars and so on).
You may convert this to a trivial script and have it run at startup automatically (e.g., by adding to "Sessions" in a GNOME desktop):
[.programlisting]
....
#! /bin/sh
compiz --replace --sm-disable --ignore-desktop-hints ccp &
emerald --replace &
....
Save this in your home directory as, for example, [.filename]#start-compiz# and make it executable:
[source,shell]
....
% chmod +x ~/start-compiz
....
Then use the GUI to add it to [.guimenuitem]#Startup Programs# (located in [.guimenuitem]#System#, [.guimenuitem]#Preferences#, [.guimenuitem]#Sessions# on a GNOME desktop).
To actually select all the desired effects and their settings, execute (again as a normal user) the Compiz Config Settings Manager:
[source,shell]
....
% ccsm
....
[NOTE]
====
In GNOME, this can also be found in the [.guimenuitem]#System#, [.guimenuitem]#Preferences# menu.
====
If you have selected "gconf support" during the build, you will also be able to view these settings using `gconf-editor` under `apps/compiz`.
[[x11-troubleshooting]]
== Troubleshooting
If the mouse does not work, you will need to first configure it before proceeding. In recent Xorg versions, the `InputDevice` sections in [.filename]#xorg.conf# are ignored in favor of the autodetected devices. To restore the old behavior, add the following line to the `ServerLayout` or `ServerFlags` section of this file:
[.programlisting]
....
Option "AutoAddDevices" "false"
....
Input devices may then be configured as in previous versions, along with any other options needed (e.g., keyboard layout switching).
[NOTE]
====
As previously explained, the hald daemon will, by default, automatically detect your keyboard. There is a chance that your keyboard layout or model will not be correct; desktop environments like GNOME, KDE, or Xfce provide tools to configure the keyboard. However, it is possible to set the keyboard properties directly, either with the help of the man:setxkbmap[1] utility or with a hald configuration rule.
For example, to use a 102-key PC keyboard with a French layout, create a keyboard configuration file for hald called [.filename]#x11-input.fdi# and save it in the [.filename]#/usr/local/etc/hal/fdi/policy# directory. This file should contain the following lines:
[.programlisting]
....
<?xml version="1.0" encoding="utf-8"?>
<deviceinfo version="0.2">
<device>
<match key="info.capabilities" contains="input.keyboard">
<merge key="input.x11_options.XkbModel" type="string">pc102</merge>
<merge key="input.x11_options.XkbLayout" type="string">fr</merge>
</match>
</device>
</deviceinfo>
....
If this file already exists, just add the lines regarding the keyboard configuration to your existing file.
You will have to reboot the machine to force hald to read this file.
It is possible to do the same configuration from an X terminal or a script with this command line:
[source,shell]
....
% setxkbmap -model pc102 -layout fr
....
[.filename]#/usr/local/share/X11/xkb/rules/base.lst# lists the various keyboard models, layouts, and options available.
====
The [.filename]#xorg.conf.new# configuration file may now be tuned to taste. Open the file in a text editor such as man:emacs[1] or man:ee[1]. If the monitor is an older or unusual model that does not support autodetection of sync frequencies, those settings can be added to [.filename]#xorg.conf.new# under the `"Monitor"` section:
[.programlisting]
....
Section "Monitor"
Identifier "Monitor0"
VendorName "Monitor Vendor"
ModelName "Monitor Model"
HorizSync 30-107
VertRefresh 48-120
EndSection
....
Most monitors support sync frequency autodetection, making manual entry of these values unnecessary. For the few monitors that do not support autodetection, avoid potential damage by only entering values provided by the manufacturer.
X allows DPMS (Energy Star) features to be used with capable monitors. The man:xset[1] program controls the time-outs and can force standby, suspend, or off modes. If you wish to enable DPMS features for your monitor, you must add the following line to the monitor section:
[.programlisting]
....
Option "DPMS"
....
While the [.filename]#xorg.conf.new# configuration file is still open in an editor, select the default resolution and color depth desired. This is defined in the `"Screen"` section:
[.programlisting]
....
Section "Screen"
Identifier "Screen0"
Device "Card0"
Monitor "Monitor0"
DefaultDepth 24
SubSection "Display"
Viewport 0 0
Depth 24
Modes "1024x768"
EndSubSection
EndSection
....
The `DefaultDepth` keyword describes the color depth to run at by default. This can be overridden with the `-depth` command line switch to man:Xorg[1]. The `Modes` keyword describes the resolution to run at for the given color depth. Note that only VESA standard modes are supported as defined by the target system's graphics hardware. In the example above, the default color depth is twenty-four bits per pixel. At this color depth, the accepted resolution is 1024 by 768 pixels.
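For example, assuming man:startx[1] is used to start the session, a different color depth can be requested for a single session by passing the switch through to the server:
[source,shell]
....
% startx -- -depth 16
....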
Finally, write the configuration file and test it using the test mode given above.
[NOTE]
====
One of the tools available to assist during the troubleshooting process is the set of Xorg log files, which contain information on each device that the Xorg server attaches to. Xorg log file names follow the format [.filename]#/var/log/Xorg.0.log#. The exact name of the log can vary from [.filename]#Xorg.0.log# to [.filename]#Xorg.8.log# and so forth.
====
If all is well, the configuration file needs to be installed in a common location where man:Xorg[1] can find it. This is typically [.filename]#/etc/X11/xorg.conf# or [.filename]#/usr/local/etc/X11/xorg.conf#.
[source,shell]
....
# cp xorg.conf.new /etc/X11/xorg.conf
....
The Xorg configuration process is now complete. Xorg may now be started with the man:startx[1] utility. The Xorg server may also be started with the use of man:xdm[1].
=== Configuration with Intel(R) `i810` Graphics Chipsets
Configuration with Intel(R) i810 integrated chipsets requires the [.filename]#agpgart# AGP programming interface for Xorg to drive the card. See the man:agp[4] driver manual page for more information.
This will allow configuration of the hardware as for any other graphics board. Note that on systems without the man:agp[4] driver compiled into the kernel, trying to load the module with man:kldload[8] will not work. This driver has to be in the kernel at boot time, either by being compiled in or by using [.filename]#/boot/loader.conf#.
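For example, a line like the following in [.filename]#/boot/loader.conf# will load the module at boot time:
[.programlisting]
....
agp_load="YES"
....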
=== Adding a Widescreen Flatpanel to the Mix
This section assumes a bit of advanced configuration knowledge. If attempts to use the standard configuration tools above have not resulted in a working configuration, there is information enough in the log files to be of use in getting the setup working. Use of a text editor will be necessary.
Current widescreen (WSXGA, WSXGA+, WUXGA, WXGA, WXGA+, et al.) formats use 16:10 and 10:9 aspect ratios, which can be problematic. Examples of some common screen resolutions for 16:10 aspect ratios are:
* 2560x1600
* 1920x1200
* 1680x1050
* 1440x900
* 1280x800
At some point, it will be as easy as adding one of these resolutions as a possible `Mode` in the `Section "Screen"` as such:
[.programlisting]
....
Section "Screen"
Identifier "Screen0"
Device "Card0"
Monitor "Monitor0"
DefaultDepth 24
SubSection "Display"
Viewport 0 0
Depth 24
Modes "1680x1050"
EndSubSection
EndSection
....
Xorg is smart enough to pull the resolution information from the widescreen display via I2C/DDC, so it knows what frequencies and resolutions the monitor can handle.
If those `ModeLines` do not exist in the drivers, one might need to give Xorg a little hint. Using [.filename]#/var/log/Xorg.0.log# one can extract enough information to manually create a `ModeLine` that will work. Simply look for information resembling this:
[.programlisting]
....
(II) MGA(0): Supported additional Video Mode:
(II) MGA(0): clock: 146.2 MHz Image Size: 433 x 271 mm
(II) MGA(0): h_active: 1680 h_sync: 1784 h_sync_end 1960 h_blank_end 2240 h_border: 0
(II) MGA(0): v_active: 1050 v_sync: 1053 v_sync_end 1059 v_blanking: 1089 v_border: 0
(II) MGA(0): Ranges: V min: 48 V max: 85 Hz, H min: 30 H max: 94 kHz, PixClock max 170 MHz
....
This information is called EDID information. Creating a `ModeLine` from this is just a matter of putting the numbers in the correct order:
[.programlisting]
....
ModeLine <name> <clock> <4 horiz. timings> <4 vert. timings>
....
So that the `ModeLine` in `Section "Monitor"` for this example would look like this:
[.programlisting]
....
Section "Monitor"
Identifier "Monitor1"
VendorName "Bigname"
ModelName "BestModel"
ModeLine "1680x1050" 146.2 1680 1784 1960 2240 1050 1053 1059 1089
Option "DPMS"
EndSection
....
After completing these simple editing steps, X should start on the new widescreen monitor.
[[compiz-troubleshooting]]
=== Troubleshooting Compiz Fusion
==== I have installed Compiz Fusion, and after running the commands you mention, my windows are left without title bars and buttons. What is wrong?
You are probably missing a setting in [.filename]#/etc/X11/xorg.conf#. Review this file carefully and check especially the `DefaultDepth` and `AddARGBGLXVisuals` directives.
==== When I run the command to start Compiz Fusion, the X server crashes and I am back at the console. What is wrong?
If you check [.filename]#/var/log/Xorg.0.log#, you will probably find error messages during the X startup. The most common would be:
[source,shell]
....
(EE) NVIDIA(0): Failed to initialize the GLX module; please check in your X
(EE) NVIDIA(0): log file that the GLX module has been loaded in your X
(EE) NVIDIA(0): server, and that the module is the NVIDIA GLX module. If
(EE) NVIDIA(0): you continue to encounter problems, Please try
(EE) NVIDIA(0): reinstalling the NVIDIA driver.
....
This is usually the case when you upgrade Xorg. You will need to reinstall the package:x11/nvidia-driver[] package so glx is built again.
diff --git a/documentation/content/en/books/handbook/zfs/_index.adoc b/documentation/content/en/books/handbook/zfs/_index.adoc
index 66aaafc27f..f1e890bcdf 100644
--- a/documentation/content/en/books/handbook/zfs/_index.adoc
+++ b/documentation/content/en/books/handbook/zfs/_index.adoc
@@ -1,2426 +1,2427 @@
---
title: Chapter 20. The Z File System (ZFS)
part: Part III. System Administration
prev: books/handbook/geom
next: books/handbook/filesystems
+description: The Z File System, or ZFS, is an advanced file system designed to overcome many of the major problems found in previous designs
---
[[zfs]]
= The Z File System (ZFS)
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 20
ifeval::["{backend}" == "html5"]
:imagesdir: ../../../../images/books/handbook/zfs/
endif::[]
ifeval::["{backend}" == "pdf"]
:imagesdir: ../../../../static/images/books/handbook/zfs/
endif::[]
ifeval::["{backend}" == "epub3"]
:imagesdir: ../../../../static/images/books/handbook/zfs/
endif::[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
The _Z File System_, or ZFS, is an advanced file system designed to overcome many of the major problems found in previous designs.
Originally developed at Sun(TM), ongoing open source ZFS development has moved to the http://open-zfs.org[OpenZFS Project].
ZFS has three major design goals:
* Data integrity: All data includes a <<zfs-term-checksum,checksum>> of the data. When data is written, the checksum is calculated and written along with it. When that data is later read back, the checksum is calculated again. If the checksums do not match, a data error has been detected. ZFS will attempt to automatically correct errors when data redundancy is available.
* Pooled storage: physical storage devices are added to a pool, and storage space is allocated from that shared pool. Space is available to all file systems, and can be increased by adding new storage devices to the pool.
* Performance: multiple caching mechanisms provide increased performance. <<zfs-term-arc,ARC>> is an advanced memory-based read cache. A second level of disk-based read cache can be added with <<zfs-term-l2arc,L2ARC>>, and disk-based synchronous write cache is available with <<zfs-term-zil,ZIL>>.
A complete list of features and terminology is shown in <<zfs-term>>.
[[zfs-differences]]
== What Makes ZFS Different
ZFS is significantly different from any previous file system because it is more than just a file system. Combining the traditionally separate roles of volume manager and file system provides ZFS with unique advantages. The file system is now aware of the underlying structure of the disks. Traditional file systems could only be created on a single disk at a time. If there were two disks then two separate file systems would have to be created. In a traditional hardware RAID configuration, this problem was avoided by presenting the operating system with a single logical disk made up of the space provided by a number of physical disks, on top of which the operating system placed a file system. Even in the case of software RAID solutions like those provided by GEOM, the UFS file system living on top of the RAID transform believed that it was dealing with a single device. ZFS's combination of the volume manager and the file system solves this and allows the creation of many file systems all sharing a pool of available storage. One of the biggest advantages to ZFS's awareness of the physical layout of the disks is that existing file systems can be grown automatically when additional disks are added to the pool. This new space is then made available to all of the file systems. ZFS also has a number of different properties that can be applied to each file system, giving many advantages to creating a number of different file systems and datasets rather than a single monolithic file system.
[[zfs-quickstart]]
== Quick Start Guide
There is a startup mechanism that allows FreeBSD to mount ZFS pools during system initialization. To enable it, add this line to [.filename]#/etc/rc.conf#:
[.programlisting]
....
zfs_enable="YES"
....
Then start the service:
[source,shell]
....
# service zfs start
....
The examples in this section assume three SCSI disks with the device names [.filename]#da0#, [.filename]#da1#, and [.filename]#da2#. Users of SATA hardware should instead use [.filename]#ada# device names.
[[zfs-quickstart-single-disk-pool]]
=== Single Disk Pool
To create a simple, non-redundant pool using a single disk device:
[source,shell]
....
# zpool create example /dev/da0
....
To view the new pool, review the output of `df`:
[source,shell]
....
# df
Filesystem 1K-blocks Used Avail Capacity Mounted on
/dev/ad0s1a 2026030 235230 1628718 13% /
devfs 1 1 0 100% /dev
/dev/ad0s1d 54098308 1032846 48737598 2% /usr
example 17547136 0 17547136 0% /example
....
This output shows that the `example` pool has been created and mounted. It is now accessible as a file system. Files can be created on it and users can browse it:
[source,shell]
....
# cd /example
# ls
# touch testfile
# ls -al
total 4
drwxr-xr-x 2 root wheel 3 Aug 29 23:15 .
drwxr-xr-x 21 root wheel 512 Aug 29 23:12 ..
-rw-r--r-- 1 root wheel 0 Aug 29 23:15 testfile
....
However, this pool is not taking advantage of any ZFS features. To create a dataset on this pool with compression enabled:
[source,shell]
....
# zfs create example/compressed
# zfs set compression=gzip example/compressed
....
The `example/compressed` dataset is now a ZFS compressed file system. Try copying some large files to [.filename]#/example/compressed#.
Compression can be disabled with:
[source,shell]
....
# zfs set compression=off example/compressed
....
To unmount a file system, use `zfs umount` and then verify with `df`:
[source,shell]
....
# zfs umount example/compressed
# df
Filesystem 1K-blocks Used Avail Capacity Mounted on
/dev/ad0s1a 2026030 235232 1628716 13% /
devfs 1 1 0 100% /dev
/dev/ad0s1d 54098308 1032864 48737580 2% /usr
example 17547008 0 17547008 0% /example
....
To re-mount the file system to make it accessible again, use `zfs mount` and verify with `df`:
[source,shell]
....
# zfs mount example/compressed
# df
Filesystem 1K-blocks Used Avail Capacity Mounted on
/dev/ad0s1a 2026030 235234 1628714 13% /
devfs 1 1 0 100% /dev
/dev/ad0s1d 54098308 1032864 48737580 2% /usr
example 17547008 0 17547008 0% /example
example/compressed 17547008 0 17547008 0% /example/compressed
....
The pool and file system may also be observed by viewing the output from `mount`:
[source,shell]
....
# mount
/dev/ad0s1a on / (ufs, local)
devfs on /dev (devfs, local)
/dev/ad0s1d on /usr (ufs, local, soft-updates)
example on /example (zfs, local)
example/compressed on /example/compressed (zfs, local)
....
After creation, ZFS datasets can be used like any file systems. However, many other features are available which can be set on a per-dataset basis. In the example below, a new file system called `data` is created. Important files will be stored here, so it is configured to keep two copies of each data block:
[source,shell]
....
# zfs create example/data
# zfs set copies=2 example/data
....
It is now possible to see the data and space utilization by issuing `df`:
[source,shell]
....
# df
Filesystem 1K-blocks Used Avail Capacity Mounted on
/dev/ad0s1a 2026030 235234 1628714 13% /
devfs 1 1 0 100% /dev
/dev/ad0s1d 54098308 1032864 48737580 2% /usr
example 17547008 0 17547008 0% /example
example/compressed 17547008 0 17547008 0% /example/compressed
example/data 17547008 0 17547008 0% /example/data
....
Notice that each file system on the pool has the same amount of available space. This is the reason for using `df` in these examples, to show that the file systems use only the amount of space they need and all draw from the same pool. ZFS eliminates concepts such as volumes and partitions, and allows multiple file systems to occupy the same pool.
To destroy the file systems and then destroy the pool as it is no longer needed:
[source,shell]
....
# zfs destroy example/compressed
# zfs destroy example/data
# zpool destroy example
....
[[zfs-quickstart-raid-z]]
=== RAID-Z
Disks fail. One method of avoiding data loss from disk failure is to implement RAID. ZFS supports this feature in its pool design. RAID-Z pools require three or more disks but provide more usable space than mirrored pools.
This example creates a RAID-Z pool, specifying the disks to add to the pool:
[source,shell]
....
# zpool create storage raidz da0 da1 da2
....
[NOTE]
====
Sun(TM) recommends that the number of devices used in a RAID-Z configuration be between three and nine. For environments requiring a single pool consisting of 10 disks or more, consider breaking it up into smaller RAID-Z groups. If only two disks are available and redundancy is a requirement, consider using a ZFS mirror. Refer to man:zpool[8] for more details.
====
The previous example created the `storage` zpool. This example makes a new file system called `home` in that pool:
[source,shell]
....
# zfs create storage/home
....
Compression and keeping extra copies of directories and files can be enabled:
[source,shell]
....
# zfs set copies=2 storage/home
# zfs set compression=gzip storage/home
....
To make this the new home directory for users, copy the user data to this directory and create the appropriate symbolic links:
[source,shell]
....
# cp -rp /home/* /storage/home
# rm -rf /home /usr/home
# ln -s /storage/home /home
# ln -s /storage/home /usr/home
....
User data is now stored on the freshly-created [.filename]#/storage/home#. Test by adding a new user and logging in as that user.
Try creating a file system snapshot which can be rolled back later:
[source,shell]
....
# zfs snapshot storage/home@08-30-08
....
Snapshots can only be made of a full file system, not a single directory or file.
The `@` character is a delimiter between the file system or volume name and the snapshot name. If an important directory has been accidentally deleted, the file system can be backed up, then rolled back to an earlier snapshot when the directory still existed:
[source,shell]
....
# zfs rollback storage/home@08-30-08
....
To list all available snapshots, run `ls` in the file system's [.filename]#.zfs/snapshot# directory. For example, to see the previously taken snapshot:
[source,shell]
....
# ls /storage/home/.zfs/snapshot
....
It is possible to write a script to perform regular snapshots on user data. However, over time, snapshots can consume a great deal of disk space. The previous snapshot can be removed using the command:
[source,shell]
....
# zfs destroy storage/home@08-30-08
....
After testing, [.filename]#/storage/home# can be made the real [.filename]#/home# using this command:
[source,shell]
....
# zfs set mountpoint=/home storage/home
....
Run `df` and `mount` to confirm that the system now treats the file system as the real [.filename]#/home#:
[source,shell]
....
# mount
/dev/ad0s1a on / (ufs, local)
devfs on /dev (devfs, local)
/dev/ad0s1d on /usr (ufs, local, soft-updates)
storage on /storage (zfs, local)
storage/home on /home (zfs, local)
# df
Filesystem 1K-blocks Used Avail Capacity Mounted on
/dev/ad0s1a 2026030 235240 1628708 13% /
devfs 1 1 0 100% /dev
/dev/ad0s1d 54098308 1032826 48737618 2% /usr
storage 26320512 0 26320512 0% /storage
storage/home 26320512 0 26320512 0% /home
....
This completes the RAID-Z configuration. Daily status updates about the file systems created can be generated as part of the nightly man:periodic[8] runs. Add this line to [.filename]#/etc/periodic.conf#:
[.programlisting]
....
daily_status_zfs_enable="YES"
....
[[zfs-quickstart-recovering-raid-z]]
=== Recovering RAID-Z
Every software RAID has a method of monitoring its `state`. The status of RAID-Z devices may be viewed with this command:
[source,shell]
....
# zpool status -x
....
If all pools are <<zfs-term-online,Online>> and everything is normal, the message shows:
[source,shell]
....
all pools are healthy
....
If there is an issue, perhaps because a disk is in the <<zfs-term-offline,Offline>> state, the pool status will look similar to:
[source,shell]
....
pool: storage
state: DEGRADED
status: One or more devices has been taken offline by the administrator.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Online the device using 'zpool online' or replace the device with
'zpool replace'.
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
storage DEGRADED 0 0 0
raidz1 DEGRADED 0 0 0
da0 ONLINE 0 0 0
da1 OFFLINE 0 0 0
da2 ONLINE 0 0 0
errors: No known data errors
....
This indicates that the device was previously taken offline by the administrator with this command:
[source,shell]
....
# zpool offline storage da1
....
Now the system can be powered down to replace [.filename]#da1#. When the system is back online, the failed disk can be replaced in the pool:
[source,shell]
....
# zpool replace storage da1
....
From here, the status may be checked again, this time without `-x` so that all pools are shown:
[source,shell]
....
# zpool status storage
pool: storage
state: ONLINE
scrub: resilver completed with 0 errors on Sat Aug 30 19:44:11 2008
config:
NAME STATE READ WRITE CKSUM
storage ONLINE 0 0 0
raidz1 ONLINE 0 0 0
da0 ONLINE 0 0 0
da1 ONLINE 0 0 0
da2 ONLINE 0 0 0
errors: No known data errors
....
In this example, everything is normal.
[[zfs-quickstart-data-verification]]
=== Data Verification
ZFS uses checksums to verify the integrity of stored data. These are enabled automatically upon creation of file systems.
[WARNING]
====
Checksums can be disabled, but it is _not_ recommended! Checksums take very little storage space and provide data integrity. Many ZFS features will not work properly with checksums disabled. There is no noticeable performance gain from disabling these checksums.
====
Checksum verification is known as _scrubbing_. Verify the data integrity of the `storage` pool with this command:
[source,shell]
....
# zpool scrub storage
....
The duration of a scrub depends on the amount of data stored. Larger amounts of data will take proportionally longer to verify. Scrubs are very I/O intensive, and only one scrub is allowed to run at a time. After the scrub completes, the status can be viewed with `status`:
[source,shell]
....
# zpool status storage
pool: storage
state: ONLINE
scrub: scrub completed with 0 errors on Sat Jan 26 19:57:37 2013
config:
NAME STATE READ WRITE CKSUM
storage ONLINE 0 0 0
raidz1 ONLINE 0 0 0
da0 ONLINE 0 0 0
da1 ONLINE 0 0 0
da2 ONLINE 0 0 0
errors: No known data errors
....
The completion date of the last scrub operation is displayed to help track when another scrub is required. Routine scrubs help protect data from silent corruption and ensure the integrity of the pool.
Refer to man:zfs[8] and man:zpool[8] for other ZFS options.
[[zfs-zpool]]
== `zpool` Administration
ZFS administration is divided between two main utilities. The `zpool` utility controls the operation of the pool and deals with adding, removing, replacing, and managing disks. The <<zfs-zfs,`zfs`>> utility deals with creating, destroying, and managing datasets, both <<zfs-term-filesystem,file systems>> and <<zfs-term-volume,volumes>>.
[[zfs-zpool-create]]
=== Creating and Destroying Storage Pools
Creating a ZFS storage pool (_zpool_) involves making a number of decisions that are relatively permanent because the structure of the pool cannot be changed after the pool has been created. The most important decision is which types of vdevs to group the physical disks into. See the list of <<zfs-term-vdev,vdev types>> for details about the possible options. After the pool has been created, most vdev types do not allow additional disks to be added to the vdev. The exceptions are mirrors, which allow additional disks to be added to the vdev, and stripes, which can be upgraded to mirrors by attaching an additional disk to the vdev. Although additional vdevs can be added to expand a pool, the layout of the pool cannot be changed after pool creation. Instead, the data must be backed up and the pool destroyed and recreated.
Create a simple mirror pool:
[source,shell]
....
# zpool create mypool mirror /dev/ada1 /dev/ada2
# zpool status
pool: mypool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada1 ONLINE 0 0 0
ada2 ONLINE 0 0 0
errors: No known data errors
....
Multiple vdevs can be created at once. Specify multiple groups of disks separated by the vdev type keyword, `mirror` in this example:
[source,shell]
....
# zpool create mypool mirror /dev/ada1 /dev/ada2 mirror /dev/ada3 /dev/ada4
# zpool status
pool: mypool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada1 ONLINE 0 0 0
ada2 ONLINE 0 0 0
mirror-1 ONLINE 0 0 0
ada3 ONLINE 0 0 0
ada4 ONLINE 0 0 0
errors: No known data errors
....
Pools can also be constructed using partitions rather than whole disks. Putting ZFS in a separate partition allows the same disk to have other partitions for other purposes. In particular, partitions with bootcode and file systems needed for booting can be added. This allows booting from disks that are also members of a pool. There is no performance penalty on FreeBSD when using a partition rather than a whole disk. Using partitions also allows the administrator to _under-provision_ the disks, using less than the full capacity. If a future replacement disk of the same nominal size as the original actually has a slightly smaller capacity, the smaller partition will still fit, and the replacement disk can still be used.
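As a sketch, a ZFS partition that leaves a little space unused could be created with man:gpart[8]; the device name and size below are placeholders, and a GPT partition scheme is assumed to already exist on the disk:
[source,shell]
....
# gpart add -t freebsd-zfs -a 1m -s 930g ada0
....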
Create a <<zfs-term-vdev-raidz,RAID-Z2>> pool using partitions:
[source,shell]
....
# zpool create mypool raidz2 /dev/ada0p3 /dev/ada1p3 /dev/ada2p3 /dev/ada3p3 /dev/ada4p3 /dev/ada5p3
# zpool status
pool: mypool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
ada0p3 ONLINE 0 0 0
ada1p3 ONLINE 0 0 0
ada2p3 ONLINE 0 0 0
ada3p3 ONLINE 0 0 0
ada4p3 ONLINE 0 0 0
ada5p3 ONLINE 0 0 0
errors: No known data errors
....
A pool that is no longer needed can be destroyed so that the disks can be reused. Destroying a pool involves first unmounting all of the datasets in that pool. If the datasets are in use, the unmount operation will fail and the pool will not be destroyed. The destruction of the pool can be forced with `-f`, but this can cause undefined behavior in applications which had open files on those datasets.
[[zfs-zpool-attach]]
=== Adding and Removing Devices
There are two cases for adding disks to a zpool: attaching a disk to an existing vdev with `zpool attach`, or adding vdevs to the pool with `zpool add`. Only some <<zfs-term-vdev,vdev types>> allow disks to be added to the vdev after creation.
A pool created with a single disk lacks redundancy. Corruption can be detected but not repaired, because there is no other copy of the data. The <<zfs-term-copies,copies>> property may be able to recover from a small failure such as a bad sector, but does not provide the same level of protection as mirroring or RAID-Z. Starting with a pool consisting of a single disk vdev, `zpool attach` can be used to add an additional disk to the vdev, creating a mirror. `zpool attach` can also be used to add additional disks to a mirror group, increasing redundancy and read performance. If the disks being used for the pool are partitioned, replicate the layout of the first disk on to the second. `gpart backup` and `gpart restore` can be used to make this process easier.
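For example, assuming [.filename]#ada0# is the existing disk and [.filename]#ada1# is the new one, the partition table can be copied with:
[source,shell]
....
# gpart backup ada0 | gpart restore -F ada1
....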
Upgrade the single disk (stripe) vdev _ada0p3_ to a mirror by attaching _ada1p3_:
[source,shell]
....
# zpool status
pool: mypool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
ada0p3 ONLINE 0 0 0
errors: No known data errors
# zpool attach mypool ada0p3 ada1p3
Make sure to wait until resilver is done before rebooting.
If you boot from pool 'mypool', you may need to update
boot code on newly attached disk 'ada1p3'.
Assuming you use GPT partitioning and 'da0' is your new boot disk
you may use the following command:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
bootcode written to ada1
# zpool status
pool: mypool
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Fri May 30 08:19:19 2014
527M scanned out of 781M at 47.9M/s, 0h0m to go
527M resilvered, 67.53% done
config:
NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0p3 ONLINE 0 0 0
ada1p3 ONLINE 0 0 0 (resilvering)
errors: No known data errors
# zpool status
pool: mypool
state: ONLINE
scan: resilvered 781M in 0h0m with 0 errors on Fri May 30 08:15:58 2014
config:
NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0p3 ONLINE 0 0 0
ada1p3 ONLINE 0 0 0
errors: No known data errors
....
When adding disks to the existing vdev is not an option, as for RAID-Z, an alternative method is to add another vdev to the pool. Additional vdevs provide higher performance, distributing writes across the vdevs. Each vdev is responsible for providing its own redundancy. It is possible, but discouraged, to mix vdev types, like `mirror` and `RAID-Z`. Adding a non-redundant vdev to a pool containing mirror or RAID-Z vdevs risks the data on the entire pool. Writes are distributed, so the failure of the non-redundant disk will result in the loss of a fraction of every block that has been written to the pool.
Data is striped across each of the vdevs. For example, with two mirror vdevs, this is effectively a RAID 10 that stripes writes across two sets of mirrors. Space is allocated so that each vdev reaches 100% full at the same time. There is a performance penalty if the vdevs have different amounts of free space, as a disproportionate amount of the data is written to the less full vdev.
When attaching additional devices to a boot pool, remember to update the bootcode.
Attach a second mirror group ([.filename]#ada2p3# and [.filename]#ada3p3#) to the existing mirror:
[source,shell]
....
# zpool status
pool: mypool
state: ONLINE
scan: resilvered 781M in 0h0m with 0 errors on Fri May 30 08:19:35 2014
config:
NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0p3 ONLINE 0 0 0
ada1p3 ONLINE 0 0 0
errors: No known data errors
# zpool add mypool mirror ada2p3 ada3p3
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2
bootcode written to ada2
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada3
bootcode written to ada3
# zpool status
pool: mypool
state: ONLINE
scan: scrub repaired 0 in 0h0m with 0 errors on Fri May 30 08:29:51 2014
config:
NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0p3 ONLINE 0 0 0
ada1p3 ONLINE 0 0 0
mirror-1 ONLINE 0 0 0
ada2p3 ONLINE 0 0 0
ada3p3 ONLINE 0 0 0
errors: No known data errors
....
Currently, vdevs cannot be removed from a pool, and disks can only be removed from a mirror if there is enough remaining redundancy. If only one disk in a mirror group remains, it ceases to be a mirror and reverts to being a stripe, risking the entire pool if that remaining disk fails.
Remove a disk from a three-way mirror group:
[source,shell]
....
# zpool status
pool: mypool
state: ONLINE
scan: scrub repaired 0 in 0h0m with 0 errors on Fri May 30 08:29:51 2014
config:
NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0p3 ONLINE 0 0 0
ada1p3 ONLINE 0 0 0
ada2p3 ONLINE 0 0 0
errors: No known data errors
# zpool detach mypool ada2p3
# zpool status
pool: mypool
state: ONLINE
scan: scrub repaired 0 in 0h0m with 0 errors on Fri May 30 08:29:51 2014
config:
NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0p3 ONLINE 0 0 0
ada1p3 ONLINE 0 0 0
errors: No known data errors
....
[[zfs-zpool-status]]
=== Checking the Status of a Pool
Pool status is important. If a drive goes offline or a read, write, or checksum error is detected, the corresponding error count increases. The `status` output shows the configuration and status of each device in the pool and the status of the entire pool. Actions that need to be taken and details about the last <<zfs-zpool-scrub,`scrub`>> are also shown.
[source,shell]
....
# zpool status
pool: mypool
state: ONLINE
scan: scrub repaired 0 in 2h25m with 0 errors on Sat Sep 14 04:25:50 2013
config:
NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
ada0p3 ONLINE 0 0 0
ada1p3 ONLINE 0 0 0
ada2p3 ONLINE 0 0 0
ada3p3 ONLINE 0 0 0
ada4p3 ONLINE 0 0 0
ada5p3 ONLINE 0 0 0
errors: No known data errors
....
[[zfs-zpool-clear]]
=== Clearing Errors
When an error is detected, the read, write, or checksum counts are incremented. The error message can be cleared and the counts reset with `zpool clear _mypool_`. Clearing the error state can be important for automated scripts that alert the administrator when the pool encounters an error. Further errors may not be reported if the old errors are not cleared.
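For example, to reset the error counters of the pool used in the previous examples:
[source,shell]
....
# zpool clear mypool
....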
[[zfs-zpool-replace]]
=== Replacing a Functioning Device
There are a number of situations where it may be desirable to replace one disk with a different disk. When replacing a working disk, the process keeps the old disk online during the replacement. The pool never enters a <<zfs-term-degraded,degraded>> state, reducing the risk of data loss. `zpool replace` copies all of the data from the old disk to the new one. After the operation completes, the old disk is disconnected from the vdev. If the new disk is larger than the old disk, it may be possible to grow the zpool, using the new space. See <<zfs-zpool-online,Growing a Pool>>.
Replace a functioning device in the pool:
[source,shell]
....
# zpool status
pool: mypool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0p3 ONLINE 0 0 0
ada1p3 ONLINE 0 0 0
errors: No known data errors
# zpool replace mypool ada1p3 ada2p3
Make sure to wait until resilver is done before rebooting.
If you boot from pool 'zroot', you may need to update
boot code on newly attached disk 'ada2p3'.
Assuming you use GPT partitioning and 'da0' is your new boot disk
you may use the following command:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2
# zpool status
pool: mypool
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Mon Jun 2 14:21:35 2014
604M scanned out of 781M at 46.5M/s, 0h0m to go
604M resilvered, 77.39% done
config:
NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0p3 ONLINE 0 0 0
replacing-1 ONLINE 0 0 0
ada1p3 ONLINE 0 0 0
ada2p3 ONLINE 0 0 0 (resilvering)
errors: No known data errors
# zpool status
pool: mypool
state: ONLINE
scan: resilvered 781M in 0h0m with 0 errors on Mon Jun 2 14:21:52 2014
config:
NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0p3 ONLINE 0 0 0
ada2p3 ONLINE 0 0 0
errors: No known data errors
....
[[zfs-zpool-resilver]]
=== Dealing with Failed Devices
When a disk in a pool fails, the vdev to which the disk belongs enters the <<zfs-term-degraded,degraded>> state. All of the data is still available, but performance may be reduced because missing data must be calculated from the available redundancy. To restore the vdev to a fully functional state, the failed physical device must be replaced. ZFS is then instructed to begin the <<zfs-term-resilver,resilver>> operation. Data that was on the failed device is recalculated from available redundancy and written to the replacement device. After completion, the vdev returns to <<zfs-term-online,online>> status.
If the vdev does not have any redundancy, or if multiple devices have failed and there is not enough redundancy to compensate, the pool enters the <<zfs-term-faulted,faulted>> state. If a sufficient number of devices cannot be reconnected to the pool, the pool becomes inoperative and data must be restored from backups.
When replacing a failed disk, the name of the failed disk is replaced with the GUID of the device. A new device name parameter for `zpool replace` is not required if the replacement device has the same device name.
Replace a failed disk using `zpool replace`:
[source,shell]
....
# zpool status
pool: mypool
state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
see: http://illumos.org/msg/ZFS-8000-2Q
scan: none requested
config:
NAME STATE READ WRITE CKSUM
mypool DEGRADED 0 0 0
mirror-0 DEGRADED 0 0 0
ada0p3 ONLINE 0 0 0
316502962686821739 UNAVAIL 0 0 0 was /dev/ada1p3
errors: No known data errors
# zpool replace mypool 316502962686821739 ada2p3
# zpool status
pool: mypool
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Mon Jun 2 14:52:21 2014
641M scanned out of 781M at 49.3M/s, 0h0m to go
640M resilvered, 82.04% done
config:
NAME STATE READ WRITE CKSUM
mypool DEGRADED 0 0 0
mirror-0 DEGRADED 0 0 0
ada0p3 ONLINE 0 0 0
replacing-1 UNAVAIL 0 0 0
15732067398082357289 UNAVAIL 0 0 0 was /dev/ada1p3/old
ada2p3 ONLINE 0 0 0 (resilvering)
errors: No known data errors
# zpool status
pool: mypool
state: ONLINE
scan: resilvered 781M in 0h0m with 0 errors on Mon Jun 2 14:52:38 2014
config:
NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0p3 ONLINE 0 0 0
ada2p3 ONLINE 0 0 0
errors: No known data errors
....
[[zfs-zpool-scrub]]
=== Scrubbing a Pool
It is recommended that pools be <<zfs-term-scrub,scrubbed>> regularly, ideally at least once every month. The `scrub` operation is very disk-intensive and will reduce performance while running. Avoid high-demand periods when scheduling `scrub` or use <<zfs-advanced-tuning-scrub_delay,`vfs.zfs.scrub_delay`>> to adjust the relative priority of the `scrub` to prevent it interfering with other workloads.
[source,shell]
....
# zpool scrub mypool
# zpool status
pool: mypool
state: ONLINE
scan: scrub in progress since Wed Feb 19 20:52:54 2014
116G scanned out of 8.60T at 649M/s, 3h48m to go
0 repaired, 1.32% done
config:
NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
ada0p3 ONLINE 0 0 0
ada1p3 ONLINE 0 0 0
ada2p3 ONLINE 0 0 0
ada3p3 ONLINE 0 0 0
ada4p3 ONLINE 0 0 0
ada5p3 ONLINE 0 0 0
errors: No known data errors
....
In the event that a scrub operation needs to be cancelled, issue `zpool scrub -s _mypool_`.
[[zfs-zpool-selfheal]]
=== Self-Healing
The checksums stored with data blocks enable the file system to _self-heal_. This feature will automatically repair data whose checksum does not match the one recorded on another device that is part of the storage pool. Consider, for example, a mirror with two disks where one drive is starting to malfunction and cannot properly store the data any more. This is even worse when the data has not been accessed for a long time, as with long term archive storage. Traditional file systems need to run algorithms that check and repair the data like man:fsck[8]. These commands take time, and in severe cases, an administrator has to manually decide which repair operation must be performed. When ZFS detects a data block with a checksum that does not match, it tries to read the data from the mirror disk. If that disk can provide the correct data, it will not only give that data to the application requesting it, but also correct the wrong data on the disk that had the bad checksum. This happens without any interaction from a system administrator during normal pool operation.
The next example demonstrates this self-healing behavior. A mirrored pool of disks [.filename]#/dev/ada0# and [.filename]#/dev/ada1# is created.
[source,shell]
....
# zpool create healer mirror /dev/ada0 /dev/ada1
# zpool status healer
pool: healer
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
healer ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0 ONLINE 0 0 0
ada1 ONLINE 0 0 0
errors: No known data errors
# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
healer 960M 92.5K 960M - - 0% 0% 1.00x ONLINE -
....
Some important data that have to be protected from data errors using the self-healing feature are copied to the pool. A checksum of the pool is created for later comparison.
[source,shell]
....
# cp /some/important/data /healer
# zfs list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
healer 960M 67.7M 892M 7% 1.00x ONLINE -
# sha1 /healer > checksum.txt
# cat checksum.txt
SHA1 (/healer) = 2753eff56d77d9a536ece6694bf0a82740344d1f
....
Data corruption is simulated by writing random data to the beginning of one of the disks in the mirror. To prevent ZFS from healing the data as soon as it is detected, the pool is exported before the corruption and imported again afterwards.
[WARNING]
====
This is a dangerous operation that can destroy vital data. It is shown here for demonstration purposes only and should not be attempted during normal operation of a storage pool. Nor should this intentional corruption example be run on any disk with a different file system on it. Do not use any disk device names other than the ones that are part of the pool. Make certain that proper backups of the pool are created before running the command!
====
[source,shell]
....
# zpool export healer
# dd if=/dev/random of=/dev/ada1 bs=1m count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 62.992162 secs (3329227 bytes/sec)
# zpool import healer
....
The pool status shows that one device has experienced an error. Note that applications reading data from the pool did not receive any incorrect data. ZFS provided data from the [.filename]#ada0# device with the correct checksums. The device with the wrong checksum can be found easily as the `CKSUM` column contains a nonzero value.
[source,shell]
....
# zpool status healer
pool: healer
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
see: http://illumos.org/msg/ZFS-8000-4J
scan: none requested
config:
NAME STATE READ WRITE CKSUM
healer ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0 ONLINE 0 0 0
ada1 ONLINE 0 0 1
errors: No known data errors
....
The error was detected and handled by using the redundancy present in the unaffected [.filename]#ada0# mirror disk. A checksum comparison with the original one will reveal whether the pool is consistent again.
[source,shell]
....
# sha1 /healer >> checksum.txt
# cat checksum.txt
SHA1 (/healer) = 2753eff56d77d9a536ece6694bf0a82740344d1f
SHA1 (/healer) = 2753eff56d77d9a536ece6694bf0a82740344d1f
....
The two checksums that were generated before and after the intentional tampering with the pool data still match. This shows how ZFS is capable of detecting and correcting any errors automatically when the checksums differ. Note that this is only possible when there is enough redundancy present in the pool. A pool consisting of a single device has no self-healing capabilities. That is also the reason why checksums are so important in ZFS and should not be disabled for any reason. No man:fsck[8] or similar file system consistency check program is required to detect and correct this and the pool was still available during the time there was a problem. A scrub operation is now required to overwrite the corrupted data on [.filename]#ada1#.
[source,shell]
....
# zpool scrub healer
# zpool status healer
pool: healer
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
see: http://illumos.org/msg/ZFS-8000-4J
scan: scrub in progress since Mon Dec 10 12:23:30 2012
10.4M scanned out of 67.0M at 267K/s, 0h3m to go
9.63M repaired, 15.56% done
config:
NAME STATE READ WRITE CKSUM
healer ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0 ONLINE 0 0 0
ada1 ONLINE 0 0 627 (repairing)
errors: No known data errors
....
The scrub operation reads data from [.filename]#ada0# and rewrites any data with an incorrect checksum on [.filename]#ada1#. This is indicated by the `(repairing)` output from `zpool status`. After the operation is complete, the pool status changes to:
[source,shell]
....
# zpool status healer
pool: healer
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
see: http://illumos.org/msg/ZFS-8000-4J
scan: scrub repaired 66.5M in 0h2m with 0 errors on Mon Dec 10 12:26:25 2012
config:
NAME STATE READ WRITE CKSUM
healer ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0 ONLINE 0 0 0
ada1 ONLINE 0 0 2.72K
errors: No known data errors
....
After the scrub operation completes and all the data has been synchronized from [.filename]#ada0# to [.filename]#ada1#, the error messages can be <<zfs-zpool-clear,cleared>> from the pool status by running `zpool clear`.
[source,shell]
....
# zpool clear healer
# zpool status healer
pool: healer
state: ONLINE
scan: scrub repaired 66.5M in 0h2m with 0 errors on Mon Dec 10 12:26:25 2012
config:
NAME STATE READ WRITE CKSUM
healer ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0 ONLINE 0 0 0
ada1 ONLINE 0 0 0
errors: No known data errors
....
The pool is now back to a fully working state and all the errors have been cleared.
[[zfs-zpool-online]]
=== Growing a Pool
The usable size of a redundant pool is limited by the capacity of the smallest device in each vdev. The smallest device can be replaced with a larger device. After completing a <<zfs-zpool-replace,replace>> or <<zfs-term-resilver,resilver>> operation, the pool can grow to use the capacity of the new device. For example, consider a mirror of a 1 TB drive and a 2 TB drive. The usable space is 1 TB. When the 1 TB drive is replaced with another 2 TB drive, the resilvering process copies the existing data onto the new drive. As both of the devices now have 2 TB capacity, the mirror's available space can be grown to 2 TB.
Expansion is triggered by using `zpool online -e` on each device. After expansion of all devices, the additional space becomes available to the pool.
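For example, assuming both members of the mirror from the earlier examples have already been replaced with larger devices, the extra capacity can be claimed with:
[source,shell]
....
# zpool online -e mypool ada0p3
# zpool online -e mypool ada1p3
....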
[[zfs-zpool-import]]
=== Importing and Exporting Pools
Pools are _exported_ before moving them to another system. All datasets are unmounted, and each device is marked as exported but still locked so it cannot be used by other disk subsystems. This allows pools to be _imported_ on other machines, other operating systems that support ZFS, and even different hardware architectures (with some caveats, see man:zpool[8]). When a dataset has open files, `zpool export -f` can be used to force the export of a pool. Use this with caution. The datasets are forcibly unmounted, potentially resulting in unexpected behavior by the applications which had open files on those datasets.
Export a pool that is not in use:
[source,shell]
....
# zpool export mypool
....
Importing a pool automatically mounts the datasets. This may not be the desired behavior, and can be prevented with `zpool import -N`. `zpool import -o` sets temporary properties for this import only. `zpool import -o altroot=` allows importing a pool with a base mount point instead of the root of the file system. If the pool was last used on a different system and was not properly exported, an import might have to be forced with `zpool import -f`. `zpool import -a` imports all pools that do not appear to be in use by another system.
List all available pools for import:
[source,shell]
....
# zpool import
pool: mypool
id: 9930174748043525076
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
mypool ONLINE
ada2p3 ONLINE
....
Import the pool with an alternative root directory:
[source,shell]
....
# zpool import -o altroot=/mnt mypool
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
mypool 110K 47.0G 31K /mnt/mypool
....
[[zfs-zpool-upgrade]]
=== Upgrading a Storage Pool
After upgrading FreeBSD, or if a pool has been imported from a system using an older version of ZFS, the pool can be manually upgraded to the latest version of ZFS to support newer features. Consider whether the pool may ever need to be imported on an older system before upgrading. Upgrading is a one-way process. Older pools can be upgraded, but pools with newer features cannot be downgraded.
Upgrade a v28 pool to support `Feature Flags`:
[source,shell]
....
# zpool status
pool: mypool
state: ONLINE
status: The pool is formatted using a legacy on-disk format. The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
pool will no longer be accessible on software that does not support feature
flags.
scan: none requested
config:
NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0 ONLINE 0 0 0
ada1 ONLINE 0 0 0
errors: No known data errors
# zpool upgrade
This system supports ZFS pool feature flags.
The following pools are formatted with legacy version numbers and can
be upgraded to use feature flags. After being upgraded, these pools
will no longer be accessible by software that does not support feature
flags.
VER POOL
--- ------------
28 mypool
Use 'zpool upgrade -v' for a list of available legacy versions.
Every feature flags pool has all supported features enabled.
# zpool upgrade mypool
This system supports ZFS pool feature flags.
Successfully upgraded 'mypool' from version 28 to feature flags.
Enabled the following features on 'mypool':
async_destroy
empty_bpobj
lz4_compress
multi_vdev_crash_dump
....
The newer features of ZFS will not be available until `zpool upgrade` has completed. `zpool upgrade -v` can be used to see what new features will be provided by upgrading, as well as which features are already supported.
Upgrade a pool to support additional feature flags:
[source,shell]
....
# zpool status
pool: mypool
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(7) for details.
scan: none requested
config:
NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0 ONLINE 0 0 0
ada1 ONLINE 0 0 0
errors: No known data errors
# zpool upgrade
This system supports ZFS pool feature flags.
All pools are formatted using feature flags.
Some supported features are not enabled on the following pools. Once a
feature is enabled the pool may become incompatible with software
that does not support the feature. See zpool-features(7) for details.
POOL FEATURE
---------------
zstore
multi_vdev_crash_dump
spacemap_histogram
enabled_txg
hole_birth
extensible_dataset
bookmarks
filesystem_limits
# zpool upgrade mypool
This system supports ZFS pool feature flags.
Enabled the following features on 'mypool':
spacemap_histogram
enabled_txg
hole_birth
extensible_dataset
bookmarks
filesystem_limits
....
[WARNING]
====
The boot code on systems that boot from a pool must be updated to support the new pool version. Use `gpart bootcode` on the partition that contains the boot code. There are two types of bootcode available, depending on the way the system boots: GPT (the most common option) and EFI (for more modern systems).
For legacy boot using GPT, use the following command:
[source,shell]
....
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
....
For systems using EFI to boot, execute the following command:
[source,shell]
....
# gpart bootcode -p /boot/boot1.efifat -i 1 ada1
....
Apply the bootcode to all bootable disks in the pool. See man:gpart[8] for more information.
====
[[zfs-zpool-history]]
=== Displaying Recorded Pool History
Commands that modify the pool are recorded. Recorded actions include the creation of datasets, changing properties, or replacement of a disk. This history is useful for reviewing how a pool was created and which user performed a specific action and when. History is not kept in a log file, but is part of the pool itself. The command to review this history is aptly named `zpool history`:
[source,shell]
....
# zpool history
History for 'tank':
2013-02-26.23:02:35 zpool create tank mirror /dev/ada0 /dev/ada1
2013-02-27.18:50:58 zfs set atime=off tank
2013-02-27.18:51:09 zfs set checksum=fletcher4 tank
2013-02-27.18:51:18 zfs create tank/backup
....
The output shows `zpool` and `zfs` commands that were executed on the pool along with a timestamp. Only commands that alter the pool in some way are recorded. Commands like `zfs list` are not included. When no pool name is specified, the history of all pools is displayed.
`zpool history` can show even more information when the options `-i` or `-l` are provided. `-i` displays user-initiated events as well as internally logged ZFS events.
[source,shell]
....
# zpool history -i
History for 'tank':
2013-02-26.23:02:35 [internal pool create txg:5] pool spa 28; zfs spa 28; zpl 5;uts 9.1-RELEASE 901000 amd64
2013-02-27.18:50:53 [internal property set txg:50] atime=0 dataset = 21
2013-02-27.18:50:58 zfs set atime=off tank
2013-02-27.18:51:04 [internal property set txg:53] checksum=7 dataset = 21
2013-02-27.18:51:09 zfs set checksum=fletcher4 tank
2013-02-27.18:51:13 [internal create txg:55] dataset = 39
2013-02-27.18:51:18 zfs create tank/backup
....
More details can be shown by adding `-l`. History records are shown in a long format, including information like the name of the user who issued the command and the hostname on which the change was made.
[source,shell]
....
# zpool history -l
History for 'tank':
2013-02-26.23:02:35 zpool create tank mirror /dev/ada0 /dev/ada1 [user 0 (root) on :global]
2013-02-27.18:50:58 zfs set atime=off tank [user 0 (root) on myzfsbox:global]
2013-02-27.18:51:09 zfs set checksum=fletcher4 tank [user 0 (root) on myzfsbox:global]
2013-02-27.18:51:18 zfs create tank/backup [user 0 (root) on myzfsbox:global]
....
The output shows that the `root` user created the mirrored pool with disks [.filename]#/dev/ada0# and [.filename]#/dev/ada1#. The hostname `myzfsbox` is also shown in the commands after the pool's creation. The hostname display becomes important when the pool is exported from one system and imported on another. The commands that are issued on the other system can clearly be distinguished by the hostname that is recorded for each command.
Both options to `zpool history` can be combined to give the most detailed information possible for any given pool. Pool history provides valuable information when tracking down the actions that were performed or when more detailed output is needed for debugging.
[[zfs-zpool-iostat]]
=== Performance Monitoring
A built-in monitoring system can display pool I/O statistics in real time. It shows the amount of free and used space on the pool, how many read and write operations are being performed per second, and how much I/O bandwidth is currently being utilized. By default, all pools in the system are monitored and displayed. A pool name can be provided to limit monitoring to just that pool. A basic example:
[source,shell]
....
# zpool iostat
capacity operations bandwidth
pool alloc free read write read write
---------- ----- ----- ----- ----- ----- -----
data 288G 1.53T 2 11 11.3K 57.1K
....
To continuously monitor I/O activity, a number can be specified as the last parameter, indicating an interval in seconds to wait between updates. The next statistic line is printed after each interval. Press kbd:[Ctrl+C] to stop this continuous monitoring. Alternatively, give a second number on the command line after the interval to specify the total number of statistics to display.
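For example, to display statistics for the `data` pool from the previous output every five seconds, ten times in total:
[source,shell]
....
# zpool iostat data 5 10
....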
Even more detailed I/O statistics can be displayed with `-v`. Each device in the pool is shown with a statistics line. This is useful in seeing how many read and write operations are being performed on each device, and can help determine if any individual device is slowing down the pool. This example shows a mirrored pool with two devices:
[source,shell]
....
# zpool iostat -v
capacity operations bandwidth
pool alloc free read write read write
----------------------- ----- ----- ----- ----- ----- -----
data 288G 1.53T 2 12 9.23K 61.5K
mirror 288G 1.53T 2 12 9.23K 61.5K
ada1 - - 0 4 5.61K 61.7K
ada2 - - 1 4 5.04K 61.7K
----------------------- ----- ----- ----- ----- ----- -----
....
[[zfs-zpool-split]]
=== Splitting a Storage Pool
A pool consisting of one or more mirror vdevs can be split into two pools. Unless otherwise specified, the last member of each mirror is detached and used to create a new pool containing the same data. The operation should first be attempted with `-n`. The details of the proposed operation are displayed without it actually being performed. This helps confirm that the operation will do what the user intends.
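As an illustrative sketch (the pool names _mypool_ and _newpool_ are assumptions), a dry run of a split could look like this:
[source,shell]
....
# zpool split -n mypool newpool
....
When the displayed layout matches the intended result, the same command can be run again without `-n` to perform the actual split.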
[[zfs-zfs]]
== `zfs` Administration
The `zfs` utility is responsible for creating, destroying, and managing all ZFS datasets that exist within a pool. The pool is managed using <<zfs-zpool,`zpool`>>.
[[zfs-zfs-create]]
=== Creating and Destroying Datasets
Unlike traditional disks and volume managers, space in ZFS is _not_ preallocated. With traditional file systems, after all of the space is partitioned and assigned, there is no way to add an additional file system without adding a new disk. With ZFS, new file systems can be created at any time. Each <<zfs-term-dataset,_dataset_>> has properties including features like compression, deduplication, caching, and quotas, as well as other useful properties like readonly, case sensitivity, network file sharing, and a mount point. Datasets can be nested inside each other, and child datasets will inherit properties from their parents. Each dataset can be administered, <<zfs-zfs-allow,delegated>>, <<zfs-zfs-send,replicated>>, <<zfs-zfs-snapshot,snapshotted>>, <<zfs-zfs-jail,jailed>>, and destroyed as a unit. There are many advantages to creating a separate dataset for each different type or set of files. The only drawbacks to having an extremely large number of datasets are that some commands like `zfs list` will be slower, and that mounting hundreds or even thousands of datasets can slow the FreeBSD boot process.
Create a new dataset and enable <<zfs-term-compression-lz4,LZ4 compression>> on it:
[source,shell]
....
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
mypool 781M 93.2G 144K none
mypool/ROOT 777M 93.2G 144K none
mypool/ROOT/default 777M 93.2G 777M /
mypool/tmp 176K 93.2G 176K /tmp
mypool/usr 616K 93.2G 144K /usr
mypool/usr/home 184K 93.2G 184K /usr/home
mypool/usr/ports 144K 93.2G 144K /usr/ports
mypool/usr/src 144K 93.2G 144K /usr/src
mypool/var 1.20M 93.2G 608K /var
mypool/var/crash 148K 93.2G 148K /var/crash
mypool/var/log 178K 93.2G 178K /var/log
mypool/var/mail 144K 93.2G 144K /var/mail
mypool/var/tmp 152K 93.2G 152K /var/tmp
# zfs create -o compress=lz4 mypool/usr/mydataset
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
mypool 781M 93.2G 144K none
mypool/ROOT 777M 93.2G 144K none
mypool/ROOT/default 777M 93.2G 777M /
mypool/tmp 176K 93.2G 176K /tmp
mypool/usr 704K 93.2G 144K /usr
mypool/usr/home 184K 93.2G 184K /usr/home
mypool/usr/mydataset 87.5K 93.2G 87.5K /usr/mydataset
mypool/usr/ports 144K 93.2G 144K /usr/ports
mypool/usr/src 144K 93.2G 144K /usr/src
mypool/var 1.20M 93.2G 610K /var
mypool/var/crash 148K 93.2G 148K /var/crash
mypool/var/log 178K 93.2G 178K /var/log
mypool/var/mail 144K 93.2G 144K /var/mail
mypool/var/tmp 152K 93.2G 152K /var/tmp
....
Destroying a dataset is much quicker than deleting all of the files that reside on the dataset, as it does not involve scanning all of the files and updating all of the corresponding metadata.
Destroy the previously-created dataset:
[source,shell]
....
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
mypool 880M 93.1G 144K none
mypool/ROOT 777M 93.1G 144K none
mypool/ROOT/default 777M 93.1G 777M /
mypool/tmp 176K 93.1G 176K /tmp
mypool/usr 101M 93.1G 144K /usr
mypool/usr/home 184K 93.1G 184K /usr/home
mypool/usr/mydataset 100M 93.1G 100M /usr/mydataset
mypool/usr/ports 144K 93.1G 144K /usr/ports
mypool/usr/src 144K 93.1G 144K /usr/src
mypool/var 1.20M 93.1G 610K /var
mypool/var/crash 148K 93.1G 148K /var/crash
mypool/var/log 178K 93.1G 178K /var/log
mypool/var/mail 144K 93.1G 144K /var/mail
mypool/var/tmp 152K 93.1G 152K /var/tmp
# zfs destroy mypool/usr/mydataset
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
mypool 781M 93.2G 144K none
mypool/ROOT 777M 93.2G 144K none
mypool/ROOT/default 777M 93.2G 777M /
mypool/tmp 176K 93.2G 176K /tmp
mypool/usr 616K 93.2G 144K /usr
mypool/usr/home 184K 93.2G 184K /usr/home
mypool/usr/ports 144K 93.2G 144K /usr/ports
mypool/usr/src 144K 93.2G 144K /usr/src
mypool/var 1.21M 93.2G 612K /var
mypool/var/crash 148K 93.2G 148K /var/crash
mypool/var/log 178K 93.2G 178K /var/log
mypool/var/mail 144K 93.2G 144K /var/mail
mypool/var/tmp 152K 93.2G 152K /var/tmp
....
In modern versions of ZFS, `zfs destroy` is asynchronous, and the free space might take several minutes to appear in the pool. Use `zpool get freeing _poolname_` to see the `freeing` property, which indicates how many datasets are having their blocks freed in the background. If there are child datasets, like <<zfs-term-snapshot,snapshots>> or other datasets, then the parent cannot be destroyed. To destroy a dataset and all of its children, add `-r` to destroy them recursively. Use `-n -v` to list the datasets and snapshots that would be destroyed by this operation without actually destroying anything. Space that would be reclaimed by destruction of snapshots is also shown.
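For example, a dry run of recursively destroying a hypothetical dataset, followed by a check of the `freeing` property on the pool, might look like this:
[source,shell]
....
# zfs destroy -r -n -v mypool/usr/mydataset
# zpool get freeing mypool
....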
[[zfs-zfs-volume]]
=== Creating and Destroying Volumes
A volume is a special type of dataset. Rather than being mounted as a file system, it is exposed as a block device under [.filename]#/dev/zvol/poolname/dataset#. This allows the volume to be used for other file systems, to back the disks of a virtual machine, or to be exported using protocols like iSCSI or HAST.
A volume can be formatted with any file system, or used without a file system to store raw data. To the user, a volume appears to be a regular disk. Putting ordinary file systems on these _zvols_ provides features that ordinary disks or file systems do not normally have. For example, using the compression property on a 250 MB volume allows creation of a compressed FAT file system.
[source,shell]
....
# zfs create -V 250m -o compression=on tank/fat32
# zfs list tank
NAME USED AVAIL REFER MOUNTPOINT
tank 258M 670M 31K /tank
# newfs_msdos -F32 /dev/zvol/tank/fat32
# mount -t msdosfs /dev/zvol/tank/fat32 /mnt
# df -h /mnt | grep fat32
Filesystem Size Used Avail Capacity Mounted on
/dev/zvol/tank/fat32 249M 24k 249M 0% /mnt
# mount | grep fat32
/dev/zvol/tank/fat32 on /mnt (msdosfs, local)
....
Destroying a volume is much the same as destroying a regular file system dataset. The operation is nearly instantaneous, but it may take several minutes for the free space to be reclaimed in the background.
[[zfs-zfs-rename]]
=== Renaming a Dataset
The name of a dataset can be changed with `zfs rename`. The parent of a dataset can also be changed with this command. Renaming a dataset to be under a different parent dataset will change the value of those properties that are inherited from the parent dataset. When a dataset is renamed, it is unmounted and then remounted in the new location (which is inherited from the new parent dataset). This behavior can be prevented with `-u`.
Rename a dataset and move it to be under a different parent dataset:
[source,shell]
....
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
mypool 780M 93.2G 144K none
mypool/ROOT 777M 93.2G 144K none
mypool/ROOT/default 777M 93.2G 777M /
mypool/tmp 176K 93.2G 176K /tmp
mypool/usr 704K 93.2G 144K /usr
mypool/usr/home 184K 93.2G 184K /usr/home
mypool/usr/mydataset 87.5K 93.2G 87.5K /usr/mydataset
mypool/usr/ports 144K 93.2G 144K /usr/ports
mypool/usr/src 144K 93.2G 144K /usr/src
mypool/var 1.21M 93.2G 614K /var
mypool/var/crash 148K 93.2G 148K /var/crash
mypool/var/log 178K 93.2G 178K /var/log
mypool/var/mail 144K 93.2G 144K /var/mail
mypool/var/tmp 152K 93.2G 152K /var/tmp
# zfs rename mypool/usr/mydataset mypool/var/newname
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
mypool 780M 93.2G 144K none
mypool/ROOT 777M 93.2G 144K none
mypool/ROOT/default 777M 93.2G 777M /
mypool/tmp 176K 93.2G 176K /tmp
mypool/usr 616K 93.2G 144K /usr
mypool/usr/home 184K 93.2G 184K /usr/home
mypool/usr/ports 144K 93.2G 144K /usr/ports
mypool/usr/src 144K 93.2G 144K /usr/src
mypool/var 1.29M 93.2G 614K /var
mypool/var/crash 148K 93.2G 148K /var/crash
mypool/var/log 178K 93.2G 178K /var/log
mypool/var/mail 144K 93.2G 144K /var/mail
mypool/var/newname 87.5K 93.2G 87.5K /var/newname
mypool/var/tmp 152K 93.2G 152K /var/tmp
....
Snapshots can also be renamed like this. Due to the nature of snapshots, they cannot be renamed into a different parent dataset. To rename a recursive snapshot, specify `-r`, and all snapshots with the same name in child datasets will also be renamed.
[source,shell]
....
# zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
mypool/var/newname@first_snapshot 0 - 87.5K -
# zfs rename mypool/var/newname@first_snapshot new_snapshot_name
# zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
mypool/var/newname@new_snapshot_name 0 - 87.5K -
....
[[zfs-zfs-set]]
=== Setting Dataset Properties
Each ZFS dataset has a number of properties that control its behavior. Most properties are automatically inherited from the parent dataset, but can be overridden locally. Set a property on a dataset with `zfs set _property=value dataset_`. Most properties have a limited set of valid values; `zfs get` displays each property and its current value. Most properties can be reverted to their inherited values using `zfs inherit`.
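As an illustrative sketch (the dataset name and the output shown are assumptions), a property can be overridden on a dataset, inspected, and then reverted to its inherited value:
[source,shell]
....
# zfs set compression=gzip mypool/usr/mydataset
# zfs get compression mypool/usr/mydataset
NAME                  PROPERTY     VALUE  SOURCE
mypool/usr/mydataset  compression  gzip   local
# zfs inherit compression mypool/usr/mydataset
....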
User-defined properties can also be set. They become part of the dataset configuration and can be used to provide additional information about the dataset or its contents. To distinguish these custom properties from the ones supplied as part of ZFS, a colon (`:`) is used to create a custom namespace for the property.
[source,shell]
....
# zfs set custom:costcenter=1234 tank
# zfs get custom:costcenter tank
NAME PROPERTY VALUE SOURCE
tank custom:costcenter 1234 local
....
To remove a custom property, use `zfs inherit` with `-r`. If the custom property is not defined in any of the parent datasets, it will be removed completely (although the changes are still recorded in the pool's history).
[source,shell]
....
# zfs inherit -r custom:costcenter tank
# zfs get custom:costcenter tank
NAME PROPERTY VALUE SOURCE
tank custom:costcenter - -
# zfs get all tank | grep custom:costcenter
#
....
[[zfs-zfs-set-share]]
==== Getting and Setting Share Properties
Two commonly used and useful dataset properties are the NFS and SMB share options. Setting these defines whether and how ZFS datasets may be shared on the network. At present, only setting sharing via NFS is supported on FreeBSD. To get the current status of a share, enter:
[source,shell]
....
# zfs get sharenfs mypool/usr/home
NAME PROPERTY VALUE SOURCE
mypool/usr/home sharenfs on local
# zfs get sharesmb mypool/usr/home
NAME PROPERTY VALUE SOURCE
mypool/usr/home sharesmb off local
....
To enable sharing of a dataset, enter:
[source,shell]
....
# zfs set sharenfs=on mypool/usr/home
....
It is also possible to set additional options for sharing datasets through NFS, such as `-alldirs`, `-maproot` and `-network`. To set additional options to a dataset shared through NFS, enter:
[source,shell]
....
# zfs set sharenfs="-alldirs,-maproot=root,-network=192.168.1.0/24" mypool/usr/home
....
[[zfs-zfs-snapshot]]
=== Managing Snapshots
<<zfs-term-snapshot,Snapshots>> are one of the most powerful features of ZFS. A snapshot provides a read-only, point-in-time copy of the dataset. With Copy-On-Write (COW), snapshots can be created quickly by preserving the older version of the data on disk. If no snapshots exist, space is reclaimed for future use when data is rewritten or deleted. Snapshots preserve disk space by recording only the differences between the current dataset and a previous version. Snapshots are allowed only on whole datasets, not on individual files or directories. When a snapshot is created from a dataset, everything contained in it is duplicated. This includes the file system properties, files, directories, permissions, and so on. Snapshots use no additional space when they are first created, only consuming space as the blocks they reference are changed. Recursive snapshots taken with `-r` create a snapshot with the same name on the dataset and all of its children, providing a consistent moment-in-time snapshot of all of the file systems. This can be important when an application has files on multiple datasets that are related or dependent upon each other. Without snapshots, a backup would have copies of the files from different points in time.
Snapshots in ZFS provide a variety of features that even other file systems with snapshot functionality lack. A typical example of snapshot use is to have a quick way of backing up the current state of the file system when a risky action like a software installation or a system upgrade is performed. If the action fails, the snapshot can be rolled back and the system has the same state as when the snapshot was created. If the upgrade was successful, the snapshot can be deleted to free up space. Without snapshots, a failed upgrade often requires a restore from backup, which is tedious, time consuming, and may require downtime during which the system cannot be used. Snapshots can be rolled back quickly, even while the system is running in normal operation, with little or no downtime. On multi-terabyte storage systems, the time saved compared to copying the data back from backup is enormous. Snapshots are not a replacement for a complete backup of a pool, but can be used as a quick and easy way to store a copy of the dataset at a specific point in time.
[[zfs-zfs-snapshot-creation]]
==== Creating Snapshots
Snapshots are created with `zfs snapshot _dataset_@_snapshotname_`. Adding `-r` creates a snapshot recursively, with the same name on all child datasets.
Create a recursive snapshot of the entire pool:
[source,shell]
....
# zfs list -t all
NAME USED AVAIL REFER MOUNTPOINT
mypool 780M 93.2G 144K none
mypool/ROOT 777M 93.2G 144K none
mypool/ROOT/default 777M 93.2G 777M /
mypool/tmp 176K 93.2G 176K /tmp
mypool/usr 616K 93.2G 144K /usr
mypool/usr/home 184K 93.2G 184K /usr/home
mypool/usr/ports 144K 93.2G 144K /usr/ports
mypool/usr/src 144K 93.2G 144K /usr/src
mypool/var 1.29M 93.2G 616K /var
mypool/var/crash 148K 93.2G 148K /var/crash
mypool/var/log 178K 93.2G 178K /var/log
mypool/var/mail 144K 93.2G 144K /var/mail
mypool/var/newname 87.5K 93.2G 87.5K /var/newname
mypool/var/newname@new_snapshot_name 0 - 87.5K -
mypool/var/tmp 152K 93.2G 152K /var/tmp
# zfs snapshot -r mypool@my_recursive_snapshot
# zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
mypool@my_recursive_snapshot 0 - 144K -
mypool/ROOT@my_recursive_snapshot 0 - 144K -
mypool/ROOT/default@my_recursive_snapshot 0 - 777M -
mypool/tmp@my_recursive_snapshot 0 - 176K -
mypool/usr@my_recursive_snapshot 0 - 144K -
mypool/usr/home@my_recursive_snapshot 0 - 184K -
mypool/usr/ports@my_recursive_snapshot 0 - 144K -
mypool/usr/src@my_recursive_snapshot 0 - 144K -
mypool/var@my_recursive_snapshot 0 - 616K -
mypool/var/crash@my_recursive_snapshot 0 - 148K -
mypool/var/log@my_recursive_snapshot 0 - 178K -
mypool/var/mail@my_recursive_snapshot 0 - 144K -
mypool/var/newname@new_snapshot_name 0 - 87.5K -
mypool/var/newname@my_recursive_snapshot 0 - 87.5K -
mypool/var/tmp@my_recursive_snapshot 0 - 152K -
....
Snapshots are not shown by a normal `zfs list` operation. To list snapshots, `-t snapshot` is appended to `zfs list`. `-t all` displays both file systems and snapshots.
Snapshots are not mounted directly, so no path is shown in the `MOUNTPOINT` column. There is no mention of available disk space in the `AVAIL` column, as snapshots cannot be written to after they are created. Compare the snapshot to the original dataset from which it was created:
[source,shell]
....
# zfs list -rt all mypool/usr/home
NAME USED AVAIL REFER MOUNTPOINT
mypool/usr/home 184K 93.2G 184K /usr/home
mypool/usr/home@my_recursive_snapshot 0 - 184K -
....
Displaying both the dataset and the snapshot together reveals how snapshots work in <<zfs-term-cow,COW>> fashion. They save only the changes (_delta_) that were made and not the complete file system contents all over again. This means that snapshots take little space when few changes are made. Space usage can be made even more apparent by copying a file to the dataset, then making a second snapshot:
[source,shell]
....
# cp /etc/passwd /var/tmp
# zfs snapshot mypool/var/tmp@after_cp
# zfs list -rt all mypool/var/tmp
NAME USED AVAIL REFER MOUNTPOINT
mypool/var/tmp 206K 93.2G 118K /var/tmp
mypool/var/tmp@my_recursive_snapshot 88K - 152K -
mypool/var/tmp@after_cp 0 - 118K -
....
The second snapshot contains only the changes to the dataset after the copy operation. This yields enormous space savings. Notice that the size of the snapshot `_mypool/var/tmp@my_recursive_snapshot_` also changed in the `USED` column to indicate the changes between itself and the snapshot taken afterwards.
[[zfs-zfs-snapshot-diff]]
==== Comparing Snapshots
ZFS provides a built-in command to compare the differences in content between two snapshots. This is helpful when many snapshots were taken over time and the user wants to see how the file system has changed. For example, `zfs diff` lets a user find the latest snapshot that still contains a file that was accidentally deleted. Doing this for the two snapshots that were created in the previous section yields this output:
[source,shell]
....
# zfs list -rt all mypool/var/tmp
NAME USED AVAIL REFER MOUNTPOINT
mypool/var/tmp 206K 93.2G 118K /var/tmp
mypool/var/tmp@my_recursive_snapshot 88K - 152K -
mypool/var/tmp@after_cp 0 - 118K -
# zfs diff mypool/var/tmp@my_recursive_snapshot
M /var/tmp/
+ /var/tmp/passwd
....
The command lists the changes between the specified snapshot (in this case `_mypool/var/tmp@my_recursive_snapshot_`) and the live file system. The first column shows the type of change:
[.informaltable]
[cols="20%,80%"]
|===
|+
|The path or file was added.
|-
|The path or file was deleted.
|M
|The path or file was modified.
|R
|The path or file was renamed.
|===
Comparing the output with the table, it becomes clear that [.filename]#passwd# was added after the snapshot `_mypool/var/tmp@my_recursive_snapshot_` was created. This also resulted in a modification to the parent directory mounted at `_/var/tmp_`.
Comparing two snapshots is helpful when using the ZFS replication feature to transfer a dataset to a different host for backup purposes.
Compare two snapshots by providing the full dataset name and snapshot name of both datasets:
[source,shell]
....
# cp /var/tmp/passwd /var/tmp/passwd.copy
# zfs snapshot mypool/var/tmp@diff_snapshot
# zfs diff mypool/var/tmp@my_recursive_snapshot mypool/var/tmp@diff_snapshot
M /var/tmp/
+ /var/tmp/passwd
+ /var/tmp/passwd.copy
# zfs diff mypool/var/tmp@my_recursive_snapshot mypool/var/tmp@after_cp
M /var/tmp/
+ /var/tmp/passwd
....
A backup administrator can compare two snapshots received from the sending host and determine the actual changes in the dataset. See the <<zfs-zfs-send,Replication>> section for more information.
[[zfs-zfs-snapshot-rollback]]
==== Snapshot Rollback
When at least one snapshot is available, the dataset can be rolled back to it at any time. Most of the time this is done when the current state of the dataset is no longer required and an older version is preferred. Scenarios such as local development tests gone wrong, botched system updates hampering the system's overall functionality, or the need to restore accidentally deleted files or directories are all too common occurrences. Luckily, rolling back a snapshot is as easy as typing `zfs rollback _snapshotname_`. Depending on how many changes are involved, the operation will take some time to finish. During that time, the dataset always remains in a consistent state, much like a database conforming to ACID principles would while performing a rollback. This happens while the dataset is live and accessible, without requiring downtime. Once the snapshot has been rolled back, the dataset has the same state as it had when the snapshot was originally taken. All other data in that dataset that was not part of the snapshot is discarded. Taking a snapshot of the current state of the dataset before rolling back to a previous one is a good idea when some data is required later. This way, the user can roll back and forth between snapshots without losing data that is still valuable.
In the first example, a snapshot is rolled back because a careless `rm` operation removed more data than was intended.
[source,shell]
....
# zfs list -rt all mypool/var/tmp
NAME USED AVAIL REFER MOUNTPOINT
mypool/var/tmp 262K 93.2G 120K /var/tmp
mypool/var/tmp@my_recursive_snapshot 88K - 152K -
mypool/var/tmp@after_cp 53.5K - 118K -
mypool/var/tmp@diff_snapshot 0 - 120K -
# ls /var/tmp
passwd passwd.copy vi.recover
# rm /var/tmp/passwd*
# ls /var/tmp
vi.recover
....
At this point, the user realizes that too many files were deleted and wants them back. ZFS provides an easy way to get them back using rollbacks, but only when snapshots of important data are taken on a regular basis. To get the files back and start over from the last snapshot, issue the command:
[source,shell]
....
# zfs rollback mypool/var/tmp@diff_snapshot
# ls /var/tmp
passwd passwd.copy vi.recover
....
The rollback operation restored the dataset to the state of the last snapshot. It is also possible to roll back to a snapshot that was taken much earlier and has other snapshots that were created after it. When trying to do this, ZFS will issue this warning:
[source,shell]
....
# zfs list -rt snapshot mypool/var/tmp
NAME USED AVAIL REFER MOUNTPOINT
mypool/var/tmp@my_recursive_snapshot 88K - 152K -
mypool/var/tmp@after_cp 53.5K - 118K -
mypool/var/tmp@diff_snapshot 0 - 120K -
# zfs rollback mypool/var/tmp@my_recursive_snapshot
cannot rollback to 'mypool/var/tmp@my_recursive_snapshot': more recent snapshots exist
use '-r' to force deletion of the following snapshots:
mypool/var/tmp@after_cp
mypool/var/tmp@diff_snapshot
....
This warning means that snapshots exist between the current state of the dataset and the snapshot to which the user wants to roll back. To complete the rollback, these snapshots must be deleted. ZFS cannot track all the changes between different states of the dataset, because snapshots are read-only. ZFS will not delete the affected snapshots unless the user specifies `-r` to indicate that this is the desired action. If that is the intention, and the consequences of losing all intermediate snapshots are understood, the command can be issued:
[source,shell]
....
# zfs rollback -r mypool/var/tmp@my_recursive_snapshot
# zfs list -rt snapshot mypool/var/tmp
NAME USED AVAIL REFER MOUNTPOINT
mypool/var/tmp@my_recursive_snapshot 8K - 152K -
# ls /var/tmp
vi.recover
....
The output from `zfs list -t snapshot` confirms that the intermediate snapshots were removed as a result of `zfs rollback -r`.
[[zfs-zfs-snapshot-snapdir]]
==== Restoring Individual Files from Snapshots
Snapshots are mounted in a hidden directory under the parent dataset: [.filename]#.zfs/snapshot/snapshotname#. By default, these directories will not be displayed even when a standard `ls -a` is issued. Although the directory is not displayed, it is there nevertheless and can be accessed like any normal directory. The property named `snapdir` controls whether these hidden directories show up in a directory listing. Setting the property to `visible` allows them to appear in the output of `ls` and other commands that deal with directory contents.
[source,shell]
....
# zfs get snapdir mypool/var/tmp
NAME PROPERTY VALUE SOURCE
mypool/var/tmp snapdir hidden default
# ls -a /var/tmp
. .. passwd vi.recover
# zfs set snapdir=visible mypool/var/tmp
# ls -a /var/tmp
. .. .zfs passwd vi.recover
....
Individual files can easily be restored to a previous state by copying them from the snapshot back to the parent dataset. The directory structure below [.filename]#.zfs/snapshot# has a directory named exactly like the snapshots taken earlier to make it easier to identify them. In the next example, it is assumed that a file is to be restored from the hidden [.filename]#.zfs# directory by copying it from the snapshot that contained the latest version of the file:
[source,shell]
....
# rm /var/tmp/passwd
# ls -a /var/tmp
. .. .zfs vi.recover
# ls /var/tmp/.zfs/snapshot
after_cp my_recursive_snapshot
# ls /var/tmp/.zfs/snapshot/after_cp
passwd vi.recover
# cp /var/tmp/.zfs/snapshot/after_cp/passwd /var/tmp
....
Even when the `snapdir` property is set to hidden, it is still possible to list the contents of the [.filename]#.zfs/snapshot# directory by running `ls .zfs/snapshot` inside the dataset. It is up to the administrator to decide whether these directories will be displayed, and the setting can differ per dataset. Copying files or directories out of the hidden [.filename]#.zfs/snapshot# directory is simple enough. Trying it the other way around results in this error:
[source,shell]
....
# cp /etc/rc.conf /var/tmp/.zfs/snapshot/after_cp/
cp: /var/tmp/.zfs/snapshot/after_cp/rc.conf: Read-only file system
....
The error reminds the user that snapshots are read-only and cannot be changed after creation. Files cannot be copied into or removed from snapshot directories because that would change the state of the dataset they represent.
Snapshots consume space based on how much the parent file system has changed since the time of the snapshot. The `written` property of a snapshot tracks how much space is being used by the snapshot.
Snapshots are destroyed and the space reclaimed with `zfs destroy _dataset_@_snapshot_`. Adding `-r` recursively removes all snapshots with the same name under the parent dataset. Adding `-n -v` to the command displays a list of the snapshots that would be deleted and an estimate of how much space would be reclaimed without performing the actual destroy operation.
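For example, a dry run of recursively destroying the recursive snapshot taken earlier might look like this (the output is abbreviated and only illustrative):
[source,shell]
....
# zfs destroy -r -n -v mypool@my_recursive_snapshot
would destroy mypool@my_recursive_snapshot
would destroy mypool/usr/home@my_recursive_snapshot
would reclaim 88K
....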
[[zfs-zfs-clones]]
=== Managing Clones
A clone is a copy of a snapshot that is treated more like a regular dataset. Unlike a snapshot, a clone is not read only, is mounted, and can have its own properties. Once a clone has been created using `zfs clone`, the snapshot it was created from cannot be destroyed. The child/parent relationship between the clone and the snapshot can be reversed using `zfs promote`. After a clone has been promoted, the snapshot becomes a child of the clone, rather than of the original parent dataset. This will change how the space is accounted, but not actually change the amount of space consumed. The clone can be mounted at any point within the ZFS file system hierarchy, not just below the original location of the snapshot.
To demonstrate the clone feature, this example dataset is used:
[source,shell]
....
# zfs list -rt all camino/home/joe
NAME USED AVAIL REFER MOUNTPOINT
camino/home/joe 108K 1.3G 87K /usr/home/joe
camino/home/joe@plans 21K - 85.5K -
camino/home/joe@backup 0K - 87K -
....
A typical use for clones is to experiment with a specific dataset while keeping the snapshot around to fall back to in case something goes wrong. Since snapshots cannot be changed, a read/write clone of a snapshot is created. After the desired result is achieved in the clone, the clone can be promoted to a dataset and the old file system removed. This is not strictly necessary, as the clone and dataset can coexist without problems.
[source,shell]
....
# zfs clone camino/home/joe@backup camino/home/joenew
# ls /usr/home/joe*
/usr/home/joe:
backup.txz plans.txt
/usr/home/joenew:
backup.txz plans.txt
# df -h /usr/home
Filesystem Size Used Avail Capacity Mounted on
usr/home/joe 1.3G 31k 1.3G 0% /usr/home/joe
usr/home/joenew 1.3G 31k 1.3G 0% /usr/home/joenew
....
After a clone is created it is an exact copy of the state the dataset was in when the snapshot was taken. The clone can now be changed independently from its originating dataset. The only connection between the two is the snapshot. ZFS records this connection in the property `origin`. Once the dependency between the snapshot and the clone has been removed by promoting the clone using `zfs promote`, the `origin` of the clone is removed as it is now an independent dataset. This example demonstrates it:
[source,shell]
....
# zfs get origin camino/home/joenew
NAME PROPERTY VALUE SOURCE
camino/home/joenew origin camino/home/joe@backup -
# zfs promote camino/home/joenew
# zfs get origin camino/home/joenew
NAME PROPERTY VALUE SOURCE
camino/home/joenew origin - -
....
After making some changes, such as copying [.filename]#loader.conf# to the promoted clone, the old dataset becomes obsolete and the promoted clone can replace it. This can be achieved by two consecutive commands: `zfs destroy` on the old dataset and `zfs rename` on the clone to give it the old dataset's name (it could also get an entirely different name).
[source,shell]
....
# cp /boot/defaults/loader.conf /usr/home/joenew
# zfs destroy -f camino/home/joe
# zfs rename camino/home/joenew camino/home/joe
# ls /usr/home/joe
backup.txz loader.conf plans.txt
# df -h /usr/home
Filesystem Size Used Avail Capacity Mounted on
usr/home/joe 1.3G 128k 1.3G 0% /usr/home/joe
....
The cloned snapshot is now handled like an ordinary dataset. It contains all the data from the original snapshot plus the files that were added to it like [.filename]#loader.conf#. Clones can be used in different scenarios to provide useful features to ZFS users. For example, jails could be provided as snapshots containing different sets of installed applications. Users can clone these snapshots and add their own applications as they see fit. Once they are satisfied with the changes, the clones can be promoted to full datasets and provided to end users to work with like they would with a real dataset. This saves time and administrative overhead when providing these jails.
[[zfs-zfs-send]]
=== Replication
Keeping data on a single pool in one location exposes it to risks like theft and natural or human disasters. Making regular backups of the entire pool is vital. ZFS provides a built-in serialization feature that can send a stream representation of the data to standard output. Using this technique, it is possible to not only store the data on another pool connected to the local system, but also to send it over a network to another system. Snapshots are the basis for this replication (see the section on <<zfs-zfs-snapshot,ZFS snapshots>>). The commands used for replicating data are `zfs send` and `zfs receive`.
These examples demonstrate ZFS replication with these two pools:
[source,shell]
....
# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
backup 960M 77K 896M - - 0% 0% 1.00x ONLINE -
mypool 984M 43.7M 940M - - 0% 4% 1.00x ONLINE -
....
The pool named _mypool_ is the primary pool where data is written to and read from on a regular basis. A second pool, _backup_, is used as a standby in case the primary pool becomes unavailable. Note that this fail-over is not done automatically by ZFS, but must be done manually by a system administrator when needed. A snapshot is used to provide a consistent version of the file system to be replicated. Once a snapshot of _mypool_ has been created, it can be copied to the _backup_ pool. Only snapshots can be replicated. Changes made since the most recent snapshot will not be included.
[source,shell]
....
# zfs snapshot mypool@backup1
# zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
mypool@backup1 0 - 43.6M -
....
Now that a snapshot exists, `zfs send` can be used to create a stream representing the contents of the snapshot. This stream can be stored as a file or received by another pool. The stream is written to standard output, but must be redirected to a file or pipe or an error is produced:
[source,shell]
....
# zfs send mypool@backup1
Error: Stream can not be written to a terminal.
You must redirect standard output.
....
To back up a dataset with `zfs send`, redirect to a file located on the mounted backup pool. Ensure that the pool has enough free space to accommodate the size of the snapshot being sent, which means all of the data contained in the snapshot, not just the changes from the previous snapshot.
[source,shell]
....
# zfs send mypool@backup1 > /backup/backup1
# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
backup 960M 63.7M 896M - - 0% 6% 1.00x ONLINE -
mypool 984M 43.7M 940M - - 0% 4% 1.00x ONLINE -
....
The `zfs send` transferred all the data in the snapshot called _backup1_ to the pool named _backup_. Creating and sending these snapshots can be done automatically with a man:cron[8] job.
Instead of storing the backups as archive files, ZFS can receive them as a live file system, allowing the backed up data to be accessed directly. To get to the actual data contained in those streams, `zfs receive` is used to transform the streams back into files and directories. The example below combines `zfs send` and `zfs receive` using a pipe to copy the data from one pool to another. The data can be used directly on the receiving pool after the transfer is complete. A dataset can only be replicated to an empty dataset.
[source,shell]
....
# zfs snapshot mypool@replica1
# zfs send -v mypool@replica1 | zfs receive backup/mypool
send from @ to mypool@replica1 estimated size is 50.1M
total estimated size is 50.1M
TIME SENT SNAPSHOT
# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
backup 960M 63.7M 896M - - 0% 6% 1.00x ONLINE -
mypool 984M 43.7M 940M - - 0% 4% 1.00x ONLINE -
....
[[zfs-send-incremental]]
==== Incremental Backups
`zfs send` can also determine the difference between two snapshots and send only the differences between the two. This saves disk space and transfer time. For example:
[source,shell]
....
# zfs snapshot mypool@replica2
# zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
mypool@replica1 5.72M - 43.6M -
mypool@replica2 0 - 44.1M -
# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
backup 960M 61.7M 898M - - 0% 6% 1.00x ONLINE -
mypool 960M 50.2M 910M - - 0% 5% 1.00x ONLINE -
....
A second snapshot called _replica2_ was created. This second snapshot contains only the changes that were made to the file system between it and the previous snapshot, _replica1_. Using `zfs send -i` and indicating the pair of snapshots generates an incremental replica stream containing only the data that has changed. This can only succeed if the initial snapshot already exists on the receiving side.
[source,shell]
....
# zfs send -v -i mypool@replica1 mypool@replica2 | zfs receive backup/mypool
send from @replica1 to mypool@replica2 estimated size is 5.02M
total estimated size is 5.02M
TIME SENT SNAPSHOT
# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
backup 960M 80.8M 879M - - 0% 8% 1.00x ONLINE -
mypool 960M 50.2M 910M - - 0% 5% 1.00x ONLINE -
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
backup 55.4M 240G 152K /backup
backup/mypool 55.3M 240G 55.2M /backup/mypool
mypool 55.6M 11.6G 55.0M /mypool
# zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
backup/mypool@replica1 104K - 50.2M -
backup/mypool@replica2 0 - 55.2M -
mypool@replica1 29.9K - 50.0M -
mypool@replica2 0 - 55.0M -
....
The incremental stream was successfully transferred. Only the data that had changed was replicated, rather than the entirety of _replica1_. Only the differences were sent, which took much less time to transfer and saved disk space by not copying the complete pool each time. This is useful when having to rely on slow networks or when costs per transferred byte must be considered.
A new file system, _backup/mypool_, is available with all of the files and data from the pool _mypool_. If `-p` is specified, the properties of the dataset will be copied, including compression settings, quotas, and mount points. When `-R` is specified, all child datasets of the indicated dataset will be copied, along with all of their properties. Sending and receiving can be automated so that regular backups are created on the second pool.
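As a hypothetical combination of these options, an incremental stream that also includes all child datasets and their properties could be sent and received like this:
[source,shell]
....
# zfs send -R -i mypool@replica1 mypool@replica2 | zfs receive -F backup/mypool
....
Here `-F` forces a rollback of the receiving dataset to its most recent snapshot before the receive, which is needed if it has been modified since the last replication.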
[[zfs-send-ssh]]
==== Sending Encrypted Backups over SSH
Sending streams over the network is a good way to keep a remote backup, but it does come with a drawback. Data sent over the network link is not encrypted, allowing anyone to intercept and transform the streams back into data without the knowledge of the sending user. This is undesirable, especially when sending the streams over the internet to a remote host. SSH can be used to securely encrypt data sent over a network connection. Since ZFS only requires the stream to be redirected from standard output, it is relatively easy to pipe it through SSH. To keep the contents of the file system encrypted in transit and on the remote system, consider using https://wiki.freebsd.org/PEFS[PEFS].
A few settings and security precautions must be completed first. Only the necessary steps required for the `zfs send` operation are shown here. For more information on SSH, see crossref:security[openssh,"OpenSSH"].
This configuration is required:
* Passwordless SSH access between sending and receiving host using SSH keys
* Normally, the privileges of the `root` user are needed to send and receive streams. This requires logging in to the receiving system as `root`. However, logging in as `root` is disabled by default for security reasons. The <<zfs-zfs-allow,ZFS Delegation>> system can be used to allow a non-`root` user on each system to perform the respective send and receive operations.
* On the sending system:
+
[source,shell]
....
# zfs allow -u someuser send,snapshot mypool
....
* To mount the pool, the unprivileged user must own the directory, and regular users must be allowed to mount file systems. On the receiving system:
+
[source,shell]
....
# sysctl vfs.usermount=1
vfs.usermount: 0 -> 1
# echo vfs.usermount=1 >> /etc/sysctl.conf
# zfs create recvpool/backup
# zfs allow -u someuser create,mount,receive recvpool/backup
# chown someuser /recvpool/backup
....
The unprivileged user now has the ability to receive and mount datasets, and the _home_ dataset can be replicated to the remote system:
[source,shell]
....
% zfs snapshot -r mypool/home@monday
% zfs send -R mypool/home@monday | ssh someuser@backuphost zfs recv -dvu recvpool/backup
....
A recursive snapshot called _monday_ is made of the file system dataset _home_ that resides on the pool _mypool_. Then it is sent with `zfs send -R` to include the dataset, all child datasets, snapshots, clones, and settings in the stream. The output is piped to the waiting `zfs receive` on the remote host _backuphost_ through SSH. Using a fully qualified domain name or IP address is recommended. The receiving machine writes the data to the _backup_ dataset on the _recvpool_ pool. Adding `-d` to `zfs recv` strips the original pool name from the received datasets, so they are placed under the target dataset with the rest of their original path. `-u` causes the file systems to not be mounted on the receiving side. When `-v` is included, more detail about the transfer is shown, including elapsed time and the amount of data transferred.
[[zfs-zfs-quota]]
=== Dataset, User, and Group Quotas
<<zfs-term-quota,Dataset quotas>> are used to restrict the amount of space that can be consumed by a particular dataset. <<zfs-term-refquota,Reference Quotas>> work in very much the same way, but only count the space used by the dataset itself, excluding snapshots and child datasets. Similarly, <<zfs-term-userquota,user>> and <<zfs-term-groupquota,group>> quotas can be used to prevent users or groups from using all of the space in the pool or dataset.
The following examples assume that the users already exist in the system. Before adding a user to the system, make sure to create their home dataset first and set the `mountpoint` to `/home/_bob_`. Then, create the user and make the home directory point to the dataset's `mountpoint` location. This will properly set owner and group permissions without shadowing any pre-existing home directory paths.
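A minimal sketch of that sequence for a hypothetical user _bob_ on the pool _storage_ (all names here are assumptions, and the parent dataset [.filename]#storage/home# is assumed to exist) could look like this:
[source,shell]
....
# zfs create storage/home/bob
# zfs set mountpoint=/home/bob storage/home/bob
# pw useradd bob -d /home/bob -s /bin/sh
# chown bob:bob /home/bob
....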
To enforce a dataset quota of 10 GB for [.filename]#storage/home/bob#:
[source,shell]
....
# zfs set quota=10G storage/home/bob
....
To enforce a reference quota of 10 GB for [.filename]#storage/home/bob#:
[source,shell]
....
# zfs set refquota=10G storage/home/bob
....
To remove a quota of 10 GB for [.filename]#storage/home/bob#:
[source,shell]
....
# zfs set quota=none storage/home/bob
....
The general format is `userquota@_user_=_size_`, and the user's name must be in one of these formats:
* POSIX compatible name such as _joe_.
* POSIX numeric ID such as _789_.
* SID name such as _joe.bloggs@example.com_.
* SID numeric ID such as _S-1-123-456-789_.
For example, to enforce a user quota of 50 GB for the user named _joe_:
[source,shell]
....
# zfs set userquota@joe=50G
....
To remove any quota:
[source,shell]
....
# zfs set userquota@joe=none
....
[NOTE]
====
User quota properties are not displayed by `zfs get all`. Non-`root` users can only see their own quotas unless they have been granted the `userquota` privilege. Users with this privilege are able to view and set everyone's quota.
====
The general format for setting a group quota is: `groupquota@_group_=_size_`.
To set the quota for the group _firstgroup_ to 50 GB, use:
[source,shell]
....
# zfs set groupquota@firstgroup=50G
....
To remove the quota for the group _firstgroup_, or to make sure that one is not set, instead use:
[source,shell]
....
# zfs set groupquota@firstgroup=none
....
As with the user quota property, non-`root` users can only see the quotas associated with the groups to which they belong. However, `root` or a user with the `groupquota` privilege can view and set all quotas for all groups.
To display the amount of space used by each user on a file system or snapshot along with any quotas, use `zfs userspace`. For group information, use `zfs groupspace`. For more information about supported options or how to display only specific options, refer to man:zfs[1].
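For example, to display this information for the hypothetical dataset [.filename]#storage/home/bob#:
[source,shell]
....
# zfs userspace storage/home/bob
# zfs groupspace storage/home/bob
....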
Users with sufficient privileges, and `root`, can list the quota for [.filename]#storage/home/bob# using:
[source,shell]
....
# zfs get quota storage/home/bob
....
[[zfs-zfs-reservation]]
=== Reservations
<<zfs-term-reservation,Reservations>> guarantee a minimum amount of space will always be available on a dataset. The reserved space will not be available to any other dataset. This feature can be especially useful to ensure that free space is available for an important dataset or log files.
The general format of the `reservation` property is `reservation=_size_`, so to set a reservation of 10 GB on [.filename]#storage/home/bob#, use:
[source,shell]
....
# zfs set reservation=10G storage/home/bob
....
To clear any reservation:
[source,shell]
....
# zfs set reservation=none storage/home/bob
....
The same principle can be applied to the `refreservation` property for setting a <<zfs-term-refreservation,Reference Reservation>>, with the general format `refreservation=_size_`.
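For example, to set a reference reservation of 10 GB on [.filename]#storage/home/bob#:
[source,shell]
....
# zfs set refreservation=10G storage/home/bob
....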
This command shows any reservations or refreservations that exist on [.filename]#storage/home/bob#:
[source,shell]
....
# zfs get reservation storage/home/bob
# zfs get refreservation storage/home/bob
....
[[zfs-zfs-compression]]
=== Compression
ZFS provides transparent compression. Compressing data at the block level as it is written not only saves space, but can also increase disk throughput. If data is compressed by 25% and the compressed data is written to the disk at the same rate as the uncompressed version would be, the effective write speed is 125%. Compression can also be a great alternative to <<zfs-zfs-deduplication,Deduplication>> because it does not require additional memory.
ZFS offers several different compression algorithms, each with different trade-offs. With the introduction of LZ4 compression in ZFS v5000, it is possible to enable compression for the entire pool without the large performance trade-off of other algorithms. The biggest advantage to LZ4 is the _early abort_ feature. If LZ4 does not achieve at least 12.5% compression in the first part of the data, the block is written uncompressed to avoid wasting CPU cycles trying to compress data that is either already compressed or uncompressible. For details about the different compression algorithms available in ZFS, see the <<zfs-term-compression,Compression>> entry in the terminology section.
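For example, LZ4 compression can be enabled for an entire pool by setting the property on the pool's root dataset, from which all child datasets inherit it (the pool name is only illustrative):
[source,shell]
....
# zfs set compression=lz4 mypool
....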
The administrator can monitor the effectiveness of compression using a number of dataset properties.
[source,shell]
....
# zfs get used,compressratio,compression,logicalused mypool/compressed_dataset
NAME PROPERTY VALUE SOURCE
mypool/compressed_dataset used 449G -
mypool/compressed_dataset compressratio 1.11x -
mypool/compressed_dataset compression lz4 local
mypool/compressed_dataset logicalused 496G -
....
The dataset is currently using 449 GB of space (the `used` property). Without compression, it would have taken 496 GB of space (the `logicalused` property). This results in a compression ratio of 1.11:1.
Compression can have an unexpected side effect when combined with <<zfs-term-userquota,User Quotas>>. User quotas restrict how much space a user can consume on a dataset, but the measurements are based on how much space is used _after compression_. So if a user has a quota of 10 GB, and writes 10 GB of compressible data, they will still be able to store additional data. If they later update a file, say a database, with more or less compressible data, the amount of space available to them will change. This can result in the odd situation where a user did not increase the actual amount of data (the `logicalused` property), but the change in compression caused them to reach their quota limit.
Compression can have a similar unexpected interaction with backups. Quotas are often used to limit how much data can be stored to ensure there is sufficient backup space available. However since quotas do not consider compression, more data may be written than would fit with uncompressed backups.
[[zfs-zfs-compression-zstd]]
=== Zstandard Compression
In OpenZFS 2.0, a new compression algorithm was added. Zstandard (Zstd) offers higher compression ratios than the default LZ4 while offering much greater speeds than the alternative, gzip. OpenZFS 2.0 is available starting with FreeBSD 12.1-RELEASE via package:sysutils/openzfs[], has been the default in FreeBSD 13-CURRENT since September 2020, and will be the default in FreeBSD 13.0-RELEASE.
Zstd provides a large selection of compression levels, providing fine-grained control over performance versus compression ratio. One of the main advantages of Zstd is that the decompression speed is independent of the compression level. For data that is written once but read many times, Zstd allows the use of the highest compression levels without a read performance penalty.
Even when data is updated frequently, there are often performance gains that come from enabling compression. One of the biggest advantages comes from the compressed ARC feature. ZFS's Adaptive Replacement Cache (ARC) caches the compressed version of the data in RAM, decompressing it each time it is needed. This allows the same amount of RAM to store more data and metadata, increasing the cache hit ratio.
ZFS offers 19 levels of Zstd compression, each offering incrementally more space savings in exchange for slower compression. The default level is `zstd-3` and offers greater compression than LZ4 without being significantly slower. Levels above 10 require significant amounts of memory to compress each block, so they are discouraged on systems with less than 16 GB of RAM. ZFS also implements a selection of the Zstd _fast_ levels, which get correspondingly faster but offer lower compression ratios. ZFS supports `zstd-fast-1` through `zstd-fast-10`, `zstd-fast-20` through `zstd-fast-100` in increments of 10, and finally `zstd-fast-500` and `zstd-fast-1000` which provide minimal compression, but offer very high performance.
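As an illustrative sketch (the dataset names are assumptions), a write-once archive dataset could use a high compression level while a frequently rewritten dataset could use one of the faster levels:
[source,shell]
....
# zfs set compression=zstd-19 mypool/archive
# zfs set compression=zstd-fast-10 mypool/scratch
....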
If ZFS is not able to allocate the required memory to compress a block with Zstd, it will fall back to storing the block uncompressed. This is unlikely to happen outside of the highest levels of Zstd on systems that are memory constrained. The sysctl `kstat.zfs.misc.zstd.compress_alloc_fail` counts how many times this has occurred since the ZFS module was loaded.
[[zfs-zfs-deduplication]]
=== Deduplication
When enabled, <<zfs-term-deduplication,deduplication>> uses the checksum of each block to detect duplicate blocks. When a new block is a duplicate of an existing block, ZFS writes an additional reference to the existing data instead of the whole duplicate block. Tremendous space savings are possible if the data contains many duplicated files or repeated information. Be warned: deduplication requires an extremely large amount of memory, and most of the space savings can be had without the extra cost by enabling compression instead.
To activate deduplication, set the `dedup` property on the target pool:
[source,shell]
....
# zfs set dedup=on pool
....
Only new data being written to the pool will be deduplicated. Data that has already been written to the pool will not be deduplicated merely by activating this option. A pool with a freshly activated deduplication property will look like this example:
[source,shell]
....
# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
pool 2.84G 2.19M 2.83G - - 0% 0% 1.00x ONLINE -
....
The `DEDUP` column shows the actual rate of deduplication for the pool. A value of `1.00x` shows that data has not been deduplicated yet. In the next example, the ports tree is copied three times into different directories on the deduplicated pool created above.
[source,shell]
....
# for d in dir1 dir2 dir3; do
> mkdir $d && cp -R /usr/ports $d &
> done
....
Redundant data is detected and deduplicated:
[source,shell]
....
# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
pool 2.84G 20.9M 2.82G - - 0% 0% 3.00x ONLINE -
....
The `DEDUP` column shows a factor of `3.00x`. Multiple copies of the ports tree data were detected and deduplicated, using only a third of the space. The potential for space savings can be enormous, but comes at the cost of having enough memory to keep track of the deduplicated blocks.
Deduplication is not always beneficial, especially when the data on a pool is not redundant. ZFS can show potential space savings by simulating deduplication on an existing pool:
[source,shell]
....
# zdb -S pool
Simulated DDT histogram:
bucket allocated referenced
______ ______________________________ ______________________________
refcnt blocks LSIZE PSIZE DSIZE blocks LSIZE PSIZE DSIZE
------ ------ ----- ----- ----- ------ ----- ----- -----
1 2.58M 289G 264G 264G 2.58M 289G 264G 264G
2 206K 12.6G 10.4G 10.4G 430K 26.4G 21.6G 21.6G
4 37.6K 692M 276M 276M 170K 3.04G 1.26G 1.26G
8 2.18K 45.2M 19.4M 19.4M 20.0K 425M 176M 176M
16 174 2.83M 1.20M 1.20M 3.33K 48.4M 20.4M 20.4M
32 40 2.17M 222K 222K 1.70K 97.2M 9.91M 9.91M
64 9 56K 10.5K 10.5K 865 4.96M 948K 948K
128 2 9.50K 2K 2K 419 2.11M 438K 438K
256 5 61.5K 12K 12K 1.90K 23.0M 4.47M 4.47M
1K 2 1K 1K 1K 2.98K 1.49M 1.49M 1.49M
Total 2.82M 303G 275G 275G 3.20M 319G 287G 287G
dedup = 1.05, compress = 1.11, copies = 1.00, dedup * compress / copies = 1.16
....
After `zdb -S` finishes analyzing the pool, it shows the space reduction ratio that would be achieved by activating deduplication. In this case, `1.16` is a very poor space saving ratio that is mostly provided by compression. Activating deduplication on this pool would not save any significant amount of space, and is not worth the amount of memory required to enable deduplication. Using the formula _ratio = dedup * compress / copies_, system administrators can plan the storage allocation, deciding whether the workload will contain enough duplicate blocks to justify the memory requirements. If the data is reasonably compressible, the space savings may be very good. Enabling compression first is recommended, and compression can also provide greatly increased performance. Only enable deduplication in cases where the additional savings will be considerable and there is sufficient memory for the <<zfs-term-deduplication,DDT>>.
[[zfs-zfs-jail]]
=== ZFS and Jails
`zfs jail` and the corresponding `jailed` property are used to delegate a ZFS dataset to a crossref:jails[jails,Jail]. `zfs jail _jailid_` attaches a dataset to the specified jail, and `zfs unjail` detaches it. For the dataset to be controlled from within a jail, the `jailed` property must be set. Once a dataset is jailed, it can no longer be mounted on the host because it may have mount points that would compromise the security of the host.
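A minimal sketch, assuming a running jail with the jail ID `1` and an existing dataset [.filename]#mypool/jails/data#, might look like this:
[source,shell]
....
# zfs set jailed=on mypool/jails/data
# zfs jail 1 mypool/jails/data
....
The dataset can then be mounted and managed from within the jail, provided the jail is configured to allow mounting ZFS file systems; see man:jail[8] for the relevant `allow.mount` parameters.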
[[zfs-zfs-allow]]
== Delegated Administration
A comprehensive permission delegation system allows unprivileged users to perform ZFS administration functions. For example, if each user's home directory is a dataset, users can be given permission to create and destroy snapshots of their home directories. A backup user can be given permission to use replication features. A usage statistics script can be allowed to run with access only to the space utilization data for all users. It is even possible to delegate the ability to delegate permissions. Permission delegation is possible for each subcommand and most properties.
[[zfs-zfs-allow-create]]
=== Delegating Dataset Creation
`zfs allow _someuser_ create _mydataset_` gives the specified user permission to create child datasets under the selected parent dataset. There is a caveat: creating a new dataset involves mounting it. That requires setting the FreeBSD `vfs.usermount` man:sysctl[8] to `1` to allow non-root users to mount a file system. There is another restriction aimed at preventing abuse: non-`root` users must own the mountpoint where the file system is to be mounted.
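Putting these requirements together, the setup for a hypothetical user _someuser_ and dataset _mydataset_ might look like this sketch:
[source,shell]
....
# sysctl vfs.usermount=1
# zfs allow -u someuser create,mount mydataset
....
The user must also own the mountpoint directory under which the new file systems will be mounted, as described above.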
[[zfs-zfs-allow-allow]]
=== Delegating Permission Delegation
`zfs allow _someuser_ allow _mydataset_` gives the specified user the ability to assign any permission they have on the target dataset, or its children, to other users. If a user has the `snapshot` permission and the `allow` permission, that user can then grant the `snapshot` permission to other users.
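For example, in this hypothetical sequence `root` grants both permissions to _someuser_, who can then pass the `snapshot` permission on to another user:
[source,shell]
....
# zfs allow -u someuser snapshot,allow mydataset
% zfs allow -u otheruser snapshot mydataset
....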
[[zfs-advanced]]
== Advanced Topics
[[zfs-advanced-tuning]]
=== Tuning
There are a number of tunables that can be adjusted to make ZFS perform best for different workloads.
* [[zfs-advanced-tuning-arc_max]] `_vfs.zfs.arc_max_` - Maximum size of the <<zfs-term-arc,ARC>>. The default is all RAM less 1 GB, or 5/8 of all RAM, whichever is more. However, a lower value should be used if the system will be running any other daemons or processes that may require memory. This value can be adjusted at runtime with man:sysctl[8] and can be set in [.filename]#/boot/loader.conf# or [.filename]#/etc/sysctl.conf#.
* [[zfs-advanced-tuning-arc_meta_limit]] `_vfs.zfs.arc_meta_limit_` - Limit the portion of the <<zfs-term-arc,ARC>> that can be used to store metadata. The default is one fourth of `vfs.zfs.arc_max`. Increasing this value will improve performance if the workload involves operations on a large number of files and directories, or frequent metadata operations, at the cost of less file data fitting in the <<zfs-term-arc,ARC>>. This value can be adjusted at runtime with man:sysctl[8] and can be set in [.filename]#/boot/loader.conf# or [.filename]#/etc/sysctl.conf#.
* [[zfs-advanced-tuning-arc_min]] `_vfs.zfs.arc_min_` - Minimum size of the <<zfs-term-arc,ARC>>. The default is one half of `vfs.zfs.arc_meta_limit`. Adjust this value to prevent other applications from pressuring out the entire <<zfs-term-arc,ARC>>. This value can be adjusted at runtime with man:sysctl[8] and can be set in [.filename]#/boot/loader.conf# or [.filename]#/etc/sysctl.conf#.
* [[zfs-advanced-tuning-vdev-cache-size]] `_vfs.zfs.vdev.cache.size_` - A preallocated amount of memory reserved as a cache for each device in the pool. The total amount of memory used will be this value multiplied by the number of devices. This value can only be adjusted at boot time, and is set in [.filename]#/boot/loader.conf#.
* [[zfs-advanced-tuning-min-auto-ashift]] `_vfs.zfs.min_auto_ashift_` - Minimum `ashift` (sector size) that will be used automatically at pool creation time. The value is a power of two. The default value of `9` represents `2^9 = 512`, a sector size of 512 bytes. To avoid _write amplification_ and get the best performance, set this value to the largest sector size used by a device in the pool.
+
Many drives have 4 KB sectors. Using the default `ashift` of `9` with these drives results in write amplification on these devices. Data that could be contained in a single 4 KB write must instead be written in eight 512-byte writes. ZFS tries to read the native sector size from all devices when creating a pool, but many drives with 4 KB sectors report that their sectors are 512 bytes for compatibility. Setting `vfs.zfs.min_auto_ashift` to `12` (`2^12 = 4096`) before creating a pool forces ZFS to use 4 KB blocks for best performance on these drives.
+
Forcing 4 KB blocks is also useful on pools where disk upgrades are planned. Future disks are likely to use 4 KB sectors, and `ashift` values cannot be changed after a pool is created.
+
In some specific cases, the smaller 512-byte block size might be preferable. When used with 512-byte disks for databases, or as storage for virtual machines, less data is transferred during small random reads. This can provide better performance, especially when using a smaller ZFS record size.
* [[zfs-advanced-tuning-prefetch_disable]] `_vfs.zfs.prefetch_disable_` - Disable prefetch. A value of `0` enables prefetch and `1` disables it. The default is `0`, unless the system has less than 4 GB of RAM. Prefetch works by reading larger blocks than were requested into the <<zfs-term-arc,ARC>> in hopes that the data will be needed soon. If the workload has a large number of random reads, disabling prefetch may actually improve performance by reducing unnecessary reads. This value can be adjusted at any time with man:sysctl[8].
* [[zfs-advanced-tuning-vdev-trim_on_init]] `_vfs.zfs.vdev.trim_on_init_` - Control whether new devices added to the pool have the `TRIM` command run on them. This ensures the best performance and longevity for SSDs, but takes extra time. If the device has already been secure erased, disabling this setting will make the addition of the new device faster. This value can be adjusted at any time with man:sysctl[8].
* [[zfs-advanced-tuning-vdev-max_pending]] `_vfs.zfs.vdev.max_pending_` - Limit the number of pending I/O requests per device. A higher value will keep the device command queue full and may give higher throughput. A lower value will reduce latency. This value can be adjusted at any time with man:sysctl[8].
* [[zfs-advanced-tuning-top_maxinflight]] `_vfs.zfs.top_maxinflight_` - Maximum number of outstanding I/Os per top-level <<zfs-term-vdev,vdev>>. Limits the depth of the command queue to prevent high latency. The limit is per top-level vdev, meaning the limit applies to each <<zfs-term-vdev-mirror,mirror>>, <<zfs-term-vdev-raidz,RAID-Z>>, or other vdev independently. This value can be adjusted at any time with man:sysctl[8].
* [[zfs-advanced-tuning-l2arc_write_max]] `_vfs.zfs.l2arc_write_max_` - Limit the amount of data written to the <<zfs-term-l2arc,L2ARC>> per second. This tunable is designed to extend the longevity of SSDs by limiting the amount of data written to the device. This value can be adjusted at any time with man:sysctl[8].
* [[zfs-advanced-tuning-l2arc_write_boost]] `_vfs.zfs.l2arc_write_boost_` - The value of this tunable is added to <<zfs-advanced-tuning-l2arc_write_max,`vfs.zfs.l2arc_write_max`>> and increases the write speed to the SSD until the first block is evicted from the <<zfs-term-l2arc,L2ARC>>. This "Turbo Warmup Phase" is designed to reduce the performance loss from an empty <<zfs-term-l2arc,L2ARC>> after a reboot. This value can be adjusted at any time with man:sysctl[8].
* [[zfs-advanced-tuning-scrub_delay]]`_vfs.zfs.scrub_delay_` - Number of ticks to delay between each I/O during a <<zfs-term-scrub,`scrub`>>. To ensure that a `scrub` does not interfere with the normal operation of the pool, if any other I/O is happening the `scrub` will delay between each command. This value controls the limit on the total IOPS (I/Os Per Second) generated by the `scrub`. The granularity of the setting is determined by the value of `kern.hz` which defaults to 1000 ticks per second. This setting may be changed, resulting in a different effective IOPS limit. The default value is `4`, resulting in a limit of: 1000 ticks/sec / 4 = 250 IOPS. Using a value of _20_ would give a limit of: 1000 ticks/sec / 20 = 50 IOPS. The speed of `scrub` is only limited when there has been recent activity on the pool, as determined by <<zfs-advanced-tuning-scan_idle,`vfs.zfs.scan_idle`>>. This value can be adjusted at any time with man:sysctl[8].
* [[zfs-advanced-tuning-resilver_delay]] `_vfs.zfs.resilver_delay_` - Number of ticks to delay between each I/O during a <<zfs-term-resilver,resilver>>. To ensure that a resilver does not interfere with the normal operation of the pool, if any other I/O is happening the resilver will delay between each command. This value controls the limit on the total IOPS (I/Os Per Second) generated by the resilver. The granularity of the setting is determined by the value of `kern.hz` which defaults to 1000 ticks per second. This setting may be changed, resulting in a different effective IOPS limit. The default value is 2, resulting in a limit of: 1000 ticks/sec / 2 = 500 IOPS. Returning the pool to an <<zfs-term-online,Online>> state may be more important if another device failing could <<zfs-term-faulted,Fault>> the pool, causing data loss. A value of 0 will give the resilver operation the same priority as other operations, speeding the healing process. The speed of resilver is only limited when there has been other recent activity on the pool, as determined by <<zfs-advanced-tuning-scan_idle,`vfs.zfs.scan_idle`>>. This value can be adjusted at any time with man:sysctl[8].
* [[zfs-advanced-tuning-scan_idle]] `_vfs.zfs.scan_idle_` - Number of milliseconds since the last operation before the pool is considered idle. When the pool is idle, the rate limiting for <<zfs-term-scrub,`scrub`>> and <<zfs-term-resilver,resilver>> is disabled. This value can be adjusted at any time with man:sysctl[8].
* [[zfs-advanced-tuning-txg-timeout]] `_vfs.zfs.txg.timeout_` - Maximum number of seconds between <<zfs-term-txg,transaction group>>s. The current transaction group will be written to the pool and a fresh transaction group started if this amount of time has elapsed since the previous transaction group. A transaction group may be triggered earlier if enough data is written. The default value is 5 seconds. A larger value may improve read performance by delaying asynchronous writes, but this may cause uneven performance when the transaction group is written. This value can be adjusted at any time with man:sysctl[8].
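As an illustrative sketch (the value shown is an example, not a recommendation for every system), a runtime tunable such as `vfs.zfs.min_auto_ashift` can be set immediately with man:sysctl[8]:
[source,shell]
....
# sysctl vfs.zfs.min_auto_ashift=12
vfs.zfs.min_auto_ashift: 9 -> 12
....
To make the change persist across reboots, add `vfs.zfs.min_auto_ashift=12` to [.filename]#/etc/sysctl.conf#, or to [.filename]#/boot/loader.conf# for tunables that can only be set at boot time.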
[[zfs-advanced-i386]]
=== ZFS on i386
Some of the features provided by ZFS are memory intensive, and may require tuning for maximum efficiency on systems with limited RAM.
==== Memory
As a bare minimum, the total system memory should be at least one gigabyte. The recommended amount of RAM depends upon the size of the pool and which ZFS features are used. A general rule of thumb is 1 GB of RAM for every 1 TB of storage. If the deduplication feature is used, a general rule of thumb is 5 GB of RAM per TB of storage to be deduplicated. While some users successfully use ZFS with less RAM, systems under heavy load may panic due to memory exhaustion. Further tuning may be required for systems with less RAM than recommended.
==== Kernel Configuration
Due to the address space limitations of the i386(TM) platform, ZFS users on the i386(TM) architecture must add this option to a custom kernel configuration file, rebuild the kernel, and reboot:
[.programlisting]
....
options KVA_PAGES=512
....
This expands the kernel address space, allowing the `vm.kvm_size` tunable to be pushed beyond the currently imposed limit of 1 GB, or the limit of 2 GB for PAE. To find the most suitable value for this option, divide the desired address space in megabytes by four. In this example, it is `512` for 2 GB.
==== Loader Tunables
The [.filename]#kmem# address space can be increased on all FreeBSD architectures. On a test system with 1 GB of physical memory, success was achieved with these options added to [.filename]#/boot/loader.conf#, and the system restarted:
[.programlisting]
....
vm.kmem_size="330M"
vm.kmem_size_max="330M"
vfs.zfs.arc_max="40M"
vfs.zfs.vdev.cache.size="5M"
....
For a more detailed list of recommendations for ZFS-related tuning, see https://wiki.freebsd.org/ZFSTuningGuide[].
[[zfs-links]]
== Additional Resources
* http://open-zfs.org[OpenZFS]
* https://wiki.freebsd.org/ZFSTuningGuide[FreeBSD Wiki - ZFS Tuning]
* http://docs.oracle.com/cd/E19253-01/819-5461/index.html[Oracle Solaris ZFS Administration Guide]
* https://calomel.org/zfs_raid_speed_capacity.html[Calomel Blog - ZFS Raidz Performance, Capacity and Integrity]
[[zfs-term]]
== ZFS Features and Terminology
ZFS is a fundamentally different file system because it is more than just a file system. ZFS combines the roles of file system and volume manager, enabling additional storage devices to be added to a live system and making the new space available on all of the existing file systems in that pool immediately. By combining the traditionally separate roles, ZFS is able to overcome previous limitations that prevented RAID groups from growing. Each top level device in a pool is called a _vdev_, which can be a simple disk or a RAID transformation such as a mirror or RAID-Z array. ZFS file systems (called _datasets_) each have access to the combined free space of the entire pool. As blocks are allocated from the pool, the space available to each file system decreases. This approach avoids the common pitfall with extensive partitioning where free space becomes fragmented across the partitions.
[.informaltable]
[cols="10%,90%"]
|===
|[[zfs-term-pool]]pool
|A storage _pool_ is the most basic building block of ZFS. A pool is made up of one or more vdevs, the underlying devices that store the data. A pool is then used to create one or more file systems (datasets) or block devices (volumes). These datasets and volumes share the pool of remaining free space. Each pool is uniquely identified by a name and a GUID. The features available are determined by the ZFS version number on the pool.
|[[zfs-term-vdev]]vdev Types
a|A pool is made up of one or more vdevs, which themselves can be a single disk or a group of disks, in the case of a RAID transform. When multiple vdevs are used, ZFS spreads data across the vdevs to increase performance and maximize usable space.
* [[zfs-term-vdev-disk]] _Disk_ - The most basic type of vdev is a standard block device. This can be an entire disk (such as [.filename]#/dev/ada0# or [.filename]#/dev/da0#) or a partition ([.filename]#/dev/ada0p3#). On FreeBSD, there is no performance penalty for using a partition rather than the entire disk. This differs from recommendations made by the Solaris documentation.
+
[CAUTION]
====
Using an entire disk as part of a bootable pool is strongly discouraged, as this may render the pool unbootable. Likewise, you should not use an entire disk as part of a mirror or RAID-Z vdev. This is because it is impossible to reliably determine the size of an unpartitioned disk at boot time, and because there is no place to put boot code.
====
* [[zfs-term-vdev-file]] _File_ - In addition to disks, ZFS pools can be backed by regular files; this is especially useful for testing and experimentation. Use the full path to the file as the device path in `zpool create`. All vdevs must be at least 128 MB in size.
* [[zfs-term-vdev-mirror]] _Mirror_ - When creating a mirror, specify the `mirror` keyword followed by the list of member devices for the mirror. A mirror consists of two or more devices; all data will be written to all member devices. A mirror vdev will only hold as much data as its smallest member. A mirror vdev can withstand the failure of all but one of its members without losing any data.
+
[NOTE]
====
A regular single disk vdev can be upgraded to a mirror vdev at any time with `zpool <<zfs-zpool-attach,attach>>`.
====
* [[zfs-term-vdev-raidz]] _RAID-Z_ - ZFS implements RAID-Z, a variation on standard RAID-5 that offers better distribution of parity and eliminates the "RAID-5 write hole" in which the data and parity information become inconsistent after an unexpected restart. ZFS supports three levels of RAID-Z which provide varying levels of redundancy in exchange for decreasing levels of usable storage. The types are named RAID-Z1 through RAID-Z3 based on the number of parity devices in the array and the number of disks which can fail while the pool remains operational.
+
In a RAID-Z1 configuration with four disks, each 1 TB, usable storage is 3 TB and the pool will still be able to operate in degraded mode with one faulted disk. If an additional disk goes offline before the faulted disk is replaced and resilvered, all data in the pool can be lost.
+
In a RAID-Z3 configuration with eight disks of 1 TB, the volume will provide 5 TB of usable space and still be able to operate with three faulted disks. Sun(TM) recommends no more than nine disks in a single vdev. If the configuration has more disks, it is recommended to divide them into separate vdevs and the pool data will be striped across them.
+
A configuration of two RAID-Z2 vdevs consisting of 8 disks each would create something similar to a RAID-60 array. A RAID-Z group's storage capacity is approximately the size of the smallest disk multiplied by the number of non-parity disks. Four 1 TB disks in RAID-Z1 has an effective size of approximately 3 TB, and an array of eight 1 TB disks in RAID-Z3 will yield 5 TB of usable space.
* [[zfs-term-vdev-spare]] _Spare_ - ZFS has a special pseudo-vdev type for keeping track of available hot spares. Note that installed hot spares are not deployed automatically; they must manually be configured to replace the failed device using `zpool replace`.
* [[zfs-term-vdev-log]] _Log_ - ZFS Log Devices, also known as ZFS Intent Log (<<zfs-term-zil,ZIL>>), move the intent log from the regular pool devices to a dedicated device, typically an SSD. Having a dedicated log device can significantly improve the performance of applications with a high volume of synchronous writes, especially databases. Log devices can be mirrored, but RAID-Z is not supported. If multiple log devices are used, writes will be load balanced across them.
* [[zfs-term-vdev-cache]] _Cache_ - Adding a cache vdev to a pool will add the storage of the cache to the <<zfs-term-l2arc,L2ARC>>. Cache devices cannot be mirrored. Since a cache device only stores additional copies of existing data, there is no risk of data loss.
|[[zfs-term-txg]] Transaction Group (TXG)
|Transaction Groups are the way changed blocks are grouped together and eventually written to the pool. Transaction groups are the atomic unit that ZFS uses to assert consistency. Each transaction group is assigned a unique 64-bit consecutive identifier. There can be up to three active transaction groups at a time, one in each of these three states:
* _Open_ - When a new transaction group is created, it is in the open state and accepts new writes. There is always a transaction group in the open state; however, the transaction group may refuse new writes if it has reached a limit. Once the open transaction group has reached a limit, or the <<zfs-advanced-tuning-txg-timeout,`vfs.zfs.txg.timeout`>> has been reached, the transaction group advances to the next state.
* _Quiescing_ - A short state that allows any pending operations to finish while not blocking the creation of a new open transaction group. Once all of the transactions in the group have completed, the transaction group advances to the final state.
* _Syncing_ - All of the data in the transaction group is written to stable storage. This process will in turn modify other data, such as metadata and space maps, that will also need to be written to stable storage. The process of syncing involves multiple passes. The first pass, writing all of the changed data blocks, is the biggest, followed by the metadata, which may take multiple passes to complete. Since allocating space for the data blocks generates new metadata, the syncing state cannot finish until a pass completes that does not allocate any additional space. The syncing state is also where _synctasks_ are completed. Synctasks are administrative operations, such as creating or destroying snapshots and datasets, that modify the uberblock. Once the sync state is complete, the transaction group in the quiescing state is advanced to the syncing state.
All administrative functions, such as <<zfs-term-snapshot,`snapshot`>>, are written as part of the transaction group. When a synctask is created, it is added to the currently open transaction group, and that group is advanced as quickly as possible to the syncing state to reduce the latency of administrative commands.
|[[zfs-term-arc]]Adaptive Replacement Cache (ARC)
|ZFS uses an Adaptive Replacement Cache (ARC), rather than a more traditional Least Recently Used (LRU) cache. An LRU cache is a simple list of items in the cache, sorted by when each object was most recently used. New items are added to the top of the list. When the cache is full, items from the bottom of the list are evicted to make room for more active objects. An ARC consists of four lists: the Most Recently Used (MRU) and Most Frequently Used (MFU) objects, plus a ghost list for each. These ghost lists track recently evicted objects to prevent them from being added back to the cache. This increases the cache hit ratio by avoiding objects that have a history of only being used occasionally. Another advantage of using both an MRU and MFU is that scanning an entire file system would normally evict all data from an MRU or LRU cache in favor of this freshly accessed content. With ZFS, there is also an MFU that only tracks the most frequently used objects, and the cache of the most commonly accessed blocks remains.
|[[zfs-term-l2arc]]L2ARC
|L2ARC is the second level of the ZFS caching system. The primary ARC is stored in RAM. Since the amount of available RAM is often limited, ZFS can also use <<zfs-term-vdev-cache,cache vdevs>>. Solid State Disks (SSDs) are often used as these cache devices due to their higher speed and lower latency compared to traditional spinning disks. L2ARC is entirely optional, but having one will significantly increase read speeds for files that are cached on the SSD instead of having to be read from the regular disks. L2ARC can also speed up <<zfs-term-deduplication,deduplication>> because a DDT that does not fit in RAM but does fit in the L2ARC will be much faster than a DDT that must be read from disk. The rate at which data is added to the cache devices is limited to prevent prematurely wearing out SSDs with too many writes. Until the cache is full (the first block has been evicted to make room), writing to the L2ARC is limited to the sum of the write limit and the boost limit, and afterwards limited to the write limit. A pair of man:sysctl[8] values control these rate limits. <<zfs-advanced-tuning-l2arc_write_max,`vfs.zfs.l2arc_write_max`>> controls how many bytes are written to the cache per second, while <<zfs-advanced-tuning-l2arc_write_boost,`vfs.zfs.l2arc_write_boost`>> adds to this limit during the "Turbo Warmup Phase" (Write Boost).
|[[zfs-term-zil]]ZIL
|ZIL accelerates synchronous transactions by using storage devices like SSDs that are faster than those used in the main storage pool. When an application requests a synchronous write (a guarantee that the data has been safely stored to disk rather than merely cached to be written later), the data is written to the faster ZIL storage, then later flushed out to the regular disks. This greatly reduces latency and improves performance. Only synchronous workloads like databases will benefit from a ZIL. Regular asynchronous writes such as copying files will not use the ZIL at all.
|[[zfs-term-cow]]Copy-On-Write
|Unlike a traditional file system, when data is overwritten on ZFS, the new data is written to a different block rather than overwriting the old data in place. Only when this write is complete is the metadata then updated to point to the new location. In the event of a shorn write (a system crash or power loss in the middle of writing a file), the entire original contents of the file are still available and the incomplete write is discarded. This also means that ZFS does not require a man:fsck[8] after an unexpected shutdown.
|[[zfs-term-dataset]]Dataset
|_Dataset_ is the generic term for a ZFS file system, volume, snapshot or clone. Each dataset has a unique name in the format _poolname/path@snapshot_. The root of the pool is technically a dataset as well. Child datasets are named hierarchically like directories. For example, _mypool/home_, the home dataset, is a child of _mypool_ and inherits properties from it. This can be expanded further by creating _mypool/home/user_. This grandchild dataset will inherit properties from the parent and grandparent. Properties on a child can be set to override the defaults inherited from the parents and grandparents. Administration of datasets and their children can be <<zfs-zfs-allow,delegated>>.
|[[zfs-term-filesystem]]File system
|A ZFS dataset is most often used as a file system. Like most other file systems, a ZFS file system is mounted somewhere in the system's directory hierarchy and contains files and directories of its own with permissions, flags, and other metadata.
|[[zfs-term-volume]]Volume
|In addition to regular file system datasets, ZFS can also create volumes, which are block devices. Volumes have many of the same features, including copy-on-write, snapshots, clones, and checksumming. Volumes can be useful for running other file system formats on top of ZFS, such as UFS virtualization, or exporting iSCSI extents.
|[[zfs-term-snapshot]]Snapshot
|The <<zfs-term-cow,copy-on-write>> (COW) design of ZFS allows for nearly instantaneous, consistent snapshots with arbitrary names. After taking a snapshot of a dataset, or a recursive snapshot of a parent dataset that will include all child datasets, new data is written to new blocks, but the old blocks are not reclaimed as free space. The snapshot contains the original version of the file system, and the live file system contains any changes made since the snapshot was taken. No additional space is used. As new data is written to the live file system, new blocks are allocated to store this data. The apparent size of the snapshot will grow as the blocks are no longer used in the live file system, but only in the snapshot. These snapshots can be mounted read only to allow for the recovery of previous versions of files. It is also possible to <<zfs-zfs-snapshot,rollback>> a live file system to a specific snapshot, undoing any changes that took place after the snapshot was taken. Each block in the pool has a reference counter which keeps track of how many snapshots, clones, datasets, or volumes make use of that block. As files and snapshots are deleted, the reference count is decremented. When a block is no longer referenced, it is reclaimed as free space. Snapshots can also be marked with a <<zfs-zfs-snapshot,hold>>. When a snapshot is held, any attempt to destroy it will return an `EBUSY` error. Each snapshot can have multiple holds, each with a unique name. The <<zfs-zfs-snapshot,release>> command removes the hold so the snapshot can be deleted. Snapshots can be taken on volumes, but they can only be cloned or rolled back, not mounted independently.
|[[zfs-term-clone]]Clone
|Snapshots can also be cloned. A clone is a writable version of a snapshot, allowing the file system to be forked as a new dataset. As with a snapshot, a clone initially consumes no additional space. As new data is written to a clone and new blocks are allocated, the apparent size of the clone grows. When blocks are overwritten in the cloned file system or volume, the reference count on the previous block is decremented. The snapshot upon which a clone is based cannot be deleted because the clone depends on it. The snapshot is the parent, and the clone is the child. Clones can be _promoted_, reversing this dependency and making the clone the parent and the previous parent the child. This operation requires no additional space. Since the amount of space used by the parent and child is reversed, existing quotas and reservations might be affected.
|[[zfs-term-checksum]]Checksum
|Every block that is allocated is also checksummed. The checksum algorithm used is a per-dataset property; see <<zfs-zfs-set,`set`>>. The checksum of each block is transparently validated as it is read, allowing ZFS to detect silent corruption. If the data that is read does not match the expected checksum, ZFS will attempt to recover the data from any available redundancy, like mirrors or RAID-Z. Validation of all checksums can be triggered with <<zfs-term-scrub,`scrub`>>. Checksum algorithms include:
* `fletcher2`
* `fletcher4`
* `sha256`
The `fletcher` algorithms are faster, but `sha256` is a strong cryptographic hash and has a much lower chance of collisions at the cost of some performance. Checksums can be disabled, but it is not recommended.
|[[zfs-term-compression]]Compression
|Each dataset has a compression property, which defaults to off. This property can be set to one of a number of compression algorithms. This will cause all new data that is written to the dataset to be compressed. Beyond a reduction in space used, read and write throughput often increases because fewer blocks are read or written.
[[zfs-term-compression-lz4]]
* _LZ4_ - Added in ZFS pool version 5000 (feature flags), LZ4 is now the recommended compression algorithm. LZ4 compresses approximately 50% faster than LZJB when operating on compressible data, and is over three times faster when operating on uncompressible data. LZ4 also decompresses approximately 80% faster than LZJB. On modern CPUs, LZ4 can often compress at over 500 MB/s, and decompress at over 1.5 GB/s (per single CPU core).
[[zfs-term-compression-lzjb]]
* _LZJB_ - The default compression algorithm. Created by Jeff Bonwick (one of the original creators of ZFS). LZJB offers good compression with less CPU overhead compared to GZIP. In the future, the default compression algorithm will likely change to LZ4.
[[zfs-term-compression-gzip]]
* _GZIP_ - A popular stream compression algorithm available in ZFS. One of the main advantages of using GZIP is its configurable level of compression. When setting the `compression` property, the administrator can choose the level of compression, ranging from `gzip-1`, the lowest level of compression, to `gzip-9`, the highest level of compression. This gives the administrator control over how much CPU time to trade for saved disk space.
[[zfs-term-compression-zle]]
* _ZLE_ - Zero Length Encoding is a special compression algorithm that only compresses continuous runs of zeros. This compression algorithm is only useful when the dataset contains large blocks of zeros.
|[[zfs-term-copies]]Copies
|When set to a value greater than 1, the `copies` property instructs ZFS to maintain multiple copies of each block in the <<zfs-term-filesystem,File System>> or <<zfs-term-volume,Volume>>. Setting this property on important datasets provides additional redundancy from which to recover a block that does not match its checksum. In pools without redundancy, the copies feature is the only form of redundancy. The copies feature can recover from a single bad sector or other forms of minor corruption, but it does not protect the pool from the loss of an entire disk.
|[[zfs-term-deduplication]]Deduplication
|Checksums make it possible to detect duplicate blocks of data as they are written. With deduplication, the reference count of an existing, identical block is increased, saving storage space. To detect duplicate blocks, a deduplication table (DDT) is kept in memory. The table contains a list of unique checksums, the location of those blocks, and a reference count. When new data is written, the checksum is calculated and compared to the list. If a match is found, the existing block is used. The SHA256 checksum algorithm is used with deduplication to provide a secure cryptographic hash. Deduplication is tunable. If `dedup` is `on`, then a matching checksum is assumed to mean that the data is identical. If `dedup` is set to `verify`, then the data in the two blocks will be checked byte-for-byte to ensure it is actually identical. If the data is not identical, the hash collision will be noted and the two blocks will be stored separately. As the DDT must store the hash of each unique block, it consumes a very large amount of memory. A general rule of thumb is 5-6 GB of RAM per 1 TB of deduplicated data. In situations where it is not practical to have enough RAM to keep the entire DDT in memory, performance will suffer greatly as the DDT must be read from disk before each new block is written. Deduplication can use L2ARC to store the DDT, providing a middle ground between fast system memory and slower disks. Consider using compression instead, which often provides nearly as much space savings without the additional memory requirement.
|[[zfs-term-scrub]]Scrub
|Instead of a consistency check like man:fsck[8], ZFS has `scrub`. `scrub` reads all data blocks stored on the pool and verifies their checksums against the known good checksums stored in the metadata. A periodic check of all the data stored on the pool ensures the recovery of any corrupted blocks before they are needed. A scrub is not required after an unclean shutdown, but is recommended at least once every three months. The checksum of each block is verified as blocks are read during normal use, but a scrub makes certain that even infrequently used blocks are checked for silent corruption. Data security is improved, especially in archival storage situations. The relative priority of `scrub` can be adjusted with <<zfs-advanced-tuning-scrub_delay,`vfs.zfs.scrub_delay`>> to prevent the scrub from degrading the performance of other workloads on the pool.
|[[zfs-term-quota]]Dataset Quota
a|ZFS provides very fast and accurate dataset, user, and group space accounting in addition to quotas and space reservations. This gives the administrator fine grained control over how space is allocated and allows space to be reserved for critical file systems.
ZFS supports different types of quotas: the dataset quota, the <<zfs-term-refquota,reference quota (refquota)>>, the <<zfs-term-userquota,user quota>>, and the <<zfs-term-groupquota,group quota>>.
Quotas limit the amount of space that a dataset and all of its descendants, including snapshots of the dataset, child datasets, and the snapshots of those datasets, can consume.
[NOTE]
====
Quotas cannot be set on volumes, as the `volsize` property acts as an implicit quota.
====
|[[zfs-term-refquota]]Reference Quota
|A reference quota limits the amount of space a dataset can consume by enforcing a hard limit. However, this hard limit includes only space that the dataset references and does not include space used by descendants, such as file systems or snapshots.
|[[zfs-term-userquota]]User Quota
|User quotas are useful to limit the amount of space that can be used by the specified user.
|[[zfs-term-groupquota]]Group Quota
|The group quota limits the amount of space that a specified group can consume.
|[[zfs-term-reservation]]Dataset Reservation
|The `reservation` property makes it possible to guarantee a minimum amount of space for a specific dataset and its descendants. If a 10 GB reservation is set on [.filename]#storage/home/bob#, and another dataset tries to use all of the free space, at least 10 GB of space is reserved for this dataset. If a snapshot is taken of [.filename]#storage/home/bob#, the space used by that snapshot is counted against the reservation. The <<zfs-term-refreservation,`refreservation`>> property works in a similar way, but it _excludes_ descendants like snapshots.
Reservations of any sort are useful in many situations, such as planning and testing the suitability of disk space allocation in a new system, or ensuring that enough space is available on file systems for audio logs or system recovery procedures and files.
|[[zfs-term-refreservation]]Reference Reservation
|The `refreservation` property makes it possible to guarantee a minimum amount of space for the use of a specific dataset _excluding_ its descendants. This means that if a 10 GB reservation is set on [.filename]#storage/home/bob#, and another dataset tries to use all of the free space, at least 10 GB of space is reserved for this dataset. In contrast to a regular <<zfs-term-reservation,reservation>>, space used by snapshots and descendant datasets is not counted against the reservation. For example, if a snapshot is taken of [.filename]#storage/home/bob#, enough disk space must exist outside of the `refreservation` amount for the operation to succeed. Descendants of the main data set are not counted in the `refreservation` amount and so do not encroach on the space set aside.
|[[zfs-term-resilver]]Resilver
|When a disk fails and is replaced, the new disk must be filled with the data that was lost. The process of using the parity information distributed across the remaining drives to calculate and write the missing data to the new drive is called _resilvering_.
|[[zfs-term-online]]Online
|A pool or vdev in the `Online` state has all of its member devices connected and fully operational. Individual devices in the `Online` state are functioning normally.
|[[zfs-term-offline]]Offline
|Individual devices can be put in an `Offline` state by the administrator if there is sufficient redundancy to avoid putting the pool or vdev into a <<zfs-term-faulted,Faulted>> state. An administrator may choose to offline a disk in preparation for replacing it, or to make it easier to identify.
|[[zfs-term-degraded]]Degraded
|A pool or vdev in the `Degraded` state has one or more disks that have been disconnected or have failed. The pool is still usable, but if additional devices fail, the pool could become unrecoverable. Reconnecting the missing devices or replacing the failed disks will return the pool to an <<zfs-term-online,Online>> state after the reconnected or new device has completed the <<zfs-term-resilver,Resilver>> process.
|[[zfs-term-faulted]]Faulted
|A pool or vdev in the `Faulted` state is no longer operational. The data on it can no longer be accessed. A pool or vdev enters the `Faulted` state when the number of missing or failed devices exceeds the level of redundancy in the vdev. If missing devices can be reconnected, the pool will return to an <<zfs-term-online,Online>> state. If there is insufficient redundancy to compensate for the number of failed disks, then the contents of the pool are lost and must be restored from backups.
|===
diff --git a/documentation/content/en/books/porters-handbook/_index.adoc b/documentation/content/en/books/porters-handbook/_index.adoc
index 0a2f7a0c11..60a9df7f8a 100644
--- a/documentation/content/en/books/porters-handbook/_index.adoc
+++ b/documentation/content/en/books/porters-handbook/_index.adoc
@@ -1,22 +1,23 @@
---
title: FreeBSD Porter's Handbook
authors:
- author: The FreeBSD Documentation Project
-copyright: 2000-2020 The FreeBSD Documentation Project
+copyright: 2000-2021 The FreeBSD Documentation Project
+description: FreeBSD Porter's Handbook Index
trademarks: ["freebsd", "sun", "unix", "general"]
next: books/porters-handbook/porting-why
---
= FreeBSD Porter's Handbook
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
'''
include::content/en/books/porters-handbook/toc.adoc[]
diff --git a/documentation/content/en/books/porters-handbook/book.adoc b/documentation/content/en/books/porters-handbook/book.adoc
index 3c56e705f4..3b9c23bffa 100644
--- a/documentation/content/en/books/porters-handbook/book.adoc
+++ b/documentation/content/en/books/porters-handbook/book.adoc
@@ -1,81 +1,82 @@
---
title: FreeBSD Porter's Handbook
authors:
- author: The FreeBSD Documentation Project
-copyright: 2000-2020 The FreeBSD Documentation Project
+copyright: 2000-2021 The FreeBSD Documentation Project
+description: FreeBSD Porter's Handbook
trademarks: ["freebsd", "sun", "unix", "general"]
---
= FreeBSD Porter's Handbook
:doctype: book
:toc: macro
:toclevels: 2
:icons: font
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnums:
:sectnumlevels: 6
:partnums:
:chapter-signifier: Chapter
:part-signifier: Part
:source-highlighter: rouge
:experimental:
:skip-front-matter:
ifeval::["{backend}" == "html5"]
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
:chapters-path: content/en/books/porters-handbook/
endif::[]
ifeval::["{backend}" == "pdf"]
include::../../../../shared/mirrors.adoc[]
include::../../../../shared/authors.adoc[]
include::../../../../shared/releases.adoc[]
include::../../../../shared/en/mailing-lists.adoc[]
include::../../../../shared/en/teams.adoc[]
include::../../../../shared/en/urls.adoc[]
:chapters-path:
endif::[]
ifeval::["{backend}" == "epub3"]
include::../../../../shared/mirrors.adoc[]
include::../../../../shared/authors.adoc[]
include::../../../../shared/releases.adoc[]
include::../../../../shared/en/mailing-lists.adoc[]
include::../../../../shared/en/teams.adoc[]
include::../../../../shared/en/urls.adoc[]
:chapters-path:
endif::[]
'''
toc::[]
include::{chapters-path}toc-tables.adoc[]
include::{chapters-path}toc-examples.adoc[]
-include::{chapters-path}porting-why/_index.adoc[leveloffset=+1, lines=7..21;32..-1]
-include::{chapters-path}new-port/_index.adoc[leveloffset=+1, lines=7..21;32..-1]
-include::{chapters-path}quick-porting/_index.adoc[leveloffset=+1, lines=7..21;32..-1]
-include::{chapters-path}slow-porting/_index.adoc[leveloffset=+1, lines=7..21;32..-1]
-include::{chapters-path}makefiles/_index.adoc[leveloffset=+1, lines=7..22;33..-1]
-include::{chapters-path}special/_index.adoc[leveloffset=+1, lines=7..21;32..-1]
-include::{chapters-path}flavors/_index.adoc[leveloffset=+1, lines=7..21;32..-1]
-include::{chapters-path}plist/_index.adoc[leveloffset=+1, lines=7..21;32..-1]
-include::{chapters-path}pkg-files/_index.adoc[leveloffset=+1, lines=7..21;32..-1]
-include::{chapters-path}testing/_index.adoc[leveloffset=+1, lines=7..21;32..-1]
-include::{chapters-path}upgrading/_index.adoc[leveloffset=+1, lines=7..21;32..-1]
-include::{chapters-path}security/_index.adoc[leveloffset=+1, lines=7..21;32..-1]
-include::{chapters-path}porting-dads/_index.adoc[leveloffset=+1, lines=7..23;34..-1]
-include::{chapters-path}porting-samplem/_index.adoc[leveloffset=+1, lines=7..21;32..-1]
-include::{chapters-path}order/_index.adoc[leveloffset=+1, lines=7..21;32..-1]
-include::{chapters-path}keeping-up/_index.adoc[leveloffset=+1, lines=7..21;32..-1]
-include::{chapters-path}uses/_index.adoc[leveloffset=+1, lines=7..22;33..-1]
-include::{chapters-path}versions/_index.adoc[leveloffset=+1, lines=6..20;31..-1]
+include::{chapters-path}porting-why/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
+include::{chapters-path}new-port/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
+include::{chapters-path}quick-porting/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
+include::{chapters-path}slow-porting/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
+include::{chapters-path}makefiles/_index.adoc[leveloffset=+1, lines=8..23;34..-1]
+include::{chapters-path}special/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
+include::{chapters-path}flavors/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
+include::{chapters-path}plist/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
+include::{chapters-path}pkg-files/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
+include::{chapters-path}testing/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
+include::{chapters-path}upgrading/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
+include::{chapters-path}security/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
+include::{chapters-path}porting-dads/_index.adoc[leveloffset=+1, lines=8..24;35..-1]
+include::{chapters-path}porting-samplem/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
+include::{chapters-path}order/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
+include::{chapters-path}keeping-up/_index.adoc[leveloffset=+1, lines=8..22;33..-1]
+include::{chapters-path}uses/_index.adoc[leveloffset=+1, lines=8..23;34..-1]
+include::{chapters-path}versions/_index.adoc[leveloffset=+1, lines=7..21;32..-1]
diff --git a/documentation/content/en/books/porters-handbook/flavors/_index.adoc b/documentation/content/en/books/porters-handbook/flavors/_index.adoc
index 0e540c204d..660d4bc0c1 100644
--- a/documentation/content/en/books/porters-handbook/flavors/_index.adoc
+++ b/documentation/content/en/books/porters-handbook/flavors/_index.adoc
@@ -1,337 +1,338 @@
---
title: Chapter 7. Flavors
prev: books/porters-handbook/special
next: books/porters-handbook/plist
+description: Flavors are a way to have multiple variations of a port
---
[[flavors]]
= Flavors
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 7
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[flavors-intro]]
== An Introduction to Flavors
Flavors are a way to have multiple variations of a port. The port is built multiple times, with variations.
For example, a port can have a normal version with many features and quite a few dependencies, and a light "lite" version with only basic features and minimal dependencies.
Another example is a port that has a GTK flavor and a Qt flavor, depending on which toolkit it uses.
[[flavors-using]]
== Using FLAVORS
To declare a port having multiple flavors, add `FLAVORS` to its [.filename]#Makefile#. The first flavor in `FLAVORS` is the default flavor.
[TIP]
====
It can help simplify the logic of the [.filename]#Makefile# to also define `FLAVOR` as:
[.programlisting]
....
FLAVOR?= ${FLAVORS:[1]}
....
====
[IMPORTANT]
====
To distinguish flavors from options, which are always uppercase letters, flavor names can _only_ contain lowercase letters, numbers, and the underscore `_`.
====
[[flavors-using-ex1]]
.Basic Flavors Usage
[example]
====
If a port has a "lite" slave port, the slave port can be removed, and the port can be converted to flavors with:
[.programlisting]
....
FLAVORS= default lite
lite_PKGNAMESUFFIX= -lite
[...]
.if ${FLAVOR:U} != lite
[enable non lite features]
.endif
....
====
[[flavors-using-ex2]]
.Another Basic Flavors Usage
[example]
====
If a port has a `-nox11` slave port, the slave port can be removed, and the port can be converted to flavors with:
[.programlisting]
....
FLAVORS= x11 nox11
FLAVOR?= ${FLAVORS:[1]}
nox11_PKGNAMESUFFIX= -nox11
[...]
.if ${FLAVOR} == x11
[enable x11 features]
.endif
....
====
[[flavors-using-ex3]]
.More Complex Flavors Usage
[example]
====
Here is a slightly edited excerpt of what is present in package:devel/libpeas[], a port that uses the <<flavors-auto-python,Python flavors>>. With the default Python 2 and 3 versions being 2.7 and 3.6, it will automatically get `FLAVORS=py27 py36`.
[.programlisting]
....
USES= gnome python
USE_PYTHON= flavors
.if ${FLAVOR:Upy27:Mpy2*}
USE_GNOME= pygobject3
CONFIGURE_ARGS+= --enable-python2 --disable-python3
BUILD_WRKSRC= ${WRKSRC}/loaders/python
INSTALL_WRKSRC= ${WRKSRC}/loaders/python
.else # py3*
USE_GNOME+= py3gobject3
CONFIGURE_ARGS+= --disable-python2 --enable-python3 \
ac_cv_path_PYTHON3_CONFIG=${LOCALBASE}/bin/python${PYTHON_VER}-config
BUILD_WRKSRC= ${WRKSRC}/loaders/python3
INSTALL_WRKSRC= ${WRKSRC}/loaders/python3
.endif
py34_PLIST= ${.CURDIR}/pkg-plist-py3
py35_PLIST= ${.CURDIR}/pkg-plist-py3
py36_PLIST= ${.CURDIR}/pkg-plist-py3
....
A few things to note about this excerpt:
* This port does not use `USE_PYTHON=distutils`, but needs Python flavors anyway.
* To guard against `FLAVOR` being empty, which would cause a man:make[1] error, use `${FLAVOR:U}` in string comparisons instead of `${FLAVOR}`.
* The GNOME Python gobject3 bindings have two different names: pygobject3 for Python 2 and py3gobject3 for Python 3.
* The `configure` script has to run in [.filename]#${WRKSRC}#, but only the Python 2 or Python 3 parts of the software are built and installed, so the build and install base directories are set appropriately.
* The `ac_cv_path_PYTHON3_CONFIG` argument hints at the correct Python 3 config script path name.
* The packing list is different when built with Python 3. As there are three possible Python 3 versions, set `PLIST` for all three using the <<flavors-using-helpers,helper>>.
====
[[flavors-using-helpers]]
=== Flavors Helpers
To make the [.filename]#Makefile# easier to write, a few flavors helpers exist.
This list of helpers will set their variable:
* `_flavor__PKGNAMEPREFIX`
* `_flavor__PKGNAMESUFFIX`
* `_flavor__PLIST`
* `_flavor__DESCR`
This list of helpers will append to their variable:
* `_flavor__CONFLICTS`
* `_flavor__CONFLICTS_BUILD`
* `_flavor__CONFLICTS_INSTALL`
* `_flavor__PKG_DEPENDS`
* `_flavor__EXTRACT_DEPENDS`
* `_flavor__PATCH_DEPENDS`
* `_flavor__FETCH_DEPENDS`
* `_flavor__BUILD_DEPENDS`
* `_flavor__LIB_DEPENDS`
* `_flavor__RUN_DEPENDS`
* `_flavor__TEST_DEPENDS`
[[flavors-helpers-ex1]]
.Flavor Specific `PKGNAME`
[example]
====
As all packages must have a different package name, flavors must change theirs; using `_flavor__PKGNAMEPREFIX` and `_flavor__PKGNAMESUFFIX` makes this easy:
[.programlisting]
....
FLAVORS= normal lite
lite_PKGNAMESUFFIX= -lite
....
====
[[flavors-auto-php]]
== `USES=php` and Flavors
When using crossref:uses[uses-php,`php`] with one of these arguments, `phpize`, `ext`, `zend`, or `pecl`, the port will automatically have `FLAVORS` filled in with the PHP versions it supports.
[[flavors-auto-php-ex1]]
.Simple `USES=php` Extension
[example]
====
This will generate packages for all the supported versions:
[.programlisting]
....
PORTNAME= some-ext
PORTVERSION= 0.0.1
PKGNAMEPREFIX= ${PHP_PKGNAMEPREFIX}
USES= php:ext
....
This will generate packages for all the supported versions except 7.2:
[.programlisting]
....
PORTNAME= some-ext
PORTVERSION= 0.0.1
PKGNAMEPREFIX= ${PHP_PKGNAMEPREFIX}
USES= php:ext
IGNORE_WITH_PHP= 72
....
====
[[flavors-auto-php-app]]
=== PHP Flavors with PHP Applications
PHP applications can also be flavorized.
This allows generating packages for all PHP versions, so that users can use them with whatever version they need on their servers.
[IMPORTANT]
====
PHP applications that are flavorized _must_ append `PHP_PKGNAMESUFFIX` to their package names.
====
[[flavors-auto-php-app-ex1]]
.Flavorizing a PHP Application
[example]
====
Adding Flavors support to a PHP application is straightforward:
[.programlisting]
....
PKGNAMESUFFIX= ${PHP_PKGNAMESUFFIX}
USES= php:flavors
....
====
[TIP]
====
When adding a dependency on a PHP flavored port, use `@${PHP_FLAVOR}`. _Never_ use `FLAVOR` directly.
====
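As a hypothetical sketch (the port origin and version constraint shown are invented for illustration), a run-time dependency on a flavored PHP port appends `@${PHP_FLAVOR}` to the origin so that the framework picks the flavor matching the current PHP version:
[.programlisting]
....
RUN_DEPENDS= ${PHP_PKGNAMEPREFIX}example>=1.0:www/php-example@${PHP_FLAVOR}
....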
[[flavors-auto-python]]
== `USES=python` and Flavors
When using crossref:uses[uses-python,`python`] and `USE_PYTHON=distutils`, the port will automatically have `FLAVORS` filled in with the Python versions it supports.
[[flavors-auto-python-ex1]]
.Simple `USES=python`
[example]
====
Supposing the current Python supported versions are 2.7, 3.4, 3.5, and 3.6, and the default Python 2 and 3 versions are 2.7 and 3.6, a port with:
[.programlisting]
....
USES= python
USE_PYTHON= distutils
....
Will get these flavors: `py27` and `py36`.
[.programlisting]
....
USES= python
USE_PYTHON= distutils allflavors
....
Will get these flavors: `py27`, `py34`, `py35`, and `py36`.
====
[[flavors-auto-python-ex2]]
.`USES=python` with Version Requirements
[example]
====
Supposing the current Python supported versions are 2.7, 3.4, 3.5, and 3.6, and the default Python 2 and 3 versions are 2.7 and 3.6, a port with:
[.programlisting]
....
USES= python:-3.5
USE_PYTHON= distutils
....
Will get this flavor: `py27`.
[.programlisting]
....
USES= python:-3.5
USE_PYTHON= distutils allflavors
....
Will get these flavors: `py27`, `py34`, and `py35`.
[.programlisting]
....
USES= python:3.4+
USE_PYTHON= distutils
....
Will get this flavor: `py36`.
[.programlisting]
....
USES= python:3.4+
USE_PYTHON= distutils allflavors
....
Will get these flavors: `py34`, `py35`, and `py36`.
====
`PY_FLAVOR` is available to depend on the correct version of Python modules. All dependencies on flavored Python ports should use `PY_FLAVOR`, and not `FLAVOR` directly.
[[flavors-auto-python-ex3]]
.For a Port Not Using `distutils`
[example]
====
If the default Python 3 version is 3.6, the following will set `PY_FLAVOR` to `py36`:
[.programlisting]
....
RUN_DEPENDS= ${PYTHON_PKGNAMEPREFIX}mutagen>0:audio/py-mutagen@${PY_FLAVOR}
USES= python:3.5+
....
====
[[flavors-auto-lua]]
== `USES=lua` and Flavors
When using crossref:uses[uses-lua,`lua:module`] or crossref:uses[uses-lua,`lua:flavors`], the port will automatically have `FLAVORS` filled in with the Lua versions it supports. However, it is not expected that ordinary applications (rather than Lua modules) should use this feature; most applications that embed or otherwise use Lua should simply use `USES=lua`.
`LUA_FLAVOR` is available (and must be used) to depend on the correct version of dependencies regardless of whether the port used the `flavors` or `module` parameters.
See crossref:special[using-lua,Using Lua] for further information.
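As a purely hypothetical sketch (the module name and origin are made up, and the exact package name prefix variable should be checked in [.filename]#Mk/Uses/lua.mk#), a dependency on a flavored Lua module appends `@${LUA_FLAVOR}` to its origin:
[.programlisting]
....
RUN_DEPENDS= ${LUA_PKGNAMEPREFIX}example>0:devel/lua-example@${LUA_FLAVOR}
....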
diff --git a/documentation/content/en/books/porters-handbook/keeping-up/_index.adoc b/documentation/content/en/books/porters-handbook/keeping-up/_index.adoc
index 90929c9877..11396904ac 100644
--- a/documentation/content/en/books/porters-handbook/keeping-up/_index.adoc
+++ b/documentation/content/en/books/porters-handbook/keeping-up/_index.adoc
@@ -1,87 +1,88 @@
---
title: Chapter 16. Keeping Up
prev: books/porters-handbook/order
next: books/porters-handbook/uses
+description: How to keep up the FreeBSD Ports Collection
---
[[keeping-up]]
= Keeping Up
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 16
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
The FreeBSD Ports Collection is constantly changing. Here is some information on how to keep up.
[[freshports]]
== FreshPorts
One of the easiest ways to learn about updates that have already been committed is by subscribing to http://www.FreshPorts.org/[FreshPorts]. Multiple ports can be monitored. Maintainers are strongly encouraged to subscribe, because they will receive notification of not only their own changes, but also any changes that any other FreeBSD committer has made. (These are often necessary to keep up with changes in the underlying ports framework. Although it would be most polite to receive an advance heads-up from those committing such changes, sometimes this is overlooked or impractical. Also, in some cases, the changes are very minor in nature. We expect everyone to use their best judgement in these cases.)
To use FreshPorts, an account is required. Those with registered email addresses at `@FreeBSD.org` will see the opt-in link on the right-hand side of the web pages. Those who already have a FreshPorts account but are not using a `@FreeBSD.org` email address can change the email to `@FreeBSD.org`, subscribe, then change it back again.
FreshPorts also has a sanity test feature which automatically tests each commit to the FreeBSD ports tree. If subscribed to this service, a committer will receive notifications of any errors which FreshPorts detects during sanity testing of their commits.
[[cgit]]
== The Web Interface to the Source Repository
It is possible to browse the files in the source repository by using a web interface. Changes that affect the entire port system are now documented in the https://cgit.freebsd.org/ports/tree/CHANGES[CHANGES] file. Changes that affect individual ports are now documented in the https://cgit.FreeBSD.org/ports/tree/UPDATING[UPDATING] file. However, the definitive answer to any question is undoubtedly to read the source code of https://cgit.FreeBSD.org/ports/tree/Mk/bsd.port.mk[bsd.port.mk], and associated files.
[[ports-mailing-list]]
== The FreeBSD Ports Mailing List
As a ports maintainer, consider subscribing to {freebsd-ports}. Important changes to the way ports work will be announced there, and then committed to [.filename]#CHANGES#.
If the volume of messages on this mailing list is too high, consider following {freebsd-ports-announce} which contains only announcements.
[[build-cluster]]
== The FreeBSD Port Building Cluster
One of the least-publicized strengths of FreeBSD is that an entire cluster of machines is dedicated to continually building the Ports Collection, for each of the major OS releases and for each Tier-1 architecture.
Individual ports are built unless they are specifically marked with `IGNORE`. Ports that are marked with `BROKEN` will still be attempted, to see if the underlying problem has been resolved. (This is done by passing `TRYBROKEN` to the port's [.filename]#Makefile#.)
[[distfile-survey]]
== Portscout: the FreeBSD Ports Distfile Scanner
The build cluster is dedicated to building the latest release of each port with distfiles that have already been fetched. However, as the Internet continually changes, distfiles can quickly go missing. http://portscout.FreeBSD.org[Portscout], the FreeBSD Ports distfile scanner, attempts to query every download site for every port to find out if each distfile is still available. Portscout can generate HTML reports and send emails about newly available ports to those who request them. Even if not subscribed, maintainers are asked to check periodically for changes, either by hand or using the RSS feed.
Portscout's first page gives the email address of the port maintainer, the number of ports the maintainer is responsible for, the number of those ports with new distfiles, and the percentage of those ports that are out-of-date. The search function allows for searching by email address for a specific maintainer, and for selecting whether only out-of-date ports are shown.
Upon clicking on a maintainer's email address, a list of all of their ports is displayed, along with port category, current version number, whether or not there is a new version, when the port was last updated, and finally when it was last checked. A search function on this page allows the user to search for a specific port.
Clicking on a port name in the list displays the http://freshports.org[FreshPorts] port information.
Additional documentation is available in the https://github.com/freebsd/portscout[Portscout repository].
[[portsmon]]
== The FreeBSD Ports Monitoring System
Another handy resource is the http://portsmon.FreeBSD.org[FreeBSD Ports Monitoring System] (also known as `portsmon`). This system comprises a database that processes information from several sources and allows it to be browsed via a web interface. Currently, the ports Problem Reports (PRs), the error logs from the build cluster, and individual files from the ports collection are used. In the future, this will be expanded to include the distfile survey, as well as other sources.
To get started, use the http://portsmon.FreeBSD.org/portoverview.py[Overview of One Port] search page to find all the information about a port.
This is the only resource available that maps PR entries to portnames. PR submitters do not always include the portname in their Synopsis, although we would prefer that they did. So, `portsmon` is a good place to find out whether an existing port has any PRs filed against it or any build errors, or whether a new port that a porter is considering creating has already been submitted.
[NOTE]
======
The FreeBSD Ports Monitoring System (portsmon) is currently not working due to recent Python updates.
======
diff --git a/documentation/content/en/books/porters-handbook/makefiles/_index.adoc b/documentation/content/en/books/porters-handbook/makefiles/_index.adoc
index 99a66179dd..7cf6fc4f9a 100644
--- a/documentation/content/en/books/porters-handbook/makefiles/_index.adoc
+++ b/documentation/content/en/books/porters-handbook/makefiles/_index.adoc
@@ -1,4852 +1,4853 @@
---
title: Chapter 5. Configuring the Makefile
prev: books/porters-handbook/slow-porting
next: books/porters-handbook/special
+description: Configuring the Makefile for FreeBSD Ports
---
[[makefiles]]
= Configuring the Makefile
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 5
:g-plus-plus: g++
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
Configuring the [.filename]#Makefile# is pretty simple, and again we suggest looking at existing examples before starting. Also, there is a crossref:porting-samplem[porting-samplem,sample Makefile] in this handbook, so take a look and please follow the ordering of variables and sections in that template to make the port easier for others to read.
Consider these problems in sequence during the design of the new [.filename]#Makefile#:
[[makefile-source]]
== The Original Source
Does it live in `DISTDIR` as a standard ``gzip``ped tarball named something like [.filename]#foozolix-1.2.tar.gz#? If so, go on to the next step. If not, the distribution file format might require overriding one or more of `DISTVERSION`, `DISTNAME`, `EXTRACT_CMD`, `EXTRACT_BEFORE_ARGS`, `EXTRACT_AFTER_ARGS`, `EXTRACT_SUFX`, or `DISTFILES`.
In the worst case, create a custom `do-extract` target to override the default. This is rarely, if ever, necessary.
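For instance, a minimal sketch (reusing the hypothetical [.filename]#foozolix# name from above): if upstream shipped [.filename]#foozolix-1.2.tgz# rather than a [.filename]#.tar.gz#, overriding only `EXTRACT_SUFX` would usually be enough:
[.programlisting]
....
PORTNAME= foozolix
DISTVERSION= 1.2
# The upstream archive is foozolix-1.2.tgz, not the default .tar.gz
EXTRACT_SUFX= .tgz
....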
[[makefile-naming]]
== Naming
The first part of the port's [.filename]#Makefile# names the port, describes its version number, and lists it in the correct category.
[[makefile-portname]]
=== `PORTNAME`
Set `PORTNAME` to the base name of the software. It is used as the base for the FreeBSD package, and for <<makefile-distname,`DISTNAME`>>.
[IMPORTANT]
====
The package name must be unique across the entire ports tree. Make sure that the `PORTNAME` is not already in use by an existing port, and that no other port already has the same `PKGBASE`. If the name has already been used, add either <<porting-pkgnameprefix-suffix,`PKGNAMEPREFIX` or `PKGNAMESUFFIX`>>.
====
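For illustration only (hypothetical names): if plain `foozolix` were already taken by another port, a name collision can be avoided without touching `PORTNAME`:
[.programlisting]
....
PORTNAME= foozolix
DISTVERSION= 1.2
# Another port already uses the foozolix package name, so distinguish this one
PKGNAMESUFFIX= -legacy
....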
[[makefile-versions]]
=== Versions, `DISTVERSION` _or_ `PORTVERSION`
Set `DISTVERSION` to the version number of the software.
`PORTVERSION` is the version used for the FreeBSD package. It will be automatically derived from `DISTVERSION` to be compatible with FreeBSD's package versioning scheme. If the version contains _letters_, it might be necessary to set `PORTVERSION` instead of `DISTVERSION`.
[IMPORTANT]
====
Only one of `PORTVERSION` and `DISTVERSION` can be set at a time.
====
From time to time, some software will use a version scheme that is not compatible with how `DISTVERSION` translates into `PORTVERSION`.
[TIP]
====
When updating a port, it is possible to use man:pkg-version[8]'s `-t` argument to check if the new version is greater or lesser than before. See <<makefile-versions-ex-pkg-version>>.
====
[[makefile-versions-ex-pkg-version]]
.Using man:pkg-version[8] to Compare Versions
[example]
====
`pkg version -t` takes two versions as arguments and responds with `<`, `=`, or `>` if the first version is less than, equal to, or greater than the second version, respectively.
[source,shell]
....
% pkg version -t 1.2 1.3
< <.>
% pkg version -t 1.2 1.2
= <.>
% pkg version -t 1.2 1.2.0
= <.>
% pkg version -t 1.2 1.2.p1
> <.>
% pkg version -t 1.2.a1 1.2.b1
< <.>
% pkg version -t 1.2 1.2p1
< <.>
....
<.> `1.2` is before `1.3`.
<.> `1.2` and `1.2` are equal as they have the same version.
<.> `1.2` and `1.2.0` are equal, as a missing component counts as zero.
<.> `1.2` is after `1.2.p1`, as `.p1` means "pre-release 1".
<.> `1.2.a1` is before `1.2.b1`, think "alpha" and "beta", and `a` is before `b`.
<.> `1.2` is before `1.2p1`, as `2p1` means "2, patch level 1", a version after any `2.X` but before `3`.
[NOTE]
****
Here, the `a`, `b`, and `p` are used as if they meant "alpha", "beta" or "pre-release", and "patch level", but they are only letters and are sorted alphabetically, so any letter can be used, and it will be sorted appropriately.
****
====
.Examples of `DISTVERSION` and the Derived `PORTVERSION`
[cols="10%,90%", frame="none", options="header"]
|===
| DISTVERSION
| PORTVERSION
|0.7.1d
|0.7.1.d
|10Alpha3
|10.a3
|3Beta7-pre2
|3.b7.p2
|8:f_17
|8f.17
|===
[[makefile-versions-ex1]]
.Using `DISTVERSION`
[example]
====
When the version only contains numbers separated by dots, dashes or underscores, use `DISTVERSION`.
[.programlisting]
....
PORTNAME= nekoto
DISTVERSION= 1.2-4
....
It will generate a `PORTVERSION` of `1.2.4`.
====
[[makefile-versions-ex2]]
.Using `DISTVERSION` When the Version Starts with a Letter or a Prefix
[example]
====
When the version starts or ends with a letter, or a prefix or a suffix that is not part of the version, use `DISTVERSIONPREFIX`, `DISTVERSION`, and `DISTVERSIONSUFFIX`.
If the version is `v1.2-4`:
[.programlisting]
....
PORTNAME= nekoto
DISTVERSIONPREFIX= v
DISTVERSION= 1.2-4
....
Some of the time, projects using GitHub will use their name in their versions. For example, the version could be `nekoto-1.2-4`:
[.programlisting]
....
PORTNAME= nekoto
DISTVERSIONPREFIX= nekoto-
DISTVERSION= 1.2-4
....
Those projects also sometimes use some string at the end of the version, for example, `1.2-4_RELEASE`:
[.programlisting]
....
PORTNAME= nekoto
DISTVERSION= 1.2-4
DISTVERSIONSUFFIX= _RELEASE
....
Or they do both, for example, `nekoto-1.2-4_RELEASE`:
[.programlisting]
....
PORTNAME= nekoto
DISTVERSIONPREFIX= nekoto-
DISTVERSION= 1.2-4
DISTVERSIONSUFFIX= _RELEASE
....
`DISTVERSIONPREFIX` and `DISTVERSIONSUFFIX` are not used when constructing `PORTVERSION`; they are only used in `DISTNAME`.
All will generate a `PORTVERSION` of `1.2.4`.
====
[[makefile-versions-ex3]]
.Using `DISTVERSION` When the Version Contains Letters Meaning "alpha", "beta", or "pre-release"
[example]
====
When the version contains numbers separated by dots, dashes or underscores, and letters are used to mean "alpha", "beta" or "pre-release", that is, a version that sorts before the same version without the letters, use `DISTVERSION`.
[.programlisting]
....
PORTNAME= nekoto
DISTVERSION= 1.2-pre4
....
[.programlisting]
....
PORTNAME= nekoto
DISTVERSION= 1.2p4
....
Both will generate a `PORTVERSION` of `1.2.p4`, which is before `1.2`. man:pkg-version[8] can be used to check that fact:
[source,shell]
....
% pkg version -t 1.2.p4 1.2
<
....
====
[[makefile-versions-ex4]]
.Not Using `DISTVERSION` When the Version Contains Letters Meaning "Patch Level"
[example]
====
When the version contains letters that do not mean "alpha", "beta", or "pre", but rather a "patch level", meaning a version that sorts after the same version without the letters, use `PORTVERSION`.
[.programlisting]
....
PORTNAME= nekoto
PORTVERSION= 1.2p4
....
In this case, using `DISTVERSION` is not possible because it would generate a version of `1.2.p4` which would be before `1.2` and not after. man:pkg-version[8] will verify this:
[source,shell]
....
% pkg version -t 1.2 1.2.p4
> <.>
% pkg version -t 1.2 1.2p4
< <.>
....
<.> `1.2` is after `1.2.p4`, which is _wrong_ in this case.
<.> `1.2` is before `1.2p4`, which is what was needed.
====
For some more advanced examples of setting `PORTVERSION`, when the software's versioning is really not compatible with FreeBSD's, or `DISTNAME` when the distribution file does not contain the version itself, see <<makefile-distname>>.
[[makefile-naming-revepoch]]
=== `PORTREVISION` and `PORTEPOCH`
[[makefile-portrevision]]
==== `PORTREVISION`
`PORTREVISION` is a monotonically increasing value which is reset to 0 with every increase of `DISTVERSION`, typically every time there is a new official vendor release. If `PORTREVISION` is non-zero, the value is appended to the package name. Changes to `PORTREVISION` are used by automated tools like man:pkg-version[8] to determine that a new package is available.
`PORTREVISION` must be increased each time a change is made to the port that changes the generated package in any way. That includes changes that only affect a package built with non-default <<makefile-options,options>>.
Examples of when `PORTREVISION` must be bumped:
* Addition of patches to correct security vulnerabilities, bugs, or to add new functionality to the port.
* Changes to the port [.filename]#Makefile# to enable or disable compile-time options in the package.
* Changes in the packing list or the install-time behavior of the package. For example, a change to a script which generates initial data for the package, like man:ssh[1] host keys.
* Version bump of a port's shared library dependency (in this case, someone trying to install the old package after installing a newer version of the dependency will fail since it will look for the old libfoo.x instead of libfoo.(x+1)).
* Silent changes to the port distfile which have significant functional differences. For example, changes to the distfile requiring a correction to [.filename]#distinfo# with no corresponding change to `DISTVERSION`, where a `diff -ru` of the old and new versions shows non-trivial changes to the code.
Examples of changes which do not require a `PORTREVISION` bump:
* Style changes to the port skeleton with no functional change to what appears in the resulting package.
* Changes to `MASTER_SITES` or other functional changes to the port which do not affect the resulting package.
* Trivial patches to the distfile such as correction of typos, which are not important enough that users of the package have to go to the trouble of upgrading.
* Build fixes which cause a package to become compilable where it was previously failing, as long as the changes do not introduce any functional change on any other platforms on which the port previously built. Since `PORTREVISION` reflects the content of the package, if the package was not previously buildable then there is no need to increase `PORTREVISION` to mark a change.
A rule of thumb is to decide whether a change committed to a port is something which _some_ people would benefit from having, either because of an enhancement or a fix, or because the new package will actually work at all. Then weigh that against the fact that it will cause everyone who regularly updates their ports tree to be compelled to update. If yes, `PORTREVISION` must be bumped.
[NOTE]
====
People using binary packages will _never_ see the update if `PORTREVISION` is not bumped. Without increasing `PORTREVISION`, the package builders have no way to detect the change and thus, will not rebuild the package.
====
[[makefile-portepoch]]
==== `PORTEPOCH`
From time to time a software vendor or FreeBSD porter will do something silly and release a version of their software which is actually numerically less than the previous version. An example of this is a port which goes from foo-20000801 to foo-1.0 (the former will be incorrectly treated as a newer version since 20000801 is a numerically greater value than 1).
[TIP]
====
The results of version number comparisons are not always obvious. `pkg version` (see man:pkg-version[8]) can be used to test the comparison of two version number strings. For example:
[source,shell]
....
% pkg version -t 0.031 0.29
>
....
The `>` output indicates that version 0.031 is considered greater than version 0.29, which may not have been obvious to the porter.
====
In situations such as this, `PORTEPOCH` must be increased. If `PORTEPOCH` is nonzero, it is appended to the package name as described above. `PORTEPOCH` must never be decreased or reset to zero, because that would cause comparison to a package from an earlier epoch to fail: for example, the package would not be detected as out of date. The new version number, `1.0,1` in the above example, is still numerically less than the previous version, 20000801, but the `,1` suffix is treated specially by automated tools and found to be greater than the implied suffix `,0` on the earlier package.
Dropping or resetting `PORTEPOCH` incorrectly leads to no end of grief. If the discussion above was not clear enough, please consult the {freebsd-ports}.
It is expected that `PORTEPOCH` will not be used for the majority of ports, and that sensible use of `DISTVERSION`, or careful use of `PORTVERSION`, can often preempt it becoming necessary if a future release of the software changes the version structure. However, care is needed by FreeBSD porters when a vendor release is made without an official version number - such as a code "snapshot" release. The temptation is to label the release with the release date, which will cause problems as in the example above when a new "official" release is made.
For example, if a snapshot release is made on the date `20000917`, and the previous version of the software was version `1.2`, do not use `20000917` for `DISTVERSION`. The correct way is a `DISTVERSION` of `1.2.20000917`, or similar, so that the succeeding release, say `1.3`, is still a numerically greater value.
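A sketch of that recommendation, with the hypothetical values from the paragraph above:
[.programlisting]
....
PORTNAME= foozolix
# Snapshot made on 2000-09-17, after the 1.2 release; still sorts before a future 1.3
DISTVERSION= 1.2.20000917
....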
[[makefile-portrevision-example]]
==== Example of `PORTREVISION` and `PORTEPOCH` Usage
The `gtkmumble` port, version `0.10`, is committed to the ports collection:
[.programlisting]
....
PORTNAME= gtkmumble
DISTVERSION= 0.10
....
`PKGNAME` becomes `gtkmumble-0.10`.
A security hole is discovered which requires a local FreeBSD patch. `PORTREVISION` is bumped accordingly.
[.programlisting]
....
PORTNAME= gtkmumble
DISTVERSION= 0.10
PORTREVISION= 1
....
`PKGNAME` becomes `gtkmumble-0.10_1`
A new version is released by the vendor, numbered `0.2` (it turns out the author actually intended `0.10` to mean `0.1.0`, not "what comes after 0.9" - oops, too late now). Since the new minor version `2` is numerically less than the previous version `10`, `PORTEPOCH` must be bumped to manually force the new package to be detected as "newer". Since it is a new vendor release of the code, `PORTREVISION` is reset to 0 (or removed from the [.filename]#Makefile#).
[.programlisting]
....
PORTNAME= gtkmumble
DISTVERSION= 0.2
PORTEPOCH= 1
....
`PKGNAME` becomes `gtkmumble-0.2,1`
The next release is 0.3. Since `PORTEPOCH` never decreases, the version variables are now:
[.programlisting]
....
PORTNAME= gtkmumble
DISTVERSION= 0.3
PORTEPOCH= 1
....
`PKGNAME` becomes `gtkmumble-0.3,1`
[NOTE]
====
If `PORTEPOCH` were reset to `0` with this upgrade, someone who had installed the `gtkmumble-0.10_1` package would not detect the `gtkmumble-0.3` package as newer, since `3` is still numerically less than `10`. Remember, this is the whole point of `PORTEPOCH` in the first place.
====
[[porting-pkgnameprefix-suffix]]
=== `PKGNAMEPREFIX` and `PKGNAMESUFFIX`
Two optional variables, `PKGNAMEPREFIX` and `PKGNAMESUFFIX`, are combined with `PORTNAME` and `PORTVERSION` to form `PKGNAME` as `${PKGNAMEPREFIX}${PORTNAME}${PKGNAMESUFFIX}-${PORTVERSION}`. Make sure this conforms to our <<porting-pkgname,guidelines for a good package name>>. In particular, the use of a hyphen (`-`) in `PORTVERSION` is _not_ allowed. Also, if the package name has the _language-_ or the _-compiled.specifics_ part (see below), use `PKGNAMEPREFIX` and `PKGNAMESUFFIX`, respectively. Do not make them part of `PORTNAME`.
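As a hedged sketch (hypothetical software and suffix), a Japanese-specific build of `foozolix` with a compiled-in default might combine both, yielding a `PKGNAME` of `ja-foozolix-a4-1.2`:
[.programlisting]
....
PORTNAME= foozolix
DISTVERSION= 1.2
# language- prefix and -compiled.specifics suffix; neither goes into PORTNAME
PKGNAMEPREFIX= ja-
PKGNAMESUFFIX= -a4
....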
[[porting-pkgname]]
=== Package Naming Conventions
These are the conventions to follow when naming packages. This is to make the package directory easy to scan, as there are already thousands of packages and users are going to turn away if they hurt their eyes!
Package names take the form of [.filename]#language_region-name-compiled.specifics-version.numbers#.
The package name is defined as `${PKGNAMEPREFIX}${PORTNAME}${PKGNAMESUFFIX}-${PORTVERSION}`. Make sure to set the variables to conform to that format.
[[porting-pkgname-language]]
[.filename]#language_region-#::
FreeBSD strives to support the native language of its users. The _language-_ part is a two letter abbreviation of the natural language defined by ISO-639 when the port is specific to a certain language. Examples are `ja` for Japanese, `ru` for Russian, `vi` for Vietnamese, `zh` for Chinese, `ko` for Korean and `de` for German.
+
If the port is specific to a certain region within the language area, add the two letter country code as well. Examples are `en_US` for US English and `fr_CH` for Swiss French.
+
The _language-_ part is set in `PKGNAMEPREFIX`.
[[porting-pkgname-name]]
[.filename]#name#::
Make sure that the port's name and version are clearly separated and placed into `PORTNAME` and `DISTVERSION`. The only reason for `PORTNAME` to contain a version part is if the upstream distribution is really named that way, as in the package:textproc/libxml2[] or package:japanese/kinput2-freewnn[] ports. Otherwise, `PORTNAME` cannot contain any version-specific information. It is quite normal for several ports to have the same `PORTNAME`, as the package:www/apache*[] ports do; in that case, different versions (and different index entries) are distinguished by `PKGNAMEPREFIX` and `PKGNAMESUFFIX` values.
+
There is a tradition of naming `Perl 5` modules by prepending `p5-` and converting the double-colon separator to a hyphen. For example, the `Data::Dumper` module becomes `p5-Data-Dumper` (a short sketch follows this list).
[[porting-pkgname-compiled-specifics]]
[.filename]#-compiled.specifics#::
If the port can be built with different <<makefile-masterdir,hardcoded defaults>> (usually part of the directory name in a family of ports), the _-compiled.specifics_ part states the compiled-in defaults. The hyphen is optional. Examples are paper size and font units.
+
The _-compiled.specifics_ part is set in `PKGNAMESUFFIX`.
[[porting-pkgname-version-numbers]]
[.filename]#-version.numbers#::
The version string follows a dash (`-`) and is a period-separated list of integers and single lowercase letters. In particular, it is not permissible to have another dash inside the version string. The only exception is the string `pl` (meaning "patchlevel"), which can be used _only_ when there are no major and minor version numbers in the software. If the software version has strings like "alpha", "beta", "rc", or "pre", take the first letter and put it immediately after a period. If the version string continues after those names, the numbers follow the single letter without an extra period between them (for example, `1.0b2`).
+
The idea is to make it easier to sort ports by looking at the version string. In particular, make sure version number components are always delimited by a period, and if the date is part of the string, use the `d__yyyy.mm.dd__` format, not `_dd.mm.yyyy_` or the non-Y2K compliant `_yy.mm.dd_` format. It is important to prefix the version with a letter, here `d` (for date), in case a release with an actual version number is made, which would be numerically less than `_yyyy_`.
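A short sketch of the `p5-` naming convention for Perl 5 modules mentioned above (the version number is illustrative):
[.programlisting]
....
PORTNAME= Data-Dumper
DISTVERSION= 2.183
CATEGORIES= devel perl5
# The double-colon becomes a hyphen in PORTNAME; the p5- prefix goes in PKGNAMEPREFIX
PKGNAMEPREFIX= p5-
....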
[IMPORTANT]
====
The package name must be unique across the entire ports tree. Check that there is not already a port with the same `PORTNAME`; if there is, add one of <<porting-pkgnameprefix-suffix,`PKGNAMEPREFIX` or `PKGNAMESUFFIX`>>.
====
Here are some (real) examples of how to convert the name used by the software authors into a suitable package name. For each line, only one of `DISTVERSION` or `PORTVERSION` is set, depending on which would be used in the port's [.filename]#Makefile#:
.Package Naming Examples
[cols="1,1,1,1,1,1,1", frame="none", options="header"]
|===
| Distribution Name
| PKGNAMEPREFIX
| PORTNAME
| PKGNAMESUFFIX
| DISTVERSION
| PORTVERSION
| Reason or comment
|mule-2.2.2
|(empty)
|mule
|(empty)
|2.2.2
|
|No changes required
|mule-1.0.1
|(empty)
|mule
|1
|1.0.1
|
|This is version 1 of mule, and version 2 already exists
|EmiClock-1.0.2
|(empty)
|emiclock
|(empty)
|1.0.2
|
|No uppercase names for single programs
|rdist-1.3alpha
|(empty)
|rdist
|(empty)
|1.3alpha
|
|Version will be `1.3.a`
|es-0.9-beta1
|(empty)
|es
|(empty)
|0.9-beta1
|
|Version will be `0.9.b1`
|mailman-2.0rc3
|(empty)
|mailman
|(empty)
|2.0rc3
|
|Version will be `2.0.r3`
|v3.3beta021.src
|(empty)
|tiff
|(empty)
|
|3.3
|What the heck was that anyway?
|tvtwm
|(empty)
|tvtwm
|(empty)
|
|p11
|No version in the filename, use what upstream says it is
|piewm
|(empty)
|piewm
|(empty)
|1.0
|
|No version in the filename, use what upstream says it is
|xvgr-2.10pl1
|(empty)
|xvgr
|(empty)
|
|2.10.pl1
|In that case, `pl1` means patch level, so using `DISTVERSION` is not possible.
|gawk-2.15.6
|ja-
|gawk
|(empty)
|2.15.6
|
|Japanese language version
|psutils-1.13
|(empty)
|psutils
|-letter
|1.13
|
|Paper size hardcoded at package build time
|pkfonts
|(empty)
|pkfonts
|300
|1.0
|
|Package for 300dpi fonts
|===
If there is absolutely no trace of version information in the original source and it is unlikely that the original author will ever release another version, just set the version string to `1.0` (like the `piewm` example above). Otherwise, ask the original author or use the date string the source file was released on (`d__yyyy.mm.dd__`, or `d__yyyymmdd__`) as the version.
[TIP]
====
Any letter can be used. Here, `d` stands for date; if the source is a Git repository, `g` followed by the commit date is commonly used, and `s` for snapshot is also common.
====
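A minimal sketch of the date-based fallback (hypothetical port and date):
[.programlisting]
....
PORTNAME= foozolix
# No usable upstream version; use the release date of the source file, prefixed with "d"
PORTVERSION= d2017.03.14
....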
[[makefile-categories]]
== Categorization
[[makefile-categories-definition]]
=== `CATEGORIES`
When a package is created, it is put under [.filename]#/usr/ports/packages/All# and links are made from one or more subdirectories of [.filename]#/usr/ports/packages#. The names of these subdirectories are specified by the variable `CATEGORIES`. It is intended to make life easier for the user when he is wading through the pile of packages on the FTP site or the CDROM. Please take a look at the <<porting-categories,current list of categories>> and pick the ones that are suitable for the port.
This list also determines where in the ports tree the port is imported. If there is more than one category here, the port files must be put in the subdirectory with the name of the first category. See <<choosing-categories,below>> for more discussion about how to pick the right categories.
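For example (hypothetical port), a Japanese X11 font port would be listed as below and would therefore live under [.filename]#japanese/#:
[.programlisting]
....
# The first category is the physical location in the ports tree
CATEGORIES= japanese x11-fonts
....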
[[porting-categories]]
=== Current List of Categories
Here is the current list of port categories. Those marked with an asterisk (`*`) are _virtual_ categories-those that do not have a corresponding subdirectory in the ports tree. They are only used as secondary categories, and only for search purposes.
[NOTE]
====
For non-virtual categories, there is a one-line description in `COMMENT` in that subdirectory's [.filename]#Makefile#.
====
[.informaltable]
[cols="1,1,1", frame="none", options="header"]
|===
| Category
| Description
| Notes
|[.filename]#accessibility#
|Ports to help disabled users.
|
|[.filename]#afterstep#`*`
|Ports to support the http://www.afterstep.org[AfterStep] window manager.
|
|[.filename]#arabic#
|Arabic language support.
|
|[.filename]#archivers#
|Archiving tools.
|
|[.filename]#astro#
|Astronomical ports.
|
|[.filename]#audio#
|Sound support.
|
|[.filename]#benchmarks#
|Benchmarking utilities.
|
|[.filename]#biology#
|Biology-related software.
|
|[.filename]#cad#
|Computer aided design tools.
|
|[.filename]#chinese#
|Chinese language support.
|
|[.filename]#comms#
|Communication software.
|Mostly software to talk to the serial port.
|[.filename]#converters#
|Character code converters.
|
|[.filename]#databases#
|Databases.
|
|[.filename]#deskutils#
|Things that used to be on the desktop before computers were invented.
|
|[.filename]#devel#
|Development utilities.
|Do not put libraries here just because they are libraries. They should _not_ be in this category unless they truly do not belong anywhere else.
|[.filename]#dns#
|DNS-related software.
|
|[.filename]#docs#`*`
|Meta-ports for FreeBSD documentation.
|
|[.filename]#editors#
|General editors.
|Specialized editors go in the section for those tools. For example, a mathematical-formula editor will go in [.filename]#math#, and have [.filename]#editors# as a second category.
|[.filename]#education#`*`
|Education-related software.
|This includes applications, utilities, or games primarily or substantially designed to help the user learn a specific topic or study in general. It also includes course-writing applications, course-delivery applications, and classroom or school management applications.
|[.filename]#elisp#`*`
|Emacs-lisp ports.
|
|[.filename]#emulators#
|Emulators for other operating systems.
|Terminal emulators do _not_ belong here. X-based ones go to [.filename]#x11# and text-based ones to either [.filename]#comms# or [.filename]#misc#, depending on the exact functionality.
|[.filename]#enlightenment#`*`
|Ports related to the Enlightenment window manager.
|
|[.filename]#finance#
|Monetary, financial and related applications.
|
|[.filename]#french#
|French language support.
|
|[.filename]#ftp#
|FTP client and server utilities.
|If the port speaks both FTP and HTTP, put it in [.filename]#ftp# with a secondary category of [.filename]#www#.
|[.filename]#games#
|Games.
|
|[.filename]#geography#`*`
|Geography-related software.
|
|[.filename]#german#
|German language support.
|
|[.filename]#gnome#`*`
|Ports from the http://www.gnome.org[GNOME] Project.
|
|[.filename]#gnustep#`*`
|Software related to the GNUstep desktop environment.
|
|[.filename]#graphics#
|Graphics utilities.
|
|[.filename]#hamradio#`*`
|Software for amateur radio.
|
|[.filename]#haskell#`*`
|Software related to the Haskell language.
|
|[.filename]#hebrew#
|Hebrew language support.
|
|[.filename]#hungarian#
|Hungarian language support.
|
|[.filename]#irc#
|Internet Relay Chat utilities.
|
|[.filename]#japanese#
|Japanese language support.
|
|[.filename]#java#
|Software related to the Java(TM) language.
|The [.filename]#java# category must not be the only one for a port. Save for ports directly related to the Java language, porters are also encouraged not to use [.filename]#java# as the main category of a port.
|[.filename]#kde#`*`
|Ports from the http://www.kde.org[KDE] Project (generic).
|
|[.filename]#kde-applications#`*`
|Applications from the http://www.kde.org[KDE] Project.
|
|[.filename]#kde-frameworks#`*`
|Add-on libraries from the http://www.kde.org[KDE] Project for programming with Qt.
|
|[.filename]#kde-plasma#`*`
|Desktop from the http://www.kde.org[KDE] Project.
|
|[.filename]#kld#`*`
|Kernel loadable modules.
|
|[.filename]#korean#
|Korean language support.
|
|[.filename]#lang#
|Programming languages.
|
|[.filename]#linux#`*`
|Linux applications and support utilities.
|
|[.filename]#lisp#`*`
|Software related to the Lisp language.
|
|[.filename]#mail#
|Mail software.
|
|[.filename]#mate#`*`
|Ports related to the MATE desktop environment, a fork of GNOME 2.
|
|[.filename]#math#
|Numerical computation software and other utilities for mathematics.
|
|[.filename]#mbone#`*`
|MBone applications.
|
|[.filename]#misc#
|Miscellaneous utilities.
|Things that do not belong anywhere else. If at all possible, try to find a better category for the port than `misc`, as ports tend to be overlooked in here.
|[.filename]#multimedia#
|Multimedia software.
|
|[.filename]#net#
|Miscellaneous networking software.
|
|[.filename]#net-im#
|Instant messaging software.
|
|[.filename]#net-mgmt#
|Networking management software.
|
|[.filename]#net-p2p#
|Peer to peer network applications.
|
|[.filename]#net-vpn#`*`
|Virtual Private Network applications.
|
|[.filename]#news#
|USENET news software.
|
|[.filename]#parallel#`*`
|Applications dealing with parallelism in computing.
|
|[.filename]#pear#`*`
|Ports related to the Pear PHP framework.
|
|[.filename]#perl5#`*`
|Ports that require Perl version 5 to run.
|
|[.filename]#plan9#`*`
|Various programs from http://www.cs.bell-labs.com/plan9dist/[Plan9].
|
|[.filename]#polish#
|Polish language support.
|
|[.filename]#ports-mgmt#
|Ports for managing, installing and developing FreeBSD ports and packages.
|
|[.filename]#portuguese#
|Portuguese language support.
|
|[.filename]#print#
|Printing software.
|Desktop publishing tools (previewers, etc.) belong here too.
|[.filename]#python#`*`
|Software related to the http://www.python.org/[Python] language.
|
|[.filename]#ruby#`*`
|Software related to the http://www.ruby-lang.org/[Ruby] language.
|
|[.filename]#rubygems#`*`
|Ports of http://www.rubygems.org/[RubyGems] packages.
|
|[.filename]#russian#
|Russian language support.
|
|[.filename]#scheme#`*`
|Software related to the Scheme language.
|
|[.filename]#science#
|Scientific ports that do not fit into other categories such as [.filename]#astro#, [.filename]#biology# and [.filename]#math#.
|
|[.filename]#security#
|Security utilities.
|
|[.filename]#shells#
|Command line shells.
|
|[.filename]#spanish#`*`
|Spanish language support.
|
|[.filename]#sysutils#
|System utilities.
|
|[.filename]#tcl#`*`
|Ports that use Tcl to run.
|
|[.filename]#textproc#
|Text processing utilities.
|It does not include desktop publishing tools, which go to [.filename]#print#.
|[.filename]#tk#`*`
|Ports that use Tk to run.
|
|[.filename]#ukrainian#
|Ukrainian language support.
|
|[.filename]#vietnamese#
|Vietnamese language support.
|
|[.filename]#wayland#`*`
|Ports to support the Wayland display server.
|
|[.filename]#windowmaker#`*`
|Ports to support the Window Maker window manager.
|
|[.filename]#www#
|Software related to the World Wide Web.
|HTML language support belongs here too.
|[.filename]#x11#
|The X Window System and friends.
|This category is only for software that directly supports the window system. Do not put regular X applications here. Most of them go into other [.filename]#x11-*# categories (see below).
|[.filename]#x11-clocks#
|X11 clocks.
|
|[.filename]#x11-drivers#
|X11 drivers.
|
|[.filename]#x11-fm#
|X11 file managers.
|
|[.filename]#x11-fonts#
|X11 fonts and font utilities.
|
|[.filename]#x11-servers#
|X11 servers.
|
|[.filename]#x11-themes#
|X11 themes.
|
|[.filename]#x11-toolkits#
|X11 toolkits.
|
|[.filename]#x11-wm#
|X11 window managers.
|
|[.filename]#xfce#`*`
|Ports related to the http://www.xfce.org/[Xfce] desktop environment.
|
|[.filename]#zope#`*`
|http://www.zope.org/[Zope] support.
|
|===
[[choosing-categories]]
=== Choosing the Right Category
As many of the categories overlap, choosing which of the categories will be the primary category of the port can be tedious. There are several rules that govern this issue. Here is the list of priorities, in decreasing order of precedence:
* The first category must be a physical category (see <<porting-categories,above>>). This is necessary to make the packaging work. Virtual categories and physical categories may be intermixed after that.
* Language specific categories always come first. For example, if the port installs Japanese X11 fonts, then the `CATEGORIES` line would read [.filename]#japanese x11-fonts#.
* Specific categories are listed before less-specific ones. For instance, an HTML editor is listed as [.filename]#www editors#, not the other way around. Also, do not list [.filename]#net# when the port belongs to any of [.filename]#irc#, [.filename]#mail#, [.filename]#news#, [.filename]#security#, or [.filename]#www#, as [.filename]#net# is included implicitly.
* [.filename]#x11# is used as a secondary category only when the primary category is a natural language. In particular, do not put [.filename]#x11# in the category line for X applications.
* Emacs modes are placed in the same ports category as the application supported by the mode, not in [.filename]#editors#. For example, an Emacs mode to edit source files of some programming language goes into [.filename]#lang#.
* Ports installing loadable kernel modules also have the virtual category [.filename]#kld# in their `CATEGORIES` line. This is one of the things handled automatically by adding `USES=kmod`.
* [.filename]#misc# does not appear with any other non-virtual category. If there is `misc` with something else in `CATEGORIES`, that means `misc` can safely be deleted and the port placed only in the other subdirectory.
* If the port truly does not belong anywhere else, put it in [.filename]#misc#.
If the category is not clearly defined, please put a comment to that effect in the https://bugs.freebsd.org/submit/[port submission] in the bug database so we can discuss it before we import it. As a committer, send a note to the {freebsd-ports} so we can discuss it first. Too often, new ports are imported to the wrong category only to be moved right away.
[[proposing-categories]]
=== Proposing a New Category
As the Ports Collection has grown over time, various new categories have been introduced. New categories can either be _virtual_ categories-those that do not have a corresponding subdirectory in the ports tree- or _physical_ categories-those that do. This section discusses the issues involved in creating a new physical category. Read it thoroughly before proposing a new one.
Our existing practice has been to avoid creating a new physical category unless either a large number of ports would logically belong to it, or the ports that would belong to it are a logically distinct group that is of limited general interest (for instance, categories related to spoken human languages), or preferably both.
The rationale for this is that such a change creates a link:{committers-guide}#ports[fair amount of work] for both the committers and also for all users who track changes to the Ports Collection. In addition, proposed category changes just naturally seem to attract controversy. (Perhaps this is because there is no clear consensus on when a category is "too big", nor whether categories should lend themselves to browsing (and thus what number of categories would be an ideal number), and so forth.)
Here is the procedure:
[.procedure]
. Propose the new category on {freebsd-ports}. Include a detailed rationale for the new category, including why the existing categories are not sufficient, and the list of existing ports proposed to move. (If there are new ports pending in Bugzilla that would fit this category, list them too.) If you are the maintainer and/or submitter, respectively, mention that as it may help the case.
. Participate in the discussion.
. If it seems that there is support for the idea, file a PR which includes both the rationale and the list of existing ports that need to be moved. Ideally, this PR would also include these patches:
** [.filename]##Makefile##s for the new ports once they are repocopied
** [.filename]#Makefile# for the new category
** [.filename]#Makefile# for the old ports' categories
** [.filename]##Makefile##s for ports that depend on the old ports
** (for extra credit, include the other files that have to change, as per the procedure in the Committer's Guide.)
. Since it affects the ports infrastructure and involves moving and patching many ports, and possibly running regression tests on the build cluster, assign the PR to the {portmgr}.
. If that PR is approved, a committer will need to follow the rest of the procedure that is link:{committers-guide}#PORTS[outlined in the Committer's Guide].
Proposing a new virtual category is similar to the above but much less involved, since no ports will actually have to move. In this case, the only patches to include in the PR would be those to add the new category to `CATEGORIES` of the affected ports.
[[proposing-reorg]]
=== Proposing Reorganizing All the Categories
Occasionally someone proposes reorganizing the categories with either a 2-level structure, or some other kind of keyword structure. To date, nothing has come of any of these proposals because, while they are very easy to make, the effort involved to retrofit the entire existing ports collection with any kind of reorganization is daunting to say the very least. Please read the history of these proposals in the mailing list archives before posting this idea. Furthermore, be prepared to be challenged to offer a working prototype.
[[makefile-distfiles]]
== The Distribution Files
The second part of the [.filename]#Makefile# describes the files that must be downloaded to build the port, and where they can be downloaded.
[[makefile-distname]]
=== `DISTNAME`
`DISTNAME` is the name of the port as called by the authors of the software. `DISTNAME` defaults to `${PORTNAME}-${DISTVERSIONPREFIX}${DISTVERSION}${DISTVERSIONSUFFIX}`, and if not set, `DISTVERSION` defaults to `${PORTVERSION}` so override `DISTNAME` only if necessary. `DISTNAME` is only used in two places. First, the distribution file list (`DISTFILES`) defaults to `${DISTNAME}${EXTRACT_SUFX}`. Second, the distribution file is expected to extract into a subdirectory named `WRKSRC`, which defaults to [.filename]#work/${DISTNAME}#.
Some vendors' distribution names which do not fit into the `${PORTNAME}-${PORTVERSION}`-scheme can be handled automatically by setting `DISTVERSIONPREFIX`, `DISTVERSION`, and `DISTVERSIONSUFFIX`. `PORTVERSION` will be derived from `DISTVERSION` automatically.
[IMPORTANT]
====
Only one of `PORTVERSION` and `DISTVERSION` can be set at a time. If `DISTVERSION` does not derive a correct `PORTVERSION`, do not use `DISTVERSION`.
====
If the upstream version scheme can be converted into a ports-compatible version scheme, set some variable to the upstream version; _do not_ use `DISTVERSION` as the variable name. Set `PORTVERSION` to the computed version based on the variable you created, and set `DISTNAME` accordingly.
If the upstream version scheme cannot easily be coerced into a ports-compatible value, set `PORTVERSION` to a sensible value, and set `DISTNAME` using `PORTNAME` with the verbatim upstream version.
[[makefile-distname-ex1]]
.Deriving `PORTVERSION` Manually
[example]
====
BIND9 uses a version scheme that is not compatible with the ports versions (it has `-` in its versions) and cannot be derived using `DISTVERSION` because after the 9.9.9 release, it releases "patchlevels" in the form of `9.9.9-P1`. `DISTVERSION` would translate that into `9.9.9.p1`, which, in the ports versioning scheme, means 9.9.9 pre-release 1, which is before 9.9.9 and not after. So `PORTVERSION` is manually derived from an `ISCVERSION` variable to output `9.9.9p1`.
The order in which the ports framework and pkg sort versions is checked using the `-t` argument of man:pkg-version[8]:
[source,shell]
....
% pkg version -t 9.9.9 9.9.9.p1
> <.>
% pkg version -t 9.9.9 9.9.9p1
< <.>
....
<.> The `>` sign means that the first argument passed to `-t` is greater than the second argument. `9.9.9` is after `9.9.9.p1`.
<.> The `<` sign means that the first argument passed to `-t` is less than the second argument. `9.9.9` is before `9.9.9p1`.
In the port [.filename]#Makefile#, for example package:dns/bind99[], it is achieved by:
[.programlisting]
....
PORTNAME= bind
PORTVERSION= ${ISCVERSION:S/-P/P/:S/b/.b/:S/a/.a/:S/rc/.rc/} <.>
CATEGORIES= dns net
MASTER_SITES= ISC/bind9/${ISCVERSION} <.>
PKGNAMESUFFIX= 99
DISTNAME= ${PORTNAME}-${ISCVERSION} <.>
MAINTAINER= mat@FreeBSD.org
COMMENT= BIND DNS suite with updated DNSSEC and DNS64
LICENSE= ISCL
# ISC releases things like 9.8.0-P1 or 9.8.1rc1, which our versioning does not like
ISCVERSION= 9.9.9-P6 <.>
....
<.> Use `ISCVERSION` to get a ports-compatible `PORTVERSION`.
<.> Use `ISCVERSION` directly to get the correct URL for fetching the distribution file.
<.> Use `ISCVERSION` directly to name the distribution file.
<.> Define upstream version in `ISCVERSION`, with a comment saying _why_ it is needed.
====
[[makefile-distname-ex2]]
.Derive `DISTNAME` from `PORTVERSION`
[example]
====
From time to time, the distribution file name has little or no relation to the version of the software.
In package:comms/kermit[], only the last element of the version is present in the distribution file:
[.programlisting]
....
PORTNAME= kermit
PORTVERSION= 9.0.304
CATEGORIES= comms ftp net
MASTER_SITES= ftp://ftp.kermitproject.org/kermit/test/tar/
DISTNAME= cku${PORTVERSION:E}-dev20
....
The `:E` man:make[1] modifier returns the suffix of the variable, in this case, `304`. The distribution file is correctly generated as `cku304-dev20.tar.gz`.
====
[[makefile-distname-ex3]]
.Exotic Case 1
[example]
====
Sometimes, there is no relation between the software name, its version, and the distribution file it is distributed in.
From package:audio/libworkman[]:
[.programlisting]
....
PORTNAME= libworkman
PORTVERSION= 1.4
CATEGORIES= audio
MASTER_SITES= LOCAL/jim
DISTNAME= ${PORTNAME}-1999-06-20
....
====
[[makefile-distname-ex4]]
.Exotic Case 2
[example]
====
In package:comms/librs232[], the distribution file is not versioned, so using <<makefile-dist_subdir,`DIST_SUBDIR`>> is needed:
[.programlisting]
....
PORTNAME= librs232
PORTVERSION= 20160710
CATEGORIES= comms
MASTER_SITES= http://www.teuniz.net/RS-232/
DISTNAME= RS-232
DIST_SUBDIR= ${PORTNAME}-${PORTVERSION}
....
====
[NOTE]
====
`PKGNAMEPREFIX` and `PKGNAMESUFFIX` do not affect `DISTNAME`. Also note that if `WRKSRC` is equal to [.filename]#${WRKDIR}/${DISTNAME}# while the original source archive is named something other than `${PORTNAME}-${PORTVERSION}${EXTRACT_SUFX}`, leave `DISTNAME` alone; defining only `DISTFILES` is easier than setting both `DISTNAME` and `WRKSRC` (and possibly `EXTRACT_SUFX`).
====
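As a sketch of that note (hypothetical archive name): if upstream ships [.filename]#foozolix-source-1.2.tar.gz# but it still extracts into [.filename]#foozolix-1.2#, only `DISTFILES` needs to be overridden:
[.programlisting]
....
PORTNAME= foozolix
DISTVERSION= 1.2
# The archive name differs from ${DISTNAME}${EXTRACT_SUFX}, but WRKSRC stays at its default
DISTFILES= ${PORTNAME}-source-${DISTVERSION}${EXTRACT_SUFX}
....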
[[makefile-master_sites]]
=== `MASTER_SITES`
Record the directory part of the FTP/HTTP-URL pointing at the original tarball in `MASTER_SITES`. Do not forget the trailing slash ([.filename]#/#)!
The `make` macros will try to use this specification for grabbing the distribution file with `FETCH` if they cannot find it already on the system.
It is recommended that multiple sites are included on this list, preferably from different continents. This will safeguard against wide-area network problems.
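A hedged sketch with placeholder URLs (example.com and example.org are not real download sites), showing more than one mirror and the required trailing slashes:
[.programlisting]
....
MASTER_SITES= https://download.example.com/pub/foozolix/ \
		https://mirror.example.org/foozolix/
....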
[IMPORTANT]
====
`MASTER_SITES` must not be blank. It must point to the actual site hosting the distribution files. It cannot point to web archives, or the FreeBSD distribution files cache sites. The only exception to this rule is ports that do not have any distribution files. For example, meta-ports do not have any distribution files, so `MASTER_SITES` does not need to be set.
====
[[makefile-master_sites-shorthand]]
==== Using `MASTER_SITE_*` Variables
Shortcut abbreviations are available for popular archives like SourceForge (`SOURCEFORGE`), GNU (`GNU`), or Perl CPAN (`PERL_CPAN`). `MASTER_SITES` can use them directly:
[.programlisting]
....
MASTER_SITES= GNU/make
....
The older expanded format still works, but all ports have been converted to the compact format. The expanded format looks like this:
[.programlisting]
....
MASTER_SITES= ${MASTER_SITE_GNU}
MASTER_SITE_SUBDIR= make
....
These values and variables are defined in https://cgit.freebsd.org/ports/tree/Mk/bsd.sites.mk[Mk/bsd.sites.mk]. New entries are added often, so make sure to check the latest version of this file before submitting a port.
[TIP]
====
For any `MASTER_SITE_FOO` variable, the shorthand `_FOO_` can be used. For example, use:
[.programlisting]
....
MASTER_SITES= FOO
....
If `MASTER_SITE_SUBDIR` is needed, use this:
[.programlisting]
....
MASTER_SITES= FOO/bar
....
====
[NOTE]
====
Some `MASTER_SITE_*` names are quite long, and for ease of use, shortcuts have been defined:
[[makefile-master_sites-shortcut]]
.Shortcuts for `MASTER_SITE_*` Macros
[cols="1,1", frame="none", options="header"]
|===
| Macro
| Shortcut
|`PERL_CPAN`
|`CPAN`
|`GITHUB`
|`GH`
|`GITHUB_CLOUD`
|`GHC`
|`LIBREOFFICE_DEV`
|`LODEV`
|`NETLIB`
|`NL`
|`RUBYGEMS`
|`RG`
|`SOURCEFORGE`
|`SF`
|===
====
[[makefile-master_sites-magic]]
==== Magic MASTER_SITES Macros
Several "magic" macros exist for popular sites with a predictable directory structure. For these, just use the abbreviation and the system will choose a subdirectory automatically. For a port named `Stardict`, of version `1.2.3`, and hosted on SourceForge, adding this line:
[.programlisting]
....
MASTER_SITES= SF
....
infers a subdirectory named `/project/stardict/stardict/1.2.3`. If the inferred directory is incorrect, it can be overridden:
[.programlisting]
....
MASTER_SITES= SF/stardict/WyabdcRealPeopleTTS/${PORTVERSION}
....
This can also be written as
[.programlisting]
....
MASTER_SITES= SF
MASTER_SITE_SUBDIR= stardict/WyabdcRealPeopleTTS/${PORTVERSION}
....
[[makefile-master_sites-popular]]
.Magic `MASTER_SITES` Macros
[cols="1,1", frame="none", options="header"]
|===
| Macro
| Assumed subdirectory
|`APACHE_COMMONS_BINARIES`
|`${PORTNAME:S,commons-,,}`
|`APACHE_COMMONS_SOURCE`
|`${PORTNAME:S,commons-,,}`
|`APACHE_JAKARTA`
|`${PORTNAME:S,-,/,}/source`
|`BERLIOS`
|`${PORTNAME:tl}.berlios`
|`CHEESESHOP`
|`source/${DISTNAME:C/(.).\*/\1/}/${DISTNAME:C/(.*)-[0-9].*/\1/}`
|`CPAN`
|`${PORTNAME:C/-.*//}`
|`DEBIAN`
|`pool/main/${PORTNAME:C/^((lib)?.).*$/\1/}/${PORTNAME}`
|`FARSIGHT`
|`${PORTNAME}`
|`FESTIVAL`
|`${PORTREVISION}`
|`GCC`
|`releases/${DISTNAME}`
|`GENTOO`
|`distfiles`
|`GIMP`
|`${PORTNAME}/${PORTVERSION:R}/`
|`GH`
|`${GH_ACCOUNT}/${GH_PROJECT}/tar.gz/${GH_TAGNAME}?dummy=/`
|`GHC`
|`${GH_ACCOUNT}/${GH_PROJECT}/`
|`GNOME`
|`sources/${PORTNAME}/${PORTVERSION:C/^([0-9]+\.[0-9]+).*/\1/}`
|`GNU`
|`${PORTNAME}`
|`GNUPG`
|`${PORTNAME}`
|`GNU_ALPHA`
|`${PORTNAME}`
|`HORDE`
|`${PORTNAME}`
|`LODEV`
|`${PORTNAME}`
|`MATE`
|`${PORTVERSION:C/^([0-9]+\.[0-9]+).*/\1/}`
|`MOZDEV`
|`${PORTNAME:tl}`
|`NL`
|`${PORTNAME}`
|`QT`
|`archive/qt/${PORTVERSION:R}`
|`SAMBA`
|`${PORTNAME}`
|`SAVANNAH`
|`${PORTNAME:tl}`
|`SF`
|`${PORTNAME:tl}/${PORTNAME:tl}/${PORTVERSION}`
|===
[[makefile-master_sites-github]]
=== `USE_GITHUB`
If the distribution file comes from a specific commit or tag on https://github.com[GitHub] for which there is no officially released file, there is an easy way to set the right `DISTNAME` and `MASTER_SITES` automatically. These variables are available:
[[makefile-master_sites-github-description]]
.`USE_GITHUB` Description
[cols="1,1,1", options="header"]
|===
| Variable
| Description
| Default
|`GH_ACCOUNT`
|Account name of the GitHub user hosting the project
|`${PORTNAME}`
|`GH_PROJECT`
|Name of the project on GitHub
|`${PORTNAME}`
|`GH_TAGNAME`
|Name of the tag to download (`2.0.1`, hash, ...). Using the name of a branch here is incorrect. It is also possible to use the hash of a commit to create a snapshot.
|`${DISTVERSIONPREFIX}${DISTVERSION}${DISTVERSIONSUFFIX}`
|`GH_SUBDIR`
|When the software needs an additional distribution file to be extracted within `${WRKSRC}`, this variable can be used. See the examples in <<makefile-master_sites-github-multiple>> for more information.
|(none)
|`GH_TUPLE`
|`GH_TUPLE` allows putting `GH_ACCOUNT`, `GH_PROJECT`, `GH_TAGNAME`, and `GH_SUBDIR` into a single variable. The format is _account_`:`_project_`:`_tagname_`:`_group_`/`_subdir_. The `/`_subdir_ part is optional. It is helpful when there is more than one GitHub project from which to fetch.
|===
[IMPORTANT]
====
Do not use `GH_TUPLE` for the default distribution file, as it has no default.
====
[[makefile-master_sites-github-ex1]]
.Simple Use of `USE_GITHUB`
[example]
====
While trying to make a port for version `1.2.7` of pkg from the FreeBSD user on GitHub, at https://github.com/freebsd/pkg[], the [.filename]#Makefile# would end up looking like this (slightly stripped for the example):
[.programlisting]
....
PORTNAME= pkg
DISTVERSION= 1.2.7
USE_GITHUB= yes
GH_ACCOUNT= freebsd
....
It will automatically have `MASTER_SITES` set to `GH GHC` and `WRKSRC` to `${WRKDIR}/pkg-1.2.7`.
====
[[makefile-master_sites-github-ex2]]
.More Complete Use of `USE_GITHUB`
[example]
====
While trying to make a port for the bleeding edge version of pkg from the FreeBSD user on GitHub, at https://github.com/freebsd/pkg[], the [.filename]#Makefile# ends up looking like this (slightly stripped for the example):
[.programlisting]
....
PORTNAME= pkg-devel
DISTVERSION= 1.3.0.a.20140411
USE_GITHUB= yes
GH_ACCOUNT= freebsd
GH_PROJECT= pkg
GH_TAGNAME= 6dbb17b
....
It will automatically have `MASTER_SITES` set to `GH GHC` and `WRKSRC` to `${WRKDIR}/pkg-6dbb17b`.
[TIP]
****
`20140411` is the date of the commit referenced in `GH_TAGNAME`, not the date the [.filename]#Makefile# is edited, or the date the commit is made.
****
====
[[makefile-master_sites-github-ex3]]
.Use of `USE_GITHUB` with `DISTVERSIONPREFIX`
[example]
====
From time to time, `GH_TAGNAME` is a slight variation from `DISTVERSION`. For example, if the version is `1.0.2`, the tag is `v1.0.2`. In those cases, it is possible to use `DISTVERSIONPREFIX` or `DISTVERSIONSUFFIX`:
[.programlisting]
....
PORTNAME= foo
DISTVERSIONPREFIX= v
DISTVERSION= 1.0.2
USE_GITHUB= yes
....
It will automatically set `GH_TAGNAME` to `v1.0.2`, while `WRKSRC` will be kept to `${WRKDIR}/foo-1.0.2`.
====
[[makefile-master_sites-github-ex4]]
.Using `USE_GITHUB` When Upstream Does Not Use Versions
[example]
====
If there never was a version upstream, do not invent one like `0.1` or `1.0`. Create the port with a `DISTVERSION` of `g__YYYYMMDD__`, where `g` is for Git, and `_YYYYMMDD_` represents the date of the commit referenced in `GH_TAGNAME`.
[.programlisting]
....
PORTNAME= bar
DISTVERSION= g20140411
USE_GITHUB= yes
GH_TAGNAME= c472d66b
....
This creates a versioning scheme that increases over time, and that is still before version `0` (see <<makefile-versions-ex-pkg-version>> for details on man:pkg-version[8]):
[source,shell]
....
% pkg version -t g20140411 0
<
....
This means that using `PORTEPOCH` will not be needed if upstream decides to cut versions in the future.
====
[[makefile-master_sites-github-ex5]]
.Using `USE_GITHUB` to Access a Commit Between Two Versions
[example]
====
If the current version of the software uses a Git tag, and the port needs to be updated to a newer, intermediate version, without a tag, use man:git-describe[1] to find out the version to use:
[source,shell]
....
% git describe --tags f0038b1
v0.7.3-14-gf0038b1
....
`v0.7.3-14-gf0038b1` can be split into three parts:
`v0.7.3`::
This is the last Git tag that appears in the commit history before the requested commit.
`-14`::
This means that the requested commit, `f0038b1`, is the 14th commit after the `v0.7.3` tag.
`-gf0038b1`::
The `-g` means "Git", and the `f0038b1` is the commit hash that this reference points to.
[.programlisting]
....
PORTNAME= bar
DISTVERSIONPREFIX= v
DISTVERSION= 0.7.3-14
DISTVERSIONSUFFIX= -gf0038b1
USE_GITHUB= yes
....
This creates a versioning scheme that increases over time (well, over commits), and does not conflict with the creation of a `0.7.4` version. (See <<makefile-versions-ex-pkg-version>> for details on man:pkg-version[8]):
[source,shell]
....
% pkg version -t 0.7.3 0.7.3.14
<
% pkg version -t 0.7.3.14 0.7.4
<
....
[NOTE]
****
If the requested commit is the same as a tag, a shorter description is shown by default. The longer version is equivalent:
[source,shell]
....
% git describe --tags c66c71d
v0.7.3
% git describe --tags --long c66c71d
v0.7.3-0-gc66c71d
....
****
====
[[makefile-master_sites-github-multiple]]
==== Fetching Multiple Files from GitHub
The `USE_GITHUB` framework also supports fetching multiple distribution files from different places in GitHub. It works in a way very similar to <<porting-master-sites-n>>.
Multiple values are added to `GH_ACCOUNT`, `GH_PROJECT`, and `GH_TAGNAME`. Each different value is assigned a group. The main value can either have no group, or the `:DEFAULT` group. A value can be omitted if it is the same as the default as listed in <<makefile-master_sites-github-description>>.
`GH_TUPLE` can also be used when there are a lot of distribution files. It helps keep the account, project, tagname, and group information at the same place.
For each group, a `${WRKSRC_group}` helper variable is created, containing the directory into which the file has been extracted. The `${WRKSRC_group}` variables can be used to move directories around during `post-extract`, or add to `CONFIGURE_ARGS`, or whatever is needed so that the software builds correctly.
[CAUTION]
====
The `:__group__` part _must_ be used for _only one_ distribution file. It is used as a unique key and using it more than once will overwrite the previous values.
====
[NOTE]
====
As this is only syntactic sugar above `DISTFILES` and `MASTER_SITES`, the group names must adhere to the restrictions on group names outlined in <<porting-master-sites-n>>.
====
When fetching multiple files from GitHub, sometimes the default distribution file is not fetched from GitHub. To disable fetching the default distribution, set:
[.programlisting]
....
USE_GITHUB= nodefault
....
[IMPORTANT]
====
When using `USE_GITHUB=nodefault`, the [.filename]#Makefile# must set `DISTFILES` in its crossref:porting-order[porting-order-portname,top block]. The definition should be:
[.programlisting]
....
DISTFILES= ${DISTNAME}${EXTRACT_SUFX}
....
====
[[makefile-master_sites-github-multi]]
.Use of `USE_GITHUB` with Multiple Distribution Files
[example]
====
From time to time, there is a need to fetch more than one distribution file. For example, when the upstream git repository uses submodules. This can be done easily using groups in the `GH_*` variables:
[.programlisting]
....
PORTNAME= foo
DISTVERSION= 1.0.2
USE_GITHUB= yes
GH_ACCOUNT= bar:icons,contrib
GH_PROJECT= foo-icons:icons foo-contrib:contrib
GH_TAGNAME= 1.0:icons fa579bc:contrib
GH_SUBDIR= ext/icons:icons
CONFIGURE_ARGS= --with-contrib=${WRKSRC_contrib}
....
This will fetch three distribution files from GitHub. The default one comes from [.filename]#foo/foo# and is version `1.0.2`. The second one, with the `icons` group, comes from [.filename]#bar/foo-icons# and is in version `1.0`. The third one comes from [.filename]#bar/foo-contrib# and uses the Git commit `fa579bc`. The distribution files are named [.filename]#foo-foo-1.0.2_GH0.tar.gz#, [.filename]#bar-foo-icons-1.0_GH0.tar.gz#, and [.filename]#bar-foo-contrib-fa579bc_GH0.tar.gz#.
All the distribution files are extracted in `${WRKDIR}` in their respective subdirectories. The default file is still extracted in `${WRKSRC}`, in this case, [.filename]#${WRKDIR}/foo-1.0.2#. Each additional distribution file is extracted in `${WRKSRC_group}`. Here, for the `icons` group, it is called `${WRKSRC_icons}` and it contains [.filename]#${WRKDIR}/foo-icons-1.0#. The file with the `contrib` group is called `${WRKSRC_contrib}` and contains `${WRKDIR}/foo-contrib-fa579bc`.
The software's build system expects to find the icons in a [.filename]#ext/icons# subdirectory in its sources, so `GH_SUBDIR` is used. `GH_SUBDIR` makes sure that [.filename]#ext# exists, but that [.filename]#ext/icons# does not already exist. Then it does this:
[.programlisting]
....
post-extract:
@${MV} ${WRKSRC_icons} ${WRKSRC}/ext/icons
....
====
[[makefile-master_sites-github-multi2]]
.Use of `USE_GITHUB` with Multiple Distribution Files Using `GH_TUPLE`
[example]
====
This is functionally equivalent to <<makefile-master_sites-github-multi>>, but using `GH_TUPLE`:
[.programlisting]
....
PORTNAME= foo
DISTVERSION= 1.0.2
USE_GITHUB= yes
GH_TUPLE= bar:foo-icons:1.0:icons/ext/icons \
bar:foo-contrib:fa579bc:contrib
CONFIGURE_ARGS= --with-contrib=${WRKSRC_contrib}
....
Grouping was used in the previous example with `bar:icons,contrib`. Some redundant information is present with `GH_TUPLE` because grouping is not possible.
====
[[makefile-master_sites-github-submodules]]
.How to Use `USE_GITHUB` with Git Submodules?
[example]
====
Ports with GitHub as an upstream repository sometimes use submodules. See man:git-submodule[1] for more information.
The problem with submodules is that each is a separate repository. As such, they each must be fetched separately.
Using package:finance/moneymanagerex[] as an example, its GitHub repository is https://github.com/moneymanagerex/moneymanagerex[]. It has a https://github.com/moneymanagerex/moneymanagerex/blob/master/.gitmodules[.gitmodules] file at the root. This file describes all the submodules used in this repository and lists the additional repositories needed:
[.programlisting]
....
[submodule "lib/wxsqlite3"]
path = lib/wxsqlite3
url = https://github.com/utelle/wxsqlite3.git
[submodule "3rd/mongoose"]
path = 3rd/mongoose
url = https://github.com/cesanta/mongoose.git
[submodule "3rd/LuaGlue"]
path = 3rd/LuaGlue
url = https://github.com/moneymanagerex/LuaGlue.git
[submodule "3rd/cgitemplate"]
path = 3rd/cgitemplate
url = https://github.com/moneymanagerex/html-template.git
[...]
....
The only information missing from that file is the commit hash or tag to use as a version. This information is found after cloning the repository:
[source,shell]
....
% git clone --recurse-submodules https://github.com/moneymanagerex/moneymanagerex.git
Cloning into 'moneymanagerex'...
remote: Counting objects: 32387, done.
[...]
Submodule '3rd/LuaGlue' (https://github.com/moneymanagerex/LuaGlue.git) registered for path '3rd/LuaGlue'
Submodule '3rd/cgitemplate' (https://github.com/moneymanagerex/html-template.git) registered for path '3rd/cgitemplate'
Submodule '3rd/mongoose' (https://github.com/cesanta/mongoose.git) registered for path '3rd/mongoose'
Submodule 'lib/wxsqlite3' (https://github.com/utelle/wxsqlite3.git) registered for path 'lib/wxsqlite3'
[...]
Cloning into '/home/mat/work/freebsd/ports/finance/moneymanagerex/moneymanagerex/3rd/LuaGlue'...
Cloning into '/home/mat/work/freebsd/ports/finance/moneymanagerex/moneymanagerex/3rd/cgitemplate'...
Cloning into '/home/mat/work/freebsd/ports/finance/moneymanagerex/moneymanagerex/3rd/mongoose'...
Cloning into '/home/mat/work/freebsd/ports/finance/moneymanagerex/moneymanagerex/lib/wxsqlite3'...
[...]
Submodule path '3rd/LuaGlue': checked out 'c51d11a247ee4d1e9817dfa2a8da8d9e2f97ae3b'
Submodule path '3rd/cgitemplate': checked out 'cd434eeeb35904ebcd3d718ba29c281a649b192c'
Submodule path '3rd/mongoose': checked out '2140e5992ab9a3a9a34ce9a281abf57f00f95cda'
Submodule path 'lib/wxsqlite3': checked out 'fb66eb230d8aed21dec273b38c7c054dcb7d6b51'
[...]
% cd moneymanagerex
% git submodule status
c51d11a247ee4d1e9817dfa2a8da8d9e2f97ae3b 3rd/LuaGlue (heads/master)
cd434eeeb35904ebcd3d718ba29c281a649b192c 3rd/cgitemplate (cd434ee)
2140e5992ab9a3a9a34ce9a281abf57f00f95cda 3rd/mongoose (6.2-138-g2140e59)
fb66eb230d8aed21dec273b38c7c054dcb7d6b51 lib/wxsqlite3 (v3.4.0)
[...]
....
It can also be found on GitHub. Each subdirectory that is a submodule is shown as `_directory @ hash_`, for example, `mongoose @ 2140e59`.
[NOTE]
****
While getting the information from GitHub seems more straightforward, the information found using `git submodule status` is more meaningful. For example, here, ``lib/wxsqlite3``'s commit hash `fb66eb2` corresponds to `v3.4.0`. Both can be used interchangeably, but when a tag is available, use it.
****
Now that all the required information has been gathered, the [.filename]#Makefile# can be written (only GitHub-related lines are shown):
[.programlisting]
....
PORTNAME= moneymanagerex
DISTVERSIONPREFIX= v
DISTVERSION= 1.3.0
USE_GITHUB= yes
GH_TUPLE= utelle:wxsqlite3:v3.4.0:wxsqlite3/lib/wxsqlite3 \
moneymanagerex:LuaGlue:c51d11a:lua_glue/3rd/LuaGlue \
moneymanagerex:html-template:cd434ee:html_template/3rd/cgitemplate \
cesanta:mongoose:2140e59:mongoose/3rd/mongoose \
[...]
....
====
[[makefile-master_sites-gitlab]]
=== `USE_GITLAB`
Similar to GitHub, if the distribution file comes from https://gitlab.com[gitlab.com] or a site hosting the GitLab software, these variables are available for use and might need to be set.
[[makefile-master_sites-gitlab-description]]
.`USE_GITLAB` Description
[cols="1,1,1", options="header"]
|===
| Variable
| Description
| Default
|`GL_SITE`
|Site name hosting the GitLab project
|https://gitlab.com
|`GL_ACCOUNT`
|Account name of the GitLab user hosting the project
|`${PORTNAME}`
|`GL_PROJECT`
|Name of the project on GitLab
|`${PORTNAME}`
|`GL_COMMIT`
|The commit hash to download. Must be the full 40-character hexadecimal SHA-1 hash (160 bits). This is a required variable for GitLab.
|`(none)`
|`GL_SUBDIR`
|When the software needs an additional distribution file to be extracted within `${WRKSRC}`, this variable can be used. See the examples in <<makefile-master_sites-gitlab-multiple>> for more information.
|(none)
|`GL_TUPLE`
|`GL_TUPLE` allows putting `GL_SITE`, `GL_ACCOUNT`, `GL_PROJECT`, `GL_COMMIT`, and `GL_SUBDIR` into a single variable. The format is _site_`:`_account_`:`_project_`:`_commit_`:`_group_`/`_subdir_. The _site_`:` and `/`_subdir_ parts are optional. It is helpful when there is more than one GitLab project from which to fetch.
|===
[[makefile-master_sites-gitlab-ex1]]
.Simple Use of `USE_GITLAB`
[example]
====
While trying to make a port for version `1.14` of libsignon-glib from the accounts-sso user on gitlab.com, at https://gitlab.com/accounts-sso/libsignon-glib[], the [.filename]#Makefile# would end up looking like this to fetch the distribution files:
[.programlisting]
....
PORTNAME= libsignon-glib
DISTVERSION= 1.14
USE_GITLAB= yes
GL_ACCOUNT= accounts-sso
GL_COMMIT= e90302e342bfd27bc8c9132ab9d0ea3d8723fd03
....
It will automatically have `MASTER_SITES` set to https://gitlab.com[gitlab.com] and `WRKSRC` to `${WRKDIR}/libsignon-glib-e90302e342bfd27bc8c9132ab9d0ea3d8723fd03-e90302e342bfd27bc8c9132ab9d0ea3d8723fd03`.
====
[[makefile-master_sites-gitlab-ex2]]
.More Complete Use of `USE_GITLAB`
[example]
====
For a more complete use of the above: if the port has no versioning and the software is the `bar` project from the `foo` user on a self-hosted GitLab site at `https://gitlab.example.com`, the [.filename]#Makefile# ends up looking like this to fetch the distribution files:
[.programlisting]
....
PORTNAME= foobar
DISTVERSION= g20170906
USE_GITLAB= yes
GL_SITE= https://gitlab.example.com
GL_ACCOUNT= foo
GL_PROJECT= bar
GL_COMMIT= 9c1669ce60c3f4f5eb43df874d7314483fb3f8a6
....
It will have `MASTER_SITES` set to "`https://gitlab.example.com`" and `WRKSRC` to `${WRKDIR}/bar-9c1669ce60c3f4f5eb43df874d7314483fb3f8a6-9c1669ce60c3f4f5eb43df874d7314483fb3f8a6`.
[TIP]
****
`20170906` is the date of the commit referenced in `GL_COMMIT`, not the date the [.filename]#Makefile# is edited, or the date the commit to the FreeBSD ports tree is made.
****
[NOTE]
****
``GL_SITE``'s protocol, port and webroot can all be modified in the same variable.
****
====
[[makefile-master_sites-gitlab-multiple]]
==== Fetching Multiple Files from GitLab
The `USE_GITLAB` framework also supports fetching multiple distribution files from different places on GitLab and GitLab-hosted sites. It works in a way very similar to <<porting-master-sites-n>> and <<makefile-master_sites-github-multi>>.
Multiple values are added to `GL_SITE`, `GL_ACCOUNT`, `GL_PROJECT` and `GL_COMMIT`. Each different value is assigned a group; see <<makefile-master_sites-gitlab-description>>.
`GL_TUPLE` can also be used when there are a lot of distribution files. It helps keep the site, account, project, commit, and group information at the same place.
For each group, a `${WRKSRC_group}` helper variable is created, containing the directory into which the file has been extracted. The `${WRKSRC_group}` variables can be used to move directories around during `post-extract`, or add to `CONFIGURE_ARGS`, or whatever is needed so that the software builds correctly.
[CAUTION]
====
The `:__group__` part _must_ be used for _only one_ distribution file. It is used as a unique key and using it more than once will overwrite the previous values.
====
[NOTE]
====
As this is only syntactic sugar above `DISTFILES` and `MASTER_SITES`, the group names must adhere to the restrictions on group names outlined in <<porting-master-sites-n>>.
====
When fetching multiple files using GitLab, sometimes the default distribution file is not fetched from a GitLab site. To disable fetching the default distribution, set:
[.programlisting]
....
USE_GITLAB= nodefault
....
[IMPORTANT]
====
When using `USE_GITLAB=nodefault`, the [.filename]#Makefile# must set `DISTFILES` in its <<porting-order-portname,top block>>. The definition should be:
[.programlisting]
....
DISTFILES= ${DISTNAME}${EXTRACT_SUFX}
....
====
[[makefile-master_sites-gitlab-multi]]
.Use of `USE_GITLAB` with Multiple Distribution Files
[example]
====
From time to time, there is a need to fetch more than one distribution file. For example, when the upstream git repository uses submodules. This can be done easily using groups in the `GL_*` variables:
[.programlisting]
....
PORTNAME= foo
DISTVERSION= 1.0.2
USE_GITLAB= yes
GL_SITE= https://gitlab.example.com:9434/gitlab:icons
GL_ACCOUNT= bar:icons,contrib
GL_PROJECT= foo-icons:icons foo-contrib:contrib
GL_COMMIT= c189207a55da45305c884fe2b50e086fcad4724b ae7368cab1ca7ca754b38d49da064df87968ffe4:icons 9e4dd76ad9b38f33fdb417a4c01935958d5acd2a:contrib
GL_SUBDIR= ext/icons:icons
CONFIGURE_ARGS= --with-contrib=${WRKSRC_contrib}
....
This will fetch two distribution files from gitlab.com and one from `gitlab.example.com` hosting GitLab. The default one comes from [.filename]#https://gitlab.com/foo/foo# and uses commit `c189207a55da45305c884fe2b50e086fcad4724b`. The second one, with the `icons` group, comes from [.filename]#https://gitlab.example.com:9434/gitlab/bar/foo-icons# and uses commit `ae7368cab1ca7ca754b38d49da064df87968ffe4`. The third one comes from [.filename]#https://gitlab.com/bar/foo-contrib# and uses commit `9e4dd76ad9b38f33fdb417a4c01935958d5acd2a`. The distribution files are named [.filename]#foo-foo-c189207a55da45305c884fe2b50e086fcad4724b_GL0.tar.gz#, [.filename]#bar-foo-icons-ae7368cab1ca7ca754b38d49da064df87968ffe4_GL0.tar.gz#, and [.filename]#bar-foo-contrib-9e4dd76ad9b38f33fdb417a4c01935958d5acd2a_GL0.tar.gz#.
All the distribution files are extracted in `${WRKDIR}` in their respective subdirectories. The default file is still extracted in `${WRKSRC}`, in this case, [.filename]#${WRKDIR}/foo-c189207a55da45305c884fe2b50e086fcad4724b-c189207a55da45305c884fe2b50e086fcad4724b#. Each additional distribution file is extracted in `${WRKSRC_group}`. Here, for the `icons` group, it is called `${WRKSRC_icons}` and it contains [.filename]#${WRKDIR}/foo-icons-ae7368cab1ca7ca754b38d49da064df87968ffe4-ae7368cab1ca7ca754b38d49da064df87968ffe4#. The file with the `contrib` group is called `${WRKSRC_contrib}` and contains `${WRKDIR}/foo-contrib-9e4dd76ad9b38f33fdb417a4c01935958d5acd2a-9e4dd76ad9b38f33fdb417a4c01935958d5acd2a`.
The software's build system expects to find the icons in an [.filename]#ext/icons# subdirectory in its sources, so `GL_SUBDIR` is used. `GL_SUBDIR` makes sure that [.filename]#ext# exists, but that [.filename]#ext/icons# does not already exist. Then it does this:
[.programlisting]
....
post-extract:
@${MV} ${WRKSRC_icons} ${WRKSRC}/ext/icons
....
====
[[makefile-master_sites-gitlab-multi2]]
.Use of `USE_GITLAB` with Multiple Distribution Files Using `GL_TUPLE`
[example]
====
This is functionally equivalent to <<makefile-master_sites-gitlab-multi>>, but using `GL_TUPLE`:
[.programlisting]
....
PORTNAME= foo
DISTVERSION= 1.0.2
USE_GITLAB= yes
GL_COMMIT= c189207a55da45305c884fe2b50e086fcad4724b
GL_TUPLE= https://gitlab.example.com:9434/gitlab:bar:foo-icons:ae7368cab1ca7ca754b38d49da064df87968ffe4:icons/ext/icons \
bar:foo-contrib:9e4dd76ad9b38f33fdb417a4c01935958d5acd2a:contrib
CONFIGURE_ARGS= --with-contrib=${WRKSRC_contrib}
....
Grouping was used in the previous example with `bar:icons,contrib`. Some redundant information is present with `GL_TUPLE` because grouping is not possible.
====
[[makefile-extract_sufx]]
=== `EXTRACT_SUFX`
If there is one distribution file, and it uses an odd suffix to indicate the compression mechanism, set `EXTRACT_SUFX`.
For example, if the distribution file was named [.filename]#foo.tar.gzip# instead of the more normal [.filename]#foo.tar.gz#, write:
[.programlisting]
....
DISTNAME= foo
EXTRACT_SUFX= .tar.gzip
....
Setting `USES=tar[:__xxx__]`, `USES=lha` or `USES=zip` automatically sets `EXTRACT_SUFX` to the most common archive extension as necessary, see crossref:uses[uses,Using `USES` Macros] for more details. If none of these is set then `EXTRACT_SUFX` defaults to `.tar.gz`.
[NOTE]
====
As `EXTRACT_SUFX` is only used in `DISTFILES`, only set one of them.
====
[[makefile-distfiles-definition]]
=== `DISTFILES`
Sometimes the names of the files to be downloaded have no resemblance to the name of the port. For example, it might be called [.filename]#source.tar.gz# or similar. In other cases the application's source code might be in several different archives, all of which must be downloaded.
If this is the case, set `DISTFILES` to be a space separated list of all the files that must be downloaded.
[.programlisting]
....
DISTFILES= source1.tar.gz source2.tar.gz
....
If not explicitly set, `DISTFILES` defaults to `${DISTNAME}${EXTRACT_SUFX}`.
[[makefile-extract_only]]
=== `EXTRACT_ONLY`
If only some of the `DISTFILES` must be extracted (for example, one of them is the source code, while another is an uncompressed document), list the filenames that must be extracted in `EXTRACT_ONLY`.
[.programlisting]
....
DISTFILES= source.tar.gz manual.html
EXTRACT_ONLY= source.tar.gz
....
When none of the `DISTFILES` need to be uncompressed, set `EXTRACT_ONLY` to the empty string.
[.programlisting]
....
EXTRACT_ONLY=
....
[[porting-patchfiles]]
=== `PATCHFILES`
If the port requires some additional patches that are available by FTP or HTTP, set `PATCHFILES` to the names of the files and `PATCH_SITES` to the URL of the directory that contains them (the format is the same as `MASTER_SITES`).
If the patch is not relative to the top of the source tree (that is, `WRKSRC`) because it contains some extra pathnames, set `PATCH_DIST_STRIP` accordingly. For instance, if all the pathnames in the patch have an extra `foozolix-1.0/` in front of the filenames, then set `PATCH_DIST_STRIP=-p1`.
Do not worry if the patches are compressed; they will be decompressed automatically if the filenames end with [.filename]#.Z#, [.filename]#.gz#, [.filename]#.bz2# or [.filename]#.xz#.
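As a minimal sketch (the site and patch file names here are hypothetical), a port fetching one gzipped patch whose pathnames carry an extra leading [.filename]#foozolix-1.0/# directory might use:
[.programlisting]
....
# Hypothetical upstream patch, fetched over HTTP and applied with -p1
PATCH_SITES= http://www.example.com/patches/
PATCHFILES= foozolix-1.0-fix.patch.gz
PATCH_DIST_STRIP= -p1
....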
If the patch is distributed with some other files, such as documentation, in a compressed tarball, using `PATCHFILES` is not possible. If that is the case, add the name and the location of the patch tarball to `DISTFILES` and `MASTER_SITES`. Then, use `EXTRA_PATCHES` to point to those files and [.filename]#bsd.port.mk# will automatically apply them. In particular, do _not_ copy patch files into [.filename]#${PATCHDIR}#. That directory may not be writable.
[TIP]
====
If there are multiple patches and they need mixed values for the strip parameter, it can be added alongside the patch name in `PATCHFILES`, for example:
[.programlisting]
....
PATCHFILES= patch1 patch2:-p1
....
This does not conflict with <<porting-master-sites-n,the master site grouping feature>>, adding a group also works:
[.programlisting]
....
PATCHFILES= patch2:-p1:source2
....
====
[NOTE]
====
The tarball will have been extracted alongside the regular source by then, so there is no need to explicitly extract it if it is a regular compressed tarball. Take extra care not to overwrite something that already exists in that directory if extracting it manually. Also, do not forget to add a command to remove the copied patch in the `pre-clean` target.
====
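As a hedged sketch (all file and directory names here are hypothetical), fetching such a patch tarball alongside the sources and applying one patch from it could look like this:
[.programlisting]
....
# Hypothetical extra tarball containing patches, fetched from its own group
MASTER_SITES= http://www.example.com/source/ \
	http://www.example.com/patches/:patches
DISTFILES= ${DISTNAME}${EXTRACT_SUFX} \
	foo-extra-patches-1.0.tar.gz:patches
# The tarball is extracted into ${WRKDIR}; point EXTRA_PATCHES at its content
EXTRA_PATCHES= ${WRKDIR}/foo-extra-patches-1.0/patch-src-main.c
....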
[[porting-master-sites-n]]
=== Multiple Distribution or Patches Files from Multiple Locations
(Consider this to be a somewhat "advanced topic"; those new to this document may wish to skip this section at first).
This section has information on the fetching mechanism known as both `MASTER_SITES:n` and `MASTER_SITES_NN`. We will refer to this mechanism as `MASTER_SITES:n`.
A little background first. OpenBSD has a neat feature inside `DISTFILES` and `PATCHFILES` which allows files and patches to be postfixed with `:n` identifiers. Here, `n` can be any word containing `[0-9a-zA-Z_]` and denotes a group designation. For example:
[.programlisting]
....
DISTFILES= alpha:0 beta:1
....
In OpenBSD, distribution file [.filename]#alpha# will be associated with variable `MASTER_SITES0` instead of our common `MASTER_SITES` and [.filename]#beta# with `MASTER_SITES1`.
This is a very interesting feature which can decrease that endless search for the correct download site.
Just picture 2 files in `DISTFILES` and 20 sites in `MASTER_SITES`, the sites slow as hell, where [.filename]#beta# is carried by all sites in `MASTER_SITES` and [.filename]#alpha# can only be found on the 20th site. It would be such a waste to check all of them if the maintainer knew this beforehand, would it not? Not a good start for that lovely weekend!
Now that you have the idea, just imagine more `DISTFILES` and more `MASTER_SITES`. Surely our "distfiles survey meister" would appreciate the relief to network strain that this would bring.
In the next sections, information will follow on the FreeBSD implementation of this idea. We improved a bit on OpenBSD's concept.
[IMPORTANT]
====
The group names cannot have dashes in them (`-`), in fact, they cannot have any characters out of the `[a-zA-Z0-9_]` range. This is because, while man:make[1] is ok with variable names containing dashes, man:sh[1] is not.
====
[[porting-master-sites-n-simplified]]
==== Simplified Information
This section explains how to quickly prepare fine grained fetching of multiple distribution files and patches from different sites and subdirectories. We describe here a case of simplified `MASTER_SITES:n` usage. This will be sufficient for most scenarios. More detailed information is available in <<ports-master-sites-n-detailed>>.
Some applications consist of multiple distribution files that must be downloaded from a number of different sites. For example, Ghostscript consists of the core of the program, and then a large number of driver files that are used depending on the user's printer. Some of these driver files are supplied with the core, but many others must be downloaded from a variety of different sites.
To support this, each entry in `DISTFILES` may be followed by a colon and a "group name". Each site listed in `MASTER_SITES` is then followed by a colon, and the group that indicates which distribution files are downloaded from this site.
For example, consider an application with the source split in two parts, [.filename]#source1.tar.gz# and [.filename]#source2.tar.gz#, which must be downloaded from two different sites. The port's [.filename]#Makefile# would include lines like <<ports-master-sites-n-example-simple-use-one-file-per-site>>.
[[ports-master-sites-n-example-simple-use-one-file-per-site]]
.Simplified Use of `MASTER_SITES:n` with One File Per Site
[example]
====
[.programlisting]
....
MASTER_SITES= ftp://ftp1.example.com/:source1 \
http://www.example.com/:source2
DISTFILES= source1.tar.gz:source1 \
source2.tar.gz:source2
....
====
Multiple distribution files can have the same group. Continuing the previous example, suppose that there was a third distfile, [.filename]#source3.tar.gz#, that is downloaded from `ftp.example2.com`. The [.filename]#Makefile# would then be written like <<ports-master-sites-n-example-simple-use-more-than-one-file-per-site>>.
[[ports-master-sites-n-example-simple-use-more-than-one-file-per-site]]
.Simplified Use of `MASTER_SITES:n` with More Than One File Per Site
[example]
====
[.programlisting]
....
MASTER_SITES= ftp://ftp.example.com/:source1 \
http://www.example.com/:source2
DISTFILES= source1.tar.gz:source1 \
source2.tar.gz:source2 \
source3.tar.gz:source2
....
====
[[ports-master-sites-n-detailed]]
==== Detailed Information
Okay, so the previous example did not reflect the new port's needs? In this section we will explain in detail how the fine grained fetching mechanism `MASTER_SITES:n` works and how it can be used.
. Elements can be postfixed with `:__n__` where _n_ is `[^:,]+`, that is, _n_ could conceptually be any alphanumeric string but we will limit it to `[a-zA-Z_][0-9a-zA-Z_]+` for now.
+
Moreover, string matching is case sensitive; that is, `n` is different from `N`.
+
However, these words cannot be used for postfixing purposes since they yield special meaning: `default`, `all` and `ALL` (they are used internally in item <<porting-master-sites-n-what-changes-in-port-targets, ii>>). Furthermore, `DEFAULT` is a special purpose word (check item <<porting-master-sites-n-DEFAULT-group,3>>).
. Elements postfixed with `:n` belong to the group `n`, `:m` belong to group `m` and so forth.
+
[[porting-master-sites-n-DEFAULT-group]]
. Elements without a postfix are groupless; they all belong to the special group `DEFAULT`. Postfixing elements with `DEFAULT` is just redundant, unless an element belongs to both `DEFAULT` and other groups at the same time (check item <<porting-master-sites-n-comma-operator,5>>).
+
These examples are equivalent but the first one is preferred:
+
[.programlisting]
....
MASTER_SITES= alpha
....
+
[.programlisting]
....
MASTER_SITES= alpha:DEFAULT
....
. Groups are not exclusive; an element may belong to several different groups at the same time, and a group can have either several different elements or none at all.
+
[[porting-master-sites-n-comma-operator]]
. When an element belongs to several groups at the same time, use the comma operator (`,`).
+
Instead of repeating it several times, each time with a different postfix, we can list several groups at once in a single postfix. For instance, `:m,n,o` marks an element that belongs to group `m`, `n` and `o`.
+
All these examples are equivalent but the last one is preferred:
+
[.programlisting]
....
MASTER_SITES= alpha alpha:SOME_SITE
....
+
[.programlisting]
....
MASTER_SITES= alpha:DEFAULT alpha:SOME_SITE
....
+
[.programlisting]
....
MASTER_SITES= alpha:SOME_SITE,DEFAULT
....
+
[.programlisting]
....
MASTER_SITES= alpha:DEFAULT,SOME_SITE
....
. All sites within a given group are sorted according to `MASTER_SORT_AWK`. All groups within `MASTER_SITES` and `PATCH_SITES` are sorted as well.
+
[[porting-master-sites-n-group-semantics]]
. Group semantics can be used in any of the variables `MASTER_SITES`, `PATCH_SITES`, `MASTER_SITE_SUBDIR`, `PATCH_SITE_SUBDIR`, `DISTFILES`, and `PATCHFILES` according to this syntax:
.. All `MASTER_SITES`, `PATCH_SITES`, `MASTER_SITE_SUBDIR` and `PATCH_SITE_SUBDIR` elements must be terminated with the forward slash `/` character. If any elements belong to any groups, the group postfix `:__n__` must come right after the terminator `/`. The `MASTER_SITES:n` mechanism relies on the existence of the terminator `/` to avoid confusing elements where a `:n` is a valid part of the element with occurrences where `:n` denotes group `n`. For compatibility purposes, since the `/` terminator was not previously required in `MASTER_SITE_SUBDIR` and `PATCH_SITE_SUBDIR` elements, if the character immediately preceding the postfix is not a `/` then `:n` will be considered a valid part of the element instead of a group postfix, even if an element is postfixed with `:n`. See both <<ports-master-sites-n-example-detailed-use-master-site-subdir>> and <<ports-master-sites-n-example-detailed-use-complete-example-master-sites>>.
+
[[ports-master-sites-n-example-detailed-use-master-site-subdir]]
.Detailed Use of `MASTER_SITES:n` in `MASTER_SITE_SUBDIR`
[example]
====
[.programlisting]
....
MASTER_SITE_SUBDIR= old:n new/:NEW
....
*** Directories within group `DEFAULT` -> old:n
*** Directories within group `NEW` -> new
====
+
[[ports-master-sites-n-example-detailed-use-complete-example-master-sites]]
.Detailed Use of `MASTER_SITES:n` with Comma Operator, Multiple Files, Multiple Sites and Multiple Subdirectories
[example]
====
[.programlisting]
....
MASTER_SITES= http://site1/%SUBDIR%/ http://site2/:DEFAULT \
http://site3/:group3 http://site4/:group4 \
http://site5/:group5 http://site6/:group6 \
http://site7/:DEFAULT,group6 \
http://site8/%SUBDIR%/:group6,group7 \
http://site9/:group8
DISTFILES= file1 file2:DEFAULT file3:group3 \
file4:group4,group5,group6 file5:grouping \
file6:group7
MASTER_SITE_SUBDIR= directory-trial:1 directory-n/:groupn \
directory-one/:group6,DEFAULT \
directory
....
The previous example results in this fine grained fetching. Sites are listed in the exact order they will be used.
*** [.filename]#file1# will be fetched from
**** `MASTER_SITE_OVERRIDE`
**** http://site1/directory-trial:1/
**** http://site1/directory-one/
**** http://site1/directory/
**** http://site2/
**** http://site7/
**** `MASTER_SITE_BACKUP`
*** [.filename]#file2# will be fetched exactly as [.filename]#file1# since they both belong to the same group
**** `MASTER_SITE_OVERRIDE`
**** http://site1/directory-trial:1/
**** http://site1/directory-one/
**** http://site1/directory/
**** http://site2/
**** http://site7/
**** `MASTER_SITE_BACKUP`
*** [.filename]#file3# will be fetched from
**** `MASTER_SITE_OVERRIDE`
**** http://site3/
**** `MASTER_SITE_BACKUP`
*** [.filename]#file4# will be fetched from
**** `MASTER_SITE_OVERRIDE`
**** http://site4/
**** http://site5/
**** http://site6/
**** http://site7/
**** http://site8/directory-one/
**** `MASTER_SITE_BACKUP`
*** [.filename]#file5# will be fetched from
**** `MASTER_SITE_OVERRIDE`
**** `MASTER_SITE_BACKUP`
*** [.filename]#file6# will be fetched from
**** `MASTER_SITE_OVERRIDE`
**** http://site8/
**** `MASTER_SITE_BACKUP`
====
. How do I group one of the special macros from [.filename]#bsd.sites.mk#, for example, SourceForge (`SF`)?
+
This has been simplified as much as possible. See <<ports-master-sites-n-example-detailed-use-master-site-sourceforge>>.
+
[[ports-master-sites-n-example-detailed-use-master-site-sourceforge]]
.Detailed Use of `MASTER_SITES:n` with SourceForge (`SF`)
[example]
====
[.programlisting]
....
MASTER_SITES= http://site1/ SF/something/1.0:sourceforge,TEST
DISTFILES= something.tar.gz:sourceforge
....
[.filename]#something.tar.gz# will be fetched from all sites within SourceForge.
====
. How do I use this with `PATCH*`?
+
All examples were done with `MASTER*` but they work exactly the same for `PATCH*` ones as can be seen in <<ports-master-sites-n-example-detailed-use-patch-sites>>.
+
[[ports-master-sites-n-example-detailed-use-patch-sites]]
.Simplified Use of `MASTER_SITES:n` with `PATCH_SITES`
[example]
====
[.programlisting]
....
PATCH_SITES= http://site1/ http://site2/:test
PATCHFILES= patch1:test
....
====
[[port-master-sites-n-what-changed]]
==== What Does Change for Ports? What Does Not?
[lowerroman]
. All current ports remain the same. The `MASTER_SITES:n` feature code is only activated if there are elements postfixed with `:__n__` according to the aforementioned syntax rules, especially as shown in item <<porting-master-sites-n-group-semantics, 7>>.
+
[[porting-master-sites-n-what-changes-in-port-targets]]
. The port targets remain the same: `checksum`, `makesum`, `patch`, `configure`, `build`, and so on, with the obvious exceptions of `do-fetch`, `fetch-list`, `master-sites` and `patch-sites`.
** `do-fetch`: deploys the new grouping postfixed `DISTFILES` and `PATCHFILES` with their matching group elements within both `MASTER_SITES` and `PATCH_SITES` which use matching group elements within both `MASTER_SITE_SUBDIR` and `PATCH_SITE_SUBDIR`. Check <<ports-master-sites-n-example-detailed-use-complete-example-master-sites>>.
** `fetch-list`: works like old `fetch-list` with the exception that it groups just like `do-fetch`.
** `master-sites` and `patch-sites`: (incompatible with older versions) only return the elements of group `DEFAULT`; in fact, they execute targets `master-sites-default` and `patch-sites-default` respectively.
+
Furthermore, using either target `master-sites-all` or `patch-sites-all` is preferred to directly checking `MASTER_SITES` or `PATCH_SITES`. Directly checking is also not guaranteed to work in future versions. Check item <<porting-master-sites-n-new-port-targets-master-sites-all, B>> for more information on these new port targets.
. New port targets
.. There are `master-sites-_n_` and `patch-sites-_n_` targets which will list the elements of the respective group _n_ within `MASTER_SITES` and `PATCH_SITES` respectively. For instance, both `master-sites-DEFAULT` and `patch-sites-DEFAULT` will return the elements of group `DEFAULT`, `master-sites-test` and `patch-sites-test` those of group `test`, and so on.
+
[[porting-master-sites-n-new-port-targets-master-sites-all]]
.. There are new targets `master-sites-all` and `patch-sites-all` which do the work of the old `master-sites` and `patch-sites` ones. They return the elements of all groups as if they all belonged to the same group, with the caveat that they list as many `MASTER_SITE_BACKUP` and `MASTER_SITE_OVERRIDE` as there are groups defined within either `DISTFILES` or `PATCHFILES`, respectively for `master-sites-all` and `patch-sites-all`. See the usage sketch after this list.
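A short usage sketch, assuming a hypothetical port directory, shows how these targets can be invoked to print the corresponding site lists:
[source,shell]
....
% cd /usr/ports/category/port
% make master-sites-DEFAULT
% make master-sites-all
....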
[[makefile-dist_subdir]]
=== `DIST_SUBDIR`
Do not let the port clutter [.filename]#/usr/ports/distfiles#. If the port requires a lot of files to be fetched, or contains a file that has a name that might conflict with other ports (for example, [.filename]#Makefile#), set `DIST_SUBDIR` to the name of the port (`${PORTNAME}` or `${PKGNAMEPREFIX}${PORTNAME}` are fine). This will change `DISTDIR` from the default [.filename]#/usr/ports/distfiles# to [.filename]#/usr/ports/distfiles/${DIST_SUBDIR}#, and in effect puts everything that is required for the port into that subdirectory.
It will also look at the subdirectory with the same name on the backup master site at http://distcache.FreeBSD.org[http://distcache.FreeBSD.org]. (Setting `DISTDIR` explicitly in the [.filename]#Makefile# will not accomplish this, so please use `DIST_SUBDIR`.)
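For example, a minimal sketch keeping this port's distribution files in their own subdirectory would be:
[.programlisting]
....
DIST_SUBDIR= ${PORTNAME}
....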
[NOTE]
====
This does not affect `MASTER_SITES` defined in the [.filename]#Makefile#.
====
[[makefile-maintainer]]
== `MAINTAINER`
Set your mail-address here. Please. _:-)_
Only a single address without the comment part is allowed as a `MAINTAINER` value. The format used is `user@hostname.domain`. Please do not include any descriptive text such as a real name in this entry. That merely confuses the Ports infrastructure and most tools using it.
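For example, a hypothetical entry following that format would be:
[.programlisting]
....
MAINTAINER= youremail@example.com
....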
The maintainer is responsible for keeping the port up to date and making sure that it works correctly. For a detailed description of the responsibilities of a port maintainer, refer to link:{contributing}#maintain-port[The challenge for port maintainers].
[NOTE]
====
A maintainer volunteers to keep a port in good working order. Maintainers have the primary responsibility for their ports, but not exclusive ownership. Ports exist for the benefit of the community and, in reality, belong to the community. What this means is that people other than the maintainer can make changes to a port. Large changes to the Ports Collection might require changes to many ports. The FreeBSD Ports Management Team or members of other teams might modify ports to fix dependency issues or other problems, like a version bump for a shared library update.
Some types of fixes have "blanket approval" from the {portmgr}, allowing any committer to fix those categories of problems on any port. These fixes do not need approval from the maintainer.
Blanket approval for most ports applies to fixes like infrastructure changes, or trivial and _tested_ build and runtime fixes. The current list is available in link:{committers-guide}#ports-qa-misc-blanket-approval[Ports section of the Committer's Guide].
====
Other changes to the port will be sent to the maintainer for review and approval before being committed. If the maintainer does not respond to an update request after two weeks (excluding major public holidays), then that is considered a maintainer timeout, and the update can be made without explicit maintainer approval. If the maintainer does not respond within three months, or if there have been three consecutive timeouts, then that maintainer is considered absent without leave, and all of their ports can be assigned back to the pool. Exceptions to this are anything maintained by the {portmgr}, or the {security-officer}. No unauthorized commits may ever be made to ports maintained by those groups.
We reserve the right to modify the maintainer's submission to better match existing policies and style of the Ports Collection without explicit blessing from the submitter or the maintainer. Also, large infrastructural changes can result in a port being modified without the maintainer's consent. These kinds of changes will never affect the port's functionality.
The {portmgr} reserves the right to revoke or override anyone's maintainership for any reason, and the {security-officer} reserves the right to revoke or override maintainership for security reasons.
[[makefile-comment]]
== `COMMENT`
The comment is a one-line description of a port shown by `pkg info`. Please follow these rules when composing it:
. The COMMENT string should be 70 characters or less.
. Do _not_ include the package name or version number of the software.
. The comment must begin with a capital letter and end without a period.
. Do not start with an indefinite article (that is, A or An).
. Capitalize names such as Apache, JavaScript, or Perl.
. Use a serial comma for lists of words: "green, red, and blue."
. Check for spelling errors.
Here is an example:
[.programlisting]
....
COMMENT= Cat chasing a mouse all over the screen
....
The COMMENT variable immediately follows the MAINTAINER variable in the [.filename]#Makefile#.
[[licenses]]
== Licenses
Each port must document the license under which it is available. If it is not an OSI approved license it must also document any restrictions on redistribution.
[[licenses-license]]
=== `LICENSE`
A short name for the license, or the licenses if more than one license applies.
If it is one of the licenses listed in <<licenses-license-list>>, only `LICENSE_FILE` and `LICENSE_DISTFILES` variables can be set.
If this is a license that has not been defined in the ports framework (see <<licenses-license-list>>), the `LICENSE_PERMS` and `LICENSE_NAME` must be set, along with either `LICENSE_FILE` or `LICENSE_TEXT`. `LICENSE_DISTFILES` and `LICENSE_GROUPS` can also be set, but are not required.
The predefined licenses are shown in <<licenses-license-list>>. The current list is always available in [.filename]#Mk/bsd.licenses.db.mk#.
[[licenses-license-ex1]]
.Simplest Usage, Predefined Licenses
[example]
====
When the [.filename]#README# of some software says "This software is under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version." but does not provide the license file, use this:
[.programlisting]
....
LICENSE= LGPL21+
....
When the software provides the license file, use this:
[.programlisting]
....
LICENSE= LGPL21+
LICENSE_FILE= ${WRKSRC}/COPYING
....
====
For the predefined licenses, the default permissions are `dist-mirror dist-sell pkg-mirror pkg-sell auto-accept`.
[[licenses-license-list]]
.Predefined License List
[cols="1,1,1,1", frame="none", options="header"]
|===
| Short Name
| Name
| Group
| Permissions
|`AGPLv3`
|GNU Affero General Public License version 3
|`FSF GPL OSI`
|(default)
|`AGPLv3+`
|GNU Affero General Public License version 3 (or later)
|`FSF GPL OSI`
|(default)
|`APACHE10`
|Apache License 1.0
|`FSF`
|(default)
|`APACHE11`
|Apache License 1.1
|`FSF OSI`
|(default)
|`APACHE20`
|Apache License 2.0
|`FSF OSI`
|(default)
|`ART10`
|Artistic License version 1.0
|`OSI`
|(default)
|`ART20`
|Artistic License version 2.0
|`FSF GPL OSI`
|(default)
|`ARTPERL10`
|Artistic License (perl) version 1.0
|`OSI`
|(default)
|`BSD`
|BSD license Generic Version (deprecated)
|`FSF OSI COPYFREE`
|(default)
|`BSD2CLAUSE`
|BSD 2-clause "Simplified" License
|`FSF OSI COPYFREE`
|(default)
|`BSD3CLAUSE`
|BSD 3-clause "New" or "Revised" License
|`FSF OSI COPYFREE`
|(default)
|`BSD4CLAUSE`
|BSD 4-clause "Original" or "Old" License
|`FSF`
|(default)
|`BSL`
|Boost Software License
|`FSF OSI COPYFREE`
|(default)
|`CC-BY-1.0`
|Creative Commons Attribution 1.0
|
|(default)
|`CC-BY-2.0`
|Creative Commons Attribution 2.0
|
|(default)
|`CC-BY-2.5`
|Creative Commons Attribution 2.5
|
|(default)
|`CC-BY-3.0`
|Creative Commons Attribution 3.0
|
|(default)
|`CC-BY-4.0`
|Creative Commons Attribution 4.0
|
|(default)
|`CC-BY-NC-1.0`
|Creative Commons Attribution Non Commercial 1.0
|
|`dist-mirror``pkg-mirror``auto-accept`
|`CC-BY-NC-2.0`
|Creative Commons Attribution Non Commercial 2.0
|
|`dist-mirror``pkg-mirror``auto-accept`
|`CC-BY-NC-2.5`
|Creative Commons Attribution Non Commercial 2.5
|
|`dist-mirror``pkg-mirror``auto-accept`
|`CC-BY-NC-3.0`
|Creative Commons Attribution Non Commercial 3.0
|
|`dist-mirror``pkg-mirror``auto-accept`
|`CC-BY-NC-4.0`
|Creative Commons Attribution Non Commercial 4.0
|
|`dist-mirror``pkg-mirror``auto-accept`
|`CC-BY-NC-ND-1.0`
|Creative Commons Attribution Non Commercial No Derivatives 1.0
|
|`dist-mirror``pkg-mirror``auto-accept`
|`CC-BY-NC-ND-2.0`
|Creative Commons Attribution Non Commercial No Derivatives 2.0
|
|`dist-mirror``pkg-mirror``auto-accept`
|`CC-BY-NC-ND-2.5`
|Creative Commons Attribution Non Commercial No Derivatives 2.5
|
|`dist-mirror``pkg-mirror``auto-accept`
|`CC-BY-NC-ND-3.0`
|Creative Commons Attribution Non Commercial No Derivatives 3.0
|
|`dist-mirror``pkg-mirror``auto-accept`
|`CC-BY-NC-ND-4.0`
|Creative Commons Attribution Non Commercial No Derivatives 4.0
|
|`dist-mirror``pkg-mirror``auto-accept`
|`CC-BY-NC-SA-1.0`
|Creative Commons Attribution Non Commercial Share Alike 1.0
|
|`dist-mirror``pkg-mirror``auto-accept`
|`CC-BY-NC-SA-2.0`
|Creative Commons Attribution Non Commercial Share Alike 2.0
|
|`dist-mirror``pkg-mirror``auto-accept`
|`CC-BY-NC-SA-2.5`
|Creative Commons Attribution Non Commercial Share Alike 2.5
|
|`dist-mirror``pkg-mirror``auto-accept`
|`CC-BY-NC-SA-3.0`
|Creative Commons Attribution Non Commercial Share Alike 3.0
|
|`dist-mirror``pkg-mirror``auto-accept`
|`CC-BY-NC-SA-4.0`
|Creative Commons Attribution Non Commercial Share Alike 4.0
|
|`dist-mirror``pkg-mirror``auto-accept`
|`CC-BY-ND-1.0`
|Creative Commons Attribution No Derivatives 1.0
|
|(default)
|`CC-BY-ND-2.0`
|Creative Commons Attribution No Derivatives 2.0
|
|(default)
|`CC-BY-ND-2.5`
|Creative Commons Attribution No Derivatives 2.5
|
|(default)
|`CC-BY-ND-3.0`
|Creative Commons Attribution No Derivatives 3.0
|
|(default)
|`CC-BY-ND-4.0`
|Creative Commons Attribution No Derivatives 4.0
|
|(default)
|`CC-BY-SA-1.0`
|Creative Commons Attribution Share Alike 1.0
|
|(default)
|`CC-BY-SA-2.0`
|Creative Commons Attribution Share Alike 2.0
|
|(default)
|`CC-BY-SA-2.5`
|Creative Commons Attribution Share Alike 2.5
|
|(default)
|`CC-BY-SA-3.0`
|Creative Commons Attribution Share Alike 3.0
|
|(default)
|`CC-BY-SA-4.0`
|Creative Commons Attribution Share Alike 4.0
|
|(default)
|`CC0-1.0`
|Creative Commons Zero v1.0 Universal
|`FSF GPL COPYFREE`
|(default)
|`CDDL`
|Common Development and Distribution License
|`FSF OSI`
|(default)
|`CPAL-1.0`
|Common Public Attribution License
|`FSF OSI`
|(default)
|`ClArtistic`
|Clarified Artistic License
|`FSF GPL OSI`
|(default)
|`EPL`
|Eclipse Public License
|`FSF OSI`
|(default)
|`GFDL`
|GNU Free Documentation License
|`FSF`
|(default)
|`GMGPL`
|GNAT Modified General Public License
|`FSF GPL OSI`
|(default)
|`GPLv1`
|GNU General Public License version 1
|`FSF GPL OSI`
|(default)
|`GPLv1+`
|GNU General Public License version 1 (or later)
|`FSF GPL OSI`
|(default)
|`GPLv2`
|GNU General Public License version 2
|`FSF GPL OSI`
|(default)
|`GPLv2+`
|GNU General Public License version 2 (or later)
|`FSF GPL OSI`
|(default)
|`GPLv3`
|GNU General Public License version 3
|`FSF GPL OSI`
|(default)
|`GPLv3+`
|GNU General Public License version 3 (or later)
|`FSF GPL OSI`
|(default)
|`GPLv3RLE`
|GNU GPL version 3 Runtime Library Exception
|`FSF GPL OSI`
|(default)
|`GPLv3RLE+`
|GNU GPL version 3 Runtime Library Exception (or later)
|`FSF GPL OSI`
|(default)
|`ISCL`
|Internet Systems Consortium License
|`FSF GPL OSI COPYFREE`
|(default)
|`LGPL20`
|GNU Library General Public License version 2.0
|`FSF GPL OSI`
|(default)
|`LGPL20+`
|GNU Library General Public License version 2.0 (or later)
|`FSF GPL OSI`
|(default)
|`LGPL21`
|GNU Lesser General Public License version 2.1
|`FSF GPL OSI`
|(default)
|`LGPL21+`
|GNU Lesser General Public License version 2.1 (or later)
|`FSF GPL OSI`
|(default)
|`LGPL3`
|GNU Lesser General Public License version 3
|`FSF GPL OSI`
|(default)
|`LGPL3+`
|GNU Lesser General Public License version 3 (or later)
|`FSF GPL OSI`
|(default)
|`LPPL10`
|LaTeX Project Public License version 1.0
|`FSF OSI`
|`dist-mirror dist-sell`
|`LPPL11`
|LaTeX Project Public License version 1.1
|`FSF OSI`
|`dist-mirror dist-sell`
|`LPPL12`
|LaTeX Project Public License version 1.2
|`FSF OSI`
|`dist-mirror dist-sell`
|`LPPL13`
|LaTeX Project Public License version 1.3
|`FSF OSI`
|`dist-mirror dist-sell`
|`LPPL13a`
|LaTeX Project Public License version 1.3a
|`FSF OSI`
|`dist-mirror dist-sell`
|`LPPL13b`
|LaTeX Project Public License version 1.3b
|`FSF OSI`
|`dist-mirror dist-sell`
|`LPPL13c`
|LaTeX Project Public License version 1.3c
|`FSF OSI`
|`dist-mirror dist-sell`
|`MIT`
|MIT license / X11 license
|`COPYFREE FSF GPL OSI`
|(default)
|`MPL10`
|Mozilla Public License version 1.0
|`FSF OSI`
|(default)
|`MPL11`
|Mozilla Public License version 1.1
|`FSF OSI`
|(default)
|`MPL20`
|Mozilla Public License version 2.0
|`FSF OSI`
|(default)
|`NCSA`
|University of Illinois/NCSA Open Source License
|`COPYFREE FSF GPL OSI`
|(default)
|`NONE`
|No license specified
|
|`none`
|`OFL10`
|SIL Open Font License version 1.0 (http://scripts.sil.org/OFL)
|`FONTS`
|(default)
|`OFL11`
|SIL Open Font License version 1.1 (http://scripts.sil.org/OFL)
|`FONTS`
|(default)
|`OWL`
|Open Works License (owl.apotheon.org)
|`COPYFREE`
|(default)
|`OpenSSL`
|OpenSSL License
|`FSF`
|(default)
|`PD`
|Public Domain
|`GPL COPYFREE`
|(default)
|`PHP202`
|PHP License version 2.02
|`FSF OSI`
|(default)
|`PHP30`
|PHP License version 3.0
|`FSF OSI`
|(default)
|`PHP301`
|PHP License version 3.01
|`FSF OSI`
|(default)
|`PSFL`
|Python Software Foundation License
|`FSF GPL OSI`
|(default)
|`PostgreSQL`
|PostgreSQL License
|`FSF GPL OSI COPYFREE`
|(default)
|`RUBY`
|Ruby License
|`FSF`
|(default)
|`UNLICENSE`
|The Unlicense
|`COPYFREE FSF GPL`
|(default)
|`WTFPL`
|Do What the Fuck You Want To Public License version 2
|`GPL FSF COPYFREE`
|(default)
|`WTFPL1`
|Do What the Fuck You Want To Public License version 1
|`GPL FSF COPYFREE`
|(default)
|`ZLIB`
|zlib License
|`GPL FSF OSI`
|(default)
|`ZPL21`
|Zope Public License version 2.1
|`GPL OSI`
|(default)
|===
[[licenses-license_perms]]
=== `LICENSE_PERMS` and `LICENSE_PERMS_NAME`
The license permissions. Use `none` if empty.
.License Permissions List
[[licenses-license_perms-dist-mirror]]
`dist-mirror`::
Redistribution of the distribution files is permitted. The distribution files will be added to the FreeBSD `MASTER_SITE_BACKUP` CDN.
[[licenses-license_perms-no-dist-mirror]]
`no-dist-mirror`::
Redistribution of the distribution files is prohibited. This is equivalent to setting crossref:special[porting-restrictions-restricted,`RESTRICTED`]. The distribution files will _not_ be added to the FreeBSD `MASTER_SITE_BACKUP` CDN.
[[licenses-license_perms-dist-sell]]
`dist-sell`::
Selling of distribution files is permitted. The distribution files will be present on the installer images.
[[licenses-license_perms-no-dist-sell]]
`no-dist-sell`::
Selling of distribution files is prohibited. This is equivalent to setting crossref:special[porting-restrictions-no_cdrom,`NO_CDROM`].
[[licenses-license_perms-pkg-mirror]]
`pkg-mirror`::
Free redistribution of package is permitted. The package will be distributed on the FreeBSD package CDN https://pkg.freebsd.org/[https://pkg.freebsd.org/].
[[licenses-license_perms-no-pkg-mirror]]
`no-pkg-mirror`::
Free redistribution of package is prohibited. Equivalent to setting crossref:special[porting-restrictions-no_package,`NO_PACKAGE`]. The package will _not_ be distributed from the FreeBSD package CDN https://pkg.freebsd.org/[https://pkg.freebsd.org/].
[[licenses-license_perms-pkg-sell]]
`pkg-sell`::
Selling of package is permitted. The package will be present on the installer images.
[[licenses-license_perms-no-pkg-sell]]
`no-pkg-sell`::
Selling of package is prohibited. This is equivalent to setting crossref:special[porting-restrictions-no_cdrom,`NO_CDROM`]. The package will _not_ be present on the installer images.
[[licenses-license_perms-auto-accept]]
`auto-accept`::
License is accepted by default. Prompts to accept a license are not displayed unless the user has defined `LICENSES_ASK`. Use this unless the license states the user must accept the terms of the license.
[[licenses-license_perms-no-auto-accept]]
`no-auto-accept`::
License is not accepted by default. The user will always be asked to confirm the acceptance of this license. This must be used if the license states that the user must accept its terms.
When both `_permission_` and `no-_permission_` is present the `no-_permission_` will cancel `_permission_`.
When `_permission_` is not present, it is considered to be a `no-_permission_`.
[WARNING]
====
Some missing permissions will prevent a port (and all ports depending on it) from being usable by package users:
A port without the `auto-accept` permission will never be built and all the ports depending on it will be ignored.
A port without the `pkg-mirror` permission will be removed after the build, as well as all the ports depending on it, and they will never end up being distributed.
====
[[licenses-license_perms-ex1]]
.Nonstandard License
[example]
====
Read the terms of the license and translate those using the available permissions.
[.programlisting]
....
LICENSE= UNKNOWN
LICENSE_NAME= unknown
LICENSE_TEXT= This program is NOT in public domain.\
It can be freely distributed for non-commercial purposes only.
LICENSE_PERMS= dist-mirror no-dist-sell pkg-mirror no-pkg-sell auto-accept
....
====
[[licenses-license_perms-ex2]]
.Standard and Nonstandard Licenses
[example]
====
Read the terms of the license and express those using the available permissions. In case of doubt, please ask for guidance on the {freebsd-ports}.
[.programlisting]
....
LICENSE= WARSOW GPLv2
LICENSE_COMB= multi
LICENSE_NAME_WARSOW= Warsow Content License
LICENSE_FILE_WARSOW= ${WRKSRC}/docs/license.txt
LICENSE_PERMS_WARSOW= dist-mirror pkg-mirror auto-accept
....
When the permissions of the GPLv2 and the Warsow Content licenses are mixed (the `dist-sell` and `pkg-sell` permissions missing from the Warsow license count as `no-dist-sell` and `no-pkg-sell`), the port ends up with `dist-mirror dist-sell pkg-mirror pkg-sell auto-accept dist-mirror no-dist-sell pkg-mirror no-pkg-sell auto-accept`. The `no-_permissions_` cancel the _permissions_. The resulting list of permissions is _dist-mirror pkg-mirror auto-accept_. The distribution files and the packages will not be available on the installer images.
====
[[licenses-license_groups]]
=== `LICENSE_GROUPS` and `LICENSE_GROUPS_NAME`
Groups the license belongs to. A usage sketch follows the list below.
.Predefined License Groups List
[[licenses-license_groups-FSF]]
`FSF`::
Free Software Foundation Approved, see the http://www.fsf.org/licensing[FSF Licensing & Compliance Team].
[[licenses-license_groups-GPL]]
`GPL`::
GPL Compatible
[[licenses-license_groups-OSI]]
`OSI`::
OSI Approved, see the Open Source Initiative http://opensource.org/licenses[Open Source Licenses] page.
[[licenses-license_groups-COPYFREE]]
`COPYFREE`::
Comply with Copyfree Standard Definition, see the http://copyfree.org/standard/licenses[Copyfree Licenses] page.
[[licenses-license_groups-FONTS]]
`FONTS`::
Font licenses
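As a sketch only (the license name and file are hypothetical), a custom license that happens to be OSI approved could declare its group like this:
[.programlisting]
....
LICENSE= MYLICENSE
LICENSE_NAME= My Hypothetical License
LICENSE_FILE= ${WRKSRC}/LICENSE
LICENSE_PERMS= dist-mirror dist-sell pkg-mirror pkg-sell auto-accept
LICENSE_GROUPS= OSI
....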
[[licenses-license_name]]
=== `LICENSE_NAME` and `LICENSE_NAME_NAME`
Full name of the license.
[[licenses-license_name-ex1]]
.`LICENSE_NAME`
[example]
====
[.programlisting]
....
LICENSE= UNRAR
LICENSE_NAME= UnRAR License
LICENSE_FILE= ${WRKSRC}/license.txt
LICENSE_PERMS= dist-mirror dist-sell pkg-mirror pkg-sell auto-accept
....
====
[[licenses-license_file]]
=== `LICENSE_FILE` and `LICENSE_FILE_NAME`
Full path to the file containing the license text, usually [.filename]#${WRKSRC}/some/file#. If the file is not in the distfile, and its content is too long to be put in <<licenses-license_text,`LICENSE_TEXT`>>, put it in a new file in [.filename]#${FILESDIR}#.
[[licenses-license_file-ex1]]
.`LICENSE_FILE`
[example]
====
[.programlisting]
....
LICENSE= GPLv3+
LICENSE_FILE= ${WRKSRC}/COPYING
....
====
[[licenses-license_text]]
=== `LICENSE_TEXT` and `LICENSE_TEXT_NAME`
Text to use as a license. Useful when the license is not in the distribution files and its text is short.
[[licenses-license_text-ex1]]
.`LICENSE_TEXT`
[example]
====
[.programlisting]
....
LICENSE= UNKNOWN
LICENSE_NAME= unknown
LICENSE_TEXT= This program is NOT in public domain.\
It can be freely distributed for non-commercial purposes only,\
and THERE IS NO WARRANTY FOR THIS PROGRAM.
LICENSE_PERMS= dist-mirror no-dist-sell pkg-mirror no-pkg-sell auto-accept
....
====
[[licenses-license_distfiles]]
=== `LICENSE_DISTFILES` and `LICENSE_DISTFILES_NAME`
The distribution files to which the licenses apply. Defaults to all the distribution files.
[[licenses-license_distfiles-ex1]]
.`LICENSE_DISTFILES`
[example]
====
Used when the distribution files do not all have the same license. For example, one has a code license, and another has some artwork that cannot be redistributed:
[.programlisting]
....
MASTER_SITES= SF/some-game
DISTFILES= ${DISTNAME}${EXTRACT_SUFX} artwork.zip
LICENSE= BSD3CLAUSE ARTWORK
LICENSE_COMB= dual
LICENSE_NAME_ARTWORK= The game artwork license
LICENSE_TEXT_ARTWORK= The README says that the files cannot be redistributed
LICENSE_PERMS_ARTWORK= pkg-mirror pkg-sell auto-accept
LICENSE_DISTFILES_BSD3CLAUSE= ${DISTNAME}${EXTRACT_SUFX}
LICENSE_DISTFILES_ARTWORK= artwork.zip
....
====
[[licenses-license_comb]]
=== `LICENSE_COMB`
Set to `multi` if all licenses apply. Set to `dual` if any license applies. Defaults to `single`.
[[licenses-license_comb-ex1]]
.Dual Licenses
[example]
====
When a port says "This software may be distributed under the GNU General Public License or the Artistic License", it means that either license can be used. Use this:
[.programlisting]
....
LICENSE= ART10 GPLv1
LICENSE_COMB= dual
....
If license files are provided, use this:
[.programlisting]
....
LICENSE= ART10 GPLv1
LICENSE_COMB= dual
LICENSE_FILE_ART10= ${WRKSRC}/Artistic
LICENSE_FILE_GPLv1= ${WRKSRC}/Copying
....
====
[[licenses-license_comb-ex2]]
.Multiple Licenses
[example]
====
When part of a port has one license, and another part has a different license, use `multi`:
[.programlisting]
....
LICENSE= GPLv2 LGPL21+
LICENSE_COMB= multi
....
====
[[makefile-portscout]]
== `PORTSCOUT`
Portscout is an automated distfile check utility for the FreeBSD Ports Collection, described in detail in crossref:keeping-up[distfile-survey,Portscout: the FreeBSD Ports Distfile Scanner].
`PORTSCOUT` defines special conditions within which the Portscout distfile scanner is restricted.
Situations where `PORTSCOUT` is set include:
* When distfiles have to be ignored for specific versions. For example, to exclude version _8.2_ and version _8.3_ from distfile version checks because they are known to be broken, add:
+
[.programlisting]
....
PORTSCOUT= skipv:8.2,8.3
....
* When distfile version checks have to be disabled completely. For example, if a port is not going to be updated ever again, add:
+
[.programlisting]
....
PORTSCOUT= ignore:1
....
* When specific versions or specific major and minor revisions of a distfile must be checked. For example, if only version _0.6.4_ must be monitored because newer versions have compatibility issues with FreeBSD, add:
+
[.programlisting]
....
PORTSCOUT= limit:^0\.6\.4
....
* When URLs listing the available versions differ from the download URLs. For example, to limit distfile version checks to the download page for the package:databases/pgtune[] port, add:
+
[.programlisting]
....
PORTSCOUT= site:http://pgfoundry.org/frs/?group_id=1000416
....
[[makefile-depend]]
== Dependencies
Many ports depend on other ports. This is a very convenient feature of most Unix-like operating systems, including FreeBSD. Multiple ports can share a common dependency, rather than bundling that dependency with every port or package that needs it. There are seven variables that can be used to ensure that all the required bits will be on the user's machine. There are also some pre-supported dependency variables for common cases, plus a few more to control the behavior of dependencies.
[IMPORTANT]
====
When software has extra dependencies that provide extra features, the base dependencies listed in `*_DEPENDS` should include the extra dependencies that would benefit most users. The base dependencies should never be a "minimal" dependency set. The goal is not to include every dependency possible. Only include those that will benefit most people.
====
[[makefile-lib_depends]]
=== `LIB_DEPENDS`
This variable specifies the shared libraries this port depends on. It is a list of `_lib:dir_` tuples where `_lib_` is the name of the shared library and `_dir_` is the directory in which to find it in case it is not available. For example,
[.programlisting]
....
LIB_DEPENDS= libjpeg.so:graphics/jpeg
....
will check for a shared jpeg library with any version, and descend into the [.filename]#graphics/jpeg# subdirectory of the ports tree to build and install it if it is not found.
The dependency is checked twice, once from within the `build` target and then from within the `install` target. Also, the name of the dependency is put into the package so that `pkg install` (see man:pkg-install[8]) will automatically install it if it is not on the user's system.
[[makefile-run_depends]]
=== `RUN_DEPENDS`
This variable specifies executables or files this port depends on during run-time. It is a list of ``_path:dir_``[:``_target_``] tuples where `_path_` is the name of the executable or file, _dir_ is the directory in which to find it in case it is not available, and _target_ is the target to call in that directory. If _path_ starts with a slash (`/`), it is treated as a file and its existence is tested with `test -e`; otherwise, it is assumed to be an executable, and `which -s` is used to determine if the program exists in the search path.
For example,
[.programlisting]
....
RUN_DEPENDS= ${LOCALBASE}/news/bin/innd:news/inn \
xmlcatmgr:textproc/xmlcatmgr
....
will check if the file or directory [.filename]#/usr/local/news/bin/innd# exists, and build and install it from the [.filename]#news/inn# subdirectory of the ports tree if it is not found. It will also see if an executable called `xmlcatmgr` is in the search path, and descend into [.filename]#textproc/xmlcatmgr# to build and install it if it is not found.
[NOTE]
====
In this case, `innd` is actually an executable; if an executable is in a place that is not expected to be in the search path, use the full pathname.
====
[NOTE]
====
The official search `PATH` used on the ports build cluster is
[.programlisting]
....
/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin
....
====
The dependency is checked from within the `install` target. Also, the name of the dependency is put into the package so that `pkg install` (see man:pkg-install[8]) will automatically install it if it is not on the user's system. The _target_ part can be omitted if it is the same as `DEPENDS_TARGET`.
A quite common situation is when `RUN_DEPENDS` is literally the same as `BUILD_DEPENDS`, especially if ported software is written in a scripted language or if it requires the same build and run-time environment. In this case, it is both tempting and intuitive to directly assign one to the other:
[.programlisting]
....
RUN_DEPENDS= ${BUILD_DEPENDS}
....
However, such assignment can pollute run-time dependencies with entries not defined in the port's original `BUILD_DEPENDS`. This happens because of man:make[1]'s lazy evaluation of variable assignment. Consider a [.filename]#Makefile# with `USE_*`, which are processed by [.filename]#ports/Mk/bsd.*.mk# to augment initial build dependencies. For example, `USES= gmake` adds package:devel/gmake[] to `BUILD_DEPENDS`. To prevent such additional dependencies from polluting `RUN_DEPENDS`, create another variable with the current content of `BUILD_DEPENDS` and assign it to both `BUILD_DEPENDS` and `RUN_DEPENDS`:
[.programlisting]
....
MY_DEPENDS= some:devel/some \
other:lang/other
BUILD_DEPENDS= ${MY_DEPENDS}
RUN_DEPENDS= ${MY_DEPENDS}
....
[IMPORTANT]
====
_Do not_ use `:=` to assign `BUILD_DEPENDS` to `RUN_DEPENDS` or vice-versa. All variables are expanded immediately, which is exactly the wrong thing to do and almost always a failure.
====
[[makefile-build_depends]]
=== `BUILD_DEPENDS`
This variable specifies executables or files this port requires to build. Like `RUN_DEPENDS`, it is a list of ``_path:dir_``[:``_target_``] tuples. For example,
[.programlisting]
....
BUILD_DEPENDS= unzip:archivers/unzip
....
will check for an executable called `unzip`, and descend into the [.filename]#archivers/unzip# subdirectory of the ports tree to build and install it if it is not found.
[NOTE]
====
"build" here means everything from extraction to compilation. The dependency is checked from within the `extract` target. The _target_ part can be omitted if it is the same as `DEPENDS_TARGET`
====
[[makefile-fetch_depends]]
=== `FETCH_DEPENDS`
This variable specifies executables or files this port requires to fetch. Like the previous two, it is a list of ``_path:dir_``[:``_target_``] tuples. For example,
[.programlisting]
....
FETCH_DEPENDS= ncftp2:net/ncftp2
....
will check for an executable called `ncftp2`, and descend into the [.filename]#net/ncftp2# subdirectory of the ports tree to build and install it if it is not found.
The dependency is checked from within the `fetch` target. The _target_ part can be omitted if it is the same as `DEPENDS_TARGET`.
[[makefile-extract_depends]]
=== `EXTRACT_DEPENDS`
This variable specifies executables or files this port requires for extraction. Like the previous, it is a list of ``_path:dir_``[:``_target_``] tuples. For example,
[.programlisting]
....
EXTRACT_DEPENDS= unzip:archivers/unzip
....
will check for an executable called `unzip`, and descend into the [.filename]#archivers/unzip# subdirectory of the ports tree to build and install it if it is not found.
The dependency is checked from within the `extract` target. The _target_ part can be omitted if it is the same as `DEPENDS_TARGET`.
[NOTE]
====
Use this variable only if the extraction does not already work (the default assumes `tar`) and cannot be made to work using `USES=tar`, `USES=lha` or `USES=zip` described in crossref:uses[uses,Using `USES` Macros].
====
[[makefile-patch_depends]]
=== `PATCH_DEPENDS`
This variable specifies executables or files this port requires to patch. Like the previous, it is a list of ``_path:dir_``[:``_target_``] tuples. For example,
[.programlisting]
....
PATCH_DEPENDS= ${NONEXISTENT}:java/jfc:extract
....
will descend into the [.filename]#java/jfc# subdirectory of the ports tree to extract it.
The dependency is checked from within the `patch` target. The _target_ part can be omitted if it is the same as `DEPENDS_TARGET`.
[[makefile-uses]]
=== `USES`
Parameters can be added to define different features and dependencies used by the port. They are specified by adding this line to the [.filename]#Makefile#:
[.programlisting]
....
USES= feature[:arguments]
....
For the complete list of values, please see crossref:uses[uses,Using `USES` Macros].
[WARNING]
====
`USES` cannot be assigned after inclusion of [.filename]#bsd.port.pre.mk#.
====
[[makefile-use-vars]]
=== `USE_*`
Several variables exist to define common dependencies shared by many ports. Their use is optional, but helps to reduce the verbosity of the port [.filename]##Makefile##s. Each of them is styled as `USE_*`. These variables may be used only in the port [.filename]##Makefile##s and [.filename]#ports/Mk/bsd.*.mk#. They are not meant for user-settable options - use `PORT_OPTIONS` for that purpose.
[NOTE]
====
It is _always_ incorrect to set any `USE_*` in [.filename]#/etc/make.conf#. For instance, setting
[.programlisting]
....
USE_GCC=X.Y
....
(where X.Y is a version number) would add a dependency on gccXY for every port, including `lang/gccXY` itself!
====
[[makefile-use-vars-table]]
.`USE_*`
[cols="1,1", frame="none", options="header"]
|===
| Variable
| Means
|`USE_GCC`
a|
The port requires GCC (`gcc` or `{g-plus-plus}`) to build. Some ports need any GCC version, some require modern, recent versions. It is typically set to `any` (in this case, GCC from base would be used on versions of FreeBSD that still have it, or `lang/gcc` port would be installed when default C/C++ compiler is Clang); or `yes` (means always use stable, modern GCC from `lang/gcc` port). The exact version can also be specified, with a value such as `4.7`. The minimal required version can be specified as `4.6+`. The GCC from the base system is used when it satisfies the requested version, otherwise an appropriate compiler is built from the port, and `CC` and `CXX` are adjusted accordingly.
[NOTE]
====
`USE_GCC` will register a build-time and a run-time dependency.
====
|===
Variables related to gmake and [.filename]#configure# are described in crossref:special[building,Building Mechanisms], while autoconf, automake and libtool are described in crossref:special[using-autotools,Using GNU Autotools]. Perl related variables are described in crossref:special[using-perl,Using Perl]. X11 variables are listed in crossref:special[using-x11,Using X11]. crossref:special[using-gnome,Using Gnome] deals with GNOME and crossref:special[using-kde,Using KDE] with KDE related variables. crossref:special[using-java,Using Java] documents Java variables, while crossref:special[using-php,Web Applications, Apache and PHP] contains information on Apache, PHP and PEAR modules. Python is discussed in crossref:special[using-python,Using Python], while Ruby in crossref:special[using-ruby,Using Ruby]. crossref:special[using-sdl,Using SDL] provides variables used for SDL applications and finally, crossref:special[using-xfce,Using Xfce] contains information on Xfce.
[[makefile-version-dependency]]
=== Minimal Version of a Dependency
A minimal version of a dependency can be specified in any `*_DEPENDS` except `LIB_DEPENDS` using this syntax:
[.programlisting]
....
p5-Spiffy>=0.26:devel/p5-Spiffy
....
The first field contains the name of the dependent package (which must match the entry in the package database), a comparison sign, and a package version. The dependency is satisfied if p5-Spiffy-0.26 or newer is installed on the machine.
[[makefile-note-on-dependencies]]
=== Notes on Dependencies
As mentioned above, the default target to call when a dependency is required is `DEPENDS_TARGET`. It defaults to `install`. This is a user variable; it is never defined in a port's [.filename]#Makefile#. If the port needs a special way to handle a dependency, use the `:target` part of `*_DEPENDS` instead of redefining `DEPENDS_TARGET`.
When running `make clean`, the port dependencies are automatically cleaned too. If this is not desirable, define `NOCLEANDEPENDS` in the environment. This may be particularly desirable if the port has something that takes a long time to rebuild in its dependency list, such as KDE, GNOME or Mozilla.
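For example, to clean a port while leaving its dependencies alone, `NOCLEANDEPENDS` can be set for a single invocation (a minimal sketch):
[source,shell]
....
% env NOCLEANDEPENDS=yes make clean
....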
To depend on another port unconditionally, use the variable `${NONEXISTENT}` as the first field of `BUILD_DEPENDS` or `RUN_DEPENDS`. Use this only when the source of the other port is needed. Compilation time can be saved by specifying the target too. For instance
[.programlisting]
....
BUILD_DEPENDS= ${NONEXISTENT}:graphics/jpeg:extract
....
will always descend to the `jpeg` port and extract it.
[[makefile-circular-dependencies]]
=== Circular Dependencies Are Fatal
[IMPORTANT]
====
Do not introduce any circular dependencies into the ports tree!
====
The ports building technology does not tolerate circular dependencies. If one is introduced, someone, somewhere in the world, will have their FreeBSD installation broken almost immediately, with many others quickly to follow. Circular dependencies can be really hard to detect. If in doubt, before making that change, run `cd /usr/ports; make index`. That process can be quite slow on older machines, but it may save a large number of people, including yourself, a lot of grief.
[[makefile-automatic-dependencies]]
=== Problems Caused by Automatic Dependencies
Dependencies must be declared either explicitly or by using the <<makefile-options,OPTIONS framework>>. Using other methods like automatic detection complicates indexing, which causes problems for port and package management.
[[makefile-automatic-dependencies-bad]]
.Wrong Declaration of an Optional Dependency
[example]
====
[.programlisting]
....
.include <bsd.port.pre.mk>
.if exists(${LOCALBASE}/bin/foo)
LIB_DEPENDS= libbar.so:foo/bar
.endif
....
====
The problem with trying to automatically add dependencies is that files and settings outside an individual port can change at any time. For example: an index is built, then a batch of ports are installed. But one of the ports installs the tested file. The index is now incorrect, because an installed port unexpectedly has a new dependency. The index may still be wrong even after rebuilding if other ports also determine their need for dependencies based on the existence of other files.
[[makefile-automatic-dependencies-good]]
.Correct Declaration of an Optional Dependency
[example]
====
[.programlisting]
....
OPTIONS_DEFINE= BAR
BAR_DESC= Calling cellphones via bar
BAR_LIB_DEPENDS= libbar.so:foo/bar
....
====
Testing option variables is the correct method. It will not cause inconsistencies in the index of a batch of ports, provided the options were defined prior to the index build. Simple scripts can then be used to automate the building, installation, and updating of these ports and their packages.
[[makefile-masterdir]]
== Slave Ports and `MASTERDIR`
If the port needs to build slightly different versions of packages by having a variable (for instance, resolution, or paper size) take different values, create one subdirectory per package to make it easier for users to see what to do, but try to share as many files as possible between ports. Typically, by using variables cleverly, only a very short [.filename]#Makefile# is needed in all but one of the directories. In the sole [.filename]#Makefile#, use `MASTERDIR` to specify the directory where the rest of the files are. Also, use a variable as part of <<porting-pkgname,`PKGNAMESUFFIX`>> so the packages will have different names.
This will be best demonstrated by an example. This is part of [.filename]#print/pkfonts300/Makefile#;
[.programlisting]
....
PORTNAME= pkfonts${RESOLUTION}
PORTVERSION= 1.0
DISTFILES= pk${RESOLUTION}.tar.gz
PLIST= ${PKGDIR}/pkg-plist.${RESOLUTION}
.if !defined(RESOLUTION)
RESOLUTION= 300
.else
.if ${RESOLUTION} != 118 && ${RESOLUTION} != 240 && \
${RESOLUTION} != 300 && ${RESOLUTION} != 360 && \
${RESOLUTION} != 400 && ${RESOLUTION} != 600
.BEGIN:
@${ECHO_MSG} "Error: invalid value for RESOLUTION: \"${RESOLUTION}\""
@${ECHO_MSG} "Possible values are: 118, 240, 300, 360, 400 and 600."
@${FALSE}
.endif
.endif
....
package:print/pkfonts300[] also has all the regular patches, package files, etc. Running `make` there, it will take the default value for the resolution (300) and build the port normally.
As for other resolutions, this is the _entire_ [.filename]#print/pkfonts360/Makefile#:
[.programlisting]
....
RESOLUTION= 360
MASTERDIR= ${.CURDIR}/../pkfonts300
.include "${MASTERDIR}/Makefile"
....
([.filename]#print/pkfonts118/Makefile#, [.filename]#print/pkfonts600/Makefile#, and all the others are similar). The `MASTERDIR` definition tells [.filename]#bsd.port.mk# that the regular set of subdirectories like `FILESDIR` and `SCRIPTDIR` are to be found under [.filename]#pkfonts300#. The `RESOLUTION=360` line will override the `RESOLUTION=300` line in [.filename]#pkfonts300/Makefile# and the port will be built with the resolution set to 360.
[[makefile-manpages]]
== Man Pages
If the port anchors its man tree somewhere other than `PREFIX`, use `MANDIRS` to specify those directories. Note that the files corresponding to manual pages must be placed in [.filename]#pkg-plist# along with the rest of the files. The purpose of `MANDIRS` is to enable automatic compression of manual pages, therefore the file names are suffixed with [.filename]#.gz#.
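For example, a hypothetical port that keeps its manual pages under a private tree such as [.filename]#PREFIX/foo/man# could declare:
[.programlisting]
....
MANDIRS+=	${PREFIX}/foo/man
....
and list each page in [.filename]#pkg-plist# with the [.filename]#.gz# suffix, for instance [.filename]#foo/man/man1/foo.1.gz#.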
[[makefile-info]]
== Info Files
If the package needs to install GNU info files, list them in `INFO` (without the trailing `.info`), one entry per document. These files are assumed to be installed to [.filename]#PREFIX/INFO_PATH#. Change `INFO_PATH` if the package uses a different location. However, this is not recommended. These entries contain just the path relative to [.filename]#PREFIX/INFO_PATH#. For example, package:lang/gcc34[] installs info files to [.filename]#PREFIX/INFO_PATH/gcc34#, and `INFO` will be something like this:
[.programlisting]
....
INFO= gcc34/cpp gcc34/cppinternals gcc34/g77 ...
....
Appropriate installation/de-installation code will be automatically added to the temporary [.filename]#pkg-plist# before package registration.
[[makefile-options]]
== Makefile Options
Many applications can be built with optional or differing configurations. Examples include choice of natural (human) language, GUI versus command-line, or type of database to support. Users may need a different configuration than the default, so the ports system provides hooks the port author can use to control which variant will be built. Supporting these options properly will make users happy, and effectively provide two or more ports for the price of one.
[[makefile-options-options]]
=== `OPTIONS`
[[makefile-options-background]]
==== Background
`OPTIONS_*` give the user installing the port a dialog showing the available options, and then save those options to [.filename]#${PORT_DBDIR}/${OPTIONS_NAME}/options#. The next time the port is built, the options are reused. `PORT_DBDIR` defaults to [.filename]#/var/db/ports#. `OPTIONS_NAME` is the port origin with an underscore as the space separator, for example, for package:dns/bind99[] it will be `dns_bind99`.
When the user runs `make config` (or runs `make build` for the first time), the framework checks for [.filename]#${PORT_DBDIR}/${OPTIONS_NAME}/options#. If that file does not exist, the values of `OPTIONS_*` are used, and a dialog box is displayed where the options can be enabled or disabled. Then [.filename]#options# is saved and the configured variables are used when building the port.
If a new version of the port adds new `OPTIONS`, the dialog will be presented to the user with the saved values of old `OPTIONS` prefilled.
`make showconfig` shows the saved configuration. Use `make rmconfig` to remove the saved configuration.
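For example, from the port's directory (the port chosen here is only illustrative):
[source,shell]
....
% cd /usr/ports/dns/bind99
% make showconfig
% make rmconfig
....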
[[makefile-options-syntax]]
==== Syntax
`OPTIONS_DEFINE` contains a list of `OPTIONS` to be used. These are independent of each other and are not grouped:
[.programlisting]
....
OPTIONS_DEFINE= OPT1 OPT2
....
Once defined, `OPTIONS` are described (optional, but strongly recommended):
[.programlisting]
....
OPT1_DESC= Describe OPT1
OPT2_DESC= Describe OPT2
OPT3_DESC= Describe OPT3
OPT4_DESC= Describe OPT4
OPT5_DESC= Describe OPT5
OPT6_DESC= Describe OPT6
....
[.filename]#ports/Mk/bsd.options.desc.mk# has descriptions for many common `OPTIONS`. While often useful, override them if the description is insufficient for the port.
[TIP]
====
When describing options, view them from the perspective of the user: "What functionality does it change?" and "Why would I want to enable this?" Do not just repeat the name. For example, describing the `NLS` option as "include NLS support" does not help the user, who can already see the option name but may not know what it means. Describing it as "Native Language Support via gettext utilities" is much more helpful.
====
[IMPORTANT]
====
Option names are always in all uppercase. They cannot use mixed case or lowercase.
====
`OPTIONS` can be grouped as radio choices, where only one choice from each group is allowed:
[.programlisting]
....
OPTIONS_SINGLE= SG1
OPTIONS_SINGLE_SG1= OPT3 OPT4
....
[WARNING]
====
One option from each `OPTIONS_SINGLE` group _must_ be selected at all times for the options to be valid. One option of each group _must_ be added to `OPTIONS_DEFAULT`.
====
`OPTIONS` can be grouped as radio choices, where none or only one choice from each group is allowed:
[.programlisting]
....
OPTIONS_RADIO= RG1
OPTIONS_RADIO_RG1= OPT7 OPT8
....
`OPTIONS` can also be grouped as "multiple-choice" lists, where _at least one_ option must be enabled:
[.programlisting]
....
OPTIONS_MULTI= MG1
OPTIONS_MULTI_MG1= OPT5 OPT6
....
`OPTIONS` can also be grouped as "multiple-choice" lists, where none or any option can be enabled:
[.programlisting]
....
OPTIONS_GROUP= GG1
OPTIONS_GROUP_GG1= OPT9 OPT10
....
`OPTIONS` are unset by default, unless they are listed in `OPTIONS_DEFAULT`:
[.programlisting]
....
OPTIONS_DEFAULT= OPT1 OPT3 OPT6
....
`OPTIONS` definitions must appear before the inclusion of [.filename]#bsd.port.options.mk#. `PORT_OPTIONS` values can only be tested after the inclusion of [.filename]#bsd.port.options.mk#. Inclusion of [.filename]#bsd.port.pre.mk# can be used instead, too, and is still widely used in ports written before the introduction of [.filename]#bsd.port.options.mk#. But be aware that some variables will not work as expected after the inclusion of [.filename]#bsd.port.pre.mk#, typically some `USE_*` flags.
[[ports-options-simple-use]]
.Simple Use of `OPTIONS`
[example]
====
[.programlisting]
....
OPTIONS_DEFINE= FOO BAR
OPTIONS_DEFAULT=FOO
FOO_DESC= Option foo support
BAR_DESC= Feature bar support
# Will add --with-foo / --without-foo
FOO_CONFIGURE_WITH= foo
BAR_RUN_DEPENDS= bar:bar/bar
.include <bsd.port.mk>
....
====
[[ports-options-check-unset]]
.Check for Unset Port `OPTIONS`
[example]
====
[.programlisting]
....
.if ! ${PORT_OPTIONS:MEXAMPLES}
CONFIGURE_ARGS+=--without-examples
.endif
....
The form shown above is discouraged. The preferred method is using a configure knob to really enable and disable the feature to match the option:
[.programlisting]
....
# Will add --with-examples / --without-examples
EXAMPLES_CONFIGURE_WITH= examples
....
====
[[ports-options-practical-use]]
.Practical Use of `OPTIONS`
[example]
====
[.programlisting]
....
OPTIONS_DEFINE= EXAMPLES
OPTIONS_DEFAULT= PGSQL LDAP SSL
OPTIONS_SINGLE= BACKEND
OPTIONS_SINGLE_BACKEND= MYSQL PGSQL BDB
OPTIONS_MULTI= AUTH
OPTIONS_MULTI_AUTH= LDAP PAM SSL
EXAMPLES_DESC= Install extra examples
MYSQL_DESC= Use MySQL as backend
PGSQL_DESC= Use PostgreSQL as backend
BDB_DESC= Use Berkeley DB as backend
LDAP_DESC= Build with LDAP authentication support
PAM_DESC= Build with PAM support
SSL_DESC= Build with OpenSSL support
# Will add USE_PGSQL=yes
PGSQL_USE= pgsql=yes
# Will add --enable-postgres / --disable-postgres
PGSQL_CONFIGURE_ENABLE= postgres
ICU_LIB_DEPENDS= libicuuc.so:devel/icu
# Will add --with-examples / --without-examples
EXAMPLES_CONFIGURE_WITH= examples
# Check other OPTIONS
.include <bsd.port.mk>
....
====
[[makefile-options-default]]
==== Default Options
These options are always on by default.
* `DOCS` - build and install documentation.
* `NLS` - Native Language Support.
* `EXAMPLES` - build and install examples.
* `IPV6` - IPv6 protocol support.
[NOTE]
====
There is no need to add these to `OPTIONS_DEFAULT`. To have them active, and show up in the options selection dialog, however, they must be added to `OPTIONS_DEFINE`.
====
[[makefile-options-auto-activation]]
=== Feature Auto-Activation
When using a GNU configure script, keep an eye on which optional features are activated by auto-detection. Explicitly disable optional features that are not needed by adding `--without-xxx` or `--disable-xxx` in `CONFIGURE_ARGS`.
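For example (these knob names are purely illustrative; consult the software's `configure --help` output for the actual ones):
[.programlisting]
....
CONFIGURE_ARGS+=	--without-examples --disable-x11
....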
[[makefile-options-auto-activation-bad]]
.Wrong Handling of an Option
[example]
====
[.programlisting]
....
.if ${PORT_OPTIONS:MFOO}
LIB_DEPENDS+= libfoo.so:devel/foo
CONFIGURE_ARGS+= --enable-foo
.endif
....
====
In the example above, imagine a library libfoo is installed on the system. The user does not want this application to use libfoo, so he toggled the option off in the `make config` dialog. But the application's configure script detects the library present in the system and includes its support in the resulting executable. Now when the user decides to remove libfoo from the system, the ports system does not protest (no dependency on libfoo was recorded) but the application breaks.
[[makefile-options-auto-activation-good]]
.Correct Handling of an Option
[example]
====
[.programlisting]
....
FOO_LIB_DEPENDS= libfoo.so:devel/foo
# Will add --enable-foo / --disable-foo
FOO_CONFIGURE_ENABLE= foo
....
====
[NOTE]
====
Under some circumstances, the shorthand conditional syntax can cause problems with complex constructs. The errors are usually `Malformed conditional`; in that case, an alternative syntax can be used.
[.programlisting]
....
.if !empty(VARIABLE:MVALUE)
....
as an alternative to
[.programlisting]
....
.if ${VARIABLE:MVALUE}
....
====
[[options-helpers]]
=== Options Helpers
There are some macros to help simplify conditional values which differ based on the options set. For easier access, a comprehensive list is provided:
`PLIST_SUB`, `SUB_LIST`::
For automatic `%%_OPT_%%` and `%%NO__OPT__%%` generation, see <<options_sub>>.
+
For more complex usage, see <<options-variables>>.
`CONFIGURE_ARGS`::
For `--enable-_x_` and `--disable-_x_`, see <<options-configure_enable>>.
+
For `--with-_x_` and `--without-_x_`, see <<options-configure_with>>.
+
For all other cases, see <<options-configure_on>>.
`CMAKE_ARGS`::
For arguments that are booleans (`on`, `off`, `true`, `false`, `0`, `1`) see <<options-cmake_bool>>.
+
For all other cases, see <<options-cmake_on>>.
`MESON_ARGS`::
For arguments that take `true` or `false`, see <<options-meson_true>>.
+
For arguments that take `yes` or `no`, use <<options-meson_yes>>.
+
For arguments that take `enabled` or `disabled`, see <<options-meson_enabled>>.
+
For all other cases, use <<options-meson_on>>.
`QMAKE_ARGS`::
See <<options-qmake_on>>.
`USE_*`::
See <<options-use>>.
`*_DEPENDS`::
See <<options-dependencies>>.
`*` (Any variable)::
The most used variables have direct helpers, see <<options-variables>>.
+
For any variable without a specific helper, see <<options-vars>>.
Options dependencies::
When an option needs another option to work, see <<options-implies>>.
Options conflicts::
When an option cannot work if another is also enabled, see <<options-prevents>>.
Build targets::
When an option needs some extra processing, see <<options-targets>>.
[[options_sub]]
==== `OPTIONS_SUB`
If `OPTIONS_SUB` is set to `yes` then each of the options added to `OPTIONS_DEFINE` will be added to `PLIST_SUB` and `SUB_LIST`, for example:
[.programlisting]
....
OPTIONS_DEFINE= OPT1
OPTIONS_SUB= yes
....
is equivalent to:
[.programlisting]
....
OPTIONS_DEFINE= OPT1
.include <bsd.port.options.mk>
.if ${PORT_OPTIONS:MOPT1}
PLIST_SUB+= OPT1="" NO_OPT1="@comment "
SUB_LIST+= OPT1="" NO_OPT1="@comment "
.else
PLIST_SUB+= OPT1="@comment " NO_OPT1=""
SUB_LIST+= OPT1="@comment " NO_OPT1=""
.endif
....
[NOTE]
====
The value of `OPTIONS_SUB` is ignored. Setting it to any value will add `PLIST_SUB` and `SUB_LIST` entries for _all_ options.
====
[[options-use]]
==== `OPT_USE` and `OPT_USE_OFF`
When option _OPT_ is selected, for each `_key=value_` pair in ``OPT_USE``, _value_ is appended to the corresponding `USE_KEY`. If _value_ has spaces in it, replace them with commas and they will be changed back to spaces during processing. `OPT_USE_OFF` works the same way, but when `OPT` is _not_ selected. For example:
[.programlisting]
....
OPTIONS_DEFINE= OPT1
OPT1_USES= xorg
OPT1_USE= mysql=yes xorg=x11,xextproto,xext,xrandr
OPT1_USE_OFF= openssl=yes
....
is equivalent to:
[.programlisting]
....
OPTIONS_DEFINE= OPT1
.include <bsd.port.options.mk>
.if ${PORT_OPTIONS:MOPT1}
USE_MYSQL= yes
USES+= xorg
USE_XORG= x11 xextproto xext xrandr
.else
USE_OPENSSL= yes
.endif
....
[[options-configure-helpers]]
==== `CONFIGURE_ARGS` Helpers
[[options-configure_enable]]
===== `OPT_CONFIGURE_ENABLE`
When option _OPT_ is selected, for each _entry_ in `OPT_CONFIGURE_ENABLE`, `--enable-_entry_` is appended to `CONFIGURE_ARGS`. When option _OPT_ is _not_ selected, `--disable-_entry_` is appended to `CONFIGURE_ARGS`. An optional argument can be specified with an `=` symbol. This argument is only appended to the `--enable-_entry_` configure option. For example:
[.programlisting]
....
OPTIONS_DEFINE= OPT1 OPT2
OPT1_CONFIGURE_ENABLE= test1 test2
OPT2_CONFIGURE_ENABLE= test2=exhaustive
....
is equivalent to:
[.programlisting]
....
OPTIONS_DEFINE= OPT1 OPT2
.include <bsd.port.options.mk>
.if ${PORT_OPTIONS:MOPT1}
CONFIGURE_ARGS+= --enable-test1 --enable-test2
.else
CONFIGURE_ARGS+= --disable-test1 --disable-test2
.endif
.if ${PORT_OPTIONS:MOPT2}
CONFIGURE_ARGS+= --enable-test2=exhaustive
.else
CONFIGURE_ARGS+= --disable-test2
.endif
....
[[options-configure_with]]
===== `OPT_CONFIGURE_WITH`
When option _OPT_ is selected, for each _entry_ in `OPT_CONFIGURE_WITH`, `--with-_entry_` is appended to `CONFIGURE_ARGS`. When option _OPT_ is _not_ selected, `--without-_entry_` is appended to `CONFIGURE_ARGS`. An optional argument can be specified with an `=` symbol. This argument is only appended to the `--with-_entry_` configure option. For example:
[.programlisting]
....
OPTIONS_DEFINE= OPT1 OPT2
OPT1_CONFIGURE_WITH= test1
OPT2_CONFIGURE_WITH= test2=exhaustive
....
is equivalent to:
[.programlisting]
....
OPTIONS_DEFINE= OPT1 OPT2
.include <bsd.port.options.mk>
.if ${PORT_OPTIONS:MOPT1}
CONFIGURE_ARGS+= --with-test1
.else
CONFIGURE_ARGS+= --without-test1
.endif
.if ${PORT_OPTIONS:MOPT2}
CONFIGURE_ARGS+= --with-test2=exhaustive
.else
CONFIGURE_ARGS+= --without-test2
.endif
....
[[options-configure_on]]
===== `OPT_CONFIGURE_ON` and `OPT_CONFIGURE_OFF`
When option _OPT_ is selected, the value of `OPT_CONFIGURE_ON`, if defined, is appended to `CONFIGURE_ARGS`. `OPT_CONFIGURE_OFF` works the same way, but when `OPT` is _not_ selected. For example:
[.programlisting]
....
OPTIONS_DEFINE= OPT1
OPT1_CONFIGURE_ON= --add-test
OPT1_CONFIGURE_OFF= --no-test
....
is equivalent to:
[.programlisting]
....
OPTIONS_DEFINE= OPT1
.include <bsd.port.options.mk>
.if ${PORT_OPTIONS:MOPT1}
CONFIGURE_ARGS+= --add-test
.else
CONFIGURE_ARGS+= --no-test
.endif
....
[TIP]
====
Most of the time, the helpers in <<options-configure_enable>> and <<options-configure_with>> provide a shorter and more comprehensive functionality.
====
[[options-cmake-helpers]]
==== `CMAKE_ARGS` Helpers
[[options-cmake_on]]
===== `OPT_CMAKE_ON` and `OPT_CMAKE_OFF`
When option _OPT_ is selected, the value of `OPT_CMAKE_ON`, if defined, is appended to `CMAKE_ARGS`. `OPT_CMAKE_OFF` works the same way, but when `OPT` is _not_ selected. For example:
[.programlisting]
....
OPTIONS_DEFINE= OPT1
OPT1_CMAKE_ON= -DTEST:BOOL=true -DDEBUG:BOOL=true
OPT1_CMAKE_OFF= -DOPTIMIZE:BOOL=true
....
is equivalent to:
[.programlisting]
....
OPTIONS_DEFINE= OPT1
.include <bsd.port.options.mk>
.if ${PORT_OPTIONS:MOPT1}
CMAKE_ARGS+= -DTEST:BOOL=true -DDEBUG:BOOL=true
.else
CMAKE_ARGS+= -DOPTIMIZE:BOOL=true
.endif
....
[TIP]
====
See <<options-cmake_bool>> for a shorter helper when the value is boolean.
====
[[options-cmake_bool]]
===== `OPT_CMAKE_BOOL` and `OPT_CMAKE_BOOL_OFF`
When option _OPT_ is selected, for each _entry_ in `OPT_CMAKE_BOOL`, `-D_entry_:BOOL=true` is appended to `CMAKE_ARGS`. When option _OPT_ is _not_ selected, `-D_entry_:BOOL=false` is appended to `CMAKE_ARGS`. `OPT_CMAKE_BOOL_OFF` is the opposite, `-D_entry_:BOOL=false` is appended to `CMAKE_ARGS` when the option is selected, and `-D_entry_:BOOL=true` when the option is _not_ selected. For example:
[.programlisting]
....
OPTIONS_DEFINE= OPT1
OPT1_CMAKE_BOOL= TEST DEBUG
OPT1_CMAKE_BOOL_OFF= OPTIMIZE
....
is equivalent to:
[.programlisting]
....
OPTIONS_DEFINE= OPT1
.include <bsd.port.options.mk>
.if ${PORT_OPTIONS:MOPT1}
CMAKE_ARGS+= -DTEST:BOOL=true -DDEBUG:BOOL=true \
-DOPTIMIZE:BOOL=false
.else
CMAKE_ARGS+= -DTEST:BOOL=false -DDEBUG:BOOL=false \
-DOPTIMIZE:BOOL=true
.endif
....
[[options-meson-helpers]]
==== `MESON_ARGS` Helpers
[[options-meson_on]]
===== `OPT_MESON_ON` and `OPT_MESON_OFF`
When option _OPT_ is selected, the value of `OPT_MESON_ON`, if defined, is appended to `MESON_ARGS`. `OPT_MESON_OFF` works the same way, but when `OPT` is _not_ selected. For example:
[.programlisting]
....
OPTIONS_DEFINE= OPT1
OPT1_MESON_ON= -Dopt=1
OPT1_MESON_OFF= -Dopt=2
....
is equivalent to:
[.programlisting]
....
OPTIONS_DEFINE= OPT1
.include <bsd.port.options.mk>
.if ${PORT_OPTIONS:MOPT1}
MESON_ARGS+= -Dopt=1
.else
MESON_ARGS+= -Dopt=2
.endif
....
[[options-meson_true]]
===== `OPT_MESON_TRUE` and `OPT_MESON_FALSE`
When option _OPT_ is selected, for each _entry_ in `OPT_MESON_TRUE`, `-D_entry_=true` is appended to `MESON_ARGS`. When option _OPT_ is _not_ selected, `-D_entry_=false` is appended to `MESON_ARGS`. `OPT_MESON_FALSE` is the opposite, `-D_entry_=false` is appended to `MESON_ARGS` when the option is selected, and `-D_entry_=true` when the option is _not_ selected. For example:
[.programlisting]
....
OPTIONS_DEFINE= OPT1
OPT1_MESON_TRUE= test debug
OPT1_MESON_FALSE= optimize
....
is equivalent to:
[.programlisting]
....
OPTIONS_DEFINE= OPT1
.include <bsd.port.options.mk>
.if ${PORT_OPTIONS:MOPT1}
MESON_ARGS+= -Dtest=true -Ddebug=true \
-Doptimize=false
.else
MESON_ARGS+= -Dtest=false -Ddebug=false \
-Doptimize=true
.endif
....
[[options-meson_yes]]
===== `OPT_MESON_YES` and `OPT_MESON_NO`
When option _OPT_ is selected, for each _entry_ in `OPT_MESON_YES`, `-D_entry_=yes` is appended to `MESON_ARGS`. When option _OPT_ is _not_ selected, `-D_entry_=no` is appended to `MESON_ARGS`. `OPT_MESON_NO` is the opposite, `-D_entry_=no` is appended to `MESON_ARGS` when the option is selected, and `-D_entry_=yes` when the option is _not_ selected. For example:
[.programlisting]
....
OPTIONS_DEFINE= OPT1
OPT1_MESON_YES= test debug
OPT1_MESON_NO= optimize
....
is equivalent to:
[.programlisting]
....
OPTIONS_DEFINE= OPT1
.include <bsd.port.options.mk>
.if ${PORT_OPTIONS:MOPT1}
MESON_ARGS+= -Dtest=yes -Ddebug=yes \
-Doptimize=no
.else
MESON_ARGS+= -Dtest=no -Ddebug=no \
-Doptimize=yes
.endif
....
[[options-meson_enabled]]
===== `OPT_MESON_ENABLED` and `OPT_MESON_DISABLED`
When option _OPT_ is selected, for each _entry_ in `OPT_MESON_ENABLED`, `-D_entry_=enabled` is appended to `MESON_ARGS`. When option _OPT_ is _not_ selected, `-D_entry_=disabled` is appended to `MESON_ARGS`. `OPT_MESON_DISABLED` is the opposite, `-D_entry_=disabled` is appended to `MESON_ARGS` when the option is selected, and `-D_entry_=enabled` when the option is _not_ selected. For example:
[.programlisting]
....
OPTIONS_DEFINE= OPT1
OPT1_MESON_ENABLED= test
OPT1_MESON_DISABLED= debug
....
is equivalent to:
[.programlisting]
....
OPTIONS_DEFINE= OPT1
.include <bsd.port.options.mk>
.if ${PORT_OPTIONS:MOPT1}
MESON_ARGS+= -Dtest=enabled -Ddebug=disabled
.else
MESON_ARGS+= -Dtest=disabled -Ddebug=enabled
.endif
....
[[options-qmake_on]]
==== `OPT_QMAKE_ON` and `OPT_QMAKE_OFF`
When option _OPT_ is selected, the value of `OPT_QMAKE_ON`, if defined, is appended to `QMAKE_ARGS`. `OPT_QMAKE_OFF` works the same way, but when `OPT` is _not_ selected. For example:
[.programlisting]
....
OPTIONS_DEFINE= OPT1
OPT1_QMAKE_ON= -DTEST:BOOL=true
OPT1_QMAKE_OFF= -DPRODUCTION:BOOL=true
....
is equivalent to:
[.programlisting]
....
OPTIONS_DEFINE= OPT1
.include <bsd.port.options.mk>
.if ${PORT_OPTIONS:MOPT1}
QMAKE_ARGS+= -DTEST:BOOL=true
.else
QMAKE_ARGS+= -DPRODUCTION:BOOL=true
.endif
....
[[options-implies]]
==== `OPT_IMPLIES`
Provides a way to add dependencies between options.
When _OPT_ is selected, all the options listed in this variable will be selected too. Using the <<options-configure_enable,`OPT_CONFIGURE_ENABLE`>> helper described earlier to illustrate:
[.programlisting]
....
OPTIONS_DEFINE= OPT1 OPT2
OPT1_IMPLIES= OPT2
OPT1_CONFIGURE_ENABLE= opt1
OPT2_CONFIGURE_ENABLE= opt2
....
Is equivalent to:
[.programlisting]
....
OPTIONS_DEFINE= OPT1 OPT2
.include <bsd.port.options.mk>
.if ${PORT_OPTIONS:MOPT1}
CONFIGURE_ARGS+= --enable-opt1
.else
CONFIGURE_ARGS+= --disable-opt1
.endif
.if ${PORT_OPTIONS:MOPT2} || ${PORT_OPTIONS:MOPT1}
CONFIGURE_ARGS+= --enable-opt2
.else
CONFIGURE_ARGS+= --disable-opt2
.endif
....
[[options-implies-ex1]]
.Simple Use of `OPT_IMPLIES`
[example]
====
This port has a `X11` option, and a `GNOME` option that needs the `X11` option to be selected to build.
[.programlisting]
....
OPTIONS_DEFINE= X11 GNOME
OPTIONS_DEFAULT= X11
X11_USES= xorg
X11_USE= xorg=xi,xextproto
GNOME_USE= gnome=gtk30
GNOME_IMPLIES= X11
....
====
[[options-prevents]]
==== `OPT_PREVENTS` and `OPT_PREVENTS_MSG`
Provides a way to add conflicts between options.
When _OPT_ is selected, all the options listed in `OPT_PREVENTS` must be un-selected. If `OPT_PREVENTS_MSG` is set and a conflict is triggered, its content will be shown explaining why they conflict. For example:
[.programlisting]
....
OPTIONS_DEFINE= OPT1 OPT2
OPT1_PREVENTS= OPT2
OPT1_PREVENTS_MSG= OPT1 and OPT2 enable conflicting options
....
Is roughly equivalent to:
[.programlisting]
....
OPTIONS_DEFINE= OPT1 OPT2
.include <bsd.port.options.mk>
.if ${PORT_OPTIONS:MOPT2} && ${PORT_OPTIONS:MOPT1}
BROKEN= Option OPT1 conflicts with OPT2 (select only one)
.endif
....
The only difference is that the first one will write an error after running `make config`, suggesting changing the selected options.
[[options-prevents-ex1]]
.Simple Use of `OPT_PREVENTS`
[example]
====
This port has `X509` and `SCTP` options. Both options add patches, but the patches conflict with each other, so they cannot be selected at the same time.
[.programlisting]
....
OPTIONS_DEFINE= X509 SCTP
SCTP_PATCHFILES= ${PORTNAME}-6.8p1-sctp-2573.patch.gz:-p1
SCTP_CONFIGURE_WITH= sctp
X509_PATCH_SITES= http://www.roumenpetrov.info/openssh/x509/:x509
X509_PATCHFILES= ${PORTNAME}-7.0p1+x509-8.5.diff.gz:-p1:x509
X509_PREVENTS= SCTP
X509_PREVENTS_MSG= X509 and SCTP patches conflict
....
====
[[options-vars]]
==== `OPT_VARS` and `OPT_VARS_OFF`
Provides a generic way to set and append to variables.
[WARNING]
====
Before using `OPT_VARS` and `OPT_VARS_OFF`, see if there is already a more specific helper available in <<options-variables>>.
====
When option _OPT_ is selected, and `OPT_VARS` is defined, `_key_=_value_` and `_key_+=_value_` pairs are evaluated from `OPT_VARS`. An `=` causes the existing value of `KEY` to be overwritten, a `+=` appends to the value. `OPT_VARS_OFF` works the same way, but when `OPT` is _not_ selected.
[.programlisting]
....
OPTIONS_DEFINE= OPT1 OPT2 OPT3
OPT1_VARS= also_build+=bin1
OPT2_VARS= also_build+=bin2
OPT3_VARS= bin3_build=yes
OPT3_VARS_OFF= bin3_build=no
MAKE_ARGS= ALSO_BUILD="${ALSO_BUILD}" BIN3_BUILD="${BIN3_BUILD}"
....
is equivalent to:
[.programlisting]
....
OPTIONS_DEFINE= OPT1 OPT2 OPT3
MAKE_ARGS= ALSO_BUILD="${ALSO_BUILD}" BIN3_BUILD="${BIN3_BUILD}"
.include <bsd.port.options.mk>
.if ${PORT_OPTIONS:MOPT1}
ALSO_BUILD+= bin1
.endif
.if ${PORT_OPTIONS:MOPT2}
ALSO_BUILD+= bin2
.endif
.if ${PORT_OPTIONS:MOPT3}
BIN3_BUILD= yes
.else
BIN3_BUILD= no
.endif
....
[IMPORTANT]
====
Values containing whitespace must be enclosed in quotes:
[.programlisting]
....
OPT_VARS= foo="bar baz"
....
This is due to the way man:make[1] variable expansion deals with whitespace. When `OPT_VARS= foo=bar baz` is expanded, the variable ends up containing two strings, `foo=bar` and `baz`. But the submitter probably intended there to be only one string, `foo=bar baz`. Quoting the value prevents whitespace from being used as a delimiter.
Also, _do not_ add extra spaces after the `_var_=` sign and before the value, it would also be split into two strings. _This will not work_:
[.programlisting]
....
OPT_VARS= foo= bar
....
====
[[options-dependencies]]
==== Dependencies, `OPT_DEPTYPE` and `OPT_DEPTYPE_OFF`
For any of these dependency types:
* `PKG_DEPENDS`
* `EXTRACT_DEPENDS`
* `PATCH_DEPENDS`
* `FETCH_DEPENDS`
* `BUILD_DEPENDS`
* `LIB_DEPENDS`
* `RUN_DEPENDS`
When option _OPT_ is selected, the value of `OPT_DEPTYPE`, if defined, is appended to `DEPTYPE`. `OPT_DEPTYPE_OFF` works the same way, but when `OPT` is _not_ selected. For example:
[.programlisting]
....
OPTIONS_DEFINE= OPT1
OPT1_LIB_DEPENDS= liba.so:devel/a
OPT1_LIB_DEPENDS_OFF= libb.so:devel/b
....
is equivalent to:
[.programlisting]
....
OPTIONS_DEFINE= OPT1
.include <bsd.port.options.mk>
.if ${PORT_OPTIONS:MOPT1}
LIB_DEPENDS+= liba.so:devel/a
.else
LIB_DEPENDS+= libb.so:devel/b
.endif
....
[[options-variables]]
==== Generic Variables Replacement, `OPT_VARIABLE` and `OPT_VARIABLE_OFF`
For any of these variables:
* `ALL_TARGET`
* `BINARY_ALIAS`
* `BROKEN`
* `CATEGORIES`
* `CFLAGS`
* `CONFIGURE_ENV`
* `CONFLICTS`
* `CONFLICTS_BUILD`
* `CONFLICTS_INSTALL`
* `CPPFLAGS`
* `CXXFLAGS`
* `DESKTOP_ENTRIES`
* `DISTFILES`
* `EXTRACT_ONLY`
* `EXTRA_PATCHES`
* `GH_ACCOUNT`
* `GH_PROJECT`
* `GH_SUBDIR`
* `GH_TAGNAME`
* `GH_TUPLE`
* `GL_ACCOUNT`
* `GL_COMMIT`
* `GL_PROJECT`
* `GL_SITE`
* `GL_SUBDIR`
* `GL_TUPLE`
* `IGNORE`
* `INFO`
* `INSTALL_TARGET`
* `LDFLAGS`
* `LIBS`
* `MAKE_ARGS`
* `MAKE_ENV`
* `MASTER_SITES`
* `PATCHFILES`
* `PATCH_SITES`
* `PLIST_DIRS`
* `PLIST_FILES`
* `PLIST_SUB`
* `PORTDOCS`
* `PORTEXAMPLES`
* `SUB_FILES`
* `SUB_LIST`
* `TEST_TARGET`
* `USES`
When option _OPT_ is selected, the value of `OPT_ABOVEVARIABLE`, if defined, is appended to `_ABOVEVARIABLE_`. `OPT_ABOVEVARIABLE_OFF` works the same way, but when `OPT` is _not_ selected. For example:
[.programlisting]
....
OPTIONS_DEFINE= OPT1
OPT1_USES= gmake
OPT1_CFLAGS_OFF= -DTEST
....
is equivalent to:
[.programlisting]
....
OPTIONS_DEFINE= OPT1
.include <bsd.port.options.mk>
.if ${PORT_OPTIONS:MOPT1}
USES+= gmake
.else
CFLAGS+= -DTEST
.endif
....
[NOTE]
====
Some variables are not in this list, in particular `PKGNAMEPREFIX` and `PKGNAMESUFFIX`. This is intentional. A port _must not_ change its name when its option set changes.
====
[WARNING]
====
Some of these variables, at least `ALL_TARGET`, `DISTFILES` and `INSTALL_TARGET`, have their default values set _after_ the options are processed.
With these lines in the [.filename]#Makefile#:
[.programlisting]
....
ALL_TARGET= all
DOCS_ALL_TARGET= doc
....
If the `DOCS` option is enabled, `ALL_TARGET` will have a final value of `all doc`; if the option is disabled, it would have a value of `all`.
With only the options helper line in the [.filename]#Makefile#:
[.programlisting]
....
DOCS_ALL_TARGET= doc
....
If the `DOCS` option is enabled, `ALL_TARGET` will have a final value of `doc`; if the option is disabled, it would have a value of `all`.
====
[[options-targets]]
==== Additional Build Targets, `_target_-_OPT_-on` and `_target_-_OPT_-off`
These [.filename]#Makefile# targets can accept optional extra build targets:
* `pre-fetch`
* `do-fetch`
* `post-fetch`
* `pre-extract`
* `do-extract`
* `post-extract`
* `pre-patch`
* `do-patch`
* `post-patch`
* `pre-configure`
* `do-configure`
* `post-configure`
* `pre-build`
* `do-build`
* `post-build`
* `pre-install`
* `do-install`
* `post-install`
* `post-stage`
* `pre-package`
* `do-package`
* `post-package`
When option _OPT_ is selected, the target `_TARGET_-_OPT_-on`, if defined, is executed after `_TARGET_`. `_TARGET_-_OPT_-off` works the same way, but when `OPT` is _not_ selected. For example:
[.programlisting]
....
OPTIONS_DEFINE= OPT1
post-patch-OPT1-on:
@${REINPLACE_CMD} -e '/opt1/s|/usr/bin/|${EXAMPLESDIR}/|' ${WRKSRC}/Makefile
post-patch-OPT1-off:
@${REINPLACE_CMD} -e '/opt1/s|/usr/bin/|${PREFIX}/bin/|' ${WRKSRC}/Makefile
....
is equivalent to:
[.programlisting]
....
OPTIONS_DEFINE= OPT1
.include <bsd.port.options.mk>
post-patch:
.if ${PORT_OPTIONS:MOPT1}
@${REINPLACE_CMD} -e '/opt1/s|/usr/bin/|${EXAMPLESDIR}/|' ${WRKSRC}/Makefile
.else
@${REINPLACE_CMD} -e '/opt1/s|/usr/bin/|${PREFIX}/bin/|' ${WRKSRC}/Makefile
.endif
....
[[makefile-wrkdir]]
== Specifying the Working Directory
Each port is extracted into a working directory, which must be writable. The ports system defaults to having `DISTFILES` unpack into a directory called `${DISTNAME}`. In other words, if the [.filename]#Makefile# has:
[.programlisting]
....
PORTNAME= foo
DISTVERSION= 1.0
....
then the port's distribution files contain a top-level directory, [.filename]#foo-1.0#, and the rest of the files are located under that directory.
A number of variables can be overridden if that is not the case.
[[makefile-wrksrc]]
=== `WRKSRC`
The variable specifies the name of the directory that is created when the application's distfiles are extracted. If our previous example extracted into a directory called [.filename]#foo# (and not [.filename]#foo-1.0#), write:
[.programlisting]
....
WRKSRC= ${WRKDIR}/foo
....
or possibly
[.programlisting]
....
WRKSRC= ${WRKDIR}/${PORTNAME}
....
[[makefile-wrksrc_subdir]]
=== `WRKSRC_SUBDIR`
If the source files needed for the port are in a subdirectory of the extracted distribution file, set `WRKSRC_SUBDIR` to that directory.
[.programlisting]
....
WRKSRC_SUBDIR= src
....
[[makefile-no_wrksubdir]]
=== `NO_WRKSUBDIR`
If the port does not extract into a subdirectory at all, then set `NO_WRKSUBDIR` to indicate that.
[.programlisting]
....
NO_WRKSUBDIR= yes
....
[NOTE]
====
Because `WRKDIR` is the only directory that is supposed to be writable during the build, and is used to store many files recording the status of the build, the port's extraction will be forced into a subdirectory.
====
[[conflicts]]
== Conflict Handling
There are three different variables to register a conflict between packages and ports: `CONFLICTS`, `CONFLICTS_INSTALL` and `CONFLICTS_BUILD`.
[NOTE]
====
The conflict variables automatically set the variable `IGNORE`, which is more fully documented in crossref:porting-dads[dads-noinstall,Marking a Port Not Installable with `BROKEN`, `FORBIDDEN`, or `IGNORE`].
====
When removing one of several conflicting ports, it is advisable to retain `CONFLICTS` in those other ports for a few months to cater for users who only update once in a while.
[[conclicts-conflicts_install]]
`CONFLICTS_INSTALL`::
If the package cannot coexist with other packages (because of file conflicts, runtime incompatibilities, etc.). The `CONFLICTS_INSTALL` check is done after the build stage and prior to the install stage.
[[conclicts-conflicts_build]]
`CONFLICTS_BUILD`::
If the port cannot be built when other specific ports are already installed. Build conflicts are not recorded in the resulting package.
[[conclicts-conflicts]]
`CONFLICTS`::
If the port cannot be built when a certain port is already installed and the resulting package cannot coexist with the other package. The `CONFLICTS` check is done prior to the build stage and prior to the install stage.
The most common content of one of these variables is the package base of another port. The package base is the package name without the appended version; it can be obtained by running `make -V PKGBASE`.
[[conflicts-ex1]]
.Basic usage of `CONFLICTS*`
[example]
====
package:dns/bind99[] cannot be installed if package:dns/bind910[] is present because they install the same files. First gather the package base to use:
[source,shell]
....
% make -C dns/bind99 -V PKGBASE
bind99
% make -C dns/bind910 -V PKGBASE
bind910
....
Then add to the [.filename]#Makefile# of package:dns/bind99[]:
[.programlisting]
....
CONFLICTS_INSTALL= bind910
....
And add to the [.filename]#Makefile# of package:dns/bind910[]:
[.programlisting]
....
CONFLICTS_INSTALL= bind99
....
====
Sometimes, only certain versions of another port are incompatible. When this is the case, use the full package name including the version. If necessary, use shell globs like `*` and `?` so that all necessary versions are matched.
[[conflicts-ex2]]
.Using `CONFLICTS*` With Globs.
[example]
====
From version 2.0 up to 2.4.1_2, package:deskutils/gnotime[] used to install a bundled version of package:databases/qof[].
To reflect this past, the [.filename]#Makefile# of package:databases/qof[] contains:
[.programlisting]
....
CONFLICTS_INSTALL= gnotime-2.[0-3]* \
gnotime-2.4.0* gnotime-2.4.1 \
gnotime-2.4.1_[12]
....
The first entry matches versions `2.0` through `2.3`, the second all the revisions of `2.4.0`, the third the exact `2.4.1` version, and the last the first and second revisions of the `2.4.1` version.
package:deskutils/gnotime[] does not have any conflicts line because its current version does not conflict with anything else.
====
The variable `DISABLE_CONFLICTS` may be temporarily set when making targets that are not affected by conflicts. The variable is not to be set in port Makefiles.
[source,shell]
....
% make -DDISABLE_CONFLICTS patch
....
[[install]]
== Installing Files
[IMPORTANT]
====
The `install` phase is very important to the end user because it adds files to their system. All the additional commands run in the port [.filename]#Makefile#'s `*-install` targets should be echoed to the screen. _Do not_ silence these commands with `@` or `.SILENT`.
====
[[install-macros]]
=== `INSTALL_*` Macros
Use the macros provided in [.filename]#bsd.port.mk# to ensure correct modes of files in the port's `*-install` targets. Set ownership directly in [.filename]#pkg-plist# with the corresponding entries, such as `@(_owner_,_group_,)`, `@owner _owner_`, and `@group _group_`. These operators work until overridden, or until the end of [.filename]#pkg-plist#, so remember to reset them after they are no longer needed. The default ownership is `root:wheel`. See crossref:plist[plist-keywords-base,Base Keywords] for more information.
* `INSTALL_PROGRAM` is a command to install binary executables.
* `INSTALL_SCRIPT` is a command to install executable scripts.
* `INSTALL_LIB` is a command to install shared libraries (but not static libraries).
* `INSTALL_KLD` is a command to install kernel loadable modules. Some architectures do not like having the modules stripped, so use this command instead of `INSTALL_PROGRAM`.
* `INSTALL_DATA` is a command to install sharable data, including static libraries.
* `INSTALL_MAN` is a command to install manpages and other documentation (it does not compress anything).
These variables are set to the man:install[1] command with the appropriate flags for each situation.
[IMPORTANT]
====
Do not use `INSTALL_LIB` to install static libraries, because stripping them renders them useless. Use `INSTALL_DATA` instead.
====
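A minimal `do-install` sketch using these macros (the file names are illustrative only):
[.programlisting]
....
do-install:
	${INSTALL_PROGRAM} ${WRKSRC}/foo ${STAGEDIR}${PREFIX}/bin
	${INSTALL_MAN} ${WRKSRC}/foo.1 ${STAGEDIR}${MANPREFIX}/man/man1
	${INSTALL_DATA} ${WRKSRC}/foo.conf.sample ${STAGEDIR}${PREFIX}/etc
....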
[[install-strip]]
=== Stripping Binaries and Shared Libraries
Installed binaries should be stripped. Do not strip binaries manually unless absolutely required. The `INSTALL_PROGRAM` macro installs and strips a binary at the same time. The `INSTALL_LIB` macro does the same thing to shared libraries.
When a file must be stripped, but neither `INSTALL_PROGRAM` nor `INSTALL_LIB` macros are desirable, `${STRIP_CMD}` strips the program or shared library. This is typically done within the `post-install` target. For example:
[.programlisting]
....
post-install:
${STRIP_CMD} ${STAGEDIR}${PREFIX}/bin/xdl
....
When multiple files need to be stripped:
[.programlisting]
....
post-install:
.for l in geometry media body track world
${STRIP_CMD} ${STAGEDIR}${PREFIX}/lib/lib${PORTNAME}-${l}.so.0
.endfor
....
Use man:file[1] on a file to determine if it has been stripped. Binaries are reported by man:file[1] as `stripped`, or `not stripped`. Additionally, man:strip[1] will detect programs that have already been stripped and exit cleanly.
[IMPORTANT]
====
When `WITH_DEBUG` is defined, ELF files _must not_ be stripped.
The variables (`STRIP_CMD`, `INSTALL_PROGRAM`, `INSTALL_LIB`, ...) and crossref:uses[uses,`USES`] provided by the framework handle this automatically.
Some software adds `-s` to its `LDFLAGS`. In this case, either remove `-s` if `WITH_DEBUG` is set, or remove it unconditionally and use `STRIP_CMD` in `post-install`.
====
[[install-copytree]]
=== Installing a Whole Tree of Files
Sometimes, a large number of files must be installed while preserving their hierarchical organization. For example, copying over a whole directory tree from `WRKSRC` to a target directory under `PREFIX`. Note that `PREFIX`, `EXAMPLESDIR`, `DATADIR`, and other path variables must always be prepended with `STAGEDIR` to respect staging (see crossref:special[staging,Staging]).
Two macros exist for this situation. The advantage of using these macros instead of `cp` is that they guarantee proper file ownership and permissions on target files. The first macro, `COPYTREE_BIN`, will set all the installed files to be executable, thus being suitable for installing into [.filename]#PREFIX/bin#. The second macro, `COPYTREE_SHARE`, does not set executable permissions on files, and is therefore suitable for installing files under [.filename]#PREFIX/share#.
[.programlisting]
....
post-install:
${MKDIR} ${STAGEDIR}${EXAMPLESDIR}
(cd ${WRKSRC}/examples && ${COPYTREE_SHARE} . ${STAGEDIR}${EXAMPLESDIR})
....
This example will install the contents of the [.filename]#examples# directory in the vendor distfile to the proper examples location of the port.
[.programlisting]
....
post-install:
${MKDIR} ${STAGEDIR}${DATADIR}/summer
(cd ${WRKSRC}/temperatures && ${COPYTREE_SHARE} "June July August" ${STAGEDIR}${DATADIR}/summer)
....
And this example will install the data for the summer months to the [.filename]#summer# subdirectory of [.filename]#DATADIR#.
Additional `find` arguments can be passed via the third argument to `COPYTREE_*` macros. For example, to install all files from the first example except Makefiles, one can use these commands.
[.programlisting]
....
post-install:
${MKDIR} ${STAGEDIR}${EXAMPLESDIR}
(cd ${WRKSRC}/examples && \
${COPYTREE_SHARE} . ${STAGEDIR}${EXAMPLESDIR} "! -name Makefile")
....
These macros do not add the installed files to [.filename]#pkg-plist#. They must be added manually. For optional documentation (`PORTDOCS`, see <<install-documentation>>) and examples (`PORTEXAMPLES`), the `%%PORTDOCS%%` or `%%PORTEXAMPLES%%` prefixes must be prepended in [.filename]#pkg-plist#.
[[install-documentation]]
=== Install Additional Documentation
If the software has some documentation other than the standard man and info pages that is useful for the user, install it under `DOCSDIR`. This can be done, as in the previous item, in the `post-install` target.
Create a new directory for the port. The directory name is `DOCSDIR`. This usually equals `PORTNAME`. However, if the user might want different versions of the port to be installed at the same time, the whole `PKGNAME` can be used.
Since only the files listed in [.filename]#pkg-plist# are installed, it is safe to always install documentation to `STAGEDIR` (see crossref:special[staging,Staging]). Hence `.if` blocks are only needed when the installed files are large enough to cause significant I/O overhead.
[.programlisting]
....
post-install:
${MKDIR} ${STAGEDIR}${DOCSDIR}
${INSTALL_MAN} ${WRKSRC}/docs/xvdocs.ps ${STAGEDIR}${DOCSDIR}
....
On the other hand, if there is a DOCS option in the port, install the documentation in a `post-install-DOCS-on` target. These targets are described in <<options-targets>>.
Here are some handy variables and how they are expanded by default when used in the [.filename]#Makefile#:
* `DATADIR` gets expanded to [.filename]#PREFIX/share/PORTNAME#.
* `DATADIR_REL` gets expanded to [.filename]#share/PORTNAME#.
* `DOCSDIR` gets expanded to [.filename]#PREFIX/share/doc/PORTNAME#.
* `DOCSDIR_REL` gets expanded to [.filename]#share/doc/PORTNAME#.
* `EXAMPLESDIR` gets expanded to [.filename]#PREFIX/share/examples/PORTNAME#.
* `EXAMPLESDIR_REL` gets expanded to [.filename]#share/examples/PORTNAME#.
[NOTE]
====
The `DOCS` option only controls additional documentation installed in `DOCSDIR`. It does not apply to standard man pages and info pages. Things installed in `EXAMPLESDIR` are controlled by the `EXAMPLES` option.
====
These variables are exported to `PLIST_SUB`. Their values will appear there as pathnames relative to [.filename]#PREFIX# if possible. That is, [.filename]#share/doc/PORTNAME# will be substituted for `%%DOCSDIR%%` in the packing list by default, and so on. (See more on [.filename]#pkg-plist# substitution crossref:plist[plist-sub,here].)
All conditionally installed documentation files and directories are included in [.filename]#pkg-plist# with the `%%PORTDOCS%%` prefix, for example:
[.programlisting]
....
%%PORTDOCS%%%%DOCSDIR%%/AUTHORS
%%PORTDOCS%%%%DOCSDIR%%/CONTACT
....
As an alternative to enumerating the documentation files in [.filename]#pkg-plist#, a port can set the variable `PORTDOCS` to a list of file names and shell glob patterns to add to the final packing list. The names will be relative to `DOCSDIR`. Therefore, a port that utilizes `PORTDOCS`, and uses a non-default location for its documentation, must set `DOCSDIR` accordingly. If a directory is listed in `PORTDOCS` or matched by a glob pattern from this variable, the entire subtree of contained files and directories will be registered in the final packing list. If the `DOCS` option has been unset then files and directories listed in `PORTDOCS` would not be installed or added to port packing list. Installing the documentation at `PORTDOCS` as shown above remains up to the port itself. A typical example of utilizing `PORTDOCS`:
[.programlisting]
....
PORTDOCS= README.* ChangeLog docs/*
....
[NOTE]
====
The equivalents of `PORTDOCS` for files installed under `DATADIR` and `EXAMPLESDIR` are `PORTDATA` and `PORTEXAMPLES`, respectively.
The contents of [.filename]#pkg-message# are displayed upon installation. See crossref:pkg-files[porting-message,the section on using [.filename]#pkg-message#] for details. [.filename]#pkg-message# does not need to be added to [.filename]#pkg-plist#.
====
[[install-subdirs]]
=== Subdirectories Under `PREFIX`
Try to let the port put things in the right subdirectories of `PREFIX`. Some ports lump everything and put it in the subdirectory with the port's name, which is incorrect. Also, many ports put everything except binaries, header files and manual pages in a subdirectory of [.filename]#lib#, which does not work well with the BSD paradigm. Many of the files must be moved to one of these directories: [.filename]#etc# (setup/configuration files), [.filename]#libexec# (executables started internally), [.filename]#sbin# (executables for superusers/managers), [.filename]#info# (documentation for info browser) or [.filename]#share# (architecture independent files). See man:hier[7] for details; the rules governing [.filename]#/usr# pretty much apply to [.filename]#/usr/local# too. The exception is ports dealing with USENET "news". They may use [.filename]#PREFIX/news# as a destination for their files.
[[binary-alias]]
== Use `BINARY_ALIAS` to Rename Commands Instead of Patching the Build
When `BINARY_ALIAS` is defined, symlinks to the given commands are created in a directory which is prepended to `PATH`.
Use it to substitute hardcoded commands the build phase relies on without having to patch any build files.
[[binary-alias-ex1]]
.Using `BINARY_ALIAS` to Make `gsed` Available as `sed`
[example]
====
Some ports expect `sed` to behave like GNU sed and use features that man:sed[1] does not provide. GNU sed is available from package:textproc/gsed[] on FreeBSD.
Use `BINARY_ALIAS` to substitute `sed` with `gsed` for the duration of the build:
[.programlisting]
....
BUILD_DEPENDS= gsed:textproc/gsed
...
BINARY_ALIAS= sed=gsed
....
====
[[binary-alias-ex2]]
.Using `BINARY_ALIAS` to Provide Aliases for Hardcoded `python3` Commands
[example]
====
A port that has a hardcoded reference to `python3` in its build scripts will need to have it available in `PATH` at build time. Use `BINARY_ALIAS` to create an alias that points to the right Python 3 binary:
[.programlisting]
....
USES= python:3.4+,build
...
BINARY_ALIAS= python3=${PYTHON_CMD}
....
See crossref:special[using-python,Using Python] for more information about `USES=python`.
====
[NOTE]
====
Binary aliases are created after the dependencies provided via `BUILD_DEPENDS` and `LIB_DEPENDS` are processed and before the `configure` target. This leads to various limitations. For example, programs installed via `TEST_DEPENDS` cannot be used to create a binary alias as test dependencies specified this way are processed after binary aliases are created.
====
diff --git a/documentation/content/en/books/porters-handbook/new-port/_index.adoc b/documentation/content/en/books/porters-handbook/new-port/_index.adoc
index 32eee5a9ea..ae040fb893 100644
--- a/documentation/content/en/books/porters-handbook/new-port/_index.adoc
+++ b/documentation/content/en/books/porters-handbook/new-port/_index.adoc
@@ -1,43 +1,44 @@
---
title: Chapter 2. Making a New Port
prev: books/porters-handbook/porting-why
next: books/porters-handbook/quick-porting
+description: How to make a new FreeBSD Port
---
[[own-port]]
= Making a New Port
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 2
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
Interested in making a new port, or upgrading existing ports? Great!
What follows are some guidelines for creating a new port for FreeBSD. To upgrade an existing port, read this, then read crossref:port-upgrading[port-upgrading,Upgrading a Port].
When this document is not sufficiently detailed, refer to [.filename]#/usr/ports/Mk/bsd.port.mk#, which is included by all port [.filename]#Makefiles#. Even those not hacking [.filename]##Makefile##s daily can gain much knowledge from it. Additionally, specific questions can be sent to the {freebsd-ports}.
[NOTE]
====
Only a fraction of the variables (`_VAR_`) that can be overridden are mentioned in this document. Most (if not all) are documented at the start of [.filename]#/usr/ports/Mk/bsd.port.mk#; the others probably ought to be. Note that this file uses a non-standard tab setting: Emacs and Vim will recognize the setting on loading the file. Both man:vi[1] and man:ex[1] can be set to use the correct value by typing `:set tabstop=4` once the file has been loaded.
====
Looking for something easy to start with? Take a look at the https://wiki.freebsd.org/WantedPorts[list of requested ports] and see if you can work on one (or more).
diff --git a/documentation/content/en/books/porters-handbook/order/_index.adoc b/documentation/content/en/books/porters-handbook/order/_index.adoc
index 0802c2e28a..171a802641 100644
--- a/documentation/content/en/books/porters-handbook/order/_index.adoc
+++ b/documentation/content/en/books/porters-handbook/order/_index.adoc
@@ -1,242 +1,243 @@
---
title: Chapter 15. Order of Variables in Port Makefiles
prev: books/porters-handbook/porting-samplem
next: books/porters-handbook/keeping-up
+description: Order of Variables in FreeBSD Port Makefiles
---
[[porting-order]]
= Order of Variables in Port Makefiles
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 15
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
The first sections of the [.filename]#Makefile# must always come in the same order. This standard makes it so everyone can easily read any port without having to search for variables in a random order.
[NOTE]
====
The sections and variables described here are mandatory in an ordinary port. In a slave port, many sections and variables can be skipped.
====
[IMPORTANT]
====
Each following block must be separated from the previous block by a single blank line.
In the following blocks, only set the variables that are required by the port. Define these variables in the order they are shown here.
====
[[porting-order-portname]]
== `PORTNAME` Block
This block is the most important. It defines the port name, version, distribution file location, and category. The variables must be in this order:
* crossref:makefiles[makefile-portname,`PORTNAME`]
* crossref:makefiles[makefile-versions,`PORTVERSION`][<<portversion-footnote, 1>>]
* crossref:makefiles[makefile-versions,`DISTVERSIONPREFIX`]
* crossref:makefiles[makefile-versions,`DISTVERSION`][<<portversion-footnote, 1>>]
* crossref:makefiles[makefile-versions,`DISTVERSIONSUFFIX`]
* crossref:makefiles[makefile-portrevision,`PORTREVISION`]
* crossref:makefiles[makefile-portepoch,`PORTEPOCH`]
* crossref:makefiles[makefile-categories,`CATEGORIES`]
* crossref:makefiles[makefile-master_sites,`MASTER_SITES`]
* crossref:makefiles[makefile-master_sites-shorthand,`MASTER_SITE_SUBDIR`] (deprecated)
* crossref:makefiles[porting-pkgnameprefix-suffix,`PKGNAMEPREFIX`]
* crossref:makefiles[porting-pkgnameprefix-suffix,`PKGNAMESUFFIX`]
* crossref:makefiles[makefile-distname,`DISTNAME`]
* crossref:makefiles[makefile-extract_sufx,`EXTRACT_SUFX`]
* crossref:makefiles[makefile-distfiles-definition,`DISTFILES`]
* crossref:makefiles[makefile-dist_subdir,`DIST_SUBDIR`]
* crossref:makefiles[makefile-extract_only,`EXTRACT_ONLY`]
[[portversion-footnote]]
[IMPORTANT]
====
Only one of `PORTVERSION` and `DISTVERSION` can be used.
====
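As an illustration, a minimal block following this order might look like this (the port name, version, and site are hypothetical):

[.programlisting]
....
PORTNAME=	foo
DISTVERSION=	1.2.3
CATEGORIES=	devel
MASTER_SITES=	https://example.org/foo/
....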
[[porting-order-patch]]
== `PATCHFILES` Block
This block is optional. The variables are:
* crossref:makefiles[porting-patchfiles,`PATCH_SITES`]
* crossref:makefiles[porting-patchfiles,`PATCHFILES`]
* crossref:makefiles[porting-patchfiles,`PATCH_DIST_STRIP`]
[[porting-order-maintainer]]
== `MAINTAINER` Block
This block is mandatory. The variables are:
* crossref:makefiles[makefile-maintainer,`MAINTAINER`]
* crossref:makefiles[makefile-comment,`COMMENT`]
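For example, a hypothetical unmaintained port could use:

[.programlisting]
....
MAINTAINER=	ports@FreeBSD.org
COMMENT=	Small utility that frobnicates foo files
....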
[[porting-order-license]]
== `LICENSE` Block
This block is optional, although it is highly recommended. The variables are:
* crossref:makefiles[licenses-license,`LICENSE`]
* crossref:makefiles[licenses-license_comb,`LICENSE_COMB`]
* crossref:makefiles[licenses-license_groups,`LICENSE_GROUPS`] or `LICENSE_GROUPS_NAME`
* crossref:makefiles[licenses-license_name,`LICENSE_NAME`] or `LICENSE_NAME_NAME`
* crossref:makefiles[licenses-license_text,`LICENSE_TEXT`] or `LICENSE_TEXT_NAME`
* crossref:makefiles[licenses-license_file,`LICENSE_FILE`] or `LICENSE_FILE_NAME`
* crossref:makefiles[licenses-license_perms,`LICENSE_PERMS`] or `LICENSE_PERMS_NAME_`
* crossref:makefiles[licenses-license_distfiles,`LICENSE_DISTFILES`] or `LICENSE_DISTFILES_NAME`
If there are multiple licenses, sort the different LICENSE_VAR_NAME variables by license name.
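A sketch of a hypothetical port shipped under two licenses, both of which apply, could be:

[.programlisting]
....
LICENSE=	BSD2CLAUSE MIT
LICENSE_COMB=	multi
....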
[[porting-order-broken]]
== Generic `BROKEN`/`IGNORE`/`DEPRECATED` Messages
This block is optional. The variables are:
* crossref:porting-dads[dads-deprecated,`DEPRECATED`]
* crossref:porting-dads[dads-deprecated,`EXPIRATION_DATE`]
* crossref:porting-dads[dads-noinstall,`FORBIDDEN`]
* crossref:porting-dads[dads-noinstall,`BROKEN`]
* crossref:porting-dads[dads-noinstall,`BROKEN_*`]
* crossref:porting-dads[dads-noinstall,`IGNORE`]
* crossref:porting-dads[dads-noinstall,`IGNORE_*`]
* crossref:porting-dads[dads-noinstall,`ONLY_FOR_ARCHS`]
* crossref:porting-dads[dads-noinstall,`ONLY_FOR_ARCHS_REASON*`]
* crossref:porting-dads[dads-noinstall,`NOT_FOR_ARCHS`]
* crossref:porting-dads[dads-noinstall,`NOT_FOR_ARCHS_REASON*`]
[NOTE]
====
`BROKEN_*` and `IGNORE_*` can be any generic variables, for example, `IGNORE_amd64`, `BROKEN_FreeBSD_10`, etc. The exception is variables that depend on a crossref:uses[uses,`USES`]; place those in <<porting-order-uses>> instead. For instance, `IGNORE_WITH_PHP` only works if crossref:uses[xuses-php,`php`] is set, and `BROKEN_SSL` only if crossref:uses[uses-ssl,`ssl`] is set.
If the port is marked BROKEN when some conditions are met, and such conditions can only be tested after including [.filename]#bsd.port.options.mk# or [.filename]#bsd.port.pre.mk#, then those variables should be set later, in <<porting-order-rest>>.
====
[[porting-order-depends]]
== The Dependencies Block
This block is optional. The variables are:
* crossref:makefiles[makefile-fetch_depends,`FETCH_DEPENDS`]
* crossref:makefiles[makefile-extract_depends,`EXTRACT_DEPENDS`]
* crossref:makefiles[makefile-patch_depends,`PATCH_DEPENDS`]
* crossref:makefiles[makefile-build_depends,`BUILD_DEPENDS`]
* crossref:makefiles[makefile-lib_depends,`LIB_DEPENDS`]
* crossref:makefiles[makefile-run_depends,`RUN_DEPENDS`]
* `TEST_DEPENDS`
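Putting a few of these together, a hypothetical port that needs GNU make to build, links against libcurl, and runs a man:sh[1] wrapper with bash could declare (in practice `USES=gmake` would be preferred for GNU make; this only illustrates the ordering):

[.programlisting]
....
BUILD_DEPENDS=	gmake:devel/gmake
LIB_DEPENDS=	libcurl.so:ftp/curl
RUN_DEPENDS=	bash:shells/bash
....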
[[porting-order-flavors]]
== Flavors
This block is optional.
Start this section by defining `FLAVORS`. Continue with the possible Flavors helpers. See crossref:flavors[flavors-using,Using FLAVORS] for more information.
Constructs that set variables not available as helpers, using `.if ${FLAVOR:U} == foo`, should go in their respective sections below.
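A minimal sketch, assuming a hypothetical port that offers a graphical and a non-graphical flavor and using the helper names described in crossref:flavors[flavors-using,Using FLAVORS]:

[.programlisting]
....
FLAVORS=	default nox11
nox11_PKGNAMESUFFIX=	-nox11
....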
[[porting-order-uses]]
== `USES` and `USE_x`
Start this section by defining `USES`, and then the possible `USE_x` variables.
Keep related variables close together. For example, if using crossref:makefiles[makefile-master_sites-github,`USE_GITHUB`], always put the `GH_*` variables right after it.
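For example, a hypothetical port fetched from GitHub and built with CMake could keep these together (the account and project names are made up):

[.programlisting]
....
USES=		cmake ssl
USE_GITHUB=	yes
GH_ACCOUNT=	example-org
GH_PROJECT=	foo
....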
[[porting-order-variables]]
== Standard bsd.port.mk Variables
This section block is for variables that can be defined in [.filename]#bsd.port.mk# that do not belong in any of the previous section blocks.
Order is not important; however, try to keep similar variables together, for example the uid and gid variables `USERS` and `GROUPS`, the configuration variables `CONFIGURE_*` and `*_CONFIGURE`, and the lists of files and directories `PORTDOCS` and `PORTEXAMPLES`.
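For example, keeping the user/group variables and the documentation lists grouped (the names are hypothetical; real ports must use users and groups registered in [.filename]#UIDs# and [.filename]#GIDs#):

[.programlisting]
....
USERS=		foo
GROUPS=		foo

PORTDOCS=	README.md CHANGELOG.md
PORTEXAMPLES=	*
....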
[[porting-order-options]]
== Options and Helpers
If the port uses the crossref:makefiles[makefile-options,options framework], define `OPTIONS_DEFINE` and `OPTIONS_DEFAULT` first, then the other `OPTIONS_*` variables, then the `*_DESC` descriptions, then the options helpers. Try to sort all of those alphabetically.
[[porting-order-options-ex1]]
.Options Variables Order Example
[example]
====
The `FOO` and `BAR` options do not have a standard description, so one needs to be written. The other options already have one in [.filename]#Mk/bsd.options.desc.mk#, so writing one is not needed. The `DOCS` and `EXAMPLES` options use target helpers to install their files; they are shown here for completeness, though they belong in <<porting-order-targets>>, so other variables and targets could be inserted before them.
[.programlisting]
....
OPTIONS_DEFINE= DOCS EXAMPLES FOO BAR
OPTIONS_DEFAULT= FOO
OPTIONS_RADIO= SSL
OPTIONS_RADIO_SSL= OPENSSL GNUTLS
OPTIONS_SUB= yes
BAR_DESC= Enable bar support
FOO_DESC= Enable foo support
BAR_CONFIGURE_WITH= bar=${LOCALBASE}
FOO_CONFIGURE_ENABLE= foo
GNUTLS_CONFIGURE_ON= --with-ssl=gnutls
OPENSSL_CONFIGURE_ON= --with-ssl=openssl
post-install-DOCS-on:
${MKDIR} ${STAGEDIR}${DOCSDIR}
cd ${WRKSRC}/doc && ${COPYTREE_SHARE} . ${STAGEDIR}${DOCSDIR}
post-install-EXAMPLES-on:
${MKDIR} ${STAGEDIR}${EXAMPLESDIR}
cd ${WRKSRC}/ex && ${COPYTREE_SHARE} . ${STAGEDIR}${EXAMPLESDIR}
....
====
[[porting-order-rest]]
== The Rest of the Variables
And then, the rest of the variables that are not mentioned in the previous blocks.
[[porting-order-targets]]
== The Targets
After all the variables are defined, the optional man:make[1] targets can be defined. Keep `pre-*` before `post-*` and in the same order as the different stages run:
* `fetch`
* `extract`
* `patch`
* `configure`
* `build`
* `install`
* `test`
[TIP]
====
When using options helpers targets, keep them sorted alphabetically, but keep the `*-on` targets before the `*-off` ones. When also using a main target, keep it before the optional ones:
[.programlisting]
....
post-install:
# install generic bits
post-install-DOCS-on:
# Install documentation
post-install-X11-on:
# Install X11 related bits
post-install-X11-off:
# Install bits that should be there if X11 is disabled
....
====
diff --git a/documentation/content/en/books/porters-handbook/pkg-files/_index.adoc b/documentation/content/en/books/porters-handbook/pkg-files/_index.adoc
index 9a8bd74cea..500f45c63f 100644
--- a/documentation/content/en/books/porters-handbook/pkg-files/_index.adoc
+++ b/documentation/content/en/books/porters-handbook/pkg-files/_index.adoc
@@ -1,271 +1,272 @@
---
title: Chapter 9. pkg-*
prev: books/porters-handbook/plist
next: books/porters-handbook/testing
+description: Tricks about the pkg-* files
---
[[pkg-files]]
= pkg-*
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 9
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
There are some tricks we have not mentioned yet about the [.filename]#pkg-*# files that come in handy sometimes.
[[porting-message]]
== pkg-message
To display a message when the package is installed, place the message in [.filename]#pkg-message#. This capability is often useful to display additional installation steps to be taken after a `pkg install` or `pkg upgrade`.
[IMPORTANT]
====
* [.filename]#pkg-message# must contain only information that is _vital_ to setup and operation on FreeBSD, and that is unique to the port in question.
* Setup information should only be shown on initial install. Upgrade instructions should be shown only when upgrading from the relevant version.
* Do not surround the messages with either whitespace or lines of symbols (like `----------`, `**********`, or `==========`). Leave the formatting to man:pkg[8].
* Committers have blanket approval to constrain existing messages to install or upgrade ranges using the UCL format specifications.
====
pkg-message supports two formats:
raw::
A regular plain text file. Its message is only displayed on install.
UCL::
If the file starts with "`[`" then it is considered to be a UCL file. The UCL format is described on https://github.com/vstakhov/libucl[libucl's GitHub page].
[NOTE]
====
Do not add an entry for [.filename]#pkg-message# in [.filename]#pkg-plist#.
====
[[porting-message-ucl]]
=== UCL in pkg-message
The format is the following. It should be an array of objects. The objects themselves can have these keywords:
`message`::
The actual message to be displayed. This keyword is mandatory.
`type`::
When the message should be displayed.
`maximum_version`::
Only if `type` is `upgrade`. Display if upgrading from a version strictly lower than the version specified.
`minimum_version`::
Only if `type` is `upgrade`. Display if upgrading from a version strictly greater than the version specified.
The `maximum_version` and `minimum_version` keywords can be combined.
The `type` keyword can have three values:
`install`::
The message should only be displayed when the package is installed.
`remove`::
The message should only be displayed when the package is removed.
`upgrade`::
The message should only be displayed during an upgrade of the package.
[IMPORTANT]
====
To preserve compatibility with non-UCL [.filename]#pkg-message# files, the first line of a UCL [.filename]#pkg-message# _MUST be_ a single "`[`", and the last line _MUST be_ a single "`]`".
====
[[porting-message-ucl-short-ex]]
.UCL Short Strings
[example]
====
The message is delimited by double quotes `"`; this is used for simple single-line strings:
[.programlisting]
....
[
{ type: install
message: "Simple message"
}
]
....
====
[[porting-message-ucl-multiline-ex]]
.UCL Multiline Strings
[example]
====
Multiline strings use the standard here document notation. The multiline delimiter _must_ start just after `<<` symbols without any whitespace and it _must_ consist of capital letters only. To finish a multiline string, add the delimiter string on a line of its own without any whitespace. The message from <<porting-message-ucl-short-ex>> can be written as:
[.programlisting]
....
[
{ type: install
message: <<EOM
Simple message
EOM
}
]
....
====
[[porting-message-ucl-ex2]]
.Display a Message on Install/Deinstall
[example]
====
When a message only needs to be displayed on installation or uninstallation, set the type:
[.programlisting]
....
[
{
type: remove
message: "package being removed."
}
{ type: install, message: "package being installed."}
]
....
====
[[porting-message-ucl-ex3]]
.Display a Message on Upgrade
[example]
====
When a port is upgraded, the message displayed can be even more tailored to the port's needs.
[.programlisting]
....
[
{
type: upgrade
message: "Package is being upgraded."
}
{
type: upgrade
maximum_version: "1.0"
message: "Upgrading from before 1.0 need to do this."
}
{
type: upgrade
minimum_version: "1.0"
message: "Upgrading from after 1.0 should do that."
}
{
type: upgrade
maximum_version: "3.0"
minimum_version: "1.0"
message: "Upgrading from > 1.0 and < 3.0 remove that file."
}
]
....
[IMPORTANT]
****
When displaying a message on upgrade, it is important to limit when it is shown to the user. Most of the time this is done with `maximum_version`, limiting the message to upgrades from before a certain version when something specific needs to be done.
****
====
[[pkg-install]]
== pkg-install
If the port needs to execute commands when the binary package is installed with `pkg add` or `pkg install`, use [.filename]#pkg-install#. This script will automatically be added to the package. It will be run twice by `pkg`, the first time as `${SH} pkg-install ${PKGNAME} PRE-INSTALL` before the package is installed, and the second time as `${SH} pkg-install ${PKGNAME} POST-INSTALL` after it has been installed. `$2` can be tested to determine which mode the script is being run in. The `PKG_PREFIX` environment variable will be set to the package installation directory.
[IMPORTANT]
====
This script is here to help you set up the package so that it is as ready to use as possible. It _must not_ be abused to start services, stop services, or run any other commands that will modify the currently running system.
====
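As a minimal sketch (not taken from a real port), a [.filename]#pkg-install# that only distinguishes the two phases could look like this:

[.programlisting]
....
#!/bin/sh
# Sketch only: $2 selects the phase, PKG_PREFIX points at the installation prefix.
case "$2" in
PRE-INSTALL)
	# nothing to do before the files are unpacked
	;;
POST-INSTALL)
	# hypothetical one-time setup once the files are in place
	/usr/bin/install -d -m 700 "${PKG_PREFIX}/foo/private"
	;;
esac
....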
[[pkg-deinstall]]
== pkg-deinstall
This script executes when a package is removed.
This script will be run twice by `pkg delete`, the first time as `${SH} pkg-deinstall ${PKGNAME} DEINSTALL` before the port is de-installed and the second time as `${SH} pkg-deinstall ${PKGNAME} POST-DEINSTALL` after the port has been de-installed. `$2` can be tested to determine which mode the script is being run in. The `PKG_PREFIX` environment variable will be set to the package installation directory.
[IMPORTANT]
====
This script is here to help you set up the package so that it is as ready to use as possible. It _must not_ be abused to start services, stop services, or run any other commands that will modify the currently running system.
====
[[pkg-names]]
== Changing the Names of pkg-*
All the names of [.filename]#pkg-\*# are defined using variables that can be changed in the [.filename]#Makefile# if needed. This is especially useful when sharing the same [.filename]#pkg-*# files among several ports or when it is necessary to write to one of these files. See crossref:porting-dads[porting-wrkdir,writing to places other than `WRKDIR`] for why it is a bad idea to write directly into the directory containing the [.filename]#pkg-*# files.
Here is a list of variable names and their default values. (`PKGDIR` defaults to `${MASTERDIR}`.)
[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Variable
| Default value
|`DESCR`
|`${PKGDIR}/pkg-descr`
|`PLIST`
|`${PKGDIR}/pkg-plist`
|`PKGINSTALL`
|`${PKGDIR}/pkg-install`
|`PKGDEINSTALL`
|`${PKGDIR}/pkg-deinstall`
|`PKGMESSAGE`
|`${PKGDIR}/pkg-message`
|===
[[using-sub-files]]
== Making Use of `SUB_FILES` and `SUB_LIST`
`SUB_FILES` and `SUB_LIST` are useful for dynamic values in port files, such as the installation `PREFIX` in [.filename]#pkg-message#.
`SUB_FILES` specifies a list of files to be automatically modified. Each [.filename]#file# in the `SUB_FILES` list must have a corresponding [.filename]#file.in# present in `FILESDIR`. A modified version will be created as [.filename]#${WRKDIR}/file#. Files defined as a value of `USE_RC_SUBR` are automatically added to `SUB_FILES`. For the files [.filename]#pkg-message#, [.filename]#pkg-install#, and [.filename]#pkg-deinstall#, the corresponding Makefile variable is automatically set to point to the processed version.
`SUB_LIST` is a list of `VAR=VALUE` pairs. For each pair, `%%VAR%%` will be replaced with `VALUE` in each file listed in `SUB_FILES`. Several common pairs are automatically defined: `PREFIX`, `LOCALBASE`, `DATADIR`, `DOCSDIR`, `EXAMPLESDIR`, `WWWDIR`, and `ETCDIR`. Any line beginning with `@comment` followed by a space will be deleted from the resulting files after variable substitution.
This example replaces `%%ARCH%%` with the system architecture in a [.filename]#pkg-message#:
[.programlisting]
....
SUB_FILES= pkg-message
SUB_LIST= ARCH=${ARCH}
....
Note that for this example, [.filename]#pkg-message.in# must exist in `FILESDIR`.
Example of a good [.filename]#pkg-message.in#:
[.programlisting]
....
Now it is time to configure this package.
Copy %%PREFIX%%/share/examples/putsy/%%ARCH%%.conf into your home directory
as .putsy.conf and edit it.
....
diff --git a/documentation/content/en/books/porters-handbook/plist/_index.adoc b/documentation/content/en/books/porters-handbook/plist/_index.adoc
index e1868a6f4f..765544a7b1 100644
--- a/documentation/content/en/books/porters-handbook/plist/_index.adoc
+++ b/documentation/content/en/books/porters-handbook/plist/_index.adoc
@@ -1,599 +1,600 @@
---
title: Chapter 8. Advanced pkg-plist Practices
prev: books/porters-handbook/flavors
next: books/porters-handbook/pkg-files
+description: Advanced pkg-plist Practices
---
[[plist]]
= Advanced pkg-plist Practices
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 8
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[plist-sub]]
== Changing pkg-plist Based on Make Variables
Some ports, particularly the `p5-` ports, need to change their [.filename]#pkg-plist# depending on what options they are configured with (or version of `perl`, in the case of `p5-` ports). To make this easy, any instances in [.filename]#pkg-plist# of `%%OSREL%%`, `%%PERL_VER%%`, and `%%PERL_VERSION%%` will be substituted appropriately. The value of `%%OSREL%%` is the numeric revision of the operating system (for example, `4.9`). `%%PERL_VERSION%%` and `%%PERL_VER%%` are the full version number of `perl` (for example, `5.8.9`). Several other `%%_VARS_%%` related to the port's documentation files are described in crossref:makefiles[install-documentation,the relevant section].
To make other substitutions, set `PLIST_SUB` with a list of `_VAR=VALUE_` pairs and instances of `%%_VAR_%%` will be substituted with _VALUE_ in [.filename]#pkg-plist#.
For instance, if a port installs many files in a version-specific subdirectory, use a placeholder for the version so that [.filename]#pkg-plist# does not have to be regenerated every time the port is updated. For example:
[.programlisting]
....
OCTAVE_VERSION= ${PORTREVISION}
PLIST_SUB= OCTAVE_VERSION=${OCTAVE_VERSION}
....
in the [.filename]#Makefile# and use `%%OCTAVE_VERSION%%` wherever the version shows up in [.filename]#pkg-plist#. When the port is upgraded, it will not be necessary to edit dozens (or in some cases, hundreds) of lines in [.filename]#pkg-plist#.
If files are installed conditionally on the options set in the port, the usual way of handling it is prefixing [.filename]#pkg-plist# lines with a `%%OPT%%` for lines needed when the option is enabled, or `%%NO_OPT%%` when the option is disabled, and adding `OPTIONS_SUB=yes` to the [.filename]#Makefile#. See crossref:makefiles[options_sub,`OPTIONS_SUB`] for more information.
For instance, if there are files that are only installed when the `X11` option is enabled, and [.filename]#Makefile# has:
[.programlisting]
....
OPTIONS_DEFINE= X11
OPTIONS_SUB= yes
....
In [.filename]#pkg-plist#, put `%%X11%%` in front of the lines only being installed when the option is enabled, like this:
[.programlisting]
....
%%X11%%bin/foo-gui
....
This substitution will be done between the `pre-install` and `do-install` targets, by reading from [.filename]#PLIST# and writing to [.filename]#TMPPLIST# (default: [.filename]#WRKDIR/.PLIST.mktmp#). So if the port builds [.filename]#PLIST# on the fly, do so in or before `pre-install`. Also, if the port needs to edit the resulting file, do so in `post-install` to a file named [.filename]#TMPPLIST#.
Another way of modifying a port's packing list is based on setting the variables `PLIST_FILES` and `PLIST_DIRS`. The value of each variable is regarded as a list of pathnames to write to [.filename]#TMPPLIST# along with [.filename]#PLIST# contents. While names listed in `PLIST_FILES` and `PLIST_DIRS` are subject to `%%_VAR_%%` substitution as described above, it is better to use the `${_VAR_}` directly. Except for that, names from `PLIST_FILES` will appear in the final packing list unchanged, while `@dir` will be prepended to names from `PLIST_DIRS`. To take effect, `PLIST_FILES` and `PLIST_DIRS` must be set before [.filename]#TMPPLIST# is written, that is, in `pre-install` or earlier.
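For a simple port (the file names are hypothetical), the packing list can live entirely in the [.filename]#Makefile#:

[.programlisting]
....
PLIST_FILES=	bin/foo \
		share/foo/foo.dat
PLIST_DIRS=	share/foo/plugins
....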
From time to time, using `OPTIONS_SUB` is not enough. In those cases, adding a specific `_TAG_` to `PLIST_SUB` inside the [.filename]#Makefile# with a special value of `@comment` makes package tools ignore the line. For instance, if some files are only installed when the `X11` option is on and the architecture is `i386`:
[.programlisting]
....
.include <bsd.port.pre.mk>
.if ${PORT_OPTIONS:MX11} && ${ARCH} == "i386"
PLIST_SUB+= X11I386=""
.else
PLIST_SUB+= X11I386="@comment "
.endif
....
[[plist-cleaning]]
== Empty Directories
[[plist-dir-cleaning]]
=== Cleaning Up Empty Directories
When being de-installed, a port has to remove empty directories it created. Most of these directories are removed automatically by man:pkg[8], but for directories created outside of [.filename]#${PREFIX}#, or empty directories, some more work needs to be done. This is usually accomplished by adding `@dir` lines for those directories. Subdirectories must be deleted before deleting parent directories.
[.programlisting]
....
[...]
@dir /var/games/oneko/saved-games
@dir /var/games/oneko
....
[[plist-dir-empty]]
=== Creating Empty Directories
Empty directories created during port installation need special attention. They must be present when the package is created. If they are not created by the port code, create them in the [.filename]#Makefile#:
[.programlisting]
....
post-install:
${MKDIR} ${STAGEDIR}${PREFIX}/some/directory
....
Add the directory to [.filename]#pkg-plist# like any other. For example:
[.programlisting]
....
@dir some/directory
....
[[plist-config]]
== Configuration Files
If the port installs configuration files to [.filename]#PREFIX/etc# (or elsewhere) do _not_ list them in [.filename]#pkg-plist#. That will cause `pkg delete` to remove files that have been carefully edited by the user, and a re-installation will wipe them out.
Instead, install sample files with a [.filename]#filename.sample# extension. The `@sample` macro automates this, see <<plist-keywords-sample>> for what it does exactly. For each sample file, add a line to [.filename]#pkg-plist#:
[.programlisting]
....
@sample etc/orbit.conf.sample
....
If there is a very good reason not to install a working configuration file by default, only list the sample filename in [.filename]#pkg-plist#, without the leading `@sample` keyword (and the space that follows it), and add a crossref:pkg-files[porting-message,message] pointing out that the user must copy and edit the file before the software will work.
[TIP]
====
When a port installs its configuration in a subdirectory of [.filename]#${PREFIX}/etc#, use `ETCDIR`, which defaults to `${PREFIX}/etc/${PORTNAME}`; it can be overridden in the port's [.filename]#Makefile# if there is a convention for the port to use some other directory. The `%%ETCDIR%%` macro will be used in its stead in [.filename]#pkg-plist#.
====
[NOTE]
====
The sample configuration files should always have the [.filename]#.sample# suffix. If for some historical reason using the standard suffix is not possible, or if the sample files come from some other directory, use this construct:
[.programlisting]
....
@sample etc/orbit.conf-dist etc/orbit.conf
....
or
[.programlisting]
....
@sample %%EXAMPLESDIR%%/orbit.conf etc/orbit.conf
....
The format is `@sample _sample-file actual-config-file_`.
====
[[plist-dynamic]]
== Dynamic Versus Static Package List
A _static package list_ is a package list which is available in the Ports Collection either as [.filename]#pkg-plist# (with or without variable substitution), or embedded into the [.filename]#Makefile# via `PLIST_FILES` and `PLIST_DIRS`. Even if the contents are auto-generated by a tool or a target in the Makefile _before_ the inclusion into the Ports Collection by a committer (for example, using `make makeplist`), this is still considered a static list, since it is possible to examine it without having to download or compile the distfile.
A _dynamic package list_ is a package list which is generated at the time the port is compiled based upon the files and directories which are installed. It is not possible to examine it before the source code of the ported application is downloaded and compiled, or after running a `make clean`.
While the use of dynamic package lists is not forbidden, maintainers should use static package lists wherever possible, as it enables users to man:grep[1] through available ports to discover, for example, which port installs a certain file. Dynamic lists should be primarily used for complex ports where the package list changes drastically based upon optional features of the port (and thus maintaining a static package list is infeasible), or ports which change the package list based upon the version of dependent software used, for example, ports which generate documentation with Javadoc.
[[plist-autoplist]]
== Automated Package List Creation
First, make sure the port is almost complete, with only [.filename]#pkg-plist# missing. Running `make makeplist` will show an example for [.filename]#pkg-plist#. The output of `makeplist` must be double checked for correctness as it tries to automatically guess a few things, and can get it wrong.
User configuration files should be installed as [.filename]#filename.sample#, as it is described in <<plist-config>>. [.filename]#info/dir# must not be listed and appropriate [.filename]#install-info# lines must be added as noted in the crossref:makefiles[makefile-info,info files] section. Any libraries installed by the port must be listed as specified in the crossref:special[porting-shlibs,shared libraries] section.
[[plist-autoplist-regex]]
=== Expanding `PLIST_SUB` with Regular Expressions
Strings to be replaced sometimes need to be very specific to avoid undesired replacements. This is a common problem with shorter values.
To address this problem, for each `_PLACEHOLDER_=_value_`, a `PLACEHOLDER_regex=regex` can be set, with the `_regex_` part matching _value_ more precisely.
[[plist-autoplist-regex-ex1]]
.Using PLIST_SUB with Regular Expressions
[example]
====
Perl ports can install architecture dependent files in a specific tree. On FreeBSD to ease porting, this tree is called `mach`. For example, a port that installs a file whose path contains `mach` could have that part of the path string replaced with the wrong values. Consider this [.filename]#Makefile#:
[.programlisting]
....
PORTNAME= Machine-Build
DISTVERSION= 1
CATEGORIES= devel perl5
MASTER_SITES= CPAN
PKGNAMEPREFIX= p5-
MAINTAINER= perl@FreeBSD.org
COMMENT= Building machine
USES= perl5
USE_PERL5= configure
PLIST_SUB= PERL_ARCH=mach
....
The files installed by the port are:
[.programlisting]
....
/usr/local/bin/machine-build
/usr/local/lib/perl5/site_perl/man/man1/machine-build.1.gz
/usr/local/lib/perl5/site_perl/man/man3/Machine::Build.3.gz
/usr/local/lib/perl5/site_perl/Machine/Build.pm
/usr/local/lib/perl5/site_perl/mach/5.20/Machine/Build/Build.so
....
Running `make makeplist` wrongly generates:
[.programlisting]
....
bin/%%PERL_ARCH%%ine-build
%%PERL5_MAN1%%/%%PERL_ARCH%%ine-build.1.gz
%%PERL5_MAN3%%/Machine::Build.3.gz
%%SITE_PERL%%/Machine/Build.pm
%%SITE_PERL%%/%%PERL_ARCH%%/%%PERL_VER%%/Machine/Build/Build.so
....
Change the `PLIST_SUB` line from the [.filename]#Makefile# to:
[.programlisting]
....
PLIST_SUB= PERL_ARCH=mach \
PERL_ARCH_regex=\bmach\b
....
Now `make makeplist` correctly generates:
[.programlisting]
....
bin/machine-build
%%PERL5_MAN1%%/machine-build.1.gz
%%PERL5_MAN3%%/Machine::Build.3.gz
%%SITE_PERL%%/Machine/Build.pm
%%SITE_PERL%%/%%PERL_ARCH%%/%%PERL_VER%%/Machine/Build/Build.so
....
====
[[plist-keywords]]
== Expanding Package List with Keywords
All keywords can also take optional arguments in parentheses. The arguments are owner, group, and mode, and they are applied to the file or directory referenced. To change the owner, group, and mode of a configuration file, use:
[.programlisting]
....
@sample(games,games,640) etc/config.sample
....
The arguments are optional. If only the group and mode need to be changed, use:
[.programlisting]
....
@sample(,games,660) etc/config.sample
....
[WARNING]
====
If a keyword is used on an crossref:makefiles[makefile-options,optional] entry, it must be added after the helper:
[.programlisting]
....
%%FOO%%@sample etc/orbit.conf.sample
....
This is because the options plist helpers are used to comment out the line, so they need to be put first. See crossref:makefiles[options_sub,`OPTIONS_SUB`] for more information.
====
[[plist-keywords-desktop-file-utils]]
=== `@desktop-file-utils`
Will run `update-desktop-database -q` after installation and deinstallation. _Never_ use directly, add crossref:uses[uses-desktop-file-utils,`USES=desktop-file-utils`] to the [.filename]#Makefile#.
[[plist-keywords-fc]]
=== `@fc` _directory_
Add a `@dir` entry for the directory passed as an argument, and run `fc-cache -fs` on that directory after installation and deinstallation.
[[plist-keywords-fcfontsdir]]
=== `@fcfontsdir` _directory_
Add a `@dir` entry for the directory passed as an argument, and run `fc-cache -fs`, `mkfontscale` and `mkfontdir` on that directory after installation and deinstallation. Additionally, on deinstallation, it removes the [.filename]#fonts.scale# and [.filename]#fonts.dir# cache files if they are empty. This keyword is equivalent to adding both <<plist-keywords-fc,`@fc` _directory_>> and <<plist-keywords-fontsdir,`@fontsdir` _directory_>>.
[[plist-keywords-fontsdir]]
=== `@fontsdir` _directory_
Add a `@dir` entry for the directory passed as an argument, and run `mkfontscale` and `mkfontdir` on that directory after installation and deinstallation. Additionally, on deinstallation, it removes the [.filename]#fonts.scale# and [.filename]#fonts.dir# cache files if they are empty.
[[plist-keywords-glib-schemas]]
=== `@glib-schemas`
Runs `glib-compile-schemas` on installation and deinstallation.
[[plist-keywords-info]]
=== `@info` _file_
Add the file passed as an argument to the plist, and update the info document index on installation and deinstallation. Additionally, remove the index if it is empty on deinstallation. This should never be used manually, but always through `INFO`. See crossref:makefiles[makefile-info,Info Files] for more information.
[[plist-keywords-kld]]
=== `@kld` _directory_
Runs `kldxref` on the directory on installation and deinstallation. Additionally, on deinstallation, it will remove the directory if empty.
[[plist-keywords-rmtry]]
=== `@rmtry` _file_
Will remove the file on deinstallation, and not give an error if the file is not there.
[[plist-keywords-sample]]
=== `@sample` _file_ [_file_]
This is used to handle installation of configuration files, through example files bundled with the package. The "actual", non-sample, file is either the second filename, if present, or the first filename without the [.filename]#.sample# extension.
This does three things. First, add the first file passed as argument, the sample file, to the plist. Then, on installation, if the actual file is not found, copy the sample file to the actual file. And finally, on deinstallation, remove the actual file if it has not been modified. See <<plist-config>> for more information.
[[plist-keywords-shared-mime-info]]
=== `@shared-mime-info` _directory_
Runs `update-mime-database` on the directory on installation and deinstallation.
[[plist-keywords-shell]]
=== `@shell` _file_
Add the file passed as argument to the plist.
On installation, add the full path to _file_ to [.filename]#/etc/shells#, while making sure it is not added twice. On deinstallation, remove it from [.filename]#/etc/shells#.
[[plist-keywords-terminfo]]
=== `@terminfo`
Do not use by itself. If the port installs [.filename]#*.terminfo# files, add crossref:uses[uses-terminfo,USES=terminfo] to its [.filename]#Makefile#.
On installation and deinstallation, if `tic` is present, refresh [.filename]#${PREFIX}/share/misc/terminfo.db# from the [.filename]#*.terminfo# files in [.filename]#${PREFIX}/share/misc#.
[[plist-keywords-base]]
=== Base Keywords
There are a few keywords that are hardcoded, and documented in man:pkg-create[8]. For the sake of completeness, they are also documented here.
[[plist-keywords-base-empty]]
==== `@` [_file_]
The empty keyword is a placeholder to use when the file's owner, group, or mode need to be changed. For example, to set the group of the file to `games` and add the setgid bit, add:
[.programlisting]
....
@(,games,2755) sbin/daemon
....
[[plist-keywords-base-exec]]
==== `@preexec` _command_, `@postexec` _command_, `@preunexec` _command_, `@postunexec` _command_
Execute _command_ as part of the package installation or deinstallation process.
`@preexec` _command_::
Execute _command_ as part of the [.filename]#pre-install# scripts.
`@postexec` _command_::
Execute _command_ as part of the [.filename]#post-install# scripts.
`@preunexec` _command_::
Execute _command_ as part of the [.filename]#pre-deinstall# scripts.
`@postunexec` _command_::
Execute _command_ as part of the [.filename]#post-deinstall# scripts.
If _command_ contains any of these sequences somewhere in it, they are expanded inline. For these examples, assume that `@cwd` is set to [.filename]#/usr/local# and the last extracted file was [.filename]#bin/emacs#.
`%F`::
Expand to the last filename extracted (as specified). In the example case [.filename]#bin/emacs#.
`%D`::
Expand to the current directory prefix, as set with `@cwd`. In the example case [.filename]#/usr/local#.
`%B`::
Expand to the basename of the fully qualified filename, that is, the current directory prefix plus the last filespec, minus the trailing filename. In the example case, that would be [.filename]#/usr/local/bin#.
`%f`::
Expand to the filename part of the fully qualified name, or the converse of `%B`. In the example case, [.filename]#emacs#.
[IMPORTANT]
====
These keywords are here to help you set up the package so that it is as ready to use as possible. They _must not_ be abused to start services, stop services, or run any other commands that will modify the currently running system.
====
[[plist-keywords-base-mode]]
==== `@mode` _mode_
Set default permission for all subsequently extracted files to _mode_. Format is the same as that used by man:chmod[1]. Use without an argument to set back to default permissions (the mode of the file while being packed).
[IMPORTANT]
====
This must be a numeric mode, like `644`, `4755`, or `600`. It cannot be a relative mode like `u+s`.
====
[[plist-keywords-base-owner]]
==== `@owner` _user_
Set default ownership for all subsequent files to _user_. Use without an argument to set back to default ownership (`root`).
[[plist-keywords-base-group]]
==== `@group` _group_
Set default group ownership for all subsequent files to _group_. Use without an argument to set back to default group ownership (`wheel`).
[[plist-keywords-base-comment]]
==== `@comment` _string_
This line is ignored when packing.
[[plist-keywords-base-dir]]
==== `@dir` _directory_
Declare directory name. By default, directories created under `PREFIX` by a package installation are automatically removed. Use this when an empty directory under `PREFIX` needs to be created, or when the directory needs to have a non-default owner, group, or mode. Directories outside of `PREFIX` need to be registered. For example, [.filename]#/var/db/${PORTNAME}# needs to have a `@dir` entry whereas [.filename]#${PREFIX}/share/${PORTNAME}# does not if it contains files or uses the default owner, group, and mode.
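For example, a hypothetical port keeping its state under [.filename]#/var/db# could register:

[.programlisting]
....
@dir /var/db/foo
@dir(foo,foo,750) /var/db/foo/private
....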
[[plist-keywords-base-exec-deprecated]]
==== `@exec` _command_, `@unexec` _command_ (Deprecated)
Execute _command_ as part of the installation or deinstallation process. Please use <<plist-keywords-base-exec>> instead.
[[plist-keywords-base-dirrm]]
==== `@dirrm` _directory_ (Deprecated)
Declare directory name to be deleted at deinstall time. By default, directories created under `PREFIX` by a package installation are deleted when the package is deinstalled.
[[plist-keywords-base-dirrmtry]]
==== `@dirrmtry` _directory_ (Deprecated)
Declare directory name to be removed, as for `@dirrm`, but does not issue a warning if the directory cannot be removed.
[[plist-keywords-creating-new]]
=== Creating New Keywords
Package list files can be extended by keywords that are defined in the [.filename]#${PORTSDIR}/Keywords# directory. The settings for each keyword are stored in a UCL file named [.filename]#keyword.ucl#. The file must contain at least one of these sections:
* `attributes`
* `action`
* `pre-install`
* `post-install`
* `pre-deinstall`
* `post-deinstall`
* `pre-upgrade`
* `post-upgrade`
[[plist-keywords-attributes]]
==== `attributes`
Changes the owner, group, or mode used by the keyword. Contains an associative array where the possible keys are `owner`, `group`, and `mode`. The values are, respectively, a user name, a group name, and a file mode. For example:
[.programlisting]
....
attributes: { owner: "games", group: "games", mode: 0555 }
....
[[plist-keywords-action]]
==== `action`
Defines what happens to the keyword's parameter. Contains an array where the possible values are:
`setprefix`::
Set the prefix for the next plist entries.
`dir`::
Register a directory to be created on install and removed on deinstall.
`dirrm`::
Register a directory to be deleted on deinstall. Deprecated.
`dirrmtry`::
Register a directory to try to delete on deinstall. Deprecated.
`file`::
Register a file.
`setmode`::
Set the mode for the next plist entries.
`setowner`::
Set the owner for the next plist entries.
`setgroup`::
Set the group for the next plist entries.
`comment`::
Does not do anything, equivalent to not entering an `action` section.
`ignore_next`::
Ignore the next entry in the plist.
[[plist-keywords-arguments]]
==== `arguments`
If set to `true`, adds argument handling, splitting the whole line, `%@`, into numbered arguments, `%1`, `%2`, and so on. For example, for this line:
[.programlisting]
....
@foo some.content other.content
....
`%1` and `%2` will contain:
[.programlisting]
....
some.content
other.content
....
It also affects how the <<plist-keywords-action,`action`>> entry works. When there is more than one argument, the argument number must be specified. For example:
[.programlisting]
....
actions: [file(1)]
....
[[plist-keywords-pre-post]]
==== `pre-install`, `post-install`, `pre-deinstall`, `post-deinstall`, `pre-upgrade`, `post-upgrade`
These keywords contain a man:sh[1] script to be executed before or after installation, deinstallation, or upgrade of the package. In addition to the usual `@exec %_foo_` placeholders described in <<plist-keywords-base-exec>>, there is a new one, `%@`, which represents the argument of the keyword.
[[plist-keywords-examples]]
==== Custom Keyword Examples
[[plist-keywords-fc-example]]
.Example of a `@dirrmtryecho` Keyword
[example]
====
This keyword does two things: it adds a `@dirrmtry _directory_` line to the packing list, and it echoes the fact that the directory is removed when deinstalling the package.
[.programlisting]
....
actions: [dirrmtry]
post-deinstall: <<EOD
echo "Directory %D/%@ removed."
EOD
....
====
[[plist-keywords-sample-example]]
.Real Life Example, How `@sample` is Implemented
[example]
====
This keyword does three things. It adds the first _filename_ passed as an argument to `@sample` to the packing list, it adds to the `post-install` script instructions to copy the sample to the actual configuration file if it does not already exist, and it adds to the `post-deinstall` instructions to remove the configuration file if it has not been modified.
[.programlisting]
....
actions: [file(1)]
arguments: true
post-install: <<EOD
case "%1" in
/*) sample_file="%1" ;;
*) sample_file="%D/%1" ;;
esac
target_file="${sample_file%.sample}"
set -- %@
if [ $# -eq 2 ]; then
target_file=${2}
fi
case "${target_file}" in
/*) target_file="${target_file}" ;;
*) target_file="%D/${target_file}" ;;
esac
if ! [ -f "${target_file}" ]; then
/bin/cp -p "${sample_file}" "${target_file}" && \
/bin/chmod u+w "${target_file}"
fi
EOD
pre-deinstall: <<EOD
case "%1" in
/*) sample_file="%1" ;;
*) sample_file="%D/%1" ;;
esac
target_file="${sample_file%.sample}"
set -- %@
if [ $# -eq 2 ]; then
set -- %@
target_file=${2}
fi
case "${target_file}" in
/*) target_file="${target_file}" ;;
*) target_file="%D/${target_file}" ;;
esac
if cmp -s "${target_file}" "${sample_file}"; then
rm -f "${target_file}"
else
echo "You may need to manually remove ${target_file} if it is no longer needed."
fi
EOD
....
====
diff --git a/documentation/content/en/books/porters-handbook/porting-dads/_index.adoc b/documentation/content/en/books/porters-handbook/porting-dads/_index.adoc
index abb3fcbdd9..03e9c7ff29 100644
--- a/documentation/content/en/books/porters-handbook/porting-dads/_index.adoc
+++ b/documentation/content/en/books/porters-handbook/porting-dads/_index.adoc
@@ -1,465 +1,466 @@
---
title: Chapter 13. Dos and Don'ts
prev: books/porters-handbook/security
next: books/porters-handbook/porting-samplem
+description: A list of common dos and don'ts that are encountered during the FreeBSD porting process
---
[[porting-dads]]
= Dos and Don'ts
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 13
:freebsd-version: __FreeBSD_version
:freebsd: __FreeBSD__
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[dads-intro]]
== Introduction
Here is a list of common dos and don'ts that are encountered during the porting process. Check the port against this list, but also check ports in the https://bugs.FreeBSD.org/search/[PR database] that others have submitted. Submit any comments on ports as described in link:{contributing}#CONTRIB-GENERAL[Bug Reports and General Commentary]. Checking ports in the PR database will both make it faster for us to commit them, and prove that you know what you are doing.
[[porting-wrkdir]]
== `WRKDIR`
Do not write anything to files outside `WRKDIR`. `WRKDIR` is the only place that is guaranteed to be writable during the port build (see link:{handbook}#PORTS-CD[ installing ports from a CDROM] for an example of building ports from a read-only tree). The [.filename]##pkg-*## files can be modified by crossref:pkg-files[pkg-names,redefining a variable] rather than overwriting the file.
[[porting-wrkdirprefix]]
== `WRKDIRPREFIX`
Make sure the port honors `WRKDIRPREFIX`. Most ports do not have to worry about this. In particular, when referring to a `WRKDIR` of another port, note that the correct location is [.filename]#WRKDIRPREFIXPORTSDIR/subdir/name/work# not [.filename]#PORTSDIR/subdir/name/work# or [.filename]#.CURDIR/../../subdir/name/work# or some such.
Also, if defining `WRKDIR`, make sure to prepend `${WRKDIRPREFIX}${.CURDIR}` to it.
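For instance, a hedged sketch of a custom `WRKDIR` that honors `WRKDIRPREFIX` (the suffix is made up):

[.programlisting]
....
WRKDIR=		${WRKDIRPREFIX}${.CURDIR}/work-${ARCH}
....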
[[porting-versions]]
== Differentiating Operating Systems and OS Versions
Some code needs modifications or conditional compilation based upon what version of FreeBSD Unix it is running under. The preferred way to tell FreeBSD versions apart is to use the `{freebsd-version}` and `{freebsd}` macros defined in https://cgit.freebsd.org/src/tree/sys/sys/param.h[sys/param.h]. If this file is not included, add the code,
[.programlisting]
....
#include <sys/param.h>
....
to the proper place in the [.filename]#.c# file.
`{freebsd}` is defined in all versions of FreeBSD as their major version number. For example, in FreeBSD 9.x, `{freebsd}` is defined to be `9`.
[.programlisting]
....
#if __FreeBSD__ >= 9
# if __FreeBSD_version >= 901000
/* 9.1+ release specific code here */
# endif
#endif
....
A complete list of `{freebsd-version}` values is available in crossref:versions[versions,__FreeBSD_version Values].
[[dads-after-port-mk]]
== Writing Something After bsd.port.mk
Do not write anything after the `.include <bsd.port.mk>` line. It usually can be avoided by including [.filename]#bsd.port.pre.mk# somewhere in the middle of the [.filename]#Makefile# and [.filename]#bsd.port.post.mk# at the end.
[IMPORTANT]
====
Include either the [.filename]#bsd.port.pre.mk#/[.filename]#bsd.port.post.mk# pair or [.filename]#bsd.port.mk# only; do not mix these two usages.
====
[.filename]#bsd.port.pre.mk# only defines a few variables, which can be used in tests in the [.filename]#Makefile#; [.filename]#bsd.port.post.mk# defines the rest.
Here are some important variables defined in [.filename]#bsd.port.pre.mk# (this is not the complete list; please read [.filename]#bsd.port.mk# for all of them).
[.informaltable]
[cols="1,1", frame="none", options="header"]
|===
| Variable
| Description
|`ARCH`
|The architecture as returned by `uname -m` (for example, `i386`)
|`OPSYS`
|The operating system type, as returned by `uname -s` (for example, `FreeBSD`)
|`OSREL`
|The release version of the operating system (for example, `2.1.5` or `2.2.7`)
|`OSVERSION`
|The numeric version of the operating system; the same as crossref:versions[versions,`{freebsd-version}`].
|`LOCALBASE`
|The base of the "local" tree (for example, `/usr/local`)
|`PREFIX`
|Where the port installs itself (see crossref:testing[porting-prefix,more on `PREFIX`]).
|===
[NOTE]
====
When `MASTERDIR` is needed, always define it before including [.filename]#bsd.port.pre.mk#.
====
Here are some examples of things that can be added after [.filename]#bsd.port.pre.mk#:
[.programlisting]
....
# no need to compile lang/perl5 if perl5 is already in system
.if ${OSVERSION} > 300003
BROKEN= perl is in system
.endif
....
Always use a tab instead of spaces after `BROKEN=`.
[[dads-sh-exec]]
== Use the `exec` Statement in Wrapper Scripts
If the port installs a shell script whose purpose is to launch another program, and if launching that program is the last action performed by the script, make sure to launch the program using the `exec` statement, for instance:
[.programlisting]
....
#!/bin/sh
exec %%LOCALBASE%%/bin/java -jar %%DATADIR%%/foo.jar "$@"
....
The `exec` statement replaces the shell process with the specified program. If `exec` is omitted, the shell process remains in memory while the program is executing, and needlessly consumes system resources.
[[dads-rational]]
== Do Things Rationally
The [.filename]#Makefile# should do things in a simple and reasonable manner. Making it a couple of lines shorter or more readable is always better. Examples include using a make `.if` construct instead of a shell `if` construct, not redefining `do-extract` if redefining `EXTRACT*` is enough, and using `GNU_CONFIGURE` instead of `CONFIGURE_ARGS += --prefix=${PREFIX}`.
If a lot of new code is needed to do something, there may already be an implementation of it in [.filename]#bsd.port.mk#. While it is hard to read, [.filename]#bsd.port.mk# already provides shorthand solutions for a great many seemingly-hard problems.
[[dads-cc]]
== Respect Both `CC` and `CXX`
The port must respect both `CC` and `CXX`. What we mean by this is that the port must not set the values of these variables absolutely, overriding existing values; instead, it may append whatever values it needs to the existing values. This is so that build options that affect all ports can be set globally.
If the port does not respect these variables, please add `NO_PACKAGE=ignores either cc or cxx` to the [.filename]#Makefile#.
Here is an example of a [.filename]#Makefile# respecting both `CC` and `CXX`. Note the `?=`:
[.programlisting]
....
CC?= gcc
....
[.programlisting]
....
CXX?= g++
....
Here is an example which respects neither `CC` nor `CXX`:
[.programlisting]
....
CC= gcc
....
[.programlisting]
....
CXX= g++
....
Both `CC` and `CXX` can be defined on FreeBSD systems in [.filename]#/etc/make.conf#. The first example defines a value if it was not previously set in [.filename]#/etc/make.conf#, preserving any system-wide definitions. The second example clobbers anything previously defined.
[[dads-cflags]]
== Respect `CFLAGS`
The port must respect `CFLAGS`. What we mean by this is that the port must not set the value of this variable absolutely, overriding the existing value. Instead, it may append whatever values it needs to the existing value. This is so that build options that affect all ports can be set globally.
If it does not, please add `NO_PACKAGE=ignores cflags` to the [.filename]#Makefile#.
Here is an example of a [.filename]#Makefile# respecting `CFLAGS`. Note the `+=`:
[.programlisting]
....
CFLAGS+= -Wall -Werror
....
Here is an example which does not respect `CFLAGS`:
[.programlisting]
....
CFLAGS= -Wall -Werror
....
`CFLAGS` is defined on FreeBSD systems in [.filename]#/etc/make.conf#. The first example appends additional flags to `CFLAGS`, preserving any system-wide definitions. The second example clobbers anything previously defined.
Remove optimization flags from the third party [.filename]##Makefile##s. The system `CFLAGS` contains system-wide optimization flags. An example from an unmodified [.filename]#Makefile#:
[.programlisting]
....
CFLAGS= -O3 -funroll-loops -DHAVE_SOUND
....
Using system optimization flags, the [.filename]#Makefile# would look similar to this example:
[.programlisting]
....
CFLAGS+= -DHAVE_SOUND
....
[[dads-verbose-logs]]
== Verbose Build Logs
Make the port build system display all commands executed during the build stage. Complete build logs are crucial to debugging port problems.
Non-informative build log example (bad):
[.programlisting]
....
CC source1.o
CC source2.o
CCLD someprogram
....
Verbose build log example (good):
[.programlisting]
....
cc -O2 -pipe -I/usr/local/include -c -o source1.o source1.c
cc -O2 -pipe -I/usr/local/include -c -o source2.o source2.c
cc -o someprogram source1.o source2.o -L/usr/local/lib -lsomelib
....
Some build systems such as CMake, ninja, and GNU configure are set up for verbose logging by the ports framework. In other cases, ports might need individual tweaks.
[[dads-feedback]]
== Feedback
Do send applicable changes and patches to the upstream maintainer for inclusion in the next release of the code. This makes updating to the next release that much easier.
[[dads-readme]]
== README.html
[.filename]#README.html# is not part of the port, but generated by `make readme`. Do not include this file in patches or commits.
[NOTE]
====
If `make readme` fails, make sure that the default value of `ECHO_MSG` has not been modified by the port.
====
[[dads-noinstall]]
== Marking a Port Not Installable with `BROKEN`, `FORBIDDEN`, or `IGNORE`
In certain cases, users must be prevented from installing a port. There are several variables that can be used in a port's [.filename]#Makefile# to tell the user that the port cannot be installed. The value of these make variables will be the reason that is shown to users for why the port refuses to install itself. Please use the correct make variable. Each variable conveys radically different meanings, both to users and to automated systems that depend on [.filename]##Makefile##s, such as crossref:keeping-up[build-cluster,the ports build cluster], crossref:keeping-up[freshports,FreshPorts], and crossref:keeping-up[portsmon,portsmon].
[[dads-noinstall-variables]]
=== Variables
* `BROKEN` is reserved for ports that currently do not compile, install, deinstall, or run correctly. Use it for ports where the problem is believed to be temporary.
+
If instructed, the build cluster will still attempt to build them to see if the underlying problem has been resolved. (However, in general, the cluster is run without this.)
+
For instance, use `BROKEN` when a port:
** does not compile
** fails its configuration or installation process
** installs files outside of [.filename]#${PREFIX}#
** does not remove all its files cleanly upon deinstall (however, it may be acceptable, and desirable, for the port to leave user-modified files behind)
** has runtime issues on systems where it is supposed to run fine.
* `FORBIDDEN` is used for ports that contain a security vulnerability or induce grave concern regarding the security of a FreeBSD system with a given port installed (for example, a reputably insecure program or a program that provides easily exploitable services). Mark ports as `FORBIDDEN` as soon as a particular piece of software has a vulnerability and there is no released upgrade. Ideally upgrade ports as soon as possible when a security vulnerability is discovered so as to reduce the number of vulnerable FreeBSD hosts (we like being known for being secure), however sometimes there is a noticeable time gap between disclosure of a vulnerability and an updated release of the vulnerable software. Do not mark a port `FORBIDDEN` for any reason other than security.
* `IGNORE` is reserved for ports that must not be built for some other reason. Use it for ports where the problem is believed to be structural. The build cluster will not, under any circumstances, build ports marked as `IGNORE`. For instance, use `IGNORE` when a port:
** does not work on the installed version of FreeBSD
** has a distfile which may not be automatically fetched due to licensing restrictions
** does not work with some other currently installed port (for instance, the port depends on package:www/apache20[] but package:www/apache22[] is installed)
+
[NOTE]
====
If a port would conflict with a currently installed port (for example, if they install a file in the same place that performs a different function), crossref:makefiles[conflicts,use `CONFLICTS` instead]. `CONFLICTS` will set `IGNORE` by itself.
====
[[dads-noinstall-notes]]
=== Implementation Notes
Do not quote the values of `BROKEN`, `IGNORE`, and related variables. Due to the way the information is shown to the user, the wording of messages for each variable differ:
[.programlisting]
....
BROKEN= fails to link with base -lcrypto
....
[.programlisting]
....
IGNORE= unsupported on recent versions
....
resulting in this output from `make describe`:
[.programlisting]
....
===> foobar-0.1 is marked as broken: fails to link with base -lcrypto.
....
[.programlisting]
....
===> foobar-0.1 is unsupported on recent versions.
....
[[dads-arch]]
== Architectural Considerations
[[dads-arch-general]]
=== General Notes on Architectures
FreeBSD runs on many more processor architectures than just the well-known x86-based ones. Some ports have constraints which are particular to one or more of these architectures.
For the list of supported architectures, run:
[.programlisting]
....
cd ${SRCDIR}; make targets
....
The values are shown in the form `TARGET`/`TARGET_ARCH`. The ports read-only makevar `ARCH` is set based on the value of `TARGET_ARCH`. Port [.filename]##Makefile##s should test the value of this Makevar.
[[dads-arch-neutral]]
=== Marking a Port as Architecture Neutral
Ports that do not have any architecture-dependent files or requirements are identified by setting `NO_ARCH=yes`.
[NOTE]
====
`NO_ARCH` is meant to indicate that there is no need to build a package for each of the supported architectures. The goal is to reduce the amount of resources spent on building and distributing the packages such as network bandwidth and disk space on mirrors and on distribution media. Currently, however, our package infrastructure (e.g., package managers, mirrors, and package builders) is not set up to fully benefit from `NO_ARCH`.
====
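For example, a hypothetical port that only installs scripts and data files might simply add:
[.programlisting]
....
NO_ARCH=	yes
....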
[[dads-arch-ignore]]
=== Marking a Port as Ignored Only On Certain Architectures
* To mark a port as ``IGNORE``d only on certain architectures, there are two other convenience variables that will automatically set `IGNORE`: `ONLY_FOR_ARCHS` and `NOT_FOR_ARCHS`. Examples:
+
[.programlisting]
....
ONLY_FOR_ARCHS= i386 amd64
....
+
[.programlisting]
....
NOT_FOR_ARCHS= ia64 sparc64
....
+
A custom `IGNORE` message can be set using `ONLY_FOR_ARCHS_REASON` and `NOT_FOR_ARCHS_REASON`. Per-architecture entries are possible with `ONLY_FOR_ARCHS_REASON_ARCH` and `NOT_FOR_ARCHS_REASON_ARCH` (see the sketch after this list).
[[dads-arch-i386]]
* If a port fetches i386 binaries and installs them, set `IA32_BINARY_PORT`. If this variable is set, [.filename]#/usr/lib32# must be present for IA32 versions of libraries and the kernel must support IA32 compatibility. If one of these two dependencies is not satisfied, `IGNORE` will be set automatically.
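As a sketch of the architecture variables mentioned above (the reasons shown are made up, not taken from a real port), a port limited to x86 might use:
[.programlisting]
....
ONLY_FOR_ARCHS=			amd64 i386
ONLY_FOR_ARCHS_REASON=		relies on x86 assembly code
ONLY_FOR_ARCHS_REASON_i386=	the assembly code has not been adapted for 32-bit x86
....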
[[dads-arch-cluster]]
=== Cluster-Specific Considerations
* Some ports attempt to tune themselves to the exact machine they are being built on by specifying `-march=native` to the compiler. This should be avoided: either list it under an off-by-default option, or delete it entirely.
+
Otherwise, the default package produced by the build cluster might not run on every single machine of that `ARCH`.
[[dads-deprecated]]
== Marking a Port for Removal with `DEPRECATED` or `EXPIRATION_DATE`
Do remember that `BROKEN` and `FORBIDDEN` are to be used as a temporary resort if a port is not working. Permanently broken ports will be removed from the tree entirely.
When it makes sense to do so, users can be warned about a pending port removal with `DEPRECATED` and `EXPIRATION_DATE`. The former is a string stating why the port is scheduled for removal; the latter is a string in ISO 8601 format (YYYY-MM-DD). Both will be shown to the user.
It is possible to set `DEPRECATED` without an `EXPIRATION_DATE` (for instance, recommending a newer version of the port), but the converse does not make any sense.
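For instance, a hypothetical port slated for removal could carry both variables (the values shown are made up):
[.programlisting]
....
DEPRECATED=	Abandoned upstream, consider using games/newgame instead
EXPIRATION_DATE=	2021-12-31
....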
There is no set policy on how much notice to give. Current practice seems to be one month for security-related issues and two months for build issues. This also gives any interested committers a little time to fix the problems.
[[dads-dot-error]]
== Avoid Use of the `.error` Construct
The correct way for a [.filename]#Makefile# to signal that the port cannot be installed due to some external factor (for instance, the user has specified an illegal combination of build options) is to set a non-blank value to `IGNORE`. This value will be formatted and shown to the user by `make install`.
It is a common mistake to use `.error` for this purpose. The problem with this is that many automated tools that work with the ports tree will fail in this situation. The most common occurrence of this is seen when trying to build [.filename]#/usr/ports/INDEX# (see crossref:testing[make-describe,Running `make describe`]). However, even more trivial commands such as `make maintainer` also fail in this scenario. This is not acceptable.
[[dot-error-breaks-index]]
.How to Avoid Using `.error`
[example]
====
The first of the next two [.filename]#Makefile# snippets will cause `make index` to fail, while the second one will not:
[.programlisting]
....
.error "option is not supported"
....
[.programlisting]
....
IGNORE=option is not supported
....
====
[[dads-sysctl]]
== Usage of sysctl
The usage of [.filename]#sysctl# is discouraged except in targets. This is because the evaluation of any ``makevar``s, such as happens during `make index`, then has to run the command, further slowing down that process.
Only use man:sysctl[8] through `SYSCTL`, as it contains the fully qualified path and can be overridden, if one has such a special need.
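For example, a hypothetical target could query the running kernel through `SYSCTL` without affecting makevar evaluation:
[.programlisting]
....
post-install:
	@${ECHO_MSG} "kern.ostype is $$(${SYSCTL} -n kern.ostype)"
....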
[[dads-rerolling-distfiles]]
== Rerolling Distfiles
Sometimes the authors of software change the content of released distfiles without changing the file's name. Verify that the changes are official and have been performed by the author. It has happened in the past that the distfile was silently altered on the download servers with the intent to cause harm or compromise end user security.
Put the old distfile aside, download the new one, unpack them and compare the content with man:diff[1]. If there is nothing suspicious, update [.filename]#distinfo#.
[IMPORTANT]
====
Be sure to summarize the differences in the PR and commit log, so that other people know that nothing bad has happened.
====
Contact the authors of the software and confirm the changes with them.
[[dads-use-posix-standards]]
== Use POSIX Standards
FreeBSD ports generally expect POSIX compliance. Some software and build systems make assumptions based on a particular operating system or environment that can cause problems when used in a port.
Do not use [.filename]#/proc# if there are any other ways of getting the information. For example, use `setprogname(argv[0])` in `main()` and then man:getprogname[3] to obtain the executable name.
Do not rely on behavior that is undocumented by POSIX.
Do not record timestamps in the critical path of the application if it also works without them. Getting timestamps may be slow, depending on the accuracy of timestamps in the OS. If timestamps are really needed, determine how precise they have to be and use an API which is documented to just deliver the needed precision.
A number of simple syscalls (for example man:gettimeofday[2], man:getpid[2]) are much faster on Linux(R) than on any other operating system due to caching and the vsyscall performance optimizations. Do not rely on them being cheap in performance-critical applications. In general, try hard to avoid syscalls if possible.
Do not rely on Linux(R)-specific socket behavior. In particular, default socket buffer sizes are different (call man:setsockopt[2] with `SO_SNDBUF` and `SO_RCVBUF`), and while Linux(R)'s man:send[2] blocks when the socket buffer is full, FreeBSD's will fail and set `ENOBUFS` in errno.
If relying on non-standard behavior is required, encapsulate it properly into a generic API, do a check for the behavior in the configure stage, and stop if it is missing.
Check the https://www.freebsd.org/cgi/man.cgi[man pages] to see if the function used is a POSIX interface (in the "STANDARDS" section of the man page).
Do not assume that [.filename]#/bin/sh# is bash. Ensure that a command line passed to man:system[3] will work with a POSIX compliant shell.
A list of common bashisms is available https://wiki.ubuntu.com/DashAsBinSh[here].
Check that headers are included in the POSIX or man page recommended way. For example, [.filename]#sys/types.h# is often forgotten, which is not as much of a problem for Linux(R) as it is for FreeBSD.
[[dads-misc]]
== Miscellanea
Always double-check [.filename]#pkg-descr# and [.filename]#pkg-plist#. If reviewing a port and a better wording can be achieved, do so.
Please do not add more copies of the GNU General Public License to our system.
Please be careful to note any legal issues! Do not let us illegally distribute software!
diff --git a/documentation/content/en/books/porters-handbook/porting-samplem/_index.adoc b/documentation/content/en/books/porters-handbook/porting-samplem/_index.adoc
index 0307f18a51..6cc090c65c 100644
--- a/documentation/content/en/books/porters-handbook/porting-samplem/_index.adoc
+++ b/documentation/content/en/books/porters-handbook/porting-samplem/_index.adoc
@@ -1,119 +1,120 @@
---
title: Chapter 14. A Sample Makefile
prev: books/porters-handbook/porting-dads
next: books/porters-handbook/order
+description: A sample Makefile that can be used to create a new FreeBSD Port
---
[[porting-samplem]]
= A Sample Makefile
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 14
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
Here is a sample [.filename]#Makefile# that can be used to create a new port. Make sure to remove all the extra comments (ones between brackets).
The format shown is the recommended one for ordering variables, empty lines between sections, and so on. This format is designed so that the most important information is easy to locate. We recommend using crossref:quick-porting[porting-portlint,portlint] to check the [.filename]#Makefile#.
[.programlisting]
....
[section to describe the port itself and the master site - PORTNAME
and PORTVERSION or the DISTVERSION* variables are always first,
followed by CATEGORIES, and then MASTER_SITES, which can be followed
by MASTER_SITE_SUBDIR. PKGNAMEPREFIX and PKGNAMESUFFIX, if needed,
will be after that. Then comes DISTNAME, EXTRACT_SUFX and/or
DISTFILES, and then EXTRACT_ONLY, as necessary.]
PORTNAME= xdvi
DISTVERSION= 18.2
CATEGORIES= print
[do not forget the trailing slash ("/")!
if not using MASTER_SITE_* macros]
MASTER_SITES= ${MASTER_SITE_XCONTRIB}
MASTER_SITE_SUBDIR= applications
PKGNAMEPREFIX= ja-
DISTNAME= xdvi-pl18
[set this if the source is not in the standard ".tar.gz" form]
EXTRACT_SUFX= .tar.Z
[section for distributed patches -- can be empty]
PATCH_SITES= ftp://ftp.sra.co.jp/pub/X11/japanese/
PATCHFILES= xdvi-18.patch1.gz xdvi-18.patch2.gz
[If the distributed patches were not made relative to ${WRKSRC},
this may need to be tweaked]
PATCH_DIST_STRIP= -p1
[maintainer; *mandatory*! This is the person who is volunteering to
handle port updates, build breakages, and to whom users can direct
questions and bug reports. To keep the quality of the Ports Collection
as high as possible, we do not accept new ports that are assigned to
"ports@FreeBSD.org".]
MAINTAINER= asami@FreeBSD.org
COMMENT= DVI Previewer for the X Window System
[license -- should not be empty]
LICENSE= BSD2CLAUSE
LICENSE_FILE= ${WRKSRC}/LICENSE
[dependencies -- can be empty]
RUN_DEPENDS= gs:print/ghostscript
[If it requires GNU make, not /usr/bin/make, to build...]
USES= gmake
[If it is an X application and requires "xmkmf -a" to be run...]
USES= imake
[this section is for other standard bsd.port.mk variables that do not
belong to any of the above]
[If it asks questions during configure, build, install...]
IS_INTERACTIVE= yes
[If it extracts to a directory other than ${DISTNAME}...]
WRKSRC= ${WRKDIR}/xdvi-new
[If it requires a "configure" script generated by GNU autoconf to be run]
GNU_CONFIGURE= yes
[et cetera.]
[If it requires options, this section is for options]
OPTIONS_DEFINE= DOCS EXAMPLES FOO
OPTIONS_DEFAULT= FOO
[If options will change the files in plist]
OPTIONS_SUB=yes
FOO_DESC= Enable foo support
FOO_CONFIGURE_ENABLE= foo
[non-standard variables to be used in the rules below]
MY_FAVORITE_RESPONSE= "yeah, right"
[then the special rules, in the order they are called]
pre-fetch:
	i go fetch something, yeah
post-patch:
	i need to do something after patch, great
pre-install:
	and then some more stuff before installing, wow
[and then the epilogue]
.include <bsd.port.mk>
....
diff --git a/documentation/content/en/books/porters-handbook/porting-why/_index.adoc b/documentation/content/en/books/porters-handbook/porting-why/_index.adoc
index 66ee06036a..87ee609d9d 100644
--- a/documentation/content/en/books/porters-handbook/porting-why/_index.adoc
+++ b/documentation/content/en/books/porters-handbook/porting-why/_index.adoc
@@ -1,34 +1,35 @@
---
title: Chapter 1. Introduction
prev: books/porters-handbook/
next: books/porters-handbook/new-port
+description: Why port a program to the FreeBSD Ports Collection
---
[[why-port]]
= Introduction
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 1
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
The FreeBSD Ports Collection is the way almost everyone installs applications ("ports") on FreeBSD. Like everything else about FreeBSD, it is primarily a volunteer effort. It is important to keep this in mind when reading this document.
In FreeBSD, anyone may submit a new port, or volunteer to maintain an existing unmaintained port. No special commit privilege is needed.
diff --git a/documentation/content/en/books/porters-handbook/quick-porting/_index.adoc b/documentation/content/en/books/porters-handbook/quick-porting/_index.adoc
index 5f587c54e9..95c4e3efe3 100644
--- a/documentation/content/en/books/porters-handbook/quick-porting/_index.adoc
+++ b/documentation/content/en/books/porters-handbook/quick-porting/_index.adoc
@@ -1,276 +1,277 @@
---
title: Chapter 3. Quick Porting
prev: books/porters-handbook/new-port
next: books/porters-handbook/slow-porting
+description: How to quickly create a new FreeBSD Port
---
[[quick-porting]]
= Quick Porting
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 3
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
This section describes how to quickly create a new port. For applications where this quick method is not adequate, the full "Slow Porting" process is described in crossref:slow-porting[slow-porting,Slow Porting].
First, get the original tarball and put it into `DISTDIR`, which defaults to [.filename]#/usr/ports/distfiles#.
[NOTE]
====
These steps assume that the software compiled out-of-the-box. In other words, absolutely no changes were required for the application to work on a FreeBSD system. If anything had to be changed, refer to crossref:slow-porting[slow-porting,Slow Porting].
====
[NOTE]
====
It is recommended to set the `DEVELOPER` man:make[1] variable in [.filename]#/etc/make.conf# before getting into porting.
[source,shell]
....
# echo DEVELOPER=yes >> /etc/make.conf
....
This setting enables the "developer mode" that displays deprecation warnings and activates some further quality checks on calling `make`.
====
[[porting-makefile]]
== Writing the Makefile
The minimal [.filename]#Makefile# would look something like this:
[.programlisting]
....
PORTNAME= oneko
DISTVERSION= 1.1b
CATEGORIES= games
MASTER_SITES= ftp://ftp.cs.columbia.edu/archives/X11R5/contrib/
MAINTAINER= youremail@example.com
COMMENT= Cat chasing a mouse all over the screen
.include <bsd.port.mk>
....
Try to figure it out. A more detailed example is shown in the crossref:porting-samplem[porting-samplem,sample Makefile] section.
[[porting-desc]]
== Writing the Description Files
There are two description files that are required for any port, whether they actually package or not. They are [.filename]#pkg-descr# and [.filename]#pkg-plist#. Their [.filename]#pkg-# prefix distinguishes them from other files.
[[porting-pkg-descr]]
=== [.filename]#pkg-descr#
This is a longer description of the port. One to a few paragraphs concisely explaining what the port does is sufficient.
[NOTE]
====
This is _not_ a manual or an in-depth description on how to use or compile the port! _Please be careful when copying from the [.filename]#README# or manpage_. Too often they are not a concise description of the port or are in an awkward format. For example, manpages have justified spacing, which looks particularly bad with monospaced fonts.
On the other hand, the content of [.filename]#pkg-descr# must be longer than the crossref:makefiles[makefile-comment,`COMMENT`] line from the Makefile. It must explain in more depth what the port is all about.
====
A well-written [.filename]#pkg-descr# describes the port completely enough that users would not have to consult the documentation or visit the website to understand what the software does, how it can be useful, or what particularly nice features it has. Mentioning certain requirements like a graphical toolkit, heavy dependencies, runtime environment, or implementation languages helps users decide whether this port will work for them.
Include a URL to the official WWW homepage. Prepend _one_ of the websites (pick the most common one) with `WWW:` (followed by a single space) so that automated tools will work correctly. If the URI is the root of the website or directory, it must be terminated with a slash.
[NOTE]
====
If the listed webpage for a port is not available, try to search the Internet first to see if the official site moved, was renamed, or is hosted elsewhere.
====
This example shows how [.filename]#pkg-descr# looks:
[.programlisting]
....
This is a port of oneko, in which a cat chases a poor mouse all over
the screen.
:
(etc.)
WWW: http://www.oneko.org/
....
[[porting-pkg-plist]]
=== [.filename]#pkg-plist#
This file lists all the files installed by the port. It is also called the "packing list" because the package is generated by packing the files listed here. The pathnames are relative to the installation prefix (usually [.filename]#/usr/local#).
Here is a small example:
[.programlisting]
....
bin/oneko
man/man1/oneko.1.gz
lib/X11/app-defaults/Oneko
lib/X11/oneko/cat1.xpm
lib/X11/oneko/cat2.xpm
lib/X11/oneko/mouse.xpm
....
Refer to the man:pkg-create[8] manual page for details on the packing list.
[NOTE]
====
It is recommended to keep all the filenames in this file sorted alphabetically. It will make verifying changes when upgrading the port much easier.
====
[TIP]
====
Creating a packing list manually can be a very tedious task. If the port installs a large number of files, crossref:plist[plist-autoplist,creating the packing list automatically] might save time.
====
There is only one case when [.filename]#pkg-plist# can be omitted from a port. If the port installs just a handful of files, list them in `PLIST_FILES`, within the port's [.filename]#Makefile#. For instance, we could get along without [.filename]#pkg-plist# in the above [.filename]#oneko# port by adding these lines to the [.filename]#Makefile#:
[.programlisting]
....
PLIST_FILES= bin/oneko \
man/man1/oneko.1.gz \
lib/X11/app-defaults/Oneko \
lib/X11/oneko/cat1.xpm \
lib/X11/oneko/cat2.xpm \
lib/X11/oneko/mouse.xpm
....
[NOTE]
====
Usage of `PLIST_FILES` should not be abused. When looking for the origin of a file, people usually try to grep through the [.filename]#pkg-plist# files in the ports tree. Listing files in `PLIST_FILES` in the [.filename]#Makefile# makes that search more difficult.
====
[TIP]
====
If a port needs to create an empty directory, or creates directories outside of [.filename]#${PREFIX}# during installation, refer to crossref:plist[plist-dir-cleaning,Cleaning Up Empty Directories] for more information.
====
[TIP]
====
As `PLIST_FILES` is a man:make[1] variable, any entry with spaces must be quoted. For example, if using keywords described in man:pkg-create[8] and crossref:plist[plist-keywords,Expanding Package List with Keywords], the entry must be quoted.
[.programlisting]
....
PLIST_FILES= "@sample ${ETCDIR}/oneko.conf.sample"
....
====
Later we will see how [.filename]#pkg-plist# and `PLIST_FILES` can be used to fulfill crossref:plist[plist,more sophisticated tasks].
[[porting-checksum]]
== Creating the Checksum File
Just type `make makesum`. The ports framework will automatically generate [.filename]#distinfo#. Do not try to generate the file manually.
[[porting-testing]]
== Testing the Port
Make sure that the port rules do exactly what is desired, including packaging up the port. These are the important points to verify:
* [.filename]#pkg-plist# does not contain anything not installed by the port.
* [.filename]#pkg-plist# contains everything that is installed by the port.
* The port can be installed using the `install` target. This verifies that the install script works correctly.
* The port can be deinstalled properly using the `deinstall` target. This verifies that the deinstall script works correctly.
* The port only has access to network resources during the `fetch` target phase. This is important for package builders, such as package:ports-mgmt/poudriere[].
* Make sure that `make package` can be run as a normal user (that is, not as `root`). If that fails, the software may need to be patched. See also crossref:uses[uses-fakeroot,`fakeroot`] and crossref:uses[uses-uidfix,`uidfix`].
[.procedure]
.Procedure: Recommended Test Ordering
. `make stage`
. `make stage-qa`
. `make package`
. `make install`
. `make deinstall`
. `make package` (as user)
Make certain no warnings are shown in any of the stages.
Thorough automated testing can be done with package:ports-mgmt/poudriere[] from the Ports Collection, see crossref:testing[testing-poudriere,Poudriere] for more information. It maintains `jails` where all of the steps shown above can be tested without affecting the state of the host system.
[[porting-portlint]]
== Checking the Port with `portlint`
Please use `portlint` to see if the port conforms to our guidelines. The package:ports-mgmt/portlint[] program is part of the ports collection. In particular, check that the crossref:porting-samplem[porting-samplem,Makefile] is in the right shape and the crossref:porting-pkgname[porting-pkgname,package] is named appropriately.
[IMPORTANT]
====
Do not blindly follow the output of `portlint`. It is a static lint tool and sometimes gets things wrong.
====
[[porting-submitting]]
== Submitting the New Port
Before submitting the new port, read the crossref:porting-dads[porting-dads,DOs and DON'Ts] section.
Once happy with the port, the only thing remaining is to put it in the main FreeBSD ports tree and make everybody else happy about it too.
[IMPORTANT]
====
We do not need the [.filename]#work# directory or the [.filename]#pkgname.txz# package, so delete them now.
====
Next, create a man:patch[1] file. Assume the port is called `oneko` and is in the `games` category.
[[porting-submitting-diff]]
.Creating a [.filename]#.diff# for a New Port
[example]
====
Add all the files with `git add .`, then generate the diff with `git diff`. For example:
[source,shell]
....
% git add .
% git diff --staged > oneko.diff
....
[IMPORTANT]
****
To make it easier for committers to apply the patch on their working copy of the ports tree, please generate the [.filename]#.diff# from the base of your ports tree.
****
====
Submit [.filename]#oneko.diff# with the https://bugs.freebsd.org/submit/[bug submission form]. Use product "Ports & Packages", component "Individual Port(s)", and follow the guidelines shown there. Add a short description of the program to the Description field of the PR (perhaps a short version of `COMMENT`), and remember to add [.filename]#oneko.diff# as an attachment.
[NOTE]
====
Giving a good description in the summary of the problem report makes the work of port committers and triagers a lot easier. The expected format for new ports is "[NEW PORT] _category/portname short description of the port_". Using this scheme makes it easier and faster to begin the work of committing the new port.
====
After submitting the port, please be patient. The time needed to include a new port in FreeBSD can vary from a few days to a few months. The Problem Report database can be searched with a simple search form at https://bugs.freebsd.org/bugzilla/query.cgi[].
To get a listing of _open_ port PRs, select _Open_ and _Ports & Packages_ in the search form, then click btn:[Search].
After looking at the new port, we will reply if necessary, and commit it to the tree. The submitter's name will also be added to the list of link:{contributors}#contrib-additional[Additional FreeBSD Contributors] and other files.
It is also possible to submit ports using a man:shar[1] file. The following example reuses the `oneko` port from above.
.Creating a [.filename]#.shar# for a New Port
[[porting-submitting-shar]]
[example]
====
Go to the directory containing the port directory, and use `tar` to create the shar archive:
[source,shell]
....
% cd ..
% tar cf oneko.shar --format shar oneko
....
====
[.filename]#oneko.shar# can then be submitted in the same way as [.filename]#oneko.diff# above.
diff --git a/documentation/content/en/books/porters-handbook/security/_index.adoc b/documentation/content/en/books/porters-handbook/security/_index.adoc
index d2919d9774..9e5c3d7790 100644
--- a/documentation/content/en/books/porters-handbook/security/_index.adoc
+++ b/documentation/content/en/books/porters-handbook/security/_index.adoc
@@ -1,237 +1,238 @@
---
title: Chapter 12. Security
prev: books/porters-handbook/upgrading
next: books/porters-handbook/porting-dads
+description: Security instructions when making a FreeBSD Port
---
[[security]]
= Security
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 12
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[security-intro]]
== Why Security is So Important
Bugs are occasionally introduced to the software. Arguably, the most dangerous of them are those opening security vulnerabilities. From the technical viewpoint, such vulnerabilities are to be closed by exterminating the bugs that caused them. However, the policies for handling mere bugs and security vulnerabilities are very different.
A typical small bug affects only those users who have enabled some combination of options triggering the bug. The developer will eventually release a patch followed by a new version of the software, free of the bug, but the majority of users will not take the trouble of upgrading immediately because the bug has never vexed them. A critical bug that may cause data loss represents a graver issue. Nevertheless, prudent users know that a lot of possible accidents, besides software bugs, are likely to lead to data loss, and so they make backups of important data; in addition, a critical bug will be discovered really soon.
A security vulnerability is a different matter entirely. First, it may remain unnoticed for years because often it does not cause software malfunction. Second, a malicious party can use it to gain unauthorized access to a vulnerable system, to destroy or alter sensitive data; and in the worst case the user will not even notice the harm caused. Third, exposing a vulnerable system often assists attackers to break into other systems that could not be compromised otherwise. Therefore closing a vulnerability alone is not enough: notify the audience of it in the most clear and comprehensive manner, which will allow them to evaluate the danger and take appropriate action.
[[security-fix]]
== Fixing Security Vulnerabilities
While on the subject of ports and packages, a security vulnerability may initially appear in the original distribution or in the port files. In the former case, the original software developer is likely to release a patch or a new version instantly. Update the port promptly with respect to the author's fix. If the fix is delayed for some reason, either crossref:porting-dads[dads-noinstall,mark the port as `FORBIDDEN`] or introduce a patch file to the port. In the case of a vulnerable port, just fix the port as soon as possible. In either case, follow crossref:port-upgrading[port-upgrading,the standard procedure for submitting changes] unless having rights to commit it directly to the ports tree.
[IMPORTANT]
====
Being a ports committer is not enough to commit to an arbitrary port. Remember that ports usually have maintainers, who must be respected.
====
Please make sure that the port's revision is bumped as soon as the vulnerability has been closed. That is how the users who upgrade installed packages on a regular basis will see they need to run an update. Besides, a new package will be built and distributed over FTP and WWW mirrors, replacing the vulnerable one. Bump `PORTREVISION` unless `DISTVERSION` has changed in the course of correcting the vulnerability. That is, bump `PORTREVISION` if adding a patch file to the port, but do not bump it if updating the port to the latest software version, which already changes `DISTVERSION`. Please refer to the crossref:makefiles[makefile-naming-revepoch,corresponding section] for more information.
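For example, if a hypothetical port at `DISTVERSION` 1.2.3 receives a local patch closing a vulnerability, the commit would also bump the revision:
[.programlisting]
....
PORTNAME=	foo
DISTVERSION=	1.2.3
PORTREVISION=	1
....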
[[security-notify]]
== Keeping the Community Informed
[[security-notify-vuxml-db]]
=== The VuXML Database
A very important and urgent step to take as early after a security vulnerability is discovered as possible is to notify the community of port users about the jeopardy. Such notification serves two purposes. First, if the danger is really severe it will be wise to apply an instant workaround. For example, stop the affected network service or even deinstall the port completely until the vulnerability is closed. Second, a lot of users tend to upgrade installed packages only occasionally. They will know from the notification that they _must_ update the package without delay as soon as a corrected version is available.
Given the huge number of ports in the tree, a security advisory cannot be issued on each incident without creating a flood and losing the attention of the audience when it comes to really serious matters. Therefore security vulnerabilities found in ports are recorded in https://vuxml.freebsd.org/[the FreeBSD VuXML database]. The Security Officer Team members also monitor it for issues requiring their intervention.
Committers can update the VuXML database themselves, assisting the Security Officer Team and delivering crucial information to the community more quickly. Those who are not committers or have discovered an exceptionally severe vulnerability should not hesitate to contact the Security Officer Team directly, as described on the https://www.freebsd.org/security/#how[FreeBSD Security Information] page.
The VuXML database is an XML document. Its source file [.filename]#vuln.xml# is kept right inside the port package:security/vuxml[]. Therefore the file's full pathname will be [.filename]#PORTSDIR/security/vuxml/vuln.xml#. Each time a security vulnerability is discovered in a port, please add an entry for it to that file. Until familiar with VuXML, the best thing to do is to find an existing entry fitting the case at hand, then copy it and use it as a template.
[[security-notify-vuxml-intro]]
=== A Short Introduction to VuXML
The full-blown XML format is complex, and far beyond the scope of this book. However, to gain basic insight on the structure of a VuXML entry only the notion of tags is needed. XML tag names are enclosed in angle brackets. Each opening <tag> must have a matching closing </tag>. Tags may be nested. If nesting, the inner tags must be closed before the outer ones. There is a hierarchy of tags, that is, more complex rules of nesting them. This is similar to HTML. The major difference is that XML is e__X__tensible, that is, based on defining custom tags. Due to its intrinsic structure XML puts otherwise amorphous data into shape. VuXML is particularly tailored to mark up descriptions of security vulnerabilities.
Now consider a realistic VuXML entry:
[.programlisting]
....
<vuln vid="f4bc80f4-da62-11d8-90ea-0004ac98a7b9"> <.>
<topic>Several vulnerabilities found in Foo</topic> <.>
<affects>
<package>
<name>foo</name> <.>
<name>foo-devel</name>
<name>ja-foo</name>
<range><ge>1.6</ge><lt>1.9</lt></range> <.>
<range><ge>2.*</ge><lt>2.4_1</lt></range>
<range><eq>3.0b1</eq></range>
</package>
<package>
<name>openfoo</name> <.>
<range><lt>1.10_7</lt></range> <.>
<range><ge>1.2,1</ge><lt>1.3_1,1</lt></range>
</package>
</affects>
<description>
<body xmlns="http://www.w3.org/1999/xhtml">
<p>J. Random Hacker reports:</p> <.>
<blockquote
cite="http://j.r.hacker.com/advisories/1">
<p>Several issues in the Foo software may be exploited
via carefully crafted QUUX requests. These requests will
permit the injection of Bar code, mumble theft, and the
readability of the Foo administrator account.</p>
</blockquote>
</body>
</description>
<references> <.>
<freebsdsa>SA-10:75.foo</freebsdsa> <.>
<freebsdpr>ports/987654</freebsdpr> <.>
<cvename>CAN-2010-0201</cvename> <.>
<cvename>CAN-2010-0466</cvename>
<bid>96298</bid> <.>
<certsa>CA-2010-99</certsa> <.>
<certvu>740169</certvu> <.>
<uscertsa>SA10-99A</uscertsa> <.>
<uscertta>SA10-99A</uscertta> <.>
<mlist msgid="201075606@hacker.com">http://marc.theaimsgroup.com/?l=bugtraq&amp;m=203886607825605</mlist> <.>
<url>http://j.r.hacker.com/advisories/1</url> <.>
</references>
<dates>
<discovery>2010-05-25</discovery> <.>
<entry>2010-07-13</entry> <.>
<modified>2010-09-17</modified> <.>
</dates>
</vuln>
....
The tag names are supposed to be self-explanatory, so we shall take a closer look only at the fields which need to be filled in:
<.> This is the top-level tag of a VuXML entry. It has a mandatory attribute, `vid`, specifying a universally unique identifier (UUID) for this entry (in quotes). Generate a UUID for each new VuXML entry (and do not forget to substitute it for the template UUID unless writing the entry from scratch). Use man:uuidgen[1] to generate a VuXML UUID.
<.> This is a one-line description of the issue found.
<.> The names of packages affected are listed there. Multiple names can be given since several packages may be based on a single master port or software product. This may include stable and development branches, localized versions, and slave ports featuring different choices of important build-time configuration options.
<.> Affected versions of the package(s) are specified there as one or more ranges using a combination of `<lt>`, `<le>`, `<eq>`, `<ge>`, and `<gt>` elements. Check that the version ranges given do not overlap. +
In a range specification, `\*` (asterisk) denotes the smallest version number. In particular, `2.*` is less than `2.a`. Therefore an asterisk may be used for a range to match all possible `alpha`, `beta`, and `RC` versions. For instance, `<ge>2.*</ge><lt>3.*</lt>` will selectively match every `2.x` version while `<ge>2.0</ge><lt>3.0</lt>` will not since the latter misses `2.r3` and matches `3.b`. +
The above example specifies that affected are versions `1.6` and up to but not including `1.9`, versions `2.x` before `2.4_1`, and version `3.0b1`.
<.> Several related package groups (essentially, ports) can be listed in the `<affected>` section. This can be used if several software products (say FooBar, FreeBar and OpenBar) grow from the same code base and still share its bugs and vulnerabilities. Note the difference from listing multiple names within a single <package> section.
<.> The version ranges have to allow for `PORTEPOCH` and `PORTREVISION` if applicable. Please remember that according to the collation rules, a version with a non-zero `PORTEPOCH` is greater than any version without `PORTEPOCH`, for example, `3.0,1` is greater than `3.1` or even than `8.9`.
<.> This is a summary of the issue. XHTML is used in this field. At least enclosing `<p>` and `</p>` has to appear. More complex mark-up may be used, but only for the sake of accuracy and clarity: No eye candy please.
<.> This section contains references to relevant documents. As many references as apply are encouraged.
<.> This is a https://www.freebsd.org/security/#adv[FreeBSD security advisory].
<.> This is a https://www.freebsd.org/support/[FreeBSD problem report].
<.> This is a http://www.cve.mitre.org/[MITRE CVE] identifier.
<.> This is a http://www.securityfocus.com/bid[SecurityFocus Bug ID].
<.> This is a http://www.cert.org/[US-CERT] security advisory.
<.> This is a http://www.cert.org/[US-CERT] vulnerability note.
<.> This is a http://www.cert.org/[US-CERT] Cyber Security Alert.
<.> This is a http://www.cert.org/[US-CERT] Technical Cyber Security Alert.
<.> This is a URL to an archived posting in a mailing list. The attribute `msgid` is optional and may specify the message ID of the posting.
<.> This is a generic URL. Use it only if none of the other reference categories apply.
<.> This is the date when the issue was disclosed (_YYYY-MM-DD_).
<.> This is the date when the entry was added (_YYYY-MM-DD_).
<.> This is the date when any information in the entry was last modified (_YYYY-MM-DD_). New entries must not include this field. Add it when editing an existing entry.
[[security-notify-vuxml-testing]]
=== Testing Changes to the VuXML Database
This example describes a new entry for a vulnerability in the package `dropbear` that has been fixed in version `dropbear-2013.59`.
As a prerequisite, install a fresh version of the package:security/vuxml[] port.
First, check whether there already is an entry for this vulnerability. If there were such an entry, it would match the previous version of the package, `2013.58`:
[source,shell]
....
% pkg audit dropbear-2013.58
....
If none is found, add a new entry for this vulnerability.
[source,shell]
....
% cd ${PORTSDIR}/security/vuxml
% make newentry
....
Verify its syntax and formatting:
[source,shell]
....
% make validate
....
The previous command generates the [.filename]#vuln-flat.xml# file. It can also
be generated with:
[source,shell]
....
% make vuln-flat.xml
....
[NOTE]
====
At least one of these packages needs to be installed: package:textproc/libxml2[], package:textproc/jade[].
====
Verify that the `<affected>` section of the entry will match the correct packages:
[source,shell]
....
% pkg audit -f ${PORTSDIR}/security/vuxml/vuln-flat.xml dropbear-2013.58
....
Make sure that the entry produces no spurious matches in the output.
Now check whether the right package versions are matched by the entry:
[source,shell]
....
% pkg audit -f ${PORTSDIR}/security/vuxml/vuln-flat.xml dropbear-2013.58 dropbear-2013.59
dropbear-2013.58 is vulnerable:
dropbear -- exposure of sensitive information, DoS
CVE: CVE-2013-4434
CVE: CVE-2013-4421
WWW: http://portaudit.FreeBSD.org/8c9b48d1-3715-11e3-a624-00262d8b701d.html
1 problem(s) in the installed packages found.
....
The former version matches while the latter one does not.
diff --git a/documentation/content/en/books/porters-handbook/slow-porting/_index.adoc b/documentation/content/en/books/porters-handbook/slow-porting/_index.adoc
index 4146408768..f61c4f2135 100644
--- a/documentation/content/en/books/porters-handbook/slow-porting/_index.adoc
+++ b/documentation/content/en/books/porters-handbook/slow-porting/_index.adoc
@@ -1,286 +1,287 @@
---
title: Chapter 4. Slow Porting
prev: books/porters-handbook/quick-porting
next: books/porters-handbook/makefiles
+description: Description about creating a FreeBSD Port when the program need some modifications
---
[[slow-porting]]
= Slow Porting
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 4
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
Okay, so it was not that simple, and the port required some modifications to get it to work. In this section, we will explain, step by step, how to modify it to get it to work with the ports paradigm.
[[slow-work]]
== How Things Work
First, this is the sequence of events which occurs when the user first types `make` in the port's directory. Having [.filename]#bsd.port.mk# in another window while reading this really helps to understand it.
But do not worry, not many people understand exactly how [.filename]#bsd.port.mk# works... _:-)_
[.procedure]
. The `fetch` target is run. The `fetch` target is responsible for making sure that the tarball exists locally in `DISTDIR`. If `fetch` cannot find the required files in `DISTDIR` it will look up the URL in `MASTER_SITES`, which is set in the Makefile, as well as our FTP mirrors where we put distfiles as backup. It will then attempt to fetch the named distribution file with `FETCH`, assuming that the requesting site has direct access to the Internet. If that succeeds, it will save the file in `DISTDIR` for future use and proceed.
. The `extract` target is run. It looks for the port's distribution file (typically a compressed tarball) in `DISTDIR` and unpacks it into a temporary subdirectory specified by `WRKDIR` (defaults to [.filename]#work#).
. The `patch` target is run. First, any patches defined in `PATCHFILES` are applied. Second, if any patch files named [.filename]#patch-*# are found in `PATCHDIR` (defaults to the [.filename]#files# subdirectory), they are applied at this time in alphabetical order.
. The `configure` target is run. This can do any one of many different things.
.. If it exists, [.filename]#scripts/configure# is run.
.. If `HAS_CONFIGURE` or `GNU_CONFIGURE` is set, [.filename]#WRKSRC/configure# is run.
. The `build` target is run. This is responsible for descending into the port's private working directory (`WRKSRC`) and building it.
. The `stage` target is run. This puts the final set of built files into a temporary directory (`STAGEDIR`, see crossref:special[staging,Staging]). The hierarchy of this directory mirrors that of the system on which the package will be installed.
. The `package` target is run. This creates a package using the files from the temporary directory created during the `stage` target and the port's [.filename]#pkg-plist#.
. The `install` target is run. This installs the package created during the `package` target into the host system.
The above are the default actions. In addition, define targets `pre-_something_` or `post-_something_`, or put scripts with those names, in the [.filename]#scripts# subdirectory, and they will be run before or after the default actions are done.
For example, if there is a `post-extract` target defined in the [.filename]#Makefile#, and a file [.filename]#pre-build# in the [.filename]#scripts# subdirectory, the `post-extract` target will be called after the regular extraction actions, and [.filename]#pre-build# will be executed before the default build rules are done. It is recommended to use [.filename]#Makefile# targets if the actions are simple enough, because it will be easier for someone to figure out what kind of non-default action the port requires.
The default actions are done by the `do-_something_` targets from [.filename]#bsd.port.mk#. For example, the commands to extract a port are in the target `do-extract`. If the default target does not do the job right, redefine the `do-_something_` target in the [.filename]#Makefile#.
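As an illustration only (not a requirement of the framework), a redefined `do-install` for the `oneko` example used elsewhere in this book might look like this:
[.programlisting]
....
do-install:
	${INSTALL_PROGRAM} ${WRKSRC}/oneko ${STAGEDIR}${PREFIX}/bin
	${INSTALL_MAN} ${WRKSRC}/oneko.1 ${STAGEDIR}${PREFIX}/man/man1
....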
[NOTE]
====
The "main" targets (for example, `extract`, `configure`, etc.) do nothing more than make sure all the stages up to that one are completed and call the real targets or scripts, and they are not intended to be changed. To fix the extraction, fix `do-extract`, but never ever change the way `extract` operates! Additionally, the target `post-deinstall` is invalid and is not run by the ports infrastructure.
====
Now that what goes on when the user types `make install` is better understood, let us go through the recommended steps to create the perfect port.
[[slow-sources]]
== Getting the Original Sources
Get the original sources (normally) as a compressed tarball ([.filename]#foo.tar.gz# or [.filename]#foo.tar.bz2#) and copy it into `DISTDIR`. Always use _mainstream_ sources when and where possible.
Set the variable `MASTER_SITES` to reflect where the original tarball resides. Shorthand definitions exist for most mainstream sites in [.filename]#bsd.sites.mk#. Please use these sites, and the associated definitions, if at all possible, to help avoid the problem of having the same information repeated many times in the source base. As these sites tend to change over time, this becomes a maintenance nightmare for everyone involved. See crossref:makefiles[makefile-master_sites,`MASTER_SITES`] for details.
If there is no FTP/HTTP site that is well-connected to the net, or can only find sites that have irritatingly non-standard formats, put a copy on a reliable FTP or HTTP server (for example, a home page).
If a convenient and reliable place to put the distfile cannot be found, we can "house" it ourselves on `ftp.FreeBSD.org`; however, this is the least-preferred solution. The distfile must be placed into [.filename]#~/public_distfiles/# of someone's `freefall` account. Ask the person who commits the port to do this. This person will also set `MASTER_SITES` to `LOCAL/_username_` where `_username_` is their FreeBSD cluster login.
If the port's distfile changes all the time without any kind of version update by the author, consider putting the distfile on a home page and listing it as the first `MASTER_SITES`. Try to talk the port author out of doing this; it really does help to establish some kind of source code control. Hosting a specific version will prevent users from getting `checksum mismatch` errors, and also reduce the workload of maintainers of our FTP site. Also, if there is only one master site for the port, it is recommended to house a backup on a home page and list it as the second `MASTER_SITES`.
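A sketch of the latter arrangement, using made-up URLs for both the upstream site and the backup copy:
[.programlisting]
....
MASTER_SITES=	http://download.example.org/releases/ \
		http://people.example.net/~jrandom/distfiles/
....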
If the port requires additional patches that are available on the Internet, fetch them too and put them in `DISTDIR`. Do not worry if they come from a site other than where the main source tarball comes, we have a way to handle these situations (see the description of crossref:makefiles[porting-patchfiles,PATCHFILES] below).
[[slow-modifying]]
== Modifying the Port
Unpack a copy of the tarball in a private directory and make whatever changes are necessary to get the port to compile properly under the current version of FreeBSD. Keep _careful track_ of steps, as they will be needed to automate the process shortly. Everything, including the deletion, addition, or modification of files has to be doable using an automated script or patch file when the port is finished.
If the port requires significant user interaction/customization to compile or install, take a look at one of Larry Wall's classic Configure scripts and perhaps do something similar. The goal of the new ports collection is to make each port as "plug-and-play" as possible for the end-user while using a minimum of disk space.
[NOTE]
====
Unless explicitly stated, patch files, scripts, and other files created and contributed to the FreeBSD ports collection are assumed to be covered by the standard BSD copyright conditions.
====
[[slow-patch]]
== Patching
In the preparation of the port, files that have been added or changed can be recorded with man:diff[1] for later feeding to man:patch[1]. Doing this with a typical file involves saving a copy of the original file with a [.filename]#.orig# suffix before making any changes.
[source,shell]
....
% cp file file.orig
....
After all changes have been made, `cd` back to the port directory. Use `make makepatch` to generate updated patch files in the [.filename]#files# directory.
[TIP]
====
Use `BINARY_ALIAS` to substitute hardcoded commands during the build and avoid patching build files. See crossref:makefiles[binary-alias,Use `BINARY_ALIAS` to Rename Commands Instead of Patching the Build] for more information.
====
[[slow-patch-rules]]
=== General Rules for Patching
Patch files are stored in `PATCHDIR`, usually [.filename]#files/#, from where they will be automatically applied. All patches must be relative to `WRKSRC`. Typically `WRKSRC` is a subdirectory of `WRKDIR`, the directory where the distfile is extracted. Use `make -V WRKSRC` to see the actual path. The patch names are to follow these rules:
* Avoid having more than one patch modify the same file. For example, having both [.filename]#patch-foobar.c# and [.filename]#patch-foobar.c2# making changes to [.filename]#${WRKSRC}/foobar.c# makes them fragile and difficult to debug.
* When creating names for patch files, replace each underscore (`\_`) with two underscores (`\__`) and each slash (`/`) with one underscore (`_`). For example, to patch a file named [.filename]#src/freeglut_joystick.c#, name the corresponding patch [.filename]#patch-src_freeglut__joystick.c#. Do not name patches like [.filename]#patch-aa# or [.filename]#patch-ab#. Always use the path and file name in patch names. Using `make makepatch` automatically generates the correct names.
* A patch may modify multiple files if the changes are related and the patch is named appropriately. For example, [.filename]#patch-add-missing-stdlib.h#.
* Only use characters `[-+.\_a-zA-Z0-9]` for naming patches. In particular, __do not use `::` as a path separator,__ use `_` instead.
Minimize the amount of non-functional whitespace changes in patches. It is common in the Open Source world for projects to share large amounts of a code base, but obey different style and indenting rules. When taking a working piece of functionality from one project to fix similar areas in another, please be careful: the resulting patch may be full of non-functional changes. It not only increases the size of the ports repository but makes it hard to find out what exactly caused the problem and what was changed at all.
If a file must be deleted, do it in the `post-extract` target rather than as part of the patch.
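A minimal sketch, assuming a hypothetical bundled header that must not be used during the build:
[.programlisting]
....
post-extract:
	@${RM} ${WRKSRC}/include/bundled_zlib.h
....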
[[slow-patch-manual]]
=== Manual Patch Generation
[NOTE]
====
Manual patch creation is usually not necessary. Automatic patch generation as described earlier in this section is the preferred method. However, manual patching may be required occasionally.
====
Patches are saved into files named [.filename]#patch-*# where * indicates the pathname of the file that is patched, such as [.filename]#patch-Imakefile# or [.filename]#patch-src-config.h#.
Patches with file names which do not start with [.filename]#patch-# will not be applied automatically.
After the file has been modified, man:diff[1] is used to record the differences between the original and the modified version. `-u` causes man:diff[1] to produce "unified" diffs, the preferred form.
[source,shell]
....
% diff -u file.orig file > patch-pathname-file
....
When generating patches for new, added files, `-N` is used to tell man:diff[1] to treat the non-existent original file as if it existed but was empty:
[source,shell]
....
% diff -u -N newfile.orig newfile > patch-pathname-newfile
....
Using the recurse (`-r`) option to man:diff[1] to generate patches is fine, but please look at the resulting patches to make sure there is no unnecessary junk in there. In particular, diffs between two backup files, [.filename]##Makefile##s when the port uses `Imake` or GNU `configure`, etc., are unnecessary and have to be deleted. If it was necessary to edit [.filename]#configure.in# and run `autoconf` to regenerate `configure`, do not take the diffs of `configure` (it often grows to a few thousand lines!). Instead, define `USES=autoreconf` and take the diffs of [.filename]#configure.in#.
[[slow-patch-automatic-replacements]]
=== Simple Automatic Replacements
Simple replacements can be performed directly from the port [.filename]#Makefile# using the in-place mode of man:sed[1]. This is useful when changes use the value of a variable:
[.programlisting]
....
post-patch:
@${REINPLACE_CMD} -e 's|/usr/local|${PREFIX}|g' ${WRKSRC}/Makefile
....
[IMPORTANT]
====
Only use man:sed[1] to replace variable content. You must use patch files instead of man:sed[1] to replace static content.
====
Quite often, software being ported uses the CR/LF convention in source files. This may cause problems with further patching, compiler warnings, or script execution (like `/bin/sh^M not found`.) To quickly convert all files from CR/LF to just LF, add this entry to the port [.filename]#Makefile#:
[.programlisting]
....
USES= dos2unix
....
A list of specific files to convert can be given:
[.programlisting]
....
USES= dos2unix
DOS2UNIX_FILES= util.c util.h
....
Use `DOS2UNIX_REGEX` to convert a group of files across subdirectories. Its argument is a man:find[1]-compatible regular expression. More on the format is in man:re_format[7]. This option is useful for converting all files of a given extension. For example, convert all source code files, leaving binary files intact:
[.programlisting]
....
USES= dos2unix
DOS2UNIX_REGEX= .*\.([ch]|cpp)
....
A similar option is `DOS2UNIX_GLOB`, which runs `find` for each element listed in it.
[.programlisting]
....
USES= dos2unix
DOS2UNIX_GLOB= *.c *.cpp *.h
....
The base directory for the conversion can be set. This is useful when there are multiple distfiles and several contain files which require line-ending conversion.
[.programlisting]
....
USES= dos2unix
DOS2UNIX_WRKSRC= ${WRKDIR}
....
[[slow-patch-extra]]
=== Patching Conditionally
Some ports need patches that are only applied for specific FreeBSD versions or when a particular option is enabled or disabled. Conditional patches are specified by placing the full paths to the patch files in `EXTRA_PATCHES`.
Conditional patch file names usually start with [.filename]#extra-# although this is not necessary.
However, their file names _must not_ start with [.filename]#patch-#.
If they do, they are applied unconditionally by the framework which is undesired for conditional patches.
[[slow-patch-extra-ex1]]
.Applying a Patch for a Specific FreeBSD Version
[example]
====
[.programlisting]
....
.include <bsd.port.options.mk>
# Patch in the iconv const qualifier before this
.if ${OPSYS} == FreeBSD && ${OSVERSION} < 1100069
EXTRA_PATCHES= ${PATCHDIR}/extra-patch-fbsd10
.endif
.include <bsd.port.mk>
....
====
[[slow-patch-extra-ex2]]
.Optionally Applying a Patch
[example]
====
When an crossref:makefiles[makefile-options,option] requires a patch, use ``opt_EXTRA_PATCHES`` and ``opt_EXTRA_PATCHES_OFF`` to make the patch conditional on the `opt` option. See crossref:makefiles[options-variables,Generic Variables Replacement, `OPT_VARIABLE` and `OPT_VARIABLE_OFF`] for more information.
[.programlisting]
....
OPTIONS_DEFINE= FOO BAR
FOO_EXTRA_PATCHES= ${PATCHDIR}/extra-patch-foo
BAR_EXTRA_PATCHES_OFF= ${PATCHDIR}/extra-patch-bar.c \
${PATCHDIR}/extra-patch-bar.h
....
====
[[slow-patch-extra-ex-dirs]]
.Using `EXTRA_PATCHES` With a Directory
[example]
====
Sometimes, many patches are needed for a feature. In this case, it is possible to point `EXTRA_PATCHES` to a directory, and it will automatically apply all files named [.filename]#patch-*# in it.
Create a subdirectory in [.filename]#${PATCHDIR}#, and move the patches in it. For example:
[source,shell]
....
% ls -l files/foo-patches
-rw-r--r-- 1 root wheel 350 Jan 16 01:27 patch-Makefile.in
-rw-r--r-- 1 root wheel 3084 Jan 18 15:37 patch-configure
....
Then add this to the [.filename]#Makefile#:
[.programlisting]
....
OPTIONS_DEFINE= FOO
FOO_EXTRA_PATCHES= ${PATCHDIR}/foo-patches
....
The framework will then use all the files named [.filename]#patch-*# in that directory.
====
[[slow-configure]]
== Configuring
Include any additional customization commands in the [.filename]#configure# script and save it in the [.filename]#scripts# subdirectory. As mentioned above, it is also possible to do this with [.filename]#Makefile# targets and/or scripts with the name [.filename]#pre-configure# or [.filename]#post-configure#.
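For example, a hypothetical `post-configure` target could fix up a generated file (the file and path shown are made up):
[.programlisting]
....
post-configure:
	@${REINPLACE_CMD} -e 's|/etc/foo.conf|${PREFIX}/etc/foo.conf|' \
		${WRKSRC}/config.h
....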
[[slow-user-input]]
== Handling User Input
If the port requires user input to build, configure, or install, set `IS_INTERACTIVE` in the [.filename]#Makefile#. This allows "overnight builds" to skip it: if the user sets the variable `BATCH` in their environment, interactive ports are skipped, and if the user sets the variable `INTERACTIVE`, then _only_ those ports requiring interaction are built. This saves a lot of wasted time on the set of machines that continually build ports (see below).
If there are reasonable default answers to the questions, it is also recommended to check `PACKAGE_BUILDING` and turn off the interactive behavior when it is set. This allows packages to be built for CD-ROMs and FTP.
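As an illustration only, a port whose `configure` script asks questions could combine these variables like this (the `--batch` flag is a hypothetical stand-in for whatever mechanism the software provides to accept its default answers):
[.programlisting]
....
.if !defined(PACKAGE_BUILDING) && !defined(BATCH)
IS_INTERACTIVE=	yes
.else
# Unattended builds: accept the default answers.
# The --batch flag is hypothetical; use the software's own mechanism.
CONFIGURE_ARGS+=	--batch
.endif
....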
diff --git a/documentation/content/en/books/porters-handbook/special/_index.adoc b/documentation/content/en/books/porters-handbook/special/_index.adoc
index 71a35cd0ef..e5fb6ec03a 100644
--- a/documentation/content/en/books/porters-handbook/special/_index.adoc
+++ b/documentation/content/en/books/porters-handbook/special/_index.adoc
@@ -1,4327 +1,4328 @@
---
title: Chapter 6. Special Considerations
prev: books/porters-handbook/makefiles
next: books/porters-handbook/flavors
+description: Special considerations when creating a new FreeBSD Port
---
[[special]]
= Special Considerations
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 6
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
This section explains the most common things to consider when creating a port.
[[staging]]
== Staging
[.filename]#bsd.port.mk# expects ports to work with a "stage directory". This means that a port must not install files directly to the regular destination directories (that is, under `PREFIX`, for example) but instead into a separate directory from which the package is then built. In many cases, this does not require root privileges, making it possible to build packages as an unprivileged user. With staging, the port is built and installed into the stage directory, `STAGEDIR`. A package is created from the stage directory and then installed on the system. Automake tools refer to this concept as `DESTDIR`, but in FreeBSD, `DESTDIR` has a different meaning (see crossref:testing[porting-prefix,`PREFIX` and `DESTDIR`]).
[NOTE]
====
No port _really_ needs to be root. It can mostly be avoided by using crossref:uses[uses-uidfix,`USES=uidfix`]. If the port still runs commands like man:chown[8], man:chgrp[1], or forces owner or group with man:install[1], then use crossref:uses[uses-fakeroot,`USES=fakeroot`] to fake those calls. Some patching of the port's [.filename]#Makefiles# will be needed.
====
Meta ports, or ports that do not install files themselves but only depend on other ports, must avoid needlessly extracting the man:mtree[8] file to the stage directory. This is the basic directory layout of the package, and these empty directories will be seen as orphans. To prevent man:mtree[8] extraction, add this line:
[.programlisting]
....
NO_MTREE= yes
....
[TIP]
====
Metaports should use <<uses-metaport,`USES=metaport`>>. It sets up defaults for ports that do not fetch, build, or install anything.
====
Staging is enabled by prepending `STAGEDIR` to paths used in the `pre-install`, `do-install`, and `post-install` targets (see the examples throughout the book). Typically, this includes `PREFIX`, `ETCDIR`, `DATADIR`, `EXAMPLESDIR`, `MANPREFIX`, `DOCSDIR`, and so on. Directories should be created as part of the `post-install` target. Avoid using absolute paths whenever possible.
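For instance, a hypothetical port that installs one extra data file by hand could use a `post-install` target like this ([.filename]#foo.dat# is illustrative):
[.programlisting]
....
post-install:
	${MKDIR} ${STAGEDIR}${DATADIR}
	# foo.dat is an illustrative extra data file shipped with the distfile
	${INSTALL_DATA} ${WRKSRC}/foo.dat ${STAGEDIR}${DATADIR}
....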
[TIP]
====
Ports that install kernel modules must prepend `STAGEDIR` to their destination, by default [.filename]#/boot/modules#.
====
[[staging-symlink]]
=== Handling Symbolic Links
When creating a symbolic link, relative ones are strongly recommended. Use `${RLN}` to create relative symbolic links. It uses man:install[1] under the hood to automatically figure out the relative link to create.
[[staging-ex1]]
.Create Relative Symbolic Links Automatically
[example]
====
`${RLN}` uses man:install[1]'s relative symbolic link feature, which frees the porter from computing the relative path.
[.programlisting]
....
${RLN} ${STAGEDIR}${PREFIX}/lib/libfoo.so.42 ${STAGEDIR}${PREFIX}/lib/libfoo.so
${RLN} ${STAGEDIR}${PREFIX}/libexec/foo/bar ${STAGEDIR}${PREFIX}/bin/bar
${RLN} ${STAGEDIR}/var/cache/foo ${STAGEDIR}${PREFIX}/share/foo
....
Will generate:
[source,shell]
....
% ls -lF ${STAGEDIR}${PREFIX}/lib
lrwxr-xr-x 1 nobody nobody 181 Aug 3 11:27 libfoo.so@ -> libfoo.so.42
-rwxr-xr-x 1 nobody nobody 15 Aug 3 11:24 libfoo.so.42*
% ls -lF ${STAGEDIR}${PREFIX}/bin
lrwxr-xr-x 1 nobody nobody 181 Aug 3 11:27 bar@ -> ../libexec/foo/bar
% ls -lF ${STAGEDIR}${PREFIX}/share
lrwxr-xr-x 1 nobody nobody 181 Aug 3 11:27 foo@ -> ../../../var/cache/foo
....
====
[[bundled-libs]]
== Bundled Libraries
This section explains why bundled dependencies are considered bad and what to do about them.
[[bundled-libs-why-bad]]
=== Why Bundled Libraries Are Bad
Some software requires the porter to locate third-party libraries and add the required dependencies to the port. Other software bundles all necessary libraries into the distribution file. The second approach seems easier at first, but there are some serious drawbacks:
This list is loosely based on the https://fedoraproject.org/wiki/Packaging:No_Bundled_Libraries[Fedora] and http://wiki.gentoo.org/wiki/Why_not_bundle_dependencies[Gentoo] wikis, both licensed under the http://creativecommons.org/licenses/by-sa/3.0/[CC-BY-SA 3.0] license.
Security::
If vulnerabilities are found in the upstream library and fixed there, they might not be fixed in the library bundled with the port. One reason could be that the author is not aware of the problem. This means that the porter must fix them, or upgrade to a non-vulnerable version, and send a patch to the author. This all takes time, which results in software being vulnerable longer than necessary. This in turn makes it harder to coordinate a fix without unnecessarily leaking information about the vulnerability.
Bugs::
This problem is similar to the problem with security in the last paragraph, but generally less severe.
Forking::
It is easier for the author to fork the upstream library once it is bundled. While convenient at first sight, it means that the code diverges from upstream, making it harder to address security or other problems with the software. A reason for this is that patching becomes harder.
+
Another problem of forking is that because code diverges from upstream, bugs get solved over and over again instead of just once at a central location. This defeats the idea of open source software in the first place.
Symbol collision::
When a library is installed on the system, it might collide with the bundled version. This can cause immediate errors at compile or link time. It can also cause errors when running the program which might be harder to track down. The latter problem could be caused because the versions of the two libraries are incompatible.
Licensing::
When bundling projects from different sources, license issues can arise more easily, especially when licenses are incompatible.
Waste of resources::
Bundled libraries waste resources on several levels. It takes longer to build the actual application, especially if these libraries are already present on the system. At run-time, they can take up unnecessary memory when the system-wide library is already loaded by one program and the bundled library is loaded by another program.
Waste of effort::
When a library needs patches for FreeBSD, these patches have to be duplicated again in the bundled library. This wastes developer time because the patches might not apply cleanly. It can also be hard to notice that these patches are required in the first place.
[[bundled-libs-practices]]
=== What to do About Bundled Libraries
Whenever possible, use the unbundled version of the library by adding a `LIB_DEPENDS` to the port. If such a port does not exist yet, consider creating it.
Only use bundled libraries if the upstream has a good track record on security and using unbundled versions leads to overly complex patches.
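For example, to link against the library installed by package:graphics/png[] instead of a copy bundled with the distribution file:
[.programlisting]
....
LIB_DEPENDS=	libpng.so:graphics/png
....
The bundled copy usually also has to be disabled, typically by patching the build system or by passing a `configure` flag, depending on what the software itself provides.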
[NOTE]
====
In some very special cases, for example emulators like Wine, a port has to bundle libraries because they are built for a different architecture or have been modified to fit the software's use. In that case, those libraries should not be exposed to other ports for linking. Add `BUNDLE_LIBS=yes` to the port's [.filename]#Makefile#. This will tell man:pkg[8] not to compute provided libraries. Always ask the {portmgr} before adding this to a port.
====
[[porting-shlibs]]
== Shared Libraries
If the port installs one or more shared libraries, define the `USE_LDCONFIG` make variable, which will instruct [.filename]#bsd.port.mk# to run `${LDCONFIG} -m` on the directory where the new library is installed (usually [.filename]#PREFIX/lib#) during the `post-install` target to register it in the shared library cache. This variable, when defined, will also facilitate the addition of an appropriate `@exec /sbin/ldconfig -m` and `@unexec /sbin/ldconfig -R` pair into [.filename]#pkg-plist#, so that a user who installed the package can start using the shared library immediately and deinstallation will not cause the system to still believe the library is there.
[.programlisting]
....
USE_LDCONFIG= yes
....
The default directory can be overridden by setting `USE_LDCONFIG` to a list of directories into which shared libraries are to be installed. For example, if the port installs shared libraries into [.filename]#PREFIX/lib/foo# and [.filename]#PREFIX/lib/bar# use this in [.filename]#Makefile#:
[.programlisting]
....
USE_LDCONFIG= ${PREFIX}/lib/foo ${PREFIX}/lib/bar
....
Please double-check; often this is not necessary at all, or it can be avoided through `-rpath` or setting `LD_RUN_PATH` during linking (see package:lang/mosml[] for an example), or through a shell wrapper which sets `LD_LIBRARY_PATH` before invoking the binary, like package:www/seamonkey[] does.
When installing 32-bit libraries on a 64-bit system, use `USE_LDCONFIG32` instead.
If the software uses <<using-autotools,autotools>>, and specifically `libtool`, add crossref:uses[uses-libtool,`USES=libtool`].
When the major library version number increments in the update to the new port version, all other ports that link to the affected library must have their `PORTREVISION` incremented, to force recompilation with the new library version.
[[porting-restrictions]]
== Ports with Distribution Restrictions or Legal Concerns
Licenses vary, and some of them place restrictions on how the application can be packaged, whether it can be sold for profit, and so on.
[IMPORTANT]
====
It is the responsibility of a porter to read the licensing terms of the software and make sure that the FreeBSD project will not be held accountable for violating them by redistributing the source or compiled binaries either via FTP/HTTP or CD-ROM. If in doubt, please contact the {freebsd-ports}.
====
In situations like this, the variables described in the next sections can be set.
[[porting-restrictions-no_package]]
=== `NO_PACKAGE`
This variable indicates that we may not generate a binary package of the application. For instance, the license may disallow binary redistribution, or it may prohibit distribution of packages created from patched sources.
However, the port's `DISTFILES` may be freely mirrored on FTP/HTTP. They may also be distributed on a CD-ROM (or similar media) unless `NO_CDROM` is set as well.
If the binary package is not generally useful, and the application must always be compiled from the source code, use `NO_PACKAGE`. For example, if the application has configuration information that is site specific hard coded into it at compile time, set `NO_PACKAGE`.
Set `NO_PACKAGE` to a string describing the reason why the package cannot be generated.
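For example, a hypothetical port whose license forbids binary redistribution could set:
[.programlisting]
....
NO_PACKAGE=	Binary redistribution is not permitted by the license
....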
[[porting-restrictions-no_cdrom]]
=== `NO_CDROM`
This variable alone indicates that, although we are allowed to generate binary packages, we may put neither those packages nor the port's `DISTFILES` onto a CD-ROM (or similar media) for resale. However, the binary packages and the port's `DISTFILES` will still be available via FTP/HTTP.
If this variable is set along with `NO_PACKAGE`, then only the port's `DISTFILES` will be available, and only via FTP/HTTP.
Set `NO_CDROM` to a string describing the reason why the port cannot be redistributed on CD-ROM. For instance, use this if the port's license is for "non-commercial" use only.
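For example, a hypothetical port with a non-commercial license could set:
[.programlisting]
....
NO_CDROM=	License only allows non-commercial redistribution
....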
[[porting-restrictions-nofetchfiles]]
=== `NOFETCHFILES`
Files defined in `NOFETCHFILES` are not fetchable from any of the `MASTER_SITES`. An example of such a file is one supplied on CD-ROM by the vendor.
Tools which check for the availability of these files on `MASTER_SITES` have to ignore these files and not report about them.
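For example, for a hypothetical distfile that is only available on the vendor's CD-ROM (the file name is illustrative):
[.programlisting]
....
NOFETCHFILES=	vendor-data.tar.gz
....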
[[porting-restrictions-restricted]]
=== `RESTRICTED`
Set this variable alone if the application's license permits neither mirroring the application's `DISTFILES` nor distributing the binary package in any way.
Do not set `NO_CDROM` or `NO_PACKAGE` along with `RESTRICTED`, since the latter variable implies the former ones.
Set `RESTRICTED` to a string describing the reason why the port cannot be redistributed. Typically, this indicates that the port contains proprietary software and that the user will need to manually download the `DISTFILES`, possibly after registering for the software or agreeing to accept the terms of an EULA.
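For example, a hypothetical port could set:
[.programlisting]
....
RESTRICTED=	Redistribution is not permitted; download the distfile manually after accepting the EULA
....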
[[porting-restrictions-restricted_files]]
=== `RESTRICTED_FILES`
When `RESTRICTED` or `NO_CDROM` is set, this variable defaults to `${DISTFILES} ${PATCHFILES}`, otherwise it is empty. If only some of the distribution files are restricted, then set this variable to list them.
[[porting-restrictions-legal_text]]
=== `LEGAL_TEXT`
If the port has legal concerns not addressed by the above variables, set `LEGAL_TEXT` to a string explaining the concern. For example, if special permission was obtained for FreeBSD to redistribute the binary, this variable must indicate so.
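For example, a hypothetical port that obtained such permission could set:
[.programlisting]
....
LEGAL_TEXT=	Redistribution of the binary package was granted by written permission from the vendor
....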
[[porting-restrictions-legal]]
=== [.filename]#/usr/ports/LEGAL# and `LEGAL`
A port which sets any of the above variables must also be added to [.filename]#/usr/ports/LEGAL#. The first column is a glob which matches the restricted distfiles. The second column is the port's origin. The third column is the output of `make -VLEGAL`.
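For a hypothetical port [.filename]#misc/foo# with `RESTRICTED` set, the entry could look like this (the glob, origin, and text are illustrative):
[.programlisting]
....
foo-*		misc/foo	Redistribution is not permitted
....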
[[porting-restrictions-examples]]
=== Examples
The preferred way to state "the distfiles for this port must be fetched manually" is as follows:
[.programlisting]
....
.if !exists(${DISTDIR}/${DISTNAME}${EXTRACT_SUFX})
IGNORE= may not be redistributed because of licensing reasons. Please visit some-website to accept their license and download ${DISTFILES} into ${DISTDIR}
.endif
....
This both informs the user, and sets the proper metadata on the user's machine for use by automated programs.
Note that this stanza must be preceded by an inclusion of [.filename]#bsd.port.pre.mk#.
[[building]]
== Building Mechanisms
[[parallel-builds]]
=== Building Ports in Parallel
The FreeBSD ports framework supports parallel building using multiple `make` sub-processes, which allows SMP systems to utilize all of their available CPU power, allowing port builds to be faster and more effective.
This is achieved by passing the `-jX` flag to man:make[1] running on vendor code. This is the default build behavior of ports. Unfortunately, not all ports handle parallel building well, and it may be required to explicitly disable this feature by adding the `MAKE_JOBS_UNSAFE=yes` variable. It is used when a port is known to be broken with `-jX` due to race conditions causing intermittent build failures.
[IMPORTANT]
====
When setting `MAKE_JOBS_UNSAFE`, it is very important to explain, either with a comment in the [.filename]#Makefile# or at least in the commit message, _why_ the port does not build when parallel building is enabled. Otherwise, it is almost impossible to either fix the problem, or test whether it has been fixed when committing an update at a later date.
====
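A minimal sketch, with a comment giving a hypothetical reason:
[.programlisting]
....
# Parallel builds fail intermittently: object files are used before
# they are generated (illustrative reason; document the real one).
MAKE_JOBS_UNSAFE=	yes
....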
[[using-make]]
=== `make`, `gmake`, and `imake`
Several differing `make` implementations exist. Ported software often requires a particular implementation, like GNU `make`, known in FreeBSD as `gmake`.
If the port uses GNU make, add `gmake` to `USES`.
`MAKE_CMD` can be used to reference the specific command configured by the `USES` setting in the port's [.filename]#Makefile#. Only use `MAKE_CMD` within the application [.filename]#Makefile#s in `WRKSRC` to call the `make` implementation expected by the ported software.
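For instance, a hypothetical custom `do-build` target that only builds a subdirectory of `WRKSRC` could invoke `MAKE_CMD` like this (the [.filename]#src# subdirectory is illustrative):
[.programlisting]
....
do-build:
	# Run the make implementation selected via USES in the src subdirectory
	cd ${WRKSRC}/src && ${SETENV} ${MAKE_ENV} ${MAKE_CMD} ${MAKE_ARGS} ${ALL_TARGET}
....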
If the port is an X application that uses imake to create [.filename]##Makefile##s from [.filename]##Imakefile##s, set `USES= imake`. See the crossref:uses[uses-imake,`USES=imake`] section of crossref:uses[uses,Using `USES` Macros] for more details.
If the port's source [.filename]#Makefile# has something other than `all` as the main build target, set `ALL_TARGET` accordingly. The same goes for `install` and `INSTALL_TARGET`.
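For example, if the software builds with `make world` and installs with `make install-strip` (illustrative target names provided by the software's own [.filename]#Makefile#):
[.programlisting]
....
# Targets defined by the software's Makefile (illustrative)
ALL_TARGET=	world
INSTALL_TARGET=	install-strip
....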
[[using-configure]]
=== `configure` Script
If the port uses the `configure` script to generate [.filename]#Makefile# from [.filename]#Makefile.in#, set `GNU_CONFIGURE=yes`. To give extra arguments to the `configure` script (the default argument is `--prefix=${PREFIX} --infodir=${PREFIX}/${INFO_PATH} --mandir=${MANPREFIX}/man --build=${CONFIGURE_TARGET}`), set those extra arguments in `CONFIGURE_ARGS`. Extra environment variables can be passed using `CONFIGURE_ENV`.
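A small, hypothetical example; the `--without-examples` flag and the `FOO_DATADIR` variable stand in for whatever the software's `configure` script actually understands:
[.programlisting]
....
GNU_CONFIGURE=	yes
# Placeholder flag and environment variable; adjust to the real configure script
CONFIGURE_ARGS=	--without-examples
CONFIGURE_ENV=	FOO_DATADIR=${DATADIR}
....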
[[using-configure-variables]]
.Variables for Ports That Use `configure`
[cols="1,1", frame="none", options="header"]
|===
| Variable
| Means
|`GNU_CONFIGURE`
|The port uses `configure` script to prepare build.
|`HAS_CONFIGURE`
|Same as `GNU_CONFIGURE`, except default configure target is not added to `CONFIGURE_ARGS`.
|`CONFIGURE_ARGS`
|Additional arguments passed to `configure` script.
|`CONFIGURE_ENV`
|Additional environment variables to be set for `configure` script run.
|`CONFIGURE_TARGET`
|Override default configure target. Default value is `${MACHINE_ARCH}-portbld-freebsd${OSREL}`.
|===
[[using-cmake]]
=== Using `cmake`
For ports that use CMake, define `USES= cmake`.
[[using-cmake-variables]]
.Variables for Ports That Use `cmake`
[cols="1,1", frame="none", options="header"]
|===
| Variable
| Means
|`CMAKE_ARGS`
|Port specific CMake flags to be passed to the `cmake` binary.
|`CMAKE_ON`
|For each entry in `CMAKE_ON`, an enabled boolean value is added to `CMAKE_ARGS`. See <<using-cmake-example2>>.
|`CMAKE_OFF`
|For each entry in `CMAKE_OFF`, a disabled boolean value is added to `CMAKE_ARGS`. See <<using-cmake-example2>>.
|`CMAKE_BUILD_TYPE`
|Type of build (CMake predefined build profiles). Default is `Release`, or `Debug` if `WITH_DEBUG` is set.
|`CMAKE_SOURCE_PATH`
|Path to the source directory. Default is `${WRKSRC}`.
|`CONFIGURE_ENV`
|Additional environment variables to be set for the `cmake` binary.
|===
[[using-cmake-user-variables]]
.Variables the Users Can Define for `cmake` Builds
[cols="1,1", frame="none", options="header"]
|===
| Variable
| Means
|`CMAKE_NOCOLOR`
|Disables color build output. Default not set, unless `BATCH` or `PACKAGE_BUILDING` are set.
|===
CMake supports these build profiles: `Debug`, `Release`, `RelWithDebInfo` and `MinSizeRel`. The `Debug` and `Release` profiles respect system `\*FLAGS`, while `RelWithDebInfo` and `MinSizeRel` set `CFLAGS` to `-O2 -g` and `-Os -DNDEBUG` respectively. The lower-cased value of `CMAKE_BUILD_TYPE` is exported to `PLIST_SUB` and must be used if the port installs [.filename]#*.cmake# files depending on the build type (see package:devel/kf5-kcrash[] for an example). Please note that some projects may define their own build profiles and/or force a particular build type by setting `CMAKE_BUILD_TYPE` in [.filename]#CMakeLists.txt#. To make a port for such a project respect `CFLAGS` and `WITH_DEBUG`, the `CMAKE_BUILD_TYPE` definitions must be removed from those files.
Most CMake-based projects support an out-of-source method of building. The out-of-source build for a port is the default setting. An in-source build can be requested by using the `:insource` suffix. With out-of-source builds, `CONFIGURE_WRKSRC`, `BUILD_WRKSRC` and `INSTALL_WRKSRC` will be set to `${WRKDIR}/.build` and this directory will be used to keep all files generated during configuration and build stages, leaving the source directory intact.
[[using-cmake-example]]
.`USES= cmake` Example
[example]
====
This snippet demonstrates the use of CMake for a port. `CMAKE_SOURCE_PATH` is not usually required, but can be set when the sources are not located in the top directory, or if only a subset of the project is intended to be built by the port.
[.programlisting]
....
USES= cmake
CMAKE_SOURCE_PATH= ${WRKSRC}/subproject
....
====
[[using-cmake-example2]]
.`CMAKE_ON` and `CMAKE_OFF`
[example]
====
When adding boolean values to `CMAKE_ARGS`, it is easier to use the `CMAKE_ON` and `CMAKE_OFF` variables instead. This:
[.programlisting]
....
CMAKE_ON= VAR1 VAR2
CMAKE_OFF= VAR3
....
Is equivalent to:
[.programlisting]
....
CMAKE_ARGS= -DVAR1:BOOL=TRUE -DVAR2:BOOL=TRUE -DVAR3:BOOL=FALSE
....
[IMPORTANT]
======
This is only for the default values of `CMAKE_ARGS`. The helpers described in crossref:makefiles[options-cmake_bool,`OPT_CMAKE_BOOL` and `OPT_CMAKE_BOOL_OFF`] use the same semantics, but for optional values.
======
====
[[using-scons]]
=== Using `scons`
If the port uses SCons, define `USES=scons`.
To make a third-party [.filename]#SConstruct# respect everything that is passed to SCons in the environment (that is, most importantly, `CC/CXX/CFLAGS/CXXFLAGS`), patch [.filename]#SConstruct# so that the build `Environment` is constructed like this:
[.programlisting]
....
env = Environment(**ARGUMENTS)
....
It may then be modified with `env.Append` and `env.Replace`.
[[using-cargo]]
=== Building Rust Applications with `cargo`
For ports that use Cargo, define `USES=cargo`.
[[using-cargo-user-variables]]
.Variables the Users Can Define for `cargo` Builds
[cols="1,1,1", frame="none", options="header"]
|===
| Variable
| Default
| Description
|`CARGO_CRATES`
|
|List of crates the port depends on. Each entry needs to have a format like `cratename-semver` for example, `libc-0.2.40`. Port maintainers can generate this list from [.filename]#Cargo.lock# using `make cargo-crates`. Manually bumping crate versions is possible but be mindful of transitive dependencies.
|`CARGO_FEATURES`
|
|List of application features to build (space separated list). To deactivate all default features add the special token `--no-default-features` to `CARGO_FEATURES`. Manually passing it to `CARGO_BUILD_ARGS`, `CARGO_INSTALL_ARGS`, and `CARGO_TEST_ARGS` is not needed.
|`CARGO_CARGOTOML`
|`${WRKSRC}/Cargo.toml`
|The path to the [.filename]#Cargo.toml# to use.
|`CARGO_CARGOLOCK`
|`${WRKSRC}/Cargo.lock`
|The path to the [.filename]#Cargo.lock# to use for `make cargo-crates`. It is possible to specify more than one lock file when necessary.
|`CARGO_ENV`
|
|A list of environment variables to pass to Cargo similar to `MAKE_ENV`.
|`RUSTFLAGS`
|
|Flags to pass to the Rust compiler.
|`CARGO_CONFIGURE`
|`yes`
|Use the default `do-configure`.
|`CARGO_UPDATE_ARGS`
|
|Extra arguments to pass to Cargo during the configure phase. Valid arguments can be looked up with `cargo update --help`.
|`CARGO_BUILDDEP`
|`yes`
|Add a build dependency on package:lang/rust[].
|`CARGO_CARGO_BIN`
|`${LOCALBASE}/bin/cargo`
|Location of the `cargo` binary.
|`CARGO_BUILD`
|`yes`
|Use the default `do-build`.
|`CARGO_BUILD_ARGS`
|
|Extra arguments to pass to Cargo during the build phase. Valid arguments can be looked up with `cargo build --help`.
|`CARGO_INSTALL`
|`yes`
|Use the default `do-install`.
|`CARGO_INSTALL_ARGS`
|
|Extra arguments to pass to Cargo during the install phase. Valid arguments can be looked up with `cargo install --help`.
|`CARGO_INSTALL_PATH`
|`.`
|Path to the crate to install. This is passed to `cargo install` via its `--path` argument. When multiple paths are specified `cargo install` is run multiple times.
|`CARGO_TEST`
|`yes`
|Use the default `do-test`.
|`CARGO_TEST_ARGS`
|
|Extra arguments to pass to Cargo during the test phase. Valid arguments can be looked up with `cargo test --help`.
|`CARGO_TARGET_DIR`
|`${WRKDIR}/target`
|Location of the cargo output directory.
|`CARGO_DIST_SUBDIR`
|[.filename]#rust/crates#
|Directory relative to `DISTDIR` where the crate distribution files will be stored.
|`CARGO_VENDOR_DIR`
|`${WRKSRC}/cargo-crates`
|Location of the vendor directory where all crates will be extracted to. Try to keep this under `PATCH_WRKSRC`, so that patches can be applied easily.
|`CARGO_USE_GITHUB`
|`no`
|Enable fetching of crates locked to specific Git commits on GitHub via `GH_TUPLE`. This will try to patch all [.filename]#Cargo.toml# under `WRKDIR` to point to the offline sources instead of fetching them from a Git repository during the build.
|`CARGO_USE_GITLAB`
|`no`
|Same as `CARGO_USE_GITHUB` but for GitLab instances and `GL_TUPLE`.
|===
[[cargo-ex1]]
.Creating a Port for a Simple Rust Application
[example]
====
Creating a Cargo based port is a three stage process. First we need to provide a ports template that fetches the application distribution file:
[.programlisting]
....
PORTNAME= tokei
DISTVERSIONPREFIX= v
DISTVERSION= 7.0.2
CATEGORIES= devel
MAINTAINER= tobik@FreeBSD.org
COMMENT= Display statistics about your code
USES= cargo
USE_GITHUB= yes
GH_ACCOUNT= Aaronepower
.include <bsd.port.mk>
....
Generate an initial [.filename]#distinfo#:
[source,shell]
....
% make makesum
=> Aaronepower-tokei-v7.0.2_GH0.tar.gz doesn't seem to exist in /usr/ports/distfiles/.
=> Attempting to fetch https://codeload.github.com/Aaronepower/tokei/tar.gz/v7.0.2?dummy=/Aaronepower-tokei-v7.0.2_GH0.tar.gz
fetch: https://codeload.github.com/Aaronepower/tokei/tar.gz/v7.0.2?dummy=/Aaronepower-tokei-v7.0.2_GH0.tar.gz: size of remote file is not known
Aaronepower-tokei-v7.0.2_GH0.tar.gz 45 kB 239 kBps 00m00s
....
Now the distribution file is ready to use and we can go ahead and extract crate dependencies from the bundled [.filename]#Cargo.lock#:
[source,shell]
....
% make cargo-crates
CARGO_CRATES= aho-corasick-0.6.4 \
ansi_term-0.11.0 \
arrayvec-0.4.7 \
atty-0.2.9 \
bitflags-1.0.1 \
byteorder-1.2.2 \
[...]
....
The output of this command needs to be pasted directly into the Makefile:
[.programlisting]
....
PORTNAME= tokei
DISTVERSIONPREFIX= v
DISTVERSION= 7.0.2
CATEGORIES= devel
MAINTAINER= tobik@FreeBSD.org
COMMENT= Display statistics about your code
USES= cargo
USE_GITHUB= yes
GH_ACCOUNT= Aaronepower
CARGO_CRATES= aho-corasick-0.6.4 \
ansi_term-0.11.0 \
arrayvec-0.4.7 \
atty-0.2.9 \
bitflags-1.0.1 \
byteorder-1.2.2 \
[...]
.include <bsd.port.mk>
....
[.filename]#distinfo# needs to be regenerated to contain all the crate distribution files:
[source,shell]
....
% make makesum
=> rust/crates/aho-corasick-0.6.4.tar.gz doesn't seem to exist in /usr/ports/distfiles/.
=> Attempting to fetch https://crates.io/api/v1/crates/aho-corasick/0.6.4/download?dummy=/rust/crates/aho-corasick-0.6.4.tar.gz
rust/crates/aho-corasick-0.6.4.tar.gz 100% of 24 kB 6139 kBps 00m00s
=> rust/crates/ansi_term-0.11.0.tar.gz doesn't seem to exist in /usr/ports/distfiles/.
=> Attempting to fetch https://crates.io/api/v1/crates/ansi_term/0.11.0/download?dummy=/rust/crates/ansi_term-0.11.0.tar.gz
rust/crates/ansi_term-0.11.0.tar.gz 100% of 16 kB 21 MBps 00m00s
=> rust/crates/arrayvec-0.4.7.tar.gz doesn't seem to exist in /usr/ports/distfiles/.
=> Attempting to fetch https://crates.io/api/v1/crates/arrayvec/0.4.7/download?dummy=/rust/crates/arrayvec-0.4.7.tar.gz
rust/crates/arrayvec-0.4.7.tar.gz 100% of 22 kB 3237 kBps 00m00s
=> rust/crates/atty-0.2.9.tar.gz doesn't seem to exist in /usr/ports/distfiles/.
=> Attempting to fetch https://crates.io/api/v1/crates/atty/0.2.9/download?dummy=/rust/crates/atty-0.2.9.tar.gz
rust/crates/atty-0.2.9.tar.gz 100% of 5898 B 81 MBps 00m00s
=> rust/crates/bitflags-1.0.1.tar.gz doesn't seem to exist in /usr/ports/distfiles/.
[...]
....
The port is now ready for a test build and further adjustments like creating a plist, writing a description, adding license information, options, etc. as normal.
If you are not testing your port in a clean environment like with Poudriere, remember to run `make clean` before any testing.
====
[[cargo-ex2]]
.Enabling Additional Application Features
[example]
====
Some applications define additional features in their [.filename]#Cargo.toml#. They can be compiled in by setting `CARGO_FEATURES` in the port.
Here we enable Tokei's `json` and `yaml` features:
[.programlisting]
....
CARGO_FEATURES= json yaml
....
====
[[cargo-ex4]]
.Encoding Application Features As Port Options
[example]
====
An example `[features]` section in [.filename]#Cargo.toml# could look like this:
[.programlisting]
....
[features]
pulseaudio_backend = ["librespot-playback/pulseaudio-backend"]
portaudio_backend = ["librespot-playback/portaudio-backend"]
default = ["pulseaudio_backend"]
....
`pulseaudio_backend` is a default feature. It is always enabled unless we explicitly turn off default features by adding `--no-default-features` to `CARGO_FEATURES`. Here we turn the `portaudio_backend` and `pulseaudio_backend` features into port options:
[.programlisting]
....
CARGO_FEATURES= --no-default-features
OPTIONS_DEFINE= PORTAUDIO PULSEAUDIO
PORTAUDIO_VARS= CARGO_FEATURES+=portaudio_backend
PULSEAUDIO_VARS= CARGO_FEATURES+=pulseaudio_backend
....
====
[[cargo-ex3]]
.Listing Crate Licenses
[example]
====
Crates have their own licenses. It is important to know what they are when adding a `LICENSE` block to the port (see crossref:makefiles[licenses,Licenses]). The helper target `cargo-crates-licenses` will try to list all the licenses of all crates defined in `CARGO_CRATES`.
[source,shell]
....
% make cargo-crates-licenses
aho-corasick-0.6.4 Unlicense/MIT
ansi_term-0.11.0 MIT
arrayvec-0.4.7 MIT/Apache-2.0
atty-0.2.9 MIT
bitflags-1.0.1 MIT/Apache-2.0
byteorder-1.2.2 Unlicense/MIT
[...]
....
[NOTE]
======
The license names that `make cargo-crates-licenses` outputs are SPDX 2.1 license expressions, which do not match the license names defined in the ports framework. They need to be translated to the names from crossref:makefiles[licenses-license-list,Predefined License List].
======
====
[[using-meson]]
=== Using `meson`
For ports that use Meson, define `USES=meson`.
[[using-meson-variables]]
.Variables for Ports That Use `meson`
[cols="1,1", frame="none", options="header"]
|===
| Variable
| Description
|`MESON_ARGS`
|Port specific Meson flags to be passed to the `meson` binary.
|`MESON_BUILD_DIR`
|Path to the build directory relative to `WRKSRC`. Default is `_build`.
|===
[[using-meson-example]]
.`USES=meson` Example
[example]
====
This snippet demonstrates the use of Meson for a port.
[.programlisting]
....
USES= meson
MESON_ARGS= -Dfoo=enabled
....
====
[[using-go]]
=== Building Go Applications
For ports that use Go, define `USES=go`. Refer to crossref:uses[uses-go,`go`] for a list of variables that can be set to control the build process.
[[go-ex1]]
.Creating a Port for a Go Modules Based Application
[example]
====
In most cases, it is sufficient to set the `GO_MODULE` variable to the value specified by the `module` directive in `go.mod`:
[.programlisting]
....
PORTNAME= hey
PORTVERSION= 0.1.4
DISTVERSIONPREFIX= v
CATEGORIES= benchmarks
MAINTAINER= dmgk@FreeBSD.org
COMMENT= Tiny program that sends some load to a web application
LICENSE= APACHE20
LICENSE_FILE= ${WRKSRC}/LICENSE
USES= go:modules
GO_MODULE= github.com/rakyll/hey
PLIST_FILES= bin/hey
.include <bsd.port.mk>
....
If the "easy" way is not adequate or more control over dependencies is needed, the full porting process is described below.
Creating a Go based port is a five stage process. First we need to provide a ports template that fetches the application distribution file:
[.programlisting]
....
PORTNAME= ghq
DISTVERSIONPREFIX= v
DISTVERSION= 0.12.5
CATEGORIES= devel
MAINTAINER= tobik@FreeBSD.org
COMMENT= Remote repository management made easy
USES= go:modules
USE_GITHUB= yes
GH_ACCOUNT= motemen
.include <bsd.port.mk>
....
Generate an initial [.filename]#distinfo#:
[source,shell]
....
% make makesum
===> License MIT accepted by the user
=> motemen-ghq-v0.12.5_GH0.tar.gz doesn't seem to exist in /usr/ports/distfiles/.
=> Attempting to fetch https://codeload.github.com/motemen/ghq/tar.gz/v0.12.5?dummy=/motemen-ghq-v0.12.5_GH0.tar.gz
fetch: https://codeload.github.com/motemen/ghq/tar.gz/v0.12.5?dummy=/motemen-ghq-v0.12.5_GH0.tar.gz: size of remote file is not known
motemen-ghq-v0.12.5_GH0.tar.gz 32 kB 177 kBps 00s
....
Now the distribution file is ready to use and we can extract the required Go module dependencies. This step requires having package:ports-mgmt/modules2tuple[] installed:
[source,shell]
....
% make gomod-vendor
[...]
GH_TUPLE= \
Songmu:gitconfig:v0.0.2:songmu_gitconfig/vendor/github.com/Songmu/gitconfig \
daviddengcn:go-colortext:186a3d44e920:daviddengcn_go_colortext/vendor/github.com/daviddengcn/go-colortext \
go-yaml:yaml:v2.2.2:go_yaml_yaml/vendor/gopkg.in/yaml.v2 \
golang:net:3ec191127204:golang_net/vendor/golang.org/x/net \
golang:sync:112230192c58:golang_sync/vendor/golang.org/x/sync \
golang:xerrors:3ee3066db522:golang_xerrors/vendor/golang.org/x/xerrors \
motemen:go-colorine:45d19169413a:motemen_go_colorine/vendor/github.com/motemen/go-colorine \
urfave:cli:v1.20.0:urfave_cli/vendor/github.com/urfave/cli
....
The output of this command needs to be pasted directly into the Makefile:
[.programlisting]
....
PORTNAME= ghq
DISTVERSIONPREFIX= v
DISTVERSION= 0.12.5
CATEGORIES= devel
MAINTAINER= tobik@FreeBSD.org
COMMENT= Remote repository management made easy
USES= go:modules
USE_GITHUB= yes
GH_ACCOUNT= motemen
GH_TUPLE= Songmu:gitconfig:v0.0.2:songmu_gitconfig/vendor/github.com/Songmu/gitconfig \
daviddengcn:go-colortext:186a3d44e920:daviddengcn_go_colortext/vendor/github.com/daviddengcn/go-colortext \
go-yaml:yaml:v2.2.2:go_yaml_yaml/vendor/gopkg.in/yaml.v2 \
golang:net:3ec191127204:golang_net/vendor/golang.org/x/net \
golang:sync:112230192c58:golang_sync/vendor/golang.org/x/sync \
golang:xerrors:3ee3066db522:golang_xerrors/vendor/golang.org/x/xerrors \
motemen:go-colorine:45d19169413a:motemen_go_colorine/vendor/github.com/motemen/go-colorine \
urfave:cli:v1.20.0:urfave_cli/vendor/github.com/urfave/cli
.include <bsd.port.mk>
....
[.filename]#distinfo# needs to be regenerated to contain all the distribution files:
[source,shell]
....
% make makesum
=> Songmu-gitconfig-v0.0.2_GH0.tar.gz doesn't seem to exist in /usr/ports/distfiles/.
=> Attempting to fetch https://codeload.github.com/Songmu/gitconfig/tar.gz/v0.0.2?dummy=/Songmu-gitconfig-v0.0.2_GH0.tar.gz
fetch: https://codeload.github.com/Songmu/gitconfig/tar.gz/v0.0.2?dummy=/Songmu-gitconfig-v0.0.2_GH0.tar.gz: size of remote file is not known
Songmu-gitconfig-v0.0.2_GH0.tar.gz 5662 B 936 kBps 00s
=> daviddengcn-go-colortext-186a3d44e920_GH0.tar.gz doesn't seem to exist in /usr/ports/distfiles/.
=> Attempting to fetch https://codeload.github.com/daviddengcn/go-colortext/tar.gz/186a3d44e920?dummy=/daviddengcn-go-colortext-186a3d44e920_GH0.tar.gz
fetch: https://codeload.github.com/daviddengcn/go-colortext/tar.gz/186a3d44e920?dummy=/daviddengcn-go-colortext-186a3d44e920_GH0.tar.gz: size of remote file is not known
daviddengcn-go-colortext-186a3d44e920_GH0.tar. 4534 B 1098 kBps 00s
[...]
....
The port is now ready for a test build and further adjustments like creating a plist, writing a description, adding license information, options, etc. as normal.
If you are not testing your port in a clean environment like with Poudriere, remember to run `make clean` before any testing.
====
[[go-ex2]]
.Setting Output Binary Name or Installation Path
[example]
====
Some ports need to install the resulting binary under a different name or to a path other than the default `${PREFIX}/bin`. This can be done by using the `GO_TARGET` tuple syntax, for example:
[.programlisting]
....
GO_TARGET= ./cmd/ipfs:ipfs-go
....
will install `ipfs` binary as `${PREFIX}/bin/ipfs-go` and
[.programlisting]
....
GO_TARGET= ./dnscrypt-proxy:${PREFIX}/sbin/dnscrypt-proxy
....
will install `dnscrypt-proxy` to `${PREFIX}/sbin`.
====
[[using-cabal]]
=== Building Haskell Applications with `cabal`
For ports that use the Cabal build system, define `USES=cabal`. Refer to crossref:uses[uses-cabal,`cabal`] for a list of variables that can be set to control the build process.
[[cabal-ex1]]
.Creating a Port for a Hackage-hosted Haskell Application
[example]
====
When preparing a Haskell Cabal port, the package:devel/hs-cabal-install[] program is required, so make sure it is installed beforehand. First we need to define common ports variables that allow cabal-install to fetch the package distribution file:
[.programlisting]
....
PORTNAME= ShellCheck
DISTVERSION= 0.6.0
CATEGORIES= devel
MAINTAINER= haskell@FreeBSD.org
COMMENT= Shell script analysis tool
USES= cabal
.include <bsd.port.mk>
....
This minimal Makefile allows us to fetch the distribution file:
[source,shell]
....
% make cabal-extract
[...]
Downloading the latest package list from hackage.haskell.org
cabal get ShellCheck-0.6.0
Downloading ShellCheck-0.6.0
Downloaded ShellCheck-0.6.0
Unpacking to ShellCheck-0.6.0/
....
Now we have the [.filename]#ShellCheck.cabal# package description file, which allows us to fetch all of the package's dependencies, including transitive ones:
[source,shell]
....
% make cabal-extract-deps
[...]
Resolving dependencies...
Downloading base-orphans-0.8.2
Downloaded base-orphans-0.8.2
Downloading primitive-0.7.0.0
Starting base-orphans-0.8.2 (lib)
Building base-orphans-0.8.2 (lib)
Downloaded primitive-0.7.0.0
Downloading dlist-0.8.0.7
[...]
....
As a side effect, the package's dependencies are also compiled, so the command may take some time. Once done, a list of required dependencies can be generated:
[source,shell]
....
% make make-use-cabal
USE_CABAL=QuickCheck-2.12.6.1 \
hashable-1.3.0.0 \
integer-logarithms-1.0.3 \
[...]
....
Haskell packages may contain revisions, just like FreeBSD ports. Revisions can affect only [.filename]#.cabal# files, but it is still important to pull them in. To check `USE_CABAL` items for available revision updates, run the following command:
[source,shell]
....
% make make-use-cabal-revs
USE_CABAL=QuickCheck-2.12.6.1_1 \
hashable-1.3.0.0 \
integer-logarithms-1.0.3_2 \
[...]
....
Note the additional version numbers after the `_` symbol. Replace the old `USE_CABAL` list with the newly generated one.
Finally, [.filename]#distinfo# needs to be regenerated to contain all the distribution files:
[source,shell]
....
% make makesum
=> ShellCheck-0.6.0.tar.gz doesn't seem to exist in /usr/local/poudriere/ports/git/distfiles/cabal.
=> Attempting to fetch https://hackage.haskell.org/package/ShellCheck-0.6.0/ShellCheck-0.6.0.tar.gz
ShellCheck-0.6.0.tar.gz 136 kB 642 kBps 00s
=> QuickCheck-2.12.6.1/QuickCheck-2.12.6.1.tar.gz doesn't seem to exist in /usr/local/poudriere/ports/git/distfiles/cabal.
=> Attempting to fetch https://hackage.haskell.org/package/QuickCheck-2.12.6.1/QuickCheck-2.12.6.1.tar.gz
QuickCheck-2.12.6.1/QuickCheck-2.12.6.1.tar.gz 65 kB 361 kBps 00s
[...]
....
The port is now ready for a test build and further adjustments like creating a plist, writing a description, adding license information, options, etc. as normal.
If you are not testing your port in a clean environment like with Poudriere, remember to run `make clean` before any testing.
====
[[using-autotools]]
== Using GNU Autotools
If a port needs any of the GNU Autotools software, add `USES=autoreconf`. See crossref:uses[uses-autoreconf,`autoreconf`] for more information.
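A typical combination, sketched here for illustration, pairs `autoreconf` with a GNU-style `configure` run; the exact `USES` list depends on which build tools the software actually needs:
[.programlisting]
....
USES=		autoreconf gmake libtool
GNU_CONFIGURE=	yes
....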
[[using-gettext]]
== Using GNU `gettext`
[[using-gettext-basic]]
=== Basic Usage
If the port requires `gettext`, set `USES= gettext`, and the port will inherit a dependency on [.filename]#libintl.so# from package:devel/gettext[]. Other values for `gettext` usage are listed in crossref:uses[uses-gettext,`USES=gettext`].
A rather common case is a port using `gettext` and `configure`. Generally, GNU `configure` should be able to locate `gettext` automatically.
[.programlisting]
....
USES= gettext
GNU_CONFIGURE= yes
....
If it ever fails to, hints about the location of `gettext` can be passed in `CPPFLAGS` and `LDFLAGS` using `localbase` as follows:
[.programlisting]
....
USES= gettext localbase:ldflags
GNU_CONFIGURE= yes
....
[[using-gettext-optional]]
=== Optional Usage
Some software products allow for disabling NLS, for example, by passing `--disable-nls` to `configure`. In that case, the port must use `gettext` conditionally, depending on the status of the `NLS` option. For ports of low to medium complexity, use this idiom:
[.programlisting]
....
GNU_CONFIGURE= yes
OPTIONS_DEFINE= NLS
OPTIONS_SUB= yes
NLS_USES= gettext
NLS_CONFIGURE_ENABLE= nls
.include <bsd.port.mk>
....
Or using the older way of using options:
[.programlisting]
....
GNU_CONFIGURE= yes
OPTIONS_DEFINE= NLS
.include <bsd.port.options.mk>
.if ${PORT_OPTIONS:MNLS}
USES+= gettext
PLIST_SUB+= NLS=""
.else
CONFIGURE_ARGS+= --disable-nls
PLIST_SUB+= NLS="@comment "
.endif
.include <bsd.port.mk>
....
The next item on the to-do list is to arrange so that the message catalog files are included in the packing list conditionally. The [.filename]#Makefile# part of this task is already provided by the idiom. It is explained in the section on crossref:plist[plist-sub,advanced [.filename]#pkg-plist# practices]. In a nutshell, each occurrence of `%%NLS%%` in [.filename]#pkg-plist# will be replaced by "`@comment `" if NLS is disabled, or by a null string if NLS is enabled. Consequently, the lines prefixed by `%%NLS%%` will become mere comments in the final packing list if NLS is off; otherwise the prefix will be just left out. Then insert `%%NLS%%` before each path to a message catalog file in [.filename]#pkg-plist#. For example:
[.programlisting]
....
%%NLS%%share/locale/fr/LC_MESSAGES/foobar.mo
%%NLS%%share/locale/no/LC_MESSAGES/foobar.mo
....
In high complexity cases, more advanced techniques may be needed, such as crossref:plist[plist-dynamic,dynamic packing list generation].
[[using-gettext-catalog-directories]]
=== Handling Message Catalog Directories
There is a point to note about installing message catalog files. The target directories for them, which reside under [.filename]#LOCALBASE/share/locale#, must not be created and removed by a port. The most popular languages have their respective directories listed in [.filename]#PORTSDIR/Templates/BSD.local.dist#. The directories for many other languages are governed by the package:devel/gettext[] port. Consult its [.filename]#pkg-plist# and see whether the port is going to install a message catalog file for a unique language.
[[using-perl]]
== Using Perl
If `MASTER_SITES` is set to `CPAN`, the correct subdirectory is usually selected automatically. If the default subdirectory is wrong, `CPAN/Module` can be used to change it. `MASTER_SITES` can also be set to the old `MASTER_SITE_PERL_CPAN`, then the preferred value of `MASTER_SITE_SUBDIR` is the top-level hierarchy name. For example, the recommended value for `p5-Module-Name` is `Module`. The top-level hierarchy can be examined at http://cpan.org/modules/by-module/[cpan.org]. This keeps the port working when the author of the module changes.
The exception to this rule is when the relevant directory does not exist or the distfile does not exist in that directory. In such a case, using the author's id as `MASTER_SITE_SUBDIR` is allowed. The `CPAN:AUTHOR` macro can be used, which will be translated to the hashed author directory. For example, `CPAN:AUTHOR` will be converted to `authors/id/A/AU/AUTHOR`.
When a port needs Perl support, it must set `USES=perl5` with the optional `USE_PERL5` described in crossref:uses[uses-perl5,the perl5 USES description].
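Putting this together, a minimal [.filename]#Makefile# for a hypothetical CPAN module `Foo::Bar` could look like this (all names and values are illustrative):
[.programlisting]
....
# Hypothetical CPAN-hosted module Foo::Bar
PORTNAME=	Foo-Bar
DISTVERSION=	1.00
CATEGORIES=	devel perl5
MASTER_SITES=	CPAN
PKGNAMEPREFIX=	p5-

MAINTAINER=	example@FreeBSD.org
COMMENT=	Perl extension for doing something

USES=		perl5
USE_PERL5=	configure

.include <bsd.port.mk>
....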
[[using-perl-variables]]
.Read-Only Variables for Ports That Use Perl
[cols="1,1", frame="none", options="header"]
|===
| Read only variables
| Means
|`PERL`
|The full path of the Perl 5 interpreter, either in the system or installed from a port, but without the version number. Use this when the software needs the path to the Perl interpreter. To replace "``#!``" lines in scripts, use crossref:uses[uses-shebangfix,`shebangfix`].
|`PERL_VERSION`
|The full version of Perl installed (for example, `5.8.9`).
|`PERL_LEVEL`
|The installed Perl version as an integer of the form `MNNNPP` (for example, `500809`).
|`PERL_ARCH`
|Where Perl stores architecture dependent libraries. Defaults to `${ARCH}-freebsd`.
|`PERL_PORT`
|Name of the Perl port that is installed (for example, `perl5`).
|`SITE_PERL`
|Directory name where site specific Perl packages go. This value is added to `PLIST_SUB`.
|===
[NOTE]
====
Ports of Perl modules which do not have an official website must link to `cpan.org` in the WWW line of [.filename]#pkg-descr#. The preferred URL form is `http://search.cpan.org/dist/Module-Name/` (including the trailing slash).
====
[NOTE]
====
Do not use `${SITE_PERL}` in dependency declarations. Doing so assumes that [.filename]#perl5.mk# has been included, which is not always true. Ports depending on this port will have incorrect dependencies if this port's files move later in an upgrade. The right way to declare Perl module dependencies is shown in the example below.
====
[[use-perl-dependency-example]]
.Perl Dependency Example
[example]
====
[.programlisting]
....
p5-IO-Tee>=0.64:devel/p5-IO-Tee
....
====
For Perl ports that install manual pages, the macros `PERL5_MAN3` and `PERL5_MAN1` can be used inside [.filename]#pkg-plist#. For example,
[.programlisting]
....
lib/perl5/5.14/man/man1/event.1.gz
lib/perl5/5.14/man/man3/AnyEvent::I3.3.gz
....
can be replaced with
[.programlisting]
....
%%PERL5_MAN1%%/event.1.gz
%%PERL5_MAN3%%/AnyEvent::I3.3.gz
....
[NOTE]
====
There are no `PERL5_MAN_x_` macros for the other sections (_x_ in `2` and `4` to `9`) because those get installed in the regular directories.
====
[[use-perl-ex-build]]
.A Port Which Only Requires Perl to Build
[example]
====
As the default `USE_PERL5` value is `build run`, set it to:
[.programlisting]
....
USES= perl5
USE_PERL5= build
....
====
[[use-perl-ex-patch]]
.A Port Which Also Requires Perl to Patch
[example]
====
From time to time, using man:sed[1] for patching is not enough. When using man:perl[1] is easier, use:
[.programlisting]
....
USES= perl5
USE_PERL5= patch build run
....
====
[[use-perl-ex-configure]]
.A Perl Module Which Needs `ExtUtils::MakeMaker` to Build
[example]
====
Most Perl modules come with a [.filename]#Makefile.PL# configure script. In this case, set:
[.programlisting]
....
USES= perl5
USE_PERL5= configure
....
====
[[use-perl-ex-modbuild]]
.A Perl Module Which Needs `Module::Build` to Build
[example]
====
When a Perl module comes with a [.filename]#Build.PL# configure script, it can require Module::Build, in which case, set
[.programlisting]
....
USES= perl5
USE_PERL5= modbuild
....
If it instead requires Module::Build::Tiny, set
[.programlisting]
....
USES= perl5
USE_PERL5= modbuildtiny
....
====
[[using-x11]]
== Using X11
[[x11-variables]]
=== X.Org Components
The X11 implementation available in The Ports Collection is X.Org. If the application depends on X components, add `USES= xorg` and set `USE_XORG` to the list of required components. A full list can be found in crossref:uses[uses-xorg,`xorg`].
The Mesa Project is an effort to provide a free OpenGL implementation. To specify a dependency on various components of this project, use `USES= gl` and `USE_GL`. See crossref:uses[uses-gl,`gl`] for a full list of available components. For backwards compatibility, the value of `yes` maps to `glu`.
[[use-xorg-example]]
.`USE_XORG` Example
[example]
====
[.programlisting]
....
USES= gl xorg
USE_GL= glu
USE_XORG= xrender xft xkbfile xt xaw
....
====
[[using-xorg-variables]]
.Variables for Ports That Use X
[cols="1,1", frame="none"]
|===
|`USES= imake`
|The port uses `imake`.
|`XMKMF`
|Set to the path of `xmkmf` if not in the `PATH`. Defaults to `xmkmf -a`.
|===
[[using-x11-vars]]
.Using X11-Related Variables
[example]
====
[.programlisting]
....
# Use some X11 libraries
USES= xorg
USE_XORG= x11 xpm
....
====
[[x11-motif]]
=== Ports That Require Motif
If the port requires a Motif library, define `USES= motif` in the [.filename]#Makefile#. Default Motif implementation is package:x11-toolkits/open-motif[]. Users can choose package:x11-toolkits/lesstif[] instead by setting `WANT_LESSTIF` in their [.filename]#make.conf#.
`MOTIFLIB` will be set by [.filename]#motif.mk# to reference the appropriate Motif library. Please patch the source of the port to use `${MOTIFLIB}` wherever the Motif library is referenced in the original [.filename]#Makefile# or [.filename]#Imakefile#.
There are two common cases:
* If the port refers to the Motif library as `-lXm` in its [.filename]#Makefile# or [.filename]#Imakefile#, substitute `${MOTIFLIB}` for it.
* If the port uses `XmClientLibs` in its [.filename]#Imakefile#, change it to `${MOTIFLIB} ${XTOOLLIB} ${XLIB}`.
Note that `MOTIFLIB` (usually) expands to `-L/usr/local/lib -lXm -lXp` or `/usr/local/lib/libXm.a`, so there is no need to add `-L` or `-l` in front.
[[x11-fonts]]
=== X11 Fonts
If the port installs fonts for the X Window System, put them in [.filename]#LOCALBASE/lib/X11/fonts/local#.
[[x11-fake-display]]
=== Getting a Fake `DISPLAY` with Xvfb
Some applications require a working X11 display for compilation to succeed. This poses a problem for machines that operate headless. When `USES= display` is set, the build infrastructure will start the virtual framebuffer X server. The working `DISPLAY` is then passed to the build. See crossref:uses[uses-display,`USES=display`] for the possible arguments.
[.programlisting]
....
USES= display
....
[[desktop-entries]]
=== Desktop Entries
Desktop entries (http://standards.freedesktop.org/desktop-entry-spec/latest/[a Freedesktop standard]) provide a way to automatically adjust desktop features when a new program is installed, without requiring user intervention. For example, newly-installed programs automatically appear in the application menus of compatible desktop environments. Desktop entries originated in the GNOME desktop environment, but are now a standard and also work with KDE and Xfce. This bit of automation provides a real benefit to the user, and desktop entries are encouraged for applications which can be used in a desktop environment.
[[desktop-entries-predefined]]
==== Using Predefined [.filename]#.desktop# Files
Ports that include predefined [.filename]#*.desktop# files must include those files in [.filename]#pkg-plist# and install them in the [.filename]#$LOCALBASE/share/applications# directory. The crossref:makefiles[install-macros,`INSTALL_DATA` macro] is useful for installing these files.
[[updating-desktop-database]]
==== Updating Desktop Database
If a port has a MimeType entry in its [.filename]#portname.desktop#, the desktop database must be updated after install and deinstall. To do this, define `USES= desktop-file-utils`.
[[desktop-entries-macro]]
==== Creating Desktop Entries with `DESKTOP_ENTRIES`
Desktop entries can be easily created for applications by using `DESKTOP_ENTRIES`. A file named [.filename]#name.desktop# will be created, installed, and added to [.filename]#pkg-plist# automatically. Syntax is:
[.programlisting]
....
DESKTOP_ENTRIES= "NAME" "COMMENT" "ICON" "COMMAND" "CATEGORY" StartupNotify
....
The list of possible categories is available on the http://standards.freedesktop.org/menu-spec/latest/apa.html[Freedesktop website]. `StartupNotify` indicates whether the application is compatible with _startup notifications_. These are typically a graphic indicator like a clock that appears at the mouse pointer, menu, or panel to give the user an indication when a program is starting. A program that is compatible with startup notifications clears the indicator after it has started. Programs that are not compatible with startup notifications would never clear the indicator (potentially confusing and infuriating the user), and must have `StartupNotify` set to `false` so the indicator is not shown at all.
Example:
[.programlisting]
....
DESKTOP_ENTRIES= "ToME" "Roguelike game based on JRR Tolkien's work" \
"${DATADIR}/xtra/graf/tome-128.png" \
"tome -v -g" "Application;Game;RolePlaying;" \
false
....
[[using-gnome]]
== Using GNOME
[[using-gnome-introduction]]
=== Introduction
This chapter explains the GNOME framework as used by ports. The framework can be loosely divided into the base components, GNOME desktop components, and a few special macros that simplify the work of port maintainers.
[[use-gnome]]
=== Using `USE_GNOME`
Adding this variable to the port allows the use of the macros and components defined in [.filename]#bsd.gnome.mk#. The code in [.filename]#bsd.gnome.mk# adds the needed build-time, run-time or library dependencies or the handling of special files. GNOME applications under FreeBSD use the `USE_GNOME` infrastructure. Include all the needed components as a space-separated list. The `USE_GNOME` components are divided into these virtual lists: basic components, GNOME 3 components and legacy components. If the port needs only GTK3 libraries, this is the shortest way to define it:
[.programlisting]
....
USE_GNOME= gtk30
....
`USE_GNOME` components automatically add the dependencies they need. Please see <<gnome-components>> for an exhaustive list of all `USE_GNOME` components and which other components they imply and their dependencies.
Here is an example [.filename]#Makefile# for a GNOME port that uses many of the techniques outlined in this document. Please use it as a guide for creating new ports.
[.programlisting]
....
# $FreeBSD$
PORTNAME= regexxer
DISTVERSION= 0.10
CATEGORIES= devel textproc gnome
MASTER_SITES= GNOME
MAINTAINER= kwm@FreeBSD.org
COMMENT= Interactive tool for performing search and replace operations
USES= gettext gmake localbase:ldflags pathfix pkgconfig tar:xz
GNU_CONFIGURE= yes
USE_GNOME= gnomeprefix intlhack gtksourceviewmm3
INSTALLS_ICONS= yes
GLIB_SCHEMAS= org.regexxer.gschema.xml
.include <bsd.port.mk>
....
[NOTE]
====
The `USE_GNOME` macro without any arguments does not add any dependencies to the port. `USE_GNOME` cannot be set after [.filename]#bsd.port.pre.mk#.
====
[[using-gnome-variables]]
=== Variables
This section explains which macros are available and how they are used, as in the example above. The <<gnome-components>> section has a more in-depth explanation. `USE_GNOME` has to be set for these macros to be of use.
`INSTALLS_ICONS`::
GTK+ ports which install Freedesktop-style icons to [.filename]#${LOCALBASE}/share/icons# should use this macro to ensure that the icons are cached and will display correctly. The cache file is named [.filename]#icon-theme.cache#. Do not include that file in [.filename]#pkg-plist#. This macro handles that automatically. This macro is not needed for Qt, which uses an internal method.
`GLIB_SCHEMAS`::
List of all the glib schema files the port installs. The macro will add the files to the port plist and handle the registration of these files on install and deinstall.
+
The glib schema files are written in XML and end with the [.filename]#gschema.xml# extension. They are installed in the [.filename]#share/glib-2.0/schemas/# directory. These schema files contain all application config values with their default settings. The actual database used by the applications is built by glib-compile-schema, which is run by the `GLIB_SCHEMAS` macro.
+
[.programlisting]
....
GLIB_SCHEMAS=foo.gschema.xml
....
+
[NOTE]
====
Do not add glib schemas to the [.filename]#pkg-plist#. If they are listed in [.filename]#pkg-plist#, they will not be registered and the applications might not work properly.
====
`GCONF_SCHEMAS`::
List all the gconf schema files. The macro will add the schema files to the port plist and will handle their registration on install and deinstall.
+
GConf is the XML-based database that virtually all GNOME applications use for storing their settings. These files are installed into the [.filename]#etc/gconf/schemas# directory. This database is defined by installed schema files that are used to generate [.filename]#%gconf.xml# key files. For each schema file installed by the port, there must be an entry in the [.filename]#Makefile#:
+
[.programlisting]
....
GCONF_SCHEMAS=my_app.schemas my_app2.schemas my_app3.schemas
....
+
[NOTE]
====
Gconf schemas are listed in the `GCONF_SCHEMAS` macro rather than [.filename]#pkg-plist#. If they are listed in [.filename]#pkg-plist#, they will not be registered and the applications might not work properly.
====
`INSTALLS_OMF`::
Open Source Metadata Framework (OMF) files are commonly used by GNOME 2 applications. These files contain the application help file information, and require special processing by ScrollKeeper/rarian. To properly register OMF files when installing GNOME applications from packages, make sure that `omf` files are listed in `pkg-plist` and that the port [.filename]#Makefile# has `INSTALLS_OMF` defined:
+
[.programlisting]
....
INSTALLS_OMF=yes
....
+
When set, [.filename]#bsd.gnome.mk# automatically scans [.filename]#pkg-plist# and adds appropriate `@exec` and `@unexec` directives for each [.filename]#.omf# to track in the OMF registration database.
[[gnome-components]]
== GNOME Components
For further help with a GNOME port, look at some of the link:https://www.FreeBSD.org/ports/gnome.html[existing ports] for examples. The link:https://www.FreeBSD.org/gnome/[FreeBSD GNOME page] has contact information if more help is needed. The components are divided into GNOME components that are currently in use and legacy components. If a component supports arguments, they are listed in parentheses in the description, with the first one being the default. "Both" is shown if the component defaults to adding both build and run dependencies.
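As a hedged illustration of this argument syntax (the component list is only an example, not a recommendation for any particular port), a port could pull in introspection support at build time only:
[.programlisting]
....
USE_GNOME=	glib20 introspection:build libxml2
....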
[[gnome-components-list]]
.GNOME Components
[cols="1,1,1", options="header"]
|===
| Component
| Associated program
| Description
|`atk`
|accessibility/atk
|Accessibility toolkit (ATK)
|`atkmm`
|accessibility/atkmm
|c++ bindings for atk
|`cairo`
|graphics/cairo
|Vector graphics library with cross-device output support
|`cairomm`
|graphics/cairomm
|c++ bindings for cairo
|`dconf`
|devel/dconf
|Configuration database system (both, build, run)
|`evolutiondataserver3`
|databases/evolution-data-server
|Data backends for the Evolution integrated mail/PIM suite
|`gdkpixbuf2`
|graphics/gdk-pixbuf2
|Graphics library for GTK+
|`glib20`
|devel/glib20
|GNOME core library `glib20`
|`glibmm`
|devel/glibmm
|c++ bindings for glib20
|`gnomecontrolcenter3`
|sysutils/gnome-control-center
|GNOME 3 Control Center
|`gnomedesktop3`
|x11/gnome-desktop
|GNOME 3 desktop UI library
|`gsound`
|audio/gsound
|GObject library for playing system sounds (both, build, run)
|`gtk-update-icon-cache`
|graphics/gtk-update-icon-cache
|Gtk-update-icon-cache utility from the Gtk+ toolkit
|`gtk20`
|x11-toolkits/gtk20
|Gtk+ 2 toolkit
|`gtk30`
|x11-toolkits/gtk30
|Gtk+ 3 toolkit
|`gtkmm20`
|x11-toolkits/gtkmm20
|c++ bindings 2.0 for the gtk20 toolkit
|`gtkmm24`
|x11-toolkits/gtkmm24
|c++ bindings 2.4 for the gtk20 toolkit
|`gtkmm30`
|x11-toolkits/gtkmm30
|c++ bindings 3.0 for the gtk30 toolkit
|`gtksourceview2`
|x11-toolkits/gtksourceview2
|Widget that adds syntax highlighting to GtkTextView
|`gtksourceview3`
|x11-toolkits/gtksourceview3
|Text widget that adds syntax highlighting to the GtkTextView widget
|`gtksourceviewmm3`
|x11-toolkits/gtksourceviewmm3
|c++ bindings for the gtksourceview3 library
|`gvfs`
|devel/gvfs
|GNOME virtual file system
|`intltool`
|textproc/intltool
|Tool for internationalization (also see intlhack)
|`introspection`
|devel/gobject-introspection
|Basic introspection bindings and tools to generate introspection bindings. Most of the time :build is enough; :both/:run is only needed for applications that use introspection bindings. (both, build, run)
|`libgda5`
|databases/libgda5
|Provides uniform access to different kinds of data sources
|`libgda5-ui`
|databases/libgda5-ui
|UI library from the libgda5 library
|`libgdamm5`
|databases/libgdamm5
|c++ bindings for the libgda5 library
|`libgsf`
|devel/libgsf
|Extensible I/O abstraction for dealing with structured file formats
|`librsvg2`
|graphics/librsvg2
|Library for parsing and rendering SVG vector-graphic files
|`libsigc++20`
|devel/libsigc++20
|Callback Framework for C++
|`libxml++26`
|textproc/libxml++26
|c++ bindings for the libxml2 library
|`libxml2`
|textproc/libxml2
|XML parser library (both, build, run)
|`libxslt`
|textproc/libxslt
|XSLT C library (both, build, run)
|`metacity`
|x11-wm/metacity
|Window manager from GNOME
|`nautilus3`
|x11-fm/nautilus
|GNOME file manager
|`pango`
|x11-toolkits/pango
|Open-source framework for the layout and rendering of i18n text
|`pangomm`
|x11-toolkits/pangomm
|c++ bindings for the pango library
|`py3gobject3`
|devel/py3-gobject3
|Python 3, GObject 3.0 bindings
|`pygobject3`
|devel/py-gobject3
|Python 2, GObject 3.0 bindings
|`vte3`
|x11-toolkits/vte3
|Terminal widget with improved accessibility and I18N support
|===
[[gnome-components-macro]]
.GNOME Macro Components
[cols="1,1", options="header"]
|===
| Component
| Description
|`gnomeprefix`
|Supply `configure` with some default locations.
|`intlhack`
|Same as intltool, but patches the build to make sure [.filename]#share/locale/# is used. Please only use when `intltool` alone is not enough.
|`referencehack`
|This macro helps split the API or reference documentation into its own port.
|===
[[gnome-components-legacy]]
.GNOME Legacy Components
[cols="1,1,1", options="header"]
|===
| Component
| Associated program
| Description
|`atspi`
|accessibility/at-spi
|Assistive Technology Service Provider Interface
|`esound`
|audio/esound
|Enlightenment sound package
|`gal2`
|x11-toolkits/gal2
|Collection of widgets taken from GNOME 2 gnumeric
|`gconf2`
|devel/gconf2
|Configuration database system for GNOME 2
|`gconfmm26`
|devel/gconfmm26
|c++ bindings for gconf2
|`gdkpixbuf`
|graphics/gdk-pixbuf
|Graphics library for GTK+
|`glib12`
|devel/glib12
|glib 1.2 core library
|`gnomedocutils`
|textproc/gnome-doc-utils
|GNOME doc utils
|`gnomemimedata`
|misc/gnome-mime-data
|MIME and Application database for GNOME 2
|`gnomesharp20`
|x11-toolkits/gnome-sharp20
|GNOME 2 interfaces for the .NET runtime
|`gnomespeech`
|accessibility/gnome-speech
|GNOME 2 text-to-speech API
|`gnomevfs2`
|devel/gnome-vfs
|GNOME 2 Virtual File System
|`gtk12`
|x11-toolkits/gtk12
|Gtk+ 1.2 toolkit
|`gtkhtml3`
|www/gtkhtml3
|Lightweight HTML rendering/printing/editing engine
|`gtkhtml4`
|www/gtkhtml4
|Lightweight HTML rendering/printing/editing engine
|`gtksharp20`
|x11-toolkits/gtk-sharp20
|GTK+ and GNOME 2 interfaces for the .NET runtime
|`gtksourceview`
|x11-toolkits/gtksourceview
|Widget that adds syntax highlighting to GtkTextView
|`libartgpl2`
|graphics/libart_lgpl
|Library for high-performance 2D graphics
|`libbonobo`
|devel/libbonobo
|Component and compound document system for GNOME 2
|`libbonoboui`
|x11-toolkits/libbonoboui
|GUI frontend to the libbonobo component of GNOME 2
|`libgda4`
|databases/libgda4
|Provides uniform access to different kinds of data sources
|`libglade2`
|devel/libglade2
|GNOME 2 glade library
|`libgnome`
|x11/libgnome
|Libraries for GNOME 2, a GNU desktop environment
|`libgnomecanvas`
|graphics/libgnomecanvas
|Graphics library for GNOME 2
|`libgnomekbd`
|x11/libgnomekbd
|GNOME 2 keyboard shared library
|`libgnomeprint`
|print/libgnomeprint
|Gnome 2 print support library
|`libgnomeprintui`
|x11-toolkits/libgnomeprintui
|Gnome 2 print support library
|`libgnomeui`
|x11-toolkits/libgnomeui
|Libraries for the GNOME 2 GUI, a GNU desktop environment
|`libgtkhtml`
|www/libgtkhtml
|Lightweight HTML rendering/printing/editing engine
|`libgtksourceviewmm`
|x11-toolkits/libgtksourceviewmm
|c++ binding of GtkSourceView
|`libidl`
|devel/libIDL
|Library for creating trees of CORBA IDL file
|`libsigc++12`
|devel/libsigc++12
|Callback Framework for C++
|`libwnck`
|x11-toolkits/libwnck
|Library used for writing pagers and taskslists
|`libwnck3`
|x11-toolkits/libwnck3
|Library used for writing pagers and taskslists
|`orbit2`
|devel/ORBit2
|High-performance CORBA ORB with support for the C language
|`pygnome2`
|x11-toolkits/py-gnome2
|Python bindings for GNOME 2
|`pygobject`
|devel/py-gobject
|Python 2, GObject 2.0 bindings
|`pygtk2`
|x11-toolkits/py-gtk2
|Set of Python bindings for GTK+
|`pygtksourceview`
|x11-toolkits/py-gtksourceview
|Python bindings for GtkSourceView 2
|`vte`
|x11-toolkits/vte
|Terminal widget with improved accessibility and I18N support
|===
[[gnome-components-deprecated]]
.Deprecated Components: Do Not Use
[cols="1,1", options="header"]
|===
| Component
| Description
|`pangox-compat`
|pangox-compat has been deprecated and split off from the pango package.
|===
[[using-qt]]
== Using Qt
[NOTE]
====
For ports that are part of Qt itself, see crossref:uses[uses-qt-dist,`qt-dist`].
====
[[qt-common]]
=== Ports That Require Qt
The Ports Collection provides support for Qt 5 with `USES+=qt:5`. Set `USE_QT` to the list of required Qt components (libraries, tools, plugins).
The Qt framework exports a number of variables which can be used by ports, some of them listed below:
[[using-qt-variables]]
.Variables Provided to Ports That Use Qt
[cols="1,1", frame="none"]
|===
|`QMAKE`
|Full path to `qmake` binary.
|`LRELEASE`
|Full path to `lrelease` utility.
|`MOC`
|Full path to `moc`.
|`RCC`
|Full path to `rcc`.
|`UIC`
|Full path to `uic`.
|`QT_INCDIR`
|Qt include directory.
|`QT_LIBDIR`
|Qt libraries path.
|`QT_PLUGINDIR`
|Qt plugins path.
|===
[[qt-components]]
=== Component Selection
Individual Qt tool and library dependencies must be specified in `USE_QT`. Every component can be suffixed with `_build` or `_run`, the suffix indicating whether the dependency on the component is at buildtime or runtime. If unsuffixed, the component will be depended on at both build- and runtime. Usually, library components are specified unsuffixed, tool components are mostly specified with the `_build` suffix and plugin components are specified with the `_run` suffix. The most commonly used components are listed below (all available components are listed in `_USE_QT_ALL`, and `_USE_QT5_ONLY` in [.filename]#/usr/ports/Mk/Uses/qt.mk#):
[[using-qt-library-list]]
.Available Qt Library Components
[cols="1,1", frame="none", options="header"]
|===
| Name
| Description
|`3d`
|Qt3D module
|`assistant`
|Qt 5 documentation browser
|`canvas3d`
|Qt canvas3d module
|`charts`
|Qt 5 charts module
|`concurrent`
|Qt multi-threading module
|`connectivity`
|Qt connectivity (Bluetooth/NFC) module
|`core`
|Qt core non-graphical module
|`datavis3d`
|Qt 5 3D data visualization module
|`dbus`
|Qt D-Bus inter-process communication module
|`declarative`
|Qt declarative framework for dynamic user interfaces
|`designer`
|Qt 5 graphical user interface designer
|`diag`
|Tool for reporting diagnostic information about Qt and its environment
|`doc`
|Qt 5 documentation
|`examples`
|Qt 5 examples sourcecode
|`gamepad`
|Qt 5 Gamepad Module
|`graphicaleffects`
|Qt Quick graphical effects
|`gui`
|Qt graphical user interface module
|`help`
|Qt online help integration module
|`l10n`
|Qt localized messages
|`linguist`
|Qt 5 translation tool
|`location`
|Qt location module
|`multimedia`
|Qt audio, video, radio and camera support module
|`network`
|Qt network module
|`networkauth`
|Qt network auth module
|`opengl`
|Qt 5-compatible OpenGL support module
|`paths`
|Command line client to QStandardPaths
|`phonon4`
|KDE multimedia framework
|`pixeltool`
|Qt 5 screen magnifier
|`plugininfo`
|Qt5 plugin metadata dumper
|`printsupport`
|Qt print support module
|`qdbus`
|Qt command-line interface to D-Bus
|`qdbusviewer`
|Qt 5 graphical interface to D-Bus
|`qdoc`
|Qt documentation generator
|`qdoc-data`
|QDoc configuration files
|`qev`
|Qt QWidget events introspection tool
|`qmake`
|Qt Makefile generator
|`quickcontrols`
|Set of controls for building complete interfaces in Qt Quick
|`quickcontrols2`
|Set of controls for building complete interfaces in Qt Quick
|`remoteobjects`
|Qt 5 Remote Objects module
|`script`
|Qt 4-compatible scripting module
|`scripttools`
|Qt Script additional components
|`scxml`
|Qt 5 SCXML module
|`sensors`
|Qt sensors module
|`serialbus`
|Qt functions to access industrial bus systems
|`serialport`
|Qt functions to access serial ports
|`speech`
|Accessibility features for Qt5
|`sql`
|Qt SQL database integration module
|`sql-ibase`
|Qt InterBase/Firebird database plugin
|`sql-mysql`
|Qt MySQL database plugin
|`sql-odbc`
|Qt Open Database Connectivity plugin
|`sql-pgsql`
|Qt PostgreSQL database plugin
|`sql-sqlite2`
|Qt SQLite 2 database plugin
|`sql-sqlite3`
|Qt SQLite 3 database plugin
|`sql-tds`
|Qt TDS Database Connectivity database plugin
|`svg`
|Qt SVG support module
|`testlib`
|Qt unit testing module
|`uiplugin`
|Custom Qt widget plugin interface for Qt Designer
|`uitools`
|Qt Designer UI forms support module
|`virtualkeyboard`
|Qt 5 Virtual Keyboard Module
|`wayland`
|Qt5 wrapper for Wayland
|`webchannel`
|Qt 5 library for integration of C++/QML with HTML/js clients
|`webengine`
|Qt 5 library to render web content
|`webkit`
|QtWebKit with a more modern WebKit code base
|`websockets`
|Qt implementation of WebSocket protocol
|`websockets-qml`
|Qt implementation of WebSocket protocol (QML bindings)
|`webview`
|Qt component for displaying web content
|`widgets`
|Qt C++ widgets module
|`x11extras`
|Qt platform-specific features for X11-based systems
|`xml`
|Qt SAX and DOM implementations
|`xmlpatterns`
|Qt support for XPath, XQuery, XSLT and XML Schema
|===
To determine the libraries an application depends on, run `ldd` on the main executable after a successful compilation.
[[using-qt-tools-list]]
.Available Qt Tool Components
[cols="1,1", frame="none", options="header"]
|===
| Name
| Description
|`buildtools`
|Build tools (`moc`, `rcc`), needed for almost every Qt application
|`linguisttools`
|Localization tools: `lrelease`, `lupdate`
|`qmake`
|Makefile generator/build utility
|===
[[using-qt-plugins-list]]
.Available Qt Plugin Components
[cols="1,1", frame="none", options="header"]
|===
| Name
| Description
|`imageformats`
|Plugins for TGA, TIFF, and MNG image formats
|===
[[qt5-components-example]]
.Selecting Qt 5 Components
[example]
====
In this example, the ported application uses the Qt 5 graphical user interface library, the Qt 5 core library, all of the Qt 5 code generation tools and Qt 5's Makefile generator. Since the `gui` library implies a dependency on the core library, `core` does not need to be specified. The Qt 5 code generation tools `moc`, `uic` and `rcc`, as well as the Makefile generator `qmake` are only needed at buildtime, thus they are specified with the `_build` suffix:
[.programlisting]
....
USES= qt:5
USE_QT= gui buildtools_build qmake_build
....
====
[[using-qmake]]
=== Using `qmake`
If the application provides a qmake project file ([.filename]#*.pro#), define `USES= qmake` along with `USE_QT`. `USES= qmake` already implies a build dependency on qmake, therefore the qmake component can be omitted from `USE_QT`. Similar to <<using-cmake,CMake>>, qmake supports out-of-source builds, which can be enabled by specifying the `outsource` argument (see <<using-qmake-example,`USES= qmake` example>>). Also see <<using-qmake-arguments>>.
[[using-qmake-arguments]]
.Possible Arguments for `USES= qmake`
[cols="1,1", frame="none", options="header"]
|===
| Variable
| Description
|`no_configure`
|Do not add the configure target. This is implied by `HAS_CONFIGURE=yes` and `GNU_CONFIGURE=yes`. It is required when the build only needs the environment setup from `USES= qmake`, but otherwise runs `qmake` on its own.
|`no_env`
|Suppress modification of the configure and make environments. It is only required when `qmake` is used to configure the software and the build fails to understand the environment setup by `USES= qmake`.
|`norecursive`
|Do not pass the `-recursive` argument to `qmake`.
|`outsource`
|Perform an out-of-source build.
|===
[[using-qmake-variables]]
.Variables for Ports That Use `qmake`
[cols="1,1", frame="none", options="header"]
|===
| Variable
| Description
|`QMAKE_ARGS`
|Port specific qmake flags to be passed to the `qmake` binary.
|`QMAKE_ENV`
|Environment variables to be set for the `qmake` binary. The default is `${CONFIGURE_ENV}`.
|`QMAKE_SOURCE_PATH`
|Path to qmake project files ([.filename]#.pro#). The default is `${WRKSRC}` if an out-of-source build is requested, empty otherwise.
|===
When using `USES= qmake`, these settings are deployed:
[.programlisting]
....
CONFIGURE_ARGS+= --with-qt-includes=${QT_INCDIR} \
--with-qt-libraries=${QT_LIBDIR} \
--with-extra-libs=${LOCALBASE}/lib \
--with-extra-includes=${LOCALBASE}/include
CONFIGURE_ENV+= QTDIR="${QT_PREFIX}" QMAKE="${QMAKE}" \
MOC="${MOC}" RCC="${RCC}" UIC="${UIC}" \
QMAKESPEC="${QMAKESPEC}"
PLIST_SUB+= QT_INCDIR=${QT_INCDIR_REL} \
QT_LIBDIR=${QT_LIBDIR_REL} \
QT_PLUGINDIR=${QT_PLUGINDIR_REL}
....
Some configure scripts do not support the arguments above. To suppress modification of `CONFIGURE_ENV` and `CONFIGURE_ARGS`, set `USES= qmake:no_env`.
[[using-qmake-example]]
.`USES= qmake` Example
[example]
====
This snippet demonstrates the use of qmake for a Qt 5 port:
[.programlisting]
....
USES= qmake:outsource qt:5
USE_QT= buildtools_build
....
====
Qt applications are often written to be cross-platform and often X11/Unix is not the platform they are developed on, which in turn leads to certain loose ends, like:
* _Missing additional include paths._ Many applications come with system tray icon support, but neglect to look for includes and/or libraries in the X11 directories. To add directories to `qmake`'s include and library search paths via the command line, use:
+
[.programlisting]
....
QMAKE_ARGS+= INCLUDEPATH+=${LOCALBASE}/include \
LIBS+=-L${LOCALBASE}/lib
....
* _Bogus installation paths._ Sometimes data such as icons or .desktop files are by default installed into directories which are not scanned by XDG-compatible applications. package:editors/texmaker[] is an example of this - look at [.filename]#patch-texmaker.pro# in the [.filename]#files# directory of that port for a template on how to remedy this directly in the `qmake` project file.
[[using-kde]]
== Using KDE
[[kde5-variables]]
=== KDE Variable Definitions
If the application depends on KDE, set `USES+=kde:5` and `USE_KDE` to the list of required components. `_build` and `_run` suffixes can be used to force the component's dependency type (for example, `baseapps_run`). If no suffix is set, a default dependency type will be used. To force both types, add the component twice with both suffixes (for example, `ecm_build ecm_run`). Available components are listed below (up-to-date components are also listed in [.filename]#/usr/ports/Mk/Uses/kde.mk#):
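As a sketch (the exact component list is illustrative), a port forcing both dependency types on `ecm` while using the default type for the remaining components might contain:
[.programlisting]
....
USES=		cmake kde:5
USE_KDE=	ecm_build ecm_run coreaddons i18n
....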
[[using-kde-components]]
.Available KDE Components
[cols="1,1", frame="none", options="header"]
|===
| Name
| Description
|`activities`
|KF5 runtime and library to organize work in separate activities
|`activities-stats`
|KF5 statistics for activities
|`activitymanagerd`
|System service to manage user's activities, track the usage patterns
|`akonadi`
|Storage server for KDE-Pim
|`akonadicalendar`
|Akonadi Calendar Integration
|`akonadiconsole`
|Akonadi management and debugging console
|`akonadicontacts`
|Libraries and daemons to implement Contact Management in Akonadi
|`akonadiimportwizard`
|Import data from other mail clients to KMail
|`akonadimime`
|Libraries and daemons to implement basic email handling
|`akonadinotes`
|Libraries and daemons to implement management of notes in Akonadi
|`akonadisearch`
|Libraries and daemons to implement searching in Akonadi
|`akregator`
|A Feed Reader by KDE
|`alarmcalendar`
|KDE API for KAlarm alarms
|`apidox`
|KF5 API Documentation Tools
|`archive`
|KF5 library that provides classes for handling archive formats
|`attica`
|Open Collaboration Services API library KDE5 version
|`attica5`
|Open Collaboration Services API library KDE5 version
|`auth`
|KF5 abstraction to system policy and authentication features
|`baloo`
|KF5 Framework for searching and managing user metadata
|`baloo-widgets`
|BalooWidgets library
|`baloo5`
|KF5 Framework for searching and managing user metadata
|`blog`
|KDE API for weblogging access
|`bookmarks`
|KF5 library for bookmarks and the XBEL format
|`breeze`
|Plasma5 artwork, styles and assets for the Breeze visual style
|`breeze-gtk`
|Plasma5 Breeze visual style for Gtk
|`breeze-icons`
|Breeze icon theme for KDE
|`calendarcore`
|KDE calendar access library
|`calendarsupport`
|Calendar support libraries for KDEPim
|`calendarutils`
|KDE utility and user interface functions for accessing calendar
|`codecs`
|KF5 library for string manipulation
|`completion`
|KF5 text completion helpers and widgets
|`config`
|KF5 widgets for configuration dialogs
|`configwidgets`
|KF5 widgets for configuration dialogs
|`contacts`
|KDE api to manage contact information
|`coreaddons`
|KF5 addons to QtCore
|`crash`
|KF5 library to handle crash analysis and bug report from apps
|`dbusaddons`
|KF5 addons to QtDBus
|`decoration`
|Plasma5 library to create window decorations
|`designerplugin`
|KF5 integration of Frameworks widgets in Qt Designer/Creator
|`discover`
|Plasma5 package management tools
|`dnssd`
|KF5 abstraction to system DNSSD features
|`doctools`
|KF5 documentation generation from docbook
|`drkonqi`
|Plasma5 crash handler
|`ecm`
|Extra modules and scripts for CMake
|`emoticons`
|KF5 library to convert emoticons
|`eventviews`
|Event view libraries for KDEPim
|`filemetadata`
|KF5 library for extracting file metadata
|`frameworkintegration`
|KF5 workspace and cross-framework integration plugins
|`gapi`
|KDE based library to access google services
|`globalaccel`
|KF5 library to add support for global workspace shortcuts
|`grantlee-editor`
|Editor for Grantlee themes
|`grantleetheme`
|KDE PIM grantleetheme
|`gravatar`
|Library for gravatar support
|`guiaddons`
|KF5 addons to QtGui
|`holidays`
|KDE library for calendar holidays
|`hotkeys`
|Plasma5 library for hotkeys
|`i18n`
|KF5 advanced internationalization framework
|`iconthemes`
|KF5 library for handling icons in applications
|`identitymanagement`
|KDE pim identities
|`idletime`
|KF5 library for monitoring user activity
|`imap`
|KDE API for IMAP support
|`incidenceeditor`
|Incidence editor libraries for KDEPim
|`infocenter`
|Plasma5 utility providing system information
|`init`
|KF5 process launcher to speed up launching KDE applications
|`itemmodels`
|KF5 models for Qt Model/View system
|`itemviews`
|KF5 widget addons for Qt Model/View
|`jobwidgets`
|KF5 widgets for tracking KJob instance
|`js`
|KF5 library providing an ECMAScript interpreter
|`jsembed`
|KF5 library for binding JavaScript objects to QObjects
|`kaddressbook`
|KDE contact manager
|`kalarm`
|Personal alarm scheduler
|`kate`
|Basic editor framework for the KDE system
|`kcmutils`
|KF5 utilities for working with KCModules
|`kde-cli-tools`
|Plasma5 non-interactive system tools
|`kde-gtk-config`
|Plasma5 GTK2 and GTK3 configurator
|`kdeclarative`
|KF5 library providing integration of QML and KDE Frameworks
|`kded`
|KF5 extensible daemon for providing system level services
|`kdelibs4support`
|KF5 porting aid from KDELibs4
|`kdepim-addons`
|KDE PIM addons
|`kdepim-apps-libs`
|KDE PIM mail related libraries
|`kdepim-runtime5`
|KDE PIM tools and services
|`kdeplasma-addons`
|Plasma5 addons to improve the Plasma experience
|`kdesu`
|KF5 integration with su for elevated privileges
|`kdewebkit`
|KF5 library providing integration of QtWebKit
|`kgamma5`
|Plasma5 monitor's gamma settings
|`khtml`
|KF5 KHTML rendering engine
|`kimageformats`
|KF5 library providing support for additional image formats
|`kio`
|KF5 resource and network access abstraction
|`kirigami2`
|QtQuick based components set
|`kitinerary`
|Data Model and Extraction System for Travel Reservation information
|`kmail`
|KDE mail client
|`kmail-account-wizard`
|KDE mail account wizard
|`kmenuedit`
|Plasma5 menu editor
|`knotes`
|Popup notes
|`kontact`
|KDE Personal Information Manager
|`kontactinterface`
|KDE glue for embedding KParts into Kontact
|`korganizer`
|Calendar and scheduling Program
|`kpimdav`
|A DAV protocol implementation with KJobs
|`kpkpass`
|Library to deal with Apple Wallet pass files
|`kross`
|KF5 multi-language application scripting
|`kscreen`
|Plasma5 screen management library
|`kscreenlocker`
|Plasma5 secure lock screen architecture
|`ksmtp`
|Job-based library to send email through an SMTP server
|`ksshaskpass`
|Plasma5 ssh-add frontend
|`ksysguard`
|Plasma5 utility to track and control the running processes
|`kwallet-pam`
|Plasma5 KWallet PAM Integration
|`kwayland-integration`
|Integration plugins for a Wayland-based desktop
|`kwin`
|Plasma5 window manager
|`kwrited`
|Plasma5 daemon listening for wall and write messages
|`ldap`
|LDAP access API for KDE
|`libkcddb`
|KDE CDDB library
|`libkcompactdisc`
|KDE library for interfacing with audio CDs
|`libkdcraw`
|LibRaw interface for KDE
|`libkdegames`
|Libraries used by KDE games
|`libkdepim`
|KDE PIM Libraries
|`libkeduvocdocument`
|Library for reading and writing vocabulary files
|`libkexiv2`
|Exiv2 library interface for KDE
|`libkipi`
|KDE Image Plugin Interface
|`libkleo`
|Certificate manager for KDE
|`libksane`
|SANE library interface for KDE
|`libkscreen`
|Plasma5 screen management library
|`libksieve`
|Sieve libraries for KDEPim
|`libksysguard`
|Plasma5 library to track and control running processes
|`mailcommon`
|Common libraries for KDEPim
|`mailimporter`
|Import mbox files to KMail
|`mailtransport`
|KDE library for managing mail transport
|`marble`
|Virtual globe and world atlas for KDE
|`mbox`
|KDE library for accessing mail storages in MBox format
|`mbox-importer`
|Import mbox files to KMail
|`mediaplayer`
|KF5 plugin interface for media player features
|`messagelib`
|Library for handling messages
|`milou`
|Plasma5 Plasmoid for search
|`mime`
|Library for handling MIME data
|`newstuff`
|KF5 library for downloading application assets from the network
|`notifications`
|KF5 abstraction for system notifications
|`notifyconfig`
|KF5 configuration system for KNotify
|`okular`
|KDE universal document viewer
|`oxygen`
|Plasma5 Oxygen style
|`oxygen-icons5`
|The Oxygen icon theme for KDE
|`package`
|KF5 library to load and install packages
|`parts`
|KF5 document centric plugin system
|`people`
|KF5 library providing access to contacts
|`pim-data-exporter`
|Import and export KDE PIM settings
|`pimcommon`
|Common libraries for KDEPim
|`pimtextedit`
|KDE library for PIM-specific text editing utilities
|`plasma-browser-integration`
|Plasma5 components to integrate browsers into the desktop
|`plasma-desktop`
|Plasma5 plasma desktop
|`plasma-framework`
|KF5 plugin based UI runtime used to write user interfaces
|`plasma-integration`
|Qt Platform Theme integration plugins for the Plasma workspaces
|`plasma-pa`
|Plasma5 Plasma pulse audio mixer
|`plasma-sdk`
|Plasma5 applications useful for Plasma development
|`plasma-workspace`
|Plasma5 Plasma workspace
|`plasma-workspace-wallpapers`
|Plasma5 wallpapers
|`plotting`
|KF5 lightweight plotting framework
|`polkit-kde-agent-1`
|Plasma5 daemon providing a polkit authentication UI
|`powerdevil`
|Plasma5 tool to manage the power consumption settings
|`prison`
|API to produce barcodes
|`pty`
|KF5 pty abstraction
|`purpose`
|Offers available actions for a specific purpose
|`qqc2-desktop-style`
|Qt QuickControl2 style for KDE
|`runner`
|KF5 parallelized query system
|`service`
|KF5 advanced plugin and service introspection
|`solid`
|KF5 hardware integration and detection
|`sonnet`
|KF5 plugin-based spell checking library
|`syndication`
|KDE RSS feed handling library
|`syntaxhighlighting`
|KF5 syntax highlighting engine for structured text and code
|`systemsettings`
|Plasma5 system settings
|`texteditor`
|KF5 advanced embeddable text editor
|`textwidgets`
|KF5 advanced text editing widgets
|`threadweaver`
|KF5 high-level multithreading framework
|`tnef`
|KDE API for the handling of TNEF data
|`unitconversion`
|KF5 library for unit conversion
|`user-manager`
|Plasma5 user manager
|`wallet`
|KF5 secure and unified container for user passwords
|`wayland`
|KF5 Client and Server library wrapper for the Wayland libraries
|`widgetsaddons`
|KF5 addons to QtWidgets
|`windowsystem`
|KF5 library for access to the windowing system
|`xmlgui`
|KF5 user configurable main windows
|`xmlrpcclient`
|KF5 interaction with XMLRPC services
|===
[[kde5-components-example]]
.`USE_KDE` Example
[example]
====
This is a simple example for a KDE port. `USES= cmake` instructs the port to utilize CMake, a configuration tool widely used by KDE projects (see <<using-cmake>> for detailed usage). `USE_KDE` brings a dependency on KDE libraries. Required KDE components and other dependencies can be determined through the configure log. `USE_KDE` does not imply `USE_QT`. If a port requires some Qt components, specify them in `USE_QT`.
[.programlisting]
....
USES= cmake kde:5 qt:5
USE_KDE= ecm
USE_QT= core buildtools_build qmake_build
....
====
[[using-lxqt]]
== Using LXQt
Applications depending on LXQt should set `USES+= lxqt` and set `USE_LXQT` to the list of required components from the table below.
[[using-lxqt-components]]
.Available LXQt Components
[cols="1,1", frame="none", options="header"]
|===
| Name
| Description
|`buildtools`
|Helpers for additional CMake modules
|`libfmqt`
|Libfm Qt bindings
|`lxqt`
|LXQt core library
|`qtxdg`
|Qt implementation of freedesktop.org XDG specifications
|===
[[lxqt-components-example]]
.`USE_LXQT` Example
[example]
====
This is a simple example; `USE_LXQT` adds a dependency on LXQt libraries. Required LXQt components and other dependencies can be determined from the configure log.
[.programlisting]
....
USES= cmake lxqt qt:5 tar:xz
USE_QT= core dbus widgets buildtools_build qmake_build
USE_LXQT= buildtools libfmqt
....
====
[[using-java]]
== Using Java
[[java-variables]]
=== Variable Definitions
If the port needs a Java(TM) Development Kit (JDK(TM)) to either build, run or even extract the distfile, then define `USE_JAVA`.
There are several JDKs in the ports collection, from various vendors, and in several versions. If the port must use a particular version, specify it using the `JAVA_VERSION` variable. The most current version is package:java/openjdk16[], with package:java/openjdk15[], package:java/openjdk14[], package:java/openjdk13[], package:java/openjdk12[], package:java/openjdk11[], package:java/openjdk8[], and package:java/openjdk7[] also available.
[[using-java-variables]]
.Variables Which May be Set by Ports That Use Java
[cols="1,1", frame="none", options="header"]
|===
| Variable
| Means
|`USE_JAVA`
|Define for the remaining variables to have any effect.
|`JAVA_VERSION`
|List of space-separated suitable Java versions for the port. An optional `"+"` allows specifying a range of versions (allowed values: `7[+] 8[+] 11[+] 12[+] 13[+] 14[+] 15[+] 16[+]`).
|`JAVA_OS`
|List of space-separated suitable JDK port operating systems for the port (allowed values: `native linux`).
|`JAVA_VENDOR`
|List of space-separated suitable JDK port vendors for the port (allowed values: `openjdk oracle`).
|`JAVA_BUILD`
|When set, add the selected JDK port to the build dependencies.
|`JAVA_RUN`
|When set, add the selected JDK port to the run dependencies.
|`JAVA_EXTRACT`
|When set, add the selected JDK port to the extract dependencies.
|===
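For instance, a hedged sketch of a port that needs an OpenJDK of version 8 or later both to build and to run might set:
[.programlisting]
....
USE_JAVA=	yes
JAVA_VERSION=	8+
JAVA_VENDOR=	openjdk
JAVA_BUILD=	yes
JAVA_RUN=	yes
....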
Below is the list of all settings a port will receive after setting `USE_JAVA`:
[[using-java-variables2]]
.Variables Provided to Ports That Use Java
[cols="1,1", frame="none", options="header"]
|===
| Variable
| Value
|`JAVA_PORT`
|The name of the JDK port (for example, `java/openjdk6`).
|`JAVA_PORT_VERSION`
|The full version of the JDK port (for example, `1.6.0`). If only the first two digits of this version number are needed, use `${JAVA_PORT_VERSION:C/^([0-9])\.([0-9])(.*)$/\1.\2/}`.
|`JAVA_PORT_OS`
|The operating system used by the JDK port (for example, `'native'`).
|`JAVA_PORT_VENDOR`
|The vendor of the JDK port (for example, `'openjdk'`).
|`JAVA_PORT_OS_DESCRIPTION`
|Description of the operating system used by the JDK port (for example, `'Native'`).
|`JAVA_PORT_VENDOR_DESCRIPTION`
|Description of the vendor of the JDK port (for example, `'OpenJDK BSD Porting Team'`).
|`JAVA_HOME`
|Path to the installation directory of the JDK (for example, [.filename]#'/usr/local/openjdk6'#).
|`JAVAC`
|Path to the Java compiler to use (for example, [.filename]#'/usr/local/openjdk6/bin/javac'#).
|`JAR`
|Path to the `jar` tool to use (for example, [.filename]#'/usr/local/openjdk6/bin/jar'# or [.filename]#'/usr/local/bin/fastjar'#).
|`APPLETVIEWER`
|Path to the `appletviewer` utility (for example, [.filename]#'/usr/local/openjdk6/bin/appletviewer'#).
|`JAVA`
|Path to the `java` executable. Use this for executing Java programs (for example, [.filename]#'/usr/local/openjdk6/bin/java'#).
|`JAVADOC`
|Path to the `javadoc` utility program.
|`JAVAH`
|Path to the `javah` program.
|`JAVAP`
|Path to the `javap` program.
|`JAVA_KEYTOOL`
|Path to the `keytool` utility program.
|`JAVA_N2A`
|Path to the `native2ascii` tool.
|`JAVA_POLICYTOOL`
|Path to the `policytool` program.
|`JAVA_SERIALVER`
|Path to the `serialver` utility program.
|`RMIC`
|Path to the RMI stub/skeleton generator, `rmic`.
|`RMIREGISTRY`
|Path to the RMI registry program, `rmiregistry`.
|`RMID`
|Path to the RMI daemon program `rmid`.
|`JAVA_CLASSES`
|Path to the archive that contains the JDK class files, [.filename]#${JAVA_HOME}/jre/lib/rt.jar#.
|===
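A minimal sketch of how a few of these variables might be used in a manual build target (the source file name is purely illustrative):
[.programlisting]
....
do-build:
	cd ${WRKSRC} && ${JAVAC} -d . Hello.java && ${JAR} cf ${PORTNAME}.jar Hello.class
....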
Use the `java-debug` make target to get information for debugging the port. It will display the value of many of the previously listed variables.
Additionally, these constants are defined so all Java ports may be installed in a consistent way:
[[using-java-constants]]
.Constants Defined for Ports That Use Java
[cols="1,1", frame="none", options="header"]
|===
| Constant
| Value
|`JAVASHAREDIR`
|The base directory for everything related to Java. Default: [.filename]#${PREFIX}/share/java#.
|`JAVAJARDIR`
|The directory where JAR files are installed. Default: [.filename]#${JAVASHAREDIR}/classes#.
|`JAVALIBDIR`
|The directory where JAR files installed by other ports are located. Default: [.filename]#${LOCALBASE}/share/java/classes#.
|===
The related entries are defined in both `PLIST_SUB` (documented in <<plist-sub>>) and `SUB_LIST`.
[[java-building-with-ant]]
=== Building with Ant
When the port is to be built using Apache Ant, it has to define `USE_ANT`. Ant is thus considered to be the sub-make command. When no `do-build` target is defined by the port, a default one will be set that runs Ant according to `MAKE_ENV`, `MAKE_ARGS` and `ALL_TARGET`. This is similar to the `USES= gmake` mechanism, which is documented in <<building>>.
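A hedged sketch of such a port, assuming the project's [.filename]#build.xml# provides a `jar` target:
[.programlisting]
....
USE_JAVA=	yes
JAVA_VERSION=	8+
USE_ANT=	yes
ALL_TARGET=	jar
....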
[[java-best-practices]]
=== Best Practices
When porting a Java library, the port has to install the JAR file(s) in [.filename]#${JAVAJARDIR}#, and everything else under [.filename]#${JAVASHAREDIR}/${PORTNAME}# (except for the documentation, see below). To reduce the packing file size, reference the JAR file(s) directly in the [.filename]#Makefile#. Use this statement (where [.filename]#myport.jar# is the name of the JAR file installed as part of the port):
[.programlisting]
....
PLIST_FILES+= ${JAVAJARDIR}/myport.jar
....
When porting a Java application, the port usually installs everything under a single directory (including its JAR dependencies). The use of [.filename]#${JAVASHAREDIR}/${PORTNAME}# is strongly encouraged in this regard. It is up to the porter to decide whether the port installs the additional JAR dependencies under this directory or uses the already installed ones (from [.filename]#${JAVAJARDIR}#).
When porting a Java(TM) application that requires an application server such as package:www/tomcat7[] to run the service, it is quite common for a vendor to distribute a [.filename]#.war#. A [.filename]#.war# is a Web application ARchive and is extracted when called by the application. Avoid adding a [.filename]#.war# to [.filename]#pkg-plist#; it is not considered best practice. An application server will expand the war archive, but will not clean it up properly if the port is removed. A more desirable way of working with this file is to extract the archive, then install the files, and lastly add these files to [.filename]#pkg-plist#.
[.programlisting]
....
TOMCATDIR= ${LOCALBASE}/apache-tomcat-7.0
WEBAPPDIR= myapplication
post-extract:
	@${MKDIR} ${WRKDIR}/${PORTDIRNAME}
	@${TAR} xf ${WRKDIR}/myapplication.war -C ${WRKDIR}/${PORTDIRNAME}

do-install:
	cd ${WRKDIR} && \
	${INSTALL} -d -o ${WWWOWN} -g ${WWWGRP} ${TOMCATDIR}/webapps/${PORTDIRNAME}
	cd ${WRKDIR}/${PORTDIRNAME} && ${COPYTREE_SHARE} \* ${WEBAPPDIR}/${PORTDIRNAME}
....
Regardless of the type of port (library or application), the additional documentation is installed in the crossref:makefiles[install-documentation,same location] as for any other port. The Javadoc tool is known to produce a different set of files depending on the version of the JDK that is used. For ports that do not enforce the use of a particular JDK, it is therefore a complex task to specify the packing list ([.filename]#pkg-plist#). This is one reason why porters are strongly encouraged to use `PORTDOCS`. Moreover, even if the set of files that will be generated by `javadoc` can be predicted, the size of the resulting [.filename]#pkg-plist# advocates for the use of `PORTDOCS`.
The default value for `DATADIR` is [.filename]#${PREFIX}/share/${PORTNAME}#. It is a good idea to override `DATADIR` to [.filename]#${JAVASHAREDIR}/${PORTNAME}# for Java ports. Indeed, `DATADIR` is automatically added to `PLIST_SUB` (documented in crossref:plist[plist-sub,Changing pkg-plist Based on Make Variables]) so use `%%DATADIR%%` directly in [.filename]#pkg-plist#.
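For example, the override suggested above is a single line in the port [.filename]#Makefile#:
[.programlisting]
....
DATADIR=	${JAVASHAREDIR}/${PORTNAME}
....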
As for the choice of building Java ports from source or directly installing them from a binary distribution, there is no defined policy at the time of writing. However, people from the https://www.freebsd.org/java/[FreeBSD Java Project] encourage porters to have their ports built from source whenever it is a trivial task.
All the features that have been presented in this section are implemented in [.filename]#bsd.java.mk#. If the port needs more sophisticated Java support, please first have a look at the https://cgit.FreeBSD.org/ports/tree/Mk/bsd.java.mk[bsd.java.mk Git log] as it usually takes some time to document the latest features. Then, if the needed support that is lacking would be beneficial to many other Java ports, feel free to discuss it on the freebsd-java mailing list.
Although there is a `java` category for PRs, it refers to the JDK porting effort from the FreeBSD Java project. Therefore, submit the Java port in the `ports` category as for any other port, unless the issue is related to either a JDK implementation or [.filename]#bsd.java.mk#.
Similarly, there is a defined policy regarding the `CATEGORIES` of a Java port, which is detailed in crossref:makefiles[makefile-categories,Categorization].
[[using-php]]
== Web Applications, Apache and PHP
[[using-apache]]
=== Apache
[[using-apache-variables]]
.Variables for Ports That Use Apache
[cols="1,1", frame="none"]
|===
|`USE_APACHE`
|The port requires Apache. Possible values: `yes` (gets any version), `22`, `24`, `22-24`, `22+`, etc. The default APACHE version is `22`. More details are available in [.filename]#ports/Mk/bsd.apache.mk# and at https://wiki.freebsd.org/Apache/[wiki.freebsd.org/Apache/].
|`APXS`
|Full path to the `apxs` binary. Can be overridden in the port.
|`HTTPD`
|Full path to the `httpd` binary. Can be overridden in the port.
|`APACHE_VERSION`
|The version of present Apache installation (read-only variable). This variable is only available after inclusion of [.filename]#bsd.port.pre.mk#. Possible values: `22`, `24`.
|`APACHEMODDIR`
|Directory for Apache modules. This variable is automatically expanded in [.filename]#pkg-plist#.
|`APACHEINCLUDEDIR`
|Directory for Apache headers. This variable is automatically expanded in [.filename]#pkg-plist#.
|`APACHEETCDIR`
|Directory for Apache configuration files. This variable is automatically expanded in [.filename]#pkg-plist#.
|===
[[using-apache-modules]]
.Useful Variables for Porting Apache Modules
[cols="1,1", frame="none"]
|===
|`MODULENAME`
|Name of the module. Default value is `PORTNAME`. Example: `mod_hello`
|`SHORTMODNAME`
|Short name of the module. Automatically derived from `MODULENAME`, but can be overridden. Example: `hello`
|`AP_FAST_BUILD`
|Use `apxs` to compile and install the module.
|`AP_GENPLIST`
|Also automatically creates a [.filename]#pkg-plist#.
|`AP_INC`
|Adds a directory to a header search path during compilation.
|`AP_LIB`
|Adds a directory to a library search path during compilation.
|`AP_EXTRAS`
|Additional flags to pass to `apxs`.
|===
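A hedged sketch of a hypothetical Apache module port combining these variables (all names are illustrative):
[.programlisting]
....
PORTNAME=	mod_hello
USE_APACHE=	22+
AP_FAST_BUILD=	yes
AP_GENPLIST=	yes
....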
[[web-apps]]
=== Web Applications
Web applications must be installed into [.filename]#PREFIX/www/appname#. This path is available both in [.filename]#Makefile# and in [.filename]#pkg-plist# as `WWWDIR`, and the path relative to `PREFIX` is available in [.filename]#Makefile# as `WWWDIR_REL`.
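As a sketch only, a port that stages a static web application into `WWWDIR` by hand might use:
[.programlisting]
....
do-install:
	${MKDIR} ${STAGEDIR}${WWWDIR}
	cd ${WRKSRC} && ${COPYTREE_SHARE} . ${STAGEDIR}${WWWDIR}
....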
The user and group of web server process are available as `WWWOWN` and `WWWGRP`, in case the ownership of some files needs to be changed. The default values of both are `www`. Use `WWWOWN?= myuser` and `WWWGRP?= mygroup` if the port needs different values. This allows the user to override them easily.
[IMPORTANT]
====
Use `WWWOWN` and `WWWGRP` sparingly. Remember that every file the web server can write to is a security risk waiting to happen.
====
Do not depend on Apache unless the web app explicitly needs Apache. Respect that users may wish to run a web application on a web server other than Apache.
[[php-variables]]
=== PHP
PHP web applications declare their dependency on it with `USES=php`. See crossref:uses[uses-php,`php`] for more information.
[[php-pear]]
=== PEAR Modules
Porting PEAR modules is a very simple process.
Add `USES=pear` to the port's [.filename]#Makefile#. The framework will install the relevant files in the right places and automatically generate the plist at install time.
[[pear-makefile]]
.Example Makefile for PEAR Class
[example]
====
[.programlisting]
....
PORTNAME= Date
DISTVERSION= 1.4.3
CATEGORIES= devel www pear
MAINTAINER= example@domain.com
COMMENT= PEAR Date and Time Zone Classes
USES= pear
.include <bsd.port.mk>
....
====
[TIP]
====
PEAR modules will automatically be flavorized using crossref:flavors[flavors-auto-php,PHP flavors].
====
[NOTE]
====
If a non default `PEAR_CHANNEL` is used, the build and run-time dependencies will automatically be added.
====
[IMPORTANT]
====
PEAR modules do not need to define `PKGNAMESUFFIX`; it is automatically filled in using `PEAR_PKGNAMEPREFIX`. If a port needs to add to `PKGNAMEPREFIX`, it must also use `PEAR_PKGNAMEPREFIX` to differentiate between different flavors.
====
[[php-horde]]
==== Horde Modules
In the same way, porting Horde modules is a simple process.
Add `USES=horde` to the port's [.filename]#Makefile#. The framework will install the relevant files in the right places and automatically generate the plist at install time.
The `USE_HORDE_BUILD` and `USE_HORDE_RUN` variables can be used to add buildtime and runtime dependencies on other Horde modules. See [.filename]#Mk/Uses/horde.mk# for a complete list of available modules.
[[horde-Makefile]]
.Example Makefile for Horde Module
[example]
====
[.programlisting]
....
PORTNAME= Horde_Core
DISTVERSION= 2.14.0
CATEGORIES= devel www pear
MAINTAINER= horde@FreeBSD.org
COMMENT= Horde Core Framework libraries
OPTIONS_DEFINE= KOLAB SOCKETS
KOLAB_DESC= Enable Kolab server support
SOCKETS_DESC= Depend on sockets PHP extension
USES= horde
USE_PHP= session
USE_HORDE_BUILD= Horde_Role
USE_HORDE_RUN= Horde_Role Horde_History Horde_Pack \
Horde_Text_Filter Horde_View
KOLAB_USE= HORDE_RUN=Horde_Kolab_Server,Horde_Kolab_Session
SOCKETS_USE= PHP=sockets
.include <bsd.port.mk>
....
====
[TIP]
====
As Horde modules are also PEAR modules they will also automatically be flavorized using crossref:flavors[flavors-auto-php,PHP flavors].
====
[[using-python]]
== Using Python
The Ports Collection supports parallel installation of multiple Python versions. Ports must use a correct `python` interpreter, according to the user-settable `PYTHON_VERSION`. Most prominently, this means replacing the path to `python` executable in scripts with the value of `PYTHON_CMD`.
Ports that install files under `PYTHON_SITELIBDIR` must use the `pyXY-` package name prefix, so their package name embeds the version of Python they are installed into.
[.programlisting]
....
PKGNAMEPREFIX= ${PYTHON_PKGNAMEPREFIX}
....
[[using-python-variables]]
.Most Useful Variables for Ports That Use Python
[cols="1,1", frame="none"]
|===
|`USES=python`
|The port needs Python. The minimal required version can be specified with values such as `2.7+`. Version ranges can also be specified by separating two version numbers with a dash: `USES=python:3.2-3.3`
|`USE_PYTHON=distutils`
|Use Python distutils for configuring, compiling, and installing. This is required when the port comes with [.filename]#setup.py#. This overrides the `do-build` and `do-install` targets and may also override `do-configure` if `GNU_CONFIGURE` is not defined. Additionally, it implies `USE_PYTHON=flavors`.
|`USE_PYTHON=autoplist`
|Create the packaging list automatically. This also requires `USE_PYTHON=distutils` to be set.
|`USE_PYTHON=concurrent`
|The port will use a unique prefix, typically `PYTHON_PKGNAMEPREFIX`, for certain directories such as `EXAMPLESDIR` and `DOCSDIR`, and will also append a suffix (the Python version from `PYTHON_VER`) to binaries and scripts to be installed. This allows ports to be installed for different Python versions at the same time, which would otherwise install conflicting files.
|`USE_PYTHON=flavors`
|The port does not use distutils but still supports multiple Python versions. `FLAVORS` will be set to the supported Python versions. See crossref:flavors[flavors-auto-python,`USES=python` and Flavors] for more information.
|`USE_PYTHON=optsuffix`
|If the current Python version is not the default version, the port will gain `PKGNAMESUFFIX=${PYTHON_PKGNAMESUFFIX}`. Only useful with flavors.
|`PYTHON_PKGNAMEPREFIX`
|Used as a `PKGNAMEPREFIX` to distinguish packages for different Python versions. Example: `py27-`
|`PYTHON_SITELIBDIR`
|Location of the site-packages tree, which contains the installation path of Python (usually `LOCALBASE`). `PYTHON_SITELIBDIR` can be very useful when installing Python modules.
|`PYTHONPREFIX_SITELIBDIR`
|The PREFIX-clean variant of PYTHON_SITELIBDIR. Always use `%%PYTHON_SITELIBDIR%%` in [.filename]#pkg-plist# when possible. The default value of `%%PYTHON_SITELIBDIR%%` is `lib/python%%PYTHON_VERSION%%/site-packages`
|`PYTHON_CMD`
|Python interpreter command line, including version number.
|===
[[using-python-variables-helpers]]
.Python Module Dependency Helpers
[cols="1,1", frame="none"]
|===
|`PYNUMERIC`
|Dependency line for numeric extension.
|`PYNUMPY`
|Dependency line for the new numeric extension, numpy. (PYNUMERIC is deprecated by upstream vendor).
|`PYXML`
|Dependency line for XML extension (not needed for Python 2.0 and higher as it is also in the base distribution).
|`PY_ENUM34`
|Conditional dependency on package:devel/py-enum34[] depending on the Python version.
|`PY_ENUM_COMPAT`
|Conditional dependency on package:devel/py-enum-compat[] depending on the Python version.
|`PY_PATHLIB`
|Conditional dependency on package:devel/py-pathlib[] depending on the Python version.
|`PY_IPADDRESS`
|Conditional dependency on package:net/py-ipaddress[] depending on the Python version.
|`PY_FUTURES`
|Conditional dependency on package:devel/py-futures[] depending on the Python version.
|===
A complete list of available variables can be found in [.filename]#/usr/ports/Mk/Uses/python.mk#.
[IMPORTANT]
====
All dependencies to Python ports using crossref:flavors[flavors-auto-python,Python flavors] (either with `USE_PYTHON=distutils` or `USE_PYTHON=flavors`) must have the Python flavor appended to their origin using `@${PY_FLAVOR}`. See <<python-Makefile>>.
====
[[python-Makefile]]
.Makefile for a Simple Python Module
[example]
====
[.programlisting]
....
PORTNAME= sample
DISTVERSION= 1.2.3
CATEGORIES= devel
MAINTAINER= john@doe.tld
COMMENT= Python sample module
RUN_DEPENDS= ${PYTHON_PKGNAMEPREFIX}six>0:devel/py-six@${PY_FLAVOR}
USES= python
USE_PYTHON= autoplist distutils
.include <bsd.port.mk>
....
====
Some Python applications claim to have `DESTDIR` support (which would be required for staging) but it is broken (Mailman up to 2.1.16, for instance). This can be worked around by recompiling the scripts. This can be done, for example, in the `post-build` target. Assuming the Python scripts are supposed to reside in `PYTHONPREFIX_SITELIBDIR` after installation, this solution can be applied:
[.programlisting]
....
(cd ${STAGEDIR}${PREFIX} \
&& ${PYTHON_CMD} ${PYTHON_LIBDIR}/compileall.py \
-d ${PREFIX} -f ${PYTHONPREFIX_SITELIBDIR:S;${PREFIX}/;;})
....
This recompiles the sources with a path relative to the stage directory, and prepends the value of `PREFIX` to the file name recorded in the byte-compiled output file by `-d`. `-f` is required to force recompilation, and the `:S;${PREFIX}/;;` strips prefixes from the value of `PYTHONPREFIX_SITELIBDIR` to make it relative to `PREFIX`.
[[using-tcl]]
== Using Tcl/Tk
The Ports Collection supports parallel installation of multiple Tcl/Tk versions. Ports should try to support at least the default Tcl/Tk version and higher with `USES=tcl`. It is possible to specify the desired version of `tcl` by appending `:_xx_`, for example, `USES=tcl:85`.
[[using-tcl-variables]]
.The Most Useful Read-Only Variables for Ports That Use Tcl/Tk
[cols="1,1", frame="none"]
|===
|`TCL_VER`
| chosen major.minor version of Tcl
|`TCLSH`
| full path of the Tcl interpreter
|`TCL_LIBDIR`
| path of the Tcl libraries
|`TCL_INCLUDEDIR`
| path of the Tcl C header files
|`TK_VER`
| chosen major.minor version of Tk
|`WISH`
| full path of the Tk interpreter
|`TK_LIBDIR`
| path of the Tk libraries
|`TK_INCLUDEDIR`
| path of the Tk C header files
|===
See the crossref:uses[uses-tcl,`USES=tcl`] and crossref:uses[uses-tk,`USES=tk`] sections of crossref:uses[uses,Using `USES` Macros] for a full description of those variables. A complete list of those variables is available in [.filename]#/usr/ports/Mk/Uses/tcl.mk#.
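As a sketch, and assuming the application's `configure` script accepts `--with-tcl` and `--with-tclinclude` options (these option names vary between applications and are only hypothetical here), the read-only variables can be passed along like this:
[.programlisting]
....
USES=		tcl:85
GNU_CONFIGURE=	yes
CONFIGURE_ARGS+=	--with-tcl=${TCL_LIBDIR} \
		--with-tclinclude=${TCL_INCLUDEDIR}
....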
[[using-ruby]]
== Using Ruby
[[using-ruby-variables]]
.Useful Variables for Ports That Use Ruby
[cols="1,1", frame="none", options="header"]
|===
| Variable
| Description
|`USE_RUBY`
|Adds build and run dependencies on Ruby.
|`USE_RUBY_EXTCONF`
|The port uses [.filename]#extconf.rb# to configure.
|`USE_RUBY_SETUP`
|The port uses [.filename]#setup.rb# to configure.
|`RUBY_SETUP`
|Override the name of the setup script from [.filename]#setup.rb#. Another common value is [.filename]#install.rb#.
|===
This table shows the selected variables available to port authors via the ports infrastructure. These variables are used to install files into their proper locations. Use them in [.filename]#pkg-plist# as much as possible. Do not redefine these variables in the port.
[[using-ruby-variables-ro]]
.Selected Read-Only Variables for Ports That Use Ruby
[cols="1,1,1", frame="none", options="header"]
|===
| Variable
| Description
| Example value
|`RUBY_PKGNAMEPREFIX`
|Used as a `PKGNAMEPREFIX` to distinguish packages for different Ruby versions.
|`ruby19-`
|`RUBY_VERSION`
|Full version of Ruby in the form of `x.y.z[.p]`.
|`1.9.3.484`
|`RUBY_SITELIBDIR`
|Architecture independent libraries installation path.
|`/usr/local/lib/ruby/site_ruby/1.9`
|`RUBY_SITEARCHLIBDIR`
|Architecture dependent libraries installation path.
|`/usr/local/lib/ruby/site_ruby/1.9/amd64-freebsd10`
|`RUBY_MODDOCDIR`
|Module documentation installation path.
|`/usr/local/share/doc/ruby19/patsy`
|`RUBY_MODEXAMPLESDIR`
|Module examples installation path.
|`/usr/local/share/examples/ruby19/patsy`
|===
A complete list of available variables can be found in [.filename]#/usr/ports/Mk/bsd.ruby.mk#.
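A minimal sketch of a port for an [.filename]#extconf.rb#-based Ruby extension might therefore need no more than:
[.programlisting]
....
USE_RUBY=	yes
USE_RUBY_EXTCONF=	yes
....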
[[using-sdl]]
== Using SDL
`USE_SDL` is used to autoconfigure the dependencies for ports which use an SDL based library like package:devel/sdl12[] and package:graphics/sdl_image[].
These SDL libraries for version 1.2 are recognized:
* sdl: package:devel/sdl12[]
* console: package:devel/sdl_console[]
* gfx: package:graphics/sdl_gfx[]
* image: package:graphics/sdl_image[]
* mixer: package:audio/sdl_mixer[]
* mm: package:devel/sdlmm[]
* net: package:net/sdl_net[]
* pango: package:x11-toolkits/sdl_pango[]
* sound: package:audio/sdl_sound[]
* ttf: package:graphics/sdl_ttf[]
These SDL libraries for version 2.0 are recognized:
* sdl: package:devel/sdl20[]
* gfx: package:graphics/sdl2_gfx[]
* image: package:graphics/sdl2_image[]
* mixer: package:audio/sdl2_mixer[]
* net: package:net/sdl2_net[]
* ttf: package:graphics/sdl2_ttf[]
Therefore, if a port has a dependency on package:net/sdl_net[] and package:audio/sdl_mixer[], the syntax will be:
[.programlisting]
....
USE_SDL= net mixer
....
The dependency package:devel/sdl12[], which is required by package:net/sdl_net[] and package:audio/sdl_mixer[], is automatically added as well.
Using `USE_SDL` with entries for SDL 1.2, it will automatically:
* Add a dependency on sdl12-config to `BUILD_DEPENDS`
* Add the variable `SDL_CONFIG` to `CONFIGURE_ENV`
* Add the dependencies of the selected libraries to `LIB_DEPENDS`
Using `USE_SDL` with entries for SDL 2.0, it will automatically:
* Add a dependency on sdl2-config to `BUILD_DEPENDS`
* Add the variable `SDL2_CONFIG` to `CONFIGURE_ENV`
* Add the dependencies of the selected libraries to `LIB_DEPENDS`
[[using-wx]]
== Using wxWidgets
This section describes the status of the wxWidgets libraries in the ports tree and their integration with the ports system.
[[wx-introduction]]
=== Introduction
There are many versions of the wxWidgets libraries which conflict with each other (they install files under the same name). In the ports tree this problem has been solved by installing each version under a different name using version number suffixes.
The obvious disadvantage of this is that each application has to be modified to find the expected version. Fortunately, most applications call the `wx-config` script to determine the necessary compiler and linker flags. The script is named differently for every available version. The majority of applications respect an environment variable, or accept a configure argument, to specify which `wx-config` script to call. Otherwise they have to be patched.
[[wx-version]]
=== Version Selection
To make the port use a specific version of wxWidgets, two variables are available for the porter to define (if only one is defined, the other will be set to a default value):
[[wx-ver-sel-table]]
.Variables to Select wxWidgets Versions
[cols="1,1,1", frame="none", options="header"]
|===
| Variable
| Description
| Default value
|`USE_WX`
|List of versions the port can use
|All available versions
|`USE_WX_NOT`
|List of versions the port cannot use
|None
|===
The available wxWidgets versions and the corresponding ports in the tree are:
[[wx-widgets-versions-table]]
.Available wxWidgets Versions
[cols="1,1", frame="none", options="header"]
|===
| Version
| Port
|`2.8`
|package:x11-toolkits/wxgtk28[]
|`3.0`
|package:x11-toolkits/wxgtk30[]
|===
The variables in <<wx-ver-sel-table>> can be set to one or more of these combinations separated by spaces:
[[wx-widgets-versions-specification]]
.wxWidgets Version Specifications
[cols="1,1", frame="none", options="header"]
|===
| Description
| Example
|Single version
|`2.8`
|Ascending range
|`2.8+`
|Descending range
|`3.0-`
|Full range (must be ascending)
|`2.8-3.0`
|===
There are also some variables to select the preferred versions from the available ones. They can be set to a list of versions, the first ones will have higher priority.
[[wx-widgets-preferred-version]]
.Variables to Select Preferred wxWidgets Versions
[cols="1,1", frame="none", options="header"]
|===
| Name
| Designed for
|`WANT_WX_VER`
|the port
|`WITH_WX_VER`
|the user
|===
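For example, a port that can use either available version but prefers `3.0` might set:
[.programlisting]
....
USE_WX=		2.8 3.0
WANT_WX_VER=	3.0
....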
[[wx-components]]
=== Component Selection
There are other applications that, while not being wxWidgets libraries, are related to them. These applications can be specified in `WX_COMPS`. These components are available:
[[wx-widgets-components-table]]
.Available wxWidgets Components
[cols="1,1,1", frame="none", options="header"]
|===
| Name
| Description
| Version restriction
|`wx`
|main library
|none
|`contrib`
|contributed libraries
|none
|`python`
|wxPython (Python bindings)
|`2.8-3.0`
|===
The dependency type can be selected for each component by adding a suffix separated by a semicolon. If not present then a default type will be used (see <<wx-def-dep-types>>). These types are available:
[[wx-widgets-dependency-table]]
.Available wxWidgets Dependency Types
[cols="1,1", frame="none", options="header"]
|===
| Name
| Description
|`build`
|Component is required for building, equivalent to `BUILD_DEPENDS`
|`run`
|Component is required for running, equivalent to `RUN_DEPENDS`
|`lib`
|Component is required for building and running, equivalent to `LIB_DEPENDS`
|===
The default values for the components are detailed in this table:
[[wx-def-dep-types]]
.Default wxWidgets Dependency Types
[cols="1,1", frame="none", options="header"]
|===
| Component
| Dependency type
|`wx`
|`lib`
|`contrib`
|`lib`
|`python`
|`run`
|`mozilla`
|`lib`
|`svg`
|`lib`
|===
[[wx-components-example]]
.Selecting wxWidgets Components
[example]
====
This fragment corresponds to a port which uses wxWidgets version `2.8` and its contributed libraries.
[.programlisting]
....
USE_WX= 2.8
WX_COMPS= wx contrib
....
====
[[wx-version-detection]]
=== Detecting Installed Versions
To detect an installed version, define `WANT_WX`. If it is not set to a specific version then the components will have a version suffix. `HAVE_WX` will be filled after detection.
[[wx-ver-det-example]]
.Detecting Installed wxWidgets Versions and Components
[example]
====
This fragment can be used in a port that uses wxWidgets if it is installed, or an option is selected.
[.programlisting]
....
WANT_WX= yes
.include <bsd.port.pre.mk>
.if defined(WITH_WX) || !empty(PORT_OPTIONS:MWX) || !empty(HAVE_WX:Mwx-2.8)
USE_WX= 2.8
CONFIGURE_ARGS+= --enable-wx
.endif
....
This fragment can be used in a port that enables wxPython support if it is installed or if an option is selected, in addition to wxWidgets, both version `2.8`.
[.programlisting]
....
USE_WX= 2.8
WX_COMPS= wx
WANT_WX= 2.8
.include <bsd.port.pre.mk>
.if defined(WITH_WXPYTHON) || !empty(PORT_OPTIONS:MWXPYTHON) || !empty(HAVE_WX:Mpython)
WX_COMPS+= python
CONFIGURE_ARGS+= --enable-wxpython
.endif
....
====
[[wx-defined-variables]]
=== Defined Variables
These variables are available in the port (after defining one of the variables from <<wx-ver-sel-table>>).
[[wx-widgets-variables]]
.Variables Defined for Ports That Use wxWidgets
[cols="1,1", frame="none", options="header"]
|===
| Name
| Description
|`WX_CONFIG`
|The path to the wxWidgets `wx-config` script (with a version-specific name)
|`WXRC_CMD`
|The path to the wxWidgets `wxrc` program (with a version-specific name)
|`WX_VERSION`
|The wxWidgets version that is going to be used (for example, `2.8`)
|===
[[wx-premk]]
=== Processing in [.filename]#bsd.port.pre.mk#
Define `WX_PREMK` to be able to use the variables right after including [.filename]#bsd.port.pre.mk#.
[IMPORTANT]
====
When `WX_PREMK` is defined, the version, dependencies, components, and defined variables will not change if the wxWidgets port variables are modified _after_ including [.filename]#bsd.port.pre.mk#.
====
[[wx-premk-example]]
.Using wxWidgets Variables in Commands
[example]
====
This fragment illustrates the use of `WX_PREMK` by running the `wx-config` script to obtain the full version string, assign it to a variable and pass it to the program.
[.programlisting]
....
USE_WX= 2.8
WX_PREMK= yes
.include <bsd.port.pre.mk>
.if exists(${WX_CONFIG})
VER_STR!= ${WX_CONFIG} --release
PLIST_SUB+= VERSION="${VER_STR}"
.endif
....
====
[NOTE]
====
The wxWidgets variables can be safely used in commands inside targets without the need for `WX_PREMK`.
====
[[wx-additional-config-args]]
=== Additional `configure` Arguments
Some GNU `configure` scripts cannot find wxWidgets with just the `WX_CONFIG` environment variable set, and require additional arguments. `WX_CONF_ARGS` can be used to provide them.
[[wx-conf-args-values]]
.Legal Values for `WX_CONF_ARGS`
[cols="1,1", frame="none", options="header"]
|===
| Possible value
| Resulting argument
|`absolute`
|`--with-wx-config=${WX_CONFIG}`
|`relative`
|`--with-wx=${LOCALBASE} --with-wx-config=${WX_CONFIG:T}`
|===
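For example, a port whose `configure` script needs the full path to the version-specific script could request the `absolute` form. A minimal sketch:
[.programlisting]
....
USE_WX= 2.8
WX_CONF_ARGS= absolute
....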
[[using-lua]]
== Using Lua
This section describes the status of the Lua libraries in the ports tree and their integration with the ports system.
[[lua-introduction]]
=== Introduction
There are many versions of the Lua libraries and corresponding interpreters, which conflict with each other (they install files under the same name). In the ports tree this problem is solved by installing each version under a different name using version number suffixes.
The obvious disadvantage of this is that each application has to be modified to find the expected version, but this can be solved by passing some additional flags to the compiler and linker.
Applications that use Lua should normally build for just one version. However, loadable modules for Lua are built in a separate flavor for each Lua version that they support, and dependencies on such modules should specify the flavor using the `@${LUA_FLAVOR}` suffix on the port origin.
[[lua-version]]
=== Version Selection
A port using Lua should have a line of this form:
[.programlisting]
....
USES= lua
....
If a specific version of Lua, or range of versions, is needed, it can be specified as a parameter in the form `XY` (which may be used multiple times), `XY+`, `-XY`, or `XY-ZA`. The default version of Lua as set via `DEFAULT_VERSIONS` will be used if it falls in the requested range, otherwise the closest requested version to the default will be used. For example:
[.programlisting]
....
USES= lua:52-53
....
Note that no attempt is made to adjust the version selection based on the presence of any already-installed Lua version.
[NOTE]
====
The `XY+` form of version specification should not be used without careful consideration; the Lua API changes to some extent in every version, and configuration tools like CMake or Autoconf will often fail to work on future versions of Lua until updated to do so.
====
[[lua-version-config]]
=== Configuration and Compiler flags
Software that uses Lua may have been written to auto-detect the Lua version in use. In general, ports should override this detection and force the use of the specific Lua version selected as described above. Depending on the software being ported, this might require any or all of the following (a combined sketch follows the list):
* Using `LUA_VER` as part of a parameter to the software's configuration script via `CONFIGURE_ARGS` or `CONFIGURE_ENV` (or equivalent for other build systems);
* Adding `-I${LUA_INCDIR}`, `-L${LUA_LIBDIR}`, and `-llua-${LUA_VER}` to `CFLAGS`, `LDFLAGS`, `LIBS` respectively as appropriate;
* Patching the software's configuration or build files to select the correct version.
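This is a combined sketch of those approaches for a hypothetical GNU `configure` based port; the `--with-lua` argument is an assumption, as the exact option names depend on the software being ported:
[.programlisting]
....
USES= lua:53
CONFIGURE_ARGS+= --with-lua=${LUA_VER}
CFLAGS+= -I${LUA_INCDIR}
LDFLAGS+= -L${LUA_LIBDIR}
LIBS+= -llua-${LUA_VER}
....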
[[lua-version-flavors]]
=== Version Flavors
A port which installs a Lua module (rather than an application that simply makes use of Lua) should build a separate flavor for each supported Lua version. This is done by adding the `module` parameter:
[.programlisting]
....
USES= lua:module
....
A version number or range of versions can be specified as well; use a comma to separate parameters.
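For example, a hypothetical module that supports only Lua 5.3 and 5.4 could combine the two parameters like this:
[.programlisting]
....
USES= lua:53-54,module
....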
Since each flavor must have a different package name, the variable `LUA_PKGNAMEPREFIX` is provided which will be set to an appropriate value; the intended usage is:
[.programlisting]
....
PKGNAMEPREFIX= ${LUA_PKGNAMEPREFIX}
....
Module ports should normally install files only to `LUA_MODLIBDIR`, `LUA_MODSHAREDIR`, `LUA_DOCSDIR`, and `LUA_EXAMPLESDIR`, all of which are set up to refer to version-specific subdirectories. Installing any other files must be done with care to avoid conflicts between versions.
A port (other than a Lua module) which wishes to build a separate package for each Lua version should use the `flavors` parameter:
[.programlisting]
....
USES= lua:flavors
....
This operates the same way as the `module` parameter described above, but without the assumption that the package should be documented as a Lua module (so `LUA_DOCSDIR` and `LUA_EXAMPLESDIR` are not defined by default). However, the port may choose to define `LUA_DOCSUBDIR` as a suitable subdirectory name (usually the port's `PORTNAME` as long as this does not conflict with the `PORTNAME` of any module), in which case the framework will define both `LUA_DOCSDIR` and `LUA_EXAMPLESDIR`.
As with module ports, a flavored port should avoid installing files that would conflict between versions. Typically this is done by adding `LUA_VER_STR` as a suffix to program names (e.g. using crossref:uses[uses-uniquefiles,`uniquefiles`]), and otherwise using either `LUA_VER` or `LUA_VER_STR` as part of any other files or subdirectories used outside of `LUA_MODLIBDIR` and `LUA_MODSHAREDIR`.
[[lua-defined-variables]]
=== Defined Variables
These variables are available in the port.
[[using-lua-variables-ports]]
.Variables Defined for Ports That Use Lua
[cols="1,1", frame="none", options="header"]
|===
| Name
| Description
|`LUA_VER`
|The Lua version that is going to be used (for example, `5.1`)
|`LUA_VER_STR`
|The Lua version without the dots (for example, `51`)
|`LUA_FLAVOR`
|The flavor name corresponding to the selected Lua version, to be used for specifying dependencies
|`LUA_BASE`
|The prefix that should be used to locate Lua (and components) that are already installed
|`LUA_PREFIX`
|The prefix where Lua (and components) are to be installed by this port
|`LUA_INCDIR`
|The directory where Lua header files are installed
|`LUA_LIBDIR`
|The directory where Lua libraries are installed
|`LUA_REFMODLIBDIR`
|The directory where Lua module libraries ([.filename]#.so#) that are already installed are to be found
|`LUA_REFMODSHAREDIR`
|The directory where Lua modules ([.filename]#.lua#) that are already installed are to be found
|`LUA_MODLIBDIR`
|The directory where Lua module libraries ([.filename]#.so#) are to be installed by this port
|`LUA_MODSHAREDIR`
|The directory where Lua modules ([.filename]#.lua#) are to be installed by this port
|`LUA_PKGNAMEPREFIX`
|The package name prefix used by Lua modules
|`LUA_CMD`
|The name of the Lua interpreter (e.g. `lua53`)
|`LUAC_CMD`
|The name of the Lua compiler (e.g. `luac53`)
|===
These additional variables are available for ports that specified the `module` parameter:
[[using-lua-variables-modules]]
.Variables Defined for Lua Module Ports
[cols="1,1", frame="none", options="header"]
|===
| Name
| Description
|`LUA_DOCSDIR`
|The directory to which the module's documentation should be installed
|`LUA_EXAMPLESDIR`
|The directory to which the module's example files should be installed
|===
[[lua-examples]]
=== Examples
[[lua-app-Makefile]]
.Makefile for an application using Lua
[example]
====
This example shows how to reference a Lua module required at run time. Notice that the reference must specify a flavor.
[.programlisting]
....
PORTNAME= sample
DISTVERSION= 1.2.3
CATEGORIES= whatever
MAINTAINER= john@doe.tld
COMMENT= Sample
RUN_DEPENDS= ${LUA_REFMODLIBDIR}/lpeg.so:devel/lua-lpeg@${LUA_FLAVOR}
USES= lua
.include <bsd.port.mk>
....
====
[[lua-mod-Makefile]]
.Makefile for a simple Lua module
[example]
====
[.programlisting]
....
PORTNAME= sample
DISTVERSION= 1.2.3
CATEGORIES= whatever
PKGNAMEPREFIX= ${LUA_PKGNAMEPREFIX}
MAINTAINER= john@doe.tld
COMMENT= Sample
USES= lua:module
DOCSDIR= ${LUA_DOCSDIR}
.include <bsd.port.mk>
....
====
[[using-iconv]]
== Using `iconv`
FreeBSD has a native `iconv` in the operating system.
For software that needs `iconv`, define `USES=iconv`.
When a port defines `USES=iconv`, these variables will be available:
[.informaltable]
[cols="1,1,1,1", frame="none", options="header"]
|===
| Variable name
| Purpose
| Port iconv (when using WCHAR_T or //TRANSLIT extensions)
| Base iconv
|`ICONV_CMD`
|Directory where the `iconv` binary resides
|`${LOCALBASE}/bin/iconv`
|[.filename]#/usr/bin/iconv#
|`ICONV_LIB`
|`ld` argument to link to [.filename]#libiconv# (if needed)
|`-liconv`
|(empty)
|`ICONV_PREFIX`
|Directory where the `iconv` implementation resides (useful for configure scripts)
|`${LOCALBASE}`
|[.filename]#/usr#
|`ICONV_CONFIGURE_ARG`
|Preconstructed configure argument for configure scripts
|`--with-libiconv-prefix=${LOCALBASE}`
|(empty)
|`ICONV_CONFIGURE_BASE`
|Preconstructed configure argument for configure scripts
|`--with-libiconv=${LOCALBASE}`
|(empty)
|===
These two examples automatically populate the variables with the correct value for systems using package:converters/libiconv[] or the native `iconv` respectively:
[[iconv-simple-use]]
.Simple `iconv` Usage
[example]
====
[.programlisting]
....
USES= iconv
LDFLAGS+= -L${LOCALBASE}/lib ${ICONV_LIB}
....
====
[[iconv-configure-use]]
.`iconv` Usage with `configure`
[example]
====
[.programlisting]
....
USES= iconv
CONFIGURE_ARGS+=${ICONV_CONFIGURE_ARG}
....
====
As shown above, `ICONV_LIB` is empty when a native `iconv` is present. This can be used to detect the native `iconv` and respond appropriately.
Sometimes a program has an `ld` argument or search path hardcoded in a [.filename]#Makefile# or configure script. This approach can be used to solve that problem:
[[iconv-reinplace]]
.Fixing Hardcoded `-liconv`
[example]
====
[.programlisting]
....
USES= iconv
post-patch:
@${REINPLACE_CMD} -e 's/-liconv/${ICONV_LIB}/' ${WRKSRC}/Makefile
....
====
In some cases it is necessary to set alternate values or perform operations depending on whether there is a native `iconv`. [.filename]#bsd.port.pre.mk# must be included before testing the value of `ICONV_LIB`:
[[iconv-conditional]]
.Checking for Native `iconv` Availability
[example]
====
[.programlisting]
....
USES= iconv
.include <bsd.port.pre.mk>
post-patch:
.if empty(ICONV_LIB)
# native iconv detected
@${REINPLACE_CMD} -e 's|iconv||' ${WRKSRC}/Config.sh
.endif
.include <bsd.port.post.mk>
....
====
[[using-xfce]]
== Using Xfce
Ports that need Xfce libraries or applications set `USES=xfce`.
Specific Xfce library and application dependencies are set with values assigned to `USE_XFCE`. They are defined in [.filename]#/usr/ports/Mk/Uses/xfce.mk#. The possible values are:
.Values of `USE_XFCE`
garcon::
package:sysutils/garcon[]
libexo::
package:x11/libexo[]
libgui::
package:x11-toolkits/libxfce4gui[]
libmenu::
package:x11/libxfce4menu[]
libutil::
package:x11/libxfce4util[]
panel::
package:x11-wm/xfce4-panel[]
thunar::
package:x11-fm/thunar[]
xfconf::
package:x11/xfce4-conf[]
[[use-xfce]]
.`USES=xfce` Example
[example]
====
[.programlisting]
....
USES= xfce
USE_XFCE= libmenu
....
====
[[use-xfce-gtk2]]
.Using Xfce's Own GTK2 Widgets
[example]
====
In this example, the ported application uses the GTK2-specific widgets package:x11/libxfce4menu[] and package:x11/xfce4-conf[].
[.programlisting]
....
USES= xfce:gtk2
USE_XFCE= libmenu xfconf
....
====
[TIP]
====
Xfce components included this way will automatically include any dependencies they need. It is no longer necessary to specify the entire list. If the port only needs package:x11-wm/xfce4-panel[], use:
[.programlisting]
....
USES= xfce
USE_XFCE= panel
....
There is no need to list the components that package:x11-wm/xfce4-panel[] itself needs, like this:
[.programlisting]
....
USES= xfce
USE_XFCE= libexo libmenu libutil panel
....
However, Xfce components and non-Xfce dependencies of the port must be included explicitly. Do not count on an Xfce component to provide a sub-dependency other than itself for the main port.
====
[[using-databases]]
== Using Databases
Use one of the `USES` macros from <<using-databases-uses>> to add a dependency on a database.
[[using-databases-uses]]
.Database `USES` Macros
[cols="1,1", frame="none", options="header"]
|===
| Database
| USES Macro
|Berkeley DB
|crossref:uses[uses-bdb,`bdb`]
|MariaDB, MySQL, Percona
|crossref:uses[uses-mysql,`mysql`]
|PostgreSQL
|crossref:uses[uses-pgsql,`pgsql`]
|SQLite
|crossref:uses[uses-sqlite,`sqlite`]
|===
[[using-databases-bdb-ex1]]
.Using Berkeley DB 6
[example]
====
[.programlisting]
....
USES= bdb:6
....
See crossref:uses[uses-bdb,`bdb`] for more information.
====
[[using-databases-mysql-ex1]]
.Using MySQL
[example]
====
When a port needs the MySQL client library, add
[.programlisting]
....
USES= mysql
....
See crossref:uses[uses-mysql,`mysql`] for more information.
====
[[using-databases-pgsql-ex1]]
.Using PostgreSQL
[example]
====
When a port needs the PostgreSQL server version 9.6 or later, add
[.programlisting]
....
USES= pgsql:9.6+
WANT_PGSQL= server
....
See crossref:uses[uses-pgsql,`pgsql`] for more information.
====
[[using-databases-sqlite-ex1]]
.Using SQLite 3
[example]
====
[.programlisting]
....
USES= sqlite:3
....
See crossref:uses[uses-sqlite,`sqlite`] for more information.
====
[[rc-scripts]]
== Starting and Stopping Services (`rc` Scripts)
[.filename]#rc.d# scripts are used to start services on system startup, and to give administrators a standard way of stopping, starting and restarting the service. Ports integrate into the system [.filename]#rc.d# framework. Details on its usage can be found in link:{handbook}#configtuning-rcd/[the rc.d Handbook chapter]. Detailed explanation of the available commands is provided in man:rc[8] and man:rc.subr[8]. Finally, there is link:{rc-scripting}[an article] on practical aspects of [.filename]#rc.d# scripting.
Suppose a mythical port called _doorman_ needs to start a _doormand_ daemon. Add the following to the [.filename]#Makefile#:
[.programlisting]
....
USE_RC_SUBR= doormand
....
Multiple scripts may be listed and will be installed. Scripts must be placed in the [.filename]#files# subdirectory and a `.in` suffix must be added to their filename. Standard `SUB_LIST` expansions will be run against this file. Use of the `%%PREFIX%%` and `%%LOCALBASE%%` expansions is strongly encouraged as well. More on `SUB_LIST` in crossref:pkg-files[using-sub-files,the relevant section].
As of FreeBSD 6.1-RELEASE, local [.filename]#rc.d# scripts (including those installed by ports) are included in the overall man:rcorder[8] of the base system.
An example simple [.filename]#rc.d# script to start the doormand daemon:
[.programlisting]
....
#!/bin/sh
# $FreeBSD$
#
# PROVIDE: doormand
# REQUIRE: LOGIN
# KEYWORD: shutdown
#
# Add these lines to /etc/rc.conf.local or /etc/rc.conf
# to enable this service:
#
# doormand_enable (bool): Set to NO by default.
# Set it to YES to enable doormand.
# doormand_config (path): Set to %%PREFIX%%/etc/doormand/doormand.cf
# by default.
. /etc/rc.subr
name=doormand
rcvar=doormand_enable
load_rc_config $name
: ${doormand_enable:="NO"}
: ${doormand_config="%%PREFIX%%/etc/doormand/doormand.cf"}
command=%%PREFIX%%/sbin/${name}
pidfile=/var/run/${name}.pid
command_args="-p $pidfile -f $doormand_config"
run_rc_command "$1"
....
Unless there is a very good reason to start the service earlier, or it runs as a particular user (other than root), all ports scripts must use:
[.programlisting]
....
REQUIRE: LOGIN
....
If the startup script launches a daemon that must be shutdown, the following will trigger a stop of the service on system shutdown:
[.programlisting]
....
KEYWORD: shutdown
....
If the script is not starting a persistent service this is not necessary.
For optional configuration elements the "=" style of default variable assignment is preferable to the ":=" style here, since the former sets a default value only if the variable is unset, and the latter sets one if the variable is unset _or_ null. A user might very well include something like:
[.programlisting]
....
doormand_flags=""
....
in their [.filename]#rc.conf.local#, and a variable substitution using ":=" would inappropriately override the user's intention. The `_enable` variable is not optional, and must use the ":=" form for the default.
[IMPORTANT]
====
Ports _must not_ start and stop their services when installing and deinstalling. Do not abuse the [.filename]#plist# keywords described in crossref:plist[plist-keywords-base-exec,`@preexec command,@postexec command,@preunexec command,@postunexec command`] by running commands that modify the currently running system, including starting or stopping services.
====
[[rc-scripts-checklist]]
=== Pre-Commit Checklist
Before contributing a port with an [.filename]#rc.d# script, and more importantly, before committing one, please consult this checklist to be sure that it is ready.
The package:devel/rclint[] port can check for most of these, but it is not a substitute for proper review.
[.procedure]
. If this is a new file, does it have a [.filename]#.sh# extension? If so, that must be changed to just [.filename]#file.in# since [.filename]#rc.d# files may not end with that extension.
. Does the file have a `$FreeBSD$` tag?
. Do the name of the file (minus [.filename]#.in#), the `PROVIDE` line, and `$` _name_ all match? The file name matching `PROVIDE` makes debugging easier, especially for man:rcorder[8] issues. Matching the file name and `$`_name_ makes it easier to figure out which variables are relevant in [.filename]#rc.conf[.local]#. It is also a policy for all new scripts, including those in the base system.
. Is the `REQUIRE` line set to `LOGIN`? This is mandatory for scripts that run as a non-root user. If it runs as root, is there a good reason for it to run prior to `LOGIN`? If not, it must run after so that local scripts can be loosely grouped to a point in man:rcorder[8] after most everything in the base system is already running.
. Does the script start a persistent service? If so, it must have `KEYWORD: shutdown`.
. Make sure there is no `KEYWORD: FreeBSD` present. This has not been necessary nor desirable for years. It is also an indication that the new script was copy/pasted from an old script, so extra caution must be given to the review.
. If the script uses an interpreted language like `perl`, `python`, or `ruby`, make certain that `command_interpreter` is set appropriately, for example, for Perl, by adding `PERL=${PERL}` to `SUB_LIST` and using `%%PERL%%`. Otherwise,
+
[source,shell]
....
# service name stop
....
+
will probably not work properly. See man:service[8] for more information.
. Have all occurrences of [.filename]#/usr/local# been replaced with `%%PREFIX%%`?
. Do the default variable assignments come after `load_rc_config`?
. Are there default assignments to empty strings? They should be removed, but double-check that the option is documented in the comments at the top of the file.
. Are things that are set in variables actually used in the script?
. Are options listed in the default _name_`_flags` things that are actually mandatory? If so, they must be in `command_args`. `-d` is a red flag (pardon the pun) here, since it is usually the option to "daemonize" the process, and therefore is actually mandatory.
. _name_`_flags` must never be included in `command_args` (and vice versa, although that error is less common).
. Does the script execute any code unconditionally? This is frowned on. Usually these things must be dealt with through a `start_precmd`.
. All boolean tests must use the `checkyesno` function. No hand-rolled tests for `[Yy][Ee][Ss]`, etc.
. If there is a loop (for example, waiting for something to start) does it have a counter to terminate the loop? We do not want the boot to be stuck forever if there is an error.
. Does the script create files or directories that need specific permissions, for example, a [.filename]#pid# that needs to be owned by the user that runs the process? Rather than the traditional man:touch[1]/man:chown[8]/man:chmod[1] routine, consider using man:install[1] with the proper command line arguments to do the whole procedure with one step.
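Continuing the _doormand_ example from above, a [.filename]#pid# directory owned by a dedicated user could be prepared with a single man:install[1] invocation from a `start_precmd`. This is only a sketch, and the user, group, and directory are assumptions:
[.programlisting]
....
start_precmd="doormand_prestart"

doormand_prestart()
{
	install -d -o doormand -g doormand -m 0750 /var/run/doormand
}
....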
[[users-and-groups]]
== Adding Users and Groups
Some ports require a particular user account to be present, usually for daemons that run as that user. For these ports, choose a _unique_ UID from 50 to 999 and register it in [.filename]#ports/UIDs# (for users) and [.filename]#ports/GIDs# (for groups). The unique identification should be the same for users and groups.
Please include a patch against these two files when requiring a new user or group to be created for the port.
Then use `USERS` and `GROUPS` in [.filename]#Makefile#, and the user will be automatically created when installing the port.
[.programlisting]
....
USERS= pulse
GROUPS= pulse pulse-access pulse-rt
....
The current list of reserved UIDs and GIDs can be found in [.filename]#ports/UIDs# and [.filename]#ports/GIDs#.
[[requiring-kernel-sources]]
== Ports That Rely on Kernel Sources
Some ports (such as kernel loadable modules) need the kernel source files so that the port can compile. Here is the correct way to determine if the user has them installed:
[.programlisting]
....
USES= kmod
....
Apart from this check, the `kmod` feature takes care of most items that these ports need to take into account.
[[go-libs]]
== Go Libraries
Ports must not package or install Go libraries or source code. Go ports must fetch the required dependencies at the normal fetch time, and should only install the programs and files users need, not the files Go developers would need.
Ports should (in order of preference):
* Use vendored dependencies included with the package source.
* Fetch the versions of dependencies specified by upstream (in go.mod, vendor.json, or similar), as sketched after this list.
* As a last resort (dependencies are neither included nor pinned to exact versions), fetch the versions of dependencies available at the time of upstream development/release.
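As an illustration of the second approach, a port for software that ships a [.filename]#go.mod# might let the `go` framework fetch the pinned module dependencies at fetch time. This is a hedged sketch: the module path is hypothetical, and whether `USES=go:modules` with `GO_MODULE` fits a given port should be verified against [.filename]#Mk/Uses/go.mk#.
[.programlisting]
....
PORTNAME= sample
DISTVERSION= 1.2.3
CATEGORIES= whatever

MAINTAINER= john@doe.tld
COMMENT= Sample

USES= go:modules
GO_MODULE= github.com/example/sample

.include <bsd.port.mk>
....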
[[haskell-libs]]
== Haskell Libraries
Just as with Go, ports must not package or install Haskell libraries. Haskell ports must link statically to their dependencies and fetch all distribution files during the fetch stage.
[[shell-completion]]
== Shell Completion Files
Many modern shells (including bash, fish, tcsh and zsh) support parameter and/or option tab-completion. This support usually comes from completion files, which contain the definitions for how tab completion will work for a certain command. Ports sometimes ship with their own completion files, or porters may have created them themselves.
When available, completion files should always be installed. It is not necessary to add a port option for them. If an option is used, though, always enable it in `OPTIONS_DEFAULT`.
[[shell-completion-paths]]
.Shell Completion File Paths
[cols="1,1", frame="none"]
|===
|`bash`
|[.filename]#${PREFIX}/etc/bash_completion.d#
|`fish`
|[.filename]#${PREFIX}/share/fish/vendor_completions.d#
|`zsh`
|[.filename]#${PREFIX}/share/zsh/site-functions#
|===
Do not register any dependencies on the shells themselves.
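As an illustration, a port that ships a fish completion file could install it into the corresponding directory from `post-install`; the source file path under `WRKSRC` is hypothetical:
[.programlisting]
....
post-install:
	${MKDIR} ${STAGEDIR}${PREFIX}/share/fish/vendor_completions.d
	${INSTALL_DATA} ${WRKSRC}/contrib/sample.fish \
		${STAGEDIR}${PREFIX}/share/fish/vendor_completions.d
....
The installed file must also be listed in [.filename]#pkg-plist#.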
diff --git a/documentation/content/en/books/porters-handbook/testing/_index.adoc b/documentation/content/en/books/porters-handbook/testing/_index.adoc
index c23adf3a17..eb569e32f6 100644
--- a/documentation/content/en/books/porters-handbook/testing/_index.adoc
+++ b/documentation/content/en/books/porters-handbook/testing/_index.adoc
@@ -1,533 +1,534 @@
---
title: Chapter 10. Testing the Port
prev: books/porters-handbook/pkg-files
next: books/porters-handbook/upgrading
+description: Testing a FreeBSD Port
---
[[testing]]
= Testing the Port
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 10
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[make-describe]]
== Running `make describe`
Several of the FreeBSD port maintenance tools, such as man:portupgrade[1], rely on a database called [.filename]#/usr/ports/INDEX# which keeps track of such items as port dependencies. [.filename]#INDEX# is created by the top-level [.filename]#ports/Makefile# via `make index`, which descends into each port subdirectory and executes `make describe` there. Thus, if `make describe` fails in any port, no one can generate [.filename]#INDEX#, and many people will quickly become unhappy.
[NOTE]
====
It is important to be able to generate this file no matter what options are present in [.filename]#make.conf#, so please avoid doing things such as using `.error` statements when (for instance) a dependency is not satisfied. (See crossref:porting-dads[dads-dot-error,Avoid Use of the `.error` Construct].)
====
If `make describe` produces a string rather than an error message, everything is probably safe. See [.filename]#bsd.port.mk# for the meaning of the string produced.
Also note that running a recent version of `portlint` (as specified in the next section) will cause `make describe` to be run automatically.
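To check a single port by hand, run `make describe` in the port's directory and confirm that it prints a description line instead of an error. The port path here is only an example:
[source,shell]
....
% cd /usr/ports/net/csup && make describe
....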
[[testing-portclippy]]
== Portclippy / Portfmt
Those tools come from package:ports-mgmt/portfmt[].
Portclippy is a linter that checks if variables in the [.filename]#Makefile# are in the correct order according to crossref:order[porting-order,Order of Variables in Port Makefiles].
Portfmt is a tool for automatically formatting [.filename]#Makefile#.
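Both tools are normally pointed at the port's [.filename]#Makefile#. The invocations below are a sketch only; check each tool's manual page for the authoritative options:
[source,shell]
....
% cd /usr/ports/net/csup
% portclippy Makefile
% portfmt Makefile
....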
[[testing-portlint]]
== Portlint
Do check the port with crossref:quick-porting[porting-portlint,`portlint`] before submitting or committing it. `portlint` warns about many common errors, both functional and stylistic. For a new (or repocopied) port, `portlint -A` is the most thorough; for an existing port, `portlint -C` is sufficient.
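For example, to run the most thorough set of checks on a new port (the path is only illustrative):
[source,shell]
....
% cd /usr/ports/net/csup && portlint -A
....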
Since `portlint` uses heuristics to try to figure out errors, it can produce false positive warnings. In addition, occasionally something that is flagged as a problem really cannot be done in any other way due to limitations in the ports framework. When in doubt, the best thing to do is ask on {freebsd-ports}.
[[testing-porttools]]
== Port Tools
The package:ports-mgmt/porttools[] program is part of the Ports Collection.
`port` is the front-end script, which can help simplify the testing job. Whenever a new port or an update to an existing one needs testing, use `port test` to test the port, including the <<testing-portlint,`portlint`>> checking. This command also detects and lists any files that are not listed in [.filename]#pkg-plist#. For example:
[source,shell]
....
# port test /usr/ports/net/csup
....
[[porting-prefix]]
== `PREFIX` and `DESTDIR`
`PREFIX` determines where the port will be installed. It defaults to [.filename]#/usr/local#, but can be set by the user to a custom path like [.filename]#/opt#. The port must respect the value of this variable.
`DESTDIR`, if set by the user, determines the complete alternative environment, usually a jail or an installed system mounted somewhere other than [.filename]#/#. A port will actually install into [.filename]#DESTDIR/PREFIX#, and register with the package database in [.filename]#DESTDIR/var/db/pkg#. `DESTDIR` is handled automatically by the ports infrastructure with man:chroot[8]. There is no need for modifications or any extra care to write `DESTDIR`-compliant ports.
The value of `PREFIX` will be set to `LOCALBASE` (defaulting to [.filename]#/usr/local#). If `USE_LINUX_PREFIX` is set, `PREFIX` will be `LINUXBASE` (defaulting to [.filename]#/compat/linux#).
Avoiding hard-coded [.filename]#/usr/local# paths in the source makes the port much more flexible and able to cater to the needs of other sites. Often, this can be accomplished by replacing occurrences of [.filename]#/usr/local# in the port's various [.filename]##Makefile##s with `${PREFIX}`. This variable is automatically passed down to every stage of the build and install processes.
Make sure the application is not installing things in [.filename]#/usr/local# instead of `PREFIX`. A quick test for such hard-coded paths is:
[source,shell]
....
% make clean; make package PREFIX=/var/tmp/`make -V PORTNAME`
....
If anything is installed outside of `PREFIX`, the package creation process will complain that it cannot find the files.
In addition, it is worth checking the same with the stage directory support (see crossref:special[staging,Staging]):
[source,shell]
....
% make stage && make check-plist && make stage-qa && make package
....
* `check-plist` checks for files missing from the plist, and files in the plist that are not installed by the port.
* `stage-qa` checks for common problems like bad shebangs, symlinks pointing outside the stage directory, setuid files, and non-stripped libraries.
These tests will not find hard-coded paths inside the port's files, nor will they verify that `LOCALBASE` is being used to correctly refer to files from other ports. The temporarily-installed port in [.filename]#/var/tmp/`make -V PORTNAME`# must be tested for proper operation to make sure there are no problems with paths.
`PREFIX` must not be set explicitly in a port's [.filename]#Makefile#. Users installing the port may have set `PREFIX` to a custom location, and the port must respect that setting.
Refer to programs and files from other ports with the variables mentioned above, not explicit pathnames. For instance, if the port requires a macro `PAGER` to have the full pathname of `less`, do not use a literal path of [.filename]#/usr/local/bin/less#. Instead, use `${LOCALBASE}`:
[.programlisting]
....
-DPAGER=\"${LOCALBASE}/bin/less\"
....
The path with `LOCALBASE` is more likely to still work if the system administrator has moved the whole [.filename]#/usr/local# tree somewhere else.
[TIP]
====
All these tests are done automatically when running `poudriere testport` or `poudriere bulk -t`. It is highly recommended that every ports contributor install and test their ports with it. See <<testing-poudriere>> for more information.
====
[[testing-poudriere]]
== Poudriere
For a ports contributor, Poudriere is one of the most important and helpful testing and build tools. Its main features include:
* Bulk building of the entire ports tree, specific subsets of the ports tree, or a single port including its dependencies
* Automatic packaging of build results
* Generation of build log files per port
* Providing a signed man:pkg[8] repository
* Testing of port builds before submitting a patch to the FreeBSD bug tracker or committing to the ports tree
* Testing for successful ports builds using different options
Because Poudriere performs its building in a clean man:jail[8] environment and uses man:zfs[8] features, it has several advantages over traditional testing on the host system:
* No pollution of the host environment: No leftover files, no accidental removals, no changes of existing configuration files.
* Verify [.filename]#pkg-plist# for missing or superfluous entries
* Ports committers sometimes ask for a Poudriere log alongside a patch submission to assess whether the patch is ready for integration into the ports tree
It is also quite straightforward to set up and use, has no dependencies, and will run on any supported FreeBSD release. This section shows how to install, configure, and run Poudriere as part of the normal workflow of a ports contributor.
The examples in this section show a default file layout, as standard in FreeBSD. Substitute any local changes accordingly. The ports tree, represented by `${PORTSDIR}`, is located in [.filename]#/usr/ports#. Both `${LOCALBASE}` and `${PREFIX}` are [.filename]#/usr/local# by default.
[[testing-poudriere-installing]]
=== Installing Poudriere
Poudriere is available in the ports tree in package:ports-mgmt/poudriere[]. It can be installed using man:pkg[8] or from ports:
[source,shell]
....
# pkg install poudriere
....
or
[source,shell]
....
# make -C /usr/ports/ports-mgmt/poudriere install clean
....
There is also a work-in-progress version of Poudriere which will eventually become the next release. It is available in package:ports-mgmt/poudriere-devel[]. This development version is used for the official FreeBSD package builds, so it is well tested, and it often has interesting new features. A ports committer will want to use the development version because it is what is used in production and has all the new features that help make sure everything is exactly right. A contributor will not necessarily need those, as the most important fixes are backported to the released version. The main reason the development version is used for the official package builds is speed: it can shorten a full build from 18 hours to 17 hours on a high-end 32-CPU server with 128 GB of RAM. Those optimizations do not matter much when building ports on a desktop machine.
[[testing-poudriere-setup]]
=== Setting Up Poudriere
The port installs a default configuration file, [.filename]#/usr/local/etc/poudriere.conf#. Each parameter is documented in the configuration file and in man:poudriere[8]. Here is a minimal example config file:
[.programlisting]
....
ZPOOL=tank
ZROOTFS=/poudriere
BASEFS=/poudriere
DISTFILES_CACHE=/usr/ports/distfiles
RESOLV_CONF=/etc/resolv.conf
FREEBSD_HOST=ftp://ftp.freebsd.org
SVN_HOST=svn.FreeBSD.org
....
`ZPOOL`::
The name of the ZFS storage pool which Poudriere shall use. Must be listed in the output of `zpool status`.
`ZROOTFS`::
The root of Poudriere-managed file systems. This entry will cause Poudriere to create man:zfs[8] file systems under `tank/poudriere`.
`BASEFS`::
The root mount point for Poudriere file systems. This entry will cause Poudriere to mount `tank/poudriere` to `/poudriere`.
`DISTFILES_CACHE`::
Defines where distfiles are stored. In this example, Poudriere and the host share the distfiles storage directory. This avoids downloading tarballs which are already present on the system. Please create this directory if it does not already exist so that Poudriere can find it.
`RESOLV_CONF`::
Use the host [.filename]#/etc/resolv.conf# inside jails for DNS. This is needed so jails can resolve the URLs of distfiles when downloading. It is not needed when using a proxy. Refer to the default configuration file for proxy configuration.
`FREEBSD_HOST`::
The FTP/HTTP server to use when the jails are installed from FreeBSD releases and updated with man:freebsd-update[8]. Choose a server location which is close, for example if the machine is located in Australia, use `ftp.au.freebsd.org`.
`SVN_HOST`::
The server from where jails are installed and updated when using Subversion. Also used for ports tree when not using man:portsnap[8]. Again, choose a nearby location. A list of official Subversion mirrors can be found in the link:{handbook}#svn-mirrors[FreeBSD Handbook Subversion section].
[[testing-poudriere-create-jails]]
=== Creating Poudriere Jails
Create the base jails which Poudriere will use for building:
[source,shell]
....
# poudriere jail -c -j 114Ramd64 -v 11.4-RELEASE -a amd64
....
This command fetches an `11.4-RELEASE` distribution for `amd64` from the FTP server given by `FREEBSD_HOST` in [.filename]#poudriere.conf#, creates the ZFS file system `tank/poudriere/jails/114Ramd64`, mounts it on [.filename]#/poudriere/jails/114Ramd64#, and extracts the `11.4-RELEASE` tarballs into this file system.
[source,shell]
....
# poudriere jail -c -j 11i386 -v stable/11 -a i386 -m git+https
....
This command creates `tank/poudriere/jails/11i386`, mounts it on [.filename]#/poudriere/jails/11i386#, checks out the tip of the FreeBSD `stable/11` branch into [.filename]#/poudriere/jails/11i386/usr/src#, then completes a `buildworld` and installs it into [.filename]#/poudriere/jails/11i386#.
[TIP]
====
If a specific revision is needed, append it to the version string. For example:
[source,shell]
....
# poudriere jail -c -j 11i386 -v stable/11@123456 -a i386 -m git+https
....
====
[NOTE]
====
While it is possible to build a newer version of FreeBSD on an older version, most of the time it will not run. For example, if a `stable/11` jail is needed, the host will have to run `stable/11` too. Running `11.4-RELEASE` is not enough.
====
[NOTE]
====
To create a Poudriere jail for `14.0-CURRENT`:
[source,shell]
....
# poudriere jail -c -j 14amd64 -v main -a amd64 -m git+https
....
In order to run a `14.0-CURRENT` Poudriere jail you must be running `14.0-CURRENT`. In general, newer kernels can build and run older jails. For instance, a `14.0-CURRENT` kernel can build and run an `11.4-STABLE` Poudriere jail if the `COMPAT_FREEBSD11` kernel option was compiled in (on by default in the `14.0-CURRENT` [.filename]#GENERIC# kernel config).
====
[CAUTION]
====
The default `svn` protocol works but is not very secure. Using `svn+https` along with verifying the remote server's SSL fingerprint is advised. It will ensure that the files used for building the jail are from a trusted source.
====
A list of jails currently known to Poudriere can be shown with `poudriere jail -l`:
[source,shell]
....
# poudriere jail -l
JAILNAME VERSION ARCH METHOD
114Ramd64 11.4-RELEASE amd64 ftp
11i386 11.4-STABLE i386 svn+https
....
[[testing-poudriere-maintaining-jails]]
=== Keeping Poudriere Jails Updated
Managing updates is very straightforward. The command:
[source,shell]
....
# poudriere jail -u -j JAILNAME
....
updates the specified jail to the latest version available. For FreeBSD releases, update to the latest patchlevel with man:freebsd-update[8]. For FreeBSD versions built from source, update to the latest Subversion revision in the branch.
[TIP]
====
For jails employing a `git+*` method, it is helpful to add `-J _NumberOfParallelBuildJobs_` to speed up the build by increasing the number of parallel compile jobs used. For example, if the building machine has 6 CPUs, use:
[source,shell]
....
# poudriere jail -u -J 6 -j JAILNAME
....
====
[[testing-poudriere-ports-tree]]
=== Setting Up Ports Trees for Use with Poudriere
There are multiple ways to use ports trees in Poudriere. The most straightforward way is to have Poudriere create a default ports tree for itself, using either man:portsnap[8] (if running FreeBSD {rel121-current} or {rel114-current}) or Git (if running FreeBSD-CURRENT):
[source,shell]
....
# poudriere ports -c -m portsnap
....
or
[source,shell]
....
# poudriere ports -c -m git+https -B main
....
These commands create `tank/poudriere/ports/default`, mount it on [.filename]#/poudriere/ports/default#, and populate it using Git, man:portsnap[8], or Subversion. Afterward it is included in the list of known ports trees:
[source,shell]
....
# poudriere ports -l
PORTSTREE METHOD TIMESTAMP PATH
default git+https 2020-07-20 04:23:56 /poudriere/ports/default
....
[NOTE]
====
Note that the "default" ports tree is special. Each of the build commands explained later will implicitly use this ports tree unless specifically specified otherwise. To use another tree, add `-p _treename_` to the commands.
====
While useful for regular bulk builds, having this default ports tree with the man:portsnap[8] method may not be the best way to deal with local modifications for a ports contributor. As with the creation of jails, it is possible to use a different method for creating the ports tree. To add an additional ports tree for testing local modifications and ports development, checking out the tree via Git (as described above) is preferable.
[NOTE]
====
The http and https methods need package:devel/subversion[] built with the `SERF` option enabled. It is enabled by default.
====
[TIP]
====
The `svn` method allows extra qualifiers to tell Subversion exactly how to fetch data. This is explained in man:poudriere[8]. For instance, `poudriere ports -c -m svn+ssh -p subversive` uses SSH for the checkout.
====
[[testing-poudriere-ports-tree-manual]]
=== Using Manually Managed Ports Trees with Poudriere
Depending on the workflow, it can be extremely helpful to use ports trees which are maintained manually. For instance, if there is a local copy of the ports tree in [.filename]#/work/ports#, point Poudriere to the location:
* For Poudriere older than version 3.1.20:
+
[source,shell]
....
# poudriere ports -c -F -f none -M /work/ports -p development
....
* For Poudriere version 3.1.20 and later:
+
[source,shell]
....
# poudriere ports -c -m null -M /work/ports -p development
....
This will be listed in the table of known trees:
[source,shell]
....
# poudriere ports -l
PORTSTREE METHOD TIMESTAMP PATH
development null 2020-07-20 05:06:33 /work/ports
....
[NOTE]
====
The dash or `null` in the `METHOD` column means that Poudriere will not update or change this ports tree, ever. It is completely up to the user to maintain this tree, including all local modifications that may be used for testing new ports and submitting patches.
====
[[testing-poudriere-ports-tree-updating]]
=== Keeping Poudriere Ports Trees Updated
Keeping ports trees updated is as straightforward as with the jails described earlier:
[source,shell]
....
# poudriere ports -u -p PORTSTREE
....
This command updates the given _PORTSTREE_, one of the trees listed by `poudriere ports -l`, to the latest revision available on the official servers.
[NOTE]
====
Ports trees without a method (see <<testing-poudriere-ports-tree-manual>>) cannot be updated like this. They must be updated manually by the porter.
====
[[testing-poudriere-testing-ports]]
=== Testing Ports
After jails and ports trees have been set up, the result of a contributor's modifications to the ports tree can be tested.
For example, local modifications to the package:www/firefox[] port located in [.filename]#/work/ports/www/firefox# can be tested in the previously created 11.4-RELEASE jail:
[source,shell]
....
# poudriere testport -j 114Ramd64 -p development -o www/firefox
....
This will build all dependencies of Firefox. If a dependency has been built previously and is still up-to-date, the pre-built package is installed. If a dependency has no up-to-date package, one will be built with default options in a jail. Then Firefox itself is built.
The complete build of every port is logged to [.filename]#/poudriere/data/logs/bulk/114Ramd64-development/build-time/logs#.
The directory name `114Ramd64-development` is derived from the arguments to `-j` and `-p`, respectively. For convenience, a symbolic link [.filename]#/poudriere/data/logs/bulk/114Ramd64-development/latest# is also maintained. The link points to the latest _build-time_ directory. Also in this directory is an [.filename]#index.html# for observing the build process with a web browser.
By default, Poudriere cleans up the jails and leaves log files in the directories mentioned above. To ease investigation, jails can be kept running after the build by adding `-i` to `testport`:
[source,shell]
....
# poudriere testport -j 114Ramd64 -p development -i -o www/firefox
....
After the build completes, and regardless of whether it was successful, a shell is provided within the jail. The shell is used to investigate further. Poudriere can be told to leave the jail running after the build finishes with `-I`. Poudriere will show the command to run when the jail is no longer needed. It is then possible to man:jexec[8] into it:
[source,shell]
....
# poudriere testport -j 114Ramd64 -p development -I -o www/firefox
[...]
====>> Installing local Pkg repository to /usr/local/etc/pkg/repos
====>> Leaving jail 114Ramd64-development-n running, mounted at /poudriere/data/.m/114Ramd64-development/ref for interactive run testing
====>> To enter jail: jexec 114Ramd64-development-n env -i TERM=$TERM /usr/bin/login -fp root
====>> To stop jail: poudriere jail -k -j 114Ramd64 -p development
# jexec 114Ramd64-development-n env -i TERM=$TERM /usr/bin/login -fp root
# [do some stuff in the jail]
# exit
# poudriere jail -k -j 114Ramd64 -p development
====>> Umounting file systems
....
An integral part of the FreeBSD ports build infrastructure is the ability to tweak ports to personal preferences with options. These can be tested with Poudriere as well, by adding `-c` to the command:
[source,shell]
....
# poudriere testport -c -o www/firefox
....
This presents the port configuration dialog before the port is built. The ports given after `-o` in the format `_category_/_portname_` will use the specified options; all dependencies will use the default options. Testing dependent ports with non-default options can be accomplished using sets, see <<testing-poudriere-sets>>.
[TIP]
====
When testing ports where [.filename]#pkg-plist# is altered during build depending on the selected options, it is recommended to perform a test run with all options selected _and_ one with all options deselected.
====
[[testing-poudriere-sets]]
=== Using Sets
For all actions involving builds, a so-called _set_ can be specified using `-z _setname_`. A set refers to a fully independent build. This allows, for instance, usage of `testport` with non-standard options for the dependent ports.
To use sets, Poudriere expects an existing directory structure similar to `PORT_DBDIR` (which defaults to [.filename]#/var/db/ports#) in its configuration directory. This directory is then man:nullfs[5]-mounted into the jails where the ports and their dependencies are built. Usually a suitable starting point can be obtained by recursively copying the existing `PORT_DBDIR` to [.filename]#/usr/local/etc/poudriere.d/jailname-portname-setname-options#. This is described in detail in man:poudriere[8]. For instance, to test package:www/firefox[] in a specific set named `devset`, add the `-z devset` parameter to the `testport` command:
[source,shell]
....
# poudriere testport -j 114Ramd64 -p development -z devset -o www/firefox
....
This will look for the existence of these directories in this order:
* [.filename]#/usr/local/etc/poudriere.d/114Ramd64-development-devset-options#
* [.filename]#/usr/local/etc/poudriere.d/114Ramd64-devset-options#
* [.filename]#/usr/local/etc/poudriere.d/114Ramd64-development-options#
* [.filename]#/usr/local/etc/poudriere.d/devset-options#
* [.filename]#/usr/local/etc/poudriere.d/development-options#
* [.filename]#/usr/local/etc/poudriere.d/114Ramd64-options#
* [.filename]#/usr/local/etc/poudriere.d/options#
From this list, Poudriere man:nullfs[5]-mounts the _first existing_ directory tree into the [.filename]#/var/db/ports# directory of the build jails. Hence, all custom options are used for all the ports during this run of `testport`.
After the directory structure for a set is provided, the options for a particular port can be altered. For example:
[source,shell]
....
# poudriere options -c www/firefox -z devset
....
The configuration dialog for package:www/firefox[] is shown, and options can be edited. The selected options are saved to the `devset` set.
[NOTE]
====
Poudriere is very flexible in the option configuration. They can be set for particular jails, ports trees, and for multiple ports by one command. Refer to man:poudriere[8] for details.
====
[[testing-poudriere-make-conf]]
=== Providing a Custom [.filename]#make.conf# File
Similar to using sets, Poudriere will also use a custom [.filename]#make.conf# if it is provided. No special command line argument is necessary. Instead, Poudriere looks for existing files matching a name scheme derived from the command line. For instance:
[source,shell]
....
# poudriere testport -j 114Ramd64 -p development -z devset -o www/firefox
....
causes Poudriere to check for the existence of these files in this order:
* [.filename]#/usr/local/etc/poudriere.d/make.conf#
* [.filename]#/usr/local/etc/poudriere.d/devset-make.conf#
* [.filename]#/usr/local/etc/poudriere.d/development-make.conf#
* [.filename]#/usr/local/etc/poudriere.d/114Ramd64-make.conf#
* [.filename]#/usr/local/etc/poudriere.d/114Ramd64-development-make.conf#
* [.filename]#/usr/local/etc/poudriere.d/114Ramd64-devset-make.conf#
* [.filename]#/usr/local/etc/poudriere.d/114Ramd64-development-devset-make.conf#
Unlike with sets, all of the found files will be appended, _in that order_, into one [.filename]#make.conf# inside the build jails. It is hence possible to have general make variables, intended to affect all builds in [.filename]#/usr/local/etc/poudriere.d/make.conf#. Special variables, intended to affect only certain jails or sets can be set in specialised [.filename]#make.conf# files, such as [.filename]#/usr/local/etc/poudriere.d/114Ramd64-development-devset-make.conf#.
[[testing-poudriere-sets-perl]]
.Using [.filename]#make.conf# to Change Default Perl
[example]
====
To build a set with a non-default Perl version, for example, `5.20`, using a set named `perl5-20`, create a [.filename]#perl5-20-make.conf# with this line:
[.programlisting]
....
DEFAULT_VERSIONS+= perl=5.20
....
[NOTE]
****
Note the use of `+=` so that if the variable is already set in the default [.filename]#make.conf# its content will not be overwritten.
****
====
[[testing-poudriere-pruning-distfiles]]
=== Pruning no Longer Needed Distfiles
Poudriere comes with a built-in mechanism to remove outdated distfiles that are no longer used by any port of a given tree. The command
[source,shell]
....
# poudriere distclean -p portstree
....
will scan the distfiles directory, `DISTFILES_CACHE` in [.filename]#poudriere.conf#, against the ports tree given by the `-p _portstree_` argument and prompt for the removal of distfiles that are no longer used. To skip the prompt and remove all unused files unconditionally, add the `-y` argument:
[source,shell]
....
# poudriere distclean -p portstree -y
....
diff --git a/documentation/content/en/books/porters-handbook/upgrading/_index.adoc b/documentation/content/en/books/porters-handbook/upgrading/_index.adoc
index 69b161d486..1dde7f32e8 100644
--- a/documentation/content/en/books/porters-handbook/upgrading/_index.adoc
+++ b/documentation/content/en/books/porters-handbook/upgrading/_index.adoc
@@ -1,199 +1,200 @@
---
title: Chapter 11. Upgrading a Port
prev: books/porters-handbook/testing
next: books/porters-handbook/security
+description: Upgrading a FreeBSD Port
---
[[port-upgrading]]
= Upgrading a Port
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 11
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
When a port is not the most recent version available from the authors, update the local working copy of [.filename]#/usr/ports#. The port might have already been updated to the new version.
When working with more than a few ports, it will probably be easier to use Git to keep the whole ports collection up-to-date, as described in the link:{handbook}#ports-using/[Handbook]. This will have the added benefit of tracking all the port's dependencies.
The next step is to see if there is an update already pending. To do this, there are two options. There is a searchable interface to the https://bugs.freebsd.org/search/[FreeBSD Problem Report (PR) or bug database]. Select `Ports & Packages` in the `Product` multiple select menu, and enter the name of the port in the `Summary` field.
However, sometimes people forget to put the name of the port into the Summary field in an unambiguous fashion. In that case, try searching in the `Comment` field in the `Detailed Bug Information` section, or try the crossref:keeping-up[portsmon,FreeBSD Ports Monitoring System] (also known as `portsmon`). This system attempts to classify port PRs by portname. To search for PRs about a particular port, use the http://portsmon.FreeBSD.org/portoverview.py[Overview of One Port].
[NOTE]
======
The FreeBSD Ports Monitoring System (portsmon) is currently not working due to recent Python updates.
======
If there is no pending PR, the next step is to send an email to the port's maintainer, as shown by `make maintainer`. That person may already be working on an upgrade, or have a reason to not upgrade the port right now (because of, for example, stability problems of the new version), and there is no need to duplicate their work. Note that unmaintained ports are listed with a maintainer of `ports@FreeBSD.org`, which is just the general ports mailing list, so sending mail there probably will not help in this case.
If the maintainer asks you to do the upgrade or there is no maintainer, then help out FreeBSD by preparing the update! Please do this by using the man:diff[1] command in the base system.
To create a suitable `diff` for a single patch, copy the file that needs patching to [.filename]#something.orig#, save the changes to [.filename]#something# and then create the patch:
[source,shell]
....
% diff -u something.orig something > something.diff
....
Otherwise, either use the `git diff` method (<<git-diff>>) or copy the contents of the port to an entirely different directory and use the result of a recursive man:diff[1] of the new and old port directories (for example, if the modified port directory is called [.filename]#superedit# and the original is in our tree as [.filename]#superedit.bak#, then save the result of `diff -ruN superedit.bak superedit`). Either unified or context diff is fine, but port committers generally prefer unified diffs. Note the use of the `-N` option: it is the accepted way to force diff to deal properly with new files being added or old files being deleted. Before sending us the diff, please examine the output to make sure all the changes make sense. (In particular, make sure to first clean out the work directories with `make clean`.)
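Following the [.filename]#superedit# example, the whole-directory approach might look like this; the `editors` category is only illustrative:
[source,shell]
....
% cd /usr/ports/editors
% cp -R superedit superedit.bak
[make and test the changes in superedit, then run make clean]
% diff -ruN superedit.bak superedit > superedit.diff
....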
[NOTE]
====
If some files have been added, copied, moved, or removed, add this information to the problem report so that the committer picking up the patch will know what man:git[1] commands to run.
====
To simplify common operations with patch files, use `make makepatch` as described in crossref:slow-porting[slow-patch,Patching]. Other tools exist, such as [.filename]#/usr/ports/Tools/scripts/patchtool.py#. Before using it, please read [.filename]#/usr/ports/Tools/scripts/README.patchtool#.
If the port is unmaintained, and you are actively using it, please consider volunteering to become its maintainer. FreeBSD has over 4000 ports without maintainers, and this is an area where more volunteers are always needed. (For a detailed description of the responsibilities of maintainers, refer to the section in the link:{developers-handbook}#POLICIES-MAINTAINER[Developer's Handbook].)
To submit the diff, use the https://bugs.freebsd.org/submit/[bug submit form] (product `Ports & Packages`, component `Individual Port(s)`). Always include the category with the port name, followed by a colon, and a brief description of the issue. Examples: `_category/portname_: _add FOO option_`; `_category/portname_: _Update to X.Y_`. Please mention any added or deleted files in the message, as they have to be explicitly specified to man:git[1] when doing a commit. Do not compress or encode the diff.
Before submitting the bug, review the link:{problem-reports}#pr-writing/[Writing the problem report] section in the Problem Reports article. It contains far more information about how to write useful problem reports.
[IMPORTANT]
====
If the upgrade is motivated by security concerns or a serious fault in the currently committed port, please notify the {portmgr} to request immediate rebuilding and redistribution of the port's package. Unsuspecting users of `pkg` will otherwise continue to install the old version via `pkg install` for several weeks.
====
[NOTE]
====
Please use man:diff[1] or `git diff` to create updates to existing ports. Other formats include the whole file and make it impossible to see just what has changed. When diffs are not included, the entire update might be ignored.
====
Now that all of that is done, read about how to keep up-to-date in crossref:keeping-up[keeping-up,Keeping Up].
[[git-diff]]
== Using Git to Make Patches
When possible, please submit a man:git[1] diff. They are easier to handle than diffs between "new and old" directories. It is easier to see what has changed, and to update the diff if something was modified in the Ports Collection since the work on it began, or if the committer asks for something to be fixed. Also, a patch generated with `git diff` can be easily applied with `git apply` and will save the committer some time.
[source,shell]
....
% git clone https://git.FreeBSD.org/ports.git ~/my_wrkdir <.> <.>
% cd ~/my_wrkdir/dns/pdnsd
....
<.> This can be anywhere, of course. Building ports is not limited to within [.filename]#/usr/ports/#.
<.> https://git.FreeBSD.org/[git.FreeBSD.org] is the FreeBSD public Git server. See link:{handbook}mirrors/#git-url-table[FreeBSD Git Repository URL Table] for more information.
While in the port directory, make any changes that are needed. If adding, moving, or removing a file, use `git` to track these changes:
[source,shell]
....
% git add new_file
% git mv old_name new_name
% git rm deleted_file
....
Make sure to check the port using the checklist in crossref:quick-porting[porting-testing,Testing the Port] and crossref:quick-porting[porting-portlint,Checking the Port with `portlint`].
[source,shell]
....
% git status --short
% git pull --rebase <.>
....
<.> This will attempt to merge the differences between the patch and current repository version. Watch the output carefully. The letter in front of each file name indicates what was done with it.
The last step is to make a unified man:diff[1] of the changes:
[source,shell]
....
% git diff . > ../`make -VPKGNAME`.diff
....
[NOTE]
====
If files have been added, moved, or removed, include the man:git[1] `add`, `mv`, and `rm` commands that were used. `git mv` must be run before the patch can be applied. `git add` or `git rm` must be run after the patch is applied.
====
Send the patch following the link:{problem-reports}#pr-writing/[problem report submission guidelines].
[[moved-and-updating-files]]
== UPDATING and MOVED
[[moved-and-updating-updating]]
=== /usr/ports/UPDATING
If upgrading the port requires special steps like changing configuration files or running a specific program, it must be documented in this file. The format of an entry in this file is:
[.programlisting]
....
YYYYMMDD:
AFFECTS: users of portcategory/portname
AUTHOR: Your name <Your email address>
Special instructions
....
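For instance, a purely illustrative entry (the port name, date, and instructions are hypothetical) could look like:
[.programlisting]
....
20230815:
AFFECTS: users of editors/superedit
AUTHOR: Jane Doe <jdoe@example.com>
The configuration file location has changed. After upgrading, move
~/.supereditrc to ~/.config/superedit/config before starting the editor.
....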
[TIP]
====
When including exact portmaster, portupgrade, and/or pkg instructions, please make sure to get the shell escaping right. For example, do _not_ use:
[source,shell]
....
# pkg delete -g -f docbook-xml* docbook-sk* docbook[2345]??-* docbook-4*
....
As shown, the command will only work with Bourne shells. Instead, use the form shown below, which works with both Bourne and C shells:
[source,shell]
....
# pkg delete -g -f docbook-xml\* docbook-sk\* docbook\[2345\]\?\?-\* docbook-4\*
....
====
[NOTE]
====
It is recommended that the `AFFECTS` line contain a glob matching all the ports affected by the entry so that automated tools can parse it as easily as possible. If an update concerns all the existing BIND 9 versions, the `AFFECTS` content must be `users of dns/bind9*`; it must _not_ be `users of BIND 9`.
====
[[moved-and-updating-moved]]
=== /usr/ports/MOVED
This file is used to list moved or removed ports. Each line in the file is made up of the name of the port, where the port was moved, when, and why. If the port was removed, the section detailing where it was moved can be left blank. Each section must be separated by the `|` (pipe) character, like so:
[.programlisting]
....
old name|new name (blank for deleted)|date of move|reason
....
The date must be entered in the form `YYYY-MM-DD`. New entries are added to the end of the list to keep it in chronological order, with the oldest entry at the top of the list.
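For example, two purely illustrative entries, one for a renamed port and one for a removed port:
[.programlisting]
....
editors/superedit|editors/superedit2|2023-08-15|Renamed after the 2.0 rewrite
editors/superedit2||2024-01-10|Removed, upstream development ceased
....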
If a port was removed but has since been restored, delete the line in this file that states that it was removed.
If a port was renamed and then renamed back to its original name, add a new entry mapping the intermediate name back to the original name, and remove the old entry so as not to create a loop.
[NOTE]
====
Any changes must be validated with `Tools/scripts/MOVEDlint.awk`.
If using a ports directory other than [.filename]#/usr/ports#, use:
[source,shell]
....
% cd /home/user/ports
% env PORTSDIR=$PWD Tools/scripts/MOVEDlint.awk
....
====
diff --git a/documentation/content/en/books/porters-handbook/uses/_index.adoc b/documentation/content/en/books/porters-handbook/uses/_index.adoc
index fcfdfc1d46..6359e4867b 100644
--- a/documentation/content/en/books/porters-handbook/uses/_index.adoc
+++ b/documentation/content/en/books/porters-handbook/uses/_index.adoc
@@ -1,1669 +1,1670 @@
---
title: Chapter 17. Using USES Macros
prev: books/porters-handbook/keeping-up
next: books/porters-handbook/versions
+description: USES macros make it easy to declare requirements and settings for a FreeBSD Port
---
[[uses]]
= Using `USES` Macros
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 17
:c-plus-plus: c++
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
[[uses-intro]]
== An Introduction to `USES`
`USES` macros make it easy to declare requirements and settings for a port. They can add dependencies, change building behavior, add metadata to packages, and so on, all by selecting simple, preset values.
Each section in this chapter describes a possible value for `USES`, along with its possible arguments. Arguments are appended to the value after a colon (`:`). Multiple arguments are separated by commas (`,`).
[[uses-intro-ex1]]
.Using Multiple Values
[example]
====
[.programlisting]
....
USES= bison perl
....
====
[[uses-intro-ex2]]
.Adding an Argument
[example]
====
[.programlisting]
....
USES= tar:xz
....
====
[[uses-intro-ex3]]
.Adding Multiple Arguments
[example]
====
[.programlisting]
....
USES= drupal:7,theme
....
====
[[uses-intro-ex4]]
.Mixing it All Together
[example]
====
[.programlisting]
....
USES= pgsql:9.3+ cpe python:2.7,build
....
====
[[uses-7z]]
== `7z`
Possible arguments: (none), `p7zip`, `partial`
Extracts using man:7z[1] instead of man:bsdtar[1] and sets `EXTRACT_SUFX=.7z`. The `p7zip` option forces a dependency on the `7z` from package:archivers/p7zip[] if the one from the base system is not able to extract the files. `EXTRACT_SUFX` is not changed if the `partial` option is used; this is useful when the main distribution file does not have a [.filename]#.7z# extension.
[[uses-ada]]
== `ada`
Possible arguments: (none), `5`, `6`
Depends on an Ada-capable compiler, and sets `CC` accordingly. Defaults to using GCC 5 from ports. Use the `:_X_` version option to force building with a different version.
[[uses-autoreconf]]
== `autoreconf`
Possible arguments: (none), `build`
Runs `autoreconf`. It encapsulates the `aclocal`, `autoconf`, `autoheader`, `automake`, `autopoint`, and `libtoolize` commands. Each command applies to [.filename]#${AUTORECONF_WRKSRC}/configure.ac# or its old name, [.filename]#${AUTORECONF_WRKSRC}/configure.in#. If [.filename]#configure.ac# defines subdirectories with their own [.filename]#configure.ac# using `AC_CONFIG_SUBDIRS`, `autoreconf` will recursively update those as well. The `:build` argument only adds build time dependencies on those tools but does not run `autoreconf`. A port can set `AUTORECONF_WRKSRC` if `WRKSRC` does not contain the path to [.filename]#configure.ac#.
[[uses-blaslapack]]
== `blaslapack`
Possible arguments: (none), `atlas`, `netlib` (default), `gotoblas`, `openblas`
Adds dependencies on BLAS / LAPACK libraries.
[[uses-bdb]]
== `bdb`
Possible arguments: (none), `48`, `5` (default), `6`
Add a dependency on the Berkeley DB library. Defaults to package:databases/db5[]. It can also depend on package:databases/db48[] when using the `:48` argument or package:databases/db6[] with `:6`. It is possible to declare a range of acceptable values; `:48+` finds the highest installed version, and falls back to 4.8 if nothing else is installed. `INVALID_BDB_VER` can be used to specify versions which do not work with this port. The framework exposes the following variables to the port:
`BDB_LIB_NAME`::
The name of the Berkeley DB library. For example, when using package:databases/db5[], it contains `db-5.3`.
`BDB_LIB_CXX_NAME`::
The name of the Berkeley DB C++ library. For example, when using package:databases/db5[], it contains `db_cxx-5.3`.
`BDB_INCLUDE_DIR`::
The location of the Berkeley DB include directory. For example, when using package:databases/db5[], it will contain `${LOCALBASE}/include/db5`.
`BDB_LIB_DIR`::
The location of the Berkeley DB library directory. For example, when using package:databases/db5[], it contains `${LOCALBASE}/lib`.
`BDB_VER`::
The detected Berkeley DB version. For example, if using `USES=bdb:48+` and Berkeley DB 5 is installed, it contains `5`.
[IMPORTANT]
====
package:databases/db48[] is deprecated and unsupported. It must not be used by any port.
====
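For illustration, a hypothetical [.filename]#Makefile# fragment using these variables might look like this (the configure option names are invented for the example and are not taken from any real port):
[.programlisting]
....
USES= bdb:5
CONFIGURE_ARGS= --with-bdb-include=${BDB_INCLUDE_DIR} --with-bdb-lib=${BDB_LIB_DIR}
....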
[[uses-bison]]
== `bison`
Possible arguments: (none), `build`, `run`, `both`
Uses package:devel/bison[]. By default, with no arguments or with the `build` argument, it implies `bison` is a build-time dependency, `run` implies a run-time dependency, and `both` implies both run-time and build-time dependencies.
[[uses-cabal]]
== `cabal`
[IMPORTANT]
====
Ports should not be created for Haskell libraries, see crossref:special[haskell-libs,Haskell Libraries] for more information.
====
Possible arguments: (none), `hpack`
Sets default values and targets used to build Haskell software using Cabal. A build dependency on the Haskell compiler port (GHC) is added. If the `hpack` argument is given, a build dependency on package:devel/hs-hpack[] is added and `hpack` is invoked at the configuration step to generate the [.filename]#.cabal# file.
The framework provides the following variables:
`USE_CABAL`::
If the software uses Haskell dependencies, list them in this variable. Each item should be present on Hackage and be listed in the form `packagename-_0.1.2_`. Dependencies can have revisions, which are specified after the `_` symbol. Automatic generation of the dependency list is supported; see crossref:special[using-cabal,Building Haskell Applications with `cabal`].
`CABAL_FLAGS`::
List of flags to be passed to `cabal-install` during the configuring and building stage. The flags are passed verbatim.
`EXECUTABLES`::
List of executable files installed by the port. Default value: `${PORTNAME}`. Items from this list are automatically added to pkg-plist.
`SKIP_CABAL_PLIST`::
If defined, do not add items from `${EXECUTABLES}` to pkg-plist.
`opt_USE_CABAL`::
Adds items to `${USE_CABAL}` depending on `opt` option.
`opt_EXECUTABLES`::
Adds items to `${EXECUTABLES}` depending on `opt` option.
`opt_CABAL_FLAGS`::
If `opt` is enabled, append the value to `${CABAL_FLAGS}`. Otherwise, append `-value` to disable the flag.
`FOO_DATADIR_VARS`::
For an executable named `FOO`, list the Haskell packages whose data files should be accessible by the executable.
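A minimal, hypothetical sketch of a Cabal-based port (the package names, revision, and executable name are illustrative; in practice the dependency list is usually generated automatically as described above):
[.programlisting]
....
USES= cabal
USE_CABAL= somepkg-1.2.3 otherpkg-0.4.0_1
EXECUTABLES= myapp
....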
[[uses-cargo]]
== `cargo`
Possible arguments: (none)
Uses Cargo for configuring, building, and testing. It can be used to port Rust applications that use the Cargo build system. For more information see crossref:special[using-cargo,Building Rust Applications with `cargo`].
[[uses-charsetfix]]
== `charsetfix`
Possible arguments: (none)
Prevents the port from installing [.filename]#charset.alias#. This must be installed only by package:converters/libiconv[]. `CHARSETFIX_MAKEFILEIN` can be set to a path relative to `WRKSRC` if [.filename]#charset.alias# is not installed by [.filename]#${WRKSRC}/Makefile.in#.
[[uses-cmake]]
== `cmake`
Possible arguments: (none), `insource`, `noninja`, `run`, `testing`
Use CMake for configuring the port and generating a build system.
By default an out-of-source build is performed, leaving the sources in `WRKSRC` free from build artifacts. With the `insource` argument, an in-source build will be performed instead. This argument should be an exception, used only when a regular out-of-source build does not work.
By default Ninja (package:devel/ninja[]) is used for the build. In some cases this does not work correctly. With the `noninja` argument, the build will use regular `make` for builds. This argument should only be used if a Ninja-based build does not work.
With the `run` argument, a run dependency is registered in addition to a build dependency.
With the `testing` argument, a test-target is added that uses CTest. When running tests the port will be re-configured for testing and re-built.
For more information see crossref:special[using-cmake,Using `cmake`].
[[uses-compiler]]
== `compiler`
Possible arguments: (none), `env` (default, implicit), `{c-plus-plus}17-lang`, `{c-plus-plus}14-lang`, `{c-plus-plus}11-lang`, `gcc-{c-plus-plus}11-lib`, `{c-plus-plus}11-lib`, `{c-plus-plus}0x`, `c11`, `openmp`, `nestedfct`, `features`
Determines which compiler to use based on any given wishes. Use `{c-plus-plus}17-lang` if the port needs a {c-plus-plus}17-capable compiler, `{c-plus-plus}14-lang` if the port needs a {c-plus-plus}14-capable compiler, `{c-plus-plus}11-lang` if the port needs a {c-plus-plus}11-capable compiler, `gcc-{c-plus-plus}11-lib` if the port needs the `g++` compiler with a {c-plus-plus}11 library, or `{c-plus-plus}11-lib` if the port needs a {c-plus-plus}11-ready standard library. If the port needs a compiler understanding {c-plus-plus}0X, C11, OpenMP, or nested functions, the corresponding parameters should be used.
Use `features` to request a list of features supported by the default compiler. After including [.filename]#bsd.port.pre.mk# the port can inspect the results using these variables:
* `COMPILER_TYPE`: the default compiler on the system, either gcc or clang
* `ALT_COMPILER_TYPE`: the alternative compiler on the system, either gcc or clang. Only set if two compilers are present in the base system.
* `COMPILER_VERSION`: the first two digits of the version of the default compiler.
* `ALT_COMPILER_VERSION`: the first two digits of the version of the alternative compiler, if present.
* `CHOSEN_COMPILER_TYPE`: the chosen compiler, either gcc or clang
* `COMPILER_FEATURES`: the features supported by the default compiler. It currently lists the {c-plus-plus} library.
[[uses-cpe]]
== `cpe`
Possible arguments: (none)
Include Common Platform Enumeration (CPE) information in package manifest as a CPE 2.3 formatted string. See the http://scap.nist.gov/specifications/cpe/[CPE specification] for details. To add CPE information to a port, follow these steps:
[.procedure]
. Search for the official CPE entry for the software product either by using the NVD's http://web.nvd.nist.gov/view/cpe/search[CPE search engine] or in the http://static.nvd.nist.gov/feeds/xml/cpe/dictionary/official-cpe-dictionary_v2.3.xml[official CPE dictionary] (warning, very large XML file). _Do not ever make up CPE data._
. Add `cpe` to `USES` and compare the result of `make -V CPE_STR` to the CPE dictionary entry. Continue one step at a time until `make -V CPE_STR` is correct.
. If the product name (second field, defaults to `PORTNAME`) is incorrect, define `CPE_PRODUCT`.
. If the vendor name (first field, defaults to `CPE_PRODUCT`) is incorrect, define `CPE_VENDOR`.
. If the version field (third field, defaults to `PORTVERSION`) is incorrect, define `CPE_VERSION`.
. If the update field (fourth field, defaults to empty) is incorrect, define `CPE_UPDATE`.
. If it is still not correct, check [.filename]#Mk/Uses/cpe.mk# for additional details, or contact the {ports-secteam}.
. Derive as much as possible of the CPE name from existing variables such as `PORTNAME` and `PORTVERSION`. Use variable modifiers to extract the relevant portions from these variables rather than hardcoding the name.
. _Always_ run `make -V CPE_STR` and check the output before committing anything that changes `PORTNAME` or `PORTVERSION` or any other variable which is used to derive `CPE_STR`.
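The result of these steps is typically a short [.filename]#Makefile# fragment; the vendor value below is purely illustrative:
[.programlisting]
....
USES= cpe
CPE_VENDOR= example_vendor
....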
[[uses-cran]]
== `cran`
Possible arguments: (none), `auto-plist`, `compiles`
Uses the Comprehensive R Archive Network. Specify `auto-plist` to automatically generate [.filename]#pkg-plist#. Specify `compiles` if the port has code that needs to be compiled.
[[uses-desktop-file-utils]]
== `desktop-file-utils`
Possible arguments: (none)
Uses update-desktop-database from package:devel/desktop-file-utils[]. An extra post-install step will be run without interfering with any post-install steps already in the port [.filename]#Makefile#. A line with <<plist-keywords-desktop-file-utils,`@desktop-file-utils`>> will be added to the plist.
[[uses-desthack]]
== `desthack`
Possible arguments: (none)
Changes the behavior of GNU configure to properly support `DESTDIR` in case the original software does not.
[[uses-display]]
== `display`
Possible arguments: (none), _ARGS_
Set up a virtual display environment. If the environment variable `DISPLAY` is not set, then Xvfb is added as a build dependency, and `CONFIGURE_ENV` is extended with the port number of the currently running instance of Xvfb. The _ARGS_ parameter defaults to `install` and controls the phase around which to start and stop the virtual display.
[[uses-dos2unix]]
== `dos2unix`
Possible arguments: (none)
The port has files with line endings in DOS format which need to be converted. Several variables can be set to control which files will be converted. The default is to convert _all_ files, including binaries. See crossref:slow-porting[slow-patch-automatic-replacements,Simple Automatic Replacements] for examples.
* `DOS2UNIX_REGEX`: match file names based on a regular expression.
* `DOS2UNIX_FILES`: match literal file names.
* `DOS2UNIX_GLOB`: match file names based on a glob pattern.
* `DOS2UNIX_WRKSRC`: the directory from which to start the conversions. Defaults to `${WRKSRC}`.
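For example, a hypothetical port that only needs its C sources and headers converted might use:
[.programlisting]
....
USES= dos2unix
DOS2UNIX_GLOB= *.c *.h
....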
[[uses-drupal]]
== `drupal`
Possible arguments: `7`, `module`, `theme`
Automate installation of a port that is a Drupal theme or module. Use with the version of Drupal that the port is expecting. For example, `USES=drupal:7,module` says that this port creates a Drupal 7 module. A Drupal 7 theme can be specified with `USES=drupal:7,theme`.
[[uses-eigen]]
== `eigen`
Possible arguments: `2`, `3`, `build` (default), `run`
Add dependency on package:math/eigen[].
[[uses-fakeroot]]
== `fakeroot`
Possible arguments: (none)
Changes some default behavior of build systems to allow installing as a user. See https://wiki.debian.org/FakeRoot[] for more information on `fakeroot`.
[[uses-fam]]
== `fam`
Possible arguments: (none), `fam`, `gamin`
Uses a File Alteration Monitor as a library dependency, either package:devel/fam[] or package:devel/gamin[]. End users can set `WITH_FAM_SYSTEM` to specify their preference.
[[uses-firebird]]
== `firebird`
Possible arguments: (none), `25`
Add a dependency on the client library of the Firebird database.
[[uses-fonts]]
== `fonts`
Possible arguments: (none), `fc`, `fcfontsdir` (default), `fontsdir`, `none`
Adds a runtime dependency on tools needed to register fonts. Depending on the argument, adds a `crossref:plist[plist-keywords-fc,@fc] ${FONTSDIR}` line, `crossref:plist[plist-keywords-fcfontsdir,@fcfontsdir] ${FONTSDIR}` line, `crossref:plist[plist-keywords-fontsdir,@fontsdir] ${FONTSDIR}` line, or no line if the argument is `none`, to the plist. `FONTSDIR` defaults to [.filename]#${PREFIX}/share/fonts/${FONTNAME}# and `FONTNAME` to `${PORTNAME}`. Adds `FONTSDIR` to `PLIST_SUB` and `SUB_LIST`.
[[uses-fortran]]
== `fortran`
Possible arguments: `gcc` (default)
Uses the GNU Fortran compiler.
[[uses-fuse]]
== `fuse`
Possible arguments: `2` (default), `3`
The port will depend on the FUSE library and handle the dependency on the kernel module depending on the version of FreeBSD.
[[uses-gem]]
== `gem`
Possible arguments: (none), `noautoplist`
Handle building with RubyGems. If `noautoplist` is used, the packing list is not generated automatically.
[[uses-gettext]]
== `gettext`
Possible arguments: (none)
Deprecated. Will include both <<uses-gettext-runtime,`gettext-runtime`>> and <<uses-gettext-tools,`gettext-tools`>>.
[[uses-gettext-runtime]]
== `gettext-runtime`
Possible arguments: (none), `lib` (default), `build`, `run`
Uses package:devel/gettext-runtime[]. By default, with no arguments or with the `lib` argument, implies a library dependency on [.filename]#libintl.so#. `build` and `run` imply, respectively, a build-time and a run-time dependency on [.filename]#gettext#.
[[uses-gettext-tools]]
== `gettext-tools`
Possible arguments: (none), `build` (default), `run`
Uses package:devel/gettext-tools[]. By default, with no argument, or with the `build` argument, a build time dependency on [.filename]#msgfmt# is registered. With the `run` argument, a run-time dependency is registered.
[[uses-ghostscript]]
== `ghostscript`
Possible arguments: _X_, `build`, `run`, `nox11`
A specific version _X_ can be used. Possible versions are `7`, `8`, `9`, and `agpl` (default). `nox11` indicates that the `-nox11` version of the port is required. `build` and `run` add build- and run-time dependencies on Ghostscript. The default is both build- and run-time dependencies.
[[uses-gl]]
== `gl`
Possible arguments: (none)
Provides an easy way to depend on GL components. The components should be listed in `USE_GL`. The available components are:
`egl`::
Add a library dependency on [.filename]#libEGL.so# from package:graphics/mesa-libs[]
`gbm`::
Add a library dependency on [.filename]#libgbm.so# from package:graphics/mesa-libs[]
`gl`::
Add a library dependency on [.filename]#libGL.so# from package:graphics/mesa-libs[]
`glesv2`::
Add a library dependency on [.filename]#libGLESv2.so# from package:graphics/mesa-libs[]
`glew`::
Add a library dependency on [.filename]#libGLEW.so# from package:graphics/glew[]
`glu`::
Add a library dependency on [.filename]#libGLU.so# from package:graphics/libGLU[]
`glut`::
Add a library dependency on [.filename]#libglut.so# from package:graphics/freeglut[]
[[uses-gmake]]
== `gmake`
Possible arguments: (none)
Uses package:devel/gmake[] as a build-time dependency and sets up the environment to use `gmake` as the default `make` for the build.
[[uses-gnome]]
== `gnome`
Possible arguments: (none)
Provides an easy way to depend on GNOME components. The components should be listed in `USE_GNOME`. The available components are:
* `atk`
* `atkmm`
* `cairo`
* `cairomm`
* `dconf`
* `esound`
* `evolutiondataserver3`
* `gconf2`
* `gconfmm26`
* `gdkpixbuf`
* `gdkpixbuf2`
* `glib12`
* `glib20`
* `glibmm`
* `gnomecontrolcenter3`
* `gnomedesktop3`
* `gnomedocutils`
* `gnomemenus3`
* `gnomemimedata`
* `gnomeprefix`
* `gnomesharp20`
* `gnomevfs2`
* `gsound`
* `gtk-update-icon-cache`
* `gtk12`
* `gtk20`
* `gtk30`
* `gtkhtml3`
* `gtkhtml4`
* `gtkmm20`
* `gtkmm24`
* `gtkmm30`
* `gtksharp20`
* `gtksourceview`
* `gtksourceview2`
* `gtksourceview3`
* `gtksourceviewmm3`
* `gvfs`
* `intlhack`
* `intltool`
* `introspection`
* `libartlgpl2`
* `libbonobo`
* `libbonoboui`
* `libgda5`
* `libgda5-ui`
* `libgdamm5`
* `libglade2`
* `libgnome`
* `libgnomecanvas`
* `libgnomekbd`
* `libgnomeprint`
* `libgnomeprintui`
* `libgnomeui`
* `libgsf`
* `libgtkhtml`
* `libgtksourceviewmm`
* `libidl`
* `librsvg2`
* `libsigc++12`
* `libsigc++20`
* `libwnck`
* `libwnck3`
* `libxml++26`
* `libxml2`
* `libxslt`
* `metacity`
* `nautilus3`
* `orbit2`
* `pango`
* `pangomm`
* `pangox-compat`
* `py3gobject3`
* `pygnome2`
* `pygobject`
* `pygobject3`
* `pygtk2`
* `pygtksourceview`
* `referencehack`
* `vte`
* `vte3`
The default dependency is build- and run-time; it can be changed with `:build` or `:run`. For example:
[.programlisting]
....
USES= gnome
USE_GNOME= gnomemenus3:build intlhack
....
See crossref:special[using-gnome,Using GNOME] for more information.
[[uses-go]]
== `go`
[IMPORTANT]
====
Ports should not be created for Go libs, see crossref:special[go-libs,Go Libraries] for more information.
====
Possible arguments: (none), `modules`, `no_targets`, `run`
Sets default values and targets used to build Go software. A build dependency on the Go compiler port selected via `GO_PORT` is added. By default the build is performed in GOPATH mode. If the Go software uses modules, the modules-aware mode can be switched on with the `modules` argument. `no_targets` will set up the build environment (for example `GO_ENV` and `GO_BUILDFLAGS`) but skip creating the `post-extract` and `do-{build,install,test}` targets. `run` will also add a run dependency on what is in `GO_PORT`.
The build process is controlled by several variables:
`GO_MODULE`::
The name of the application module as specified by the `module` directive in `go.mod`. In most cases, this is the only required variable for ports that use Go modules.
`GO_PKGNAME`::
The name of the Go package when building in GOPATH mode. This is the directory that will be created in `${GOPATH}/src`. If not set explicitly and `GH_SUBDIR` or `GL_SUBDIR` is present, `GO_PKGNAME` will be inferred from it. It is not needed when building in modules-aware mode.
`GO_TARGET`::
The packages to build. The default value is `${GO_PKGNAME}`. `GO_TARGET` can also be a tuple in the form `package:path` where path can be either a simple filename or a full path starting with `${PREFIX}`.
`GO_TESTTARGET`::
The packages to test. The default value is `./...` (the current package and all subpackages).
`CGO_CFLAGS`::
Additional `CFLAGS` values to be passed to the C compiler by `go`.
`CGO_LDFLAGS`::
Additional `LDFLAGS` values to be passed to the C compiler by `go`.
`GO_BUILDFLAGS`::
Additional build arguments to be passed to `go build`.
`GO_TESTFLAGS`::
Additional build arguments to be passed to `go test`.
`GO_PORT`::
The Go compiler port to use. By default this is package:lang/go[] but can be set to package:lang/go-devel[] in `make.conf` for testing with future Go versions.
+
[WARNING]
====
This variable must not be set by individual ports!
====
See crossref:special[using-go,Building Go Applications] for usage examples.
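As a minimal hypothetical sketch (the module path is illustrative), a port for a Go application that uses modules might only need:
[.programlisting]
....
USES= go:modules
GO_MODULE= github.com/example/hello
....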
[[uses-gperf]]
== `gperf`
Possible arguments: (none)
Add a build-time dependency on package:devel/gperf[] if `gperf` is not present in the base system.
[[uses-grantlee]]
== `grantlee`
Possible arguments: `5`, `selfbuild`
Handle dependency on Grantlee. Specify `5` to depend on the Qt5-based version, package:devel/grantlee5[]. `selfbuild` is used internally by package:devel/grantlee5[] to get its version number.
[[uses-groff]]
== `groff`
Possible arguments: `build`, `run`, `both`
Registers a dependency on package:textproc/groff[] if not present in the base system.
[[uses-gssapi]]
== `gssapi`
Possible arguments: (none), `base` (default), `heimdal`, `mit`, `flags`, `bootstrap`
Handle dependencies needed by consumers of the GSS-API. Only libraries that provide the Kerberos mechanism are available. By default, or set to `base`, the GSS-API library from the base system is used. Can also be set to `heimdal` to use package:security/heimdal[], or `mit` to use package:security/krb5[].
When the local Kerberos installation is not in `LOCALBASE`, set `HEIMDAL_HOME` (for `heimdal`) or `KRB5_HOME` (for `krb5`) to the location of the Kerberos installation.
These variables are exported for the ports to use:
* `GSSAPIBASEDIR`
* `GSSAPICPPFLAGS`
* `GSSAPIINCDIR`
* `GSSAPILDFLAGS`
* `GSSAPILIBDIR`
* `GSSAPILIBS`
* `GSSAPI_CONFIGURE_ARGS`
The `flags` option can be given alongside `base`, `heimdal`, or `mit` to automatically add `GSSAPICPPFLAGS`, `GSSAPILDFLAGS`, and `GSSAPILIBS` to `CFLAGS`, `LDFLAGS`, and `LDADD`, respectively. For example, use `base,flags`.
The `bootstrap` option is a special prefix only for use by package:security/krb5[] and package:security/heimdal[]. For example, use `bootstrap,mit`.
[[uses-gssapi-ex1]]
.Typical Use
[example]
====
[.programlisting]
....
OPTIONS_SINGLE= GSSAPI
OPTIONS_SINGLE_GSSAPI= GSSAPI_BASE GSSAPI_HEIMDAL GSSAPI_MIT GSSAPI_NONE
GSSAPI_BASE_USES= gssapi
GSSAPI_BASE_CONFIGURE_ON= --with-gssapi=${GSSAPIBASEDIR} ${GSSAPI_CONFIGURE_ARGS}
GSSAPI_HEIMDAL_USES= gssapi:heimdal
GSSAPI_HEIMDAL_CONFIGURE_ON= --with-gssapi=${GSSAPIBASEDIR} ${GSSAPI_CONFIGURE_ARGS}
GSSAPI_MIT_USES= gssapi:mit
GSSAPI_MIT_CONFIGURE_ON= --with-gssapi=${GSSAPIBASEDIR} ${GSSAPI_CONFIGURE_ARGS}
GSSAPI_NONE_CONFIGURE_ON= --without-gssapi
....
====
[[uses-horde]]
== `horde`
Possible arguments: (none)
Add build-time and run-time dependencies on package:devel/pear-channel-horde[]. Other Horde dependencies can be added with `USE_HORDE_BUILD` and `USE_HORDE_RUN`. See crossref:special[php-horde,Horde Modules] for more information.
[[uses-iconv]]
== `iconv`
Possible arguments: (none), `lib`, `build`, `patch`, `translit`, `wchar_t`
Uses `iconv` functions, either from the port package:converters/libiconv[] as a build-time and run-time dependency, or from the base system. By default, with no arguments or with the `lib` argument, implies `iconv` with build-time and run-time dependencies. `build` implies a build-time dependency, and `patch` implies a patch-time dependency. If the port uses the `WCHAR_T` or `//TRANSLIT` iconv extensions, add the relevant arguments so that the correct iconv is used. For more information see crossref:special[using-iconv,Using `iconv`].
[[uses-imake]]
== `imake`
Possible arguments: (none), `env`, `notall`, `noman`
Add package:devel/imake[] as a build-time dependency and run `xmkmf -a` during the `configure` stage. If the `env` argument is given, the `configure` target is not set. If the `-a` flag is a problem for the port, add the `notall` argument. If `xmkmf` does not generate an `install.man` target, add the `noman` argument.
[[uses-kde]]
== `kde`
Possible arguments: `5`
Add dependency on KDE components. See crossref:special[using-kde,Using KDE] for more information.
[[uses-kmod]]
== `kmod`
Possible arguments: (none), `debug`
Fills in the boilerplate for kernel module ports, currently:
* Add `kld` to `CATEGORIES`.
* Set `SSP_UNSAFE`.
* Set `IGNORE` if the kernel sources are not found in `SRC_BASE`.
* Define `KMODDIR` to [.filename]#/boot/modules# by default, add it to `PLIST_SUB` and `MAKE_ENV`, and create it upon installation. If `KMODDIR` is set to [.filename]#/boot/kernel#, it will be rewritten to [.filename]#/boot/modules#. This prevents breaking packages when upgrading the kernel due to [.filename]#/boot/kernel# being renamed to [.filename]#/boot/kernel.old# in the process.
* Handle cross-referencing kernel modules upon installation and deinstallation, using crossref:plist[plist-keywords-kld,`@kld`].
* If the `debug` argument is given, the port can install a debug version of the module into [.filename]#KERN_DEBUGDIR#/[.filename]#KMODDIR#. By default, `KERN_DEBUGDIR` is copied from `DEBUGDIR` and set to [.filename]#/usr/lib/debug#. The framework will take care of creating and removing any required directories.
[[uses-lha]]
== `lha`
Possible arguments: (none)
Set `EXTRACT_SUFX` to `.lzh`.
[[uses-libarchive]]
== `libarchive`
Possible arguments: (none)
Registers a dependency on package:archivers/libarchive[]. Any ports depending on libarchive must include `USES=libarchive`.
[[uses-libedit]]
== `libedit`
Possible arguments: (none)
Registers a dependency on package:devel/libedit[]. Any ports depending on libedit must include `USES=libedit`.
[[uses-libtool]]
== `libtool`
Possible arguments: (none), `keepla`, `build`
Patches `libtool` scripts. This must be added to all ports that use `libtool`. The `keepla` argument can be used to keep [.filename]#.la# files. Some ports do not ship with their own copy of libtool and need a build-time dependency on package:devel/libtool[]; use the `:build` argument to add such a dependency.
[[uses-linux]]
== `linux`
Possible arguments: `c6`, `c7`
Uses the Linux compatibility framework. Specify `c6` to depend on CentOS 6 packages. Specify `c7` to depend on CentOS 7 packages. The available packages are:
* `allegro`
* `alsa-plugins-oss`
* `alsa-plugins-pulseaudio`
* `alsalib`
* `atk`
* `avahi-libs`
* `base`
* `cairo`
* `cups-libs`
* `curl`
* `cyrus-sasl2`
* `dbusglib`
* `dbuslibs`
* `devtools`
* `dri`
* `expat`
* `flac`
* `fontconfig`
* `gdkpixbuf2`
* `gnutls`
* `graphite2`
* `gtk2`
* `harfbuzz`
* `jasper`
* `jbigkit`
* `jpeg`
* `libasyncns`
* `libaudiofile`
* `libelf`
* `libgcrypt`
* `libgfortran`
* `libgpg-error`
* `libmng`
* `libogg`
* `libpciaccess`
* `libsndfile`
* `libsoup`
* `libssh2`
* `libtasn1`
* `libthai`
* `libtheora`
* `libv4l`
* `libvorbis`
* `libxml2`
* `mikmod`
* `naslibs`
* `ncurses-base`
* `nspr`
* `nss`
* `openal`
* `openal-soft`
* `openldap`
* `openmotif`
* `openssl`
* `pango`
* `pixman`
* `png`
* `pulseaudio-libs`
* `qt`
* `qt-x11`
* `qtwebkit`
* `scimlibs`
* `sdl12`
* `sdlimage`
* `sdlmixer`
* `sqlite3`
* `tcl85`
* `tcp_wrappers-libs`
* `tiff`
* `tk85`
* `ucl`
* `xorglibs`
[[uses-localbase]]
== `localbase`
Possible arguments: (none), `ldflags`
Ensures that libraries from dependencies in `LOCALBASE` are used instead of the ones from the base system. Specify `ldflags` to add `-L${LOCALBASE}/lib` to `LDFLAGS` instead of `LIBS`. Ports that depend on libraries that are also present in the base system should use this. It is also used internally by a few other `USES`.
[[uses-lua]]
== `lua`
Possible arguments: (none), `_XY_`, `_XY_+`, `-_XY_`, `_XY_-_ZA_`, `module`, `flavors`, `build`, `run`, `env`
Adds a dependency on Lua. By default this is a library dependency, unless overridden by the `build` and/or `run` option. The `env` option prevents the addition of any dependency, while still defining all the usual variables.
The default version is set by the usual `DEFAULT_VERSIONS` mechanism, unless a version or range of versions is specified as an argument, for example, `51` or `51-53`.
Applications using Lua are normally built for only a single Lua version. However, library modules intended to be loaded by Lua code should use the `module` option to build with multiple flavors.
For more information see crossref:special[using-lua,Using Lua].
[[uses-lxqt]]
== `lxqt`
Possible arguments: (none)
Handle dependencies for the LXQt Desktop Environment. Use `USE_LXQT` to select the components needed for the port. See crossref:special[using-lxqt,Using LXQt] for more information.
[[uses-makeinfo]]
== `makeinfo`
Possible arguments: (none)
Add a build-time dependency on `makeinfo` if it is not present in the base system.
[[uses-makeself]]
== `makeself`
Possible arguments: (none)
Indicates that the distribution files are makeself archives and sets the appropriate dependencies.
[[uses-mate]]
== `mate`
Possible arguments: (none)
Provides an easy way to depend on MATE components. The components should be listed in `USE_MATE`. The available components are:
* `autogen`
* `caja`
* `common`
* `controlcenter`
* `desktop`
* `dialogs`
* `docutils`
* `icontheme`
* `intlhack`
* `intltool`
* `libmatekbd`
* `libmateweather`
* `marco`
* `menus`
* `notificationdaemon`
* `panel`
* `pluma`
* `polkit`
* `session`
* `settingsdaemon`
The default dependency is build- and run-time; it can be changed with `:build` or `:run`. For example:
[.programlisting]
....
USES= mate
USE_MATE= menus:build intlhack
....
[[uses-meson]]
== `meson`
Possible arguments: (none)
Provide support for Meson based projects. For more information see crossref:special[using-meson,Using `meson`].
[[uses-metaport]]
== `metaport`
Possible arguments: (none)
Sets the following variables to make it easier to create a metaport: `MASTER_SITES`, `DISTFILES`, `EXTRACT_ONLY`, `NO_BUILD`, `NO_INSTALL`, `NO_MTREE`, `NO_ARCH`.
[[uses-mysql]]
== `mysql`
Possible arguments: (none), `_version_`, `client` (default), `server`, `embedded`
Provide support for MySQL. If no version is given, try to find the currently installed version. Fall back to the default version, MySQL-5.6. The possible versions are `55`, `55m`, `55p`, `56`, `56p`, `56w`, `57`, `57p`, `80`, `100m`, `101m`, and `102m`. The `m` and `p` suffixes are for the MariaDB and Percona variants of MySQL. `server` and `embedded` add a build- and run-time dependency on the MySQL server. When using `server` or `embedded`, add `client` to also add a dependency on [.filename]#libmysqlclient.so#. A port can set `IGNORE_WITH_MYSQL` if some versions are not supported.
The framework sets `MYSQL_VER` to the detected MySQL version.
[[uses-mono]]
== `mono`
Possible arguments: (none), `nuget`
Adds a dependency on the Mono (currently only C#) framework by setting the appropriate dependencies.
Specify `nuget` when the port uses nuget packages. `NUGET_DEPENDS` needs to be set with the names and versions of the nuget packages in the format `_name_=_version_`. An optional package origin can be added using `_name_=_version_:_origin_`.
The helper target, `buildnuget`, will output the content of `NUGET_DEPENDS` based on the provided [.filename]#packages.config#.
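A hypothetical fragment following the format described above (the package names, versions, and origin are illustrative):
[.programlisting]
....
USES= mono:nuget
NUGET_DEPENDS= SomePackage=1.2.3 OtherPackage=4.5.6:devel/other-package
....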
[[uses-motif]]
== `motif`
Possible arguments: (none)
Uses package:x11-toolkits/open-motif[] as a library dependency. End users can set `WANT_LESSTIF` for the dependency to be on package:x11-toolkits/lesstif[] instead of package:x11-toolkits/open-motif[].
[[uses-ncurses]]
== `ncurses`
Possible arguments: (none), `base`, `port`
Uses ncurses, and causes some useful variables to be set.
[[uses-ninja]]
== `ninja`
Possible arguments: (none)
Uses ninja to build the port.
[[uses-objc]]
== `objc`
Possible arguments: (none)
Add Objective-C dependencies (compiler, runtime library) if the base system does not support it.
[[uses-openal]]
== `openal`
Possible arguments: `al`, `soft` (default), `si`, `alut`
Uses OpenAL. The backend can be specified, with the software implementation as the default. The user can specify a preferred backend with `WANT_OPENAL`. Valid values for this knob are `soft` (default) and `si`.
[[uses-pathfix]]
== `pathfix`
Possible arguments: (none)
Look for [.filename]#Makefile.in# and [.filename]#configure# in `PATHFIX_WRKSRC` (defaults to `WRKSRC`) and fix common paths to make sure they respect the FreeBSD hierarchy. For example, it fixes the installation directory of `pkgconfig`'s [.filename]#.pc# files to [.filename]#${PREFIX}/libdata/pkgconfig#. If the port uses `USES=autoreconf`, [.filename]#Makefile.am# will be added to `PATHFIX_MAKEFILEIN` automatically.
If the port <<uses-cmake,`USES=cmake`>> it will look for [.filename]#CMakeLists.txt# in `PATHFIX_WRKSRC`. If needed, that default filename can be changed with `PATHFIX_CMAKELISTSTXT`.
[[uses-pear]]
== `pear`
Possible arguments: `env`
Adds a dependency on package:devel/pear[]. It will set up default behavior for software using the PHP Extension and Application Repository. Using the `env` argument only sets up the PEAR environment variables. See crossref:special[php-pear,PEAR Modules] for more information.
[[uses-perl5]]
== `perl5`
Possible arguments: (none)
Depends on Perl. The configuration is done using `USE_PERL5`.
`USE_PERL5` can contain the phases in which to use Perl, can be `extract`, `patch`, `build`, `run`, or `test`.
`USE_PERL5` can also contain `configure`, `modbuild`, or `modbuildtiny` when [.filename]#Makefile.PL#, [.filename]#Build.PL#, or Module::Build::Tiny's flavor of [.filename]#Build.PL# is required.
`USE_PERL5` defaults to `build run`. When using `configure`, `modbuild`, or `modbuildtiny`, `build` and `run` are implied.
See crossref:special[using-perl,Using Perl] for more information.
[[uses-pgsql]]
== `pgsql`
Possible arguments: (none), `_X.Y_`, `_X.Y_+`, `_X.Y_-`, `_X.Y_-_Z.A_`
Provide support for PostgreSQL. The port maintainer can set the required version. Minimum and maximum versions or a range can be specified; for example, `9.0-`, `8.4+`, `8.4-9.2`.
By default, the added dependency will be the client, but if the port requires additional components, this can be done using `WANT_PGSQL=_component[:target]_`; for example, `WANT_PGSQL=server:configure pltcl plperl`. The available components are:
* `client`
* `contrib`
* `docs`
* `pgtcl`
* `plperl`
* `plpython`
* `pltcl`
* `server`
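For example, a hypothetical port that needs a PostgreSQL 9.2 or newer server present when its configure script runs, plus the PL/Perl component, might use:
[.programlisting]
....
USES= pgsql:9.2+
WANT_PGSQL= server:configure plperl
....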
[[uses-php]]
== `php`
Possible arguments: (none), `phpize`, `ext`, `zend`, `build`, `cli`, `cgi`, `mod`, `web`, `embed`, `pecl`, `flavors`, `noflavors`
Provide support for PHP. Add a runtime dependency on the default PHP version, package:lang/php56[].
`phpize`::
Use to build a PHP extension. Enables flavors.
`ext`::
Use to build, install and register a PHP extension. Enables flavors.
`zend`::
Use to build, install and register a Zend extension. Enables flavors.
`build`::
Set PHP also as a build-time dependency.
`cli`::
Needs the CLI version of PHP.
`cgi`::
Needs the CGI version of PHP.
`mod`::
Needs the Apache module for PHP.
`web`::
Needs the Apache module or the CGI version of PHP.
`embed`::
Needs the embedded library version of PHP.
`pecl`::
Provide defaults for fetching PHP extensions from the PECL repository. Enables flavors.
`flavors`::
Enable automatic crossref:flavors[flavors-auto-php,PHP flavors] generation. Flavors will be generated for all PHP versions, except the ones present in <<uses-php-ignore,`IGNORE_WITH_PHP`>>.
`noflavors`::
Disable automatic PHP flavors generation. _Must only_ be used with extensions provided by PHP itself.
Variables are used to specify which PHP modules are required, as well as which versions of PHP are supported.
`USE_PHP`::
The list of required PHP extensions at run-time. Add `:build` to the extension name to add a build-time dependency. Example: `pcre xml:build gettext`
[[uses-php-ignore]]
`IGNORE_WITH_PHP`::
The port does not work with PHP of the given version. For possible values look at the content of `_ALL_PHP_VERSIONS` in [.filename]#Mk/Uses/php.mk#.
When building a PHP or Zend extension with `:ext` or `:zend`, these variables can be set:
`PHP_MODNAME`::
The name of the PHP or Zend extension. Default value is `${PORTNAME}`.
`PHP_HEADER_DIRS`::
A list of subdirectories from which to install header files. The framework will always install the header files that are present in the same directory as the extension.
`PHP_MOD_PRIO`::
The priority at which to load the extension. It is a number between `00` and `99`.
+
For extensions that do not depend on any other extension, the priority is automatically set to `20`; for extensions that depend on another extension, the priority is automatically set to `30`. Some extensions may need to be loaded before every other extension, for example package:www/php56-opcache[]. Some may need to be loaded after an extension with a priority of `30`. In that case, add `PHP_MOD_PRIO=_XX_` in the port's Makefile. For example:
+
[.programlisting]
....
USES= php:ext
USE_PHP= wddx
PHP_MOD_PRIO= 40
....
These variables are available to use in `PKGNAMEPREFIX` or `PKGNAMESUFFIX`:
`PHP_PKGNAMEPREFIX`::
Contains `php_XY_-` where _XY_ is the current flavor's PHP version. Use with PHP extensions and modules.
`PHP_PKGNAMESUFFIX`::
Contains `-php_XY_` where _XY_ is the current flavor's PHP version. Use with PHP applications.
`PECL_PKGNAMEPREFIX`::
Contains `php_XY_-pecl-` where _XY_ is the current flavor's PHP version. Use with PECL modules.
[IMPORTANT]
====
With flavors, all PHP extensions, PECL extensions, and PEAR modules _must have_ a different package name, so they must all use one of these three variables in their `PKGNAMEPREFIX` or `PKGNAMESUFFIX`.
====
[[uses-pkgconfig]]
== `pkgconfig`
Possible arguments: (none), `build` (default), `run`, `both`
Uses package:devel/pkgconf[]. With no arguments or with the `build` argument, it implies `pkg-config` as a build-time dependency. `run` implies a run-time dependency and `both` implies both run-time and build-time dependencies.
[[uses-pure]]
== `pure`
Possible arguments: (none), `ffi`
Uses package:lang/pure[]. Largely used for building related pure ports. With the `ffi` argument, it implies package:devel/pure-ffi[] as a run-time dependency.
[[uses-pyqt]]
== `pyqt`
Possible arguments: (none), `4`, `5`
Uses PyQt. If the port is part of PyQt itself, set `PYQT_DIST`. Use `USE_PYQT` to select the components the port needs. The available components are:
* `core`
* `dbus`
* `dbussupport`
* `demo`
* `designer`
* `designerplugin`
* `doc`
* `gui`
* `multimedia`
* `network`
* `opengl`
* `qscintilla2`
* `sip`
* `sql`
* `svg`
* `test`
* `webkit`
* `xml`
* `xmlpatterns`
These components are only available with PyQt4:
* `assistant`
* `declarative`
* `help`
* `phonon`
* `script`
* `scripttools`
These components are only available with PyQt5:
* `multimediawidgets`
* `printsupport`
* `qml`
* `serialport`
* `webkitwidgets`
* `widgets`
The default dependency for each component is build- and run-time, to select only build or run, add `_build` or `_run` to the component name. For example:
[.programlisting]
....
USES= pyqt
USE_PYQT= core doc_build designer_run
....
[[uses-python]]
== `python`
Possible arguments: (none), `_X.Y_`, `_X.Y+_`, `_-X.Y_`, `_X.Y-Z.A_`, `patch`, `build`, `run`, `test`
Uses Python. A supported version or version range can be specified. If Python is only needed at build time, run time or for the tests, it can be set as a build, run or test dependency with `build`, `run`, or `test`. If Python is also needed during the patch phase, use `patch`. See crossref:special[using-python, Using Python] for more information.
`PYTHON_NO_DEPENDS=yes` can be used when the variables exported by the framework are needed but a dependency on Python is not. This can happen when used with <<uses-shebangfix,`USES=shebangfix`>>, when the goal is only to fix the shebangs without adding a dependency on Python.
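For example, a hypothetical port whose scripts merely need their Python shebangs fixed could use (the file glob is illustrative):
[.programlisting]
....
USES= python shebangfix
PYTHON_NO_DEPENDS= yes
SHEBANG_FILES= scripts/*.py
....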
[[uses-qmail]]
== `qmail`
Possible arguments: (none), `build`, `run`, `both`, `vars`
Uses package:mail/qmail[]. With the `build` argument, it implies `qmail` as a build-time dependency. `run` implies a run-time dependency. Using no argument or the `both` argument implies both run-time and build-time dependencies. `vars` will only set QMAIL variables for the port to use.
[[uses-qmake]]
== `qmake`
Possible arguments: (none), `norecursive`, `outsource`, `no_env`, `no_configure`
Uses QMake for configuring. For more information see crossref:special[using-qmake,Using `qmake`].
[[uses-qt]]
== `qt`
Possible arguments: `5`, `no_env`
Add dependency on Qt components. `no_env` is passed directly to `USES= qmake`. See crossref:special[using-qt,Using Qt] for more information.
[[uses-qt-dist]]
== `qt-dist`
Possible arguments: (none) or `5` and (none) or one of `3d`, `activeqt`, `androidextras`, `base`, `canvas3d`, `charts`, `connectivity`, `datavis3d`, `declarative`, `doc`, `gamepad`, `graphicaleffects`, `imageformats`, `location`, `macextras`, `multimedia`, `networkauth`, `purchasing`, `quickcontrols2`, `quickcontrols`, `remoteobjects`, `script`, `scxml`, `sensors`, `serialbus`, `serialport`, `speech`, `svg`, `tools`, `translations`, `virtualkeyboard`, `wayland`, `webchannel`, `webengine`, `websockets`, `webview`, `winextras`, `x11extras`, `xmlpatterns`
Provides support for building Qt 5 components. It takes care of setting up the appropriate configuration environment for the port to build.
[[qt5-dist-example]]
.Building Qt 5 Components
[example]
====
The port is Qt 5's `networkauth` component, which is part of the `networkauth` distribution file.
[.programlisting]
....
PORTNAME= networkauth
DISTVERSION= ${QT5_VERSION}
USES= qt-dist:5
....
====
If `PORTNAME` does not match the component name, it can be passed as an argument to `qt-dist`.
[[qt5-dist-example-explicit]]
.Building Qt 5 Components with Different Names
[example]
====
The port is Qt 5's `gui` component, which is part of the `base` distribution file.
[.programlisting]
....
PORTNAME= gui
DISTVERSION= ${QT5_VERSION}
USES= qt-dist:5,base
....
====
[[uses-readline]]
== `readline`
Possible arguments: (none), `port`
Uses readline as a library dependency, and sets `CPPFLAGS` and `LDFLAGS` as necessary. If the `port` argument is used or if readline is not present in the base system, add a dependency on package:devel/readline[].
[[uses-samba]]
== `samba`
Possible arguments: `build`, `env`, `lib`, `run`
Handle dependency on Samba. `env` will not add any dependency and only set up the variables. `build` and `run` will add build-time and run-time dependencies on [.filename]#smbd#. `lib` will add a dependency on [.filename]#libsmbclient.so#. The variables that are exported are:
`SAMBAPORT`::
The origin of the default Samba port.
`SAMBAINCLUDES`::
The location of the Samba header files.
`SAMBALIBS`::
The directory where the Samba shared libraries are available.
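A hypothetical fragment for a port linking against [.filename]#libsmbclient.so# (the configure option names are illustrative, not taken from any real port):
[.programlisting]
....
USES= samba:lib
CONFIGURE_ARGS= --with-smbclient-includes=${SAMBAINCLUDES} --with-smbclient-libs=${SAMBALIBS}
....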
[[uses-scons]]
== `scons`
Possible arguments: (none)
Provide support for the use of package:devel/scons[]. See crossref:special[using-scons,Using `scons`] for more information.
[[uses-shared-mime-info]]
== `shared-mime-info`
Possible arguments: (none)
Uses update-mime-database from package:misc/shared-mime-info[]. This `USES` automatically adds a post-install step in such a way that the port itself can still specify its own post-install step if needed. It also adds a crossref:plist[plist-keywords-shared-mime-info,`@shared-mime-info`] entry to the plist.
[[uses-shebangfix]]
== `shebangfix`
Possible arguments: (none)
A lot of software uses incorrect locations for script interpreters, most notably [.filename]#/usr/bin/perl# and [.filename]#/bin/bash#. The shebangfix macro fixes shebang lines in scripts listed in `SHEBANG_REGEX`, `SHEBANG_GLOB`, or `SHEBANG_FILES`.
`SHEBANG_REGEX`::
Contains _one_ extended regular expression, used with the `-iregex` argument of man:find[1]. See <<uses-shebangfix-ex-regex>>.
`SHEBANG_GLOB`::
Contains a list of patterns used with the `-name` argument of man:find[1]. See <<uses-shebangfix-ex-glob>>.
`SHEBANG_FILES`::
Contains a list of files or man:sh[1] globs. The shebangfix macro is run from `${WRKSRC}`, so `SHEBANG_FILES` can contain paths that are relative to `${WRKSRC}`. It can also deal with absolute paths if files outside of `${WRKSRC}` require patching. See <<uses-shebangfix-ex-files>>.
Currently Bash, Java, Ksh, Lua, Perl, PHP, Python, Ruby, Tcl, and Tk are supported by default.
There are three configuration variables:
`SHEBANG_LANG`::
The list of supported interpreters.
`_interp__CMD`::
The path to the command interpreter on FreeBSD. The default value is `${LOCALBASE}/bin/_interp_`.
`_interp__OLD_CMD`::
The list of wrong invocations of interpreters. These are typically obsolete paths, or paths used on other operating systems that are incorrect on FreeBSD. They will be replaced by the correct path in `_interp__CMD`.
+
[NOTE]
====
These will _always_ be part of `_interp__OLD_CMD`: `"/usr/bin/env _interp_" /bin/_interp_ /usr/bin/_interp_ /usr/local/bin/_interp_`.
====
+
[TIP]
====
`_interp__OLD_CMD` can contain multiple values. Any entry with spaces must be quoted. See <<uses-shebangfix-ex-ksh>>.
====
[IMPORTANT]
====
The fixing of shebangs is done during the `patch` phase. If scripts are created with incorrect shebangs during the `build` phase, the build process (for example, the [.filename]#configure# script, or the [.filename]#Makefiles#) must be patched or given the right path (for example, with `CONFIGURE_ENV`, `CONFIGURE_ARGS`, `MAKE_ENV`, or `MAKE_ARGS`) to generate the right shebangs.
Correct paths for supported interpreters are available in `_interp__CMD`.
====
[TIP]
====
When used with <<uses-python,`USES=python`>>, and the aim is only to fix the shebangs but a dependency on Python itself is not wanted, use `PYTHON_NO_DEPENDS=yes`.
====
[[uses-shebangfix-ex-lua]]
.Adding Another Interpreter to `USES=shebangfix`
[example]
====
To add another interpreter, set `SHEBANG_LANG`. For example:
[.programlisting]
....
SHEBANG_LANG= lua
....
====
[[uses-shebangfix-ex-ksh]]
.Specifying all the Paths When Adding an Interpreter to `USES=shebangfix`
[example]
====
If it was not already defined, and there were no default values for `_interp__OLD_CMD` and `_interp__CMD`, the Ksh entry could be defined as:
[.programlisting]
....
SHEBANG_LANG= ksh
ksh_OLD_CMD= "/usr/bin/env ksh" /bin/ksh /usr/bin/ksh
ksh_CMD= ${LOCALBASE}/bin/ksh
....
====
[[uses-shebangfix-ex-strange]]
.Adding a Strange Location for an Interpreter
[example]
====
Some software uses strange locations for an interpreter. For example, an application might expect Python to be located in [.filename]#/opt/bin/python2.7#. The strange path to be replaced can be declared in the port [.filename]#Makefile#:
[.programlisting]
....
python_OLD_CMD= /opt/bin/python2.7
....
====
[[uses-shebangfix-ex-regex]]
.`USES=shebangfix` with `SHEBANG_REGEX`
[example]
====
To fix all the files in `${WRKSRC}/scripts` ending in [.filename]#.pl#, [.filename]#.sh#, or [.filename]#.cgi# do:
[.programlisting]
....
USES= shebangfix
SHEBANG_REGEX= ./scripts/.*\.(sh|pl|cgi)
....
[NOTE]
======
`SHEBANG_REGEX` is used by running `find -E`, which uses modern regular expressions also known as extended regular expressions. See man:re_format[7] for more information.
======
====
[[uses-shebangfix-ex-glob]]
.`USES=shebangfix` with `SHEBANG_GLOB`
[example]
====
To fix all the files in `${WRKSRC}` ending in [.filename]#.pl# or [.filename]#.sh#, do:
[.programlisting]
....
USES= shebangfix
SHEBANG_GLOB= *.sh *.pl
....
====
[[uses-shebangfix-ex-files]]
.`USES=shebangfix` with `SHEBANG_FILES`
[example]
====
To fix the files [.filename]#script/foobar.pl# and [.filename]#script/*.sh# in `${WRKSRC}`, do:
[.programlisting]
....
USES= shebangfix
SHEBANG_FILES= scripts/foobar.pl scripts/*.sh
....
====
[[uses-sqlite]]
== `sqlite`
Possible arguments: (none), `2`, `3`
Add a dependency on SQLite. The default version used is 3, but version 2 is also possible using the `:2` modifier.
[[uses-ssl]]
== `ssl`
Possible arguments: (none), `build`, `run`
Provide support for OpenSSL. A build- or run-time only dependency can be specified using `build` or `run`. These variables are available for the port's use; they are also added to `MAKE_ENV`:
`OPENSSLBASE`::
Path to the OpenSSL installation base.
`OPENSSLDIR`::
Path to OpenSSL's configuration files.
`OPENSSLLIB`::
Path to the OpenSSL libraries.
`OPENSSLINC`::
Path to the OpenSSL includes.
`OPENSSLRPATH`::
If defined, the path the linker needs to use to find the OpenSSL libraries.
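For example, a hypothetical port could pass these locations to its configure script (the option names are illustrative):
[.programlisting]
....
USES= ssl
CONFIGURE_ARGS= --with-openssl-include=${OPENSSLINC} --with-openssl-lib=${OPENSSLLIB}
....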
[TIP]
====
If a port does not build with an OpenSSL flavor, set the `BROKEN_SSL` variable, and possibly the `BROKEN_SSL_REASON__flavor_`:
[.programlisting]
....
BROKEN_SSL= libressl
BROKEN_SSL_REASON_libressl= needs features only available in OpenSSL
....
====
[[uses-tar]]
== `tar`
Possible arguments: (none), `Z`, `bz2`, `bzip2`, `lzma`, `tbz`, `tbz2`, `tgz`, `txz`, `xz`
Set `EXTRACT_SUFX` to `.tar`, `.tar.Z`, `.tar.bz2`, `.tar.bz2`, `.tar.lzma`, `.tbz`, `.tbz2`, `.tgz`, `.txz` or `.tar.xz` respectively.
[[uses-tcl]]
== `tcl`
Possible arguments: _version_, `wrapper`, `build`, `run`, `tea`
Add a dependency on Tcl. A specific version can be requested using _version_. The version can be empty, one or more exact version numbers (currently `84`, `85`, or `86`), or a minimal version number (currently `84+`, `85+` or `86+`). To only request a non version specific wrapper, use `wrapper`. A build- or run-time only dependency can be specified using `build` or `run`. To build the port using the Tcl Extension Architecture, use `tea`. After including [.filename]#bsd.port.pre.mk# the port can inspect the results using these variables:
* `TCL_VER`: chosen major.minor version of Tcl
* `TCLSH`: full path of the Tcl interpreter
* `TCL_LIBDIR`: path of the Tcl libraries
* `TCL_INCLUDEDIR`: path of the Tcl C header files
* `TK_VER`: chosen major.minor version of Tk
* `WISH`: full path of the Tk interpreter
* `TK_LIBDIR`: path of the Tk libraries
* `TK_INCLUDEDIR`: path of the Tk C header files
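A minimal sketch of a hypothetical port that requires Tcl 8.6 or newer and rewrites a bundled script to use the chosen interpreter (the script name is illustrative only):
[.programlisting]
....
USES= tcl:86+

post-patch:
	@${REINPLACE_CMD} -e 's|/usr/bin/tclsh|${TCLSH}|' ${WRKSRC}/bin/myscript.tcl
....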
[[uses-terminfo]]
== `terminfo`
Possible arguments: (none)
Adds crossref:plist[plist-keywords-terminfo,`@terminfo`] to the [.filename]#plist#. Use when the port installs [.filename]#*.terminfo# files in [.filename]#${PREFIX}/share/misc#.
[[uses-tk]]
== `tk`
Same as arguments for `tcl`
Small wrapper when using both Tcl and Tk. The same variables are returned as when using Tcl.
[[uses-uidfix]]
== `uidfix`
Possible arguments: (none)
Changes some default behavior (mostly variables) of the build system to allow installing this port as a normal user. Try this in the port before using <<uses-fakeroot,USES=fakeroot>> or patching.
[[uses-uniquefiles]]
== `uniquefiles`
Possible arguments: (none), `dirs`
Make files or directories 'unique' by adding a prefix or suffix. If the `dirs` argument is used, the port needs a prefix (and only a prefix) based on `UNIQUE_PREFIX` for the standard directories `DOCSDIR`, `EXAMPLESDIR`, `DATADIR`, `WWWDIR`, and `ETCDIR`. These variables are available for ports (a short sketch follows the list below):
* `UNIQUE_PREFIX`: The prefix to be used for directories and files. Default: `${PKGNAMEPREFIX}`.
* `UNIQUE_PREFIX_FILES`: A list of files that need to be prefixed. Default: empty.
* `UNIQUE_SUFFIX`: The suffix to be used for files. Default: `${PKGNAMESUFFIX}`.
* `UNIQUE_SUFFIX_FILES`: A list of files that need to be suffixed. Default: empty.
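As an illustration only (the file name is hypothetical), a port that installs a generically named program could prefix it to avoid conflicts with other ports:
[.programlisting]
....
USES= uniquefiles
UNIQUE_PREFIX_FILES= bin/server
....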
[[uses-varnish]]
== `varnish`
Possible arguments: `4`, `5`
Handle dependencies on Varnish Cache. `4` will add a dependency on package:www/varnish4[]. `5` will add a dependency on package:www/varnish5[].
[[uses-webplugin]]
== `webplugin`
Possible arguments: (none), `ARGS`
Automatically create and remove symbolic links for each application that supports the webplugin framework. `ARGS` can be one of:
* `gecko`: support plug-ins based on Gecko
* `native`: support plug-ins for Gecko, Opera, and WebKit-GTK
* `linux`: support Linux plug-ins
* `all` (default, implicit): support all plug-in types
* (individual entries): support only the browsers listed
These variables can be adjusted (a short sketch follows the list below):
* `WEBPLUGIN_FILES`: No default, must be set manually. The plug-in files to install.
* `WEBPLUGIN_DIR`: The directory to install the plug-in files to, default [.filename]#PREFIX/lib/browser_plugins/WEBPLUGIN_NAME#. Set this if the port installs plug-in files outside of the default directory to prevent broken symbolic links.
* `WEBPLUGIN_NAME`: The final directory to install the plug-in files into, default `PKGBASE`.
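A minimal sketch, assuming a hypothetical plug-in file name, for a port registering a single native plug-in:
[.programlisting]
....
USES= webplugin:native
WEBPLUGIN_FILES= libexampleplugin.so
....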
[[uses-xfce]]
== `xfce`
Possible arguments: (none), `gtk2`
Provide support for Xfce related ports. See crossref:special[using-xfce,Using Xfce] for details.
The `gtk2` argument specifies that the port requires GTK2 support. It adds additional features provided by some core components, for example, package:x11/libxfce4menu[] and package:x11-wm/xfce4-panel[].
[[uses-xorg]]
== `xorg`
Possible arguments: (none)
Provides an easy way to depend on X.org components. The components should be listed in `USE_XORG`. The available components are listed in the table below, followed by a short usage sketch:
[[using-x11-components]]
.Available X.Org Components
[cols="1,1", frame="none", options="header"]
|===
| Name
| Description
|`dmx`
|DMX extension library
|`fontenc`
|The fontenc Library
|`fontutil`
|Create an index of X font files in a directory
|`ice`
|Inter Client Exchange library for X11
|`libfs`
|The FS library
|`pciaccess`
|Generic PCI access library
|`pixman`
|Low-level pixel manipulation library
|`sm`
|Session Management library for X11
|`x11`
|X11 library
|`xau`
|Authentication Protocol library for X11
|`xaw`
|X Athena Widgets library
|`xaw6`
|X Athena Widgets library
|`xaw7`
|X Athena Widgets library
|`xbitmaps`
|X.Org bitmaps data
|`xcb`
|The X protocol C-language Binding (XCB) library
|`xcomposite`
|X Composite extension library
|`xcursor`
|X client-side cursor loading library
|`xdamage`
|X Damage extension library
|`xdmcp`
|X Display Manager Control Protocol library
|`xext`
|X11 Extension library
|`xfixes`
|X Fixes extension library
|`xfont`
|X font library
|`xfont2`
|X font library
|`xft`
|Client-sided font API for X applications
|`xi`
|X Input extension library
|`xinerama`
|X11 Xinerama library
|`xkbfile`
|XKB file library
|`xmu`
|X Miscellaneous Utilities libraries
|`xmuu`
|X Miscellaneous Utilities libraries
|`xorg-macros`
|X.Org development aclocal macros
|`xorg-server`
|X.Org X server and related programs
|`xorgproto`
|xorg protocol headers
|`xpm`
|X Pixmap library
|`xpresent`
|X Present Extension library
|`xrandr`
|X Resize and Rotate extension library
|`xrender`
|X Render extension library
|`xres`
|X Resource usage library
|`xscrnsaver`
|The XScrnSaver library
|`xshmfence`
|Shared memory 'SyncFence' synchronization primitive
|`xt`
|X Toolkit library
|`xtrans`
|Abstract network code for X
|`xtst`
|X Test extension
|`xv`
|X Video Extension library
|`xvmc`
|X Video Extension Motion Compensation library
|`xxf86dga`
|X DGA Extension
|`xxf86vm`
|X Vidmode Extension
|===
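For example, a port linking against the core X11 library plus the Xrandr and Xrender extensions might declare (illustrative only):
[.programlisting]
....
USES= xorg
USE_XORG= x11 xext xrandr xrender
....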
[[uses-xorg-cat]]
== `xorg-cat`
Possible arguments: `app`, `data`, `doc`, `driver`, `font`, `lib`, `proto`, `util`, `xserver` and (none) or one of `autotools` (default), `meson`
Provide support for building Xorg components. It takes care of setting up the common dependencies and the configuration environment needed for the build. This is intended only for Xorg components.
The category has to match upstream categories.
The second argument is the build system to use. `autotools` is the default, but `meson` is also supported.
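A minimal sketch for a hypothetical X.Org library component built with meson might look like:
[.programlisting]
....
USES= xorg-cat:lib,meson
....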
[[uses-zip]]
== `zip`
Possible arguments: (none), `infozip`
Indicates that the distribution files use the ZIP compression algorithm. For files using the InfoZip algorithm the `infozip` argument must be passed to set the appropriate dependencies.
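For example, a port whose distfile was created with InfoZip would declare (a minimal sketch):
[.programlisting]
....
USES= zip:infozip
....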
diff --git a/documentation/content/en/books/porters-handbook/versions/_index.adoc b/documentation/content/en/books/porters-handbook/versions/_index.adoc
index 906490dfb9..463c577fad 100644
--- a/documentation/content/en/books/porters-handbook/versions/_index.adoc
+++ b/documentation/content/en/books/porters-handbook/versions/_index.adoc
@@ -1,6558 +1,6559 @@
---
title: Chapter 18. __FreeBSD_version Values
prev: books/porters-handbook/uses
+description: A list of changes into the sys/param.h file
---
[[versions]]
= `__FreeBSD_version` Values
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:source-highlighter: rouge
:experimental:
:skip-front-matter:
:xrefstyle: basic
:relfileprefix: ../
:outfilesuffix:
:sectnumoffset: 18
include::shared/mirrors.adoc[]
include::shared/authors.adoc[]
include::shared/releases.adoc[]
include::shared/en/mailing-lists.adoc[]
include::shared/en/teams.adoc[]
include::shared/en/urls.adoc[]
toc::[]
Here is a convenient list of `__FreeBSD_version` values as defined in https://cgit.freebsd.org/src/tree/sys/sys/param.h[sys/param.h]:
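As a hedged illustration of how these values are typically consumed, a port [.filename]#Makefile# can compare `${OSVERSION}`, which the ports framework derives from `__FreeBSD_version`, after including [.filename]#bsd.port.options.mk#; the threshold and patch name below are hypothetical:
[.programlisting]
....
.include <bsd.port.options.mk>

.if ${OSVERSION} >= 1300139
EXTRA_PATCHES+= ${FILESDIR}/extra-patch-new-linuxkpi
.endif
....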
[[versions-14]]
== FreeBSD 14 Versions
[[freebsd-versions-table-14]]
.FreeBSD 14 `__FreeBSD_version` Values
[cols="1,1,1,1", frame="none", options="header"]
|===
| Value
| Revision
| Date
| Release
|1400000
|gitref:a53ce3fc4938e37d5ec89304846203d2083c61a2[repository="src",length=12]
|January 22, 2021
|14.0-CURRENT.
|1400001
|gitref:739ecbcf1c4fd22b5f6ee0bb180a67644046a3e0[repository="src",length=12]
|January 23, 2021
|14.0-CURRENT after adding symlink support to lockless lookup.
|1400002
|gitref:2cf84258922f306a3f84866685d2f5346f67db58[repository="src",length=12]
|January 26, 2021
|14.0-CURRENT after fixing a clang assertion when building the package:devel/onetbb[] port.
|1400003
|gitref:d386f3a3c32f0396aa7995349dd65d6c59711393[repository="src",length=12]
|January 28, 2021
|14.0-CURRENT after adding various LinuxKPI bits conflicting with drm-kmod.
|1400004
|gitref:68f6800ce05c386ff045b4416d8595d09c4d8fdd[repository="src",length=12]
|February 8, 2021
|14.0-CURRENT after kernel interfaces for dispatching cryptographic operations were changed.
|1400005
|gitref:45eabf5754ac1d291bd677fdf29f59ce4bbc2c8f[repository="src",length=12]
|February 17, 2021
|14.0-CURRENT after changing the API of `ptrace(2)` `PT_GETDBREGS`/`PT_SETDBREGS` on arm64.
|1400006
|gitref:c96151d33509655efb7fb26768cb56a041c176f1[repository="src",length=12]
|March 17, 2021
|14.0-CURRENT after adding sndstat(4) enumeration ioctls.
|1400007
|gitref:d36d6816151705907393889d661cbfd25c630ca8[repository="src",length=12]
|April 6, 2021
|14.0-CURRENT after fixing wrong dlpi_tls_data.
|1400008
|gitref:e152bbecb221a592e7dbcabe3d1170a60f0d0dfe[repository="src",length=12]
|April 11, 2021
|14.0-CURRENT after changing the internal KAPI between the krpc and NFS.
|1400009
|gitref:9ca874cf740ee68c5742df8b5f9e20910085c011[repository="src",length=12]
|April 20, 2021
|14.0-CURRENT after adding TCP LRO support for VLAN and VxLAN.
|1400010
|gitref:a3a02acde1009f03dc78e979e051acee9f9247c2[repository="src",length=12]
|April 21, 2021
|14.0-CURRENT after changing the sndstat(4) ioctls nvlist schema and definitions.
|===
[[versions-13]]
== FreeBSD 13 Versions
[[freebsd-versions-table-13]]
.FreeBSD 13 `__FreeBSD_version` Values
[cols="1,1,1,1", frame="none", options="header"]
|===
| Value
| Revision
| Date
| Release
|1300000
|link:https://svnweb.freebsd.org/changeset/base/339436[339436]
|October 19, 2018
|13.0-CURRENT.
|1300001
|link:https://svnweb.freebsd.org/changeset/base/339730[339730]
|October 25, 2018
|13.0-CURRENT after bumping OpenSSL shared library version numbers.
|1300002
|link:https://svnweb.freebsd.org/changeset/base/339765[339765]
|October 25, 2018
|13.0-CURRENT after restoration of [.filename]#sys/joystick.h#.
|1300003
|link:https://svnweb.freebsd.org/changeset/base/340055[340055]
|November 2, 2018
|13.0-CURRENT after vop_symlink API change (`a_target` is now `const`).
|1300004
|link:https://svnweb.freebsd.org/changeset/base/340841[340841]
|November 23, 2018
|13.0-CURRENT after enabling crtbegin and crtend code.
|1300005
|link:https://svnweb.freebsd.org/changeset/base/341836[341836]
|December 11, 2018
|13.0-CURRENT after enabling UFS inode checksums.
|1300006
|link:https://svnweb.freebsd.org/changeset/base/342398[342398]
|December 24, 2018
|13.0-CURRENT after fixing `sys/random.h` include to be usable from C++.
|1300007
|link:https://svnweb.freebsd.org/changeset/base/342629[342629]
|December 30, 2018
|13.0-CURRENT after changing the size of `struct linux_cdev` on 32-bit platforms.
|1300008
|link:https://svnweb.freebsd.org/changeset/base/342772[342772]
|January 4, 2019
|13.0-CURRENT after adding `kern.smp.threads_per_core` and `kern.smp.cores` sysctls.
|1300009
|link:https://svnweb.freebsd.org/changeset/base/343213[343213]
|January 20, 2019
|13.0-CURRENT after `struct ieee80211vap` structure change to resolve ioctl/detach race for ieee80211com structure.
|1300010
|link:https://svnweb.freebsd.org/changeset/base/343485[343485]
|January 27, 2019
|13.0-CURRENT after increasing `SPECNAMELEN` from 63 to MAXNAMELEN (255).
|1300011
|link:https://svnweb.freebsd.org/changeset/base/344041[344041]
|February 12, 2019
|13.0-CURRENT after man:renameat[2] has been corrected to work with kernels built with the `CAPABILITIES` option.
|1300012
|link:https://svnweb.freebsd.org/changeset/base/344062[344062]
|February 12, 2019
|13.0-CURRENT after `taskqgroup_attach()` and `taskqgroup_attach_cpu()` take a device_t and a struct resource pointer as arguments for denoting device interrupts.
|1300013
|link:https://svnweb.freebsd.org/changeset/base/344300[344300]
|February 19, 2019
|13.0-CURRENT after the removal of drm and drm2.
|1300014
|link:https://svnweb.freebsd.org/changeset/base/344779[344779]
|March 4, 2019
|13.0-CURRENT after upgrading clang, llvm, lld, lldb, compiler-rt and libc++ to 8.0.0 rc3.
|1300015
|link:https://svnweb.freebsd.org/changeset/base/345196[345196]
|March 15, 2019
|13.0-CURRENT after deanonymizing thread and proc state enums, so userland applications can use them without redefining the value names.
|1300016
|link:https://svnweb.freebsd.org/changeset/base/345236[345236]
|March 16, 2019
|13.0-CURRENT after enabling LLVM OpenMP 8.0.0 rc5 on amd64 by default.
|1300017
|link:https://svnweb.freebsd.org/changeset/base/345305[345305]
|March 19, 2019
|13.0-CURRENT after exposing the Rx mbuf buffer size to drivers in iflib.
|1300018
|link:https://svnweb.freebsd.org/changeset/base/346012[346012]
|March 16, 2019
|13.0-CURRENT after introduction of funlinkat syscall in link:https://svnweb.freebsd.org/changeset/base/345982[345982].
|1300019
|link:https://svnweb.freebsd.org/changeset/base/346282[346282]
|April 16, 2019
|13.0-CURRENT after addition of is_random_seeded(9) to man:random[4].
|1300020
|link:https://svnweb.freebsd.org/changeset/base/346358[346358]
|April 18, 2019
|13.0-CURRENT after restoring man:random[4] availability tradeoff prior to link:https://svnweb.freebsd.org/changeset/base/346250[346250] and adding new tunables and diagnostic sysctls for programmatically discovering early seeding problems after boot.
|1300021
|link:https://svnweb.freebsd.org/changeset/base/346645[346645]
|April 24, 2019
|13.0-CURRENT after LinuxKPI uses man:bus_dma[9] to be compatible with an IOMMU.
|1300022
|link:https://svnweb.freebsd.org/changeset/base/347089[347089]
|May 4, 2019
|13.0-CURRENT after fixing regression issue after link:https://svnweb.freebsd.org/changeset/base/346645[346645] in the LinuxKPI.
|1300023
|link:https://svnweb.freebsd.org/changeset/base/347192[347192]
|May 6, 2019
|13.0-CURRENT after list-ifying kernel dump device configuration.
|1300024
|link:https://svnweb.freebsd.org/changeset/base/347325[347325]
|May 8, 2019
|13.0-CURRENT after bumping the Mellanox driver version numbers (man:mlx4en[4]; man:mlx5en[4]).
|1300025
|link:https://svnweb.freebsd.org/changeset/base/347532[347532]
|May 13, 2019
|13.0-CURRENT after renaming `vm.max_wired` to `vm.max_user_wired` and changing its type.
|1300026
|link:https://svnweb.freebsd.org/changeset/base/347596[347596]
|May 14, 2019
|13.0-CURRENT after adding context member to ww_mutex in LinuxKPI.
|1300027
|link:https://svnweb.freebsd.org/changeset/base/347601[347601]
|May 14, 2019
|13.0-CURRENT after adding prepare to pm_ops in LinuxKPI.
|1300028
|link:https://svnweb.freebsd.org/changeset/base/347925[347925]
|May 17, 2019
|13.0-CURRENT after removal of bm, cs, de, ed, ep, ex, fe, pcn, sf, sn, tl, tx, txp, vx, wb, and xe drivers.
|1300029
|link:https://svnweb.freebsd.org/changeset/base/347984[347984]
|May 20, 2019
|13.0-CURRENT after removing some header pollution due to `sys/eventhandler.h`. Affected files may now need to explicitly include one or more of `sys/eventhandler.h`, `sys/ktr.h`, `sys/lock.h`, or `sys/mutex.h`, when the missing header may have been included implicitly prior to 1300029.
|1300030
|link:https://svnweb.freebsd.org/changeset/base/348350[348350]
|May 29, 2019
|13.0-CURRENT after adding relocation support to libdwarf on powerpc64 to fix handling of DWARF information on unlinked objects. Original commit in link:https://svnweb.freebsd.org/changeset/base/348347[348347].
|1300031
|link:https://svnweb.freebsd.org/changeset/base/348808[348808]
|June 8, 2019
|13.0-CURRENT after adding dpcpu and vnet section fixes to i386 kernel modules to avoid panics in certain conditions. i386 kernel modules need to be recompiled with the linker script magic in place or they will refuse to load.
|1300032
|link:https://svnweb.freebsd.org/changeset/base/349151[349151]
|June 17, 2019
|13.0-CURRENT after separating kernel crc32() implementation to its own header (gsb_crc32.h) and renaming the source to gsb_crc32.c.
|1300033
|link:https://svnweb.freebsd.org/changeset/base/349277[349277]
|June 21, 2019
|13.0-CURRENT after additions to LinuxKPI's `rcu` list.
|1300034
|link:https://svnweb.freebsd.org/changeset/base/349352[349352]
|June 24, 2019
|13.0-CURRENT after NAND and NANDFS removal.
|1300035
|link:https://svnweb.freebsd.org/changeset/base/349846[349846]
|July 8, 2019
|13.0-CURRENT after merging the vm_page hold and wire mechanisms.
|1300036
|link:https://svnweb.freebsd.org/changeset/base/349972[349972]
|July 13, 2019
|13.0-CURRENT after adding arm_drain_writebuf() and arm_sync_icache() for compatibility with NetBSD and OpenBSD.
|1300037
|link:https://svnweb.freebsd.org/changeset/base/350307[350307]
|July 24, 2019
|13.0-CURRENT after removal of libcap_random(3).
|1300038
|link:https://svnweb.freebsd.org/changeset/base/350437[350437]
|July 30, 2019
|13.0-CURRENT after removal of gzip'ed a.out support.
|1300039
|link:https://svnweb.freebsd.org/changeset/base/350665[350665]
|August 7, 2019
|13.0-CURRENT after merge of fusefs from projects/fuse2.
|1300040
|link:https://svnweb.freebsd.org/changeset/base/351140[351140]
|August 16, 2019
|13.0-CURRENT after deletion of sys/dir.h which has been deprecated since 1997.
|(not changed)
|link:https://svnweb.freebsd.org/changeset/base/351423[351423]
|August 23, 2019
|13.0-CURRENT after changing most arguments to man:ping6[8].
|1300041
|link:https://svnweb.freebsd.org/changeset/base/351480[351480]
|August 25, 2019
|13.0-CURRENT after removal of zlib 1.0.4 after the completion of kernel zlib unification.
|1300042
|link:https://svnweb.freebsd.org/changeset/base/351522[351522]
|August 27, 2019
|13.0-CURRENT after addition of kernel-side support for in-kernel TLS.
|1300043
|link:https://svnweb.freebsd.org/changeset/base/351698[351698]
|September 2, 2019
|13.0-CURRENT after removal of man:gets[3].
|1300044
|link:https://svnweb.freebsd.org/changeset/base/351701[351701]
|September 2, 2019
|13.0-CURRENT after adding sysfs create/remove functions that handle multiple files in one call to the LinuxKPI.
|1300045
|link:https://svnweb.freebsd.org/changeset/base/351729[351729]
|September 3, 2019
|13.0-CURRENT after adding the sysctlbyname system call.
|1300046
|link:https://svnweb.freebsd.org/changeset/base/351937[351937]
|September 6, 2019
|13.0-CURRENT after LinuxKPI sysfs improvements.
|1300047
|link:https://svnweb.freebsd.org/changeset/base/352110[352110]
|September 9, 2019
|13.0-CURRENT after changing the synchronization rules for vm_page reference counting.
|1300048
|link:https://svnweb.freebsd.org/changeset/base/352700[352700]
|September 25, 2019
|13.0-CURRENT after adding a shm_open2 syscall to support the upcoming memfd_create syscall.
|1300049
|link:https://svnweb.freebsd.org/changeset/base/353274[353274]
|October 7, 2019
|13.0-CURRENT after factoring out the VNET shutdown check into an own vnet structure field.
|1300050
|link:https://svnweb.freebsd.org/changeset/base/353358[353358]
|October 9, 2019
|13.0-CURRENT after updating llvm, clang, compiler-rt, libc++, libunwind, lld, lldb and openmp to 9.0.0 final release r372316.
|1300051
|link:https://svnweb.freebsd.org/changeset/base/353685[353685]
|October 17, 2019
|13.0-CURRENT after splitting out a more generic debugnet(4) from man:netdump[4].
|1300052
|link:https://svnweb.freebsd.org/changeset/base/353698[353698]
|October 17, 2019
|13.0-CURRENT after promoting the page busy field to a first class lock that no longer requires the object lock for consistency.
|1300053
|link:https://svnweb.freebsd.org/changeset/base/353700[353700]
|October 17, 2019
|13.0-CURRENT after implementing NetGDB.
|1300054
|link:https://svnweb.freebsd.org/changeset/base/353868[353868]
|October 21, 2019
|13.0-CURRENT after removing obsoleted KPIs that were used to access interface address lists.
|1300055
|link:https://svnweb.freebsd.org/changeset/base/354335[354335]
|November 4, 2019
|13.0-CURRENT after enabling device class group attributes in the LinuxKPI.
|1300056
|link:https://svnweb.freebsd.org/changeset/base/354460[354460]
|November 7, 2019
|13.0-CURRENT after fixing a potential OOB read security issue in libc++.
|1300057
|link:https://svnweb.freebsd.org/changeset/base/354694[354694]
|November 13, 2019
|13.0-CURRENT after adding support for AT_EXECPATH to elf_aux_info(3).
|1300058
|link:https://svnweb.freebsd.org/changeset/base/354820[354820]
|November 18, 2019
|13.0-CURRENT after widening the vm_page aflags field to 16 bits.
|1300059
|link:https://svnweb.freebsd.org/changeset/base/354835[354835]
|November 18, 2019
|13.0-CURRENT after converting the in-tree sysent targets to use the new [.filename]#makesyscalls.lua#.
|1300060
|link:https://svnweb.freebsd.org/changeset/base/354922[354922]
|November 20, 2019
|13.0-CURRENT after adding [.filename]#/etc/os-release# as a symbolic link to [.filename]#/var/run/os-release#.
|1300061
|link:https://svnweb.freebsd.org/changeset/base/354977[354977]
|November 21, 2019
|13.0-CURRENT after adding functions to man:bitstring[3] to find contiguous sequences of set or unset bits.
|1300062
|link:https://svnweb.freebsd.org/changeset/base/355309[355309]
|December 2, 2019
|13.0-CURRENT after adding TCP_STATS support.
|1300063
|link:https://svnweb.freebsd.org/changeset/base/355537[355537]
|December 8, 2019
|13.0-CURRENT after removal of VI_DOOMED (use VN_IS_DOOMED instead).
|1300064
|link:https://svnweb.freebsd.org/changeset/base/355658[355658]
|December 9, 2019
|13.0-CURRENT after correcting the C++ version check for declaring man:timespec_get[3].
|1300065
|link:https://svnweb.freebsd.org/changeset/base/355643[355643]
|December 12, 2019
|13.0-CURRENT after adding sigsetop extensions commonly found in musl libc and glibc.
|1300066
|link:https://svnweb.freebsd.org/changeset/base/355679[355679]
|December 12, 2019
|13.0-CURRENT after changing the internal interface between the NFS modules as part of the introduction of NFS 4.2.
|1300067
|link:https://svnweb.freebsd.org/changeset/base/355732[355732]
|December 13, 2019
|13.0-CURRENT after removing the deprecated `callout_handle_init`, `timeout`, and `untimeout` functions.
|1300068
|link:https://svnweb.freebsd.org/changeset/base/355828[355828]
|December 16, 2019
|13.0-CURRENT after doubling the value of `ARG_MAX`, for 64 bit platforms.
|1300069
|link:https://svnweb.freebsd.org/changeset/base/356051[356051]
|December 24, 2019
|13.0-CURRENT after the addition of busdma templates.
|1300070
|link:https://svnweb.freebsd.org/changeset/base/356113[356113]
|December 27, 2019
|13.0-CURRENT after eliminating the last MI difference in AT_* definitions (for powerpc).
|1300071
|link:https://svnweb.freebsd.org/changeset/base/356135[356135]
|December 27, 2019
|13.0-CURRENT after making USB statistics be per-device instead of per bus.
|1300072
|link:https://svnweb.freebsd.org/changeset/base/356185[356185]
|December 29, 2019
|13.0-CURRENT after removal of GEOM_SCHED class and gsched tool.
|1300073
|link:https://svnweb.freebsd.org/changeset/base/356263[356263]
|January 2, 2020
|13.0-CURRENT after removing arm/arm as a valid target.
|1300074
|link:https://svnweb.freebsd.org/changeset/base/356337[356337]
|January 3, 2020
|13.0-CURRENT after removing flags argument from VOP_UNLOCK.
|1300075
|link:https://svnweb.freebsd.org/changeset/base/356409[356409]
|January 6, 2020
|13.0-CURRENT after adding own counter for cancelled USB transfers.
|1300076
|link:https://svnweb.freebsd.org/changeset/base/356511[356511]
|January 8, 2020
|13.0-CURRENT after pushing vnop implementation into the fileop layer in posix_fallocate.
|(not changed)
|link:https://svnweb.freebsd.org/changeset/base/357396[357396]
|February 2, 2020
|13.0-CURRENT after removal of armv5 architecture code from the src tree.
|1300077
|link:https://svnweb.freebsd.org/changeset/base/357455[357455]
|February 3, 2020
|13.0-CURRENT after removal of sparc64 architecture code from the src tree.
|1300078
|link:https://svnweb.freebsd.org/changeset/base/358020[358020]
|February 17, 2020
|13.0-CURRENT after changing `struct vnet` and the VNET magic cookie.
|1300079
|link:https://svnweb.freebsd.org/changeset/base/358164[358164]
|February 20, 2020
|13.0-CURRENT after upgrading ncurses to 6.2.x.
|1300080
|link:https://svnweb.freebsd.org/changeset/base/358172[358172]
|February 20, 2020
|13.0-CURRENT after adding realpathat syscall to VFS.
|1300081
|link:https://svnweb.freebsd.org/changeset/base/358218[358218]
|February 21, 2020
|13.0-CURRENT after recent linuxkpi changes.
|1300082
|link:https://svnweb.freebsd.org/changeset/base/358497[358497]
|March 1, 2020
|13.0-CURRENT after removal of man:bktr[4].
|1300083
|link:https://svnweb.freebsd.org/changeset/base/358834[358834]
|March 10, 2020
|13.0-CURRENT after removal of man:amd[8], r358821.
|1300084
|link:https://svnweb.freebsd.org/changeset/base/358851[358851]
|March 10, 2020
|13.0-CURRENT after updating llvm, clang, compiler-rt, libc++, libunwind, lld, lldb and openmp to 10.0.0-rc3 c290cb61fdc.
|1300085
|link:https://svnweb.freebsd.org/changeset/base/359261[359261]
|March 23, 2020
|13.0-CURRENT after the import of the kyua test framework.
|1300086
|link:https://svnweb.freebsd.org/changeset/base/359347[359347]
|March 26, 2020
|13.0-CURRENT after switching powerpc and powerpcspe to the lld linker.
|1300087
|link:https://svnweb.freebsd.org/changeset/base/359374[359374]
|March 27, 2020
|13.0-CURRENT after refactoring the driver and consumer interfaces for in-kernel cryptography.
|1300088
|link:https://svnweb.freebsd.org/changeset/base/359530[359530]
|April 1, 2020
|13.0-CURRENT after removing support for procfs process debugging.
|1300089
|link:https://svnweb.freebsd.org/changeset/base/359727[359727]
|April 8, 2020
|13.0-CURRENT after cloning the RCU interface into a sleepable and a non-sleepable part in the LinuxKPI.
|1300090
|link:https://svnweb.freebsd.org/changeset/base/359747[359747]
|April 9, 2020
|13.0-CURRENT after removing the old NFS lock device driver that uses Giant.
|1300091
|link:https://svnweb.freebsd.org/changeset/base/359839[359839]
|April 12, 2020
|13.0-CURRENT after implementing a close_range(2) syscall.
|1300092
|link:https://svnweb.freebsd.org/changeset/base/359920[359920]
|April 14, 2020
|13.0-CURRENT after reworking unmapped mbufs in KTLS to carry ext_pgs in the mbuf itself.
|1300093
|link:https://svnweb.freebsd.org/changeset/base/360418[360418]
|April 27, 2020
|13.0-CURRENT after adding support for kernel TLS receive offload.
|1300094
|link:https://svnweb.freebsd.org/changeset/base/360796[360796]
|May 7, 2020
|13.0-CURRENT after linuxkpi changes.
|1300095
|link:https://svnweb.freebsd.org/changeset/base/361275[361275]
|May 20, 2020
|13.0-CURRENT after adding HyperV socket support for FreeBSD guests.
|1300096
|link:https://svnweb.freebsd.org/changeset/base/361410[361410]
|May 23, 2020
|13.0-CURRENT after updating llvm, clang, compiler-rt, libc++, libunwind, lld, lldb and openmp to 10.0.1 rc1 f79cd71e145.
|1300097
|link:https://svnweb.freebsd.org/changeset/base/361724[361724]
|June 2, 2020
|13.0-CURRENT after implementing __is_constexpr() function macro in the LinuxKPI.
|1300098
|link:https://svnweb.freebsd.org/changeset/base/362159[362159]
|June 14, 2020
|13.0-CURRENT after changing the `export_args ex_flags` field so that it is 64 bits.
|1300099
|link:https://svnweb.freebsd.org/changeset/base/362453[362453]
|June 20, 2020
|13.0-CURRENT after making liblzma use libmd implementation of SHA256.
|1300100
|link:https://svnweb.freebsd.org/changeset/base/362640[362640]
|June 26, 2020
|13.0-CURRENT after changing the internal API between the NFS kernel modules.
|1300101
|link:https://svnweb.freebsd.org/changeset/base/363077[363077]
|July 10, 2020
|13.0-CURRENT after implementing the array_size() function in the LinuxKPI.
|1300102
|link:https://svnweb.freebsd.org/changeset/base/363562[363562]
|July 26, 2020
|13.0-CURRENT after implementing lockless lookup in the VFS layer.
|1300103
|link:https://svnweb.freebsd.org/changeset/base/363757[363757]
|August 1, 2020
|13.0-CURRENT after making rights mandatory for NDINIT_ALL.
|1300104
|link:https://svnweb.freebsd.org/changeset/base/363783[363783]
|August 2, 2020
|13.0-CURRENT after vnode layout changes.
|1300105
|link:https://svnweb.freebsd.org/changeset/base/363894[363894]
|August 5, 2020
|13.0-CURRENT after vaccess() change.
|1300106
|link:https://svnweb.freebsd.org/changeset/base/364092[364092]
|August 11, 2020
|13.0-CURRENT after adding an argument to newnfs_connect() that indicates the use of TLS for the connection.
|1300107
|link:https://svnweb.freebsd.org/changeset/base/364109[364109]
|August 11, 2020
|13.0-CURRENT after change to clone the task struct fields related to RCU.
|1300108
|link:https://svnweb.freebsd.org/changeset/base/364233[364233]
|August 14, 2020
|13.0-CURRENT after adding a few wait_bit functions to the linuxkpi, which are needed for DRM from Linux v5.4.
|1300109
|link:https://svnweb.freebsd.org/changeset/base/364274[364274]
|August 16, 2020
|13.0-CURRENT after vget() argument removal and namei flags renumbering.
|(not changed)
|link:https://svnweb.freebsd.org/changeset/base/364284[364284]
|August 16, 2020
|13.0-CURRENT after updating llvm, clang, compiler-rt, libc++, libunwind, lld, lldb and openmp to release/11.x llvmorg-11.0.0-rc1-47-gff47911ddfc.
|1300110
|link:https://svnweb.freebsd.org/changeset/base/364331[364331]
|August 18, 2020
|13.0-CURRENT after deleting the unused `use_ext` argument to `nfscl_reqstart()`.
|1300111
|link:https://svnweb.freebsd.org/changeset/base/364476[364476]
|August 22, 2020
|13.0-CURRENT after adding TLS support to the kernel RPC.
|1300112
|link:https://svnweb.freebsd.org/changeset/base/364747[364747]
|August 25, 2020
|13.0-CURRENT after merging OpenZFS support.
|1300113
|link:https://svnweb.freebsd.org/changeset/base/364753[364753]
|August 25, 2020
|13.0-CURRENT after adding atomic and bswap functions to libcompiler_rt.
|1300114
|link:https://svnweb.freebsd.org/changeset/base/365459[365459]
|September 8, 2020
|13.0-CURRENT after changing arm64 AT_HWCAP definitions for elf_aux_info(3).
|1300115
|link:https://svnweb.freebsd.org/changeset/base/365705[365705]
|September 14, 2020
|13.0-CURRENT after fixing man:crunchgen[1] application build with `WARNS=6`.
|1300116
|link:https://svnweb.freebsd.org/changeset/base/366062[366062]
|September 22, 2020
|13.0-CURRENT after the introduction of the powerpc64le ARCH.
|1300117
|link:https://svnweb.freebsd.org/changeset/base/366070[366070]
|September 23, 2020
|13.0-CURRENT after reimplementing purgevfs to iterate vnodes instead of the entire hash.
|1300118
|link:https://svnweb.freebsd.org/changeset/base/366374[366374]
|October 2, 2020
|13.0-CURRENT after adding backlight support and `dmi_*` functions to the linuxkpi.
|1300119
|link:https://svnweb.freebsd.org/changeset/base/366432[366432]
|October 6, 2020
|13.0-CURRENT after populating the acquire context field of a `ww_mutex` in the LinuxKPI.
|1300120
|link:https://svnweb.freebsd.org/changeset/base/366666[366666]
|October 13, 2020
|13.0-CURRENT after the fix to arm64 write-only mappings.
|1300121
|link:https://svnweb.freebsd.org/changeset/base/366719[366719]
|October 15, 2020
|13.0-CURRENT after the addition of `VOP_EAGAIN`.
|1300122
|link:https://svnweb.freebsd.org/changeset/base/366782[366782]
|October 17, 2020
|13.0-CURRENT after the addition of `ptsname_r`.
|1300123
|link:https://svnweb.freebsd.org/changeset/base/366871[366871]
|October 20, 2020
|13.0-CURRENT after `VOP`, `VPTOCNP`, and `INACTIVE` changes.
|1300124
|link:https://svnweb.freebsd.org/changeset/base/367162[367162]
|October 30, 2020
|13.0-CURRENT after adding `cache_vop_mkdir` and renaming `cache_rename` to `cache_vop_rename`.
|1300125
|link:https://svnweb.freebsd.org/changeset/base/367347[367347]
|November 4, 2020
|13.0-CURRENT after using a `rms` lock for teardown handling in `zfs`.
|1300126
|link:https://svnweb.freebsd.org/changeset/base/367384[367384]
|November 5, 2020
|13.0-CURRENT after rationalizing per-cpu zones.
|1300127
|link:https://svnweb.freebsd.org/changeset/base/367432[367432]
|November 6, 2020
|13.0-CURRENT after moving `malloc_type_internal` into `malloc_type`.
|1300128
|link:https://svnweb.freebsd.org/changeset/base/367522[367522]
|November 9, 2020
|13.0-CURRENT after LinuxKPI additions to implement ACPI bits required by `drm-kmod` in the base system.
|1300129
|link:https://svnweb.freebsd.org/changeset/base/367627[367627]
|November 12, 2020
|13.0-CURRENT after retiring malloc_last_fail.
|1300130
|link:https://svnweb.freebsd.org/changeset/base/367777[367777]
|November 17, 2020
|13.0-CURRENT after p_pd / pwddesc split from p_fd / filedesc.
|1300131
|link:https://svnweb.freebsd.org/changeset/base/368417[368417]
|December 7, 2020
|13.0-CURRENT after removal of crypto file descriptors.
|1300132
|link:https://svnweb.freebsd.org/changeset/base/368659[368659]
|December 15, 2020
|13.0-CURRENT after improving handling of alternate settings in the USB stack.
|1300133
|link:https://cgit.freebsd.org/src/commit/?id=2ed0c8d801f5f72dbde7a7d30135c1cc361a1e90[2ed0c8d801f5]
|December 23, 2020
|13.0-CURRENT after changing the internal API between the NFS and kernel RPC modules.
|1300134
|link:https://cgit.freebsd.org/src/commit/?id=a84b0e94cdbf1a17a798ab7f77375aacb4d400ff[a84b0e94cdbf]
|January 7, 2021
|13.0-CURRENT after factoring out the hardware-independent part of USB HID support to a new module.
|1300135
|link:https://cgit.freebsd.org/src/commit/?id=35a39dc5b34962081eeda8dbcf0b99a31585499b[35a39dc5b349]
|January 12, 2021
|13.0-CURRENT after adding `kernel_fpu_begin`/`kernel_fpu_end` to the LinuxKPI.
|1300136
|link:https://cgit.freebsd.org/src/commit/?id=72c551930be195b5ea982c1b16767f54388424f2[72c551930be1]
|January 17, 2021
|13.0-CURRENT after reimplementing LinuxKPI's `irq_work` queue on top of fast taskqueue.
|1300137
|link:https://cgit.freebsd.org/src/commit/?id=010196adcfaf2bb610725394d40691874b4ff2af[010196adcfaf]
|January 30, 2021
|13.0-CURRENT after fixing a clang assertion when building the package:devel/onetbb[] port.
|1300138
|link:https://cgit.freebsd.org/src/commit/?id=dcee9964238b12a8e55917f292139f074b1a80b2[dcee9964238b]
|February 1, 2021
|13.0-ALPHA3 after adding lockless symlink lookup to vfs cache.
|1300139
|link:https://cgit.freebsd.org/src/commit/?id=91a07ed50ffca4dfada3e7f1f050ea746c1bac66[91a07ed50ffc]
|February 2, 2021
|13.0-ALPHA3 after adding various LinuxKPI bits conflicting with drm-kmod.
|1300500
|link:https://cgit.freebsd.org/src/commit/?id=3c6a89748a01869c18955d5e3bfcdf35f6705d26[3c6a89748a01]
|February 5, 2021
|13.0-STABLE after releng/13.0 was branched.
|===
[[versions-12]]
== FreeBSD 12 Versions
[[freebsd-versions-table-12]]
.FreeBSD 12 `__FreeBSD_version` Values
[cols="1,1,1,1", frame="none", options="header"]
|===
| Value
| Revision
| Date
| Release
|1200000
|link:https://svnweb.freebsd.org/changeset/base/302409[302409]
|July 7, 2016
|12.0-CURRENT.
|1200001
|link:https://svnweb.freebsd.org/changeset/base/302628[302628]
|July 12, 2016
|12.0-CURRENT after removing collation from `[a-z]`-type ranges.
|1200002
|link:https://svnweb.freebsd.org/changeset/base/304395[304395]
|August 18, 2016
|12.0-CURRENT after removing unused and obsolete `openbsd_poll` system call.
|1200003
|link:https://svnweb.freebsd.org/changeset/base/304608[304608]
|August 22, 2016
|12.0-CURRENT after adding C++11 `thread_local` support in rev link:https://svnweb.freebsd.org/changeset/base/303795[303795].
|1200004
|link:https://svnweb.freebsd.org/changeset/base/304752[304752]
|August 24, 2016
|12.0-CURRENT after fixing LC*MASK for man:newlocale[3] and man:querylocale[3] (rev link:https://svnweb.freebsd.org/changeset/base/304703[304703]).
|1200005
|link:https://svnweb.freebsd.org/changeset/base/304789[304789]
|August 25, 2016
|12.0-CURRENT after changing some ioctl interfaces in rev link:https://svnweb.freebsd.org/changeset/base/304787[304787] between the iSCSI userspace programs and the kernel.
|1200006
|link:https://svnweb.freebsd.org/changeset/base/305256[305256]
|September 1, 2016
|12.0-CURRENT after man:crunchgen[1] META_MODE fix in link:https://svnweb.freebsd.org/changeset/base/305254[305254].
|1200007
|link:https://svnweb.freebsd.org/changeset/base/305421[305421]
|September 5, 2016
|12.0-CURRENT after resolving a deadlock between `device_detach()` and man:usbd_do_request_flags[9].
|1200008
|link:https://svnweb.freebsd.org/changeset/base/305833[305833]
|September 15, 2016
|12.0-CURRENT after removing the 4.3BSD compatible macro `m_copy()` in link:https://svnweb.freebsd.org/changeset/base/305824[305824].
|1200009
|link:https://svnweb.freebsd.org/changeset/base/306077[306077]
|September 21, 2016
|12.0-CURRENT after removing `bio_taskqueue()` in link:https://svnweb.freebsd.org/changeset/base/305988[305988].
|1200010
|link:https://svnweb.freebsd.org/changeset/base/306276[306276]
|September 23, 2016
|12.0-CURRENT after mounting man:msdosfs[5] with longnames support by default.
|1200011
|link:https://svnweb.freebsd.org/changeset/base/306556[306556]
|October 1, 2016
|12.0-CURRENT after adding `fb_memattr` field to `fb_info` in link:https://svnweb.freebsd.org/changeset/base/306555[306555].
|1200012
|link:https://svnweb.freebsd.org/changeset/base/306592[306592]
|October 2, 2016
|12.0-CURRENT after man:net80211[4] changes (rev link:https://svnweb.freebsd.org/changeset/base/306590[306590], link:https://svnweb.freebsd.org/changeset/base/306591[306591]).
|1200013
|link:https://svnweb.freebsd.org/changeset/base/307140[307140]
|October 12, 2016
|12.0-CURRENT after installing header files required for development with libzfs_core.
|1200014
|link:https://svnweb.freebsd.org/changeset/base/307529[307529]
|October 17, 2016
|12.0-CURRENT after merging common code in man:rtwn[4] and man:urtwn[4], and adding support for 802.11ac devices.
|1200015
|link:https://svnweb.freebsd.org/changeset/base/308874[308874]
|November 20, 2016
|12.0-CURRENT after some ABI change for unbreaking powerpc.
|1200016
|link:https://svnweb.freebsd.org/changeset/base/309017[309017]
|November 22, 2016
|12.0-CURRENT after removing `PG_CACHED`-related fields from `vmmeter`.
|1200017
|link:https://svnweb.freebsd.org/changeset/base/309124[309124]
|November 25, 2016
|12.0-CURRENT after upgrading our copies of clang, llvm, lldb, compiler-rt and libc++ to 3.9.0 release, and adding lld 3.9.0.
|1200018
|link:https://svnweb.freebsd.org/changeset/base/309676[309676]
|December 7, 2016
|12.0-CURRENT after adding the `ki_moretdname` member to `struct kinfo_proc` and `struct kinfo_proc32` to export the whole thread name to user-space utilities.
|1200019
|link:https://svnweb.freebsd.org/changeset/base/310149[310149]
|December 16, 2016
|12.0-CURRENT after starting to lay down the foundation for 11ac support.
|1200020
|link:https://svnweb.freebsd.org/changeset/base/312087[312087]
|January 13, 2017
|12.0-CURRENT after removing `fgetsock` and `fputsock`.
|1200021
|link:https://svnweb.freebsd.org/changeset/base/313858[313858]
|February 16, 2017
|12.0-CURRENT after removing MCA and EISA support.
|1200022
|link:https://svnweb.freebsd.org/changeset/base/314040[314040]
|February 21, 2017
|12.0-CURRENT after making the LinuxKPI task struct persistent across system calls.
|(not changed)
|link:https://svnweb.freebsd.org/changeset/base/314373[314373]
|March 2, 2017
|12.0-CURRENT after removing System V Release 4 binary compatibility support.
|1200023
|link:https://svnweb.freebsd.org/changeset/base/314564[314564]
|March 2, 2017
|12.0-CURRENT after upgrading our copies of clang, llvm, lld, lldb, compiler-rt and libc++ to 4.0.0.
|1200024
|link:https://svnweb.freebsd.org/changeset/base/314865[314865]
|March 7, 2017
|12.0-CURRENT after removal of [.filename]#pcap-int.h#
|1200025
|link:https://svnweb.freebsd.org/changeset/base/315430[315430]
|March 16, 2017
|12.0-CURRENT after addition of the [.filename]#<dev/mmc/mmc_ioctl.h># header.
|1200026
|link:https://svnweb.freebsd.org/changeset/base/315662[315662]
|March 16, 2017
|12.0-CURRENT after hiding `struct inpcb` and `struct tcpcb` from userland.
|1200027
|link:https://svnweb.freebsd.org/changeset/base/315673[315673]
|March 21, 2017
|12.0-CURRENT after making CAM SIM lock optional.
|1200028
|link:https://svnweb.freebsd.org/changeset/base/316683[316683]
|April 10, 2017
|12.0-CURRENT after renaming `smp_no_rendevous_barrier()` to `smp_no_rendezvous_barrier()` in link:https://svnweb.freebsd.org/changeset/base/316648[316648].
|1200029
|link:https://svnweb.freebsd.org/changeset/base/317176[317176]
|April 19, 2017
|12.0-CURRENT after the removal of `struct vmmeter` from `struct pcpu` from link:https://svnweb.freebsd.org/changeset/base/317061[317061].
|1200030
|link:https://svnweb.freebsd.org/changeset/base/317383[317383]
|April 24, 2017
|12.0-CURRENT after removing NATM support including man:en[4], man:fatm[4], man:hatm[4], and man:patm[4].
|1200031
|link:https://svnweb.freebsd.org/changeset/base/318736[318736]
|May 23, 2017
|12.0-CURRENT after types `ino_t`, `dev_t`, `nlink_t` were extended to 64bit and `struct dirent` changed layout (also known as ino64).
|1200032
|link:https://svnweb.freebsd.org/changeset/base/319664[319664]
|June 8, 2017
|12.0-CURRENT after removal of `groff`.
|1200033
|link:https://svnweb.freebsd.org/changeset/base/320043[320043]
|June 17, 2017
|12.0-CURRENT after the type of the `struct event` member `data` was increased to 64bit, and ext structure members added.
|1200034
|link:https://svnweb.freebsd.org/changeset/base/320085[320085]
|June 19, 2017
|12.0-CURRENT after the NFS client and server were changed so that they actually use the 64bit `ino_t`.
|1200035
|link:https://svnweb.freebsd.org/changeset/base/320317[320317]
|June 24, 2017
|12.0-CURRENT after the `MAP_GUARD` man:mmap[2] flag was added.
|1200036
|link:https://svnweb.freebsd.org/changeset/base/320347[320347]
|June 26, 2017
|12.0-CURRENT after changing `time_t` to 64 bits on powerpc (32-bit version).
|1200037
|link:https://svnweb.freebsd.org/changeset/base/320545[320545]
|July 1, 2017
|12.0-CURRENT after the cleanup and inlining of `bus_dmamap*` functions (link:https://svnweb.freebsd.org/changeset/base/320528[320528]).
|1200038
|link:https://svnweb.freebsd.org/changeset/base/320879[320879]
|July 10, 2017
|12.0-CURRENT after MMC CAM committed. (link:https://svnweb.freebsd.org/changeset/base/320844[320844]).
|1200039
|link:https://svnweb.freebsd.org/changeset/base/321369[321369]
|July 22, 2017
|12.0-CURRENT after upgrade of copies of clang, llvm, lld, lldb, compiler-rt and libc++ to 5.0.0 (trunk r308421).
|1200040
|link:https://svnweb.freebsd.org/changeset/base/321688[321688]
|July 29, 2017
|12.0-CURRENT after adding NFS client forced dismount support `umount -N`.
|1200041
|link:https://svnweb.freebsd.org/changeset/base/322762[322762]
|August 21, 2017
|12.0-CURRENT after the WRFSBASE instruction became operational on amd64.
|1200042
|link:https://svnweb.freebsd.org/changeset/base/322900[322900]
|August 25, 2017
|12.0-CURRENT after PLPMTUD counters were changed to use man:counter[9].
|1200043
|link:https://svnweb.freebsd.org/changeset/base/322989[322989]
|August 28, 2017
|12.0-CURRENT after dropping x86 CACHE_LINE_SIZE down to 64 bytes.
|1200044
|link:https://svnweb.freebsd.org/changeset/base/323349[323349]
|September 8, 2017
|12.0-CURRENT after implementing poll_wait() in the LinuxKPI.
|1200045
|link:https://svnweb.freebsd.org/changeset/base/323706[323706]
|September 18, 2017
|12.0-CURRENT after adding shared memory support to LinuxKPI. (link:https://svnweb.freebsd.org/changeset/base/323703[323703]).
|1200046
|link:https://svnweb.freebsd.org/changeset/base/323910[323910]
|September 22, 2017
|12.0-CURRENT after adding support for 32-bit compatibility IOCTLs to LinuxKPI.
|1200047
|link:https://svnweb.freebsd.org/changeset/base/324053[324053]
|September 26, 2017
|12.0-CURRENT after removing M_HASHTYPE_RSS_UDP_IPV4_EX. (link:https://svnweb.freebsd.org/changeset/base/324052[324052]).
|1200048
|link:https://svnweb.freebsd.org/changeset/base/324227[324227]
|October 2, 2017
|12.0-CURRENT after hiding `struct socket` and `struct unpcb` from userland.
|1200049
|link:https://svnweb.freebsd.org/changeset/base/324281[324281]
|October 4, 2017
|12.0-CURRENT after adding the `value.u16` field to `struct diocgattr_arg`.
|1200050
|link:https://svnweb.freebsd.org/changeset/base/324342[324342]
|October 5, 2017
|12.0-CURRENT after adding the `armv7 MACHINE_ARCH`. (link:https://svnweb.freebsd.org/changeset/base/324340[324340]).
|1200051
|link:https://svnweb.freebsd.org/changeset/base/324455[324455]
|October 9, 2017
|12.0-CURRENT after removing [.filename]#libstand.a# as a public interface. (link:https://svnweb.freebsd.org/changeset/base/324454[324454]).
|1200052
|link:https://svnweb.freebsd.org/changeset/base/325028[325028]
|October 26, 2017
|12.0-CURRENT after fixing `ptrace()` to always clear the correct thread event when resuming.
|1200053
|link:https://svnweb.freebsd.org/changeset/base/325506[325506]
|November 7, 2017
|12.0-CURRENT after changing `struct mbuf` layout to add optional hardware timestamps for receive packets.
|1200054
|link:https://svnweb.freebsd.org/changeset/base/325852[325852]
|November 15, 2017
|12.0-CURRENT after changing the layout of `struct vmtotal` to allow for reporting large memory counters.
|1200055
|link:https://svnweb.freebsd.org/changeset/base/327740[327740]
|January 9, 2018
|12.0-CURRENT after adding `cpucontrol -e` support.
|1200056
|link:https://svnweb.freebsd.org/changeset/base/327952[327952]
|January 14, 2018
|12.0-CURRENT after upgrading clang, llvm, lld, lldb, compiler-rt and libc++ to 6.0.0 (branches/release_60 r321788).
|1200057
|link:https://svnweb.freebsd.org/changeset/base/329033[329033]
|February 8, 2018
|12.0-CURRENT after applying a clang 6.0.0 fix to make the wine ports build correctly.
|1200058
|link:https://svnweb.freebsd.org/changeset/base/329166[329166]
|February 12, 2018
|12.0-CURRENT after the lua loader was committed.
|1200059
|link:https://svnweb.freebsd.org/changeset/base/330299[330299]
|March 2, 2018
|12.0-CURRENT after removing the declaration of `union semun` unless `_WANT_SEMUN` is defined. Also the removal of `struct mymsg` and the renaming of kernel-only members of `struct semid_ds` and `struct msgid_ds`.
|1200060
|link:https://svnweb.freebsd.org/changeset/base/330384[330384]
|March 4, 2018
|12.0-CURRENT after upgrading clang, llvm, lld, lldb, compiler-rt and libc++ to 6.0.0 release.
|1200061
|link:https://svnweb.freebsd.org/changeset/base/332100[332100]
|April 6, 2018
|12.0-CURRENT after changing man:syslog[3] to emit RFC 5424 formatted messages.
|1200062
|link:https://svnweb.freebsd.org/changeset/base/332423[332423]
|April 12, 2018
|12.0-CURRENT after changing the Netmap API.
|1200063
|link:https://svnweb.freebsd.org/changeset/base/333446[333446]
|May 10, 2018
|12.0-CURRENT after reworking CTL frontend and backend options to use man:nv[3], allow creating multiple ioctl frontend ports.
|1200064
|link:https://svnweb.freebsd.org/changeset/base/334074[334074]
|May 22, 2018
|12.0-CURRENT after changing the ifnet address and multicast address TAILQ to CK_STAILQ.
|1200065
|link:https://svnweb.freebsd.org/changeset/base/334290[334290]
|May 28, 2018
|12.0-CURRENT after changing man:dwatch[1] to allow '-E code' to override profile EVENT_DETAILS.
|1200066
|link:https://svnweb.freebsd.org/changeset/base/334466[334466]
|June 1, 2018
|12.0-CURRENT after removal of in-kernel pmc tables for Intel.
|1200067
|link:https://svnweb.freebsd.org/changeset/base/334892[334892]
|June 9, 2018
|12.0-CURRENT after adding DW_LANG constants to libdwarf.
|1200068
|link:https://svnweb.freebsd.org/changeset/base/334930[334930]
|June 12, 2018
|12.0-CURRENT after changing the interface between the NFS modules.
|1200069
|link:https://svnweb.freebsd.org/changeset/base/335237[335237]
|June 15, 2018
|12.0-CURRENT after changing `struct kerneldumpheader` to version 4 (similar to version 2 in 11-STABLE and previous).
|1200070
|link:https://svnweb.freebsd.org/changeset/base/335873[335873]
|July 2, 2018
|12.0-CURRENT after inlining man:atomic[9] in modules on amd64 and i386 requiring all modules of consumers to be rebuilt for these architectures.
|1200071
|link:https://svnweb.freebsd.org/changeset/base/335930[335930]
|July 4, 2018
|12.0-CURRENT after changing the ABI and API of man:epoch[9] (link:https://svnweb.freebsd.org/changeset/base/335924[335924]) requiring modules of consumers to be rebuilt.
|1200072
|link:https://svnweb.freebsd.org/changeset/base/335979[335979]
|July 5, 2018
|12.0-CURRENT after changing the ABI and API of `struct xinpcb` and friends.
|1200073
|link:https://svnweb.freebsd.org/changeset/base/336313[336313]
|July 15, 2018
|12.0-CURRENT after changing the ABI and API of `struct if_shared_ctx` and `struct if_softc_ctx` requiring modules of man:iflib[9] consumers to be rebuilt.
|1200074
|link:https://svnweb.freebsd.org/changeset/base/336360[336360]
|July 16, 2018
|12.0-CURRENT after updating the configuration of libstdc++ to make use of C99 functions.
|1200075
|link:https://svnweb.freebsd.org/changeset/base/336538[336538]
|July 19, 2018
|12.0-CURRENT after zfsloader being folded into loader, and after adding ntpd:ntpd as uid:gid 123:123, and after removing arm big-endian support (MACHINE_ARCH=armeb).
|1200076
|link:https://svnweb.freebsd.org/changeset/base/336914[336914]
|July 30, 2018
|12.0-CURRENT after KPI changes to timespecadd.
|1200077
|link:https://svnweb.freebsd.org/changeset/base/337576[337576]
|August 10, 2018
|12.0-CURRENT after man:timespec_get[3] was added to the system.
|1200078
|link:https://svnweb.freebsd.org/changeset/base/337863[337863]
|August 15, 2018
|12.0-CURRENT after exec.created hook for jails.
|1200079
|link:https://svnweb.freebsd.org/changeset/base/338061[338061]
|August 19, 2018
|12.0-CURRENT after converting `arc4random` to using the Chacha20 algorithm and deprecating `arc4random_stir` and `arc4random_addrandom`.
|1200080
|link:https://svnweb.freebsd.org/changeset/base/338172[338172]
|August 22, 2018
|12.0-CURRENT after removing the drm drivers.
|1200081
|link:https://svnweb.freebsd.org/changeset/base/338182[338182]
|August 21, 2018
|12.0-CURRENT after KPI changes to NVMe.
|1200082
|link:https://svnweb.freebsd.org/changeset/base/338285[338285]
|August 24, 2018
|12.0-CURRENT after reverting the removal of the drm drivers.
|1200083
|link:https://svnweb.freebsd.org/changeset/base/338331[338331]
|August 26, 2018
|12.0-CURRENT after removing `arc4random_stir` and `arc4random_addrandom`.
|1200084
|link:https://svnweb.freebsd.org/changeset/base/338478[338478]
|September 5, 2018
|12.0-CURRENT after updating man:objcopy[1] to properly handle little-endian MIPS64 object files.
|1200085
|link:https://svnweb.freebsd.org/changeset/base/339270[339270]
|October 19, 2018
|12.0-STABLE after updating OpenSSL to version 1.1.1.
|1200086
|link:https://svnweb.freebsd.org/changeset/base/339732[339732]
|October 25, 2018
|12.0-STABLE after updating OpenSSL shared library version numbers.
|1200500
|link:https://svnweb.freebsd.org/changeset/base/340471[340471]
|November 16, 2018
|12-STABLE after releng/12.0 was branched.
|1200501
|link:https://svnweb.freebsd.org/changeset/base/342801[342801]
|January 6, 2019
|12-STABLE after merge of fixing linux_destroy_dev() behaviour when there are still files open from the destroying cdev.
|1200502
|link:https://svnweb.freebsd.org/changeset/base/343126[343126]
|January 17, 2019
|12-STABLE after enabling sys/random.h #include from C++.
|1200503
|link:https://svnweb.freebsd.org/changeset/base/344152[344152]
|February 15, 2019
|12-STABLE after merge of fixing man:renameat[2] for CAPABILITIES kernels.
|1200504
|link:https://svnweb.freebsd.org/changeset/base/345169[345169]
|March 15, 2019
|12-STABLE after merging CCM for the benefit of the ZoF port.
|1200505
|link:https://svnweb.freebsd.org/changeset/base/345327[345327]
|March 20, 2019
|12-STABLE after merging support for selectively disabling ZFS without disabling loader.
|1200506
|link:https://svnweb.freebsd.org/changeset/base/346168[346168]
|April 12, 2019
|12-STABLE after merging llvm, clang, compiler-rt, libc++, libunwind, lld, lldb and openmp 8.0.0 final release r356365.
|1200507
|link:https://svnweb.freebsd.org/changeset/base/346337[346337]
|April 17, 2019
|12-STABLE after MFC of iflib changes in link:https://svnweb.freebsd.org/changeset/base/345303[345303], link:https://svnweb.freebsd.org/changeset/base/345658[345658], and partially of link:https://svnweb.freebsd.org/changeset/base/345305[345305].
|1200508
|link:https://svnweb.freebsd.org/changeset/base/346784[346784]
|April 27, 2019
|12-STABLE after ether_gen_addr availability.
|1200509
|link:https://svnweb.freebsd.org/changeset/base/347790[347790]
|May 16, 2019
|12-STABLE after bumping the Mellanox driver version numbers (man:mlx4en[4]; man:mlx5en[4]).
|1200510
|link:https://svnweb.freebsd.org/changeset/base/348036[348036]
|May 21, 2019
|12-STABLE after change to struct in linuxkpi from link:https://svnweb.freebsd.org/changeset/base/348035[348035].
|1200511
|link:https://svnweb.freebsd.org/changeset/base/348243[348243]
|May 24, 2019
|12-STABLE after MFC of link:https://svnweb.freebsd.org/changeset/base/347843[347843]: adding group_leader member to struct task_struct to the LinuxKPI.
|1200512
|link:https://svnweb.freebsd.org/changeset/base/348245[348245]
|May 24, 2019
|12-STABLE after adding context member to ww_mutex in LinuxKPI.
|1200513
|link:https://svnweb.freebsd.org/changeset/base/349763[349763]
|July 5, 2019
|12-STABLE after MFC of man:epoch[9] changes: link:https://svnweb.freebsd.org/changeset/base/349763[349763], link:https://svnweb.freebsd.org/changeset/base/340404[340404], link:https://svnweb.freebsd.org/changeset/base/340415[340415], link:https://svnweb.freebsd.org/changeset/base/340417[340417], link:https://svnweb.freebsd.org/changeset/base/340419[340419], link:https://svnweb.freebsd.org/changeset/base/340420[340420].
|1200514
|link:https://svnweb.freebsd.org/changeset/base/350083[350083]
|July 17, 2019
|12-STABLE after additions to LinuxKPI's rcu list.
|1200515
|link:https://svnweb.freebsd.org/changeset/base/350877[350877]
|August 11, 2019
|12-STABLE after MFC of link:https://svnweb.freebsd.org/changeset/base/349891[349891] (reorganize the SRCS lists as one file per line, and then alphabetize them) and link:https://svnweb.freebsd.org/changeset/base/349972[349972] (add arm_sync_icache() and arm_drain_writebuf() sysarch syscall wrappers).
|1200516
|link:https://svnweb.freebsd.org/changeset/base/351276[351276]
|August 20, 2019
|12-STABLE after MFC of various changes to iflib link:https://svnweb.freebsd.org/changeset/base/351276[351276].
|1200517
|link:https://svnweb.freebsd.org/changeset/base/352076[352076]
|September 9, 2019
|12-STABLE after adding sysfs create/remove functions that handle multiple files in one call to the LinuxKPI.
|1200518
|link:https://svnweb.freebsd.org/changeset/base/352114[352114]
|September 10, 2019
|12-STABLE after additional updates to LinuxKPI's sysfs.
|1200519
|link:https://svnweb.freebsd.org/changeset/base/352351[352351]
|September 15, 2019
|12-STABLE after MFC of the new fusefs driver.
|1201000
|link:https://svnweb.freebsd.org/changeset/base/352546[352546]
|September 20, 2019
|releng/12.1 branched from stable/12@r352480.
|1201500
|link:https://svnweb.freebsd.org/changeset/base/352547[352547]
|September 20, 2019
|12-STABLE after branching releng/12.1.
|1201501
|link:https://svnweb.freebsd.org/changeset/base/354598[354598]
|November 10, 2019
|12-STABLE after fixing a potential OOB read security issue in libc++.
|1201502
|link:https://svnweb.freebsd.org/changeset/base/354613[354613]
|November 11, 2019
|12-STABLE after enabling device class group attributes in the LinuxKPI.
|1201503
|link:https://svnweb.freebsd.org/changeset/base/354928[354928]
|November 21, 2019
|12-STABLE after adding support for AT_EXECPATH to elf_aux_info(3).
|1201504
|link:https://svnweb.freebsd.org/changeset/base/355658[355658]
|November 10, 2019
|12-STABLE after correcting the C++ version check for declaring man:timespec_get[3].
|1201505
|link:https://svnweb.freebsd.org/changeset/base/355899[355899]
|December 19, 2019
|12-STABLE after adding sigsetop extensions commonly found in musl libc and glibc.
|1201506
|link:https://svnweb.freebsd.org/changeset/base/355968[355968]
|December 21, 2019
|12-STABLE after doubling the value of `ARG_MAX`, for 64 bit platforms.
|1201507
|link:https://svnweb.freebsd.org/changeset/base/356306[356306]
|January 2, 2020
|12-STABLE after adding functions to man:bitstring[3] to find contiguous sequences of set or unset bits.
|1201508
|link:https://svnweb.freebsd.org/changeset/base/356394[356394]
|January 6, 2020
|12-STABLE after making USB statistics be per-device instead of per bus.
|1201509
|link:https://svnweb.freebsd.org/changeset/base/356460[356460]
|January 7, 2020
|12-STABLE after updating llvm, clang, compiler-rt, libc++, libunwind, lld, lldb and openmp to 9.0.0 final release r372316.
|1201510
|link:https://svnweb.freebsd.org/changeset/base/356679[356679]
|January 13, 2020
|12-STABLE after adding own counter for cancelled USB transfers.
|1201511
|link:https://svnweb.freebsd.org/changeset/base/357333[357333]
|January 31, 2020
|12-STABLE after adding [.filename]#/etc/os-release# as a symbolic link to [.filename]#/var/run/os-release#.
|1201512
|link:https://svnweb.freebsd.org/changeset/base/357612[357612]
|February 6, 2020
|12-STABLE after recent LinuxKPI changes.
|1201513
|link:https://svnweb.freebsd.org/changeset/base/359957[359957]
|April 15, 2020
|12-STABLE after cloning the RCU interface into a sleepable and a non-sleepable part in the LinuxKPI.
|1201514
|link:https://svnweb.freebsd.org/changeset/base/360525[360525]
|May 1, 2020
|12-STABLE after implementing full man:bus_dma[9] support in the LinuxKPI and pulling in all dependencies.
|1201515
|link:https://svnweb.freebsd.org/changeset/base/360545[360545]
|May 1, 2020
|12-STABLE after updating llvm, clang, compiler-rt, libc++, libunwind, lld, lldb and openmp to 10.0.0 release.
|1201516
|link:https://svnweb.freebsd.org/changeset/base/360620[360620]
|May 4, 2020
|12-STABLE after moving `id_mapped` to end of `bus_dma_impl` structure to preserve KPI.
|1201517
|link:https://svnweb.freebsd.org/changeset/base/361350[361350]
|May 21, 2020
|12-STABLE after renaming `vm.max_wired` to `vm.max_user_wired` and changing its type.
|1201518
|link:https://svnweb.freebsd.org/changeset/base/362319[362319]
|June 18, 2020
|12-STABLE after implementing __is_constexpr() function macro in the LinuxKPI.
|1201519
|link:https://svnweb.freebsd.org/changeset/base/362916[362916]
|July 4, 2020
|12-STABLE after making liblzma use libmd implementation of SHA256.
|1201520
|link:https://svnweb.freebsd.org/changeset/base/363494[363494]
|July 24, 2020
|12-STABLE after updating llvm, clang, compiler-rt, libc++, libunwind, lld, lldb and openmp to 10.0.1 release.
|1201521
|link:https://svnweb.freebsd.org/changeset/base/363790[363790]
|August 3, 2020
|12-STABLE after implementing the array_size() function in the LinuxKPI.
|1201522
|link:https://svnweb.freebsd.org/changeset/base/363832[363832]
|August 4, 2020
|12-STABLE after adding the sysctlbyname system call.
|1201523
|link:https://svnweb.freebsd.org/changeset/base/364390[364390]
|August 19, 2020
|12-STABLE after change to clone the task struct fields related to RCU.
|1201524
|link:https://svnweb.freebsd.org/changeset/base/365356[365356]
|September 5, 2020
|12-STABLE after splitting XDR off into a separate kernel module, to minimize ZFS dependencies.
|1201525
|link:https://svnweb.freebsd.org/changeset/base/365471[365471]
|September 8, 2020
|12-STABLE after adding atomic and bswap functions to libcompiler_rt.
|1201526
|link:https://svnweb.freebsd.org/changeset/base/365608[365608]
|September 10, 2020
|12-STABLE after updating net80211 and kernel privilege checking API changes.
|1202000
|link:https://svnweb.freebsd.org/changeset/base/365618[365618]
|September 11, 2020
|releng/12.2 branched from stable/12@r365618.
|1202500
|link:https://svnweb.freebsd.org/changeset/base/365619[365619]
|September 11, 2020
|12-STABLE after branching releng/12.2.
|1202501
|link:https://svnweb.freebsd.org/changeset/base/365661[365661]
|September 12, 2020
|12-STABLE after followup commits to libcompiler_rt.
|1202502
|link:https://svnweb.freebsd.org/changeset/base/365816[365816]
|September 16, 2020
|12-STABLE after fixing man:crunchgen[1] application build with `WARNS=6`.
|1202503
|link:https://svnweb.freebsd.org/changeset/base/366878[366878]
|October 20, 2020
|12-STABLE after populating the acquire context field of a `ww_mutex` in the LinuxKPI.
|1202504
|link:https://svnweb.freebsd.org/changeset/base/367511[367511]
|November 9, 2020
|12-STABLE after the addition of `ptsname_r`.
|===
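
The `__FreeBSD_version` values in these tables are typically consumed from preprocessor conditionals in ports or kernel modules. The sketch below is hypothetical and only illustrates the pattern; the cutoff value 1200514 is taken from the 12-STABLE entry above for the LinuxKPI RCU list additions, and the comments are placeholders rather than a definitive recipe.

[source,c]
----
/*
 * Minimal, hypothetical sketch of conditional compilation on __FreeBSD_version.
 * <sys/param.h> defines __FreeBSD_version; userland builds may also obtain it
 * from <osreldate.h>.
 */
#include <sys/param.h>

#if defined(__FreeBSD_version) && __FreeBSD_version >= 1200514
/* Branch with the newer interface (see the matching table entry above). */
#else
/* Older branch: fall back to the previous interface. */
#endif
----
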
[[versions-11]]
== FreeBSD 11 Versions
[[freebsd-versions-table-11]]
.FreeBSD 11 `__FreeBSD_version` Values
[cols="1,1,1,1", frame="none", options="header"]
|===
| Value
| Revision
| Date
| Release
|1100000
|link:https://svnweb.freebsd.org/changeset/base/256284[256284]
|October 10, 2013
|11.0-CURRENT.
|1100001
|link:https://svnweb.freebsd.org/changeset/base/256776[256776]
|October 19, 2013
|11.0-CURRENT after addition of support for "first boot" [.filename]#rc.d# scripts, so ports can make use of this.
|1100002
|link:https://svnweb.freebsd.org/changeset/base/257696[257696]
|November 5, 2013
|11.0-CURRENT after dropping support for historic ioctls.
|1100003
|link:https://svnweb.freebsd.org/changeset/base/258284[258284]
|November 17, 2013
|11.0-CURRENT after iconv changes.
|1100004
|link:https://svnweb.freebsd.org/changeset/base/259424[259424]
|December 15, 2013
|11.0-CURRENT after the behavior change of `gss_pseudo_random` introduced in link:https://svnweb.freebsd.org/changeset/base/259286[259286].
|1100005
|link:https://svnweb.freebsd.org/changeset/base/260010[260010]
|December 28, 2013
|11.0-CURRENT after link:https://svnweb.freebsd.org/changeset/base/259951[259951] - Do not coalesce entries in man:vm_map_stack[9].
|1100006
|link:https://svnweb.freebsd.org/changeset/base/261246[261246]
|January 28, 2014
|11.0-CURRENT after upgrades of libelf and libdwarf.
|1100007
|link:https://svnweb.freebsd.org/changeset/base/261283[261283]
|January 30, 2014
|11.0-CURRENT after upgrade of libc++ to 3.4 release.
|1100008
|link:https://svnweb.freebsd.org/changeset/base/261881[261881]
|February 14, 2014
|11.0-CURRENT after libc++ 3.4 ABI compatibility fix.
|1100009
|link:https://svnweb.freebsd.org/changeset/base/261991[261991]
|February 16, 2014
|11.0-CURRENT after upgrade of llvm/clang to 3.4 release.
|1100010
|link:https://svnweb.freebsd.org/changeset/base/262630[262630]
|February 28, 2014
|11.0-CURRENT after upgrade of ncurses to 5.9 release (rev link:https://svnweb.freebsd.org/changeset/base/262629[262629]).
|1100011
|link:https://svnweb.freebsd.org/changeset/base/263102[263102]
|March 13, 2014
|11.0-CURRENT after ABI change in struct if_data.
|1100012
|link:https://svnweb.freebsd.org/changeset/base/263140[263140]
|March 14, 2014
|11.0-CURRENT after removal of Novell IPX protocol support.
|1100013
|link:https://svnweb.freebsd.org/changeset/base/263152[263152]
|March 14, 2014
|11.0-CURRENT after removal of AppleTalk protocol support.
|1100014
|link:https://svnweb.freebsd.org/changeset/base/263235[263235]
|March 16, 2014
|11.0-CURRENT after renaming [.filename]#<sys/capability.h># to [.filename]#<sys/capsicum.h># to avoid a clash with similarly named headers in other operating systems. A compatibility header is left in place to limit build breakage, but will be deprecated in due course.
|1100015
|link:https://svnweb.freebsd.org/changeset/base/263620[263620]
|March 22, 2014
|11.0-CURRENT after `cnt` rename to `vm_cnt`.
|1100016
|link:https://svnweb.freebsd.org/changeset/base/263660[263660]
|March 23, 2014
|11.0-CURRENT after addition of `armv6hf TARGET_ARCH`.
|1100017
|link:https://svnweb.freebsd.org/changeset/base/264121[264121]
|April 4, 2014
|11.0-CURRENT after GCC support for `__block` definition.
|1100018
|link:https://svnweb.freebsd.org/changeset/base/264212[264212]
|April 6, 2014
|11.0-CURRENT after support for UDP-Lite protocol (RFC 3828).
|1100019
|link:https://svnweb.freebsd.org/changeset/base/264289[264289]
|April 8, 2014
|11.0-CURRENT after FreeBSD-SA-14:06.openssl (rev link:https://svnweb.freebsd.org/changeset/base/264265[264265]).
|1100020
|link:https://svnweb.freebsd.org/changeset/base/265215[265215]
|May 1, 2014
|11.0-CURRENT after removing lindev in favor of having /dev/full by default (rev link:https://svnweb.freebsd.org/changeset/base/265212[265212]).
|1100021
|link:https://svnweb.freebsd.org/changeset/base/266151[266151]
|May 6, 2014
|11.0-CURRENT after [.filename]#src.opts.mk# changes, decoupling man:make.conf[5] from `buildworld` (rev link:https://svnweb.freebsd.org/changeset/base/265419[265419]).
|1100022
|link:https://svnweb.freebsd.org/changeset/base/266904[266904]
|May 30, 2014
|11.0-CURRENT after changes to man:strcasecmp[3], moving man:strcasecmp_l[3] and man:strncasecmp_l[3] from [.filename]#<string.h># to [.filename]#<strings.h># for POSIX 2008 compliance (rev link:https://svnweb.freebsd.org/changeset/base/266865[266865]).
|1100023
|link:https://svnweb.freebsd.org/changeset/base/267440[267440]
|June 13, 2014
|11.0-CURRENT after the CUSE library and kernel module have been attached to the build by default.
|1100024
|link:https://svnweb.freebsd.org/changeset/base/267992[267992]
|June 27, 2014
|11.0-CURRENT after man:sysctl[3] API change.
|1100025
|link:https://svnweb.freebsd.org/changeset/base/268066[268066]
|June 30, 2014
|11.0-CURRENT after man:regex[3] library update to add ">" and "<" delimiters.
|1100026
|link:https://svnweb.freebsd.org/changeset/base/268118[268118]
|July 1, 2014
|11.0-CURRENT after the internal interface between the NFS modules, including the krpc, was changed by rev link:https://svnweb.freebsd.org/changeset/base/268115[268115].
|1100027
|link:https://svnweb.freebsd.org/changeset/base/268441[268441]
|July 8, 2014
|11.0-CURRENT after FreeBSD-SA-14:17.kmem (rev link:https://svnweb.freebsd.org/changeset/base/268431[268431]).
|1100028
|link:https://svnweb.freebsd.org/changeset/base/268945[268945]
|July 21, 2014
|11.0-CURRENT after man:hdestroy[3] compliance fix changed ABI.
|1100029
|link:https://svnweb.freebsd.org/changeset/base/270173[270173]
|August 3, 2014
|11.0-CURRENT after `SOCK_DGRAM` bug fix (rev link:https://svnweb.freebsd.org/changeset/base/269489[269489]).
|1100030
|link:https://svnweb.freebsd.org/changeset/base/270929[270929]
|September 1, 2014
|11.0-CURRENT after `SOCK_RAW` sockets were changed to not modify packets at all.
|1100031
|link:https://svnweb.freebsd.org/changeset/base/271341[271341]
|September 9, 2014
|11.0-CURRENT after FreeBSD-SA-14:18.openssl (rev link:https://svnweb.freebsd.org/changeset/base/269686[269686]).
|1100032
|link:https://svnweb.freebsd.org/changeset/base/271438[271438]
|September 11, 2014
|11.0-CURRENT after API changes to `ifa_ifwithbroadaddr`, `ifa_ifwithdstaddr`, `ifa_ifwithnet`, and `ifa_ifwithroute`.
|1100033
|link:https://svnweb.freebsd.org/changeset/base/271657[271657]
|September 9, 2014
|11.0-CURRENT after changing `access`, `eaccess`, and `faccessat` to validate the mode argument.
|1100034
|link:https://svnweb.freebsd.org/changeset/base/271686[271686]
|September 16, 2014
|11.0-CURRENT after FreeBSD-SA-14:19.tcp (rev link:https://svnweb.freebsd.org/changeset/base/271666[271666]).
|1100035
|link:https://svnweb.freebsd.org/changeset/base/271705[271705]
|September 17, 2014
|11.0-CURRENT after i915 HW context support.
|1100036
|link:https://svnweb.freebsd.org/changeset/base/271724[271724]
|September 17, 2014
|Version bump to have ABI note distinguish binaries ready for strict man:mmap[2] flags checking (rev link:https://svnweb.freebsd.org/changeset/base/271724[271724]).
|1100037
|link:https://svnweb.freebsd.org/changeset/base/272674[272674]
|October 6, 2014
|11.0-CURRENT after addition of man:explicit_bzero[3] (rev link:https://svnweb.freebsd.org/changeset/base/272673[272673]).
|1100038
|link:https://svnweb.freebsd.org/changeset/base/272951[272951]
|October 11, 2014
|11.0-CURRENT after cleanup of TCP wrapper headers.
|1100039
|link:https://svnweb.freebsd.org/changeset/base/273250[273250]
|October 18, 2014
|11.0-CURRENT after removal of `MAP_RENAME` and `MAP_NORESERVE`.
|1100040
|link:https://svnweb.freebsd.org/changeset/base/273432[273432]
|October 21, 2014
|11.0-CURRENT after FreeBSD-SA-14:23 (rev link:https://svnweb.freebsd.org/changeset/base/273146[273146]).
|1100041
|link:https://svnweb.freebsd.org/changeset/base/273875[273875]
|October 30, 2014
|11.0-CURRENT after API changes to `syscall_register`, `syscall32_register`, `syscall_register_helper` and `syscall32_register_helper` (rev link:https://svnweb.freebsd.org/changeset/base/273707[273707]).
|1100042
|link:https://svnweb.freebsd.org/changeset/base/274046[274046]
|November 3, 2014
|11.0-CURRENT after a change to `struct tcpcb`.
|1100043
|link:https://svnweb.freebsd.org/changeset/base/274085[274085]
|November 4, 2014
|11.0-CURRENT after enabling man:vt[4] by default.
|1100044
|link:https://svnweb.freebsd.org/changeset/base/274116[274116]
|November 4, 2014
|11.0-CURRENT after adding new libraries/utilities (dpv and figpar) for data throughput visualization.
|1100045
|link:https://svnweb.freebsd.org/changeset/base/274162[274162]
|November 4, 2014
|11.0-CURRENT after FreeBSD-SA-14:23, FreeBSD-SA-14:24, and FreeBSD-SA-14:25.
|1100046
|link:https://svnweb.freebsd.org/changeset/base/274470[274470]
|November 13, 2014
|11.0-CURRENT after `kern_poll` signature change (rev link:https://svnweb.freebsd.org/changeset/base/274462[274462]).
|1100047
|link:https://svnweb.freebsd.org/changeset/base/274476[274476]
|November 13, 2014
|11.0-CURRENT after removal of no-at version of VFS syscalls helpers, like `kern_open`.
|1100048
|link:https://svnweb.freebsd.org/changeset/base/275358[275358]
|December 1, 2014
|11.0-CURRENT after starting the process of removing the use of the deprecated "M_FLOWID" flag from the network code.
|1100049
|link:https://svnweb.freebsd.org/changeset/base/275633[275633]
|December 9, 2014
|11.0-CURRENT after importing an important fix to the LLVM vectorizer, which could lead to buffer overruns in some cases.
|1100050
|link:https://svnweb.freebsd.org/changeset/base/275732[275732]
|December 12, 2014
|11.0-CURRENT after adding AES-ICM and AES-GCM to OpenCrypto.
|1100051
|link:https://svnweb.freebsd.org/changeset/base/276096[276096]
|December 23, 2014
|11.0-CURRENT after removing old NFS client and server code from the kernel.
|1100052
|link:https://svnweb.freebsd.org/changeset/base/276479[276479]
|December 31, 2014
|11.0-CURRENT after upgrade of clang, llvm and lldb to 3.5.0 release.
|1100053
|link:https://svnweb.freebsd.org/changeset/base/276781[276781]
|January 7, 2015
|11.0-CURRENT after man:MCLGET[9] gained a return value (rev link:https://svnweb.freebsd.org/changeset/base/276750[276750]).
|1100054
|link:https://svnweb.freebsd.org/changeset/base/277213[277213]
|January 15, 2015
|11.0-CURRENT after rewrite of callout subsystem.
|1100055
|link:https://svnweb.freebsd.org/changeset/base/277528[277528]
|January 22, 2015
|11.0-CURRENT after reverting callout changes in link:https://svnweb.freebsd.org/changeset/base/277213[277213].
|1100056
|link:https://svnweb.freebsd.org/changeset/base/277610[277610]
|January 23, 2015
|11.0-CURRENT after addition of `futimens` and `utimensat` system calls.
|1100057
|link:https://svnweb.freebsd.org/changeset/base/277897[277897]
|January 29, 2015
|11.0-CURRENT after removal of d_thread_t.
|1100058
|link:https://svnweb.freebsd.org/changeset/base/278228[278228]
|February 5, 2015
|11.0-CURRENT after addition of support for probing the SCSI VPD Extended Inquiry page (0x86).
|1100059
|link:https://svnweb.freebsd.org/changeset/base/278442[278442]
|February 9, 2015
|11.0-CURRENT after import of xz 5.2.0, which added multi-threaded compression, and after lzma gained a libthr dependency (rev link:https://svnweb.freebsd.org/changeset/base/278433[278433]).
|1100060
|link:https://svnweb.freebsd.org/changeset/base/278846[278846]
|February 16, 2015
|11.0-CURRENT after forwarding `FBIO_BLANK` to framebuffer clients.
|1100061
|link:https://svnweb.freebsd.org/changeset/base/278964[278964]
|February 18, 2015
|11.0-CURRENT after `CDAI_FLAG_NONE` addition.
|1100062
|link:https://svnweb.freebsd.org/changeset/base/279221[279221]
|February 23, 2015
|11.0-CURRENT after man:mtio[4] and man:sa[4] API and man:ioctl[2] additions.
|1100063
|link:https://svnweb.freebsd.org/changeset/base/279728[279728]
|March 7, 2015
|11.0-CURRENT after adding mutex support to the `pps_ioctl()` API in the kernel.
|1100064
|link:https://svnweb.freebsd.org/changeset/base/279729[279729]
|March 7, 2015
|11.0-CURRENT after adding PPS support to USB serial drivers.
|1100065
|link:https://svnweb.freebsd.org/changeset/base/280031[280031]
|March 15, 2015
|11.0-CURRENT after upgrading clang, llvm and lldb to 3.6.0.
|1100066
|link:https://svnweb.freebsd.org/changeset/base/280306[280306]
|March 20, 2015
|11.0-CURRENT after removal of SSLv2 support from OpenSSL.
|1100067
|link:https://svnweb.freebsd.org/changeset/base/280630[280630]
|March 25, 2015
|11.0-CURRENT after removal of SSLv2 support from man:fetch[1] and man:fetch[3].
|1100068
|link:https://svnweb.freebsd.org/changeset/base/281172[281172]
|April 6, 2015
|11.0-CURRENT after change to net.inet6.ip6.mif6table sysctl.
|1100069
|link:https://svnweb.freebsd.org/changeset/base/281550[281550]
|April 15, 2015
|11.0-CURRENT after removal of const qualifier from man:iconv[3].
|1100070
|link:https://svnweb.freebsd.org/changeset/base/281613[281613]
|April 16, 2015
|11.0-CURRENT after moving ALTQ from [.filename]#contrib# to [.filename]#net/altq#.
|1100071
|link:https://svnweb.freebsd.org/changeset/base/282256[282256]
|April 29, 2015
|11.0-CURRENT after API/ABI change to man:smb[4] (rev link:https://svnweb.freebsd.org/changeset/base/281985[281985]).
|1100072
|link:https://svnweb.freebsd.org/changeset/base/282319[282319]
|May 1, 2015
|11.0-CURRENT after adding man:reallocarray[3] in libc (rev link:https://svnweb.freebsd.org/changeset/base/282314[282314]).
|1100073
|link:https://svnweb.freebsd.org/changeset/base/282650[282650]
|May 8, 2015
|11.0-CURRENT after extending the maximum number of allowed PCM channels in a PCM stream to 127 and decreasing the maximum number of sub-channels to 1.
|1100074
|link:https://svnweb.freebsd.org/changeset/base/283526[283526]
|May 25, 2015
|11.0-CURRENT after adding preliminary support for x86-64 Linux binaries (rev link:https://svnweb.freebsd.org/changeset/base/283424[283424]), and upgrading clang and llvm to 3.6.1.
|1100075
|link:https://svnweb.freebsd.org/changeset/base/283623[283623]
|May 27, 2015
|11.0-CURRENT after `dounmount()` requiring a reference on the passed struct mount (rev link:https://svnweb.freebsd.org/changeset/base/283602[283602]).
|1100076
|link:https://svnweb.freebsd.org/changeset/base/283983[283983]
|June 4, 2015
|11.0-CURRENT after disabling generation of legacy formatted password database entries by default.
|1100077
|link:https://svnweb.freebsd.org/changeset/base/284233[284233]
|June 10, 2015
|11.0-CURRENT after API changes to `lim_cur`, `lim_max`, and `lim_rlimit` (rev link:https://svnweb.freebsd.org/changeset/base/284215[284215]).
|1100078
|link:https://svnweb.freebsd.org/changeset/base/286672[286672]
|August 12, 2015
|11.0-CURRENT after man:crunchgen[1] changes from link:https://svnweb.freebsd.org/changeset/base/284356[284356] to link:https://svnweb.freebsd.org/changeset/base/285986[285986].
|1100079
|link:https://svnweb.freebsd.org/changeset/base/286874[286874]
|August 18, 2015
|11.0-CURRENT after import of jemalloc 4.0.0 (rev link:https://svnweb.freebsd.org/changeset/base/286866[286866]).
|1100080
|link:https://svnweb.freebsd.org/changeset/base/288943[288943]
|October 5, 2015
|11.0-CURRENT after upgrading clang, llvm, lldb, compiler-rt and libc++ to 3.7.0.
|1100081
|link:https://svnweb.freebsd.org/changeset/base/289415[289415]
|October 16, 2015
|11.0-CURRENT after updating ZFS to support resumable send/receive (rev link:https://svnweb.freebsd.org/changeset/base/289362[289362]).
|1100082
|link:https://svnweb.freebsd.org/changeset/base/289594[289594]
|October 19, 2015
|11.0-CURRENT after Linux KPI updates.
|1100083
|link:https://svnweb.freebsd.org/changeset/base/289749[289749]
|October 22, 2015
|11.0-CURRENT after renaming [.filename]#linuxapi.ko# to [.filename]#linuxkpi.ko#.
|1100084
|link:https://svnweb.freebsd.org/changeset/base/290135[290135]
|October 29, 2015
|11.0-CURRENT after moving the LinuxKPI module into the default kernel build.
|1100085
|link:https://svnweb.freebsd.org/changeset/base/290207[290207]
|October 30, 2015
|11.0-CURRENT after import of OpenSSL 1.0.2d.
|1100086
|link:https://svnweb.freebsd.org/changeset/base/290275[290275]
|November 2, 2015
|11.0-CURRENT after making man:figpar[3] macros more unique.
|1100087
|link:https://svnweb.freebsd.org/changeset/base/290479[290479]
|November 7, 2015
|11.0-CURRENT after changing man:sysctl_add_oid[9]'s ABI.
|1100088
|link:https://svnweb.freebsd.org/changeset/base/290495[290495]
|November 7, 2015
|11.0-CURRENT after string collation and locales rework.
|1100089
|link:https://svnweb.freebsd.org/changeset/base/290505[290505]
|November 7, 2015
|11.0-CURRENT after API change to man:sysctl_add_oid[9] (rev link:https://svnweb.freebsd.org/changeset/base/290475[290475]).
|1100090
|link:https://svnweb.freebsd.org/changeset/base/290715[290715]
|November 10, 2015
|11.0-CURRENT after API change to the callout_stop macro (rev link:https://svnweb.freebsd.org/changeset/base/290664[290664]).
|1100091
|link:https://svnweb.freebsd.org/changeset/base/291537[291537]
|November 30, 2015
|11.0-CURRENT after changing the interface between the [.filename]#nfsd.ko# and [.filename]#nfscommon.ko# modules in link:https://svnweb.freebsd.org/changeset/base/291527[291527].
|1100092
|link:https://svnweb.freebsd.org/changeset/base/292499[292499]
|December 19, 2015
|11.0-CURRENT after removal of vm_pageout_grow_cache (rev link:https://svnweb.freebsd.org/changeset/base/292469[292469]).
|1100093
|link:https://svnweb.freebsd.org/changeset/base/292966[292966]
|December 30, 2015
|11.0-CURRENT after removal of sys/crypto/sha2.h (rev link:https://svnweb.freebsd.org/changeset/base/292782[292782]).
|1100094
|link:https://svnweb.freebsd.org/changeset/base/294086[294086]
|January 15, 2016
|11.0-CURRENT after LinuxKPI PCI changes (rev link:https://svnweb.freebsd.org/changeset/base/294086[294086]).
|1100095
|link:https://svnweb.freebsd.org/changeset/base/294327[294327]
|January 19, 2016
|11.0-CURRENT after LRO optimizations.
|1100096
|link:https://svnweb.freebsd.org/changeset/base/294505[294505]
|January 21, 2016
|11.0-CURRENT after LinuxKPI idr_* additions.
|1100097
|link:https://svnweb.freebsd.org/changeset/base/294860[294860]
|January 26, 2016
|11.0-CURRENT after API change to man:dpv[3].
|1100098
|link:https://svnweb.freebsd.org/changeset/base/295682[295682]
|February 16, 2016
|11.0-CURRENT after API change to rman (rev link:https://svnweb.freebsd.org/changeset/base/294883[294883]).
|1100099
|link:https://svnweb.freebsd.org/changeset/base/295739[295739]
|February 18, 2016
|11.0-CURRENT after allowing drivers to set the TCP ACK/data segment aggregation limit.
|1100100
|link:https://svnweb.freebsd.org/changeset/base/296136[296136]
|February 26, 2016
|11.0-CURRENT after man:bus_alloc_resource_any[9] API addition.
|1100101
|link:https://svnweb.freebsd.org/changeset/base/296417[296417]
|March 5, 2016
|11.0-CURRENT after upgrading our copies of clang, llvm, lldb and compiler-rt to 3.8.0 release.
|1100102
|link:https://svnweb.freebsd.org/changeset/base/296749[296749]
|March 12, 2016
|11.0-CURRENT after libelf cross-endian fix in rev link:https://svnweb.freebsd.org/changeset/base/296685[296685].
|1100103
|link:https://svnweb.freebsd.org/changeset/base/297000[297000]
|March 18, 2016
|11.0-CURRENT after using uintmax_t for rman ranges.
|1100104
|link:https://svnweb.freebsd.org/changeset/base/297156[297156]
|March 21, 2016
|11.0-CURRENT after tracking filemon usage via a proc.p_filemon pointer rather than its own lists.
|1100105
|link:https://svnweb.freebsd.org/changeset/base/297602[297602]
|April 6, 2016
|11.0-CURRENT after fixing sed functions `i` and `a` from discarding leading white space.
|1100106
|link:https://svnweb.freebsd.org/changeset/base/298486[298486]
|April 22, 2016
|11.0-CURRENT after fixes for using IPv6 addresses with RDMA.
|1100107
|link:https://svnweb.freebsd.org/changeset/base/299090[299090]
|May 4, 2016
|11.0-CURRENT after improving performance and functionality of the man:bitstring[3] API.
|1100108
|link:https://svnweb.freebsd.org/changeset/base/299530[299530]
|May 12, 2016
|11.0-CURRENT after fixing handling of IOCTLs in the LinuxKPI.
|1100109
|link:https://svnweb.freebsd.org/changeset/base/299933[299933]
|May 16, 2016
|11.0-CURRENT after implementing more Linux device related functions in the LinuxKPI.
|1100110
|link:https://svnweb.freebsd.org/changeset/base/300207[300207]
|May 19, 2016
|11.0-CURRENT after adding support for managing Shingled Magnetic Recording (SMR) drives.
|1100111
|link:https://svnweb.freebsd.org/changeset/base/300303[300303]
|May 20, 2016
|11.0-CURRENT after removing brk and sbrk from arm64.
|1100112
|link:https://svnweb.freebsd.org/changeset/base/300539[300539]
|May 23, 2016
|11.0-CURRENT after adding bit_count to the man:bitstring[3] API.
|1100113
|link:https://svnweb.freebsd.org/changeset/base/300701[300701]
|May 26, 2016
|11.0-CURRENT after disabling alignment faults on armv6.
|1100114
|link:https://svnweb.freebsd.org/changeset/base/300806[300806]
|May 26, 2016
|11.0-CURRENT after fixing man:crunchgen[1] usage with `MAKEOBJDIRPREFIX`.
|1100115
|link:https://svnweb.freebsd.org/changeset/base/300982[300982]
|May 30, 2016
|11.0-CURRENT after adding an mbuf flag for `M_HASHTYPE_`.
|1100116
|link:https://svnweb.freebsd.org/changeset/base/301011[301011]
|May 31, 2016
|11.0-CURRENT after SHA-512t256 (rev link:https://svnweb.freebsd.org/changeset/base/300903[300903]) and Skein (rev link:https://svnweb.freebsd.org/changeset/base/300966[300966]) were added to libmd, libcrypt, the kernel, and ZFS (rev link:https://svnweb.freebsd.org/changeset/base/301010[301010]).
|1100117
|link:https://svnweb.freebsd.org/changeset/base/301892[301892]
|June 6, 2016
|11.0-CURRENT after libpam was synced with stock link:https://svnweb.freebsd.org/changeset/base/301602[301602], bumping library version.
|1100118
|link:https://svnweb.freebsd.org/changeset/base/302071[302071]
|June 21, 2016
|11.0-CURRENT after breaking binary compatibility of struct disk (rev link:https://svnweb.freebsd.org/changeset/base/302069[302069]).
|1100119
|link:https://svnweb.freebsd.org/changeset/base/302150[302150]
|June 23, 2016
|11.0-CURRENT after switching geom_disk to using a pool mutex.
|1100120
|link:https://svnweb.freebsd.org/changeset/base/302153[302153]
|June 23, 2016
|11.0-CURRENT after adding spares to struct ifnet.
|1100121
|link:https://svnweb.freebsd.org/changeset/base/303979[303979]
|August 12, 2016
|11-STABLE after `releng/11.0` branched from 11-STABLE (rev link:https://svnweb.freebsd.org/changeset/base/303975[303975]).
|1100500
|link:https://svnweb.freebsd.org/changeset/base/303979[303979]
|August 12, 2016
|11.0-STABLE after branching (rev link:https://svnweb.freebsd.org/changeset/base/303976[303976]).
|1100501
|link:https://svnweb.freebsd.org/changeset/base/304609[304609]
|August 22, 2016
|11.0-STABLE after adding C++11 thread_local support.
|1100502
|link:https://svnweb.freebsd.org/changeset/base/304865[304865]
|August 26, 2016
|11.0-STABLE after `LC_*_MASK` fix.
|1100503
|link:https://svnweb.freebsd.org/changeset/base/305733[305733]
|September 12, 2016
|11.0-STABLE after resolving a deadlock between `device_detach()` and man:usbd_do_request_flags[9].
|1100504
|link:https://svnweb.freebsd.org/changeset/base/307330[307330]
|October 14, 2016
|11.0-STABLE after ZFS merges.
|1100505
|link:https://svnweb.freebsd.org/changeset/base/307590[307590]
|October 19, 2016
|11.0-STABLE after `struct fb_info` change.
|1100506
|link:https://svnweb.freebsd.org/changeset/base/308048[308048]
|October 28, 2016
|11.0-STABLE after installing header files required for development with libzfs_core.
|1100507
|link:https://svnweb.freebsd.org/changeset/base/310120[310120]
|December 15, 2016
|11.0-STABLE after adding the `ki_moretdname` member to `struct kinfo_proc` and `struct kinfo_proc32` to export the whole thread name to user-space utilities.
|1100508
|link:https://svnweb.freebsd.org/changeset/base/310618[310618]
|December 26, 2016
|11.0-STABLE after upgrading our copies of clang, llvm, lldb, compiler-rt and libc++ to 3.9.1 release, and adding lld 3.9.1.
|1100509
|link:https://svnweb.freebsd.org/changeset/base/311186[311186]
|January 3, 2017
|11.0-STABLE after man:crunchgen[1] META_MODE fix (rev link:https://svnweb.freebsd.org/changeset/base/311185[311185]).
|1100510
|link:https://svnweb.freebsd.org/changeset/base/315312[315312]
|March 15, 2017
|11.0-STABLE after MFC of `fget_cap`, `getsock_cap`, and related changes.
|1100511
|link:https://svnweb.freebsd.org/changeset/base/316423[316423]
|April 2, 2017
|11.0-STABLE after multiple MFCs updating clang, llvm, lld, lldb, compiler-rt and libc++ to 4.0.0 release.
|1100512
|link:https://svnweb.freebsd.org/changeset/base/316498[316498]
|April 4, 2017
|11.0-STABLE after making CAM SIM lock optional (revs link:https://svnweb.freebsd.org/changeset/base/315673[315673], link:https://svnweb.freebsd.org/changeset/base/315674[315674]).
|1100513
|link:https://svnweb.freebsd.org/changeset/base/318197[318197]
|May 11, 2017
|11.0-STABLE after merging the addition of the [.filename]#<dev/mmc/mmc_ioctl.h># header.
|1100514
|link:https://svnweb.freebsd.org/changeset/base/319279[319279]
|May 31, 2017
|11.0-STABLE after multiple MFCs of `libpcap`, `WITHOUT_INET6`, and a few other minor changes.
|1101000
|link:https://svnweb.freebsd.org/changeset/base/320486[320486]
|June 30, 2017
|`releng/11.1` branched from `stable/11`.
|1101001
|link:https://svnweb.freebsd.org/changeset/base/320763[320763]
|June 30, 2017
|11.1-RC1 after merging the `MAP_GUARD` man:mmap[2] flag addition.
|1101500
|link:https://svnweb.freebsd.org/changeset/base/320487[320487]
|June 30, 2017
|11-STABLE after `releng/11.1` branched.
|1101501
|link:https://svnweb.freebsd.org/changeset/base/320666[320666]
|July 5, 2017
|11-STABLE after merging the `MAP_GUARD` man:mmap[2] flag addition.
|1101502
|link:https://svnweb.freebsd.org/changeset/base/321688[321688]
|July 29, 2017
|11-STABLE after merging the NFS client forced dismount support `umount -N` addition.
|1101503
|link:https://svnweb.freebsd.org/changeset/base/323431[323431]
|September 11, 2017
|11-STABLE after merging changes making the WRFSBASE instruction operational on amd64.
|1101504
|link:https://svnweb.freebsd.org/changeset/base/324006[324006]
|September 26, 2017
|11-STABLE after merging libm from head, which adds man:cacoshl[3], man:cacosl[3], man:casinhl[3], man:casinl[3], man:catanl[3], man:catanhl[3], man:sincos[3], man:sincosf[3], and man:sincosl[3].
|1101505
|link:https://svnweb.freebsd.org/changeset/base/324023[324023]
|September 26, 2017
|11-STABLE after merging clang, llvm, lld, lldb, compiler-rt and libc++ 5.0.0 release.
|1101506
|link:https://svnweb.freebsd.org/changeset/base/325003[325003]
|October 25, 2017
|11-STABLE after merging link:https://svnweb.freebsd.org/changeset/base/324281[324281], adding the `value.u16` field to `struct diocgattr_arg`.
|1101507
|link:https://svnweb.freebsd.org/changeset/base/328379[328379]
|January 24, 2018
|11-STABLE after merging link:https://svnweb.freebsd.org/changeset/base/325028[325028], fixing `ptrace()` to always clear the correct thread event when resuming.
|1101508
|link:https://svnweb.freebsd.org/changeset/base/328386[328386]
|January 24, 2018
|11-STABLE after merging link:https://svnweb.freebsd.org/changeset/base/316648[316648], renaming smp_no_rendevous_barrier() to smp_no_rendezvous_barrier().
|1101509
|link:https://svnweb.freebsd.org/changeset/base/328653[328653]
|February 1, 2018
|11-STABLE after an overwrite merge backport of the LinuxKPI from FreeBSD-head.
|1101510
|link:https://svnweb.freebsd.org/changeset/base/329450[329450]
|February 17, 2018
|11-STABLE after the cmpxchg() macro was made fully functional in the LinuxKPI.
|1101511
|link:https://svnweb.freebsd.org/changeset/base/329981[329981]
|February 25, 2018
|11-STABLE after concluding the recent LinuxKPI related updates.
|1101512
|link:https://svnweb.freebsd.org/changeset/base/331219[331219]
|March 19, 2018
|11-STABLE after merging retpoline support from the upstream llvm, clang and lld 5.0 branches.
|1101513
|link:https://svnweb.freebsd.org/changeset/base/331838[331838]
|March 31, 2018
|11-STABLE after merging clang, llvm, lld, lldb, compiler-rt and libc++ 6.0.0 release, and several follow-up fixes.
|1101514
|link:https://svnweb.freebsd.org/changeset/base/332089[332089]
|April 5, 2018
|11-STABLE after merging link:https://svnweb.freebsd.org/changeset/base/328331[328331], adding a new and incompatible interpretation of ${name}_limits in rc scripts.
|1101515
|link:https://svnweb.freebsd.org/changeset/base/332363[332363]
|April 10, 2018
|11-STABLE after reverting link:https://svnweb.freebsd.org/changeset/base/331880[331880], removing the new and incompatible interpretation of ${name}_limits in rc scripts.
|1101516
|link:https://svnweb.freebsd.org/changeset/base/334392[334392]
|May 30, 2018
|11-STABLE after man:dwatch[1] touch-ups.
|1102000
|link:https://svnweb.freebsd.org/changeset/base/334459[334459]
|June 1, 2018
|`releng/11.2` branched from `stable/11`.
|1102500
|link:https://svnweb.freebsd.org/changeset/base/334461[334461]
|June 1, 2018
|11-STABLE after releng/11.2 branched.
|1102501
|link:https://svnweb.freebsd.org/changeset/base/335436[335436]
|June 20, 2018
|11-STABLE after LinuxKPI updates requiring recompilation of external kernel modules.
|1102502
|link:https://svnweb.freebsd.org/changeset/base/338617[338617]
|September 12, 2018
|11-STABLE after adding the SO_TS_CLOCK socket option and fixing the recvmsg32() system call to properly down-convert the layout of the 64-bit structures to match what 32-bit applications expect.
|1102503
|link:https://svnweb.freebsd.org/changeset/base/338931[338931]
|September 25, 2018
|11-STABLE after merging a TCP checksum fix to man:iflib[9] and adding new media types to [.filename]#if_media.h#.
|1102504
|link:https://svnweb.freebsd.org/changeset/base/340309[340309]
|November 9, 2018
|11-STABLE after several MFCs: updating man:objcopy[1] to properly handle little-endian MIPS64 objects; correcting the mips64el test to use the ELF header; adding a test for 64-bit ELF in _libelf_is_mips64el.
|1102505
|link:https://svnweb.freebsd.org/changeset/base/342804[342804]
|January 6, 2019
|11-STABLE after merging a fix for linux_destroy_dev() behavior when files are still open from the cdev being destroyed.
|1102506
|link:https://svnweb.freebsd.org/changeset/base/344220[344220]
|February 17, 2019
|11-STABLE after merging multiple commits to lualoader.
|1102507
|link:https://svnweb.freebsd.org/changeset/base/346296[346296]
|April 16, 2019
|11-STABLE after merging llvm, clang, compiler-rt, libc++, libunwind, lld, lldb and openmp 8.0.0 final release r356365.
|1102508
|link:https://svnweb.freebsd.org/changeset/base/346784[346784]
|April 27, 2019
|11-STABLE after ether_gen_addr availability.
|1102509
|link:https://svnweb.freebsd.org/changeset/base/347212[347212]
|May 6, 2019
|11-STABLE after MFC of link:https://svnweb.freebsd.org/changeset/base/345303[345303], link:https://svnweb.freebsd.org/changeset/base/345658[345658], and partially of link:https://svnweb.freebsd.org/changeset/base/345305[345305].
|1102510
|link:https://svnweb.freebsd.org/changeset/base/347883[347883]
|May 16, 2019
|11-STABLE after bumping the Mellanox driver version numbers (man:mlx4en[4]; man:mlx5en[4]).
|1103000
|link:https://svnweb.freebsd.org/changeset/base/349026[349026]
|June 14, 2019
|`releng/11.3` branched from `stable/11`.
|1103500
|link:https://svnweb.freebsd.org/changeset/base/349027[349027]
|June 14, 2019
|11-STABLE after releng/11.3 branched.
|1103501
|link:https://svnweb.freebsd.org/changeset/base/354598[354598]
|November 10, 2019
|11-STABLE after fixing a potential OOB read security issue in libc++.
|1103502
|link:https://svnweb.freebsd.org/changeset/base/354614[354614]
|November 11, 2019
|11-STABLE after adding sysfs create/remove functions that handle multiple files in one call to the LinuxKPI.
|1103503
|link:https://svnweb.freebsd.org/changeset/base/354615[354615]
|November 11, 2019
|11-STABLE after LinuxKPI sysfs improvements.
|1103504
|link:https://svnweb.freebsd.org/changeset/base/354616[354616]
|November 11, 2019
|11-STABLE after enabling device class group attributes in the LinuxKPI.
|1103505
|link:https://svnweb.freebsd.org/changeset/base/355899[355899]
|December 19, 2019
|11-STABLE after adding sigsetop extensions commonly found in musl libc and glibc.
|1103506
|link:https://svnweb.freebsd.org/changeset/base/356395[356395]
|January 6, 2020
|11-STABLE after making USB statistics per-device instead of per-bus.
|1103507
|link:https://svnweb.freebsd.org/changeset/base/356680[356680]
|January 13, 2020
|11-STABLE after adding a dedicated counter for cancelled USB transfers.
|1103508
|link:https://svnweb.freebsd.org/changeset/base/357613[357613]
|February 6, 2020
|11-STABLE after recent LinuxKPI changes.
|1103509
|link:https://svnweb.freebsd.org/changeset/base/359958[359958]
|April 15, 2020
|11-STABLE after moving `id_mapped` to end of `bus_dma_impl` structure to preserve KPI.
|1103510
|link:https://svnweb.freebsd.org/changeset/base/360658[360658]
|May 5, 2020
|11-STABLE after updating llvm, clang, compiler-rt, libc++, libunwind, lld, lldb and openmp to 9.0.0 final release r372316.
|1103511
|link:https://svnweb.freebsd.org/changeset/base/360784[360784]
|May 7, 2020
|11-STABLE after updating llvm, clang, compiler-rt, libc++, libunwind, lld, lldb and openmp to 10.0.0 release.
|1104000
|link:https://svnweb.freebsd.org/changeset/base/360804[360804]
|May 8, 2020
|`releng/11.4` branched from `stable/11`.
|1104001
|link:https://svnweb.freebsd.org/changeset/base/360822[360822]
|May 8, 2020
|11.4-BETA1 after updating llvm, clang, compiler-rt, libc++, libunwind, lld, lldb and openmp to 10.0.0 release.
|1104500
|link:https://svnweb.freebsd.org/changeset/base/360805[360805]
|May 8, 2020
|11-STABLE after releng/11.4 branched.
|1104501
|link:https://svnweb.freebsd.org/changeset/base/362320[362320]
|June 18, 2020
|11-STABLE after implementing __is_constexpr() function macro in the LinuxKPI.
|1104502
|link:https://svnweb.freebsd.org/changeset/base/362919[362919]
|July 4, 2020
|11-STABLE after making liblzma use libmd implementation of SHA256.
|1104503
|link:https://svnweb.freebsd.org/changeset/base/363496[363496]
|July 24, 2020
|11-STABLE after updating llvm, clang, compiler-rt, libc++, libunwind, lld, lldb and openmp to 10.0.1 release.
|1104504
|link:https://svnweb.freebsd.org/changeset/base/363792[363792]
|August 3, 2020
|11-STABLE after implementing the array_size() function in the LinuxKPI.
|1104505
|link:https://svnweb.freebsd.org/changeset/base/364391[364391]
|August 19, 2020
|11-STABLE after change to clone the task struct fields related to RCU.
|1104506
|link:https://svnweb.freebsd.org/changeset/base/365471[365471]
|September 8, 2020
|11-STABLE after adding atomic and bswap functions to libcompiler_rt.
|1104507
|link:https://svnweb.freebsd.org/changeset/base/365661[365661]
|September 12, 2020
|11-STABLE after followup commits to libcompiler_rt.
|1104508
|link:https://svnweb.freebsd.org/changeset/base/366879[366879]
|October 20, 2020
|11-STABLE after populating the acquire context field of a `ww_mutex` in the LinuxKPI.
|1104509
|link:https://svnweb.freebsd.org/changeset/base/366889[366889]
|October 20, 2020
|11-STABLE after additions to LinuxKPI's `RCU` list.
|1104510
|link:https://svnweb.freebsd.org/changeset/base/367513[367513]
|November 9, 2020
|11-STABLE after the addition of `ptsname_r`.
|===
[[versions-10]]
== FreeBSD 10 Versions
[[freebsd-versions-table-10]]
.FreeBSD 10 `__FreeBSD_version` Values
[cols="1,1,1,1", frame="none", options="header"]
|===
| Value
| Revision
| Date
| Release
|1000000
|link:https://svnweb.freebsd.org/changeset/base/225757[225757]
|September 26, 2011
|10.0-CURRENT.
|1000001
|link:https://svnweb.freebsd.org/changeset/base/227070[227070]
|November 4, 2011
|10-CURRENT after addition of the man:posix_fadvise[2] system call.
|1000002
|link:https://svnweb.freebsd.org/changeset/base/228444[228444]
|December 12, 2011
|10-CURRENT after defining boolean true/false in sys/types.h, sizeof(bool) may have changed (rev link:https://svnweb.freebsd.org/changeset/base/228444[228444]). 10-CURRENT after xlocale.h was introduced (rev link:https://svnweb.freebsd.org/changeset/base/227753[227753]).
|1000003
|link:https://svnweb.freebsd.org/changeset/base/228571[228571]
|December 16, 2011
|10-CURRENT after major changes to man:carp[4], changing size of struct in_aliasreq, struct in6_aliasreq (rev link:https://svnweb.freebsd.org/changeset/base/228571[228571]) and tightening the argument checks of SIOCAIFADDR (rev link:https://svnweb.freebsd.org/changeset/base/228574[228574]).
|1000004
|link:https://svnweb.freebsd.org/changeset/base/229204[229204]
|January 1, 2012
|10-CURRENT after the removal of `skpc()` and the addition of man:memcchr[9] (rev link:https://svnweb.freebsd.org/changeset/base/229200[229200]).
|1000005
|link:https://svnweb.freebsd.org/changeset/base/230207[230207]
|January 16, 2012
|10-CURRENT after the removal of support for SIOCSIFADDR, SIOCSIFNETMASK, SIOCSIFBRDADDR, SIOCSIFDSTADDR ioctls.
|1000006
|link:https://svnweb.freebsd.org/changeset/base/230590[230590]
|January 26, 2012
|10-CURRENT after introduction of read capacity data asynchronous notification in the man:cam[4] layer.
|1000007
|link:https://svnweb.freebsd.org/changeset/base/231025[231025]
|February 5, 2012
|10-CURRENT after introduction of new man:tcp[4] socket options: TCP_KEEPINIT, TCP_KEEPIDLE, TCP_KEEPINTVL, and TCP_KEEPCNT.
|1000008
|link:https://svnweb.freebsd.org/changeset/base/231505[231505]
|February 11, 2012
|10-CURRENT after introduction of the new extensible man:sysctl[3] interface NET_RT_IFLISTL to query address lists.
|1000009
|link:https://svnweb.freebsd.org/changeset/base/232154[232154]
|February 25, 2012
|10-CURRENT after import of libarchive 3.0.3 (rev link:https://svnweb.freebsd.org/changeset/base/232153[232153]).
|1000010
|link:https://svnweb.freebsd.org/changeset/base/233757[233757]
|March 31, 2012
|10-CURRENT after xlocale cleanup.
|1000011
|link:https://svnweb.freebsd.org/changeset/base/234355[234355]
|April 16, 2012
|10-CURRENT after import of LLVM/Clang 3.1 trunk r154661 (rev link:https://svnweb.freebsd.org/changeset/base/234353[234353]).
|1000012
|link:https://svnweb.freebsd.org/changeset/base/234924[234924]
|May 2, 2012
|10-CURRENT jemalloc import.
|1000013
|link:https://svnweb.freebsd.org/changeset/base/235788[235788]
|May 22, 2012
|10-CURRENT after byacc import.
|1000014
|link:https://svnweb.freebsd.org/changeset/base/237631[237631]
|June 27, 2012
|10-CURRENT after BSD sort becoming the default sort (rev link:https://svnweb.freebsd.org/changeset/base/237629[237629]).
|1000015
|link:https://svnweb.freebsd.org/changeset/base/238405[238405]
|July 12, 2012
|10-CURRENT after import of OpenSSL 1.0.1c.
|(not changed)
|link:https://svnweb.freebsd.org/changeset/base/238429[238429]
|July 13, 2012
|10-CURRENT after the fix for LLVM/Clang 3.1 regression.
|1000016
|link:https://svnweb.freebsd.org/changeset/base/239179[239179]
|August 8, 2012
|10-CURRENT after KBI change in man:ucom[4].
|1000017
|link:https://svnweb.freebsd.org/changeset/base/239214[239214]
|August 8, 2012
|10-CURRENT after adding streams feature to the USB stack.
|1000018
|link:https://svnweb.freebsd.org/changeset/base/240233[240233]
|September 8, 2012
|10-CURRENT after major rewrite of man:pf[4].
|1000019
|link:https://svnweb.freebsd.org/changeset/base/241245[241245]
|October 6, 2012
|10-CURRENT after man:pfil[9] KBI/KPI changed to supply packets in net byte order to AF_INET filter hooks.
|1000020
|link:https://svnweb.freebsd.org/changeset/base/241610[241610]
|October 16, 2012
|10-CURRENT after the network interface cloning KPI changed and struct if_clone became opaque.
|1000021
|link:https://svnweb.freebsd.org/changeset/base/241897[241897]
|October 22, 2012
|10-CURRENT after removal of support for non-MPSAFE filesystems and addition of support for FUSEFS (rev link:https://svnweb.freebsd.org/changeset/base/241519[241519]).
|1000022
|link:https://svnweb.freebsd.org/changeset/base/241913[241913]
|October 22, 2012
|10-CURRENT after the entire IPv4 stack switched to network byte order for IP packet header storage.
|1000023
|link:https://svnweb.freebsd.org/changeset/base/242619[242619]
|November 5, 2012
|10-CURRENT after adding a jitter buffer to the common USB serial driver code, to temporarily store characters if the TTY buffer is full, and adding flow stop and start signals when this happens.
|1000024
|link:https://svnweb.freebsd.org/changeset/base/242624[242624]
|November 5, 2012
|10-CURRENT after clang was made the default compiler on i386 and amd64.
|1000025
|link:https://svnweb.freebsd.org/changeset/base/243443[243443]
|November 17, 2012
|10-CURRENT after the sin6_scope_id member variable in struct sockaddr_in6 was changed to be filled by the kernel before passing the structure to userland via sysctl or routing socket. This means the KAME-specific embedded scope id in sin6_addr.s6_addr[2] is always cleared in userland applications.
|1000026
|link:https://svnweb.freebsd.org/changeset/base/245313[245313]
|January 11, 2013
|10-CURRENT after install gained the -N flag. May also be used to indicate the presence of nmtree.
|1000027
|link:https://svnweb.freebsd.org/changeset/base/246084[246084]
|January 29, 2013
|10-CURRENT after cat gained the -l flag (rev link:https://svnweb.freebsd.org/changeset/base/246083[246083]).
|1000028
|link:https://svnweb.freebsd.org/changeset/base/246759[246759]
|February 13, 2013
|10-CURRENT after USB moved to the driver structure requiring a rebuild of all USB modules.
|1000029
|link:https://svnweb.freebsd.org/changeset/base/247821[247821]
|March 4, 2013
|10-CURRENT after the introduction of tickless callout facility which also changed the layout of struct callout (rev link:https://svnweb.freebsd.org/changeset/base/247777[247777]).
|1000030
|link:https://svnweb.freebsd.org/changeset/base/248210[248210]
|March 12, 2013
|10-CURRENT after KPI breakage introduced in the VM subsystem to support read/write locking (rev link:https://svnweb.freebsd.org/changeset/base/248084[248084]).
|1000031
|link:https://svnweb.freebsd.org/changeset/base/249943[249943]
|April 26, 2013
|10-CURRENT after the dst parameter of the ifnet `if_output` method was changed to take const qualifier (rev link:https://svnweb.freebsd.org/changeset/base/249925[249925]).
|1000032
|link:https://svnweb.freebsd.org/changeset/base/250163[250163]
|May 1, 2013
|10-CURRENT after the introduction of the man:accept4[2] (rev link:https://svnweb.freebsd.org/changeset/base/250154[250154]) and man:pipe2[2] (rev link:https://svnweb.freebsd.org/changeset/base/250159[250159]) system calls.
|1000033
|link:https://svnweb.freebsd.org/changeset/base/250881[250881]
|May 21, 2013
|10-CURRENT after flex 2.5.37 import.
|1000034
|link:https://svnweb.freebsd.org/changeset/base/251294[251294]
|June 3, 2013
|10-CURRENT after the addition of these functions to libm: man:cacos[3], man:cacosf[3], man:cacosh[3], man:cacoshf[3], man:casin[3], man:casinf[3], man:casinh[3], man:casinhf[3], man:catan[3], man:catanf[3], man:catanh[3], man:catanhf[3], man:logl[3], man:log2l[3], man:log10l[3], man:log1pl[3], man:expm1l[3].
|1000035
|link:https://svnweb.freebsd.org/changeset/base/251527[251527]
|June 8, 2013
|10-CURRENT after the introduction of the man:aio_mlock[2] system call (rev link:https://svnweb.freebsd.org/changeset/base/251526[251526]).
|1000036
|link:https://svnweb.freebsd.org/changeset/base/253049[253049]
|July 9, 2013
|10-CURRENT after the addition of a new function to the kernel GSSAPI module's function call interface.
|1000037
|link:https://svnweb.freebsd.org/changeset/base/253089[253089]
|July 9, 2013
|10-CURRENT after the migration of statistics structures to PCPU counters. Changed structures include: `ahstat`, `arpstat`, `espstat`, `icmp6_ifstat`, `icmp6stat`, `in6_ifstat`, `ip6stat`, `ipcompstat`, `ipipstat`, `ipsecstat`, `mrt6stat`, `mrtstat`, `pfkeystat`, `pim6stat`, `pimstat`, `rip6stat`, `udpstat` (rev link:https://svnweb.freebsd.org/changeset/base/253081[253081]).
|1000038
|link:https://svnweb.freebsd.org/changeset/base/253396[253396]
|July 16, 2013
|10-CURRENT after making `ARM EABI` the default ABI on arm, armeb, armv6, and armv6eb architectures.
|1000039
|link:https://svnweb.freebsd.org/changeset/base/253549[253549]
|July 22, 2013
|10-CURRENT after `CAM` and man:mps[4] driver scanning changes.
|1000040
|link:https://svnweb.freebsd.org/changeset/base/253638[253638]
|July 24, 2013
|10-CURRENT after addition of libusb pkgconf files.
|1000041
|link:https://svnweb.freebsd.org/changeset/base/253970[253970]
|August 5, 2013
|10-CURRENT after change from `time_second` to `time_uptime` in `PF_INET6`.
|1000042
|link:https://svnweb.freebsd.org/changeset/base/254138[254138]
|August 9, 2013
|10-CURRENT after VM subsystem change to unify soft and hard busy mechanisms.
|1000043
|link:https://svnweb.freebsd.org/changeset/base/254273[254273]
|August 13, 2013
|10-CURRENT after `WITH_ICONV` is enabled by default. A new man:src.conf[5] option, `WITH_LIBICONV_COMPAT` (disabled by default) adds `libiconv_open` to provide compatibility with the package:converters/libiconv[] port.
|1000044
|link:https://svnweb.freebsd.org/changeset/base/254358[254358]
|August 15, 2013
|10-CURRENT after `libc.so` conversion to an man:ld[1] script (rev link:https://svnweb.freebsd.org/changeset/base/251668[251668]).
|1000045
|link:https://svnweb.freebsd.org/changeset/base/254389[254389]
|August 15, 2013
|10-CURRENT after devfs programming interface change by replacing the cdevsw flag `D_UNMAPPED_IO` with the struct cdev flag `SI_UNMAPPED`.
|1000046
|link:https://svnweb.freebsd.org/changeset/base/254537[254537]
|August 19, 2013
|10-CURRENT after addition of `M_PROTO[9-12]` and removal of `M_FRAG\|M_FIRSTFRAG\|M_LASTFRAG` mbuf flags (rev link:https://svnweb.freebsd.org/changeset/base/254524[254524], link:https://svnweb.freebsd.org/changeset/base/254526[254526]).
|1000047
|link:https://svnweb.freebsd.org/changeset/base/254627[254627]
|August 21, 2013
|10-CURRENT after man:stat[2] update to allow storing some Windows/DOS and CIFS file attributes as man:stat[2] flags.
|1000048
|link:https://svnweb.freebsd.org/changeset/base/254672[254672]
|August 22, 2013
|10-CURRENT after modification of structure `xsctp_inpcb`.
|1000049
|link:https://svnweb.freebsd.org/changeset/base/254760[254760]
|August 24, 2013
|10-CURRENT after man:physio[9] support for devices that do not function properly with split I/O, such as man:sa[4].
|1000050
|link:https://svnweb.freebsd.org/changeset/base/254844[254844]
|August 24, 2013
|10-CURRENT after modifications of structure `mbuf` (rev link:https://svnweb.freebsd.org/changeset/base/254780[254780], link:https://svnweb.freebsd.org/changeset/base/254799[254799], link:https://svnweb.freebsd.org/changeset/base/254804[254804], link:https://svnweb.freebsd.org/changeset/base/254807[254807], link:https://svnweb.freebsd.org/changeset/base/254842[254842]).
|1000051
|link:https://svnweb.freebsd.org/changeset/base/254887[254887]
|August 25, 2013
|10-CURRENT after Radeon KMS driver import (rev link:https://svnweb.freebsd.org/changeset/base/254885[254885]).
|1000052
|link:https://svnweb.freebsd.org/changeset/base/255180[255180]
|September 3, 2013
|10-CURRENT after the import of NetBSD `libexecinfo` was connected to the build.
|1000053
|link:https://svnweb.freebsd.org/changeset/base/255305[255305]
|September 6, 2013
|10-CURRENT after API and ABI changes to the Capsicum framework.
|1000054
|link:https://svnweb.freebsd.org/changeset/base/255321[255321]
|September 6, 2013
|10-CURRENT after `gcc` and `libstdc++` are no longer built by default.
|1000055
|link:https://svnweb.freebsd.org/changeset/base/255449[255449]
|September 6, 2013
|10-CURRENT after addition of `MMAP_32BIT` man:mmap[2] flag (rev link:https://svnweb.freebsd.org/changeset/base/255426[255426]).
|1000100
|link:https://svnweb.freebsd.org/changeset/base/259065[259065]
|December 7, 2013
|`releng/10.0` branched from `stable/10`.
|1000500
|link:https://svnweb.freebsd.org/changeset/base/256283[256283]
|October 10, 2013
|10-STABLE after branch from `head/`.
|1000501
|link:https://svnweb.freebsd.org/changeset/base/256916[256916]
|October 22, 2013
|10-STABLE after addition of first-boot man:rc[8] support.
|1000502
|link:https://svnweb.freebsd.org/changeset/base/258398[258398]
|November 20, 2013
|10-STABLE after removal of iconv symbols from `libc.so.7`.
|1000510
|link:https://svnweb.freebsd.org/changeset/base/259067[259067]
|December 7, 2013
|`releng/10.0` __FreeBSD_version update to prevent the value from going backwards.
|1000700
|link:https://svnweb.freebsd.org/changeset/base/259069[259069]
|December 7, 2013
|10-STABLE after `releng/10.0` branch.
|1000701
|link:https://svnweb.freebsd.org/changeset/base/259447[259447]
|December 15, 2013
|10.0-STABLE after Heimdal encoding fix.
|1000702
|link:https://svnweb.freebsd.org/changeset/base/260135[260135]
|December 31, 2013
|10-STABLE after MAP_STACK fixes.
|1000703
|link:https://svnweb.freebsd.org/changeset/base/262801[262801]
|March 5, 2014
|10-STABLE after upgrade of libc++ to 3.4 release.
|1000704
|link:https://svnweb.freebsd.org/changeset/base/262889[262889]
|March 7, 2014
|10-STABLE after MFC of the man:vt[4] driver (rev link:https://svnweb.freebsd.org/changeset/base/262861[262861]).
|1000705
|link:https://svnweb.freebsd.org/changeset/base/263508[263508]
|March 21, 2014
|10-STABLE after upgrade of llvm/clang to 3.4 release.
|1000706
|link:https://svnweb.freebsd.org/changeset/base/264214[264214]
|April 6, 2014
|10-STABLE after GCC support for `__block` definition.
|1000707
|link:https://svnweb.freebsd.org/changeset/base/264289[264289]
|April 8, 2014
|10-STABLE after FreeBSD-SA-14:06.openssl.
|1000708
|link:https://svnweb.freebsd.org/changeset/base/265122[265122]
|April 30, 2014
|10-STABLE after FreeBSD-SA-14:07.devfs, FreeBSD-SA-14:08.tcp, and FreeBSD-SA-14:09.openssl.
|1000709
|link:https://svnweb.freebsd.org/changeset/base/265946[265946]
|May 13, 2014
|10-STABLE after support for UDP-Lite protocol (RFC 3828).
|1000710
|link:https://svnweb.freebsd.org/changeset/base/267465[267465]
|June 13, 2014
|10-STABLE after changes to man:strcasecmp[3], moving man:strcasecmp_l[3] and man:strncasecmp_l[3] from [.filename]#<string.h># to [.filename]#<strings.h># for POSIX 2008 compliance.
|1000711
|link:https://svnweb.freebsd.org/changeset/base/268442[268442]
|July 8, 2014
|10-STABLE after FreeBSD-SA-14:17.kmem (rev link:https://svnweb.freebsd.org/changeset/base/268432[268432]).
|1000712
|link:https://svnweb.freebsd.org/changeset/base/269400[269400]
|August 1, 2014
|10-STABLE after man:nfsd[8] 4.1 merge (rev link:https://svnweb.freebsd.org/changeset/base/269398[269398]).
|1000713
|link:https://svnweb.freebsd.org/changeset/base/269484[269484]
|August 3, 2014
|10-STABLE after man:regex[3] library update to add ">" and "<" delimiters.
|1000714
|link:https://svnweb.freebsd.org/changeset/base/270174[270174]
|August 3, 2014
|10-STABLE after `SOCK_DGRAM` bug fix (rev link:https://svnweb.freebsd.org/changeset/base/269490[269490]).
|1000715
|link:https://svnweb.freebsd.org/changeset/base/271341[271341]
|September 9, 2014
|10-STABLE after FreeBSD-SA-14:18 (rev link:https://svnweb.freebsd.org/changeset/base/269686[269686]).
|1000716
|link:https://svnweb.freebsd.org/changeset/base/271686[271686]
|September 16, 2014
|10-STABLE after FreeBSD-SA-14:19 (rev link:https://svnweb.freebsd.org/changeset/base/271667[271667]).
|1000717
|link:https://svnweb.freebsd.org/changeset/base/271816[271816]
|September 18, 2014
|10-STABLE after i915 HW context support.
|1001000
|link:https://svnweb.freebsd.org/changeset/base/272463[272463]
|October 2, 2014
|10.1-RC1 after releng/10.1 branch.
|1001500
|link:https://svnweb.freebsd.org/changeset/base/272464[272464]
|October 2, 2014
|10-STABLE after releng/10.1 branch.
|1001501
|link:https://svnweb.freebsd.org/changeset/base/273432[273432]
|October 21, 2014
|10-STABLE after FreeBSD-SA-14:20, FreeBSD-SA-14:22, and FreeBSD-SA-14:23 (rev link:https://svnweb.freebsd.org/changeset/base/273411[273411]).
|1001502
|link:https://svnweb.freebsd.org/changeset/base/274162[274162]
|November 4, 2014
|10-STABLE after FreeBSD-SA-14:23, FreeBSD-SA-14:24, and FreeBSD-SA-14:25.
|1001503
|link:https://svnweb.freebsd.org/changeset/base/275040[275040]
|November 25, 2014
|10-STABLE after merging new libraries/utilities (man:dpv[1], man:dpv[3], and man:figpar[3]) for data throughput visualization.
|1001504
|link:https://svnweb.freebsd.org/changeset/base/275742[275742]
|December 13, 2014
|10-STABLE after merging an important fix to the LLVM vectorizer, which could lead to buffer overruns in some cases.
|1001505
|link:https://svnweb.freebsd.org/changeset/base/276633[276633]
|January 3, 2015
|10-STABLE after merging some arm constants in link:https://svnweb.freebsd.org/changeset/base/276312[276312].
|1001506
|link:https://svnweb.freebsd.org/changeset/base/277087[277087]
|January 12, 2015
|10-STABLE after merging max table size update for yacc.
|1001507
|link:https://svnweb.freebsd.org/changeset/base/277790[277790]
|January 27, 2015
|10-STABLE after changes to the UDP tunneling callback to provide a context pointer and the source sockaddr.
|1001508
|link:https://svnweb.freebsd.org/changeset/base/278974[278974]
|February 18, 2015
|10-STABLE after addition of the `CDAI_TYPE_EXT_INQ` request type.
|1001509
|link:https://svnweb.freebsd.org/changeset/base/279287[279287]
|February 25, 2015
|10-STABLE after FreeBSD-EN-15:01.vt, FreeBSD-EN-15:02.openssl, FreeBSD-EN-15:03.freebsd-update, FreeBSD-SA-15:04.igmp, and FreeBSD-SA-15:05.bind.
|1001510
|link:https://svnweb.freebsd.org/changeset/base/279329[279329]
|February 26, 2015
|10-STABLE after MFC of rev link:https://svnweb.freebsd.org/changeset/base/278964[278964].
|1001511
|link:https://svnweb.freebsd.org/changeset/base/280246[280246]
|March 19, 2015
|10-STABLE after [.filename]#sys/capability.h# is renamed to [.filename]#sys/capsicum.h# (rev link:https://svnweb.freebsd.org/changeset/base/280224[280224]).
|1001512
|link:https://svnweb.freebsd.org/changeset/base/280438[280438]
|March 24, 2015
|10-STABLE after addition of new man:mtio[4], man:sa[4] ioctls.
|1001513
|link:https://svnweb.freebsd.org/changeset/base/281955[281955]
|April 24, 2015
|10-STABLE after starting the process of removing the use of the deprecated `M_FLOWID` flag from the network code.
|1001514
|link:https://svnweb.freebsd.org/changeset/base/282275[282275]
|April 30, 2015
|10-STABLE after MFC of man:iconv[3] fixes.
|1001515
|link:https://svnweb.freebsd.org/changeset/base/282781[282781]
|May 11, 2015
|10-STABLE after adding back `M_FLOWID`.
|1001516
|link:https://svnweb.freebsd.org/changeset/base/283341[283341]
|May 24, 2015
|10-STABLE after MFC of many USB things.
|1001517
|link:https://svnweb.freebsd.org/changeset/base/283950[283950]
|June 3, 2015
|10-STABLE after MFC of sound related things.
|1001518
|link:https://svnweb.freebsd.org/changeset/base/284204[284204]
|June 10, 2015
|10-STABLE after MFC of zfs vfs fixes (rev link:https://svnweb.freebsd.org/changeset/base/284203[284203]).
|1001519
|link:https://svnweb.freebsd.org/changeset/base/284720[284720]
|June 23, 2015
|10-STABLE after reverting the `MAXCPU` bump on amd64.
|1002000
|link:https://svnweb.freebsd.org/changeset/base/285830[285830]
|July 24, 2015
|`releng/10.2` branched from 10-STABLE.
|1002500
|link:https://svnweb.freebsd.org/changeset/base/285831[285831]
|July 24, 2015
|10-STABLE after `releng/10.2` branched from 10-STABLE.
|1002501
|link:https://svnweb.freebsd.org/changeset/base/289005[289005]
|October 8, 2015
|10-STABLE after merge of ZFS changes that affected the internal interface of the zfeature_info structure (rev link:https://svnweb.freebsd.org/changeset/base/288572[288572]).
|1002502
|link:https://svnweb.freebsd.org/changeset/base/291243[291243]
|November 24, 2015
|10-STABLE after merge of dump device changes that affected the arguments of `g_dev_setdumpdev()` (rev link:https://svnweb.freebsd.org/changeset/base/291215[291215]).
|1002503
|link:https://svnweb.freebsd.org/changeset/base/292224[292224]
|December 14, 2015
|10-STABLE after merge of changes to the internal interface between the nfsd.ko and nfscommon.ko modules, requiring them to be upgraded together (rev link:https://svnweb.freebsd.org/changeset/base/292223[292223]).
|1002504
|link:https://svnweb.freebsd.org/changeset/base/292589[292589]
|December 22, 2015
|10-STABLE after merge of xz 5.2.2 (multithread support) (rev link:https://svnweb.freebsd.org/changeset/base/292588[292588]).
|1002505
|link:https://svnweb.freebsd.org/changeset/base/292908[292908]
|December 30, 2015
|10-STABLE after merge of changes to man:pci[4] (rev link:https://svnweb.freebsd.org/changeset/base/292907[292907]).
|1002506
|link:https://svnweb.freebsd.org/changeset/base/293476[293476]
|January 9, 2016
|10-STABLE after merge of man:utimensat[2] (rev link:https://svnweb.freebsd.org/changeset/base/293473[293473]).
|1002507
|link:https://svnweb.freebsd.org/changeset/base/293610[293610]
|January 9, 2016
|10-STABLE after merge of changes to man:linux[4] (rev link:https://svnweb.freebsd.org/changeset/base/293477[293477] through link:https://svnweb.freebsd.org/changeset/base/293609[293609]).
|1002508
|link:https://svnweb.freebsd.org/changeset/base/293619[293619]
|January 9, 2016
|10-STABLE after merge of changes to man:figpar[3] types/macros (rev link:https://svnweb.freebsd.org/changeset/base/290275[290275]).
|1002509
|link:https://svnweb.freebsd.org/changeset/base/295107[295107]
|February 1, 2016
|10-STABLE after merge of API change to man:dpv[3].
|1003000
|link:https://svnweb.freebsd.org/changeset/base/296373[296373]
|March 4, 2016
|`releng/10.3` branched from 10-STABLE.
|1003500
|link:https://svnweb.freebsd.org/changeset/base/296374[296374]
|March 4, 2016
|10-STABLE after `releng/10.3` branched from 10-STABLE.
|1003501
|link:https://svnweb.freebsd.org/changeset/base/298299[298299]
|June 19, 2016
|10-STABLE after adding kbdcontrol's -P option (rev link:https://svnweb.freebsd.org/changeset/base/298297[298297]).
|1003502
|link:https://svnweb.freebsd.org/changeset/base/299966[299966]
|June 19, 2016
|10-STABLE after libcrypto.so was made position independent.
|1003503
|link:https://svnweb.freebsd.org/changeset/base/300235[300235]
|June 19, 2016
|10-STABLE after allowing MK_ overrides (rev link:https://svnweb.freebsd.org/changeset/base/300233[300233]).
|1003504
|link:https://svnweb.freebsd.org/changeset/base/302066[302066]
|June 21, 2016
|10-STABLE after MFC of filemon changes from 11-CURRENT.
|1003505
|link:https://svnweb.freebsd.org/changeset/base/302228[302228]
|June 27, 2016
|10-STABLE after converting sed to use REG_STARTEND, fixing a Mesa issue.
|1003506
|link:https://svnweb.freebsd.org/changeset/base/304611[304611]
|August 22, 2016
|10-STABLE after adding C++11 thread_local support.
|1003507
|link:https://svnweb.freebsd.org/changeset/base/304864[304864]
|August 26, 2016
|10-STABLE after `LC_*_MASK` fix.
|1003508
|link:https://svnweb.freebsd.org/changeset/base/305734[305734]
|September 12, 2016
|10-STABLE after resolving a deadlock between `device_detach()` and man:usbd_do_request_flags[9].
|1003509
|link:https://svnweb.freebsd.org/changeset/base/307331[307331]
|October 14, 2016
|10-STABLE after ZFS merges.
|1003510
|link:https://svnweb.freebsd.org/changeset/base/308047[308047]
|October 28, 2016
|10-STABLE after installing header files required for development with libzfs_core.
|1003511
|link:https://svnweb.freebsd.org/changeset/base/310121[310121]
|December 15, 2016
|10-STABLE after exporting whole thread name in `kinfo_proc` (rev link:https://svnweb.freebsd.org/changeset/base/309676[309676]).
|1003512
|link:https://svnweb.freebsd.org/changeset/base/315730[315730]
|March 22, 2017
|10-STABLE after libmd changes (rev link:https://svnweb.freebsd.org/changeset/base/314143[314143]).
|1003513
|link:https://svnweb.freebsd.org/changeset/base/316499[316499]
|April 4, 2017
|10-STABLE after making CAM SIM lock optional (revs link:https://svnweb.freebsd.org/changeset/base/315673[315673], link:https://svnweb.freebsd.org/changeset/base/315674[315674]).
|1003514
|link:https://svnweb.freebsd.org/changeset/base/318198[318198]
|May 11, 2017
|10-STABLE after merging the addition of the [.filename]#<dev/mmc/mmc_ioctl.h># header.
|1003515
|link:https://svnweb.freebsd.org/changeset/base/321222[321222]
|July 19, 2017
|10-STABLE after adding C++14 sized deallocation functions to libc++.
|1003516
|link:https://svnweb.freebsd.org/changeset/base/321717[321717]
|July 30, 2017
|10-STABLE after merging the `MAP_GUARD` man:mmap[2] flag addition.
|1004000
|link:https://svnweb.freebsd.org/changeset/base/323604[323604]
|September 15, 2017
|`releng/10.4` branched from 10-STABLE.
|1004500
|link:https://svnweb.freebsd.org/changeset/base/323605[323605]
|September 15, 2017
|10-STABLE after `releng/10.4` branched from 10-STABLE.
|1004501
|link:https://svnweb.freebsd.org/changeset/base/328379[328379]
|January 24, 2018
|10-STABLE after merging link:https://svnweb.freebsd.org/changeset/base/325028[325028], fixing `ptrace()` to always clear the correct thread event when resuming.
|1004502
|link:https://svnweb.freebsd.org/changeset/base/356396[356396]
|January 6, 2020
|10-STABLE after making USB statistics per-device instead of per-bus.
|1004503
|link:https://svnweb.freebsd.org/changeset/base/356681[356681]
|January 13, 2020
|10-STABLE after adding a dedicated counter for cancelled USB transfers.
|===
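
The `__FreeBSD_version` values above are normally tested at compile time. The sketch below is only an illustration, not part of any particular port: it assumes the man:strcasecmp_l[3] header move recorded at 1000710 and the `MAP_GUARD` man:mmap[2] flag recorded at 1003516, and the function name `reserve_guard_region` is hypothetical.

[source,c]
----
/*
 * Illustrative compatibility shim based on two entries in the table above:
 *   1000710: strcasecmp_l(3) moved from <string.h> to <strings.h>
 *   1003516: the MAP_GUARD mmap(2) flag became available on 10-STABLE
 */
#include <sys/param.h>		/* defines __FreeBSD_version */

#if defined(__FreeBSD_version) && __FreeBSD_version >= 1000710
#include <strings.h>		/* strcasecmp_l() lives here now */
#else
#include <string.h>		/* older headers kept it here */
#endif

#include <sys/mman.h>

/* Reserve an address range, using MAP_GUARD when the headers provide it. */
static void *
reserve_guard_region(size_t len)
{
#if defined(__FreeBSD_version) && __FreeBSD_version >= 1003516
	return (mmap(NULL, len, PROT_NONE, MAP_GUARD, -1, 0));
#else
	return (mmap(NULL, len, PROT_NONE, MAP_ANON | MAP_PRIVATE, -1, 0));
#endif
}
----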
[[versions-9]]
== FreeBSD 9 Versions
[[freebsd-versions-table-9]]
.FreeBSD 9 `__FreeBSD_version` Values
[cols="1,1,1,1", frame="none", options="header"]
|===
| Value
| Revision
| Date
| Release
|900000
|link:https://svnweb.freebsd.org/changeset/base/196432[196432]
|August 22, 2009
|9.0-CURRENT.
|900001
|link:https://svnweb.freebsd.org/changeset/base/197019[197019]
|September 8, 2009
|9.0-CURRENT after importing x86emu, a software emulator for real mode x86 CPU from OpenBSD.
|900002
|link:https://svnweb.freebsd.org/changeset/base/197430[197430]
|September 23, 2009
|9.0-CURRENT after implementing the EVFILT_USER kevent filter functionality.
|900003
|link:https://svnweb.freebsd.org/changeset/base/200039[200039]
|December 2, 2009
|9.0-CURRENT after addition of man:sigpause[2] and PIE support in csu.
|900004
|link:https://svnweb.freebsd.org/changeset/base/200185[200185]
|December 6, 2009
|9.0-CURRENT after addition of libulog and its libutempter compatibility interface.
|900005
|link:https://svnweb.freebsd.org/changeset/base/200447[200447]
|December 12, 2009
|9.0-CURRENT after addition of man:sleepq_sleepcnt[9], which can be used to query the number of waiters on a specific waiting queue.
|900006
|link:https://svnweb.freebsd.org/changeset/base/201513[201513]
|January 4, 2010
|9.0-CURRENT after change of the man:scandir[3] and man:alphasort[3] prototypes to conform to SUSv4.
|900007
|link:https://svnweb.freebsd.org/changeset/base/202219[202219]
|January 13, 2010
|9.0-CURRENT after the removal of man:utmp[5] and the addition of utmpx (see man:getutxent[3]) for improved logging of user logins and system events.
|900008
|link:https://svnweb.freebsd.org/changeset/base/202722[202722]
|January 20, 2010
|9.0-CURRENT after the import of BSDL bc/dc and the deprecation of GNU bc/dc.
|900009
|link:https://svnweb.freebsd.org/changeset/base/203052[203052]
|January 26, 2010
|9.0-CURRENT after the addition of SIOCGIFDESCR and SIOCSIFDESCR ioctls to network interfaces. These ioctls can be used to manipulate interface descriptions, as inspired by OpenBSD.
|900010
|link:https://svnweb.freebsd.org/changeset/base/205471[205471]
|March 22, 2010
|9.0-CURRENT after the import of zlib 1.2.4.
|900011
|link:https://svnweb.freebsd.org/changeset/base/207410[207410]
|April 24, 2010
|9.0-CURRENT after adding soft-updates journalling.
|900012
|link:https://svnweb.freebsd.org/changeset/base/207842[207842]
|May 10, 2010
|9.0-CURRENT after adding liblzma, xz, xzdec, and lzmainfo.
|900013
|link:https://svnweb.freebsd.org/changeset/base/208486[208486]
|May 24, 2010
|9.0-CURRENT after bringing in USB fixes for man:linux[4].
|900014
|link:https://svnweb.freebsd.org/changeset/base/208973[208973]
|June 10, 2010
|9.0-CURRENT after adding Clang.
|900015
|link:https://svnweb.freebsd.org/changeset/base/210390[210390]
|July 22, 2010
|9.0-CURRENT after the import of BSD grep.
|900016
|link:https://svnweb.freebsd.org/changeset/base/210565[210565]
|July 28, 2010
|9.0-CURRENT after adding mti_zone to struct malloc_type_internal.
|900017
|link:https://svnweb.freebsd.org/changeset/base/211701[211701]
|August 23, 2010
|9.0-CURRENT after changing the default grep back to GNU grep and adding the WITH_BSD_GREP knob.
|900018
|link:https://svnweb.freebsd.org/changeset/base/211735[211735]
|August 24, 2010
|9.0-CURRENT after the man:pthread_kill[3]-generated signal is identified as SI_LWP in si_code. Previously, si_code was SI_USER.
|900019
|link:https://svnweb.freebsd.org/changeset/base/211937[211937]
|August 28, 2010
|9.0-CURRENT after addition of the MAP_PREFAULT_READ flag to man:mmap[2].
|900020
|link:https://svnweb.freebsd.org/changeset/base/212381[212381]
|September 9, 2010
|9.0-CURRENT after adding drain functionality to sbufs, which also changed the layout of struct sbuf.
|900021
|link:https://svnweb.freebsd.org/changeset/base/212568[212568]
|September 13, 2010
|9.0-CURRENT after DTrace has grown support for userland tracing.
|900022
|link:https://svnweb.freebsd.org/changeset/base/213395[213395]
|October 2, 2010
|9.0-CURRENT after addition of the BSDL man utilities and retirement of GNU/GPL man utilities.
|900023
|link:https://svnweb.freebsd.org/changeset/base/213700[213700]
|October 11, 2010
|9.0-CURRENT after updating xz to git 20101010 snapshot.
|900024
|link:https://svnweb.freebsd.org/changeset/base/215127[215127]
|November 11, 2010
|9.0-CURRENT after libgcc.a was replaced by libcompiler_rt.a.
|900025
|link:https://svnweb.freebsd.org/changeset/base/215166[215166]
|November 12, 2010
|9.0-CURRENT after the introduction of the modularised congestion control.
|900026
|link:https://svnweb.freebsd.org/changeset/base/216088[216088]
|November 30, 2010
|9.0-CURRENT after the introduction of Serial Management Protocol (SMP) passthrough and the XPT_SMP_IO and XPT_GDEV_ADVINFO CAM CCBs.
|900027
|link:https://svnweb.freebsd.org/changeset/base/216212[216212]
|December 5, 2010
|9.0-CURRENT after the addition of log2 to libm.
|900028
|link:https://svnweb.freebsd.org/changeset/base/216615[216615]
|December 21, 2010
|9.0-CURRENT after the addition of the Hhook (Helper Hook), Khelp (Kernel Helpers), and Object Specific Data (OSD) KPIs.
|900029
|link:https://svnweb.freebsd.org/changeset/base/216758[216758]
|December 28, 2010
|9.0-CURRENT after the modification of the TCP stack to allow Khelp modules to interact with it via helper hook points and store per-connection data in the TCP control block.
|900030
|link:https://svnweb.freebsd.org/changeset/base/217309[217309]
|January 12, 2011
|9.0-CURRENT after the update of libdialog to version 20100428.
|900031
|link:https://svnweb.freebsd.org/changeset/base/218414[218414]
|February 7, 2011
|9.0-CURRENT after the addition of man:pthread_getthreadid_np[3].
|900032
|link:https://svnweb.freebsd.org/changeset/base/218425[218425]
|February 8, 2011
|9.0-CURRENT after the removal of the uio_yield prototype and symbol.
|900033
|link:https://svnweb.freebsd.org/changeset/base/218822[218822]
|February 18, 2011
|9.0-CURRENT after the update of binutils to version 2.17.50.
|900034
|link:https://svnweb.freebsd.org/changeset/base/219406[219406]
|March 8, 2011
|9.0-CURRENT after the struct sysvec (sv_schedtail) changes.
|900035
|link:https://svnweb.freebsd.org/changeset/base/220150[220150]
|March 29, 2011
|9.0-CURRENT after the update of base gcc and libstdc++ to the last GPLv2 licensed revision.
|900036
|link:https://svnweb.freebsd.org/changeset/base/220770[220770]
|April 18, 2011
|9.0-CURRENT after the removal of libobjc and Objective-C support from the base system.
|900037
|link:https://svnweb.freebsd.org/changeset/base/221862[221862]
|May 13, 2011
|9.0-CURRENT after importing the man:libprocstat[3] library and man:fuser[1] utility to the base system.
|900038
|link:https://svnweb.freebsd.org/changeset/base/222167[222167]
|May 22, 2011
|9.0-CURRENT after adding a lock flag argument to man:VFS_FHTOVP[9].
|900039
|link:https://svnweb.freebsd.org/changeset/base/223637[223637]
|June 28, 2011
|9.0-CURRENT after importing pf from OpenBSD 4.5.
|900040
|link:https://svnweb.freebsd.org/changeset/base/224217[224217]
|July 19, 2011
|Increase default MAXCPU for FreeBSD to 64 on amd64 and ia64 and to 128 for XLP (mips).
|900041
|link:https://svnweb.freebsd.org/changeset/base/224834[224834]
|August 13, 2011
|9.0-CURRENT after the implementation of Capsicum capabilities; man:fget[9] gains a rights argument.
|900042
|link:https://svnweb.freebsd.org/changeset/base/225350[225350]
|August 28, 2011
|Bump shared libraries' version numbers for libraries whose ABI has changed in preparation for 9.0.
|900043
|link:https://svnweb.freebsd.org/changeset/base/225350[225350]
|September 2, 2011
|Add automatic detection of USB mass storage devices which do not support the no synchronize cache SCSI command.
|900044
|link:https://svnweb.freebsd.org/changeset/base/225469[225469]
|September 10, 2011
|Re-factor auto-quirk. 9.0-RELEASE.
|900045
|link:https://svnweb.freebsd.org/changeset/base/229285[229285]
|January 2, 2012
|9-STABLE after MFC of true/false from 1000002.
|900500
|link:https://svnweb.freebsd.org/changeset/base/229318[229318]
|January 2, 2012
|9.0-STABLE.
|900501
|link:https://svnweb.freebsd.org/changeset/base/229723[229723]
|January 6, 2012
|9.0-STABLE after merging the addition of the man:posix_fadvise[2] system call.
|900502
|link:https://svnweb.freebsd.org/changeset/base/230237[230237]
|January 16, 2012
|9.0-STABLE after merging gperf 3.0.3.
|900503
|link:https://svnweb.freebsd.org/changeset/base/231768[231768]
|February 15, 2012
|9.0-STABLE after introduction of the new extensible man:sysctl[3] interface NET_RT_IFLISTL to query address lists.
|900504
|link:https://svnweb.freebsd.org/changeset/base/232728[232728]
|March 3, 2012
|9.0-STABLE after changes related to mounting of filesystems inside a jail.
|900505
|link:https://svnweb.freebsd.org/changeset/base/232945[232945]
|March 13, 2012
|9.0-STABLE after introduction of new man:tcp[4] socket options: TCP_KEEPINIT, TCP_KEEPIDLE, TCP_KEEPINTVL, and TCP_KEEPCNT.
|900506
|link:https://svnweb.freebsd.org/changeset/base/235786[235786]
|May 22, 2012
|9.0-STABLE after introduction of the `quick_exit` function and related changes required for C++11.
|901000
|link:https://svnweb.freebsd.org/changeset/base/239082[239082]
|August 5, 2012
|9.1-RELEASE.
|901500
|link:https://svnweb.freebsd.org/changeset/base/239081[239081]
|August 6, 2012
|9.1-STABLE after branching releng/9.1 (RELENG_9_1).
|901501
|link:https://svnweb.freebsd.org/changeset/base/240659[240659]
|November 11, 2012
|9.1-STABLE after man:LIST_PREV[3] added to queue.h (rev link:https://svnweb.freebsd.org/changeset/base/242893[242893]) and KBI change in USB serial devices.
|901502
|link:https://svnweb.freebsd.org/changeset/base/243656[243656]
|November 28, 2012
|9.1-STABLE after the USB serial jitter buffer change, which requires a rebuild of USB serial device modules.
|901503
|link:https://svnweb.freebsd.org/changeset/base/247090[247090]
|February 21, 2013
|9.1-STABLE after USB moved to the driver structure, requiring a rebuild of all USB modules. Also indicates the presence of nmtree.
|901504
|link:https://svnweb.freebsd.org/changeset/base/248338[248338]
|March 15, 2013
|9.1-STABLE after install gained the -l, -M, -N, and related flags and cat gained the -l option.
|901505
|link:https://svnweb.freebsd.org/changeset/base/251687[251687]
|June 13, 2013
|9.1-STABLE after fixes in ctfmerge bootstrapping (rev link:https://svnweb.freebsd.org/changeset/base/249243[249243]).
|902001
|link:https://svnweb.freebsd.org/changeset/base/253912[253912]
|August 3, 2013
|`releng/9.2` branched from `stable/9`.
|902501
|link:https://svnweb.freebsd.org/changeset/base/253913[253913]
|August 2, 2013
|9.2-STABLE after creation of `releng/9.2` branch.
|902502
|link:https://svnweb.freebsd.org/changeset/base/254938[254938]
|August 26, 2013
|9.2-STABLE after inclusion of the `PIM_RESCAN` CAM path inquiry flag.
|902503
|link:https://svnweb.freebsd.org/changeset/base/254979[254979]
|August 27, 2013
|9.2-STABLE after inclusion of the `SI_UNMAPPED` cdev flag.
|902504
|link:https://svnweb.freebsd.org/changeset/base/256917[256917]
|October 22, 2013
|9.2-STABLE after inclusion of support for "first boot" man:rc[8] scripts.
|902505
|link:https://svnweb.freebsd.org/changeset/base/259448[259448]
|December 12, 2013
|9.2-STABLE after Heimdal encoding fix.
|902506
|link:https://svnweb.freebsd.org/changeset/base/260136[260136]
|December 31, 2013
|9-STABLE after MAP_STACK fixes (rev link:https://svnweb.freebsd.org/changeset/base/260082[260082]).
|902507
|link:https://svnweb.freebsd.org/changeset/base/262801[262801]
|March 5, 2014
|9-STABLE after upgrade of libc++ to 3.4 release.
|902508
|link:https://svnweb.freebsd.org/changeset/base/263171[263171]
|March 14, 2014
|9-STABLE after merge of the Radeon KMS driver (rev link:https://svnweb.freebsd.org/changeset/base/263170[263170]).
|902509
|link:https://svnweb.freebsd.org/changeset/base/263509[263509]
|March 21, 2014
|9-STABLE after upgrade of llvm/clang to 3.4 release.
|902510
|link:https://svnweb.freebsd.org/changeset/base/263818[263818]
|March 27, 2014
|9-STABLE after merge of the man:vt[4] driver.
|902511
|link:https://svnweb.freebsd.org/changeset/base/264289[264289]
|March 27, 2014
|9-STABLE after FreeBSD-SA-14:06.openssl.
|902512
|link:https://svnweb.freebsd.org/changeset/base/265123[265123]
|April 30, 2014
|9-STABLE after FreeBSD-SA-14:08.tcp.
|903000
|link:https://svnweb.freebsd.org/changeset/base/267656[267656]
|June 20, 2014
|9.3-RC1 after the `releng/9.3` branch.
|903500
|link:https://svnweb.freebsd.org/changeset/base/267657[267657]
|June 20, 2014
|9.3-STABLE after the `releng/9.3` branch.
|903501
|link:https://svnweb.freebsd.org/changeset/base/268443[268443]
|July 8, 2014
|9-STABLE after FreeBSD-SA-14:17.kmem (rev link:https://svnweb.freebsd.org/changeset/base/268433[268433]).
|903502
|link:https://svnweb.freebsd.org/changeset/base/270175[270175]
|August 19, 2014
|9-STABLE after `SOCK_DGRAM` bug fix (rev link:https://svnweb.freebsd.org/changeset/base/269789[269789]).
|903503
|link:https://svnweb.freebsd.org/changeset/base/271341[271341]
|September 9, 2014
|9-STABLE after FreeBSD-SA-14:18 (rev link:https://svnweb.freebsd.org/changeset/base/269687[269687]).
|903504
|link:https://svnweb.freebsd.org/changeset/base/271686[271686]
|September 16, 2014
|9-STABLE after FreeBSD-SA-14:19 (rev link:https://svnweb.freebsd.org/changeset/base/271668[271668]).
|903505
|link:https://svnweb.freebsd.org/changeset/base/273432[273432]
|October 21, 2014
|9-STABLE after FreeBSD-SA-14:20, FreeBSD-SA-14:21, and FreeBSD-SA-14:22 (rev link:https://svnweb.freebsd.org/changeset/base/273412[273412]).
|903506
|link:https://svnweb.freebsd.org/changeset/base/274162[274162]
|November 4, 2014
|9-STABLE after FreeBSD-SA-14:23, FreeBSD-SA-14:24, and FreeBSD-SA-14:25.
|903507
|link:https://svnweb.freebsd.org/changeset/base/275742[275742]
|December 13, 2014
|9-STABLE after merging an important fix to the LLVM vectorizer, which could lead to buffer overruns in some cases.
|903508
|link:https://svnweb.freebsd.org/changeset/base/279287[279287]
|February 25, 2015
|9-STABLE after FreeBSD-EN-15:01.vt, FreeBSD-EN-15:02.openssl, FreeBSD-EN-15:03.freebsd-update, FreeBSD-SA-15:04.igmp, and FreeBSD-SA-15:05.bind.
|903509
|link:https://svnweb.freebsd.org/changeset/base/296219[296219]
|February 29, 2016
|9-STABLE after bumping the default value of `compat.linux.osrelease` to `2.6.18` to support the linux-c6-* ports out of the box.
|903510
|link:https://svnweb.freebsd.org/changeset/base/300236[300236]
|May 19, 2016
|9-STABLE after the System Binary Interface (SBI) page was moved in the latest version of the Berkeley Boot Loader (BBL) due to a code size increase in link:https://svnweb.freebsd.org/changeset/base/300234[300234].
|903511
|link:https://svnweb.freebsd.org/changeset/base/305735[305735]
|September 12, 2016
|9-STABLE after resolving a deadlock between `device_detach()` and man:usbd_do_request_flags[9].
|===
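
Because a binary built against newer headers can still run on an older kernel, a compile-time test against a value from the table above is sometimes paired with a run-time check. The helper below is a hypothetical sketch, not an established interface: 900501 is taken from the man:posix_fadvise[2] entry above, and `kern.osreldate` is the sysctl that reports the running system's `__FreeBSD_version`.

[source,c]
----
/*
 * Hypothetical helper combining a compile-time and a run-time check.
 * 900501 is the "posix_fadvise(2) merged to 9.0-STABLE" value from the
 * table above.
 */
#include <sys/param.h>		/* __FreeBSD_version */
#include <sys/sysctl.h>
#include <fcntl.h>

static void
prefetch_hint(int fd, off_t offset, off_t len)
{
#if defined(__FreeBSD_version) && __FreeBSD_version >= 900501
	int osrel = 0;
	size_t olen = sizeof(osrel);

	/* Skip the hint when running on a kernel that predates the syscall. */
	if (sysctlbyname("kern.osreldate", &osrel, &olen, NULL, 0) == 0 &&
	    osrel >= 900501)
		(void)posix_fadvise(fd, offset, len, POSIX_FADV_WILLNEED);
#else
	(void)fd;
	(void)offset;
	(void)len;
#endif
}
----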
[[versions-8]]
== FreeBSD 8 Versions
[[freebsd-versions-table-8]]
.FreeBSD 8 `__FreeBSD_version` Values
[cols="1,1,1,1", frame="none", options="header"]
|===
| Value
| Revision
| Date
| Release
|800000
|link:https://svnweb.freebsd.org/changeset/base/172531[172531]
|October 11, 2007
|8.0-CURRENT. Separating wide and single byte ctype.
|800001
|link:https://svnweb.freebsd.org/changeset/base/172688[172688]
|October 16, 2007
|8.0-CURRENT after libpcap 0.9.8 and tcpdump 3.9.8 import.
|800002
|link:https://svnweb.freebsd.org/changeset/base/172841[172841]
|October 21, 2007
|8.0-CURRENT after renaming man:kthread_create[9] and friends to man:kproc_create[9] etc.
|800003
|link:https://svnweb.freebsd.org/changeset/base/172932[172932]
|October 24, 2007
|8.0-CURRENT after ABI backwards compatibility to the FreeBSD 4/5/6 versions of the PCIOCGETCONF, PCIOCREAD and PCIOCWRITE IOCTLs was added, which required the ABI of the PCIOCGETCONF IOCTL to be broken again.
|800004
|link:https://svnweb.freebsd.org/changeset/base/173573[173573]
|November 12, 2007
|8.0-CURRENT after the man:agp[4] driver moved from src/sys/pci to src/sys/dev/agp.
|800005
|link:https://svnweb.freebsd.org/changeset/base/174261[174261]
|December 4, 2007
|8.0-CURRENT after changes to the jumbo frame allocator (rev link:https://svnweb.freebsd.org/changeset/base/174247[174247]).
|800006
|link:https://svnweb.freebsd.org/changeset/base/174399[174399]
|December 7, 2007
|8.0-CURRENT after the addition of callgraph capture functionality to man:hwpmc[4].
|800007
|link:https://svnweb.freebsd.org/changeset/base/174901[174901]
|December 25, 2007
|8.0-CURRENT after `kdb_enter()` gains a "why" argument.
|800008
|link:https://svnweb.freebsd.org/changeset/base/174951[174951]
|December 28, 2007
|8.0-CURRENT after LK_EXCLUPGRADE option removal.
|800009
|link:https://svnweb.freebsd.org/changeset/base/175168[175168]
|January 9, 2008
|8.0-CURRENT after introduction of man:lockmgr_disown[9].
|800010
|link:https://svnweb.freebsd.org/changeset/base/175204[175204]
|January 10, 2008
|8.0-CURRENT after the man:vn_lock[9] prototype change.
|800011
|link:https://svnweb.freebsd.org/changeset/base/175295[175295]
|January 13, 2008
|8.0-CURRENT after the man:VOP_LOCK[9] and man:VOP_UNLOCK[9] prototype changes.
|800012
|link:https://svnweb.freebsd.org/changeset/base/175487[175487]
|January 19, 2008
|8.0-CURRENT after introduction of man:lockmgr_recursed[9], man:BUF_RECURSED[9] and man:BUF_ISLOCKED[9] and the removal of `BUF_REFCNT()`.
|800013
|link:https://svnweb.freebsd.org/changeset/base/175581[175581]
|January 23, 2008
|8.0-CURRENT after introduction of the "ASCII" encoding.
|800014
|link:https://svnweb.freebsd.org/changeset/base/175636[175636]
|January 24, 2008
|8.0-CURRENT after changing the prototype of man:lockmgr[9] and removal of `lockcount()` and `LOCKMGR_ASSERT()`.
|800015
|link:https://svnweb.freebsd.org/changeset/base/175688[175688]
|January 26, 2008
|8.0-CURRENT after extending the types of the man:fts[3] structures.
|800016
|link:https://svnweb.freebsd.org/changeset/base/175872[175872]
|February 1, 2008
|8.0-CURRENT after adding an argument to man:MEXTADD[9].
|800017
|link:https://svnweb.freebsd.org/changeset/base/176015[176015]
|February 6, 2008
|8.0-CURRENT after the introduction of LK_NODUP and LK_NOWITNESS options in the man:lockmgr[9] space.
|800018
|link:https://svnweb.freebsd.org/changeset/base/176112[176112]
|February 8, 2008
|8.0-CURRENT after the addition of m_collapse.
|800019
|link:https://svnweb.freebsd.org/changeset/base/176124[176124]
|February 9, 2008
|8.0-CURRENT after the addition of current working directory, root directory, and jail directory support to the kern.proc.filedesc sysctl.
|800020
|link:https://svnweb.freebsd.org/changeset/base/176251[176251]
|February 13, 2008
|8.0-CURRENT after introduction of man:lockmgr_assert[9] and `BUF_ASSERT` functions.
|800021
|link:https://svnweb.freebsd.org/changeset/base/176321[176321]
|February 15, 2008
|8.0-CURRENT after introduction of man:lockmgr_args[9] and LK_INTERNAL flag removal.
|800022
|link:https://svnweb.freebsd.org/changeset/base/176556[176556]
|(backed out)
|8.0-CURRENT after changing the default system ar to BSD man:ar[1].
|800023
|link:https://svnweb.freebsd.org/changeset/base/176560[176560]
|February 25, 2008
|8.0-CURRENT after changing the prototypes of man:lockstatus[9] and man:VOP_ISLOCKED[9], more specifically retiring the `struct thread` argument.
|800024
|link:https://svnweb.freebsd.org/changeset/base/176709[176709]
|March 1, 2008
|8.0-CURRENT after removing the `lockwaiters` and `BUF_LOCKWAITERS` functions, changing the return value of `brelvp` from void to int, and introducing new flags for man:lockinit[9].
|800025
|link:https://svnweb.freebsd.org/changeset/base/176958[176958]
|March 8, 2008
|8.0-CURRENT after adding F_DUP2FD command to man:fcntl[2].
|800026
|link:https://svnweb.freebsd.org/changeset/base/177086[177086]
|March 12, 2008
|8.0-CURRENT after changing the priority parameter to cv_broadcastpri such that 0 means no priority.
|800027
|link:https://svnweb.freebsd.org/changeset/base/177551[177551]
|March 24, 2008
|8.0-CURRENT after changing the bpf monitoring ABI when zerocopy bpf buffers were added.
|800028
|link:https://svnweb.freebsd.org/changeset/base/177637[177637]
|March 26, 2008
|8.0-CURRENT after adding l_sysid to struct flock.
|800029
|link:https://svnweb.freebsd.org/changeset/base/177688[177688]
|March 28, 2008
|8.0-CURRENT after reintegration of the `BUF_LOCKWAITERS` function and the addition of man:lockmgr_waiters[9].
|800030
|link:https://svnweb.freebsd.org/changeset/base/177844[177844]
|April 1, 2008
|8.0-CURRENT after the introduction of the man:rw_try_rlock[9] and man:rw_try_wlock[9] functions.
|800031
|link:https://svnweb.freebsd.org/changeset/base/177958[177958]
|April 6, 2008
|8.0-CURRENT after the introduction of the `lockmgr_rw` and `lockmgr_args_rw` functions.
|800032
|link:https://svnweb.freebsd.org/changeset/base/178006[178006]
|April 8, 2008
|8.0-CURRENT after the implementation of the openat and related syscalls, the introduction of the O_EXEC flag for man:open[2], and the addition of the corresponding Linux compatibility syscalls.
|800033
|link:https://svnweb.freebsd.org/changeset/base/178017[178017]
|April 8, 2008
|8.0-CURRENT after adding man:write[2] support for man:psm[4] at the native operation level. Now arbitrary commands can be written to [.filename]#/dev/psm%d# and status can be read back from it.
|800034
|link:https://svnweb.freebsd.org/changeset/base/178051[178051]
|April 10, 2008
|8.0-CURRENT after introduction of the `memrchr` function.
|800035
|link:https://svnweb.freebsd.org/changeset/base/178256[178256]
|April 16, 2008
|8.0-CURRENT after introduction of the `fdopendir` function.
|800036
|link:https://svnweb.freebsd.org/changeset/base/178362[178362]
|April 20, 2008
|8.0-CURRENT after switchover of 802.11 wireless to multi-bss support (aka vaps).
|800037
|link:https://svnweb.freebsd.org/changeset/base/178892[178892]
|May 9, 2008
|8.0-CURRENT after addition of multiple routing table support (aka man:setfib[1], man:setfib[2]).
|800038
|link:https://svnweb.freebsd.org/changeset/base/179316[179316]
|May 26, 2008
|8.0-CURRENT after removal of netatm and ISDN4BSD. Also, the addition of the Compact C Type Format (CTF) tools.
|800039
|link:https://svnweb.freebsd.org/changeset/base/179784[179784]
|June 14, 2008
|8.0-CURRENT after removal of sgtty.
|800040
|link:https://svnweb.freebsd.org/changeset/base/180025[180025]
|June 26, 2008
|8.0-CURRENT with kernel NFS lockd client.
|800041
|link:https://svnweb.freebsd.org/changeset/base/180691[180691]
|July 22, 2008
|8.0-CURRENT after addition of man:arc4random_buf[3] and man:arc4random_uniform[3].
|800042
|link:https://svnweb.freebsd.org/changeset/base/181439[181439]
|August 8, 2008
|8.0-CURRENT after addition of man:cpuctl[4].
|800043
|link:https://svnweb.freebsd.org/changeset/base/181694[181694]
|August 13, 2008
|8.0-CURRENT after changing man:bpf[4] to use a single device node, instead of device cloning.
|800044
|link:https://svnweb.freebsd.org/changeset/base/181803[181803]
|August 17, 2008
|8.0-CURRENT after the commit of the first step of the vimage project, renaming global variables to be virtualized with a V_ prefix, with macros to map them back to their global names.
|800045
|link:https://svnweb.freebsd.org/changeset/base/181905[181905]
|August 20, 2008
|8.0-CURRENT after the integration of the MPSAFE TTY layer, including changes to various drivers and utilities that interact with it.
|800046
|link:https://svnweb.freebsd.org/changeset/base/182869[182869]
|September 8, 2008
|8.0-CURRENT after the separation of the GDT per CPU on amd64 architecture.
|800047
|link:https://svnweb.freebsd.org/changeset/base/182905[182905]
|September 10, 2008
|8.0-CURRENT after removal of VSVTX, VSGID and VSUID.
|800048
|link:https://svnweb.freebsd.org/changeset/base/183091[183091]
|September 16, 2008
|8.0-CURRENT after converting the kernel NFS mount code to accept individual mount options in the man:nmount[2] iovec, not just one big struct nfs_args.
|800049
|link:https://svnweb.freebsd.org/changeset/base/183114[183114]
|September 17, 2008
|8.0-CURRENT after the removal of man:suser[9] and man:suser_cred[9].
|800050
|link:https://svnweb.freebsd.org/changeset/base/184099[184099]
|October 20, 2008
|8.0-CURRENT after buffer cache API change.
|800051
|link:https://svnweb.freebsd.org/changeset/base/184205[184205]
|October 23, 2008
|8.0-CURRENT after the removal of the man:MALLOC[9] and man:FREE[9] macros.
|800052
|link:https://svnweb.freebsd.org/changeset/base/184419[184419]
|October 28, 2008
|8.0-CURRENT after the introduction of accmode_t and renaming of VOP_ACCESS 'a_mode' argument to 'a_accmode'.
|800053
|link:https://svnweb.freebsd.org/changeset/base/184555[184555]
|November 2, 2008
|8.0-CURRENT after the prototype change of man:vfs_busy[9] and the introduction of its MBF_NOWAIT and MBF_MNTLSTLOCK flags.
|800054
|link:https://svnweb.freebsd.org/changeset/base/185162[185162]
|November 22, 2008
|8.0-CURRENT after the addition of buf_ring, memory barriers and ifnet functions to facilitate multiple hardware transmit queues for cards that support them, and a lockless ring-buffer implementation to enable drivers to more efficiently manage queuing of packets.
|800055
|link:https://svnweb.freebsd.org/changeset/base/185363[185363]
|November 27, 2008
|8.0-CURRENT after the addition of Intel(TM) Core, Core2, and Atom support to man:hwpmc[4].
|800056
|link:https://svnweb.freebsd.org/changeset/base/185435[185435]
|November 29, 2008
|8.0-CURRENT after the introduction of multi-/no-IPv4/v6 jails.
|800057
|link:https://svnweb.freebsd.org/changeset/base/185522[185522]
|December 1, 2008
|8.0-CURRENT after the switch to the ath hal source code.
|800058
|link:https://svnweb.freebsd.org/changeset/base/185968[185968]
|December 12, 2008
|8.0-CURRENT after the introduction of the VOP_VPTOCNP operation.
|800059
|link:https://svnweb.freebsd.org/changeset/base/186119[186119]
|December 15, 2008
|8.0-CURRENT incorporates the new arp-v2 rewrite.
|800060
|link:https://svnweb.freebsd.org/changeset/base/186344[186344]
|December 19, 2008
|8.0-CURRENT after the addition of makefs.
|800061
|link:https://svnweb.freebsd.org/changeset/base/187289[187289]
|January 15, 2009
|8.0-CURRENT after TCP Appropriate Byte Counting.
|800062
|link:https://svnweb.freebsd.org/changeset/base/187830[187830]
|January 28, 2009
|8.0-CURRENT after removal of minor(), minor2unit(), unit2minor(), etc.
|800063
|link:https://svnweb.freebsd.org/changeset/base/188745[188745]
|February 18, 2009
|8.0-CURRENT after the GENERIC config change to use the USB2 stack, and the addition of man:fdevname[3].
|800064
|link:https://svnweb.freebsd.org/changeset/base/188946[188946]
|February 23, 2009
|8.0-CURRENT after the USB2 stack is moved to and replaces dev/usb.
|800065
|link:https://svnweb.freebsd.org/changeset/base/189092[189092]
|February 26, 2009
|8.0-CURRENT after the renaming of all functions in man:libmp[3].
|800066
|link:https://svnweb.freebsd.org/changeset/base/189110[189110]
|February 27, 2009
|8.0-CURRENT after changing USB devfs handling and layout.
|800067
|link:https://svnweb.freebsd.org/changeset/base/189136[189136]
|February 28, 2009
|8.0-CURRENT after adding getdelim(), getline(), stpncpy(), strnlen(), wcsnlen(), wcscasecmp(), and wcsncasecmp().
|800068
|link:https://svnweb.freebsd.org/changeset/base/189276[189276]
|March 2, 2009
|8.0-CURRENT after renaming the ushub devclass to uhub.
|800069
|link:https://svnweb.freebsd.org/changeset/base/189585[189585]
|March 9, 2009
|8.0-CURRENT after libusb20.so.1 was renamed to libusb.so.1.
|800070
|link:https://svnweb.freebsd.org/changeset/base/189592[189592]
|March 9, 2009
|8.0-CURRENT after merging IGMPv3 and Source-Specific Multicast (SSM) to the IPv4 stack.
|800071
|link:https://svnweb.freebsd.org/changeset/base/189825[189825]
|March 14, 2009
|8.0-CURRENT after gcc was patched to use C99 inline semantics in c99 and gnu99 mode.
|800072
|link:https://svnweb.freebsd.org/changeset/base/189853[189853]
|March 15, 2009
|8.0-CURRENT after the IFF_NEEDSGIANT flag has been removed; non-MPSAFE network device drivers are no longer supported.
|800073
|link:https://svnweb.freebsd.org/changeset/base/190265[190265]
|March 18, 2009
|8.0-CURRENT after the dynamic string token substitution has been implemented for rpath and needed paths.
|800074
|link:https://svnweb.freebsd.org/changeset/base/190373[190373]
|March 24, 2009
|8.0-CURRENT after tcpdump 4.0.0 and libpcap 1.0.0 import.
|800075
|link:https://svnweb.freebsd.org/changeset/base/190787[190787]
|April 6, 2009
|8.0-CURRENT after layout of structs vnet_net, vnet_inet and vnet_ipfw has been changed.
|800076
|link:https://svnweb.freebsd.org/changeset/base/190866[190866]
|April 9, 2009
|8.0-CURRENT after adding delay profiles in dummynet.
|800077
|link:https://svnweb.freebsd.org/changeset/base/190914[190914]
|April 14, 2009
|8.0-CURRENT after removing VOP_LEASE() and vop_vector.vop_lease.
|800078
|link:https://svnweb.freebsd.org/changeset/base/191080[191080]
|April 15, 2009
|8.0-CURRENT after struct rt_weight fields have been added to struct rt_metrics and struct rt_metrics_lite, changing the layout of struct rt_metrics_lite. A bump to RTM_VERSION was made, but backed out.
|800079
|link:https://svnweb.freebsd.org/changeset/base/191117[191117]
|April 15, 2009
|8.0-CURRENT after struct llentry pointers are added to struct route and struct route_in6.
|800080
|link:https://svnweb.freebsd.org/changeset/base/191126[191126]
|April 15, 2009
|8.0-CURRENT after layout of struct inpcb has been changed.
|800081
|link:https://svnweb.freebsd.org/changeset/base/191267[191267]
|April 19, 2009
|8.0-CURRENT after the layout of struct malloc_type has been changed.
|800082
|link:https://svnweb.freebsd.org/changeset/base/191368[191368]
|April 21, 2009
|8.0-CURRENT after the layout of struct ifnet has changed, and with if_ref() and if_rele() ifnet refcounting.
|800083
|link:https://svnweb.freebsd.org/changeset/base/191389[191389]
|April 22, 2009
|8.0-CURRENT after the implementation of a low-level Bluetooth HCI API.
|800084
|link:https://svnweb.freebsd.org/changeset/base/191672[191672]
|April 29, 2009
|8.0-CURRENT after IPv6 SSM and MLDv2 changes.
|800085
|link:https://svnweb.freebsd.org/changeset/base/191688[191688]
|April 30, 2009
|8.0-CURRENT after enabling support for VIMAGE kernel builds with one active image.
|800086
|link:https://svnweb.freebsd.org/changeset/base/191910[191910]
|May 8, 2009
|8.0-CURRENT after adding support for input lines of arbitrary length in man:patch[1].
|800087
|link:https://svnweb.freebsd.org/changeset/base/191990[191990]
|May 11, 2009
|8.0-CURRENT after some VFS KPI changes. The thread argument has been removed from the FSD parts of the VFS. `VFS_*` functions do not need the context any more because it always refers to `curthread`. In some special cases, the old behavior is retained.
|800088
|link:https://svnweb.freebsd.org/changeset/base/192470[192470]
|May 20, 2009
|8.0-CURRENT after net80211 monitor mode changes.
|800089
|link:https://svnweb.freebsd.org/changeset/base/192649[192649]
|May 23, 2009
|8.0-CURRENT after adding UDP control block support.
|800090
|link:https://svnweb.freebsd.org/changeset/base/192669[192669]
|May 23, 2009
|8.0-CURRENT after virtualizing interface cloning.
|800091
|link:https://svnweb.freebsd.org/changeset/base/192895[192895]
|May 27, 2009
|8.0-CURRENT after adding hierarchical jails and removing global securelevel.
|800092
|link:https://svnweb.freebsd.org/changeset/base/193011[193011]
|May 29, 2009
|8.0-CURRENT after changing `sx_init_flags()` KPI. The `SX_ADAPTIVESPIN` is retired and a new `SX_NOADAPTIVE` flag is introduced to handle the reversed logic.
|800093
|link:https://svnweb.freebsd.org/changeset/base/193047[193047]
|May 29, 2009
|8.0-CURRENT after adding mnt_xflag to struct mount.
|800094
|link:https://svnweb.freebsd.org/changeset/base/193093[193093]
|May 30, 2009
|8.0-CURRENT after adding man:VOP_ACCESSX[9].
|800095
|link:https://svnweb.freebsd.org/changeset/base/193096[193096]
|May 30, 2009
|8.0-CURRENT after changing the polling KPI. The polling handlers now return the number of packets processed. A new `IFCAP_POLLING_NOCOUNT` is also introduced to specify that the return value is not significant and the counting should be skipped.
|800096
|link:https://svnweb.freebsd.org/changeset/base/193219[193219]
|June 1, 2009
|8.0-CURRENT after updating to the new netisr implementation and after changing the way we store and access FIBs.
|800097
|link:https://svnweb.freebsd.org/changeset/base/193731[193731]
|June 8, 2009
|8.0-CURRENT after the introduction of vnet destructor hooks and infrastructure.
|(not changed)
|link:https://svnweb.freebsd.org/changeset/base/194012[194012]
|June 11, 2009
|8.0-CURRENT after the introduction of netgraph outbound to inbound path call detection and queuing, which also changed the layout of struct thread.
|800098
|link:https://svnweb.freebsd.org/changeset/base/194210[194210]
|June 14, 2009
|8.0-CURRENT after OpenSSL 0.9.8k import.
|800099
|link:https://svnweb.freebsd.org/changeset/base/194675[194675]
|June 22, 2009
|8.0-CURRENT after NGROUPS update and moving route virtualization into its own VImage module.
|800100
|link:https://svnweb.freebsd.org/changeset/base/194920[194920]
|June 24, 2009
|8.0-CURRENT after SYSVIPC ABI change.
|800101
|link:https://svnweb.freebsd.org/changeset/base/195175[195175]
|June 29, 2009
|8.0-CURRENT after the removal of the /dev/net/* per-interface character devices.
|800102
|link:https://svnweb.freebsd.org/changeset/base/195634[195634]
|July 12, 2009
|8.0-CURRENT after padding was added to struct sackhint, struct tcpcb, and struct tcpstat.
|800103
|link:https://svnweb.freebsd.org/changeset/base/195654[195654]
|July 13, 2009
|8.0-CURRENT after replacing struct tcpopt with struct toeopt in the TOE driver interface to the TCP syncache.
|800104
|link:https://svnweb.freebsd.org/changeset/base/195699[195699]
|July 14, 2009
|8.0-CURRENT after the addition of the linker-set based per-vnet allocator.
|800105
|link:https://svnweb.freebsd.org/changeset/base/195767[195767]
|July 19, 2009
|8.0-CURRENT after version bump for all shared libraries that do not have symbol versioning turned on.
|800106
|link:https://svnweb.freebsd.org/changeset/base/195852[195852]
|July 24, 2009
|8.0-CURRENT after introduction of OBJT_SG VM object type.
|800107
|link:https://svnweb.freebsd.org/changeset/base/196037[196037]
|August 2, 2009
|8.0-CURRENT after making the newbus subsystem Giant free by adding the newbus sxlock, and 8.0-RELEASE.
|800108
|link:https://svnweb.freebsd.org/changeset/base/199627[199627]
|November 21, 2009
|8.0-STABLE after implementing EVFILT_USER kevent filter.
|800500
|link:https://svnweb.freebsd.org/changeset/base/201749[201749]
|January 7, 2010
|8.0-STABLE after `__FreeBSD_version` bump to make `pkg_add -r` use packages-8-stable.
|800501
|link:https://svnweb.freebsd.org/changeset/base/202922[202922]
|January 24, 2010
|8.0-STABLE after change of the man:scandir[3] and man:alphasort[3] prototypes to conform to SUSv4.
|800502
|link:https://svnweb.freebsd.org/changeset/base/203299[203299]
|January 31, 2010
|8.0-STABLE after addition of man:sigpause[2].
|800503
|link:https://svnweb.freebsd.org/changeset/base/204344[204344]
|February 25, 2010
|8.0-STABLE after addition of SIOCGIFDESCR and SIOCSIFDESCR ioctls to network interfaces. These ioctls can be used to manipulate interface descriptions, as inspired by OpenBSD.
|800504
|link:https://svnweb.freebsd.org/changeset/base/204546[204546]
|March 1, 2010
|8.0-STABLE after MFC of importing x86emu, a software emulator for real mode x86 CPU from OpenBSD.
|800505
|link:https://svnweb.freebsd.org/changeset/base/208259[208259]
|May 18, 2010
|8.0-STABLE after MFC of adding liblzma, xz, xzdec, and lzmainfo.
|801000
|link:https://svnweb.freebsd.org/changeset/base/209150[209150]
|June 14, 2010
|8.1-RELEASE.
|801500
|link:https://svnweb.freebsd.org/changeset/base/209146[209146]
|June 14, 2010
|8.1-STABLE after 8.1-RELEASE.
|801501
|link:https://svnweb.freebsd.org/changeset/base/214762[214762]
|November 3, 2010
|8.1-STABLE after KBI change in struct sysentvec, and implementation of PL_FLAG_SCE/SCX/EXEC/SI and pl_siginfo for ptrace(PT_LWPINFO).
|802000
|link:https://svnweb.freebsd.org/changeset/base/216639[216639]
|December 22, 2010
|8.2-RELEASE.
|802500
|link:https://svnweb.freebsd.org/changeset/base/216654[216654]
|December 22, 2010
|8.2-STABLE after 8.2-RELEASE.
|802501
|link:https://svnweb.freebsd.org/changeset/base/219107[219107]
|February 28, 2011
|8.2-STABLE after merging DTrace changes, including support for userland tracing.
|802502
|link:https://svnweb.freebsd.org/changeset/base/219324[219324]
|March 6, 2011
|8.2-STABLE after merging log2 and log2f into libm.
|802503
|link:https://svnweb.freebsd.org/changeset/base/221275[221275]
|May 1, 2011
|8.2-STABLE after upgrade of gcc to the last GPLv2 version from the FSF gcc-4_2-branch.
|802504
|link:https://svnweb.freebsd.org/changeset/base/222401[222401]
|May 28, 2011
|8.2-STABLE after introduction of the KPI and supporting infrastructure for modular congestion control.
|802505
|link:https://svnweb.freebsd.org/changeset/base/222406[222406]
|May 28, 2011
|8.2-STABLE after introduction of Hhook and Khelp KPIs.
|802506
|link:https://svnweb.freebsd.org/changeset/base/222408[222408]
|May 28, 2011
|8.2-STABLE after addition of OSD to struct tcpcb.
|802507
|link:https://svnweb.freebsd.org/changeset/base/222741[222741]
|June 6, 2011
|8.2-STABLE after ZFS v28 import.
|802508
|link:https://svnweb.freebsd.org/changeset/base/222846[222846]
|June 8, 2011
|8.2-STABLE after removal of the schedtail event handler and addition of the sv_schedtail method to struct sysvec.
|802509
|link:https://svnweb.freebsd.org/changeset/base/224017[224017]
|July 14, 2011
|8.2-STABLE after merging the SSSE3 support into binutils.
|802510
|link:https://svnweb.freebsd.org/changeset/base/224214[224214]
|July 19, 2011
|8.2-STABLE after addition of RFTSIGZMB flag for man:rfork[2].
|802511
|link:https://svnweb.freebsd.org/changeset/base/225458[225458]
|September 9, 2011
|8.2-STABLE after addition of automatic detection of USB mass storage devices which do not support the no synchronize cache SCSI command.
|802512
|link:https://svnweb.freebsd.org/changeset/base/225470[225470]
|September 10, 2011
|8.2-STABLE after merging the auto-quirk re-factoring.
|802513
|link:https://svnweb.freebsd.org/changeset/base/226763[226763]
|October 25, 2011
|8.2-STABLE after merging of the MAP_PREFAULT_READ flag to man:mmap[2].
|802514
|link:https://svnweb.freebsd.org/changeset/base/227573[227573]
|November 16, 2011
|8.2-STABLE after merging the addition of the man:posix_fallocate[2] syscall.
|802515
|link:https://svnweb.freebsd.org/changeset/base/229725[229725]
|January 6, 2012
|8.2-STABLE after merging the addition of the man:posix_fadvise[2] system call.
|802516
|link:https://svnweb.freebsd.org/changeset/base/230239[230239]
|January 16, 2012
|8.2-STABLE after merging gperf 3.0.3.
|802517
|link:https://svnweb.freebsd.org/changeset/base/231769[231769]
|February 15, 2012
|8.2-STABLE after introduction of the new extensible man:sysctl[3] interface NET_RT_IFLISTL to query address lists.
|803000
|link:https://svnweb.freebsd.org/changeset/base/232446[232446]
|March 3, 2012
|8.3-RELEASE.
|803500
|link:https://svnweb.freebsd.org/changeset/base/232439[232439]
|March 3, 2012
|8.3-STABLE after branching releng/8.3 (RELENG_8_3).
|803501
|link:https://svnweb.freebsd.org/changeset/base/247091[247091]
|February 21, 2013
|8.3-STABLE after MFC of two USB fixes (rev link:https://svnweb.freebsd.org/changeset/base/246616[246616] and link:https://svnweb.freebsd.org/changeset/base/246759[246759]).
|804000
|link:https://svnweb.freebsd.org/changeset/base/248850[248850]
|March 28, 2013
|8.4-RELEASE.
|804500
|link:https://svnweb.freebsd.org/changeset/base/248819[248819]
|March 28, 2013
|8.4-STABLE after 8.4-RELEASE.
|804501
|link:https://svnweb.freebsd.org/changeset/base/259449[259449]
|December 16, 2013
|8.4-STABLE after MFC of upstream Heimdal encoding fix.
|804502
|link:https://svnweb.freebsd.org/changeset/base/265123[265123]
|April 30, 2014
|8.4-STABLE after FreeBSD-SA-14:08.tcp.
|804503
|link:https://svnweb.freebsd.org/changeset/base/268444[268444]
|July 9, 2014
|8.4-STABLE after FreeBSD-SA-14:17.kmem.
|804504
|link:https://svnweb.freebsd.org/changeset/base/271341[271341]
|September 9, 2014
|8.4-STABLE after FreeBSD-SA-14:18 (rev link:https://svnweb.freebsd.org/changeset/base/271305[271305]).
|804505
|link:https://svnweb.freebsd.org/changeset/base/271686[271686]
|September 16, 2014
|8.4-STABLE after FreeBSD-SA-14:19 (rev link:https://svnweb.freebsd.org/changeset/base/271668[271668]).
|804506
|link:https://svnweb.freebsd.org/changeset/base/273432[273432]
|October 21, 2014
|8.4-STABLE after FreeBSD-SA-14:21 (rev link:https://svnweb.freebsd.org/changeset/base/273413[273413]).
|804507
|link:https://svnweb.freebsd.org/changeset/base/274162[274162]
|November 4, 2014
|8.4-STABLE after FreeBSD-SA-14:23, FreeBSD-SA-14:24, and FreeBSD-SA-14:25.
|804508
|link:https://svnweb.freebsd.org/changeset/base/279287[279287]
|February 25, 2015
|8-STABLE after FreeBSD-EN-15:01.vt, FreeBSD-EN-15:02.openssl, FreeBSD-EN-15:03.freebsd-update, FreeBSD-SA-15:04.igmp, and FreeBSD-SA-15:05.bind.
|804509
|link:https://svnweb.freebsd.org/changeset/base/305736[305736]
|September 12, 2016
|8-STABLE after resolving a deadlock between `device_detach()` and man:usbd_do_request_flags[9].
|===
[[versions-7]]
== FreeBSD 7 Versions
[[freebsd-versions-table-7]]
.FreeBSD 7 `__FreeBSD_version` Values
[cols="1,1,1,1", frame="none", options="header"]
|===
| Value
| Revision
| Date
| Release
|700000
|link:https://svnweb.freebsd.org/changeset/base/147925[147925]
|July 11, 2005
|7.0-CURRENT.
|700001
|link:https://svnweb.freebsd.org/changeset/base/148341[148341]
|July 23, 2005
|7.0-CURRENT after bump of all shared library versions that had not been changed since RELENG_5.
|700002
|link:https://svnweb.freebsd.org/changeset/base/149039[149039]
|August 13, 2005
|7.0-CURRENT after credential argument is added to dev_clone event handler.
|700003
|link:https://svnweb.freebsd.org/changeset/base/149470[149470]
|August 25, 2005
|7.0-CURRENT after man:memmem[3] is added to libc.
|700004
|link:https://svnweb.freebsd.org/changeset/base/151888[151888]
|October 30, 2005
|7.0-CURRENT after man:solisten[9] kernel arguments are modified to accept a backlog parameter.
|700005
|link:https://svnweb.freebsd.org/changeset/base/152296[152296]
|November 11, 2005
|7.0-CURRENT after IFP2ENADDR() was changed to return a pointer to IF_LLADDR().
|700006
|link:https://svnweb.freebsd.org/changeset/base/152315[152315]
|November 11, 2005
|7.0-CURRENT after addition of `if_addr` member to `struct ifnet` and IFP2ENADDR() removal.
|700007
|link:https://svnweb.freebsd.org/changeset/base/153027[153027]
|December 2, 2005
|7.0-CURRENT after incorporating scripts from the local_startup directories into the base man:rcorder[8].
|700008
|link:https://svnweb.freebsd.org/changeset/base/153107[153107]
|December 5, 2005
|7.0-CURRENT after removal of MNT_NODEV mount option.
|700009
|link:https://svnweb.freebsd.org/changeset/base/153519[153519]
|December 19, 2005
|7.0-CURRENT after ELF-64 type changes and symbol versioning.
|700010
|link:https://svnweb.freebsd.org/changeset/base/153579[153579]
|December 20, 2005
|7.0-CURRENT after addition of hostb and vgapci drivers, addition of pci_find_extcap(), and changing the AGP drivers to no longer map the aperture.
|700011
|link:https://svnweb.freebsd.org/changeset/base/153936[153936]
|December 31, 2005
|7.0-CURRENT after tv_sec was made time_t on all platforms but Alpha.
|700012
|link:https://svnweb.freebsd.org/changeset/base/154114[154114]
|January 8, 2006
|7.0-CURRENT after ldconfig_local_dirs change.
|700013
|link:https://svnweb.freebsd.org/changeset/base/154269[154269]
|January 12, 2006
|7.0-CURRENT after changes to [.filename]#/etc/rc.d/abi# to support [.filename]#/compat/linux/etc/ld.so.cache# being a symlink in a readonly filesystem.
|700014
|link:https://svnweb.freebsd.org/changeset/base/154863[154863]
|January 26, 2006
|7.0-CURRENT after pts import.
|700015
|link:https://svnweb.freebsd.org/changeset/base/157144[157144]
|March 26, 2006
|7.0-CURRENT after the introduction of version 2 of man:hwpmc[4]'s ABI.
|700016
|link:https://svnweb.freebsd.org/changeset/base/157962[157962]
|April 22, 2006
|7.0-CURRENT after addition of man:fcloseall[3] to libc.
|700017
|link:https://svnweb.freebsd.org/changeset/base/158513[158513]
|May 13, 2006
|7.0-CURRENT after removal of ip6fw.
|700018
|link:https://svnweb.freebsd.org/changeset/base/160386[160386]
|July 15, 2006
|7.0-CURRENT after import of snd_emu10kx.
|700019
|link:https://svnweb.freebsd.org/changeset/base/160821[160821]
|July 29, 2006
|7.0-CURRENT after import of OpenSSL 0.9.8b.
|700020
|link:https://svnweb.freebsd.org/changeset/base/161931[161931]
|September 3, 2006
|7.0-CURRENT after addition of the bus_dma_get_tag function.
|700021
|link:https://svnweb.freebsd.org/changeset/base/162023[162023]
|September 4, 2006
|7.0-CURRENT after libpcap 0.9.4 and tcpdump 3.9.4 import.
|700022
|link:https://svnweb.freebsd.org/changeset/base/162170[162170]
|September 9, 2006
|7.0-CURRENT after the dlsym change to look for a requested symbol both in the specified DSO and its implicit dependencies.
|700023
|link:https://svnweb.freebsd.org/changeset/base/162588[162588]
|September 23, 2006
|7.0-CURRENT after adding new sound IOCTLs for the OSSv4 mixer API.
|700024
|link:https://svnweb.freebsd.org/changeset/base/162919[162919]
|September 28, 2006
|7.0-CURRENT after import of OpenSSL 0.9.8d.
|700025
|link:https://svnweb.freebsd.org/changeset/base/164190[164190]
|November 11, 2006
|7.0-CURRENT after the addition of libelf.
|700026
|link:https://svnweb.freebsd.org/changeset/base/164614[164614]
|November 26, 2006
|7.0-CURRENT after major changes to the sound sysctls.
|700027
|link:https://svnweb.freebsd.org/changeset/base/164770[164770]
|November 30, 2006
|7.0-CURRENT after the addition of the Wi-Spy quirk.
|700028
|link:https://svnweb.freebsd.org/changeset/base/165242[165242]
|December 15, 2006
|7.0-CURRENT after the addition of SCTP calls to libc.
|700029
|link:https://svnweb.freebsd.org/changeset/base/166259[166259]
|January 26, 2007
|7.0-CURRENT after the GNU man:gzip[1] implementation was replaced with a BSD licensed version ported from NetBSD.
|700030
|link:https://svnweb.freebsd.org/changeset/base/166549[166549]
|February 7, 2007
|7.0-CURRENT after the removal of IPIP tunnel encapsulation (VIFF_TUNNEL) from the IPv4 multicast forwarding code.
|700031
|link:https://svnweb.freebsd.org/changeset/base/166907[166907]
|February 23, 2007
|7.0-CURRENT after the modification of bus_setup_intr() (newbus).
|700032
|link:https://svnweb.freebsd.org/changeset/base/167165[167165]
|March 2, 2007
|7.0-CURRENT after the inclusion of man:ipw[4] and man:iwi[4] firmware.
|700033
|link:https://svnweb.freebsd.org/changeset/base/167360[167360]
|March 9, 2007
|7.0-CURRENT after the inclusion of ncurses wide character support.
|700034
|link:https://svnweb.freebsd.org/changeset/base/167684[167684]
|March 19, 2007
|7.0-CURRENT after changes to how insmntque(), getnewvnode(), and vfs_hash_insert() work.
|700035
|link:https://svnweb.freebsd.org/changeset/base/167906[167906]
|March 26, 2007
|7.0-CURRENT after addition of a notify mechanism for CPU frequency changes.
|700036
|link:https://svnweb.freebsd.org/changeset/base/168413[168413]
|April 6, 2007
|7.0-CURRENT after import of the ZFS filesystem.
|700037
|link:https://svnweb.freebsd.org/changeset/base/168504[168504]
|April 8, 2007
|7.0-CURRENT after addition of the CAM 'SG' peripheral device, which implements a subset of the Linux SCSI SG passthrough device API.
|700038
|link:https://svnweb.freebsd.org/changeset/base/169151[169151]
|April 30, 2007
|7.0-CURRENT after changing man:getenv[3], man:putenv[3], man:setenv[3] and man:unsetenv[3] to be POSIX conformant.
|700039
|link:https://svnweb.freebsd.org/changeset/base/169190[169190]
|May 1, 2007
|7.0-CURRENT after the changes in 700038 were backed out.
|700040
|link:https://svnweb.freebsd.org/changeset/base/169453[169453]
|May 10, 2007
|7.0-CURRENT after the addition of man:flopen[3] to libutil.
|700041
|link:https://svnweb.freebsd.org/changeset/base/169526[169526]
|May 13, 2007
|7.0-CURRENT after enabling symbol versioning, and changing the default thread library to libthr.
|700042
|link:https://svnweb.freebsd.org/changeset/base/169758[169758]
|May 19, 2007
|7.0-CURRENT after the import of gcc 4.2.0.
|700043
|link:https://svnweb.freebsd.org/changeset/base/169830[169830]
|May 21, 2007
|7.0-CURRENT after bump of all shared library versions that had not been changed since RELENG_6.
|700044
|link:https://svnweb.freebsd.org/changeset/base/170395[170395]
|June 7, 2007
|7.0-CURRENT after changing the argument for vn_open()/VOP_OPEN() from file descriptor index to the struct file *.
|700045
|link:https://svnweb.freebsd.org/changeset/base/170510[170510]
|June 10, 2007
|7.0-CURRENT after changing man:pam_nologin[8] to provide an account management function instead of an authentication function to the PAM framework.
|700046
|link:https://svnweb.freebsd.org/changeset/base/170530[170530]
|June 11, 2007
|7.0-CURRENT after updated 802.11 wireless support.
|700047
|link:https://svnweb.freebsd.org/changeset/base/170579[170579]
|June 11, 2007
|7.0-CURRENT after adding TCP LRO interface capabilities.
|700048
|link:https://svnweb.freebsd.org/changeset/base/170613[170613]
|June 12, 2007
|7.0-CURRENT after RFC 3678 API support added to the IPv4 stack. Legacy RFC 1724 behavior of the IP_MULTICAST_IF ioctl has now been removed; 0.0.0.0/8 may no longer be used to specify an interface index. Use struct ipmreqn instead.
|700049
|link:https://svnweb.freebsd.org/changeset/base/171175[171175]
|July 3, 2007
|7.0-CURRENT after importing pf from OpenBSD 4.1
|(not changed)
|link:https://svnweb.freebsd.org/changeset/base/171167[171167]
|
|7.0-CURRENT after adding IPv6 support for FAST_IPSEC, deleting KAME IPSEC, and renaming FAST_IPSEC to IPSEC.
|700050
|link:https://svnweb.freebsd.org/changeset/base/171195[171195]
|July 4, 2007
|7.0-CURRENT after converting setenv/putenv/etc. calls from traditional BSD to POSIX.
|700051
|link:https://svnweb.freebsd.org/changeset/base/171211[171211]
|July 4, 2007
|7.0-CURRENT after adding new mmap/lseek/etc syscalls.
|700052
|link:https://svnweb.freebsd.org/changeset/base/171275[171275]
|July 6, 2007
|7.0-CURRENT after moving I4B headers to include/i4b.
|700053
|link:https://svnweb.freebsd.org/changeset/base/172394[172394]
|September 30, 2007
|7.0-CURRENT after the addition of support for PCI domains
|700054
|link:https://svnweb.freebsd.org/changeset/base/172988[172988]
|October 25, 2007
|7.0-STABLE after MFC of wide and single byte ctype separation.
|700055
|link:https://svnweb.freebsd.org/changeset/base/173104[173104]
|October 28, 2007
|7.0-RELEASE, and 7.0-CURRENT after ABI backward compatibility with the FreeBSD 4/5/6 versions of the PCIOCGETCONF, PCIOCREAD, and PCIOCWRITE IOCTLs was MFCed, which required breaking the ABI of the PCIOCGETCONF IOCTL again.
|700100
|link:https://svnweb.freebsd.org/changeset/base/174864[174864]
|December 22, 2007
|7.0-STABLE after 7.0-RELEASE
|700101
|link:https://svnweb.freebsd.org/changeset/base/176111[176111]
|February 8, 2008
|7.0-STABLE after the MFC of m_collapse().
|700102
|link:https://svnweb.freebsd.org/changeset/base/177735[177735]
|March 30, 2008
|7.0-STABLE after the MFC of kdb_enter_why().
|700103
|link:https://svnweb.freebsd.org/changeset/base/178061[178061]
|April 10, 2008
|7.0-STABLE after adding l_sysid to struct flock.
|700104
|link:https://svnweb.freebsd.org/changeset/base/178108[178108]
|April 11, 2008
|7.0-STABLE after the MFC of man:procstat[1].
|700105
|link:https://svnweb.freebsd.org/changeset/base/178120[178120]
|April 11, 2008
|7.0-STABLE after the MFC of umtx features.
|700106
|link:https://svnweb.freebsd.org/changeset/base/178225[178225]
|April 15, 2008
|7.0-STABLE after the MFC of man:write[2] support to man:psm[4].
|700107
|link:https://svnweb.freebsd.org/changeset/base/178353[178353]
|April 20, 2008
|7.0-STABLE after the MFC of F_DUP2FD command to man:fcntl[2].
|700108
|link:https://svnweb.freebsd.org/changeset/base/178783[178783]
|May 5, 2008
|7.0-STABLE after some man:lockmgr[9] changes, which makes it necessary to include [.filename]#sys/lock.h# to use man:lockmgr[9].
|700109
|link:https://svnweb.freebsd.org/changeset/base/179367[179367]
|May 27, 2008
|7.0-STABLE after MFC of the man:memrchr[3] function.
|700110
|link:https://svnweb.freebsd.org/changeset/base/181328[181328]
|August 5, 2008
|7.0-STABLE after MFC of kernel NFS lockd client.
|700111
|link:https://svnweb.freebsd.org/changeset/base/181940[181940]
|August 20, 2008
|7.0-STABLE after addition of physically contiguous jumbo frame support.
|700112
|link:https://svnweb.freebsd.org/changeset/base/182294[182294]
|August 27, 2008
|7.0-STABLE after MFC of kernel DTrace support.
|701000
|link:https://svnweb.freebsd.org/changeset/base/185315[185315]
|November 25, 2008
|7.1-RELEASE
|701100
|link:https://svnweb.freebsd.org/changeset/base/185302[185302]
|November 25, 2008
|7.1-STABLE after 7.1-RELEASE.
|701101
|link:https://svnweb.freebsd.org/changeset/base/187023[187023]
|January 10, 2009
|7.1-STABLE after man:strndup[3] merge.
|701102
|link:https://svnweb.freebsd.org/changeset/base/187370[187370]
|January 17, 2009
|7.1-STABLE after man:cpuctl[4] support added.
|701103
|link:https://svnweb.freebsd.org/changeset/base/188281[188281]
|February 7, 2009
|7.1-STABLE after the merge of multi-/no-IPv4/v6 jails.
|701104
|link:https://svnweb.freebsd.org/changeset/base/188625[188625]
|February 14, 2009
|7.1-STABLE after storing the suspension owner in struct mount, and introducing the vfs_susp_clean method into struct vfsops.
|701105
|link:https://svnweb.freebsd.org/changeset/base/189740[189740]
|March 12, 2009
|7.1-STABLE after the incompatible change to the kern.ipc.shmsegs sysctl to allow allocating larger SysV shared memory segments on 64bit architectures.
|701106
|link:https://svnweb.freebsd.org/changeset/base/189786[189786]
|March 14, 2009
|7.1-STABLE after the merge of a fix for POSIX semaphore wait operations.
|702000
|link:https://svnweb.freebsd.org/changeset/base/191099[191099]
|April 15, 2009
|7.2-RELEASE
|702100
|link:https://svnweb.freebsd.org/changeset/base/191091[191091]
|April 15, 2009
|7.2-STABLE after 7.2-RELEASE.
|702101
|link:https://svnweb.freebsd.org/changeset/base/192149[192149]
|May 15, 2009
|7.2-STABLE after man:ichsmb[4] was changed to use left-adjusted slave addressing to match other SMBus controller drivers.
|702102
|link:https://svnweb.freebsd.org/changeset/base/193020[193020]
|May 28, 2009
|7.2-STABLE after MFC of the man:fdopendir[3] function.
|702103
|link:https://svnweb.freebsd.org/changeset/base/193638[193638]
|June 6, 2009
|7.2-STABLE after MFC of PmcTools.
|702104
|link:https://svnweb.freebsd.org/changeset/base/195694[195694]
|July 14, 2009
|7.2-STABLE after MFC of the man:closefrom[2] system call.
|702105
|link:https://svnweb.freebsd.org/changeset/base/196006[196006]
|July 31, 2009
|7.2-STABLE after MFC of the SYSVIPC ABI change.
|702106
|link:https://svnweb.freebsd.org/changeset/base/197198[197198]
|September 14, 2009
|7.2-STABLE after MFC of the x86 PAT enhancements and addition of d_mmap_single() and the scatter/gather list VM object type.
|703000
|link:https://svnweb.freebsd.org/changeset/base/203740[203740]
|February 9, 2010
|7.3-RELEASE
|703100
|link:https://svnweb.freebsd.org/changeset/base/203742[203742]
|February 9, 2010
|7.3-STABLE after 7.3-RELEASE.
|704000
|link:https://svnweb.freebsd.org/changeset/base/216647[216647]
|December 22, 2010
|7.4-RELEASE
|704100
|link:https://svnweb.freebsd.org/changeset/base/216658[216658]
|December 22, 2010
|7.4-STABLE after 7.4-RELEASE.
|704101
|link:https://svnweb.freebsd.org/changeset/base/221318[221318]
|May 2, 2011
|7.4-STABLE after the gcc MFC in rev link:https://svnweb.freebsd.org/changeset/base/221317[221317].
|===
[[versions-6]]
== FreeBSD 6 Versions
[[freebsd-versions-table-6]]
.FreeBSD 6 `__FreeBSD_version` Values
[cols="1,1,1,1", frame="none", options="header"]
|===
| Value
| Revision
| Date
| Release
|600000
|link:https://svnweb.freebsd.org/changeset/base/133921[133921]
|August 18, 2004
|6.0-CURRENT
|600001
|link:https://svnweb.freebsd.org/changeset/base/134396[134396]
|August 27, 2004
|6.0-CURRENT after permanently enabling PFIL_HOOKS in the kernel.
|600002
|link:https://svnweb.freebsd.org/changeset/base/134514[134514]
|August 30, 2004
|6.0-CURRENT after initial addition of ifi_epoch to struct if_data. Backed out after a few days. Do not use this value.
|600003
|link:https://svnweb.freebsd.org/changeset/base/134933[134933]
|September 8, 2004
|6.0-CURRENT after the re-addition of the ifi_epoch member of struct if_data.
|600004
|link:https://svnweb.freebsd.org/changeset/base/135920[135920]
|September 29, 2004
|6.0-CURRENT after addition of the struct inpcb argument to the pfil API.
|600005
|link:https://svnweb.freebsd.org/changeset/base/136172[136172]
|October 5, 2004
|6.0-CURRENT after addition of the "-d DESTDIR" argument to newsyslog.
|600006
|link:https://svnweb.freebsd.org/changeset/base/137192[137192]
|November 4, 2004
|6.0-CURRENT after addition of glibc style man:strftime[3] padding options.
|600007
|link:https://svnweb.freebsd.org/changeset/base/138760[138760]
|December 12, 2004
|6.0-CURRENT after addition of 802.11 framework updates.
|600008
|link:https://svnweb.freebsd.org/changeset/base/140809[140809]
|January 25, 2005
|6.0-CURRENT after changes to VOP_*VOBJECT() functions and introduction of MNTK_MPSAFE flag for Giantfree filesystems.
|600009
|link:https://svnweb.freebsd.org/changeset/base/141250[141250]
|February 4, 2005
|6.0-CURRENT after addition of the cpufreq framework and drivers.
|600010
|link:https://svnweb.freebsd.org/changeset/base/141394[141394]
|February 6, 2005
|6.0-CURRENT after importing OpenBSD's man:nc[1].
|600011
|link:https://svnweb.freebsd.org/changeset/base/141727[141727]
|February 12, 2005
|6.0-CURRENT after removing the semblance of SVID2 `matherr()` support.
|600012
|link:https://svnweb.freebsd.org/changeset/base/141940[141940]
|February 15, 2005
|6.0-CURRENT after increasing the default thread stack size.
|600013
|link:https://svnweb.freebsd.org/changeset/base/142089[142089]
|February 19, 2005
|6.0-CURRENT after fixes in [.filename]#<src/include/stdbool.h># and [.filename]#<src/sys/i386/include/_types.h># for using the GCC-compatibility of the Intel C/C++ compiler.
|600014
|link:https://svnweb.freebsd.org/changeset/base/142184[142184]
|February 21, 2005
|6.0-CURRENT after EOVERFLOW checks in man:vswprintf[3] were fixed.
|600015
|link:https://svnweb.freebsd.org/changeset/base/142501[142501]
|February 25, 2005
|6.0-CURRENT after changing the struct if_data member, ifi_epoch, from wall clock time to uptime.
|600016
|link:https://svnweb.freebsd.org/changeset/base/142582[142582]
|February 26, 2005
|6.0-CURRENT after LC_CTYPE disk format changed.
|600017
|link:https://svnweb.freebsd.org/changeset/base/142683[142683]
|February 27, 2005
|6.0-CURRENT after NLS catalogs disk format changed.
|600018
|link:https://svnweb.freebsd.org/changeset/base/142686[142686]
|February 27, 2005
|6.0-CURRENT after LC_COLLATE disk format changed.
|600019
|link:https://svnweb.freebsd.org/changeset/base/142752[142752]
|February 28, 2005
|Installation of acpica includes into /usr/include.
|600020
|link:https://svnweb.freebsd.org/changeset/base/143308[143308]
|March 9, 2005
|Addition of MSG_NOSIGNAL flag to man:send[2] API.
|600021
|link:https://svnweb.freebsd.org/changeset/base/143746[143746]
|March 17, 2005
|Addition of fields to cdevsw
|600022
|link:https://svnweb.freebsd.org/changeset/base/143901[143901]
|March 21, 2005
|Removed gtar from base system.
|600023
|link:https://svnweb.freebsd.org/changeset/base/144980[144980]
|April 13, 2005
|LOCAL_CREDS, LOCAL_CONNWAIT socket options added to man:unix[4].
|600024
|link:https://svnweb.freebsd.org/changeset/base/145565[145565]
|April 19, 2005
|man:hwpmc[4] and related tools added to 6.0-CURRENT.
|600025
|link:https://svnweb.freebsd.org/changeset/base/145565[145565]
|April 26, 2005
|struct icmphdr added to 6.0-CURRENT.
|600026
|link:https://svnweb.freebsd.org/changeset/base/145843[145843]
|May 3, 2005
|pf updated to 3.7.
|600027
|link:https://svnweb.freebsd.org/changeset/base/145966[145966]
|May 6, 2005
|Kernel libalias and ng_nat introduced.
|600028
|link:https://svnweb.freebsd.org/changeset/base/146191[146191]
|May 13, 2005
|POSIX man:ttyname_r[3] made available through unistd.h and libc.
|600029
|link:https://svnweb.freebsd.org/changeset/base/146780[146780]
|May 29, 2005
|6.0-CURRENT after libpcap updated to v0.9.1 alpha 096.
|600030
|link:https://svnweb.freebsd.org/changeset/base/146988[146988]
|June 5, 2005
|6.0-CURRENT after importing NetBSD's man:if_bridge[4].
|600031
|link:https://svnweb.freebsd.org/changeset/base/147256[147256]
|June 10, 2005
|6.0-CURRENT after struct ifnet was broken out of the driver softcs.
|600032
|link:https://svnweb.freebsd.org/changeset/base/147898[147898]
|July 11, 2005
|6.0-CURRENT after the import of libpcap v0.9.1.
|600033
|link:https://svnweb.freebsd.org/changeset/base/148388[148388]
|July 25, 2005
|6.0-STABLE after bump of all shared library versions that had not been changed since RELENG_5.
|600034
|link:https://svnweb.freebsd.org/changeset/base/149040[149040]
|August 13, 2005
|6.0-STABLE after credential argument is added to dev_clone event handler. 6.0-RELEASE.
|600100
|link:https://svnweb.freebsd.org/changeset/base/151958[151958]
|November 1, 2005
|6.0-STABLE after 6.0-RELEASE
|600101
|link:https://svnweb.freebsd.org/changeset/base/153601[153601]
|December 21, 2005
|6.0-STABLE after incorporating scripts from the local_startup directories into the base man:rcorder[8].
|600102
|link:https://svnweb.freebsd.org/changeset/base/153912[153912]
|December 30, 2005
|6.0-STABLE after updating the ELF types and constants.
|600103
|link:https://svnweb.freebsd.org/changeset/base/154396[154396]
|January 15, 2006
|6.0-STABLE after MFC of man:pidfile[3] API.
|600104
|link:https://svnweb.freebsd.org/changeset/base/154453[154453]
|January 17, 2006
|6.0-STABLE after MFC of ldconfig_local_dirs change.
|600105
|link:https://svnweb.freebsd.org/changeset/base/156019[156019]
|February 26, 2006
|6.0-STABLE after NLS catalog support of man:csh[1].
|601000
|link:https://svnweb.freebsd.org/changeset/base/158330[158330]
|May 6, 2006
|6.1-RELEASE
|601100
|link:https://svnweb.freebsd.org/changeset/base/158331[158331]
|May 6, 2006
|6.1-STABLE after 6.1-RELEASE.
|601101
|link:https://svnweb.freebsd.org/changeset/base/159861[159861]
|June 22, 2006
|6.1-STABLE after the import of csup.
|601102
|link:https://svnweb.freebsd.org/changeset/base/160253[160253]
|July 11, 2006
|6.1-STABLE after the man:iwi[4] update.
|601103
|link:https://svnweb.freebsd.org/changeset/base/160429[160429]
|July 17, 2006
|6.1-STABLE after the resolver update to BIND9, and exposure of reentrant version of netdb functions.
|601104
|link:https://svnweb.freebsd.org/changeset/base/161098[161098]
|August 8, 2006
|6.1-STABLE after DSO (dynamic shared objects) support has been enabled in OpenSSL.
|601105
|link:https://svnweb.freebsd.org/changeset/base/161900[161900]
|September 2, 2006
|6.1-STABLE after 802.11 fixups changed the API for the IEEE80211_IOC_STA_INFO ioctl.
|602000
|link:https://svnweb.freebsd.org/changeset/base/164312[164312]
|November 15, 2006
|6.2-RELEASE
|602100
|link:https://svnweb.freebsd.org/changeset/base/162329[162329]
|September 15, 2006
|6.2-STABLE after 6.2-RELEASE.
|602101
|link:https://svnweb.freebsd.org/changeset/base/165122[165122]
|December 12, 2006
|6.2-STABLE after the addition of Wi-Spy quirk.
|602102
|link:https://svnweb.freebsd.org/changeset/base/165596[165596]
|December 28, 2006
|6.2-STABLE after pci_find_extcap() addition.
|602103
|link:https://svnweb.freebsd.org/changeset/base/166039[166039]
|January 16, 2007
|6.2-STABLE after MFC of dlsym change to look for a requested symbol both in specified dso and its implicit dependencies.
|602104
|link:https://svnweb.freebsd.org/changeset/base/166314[166314]
|January 28, 2007
|6.2-STABLE after MFC of man:ng_deflate[4] and man:ng_pred1[4] netgraph nodes and new compression and encryption modes for man:ng_ppp[4] node.
|602105
|link:https://svnweb.freebsd.org/changeset/base/166840[166840]
|February 20, 2007
|6.2-STABLE after MFC of BSD licensed version of man:gzip[1] ported from NetBSD.
|602106
|link:https://svnweb.freebsd.org/changeset/base/168133[168133]
|March 31, 2007
|6.2-STABLE after MFC of PCI MSI and MSI-X support.
|602107
|link:https://svnweb.freebsd.org/changeset/base/168438[168438]
|April 6, 2007
|6.2-STABLE after MFC of ncurses 5.6 and wide character support.
|602108
|link:https://svnweb.freebsd.org/changeset/base/168611[168611]
|April 11, 2007
|6.2-STABLE after MFC of CAM 'SG' peripheral device, which implements a subset of Linux SCSI SG passthrough device API.
|602109
|link:https://svnweb.freebsd.org/changeset/base/168805[168805]
|April 17, 2007
|6.2-STABLE after MFC of readline 5.2 patchset 002.
|602110
|link:https://svnweb.freebsd.org/changeset/base/169222[169222]
|May 2, 2007
|6.2-STABLE after MFC of pmap_invalidate_cache(), pmap_change_attr(), pmap_mapbios(), pmap_mapdev_attr(), and pmap_unmapbios() for amd64 and i386.
|602111
|link:https://svnweb.freebsd.org/changeset/base/170556[170556]
|June 11, 2007
|6.2-STABLE after the MFC of BOP_BDFLUSH, which broke the filesystem modules KBI.
|602112
|link:https://svnweb.freebsd.org/changeset/base/172284[172284]
|September 21, 2007
|6.2-STABLE after man:libutil[3] MFCs.
|602113
|link:https://svnweb.freebsd.org/changeset/base/172986[172986]
|October 25, 2007
|6.2-STABLE after MFC of wide and single byte ctype separation. A newly compiled binary that references ctype.h may require a new symbol, __mb_sb_limit, which is not available on older systems.
|602114
|link:https://svnweb.freebsd.org/changeset/base/173170[173170]
|October 30, 2007
|6.2-STABLE after ctype ABI forward compatibility restored.
|602115
|link:https://svnweb.freebsd.org/changeset/base/173794[173794]
|November 21, 2007
|6.2-STABLE after backing out the wide and single byte ctype separation.
|603000
|link:https://svnweb.freebsd.org/changeset/base/173897[173897]
|November 25, 2007
|6.3-RELEASE
|603100
|link:https://svnweb.freebsd.org/changeset/base/173891[173891]
|November 25, 2007
|6.3-STABLE after 6.3-RELEASE.
|(not changed)
|link:https://svnweb.freebsd.org/changeset/base/174434[174434]
|December 7, 2007
|6.3-STABLE after fixing multibyte type support in bit macro.
|603102
|link:https://svnweb.freebsd.org/changeset/base/178459[178459]
|April 24, 2008
|6.3-STABLE after adding l_sysid to struct flock.
|603103
|link:https://svnweb.freebsd.org/changeset/base/179367[179367]
|May 27, 2008
|6.3-STABLE after MFC of the man:memrchr[3] function.
|603104
|link:https://svnweb.freebsd.org/changeset/base/179810[179810]
|June 15, 2008
|6.3-STABLE after MFC of support for `:u` variable modifier in man:make[1].
|604000
|link:https://svnweb.freebsd.org/changeset/base/183583[183583]
|October 4, 2008
|6.4-RELEASE
|604100
|link:https://svnweb.freebsd.org/changeset/base/183584[183584]
|October 4, 2008
|6.4-STABLE after 6.4-RELEASE.
|===
[[versions-5]]
== FreeBSD 5 Versions
[[freebsd-versions-table-5]]
.FreeBSD 5 `__FreeBSD_version` Values
[cols="1,1,1,1", frame="none", options="header"]
|===
| Value
| Revision
| Date
| Release
|500000
|link:https://svnweb.freebsd.org/changeset/base/58009[58009]
|March 13, 2000
|5.0-CURRENT
|500001
|link:https://svnweb.freebsd.org/changeset/base/59348[59348]
|April 18, 2000
|5.0-CURRENT after adding additional ELF header fields, and changing our ELF binary branding method.
|500002
|link:https://svnweb.freebsd.org/changeset/base/59906[59906]
|May 2, 2000
|5.0-CURRENT after kld metadata changes.
|500003
|link:https://svnweb.freebsd.org/changeset/base/60688[60688]
|May 18, 2000
|5.0-CURRENT after buf/bio changes.
|500004
|link:https://svnweb.freebsd.org/changeset/base/60936[60936]
|May 26, 2000
|5.0-CURRENT after binutils upgrade.
|500005
|link:https://svnweb.freebsd.org/changeset/base/61221[61221]
|June 3, 2000
|5.0-CURRENT after merging libxpg4 code into libc and after TASKQ interface introduction.
|500006
|link:https://svnweb.freebsd.org/changeset/base/61500[61500]
|June 10, 2000
|5.0-CURRENT after the addition of AGP interfaces.
|500007
|link:https://svnweb.freebsd.org/changeset/base/62235[62235]
|June 29, 2000
|5.0-CURRENT after Perl upgrade to 5.6.0
|500008
|link:https://svnweb.freebsd.org/changeset/base/62764[62764]
|July 7, 2000
|5.0-CURRENT after the update of KAME code to 2000/07 sources.
|500009
|link:https://svnweb.freebsd.org/changeset/base/63154[63154]
|July 14, 2000
|5.0-CURRENT after ether_ifattach() and ether_ifdetach() changes.
|500010
|link:https://svnweb.freebsd.org/changeset/base/63265[63265]
|July 16, 2000
|5.0-CURRENT after changing mtree defaults back to original variant, adding -L to follow symlinks.
|500011
|link:https://svnweb.freebsd.org/changeset/base/63459[63459]
|July 18, 2000
|5.0-CURRENT after kqueue API changed.
|500012
|link:https://svnweb.freebsd.org/changeset/base/65353[65353]
|September 2, 2000
|5.0-CURRENT after man:setproctitle[3] moved from libutil to libc.
|500013
|link:https://svnweb.freebsd.org/changeset/base/65671[65671]
|September 10, 2000
|5.0-CURRENT after the first SMPng commit.
|500014
|link:https://svnweb.freebsd.org/changeset/base/70650[70650]
|January 4, 2001
|5.0-CURRENT after <sys/select.h> moved to <sys/selinfo.h>.
|500015
|link:https://svnweb.freebsd.org/changeset/base/70894[70894]
|January 10, 2001
|5.0-CURRENT after combining libgcc.a and libgcc_r.a, and associated GCC linkage changes.
|500016
|link:https://svnweb.freebsd.org/changeset/base/71583[71583]
|January 24, 2001
|5.0-CURRENT after change allowing libc and libc_r to be linked together, deprecating -pthread option.
|500017
|link:https://svnweb.freebsd.org/changeset/base/72650[72650]
|February 18, 2001
|5.0-CURRENT after switch from struct ucred to struct xucred to stabilize kernel-exported API for mountd et al.
|500018
|link:https://svnweb.freebsd.org/changeset/base/72975[72975]
|February 24, 2001
|5.0-CURRENT after addition of CPUTYPE make variable for controlling CPU-specific optimizations.
|500019
|link:https://svnweb.freebsd.org/changeset/base/77937[77937]
|June 9, 2001
|5.0-CURRENT after moving machine/ioctl_fd.h to sys/fdcio.h
|500020
|link:https://svnweb.freebsd.org/changeset/base/78304[78304]
|June 15, 2001
|5.0-CURRENT after locale names renaming.
|500021
|link:https://svnweb.freebsd.org/changeset/base/78632[78632]
|June 22, 2001
|5.0-CURRENT after Bzip2 import. Also signifies removal of S/Key.
|500022
|link:https://svnweb.freebsd.org/changeset/base/83435[83435]
|July 12, 2001
|5.0-CURRENT after SSE support.
|500023
|link:https://svnweb.freebsd.org/changeset/base/83435[83435]
|September 14, 2001
|5.0-CURRENT after KSE Milestone 2.
|500024
|link:https://svnweb.freebsd.org/changeset/base/84324[84324]
|October 1, 2001
|5.0-CURRENT after d_thread_t, and moving UUCP to ports.
|500025
|link:https://svnweb.freebsd.org/changeset/base/84481[84481]
|October 4, 2001
|5.0-CURRENT after ABI change for descriptor and creds passing on 64 bit platforms.
|500026
|link:https://svnweb.freebsd.org/changeset/base/84710[84710]
|October 9, 2001
|5.0-CURRENT after moving to XFree86 4 by default for package builds, and after the new libc strnstr() function was added.
|500027
|link:https://svnweb.freebsd.org/changeset/base/84743[84743]
|October 10, 2001
|5.0-CURRENT after the new libc strcasestr() function was added.
|500028
|link:https://svnweb.freebsd.org/changeset/base/87879[87879]
|December 14, 2001
|5.0-CURRENT after the userland components of smbfs were imported.
|(not changed)
|
|
|5.0-CURRENT after the new C99 specific-width integer types were added.
|500029
|link:https://svnweb.freebsd.org/changeset/base/89938[89938]
|January 29, 2002
|5.0-CURRENT after a change was made in the return value of man:sendfile[2].
|500030
|link:https://svnweb.freebsd.org/changeset/base/90711[90711]
|February 15, 2002
|5.0-CURRENT after the introduction of the type `fflags_t`, which is the appropriate size for file flags.
|500031
|link:https://svnweb.freebsd.org/changeset/base/91203[91203]
|February 24, 2002
|5.0-CURRENT after the usb structure element rename.
|500032
|link:https://svnweb.freebsd.org/changeset/base/92453[92453]
|March 16, 2002
|5.0-CURRENT after the introduction of Perl 5.6.1.
|500033
|link:https://svnweb.freebsd.org/changeset/base/93722[93722]
|April 3, 2002
|5.0-CURRENT after the `sendmail_enable` man:rc.conf[5] variable was made to take the value `NONE`.
|500034
|link:https://svnweb.freebsd.org/changeset/base/95831[95831]
|April 30, 2002
|5.0-CURRENT after mtx_init() grew a third argument.
|500035
|link:https://svnweb.freebsd.org/changeset/base/96498[96498]
|May 13, 2002
|5.0-CURRENT with Gcc 3.1.
|500036
|link:https://svnweb.freebsd.org/changeset/base/96781[96781]
|May 17, 2002
|5.0-CURRENT without Perl in /usr/src
|500037
|link:https://svnweb.freebsd.org/changeset/base/97516[97516]
|May 29, 2002
|5.0-CURRENT after the addition of man:dlfunc[3]
|500038
|link:https://svnweb.freebsd.org/changeset/base/100591[100591]
|July 24, 2002
|5.0-CURRENT after the types of some struct sockbuf members were changed and the structure was reordered.
|500039
|link:https://svnweb.freebsd.org/changeset/base/102757[102757]
|September 1, 2002
|5.0-CURRENT after GCC 3.2.1 import. Also after headers stopped using _BSD_FOO_T_ and started using _FOO_T_DECLARED. This value can also be used as a conservative estimate of the start of man:bzip2[1] package support.
|500040
|link:https://svnweb.freebsd.org/changeset/base/103675[103675]
|September 20, 2002
|5.0-CURRENT after various changes to disk functions were made in the name of removing dependency on disklabel structure internals.
|500041
|link:https://svnweb.freebsd.org/changeset/base/104250[104250]
|October 1, 2002
|5.0-CURRENT after the addition of man:getopt_long[3] to libc.
|500042
|link:https://svnweb.freebsd.org/changeset/base/105178[105178]
|October 15, 2002
|5.0-CURRENT after Binutils 2.13 upgrade, which included new FreeBSD emulation, vec, and output format.
|500043
|link:https://svnweb.freebsd.org/changeset/base/106289[106289]
|November 1, 2002
|5.0-CURRENT after adding weak pthread_XXX stubs to libc, obsoleting libXThrStub.so. 5.0-RELEASE.
|500100
|link:https://svnweb.freebsd.org/changeset/base/109405[109405]
|January 17, 2003
|5.0-CURRENT after branching for RELENG_5_0
|500101
|link:https://svnweb.freebsd.org/changeset/base/111120[111120]
|February 19, 2003
|<sys/dkstat.h> is empty. Do not include it.
|500102
|link:https://svnweb.freebsd.org/changeset/base/111482[111482]
|February 25, 2003
|5.0-CURRENT after the d_mmap_t interface change.
|500103
|link:https://svnweb.freebsd.org/changeset/base/111540[111540]
|February 26, 2003
|5.0-CURRENT after taskqueue_swi changed to run without Giant, and taskqueue_swi_giant added to run with Giant.
|500104
|link:https://svnweb.freebsd.org/changeset/base/111600[111600]
|February 27, 2003
|cdevsw_add() and cdevsw_remove() no longer exist. Appearance of the MAJOR_AUTO allocation facility.
|500105
|link:https://svnweb.freebsd.org/changeset/base/111864[111864]
|March 4, 2003
|5.0-CURRENT after new cdevsw initialization method.
|500106
|link:https://svnweb.freebsd.org/changeset/base/112007[112007]
|March 8, 2003
|devstat_add_entry() has been replaced by devstat_new_entry()
|500107
|link:https://svnweb.freebsd.org/changeset/base/112288[112288]
|March 15, 2003
|Devstat interface change; see sys/sys/param.h 1.149
|500108
|link:https://svnweb.freebsd.org/changeset/base/112300[112300]
|March 15, 2003
|Token-Ring interface changes.
|500109
|link:https://svnweb.freebsd.org/changeset/base/112571[112571]
|March 25, 2003
|Addition of vm_paddr_t.
|500110
|link:https://svnweb.freebsd.org/changeset/base/112741[112741]
|March 28, 2003
|5.0-CURRENT after man:realpath[3] has been made thread-safe
|500111
|link:https://svnweb.freebsd.org/changeset/base/113273[113273]
|April 9, 2003
|5.0-CURRENT after man:usbhid[3] has been synced with NetBSD
|500112
|link:https://svnweb.freebsd.org/changeset/base/113597[113597]
|April 17, 2003
|5.0-CURRENT after new NSS implementation and addition of POSIX.1 getpw*_r, getgr*_r functions
|500113
|link:https://svnweb.freebsd.org/changeset/base/114492[114492]
|May 2, 2003
|5.0-CURRENT after removal of the old rc system.
|501000
|link:https://svnweb.freebsd.org/changeset/base/115816[115816]
|June 4, 2003
|5.1-RELEASE.
|501100
|link:https://svnweb.freebsd.org/changeset/base/115710[115710]
|June 2, 2003
|5.1-CURRENT after branching for RELENG_5_1.
|501101
|link:https://svnweb.freebsd.org/changeset/base/117025[117025]
|June 29, 2003
|5.1-CURRENT after correcting the semantics of man:sigtimedwait[2] and man:sigwaitinfo[2].
|501102
|link:https://svnweb.freebsd.org/changeset/base/117191[117191]
|July 3, 2003
|5.1-CURRENT after adding the lockfunc and lockfuncarg fields to man:bus_dma_tag_create[9].
|501103
|link:https://svnweb.freebsd.org/changeset/base/118241[118241]
|July 31, 2003
|5.1-CURRENT after GCC 3.3.1-pre 20030711 snapshot integration.
|501104
|link:https://svnweb.freebsd.org/changeset/base/118511[118511]
|August 5, 2003
|5.1-CURRENT after 3ware API changes to twe.
|501105
|link:https://svnweb.freebsd.org/changeset/base/119021[119021]
|August 17, 2003
|5.1-CURRENT after adding support for dynamically linked /bin and /sbin and moving the libraries to /lib.
|501106
|link:https://svnweb.freebsd.org/changeset/base/119881[119881]
|September 8, 2003
|5.1-CURRENT after adding kernel support for Coda 6.x.
|501107
|link:https://svnweb.freebsd.org/changeset/base/120180[120180]
|September 17, 2003
|5.1-CURRENT after 16550 UART constants moved from [.filename]#<dev/sio/sioreg.h># to [.filename]#<dev/ic/ns16550.h>#. Also when libmap functionality was unconditionally supported by rtld.
|501108
|link:https://svnweb.freebsd.org/changeset/base/120386[120386]
|September 23, 2003
|5.1-CURRENT after PFIL_HOOKS API update
|501109
|link:https://svnweb.freebsd.org/changeset/base/120503[120503]
|September 27, 2003
|5.1-CURRENT after adding man:kiconv[3]
|501110
|link:https://svnweb.freebsd.org/changeset/base/120556[120556]
|September 28, 2003
|5.1-CURRENT after changing default operations for open and close in cdevsw
|501111
|link:https://svnweb.freebsd.org/changeset/base/121125[121125]
|October 16, 2003
|5.1-CURRENT after changed layout of cdevsw
|501112
|link:https://svnweb.freebsd.org/changeset/base/121129[121129]
|October 16, 2003
|5.1-CURRENT after adding kobj multiple inheritance
|501113
|link:https://svnweb.freebsd.org/changeset/base/121816[121816]
|October 31, 2003
|5.1-CURRENT after the if_xname change in struct ifnet
|501114
|link:https://svnweb.freebsd.org/changeset/base/122779[122779]
|November 16, 2003
|5.1-CURRENT after changing /bin and /sbin to be dynamically linked
|502000
|link:https://svnweb.freebsd.org/changeset/base/123198[123198]
|December 7, 2003
|5.2-RELEASE
|502010
|link:https://svnweb.freebsd.org/changeset/base/126150[126150]
|February 23, 2004
|5.2.1-RELEASE
|502100
|link:https://svnweb.freebsd.org/changeset/base/123196[123196]
|December 7, 2003
|5.2-CURRENT after branching for RELENG_5_2
|502101
|link:https://svnweb.freebsd.org/changeset/base/123677[123677]
|December 19, 2003
|5.2-CURRENT after __cxa_atexit/__cxa_finalize functions were added to libc.
|502102
|link:https://svnweb.freebsd.org/changeset/base/125236[125236]
|January 30, 2004
|5.2-CURRENT after change of default thread library from libc_r to libpthread.
|502103
|link:https://svnweb.freebsd.org/changeset/base/126083[126083]
|February 21, 2004
|5.2-CURRENT after device driver API megapatch.
|502104
|link:https://svnweb.freebsd.org/changeset/base/126208[126208]
|February 25, 2004
|5.2-CURRENT after getopt_long_only() addition.
|502105
|link:https://svnweb.freebsd.org/changeset/base/126644[126644]
|March 5, 2004
|5.2-CURRENT after NULL is made into ((void *)0) for C, creating more warnings.
|502106
|link:https://svnweb.freebsd.org/changeset/base/126757[126757]
|March 8, 2004
|5.2-CURRENT after pf is linked to the build and install.
|502107
|link:https://svnweb.freebsd.org/changeset/base/126819[126819]
|March 10, 2004
|5.2-CURRENT after time_t is changed to a 64-bit value on sparc64.
|502108
|link:https://svnweb.freebsd.org/changeset/base/126891[126891]
|March 12, 2004
|5.2-CURRENT after Intel C/C++ compiler support in some headers and man:execve[2] changes to be more strictly conforming to POSIX.
|502109
|link:https://svnweb.freebsd.org/changeset/base/127312[127312]
|March 22, 2004
|5.2-CURRENT after the introduction of the bus_alloc_resource_any API
|502110
|link:https://svnweb.freebsd.org/changeset/base/127475[127475]
|March 27, 2004
|5.2-CURRENT after the addition of UTF-8 locales
|502111
|link:https://svnweb.freebsd.org/changeset/base/128144[128144]
|April 11, 2004
|5.2-CURRENT after the removal of the man:getvfsent[3] API
|502112
|link:https://svnweb.freebsd.org/changeset/base/128182[128182]
|April 13, 2004
|5.2-CURRENT after the addition of the .warning directive for make.
|502113
|link:https://svnweb.freebsd.org/changeset/base/130057[130057]
|June 4, 2004
|5.2-CURRENT after ttyioctl() was made mandatory for serial drivers.
|502114
|link:https://svnweb.freebsd.org/changeset/base/130418[130418]
|June 13, 2004
|5.2-CURRENT after import of the ALTQ framework.
|502115
|link:https://svnweb.freebsd.org/changeset/base/130481[130481]
|June 14, 2004
|5.2-CURRENT after changing man:sema_timedwait[9] to return 0 on success and a non-zero error code on failure.
|502116
|link:https://svnweb.freebsd.org/changeset/base/130585[130585]
|June 16, 2004
|5.2-CURRENT after changing kernel dev_t to be pointer to struct cdev *.
|502117
|link:https://svnweb.freebsd.org/changeset/base/130640[130640]
|June 17, 2004
|5.2-CURRENT after changing kernel udev_t to dev_t.
|502118
|link:https://svnweb.freebsd.org/changeset/base/130656[130656]
|June 17, 2004
|5.2-CURRENT after adding support for CLOCK_VIRTUAL and CLOCK_PROF to man:clock_gettime[2] and man:clock_getres[2].
|502119
|link:https://svnweb.freebsd.org/changeset/base/130934[130934]
|June 22, 2004
|5.2-CURRENT after the network interface cloning overhaul.
|502120
|link:https://svnweb.freebsd.org/changeset/base/131429[131429]
|July 2, 2004
|5.2-CURRENT after the update of the package tools to revision 20040629.
|502121
|link:https://svnweb.freebsd.org/changeset/base/131883[131883]
|July 9, 2004
|5.2-CURRENT after marking Bluetooth code as non-i386 specific.
|502122
|link:https://svnweb.freebsd.org/changeset/base/131971[131971]
|July 11, 2004
|5.2-CURRENT after the introduction of the KDB debugger framework, the conversion of DDB into a backend and the introduction of the GDB backend.
|502123
|link:https://svnweb.freebsd.org/changeset/base/132025[132025]
|July 12, 2004
|5.2-CURRENT after change to make VFS_ROOT take a struct thread argument as does vflush. Struct kinfo_proc now has a user data pointer. The switch of the default X implementation to `xorg` was also made at this time.
|502124
|link:https://svnweb.freebsd.org/changeset/base/132597[132597]
|July 24, 2004
|5.2-CURRENT after the change to separate the way ports rc.d and legacy scripts are started.
|502125
|link:https://svnweb.freebsd.org/changeset/base/132726[132726]
|July 28, 2004
|5.2-CURRENT after the backout of the previous change.
|502126
|link:https://svnweb.freebsd.org/changeset/base/132914[132914]
|July 31, 2004
|5.2-CURRENT after the removal of kmem_alloc_pageable() and the import of gcc 3.4.2.
|502127
|link:https://svnweb.freebsd.org/changeset/base/132991[132991]
|August 2, 2004
|5.2-CURRENT after changing the UMA kernel API to allow ctors/inits to fail.
|502128
|link:https://svnweb.freebsd.org/changeset/base/133306[133306]
|August 8, 2004
|5.2-CURRENT after the change of the vfs_mount signature as well as global replacement of PRISON_ROOT with SUSER_ALLOWJAIL for the man:suser[9] API.
|503000
|link:https://svnweb.freebsd.org/changeset/base/134189[134189]
|August 23, 2004
|5.3-BETA/RC before the pfil API change
|503001
|link:https://svnweb.freebsd.org/changeset/base/135580[135580]
|September 22, 2004
|5.3-RELEASE
|503100
|link:https://svnweb.freebsd.org/changeset/base/136595[136595]
|October 16, 2004
|5.3-STABLE after branching for RELENG_5_3
|503101
|link:https://svnweb.freebsd.org/changeset/base/138459[138459]
|December 3, 2004
|5.3-STABLE after addition of glibc style man:strftime[3] padding options.
|503102
|link:https://svnweb.freebsd.org/changeset/base/141788[141788]
|February 13, 2005
|5.3-STABLE after OpenBSD's man:nc[1] import MFC.
|503103
|link:https://svnweb.freebsd.org/changeset/base/142639[142639]
|February 27, 2005
|5.4-PRERELEASE after the MFC of the fixes in [.filename]#<src/include/stdbool.h># and [.filename]#<src/sys/i386/include/_types.h># for using the GCC-compatibility of the Intel C/C++ compiler.
|503104
|link:https://svnweb.freebsd.org/changeset/base/142835[142835]
|February 28, 2005
|5.4-PRERELEASE after the MFC of the change of ifi_epoch from wall clock time to uptime.
|503105
|link:https://svnweb.freebsd.org/changeset/base/143029[143029]
|March 2, 2005
|5.4-PRERELEASE after the MFC of the fix of EOVERFLOW check in man:vswprintf[3].
|504000
|link:https://svnweb.freebsd.org/changeset/base/144575[144575]
|April 3, 2005
|5.4-RELEASE.
|504100
|link:https://svnweb.freebsd.org/changeset/base/144581[144581]
|April 3, 2005
|5.4-STABLE after branching for RELENG_5_4
|504101
|link:https://svnweb.freebsd.org/changeset/base/146105[146105]
|May 11, 2005
|5.4-STABLE after increasing the default thread stack sizes
|504102
|link:https://svnweb.freebsd.org/changeset/base/504101[504101]
|June 24, 2005
|5.4-STABLE after the addition of sha256
|504103
|link:https://svnweb.freebsd.org/changeset/base/150892[150892]
|October 3, 2005
|5.4-STABLE after the MFC of if_bridge
|504104
|link:https://svnweb.freebsd.org/changeset/base/152370[152370]
|November 13, 2005
|5.4-STABLE after the MFC of bsdiff and portsnap
|504105
|link:https://svnweb.freebsd.org/changeset/base/154464[154464]
|January 17, 2006
|5.4-STABLE after MFC of ldconfig_local_dirs change.
|505000
|link:https://svnweb.freebsd.org/changeset/base/158481[158481]
|May 12, 2006
|5.5-RELEASE.
|505100
|link:https://svnweb.freebsd.org/changeset/base/158482[158482]
|May 12, 2006
|5.5-STABLE after branching for RELENG_5_5
|===
[[versions-4]]
== FreeBSD 4 Versions
[[freebsd-versions-table-4]]
.FreeBSD 4 `__FreeBSD_version` Values
[cols="1,1,1,1", frame="none", options="header"]
|===
| Value
| Revision
| Date
| Release
|400000
|link:https://svnweb.freebsd.org/changeset/base/43041[43041]
|January 22, 1999
|4.0-CURRENT after 3.4 branch
|400001
|link:https://svnweb.freebsd.org/changeset/base/44177[44177]
|February 20, 1999
|4.0-CURRENT after change in dynamic linker handling
|400002
|link:https://svnweb.freebsd.org/changeset/base/44699[44699]
|March 13, 1999
|4.0-CURRENT after C++ constructor/destructor order change
|400003
|link:https://svnweb.freebsd.org/changeset/base/45059[45059]
|March 27, 1999
|4.0-CURRENT after functioning man:dladdr[3]
|400004
|link:https://svnweb.freebsd.org/changeset/base/45321[45321]
|April 5, 1999
|4.0-CURRENT after __deregister_frame_info dynamic linker bug fix (also 4.0-CURRENT after EGCS 1.1.2 integration)
|400005
|link:https://svnweb.freebsd.org/changeset/base/46113[46113]
|April 27, 1999
|4.0-CURRENT after man:suser[9] API change (also 4.0-CURRENT after newbus)
|400006
|link:https://svnweb.freebsd.org/changeset/base/47640[47640]
|May 31, 1999
|4.0-CURRENT after cdevsw registration change
|400007
|link:https://svnweb.freebsd.org/changeset/base/47992[47992]
|June 17, 1999
|4.0-CURRENT after the addition of so_cred for socket level credentials
|400008
|link:https://svnweb.freebsd.org/changeset/base/48048[48048]
|June 20, 1999
|4.0-CURRENT after the addition of a poll syscall wrapper to libc_r
|400009
|link:https://svnweb.freebsd.org/changeset/base/48936[48936]
|July 20, 1999
|4.0-CURRENT after the change of the kernel's `dev_t` type to `struct specinfo` pointer
|400010
|link:https://svnweb.freebsd.org/changeset/base/51649[51649]
|September 25, 1999
|4.0-CURRENT after fixing a hole in man:jail[2]
|400011
|link:https://svnweb.freebsd.org/changeset/base/51791[51791]
|September 29, 1999
|4.0-CURRENT after the `sigset_t` datatype change
|400012
|link:https://svnweb.freebsd.org/changeset/base/53164[53164]
|November 15, 1999
|4.0-CURRENT after the cutover to the GCC 2.95.2 compiler
|400013
|link:https://svnweb.freebsd.org/changeset/base/54123[54123]
|December 4, 1999
|4.0-CURRENT after adding pluggable linux-mode ioctl handlers
|400014
|link:https://svnweb.freebsd.org/changeset/base/56216[56216]
|January 18, 2000
|4.0-CURRENT after importing OpenSSL
|400015
|link:https://svnweb.freebsd.org/changeset/base/56700[56700]
|January 27, 2000
|4.0-CURRENT after the C++ ABI change in GCC 2.95.2 from -fvtable-thunks to -fno-vtable-thunks by default
|400016
|link:https://svnweb.freebsd.org/changeset/base/57529[57529]
|February 27, 2000
|4.0-CURRENT after importing OpenSSH
|400017
|link:https://svnweb.freebsd.org/changeset/base/58005[58005]
|March 13, 2000
|4.0-RELEASE
|400018
|link:https://svnweb.freebsd.org/changeset/base/58170[58170]
|March 17, 2000
|4.0-STABLE after 4.0-RELEASE
|400019
|link:https://svnweb.freebsd.org/changeset/base/60047[60047]
|May 5, 2000
|4.0-STABLE after the introduction of delayed checksums.
|400020
|link:https://svnweb.freebsd.org/changeset/base/61262[61262]
|June 4, 2000
|4.0-STABLE after merging libxpg4 code into libc.
|400021
|link:https://svnweb.freebsd.org/changeset/base/62820[62820]
|July 8, 2000
|4.0-STABLE after upgrading Binutils to 2.10.0, ELF branding changes, and tcsh in the base system.
|410000
|link:https://svnweb.freebsd.org/changeset/base/63095[63095]
|July 14, 2000
|4.1-RELEASE
|410001
|link:https://svnweb.freebsd.org/changeset/base/64012[64012]
|July 29, 2000
|4.1-STABLE after 4.1-RELEASE
|410002
|link:https://svnweb.freebsd.org/changeset/base/65962[65962]
|September 16, 2000
|4.1-STABLE after man:setproctitle[3] moved from libutil to libc.
|411000
|link:https://svnweb.freebsd.org/changeset/base/66336[66336]
|September 25, 2000
|4.1.1-RELEASE
|411001
|
|
|4.1.1-STABLE after 4.1.1-RELEASE
|420000
|link:https://svnweb.freebsd.org/changeset/base/68066[68066]
|October 31, 2000
|4.2-RELEASE
|420001
|link:https://svnweb.freebsd.org/changeset/base/70895[70895]
|January 10, 2001
|4.2-STABLE after combining libgcc.a and libgcc_r.a, and associated GCC linkage changes.
|430000
|link:https://svnweb.freebsd.org/changeset/base/73800[73800]
|March 6, 2001
|4.3-RELEASE
|430001
|link:https://svnweb.freebsd.org/changeset/base/76779[76779]
|May 18, 2001
|4.3-STABLE after wint_t introduction.
|430002
|link:https://svnweb.freebsd.org/changeset/base/80157[80157]
|July 22, 2001
|4.3-STABLE after PCI powerstate API merge.
|440000
|link:https://svnweb.freebsd.org/changeset/base/80923[80923]
|August 1, 2001
|4.4-RELEASE
|440001
|link:https://svnweb.freebsd.org/changeset/base/85341[85341]
|October 23, 2001
|4.4-STABLE after d_thread_t introduction.
|440002
|link:https://svnweb.freebsd.org/changeset/base/86038[86038]
|November 4, 2001
|4.4-STABLE after mount structure changes (affects filesystem klds).
|440003
|link:https://svnweb.freebsd.org/changeset/base/88130[88130]
|December 18, 2001
|4.4-STABLE after the userland components of smbfs were imported.
|450000
|link:https://svnweb.freebsd.org/changeset/base/88271[88271]
|December 20, 2001
|4.5-RELEASE
|450001
|link:https://svnweb.freebsd.org/changeset/base/91203[91203]
|February 24, 2002
|4.5-STABLE after the usb structure element rename.
|450002
|link:https://svnweb.freebsd.org/changeset/base/92151[92151]
|March 12, 2002
|4.5-STABLE after locale changes.
|450003
|
|
|(Never created)
|450004
|link:https://svnweb.freebsd.org/changeset/base/94840[94840]
|April 16, 2002
|4.5-STABLE after the `sendmail_enable` man:rc.conf[5] variable was made to take the value `NONE`.
|450005
|link:https://svnweb.freebsd.org/changeset/base/95555[95555]
|April 27, 2002
|4.5-STABLE after moving to XFree86 4 by default for package builds.
|450006
|link:https://svnweb.freebsd.org/changeset/base/95846[95846]
|May 1, 2002
|4.5-STABLE after accept filtering was fixed so that it is no longer susceptible to an easy DoS.
|460000
|link:https://svnweb.freebsd.org/changeset/base/97923[97923]
|June 21, 2002
|4.6-RELEASE
|460001
|link:https://svnweb.freebsd.org/changeset/base/98730[98730]
|June 21, 2002
|4.6-STABLE after man:sendfile[2] was fixed to comply with its documentation and no longer count headers sent against the amount of data to be sent from the file.
|460002
|link:https://svnweb.freebsd.org/changeset/base/100366[100366]
|July 19, 2002
|4.6.2-RELEASE
|460100
|link:https://svnweb.freebsd.org/changeset/base/98857[98857]
|June 26, 2002
|4.6-STABLE
|460101
|link:https://svnweb.freebsd.org/changeset/base/98880[98880]
|June 26, 2002
|4.6-STABLE after MFC of `sed -i`.
|460102
|link:https://svnweb.freebsd.org/changeset/base/102759[102759]
|September 1, 2002
|4.6-STABLE after MFC of many new pkg_install features from the HEAD.
|470000
|link:https://svnweb.freebsd.org/changeset/base/104655[104655]
|October 8, 2002
|4.7-RELEASE
|470100
|link:https://svnweb.freebsd.org/changeset/base/104717[104717]
|October 9, 2002
|4.7-STABLE
|470101
|link:https://svnweb.freebsd.org/changeset/base/106732[106732]
|November 10, 2002
|Start generating __std{in,out,err}p references rather than __sF. This changes std{in,out,err} from a compile-time expression to a runtime one.
|470102
|link:https://svnweb.freebsd.org/changeset/base/109753[109753]
|January 23, 2003
|4.7-STABLE after MFC of mbuf changes to replace m_aux mbufs with m_tags.
|470103
|link:https://svnweb.freebsd.org/changeset/base/110887[110887]
|February 14, 2003
|4.7-STABLE gets OpenSSL 0.9.7
|480000
|link:https://svnweb.freebsd.org/changeset/base/112852[112852]
|March 30, 2003
|4.8-RELEASE
|480100
|link:https://svnweb.freebsd.org/changeset/base/113107[113107]
|April 5, 2003
|4.8-STABLE
|480101
|link:https://svnweb.freebsd.org/changeset/base/115232[115232]
|May 22, 2003
|4.8-STABLE after man:realpath[3] has been made thread-safe
|480102
|link:https://svnweb.freebsd.org/changeset/base/118737[118737]
|August 10, 2003
|4.8-STABLE after 3ware API changes to twe.
|490000
|link:https://svnweb.freebsd.org/changeset/base/121592[121592]
|October 27, 2003
|4.9-RELEASE
|490100
|link:https://svnweb.freebsd.org/changeset/base/121593[121593]
|October 27, 2003
|4.9-STABLE
|490101
|link:https://svnweb.freebsd.org/changeset/base/124264[124264]
|January 8, 2004
|4.9-STABLE after e_sid was added to struct kinfo_eproc.
|490102
|link:https://svnweb.freebsd.org/changeset/base/125417[125417]
|February 4, 2004
|4.9-STABLE after MFC of libmap functionality for rtld.
|491000
|link:https://svnweb.freebsd.org/changeset/base/129700[129700]
|May 25, 2004
|4.10-RELEASE
|491100
|link:https://svnweb.freebsd.org/changeset/base/129918[129918]
|June 1, 2004
|4.10-STABLE
|491101
|link:https://svnweb.freebsd.org/changeset/base/133506[133506]
|August 11, 2004
|4.10-STABLE after MFC of revision 20040629 of the package tools
|491102
|link:https://svnweb.freebsd.org/changeset/base/137786[137786]
|November 16, 2004
|4.10-STABLE after VM fix dealing with unwiring of fictitious pages
|492000
|link:https://svnweb.freebsd.org/changeset/base/138960[138960]
|December 17, 2004
|4.11-RELEASE
|492100
|link:https://svnweb.freebsd.org/changeset/base/138959[138959]
|December 17, 2004
|4.11-STABLE
|492101
|link:https://svnweb.freebsd.org/changeset/base/157843[157843]
|April 18, 2006
|4.11-STABLE after adding libdata/ldconfig directories to mtree files.
|===
[[versions-3]]
== FreeBSD 3 Versions
[[freebsd-versions-table-3]]
.FreeBSD 3 `__FreeBSD_version` Values
[cols="1,1,1,1", frame="none", options="header"]
|===
| Value
| Revision
| Date
| Release
|300000
|link:https://svnweb.freebsd.org/changeset/base/22917[22917]
|February 19, 1997
|3.0-CURRENT before man:mount[2] change
|300001
|link:https://svnweb.freebsd.org/changeset/base/36283[36283]
|September 24, 1997
|3.0-CURRENT after man:mount[2] change
|300002
|link:https://svnweb.freebsd.org/changeset/base/36592[36592]
|June 2, 1998
|3.0-CURRENT after man:semctl[2] change
|300003
|link:https://svnweb.freebsd.org/changeset/base/36735[36735]
|June 7, 1998
|3.0-CURRENT after ioctl arg changes
|300004
|link:https://svnweb.freebsd.org/changeset/base/38768[38768]
|September 3, 1998
|3.0-CURRENT after ELF conversion
|300005
|link:https://svnweb.freebsd.org/changeset/base/40438[40438]
|October 16, 1998
|3.0-RELEASE
|300006
|link:https://svnweb.freebsd.org/changeset/base/40445[40445]
|October 16, 1998
|3.0-CURRENT after 3.0-RELEASE
|300007
|link:https://svnweb.freebsd.org/changeset/base/43042[43042]
|January 22, 1999
|3.0-STABLE after 3/4 branch
|310000
|link:https://svnweb.freebsd.org/changeset/base/43807[43807]
|February 9, 1999
|3.1-RELEASE
|310001
|link:https://svnweb.freebsd.org/changeset/base/45060[45060]
|March 27, 1999
|3.1-STABLE after 3.1-RELEASE
|310002
|link:https://svnweb.freebsd.org/changeset/base/45689[45689]
|April 14, 1999
|3.1-STABLE after C++ constructor/destructor order change
|320000
|
|
|3.2-RELEASE
|320001
|link:https://svnweb.freebsd.org/changeset/base/46742[46742]
|May 8, 1999
|3.2-STABLE
|320002
|link:https://svnweb.freebsd.org/changeset/base/50563[50563]
|August 29, 1999
|3.2-STABLE after binary-incompatible IPFW and socket changes
|330000
|link:https://svnweb.freebsd.org/changeset/base/50813[50813]
|September 2, 1999
|3.3-RELEASE
|330001
|link:https://svnweb.freebsd.org/changeset/base/51328[51328]
|September 16, 1999
|3.3-STABLE
|330002
|link:https://svnweb.freebsd.org/changeset/base/53671[53671]
|November 24, 1999
|3.3-STABLE after adding man:mkstemp[3] to libc
|340000
|link:https://svnweb.freebsd.org/changeset/base/54166[54166]
|December 5, 1999
|3.4-RELEASE
|340001
|link:https://svnweb.freebsd.org/changeset/base/54730[54730]
|December 17, 1999
|3.4-STABLE
|350000
|link:https://svnweb.freebsd.org/changeset/base/61876[61876]
|June 20, 2000
|3.5-RELEASE
|350001
|link:https://svnweb.freebsd.org/changeset/base/63043[63043]
|July 12, 2000
|3.5-STABLE
|===
[[versions-2.2]]
== FreeBSD 2.2 Versions
[[freebsd-versions-table-2.2]]
.FreeBSD 2.2 `__FreeBSD_version` Values
[cols="1,1,1,1", frame="none", options="header"]
|===
| Value
| Revision
| Date
| Release
|220000
|link:https://svnweb.freebsd.org/changeset/base/22918[22918]
|February 19, 1997
|2.2-RELEASE
|(not changed)
|
|
|2.2.1-RELEASE
|(not changed)
|
|
|2.2-STABLE after 2.2.1-RELEASE
|221001
|link:https://svnweb.freebsd.org/changeset/base/24941[24941]
|April 15, 1997
|2.2-STABLE after texinfo-3.9
|221002
|link:https://svnweb.freebsd.org/changeset/base/25325[25325]
|April 30, 1997
|2.2-STABLE after top
|222000
|link:https://svnweb.freebsd.org/changeset/base/25851[25851]
|May 16, 1997
|2.2.2-RELEASE
|222001
|link:https://svnweb.freebsd.org/changeset/base/25921[25921]
|May 19, 1997
|2.2-STABLE after 2.2.2-RELEASE
|225000
|link:https://svnweb.freebsd.org/changeset/base/30053[30053]
|October 2, 1997
|2.2.5-RELEASE
|225001
|link:https://svnweb.freebsd.org/changeset/base/31300[31300]
|November 20, 1997
|2.2-STABLE after 2.2.5-RELEASE
|225002
|link:https://svnweb.freebsd.org/changeset/base/32019[32019]
|December 27, 1997
|2.2-STABLE after ldconfig -R merge
|226000
|link:https://svnweb.freebsd.org/changeset/base/34445[34445]
|March 24, 1998
|2.2.6-RELEASE
|227000
|link:https://svnweb.freebsd.org/changeset/base/37803[37803]
|July 21, 1998
|2.2.7-RELEASE
|227001
|link:https://svnweb.freebsd.org/changeset/base/37809[37809]
|July 21, 1998
|2.2-STABLE after 2.2.7-RELEASE
|227002
|link:https://svnweb.freebsd.org/changeset/base/39489[39489]
|September 19, 1998
|2.2-STABLE after man:semctl[2] change
|228000
|link:https://svnweb.freebsd.org/changeset/base/41403[41403]
|November 29, 1998
|2.2.8-RELEASE
|228001
|link:https://svnweb.freebsd.org/changeset/base/41418[41418]
|November 29, 1998
|2.2-STABLE after 2.2.8-RELEASE
|===
[NOTE]
====
Note that 2.2-STABLE sometimes identifies itself as "2.2.5-STABLE" after the 2.2.5-RELEASE. The pattern used to be year followed by the month, but we decided to change it to a more straightforward major/minor system starting from 2.2. This is because the parallel development on several branches made it infeasible to classify the releases merely by their real release dates. Do not worry about old -CURRENTs; they are listed here just for reference.
====
[[versions-2]]
== FreeBSD 2 Before 2.2-RELEASE Versions
[[freebsd-versions-table-2]]
.FreeBSD 2 Before 2.2-RELEASE `__FreeBSD_version` Values
[cols="1,1,1,1", frame="none", options="header"]
|===
| Value
| Revision
| Date
| Release
|119411
|
|
|2.0-RELEASE
|199501
|link:https://svnweb.freebsd.org/changeset/base/7153[7153]
|March 19, 1995
|2.1-CURRENT
|199503
|link:https://svnweb.freebsd.org/changeset/base/7310[7310]
|March 24, 1995
|2.1-CURRENT
|199504
|link:https://svnweb.freebsd.org/changeset/base/7704[7704]
|April 9, 1995
|2.0.5-RELEASE
|199508
|link:https://svnweb.freebsd.org/changeset/base/10297[10297]
|August 26, 1995
|2.2-CURRENT before 2.1
|199511
|link:https://svnweb.freebsd.org/changeset/base/12189[12189]
|November 10, 1995
|2.1.0-RELEASE
|199512
|link:https://svnweb.freebsd.org/changeset/base/12196[12196]
|November 10, 1995
|2.2-CURRENT before 2.1.5
|199607
|link:https://svnweb.freebsd.org/changeset/base/17067[17067]
|July 10, 1996
|2.1.5-RELEASE
|199608
|link:https://svnweb.freebsd.org/changeset/base/17127[17127]
|July 12, 1996
|2.2-CURRENT before 2.1.6
|199612
|link:https://svnweb.freebsd.org/changeset/base/19358[19358]
|November 15, 1996
|2.1.6-RELEASE
|199612
|
|
|2.1.7-RELEASE
|===
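The values in these tables are normally consumed from the C preprocessor rather than read by hand. As a minimal, hedged sketch (the cutoff 700055, corresponding to 7.0-RELEASE in the table above, is only an illustrative choice), a program can branch on `__FreeBSD_version` like this:

[source,c]
----
/* Hypothetical example: choose a code path based on __FreeBSD_version. */
#include <sys/param.h>	/* defines __FreeBSD_version on FreeBSD */
#include <stdio.h>

int
main(void)
{
#if defined(__FreeBSD_version) && __FreeBSD_version >= 700055
	/* 7.0-RELEASE or newer, per the table above. */
	printf("__FreeBSD_version = %d\n", __FreeBSD_version);
#else
	printf("older FreeBSD, or not FreeBSD at all\n");
#endif
	return (0);
}
----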
diff --git a/documentation/themes/beastie/layouts/articles/baseof.html b/documentation/themes/beastie/layouts/articles/baseof.html
index 08af64b0f0..f0dc57dcc3 100644
--- a/documentation/themes/beastie/layouts/articles/baseof.html
+++ b/documentation/themes/beastie/layouts/articles/baseof.html
@@ -1,21 +1,23 @@
<!DOCTYPE html>
<html lang="{{ $.Site.LanguageCode | default "en" }}">
<head>
<meta charset="utf-8">
- <meta name="description" content="FreeBSD is an operating system used to power modern servers, desktops, and embedded platforms." />
+ <meta name="viewport" content="width=device-width, initial-scale=1">
+ <meta name="description" content="{{ if .Params.Description }}{{ .Params.Description }}{{ end }}" />
+
<meta name="keywords" content="FreeBSD, BSD, UNIX, open source" />
- <meta name="copyright" content="1995-2020 The FreeBSD Foundation">
+ <meta name="copyright" content="1995-2021 The FreeBSD Foundation">
<title>{{ block "title" . }}{{ with .Params.Title }} {{ . }}{{ end }}{{ end }}</title>
<link rel="shortcut icon" href="{{ absLangURL ($.Site.BaseURL) }}favicon.ico">
<link rel="stylesheet" href="{{ absLangURL ($.Site.BaseURL) }}css/docbook.css">
<link rel="stylesheet" href="{{ absLangURL ($.Site.BaseURL) }}css/font-awesome-min.css">
</head>
<body>
<main>
{{ block "main" . }}{{ end }}
</main>
</body>
</html>
diff --git a/documentation/themes/beastie/layouts/books/baseof.html b/documentation/themes/beastie/layouts/books/baseof.html
index 08af64b0f0..f0dc57dcc3 100644
--- a/documentation/themes/beastie/layouts/books/baseof.html
+++ b/documentation/themes/beastie/layouts/books/baseof.html
@@ -1,21 +1,23 @@
<!DOCTYPE html>
<html lang="{{ $.Site.LanguageCode | default "en" }}">
<head>
<meta charset="utf-8">
- <meta name="description" content="FreeBSD is an operating system used to power modern servers, desktops, and embedded platforms." />
+ <meta name="viewport" content="width=device-width, initial-scale=1">
+ <meta name="description" content="{{ if .Params.Description }}{{ .Params.Description }}{{ end }}" />
+
<meta name="keywords" content="FreeBSD, BSD, UNIX, open source" />
- <meta name="copyright" content="1995-2020 The FreeBSD Foundation">
+ <meta name="copyright" content="1995-2021 The FreeBSD Foundation">
<title>{{ block "title" . }}{{ with .Params.Title }} {{ . }}{{ end }}{{ end }}</title>
<link rel="shortcut icon" href="{{ absLangURL ($.Site.BaseURL) }}favicon.ico">
<link rel="stylesheet" href="{{ absLangURL ($.Site.BaseURL) }}css/docbook.css">
<link rel="stylesheet" href="{{ absLangURL ($.Site.BaseURL) }}css/font-awesome-min.css">
</head>
<body>
<main>
{{ block "main" . }}{{ end }}
</main>
</body>
</html>
diff --git a/documentation/themes/beastie/layouts/partials/site-head.html b/documentation/themes/beastie/layouts/partials/site-head.html
index 489f8114fc..5242f76004 100644
--- a/documentation/themes/beastie/layouts/partials/site-head.html
+++ b/documentation/themes/beastie/layouts/partials/site-head.html
@@ -1,14 +1,14 @@
<head>
<meta charset="utf-8">
- <meta name="description" content="FreeBSD is an operating system used to power modern servers, desktops, and embedded platforms." />
- <meta name="keywords" content="FreeBSD, BSD, UNIX, open source" />
-
+ <meta name="viewport" content="width=device-width, initial-scale=1">
+ <meta name="description" content="{{ if .IsHome }}{{ .Site.Params.description }}{{ else }}{{ .Description }}{{ end }}"/>
+ <meta name="keywords" content="FreeBSD, FreeBSD Documentation Portal, BSD, UNIX, open source" />
<meta name="copyright" content="1995-2021 The FreeBSD Foundation">
- <title>{{ block "title" . }}{{ .Site.Title }} {{ with .Params.Title }} | {{ . }}{{ end }}{{ end }}</title>
+ <title>{{ block "title" . }}{{ .Site.Title }}{{ with .Params.Title }} | {{ . }}{{ end }}{{ end }}</title>
<link rel="shortcut icon" href="{{ absLangURL ($.Site.BaseURL) }}favicon.ico">
<link rel="stylesheet" href="{{ absLangURL ($.Site.BaseURL) }}css/fixed.css">
</head>
