Index: head/en_US.ISO8859-1/books/arch-handbook/boot/chapter.xml =================================================================== --- head/en_US.ISO8859-1/books/arch-handbook/boot/chapter.xml (revision 48481) +++ head/en_US.ISO8859-1/books/arch-handbook/boot/chapter.xml (revision 48482) @@ -1,2387 +1,2387 @@ Bootstrapping and Kernel Initialization Sergey Lyubka Contributed by Sergio Andrés Gómez del Real Updated and enhanced by Synopsis BIOS firmware POST IA-32 booting system initialization This chapter is an overview of the boot and system initialization processes, starting from the BIOS (firmware) POST, to the first user process creation. Since the initial steps of system startup are very architecture dependent, the IA-32 architecture is used as an example. The &os; boot process can be surprisingly complex. After control is passed from the BIOS, a considerable amount of low-level configuration must be done before the kernel can be loaded and executed. This setup must be done in a simple and flexible manner, allowing the user a great deal of customization possibilities. Overview The boot process is an extremely machine-dependent activity. Not only must code be written for every computer architecture, but there may also be multiple types of booting on the same architecture. For example, a directory listing of /usr/src/sys/boot reveals a great amount of architecture-dependent code. There is a directory for each of the various supported architectures. In the x86-specific i386 directory, there are subdirectories for different boot standards like mbr (Master Boot Record), gpt (GUID Partition Table), and efi (Extensible Firmware Interface). Each boot standard has its own conventions and data structures. The example that follows shows booting an x86 computer from an MBR hard drive with the &os; boot0 multi-boot loader stored in the very first sector. That boot code starts the &os; three-stage boot process. The key to understanding this process is that it is a series of stages of increasing complexity. These stages are boot1, boot2, and loader (see &man.boot.8; for more detail). The boot system executes each stage in sequence. The last stage, loader, is responsible for loading the &os; kernel. Each stage is examined in the following sections. Here is an example of the output generated by the different boot stages. Actual output may differ from machine to machine: &os; Component Output (may vary) boot0 F1 FreeBSD F2 BSD F5 Disk 2 boot2 This prompt will appear if the user presses a key just after selecting an OS to boot at the boot0 stage. >>FreeBSD/i386 BOOT Default: 1:ad(1,a)/boot/loader boot: loader BTX loader 1.00 BTX version is 1.02 Consoles: internal video/keyboard BIOS drive C: is disk0 BIOS 639kB/2096064kB available memory FreeBSD/x86 bootstrap loader, Revision 1.1 Console internal video/keyboard (root@snap.freebsd.org, Thu Jan 16 22:18:05 UTC 2014) Loading /boot/defaults/loader.conf /boot/kernel/kernel text=0xed9008 data=0x117d28+0x176650 syms=[0x8+0x137988+0x8+0x1515f8] kernel Copyright (c) 1992-2013 The FreeBSD Project. Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994 The Regents of the University of California. All rights reserved. FreeBSD is a registered trademark of The FreeBSD Foundation. 
FreeBSD 10.0-RELEASE #0 r260789: Thu Jan 16 22:34:59 UTC 2014 root@snap.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64 FreeBSD clang version 3.3 (tags/RELEASE_33/final 183502) 20130610 The <acronym>BIOS</acronym> When the computer powers on, the processor's registers are set to some predefined values. One of the registers is the instruction pointer register, and its value after a power on is well defined: it is a 32-bit value of 0xfffffff0. The instruction pointer register (also known as the Program Counter) points to code to be executed by the processor. Another important register is the cr0 32-bit control register, and its value just after a reboot is 0. One of cr0's bits, the PE (Protection Enabled) bit, indicates whether the processor is running in 32-bit protected mode or 16-bit real mode. Since this bit is cleared at boot time, the processor boots in 16-bit real mode. Real mode means, among other things, that linear and physical addresses are identical. The reason for the processor not to start immediately in 32-bit protected mode is backwards compatibility. In particular, the boot process relies on the services provided by the BIOS, and the BIOS itself works in legacy, 16-bit code. The value of 0xfffffff0 is slightly less than 4 GB, so unless the machine has 4 GB of physical memory, it cannot point to a valid memory address. The computer's hardware translates this address so that it points to a BIOS memory block. The BIOS (Basic Input Output System) is a chip on the motherboard that has a relatively small amount of read-only memory (ROM). This memory contains various low-level routines that are specific to the hardware supplied with the motherboard. The processor will first jump to the address 0xfffffff0, which really resides in the BIOS's memory. Usually this address contains a jump instruction to the BIOS's POST routines. The POST (Power On Self Test) is a set of routines including the memory check, system bus check, and other low-level initialization so the CPU can set up the computer properly. The important step of this stage is determining the boot device. Modern BIOS implementations permit the selection of a boot device, allowing booting from a floppy, CD-ROM, hard disk, or other devices. The very last thing in the POST is the INT 0x19 instruction. The INT 0x19 handler reads 512 bytes from the first sector of boot device into the memory at address 0x7c00. The term first sector originates from hard drive architecture, where the magnetic plate is divided into a number of cylindrical tracks. Tracks are numbered, and every track is divided into a number (usually 64) of sectors. Track numbers start at 0, but sector numbers start from 1. Track 0 is the outermost on the magnetic plate, and sector 1, the first sector, has a special purpose. It is also called the MBR, or Master Boot Record. The remaining sectors on the first track are never used. This sector is our boot-sequence starting point. As we will see, this sector contains a copy of our boot0 program. A jump is made by the BIOS to address 0x7c00 so it starts executing. The Master Boot Record (<literal>boot0</literal>) MBR After control is received from the BIOS at memory address 0x7c00, boot0 starts executing. It is the first piece of code under &os; control. The task of boot0 is quite simple: scan the partition table and let the user choose which partition to boot from. The Partition Table is a special, standard data structure embedded in the MBR (hence embedded in boot0) describing the four standard PC partitions . 
boot0 resides in the filesystem as /boot/boot0. It is a small 512-byte file, and it is exactly what &os;'s installation procedure wrote to the hard disk's MBR if you chose the bootmanager option at installation time. Indeed, boot0 is the MBR. As mentioned previously, the INT 0x19 instruction causes the INT 0x19 handler to load an MBR (boot0) into memory at address 0x7c00. The source file for boot0 can be found in sys/boot/i386/boot0/boot0.S - which is an awesome piece of code written by Robert Nordier. A special structure starting from offset 0x1be in the MBR is called the partition table. It has four records of 16 bytes each, called partition records, which represent how the hard disk is partitioned, or, in &os;'s terminology, sliced. One byte of those 16 says whether a partition (slice) is bootable or not. Exactly one record must have that flag set, otherwise boot0's code will refuse to proceed. A partition record has the following fields: the 1-byte filesystem type the 1-byte bootable flag the 6 byte descriptor in CHS format the 8 byte descriptor in LBA format A partition record descriptor contains information about where exactly the partition resides on the drive. Both descriptors, LBA and CHS, describe the same information, but in different ways: LBA (Logical Block Addressing) has the starting sector for the partition and the partition's length, while CHS (Cylinder Head Sector) has coordinates for the first and last sectors of the partition. The partition table ends with the special signature 0xaa55. The MBR must fit into 512 bytes, a single disk sector. This program uses low-level tricks like taking advantage of the side effects of certain instructions and reusing register values from previous operations to make the most out of the fewest possible instructions. Care must also be taken when handling the partition table, which is embedded in the MBR itself. For these reasons, be very careful when modifying boot0.S. Note that the boot0.S source file is assembled as is: instructions are translated one by one to binary, with no additional information (no ELF file format, for example). This kind of low-level control is achieved at link time through special control flags passed to the linker. For example, the text section of the program is set to be located at address 0x600. In practice this means that boot0 must be loaded to memory address 0x600 in order to function properly. It is worth looking at the Makefile for boot0 (sys/boot/i386/boot0/Makefile), as it defines some of the run-time behavior of boot0. For instance, if a terminal connected to the serial port (COM1) is used for I/O, the macro SIO must be defined (-DSIO). -DPXE enables boot through PXE by pressing F6. Additionally, the program defines a set of flags that allow further modification of its behavior. All of this is illustrated in the Makefile. For example, look at the linker directives which command the linker to start the text section at address 0x600, and to build the output file as is (strip out any file formatting):
<filename>sys/boot/i386/boot0/Makefile</filename>
BOOT_BOOT0_ORG?=0x600
LDFLAGS=-e start -Ttext ${BOOT_BOOT0_ORG} \
        -Wl,-N,-S,--oformat,binary
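Before diving into the code, it may help to restate the partition record described earlier as a C structure. The following is an illustrative sketch only, not a definition taken from the &os; sources; the field names are ours, and the layout follows the standard MBR convention (bootable flag first, with the type byte between the two 3-byte CHS descriptors):

/*
 * Illustrative sketch of one 16-byte partition record (not from the
 * FreeBSD sources; field names are ours).  Four of these, starting at
 * offset 0x1be of the MBR, are followed by the 0xaa55 signature.
 */
#include <assert.h>
#include <stdint.h>

struct mbr_part {
        uint8_t         flag;           /* bootable flag: 0x80 means active */
        uint8_t         chs_first[3];   /* CHS address of the first sector */
        uint8_t         type;           /* slice type: 0xa5 for FreeBSD */
        uint8_t         chs_last[3];    /* CHS address of the last sector */
        uint32_t        lba_start;      /* LBA of the first sector */
        uint32_t        lba_size;       /* length of the slice, in sectors */
} __attribute__((packed));

int
main(void)
{
        assert(sizeof(struct mbr_part) == 16);  /* four records end at 0x1fe */
        return (0);
}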
Let us now begin our study of the MBR, or boot0, starting where execution begins. A few modifications have been made to some instructions in favor of better exposition; for example, some macros are expanded, and some macro tests are omitted when the result of the test is known. This applies to all of the code examples shown.
<filename>sys/boot/i386/boot0/boot0.S</filename>
start:  cld                     # String ops inc
        xorw %ax,%ax            # Zero
        movw %ax,%es            # Address
        movw %ax,%ds            # data
        movw %ax,%ss            # Set up
        movw $0x7c00,%sp        # stack
This first block of code is the entry point of the program. It is where the BIOS transfers control. First, it makes sure that the string operations autoincrement their pointer operands (the cld instruction). When in doubt, we refer the reader to the official Intel manuals, which describe the exact semantics of each instruction. Then, as it makes no assumptions about the state of the segment registers, it initializes them. Finally, it sets the stack pointer register (%sp) to address 0x7c00, so we have a working stack. The next block is responsible for the relocation and subsequent jump to the relocated code.
<filename>sys/boot/i386/boot0/boot0.S</filename>
        movw $0x7c00,%si        # Source
        movw $0x600,%di         # Destination
        movw $512,%cx           # Word count
        rep                     # Relocate
        movsb                   # code
        movw %di,%bp            # Address variables
        movb $16,%cl            # Words to clear
        rep                     # Zero
        stosb                   # them
        incb -0xe(%di)          # Set the S field to 1
        jmp main-0x7c00+0x600   # Jump to relocated code
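In C terms, this block does little more than a copy, a clear, and a jump. The sketch below is a rough, illustrative equivalent only; an ordinary array stands in for the fixed physical addresses that the real code manipulates:

/*
 * Rough C equivalent of the relocation block above, for illustration
 * only.  A plain array stands in for physical memory below 1 MB.
 */
#include <string.h>

static unsigned char mem[0x8000];       /* toy model of low memory */

static void
relocate(void)
{
        /* rep movsb: copy the 512-byte program from 0x7c00 to 0x600. */
        memcpy(&mem[0x600], &mem[0x7c00], 512);
        /* rep stosb: clear the 16 bytes at 0x800 (the fake partition). */
        memset(&mem[0x800], 0, 16);
        /* incb -0xe(%di): set the sector field of the fake CHS entry to 1. */
        mem[0x802] = 1;
        /* jmp main-0x7c00+0x600: continue in the relocated copy. */
}

int
main(void)
{
        relocate();
        return (0);
}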
Because boot0 is loaded by the BIOS to address 0x7C00, it copies itself to address 0x600 and then transfers control there (recall that it was linked to execute at address 0x600). The source address, 0x7c00, is copied to register %si. The destination address, 0x600, to register %di. The number of bytes to copy, 512 (the program's size), is copied to register %cx. Next, the rep instruction repeats the instruction that follows, that is, movsb, the number of times dictated by the %cx register. The movsb instruction copies the byte pointed to by %si to the address pointed to by %di. This is repeated another 511 times. On each repetition, both the source and destination registers, %si and %di, are incremented by one. Thus, upon completion of the 512-byte copy, %di has the value 0x600+512= 0x800, and %si has the value 0x7c00+512= 0x7e00; we have thus completed the code relocation. Next, the destination register %di is copied to %bp. %bp gets the value 0x800. The value 16 is copied to %cl in preparation for a new string operation (like our previous movsb). Now, stosb is executed 16 times. This instruction copies a 0 value to the address pointed to by the destination register (%di, which is 0x800), and increments it. This is repeated another 15 times, so %di ends up with value 0x810. Effectively, this clears the address range 0x800-0x80f. This range is used as a (fake) partition table for writing the MBR back to disk. Finally, the sector field for the CHS addressing of this fake partition is given the value 1 and a jump is made to the main function from the relocated code. Note that until this jump to the relocated code, any reference to an absolute address was avoided. The following code block tests whether the drive number provided by the BIOS should be used, or the one stored in boot0.
<filename>sys/boot/i386/boot0/boot0.S</filename>
main:   testb $SETDRV,-69(%bp)  # Set drive number?
        jnz disable_update      # Yes
        testb %dl,%dl           # Drive number valid?
        js save_curdrive        # Possibly (0x80 set)
This code tests the SETDRV bit (0x20) in the flags variable. Recall that register %bp points to address 0x800, so the test is done on the flags variable at address 0x800-69 = 0x7bb. This is an example of the type of modifications that can be done to boot0. The SETDRV flag is not set by default, but it can be set in the Makefile. When set, the drive number stored in the MBR is used instead of the one provided by the BIOS. We assume the defaults, and that the BIOS provided a valid drive number, so we jump to save_curdrive. The next block saves the drive number provided by the BIOS, and calls putn to print a new line on the screen.
<filename>sys/boot/i386/boot0/boot0.S</filename>
save_curdrive:
        movb %dl, (%bp)         # Save drive number
        pushw %dx               # Also in the stack
#ifdef TEST     /* test code, print internal bios drive */
        rolb $1, %dl
        movw $drive, %si
        call putkey
#endif
        callw putn              # Print a newline
Note that we assume TEST is not defined, so the conditional code in it is not assembled and will not appear in our executable boot0. Our next block implements the actual scanning of the partition table. It prints to the screen the partition type for each of the four entries in the partition table. It compares each type with a list of well-known operating system file systems. Examples of recognized partition types are NTFS (&windows;, ID 0x7), ext2fs (&linux;, ID 0x83), and, of course, ffs/ufs2 (&os;, ID 0xa5). The implementation is fairly simple.
<filename>sys/boot/i386/boot0/boot0.S</filename>
        movw $(partbl+0x4),%bx  # Partition table (+4)
        xorw %dx,%dx            # Item number
read_entry:
        movb %ch,-0x4(%bx)      # Zero active flag (ch == 0)
        btw %dx,_FLAGS(%bp)     # Entry enabled?
        jnc next_entry          # No
        movb (%bx),%al          # Load type
        test %al, %al           # skip empty partition
        jz next_entry
        movw $bootable_ids,%di  # Lookup tables
        movb $(TLEN+1),%cl      # Number of entries
        repne                   # Locate
        scasb                   # type
        addw $(TLEN-1), %di     # Adjust
        movb (%di),%cl          # Partition
        addw %cx,%di            # description
        callw putx              # Display it
next_entry:
        incw %dx                # Next item
        addb $0x10,%bl          # Next entry
        jnc read_entry          # Till done
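The type lookup done with repne scasb above is easier to follow in C. The sketch below is illustrative only (the real bootable_ids table also encodes where each description string lives); it simply maps a type byte to a printable name, using the IDs mentioned earlier:

/*
 * Illustrative C version of the partition-type lookup done with
 * repne scasb above; the IDs match the examples given in the text.
 */
#include <stdio.h>
#include <stdint.h>

static const struct {
        uint8_t         id;
        const char      *name;
} bootable_ids[] = {
        { 0x07, "Windows" },    /* NTFS */
        { 0x83, "Linux" },      /* ext2fs */
        { 0xa5, "FreeBSD" },    /* ffs/ufs2 */
};

static const char *
type_name(uint8_t type)
{
        size_t i;

        for (i = 0; i < sizeof(bootable_ids) / sizeof(bootable_ids[0]); i++)
                if (bootable_ids[i].id == type)
                        return (bootable_ids[i].name);
        return ("??");          /* not in the table */
}

int
main(void)
{
        printf("0xa5 -> %s\n", type_name(0xa5));
        return (0);
}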
It is important to note that the active flag for each entry is cleared, so after the scanning, no partition entry is active in our memory copy of boot0. Later, the active flag will be set for the selected partition. This ensures that only one active partition exists if the user chooses to write the changes back to disk. The next block tests for other drives. At startup, the BIOS writes the number of drives present in the computer to address 0x475. If there are any other drives present, boot0 prints the current drive to screen. The user may command boot0 to scan partitions on another drive later.
<filename>sys/boot/i386/boot0/boot0.S</filename>
        popw %ax                # Drive number
        subb $0x79,%al          # Does next
        cmpb 0x475,%al          # drive exist? (from BIOS?)
        jb print_drive          # Yes
        decw %ax                # Already drive 0?
        jz print_prompt         # Yes
We make the assumption that a single drive is present, so the jump to print_drive is not performed. We also assume nothing strange happened, so we jump to print_prompt. This next block just prints out a prompt followed by the default option:
<filename>sys/boot/i386/boot0/boot0.S</filename>
print_prompt:
        movw $prompt,%si        # Display
        callw putstr            # prompt
        movb _OPT(%bp),%dl      # Display
        decw %si                # default
        callw putkey            # key
        jmp start_input         # Skip beep
Finally, a jump is performed to start_input, where the BIOS services are used to start a timer and for reading user input from the keyboard; if the timer expires, the default option will be selected:
<filename>sys/boot/i386/boot0/boot0.S</filename>
start_input:
        xorb %ah,%ah            # BIOS: Get
        int $0x1a               # system time
        movw %dx,%di            # Ticks when
        addw _TICKS(%bp),%di    # timeout
read_key:
        movb $0x1,%ah           # BIOS: Check
        int $0x16               # for keypress
        jnz got_key             # Have input
        xorb %ah,%ah            # BIOS: int 0x1a, 00
        int $0x1a               # get system time
        cmpw %di,%dx            # Timeout?
        jb read_key             # No
An interrupt is requested with number 0x1a and argument 0 in register %ah. The BIOS has a predefined set of services, requested by applications as software-generated interrupts through the int instruction and receiving arguments in registers (in this case, %ah). Here, particularly, we are requesting the number of clock ticks since last midnight; this value is computed by the BIOS through the RTC (Real Time Clock). This clock can be programmed to work at frequencies ranging from 2 Hz to 8192 Hz. The BIOS sets it to 18.2 Hz at startup. When the request is satisfied, a 32-bit result is returned by the BIOS in registers %cx and %dx (lower bytes in %dx). This result (the %dx part) is copied to register %di, and the value of the TICKS variable is added to %di. This variable resides in boot0 at offset _TICKS (a negative value) from register %bp (which, recall, points to 0x800). The default value of this variable is 0xb6 (182 in decimal). Now, the idea is that boot0 constantly requests the time from the BIOS, and when the value returned in register %dx is greater than the value stored in %di, the time is up and the default selection will be made. Since the RTC ticks 18.2 times per second, this condition will be met after 10 seconds (this default behavior can be changed in the Makefile). Until this time has passed, boot0 continually asks the BIOS for any user input; this is done through int 0x16, argument 1 in %ah. Whether a key was pressed or the time expired, subsequent code validates the selection. Based on the selection, the register %si is set to point to the appropriate partition entry in the partition table. This new selection overrides the previous default one. Indeed, it becomes the new default. Finally, the ACTIVE flag of the selected partition is set. If it was enabled at compile time, the in-memory version of boot0 with these modified values is written back to the MBR on disk. We leave the details of this implementation to the reader. We now end our study with the last code block from the boot0 program:
<filename>sys/boot/i386/boot0/boot0.S</filename>
        movw $0x7c00,%bx        # Address for read
        movb $0x2,%ah           # Read sector
        callw intx13            # from disk
        jc beep                 # If error
        cmpw $0xaa55,0x1fe(%bx) # Bootable?
        jne beep                # No
        pushw %si               # Save ptr to selected part.
        callw putn              # Leave some space
        popw %si                # Restore, next stage uses it
        jmp *%bx                # Invoke bootstrap
Recall that %si points to the selected partition entry. This entry tells us where the partition begins on disk. We assume, of course, that the partition selected is actually a &os; slice. From now on, we will favor the use of the technically more accurate term slice rather than partition. The transfer buffer is set to 0x7c00 (register %bx), and a read for the first sector of the &os; slice is requested by calling intx13. We assume that everything went okay, so a jump to beep is not performed. In particular, the new sector read must end with the magic sequence 0xaa55. Finally, the value at %si (the pointer to the selected partition table) is preserved for use by the next stage, and a jump is performed to address 0x7c00, where execution of our next stage (the just-read block) is started.
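As a closing aside, the data that boot0 works with can be inspected from a running system. The program below is not part of &os;; it is a hypothetical userland sketch that reads a 512-byte MBR image (saved, for example, with dd) and prints the same fields the scanning loop examines: the active flag and type of each record, the LBA descriptor, and the closing 0xaa55 signature.

/*
 * Hypothetical userland sketch (not part of FreeBSD): dump the fields
 * of a 512-byte MBR image that boot0's scanning loop examines.
 * Usage: ./mbrdump mbr.img   (e.g. dd if=/dev/ada0 of=mbr.img count=1)
 */
#include <stdint.h>
#include <stdio.h>

int
main(int argc, char **argv)
{
        uint8_t mbr[512];
        FILE *fp;
        int i;

        if (argc != 2 || (fp = fopen(argv[1], "rb")) == NULL ||
            fread(mbr, 1, sizeof(mbr), fp) != sizeof(mbr)) {
                fprintf(stderr, "usage: mbrdump file\n");
                return (1);
        }
        for (i = 0; i < 4; i++) {
                const uint8_t *p = &mbr[0x1be + i * 16];        /* one record */
                unsigned start = p[8] | p[9] << 8 | p[10] << 16 |
                    (unsigned)p[11] << 24;
                unsigned size = p[12] | p[13] << 8 | p[14] << 16 |
                    (unsigned)p[15] << 24;

                printf("slice %d: %s type=0x%02x start=%u size=%u\n",
                    i + 1, (p[0] & 0x80) ? "active" : "inactive", p[4],
                    start, size);
        }
        printf("signature: 0x%02x%02x (expect 0xaa55)\n",
            mbr[0x1ff], mbr[0x1fe]);
        fclose(fp);
        return (0);
}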
<literal>boot1</literal> Stage

So far we have gone through the following sequence: The BIOS did some early hardware initialization, including the POST. The MBR (boot0) was loaded from absolute disk sector one to address 0x7c00. Execution control was passed to that location. boot0 relocated itself to the location it was linked to execute (0x600), followed by a jump to continue execution at the appropriate place. Finally, boot0 loaded the first disk sector from the &os; slice to address 0x7c00. Execution control was passed to that location. boot1 is the next step in the boot-loading sequence. It is the first of three boot stages. Note that we have been dealing exclusively with disk sectors. Indeed, the BIOS loads the absolute first sector, while boot0 loads the first sector of the &os; slice. Both loads are to address 0x7c00. We can conceptually think of these disk sectors as containing the files boot0 and boot1, respectively, but in reality this is not entirely true for boot1. Strictly speaking, unlike boot0, boot1 is not part of the boot blocks (there is a file /boot/boot1, but it is not what is written to the beginning of the &os; slice; rather, it is concatenated with boot2 to form boot, which is written to the beginning of the &os; slice and read at boot time). Instead, a single, full-blown file, boot (/boot/boot), is what ultimately is written to disk. This file is a combination of boot1, boot2 and the Boot Extender (or BTX). This single file is greater in size than a single sector (greater than 512 bytes). Fortunately, boot1 occupies exactly the first 512 bytes of this single file, so when boot0 loads the first sector of the &os; slice (512 bytes), it is actually loading boot1 and transferring control to it. The main task of boot1 is to load the next boot stage. This next stage is somewhat more complex. It is composed of a server called the Boot Extender, or BTX, and a client, called boot2. As we will see, the last boot stage, loader, is also a client of the BTX server. Let us now look in detail at what exactly is done by boot1, starting like we did for boot0, at its entry point:
<filename>sys/boot/i386/boot2/boot1.S</filename>
start:  jmp main
The entry point at start simply jumps past a special data area to the label main, which in turn looks like this:
<filename>sys/boot/i386/boot2/boot1.S</filename>
main:   cld                     # String ops inc
        xor %cx,%cx             # Zero
        mov %cx,%es             # Address
        mov %cx,%ds             # data
        mov %cx,%ss             # Set up
        mov $start,%sp          # stack
        mov %sp,%si             # Source
        mov $0x700,%di          # Destination
        incb %ch                # Word count
        rep                     # Copy
        movsw                   # code
Just like boot0, this code relocates boot1, this time to memory address 0x700. However, unlike boot0, it does not jump there. boot1 is linked to execute at address 0x7c00, effectively where it was loaded in the first place. The reason for this relocation will be discussed shortly. Next comes a loop that looks for the &os; slice. Although boot0 loaded boot1 from the &os; slice, no information was passed to it about this (actually, we did pass a pointer to the slice entry in register %si; however, boot1 does not assume that it was loaded by boot0, since perhaps some other MBR loaded it and did not pass this information, so it assumes nothing), so boot1 must rescan the partition table to find where the &os; slice starts. Therefore it rereads the MBR:
<filename>sys/boot/i386/boot2/boot1.S</filename>
        mov $part4,%si          # Partition
        cmpb $0x80,%dl          # Hard drive?
        jb main.4               # No
        movb $0x1,%dh           # Block count
        callw nread             # Read MBR
In the code above, register %dl maintains information about the boot device. This is passed on by the BIOS and preserved by the MBR. Numbers 0x80 and greater tell us that we are dealing with a hard drive, so a call is made to nread, where the MBR is read. Arguments to nread are passed through %si and %dh. The memory address of label part4 is copied to %si. This memory address holds a fake partition to be used by nread. The following is the data in the fake partition:
<filename>sys/boot/i386/boot2/boot1.S</filename>
part4:  .byte 0x80, 0x00, 0x01, 0x00
        .byte 0xa5, 0xfe, 0xff, 0xff
        .byte 0x00, 0x00, 0x00, 0x00
        .byte 0x50, 0xc3, 0x00, 0x00
In particular, the LBA for this fake partition is hardcoded to zero. This is used as an argument to the BIOS for reading absolute sector one from the hard drive. Alternatively, CHS addressing could be used. In this case, the fake partition holds cylinder 0, head 0 and sector 1, which is equivalent to absolute sector one. Let us now proceed to take a look at nread:
<filename>sys/boot/i386/boot2/boot1.S</filename>
nread:  mov $0x8c00,%bx         # Transfer buffer
        mov 0x8(%si),%ax        # Get
        mov 0xa(%si),%cx        # LBA
        push %cs                # Read from
        callw xread.1           # disk
        jnc return              # If success, return
Recall that %si points to the fake partition. The word (in 16-bit real mode, a word is 2 bytes) at offset 0x8 is copied to register %ax, and the word at offset 0xa to %cx. They are interpreted by the BIOS as the lower 4-byte value denoting the LBA to be read (the upper four bytes are assumed to be zero). Register %bx holds the memory address where the MBR will be loaded. The instruction pushing %cs onto the stack is very interesting. In this context, it accomplishes nothing. However, as we will see shortly, boot2, in conjunction with the BTX server, also uses xread.1. This mechanism will be discussed in the next section. The code at xread.1 further calls the read function, which actually calls the BIOS asking for the disk sector:
<filename>sys/boot/i386/boot2/boot1.S</filename>
xread.1:
        pushl $0x0              # absolute
        push %cx                # block
        push %ax                # number
        push %es                # Address of
        push %bx                # transfer buffer
        xor %ax,%ax             # Number of
        movb %dh,%al            # blocks to
        push %ax                # transfer
        push $0x10              # Size of packet
        mov %sp,%bp             # Packet pointer
        callw read              # Read from disk
        lea 0x10(%bp),%sp       # Clear stack
        lret                    # To far caller
Note the long return instruction at the end of this block. This instruction pops out the %cs register pushed by nread, and returns. Finally, nread also returns. With the MBR loaded to memory, the actual loop for searching the &os; slice begins:
<filename>sys/boot/i386/boot2/boot1.S</filename>
        mov $0x1,%cx            # Two passes
main.1: mov $0x8dbe,%si         # Partition table
        movb $0x1,%dh           # Partition
main.2: cmpb $0xa5,0x4(%si)     # Our partition type?
        jne main.3              # No
        jcxz main.5             # If second pass
        testb $0x80,(%si)       # Active?
        jnz main.5              # Yes
main.3: add $0x10,%si           # Next entry
        incb %dh                # Partition
        cmpb $0x5,%dh           # In table?
        jb main.2               # Yes
        dec %cx                 # Do two
        jcxz main.1             # passes
If a &os; slice is identified, execution continues at main.5. Note that when a &os; slice is found %si points to the appropriate entry in the partition table, and %dh holds the partition number. We assume that a &os; slice is found, so we continue execution at main.5:
<filename>sys/boot/i386/boot2/boot1.S</filename>
main.5: mov %dx,0x900           # Save args
        movb $0x10,%dh          # Sector count
        callw nread             # Read disk
        mov $0x9000,%bx         # BTX
        mov 0xa(%bx),%si        # Get BTX length and set
        add %bx,%si             # %si to start of boot2.bin
        mov $0xc000,%di         # Client page 2
        mov $0xa200,%cx         # Byte
        sub %si,%cx             # count
        rep                     # Relocate
        movsb                   # client
Recall that at this point, register %si points to the &os; slice entry in the MBR partition table, so a call to nread will effectively read sectors at the beginning of this partition. The argument passed in register %dh tells nread to read 16 disk sectors. Recall that the first 512 bytes, or the first sector of the &os; slice, coincides with the boot1 program. Also recall that the file written to the beginning of the &os; slice is not /boot/boot1, but /boot/boot. Let us look at the size of these files in the filesystem:

-r--r--r--  1 root  wheel  512B Jan  8 00:15 /boot/boot0
-r--r--r--  1 root  wheel  512B Jan  8 00:15 /boot/boot1
-r--r--r--  1 root  wheel  7.5K Jan  8 00:15 /boot/boot2
-r--r--r--  1 root  wheel  8.0K Jan  8 00:15 /boot/boot

Both boot0 and boot1 are 512 bytes each, so they fit exactly in one disk sector. boot2 is much bigger, holding both the BTX server and the boot2 client. Finally, a file called simply boot is 512 bytes larger than boot2. This file is a concatenation of boot1 and boot2. As already noted, boot0 is the file written to the absolute first disk sector (the MBR), and boot is the file written to the first sector of the &os; slice; boot1 and boot2 are not written to disk. The command used to concatenate boot1 and boot2 into a single boot is merely cat boot1 boot2 > boot. So boot1 occupies exactly the first 512 bytes of boot and, because boot is written to the first sector of the &os; slice, boot1 fits exactly in this first sector. Because nread reads the first 16 sectors of the &os; slice, it effectively reads the entire boot file: 512*16=8192 bytes, exactly the size of boot. We will see more details about how boot is formed from boot1 and boot2 in the next section. Recall that nread uses memory address 0x8c00 as the transfer buffer to hold the sectors read. This address is conveniently chosen. Indeed, because boot1 belongs to the first 512 bytes, it ends up in the address range 0x8c00-0x8dff. The 512 bytes that follow (range 0x8e00-0x8fff) are used to store the bsdlabel (historically known as the disklabel; if you ever wondered where &os; stores this information, it is in this region, see &man.bsdlabel.8;). Starting at address 0x9000 is the beginning of the BTX server, and immediately following is the boot2 client. The BTX server acts as a kernel, and executes in protected mode in the most privileged level. In contrast, the BTX clients (boot2, for example) execute in user mode. We will see how this is accomplished in the next section. The code after the call to nread locates the beginning of boot2 in the memory buffer, and copies it to memory address 0xc000. This is because the BTX server arranges boot2 to execute in a segment starting at 0xa000. We explore this in detail in the following section. The last code block of boot1 enables access to memory above 1MB (this is necessary for legacy reasons) and concludes with a jump to the starting point of the BTX server:
<filename>sys/boot/i386/boot2/boot1.S</filename>
seta20: cli                     # Disable interrupts
seta20.1:
        dec %cx                 # Timeout?
        jz seta20.3             # Yes
        inb $0x64,%al           # Get status
        testb $0x2,%al          # Busy?
        jnz seta20.1            # Yes
        movb $0xd1,%al          # Command: Write
        outb %al,$0x64          # output port
seta20.2:
        inb $0x64,%al           # Get status
        testb $0x2,%al          # Busy?
        jnz seta20.2            # Yes
        movb $0xdf,%al          # Enable
        outb %al,$0x60          # A20
seta20.3:
        sti                     # Enable interrupts
        jmp 0x9010              # Start BTX
Note that right before the jump, interrupts are enabled.
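Before moving on to the BTX server, the two-pass scan that boot1 performs (labels main.1 and main.2 above) can be restated in C. The following model is for illustration only; the sample table reuses the part4 record shown earlier:

/*
 * Illustrative model of boot1's two-pass scan of the MBR partition
 * table; not code from the tree.  Pass one accepts only an active
 * FreeBSD slice, pass two accepts any FreeBSD slice.
 */
#include <stdint.h>
#include <stdio.h>

#define FREEBSD_TYPE    0xa5
#define ACTIVE_FLAG     0x80

static int
find_freebsd_slice(uint8_t table[4][16])
{
        int pass, i;

        for (pass = 0; pass < 2; pass++)
                for (i = 0; i < 4; i++) {
                        if (table[i][4] != FREEBSD_TYPE)        /* type byte */
                                continue;
                        if (pass == 0 && !(table[i][0] & ACTIVE_FLAG))
                                continue;
                        return (i);
                }
        return (-1);            /* no FreeBSD slice found */
}

int
main(void)
{
        /* Sample data: entry 0 is the part4 record shown earlier. */
        uint8_t table[4][16] = {
                { 0x80, 0x00, 0x01, 0x00, 0xa5, 0xfe, 0xff, 0xff,
                  0x00, 0x00, 0x00, 0x00, 0x50, 0xc3, 0x00, 0x00 },
        };

        printf("boot1 would pick entry %d\n", find_freebsd_slice(table));
        return (0);
}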
The <acronym>BTX</acronym> Server

Next in our boot sequence is the BTX Server. Let us quickly remember how we got here: The BIOS loads the absolute sector one (the MBR, or boot0) to address 0x7c00 and jumps there. boot0 relocates itself to 0x600, the address it was linked to execute, and jumps over there. It then reads the first sector of the &os; slice (which consists of boot1) into address 0x7c00 and jumps over there. boot1 loads the first 16 sectors of the &os; slice into address 0x8c00. These 16 sectors, or 8192 bytes, are the whole file boot. The file is a concatenation of boot1 and boot2. boot2, in turn, contains the BTX server and the boot2 client. Finally, a jump is made to address 0x9010, the entry point of the BTX server. Before studying the BTX Server in detail, let us further review how the single, all-in-one boot file is created. The way boot is built is defined in its Makefile (/usr/src/sys/boot/i386/boot2/Makefile). Let us look at the rule that creates the boot file:
<filename>sys/boot/i386/boot2/Makefile</filename>
boot: boot1 boot2
        cat boot1 boot2 > boot
This tells us that boot1 and boot2 are needed, and the rule simply concatenates them to produce a single file called boot. The rules for creating boot1 are also quite simple:
<filename>sys/boot/i386/boot2/Makefile</filename>
boot1: boot1.out
        objcopy -S -O binary boot1.out boot1

boot1.out: boot1.o
        ld -e start -Ttext 0x7c00 -o boot1.out boot1.o
To apply the rule for creating boot1, boot1.out must be resolved. This, in turn, depends on the existence of boot1.o. This last file is simply the result of assembling our familiar boot1.S, without linking. Now, the rule for creating boot1.out is applied. This tells us that boot1.o should be linked with start as its entry point, and starting at address 0x7c00. Finally, boot1 is created from boot1.out applying the appropriate rule. This rule is the objcopy command applied to boot1.out. Note the flags passed to objcopy: -S tells it to strip all relocation and symbolic information; -O binary indicates the output format, that is, a simple, unformatted binary file. Having boot1, let us take a look at how boot2 is constructed:
<filename>sys/boot/i386/boot2/Makefile</filename>
boot2: boot2.ld
        @set -- `ls -l boot2.ld`; x=$$((7680-$$5)); \
            echo "$$x bytes available"; test $$x -ge 0
        dd if=boot2.ld of=boot2 obs=7680 conv=osync

boot2.ld: boot2.ldr boot2.bin ../btx/btx/btx
        btxld -v -E 0x2000 -f bin -b ../btx/btx/btx -l boot2.ldr \
            -o boot2.ld -P 1 boot2.bin

boot2.ldr:
        dd if=/dev/zero of=boot2.ldr bs=512 count=1

boot2.bin: boot2.out
        objcopy -S -O binary boot2.out boot2.bin

boot2.out: ../btx/lib/crt0.o boot2.o sio.o
        ld -Ttext 0x2000 -o boot2.out

boot2.o: boot2.s
        ${CC} ${ACFLAGS} -c boot2.s

boot2.s: boot2.c boot2.h ${.CURDIR}/../../common/ufsread.c
        ${CC} ${CFLAGS} -S -o boot2.s.tmp ${.CURDIR}/boot2.c
        sed -e '/align/d' -e '/nop/d' < boot2.s.tmp > boot2.s
        rm -f boot2.s.tmp

boot2.h: boot1.out
        ${NM} -t d ${.ALLSRC} | awk '/([0-9])+ T xread/ \
            { x = $$1 - ORG1; \
            printf("#define XREADORG %#x\n", REL1 + x) }' \
            ORG1=`printf "%d" ${ORG1}` \
            REL1=`printf "%d" ${REL1}` > ${.TARGET}
The mechanism for building boot2 is far more elaborate. Let us point out the most relevant facts. The dependency list is as follows:
<filename>sys/boot/i386/boot2/Makefile</filename>
boot2: boot2.ld
boot2.ld: boot2.ldr boot2.bin ${BTXDIR}/btx/btx
boot2.bin: boot2.out
boot2.out: ${BTXDIR}/lib/crt0.o boot2.o sio.o
boot2.o: boot2.s
boot2.s: boot2.c boot2.h ${.CURDIR}/../../common/ufsread.c
boot2.h: boot1.out
Note that initially there is no header file boot2.h, but its creation depends on boot1.out, which we already have. The rule for its creation is a bit terse, but the important thing is that the output, boot2.h, is something like this:
<filename>sys/boot/i386/boot2/boot2.h</filename>
#define XREADORG 0x725
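The arithmetic behind that generated header is straightforward: the rule rebases the xread symbol from the address boot1 was linked at to the address of its relocated copy. A small restatement, assuming (as the value 0x725 above implies) that nm reported xread at 0x7c25 in boot1.out:

/*
 * Restatement of the boot2.h rule: rebase the xread symbol from
 * boot1's link address (ORG1) to its run-time copy (REL1).  The
 * address 0x7c25 is an assumption inferred from the 0x725 output.
 */
#include <stdio.h>

#define ORG1    0x7c00  /* where boot1 is linked to run */
#define REL1    0x700   /* where boot1 relocates itself */

int
main(void)
{
        unsigned xread_linked = 0x7c25;         /* assumed nm(1) output */

        printf("#define XREADORG %#x\n", REL1 + (xread_linked - ORG1));
        return (0);
}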
Recall that boot1 was relocated (i.e., copied from 0x7c00 to 0x700). This relocation will now make sense, because as we will see, the BTX server reclaims some memory, including the space where boot1 was originally loaded. However, the BTX server needs access to boot1's xread function; this function, according to the output of boot2.h, is at location 0x725. Indeed, the BTX server uses the xread function from boot1's relocated code. This function is now accesible from within the boot2 client. We next build boot2.s from files boot2.h, boot2.c and /usr/src/sys/boot/common/ufsread.c. The rule for this is to compile the code in boot2.c (which includes boot2.h and ufsread.c) into assembly code. Having boot2.s, the next rule assembles boot2.s, creating the object file boot2.o. The next rule directs the linker to link various files (crt0.o, boot2.o and sio.o). Note that the output file, boot2.out, is linked to execute at address 0x2000. Recall that boot2 will be executed in user mode, within a special user segment set up by the BTX server. This segment starts at 0xa000. Also, remember that the boot2 portion of boot was copied to address 0xc000, that is, offset 0x2000 from the start of the user segment, so boot2 will work properly when we transfer control to it. Next, boot2.bin is created from boot2.out by stripping its symbols and format information; boot2.bin is a raw binary. Now, note that a file boot2.ldr is created as a 512-byte file full of zeros. This space is reserved for the bsdlabel. Now that we have files boot1, boot2.bin and boot2.ldr, only the BTX server is missing before creating the all-in-one boot file. The BTX server is located in /usr/src/sys/boot/i386/btx/btx; it has its own Makefile with its own set of rules for building. The important thing to notice is that it is also compiled as a raw binary, and that it is linked to execute at address 0x9000. The details can be found in /usr/src/sys/boot/i386/btx/btx/Makefile. Having the files that comprise the boot program, the final step is to merge them. This is done by a special program called btxld (source located in /usr/src/usr.sbin/btxld). Some arguments to this program include the name of the output file (boot), its entry point (0x2000) and its file format (raw binary). The various files are finally merged by this utility into the file boot, which consists of boot1, boot2, the bsdlabel and the BTX server. This file, which takes exactly 16 sectors, or 8192 bytes, is what is actually written to the beginning of the &os; slice during instalation. Let us now proceed to study the BTX server program. The BTX server prepares a simple environment and switches from 16-bit real mode to 32-bit protected mode, right before passing control to the client. This includes initializing and updating the following data structures: virtual v86 mode Modifies the Interrupt Vector Table (IVT). The IVT provides exception and interrupt handlers for Real-Mode code. The Interrupt Descriptor Table (IDT) is created. Entries are provided for processor exceptions, hardware interrupts, two system calls and V86 interface. The IDT provides exception and interrupt handlers for Protected-Mode code. A Task-State Segment (TSS) is created. This is necessary because the processor works in the least privileged level when executing the client (boot2), but in the most privileged level when executing the BTX server. The GDT (Global Descriptor Table) is set up. Entries (descriptors) are provided for supervisor code and data, user code and data, and real-mode code and data. 
Real-mode code and data are necessary when switching back to real mode from protected mode, as suggested by the Intel manuals. Let us now start studying the actual implementation. Recall that boot1 made a jump to address 0x9010, the BTX server's entry point. Before studying program execution there, note that the BTX server has a special header at address range 0x9000-0x900f, right before its entry point. This header is defined as follows:
<filename>sys/boot/i386/btx/btx/btx.S</filename>
start:                                          # Start of code
/*
 * BTX header.
 */
btx_hdr:        .byte 0xeb                      # Machine ID
                .byte 0xe                       # Header size
                .ascii "BTX"                    # Magic
                .byte 0x1                       # Major version
                .byte 0x2                       # Minor version
                .byte BTX_FLAGS                 # Flags
                .word PAG_CNT-MEM_ORG>>0xc      # Paging control
                .word break-start               # Text size
                .long 0x0                       # Entry address
Note the first two bytes are 0xeb and 0xe. In the IA-32 architecture, these two bytes are interpreted as a relative jump past the header into the entry point, so in theory, boot1 could jump here (address 0x9000) instead of address 0x9010. Note that the last field in the BTX header is a pointer to the client's (boot2) entry point. This field is patched at link time. Immediately following the header is the BTX server's entry point:
<filename>sys/boot/i386/btx/btx/btx.S</filename>
/*
 * Initialization routine.
 */
init:   cli                     # Disable interrupts
        xor %ax,%ax             # Zero/segment
        mov %ax,%ss             # Set up
        mov $0x1800,%sp         # stack
        mov %ax,%es             # Address
        mov %ax,%ds             # data
        pushl $0x2              # Clear
        popfl                   # flags
This code disables interrupts, sets up a working stack (starting at address 0x1800) and clears the flags in the EFLAGS register. Note that the popfl instruction pops a doubleword (4 bytes) off the stack and places it in the EFLAGS register. Because the value actually popped is 2, the EFLAGS register is effectively cleared (IA-32 requires that bit 1 of the EFLAGS register, the bit with value 2, always be 1). Our next code block clears (sets to 0) the memory range 0x5e00-0x8fff. This range is where the various data structures will be created:
<filename>sys/boot/i386/btx/btx/btx.S</filename>
/*
 * Initialize memory.
 */
        mov $0x5e00,%di                 # Memory to initialize
        mov $(0x9000-0x5e00)/2,%cx      # Words to zero
        rep                             # Zero-fill
        stosw                           # memory
Recall that boot1 was originally loaded to address 0x7c00, so, with this memory initialization, that copy effectively disappeared. However, also recall that boot1 was relocated to 0x700, so that copy is still in memory, and the BTX server will make use of it. Next, the real-mode IVT (Interrupt Vector Table) is updated. The IVT is an array of segment/offset pairs for exception and interrupt handlers. The BIOS normally maps hardware interrupts to interrupt vectors 0x8 to 0xf and 0x70 to 0x77 but, as will be seen, the 8259A Programmable Interrupt Controller, the chip controlling the actual mapping of hardware interrupts to interrupt vectors, is programmed to remap these interrupt vectors from 0x8-0xf to 0x20-0x27 and from 0x70-0x77 to 0x28-0x2f. Thus, interrupt handlers are provided for interrupt vectors 0x20-0x2f. The reason the BIOS-provided handlers are not used directly is that they work in 16-bit real mode, but not in 32-bit protected mode. Processor mode will be switched to 32-bit protected mode shortly. However, the BTX server sets up a mechanism to effectively use the handlers provided by the BIOS:
<filename>sys/boot/i386/btx/btx/btx.S</filename>
/*
 * Update real mode IDT for reflecting hardware interrupts.
 */
        mov $intr20,%bx         # Address first handler
        mov $0x10,%cx           # Number of handlers
        mov $0x20*4,%di         # First real mode IDT entry
init.0: mov %bx,(%di)           # Store IP
        inc %di                 # Address next
        inc %di                 # entry
        stosw                   # Store CS
        add $4,%bx              # Next handler
        loop init.0             # Next IRQ
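In C terms, the loop above writes sixteen offset:segment pairs into the real-mode IVT, one for each of the remapped hardware interrupt vectors 0x20-0x2f, with consecutive handlers four bytes apart. The following model is illustrative only; a local array and a made-up handler address stand in for real-mode memory:

/*
 * Illustrative model of the IVT update loop above.  Each real-mode
 * vector is a 4-byte offset:segment pair; vectors 0x20-0x2f point at
 * intr20, intr20+4, intr20+8, ... in segment 0.
 */
#include <stdint.h>
#include <stdio.h>

struct ivt_entry {
        uint16_t offset;
        uint16_t segment;
};

int
main(void)
{
        static struct ivt_entry ivt[256];       /* stands in for 0000:0000 */
        uint16_t intr20 = 0x1234;               /* hypothetical handler address */
        int i;

        for (i = 0; i < 0x10; i++) {
                ivt[0x20 + i].offset = intr20 + 4 * i;  /* Store IP */
                ivt[0x20 + i].segment = 0;              /* Store CS */
        }
        printf("vector 0x2f -> %#x:%#x\n",
            ivt[0x2f].segment, ivt[0x2f].offset);
        return (0);
}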
The next block creates the IDT (Interrupt Descriptor Table). The IDT is analogous, in protected mode, to the IVT in real mode. That is, the IDT describes the various exception and interrupt handlers used when the processor is executing in protected mode. In essence, it also consists of an array of segment/offset pairs, although the structure is somewhat more complex, because segments in protected mode are different than in real mode, and various protection mechanisms apply:
<filename>sys/boot/i386/btx/btx/btx.S</filename>
/*
 * Create IDT.
 */
        mov $0x5e00,%di         # IDT's address
        mov $idtctl,%si         # Control string
init.1: lodsb                   # Get entry
        cbw                     # count
        xchg %ax,%cx            # as word
        jcxz init.4             # If done
        lodsb                   # Get segment
        xchg %ax,%dx            # P:DPL:type
        lodsw                   # Get control
        xchg %ax,%bx            # set
        lodsw                   # Get handler offset
        mov $SEL_SCODE,%dh      # Segment selector
init.2: shr %bx                 # Handle this int?
        jnc init.3              # No
        mov %ax,(%di)           # Set handler offset
        mov %dh,0x2(%di)        # and selector
        mov %dl,0x5(%di)        # Set P:DPL:type
        add $0x4,%ax            # Next handler
init.3: lea 0x8(%di),%di        # Next entry
        loop init.2             # Till set done
        jmp init.1              # Continue
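The 8-byte entry format that this loop fills in is the standard IA-32 interrupt gate. The structure below is shown for illustration (the field names are ours); note how it matches the stores at offsets 0x0, 0x2 and 0x5 in the code above:

/*
 * Standard IA-32 interrupt gate descriptor, for illustration.  The
 * BTX code above writes the handler offset at +0, the segment
 * selector at +2 and the P:DPL:type byte at +5.
 */
#include <assert.h>
#include <stdint.h>

struct idt_gate {
        uint16_t        offset_lo;      /* handler offset, bits 15:0 */
        uint16_t        selector;       /* code segment selector (SEL_SCODE) */
        uint8_t         reserved;       /* unused */
        uint8_t         access;         /* P (present), DPL, gate type */
        uint16_t        offset_hi;      /* handler offset, bits 31:16 */
} __attribute__((packed));

int
main(void)
{
        assert(sizeof(struct idt_gate) == 8);
        return (0);
}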
Each entry in the IDT is 8 bytes long. Besides the segment/offset information, each entry also describes the segment type, privilege level, and whether the segment is present in memory or not. The construction is such that interrupt vectors from 0 to 0xf (exceptions) are handled by function intx00; vector 0x10 (also an exception) is handled by intx10; hardware interrupts, which are later configured to start at interrupt vector 0x20 all the way to interrupt vector 0x2f, are handled by function intx20. Lastly, interrupt vector 0x30, which is used for system calls, is handled by intx30, and vectors 0x31 and 0x32 are handled by intx31. It must be noted that only descriptors for interrupt vectors 0x30, 0x31 and 0x32 are given privilege level 3, the same privilege level as the boot2 client, which means the client can execute a software-generated interrupt to these vectors through the int instruction without failing (this is the way boot2 uses the services provided by the BTX server). Also, note that only software-generated interrupts are protected from code executing in lesser privilege levels. Hardware-generated interrupts and processor-generated exceptions are always handled adequately, regardless of the actual privileges involved. The next step is to initialize the TSS (Task-State Segment). The TSS is a hardware feature that helps the operating system or executive software implement multitasking functionality through process abstraction. The IA-32 architecture demands the creation and use of at least one TSS if multitasking facilities are used or different privilege levels are defined. Because the boot2 client is executed in privilege level 3, but the BTX server runs in privilege level 0, a TSS must be defined:
<filename>sys/boot/i386/btx/btx/btx.S</filename>
/*
 * Initialize TSS.
 */
init.4: movb $_ESP0H,TSS_ESP0+1(%di)    # Set ESP0
        movb $SEL_SDATA,TSS_SS0(%di)    # Set SS0
        movb $_TSSIO,TSS_MAP(%di)       # Set I/O bit map base
Note that a value is given for the Privilege Level 0 stack pointer and stack segment in the TSS. This is needed because, if an interrupt or exception is received while executing boot2 in Privilege Level 3, a change to Privilege Level 0 is automatically performed by the processor, so a new working stack is needed. Finally, the I/O Map Base Address field of the TSS is given a value, which is a 16-bit offset from the beginning of the TSS to the I/O Permission Bitmap and the Interrupt Redirection Bitmap. After the IDT and TSS are created, the processor is ready to switch to protected mode. This is done in the next block:
<filename>sys/boot/i386/btx/btx/btx.S</filename>
/*
 * Bring up the system.
 */
        mov $0x2820,%bx         # Set protected mode
        callw setpic            # IRQ offsets
        lidt idtdesc            # Set IDT
        lgdt gdtdesc            # Set GDT
        mov %cr0,%eax           # Switch to protected
        inc %ax                 # mode
        mov %eax,%cr0           #
        ljmp $SEL_SCODE,$init.8 # To 32-bit code
        .code32
init.8: xorl %ecx,%ecx          # Zero
        movb $SEL_SDATA,%cl     # To 32-bit
        movw %cx,%ss            # stack
First, a call is made to setpic to program the 8259A PIC (Programmable Interrupt Controller). This chip is connected to multiple hardware interrupt sources. Upon receiving an interrupt from a device, it signals the processor with the appropriate interrupt vector. This can be customized so that specific interrupts are associated with specific interrupt vectors, as explained before. Next, the IDTR (Interrupt Descriptor Table Register) and GDTR (Global Descriptor Table Register) are loaded with the instructions lidt and lgdt, respectively. These registers are loaded with the base address and limit address for the IDT and GDT. The following three instructions set the Protection Enable (PE) bit of the %cr0 register. This effectively switches the processor to 32-bit protected mode. Next, a long jump is made to init.8 using segment selector SEL_SCODE, which selects the Supervisor Code Segment. The processor is effectively executing in CPL 0, the most privileged level, after this jump. Finally, the Supervisor Data Segment is selected for the stack by assigning the segment selector SEL_SDATA to the %ss register. This data segment also has a privilege level of 0. Our last code block is responsible for loading the TR (Task Register) with the segment selector for the TSS we created earlier, and setting the User Mode environment before passing execution control to the boot2 client.
<filename>sys/boot/i386/btx/btx/btx.S</filename>
/*
 * Launch user task.
 */
        movb $SEL_TSS,%cl       # Set task
        ltr %cx                 # register
        movl $0xa000,%edx       # User base address
        movzwl %ss:BDA_MEM,%eax # Get free memory
        shll $0xa,%eax          # To bytes
        subl $ARGSPACE,%eax     # Less arg space
        subl %edx,%eax          # Less base
        movb $SEL_UDATA,%cl     # User data selector
        pushl %ecx              # Set SS
        pushl %eax              # Set ESP
        push $0x202             # Set flags (IF set)
        push $SEL_UCODE         # Set CS
        pushl btx_hdr+0xc       # Set EIP
        pushl %ecx              # Set GS
        pushl %ecx              # Set FS
        pushl %ecx              # Set DS
        pushl %ecx              # Set ES
        pushl %edx              # Set EAX
        movb $0x7,%cl           # Set remaining
init.9: push $0x0               # general
        loop init.9             # registers
        popa                    # and initialize
        popl %es                # Initialize
        popl %ds                # user
        popl %fs                # segment
        popl %gs                # registers
        iret                    # To user mode
Note that the client's environment includes a stack segment selector and stack pointer (registers %ss and %esp). Indeed, once the TR is loaded with the TSS segment selector (instruction ltr), the stack pointer is calculated and pushed onto the stack along with the stack's segment selector. Next, the value 0x202 is pushed onto the stack; it is the value that the EFLAGS register will get when control is passed to the client. Also, the User Mode code segment selector and the client's entry point are pushed. Recall that this entry point is patched in the BTX header at link time. Finally, segment selectors (stored in register %ecx) for the segment registers %gs, %fs, %ds and %es are pushed onto the stack, along with the value at %edx (0xa000). Keep in mind the various values that have been pushed onto the stack (they will be popped out shortly). Next, values for the remaining general purpose registers are also pushed onto the stack (note the loop that pushes the value 0 seven times). Now the values start to be popped off the stack. First, the popa instruction pops the last seven values pushed off the stack. They are stored in the general purpose registers in order %edi, %esi, %ebp, %ebx, %edx, %ecx, %eax. Then, the various segment selectors pushed are popped into the various segment registers. Five values still remain on the stack. They are popped when the iret instruction is executed. This instruction first pops the value that was pushed from the BTX header. This value is a pointer to boot2's entry point. It is placed in the register %eip, the instruction pointer register. Next, the segment selector for the User Code Segment is popped and copied to register %cs. Remember that this segment's privilege level is 3, the least privileged level. This means that we must provide values for the stack of this privilege level. This is why the processor, besides further popping the value for the EFLAGS register, does two more pops out of the stack. These values go to the stack pointer (%esp) and the stack segment (%ss). Now, execution continues at boot2's entry point. It is important to note how the User Code Segment is defined. This segment's base address is set to 0xa000. This means that code memory addresses are relative to address 0xa000; if code being executed is fetched from address 0x2000, the actual memory addressed is 0xa000+0x2000=0xc000.
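To summarize, the stack image consumed by popa, the segment pops and the final iret can be pictured as a C structure, lowest address first. This is purely illustrative; the field names are ours and simply restate the pushes made in the listing:

/*
 * Illustrative snapshot of the stack the BTX server builds before
 * launching the client (lowest address first); field names are ours.
 */
#include <assert.h>
#include <stdint.h>

struct btx_launch_frame {
        uint32_t gp_regs[7];    /* zeros popped by popa (edi, esi, ebp, ...) */
        uint32_t eax;           /* 0xa000: the user segment base */
        uint32_t es, ds, fs, gs;/* SEL_UDATA, popped one by one */
        uint32_t eip;           /* client entry point, from the BTX header */
        uint32_t cs;            /* SEL_UCODE, privilege level 3 */
        uint32_t eflags;        /* 0x202: interrupts enabled */
        uint32_t esp;           /* initial user stack pointer */
        uint32_t ss;            /* SEL_UDATA */
};

int
main(void)
{
        /* Seventeen doublewords in all; iret consumes the last five. */
        assert(sizeof(struct btx_launch_frame) == 17 * 4);
        return (0);
}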
<application>boot2</application> Stage boot2 defines an important structure, struct bootinfo. This structure is initialized by boot2 and passed to the loader, and then further to the kernel. Some nodes of this structures are set by boot2, the rest by the loader. This structure, among other information, contains the kernel filename, BIOS harddisk geometry, BIOS drive number for boot device, physical memory available, envp pointer etc. The definition for it is: /usr/include/machine/bootinfo.h: struct bootinfo { u_int32_t bi_version; u_int32_t bi_kernelname; /* represents a char * */ u_int32_t bi_nfs_diskless; /* struct nfs_diskless * */ /* End of fields that are always present. */ #define bi_endcommon bi_n_bios_used u_int32_t bi_n_bios_used; u_int32_t bi_bios_geom[N_BIOS_GEOM]; u_int32_t bi_size; u_int8_t bi_memsizes_valid; u_int8_t bi_bios_dev; /* bootdev BIOS unit number */ u_int8_t bi_pad[2]; u_int32_t bi_basemem; u_int32_t bi_extmem; u_int32_t bi_symtab; /* struct symtab * */ u_int32_t bi_esymtab; /* struct symtab * */ /* Items below only from advanced bootloader */ u_int32_t bi_kernend; /* end of kernel space */ u_int32_t bi_envp; /* environment */ u_int32_t bi_modulep; /* preloaded modules */ }; boot2 enters into an infinite loop waiting for user input, then calls load(). If the user does not press anything, the loop breaks by a timeout, so load() will load the default file (/boot/loader). Functions ino_t lookup(char *filename) and int xfsread(ino_t inode, void *buf, size_t nbyte) are used to read the content of a file into memory. /boot/loader is an ELF binary, but where the ELF header is prepended with a.out's struct exec structure. load() scans the loader's ELF header, loading the content of /boot/loader into memory, and passing the execution to the loader's entry: sys/boot/i386/boot2/boot2.c: __exec((caddr_t)addr, RB_BOOTINFO | (opts & RBX_MASK), MAKEBOOTDEV(dev_maj[dsk.type], 0, dsk.slice, dsk.unit, dsk.part), 0, 0, 0, VTOP(&bootinfo)); <application>loader</application> Stage loader is a BTX client as well. I will not describe it here in detail, there is a comprehensive manpage written by Mike Smith, &man.loader.8;. The underlying mechanisms and BTX were discussed above. The main task for the loader is to boot the kernel. When the kernel is loaded into memory, it is being called by the loader: sys/boot/common/boot.c: /* Call the exec handler from the loader matching the kernel */ module_formats[km->m_loader]->l_exec(km); Kernel Initialization Let us take a look at the command that links the kernel. This will help identify the exact location where the loader passes execution to the kernel. This location is the kernel's actual entry point. sys/conf/Makefile.i386: ld -elf -Bdynamic -T /usr/src/sys/conf/ldscript.i386 -export-dynamic \ -dynamic-linker /red/herring -o kernel -X locore.o \ <lots of kernel .o files> ELF A few interesting things can be seen here. First, the kernel is an ELF dynamically linked binary, but the dynamic linker for kernel is /red/herring, which is definitely a bogus file. Second, taking a look at the file sys/conf/ldscript.i386 gives an idea about what ld options are used when compiling a kernel. Reading through the first few lines, the string sys/conf/ldscript.i386: ENTRY(btext) says that a kernel's entry point is the symbol `btext'. This symbol is defined in locore.s: sys/i386/i386/locore.s: .text /********************************************************************** * * This is where the bootblocks start us, set the ball rolling... 
* */ NON_GPROF_ENTRY(btext) First, the register EFLAGS is set to a predefined value of 0x00000002. Then all the segment registers are initialized: sys/i386/i386/locore.s: /* Don't trust what the BIOS gives for eflags. */ pushl $PSL_KERNEL popfl /* * Don't trust what the BIOS gives for %fs and %gs. Trust the bootstrap * to set %cs, %ds, %es and %ss. */ mov %ds, %ax mov %ax, %fs mov %ax, %gs btext calls the routines recover_bootinfo(), identify_cpu(), create_pagetables(), which are also defined in locore.s. Here is a description of what they do: recover_bootinfo This routine parses the parameters to the kernel passed from the bootstrap. The kernel may have been booted in 3 ways: by the loader, described above, by the old disk boot blocks, or by the old diskless boot procedure. This function determines the booting method, and stores the struct bootinfo structure into the kernel memory. identify_cpu This functions tries to find out what CPU it is running on, storing the value found in a variable _cpu. create_pagetables This function allocates and fills out a Page Table Directory at the top of the kernel memory area. The next steps are enabling VME, if the CPU supports it: testl $CPUID_VME, R(_cpu_feature) jz 1f movl %cr4, %eax orl $CR4_VME, %eax movl %eax, %cr4 Then, enabling paging: /* Now enable paging */ movl R(_IdlePTD), %eax movl %eax,%cr3 /* load ptd addr into mmu */ movl %cr0,%eax /* get control word */ orl $CR0_PE|CR0_PG,%eax /* enable paging */ movl %eax,%cr0 /* and let's page NOW! */ The next three lines of code are because the paging was set, so the jump is needed to continue the execution in virtualized address space: pushl $begin /* jump to high virtualized address */ ret /* now running relocated at KERNBASE where the system is linked to run */ begin: The function init386() is called with a pointer to the first free physical page, after that mi_startup(). init386 is an architecture dependent initialization function, and mi_startup() is an architecture independent one (the 'mi_' prefix stands for Machine Independent). The kernel never returns from mi_startup(), and by calling it, the kernel finishes booting: sys/i386/i386/locore.s: movl physfree, %esi pushl %esi /* value of first for init386(first) */ call _init386 /* wire 386 chip for unix operation */ call _mi_startup /* autoconfiguration, mountroot etc */ hlt /* never returns to here */ <function>init386()</function> init386() is defined in sys/i386/i386/machdep.c and performs low-level initialization specific to the i386 chip. The switch to protected mode was performed by the loader. The loader has created the very first task, in which the kernel continues to operate. Before looking at the code, consider the tasks the processor must complete to initialize protected mode execution: Initialize the kernel tunable parameters, passed from the bootstrapping program. Prepare the GDT. Prepare the IDT. Initialize the system console. Initialize the DDB, if it is compiled into kernel. Initialize the TSS. Prepare the LDT. Set up proc0's pcb. parameters init386() initializes the tunable parameters passed from bootstrap by setting the environment pointer (envp) and calling init_param1(). The envp pointer has been passed from loader in the bootinfo structure: sys/i386/i386/machdep.c: kern_envp = (caddr_t)bootinfo.bi_envp + KERNBASE; /* Init basic tunables, hz etc */ init_param1(); init_param1() is defined in sys/kern/subr_param.c. 
That file contains a number of sysctls, and two functions, init_param1() and init_param2(), that are called from init386(): sys/kern/subr_param.c: hz = HZ; TUNABLE_INT_FETCH("kern.hz", &hz); TUNABLE_<typename>_FETCH is used to fetch the value from the environment: /usr/src/sys/sys/kernel.h: #define TUNABLE_INT_FETCH(path, var) getenv_int((path), (var)) The sysctl kern.hz is the system clock tick. Additionally, these sysctls are set by init_param1(): kern.maxswzone, kern.maxbcache, kern.maxtsiz, kern.dfldsiz, kern.maxdsiz, kern.dflssiz, kern.maxssiz, kern.sgrowsiz. Global Descriptors Table (GDT) Then init386() prepares the Global Descriptors Table (GDT). Every task on an x86 runs in its own virtual address space, and this space is addressed by a segment:offset pair. Say, for instance, that the current instruction to be executed by the processor lies at CS:EIP; then the linear virtual address for that instruction is the virtual address of the code segment CS plus EIP. For convenience, segments begin at virtual address 0 and end at a 4 GB boundary. Therefore, the instruction's linear virtual address for this example would just be the value of EIP. Segment registers such as CS and DS are selectors, i.e., indexes into the GDT (to be more precise, an index is not a selector itself, but the INDEX field of a selector). FreeBSD's GDT holds descriptors for 15 selectors per CPU: sys/i386/i386/machdep.c: union descriptor gdt[NGDT * MAXCPU]; /* global descriptor table */ sys/i386/include/segments.h: /* * Entries in the Global Descriptor Table (GDT) */ #define GNULL_SEL 0 /* Null Descriptor */ #define GCODE_SEL 1 /* Kernel Code Descriptor */ #define GDATA_SEL 2 /* Kernel Data Descriptor */ #define GPRIV_SEL 3 /* SMP Per-Processor Private Data */ #define GPROC0_SEL 4 /* Task state process slot zero and up */ #define GLDT_SEL 5 /* LDT - eventually one per process */ #define GUSERLDT_SEL 6 /* User LDT */ #define GTGATE_SEL 7 /* Process task switch gate */ #define GBIOSLOWMEM_SEL 8 /* BIOS low memory access (must be entry 8) */ #define GPANIC_SEL 9 /* Task state to consider panic from */ #define GBIOSCODE32_SEL 10 /* BIOS interface (32bit Code) */ #define GBIOSCODE16_SEL 11 /* BIOS interface (16bit Code) */ #define GBIOSDATA_SEL 12 /* BIOS interface (Data) */ #define GBIOSUTIL_SEL 13 /* BIOS interface (Utility) */ #define GBIOSARGS_SEL 14 /* BIOS interface (Arguments) */ Note that those #defines are not selectors themselves, but just the INDEX field of a selector, so they are exactly the indices into the GDT. For example, the actual selector for the kernel code (GCODE_SEL) has the value 0x08. Interrupt Descriptor Table (IDT) The next step is to initialize the Interrupt Descriptor Table (IDT). This table is referenced by the processor when a software or hardware interrupt occurs. For example, to make a system call, a user application issues the INT 0x80 instruction. This is a software interrupt, so the processor's hardware looks up the record with index 0x80 in the IDT. This record points to the routine that handles the interrupt; in this particular case, this will be the kernel's syscall gate. The IDT may have a maximum of 256 (0x100) records. The kernel allocates NIDT records for the IDT, where NIDT is the maximum (256): sys/i386/i386/machdep.c: static struct gate_descriptor idt0[NIDT]; struct gate_descriptor *idt = &idt0[0]; /* interrupt descriptor table */ For each interrupt, an appropriate handler is set.
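To make "an appropriate handler is set" concrete, here is a hedged sketch of the kind of installation loop init386() uses; it follows the setidt() call pattern shown just below, but the exact arguments and the IDTVEC(rsvd) catch-all handler should be treated as illustrative rather than as a verbatim quote of sys/i386/i386/machdep.c. Note that GSEL(GCODE_SEL, SEL_KPL) evaluates to 0x08, the kernel code selector mentioned earlier, because the INDEX field occupies the selector bits above the table-indicator and privilege-level bits.

/*
 * Illustrative sketch: point every IDT entry at a catch-all handler
 * first; specific vectors (traps, faults, the INT 0x80 syscall gate)
 * are then overridden with further setidt() calls.
 */
int x;

for (x = 0; x < NIDT; x++)
	setidt(x, &IDTVEC(rsvd), SDT_SYS386TGT, SEL_KPL,
	    GSEL(GCODE_SEL, SEL_KPL));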
The syscall gate for INT 0x80 is set as well: sys/i386/i386/machdep.c: setidt(0x80, &IDTVEC(int0x80_syscall), SDT_SYS386TGT, SEL_UPL, GSEL(GCODE_SEL, SEL_KPL)); So when a userland application issues the INT 0x80 instruction, control will transfer to the function _Xint0x80_syscall, which is in the kernel code segment and will be executed with supervisor privileges. The console and DDB are then initialized: DDB sys/i386/i386/machdep.c: cninit(); /* skipped */ #ifdef DDB kdb_init(); if (boothowto & RB_KDB) Debugger("Boot flags requested debugger"); #endif The Task State Segment (TSS) is another x86 protected mode structure; it is used by the hardware to store task information when a task switch occurs. The Local Descriptors Table (LDT) is used to reference userland code and data. Several selectors are defined to point into the LDT: the system call gates and the user code and data selectors: /usr/include/machine/segments.h: #define LSYS5CALLS_SEL 0 /* forced by intel BCS */ #define LSYS5SIGR_SEL 1 #define L43BSDCALLS_SEL 2 /* notyet */ #define LUCODE_SEL 3 #define LSOL26CALLS_SEL 4 /* Solaris >= 2.6 system call gate */ #define LUDATA_SEL 5 /* separate stack, es,fs,gs sels ? */ /* #define LPOSIXCALLS_SEL 5*/ /* notyet */ #define LBSDICALLS_SEL 16 /* BSDI system call gate */ #define NLDT (LBSDICALLS_SEL + 1) Next, proc0's Process Control Block (struct pcb) structure is initialized. proc0 is a struct proc structure that describes a kernel process. It is always present while the kernel is running, and is therefore declared as a global: sys/kern/kern_init.c: struct proc proc0; The structure struct pcb is a part of the proc structure. It is defined in /usr/include/machine/pcb.h and holds a process's information specific to the i386 architecture, such as register values. <function>mi_startup()</function> This function performs a bubble sort of all the system initialization objects and then calls the entry point of each object one by one: sys/kern/init_main.c: for (sipp = sysinit; *sipp; sipp++) { /* ... skipped ... */ /* Call function */ (*((*sipp)->func))((*sipp)->udata); /* ... skipped ... */ } Although the sysinit framework is described in the Developers' Handbook, I will discuss its internals here. sysinit objects Every system initialization object (sysinit object) is created by calling the SYSINIT() macro. Let us take as an example the announce sysinit object. This object prints the copyright message: sys/kern/init_main.c: static void print_caddr_t(void *data __unused) { printf("%s", (char *)data); } SYSINIT(announce, SI_SUB_COPYRIGHT, SI_ORDER_FIRST, print_caddr_t, copyright) The subsystem ID for this object is SI_SUB_COPYRIGHT (0x0800001), which comes right after SI_SUB_CONSOLE (0x0800000). So, the copyright message will be printed out first, just after the console initialization. Let us take a look at what exactly the macro SYSINIT() does. It expands to a C_SYSINIT() macro.
The C_SYSINIT() macro then expands to a static struct sysinit structure declaration with another DATA_SET macro call: /usr/include/sys/kernel.h: #define C_SYSINIT(uniquifier, subsystem, order, func, ident) \ static struct sysinit uniquifier ## _sys_init = { \ subsystem, \ order, \ func, \ ident \ }; \ DATA_SET(sysinit_set,uniquifier ## _sys_init); #define SYSINIT(uniquifier, subsystem, order, func, ident) \ C_SYSINIT(uniquifier, subsystem, order, \ (sysinit_cfunc_t)(sysinit_nfunc_t)func, (void *)ident) The DATA_SET() macro expands to a MAKE_SET(), and that macro is where all the sysinit magic is hidden: /usr/include/linker_set.h: #define MAKE_SET(set, sym) \ static void const * const __set_##set##_sym_##sym = &sym; \ __asm(".section .set." #set ",\"aw\""); \ __asm(".long " #sym); \ __asm(".previous") #endif #define TEXT_SET(set, sym) MAKE_SET(set, sym) #define DATA_SET(set, sym) MAKE_SET(set, sym) In our case, the following declaration will occur: static struct sysinit announce_sys_init = { SI_SUB_COPYRIGHT, SI_ORDER_FIRST, (sysinit_cfunc_t)(sysinit_nfunc_t) print_caddr_t, (void *) copyright }; static void const *const __set_sysinit_set_sym_announce_sys_init = &announce_sys_init; __asm(".section .set.sysinit_set" ",\"aw\""); __asm(".long " "announce_sys_init"); __asm(".previous"); The first __asm instruction will create an ELF section within the kernel's executable. This happens at kernel link time. The section will have the name .set.sysinit_set. The content of this section is one 32-bit value, the address of the announce_sys_init structure, and that is what the second __asm emits. The third __asm instruction marks the end of the section. If a directive with the same section name occurred before, the content, i.e., the 32-bit value, will be appended to the existing section, thus forming an array of 32-bit pointers. Running objdump on a kernel binary, you may notice the presence of such small sections: &prompt.user; objdump -h /kernel 7 .set.cons_set 00000014 c03164c0 c03164c0 002154c0 2**2 CONTENTS, ALLOC, LOAD, DATA 8 .set.kbddriver_set 00000010 c03164d4 c03164d4 002154d4 2**2 CONTENTS, ALLOC, LOAD, DATA 9 .set.scrndr_set 00000024 c03164e4 c03164e4 002154e4 2**2 CONTENTS, ALLOC, LOAD, DATA 10 .set.scterm_set 0000000c c0316508 c0316508 00215508 2**2 CONTENTS, ALLOC, LOAD, DATA 11 .set.sysctl_set 0000097c c0316514 c0316514 00215514 2**2 CONTENTS, ALLOC, LOAD, DATA 12 .set.sysinit_set 00000664 c0316e90 c0316e90 00215e90 2**2 CONTENTS, ALLOC, LOAD, DATA This screen dump shows that the size of the .set.sysinit_set section is 0x664 bytes, so 0x664/sizeof(void *) sysinit objects are compiled into the kernel (0x664 is 1636 bytes, which with 4-byte pointers gives 409 objects). The other sections such as .set.sysctl_set represent other linker sets. By defining a variable of type struct linker_set, the content of the .set.sysinit_set section will be collected into that variable: sys/kern/init_main.c: extern struct linker_set sysinit_set; /* XXX */ The struct linker_set is defined as follows: /usr/include/linker_set.h: struct linker_set { int ls_length; void *ls_items[1]; /* really ls_length of them, trailing NULL */ }; The first field will be equal to the number of sysinit objects, and the second will be a NULL-terminated array of pointers to them. Returning to the mi_startup() discussion, it should now be clear how the sysinit objects are organized. The mi_startup() function sorts them and calls each one.
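As an illustration of that sorting step, here is a hedged sketch of the comparison mi_startup() makes before the call loop shown earlier. It is simplified from sys/kern/init_main.c, and the field names subsystem and order are the ones implied by the C_SYSINIT() initializer above; treat the exact code as a sketch rather than a verbatim copy.

/*
 * Simplified bubble sort over the sysinit objects gathered from the
 * linker set; sipp and xipp walk the NULL-terminated pointer array.
 * Entries are ordered first by subsystem ID, then by order within the
 * subsystem, which is why SI_SUB_COPYRIGHT runs right after
 * SI_SUB_CONSOLE.
 */
struct sysinit **sipp, **xipp, *save;

for (sipp = sysinit; *sipp; sipp++) {
	for (xipp = sipp + 1; *xipp; xipp++) {
		if ((*sipp)->subsystem < (*xipp)->subsystem ||
		    ((*sipp)->subsystem == (*xipp)->subsystem &&
		     (*sipp)->order <= (*xipp)->order))
			continue;	/* already in the right order */
		save = *sipp;
		*sipp = *xipp;
		*xipp = save;
	}
}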
The very last object is the system scheduler: /usr/include/sys/kernel.h: enum sysinit_sub_id { SI_SUB_DUMMY = 0x0000000, /* not executed; for linker*/ SI_SUB_DONE = 0x0000001, /* processed*/ SI_SUB_CONSOLE = 0x0800000, /* console*/ SI_SUB_COPYRIGHT = 0x0800001, /* first use of console*/ ... SI_SUB_RUN_SCHEDULER = 0xfffffff /* scheduler: no return*/ }; The system scheduler sysinit object is defined in the file sys/vm/vm_glue.c, and the entry point for that object is scheduler(). That function is actually an infinite loop, and it represents a process with PID 0, the swapper process. The proc0 structure, mentioned before, is used to describe it. The first user process, called init, is created by the sysinit object init: sys/kern/init_main.c: static void create_init(const void *udata __unused) { int error; int s; s = splhigh(); error = fork1(&proc0, RFFDG | RFPROC, &initproc); if (error) panic("cannot fork init: %d\n", error); initproc->p_flag |= P_INMEM | P_SYSTEM; cpu_set_fork_handler(initproc, start_init, NULL); remrunqueue(initproc); splx(s); } SYSINIT(init,SI_SUB_CREATE_INIT, SI_ORDER_FIRST, create_init, NULL) create_init() allocates a new process by calling fork1(), but does not mark it runnable. When this new process is scheduled for execution by the scheduler, start_init() will be called. That function is defined in init_main.c. It tries to load and exec the init binary, probing /sbin/init first, then /sbin/oinit, /sbin/init.bak, and finally /stand/sysinstall: sys/kern/init_main.c: static char init_path[MAXPATHLEN] = #ifdef INIT_PATH __XSTRING(INIT_PATH); #else "/sbin/init:/sbin/oinit:/sbin/init.bak:/stand/sysinstall"; #endif
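To show how a colon-separated path list like init_path is consumed, here is a hedged, self-contained userland sketch of the probing logic. It is heavily simplified: the real start_init() builds an argument vector and calls the kernel's exec machinery directly, and try_exec_init() below is a hypothetical stand-in for that machinery, not a kernel function.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in: pretend to exec one candidate, report failure. */
static int
try_exec_init(const char *path)
{
	(void)path;
	return (-1);
}

static char init_path[] =
    "/sbin/init:/sbin/oinit:/sbin/init.bak:/stand/sysinstall";

int
main(void)
{
	char *copy, *next, *path;

	/* Walk the list until one candidate starts successfully. */
	copy = next = strdup(init_path);
	while ((path = strsep(&next, ":")) != NULL) {
		if (try_exec_init(path) == 0)
			break;		/* init is now running */
		printf("exec %s: failed, trying next\n", path);
	}
	free(copy);
	return (0);
}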
Index: head/en_US.ISO8859-1/books/dev-model/book.xml =================================================================== --- head/en_US.ISO8859-1/books/dev-model/book.xml (revision 48481) +++ head/en_US.ISO8859-1/books/dev-model/book.xml (revision 48482) @@ -1,2515 +1,2515 @@ %chapters; ]> A project model for the FreeBSD Project Niklas Saers 2002-2005 Niklas Saers 1.5 October, 2014 Remove mention of GNATS which is no longer used by the project. 1.4 September, 2013 Remove mention of CVS and CVSup which are no longer used by the project. 1.3 October, 2012 Remove hats held by specific people; these are documented elsewhere. 1.2 April, 2005 Update one year of changes, replace statistics with those of 2004 1.1 July, 2004 First update within the FreeBSD tree 1.0 December 4th, 2003 Ready for commit to FreeBSD Documentation 0.7 April 7th, 2003 Release for review by the Documentation team 0.6 March 1st, 2003 Incorporated corrections noted by interviewees and reviewers 0.5 February 1st, 2003 Initial review by interviewees $FreeBSD$ Foreword Up until now, the FreeBSD project has released a number of described techniques to do different parts of work. However, a project model summarising how the project is structured is needed because of the increasing number of project members. This goes hand-in-hand with Brooks' law that adding another person to a late project will make it later since it will increase the communication needs . A project model is a tool to reduce the communication needs. This paper will provide such a project model and is donated to the FreeBSD Documentation project where it can evolve together with the project so that it can at any point in time reflect the way the project works. It is based on . I would like to thank the following people for taking the time to explain things that were unclear to me and for proofreading the document. Andrey A. Chernov ache@freebsd.org Bruce A. Mah bmah@freebsd.org Dag-Erling Smørgrav des@freebsd.org Giorgos Keramidas keramida@freebsd.org Ingvil Hovig ingvil.hovig@skatteetaten.no Jesper Holck jeh.inf@cbs.dk John Baldwin jhb@freebsd.org John Polstra jdp@freebsd.org Kirk McKusick mckusick@freebsd.org Mark Linimon linimon@freebsd.org Marleen Devos Niels Jørgensen nielsj@ruc.dk Nik Clayton nik@freebsd.org Poul-Henning Kamp phk@freebsd.org Simon L. Nielsen simon@freebsd.org Overview A project model is a means to reduce the communications overhead in a project. As shown by , increasing the number of project participants increases the communication in the project exponentially. FreeBSD has during the past few years increased both its number of active users and committers, and the communication in the project has risen accordingly. This project model will serve to reduce this overhead by providing an up-to-date description of the project. During the Core elections in 2002, Mark Murray stated I am opposed to a long rule-book, as that satisfies lawyer-tendencies, and is counter to the technocentricity that the project so badly needs. . This project model is not meant to be a tool to justify creating impositions for developers, but as a tool to facilitate coordination. It is meant as a description of the project, with an overview of how the different processes are executed. It is an introduction to how the FreeBSD project works. The FreeBSD project model will be described as of July 1st, 2004. It is based on Niels Jørgensen's paper , FreeBSD's official documents, discussions on FreeBSD mailing lists and interviews with developers.
After providing definitions of the terms used, this document will outline the organisational structure (including role descriptions and communication lines), discuss the methodology model and, after presenting the tools used for process control, present the defined processes. Finally, it will outline the major sub-projects of the FreeBSD project. , Sections 1.2 and 1.3 give the vision and the architectural guidelines for the project. The vision is To produce the best UNIX-like operating system package possible, with due respect to the original software tools ideology as well as usability, performance and stability. The architectural guidelines help determine whether a problem that someone wants solved is within the scope of the project. Definitions
Activity An activity is an element of work performed during the course of a project . It has an output and leads towards an outcome. Such an output can either be an input to another activity or a part of the process' delivery.
Process A process is a series of activities that lead towards a particular outcome. A process can consist of one or more sub-processes. An example of a process is software design.
Hat A hat is synonymous with role. A hat has certain responsibilities in a process and for the process outcome. The hat executes activities. It is well defined what issues the hat should be contacted about by the project members and people outside the project.
Outcome An outcome is the final output of the process. This is synonymous with deliverable, that is defined as any measurable, tangible, verifiable outcome, result or item that must be produced to complete a project or part of a project. Often used more narrowly in reference to an external deliverable, which is a deliverable that is subject to approval by the project sponsor or customer by . Examples of outcomes are a piece of software, a decision made or a report written.
FreeBSD When saying FreeBSD we will mean the BSD derivative UNIX-like operating system FreeBSD, whereas when saying the FreeBSD Project we will mean the project organisation.
Organisational structure While no-one takes ownership of FreeBSD, the FreeBSD organisation is divided into core, committers and contributors and is part of the FreeBSD community that lives around it.
The FreeBSD Project's structure
The number of committers has been determined by going through CVS logs from January 1st, 2004 to December 31st, 2004 and the number of contributors by going through the list of contributions and problem reports. The main resource in the FreeBSD community is its developers: the committers and contributors. It is with their contributions that the project can move forward. Regular developers are referred to as contributors. As of January 1st, 2003, there were an estimated 5500 contributors on the project. Committers are developers with the privilege of being able to commit changes. These are usually the most active developers who are willing to spend their time not only integrating their own code but integrating code submitted by the developers who do not have this privilege. They are also the developers who elect the core team, and they have access to closed discussions. The project can be grouped into four distinct parts, and most developers will focus their involvement in one part of FreeBSD. The four parts are kernel development, userland development, ports and documentation. When referring to the base system, both kernel and userland are meant. This split changes our triangle to look like this:
The FreeBSD Project's structure with committers in categories
The number of committers per area has been determined by going through CVS logs from January 1st, 2004 to December 31st, 2004. Note that many committers work in multiple areas, making the total number higher than the real number of committers. The total number of committers at that time was 269. Committers fall into three groups: committers who are only concerned with one area of the project (for instance file systems), committers who are involved only with one sub-project and committers who commit to different parts of the code, including sub-projects. Because some committers work on different parts, the total number in the committers section of the triangle is higher than in the above triangle. The kernel is the main building block of FreeBSD. While the userland applications are protected against faults in other userland applications, the entire system is vulnerable to errors in the kernel. This, combined with the vast amount of dependencies in the kernel and the fact that it is not easy to see all the consequences of a kernel change, demands developers with a relatively full understanding of the kernel. Multiple development efforts in the kernel also require closer coordination than userland applications do. The core utilities, known as userland, provide the interface that identifies FreeBSD: the user interface, shared libraries and external interfaces to connecting clients. Currently, 162 people are involved in userland development and maintenance, many being maintainers for their own part of the code. Maintainership will be discussed in the section. Documentation is handled by and includes all documents surrounding the FreeBSD project, including the web pages. During 2004, 101 people made commits to the FreeBSD Documentation Project. Ports is the collection of meta-data that is needed to make software packages build correctly on FreeBSD. An example of a port is the port for the web-browser Mozilla. It contains information about where to fetch the source, what patches to apply and how, and how the package should be installed on the system. This allows automated tools to fetch, build and install the package. As of this writing, there are more than 12600 ports available. Statistics are generated by counting the number of entries in the file fetched by portsdb by April 1st, 2005. portsdb is a part of the port sysutils/portupgrade. , ranging from web servers to games, programming languages and most of the application types that are in use on modern computers. Ports will be discussed further in the section .
Methodology model
Development model There is no defined model for how people write code in FreeBSD. However, Niels Jørgensen has suggested a model of how written code is integrated into the project.
Jørgenssen's model for change integration
The development release is the FreeBSD-CURRENT ("-CURRENT") branch and the production release is the FreeBSD-STABLE branch ("-STABLE") . This is a model for one change, and shows that after coding, developers seek community review and try integrating it with their own systems. After integrating the change into the development release, called FreeBSD-CURRENT, it is tested by many users and developers in the FreeBSD community. After it has gone through enough testing, it is merged into the production release, called FreeBSD-STABLE. Unless each stage is finished successfully, the developer needs to go back and make modifications in the code and restart the process. To integrate a change with either -CURRENT or -STABLE is called making a commit. Jørgensen found that most FreeBSD developers work individually, meaning that this model is used in parallel by many developers on the different ongoing development efforts. A developer can also be working on multiple changes, so that while he is waiting for review or people to test one or more of his changes, he may be writing another change. As each commit represents an increment, this is a massively incremental model. The commits are in fact so frequent that during one year The period from January 1st, 2004 to December 31st, 2004 was examined to find this number. , 85427 commits were made, making a daily average of 233 commits. Within the code bracket in Jørgensen's figure, each programmer has his own working style and follows his own development models. The bracket could very well have been called development as it includes requirements gathering and analysis, system and detailed design, implementation and verification. However, the only output from these stages is the source code or system documentation. From a stepwise model's perspective (such as the waterfall model), the other brackets can be seen as further verification and system integration. This system integration is also important to see if a change is accepted by the community. Up until the code is committed, the developer is free to choose how much to communicate about it to the rest of the project. In order for -CURRENT to work as a buffer (so that bright ideas that had some undiscovered drawbacks can be backed out) the minimum time a commit should be in -CURRENT before merging it to -STABLE is 3 days. Such a merge is referred to as an MFC (Merge From Current). It is important to notice the word change. Most commits do not contain radical new features, but are maintenance updates. The only exceptions from this model are security fixes and changes to features that are deprecated in the -CURRENT branch. In these cases, changes can be committed directly to the -STABLE branch. In addition to many people working on the project, there are many related projects to the FreeBSD Project. These are either projects developing brand new features, sub-projects or projects whose outcome is incorporated into FreeBSD For instance, the development of the Bluetooth stack started as a sub-project until it was deemed stable enough to be merged into the -CURRENT branch. Now it is a part of the core FreeBSD system. . These projects fit into the FreeBSD Project just like regular development efforts: they produce code that is integrated with the FreeBSD Project. However, some of them (like Ports and Documentation) have the privilege of being applicable to both branches or commit directly to both -CURRENT and -STABLE. 
There are no standards for how design should be done, nor is design collected in a centralised repository. The main design is that of 4.4BSD. According to Kirk McKusick, after 20 years of developing UNIX operating systems, the interfaces are for the most part figured out. There is therefore no need for much design. However, new applications of the system and new hardware lead to some implementations being more beneficial than those that used to be preferred. One example is the introduction of web browsing that made the normal TCP/IP connection a short burst of data rather than a steady stream over a longer period of time. As design is a part of the Code bracket in Jørgensen's model, it is up to every developer or sub-project how this should be done. Even if the design were stored in a central repository, the output from the design stages would be of limited use, as the differences in methodologies would make them poorly interoperable, if at all. For the overall design of the project, the project relies on the sub-projects to negotiate fitting interfaces between each other rather than to dictate interfacing.
Release branches The releases of FreeBSD are best illustrated by a tree with many branches where each major branch represents a major version. Minor versions are represented by branches of the major branches. In the following release tree, arrows that follow one another in a particular direction represent a branch. Boxes with full lines and diamonds represent official releases. Boxes with dotted lines represent the development branch at that time. Security branches are represented by ovals. Diamonds differ from boxes in that they represent a fork, meaning a place where a branch splits into two branches where one of the branches becomes a sub-branch. For example, at 4.0-RELEASE the 4.0-CURRENT branch split into 4-STABLE and 5.0-CURRENT. At 4.5-RELEASE, the branch forked off a security branch called RELENG_4_5.
The FreeBSD release tree
The latest -CURRENT version is always referred to as -CURRENT, while the latest -STABLE release is always referred to as -STABLE. In this figure, -STABLE refers to 4-STABLE while -CURRENT refers to 5.0-CURRENT following 5.0-RELEASE. A major release is always made from the -CURRENT branch. However, the -CURRENT branch does not need to fork at that point in time, but can focus on stabilising. An example of this is that following 3.0-RELEASE, 3.1-RELEASE was also a continuation of the -CURRENT-branch, and -CURRENT did not become a true development branch until this version was released and the 3-STABLE branch was forked. When -CURRENT returns to becoming a development branch, it can only be followed by a major release. 5-STABLE is predicted to be forked off 5.0-CURRENT at around 5.3-RELEASE. It is not until 5-STABLE is forked that the development branch will be branded 6.0-CURRENT. A minor release is made from the -CURRENT branch following a major release, or from the -STABLE branch. Following and including, 4.3-RELEASE The first release this actually happened for was 4.5-RELEASE, but security branches were at the same time created for 4.3-RELEASE and 4.4-RELEASE. , when a minor release has been made, it becomes a security branch. This is meant for organisations that do not want to follow the -STABLE branch and the potential new/changed features it offers, but instead require an absolutely stable environment, only updating to implement security updates. There is a terminology overlap with respect to the word "stable", which leads to some confusion. The -STABLE branch is still a development branch, whose goal is to be useful for most people. If it is never acceptable for a system to get changes that are not announced at the time it is deployed, that system should run a security branch. Each update to a security branch is called a patchlevel. For every security enhancement that is done, the patchlevel number is increased, making it easy for people tracking the branch to see what security enhancements they have implemented. In cases where there have been especially serious security flaws, an entire new release can be made from a security branch. An example of this is 4.6.2-RELEASE.
Model summary To summarise, the development model of FreeBSD can be seen as the following tree:
The overall development model
The tree of the FreeBSD development with ongoing development efforts and continuous integration. The tree symbolises the release versions with major versions spawning new main branches and minor versions being versions of the main branch. The top branch is the -CURRENT branch where all new development is integrated, and the -STABLE branch is the branch directly below it. Clouds of development efforts hang over the project where developers use the development models they see fit. The product of their work is then integrated into -CURRENT where it undergoes parallel debugging and is finally merged from -CURRENT into -STABLE. Security fixes are merged from -STABLE to the security branches.
Hats Many committers have a special area of responsibility. These roles are called hats. These hats can be either project roles, such as public relations officer, or maintainer for a certain area of the code. Because this is a project where people give voluntarily of their spare time, people with assigned hats are not always available. They must therefore appoint a deputy that can perform the hat's role in his or her absence. The other option is to have the role held by a group. Many of these hats are not formalised. Formalised hats have a charter stating the exact purpose of the hat along with its privileges and responsibilities. The writing of such charters is a new part of the project, and has thus yet to be completed for all hats. These hat descriptions are not such a formalisation, rather a summary of the role with links to the charter where available and contact addresses.
General Hats
Contributor A Contributor contributes to the FreeBSD project either as a developer, as an author, by sending problem reports, or in other ways contributing to the progress of the project. A contributor has no special privileges in the FreeBSD project.
Committer A person who has the required privileges to add his code or documentation to the repository. A committer has made a commit within the past 12 months. An active committer is a committer who has made an average of one commit per month during that time. It is worth noting that there are no technical barriers preventing someone, once having gained commit privileges to the main project or a sub-project, from making commits in parts of that project's source that the committer did not specifically get permission to modify. However, when wanting to make modifications to parts a committer has not been involved in before, he/she should read the logs to see what has happened in this area before, and also read the MAINTAINER file to see if the maintainer of this part has any special requests on how changes in the code should be made.
Core Team The core team is elected by the committers from the pool of committers and serves as the board of directors of the FreeBSD project. It promotes active contributors to committers, assigns people to well-defined hats, and is the final arbiter of decisions involving which way the project should be heading. As of July 1st, 2004, core consisted of 9 members. Elections are held every two years.
Maintainership Maintainership means that that person is responsible for what is allowed to go into that area of the code and has the final say should disagreements over the code occur. This involves proactive work aimed at stimulating contributions and reactive work in reviewing commits. With the FreeBSD source comes the MAINTAINERS file that contains a one-line summary of how each maintainer would like contributions to be made. Having this notice and contact information enables developers to focus on the development effort rather than being stuck in a slow correspondence should the maintainer be unavailable for some time. If the maintainer is unavailable for an unreasonably long period of time, and other people do a significant amount of work, maintainership may be switched without the maintainer's approval. This is based on the stance that maintainership should be demonstrated, not declared. Maintainership of a particular piece of code is a hat that is not held as a group.
Official Hats The official hats in the FreeBSD Project are hats that are more or less formalised and mainly administrative roles. They have the authority and responsibility for their area. The following illustration shows the responsibility lines. After this follows a description of each hat, including who it is held by.
Overview of official hats
All boxes consist of groups of committers, except for the dotted boxes where the holders are not necessarily committers. The flattened circles are sub-projects and consist of both committers and non-committers of the main project.
Documentation project manager architect is responsible for defining and following up documentation goals for the committers in the Documentation project. Hat held by: The DocEng team doceng@FreeBSD.org. The DocEng Charter.
Postmaster The Postmaster is responsible for mail being correctly delivered to the committers' email address. He is also responsible for ensuring that the mailing lists work and should take measures against possible disruptions of mail such as having troll-, spam- and virus-filters. Hat currently held by: the Postmaster Team postmaster@FreeBSD.org.
Release Coordination The responsibilities of the Release Engineering Team are Setting, publishing and following a release schedule for official releases Documenting and formalising release engineering procedures Creation and maintenance of code branches Coordinating with the Ports and Documentation teams to have an updated set of packages and documentation released with the new releases Coordinating with the Security team so that pending releases are not affected by recently disclosed vulnerabilities. Further information about the development process is available in the section. Hat held by: the Release Engineering team re@FreeBSD.org. The Release Engineering Charter.
Public Relations & Corporate Liaison The Public Relations & Corporate Liaison's responsibilities are: Making press statements when events that are important to the FreeBSD Project occur. Being the official contact person for corporations that are working closely with the FreeBSD Project. Taking steps to promote FreeBSD within both the Open Source community and the corporate world. Handling the freebsd-advocacy mailing list. This hat is currently not occupied.
Security Officer The Security Officer's main responsibility is to coordinate information exchange with others in the security community and in the FreeBSD project. The Security Officer is also responsible for taking action when security problems are reported and promoting proactive development behavior when it comes to security. Because of the fear that information about vulnerabilities may leak out to people with malicious intent before a patch is available, only the Security Officer, consisting of an officer, a deputy and two members, receive sensitive information about security issues. However, to create or implement a patch, the Security Officer has the Security Officer Team security-team@FreeBSD.org to help do the work.
Source Repository Manager The Source Repository Manager is the only one who is allowed to directly modify the repository without using the tool. It is his/her responsibility to ensure that technical problems that arise in the repository are resolved quickly. The source repository manager has the authority to back out commits if this is necessary to resolve a SVN technical problem. Hat held by: the Source Repository Manager clusteradm@FreeBSD.org.
Election Manager The Election Manager is responsible for the process. The manager is responsible for running and maintaining the election system, and is the final authority should minor unforeseen events happen in the election process. Major unforeseen events have to be discussed with the Hat held only during elections.
Web site Management The Web site Management hat is responsible for coordinating the rollout of updated web pages on mirrors around the world, for the overall structure of the primary web site and the system it is running upon. The management needs to coordinate the content with and acts as maintainer for the www tree. Hat held by: the FreeBSD Webmasters www@FreeBSD.org.
Ports Manager The Ports Manager acts as a liaison between and the core project, and all requests from the project should go to the ports manager. Hat held by: the Ports Management Team portmgr@FreeBSD.org. The Portmgr charter.
Standards The Standards hat is responsible for ensuring that FreeBSD complies with the standards it is committed to , keeping up to date on the development of these standards and notifying FreeBSD developers of important changes that allows them to take a proactive role and decrease the time between a standards update and FreeBSD's compliancy. Hat currently held by: Garrett Wollman wollman@FreeBSD.org.
Core Secretary The Core Secretary's main responsibility is to write drafts to and publish the final Core Reports. The secretary also keeps the core agenda, thus ensuring that no balls are dropped unresolved. Hat currently held by: &a.matthew.email;.
Bugmeister The Bugmeister is responsible for ensuring that the maintenance database is in working order, that the entries are correctly categorised and that there are no invalid entries. Hat currently held by: the Bugmeister Team bugmeister@FreeBSD.org.
Donations Liaison Officer The task of the donations liaison officer is to match the developers with needs with people or organisations willing to make a donation. The Donations Liaison Charter is available here Hat held by: the Donations Liaison Office donations@FreeBSD.org.
Admin (Also called FreeBSD Cluster Admin) The admin team consists of the people responsible for administrating the computers that the project relies on for its distributed work and communication to be synchronised. It consists mainly of those people who have physical access to the servers. Hat held by: the Admin team admin@FreeBSD.org.
Process dependent hats
Report originator The person originally responsible for filing a Problem Report.
Bugbuster A person who will either find the right person to solve the problem, or close the PR if it is a duplicate or otherwise not an interesting one.
Mentor A mentor is a committer who takes it upon him/herself to introduce a new committer to the project, both in terms of ensuring the new committer's setup is valid, that the new committer knows the available tools required in his/her work and that the new committer knows what is expected of him/her in terms of behavior.
Vendor The person(s) or organisation whom external code comes from and whom patches are sent to.
Reviewers People on the mailing list where the request for review is posted.
Processes The following section will describe the defined project processes. Issues that are not handled by these processes happen on an ad-hoc basis based on what has been customary to do in similar cases.
Adding new and removing old committers The Core team has the responsibility of giving and removing commit privileges to contributors. This can only be done through a vote on the core mailing list. The ports and documentation sub-projects can give commit privileges to people working on these projects, but have to date not removed such privileges. Normally a contributor is recommended to core by a committer. For contributors or outsiders to contact core asking to be a committer is not well thought of and is usually rejected. If the area of particular interest for the developer potentially overlaps with other committers' area of maintainership, the opinion of those maintainers is sought. However, it is frequently this committer that recommends the developer. When a contributor is given committer status, he is assigned a mentor. The committer who recommended the new committer will, in the general case, take it upon himself to be the new committers mentor. When a contributor is given his commit bit, a -signed email is sent from either , or nik@freebsd.org to both admins@freebsd.org, the assigned mentor, the new committer and core confirming the approval of a new account. The mentor then gathers a password line, public key and PGP key from the new committer and sends them to . When the new account is created, the mentor activates the commit bit and guides the new committer through the rest of the initial process.
Process summary: adding a new committer
When a contributor sends a piece of code, the receiving committer may choose to recommend that the contributor be given commit privileges. If he recommends this to core, they will vote on this recommendation. If they vote in favour, a mentor is assigned to the new committer and the new committer has to email his details to the administrators for an account to be created. After this, the new committer is all set to make his first commit. By tradition, this is by adding his name to the committers list. Recall that a committer is considered to be someone who has committed code during the past 12 months. However, it is not until after 18 months of inactivity have passed that commit privileges are eligible to be revoked. There are, however, no automatic procedures for doing this. For reactions concerning commit privileges not triggered by time, see section 1.5.8.
Process summary: removing a committer
When Core decides to clean up the committers list, they check who has not made a commit for the past 18 months. Committers who have not done so have their commit bits revoked. It is also possible for committers to request that their commit bit be retired if for some reason they are no longer going to be actively committing to the project. In this case, it can also be restored at a later time by core, should the committer ask. Roles in this process:
Committing code The committing of new or modified code is one of the most frequent processes in the FreeBSD project and will usually happen many times a day. Committing code can only be done by a committer. Committers commit either code written by themselves, code submitted to them or code submitted through a problem report. When non-trivial code is written by a developer, he should seek a code review from the community. This is done by sending mail to the relevant list asking for review. Before submitting the code for review, he should ensure it compiles correctly with the entire tree and that all relevant tests run. This is called a pre-commit test. When contributed code is received, it should be reviewed by the committer and tested the same way. When a change is committed to a part of the source that has been contributed from an outside , the maintainer should ensure that the patch is contributed back to the vendor. This is in line with the open source philosophy and makes it easier to stay in sync with outside projects as the patches do not have to be reapplied every time a new release is made. After the code has been available for review and no further changes are necessary, the code is committed into the development branch, -CURRENT. If the change applies to the -STABLE branch or the other branches as well, a Merge From Current ("MFC") countdown is set by the committer. After the number of days the committer chose when setting the MFC has passed, an email will automatically be sent to the committer reminding him to commit it to the -STABLE branch (and possibly security branches as well). Only security critical changes should be merged to security branches. Delaying the commit to -STABLE and other branches allows for parallel debugging where the committed code is tested on a wide range of configurations. This makes changes to -STABLE contain fewer faults, thus giving the branch its name.
Process summary: A committer commits code
When a committer has written a piece of code and wants to commit it, he first needs to determine if it is trivial enough to go in without prior review or if it should first be reviewed by the developer community. If the code is trivial or has been reviewed and the committer is not the maintainer, he should consult the maintainer before proceeding. If the code is contributed by an outside vendor, the maintainer should create a patch that is sent back to the vendor. The code is then committed and then deployed by the users. Should they find problems with the code, this will be reported and the committer can go back to writing a patch. If a vendor is affected, he can choose to implement or ignore the patch.
Process summary: A contributor commits code
The difference when a contributor makes a code contribution is that he submits the code through the Bugzilla interface. This report is picked up by the maintainer who reviews the code and commits it. Hats included in this process are:
Core election Core elections are held at least every two years. The first Core election was held September 2000 Nine core members are elected. New elections are held if the number of core members drops below seven. New elections can also be held should at least 1/3 of the active committers demand this. When an election is to take place, core announces this at least 6 weeks in advance, and appoints an election manager to run the elections. Only committers can be elected into core. The candidates need to submit their candidacy at least one week before the election starts, but can refine their statements until the voting starts. They are presented in the candidates list. When writing their election statements, the candidates must answer a few standard questions submitted by the election manager. During elections, the rule that a committer must have committed during the 12 past months is followed strictly. Only these committers are eligible to vote. When voting, the committer may vote once in support of up to nine nominees. The voting is done over a period of four weeks with reminders being posted on developers mailing list that is available to all committers. The election results are released one week after the election ends, and the new core team takes office one week after the results have been posted. Should there be a voting tie, this will be resolved by the new, unambiguously elected core members. Votes and candidate statements are archived, but the archives are not publicly available.
Process summary: Core elections
Core announces the election and selects an election manager. He prepares the elections, and when ready, candidates can announce their candidacies through submitting their statements. The committers then vote. After the vote is over, the election results are announced and the new core team takes office. Hats in core elections are:
Development of new features Within the project there are sub-projects that are working on new features. These projects are generally done by one person . Every project is free to organise development as it sees fit. However, when the project is merged to the -CURRENT branch it must follow the project guidelines. When the code has been well tested in the -CURRENT branch and deemed stable enough and relevant to the -STABLE branch, it is merged to the -STABLE branch. The requirements of the project are given by developer wishes, requests from the community in terms of direct requests by mail, Problem Reports, commercial funding for the development of features, or contributions by the scientific community. The wishes that come within the responsibility of a developer are given to that developer who prioritises his time between the request and his wishes. A common way to do this is through a TODO-list maintained by the project. Items that do not come within someone's responsibility are collected on TODO-lists unless someone volunteers to take the responsibility. All requests, their distribution and follow-up are handled by the tool. Requirements analysis happens in two ways. The requests that come in are discussed on mailing lists, both within the main project and in the sub-project that the request belongs to or that is spawned by the request. Furthermore, individual developers on the sub-project will evaluate the feasibility of the requests and determine the prioritisation between them. Other than archives of the discussions that have taken place, no outcome is created by this phase that is merged into the main project. As the requests are prioritised by the individual developers on the basis of doing what they find interesting, necessary or are funded to do, there is no overall strategy or prioritisation of what requests to regard as requirements and following up their correct implementation. However, most developers have some shared vision of what issues are more important, and they can ask for guidelines from the release engineering team. The verification phase of the project is two-fold. Before committing code to the current branch, developers request their code to be reviewed by their peers. This review is for the most part done by functional testing, but code review is also important. When the code is committed to the branch, a broader functional testing will happen that may trigger further code review and debugging should the code not behave as expected. This second verification form may be regarded as structural verification. Although the sub-projects themselves may write formal tests such as unit tests, these are usually not collected by the main project and are usually removed before the code is committed to the current branch. More and more tests are however performed when building the system (make world). These tests are however a very new addition and no systematic framework for these tests has yet been created.
Maintenance It is an advantage to the project to have, for each area of the source, at least one person that knows the area well. Some parts of the code have designated maintainers. Others have de-facto maintainers, and some parts of the system do not have maintainers. The maintainer is usually a person from the sub-project that wrote and integrated the code, or someone who has ported it from the platform it was written for. sendmail and named are examples of code that has been merged from other platforms. The maintainer's job is to make sure the code is in sync with the project the code comes from if it is contributed code, and to apply patches submitted by the community or write fixes to issues that are discovered. The main bulk of work that is put into the FreeBSD project is maintenance. has made a figure showing the life cycle of changes.
Jørgenssen's model for change integration
Here development release refers to the -CURRENT branch while production release refers to the -STABLE branch. The pre-commit test is the functional testing done by peer developers when asked to do so, or by trying out the code to determine the status of the sub-project. Parallel debugging is the functional testing that can trigger more review, and debugging when the code is included in the -CURRENT branch. As of this writing, there were 269 committers in the project. When they commit a change to a branch, that constitutes a new release. It is very common for users in the community to track a particular branch. The immediate existence of a new release makes the changes widely available right away and allows for rapid feedback from the community. This also gives the community the response time they expect on issues that are of importance to them. This makes the community more engaged, and thus allows for more and better feedback that again spurs more maintenance and ultimately should create a better product. Before making changes to code in parts of the tree that have a history unknown to the committer, the committer is required to read the commit logs to see why certain features are implemented the way they are in order not to make mistakes that have previously either been thought through or resolved.
Problem reporting Before &os; 10, &os; included a problem reporting tool called send-pr. Problems include bug reports, feature requests, feature enhancements and notices of new versions of external software that are included in the project. Although send-pr is available, users and developers are encouraged to submit issues using our problem report form. Problem reports are sent to an email address, where they are inserted into the Problem Reports maintenance database. A classifies the problem and sends it to the correct group or maintainer within the project. After someone has taken responsibility for the report, the report is analysed. This analysis includes verifying the problem and thinking out a solution for the problem. Often feedback is required from the report originator or even from the FreeBSD community. Once a patch for the problem is made, the originator may be asked to try it out. Finally, the working patch is integrated into the project, and documented if applicable. It then goes through the regular maintenance cycle as described in section . These are the states a problem report can be in: open, analyzed, feedback, patched, suspended and closed. The suspended state is for when further progress is not possible due to the lack of information or for when the task would require so much work that nobody is working on it at the moment.
Process summary: problem reporting
A problem is reported by the report originator. It is then classified by a bugbuster and handed to the correct maintainer. He verifies the problem and discusses the problem with the originator until he has enough information to create a working patch. This patch is then committed and the problem report is closed. The roles included in this process are: .
Reacting to misbehavior has a number of rules that committers should follow. However, it happens that these rules are broken. The following rules exist in order to be able to react to misbehavior. They specify what actions will result in how long a suspension of the committer's commit privileges. Committing during code freezes without the approval of the Release Engineering team - 2 days Committing to a security branch without approval - 2 days Commit wars - 5 days to all participating parties Impolite or inappropriate behavior - 5 days For the suspensions to be efficient, any single core member can implement a suspension before discussing it on the core mailing list. Repeat offenders can, with a 2/3 vote by core, receive harsher penalties, including permanent removal of commit privileges. (However, the latter is always viewed as a last resort, due to its inherent tendency to create controversy). All suspensions are posted to the developers mailing list, a list available to committers only. It is important that you cannot be suspended for making technical errors. All penalties come from breaking social etiquette. Hats involved in this process:
Release engineering The FreeBSD project has a Release Engineering team with a principal release engineer who is responsible for creating releases of FreeBSD that can be brought out to the user community via the net or sold in retail outlets. Since FreeBSD is available on multiple platforms and releases for the different architectures are made available at the same time, the team has one person in charge of each architecture. Also, there are roles in the team responsible for coordinating quality assurance efforts, building a package set and for having an updated set of documents. When referring to the release engineer, a representative for the release engineering team is meant. When a release is coming, the FreeBSD project changes shape somewhat. A release schedule is made containing feature- and code-freezes, release of interim releases and the final release. A feature-freeze means no new features are allowed to be committed to the branch without the release engineers' explicit consent. Code-freeze means no changes to the code (like bug fixes) are allowed to be committed without the release engineer's explicit consent. This feature- and code-freeze is known as stabilising. During the release process, the release engineer has the full authority to revert to older versions of code and thus "back out" changes should he find that the changes are not suitable to be included in the release. There are three different kinds of releases: .0 releases are the first release of a major version. These are branched off the -CURRENT branch and have a significantly longer release engineering cycle due to the unstable nature of the -CURRENT branch. .X releases are releases of the -STABLE branch. They are scheduled to come out every 4 months. .X.Y releases are security releases that follow the .X branch. These come out only when sufficient security fixes have been merged since the last release on that branch. New features are rarely included, and the security team is far more involved in these than in regular releases. For releases of the -STABLE branch, the release process starts 45 days before the anticipated release date. During the first phase, the first 15 days, the developers merge what changes they have had in -CURRENT that they want to have in the release into the release branch. When this period is over, the code enters a 15 day code freeze in which only bug fixes, documentation updates, security-related fixes and minor device driver changes are allowed. These changes must be approved by the release engineer in advance. At the beginning of the last 15 day period a release candidate is created for widespread testing. Updates are less likely to be allowed during this period, except for important bug fixes and security updates. In this final period, all releases are considered release candidates. At the end of the release process, a release is created with the new version number, including binary distributions on web sites and the creation of CD-ROM images. However, the release is not considered "really released" until a -signed message stating exactly that is sent to the mailing list freebsd-announce; anything labelled as a "release" before that may well be in-process and subject to change before the PGP-signed message is sent. Many commercial vendors use these images to create CD-ROMs that are sold in retail outlets. . The releases of the -CURRENT branch (that is, all releases that end with .0) are very similar, but with twice as long a timeframe.
It starts 8 weeks prior to the release with the announcement of the release timeline. Two weeks into the release process, the feature freeze is initiated and performance tweaks should be kept to a minimum. Four weeks prior to the release, an official beta version is made available. Two weeks prior to release, the code is officially branched into a new version. This version is given release candidate status, and as with the release engineering of -STABLE, the code freeze of the release candidate is hardened. However, development on the main development branch can continue. Other than these differences, the release engineering processes are alike. .0 releases go into their own branch and are aimed mainly at early adopters. The branch then goes through a period of stabilisation, and it is not until the demands for stability are deemed to have been satisfied that the branch becomes -STABLE and -CURRENT targets the next major version. While this has usually happened around the .1 version, it is not a requirement. Most releases are made when a date deemed far enough from the previous release arrives. A target is set for having major releases every 18 months and minor releases every 4 months. The user community has made it very clear that security and stability cannot be sacrificed by self-imposed deadlines and target release dates. To keep such schedule slips from growing too long with regard to security and stability issues, extra discipline is required when committing changes to -STABLE.
Process summary: release engineering
These are the stages in the release engineering process. Multiple release candidates may be created until the release is deemed stable enough to be released.
Tools The major tools supporting the development process are Subversion, Bugzilla, Mailman, Perforce, PGP, and OpenSSH. These are externally developed tools and are commonly used in the open source world.
Subversion (SVN) Subversion (SVN) is a system to handle multiple versions of text files and tracking who committed what changes and why. A project lives within a repository and different versions are considered different branches.
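To give a feel for day-to-day use, the following is an illustrative sketch rather than a prescribed workflow; the repository URL and the checkout path are only examples. A developer might check out a working copy, inspect recent history, and review local changes like this: &prompt.user; svn checkout https://svn.FreeBSD.org/base/head /usr/src &prompt.user; svn log -l 5 /usr/src &prompt.user; svn diff /usr/src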
Bugzilla Bugzilla is a maintenance database consisting of a set of tools to track bugs at a central site. It supports the bug tracking process for sending and handling bugs as well as querying and updating the database and editing bug reports. The project uses its web interface to send Problem Reports to the projects central Bugzilla server. The committers also have web and command-line clients.
Mailman Mailman is a program that automates the management of mailing lists. The FreeBSD Project uses it to run 16 general lists, 60 technical lists, 4 limited lists and 5 lists with CVS commit logs. It is also used for many mailing lists set up and used by other people and projects in the FreeBSD community. General lists are lists for the general public, technical lists are mainly for the development of specific areas of interest, and closed lists are for internal communication not intended for the general public. The majority of all the communication in the project goes through these 85 lists (see Appendix C).
Perforce Perforce is a commercial software configuration management system developed by Perforce Systems that is available on over 50 operating systems. It is a collection of clients built around the Perforce server that contains the central file repository and tracks the operations done upon it. The clients are both clients for accessing the repository and administration of its configuration.
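As a rough illustration only, and assuming a Perforce client workspace has already been configured, a typical session against such a server could look like the following; the file name and change description are hypothetical: &prompt.user; p4 sync &prompt.user; p4 edit projects/example/driver.c &prompt.user; p4 submit -d "Describe the change here"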
Pretty Good Privacy Pretty Good Privacy, better known as PGP, is a cryptosystem using a public key architecture to allow people to digitally sign and/or encrypt information in order to ensure secure communication between two parties. A signature is used when sending information out many recipients, enabling them to verify that the information has not been tampered with before they received it. In the FreeBSD Project this is the primary means of ensuring that information has been written by the person who claims to have written it, and not altered in transit.
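For example, a recipient could verify a signed announcement with GnuPG roughly as follows; the key ring import and the file names are purely illustrative: &prompt.user; gpg --import pgpkeys.txt &prompt.user; gpg --verify announcement.txt.asc announcement.txt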
Secure Shell Secure Shell is a standard for securely logging into a remote system and for executing commands on the remote system. It allows other connections, called tunnels, to be established and protected between the two involved systems. This standard exists in two primary versions, and only version two is used for the FreeBSD Project. The most common implementation of the standard is OpenSSH that is a part of the project's main distribution. Since its source is updated more often than FreeBSD releases, the latest version is also available in the ports tree.
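Two brief, illustrative uses follow; the host name, user name, and port numbers are placeholders. The first command runs a single command on a remote machine, the second establishes a tunnel that forwards local port 8080 to port 80 on the remote system: &prompt.user; ssh devuser@host.example.org uptime &prompt.user; ssh -L 8080:localhost:80 devuser@host.example.org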
Sub-projects Sub-projects are formed to reduce the amount of communication needed to coordinate the group of developers. When a problem area is sufficiently isolated, most communication happens within the group focusing on the problem, and less communication with other groups is required than if the group were not isolated.
The Ports Subproject A port is a set of metadata and patches that are needed to correctly fetch, compile, and install an external piece of software on a FreeBSD system (a brief example of building a port is shown later in this section). The number of ports has grown at a tremendous rate, as shown by the following figure.
Number of ports added between 1996 and 2005
The figure is taken from the FreeBSD web site. It shows the number of ports available to FreeBSD in the period 1995 to 2005. The curve appears to have grown exponentially at first and then, since the middle of 2001, linearly. As the external software described by a port is often under continued development, the amount of work required to maintain the ports is already large, and increasing. This has led to the ports part of the FreeBSD project gaining a more empowered structure, and it is more and more becoming a sub-project of the FreeBSD project. Ports has its own core team with its own leader, and this team can appoint committers without FreeBSD Core's approval. Unlike in the FreeBSD Project, where a lot of maintenance work is frequently rewarded with a commit bit, the ports sub-project contains many active maintainers who are not committers. Unlike the main project, the ports tree is not branched. Every release of FreeBSD follows the current ports collection and thus has updated information available on where to find programs and how to build them. This, however, means that a port that depends on the base system may need variations depending on what version of FreeBSD it runs on. With an unbranched ports repository it is not possible to guarantee that any port will run on anything other than -CURRENT and -STABLE, in particular older, minor releases. There is neither the infrastructure nor the volunteer time needed to guarantee this. For efficiency of communication, teams depending on Ports, such as the release engineering team, have their own ports liaisons.
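To make the description of a port more concrete, the following minimal session builds and installs a program from the ports tree; the particular port (misc/figlet) is chosen only as an example: &prompt.root; cd /usr/ports/misc/figlet &prompt.root; make install clean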
The FreeBSD Documentation Project The FreeBSD Documentation project was started in January 1995. From the initial group of a project leader, four team leaders, and 16 members, it has grown to a total of 44 committers. The documentation mailing list has just under 300 members, indicating that there is quite a large community around it. The goal of the Documentation project is to provide good and useful documentation of the FreeBSD project, thus making it easier for new users to get familiar with the system and detailing advanced features for the users. The main tasks in the Documentation project are to work on current projects in the FreeBSD Documentation Set and to translate the documentation into other languages. Like the FreeBSD Project, the documentation is split into the same branches. This is done so that there is always an updated version of the documentation for each version. Only documentation errors are corrected in the security branches. Like the ports sub-project, the Documentation project can appoint documentation committers without FreeBSD Core's approval. The Documentation project has a primer. It is used both to introduce new project members to the standard tools and syntaxes and to act as a reference when working on the project.
References
Frederick P. Brooks. The Mythical Man-Month: Essays on Software Engineering, Anniversary Edition (2nd Edition). Addison-Wesley Pub Co / Pearson Education Limited, 1975, 1995. ISBN 0201835959.
Niklas Saers. A project model for the FreeBSD Project. Candidatus Scientiarum thesis, 2003. http://niklas.saers.com/thesis
Niels Jørgensen. Putting it All in the Trunk: Incremental Software Development in the FreeBSD Open Source Project. 2001. http://www.dat.ruc.dk/~nielsj/research/papers/freebsd.pdf
Project Management Institute. PMBOK Guide: A Guide to the Project Management Body of Knowledge, 2000 Edition. Project Management Institute, Newtown Square, Pennsylvania, USA, 1996, 2000. ISBN 1-880410-23-0.
The FreeBSD Project. Core Bylaws. 2002. http://www.freebsd.org/internal/bylaws.html
The FreeBSD Documentation Project. FreeBSD Developer's Handbook. 2002. http://www.freebsd.org/doc/en_US.ISO8859-1/books/developers-handbook/
The FreeBSD Project. Core team election 2002. 2002. http://election.uk.freebsd.org/candidates.html
Dag-Erling Smørgrav and Hiten Pandya. Problem Report Handling Guidelines. The FreeBSD Documentation Project, 2002. http://www.freebsd.org/doc/en/articles/pr-guidelines/article.html
Dag-Erling Smørgrav. Writing FreeBSD Problem Reports. The FreeBSD Documentation Project, 2002. http://www.freebsd.org/doc/en/articles/problem-reports/article.html
The FreeBSD Documentation Project. Committers Guide. 2001. http://www.freebsd.org/doc/en/articles/committers-guide/article.html
Murray Stokely. FreeBSD Release Engineering. The FreeBSD Documentation Project, 2002. http://www.freebsd.org/doc/en_US.ISO8859-1/articles/releng/article.html
The FreeBSD Documentation Project. FreeBSD Handbook. http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook
The FreeBSD Documentation Project. Contributors to FreeBSD. 2002. http://www.freebsd.org/doc/en_US.ISO8859-1/articles/contributors/article.html
The FreeBSD Project. Core team elections 2002. 2002. http://election.uk.freebsd.org
The FreeBSD Project. Commit Bit Expiration Policy. 2002/04/06. http://www.freebsd.org/internal/expire-bits.html
The FreeBSD Project. New Account Creation Procedure. 2002/08/19. http://www.freebsd.org/internal/new-account.html
The FreeBSD Documentation Project. FreeBSD DocEng Team Charter. 2003/03/16. http://www.freebsd.org/internal/doceng.html
Greg Lehey. Two years in the trenches: The evolution of a software project. 2002. http://www.lemis.com/grog/In-the-trenches.pdf
Index: head/en_US.ISO8859-1/books/developers-handbook/l10n/chapter.xml =================================================================== --- head/en_US.ISO8859-1/books/developers-handbook/l10n/chapter.xml (revision 48481) +++ head/en_US.ISO8859-1/books/developers-handbook/l10n/chapter.xml (revision 48482) @@ -1,319 +1,319 @@ Localization and Internationalization - L10N and I18N Programming I18N Compliant Applications Qt GTK To make your application more useful for speakers of other languages, we hope that you will program I18N compliant. The GNU gcc compiler and GUI libraries like QT and GTK support I18N through special handling of strings. Making a program I18N compliant is very easy. It allows contributors to port your application to other languages quickly. Refer to the library specific I18N documentation for more details. In contrast with common perception, I18N compliant code is easy to write. Usually, it only involves wrapping your strings with library specific functions. In addition, please be sure to allow for wide or multibyte character support. A Call to Unify the I18N Effort It has come to our attention that the individual I18N/L10N efforts for each country has been repeating each others' efforts. Many of us have been reinventing the wheel repeatedly and inefficiently. We hope that the various major groups in I18N could congregate into a group effort similar to the Core Team's responsibility. Currently, we hope that, when you write or port I18N programs, you would send it out to each country's related FreeBSD mailing list for testing. In the future, we hope to create applications that work in all the languages out-of-the-box without dirty hacks. The &a.i18n; has been established. If you are an I18N/L10N developer, please send your comments, ideas, questions, and anything you deem related to it. Perl and Python Perl Python Perl and Python have I18N and wide character handling libraries. Please use them for I18N compliance. Localized Messages with POSIX.1 Native Language Support (NLS) GáborKövesdánContributed by Beyond the basic I18N functions, like supporting various input encodings or supporting national conventions, such as the different decimal separators, at a higher level of I18N, it is possible to localize the messages written to the output by the various programs. A common way of doing this is using the POSIX.1 NLS functions, which are provided as a part of the &os; base system. Organizing Localized Messages into Catalog Files POSIX.1 NLS is based on catalog files, which contain the localized messages in the desired encoding. The messages are organized into sets and each message is identified by an integer number in the containing set. The catalog files are conventionally named after the locale they contain localized messages for, followed by the .msg extension. For instance, the Hungarian messages for ISO8859-2 encoding should be stored in a file called hu_HU.ISO8859-2. These catalog files are common text files that contain the numbered messages. It is possible to write comments by starting the line with a $ sign. Set boundaries are also separated by special comments, where the keyword set must directly follow the $ sign. The set keyword is then followed by the set number. For example: $set 1 The actual message entries start with the message number and followed by the localized message. 
The well-known modifiers from &man.printf.3; are accepted: 15 "File not found: %s\n" The language catalog files have to be compiled into a binary form before they can be opened from the program. This conversion is done with the &man.gencat.1; utility. Its first argument is the filename of the compiled catalog and its further arguments are the input catalogs. The localized messages can also be organized into more catalog files and then all of them can be processed with &man.gencat.1;. Using the Catalog Files from the Source Code Using the catalog files is simple. To use the related functions, nl_types.h must be included. Before using a catalog, it has to be opened with &man.catopen.3;. The function takes two arguments. The first parameter is the name of the installed and compiled catalog. Usually, the name of the program is used, such as grep. This name will be used when looking for the compiled catalog file. The &man.catopen.3; call looks for this file in /usr/share/nls/locale/catname and in /usr/local/share/nls/locale/catname, where locale is the locale set and catname is the catalog name being discussed. The second parameter is a constant, which can have two values: NL_CAT_LOCALE, which means that the used catalog file will be based on LC_MESSAGES. 0, which means that LANG has to be used to open the proper catalog. The &man.catopen.3; call returns a catalog identifier of type nl_catd. Please refer to the manual page for a list of possible returned error codes. After opening a catalog &man.catgets.3; can be used to retrieve a message. The first parameter is the catalog identifier returned by &man.catopen.3;, the second one is the number of the set, the third one is the number of the messages, and the fourth one is a fallback message, which will be returned if the requested message cannot be retrieved from the catalog file. After using the catalog file, it must be closed by calling &man.catclose.3;, which has one argument, the catalog id. A Practical Example The following example will demonstrate an easy solution on how to use NLS catalogs in a flexible way. The below lines need to be put into a common header file of the program, which is included into all source files where localized messages are necessary: #ifdef WITHOUT_NLS #define getstr(n) nlsstr[n] #else #include <nl_types.h> extern nl_catd catalog; #define getstr(n) catgets(catalog, 1, n, nlsstr[n]) #endif extern char *nlsstr[]; Next, put these lines into the global declaration part of the main source file: #ifndef WITHOUT_NLS #include <nl_types.h> nl_catd catalog; #endif /* * Default messages to use when NLS is disabled or no catalog * is found. */ char *nlsstr[] = { "", /* 1*/ "some random message", /* 2*/ "some other message" }; Next come the real code snippets, which open, read, and close the catalog: #ifndef WITHOUT_NLS catalog = catopen("myapp", NL_CAT_LOCALE); #endif ... printf(getstr(1)); ... #ifndef WITHOUT_NLS catclose(catalog); #endif Reducing Strings to Localize There is a good way of reducing the strings that need to be localized by using libc error messages. This is also useful to just avoid duplication and provide consistent error messages for the common errors that can be encountered by a great many of programs. First, here is an example that does not use libc error messages: #include <err.h> ... if (!S_ISDIR(st.st_mode)) errx(1, "argument is not a directory"); This can be transformed to print an error message by reading errno and printing an error message accordingly: #include <err.h> #include <errno.h> ... 
if (!S_ISDIR(st.st_mode)) { errno = ENOTDIR; err(1, NULL); } In this example, the custom string is eliminated, thus translators will have less work when localizing the program and users will see the usual Not a directory error message when they encounter this error. This message will probably seem more familiar to them. Please note that it was necessary to include errno.h in order to directly access errno. It is worth to note that there are cases when errno is set automatically by a preceding call, so it is not necessary to set it explicitly: #include <err.h> ... if ((p = malloc(size)) == NULL) err(1, NULL); Making use of <filename>bsd.nls.mk</filename> Using the catalog files requires few repeatable steps, such as compiling the catalogs and installing them to the proper location. In order to simplify this process even more, bsd.nls.mk introduces some macros. It is not necessary to include bsd.nls.mk explicitly, it is pulled in from the common Makefiles, such as bsd.prog.mk or bsd.lib.mk. Usually it is enough to define NLSNAME, which should have the catalog name mentioned as the first argument of &man.catopen.3; and list the catalog files in NLS without their .msg extension. Here is an example, which makes it possible to to disable NLS when used with the code examples before. The WITHOUT_NLS &man.make.1; variable has to be defined in order to build the program without NLS support. .if !defined(WITHOUT_NLS) NLS= es_ES.ISO8859-1 NLS+= hu_HU.ISO8859-2 NLS+= pt_BR.ISO8859-1 .else CFLAGS+= -DWITHOUT_NLS .endif Conventionally, the catalog files are placed under the nls subdirectory and - this is the default behaviour of bsd.nls.mk. + this is the default behavior of bsd.nls.mk. It is possible, though to override the location of the catalogs with the NLSSRCDIR &man.make.1; variable. The default name of the precompiled catalog files also follow the naming convention mentioned before. It can be overridden by setting the NLSNAME variable. There are other options to fine tune the processing of the catalog files but usually it is not needed, thus they are not described here. For further information on bsd.nls.mk, please refer to the file itself, it is short and easy to understand. Index: head/en_US.ISO8859-1/books/handbook/advanced-networking/chapter.xml =================================================================== --- head/en_US.ISO8859-1/books/handbook/advanced-networking/chapter.xml (revision 48481) +++ head/en_US.ISO8859-1/books/handbook/advanced-networking/chapter.xml (revision 48482) @@ -1,5204 +1,5204 @@ Advanced Networking Synopsis This chapter covers a number of advanced networking topics. After reading this chapter, you will know: The basics of gateways and routes. How to set up USB tethering. How to set up &ieee; 802.11 and &bluetooth; devices. How to make &os; act as a bridge. How to set up network PXE booting. How to set up IPv6 on a &os; machine. How to enable and utilize the features of the Common Address Redundancy Protocol (CARP) in &os;. Before reading this chapter, you should: Understand the basics of the /etc/rc scripts. Be familiar with basic network terminology. Know how to configure and install a new &os; kernel (). Know how to install additional third-party software (). Gateways and Routes Coranth Gryphon Contributed by routing gateway subnet Routing is the mechanism that allows a system to find the network path to another system. A route is a defined pair of addresses which represent the destination and a gateway. 
The route indicates that when trying to get to the specified destination, send the packets through the specified gateway. There are three types of destinations: individual hosts, subnets, and default. The default route is used if no other routes apply. There are also three types of gateways: individual hosts, interfaces, also called links, and Ethernet hardware (MAC) addresses. Known routes are stored in a routing table. This section provides an overview of routing basics. It then demonstrates how to configure a &os; system as a router and offers some troubleshooting tips. Routing Basics To view the routing table of a &os; system, use &man.netstat.1;: &prompt.user; netstat -r Routing tables Internet: Destination Gateway Flags Refs Use Netif Expire default outside-gw UGS 37 418 em0 localhost localhost UH 0 181 lo0 test0 0:e0:b5:36:cf:4f UHLW 5 63288 re0 77 10.20.30.255 link#1 UHLW 1 2421 example.com link#1 UC 0 0 host1 0:e0:a8:37:8:1e UHLW 3 4601 lo0 host2 0:e0:a8:37:8:1e UHLW 0 5 lo0 => host2.example.com link#1 UC 0 0 224 link#1 UC 0 0 The entries in this example are as follows: default The first route in this table specifies the default route. When the local system needs to make a connection to a remote host, it checks the routing table to determine if a known path exists. If the remote host matches an entry in the table, the system checks to see if it can connect using the interface specified in that entry. If the destination does not match an entry, or if all known paths fail, the system uses the entry for the default route. For hosts on a local area network, the Gateway field in the default route is set to the system which has a direct connection to the Internet. When reading this entry, verify that the Flags column indicates that the gateway is usable (UG). The default route for a machine which itself is functioning as the gateway to the outside world will be the gateway machine at the Internet Service Provider (ISP). localhost The second route is the localhost route. The interface specified in the Netif column for localhost is lo0, also known as the loopback device. This indicates that all traffic for this destination should be internal, rather than sending it out over the network. MAC address The addresses beginning with 0:e0: are MAC addresses. &os; will automatically identify any hosts, test0 in the example, on the local Ethernet and add a route for that host over the Ethernet interface, re0. This type of route has a timeout, seen in the Expire column, which is used if the host does not respond in a specific amount of time. When this happens, the route to this host will be automatically deleted. These hosts are identified using the Routing Information Protocol (RIP), which calculates routes to local hosts based upon a shortest path determination. subnet &os; will automatically add subnet routes for the local subnet. In this example, 10.20.30.255 is the broadcast address for the subnet 10.20.30 and example.com is the domain name associated with that subnet. The designation link#1 refers to the first Ethernet card in the machine. Local network hosts and local subnets have their routes automatically configured by a daemon called &man.routed.8;. If it is not running, only routes which are statically defined by the administrator will exist. host The host1 line refers to the host by its Ethernet address. Since it is the sending host, &os; knows to use the loopback interface (lo0) rather than the Ethernet interface. 
The two host2 lines represent aliases which were created using &man.ifconfig.8;. The => symbol after the lo0 interface says that an alias has been set in addition to the loopback address. Such routes only show up on the host that supports the alias and all other hosts on the local network will have a link#1 line for such routes. 224 The final line (destination subnet 224) deals with multicasting. Various attributes of each route can be seen in the Flags column. summarizes some of these flags and their meanings: Commonly Seen Routing Table Flags Command Purpose U The route is active (up). H The route destination is a single host. G Send anything for this destination on to this gateway, which will figure out from there where to send it. S This route was statically configured. C Clones a new route based upon this route for machines to connect to. This type of route is normally used for local networks. W The route was auto-configured based upon a local area network (clone) route. L Route involves references to Ethernet (link) hardware.
On a &os; system, the default route can be defined in /etc/rc.conf by specifying the IP address of the default gateway: defaultrouter="10.20.30.1" It is also possible to manually add the route using route: &prompt.root; route add default 10.20.30.1 Note that manually added routes will not survive a reboot. For more information on manual manipulation of network routing tables, refer to &man.route.8;.
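To verify that the default route is in place, it can be inspected directly; this is only an illustrative check and the gateway shown will differ between systems: &prompt.root; route -n get default &prompt.user; netstat -rn | grep default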
Configuring a Router with Static Routes Al Hoang Contributed by dual homed hosts A &os; system can be configured as the default gateway, or router, for a network if it is a dual-homed system. A dual-homed system is a host which resides on at least two different networks. Typically, each network is connected to a separate network interface, though IP aliasing can be used to bind multiple addresses, each on a different subnet, to one physical interface. router In order for the system to forward packets between interfaces, &os; must be configured as a router. Internet standards and good engineering practice prevent the &os; Project from enabling this feature by default, but it can be configured to start at boot by adding this line to /etc/rc.conf: gateway_enable="YES" # Set to YES if this host will be a gateway To enable routing now, set the &man.sysctl.8; variable net.inet.ip.forwarding to 1. To stop routing, reset this variable to 0. BGP RIP OSPF The routing table of a router needs additional routes so it knows how to reach other networks. Routes can be either added manually using static routes or routes can be automatically learned using a routing protocol. Static routes are appropriate for small networks and this section describes how to add a static routing entry for a small network. For large networks, static routes quickly become unscalable. &os; comes with the standard BSD routing daemon &man.routed.8;, which provides the routing protocols RIP, versions 1 and 2, and IRDP. Support for the BGP and OSPF routing protocols can be installed using the net/zebra package or port. Consider the following network: INTERNET | (10.0.0.1/24) Default Router to Internet | |Interface xl0 |10.0.0.10/24 +------+ | | RouterA | | (FreeBSD gateway) +------+ | Interface xl1 | 192.168.1.1/24 | +--------------------------------+ Internal Net 1 | 192.168.1.2/24 | +------+ | | RouterB | | +------+ | 192.168.2.1/24 | Internal Net 2 In this scenario, RouterA is a &os; machine that is acting as a router to the rest of the Internet. It has a default route set to 10.0.0.1 which allows it to connect with the outside world. RouterB is already configured to use 192.168.1.1 as its default gateway. Before adding any static routes, the routing table on RouterA looks like this: &prompt.user; netstat -nr Routing tables Internet: Destination Gateway Flags Refs Use Netif Expire default 10.0.0.1 UGS 0 49378 xl0 127.0.0.1 127.0.0.1 UH 0 6 lo0 10.0.0.0/24 link#1 UC 0 0 xl0 192.168.1.0/24 link#2 UC 0 0 xl1 With the current routing table, RouterA does not have a route to the 192.168.2.0/24 network. The following command adds the Internal Net 2 network to RouterA's routing table using 192.168.1.2 as the next hop: &prompt.root; route add -net 192.168.2.0/24 192.168.1.2 Now, RouterA can reach any host on the 192.168.2.0/24 network. However, the routing information will not persist if the &os; system reboots. If a static route needs to be persistent, add it to /etc/rc.conf: # Add Internal Net 2 as a persistent static route static_routes="internalnet2" route_internalnet2="-net 192.168.2.0/24 192.168.1.2" The static_routes configuration variable is a list of strings separated by a space, where each string references a route name. The variable route_internalnet2 contains the static route for that route name. Using more than one string in static_routes creates multiple static routes. 
The following shows an example of adding static routes for the 192.168.0.0/24 and 192.168.1.0/24 networks: static_routes="net1 net2" route_net1="-net 192.168.0.0/24 192.168.0.1" route_net2="-net 192.168.1.0/24 192.168.1.1" Troubleshooting When an address space is assigned to a network, the service provider configures their routing tables so that all traffic for the network will be sent to the link for the site. But how do external sites know to send their packets to the network's ISP? There is a system that keeps track of all assigned address spaces and defines their point of connection to the Internet backbone, or the main trunk lines that carry Internet traffic across the country and around the world. Each backbone machine has a copy of a master set of tables, which direct traffic for a particular network to a specific backbone carrier, and from there down the chain of service providers until it reaches a particular network. It is the task of the service provider to advertise to the backbone sites that they are the point of connection, and thus the path inward, for a site. This is known as route propagation. &man.traceroute.8; Sometimes, there is a problem with route propagation and some sites are unable to connect. Perhaps the most useful command for trying to figure out where routing is breaking down is traceroute. It is useful when ping fails. When using traceroute, include the address of the remote host to connect to. The output will show the gateway hosts along the path of the attempt, eventually either reaching the target host, or terminating because of a lack of connection. For more information, refer to &man.traceroute.8;. Multicast Considerations multicast routing kernel options MROUTING &os; natively supports both multicast applications and multicast routing. Multicast applications do not require any special configuration in order to run on &os;. Support for multicast routing requires that the following option be compiled into a custom kernel: options MROUTING The multicast routing daemon, mrouted can be installed using the net/mrouted package or port. This daemon implements the DVMRP multicast routing protocol and is configured by editing /usr/local/etc/mrouted.conf in order to set up the tunnels and DVMRP. The installation of mrouted also installs map-mbone and mrinfo, as well as their associated man pages. Refer to these for configuration examples. DVMRP has largely been replaced by the PIM protocol in many multicast installations. Refer to &man.pim.4; for more information.
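If mrouted is installed from the port or package, it can normally be enabled at boot through its rc.d script; this sketch assumes the port installs the usual script honoring an mrouted_enable knob, so check the installed script to confirm. Add the following to /etc/rc.conf: mrouted_enable="YES" and then start the daemon with: &prompt.root; service mrouted start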
Wireless Networking Loader Marc Fonvieille Murray Stokely wireless networking 802.11 wireless networking Wireless Networking Basics Most wireless networks are based on the &ieee; 802.11 standards. A basic wireless network consists of multiple stations communicating with radios that broadcast in either the 2.4GHz or 5GHz band, though this varies according to the locale and is also changing to enable communication in the 2.3GHz and 4.9GHz ranges. 802.11 networks are organized in two ways. In infrastructure mode, one station acts as a master with all the other stations associating to it, the network is known as a BSS, and the master station is termed an access point (AP). In a BSS, all communication passes through the AP; even when one station wants to communicate with another wireless station, messages must go through the AP. In the second form of network, there is no master and stations communicate directly. This form of network is termed an IBSS and is commonly known as an ad-hoc network. 802.11 networks were first deployed in the 2.4GHz band using protocols defined by the &ieee; 802.11 and 802.11b standard. These specifications include the operating frequencies and the MAC layer characteristics, including framing and transmission rates, as communication can occur at various rates. Later, the 802.11a standard defined operation in the 5GHz band, including different signaling mechanisms and higher transmission rates. Still later, the 802.11g standard defined the use of 802.11a signaling and transmission mechanisms in the 2.4GHz band in such a way as to be backwards compatible with 802.11b networks. Separate from the underlying transmission techniques, 802.11 networks have a variety of security mechanisms. The original 802.11 specifications defined a simple security protocol called WEP. This protocol uses a fixed pre-shared key and the RC4 cryptographic cipher to encode data transmitted on a network. Stations must all agree on the fixed key in order to communicate. This scheme was shown to be easily broken and is now rarely used except to discourage transient users from joining networks. Current security practice is given by the &ieee; 802.11i specification that defines new cryptographic ciphers and an additional protocol to authenticate stations to an access point and exchange keys for data communication. Cryptographic keys are periodically refreshed and there are mechanisms for detecting and countering intrusion attempts. Another security protocol specification commonly used in wireless networks is termed WPA, which was a precursor to 802.11i. WPA specifies a subset of the requirements found in 802.11i and is designed for implementation on legacy hardware. Specifically, WPA requires only the TKIP cipher that is derived from the original WEP cipher. 802.11i permits use of TKIP but also requires support for a stronger cipher, AES-CCM, for encrypting data. The AES cipher was not required in WPA because it was deemed too computationally costly to be implemented on legacy hardware. The other standard to be aware of is 802.11e. It defines protocols for deploying multimedia applications, such as streaming video and voice over IP (VoIP), in an 802.11 network. Like 802.11i, 802.11e also has a precursor specification termed WME (later renamed WMM) that has been defined by an industry group as a subset of 802.11e that can be deployed now to enable multimedia applications while waiting for the final ratification of 802.11e. 
The most important thing to know about 802.11e and WME/WMM is that it enables prioritized traffic over a wireless network through Quality of Service (QoS) protocols and enhanced media access protocols. Proper implementation of these protocols enables high speed bursting of data and prioritized traffic flow. &os; supports networks that operate using 802.11a, 802.11b, and 802.11g. The WPA and 802.11i security protocols are likewise supported (in conjunction with any of 11a, 11b, and 11g) and QoS and traffic prioritization required by the WME/WMM protocols are supported for a limited set of wireless devices. Quick Start Connecting a computer to an existing wireless network is a very common situation. This procedure shows the steps required. Obtain the SSID (Service Set Identifier) and PSK (Pre-Shared Key) for the wireless network from the network administrator. Identify the wireless adapter. The &os; GENERIC kernel includes drivers for many common wireless adapters. If the wireless adapter is one of those models, it will be shown in the output from &man.ifconfig.8;: &prompt.user; ifconfig | grep -B3 -i wireless If a wireless adapter is not listed, an additional kernel module might be required, or it might be a model not supported by &os;. This example shows the Atheros ath0 wireless adapter. Add an entry for this network to /etc/wpa_supplicant.conf. If the file does not exist, create it. Replace myssid and mypsk with the SSID and PSK provided by the network administrator. network={ ssid="myssid" psk="mypsk" } Add entries to /etc/rc.conf to configure the network on startup: wlans_ath0="wlan0" ifconfig_wlan0="WPA SYNCDHCP" Restart the computer, or restart the network service to connect to the network: &prompt.root; service netif restart Basic Setup Kernel Configuration To use wireless networking, a wireless networking card is needed and the kernel needs to be configured with the appropriate wireless networking support. The kernel is separated into multiple modules so that only the required support needs to be configured. The most commonly used wireless devices are those that use parts made by Atheros. These devices are supported by &man.ath.4; and require the following line to be added to /boot/loader.conf: if_ath_load="YES" The Atheros driver is split up into three separate pieces: the driver (&man.ath.4;), the hardware support layer that handles chip-specific functions (&man.ath.hal.4;), and an algorithm for selecting the rate for transmitting frames. When this support is loaded as kernel modules, any dependencies are automatically handled. To load support for a different type of wireless device, specify the module for that device. This example is for devices based on the Intersil Prism parts (&man.wi.4;) driver: if_wi_load="YES" The examples in this section use an &man.ath.4; device and the device name in the examples must be changed according to the configuration. A list of available wireless drivers and supported adapters can be found in the &os; Hardware Notes, available on the Release Information page of the &os; website. If a native &os; driver for the wireless device does not exist, it may be possible to use the &windows; driver with the help of the NDIS driver wrapper. In addition, the modules that implement cryptographic support for the security protocols to use must be loaded. These are intended to be dynamically loaded on demand by the &man.wlan.4; module, but for now they must be manually configured. 
The following modules are available: &man.wlan.wep.4;, &man.wlan.ccmp.4;, and &man.wlan.tkip.4;. The &man.wlan.ccmp.4; and &man.wlan.tkip.4; drivers are only needed when using the WPA or 802.11i security protocols. If the network does not use encryption, &man.wlan.wep.4; support is not needed. To load these modules at boot time, add the following lines to /boot/loader.conf: wlan_wep_load="YES" wlan_ccmp_load="YES" wlan_tkip_load="YES" Once this information has been added to /boot/loader.conf, reboot the &os; box. Alternately, load the modules by hand using &man.kldload.8;. For users who do not want to use modules, it is possible to compile these drivers into the kernel by adding the following lines to a custom kernel configuration file: device wlan # 802.11 support device wlan_wep # 802.11 WEP support device wlan_ccmp # 802.11 CCMP support device wlan_tkip # 802.11 TKIP support device wlan_amrr # AMRR transmit rate control algorithm device ath # Atheros pci/cardbus NIC's device ath_hal # pci/cardbus chip support options AH_SUPPORT_AR5416 # enable AR5416 tx/rx descriptors device ath_rate_sample # SampleRate tx rate control for ath With this information in the kernel configuration file, recompile the kernel and reboot the &os; machine. Information about the wireless device should appear in the boot messages, like this: ath0: <Atheros 5212> mem 0x88000000-0x8800ffff irq 11 at device 0.0 on cardbus1 ath0: [ITHREAD] ath0: AR2413 mac 7.9 RF2413 phy 4.5 Infrastructure Mode Infrastructure (BSS) mode is the mode that is typically used. In this mode, a number of wireless access points are connected to a wired network. Each wireless network has its own name, called the SSID. Wireless clients connect to the wireless access points. &os; Clients How to Find Access Points To scan for available networks, use &man.ifconfig.8;. This request may take a few moments to complete as it requires the system to switch to each available wireless frequency and probe for available access points. Only the superuser can initiate a scan: &prompt.root; ifconfig wlan0 create wlandev ath0 &prompt.root; ifconfig wlan0 up scan SSID/MESH ID BSSID CHAN RATE S:N INT CAPS dlinkap 00:13:46:49:41:76 11 54M -90:96 100 EPS WPA WME freebsdap 00:11:95:c3:0d:ac 1 54M -83:96 100 EPS WPA The interface must be before it can scan. Subsequent scan requests do not require the interface to be marked as up again. The output of a scan request lists each BSS/IBSS network found. Besides listing the name of the network, the SSID, the output also shows the BSSID, which is the MAC address of the access point. The CAPS field identifies the type of each network and the capabilities of the stations operating there: Station Capability Codes Capability Code Meaning E Extended Service Set (ESS). Indicates that the station is part of an infrastructure network rather than an IBSS/ad-hoc network. I IBSS/ad-hoc network. Indicates that the station is part of an ad-hoc network rather than an ESS network. P Privacy. Encryption is required for all data frames exchanged within the BSS using cryptographic means such as WEP, TKIP or AES-CCMP. S Short Preamble. Indicates that the network is using short preambles, defined in 802.11b High Rate/DSSS PHY, and utilizes a 56 bit sync field rather than the 128 bit field used in long preamble mode. s Short slot time. Indicates that the 802.11g network is using a short slot time because there are no legacy (802.11b) stations present.
One can also display the current list of known networks with: &prompt.root; ifconfig wlan0 list scan This information may be updated automatically by the adapter or manually with a request. Old data is automatically removed from the cache, so over time this list may shrink unless more scans are done.
Basic Settings This section provides a simple example of how to make the wireless network adapter work in &os; without encryption. Once familiar with these concepts, it is strongly recommend to use WPA to set up the wireless network. There are three basic steps to configure a wireless network: select an access point, authenticate the station, and configure an IP address. The following sections discuss each step. Selecting an Access Point Most of the time, it is sufficient to let the system choose an access point using the builtin heuristics. - This is the default behaviour when an interface is + This is the default behavior when an interface is marked as up or it is listed in /etc/rc.conf: wlans_ath0="wlan0" ifconfig_wlan0="DHCP" If there are multiple access points, a specific one can be selected by its SSID: wlans_ath0="wlan0" ifconfig_wlan0="ssid your_ssid_here DHCP" In an environment where there are multiple access points with the same SSID, which is often done to simplify roaming, it may be necessary to associate to one specific device. In this case, the BSSID of the access point can be specified, with or without the SSID: wlans_ath0="wlan0" ifconfig_wlan0="ssid your_ssid_here bssid xx:xx:xx:xx:xx:xx DHCP" There are other ways to constrain the choice of an access point, such as limiting the set of frequencies the system will scan on. This may be useful for a multi-band wireless card as scanning all the possible channels can be time-consuming. To limit operation to a specific band, use the parameter: wlans_ath0="wlan0" ifconfig_wlan0="mode 11g ssid your_ssid_here DHCP" This example will force the card to operate in 802.11g, which is defined only for 2.4GHz frequencies so any 5GHz channels will not be considered. This can also be achieved with the parameter, which locks operation to one specific frequency, and the parameter, to specify a list of channels for scanning. More information about these parameters can be found in &man.ifconfig.8;. Authentication Once an access point is selected, the station needs to authenticate before it can pass data. Authentication can happen in several ways. The most common scheme, open authentication, allows any station to join the network and communicate. This is the authentication to use for test purposes the first time a wireless network is setup. Other schemes require cryptographic handshakes to be completed before data traffic can flow, either using pre-shared keys or secrets, or more complex schemes that involve backend services such as RADIUS. Open authentication is the default setting. The next most common setup is WPA-PSK, also known as WPA Personal, which is described in . If using an &apple; &airport; Extreme base station for an access point, shared-key authentication together with a WEP key needs to be configured. This can be configured in /etc/rc.conf or by using &man.wpa.supplicant.8;. For a single &airport; base station, access can be configured with: wlans_ath0="wlan0" ifconfig_wlan0="authmode shared wepmode on weptxkey 1 wepkey 01234567 DHCP" In general, shared key authentication should be avoided because it uses the WEP key material in a highly-constrained manner, making it even easier to crack the key. If WEP must be used for compatibility with legacy devices, it is better to use WEP with open authentication. More information regarding WEP can be found in . 
Getting an <acronym>IP</acronym> Address with <acronym>DHCP</acronym> Once an access point is selected and the authentication parameters are set, an IP address must be obtained in order to communicate. Most of the time, the IP address is obtained via DHCP. To achieve that, edit /etc/rc.conf and add DHCP to the configuration for the device: wlans_ath0="wlan0" ifconfig_wlan0="DHCP" The wireless interface is now ready to bring up: &prompt.root; service netif start Once the interface is running, use &man.ifconfig.8; to see the status of the interface ath0: &prompt.root; ifconfig wlan0 wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether 00:11:95:d5:43:62 inet 192.168.1.100 netmask 0xffffff00 broadcast 192.168.1.255 media: IEEE 802.11 Wireless Ethernet OFDM/54Mbps mode 11g status: associated ssid dlinkap channel 11 (2462 Mhz 11g) bssid 00:13:46:49:41:76 country US ecm authmode OPEN privacy OFF txpower 21.5 bmiss 7 scanvalid 60 bgscan bgscanintvl 300 bgscanidle 250 roam:rssi 7 roam:rate 5 protmode CTS wme burst The status: associated line means that it is connected to the wireless network. The bssid 00:13:46:49:41:76 is the MAC address of the access point and authmode OPEN indicates that the communication is not encrypted. Static <acronym>IP</acronym> Address If an IP address cannot be obtained from a DHCP server, set a fixed IP address. Replace the DHCP keyword shown above with the address information. Be sure to retain any other parameters for selecting the access point: wlans_ath0="wlan0" ifconfig_wlan0="inet 192.168.1.100 netmask 255.255.255.0 ssid your_ssid_here" <acronym>WPA</acronym> Wi-Fi Protected Access (WPA) is a security protocol used together with 802.11 networks to address the lack of proper authentication and the weakness of WEP. WPA leverages the 802.1X authentication protocol and uses one of several ciphers instead of WEP for data integrity. The only cipher required by WPA is the Temporary Key Integrity Protocol (TKIP). TKIP is a cipher that extends the basic RC4 cipher used by WEP by adding integrity checking, tamper detection, and measures for responding to detected intrusions. TKIP is designed to work on legacy hardware with only software modification. It represents a compromise that improves security but is still not entirely immune to attack. WPA also specifies the AES-CCMP cipher as an alternative to TKIP, and that is preferred when possible. For this specification, the term WPA2 or RSN is commonly used. WPA defines authentication and encryption protocols. Authentication is most commonly done using one of two techniques: by 802.1X and a backend authentication service such as RADIUS, or by a minimal handshake between the station and the access point using a pre-shared secret. The former is commonly termed WPA Enterprise and the latter is known as WPA Personal. Since most people will not set up a RADIUS backend server for their wireless network, WPA-PSK is by far the most commonly encountered configuration for WPA. The control of the wireless connection and the key negotiation or authentication with a server is done using &man.wpa.supplicant.8;. This program requires a configuration file, /etc/wpa_supplicant.conf, to run. More information regarding this file can be found in &man.wpa.supplicant.conf.5;. <acronym>WPA-PSK</acronym> WPA-PSK, also known as WPA Personal, is based on a pre-shared key (PSK) which is generated from a given password and used as the master key in the wireless network. This means every wireless user will share the same key. 
WPA-PSK is intended for small networks where the use of an authentication server is not possible or desired. Always use strong passwords that are sufficiently long and made from a rich alphabet so that they will not be easily guessed or attacked. The first step is the configuration of /etc/wpa_supplicant.conf with the SSID and the pre-shared key of the network: network={ ssid="freebsdap" psk="freebsdmall" } Then, in /etc/rc.conf, indicate that the wireless device configuration will be done with WPA and the IP address will be obtained with DHCP: wlans_ath0="wlan0" ifconfig_wlan0="WPA DHCP" Then, bring up the interface: &prompt.root; service netif start Starting wpa_supplicant. DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 5 DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 6 DHCPOFFER from 192.168.0.1 DHCPREQUEST on wlan0 to 255.255.255.255 port 67 DHCPACK from 192.168.0.1 bound to 192.168.0.254 -- renewal in 300 seconds. wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether 00:11:95:d5:43:62 inet 192.168.0.254 netmask 0xffffff00 broadcast 192.168.0.255 media: IEEE 802.11 Wireless Ethernet OFDM/36Mbps mode 11g status: associated ssid freebsdap channel 1 (2412 Mhz 11g) bssid 00:11:95:c3:0d:ac country US ecm authmode WPA2/802.11i privacy ON deftxkey UNDEF AES-CCM 3:128-bit txpower 21.5 bmiss 7 scanvalid 450 bgscan bgscanintvl 300 bgscanidle 250 roam:rssi 7 roam:rate 5 protmode CTS wme burst roaming MANUAL Or, try to configure the interface manually using the information in /etc/wpa_supplicant.conf: &prompt.root; wpa_supplicant -i wlan0 -c /etc/wpa_supplicant.conf Trying to associate with 00:11:95:c3:0d:ac (SSID='freebsdap' freq=2412 MHz) Associated with 00:11:95:c3:0d:ac WPA: Key negotiation completed with 00:11:95:c3:0d:ac [PTK=CCMP GTK=CCMP] CTRL-EVENT-CONNECTED - Connection to 00:11:95:c3:0d:ac completed (auth) [id=0 id_str=] The next operation is to launch &man.dhclient.8; to get the IP address from the DHCP server: &prompt.root; dhclient wlan0 DHCPREQUEST on wlan0 to 255.255.255.255 port 67 DHCPACK from 192.168.0.1 bound to 192.168.0.254 -- renewal in 300 seconds. &prompt.root; ifconfig wlan0 wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether 00:11:95:d5:43:62 inet 192.168.0.254 netmask 0xffffff00 broadcast 192.168.0.255 media: IEEE 802.11 Wireless Ethernet OFDM/36Mbps mode 11g status: associated ssid freebsdap channel 1 (2412 Mhz 11g) bssid 00:11:95:c3:0d:ac country US ecm authmode WPA2/802.11i privacy ON deftxkey UNDEF AES-CCM 3:128-bit txpower 21.5 bmiss 7 scanvalid 450 bgscan bgscanintvl 300 bgscanidle 250 roam:rssi 7 roam:rate 5 protmode CTS wme burst roaming MANUAL If /etc/rc.conf has an ifconfig_wlan0="DHCP" entry, &man.dhclient.8; will be launched automatically after &man.wpa.supplicant.8; associates with the access point. 
If DHCP is not possible or desired, set a static IP address after &man.wpa.supplicant.8; has authenticated the station: &prompt.root; ifconfig wlan0 inet 192.168.0.100 netmask 255.255.255.0 &prompt.root; ifconfig wlan0 wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether 00:11:95:d5:43:62 inet 192.168.0.100 netmask 0xffffff00 broadcast 192.168.0.255 media: IEEE 802.11 Wireless Ethernet OFDM/36Mbps mode 11g status: associated ssid freebsdap channel 1 (2412 Mhz 11g) bssid 00:11:95:c3:0d:ac country US ecm authmode WPA2/802.11i privacy ON deftxkey UNDEF AES-CCM 3:128-bit txpower 21.5 bmiss 7 scanvalid 450 bgscan bgscanintvl 300 bgscanidle 250 roam:rssi 7 roam:rate 5 protmode CTS wme burst roaming MANUAL When DHCP is not used, the default gateway and the nameserver also have to be manually set: &prompt.root; route add default your_default_router &prompt.root; echo "nameserver your_DNS_server" >> /etc/resolv.conf <acronym>WPA</acronym> with <acronym>EAP-TLS</acronym> The second way to use WPA is with an 802.1X backend authentication server. In this case, WPA is called WPA Enterprise to differentiate it from the less secure WPA Personal. Authentication in WPA Enterprise is based on the Extensible Authentication Protocol (EAP). EAP does not come with an encryption method. Instead, EAP is embedded inside an encrypted tunnel. There are many EAP authentication methods, but EAP-TLS, EAP-TTLS, and EAP-PEAP are the most common. EAP with Transport Layer Security (EAP-TLS) is a well-supported wireless authentication protocol since it was the first EAP method to be certified by the Wi-Fi Alliance. EAP-TLS requires three certificates to run: the certificate of the Certificate Authority (CA) installed on all machines, the server certificate for the authentication server, and one client certificate for each wireless client. In this EAP method, both the authentication server and wireless client authenticate each other by presenting their respective certificates, and then verify that these certificates were signed by the organization's CA. As previously, the configuration is done via /etc/wpa_supplicant.conf: network={ ssid="freebsdap" proto=RSN key_mgmt=WPA-EAP eap=TLS identity="loader" ca_cert="/etc/certs/cacert.pem" client_cert="/etc/certs/clientcert.pem" private_key="/etc/certs/clientkey.pem" private_key_passwd="freebsdmallclient" } This field indicates the network name (SSID). This example uses the RSN &ieee; 802.11i protocol, also known as WPA2. The key_mgmt line refers to the key management protocol to use. In this example, it is WPA using EAP authentication. This field indicates the EAP method for the connection. The identity field contains the identity string for EAP. The ca_cert field indicates the pathname of the CA certificate file. This file is needed to verify the server certificate. The client_cert line gives the pathname to the client certificate file. This certificate is unique to each wireless client of the network. The private_key field is the pathname to the client certificate private key file. The private_key_passwd field contains the passphrase for the private key. Then, add the following lines to /etc/rc.conf: wlans_ath0="wlan0" ifconfig_wlan0="WPA DHCP" The next step is to bring up the interface: &prompt.root; service netif start Starting wpa_supplicant. DHCPREQUEST on wlan0 to 255.255.255.255 port 67 interval 7 DHCPREQUEST on wlan0 to 255.255.255.255 port 67 interval 15 DHCPACK from 192.168.0.20 bound to 192.168.0.254 -- renewal in 300 seconds. 
wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether 00:11:95:d5:43:62 inet 192.168.0.254 netmask 0xffffff00 broadcast 192.168.0.255 media: IEEE 802.11 Wireless Ethernet DS/11Mbps mode 11g status: associated ssid freebsdap channel 1 (2412 Mhz 11g) bssid 00:11:95:c3:0d:ac country US ecm authmode WPA2/802.11i privacy ON deftxkey UNDEF AES-CCM 3:128-bit txpower 21.5 bmiss 7 scanvalid 450 bgscan bgscanintvl 300 bgscanidle 250 roam:rssi 7 roam:rate 5 protmode CTS wme burst roaming MANUAL It is also possible to bring up the interface manually using &man.wpa.supplicant.8; and &man.ifconfig.8;. <acronym>WPA</acronym> with <acronym>EAP-TTLS</acronym> With EAP-TLS, both the authentication server and the client need a certificate. With EAP-TTLS, a client certificate is optional. This method is similar to a web server which creates a secure SSL tunnel even if visitors do not have client-side certificates. EAP-TTLS uses an encrypted TLS tunnel for safe transport of the authentication data. The required configuration can be added to /etc/wpa_supplicant.conf: network={ ssid="freebsdap" proto=RSN key_mgmt=WPA-EAP eap=TTLS identity="test" password="test" ca_cert="/etc/certs/cacert.pem" phase2="auth=MD5" } This field specifies the EAP method for the connection. The identity field contains the identity string for EAP authentication inside the encrypted TLS tunnel. The password field contains the passphrase for the EAP authentication. The ca_cert field indicates the pathname of the CA certificate file. This file is needed to verify the server certificate. This field specifies the authentication method used in the encrypted TLS tunnel. In this example, EAP with MD5-Challenge is used. The inner authentication phase is often called phase2. Next, add the following lines to /etc/rc.conf: wlans_ath0="wlan0" ifconfig_wlan0="WPA DHCP" The next step is to bring up the interface: &prompt.root; service netif start Starting wpa_supplicant. DHCPREQUEST on wlan0 to 255.255.255.255 port 67 interval 7 DHCPREQUEST on wlan0 to 255.255.255.255 port 67 interval 15 DHCPREQUEST on wlan0 to 255.255.255.255 port 67 interval 21 DHCPACK from 192.168.0.20 bound to 192.168.0.254 -- renewal in 300 seconds. wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether 00:11:95:d5:43:62 inet 192.168.0.254 netmask 0xffffff00 broadcast 192.168.0.255 media: IEEE 802.11 Wireless Ethernet DS/11Mbps mode 11g status: associated ssid freebsdap channel 1 (2412 Mhz 11g) bssid 00:11:95:c3:0d:ac country US ecm authmode WPA2/802.11i privacy ON deftxkey UNDEF AES-CCM 3:128-bit txpower 21.5 bmiss 7 scanvalid 450 bgscan bgscanintvl 300 bgscanidle 250 roam:rssi 7 roam:rate 5 protmode CTS wme burst roaming MANUAL <acronym>WPA</acronym> with <acronym>EAP-PEAP</acronym> PEAPv0/EAP-MSCHAPv2 is the most common PEAP method. In this chapter, the term PEAP is used to refer to that method. Protected EAP (PEAP) is designed as an alternative to EAP-TTLS and is the most used EAP standard after EAP-TLS. In a network with mixed operating systems, PEAP should be the most supported standard after EAP-TLS. PEAP is similar to EAP-TTLS as it uses a server-side certificate to authenticate clients by creating an encrypted TLS tunnel between the client and the authentication server, which protects the ensuing exchange of authentication information. PEAP authentication differs from EAP-TTLS as it broadcasts the username in the clear and only the password is sent in the encrypted TLS tunnel. 
EAP-TTLS will use the TLS tunnel for both the username and password. Add the following lines to /etc/wpa_supplicant.conf to configure the EAP-PEAP related settings: network={ ssid="freebsdap" proto=RSN key_mgmt=WPA-EAP eap=PEAP identity="test" password="test" ca_cert="/etc/certs/cacert.pem" phase1="peaplabel=0" phase2="auth=MSCHAPV2" } This field specifies the EAP method for the connection. The identity field contains the identity string for EAP authentication inside the encrypted TLS tunnel. The password field contains the passphrase for the EAP authentication. The ca_cert field indicates the pathname of the CA certificate file. This file is needed to verify the server certificate. This field contains the parameters for the first phase of authentication, the TLS tunnel. According to the authentication server used, specify a specific label for authentication. Most of the time, the label will be client EAP encryption which is set by using peaplabel=0. More information can be found in &man.wpa.supplicant.conf.5;. This field specifies the authentication protocol used in the encrypted TLS tunnel. In the case of PEAP, it is auth=MSCHAPV2. Add the following to /etc/rc.conf: wlans_ath0="wlan0" ifconfig_wlan0="WPA DHCP" Then, bring up the interface: &prompt.root; service netif start Starting wpa_supplicant. DHCPREQUEST on wlan0 to 255.255.255.255 port 67 interval 7 DHCPREQUEST on wlan0 to 255.255.255.255 port 67 interval 15 DHCPREQUEST on wlan0 to 255.255.255.255 port 67 interval 21 DHCPACK from 192.168.0.20 bound to 192.168.0.254 -- renewal in 300 seconds. wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether 00:11:95:d5:43:62 inet 192.168.0.254 netmask 0xffffff00 broadcast 192.168.0.255 media: IEEE 802.11 Wireless Ethernet DS/11Mbps mode 11g status: associated ssid freebsdap channel 1 (2412 Mhz 11g) bssid 00:11:95:c3:0d:ac country US ecm authmode WPA2/802.11i privacy ON deftxkey UNDEF AES-CCM 3:128-bit txpower 21.5 bmiss 7 scanvalid 450 bgscan bgscanintvl 300 bgscanidle 250 roam:rssi 7 roam:rate 5 protmode CTS wme burst roaming MANUAL <acronym>WEP</acronym> Wired Equivalent Privacy (WEP) is part of the original 802.11 standard. There is no authentication mechanism, only a weak form of access control which is easily cracked. WEP can be set up using &man.ifconfig.8;: &prompt.root; ifconfig wlan0 create wlandev ath0 &prompt.root; ifconfig wlan0 inet 192.168.1.100 netmask 255.255.255.0 \ ssid my_net wepmode on weptxkey 3 wepkey 3:0x3456789012 The weptxkey specifies which WEP key will be used in the transmission. This example uses the third key. This must match the setting on the access point. When unsure which key is used by the access point, try 1 (the first key) for this value. The wepkey selects one of the WEP keys. It should be in the format index:key. Key 1 is used by default; the index only needs to be set when using a key other than the first key. Replace the 0x3456789012 with the key configured for use on the access point. Refer to &man.ifconfig.8; for further information. The &man.wpa.supplicant.8; facility can be used to configure a wireless interface with WEP. The example above can be set up by adding the following lines to /etc/wpa_supplicant.conf: network={ ssid="my_net" key_mgmt=NONE wep_key3=3456789012 wep_tx_keyidx=3 } Then: &prompt.root; wpa_supplicant -i wlan0 -c /etc/wpa_supplicant.conf Trying to associate with 00:13:46:49:41:76 (SSID='dlinkap' freq=2437 MHz) Associated with 00:13:46:49:41:76
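To keep a WEP configuration across reboots, the equivalent settings can be placed in /etc/rc.conf. This is a minimal sketch that reuses the ath0 adapter, address, and key from the &man.ifconfig.8; example above; substitute the real SSID, address, and key values: wlans_ath0="wlan0" ifconfig_wlan0="inet 192.168.1.100 netmask 255.255.255.0 ssid my_net wepmode on weptxkey 3 wepkey 3:0x3456789012"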
Ad-hoc Mode IBSS mode, also called ad-hoc mode, is designed for point to point connections. For example, to establish an ad-hoc network between the machines A and B, choose two IP addresses and a SSID. On A: &prompt.root; ifconfig wlan0 create wlandev ath0 wlanmode adhoc &prompt.root; ifconfig wlan0 inet 192.168.0.1 netmask 255.255.255.0 ssid freebsdap &prompt.root; ifconfig wlan0 wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 ether 00:11:95:c3:0d:ac inet 192.168.0.1 netmask 0xffffff00 broadcast 192.168.0.255 media: IEEE 802.11 Wireless Ethernet autoselect mode 11g <adhoc> status: running ssid freebsdap channel 2 (2417 Mhz 11g) bssid 02:11:95:c3:0d:ac country US ecm authmode OPEN privacy OFF txpower 21.5 scanvalid 60 protmode CTS wme burst The adhoc parameter indicates that the interface is running in IBSS mode. B should now be able to detect A: &prompt.root; ifconfig wlan0 create wlandev ath0 wlanmode adhoc &prompt.root; ifconfig wlan0 up scan SSID/MESH ID BSSID CHAN RATE S:N INT CAPS freebsdap 02:11:95:c3:0d:ac 2 54M -64:-96 100 IS WME The I in the output confirms that A is in ad-hoc mode. Now, configure B with a different IP address: &prompt.root; ifconfig wlan0 inet 192.168.0.2 netmask 255.255.255.0 ssid freebsdap &prompt.root; ifconfig wlan0 wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 ether 00:11:95:d5:43:62 inet 192.168.0.2 netmask 0xffffff00 broadcast 192.168.0.255 media: IEEE 802.11 Wireless Ethernet autoselect mode 11g <adhoc> status: running ssid freebsdap channel 2 (2417 Mhz 11g) bssid 02:11:95:c3:0d:ac country US ecm authmode OPEN privacy OFF txpower 21.5 scanvalid 60 protmode CTS wme burst Both A and B are now ready to exchange information. &os; Host Access Points &os; can act as an Access Point (AP) which eliminates the need to buy a hardware AP or run an ad-hoc network. This can be particularly useful when a &os; machine is acting as a gateway to another network such as the Internet. Basic Settings Before configuring a &os; machine as an AP, the kernel must be configured with the appropriate networking support for the wireless card as well as the security protocols being used. For more details, see . The NDIS driver wrapper for &windows; drivers does not currently support AP operation. Only native &os; wireless drivers support AP mode. Once wireless networking support is loaded, check if the wireless device supports the host-based access point mode, also known as hostap mode: &prompt.root; ifconfig wlan0 create wlandev ath0 &prompt.root; ifconfig wlan0 list caps drivercaps=6f85edc1<STA,FF,TURBOP,IBSS,HOSTAP,AHDEMO,TXPMGT,SHSLOT,SHPREAMBLE,MONITOR,MBSS,WPA1,WPA2,BURST,WME,WDS,BGSCAN,TXFRAG> cryptocaps=1f<WEP,TKIP,AES,AES_CCM,TKIPMIC> This output displays the card's capabilities. The HOSTAP word confirms that this wireless card can act as an AP. Various supported ciphers are also listed: WEP, TKIP, and AES. This information indicates which security protocols can be used on the AP. 
The wireless device can only be put into hostap mode during the creation of the network pseudo-device, so a previously created device must be destroyed first: &prompt.root; ifconfig wlan0 destroy then regenerated with the correct option before setting the other parameters: &prompt.root; ifconfig wlan0 create wlandev ath0 wlanmode hostap &prompt.root; ifconfig wlan0 inet 192.168.0.1 netmask 255.255.255.0 ssid freebsdap mode 11g channel 1 Use &man.ifconfig.8; again to see the status of the wlan0 interface: &prompt.root; ifconfig wlan0 wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 ether 00:11:95:c3:0d:ac inet 192.168.0.1 netmask 0xffffff00 broadcast 192.168.0.255 media: IEEE 802.11 Wireless Ethernet autoselect mode 11g <hostap> status: running ssid freebsdap channel 1 (2412 Mhz 11g) bssid 00:11:95:c3:0d:ac country US ecm authmode OPEN privacy OFF txpower 21.5 scanvalid 60 protmode CTS wme burst dtimperiod 1 -dfs The hostap parameter indicates the interface is running in the host-based access point mode. The interface configuration can be done automatically at boot time by adding the following lines to /etc/rc.conf: wlans_ath0="wlan0" create_args_wlan0="wlanmode hostap" ifconfig_wlan0="inet 192.168.0.1 netmask 255.255.255.0 ssid freebsdap mode 11g channel 1" Host-based Access Point Without Authentication or Encryption Although it is not recommended to run an AP without any authentication or encryption, this is a simple way to check if the AP is working. This configuration is also important for debugging client issues. Once the AP is configured, initiate a scan from another wireless machine to find the AP: &prompt.root; ifconfig wlan0 create wlandev ath0 &prompt.root; ifconfig wlan0 up scan SSID/MESH ID BSSID CHAN RATE S:N INT CAPS freebsdap 00:11:95:c3:0d:ac 1 54M -66:-96 100 ES WME The client machine found the AP and can be associated with it: &prompt.root; ifconfig wlan0 inet 192.168.0.2 netmask 255.255.255.0 ssid freebsdap &prompt.root; ifconfig wlan0 wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 ether 00:11:95:d5:43:62 inet 192.168.0.2 netmask 0xffffff00 broadcast 192.168.0.255 media: IEEE 802.11 Wireless Ethernet OFDM/54Mbps mode 11g status: associated ssid freebsdap channel 1 (2412 Mhz 11g) bssid 00:11:95:c3:0d:ac country US ecm authmode OPEN privacy OFF txpower 21.5 bmiss 7 scanvalid 60 bgscan bgscanintvl 300 bgscanidle 250 roam:rssi 7 roam:rate 5 protmode CTS wme burst <acronym>WPA2</acronym> Host-based Access Point This section focuses on setting up a &os; access point using the WPA2 security protocol. More details regarding WPA and the configuration of WPA-based wireless clients can be found in . The &man.hostapd.8; daemon is used to deal with client authentication and key management on the WPA2-enabled AP. The following configuration operations are performed on the &os; machine acting as the AP. Once the AP is correctly working, &man.hostapd.8; can be automatically started at boot with this line in /etc/rc.conf: hostapd_enable="YES" Before trying to configure &man.hostapd.8;, first configure the basic settings introduced in . <acronym>WPA2-PSK</acronym> WPA2-PSK is intended for small networks where the use of a backend authentication server is not possible or desired. 
The configuration is done in /etc/hostapd.conf: interface=wlan0 debug=1 ctrl_interface=/var/run/hostapd ctrl_interface_group=wheel ssid=freebsdap wpa=2 wpa_passphrase=freebsdmall wpa_key_mgmt=WPA-PSK wpa_pairwise=CCMP Wireless interface used for the access point. Level of verbosity used during the execution of &man.hostapd.8;. A value of 1 represents the minimal level. Pathname of the directory used by &man.hostapd.8; to store domain socket files for communication with external programs such as &man.hostapd.cli.8;. The default value is used in this example. The group allowed to access the control interface files. The wireless network name, or SSID, that will appear in wireless scans. Enable WPA and specify which WPA authentication protocol will be required. A value of 2 configures the AP for WPA2 and is recommended. Set to 1 only if the obsolete WPA is required. ASCII passphrase for WPA authentication. Always use strong passwords that are at least 8 characters long and made from a rich alphabet so that they will not be easily guessed or attacked. The key management protocol to use. This example sets WPA-PSK. Encryption algorithms accepted by the access point. In this example, only the CCMP (AES) cipher is accepted. CCMP is an alternative to TKIP and is strongly preferred when possible. TKIP should be allowed only when there are stations incapable of using CCMP. The next step is to start &man.hostapd.8;: &prompt.root; service hostapd forcestart &prompt.root; ifconfig wlan0 wlan0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500 ether 04:f0:21:16:8e:10 inet6 fe80::6f0:21ff:fe16:8e10%wlan0 prefixlen 64 scopeid 0x9 nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL> media: IEEE 802.11 Wireless Ethernet autoselect mode 11na <hostap> status: running ssid No5ignal channel 36 (5180 MHz 11a ht/40+) bssid 04:f0:21:16:8e:10 country US ecm authmode WPA2/802.11i privacy MIXED deftxkey 2 AES-CCM 2:128-bit AES-CCM 3:128-bit txpower 17 mcastrate 6 mgmtrate 6 scanvalid 60 ampdulimit 64k ampdudensity 8 shortgi wme burst dtimperiod 1 -dfs groups: wlan Once the AP is running, the clients can associate with it. See for more details. It is possible to see the stations associated with the AP using ifconfig wlan0 list sta. <acronym>WEP</acronym> Host-based Access Point It is not recommended to use WEP for setting up an AP since there is no authentication mechanism and the encryption is easily cracked. Some legacy wireless cards only support WEP and these cards will only support an AP without authentication or encryption. The wireless device can now be put into hostap mode and configured with the correct SSID and IP address: &prompt.root; ifconfig wlan0 create wlandev ath0 wlanmode hostap &prompt.root; ifconfig wlan0 inet 192.168.0.1 netmask 255.255.255.0 \ ssid freebsdap wepmode on weptxkey 3 wepkey 3:0x3456789012 mode 11g The weptxkey indicates which WEP key will be used in the transmission. This example uses the third key as key numbering starts with 1. This parameter must be specified in order to encrypt the data. The wepkey sets the selected WEP key. It should be in the format index:key. If the index is not given, key 1 is set. The index needs to be set when using keys other than the first key. 
Use &man.ifconfig.8; to see the status of the wlan0 interface: &prompt.root; ifconfig wlan0 wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 ether 00:11:95:c3:0d:ac inet 192.168.0.1 netmask 0xffffff00 broadcast 192.168.0.255 media: IEEE 802.11 Wireless Ethernet autoselect mode 11g <hostap> status: running ssid freebsdap channel 4 (2427 Mhz 11g) bssid 00:11:95:c3:0d:ac country US ecm authmode OPEN privacy ON deftxkey 3 wepkey 3:40-bit txpower 21.5 scanvalid 60 protmode CTS wme burst dtimperiod 1 -dfs From another wireless machine, it is now possible to initiate a scan to find the AP: &prompt.root; ifconfig wlan0 create wlandev ath0 &prompt.root; ifconfig wlan0 up scan SSID BSSID CHAN RATE S:N INT CAPS freebsdap 00:11:95:c3:0d:ac 1 54M 22:1 100 EPS In this example, the client machine found the AP and can associate with it using the correct parameters. See for more details. Using Both Wired and Wireless Connections A wired connection provides better performance and reliability, while a wireless connection provides flexibility and mobility. Laptop users typically want to roam seamlessly between the two types of connections. On &os;, it is possible to combine two or even more network interfaces together in a failover fashion. This type of configuration uses the most preferred and available connection from a group of network interfaces, and the operating system switches automatically when the link state changes. Link aggregation and failover is covered in and an example for using both wired and wireless connections is provided at . Troubleshooting This section describes a number of steps to help troubleshoot common wireless networking problems. If the access point is not listed when scanning, check that the configuration has not restricted the wireless device to a limited set of channels. If the device cannot associate with an access point, verify that the configuration matches the settings on the access point. This includes the authentication scheme and any security protocols. Simplify the configuration as much as possible. If using a security protocol such as WPA or WEP, configure the access point for open authentication and no security to see if traffic will pass. Debugging support is provided by &man.wpa.supplicant.8;. Try running this utility manually with increased debugging output and look at the system logs; a sketch of such an invocation is shown at the end of this section. Once the system can associate with the access point, diagnose the network configuration using tools like &man.ping.8;. There are many lower-level debugging tools. Debugging messages can be enabled in the 802.11 protocol support layer using &man.wlandebug.8;. For example, to enable console messages related to scanning for access points and the 802.11 protocol handshakes required to arrange communication: &prompt.root; wlandebug -i ath0 +scan+auth+debug+assoc net.wlan.0.debug: 0 => 0xc80000<assoc,auth,scan> Many useful statistics are maintained by the 802.11 layer and wlanstats, found in /usr/src/tools/tools/net80211, will dump this information. These statistics should display all errors identified by the 802.11 layer. However, some errors are identified in the device drivers that lie below the 802.11 layer so they may not show up. To diagnose device-specific problems, refer to the drivers' documentation. If the above information does not help to clarify the problem, submit a problem report and include output from the above tools.
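As an example of running &man.wpa.supplicant.8; manually for debugging, the following sketch assumes the wlan0 interface and the configuration file used throughout this section; -dd increases the verbosity of the debugging output printed to the terminal: &prompt.root; wpa_supplicant -i wlan0 -c /etc/wpa_supplicant.conf -dd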
USB Tethering tether Many cellphones provide the option to share their data connection over USB (often called "tethering"). This feature uses either the RNDIS, CDC or a custom &apple; &iphone;/&ipad; protocol. &android; devices generally use the &man.urndis.4; driver. &apple; devices use the &man.ipheth.4; driver. Older devices will often use the &man.cdce.4; driver. Before attaching a device, load the appropriate driver into the kernel: &prompt.root; kldload if_urndis &prompt.root; kldload if_cdce &prompt.root; kldload if_ipheth Once the device is attached ue0 will be available for use like a normal network device. Be sure that the USB tethering option is enabled on the device. Bluetooth Pav Lucistnik Written by pav@FreeBSD.org Bluetooth Bluetooth is a wireless technology for creating personal networks operating in the 2.4 GHz unlicensed band, with a range of 10 meters. Networks are usually formed ad-hoc from portable devices such as cellular phones, handhelds, and laptops. Unlike Wi-Fi wireless technology, Bluetooth offers higher level service profiles, such as FTP-like file servers, file pushing, voice transport, serial line emulation, and more. This section describes the use of a USB Bluetooth dongle on a &os; system. It then describes the various Bluetooth protocols and utilities. Loading Bluetooth Support The Bluetooth stack in &os; is implemented using the &man.netgraph.4; framework. A broad variety of Bluetooth USB dongles is supported by &man.ng.ubt.4;. Broadcom BCM2033 based Bluetooth devices are supported by the &man.ubtbcmfw.4; and &man.ng.ubt.4; drivers. The 3Com Bluetooth PC Card 3CRWB60-A is supported by the &man.ng.bt3c.4; driver. Serial and UART based Bluetooth devices are supported by &man.sio.4;, &man.ng.h4.4;, and &man.hcseriald.8;. Before attaching a device, determine which of the above drivers it uses, then load the driver. For example, if the device uses the &man.ng.ubt.4; driver: &prompt.root; kldload ng_ubt If the Bluetooth device will be attached to the system during system startup, the system can be configured to load the module at boot time by adding the driver to /boot/loader.conf: ng_ubt_load="YES" Once the driver is loaded, plug in the USB dongle. If the driver load was successful, output similar to the following should appear on the console and in /var/log/messages: ubt0: vendor 0x0a12 product 0x0001, rev 1.10/5.25, addr 2 ubt0: Interface 0 endpoints: interrupt=0x81, bulk-in=0x82, bulk-out=0x2 ubt0: Interface 1 (alt.config 5) endpoints: isoc-in=0x83, isoc-out=0x3, wMaxPacketSize=49, nframes=6, buffer size=294 To start and stop the Bluetooth stack, use its startup script. It is a good idea to stop the stack before unplugging the device. When starting the stack, the output should be similar to the following: &prompt.root; service bluetooth start ubt0 BD_ADDR: 00:02:72:00:d4:1a Features: 0xff 0xff 0xf 00 00 00 00 00 <3-Slot> <5-Slot> <Encryption> <Slot offset> <Timing accuracy> <Switch> <Hold mode> <Sniff mode> <Park mode> <RSSI> <Channel quality> <SCO link> <HV2 packets> <HV3 packets> <u-law log> <A-law log> <CVSD> <Paging scheme> <Power control> <Transparent SCO data> Max. ACL packet size: 192 bytes Number of ACL packets: 8 Max. SCO packet size: 64 bytes Number of SCO packets: 8 Finding Other Bluetooth Devices HCI The Host Controller Interface (HCI) provides a uniform method for accessing Bluetooth baseband capabilities. In &os;, a netgraph HCI node is created for each Bluetooth device. For more details, refer to &man.ng.hci.4;. 
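To verify that the node has been created, the netgraph nodes on the system can be listed with &man.ngctl.8;. This is only a quick check and assumes the ubt0 device from the output above; once the stack has been started, the listing should include an HCI node named ubt0hci: &prompt.root; ngctl list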
One of the most common tasks is discovery of Bluetooth devices within RF proximity. This operation is called inquiry. Inquiry and other HCI related operations are done using &man.hccontrol.8;. The example below shows how to find out which Bluetooth devices are in range. The list of devices should be displayed in a few seconds. Note that a remote device will only answer the inquiry if it is set to discoverable mode. &prompt.user; hccontrol -n ubt0hci inquiry Inquiry result, num_responses=1 Inquiry result #0 BD_ADDR: 00:80:37:29:19:a4 Page Scan Rep. Mode: 0x1 Page Scan Period Mode: 00 Page Scan Mode: 00 Class: 52:02:04 Clock offset: 0x78ef Inquiry complete. Status: No error [00] The BD_ADDR is the unique address of a Bluetooth device, similar to the MAC address of a network card. This address is needed for further communication with a device and it is possible to assign a human readable name to a BD_ADDR. Information regarding the known Bluetooth hosts is contained in /etc/bluetooth/hosts. The following example shows how to obtain the human readable name that was assigned to the remote device: &prompt.user; hccontrol -n ubt0hci remote_name_request 00:80:37:29:19:a4 BD_ADDR: 00:80:37:29:19:a4 Name: Pav's T39 If an inquiry is performed on a remote Bluetooth device, it will find the computer as your.host.name (ubt0). The name assigned to the local device can be changed at any time. The Bluetooth system provides a point-to-point connection between two Bluetooth units, or a point-to-multipoint connection which is shared among several Bluetooth devices. The following example shows how to obtain the list of active baseband connections for the local device: &prompt.user; hccontrol -n ubt0hci read_connection_list Remote BD_ADDR Handle Type Mode Role Encrypt Pending Queue State 00:80:37:29:19:a4 41 ACL 0 MAST NONE 0 0 OPEN A connection handle is useful when termination of the baseband connection is required, though it is normally not required to do this by hand. The stack will automatically terminate inactive baseband connections. &prompt.root; hccontrol -n ubt0hci disconnect 41 Connection handle: 41 Reason: Connection terminated by local host [0x16] Type hccontrol help for a complete listing of available HCI commands. Most of the HCI commands do not require superuser privileges. Device Pairing By default, Bluetooth communication is not authenticated, and any device can talk to any other device. A Bluetooth device, such as a cellular phone, may choose to require authentication to provide a particular service. Bluetooth authentication is normally done with a PIN code, an ASCII string up to 16 characters in length. The user is required to enter the same PIN code on both devices. Once the user has entered the PIN code, both devices will generate a link key. After that, the link key can be stored either in the devices or in a persistent storage. Next time, both devices will use the previously generated link key. This procedure is called pairing. Note that if the link key is lost by either device, the pairing must be repeated. The &man.hcsecd.8; daemon is responsible for handling Bluetooth authentication requests. The default configuration file is /etc/bluetooth/hcsecd.conf. An example section for a cellular phone with the PIN code set to 1234 is shown below: device { bdaddr 00:80:37:29:19:a4; name "Pav's T39"; key nokey; pin "1234"; } The only limitation on PIN codes is length. Some devices, such as Bluetooth headsets, may have a fixed PIN code built in. 
A command-line switch forces &man.hcsecd.8; to stay in the foreground, so it is easy to see what is happening; refer to &man.hcsecd.8; for the exact option. Set the remote device to receive pairing and initiate the Bluetooth connection to the remote device. The remote device should indicate that pairing was accepted and request the PIN code. Enter the same PIN code listed in hcsecd.conf. Now the computer and the remote device are paired. Alternatively, pairing can be initiated on the remote device. The following line can be added to /etc/rc.conf to configure &man.hcsecd.8; to start automatically on system start: hcsecd_enable="YES" The following is a sample of the &man.hcsecd.8; daemon output: hcsecd[16484]: Got Link_Key_Request event from 'ubt0hci', remote bdaddr 0:80:37:29:19:a4 hcsecd[16484]: Found matching entry, remote bdaddr 0:80:37:29:19:a4, name 'Pav's T39', link key doesn't exist hcsecd[16484]: Sending Link_Key_Negative_Reply to 'ubt0hci' for remote bdaddr 0:80:37:29:19:a4 hcsecd[16484]: Got PIN_Code_Request event from 'ubt0hci', remote bdaddr 0:80:37:29:19:a4 hcsecd[16484]: Found matching entry, remote bdaddr 0:80:37:29:19:a4, name 'Pav's T39', PIN code exists hcsecd[16484]: Sending PIN_Code_Reply to 'ubt0hci' for remote bdaddr 0:80:37:29:19:a4 Network Access with <acronym>PPP</acronym> Profiles A Dial-Up Networking (DUN) profile can be used to configure a cellular phone as a wireless modem for connecting to a dial-up Internet access server. It can also be used to configure a computer to receive data calls from a cellular phone. Network access with a PPP profile can be used to provide LAN access for a single Bluetooth device or multiple Bluetooth devices. It can also provide PC to PC connection using PPP networking over serial cable emulation. In &os;, these profiles are implemented with &man.ppp.8; and the &man.rfcomm.pppd.8; wrapper which converts a Bluetooth connection into something PPP can use. Before a profile can be used, a new PPP label must be created in /etc/ppp/ppp.conf. Consult &man.rfcomm.pppd.8; for examples. In this example, &man.rfcomm.pppd.8; is used to open a connection to a remote device with a BD_ADDR of 00:80:37:29:19:a4 on a DUN RFCOMM channel: &prompt.root; rfcomm_pppd -a 00:80:37:29:19:a4 -c -C dun -l rfcomm-dialup The actual channel number will be obtained from the remote device using the SDP protocol. It is possible to specify the RFCOMM channel by hand, and in this case &man.rfcomm.pppd.8; will not perform the SDP query. Use &man.sdpcontrol.8; to find out the RFCOMM channel on the remote device. In order to provide network access with the PPP LAN service, &man.sdpd.8; must be running and a new entry for LAN clients must be created in /etc/ppp/ppp.conf. Consult &man.rfcomm.pppd.8; for examples. Finally, start the RFCOMM PPP server on a valid RFCOMM channel number. The RFCOMM PPP server will automatically register the Bluetooth LAN service with the local SDP daemon. The example below shows how to start the RFCOMM PPP server. &prompt.root; rfcomm_pppd -s -C 7 -l rfcomm-server Bluetooth Protocols This section provides an overview of the various Bluetooth protocols, their function, and associated utilities. Logical Link Control and Adaptation Protocol (<acronym>L2CAP</acronym>) L2CAP The Logical Link Control and Adaptation Protocol (L2CAP) provides connection-oriented and connectionless data services to upper layer protocols. L2CAP permits higher level protocols and applications to transmit and receive L2CAP data packets up to 64 kilobytes in length. L2CAP is based around the concept of channels.
A channel is a logical connection on top of a baseband connection, where each channel is bound to a single protocol in a many-to-one fashion. Multiple channels can be bound to the same protocol, but a channel cannot be bound to multiple protocols. Each L2CAP packet received on a channel is directed to the appropriate higher level protocol. Multiple channels can share the same baseband connection. In &os;, a netgraph L2CAP node is created for each Bluetooth device. This node is normally connected to the downstream Bluetooth HCI node and upstream Bluetooth socket nodes. The default name for the L2CAP node is devicel2cap. For more details refer to &man.ng.l2cap.4;. A useful command is &man.l2ping.8;, which can be used to ping other devices. Some Bluetooth implementations might not return all of the data sent to them, so 0 bytes in the following example is normal. &prompt.root; l2ping -a 00:80:37:29:19:a4 0 bytes from 0:80:37:29:19:a4 seq_no=0 time=48.633 ms result=0 0 bytes from 0:80:37:29:19:a4 seq_no=1 time=37.551 ms result=0 0 bytes from 0:80:37:29:19:a4 seq_no=2 time=28.324 ms result=0 0 bytes from 0:80:37:29:19:a4 seq_no=3 time=46.150 ms result=0 The &man.l2control.8; utility is used to perform various operations on L2CAP nodes. This example shows how to obtain the list of logical connections (channels) and the list of baseband connections for the local device: &prompt.user; l2control -a 00:02:72:00:d4:1a read_channel_list L2CAP channels: Remote BD_ADDR SCID/ DCID PSM IMTU/ OMTU State 00:07:e0:00:0b:ca 66/ 64 3 132/ 672 OPEN &prompt.user; l2control -a 00:02:72:00:d4:1a read_connection_list L2CAP connections: Remote BD_ADDR Handle Flags Pending State 00:07:e0:00:0b:ca 41 O 0 OPEN Another diagnostic tool is &man.btsockstat.1;. It is similar to &man.netstat.1;, but for Bluetooth network-related data structures. The example below shows the same logical connection as &man.l2control.8; above. &prompt.user; btsockstat Active L2CAP sockets PCB Recv-Q Send-Q Local address/PSM Foreign address CID State c2afe900 0 0 00:02:72:00:d4:1a/3 00:07:e0:00:0b:ca 66 OPEN Active RFCOMM sessions L2PCB PCB Flag MTU Out-Q DLCs State c2afe900 c2b53380 1 127 0 Yes OPEN Active RFCOMM sockets PCB Recv-Q Send-Q Local address Foreign address Chan DLCI State c2e8bc80 0 250 00:02:72:00:d4:1a 00:07:e0:00:0b:ca 3 6 OPEN Radio Frequency Communication (<acronym>RFCOMM</acronym>) The RFCOMM protocol provides emulation of serial ports over the L2CAP protocol. RFCOMM is a simple transport protocol, with additional provisions for emulating the 9 circuits of RS-232 (EIATIA-232-E) serial ports. It supports up to 60 simultaneous connections (RFCOMM channels) between two Bluetooth devices. For the purposes of RFCOMM, a complete communication path involves two applications running on the communication endpoints with a communication segment between them. RFCOMM is intended to cover applications that make use of the serial ports of the devices in which they reside. The communication segment is a direct connect Bluetooth link from one device to another. RFCOMM is only concerned with the connection between the devices in the direct connect case, or between the device and a modem in the network case. RFCOMM can support other configurations, such as modules that communicate via Bluetooth wireless technology on one side and provide a wired interface on the other side. In &os;, RFCOMM is implemented at the Bluetooth sockets layer. 
Service Discovery Protocol (<acronym>SDP</acronym>) SDP The Service Discovery Protocol (SDP) provides the means for client applications to discover the existence of services provided by server applications as well as the attributes of those services. The attributes of a service include the type or class of service offered and the mechanism or protocol information needed to utilize the service. SDP involves communication between a SDP server and a SDP client. The server maintains a list of service records that describe the characteristics of services associated with the server. Each service record contains information about a single service. A client may retrieve information from a service record maintained by the SDP server by issuing a SDP request. If the client, or an application associated with the client, decides to use a service, it must open a separate connection to the service provider in order to utilize the service. SDP provides a mechanism for discovering services and their attributes, but it does not provide a mechanism for utilizing those services. Normally, a SDP client searches for services based on some desired characteristics of the services. However, there are times when it is desirable to discover which types of services are described by an SDP server's service records without any prior information about the services. This process of looking for any offered services is called browsing. The Bluetooth SDP server, &man.sdpd.8;, and command line client, &man.sdpcontrol.8;, are included in the standard &os; installation. The following example shows how to perform a SDP browse query. &prompt.user; sdpcontrol -a 00:01:03:fc:6e:ec browse Record Handle: 00000000 Service Class ID List: Service Discovery Server (0x1000) Protocol Descriptor List: L2CAP (0x0100) Protocol specific parameter #1: u/int/uuid16 1 Protocol specific parameter #2: u/int/uuid16 1 Record Handle: 0x00000001 Service Class ID List: Browse Group Descriptor (0x1001) Record Handle: 0x00000002 Service Class ID List: LAN Access Using PPP (0x1102) Protocol Descriptor List: L2CAP (0x0100) RFCOMM (0x0003) Protocol specific parameter #1: u/int8/bool 1 Bluetooth Profile Descriptor List: LAN Access Using PPP (0x1102) ver. 1.0 Note that each service has a list of attributes, such as the RFCOMM channel. Depending on the service, the user might need to make note of some of the attributes. Some Bluetooth implementations do not support service browsing and may return an empty list. In this case, it is possible to search for the specific service. The example below shows how to search for the OBEX Object Push (OPUSH) service: &prompt.user; sdpcontrol -a 00:01:03:fc:6e:ec search OPUSH Offering services on &os; to Bluetooth clients is done with the &man.sdpd.8; server. The following line can be added to /etc/rc.conf: sdpd_enable="YES" Then the &man.sdpd.8; daemon can be started with: &prompt.root; service sdpd start The local server application that wants to provide a Bluetooth service to remote clients will register the service with the local SDP daemon. An example of such an application is &man.rfcomm.pppd.8;. Once started, it will register the Bluetooth LAN service with the local SDP daemon. The list of services registered with the local SDP server can be obtained by issuing a SDP browse query via the local control channel: &prompt.root; sdpcontrol -l browse <acronym>OBEX</acronym> Object Push (<acronym>OPUSH</acronym>) OBEX Object Exchange (OBEX) is a widely used protocol for simple file transfers between mobile devices. 
Its main use is in infrared communication, where it is used for generic file transfers between notebooks or PDAs, and for sending business cards or calendar entries between cellular phones and other devices with Personal Information Manager (PIM) applications. The OBEX server and client are implemented by obexapp, which can be installed using the comms/obexapp package or port. The OBEX client is used to push and/or pull objects from the OBEX server. An example object is a business card or an appointment. The OBEX client can obtain the RFCOMM channel number from the remote device via SDP. This can be done by specifying the service name instead of the RFCOMM channel number. Supported service names are: IrMC, FTRN, and OPUSH. It is also possible to specify the RFCOMM channel as a number. Below is an example of an OBEX session where the device information object is pulled from the cellular phone, and a new object, the business card, is pushed into the phone's directory. &prompt.user; obexapp -a 00:80:37:29:19:a4 -C IrMC obex> get telecom/devinfo.txt devinfo-t39.txt Success, response: OK, Success (0x20) obex> put new.vcf Success, response: OK, Success (0x20) obex> di Success, response: OK, Success (0x20) In order to provide the OPUSH service, &man.sdpd.8; must be running and a root folder, where all incoming objects will be stored, must be created. The default path to the root folder is /var/spool/obex. Finally, start the OBEX server on a valid RFCOMM channel number. The OBEX server will automatically register the OPUSH service with the local SDP daemon. The example below shows how to start the OBEX server. &prompt.root; obexapp -s -C 10 Serial Port Profile (<acronym>SPP</acronym>) The Serial Port Profile (SPP) allows Bluetooth devices to perform serial cable emulation. This profile allows legacy applications to use Bluetooth as a cable replacement, through a virtual serial port abstraction. In &os;, &man.rfcomm.sppd.1; implements SPP and a pseudo tty is used as a virtual serial port abstraction. The example below shows how to connect to a remote device's serial port service. A RFCOMM channel does not have to be specified as &man.rfcomm.sppd.1; can obtain it from the remote device via SDP. To override this, specify a RFCOMM channel on the command line. &prompt.root; rfcomm_sppd -a 00:07:E0:00:0B:CA -t rfcomm_sppd[94692]: Starting on /dev/pts/6... /dev/pts/6 Once connected, the pseudo tty can be used as serial port: &prompt.root; cu -l /dev/pts/6 The pseudo tty is printed on stdout and can be read by wrapper scripts: PTS=`rfcomm_sppd -a 00:07:E0:00:0B:CA -t` cu -l $PTS Troubleshooting By default, when &os; is accepting a new connection, it tries to perform a role switch and become master. Some older Bluetooth devices which do not support role switching will not be able to connect. Since role switching is performed when a new connection is being established, it is not possible to ask the remote device if it supports role switching. However, there is a HCI option to disable role switching on the local side: &prompt.root; hccontrol -n ubt0hci write_node_role_switch 0 To display Bluetooth packets, use the third-party package hcidump, which can be installed using the comms/hcidump package or port. This utility is similar to &man.tcpdump.1; and can be used to display the contents of Bluetooth packets on the terminal and to dump the Bluetooth packets to a file. 
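As a sketch of typical usage, and assuming the command-line options of the upstream utility on which the port is based, packets can be displayed in hexadecimal on the terminal or written to a file for later inspection: &prompt.root; hcidump -x &prompt.root; hcidump -w /tmp/bluetooth.dump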
Bridging Andrew Thompson Written by IP subnet bridge It is sometimes useful to divide a network, such as an Ethernet segment, into network segments without having to create IP subnets and use a router to connect the segments together. A device that connects two networks together in this fashion is called a bridge. A bridge works by learning the MAC addresses of the devices on each of its network interfaces. It forwards traffic between networks only when the source and destination MAC addresses are on different networks. In many respects, a bridge is like an Ethernet switch with very few ports. A &os; system with multiple network interfaces can be configured to act as a bridge. Bridging can be useful in the following situations: Connecting Networks The basic operation of a bridge is to join two or more network segments. There are many reasons to use a host-based bridge instead of networking equipment, such as cabling constraints or firewalling. A bridge can also connect a wireless interface running in hostap mode to a wired network and act as an access point. Filtering/Traffic Shaping Firewall A bridge can be used when firewall functionality is needed without routing or Network Address Translation (NAT). An example is a small company that is connected via DSL or ISDN to an ISP. There are thirteen public IP addresses from the ISP and ten computers on the network. In this situation, using a router-based firewall is difficult because of subnetting issues. A bridge-based firewall can be configured without any IP addressing issues. Network Tap A bridge can join two network segments in order to inspect all Ethernet frames that pass between them using &man.bpf.4; and &man.tcpdump.1; on the bridge interface or by sending a copy of all frames out an additional interface known as a span port. Layer 2 VPN Two Ethernet networks can be joined across an IP link by bridging the networks to an EtherIP tunnel or a &man.tap.4; based solution such as OpenVPN. Layer 2 Redundancy A network can be connected together with multiple links and use the Spanning Tree Protocol (STP) to block redundant paths. This section describes how to configure a &os; system as a bridge using &man.if.bridge.4;. A netgraph bridging driver is also available, and is described in &man.ng.bridge.4;. Packet filtering can be used with any firewall package that hooks into the &man.pfil.9; framework. The bridge can be used as a traffic shaper with &man.altq.4; or &man.dummynet.4;. Enabling the Bridge In &os;, &man.if.bridge.4; is a kernel module which is automatically loaded by &man.ifconfig.8; when creating a bridge interface. It is also possible to compile bridge support into a custom kernel by adding device if_bridge to the custom kernel configuration file. The bridge is created using interface cloning. To create the bridge interface: &prompt.root; ifconfig bridge create bridge0 &prompt.root; ifconfig bridge0 bridge0: flags=8802<BROADCAST,SIMPLEX,MULTICAST> metric 0 mtu 1500 ether 96:3d:4b:f1:79:7a id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15 maxage 20 holdcnt 6 proto rstp maxaddr 100 timeout 1200 root id 00:00:00:00:00:00 priority 0 ifcost 0 port 0 When a bridge interface is created, it is automatically assigned a randomly generated Ethernet address. The maxaddr and timeout parameters control how many MAC addresses the bridge will keep in its forwarding table and how many seconds before each entry is removed after it is last seen. The other parameters control how STP operates. 
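For example, to enlarge the address cache and shorten the time idle entries are kept, both parameters can be adjusted with &man.ifconfig.8;; the values shown here are only illustrative: &prompt.root; ifconfig bridge0 maxaddr 4096 timeout 240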
Next, specify which network interfaces to add as members of the bridge. For the bridge to forward packets, all member interfaces and the bridge need to be up: &prompt.root; ifconfig bridge0 addm fxp0 addm fxp1 up &prompt.root; ifconfig fxp0 up &prompt.root; ifconfig fxp1 up The bridge can now forward Ethernet frames between fxp0 and fxp1. Add the following lines to /etc/rc.conf so the bridge is created at startup: cloned_interfaces="bridge0" ifconfig_bridge0="addm fxp0 addm fxp1 up" ifconfig_fxp0="up" ifconfig_fxp1="up" If the bridge host needs an IP address, set it on the bridge interface, not on the member interfaces. The address can be set statically or via DHCP. This example sets a static IP address: &prompt.root; ifconfig bridge0 inet 192.168.0.1/24 It is also possible to assign an IPv6 address to a bridge interface. To make the changes permanent, add the addressing information to /etc/rc.conf. When packet filtering is enabled, bridged packets will pass through the filter inbound on the originating interface on the bridge interface, and outbound on the appropriate interfaces. Either stage can be disabled. When direction of the packet flow is important, it is best to firewall on the member interfaces rather than the bridge itself. The bridge has several configurable settings for passing non-IP and IP packets, and layer2 firewalling with &man.ipfw.8;. See &man.if.bridge.4; for more information. Enabling Spanning Tree For an Ethernet network to function properly, only one active path can exist between two devices. The STP protocol detects loops and puts redundant links into a blocked state. Should one of the active links fail, STP calculates a different tree and enables one of the blocked paths to restore connectivity to all points in the network. The Rapid Spanning Tree Protocol (RSTP or 802.1w) provides backwards compatibility with legacy STP. RSTP provides faster convergence and exchanges information with neighboring switches to quickly transition to forwarding mode without creating loops. &os; supports RSTP and STP as operating modes, with RSTP being the default mode. STP can be enabled on member interfaces using &man.ifconfig.8;. For a bridge with fxp0 and fxp1 as the current interfaces, enable STP with: &prompt.root; ifconfig bridge0 stp fxp0 stp fxp1 bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 ether d6:cf:d5:a0:94:6d id 00:01:02:4b:d4:50 priority 32768 hellotime 2 fwddelay 15 maxage 20 holdcnt 6 proto rstp maxaddr 100 timeout 1200 root id 00:01:02:4b:d4:50 priority 32768 ifcost 0 port 0 member: fxp0 flags=1c7<LEARNING,DISCOVER,STP,AUTOEDGE,PTP,AUTOPTP> port 3 priority 128 path cost 200000 proto rstp role designated state forwarding member: fxp1 flags=1c7<LEARNING,DISCOVER,STP,AUTOEDGE,PTP,AUTOPTP> port 4 priority 128 path cost 200000 proto rstp role designated state forwarding This bridge has a spanning tree ID of 00:01:02:4b:d4:50 and a priority of 32768. As the root id is the same, it indicates that this is the root bridge for the tree. 
Another bridge on the network also has STP enabled: bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 ether 96:3d:4b:f1:79:7a id 00:13:d4:9a:06:7a priority 32768 hellotime 2 fwddelay 15 maxage 20 holdcnt 6 proto rstp maxaddr 100 timeout 1200 root id 00:01:02:4b:d4:50 priority 32768 ifcost 400000 port 4 member: fxp0 flags=1c7<LEARNING,DISCOVER,STP,AUTOEDGE,PTP,AUTOPTP> port 4 priority 128 path cost 200000 proto rstp role root state forwarding member: fxp1 flags=1c7<LEARNING,DISCOVER,STP,AUTOEDGE,PTP,AUTOPTP> port 5 priority 128 path cost 200000 proto rstp role designated state forwarding The line root id 00:01:02:4b:d4:50 priority 32768 ifcost 400000 port 4 shows that the root bridge is 00:01:02:4b:d4:50 and has a path cost of 400000 from this bridge. The path to the root bridge is via port 4 which is fxp0. Bridge Interface Parameters Several ifconfig parameters are unique to bridge interfaces. This section summarizes some common uses for these parameters. The complete list of available parameters is described in &man.ifconfig.8;. private A private interface does not forward any traffic to any other port that is also designated as a private interface. The traffic is blocked unconditionally so no Ethernet frames will be forwarded, including ARP packets. If traffic needs to be selectively blocked, a firewall should be used instead. span A span port transmits a copy of every Ethernet frame received by the bridge. The number of span ports configured on a bridge is unlimited, but if an interface is designated as a span port, it cannot also be used as a regular bridge port. This is most useful for snooping a bridged network passively on another host connected to one of the span ports of the bridge. For example, to send a copy of all frames out the interface named fxp4: &prompt.root; ifconfig bridge0 span fxp4 sticky If a bridge member interface is marked as sticky, dynamically learned address entries are treated as static entries in the forwarding cache. Sticky entries are never aged out of the cache or replaced, even if the address is seen on a different interface. This gives the benefit of static address entries without the need to pre-populate the forwarding table. Clients learned on a particular segment of the bridge can not roam to another segment. An example of using sticky addresses is to combine the bridge with VLANs in order to isolate customer networks without wasting IP address space. Consider that CustomerA is on vlan100, CustomerB is on vlan101, and the bridge has the address 192.168.0.1: &prompt.root; ifconfig bridge0 addm vlan100 sticky vlan100 addm vlan101 sticky vlan101 &prompt.root; ifconfig bridge0 inet 192.168.0.1/24 In this example, both clients see 192.168.0.1 as their default gateway. Since the bridge cache is sticky, one host can not spoof the MAC address of the other customer in order to intercept their traffic. Any communication between the VLANs can be blocked using a firewall or, as seen in this example, private interfaces: &prompt.root; ifconfig bridge0 private vlan100 private vlan101 The customers are completely isolated from each other and the full /24 address range can be allocated without subnetting. The number of unique source MAC addresses behind an interface can be limited. Once the limit is reached, packets with unknown source addresses are dropped until an existing host cache entry expires or is removed. 
The following example sets the maximum number of Ethernet devices for CustomerA on vlan100 to 10: &prompt.root; ifconfig bridge0 ifmaxaddr vlan100 10 Bridge interfaces also support monitor mode, where the packets are discarded after &man.bpf.4; processing and are not processed or forwarded further. This can be used to multiplex the input of two or more interfaces into a single &man.bpf.4; stream. This is useful for reconstructing the traffic for network taps that transmit the RX/TX signals out through two separate interfaces. For example, to read the input from four network interfaces as one stream: &prompt.root; ifconfig bridge0 addm fxp0 addm fxp1 addm fxp2 addm fxp3 monitor up &prompt.root; tcpdump -i bridge0 <acronym>SNMP</acronym> Monitoring The bridge interface and STP parameters can be monitored via &man.bsnmpd.1; which is included in the &os; base system. The exported bridge MIBs conform to IETF standards so any SNMP client or monitoring package can be used to retrieve the data. To enable monitoring on the bridge, uncomment this line in /etc/snmpd.config by removing the beginning # symbol: begemotSnmpdModulePath."bridge" = "/usr/lib/snmp_bridge.so" Other configuration settings, such as community names and access lists, may need to be modified in this file. See &man.bsnmpd.1; and &man.snmp.bridge.3; for more information. Once these edits are saved, add this line to /etc/rc.conf: bsnmpd_enable="YES" Then, start &man.bsnmpd.1;: &prompt.root; service bsnmpd start The following examples use the Net-SNMP software (net-mgmt/net-snmp) to query a bridge from a client system. The net-mgmt/bsnmptools port can also be used. From the SNMP client which is running Net-SNMP, add the following lines to $HOME/.snmp/snmp.conf in order to import the bridge MIB definitions: mibdirs +/usr/share/snmp/mibs mibs +BRIDGE-MIB:RSTP-MIB:BEGEMOT-MIB:BEGEMOT-BRIDGE-MIB To monitor a single bridge using the IETF BRIDGE-MIB (RFC4188): &prompt.user; snmpwalk -v 2c -c public bridge1.example.com mib-2.dot1dBridge BRIDGE-MIB::dot1dBaseBridgeAddress.0 = STRING: 66:fb:9b:6e:5c:44 BRIDGE-MIB::dot1dBaseNumPorts.0 = INTEGER: 1 ports BRIDGE-MIB::dot1dStpTimeSinceTopologyChange.0 = Timeticks: (189959) 0:31:39.59 centi-seconds BRIDGE-MIB::dot1dStpTopChanges.0 = Counter32: 2 BRIDGE-MIB::dot1dStpDesignatedRoot.0 = Hex-STRING: 80 00 00 01 02 4B D4 50 ... BRIDGE-MIB::dot1dStpPortState.3 = INTEGER: forwarding(5) BRIDGE-MIB::dot1dStpPortEnable.3 = INTEGER: enabled(1) BRIDGE-MIB::dot1dStpPortPathCost.3 = INTEGER: 200000 BRIDGE-MIB::dot1dStpPortDesignatedRoot.3 = Hex-STRING: 80 00 00 01 02 4B D4 50 BRIDGE-MIB::dot1dStpPortDesignatedCost.3 = INTEGER: 0 BRIDGE-MIB::dot1dStpPortDesignatedBridge.3 = Hex-STRING: 80 00 00 01 02 4B D4 50 BRIDGE-MIB::dot1dStpPortDesignatedPort.3 = Hex-STRING: 03 80 BRIDGE-MIB::dot1dStpPortForwardTransitions.3 = Counter32: 1 RSTP-MIB::dot1dStpVersion.0 = INTEGER: rstp(2) The dot1dStpTopChanges.0 value is two, indicating that the STP bridge topology has changed twice. A topology change means that one or more links in the network have changed or failed and a new tree has been calculated. The dot1dStpTimeSinceTopologyChange.0 value will show when this happened.
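A single value can also be fetched with snmpget from the same Net-SNMP suite, for example to watch the time since the last topology change; the host name and community string are the same assumptions as in the previous example: &prompt.user; snmpget -v 2c -c public bridge1.example.com BRIDGE-MIB::dot1dStpTimeSinceTopologyChange.0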
To monitor multiple bridge interfaces, the private BEGEMOT-BRIDGE-MIB can be used: &prompt.user; snmpwalk -v 2c -c public bridge1.example.com enterprises.fokus.begemot.begemotBridge BEGEMOT-BRIDGE-MIB::begemotBridgeBaseName."bridge0" = STRING: bridge0 BEGEMOT-BRIDGE-MIB::begemotBridgeBaseName."bridge2" = STRING: bridge2 BEGEMOT-BRIDGE-MIB::begemotBridgeBaseAddress."bridge0" = STRING: e:ce:3b:5a:9e:13 BEGEMOT-BRIDGE-MIB::begemotBridgeBaseAddress."bridge2" = STRING: 12:5e:4d:74:d:fc BEGEMOT-BRIDGE-MIB::begemotBridgeBaseNumPorts."bridge0" = INTEGER: 1 BEGEMOT-BRIDGE-MIB::begemotBridgeBaseNumPorts."bridge2" = INTEGER: 1 ... BEGEMOT-BRIDGE-MIB::begemotBridgeStpTimeSinceTopologyChange."bridge0" = Timeticks: (116927) 0:19:29.27 centi-seconds BEGEMOT-BRIDGE-MIB::begemotBridgeStpTimeSinceTopologyChange."bridge2" = Timeticks: (82773) 0:13:47.73 centi-seconds BEGEMOT-BRIDGE-MIB::begemotBridgeStpTopChanges."bridge0" = Counter32: 1 BEGEMOT-BRIDGE-MIB::begemotBridgeStpTopChanges."bridge2" = Counter32: 1 BEGEMOT-BRIDGE-MIB::begemotBridgeStpDesignatedRoot."bridge0" = Hex-STRING: 80 00 00 40 95 30 5E 31 BEGEMOT-BRIDGE-MIB::begemotBridgeStpDesignatedRoot."bridge2" = Hex-STRING: 80 00 00 50 8B B8 C6 A9 To change the bridge interface being monitored via the mib-2.dot1dBridge subtree: &prompt.user; snmpset -v 2c -c private bridge1.example.com BEGEMOT-BRIDGE-MIB::begemotBridgeDefaultBridgeIf.0 s bridge2 Link Aggregation and Failover Andrew Thompson Written by lagg failover FEC LACP loadbalance roundrobin &os; provides the &man.lagg.4; interface which can be used to aggregate multiple network interfaces into one virtual interface in order to provide failover and link aggregation. Failover allows traffic to continue to flow as long as at least one aggregated network interface has an established link. Link aggregation works best on switches which support LACP, as this protocol distributes traffic bi-directionally while responding to the failure of individual links. The aggregation protocols supported by the lagg interface determine which ports are used for outgoing traffic and whether or not a specific port accepts incoming traffic. The following protocols are supported by &man.lagg.4;: failover This mode sends and receives traffic only through the master port. If the master port becomes unavailable, the next active port is used. The first interface added to the virtual interface is the master port and all subsequently added interfaces are used as failover devices. If failover to a non-master port occurs, the original port becomes master once it becomes available again. fec / loadbalance &cisco; Fast &etherchannel; (FEC) is found on older &cisco; switches. It provides a static setup and does not negotiate aggregation with the peer or exchange frames to monitor the link. If the switch supports LACP, that should be used instead. lacp The &ieee; 802.3ad Link Aggregation Control Protocol (LACP) negotiates a set of aggregable links with the peer into one or more Link Aggregated Groups (LAGs). Each LAG is composed of ports of the same speed, set to full-duplex operation, and traffic is balanced across the ports in the LAG with the greatest total speed. Typically, there is only one LAG which contains all the ports. In the event of changes in physical connectivity, LACP will quickly converge to a new configuration. LACP balances outgoing traffic across the active ports based on hashed protocol header information and accepts incoming traffic from any active port.
The hash includes the Ethernet source and destination address and, if available, the VLAN tag, and the IPv4 or IPv6 source and destination address. roundrobin This mode distributes outgoing traffic using a round-robin scheduler through all active ports and accepts incoming traffic from any active port. Since this mode violates Ethernet frame ordering, it should be used with caution. Configuration Examples This section demonstrates how to configure a &cisco; switch and a &os; system for LACP load balancing. It then shows how to configure two Ethernet interfaces in failover mode as well as how to configure failover mode between an Ethernet and a wireless interface. <acronym>LACP</acronym> Aggregation with a &cisco; Switch This example connects two &man.fxp.4; Ethernet interfaces on a &os; machine to the first two Ethernet ports on a &cisco; switch as a single load balanced and fault tolerant link. More interfaces can be added to increase throughput and fault tolerance. Replace the names of the &cisco; ports, Ethernet devices, channel group number, and IP address shown in the example to match the local configuration. Frame ordering is mandatory on Ethernet links and any traffic between two stations always flows over the same physical link, limiting the maximum speed to that of one interface. The transmit algorithm attempts to use as much information as it can to distinguish different traffic flows and balance the flows across the available interfaces. On the &cisco; switch, add the FastEthernet0/1 and FastEthernet0/2 interfaces to channel group 1: interface FastEthernet0/1 channel-group 1 mode active channel-protocol lacp ! interface FastEthernet0/2 channel-group 1 mode active channel-protocol lacp On the &os; system, create the &man.lagg.4; interface using the physical interfaces fxp0 and fxp1 and bring the interfaces up with an IP address of 10.0.0.3/24: &prompt.root; ifconfig fxp0 up &prompt.root; ifconfig fxp1 up &prompt.root; ifconfig lagg0 create &prompt.root; ifconfig lagg0 up laggproto lacp laggport fxp0 laggport fxp1 10.0.0.3/24 Next, verify the status of the virtual interface: &prompt.root; ifconfig lagg0 lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 options=8<VLAN_MTU> ether 00:05:5d:71:8d:b8 media: Ethernet autoselect status: active laggproto lacp laggport: fxp1 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING> laggport: fxp0 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING> Ports marked as ACTIVE are part of the LAG that has been negotiated with the remote switch. Traffic will be transmitted and received through these active ports. Add -v to the above command to view the LAG identifiers. To see the port status on the &cisco; switch: switch# show lacp neighbor Flags: S - Device is requesting Slow LACPDUs F - Device is requesting Fast LACPDUs A - Device is in Active mode P - Device is in Passive mode Channel group 1 neighbors Partner's information: LACP port Oper Port Port Port Flags Priority Dev ID Age Key Number State Fa0/1 SA 32768 0005.5d71.8db8 29s 0x146 0x3 0x3D Fa0/2 SA 32768 0005.5d71.8db8 29s 0x146 0x4 0x3D For more detail, type show lacp neighbor detail. To retain this configuration across reboots, add the following entries to /etc/rc.conf on the &os; system: ifconfig_fxp0="up" ifconfig_fxp1="up" cloned_interfaces="lagg0" ifconfig_lagg0="laggproto lacp laggport fxp0 laggport fxp1 10.0.0.3/24" Failover Mode Failover mode can be used to switch over to a secondary interface if the link is lost on the master interface.
To configure failover, make sure that the underlying physical interfaces are up, then create the &man.lagg.4; interface. In this example, fxp0 is the master interface, fxp1 is the secondary interface, and the virtual interface is assigned an IP address of 10.0.0.15/24: &prompt.root; ifconfig fxp0 up &prompt.root; ifconfig fxp1 up &prompt.root; ifconfig lagg0 create &prompt.root; ifconfig lagg0 up laggproto failover laggport fxp0 laggport fxp1 10.0.0.15/24 The virtual interface should look something like this: &prompt.root; ifconfig lagg0 lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 options=8<VLAN_MTU> ether 00:05:5d:71:8d:b8 inet 10.0.0.15 netmask 0xffffff00 broadcast 10.0.0.255 media: Ethernet autoselect status: active laggproto failover laggport: fxp1 flags=0<> laggport: fxp0 flags=5<MASTER,ACTIVE> Traffic will be transmitted and received on fxp0. If the link is lost on fxp0, fxp1 will become the active link. If the link is restored on the master interface, it will once again become the active link. To retain this configuration across reboots, add the following entries to /etc/rc.conf: ifconfig_fxp0="up" ifconfig_fxp1="up" cloned_interfaces="lagg0" ifconfig_lagg0="laggproto failover laggport fxp0 laggport fxp1 10.0.0.15/24" Failover Mode Between Ethernet and Wireless Interfaces For laptop users, it is usually desirable to configure the wireless device as a secondary which is only used when the Ethernet connection is not available. With &man.lagg.4;, it is possible to configure a failover which prefers the Ethernet connection for both performance and security reasons, while maintaining the ability to transfer data over the wireless connection. This is achieved by overriding the physical wireless interface's MAC address with that of the Ethernet interface. In this example, the Ethernet interface, bge0, is the master and the wireless interface, wlan0, is the failover. The wlan0 device was created from iwn0 wireless interface, which will be configured with the MAC address of the Ethernet interface. First, determine the MAC address of the Ethernet interface: &prompt.root; ifconfig bge0 bge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 options=19b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4> ether 00:21:70:da:ae:37 inet6 fe80::221:70ff:feda:ae37%bge0 prefixlen 64 scopeid 0x2 nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL> media: Ethernet autoselect (1000baseT <full-duplex>) status: active Replace bge0 to match the system's Ethernet interface name. The ether line will contain the MAC address of the specified interface. 
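If preferred, the address can be pulled directly out of the &man.ifconfig.8; output. A minimal sketch, using the bge0 interface name from this example: &prompt.root; ifconfig bge0 | awk '/ether/ {print $2}' 00:21:70:da:ae:37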
Now, change the MAC address of the underlying wireless interface: &prompt.root; ifconfig iwn0 ether 00:21:70:da:ae:37 Bring the wireless interface up, but do not set an IP address: &prompt.root; ifconfig wlan0 create wlandev iwn0 ssid my_router up Make sure the bge0 interface is up, then create the &man.lagg.4; interface with bge0 as master with failover to wlan0: &prompt.root; ifconfig bge0 up &prompt.root; ifconfig lagg0 create &prompt.root; ifconfig lagg0 up laggproto failover laggport bge0 laggport wlan0 The virtual interface should look something like this: &prompt.root; ifconfig lagg0 lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 options=8<VLAN_MTU> ether 00:21:70:da:ae:37 media: Ethernet autoselect status: active laggproto failover laggport: wlan0 flags=0<> laggport: bge0 flags=5<MASTER,ACTIVE> Then, start the DHCP client to obtain an IP address: &prompt.root; dhclient lagg0 To retain this configuration across reboots, add the following entries to /etc/rc.conf: ifconfig_bge0="up" ifconfig_iwn0="ether 00:21:70:da:ae:37" wlans_iwn0="wlan0" ifconfig_wlan0="WPA" cloned_interfaces="lagg0" ifconfig_lagg0="laggproto failover laggport bge0 laggport wlan0 DHCP" Diskless Operation with <acronym>PXE</acronym> Jean-François Dockès Updated by Alex Dupre Reorganized and enhanced by diskless workstation diskless operation The &intel; Preboot eXecution Environment (PXE) allows an operating system to boot over the network. For example, a &os; system can boot over the network and operate without a local disk, using file systems mounted from an NFS server. PXE support is usually available in the BIOS. To use PXE when the machine starts, select the Boot from network option in the BIOS setup or type a function key during system initialization. In order to provide the files needed for an operating system to boot over the network, a PXE setup also requires properly configured DHCP, TFTP, and NFS servers, where: Initial parameters, such as an IP address, executable boot filename and location, server name, and root path are obtained from the DHCP server. The operating system loader file is booted using TFTP. The file systems are loaded using NFS. When a computer PXE boots, it receives information over DHCP about where to obtain the initial boot loader file. After the host computer receives this information, it downloads the boot loader via TFTP and then executes the boot loader. In &os;, the boot loader file is /boot/pxeboot. After /boot/pxeboot executes, the &os; kernel is loaded and the rest of the &os; bootup sequence proceeds, as described in . This section describes how to configure these services on a &os; system so that other systems can PXE boot into &os;. Refer to &man.diskless.8; for more information. As described, the system providing these services is insecure. It should live in a protected area of a network and be untrusted by other hosts. Setting Up the <acronym>PXE</acronym> Environment Craig Rodrigues
rodrigc@FreeBSD.org
Written by
The steps shown in this section configure the built-in NFS and TFTP servers. The next section demonstrates how to install and configure the DHCP server. In this example, the directory which will contain the files used by PXE users is /b/tftpboot/FreeBSD/install. It is important that this directory exists and that the same directory name is set in both /etc/inetd.conf and /usr/local/etc/dhcpd.conf. Create the root directory which will contain a &os; installation to be NFS mounted: &prompt.root; export NFSROOTDIR=/b/tftpboot/FreeBSD/install &prompt.root; mkdir -p ${NFSROOTDIR} Enable the NFS server by adding this line to /etc/rc.conf: nfs_server_enable="YES" Export the diskless root directory via NFS by adding the following to /etc/exports: /b -ro -alldirs Start the NFS server: &prompt.root; service nfsd start Enable &man.inetd.8; by adding the following line to /etc/rc.conf: inetd_enable="YES" Uncomment the following line in /etc/inetd.conf by making sure it does not start with a # symbol: tftp dgram udp wait root /usr/libexec/tftpd tftpd -l -s /b/tftpboot Some PXE versions require the TCP version of TFTP. In this case, uncomment the second tftp line which contains stream tcp. Start &man.inetd.8;: &prompt.root; service inetd start Rebuild the &os; kernel and userland (refer to for more detailed instructions): &prompt.root; cd /usr/src &prompt.root; make buildworld &prompt.root; make buildkernel Install &os; into the directory mounted over NFS: &prompt.root; make installworld DESTDIR=${NFSROOTDIR} &prompt.root; make installkernel DESTDIR=${NFSROOTDIR} &prompt.root; make distribution DESTDIR=${NFSROOTDIR} Test that the TFTP server works and can download the boot loader which will be obtained via PXE: &prompt.root; tftp localhost tftp> get FreeBSD/install/boot/pxeboot Received 264951 bytes in 0.1 seconds Edit ${NFSROOTDIR}/etc/fstab and create an entry to mount the root file system over NFS: # Device Mountpoint FSType Options Dump Pass myhost.example.com:/b/tftpboot/FreeBSD/install / nfs ro 0 0 Replace myhost.example.com with the hostname or IP address of the NFS server. In this example, the root file system is mounted read-only in order to prevent NFS clients from potentially deleting the contents of the root file system. Set the root password in the PXE environment for client machines which are PXE booting : &prompt.root; chroot ${NFSROOTDIR} &prompt.root; passwd If needed, enable &man.ssh.1; root logins for client machines which are PXE booting by editing ${NFSROOTDIR}/etc/ssh/sshd_config and enabling PermitRootLogin. This option is documented in &man.sshd.config.5;. Perform any other needed customizations of the PXE environment in ${NFSROOTDIR}. These customizations could include things like installing packages or editing the password file with &man.vipw.8;. When booting from an NFS root volume, /etc/rc detects the NFS boot and runs /etc/rc.initdiskless. In this case, /etc and /var need to be memory backed file systems so that these directories are writable but the NFS root directory is read-only: &prompt.root; chroot ${NFSROOTDIR} &prompt.root; mkdir -p conf/base &prompt.root; tar -c -v -f conf/base/etc.cpio.gz --format cpio --gzip etc &prompt.root; tar -c -v -f conf/base/var.cpio.gz --format cpio --gzip var When the system boots, memory file systems for /etc and /var will be created and mounted and the contents of the cpio.gz files will be copied into them.
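Before moving on to the DHCP configuration, it may be worth confirming that the NFS export is visible from the server itself. A quick check, assuming the /b export used in this example; the exact output will vary: &prompt.root; showmount -e localhost Exports list on localhost: /b Everyone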
Configuring the <acronym>DHCP</acronym> Server DHCP diskless operation The DHCP server does not need to be the same machine as the TFTP and NFS server, but it needs to be accessible in the network. DHCP is not part of the &os; base system but can be installed using the net/isc-dhcp42-server port or package. Once installed, edit the configuration file, /usr/local/etc/dhcpd.conf. Configure the next-server, filename, and root-path settings as seen in this example: subnet 192.168.0.0 netmask 255.255.255.0 { range 192.168.0.2 192.168.0.3 ; option subnet-mask 255.255.255.0 ; option routers 192.168.0.1 ; option broadcast-address 192.168.0.255 ; option domain-name-servers 192.168.35.35, 192.168.35.36 ; option domain-name "example.com"; # IP address of TFTP server next-server 192.168.0.1 ; # path of boot loader obtained via tftp filename "FreeBSD/install/boot/pxeboot" ; # pxeboot boot loader will try to NFS mount this directory for root FS option root-path "192.168.0.1:/b/tftpboot/FreeBSD/install/" ; } The next-server directive is used to specify the IP address of the TFTP server. The filename directive defines the path to /boot/pxeboot. A relative filename is used, meaning that /b/tftpboot is not included in the path. The root-path option defines the path to the NFS root file system. Once the edits are saved, enable DHCP at boot time by adding the following line to /etc/rc.conf: dhcpd_enable="YES" Then start the DHCP service: &prompt.root; service isc-dhcpd start Debugging <acronym>PXE</acronym> Problems Once all of the services are configured and started, PXE clients should be able to automatically load &os; over the network. If a particular client is unable to connect, when that client machine boots up, enter the BIOS configuration menu and confirm that it is set to boot from the network. This section describes some troubleshooting tips for isolating the source of the configuration problem should no clients be able to PXE boot. Use the net/wireshark package or port to debug the network traffic involved during the PXE booting process, which is illustrated in the diagram below.
<acronym>PXE</acronym> Booting Process with <acronym>NFS</acronym> Root Mount Client broadcasts a DHCPDISCOVER message. The DHCP server responds with the IP address, next-server, filename, and root-path values. The client sends a TFTP request to next-server, asking to retrieve filename. The TFTP server responds and sends filename to client. The client executes filename, which is &man.pxeboot.8;, which then loads the kernel. When the kernel executes, the root file system specified by root-path is mounted over NFS.
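The same exchange can also be watched from the command line with &man.tcpdump.1;. A sketch, assuming the server's network interface is em0 (replace it with the actual interface name): &prompt.root; tcpdump -n -i em0 port bootps or port bootpc or port tftp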
On the TFTP server, read /var/log/xferlog to ensure that pxeboot is being retrieved from the correct location. To test this example configuration: &prompt.root; tftp 192.168.0.1 tftp> get FreeBSD/install/boot/pxeboot Received 264951 bytes in 0.1 seconds The BUGS sections in &man.tftpd.8; and &man.tftp.1; document some limitations with TFTP. Make sure that the root file system can be mounted via NFS. To test this example configuration: &prompt.root; mount -t nfs 192.168.0.1:/b/tftpboot/FreeBSD/install /mnt
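If either test fails, &man.sockstat.1; can help confirm that the servers are actually listening. For example, assuming the &man.inetd.8; configuration above, an inetd entry bound to *:69 should appear in this output when TFTP is set up correctly: &prompt.root; sockstat -4 -l | grep :69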
<acronym>IPv6</acronym> Aaron Kaplan Originally Written by Tom Rhodes Restructured and Added by Brad Davis Extended by IPv6 is the new version of the well known IP protocol, also known as IPv4. IPv6 provides several advantages over IPv4 as well as many new features: Its 128-bit address space allows for 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses. This addresses the IPv4 address shortage and eventual IPv4 address exhaustion. Routers only store network aggregation addresses in their routing tables, thus reducing the average space of a routing table to 8192 entries. This addresses the scalability issues associated with IPv4, which required every allocated block of IPv4 addresses to be exchanged between Internet routers, causing their routing tables to become too large to allow efficient routing. Address autoconfiguration (RFC2462). Mandatory multicast addresses. Built-in IPsec (IP security). Simplified header structure. Support for mobile IP. IPv6-to-IPv4 transition mechanisms. &os; includes the http://www.kame.net/ IPv6 reference implementation and comes with everything needed to use IPv6. This section focuses on getting IPv6 configured and running. Background on <acronym>IPv6</acronym> Addresses There are three different types of IPv6 addresses: Unicast A packet sent to a unicast address arrives at the interface belonging to the address. Anycast These addresses are syntactically indistinguishable from unicast addresses but they address a group of interfaces. The packet destined for an anycast address will arrive at the nearest router interface. Anycast addresses are only used by routers. Multicast These addresses identify a group of interfaces. A packet destined for a multicast address will arrive at all interfaces belonging to the multicast group. The IPv4 broadcast address, usually xxx.xxx.xxx.255, is expressed by multicast addresses in IPv6. When reading an IPv6 address, the canonical form is represented as x:x:x:x:x:x:x:x, where each x represents a 16 bit hex value. An example is FEBC:A574:382B:23C1:AA49:4592:4EFE:9982. Often, an address will have long substrings of all zeros. A :: (double colon) can be used to replace one substring per address. Also, up to three leading 0s per hex value can be omitted. For example, fe80::1 corresponds to the canonical form fe80:0000:0000:0000:0000:0000:0000:0001. A third form is to write the last 32 bits using the well known IPv4 notation. For example, 2002::10.0.0.1 corresponds to the hexadecimal canonical representation 2002:0000:0000:0000:0000:0000:0a00:0001, which in turn is equivalent to 2002::a00:1. To view a &os; system's IPv6 address, use &man.ifconfig.8;: &prompt.root; ifconfig rl0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500 inet 10.0.0.10 netmask 0xffffff00 broadcast 10.0.0.255 inet6 fe80::200:21ff:fe03:8e1%rl0 prefixlen 64 scopeid 0x1 ether 00:00:21:03:08:e1 media: Ethernet autoselect (100baseTX ) status: active In this example, the rl0 interface is using fe80::200:21ff:fe03:8e1%rl0, an auto-configured link-local address which was automatically generated from the MAC address. Some IPv6 addresses are reserved. A summary of these reserved addresses is seen in : Reserved <acronym>IPv6</acronym> Addresses IPv6 address Prefixlength (Bits) Description Notes :: 128 bits unspecified Equivalent to 0.0.0.0 in IPv4. ::1 128 bits loopback address Equivalent to 127.0.0.1 in IPv4. ::00:xx:xx:xx:xx 96 bits embedded IPv4 The lower 32 bits are the compatible IPv4 address. 
::ff:xx:xx:xx:xx 96 bits IPv4 mapped IPv6 address The lower 32 bits are the IPv4 address for hosts which do not support IPv6. fe80::/10 10 bits link-local Equivalent to 169.254.0.0/16 in IPv4. fc00::/7 7 bits unique-local Unique local addresses are intended for local communication and are only routable within a set of cooperating sites. ff00:: 8 bits multicast   2000::-3fff:: 3 bits global unicast All global unicast addresses are assigned from this pool. The first 3 bits are 001.
For further information on the structure of IPv6 addresses, refer to RFC3513.
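To list only the IPv6 addresses configured on a system, &man.ifconfig.8; can be restricted to the inet6 address family. A short example; the second command, assuming the rl0 interface shown earlier, pings the all-nodes link-local multicast group to discover other IPv6 hosts on the same segment: &prompt.root; ifconfig -a inet6 &prompt.root; ping6 -c 2 ff02::1%rl0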
Configuring <acronym>IPv6</acronym> To configure a &os; system as an IPv6 client, add these two lines to rc.conf: ifconfig_rl0_ipv6="inet6 accept_rtadv" rtsold_enable="YES" The first line enables the specified interface to receive router advertisement messages. The second line enables the router solicitation daemon, &man.rtsol.8;. If the interface needs a statically assigned IPv6 address, add an entry to specify the static address and associated prefix length: ifconfig_rl0_ipv6="inet6 2001:db8:4672:6565:2026:5043:2d42:5344 prefixlen 64" To assign a default router, specify its address: ipv6_defaultrouter="2001:db8:4672:6565::1" Connecting to a Provider In order to connect to other IPv6 networks, one must have a provider or a tunnel that supports IPv6: Contact an Internet Service Provider to see if they offer IPv6. SixXS offers tunnels with end-points all around the globe. Hurricane Electric offers tunnels with end-points all around the globe. Install the net/freenet6 package or port for a dial-up connection. This section demonstrates how to take the directions from a tunnel provider and convert them into /etc/rc.conf settings that will persist through reboots. The first /etc/rc.conf entry creates the generic tunneling interface gif0: gif_interfaces="gif0" Next, configure that interface with the IPv4 addresses of the local and remote endpoints. Replace MY_IPv4_ADDR and REMOTE_IPv4_ADDR with the actual IPv4 addresses: gifconfig_gif0="MY_IPv4_ADDR REMOTE_IPv4_ADDR" To apply the IPv6 address that has been assigned for use as the IPv6 tunnel endpoint, add this line, replacing MY_ASSIGNED_IPv6_TUNNEL_ENDPOINT_ADDR with the assigned address: ifconfig_gif0_ipv6="inet6 MY_ASSIGNED_IPv6_TUNNEL_ENDPOINT_ADDR" Then, set the default route for the other side of the IPv6 tunnel. Replace MY_IPv6_REMOTE_TUNNEL_ENDPOINT_ADDR with the default gateway address assigned by the provider: ipv6_defaultrouter="MY_IPv6_REMOTE_TUNNEL_ENDPOINT_ADDR" If the &os; system will route IPv6 packets between the rest of the network and the world, enable the gateway using this line: ipv6_gateway_enable="YES" Router Advertisement and Host Auto Configuration This section demonstrates how to set up &man.rtadvd.8; to advertise the IPv6 default route. To enable &man.rtadvd.8;, add the following to /etc/rc.conf: rtadvd_enable="YES" It is important to specify the interface on which to do IPv6 router advertisement. For example, to tell &man.rtadvd.8; to use rl0: rtadvd_interfaces="rl0" Next, create the configuration file, /etc/rtadvd.conf as seen in this example: rl0:\ :addrs#1:addr="2001:471:1f11:246::":prefixlen#64:tc=ether: Replace rl0 with the interface to be used and 2001:471:1f11:246:: with the prefix of the allocation. For a dedicated /64 subnet, nothing else needs to be changed. Otherwise, change the prefixlen# to the correct value. <acronym>IPv6</acronym> and <acronym>IPv4</acronym> Address Mapping When IPv6 is enabled on a server, there may be a need to enable IPv4 mapped IPv6 address communication. This compatibility option allows for IPv4 addresses to be represented as IPv6 addresses. Permitting IPv6 applications to communicate with IPv4 and vice versa may be a security issue. This option may not be required in most cases and is available only for compatibility. This option will allow IPv6-only applications to work with IPv4 in a dual stack environment. This is most useful for third party applications which may not support an IPv6-only environment.
To enable this feature, add the following to /etc/rc.conf: ipv6_ipv4mapping="YES" Reviewing the information in RFC 3493, sections 3.6 and 3.7, as well as RFC 4038, section 4.2, may be useful to some administrators.
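The ipv6_ipv4mapping setting corresponds to a sysctl that can also be inspected, or toggled on a running system without a reboot. A sketch, assuming the net.inet6.ip6.v6only variable used by recent &os; releases controls this behavior: &prompt.root; sysctl net.inet6.ip6.v6only net.inet6.ip6.v6only: 1 &prompt.root; sysctl net.inet6.ip6.v6only=0 net.inet6.ip6.v6only: 1 -> 0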
Common Address Redundancy Protocol (<acronym>CARP</acronym>) Tom Rhodes Contributed by Allan Jude Updated by CARP Common Address Redundancy Protocol The Common Address Redundancy Protocol (CARP) allows multiple hosts to share the same IP address and Virtual Host ID (VHID) in order to provide high availability for one or more services. This means that one or more hosts can fail, and the other hosts will transparently take over so that users do not see a service failure. In addition to the shared IP address, each host has its own IP address for management and configuration. All of the machines that share an IP address have the same VHID. The VHID for each virtual IP address must be unique across the broadcast domain of the network interface. High availability using CARP is built into &os;, though the steps to configure it vary slightly depending upon the &os; version. This section provides the same example configuration for versions before and equal to or after &os; 10. This example configures failover support with three hosts, all with unique IP addresses, but providing the same web content. It has two different masters named hosta.example.org and hostb.example.org, with a shared backup named hostc.example.org. These machines are load balanced with a Round Robin DNS configuration. The master and backup machines are configured identically except for their hostnames and management IP addresses. These servers must have the same configuration and run the same services. When the failover occurs, requests to the service on the shared IP address can only be answered correctly if the backup server has access to the same content. The backup machine has two additional CARP interfaces, one for each of the master content server's IP addresses. When a failure occurs, the backup server will pick up the failed master machine's IP address. Using <acronym>CARP</acronym> on &os; 10 and Later Enable boot-time support for CARP by adding an entry for the carp.ko kernel module in /boot/loader.conf: carp_load="YES" To load the module now without rebooting: &prompt.root; kldload carp For users who prefer to use a custom kernel, include the following line in the custom kernel configuration file and compile the kernel as described in : device carp The hostname, management IP address and subnet mask, shared IP address, and VHID are all set by adding entries to /etc/rc.conf. This example is for hosta.example.org: hostname="hosta.example.org" ifconfig_em0="inet 192.168.1.3 netmask 255.255.255.0" ifconfig_em0_alias0="vhid 1 pass testpass alias 192.168.1.50/32" The next set of entries are for hostb.example.org. Since it represents a second master, it uses a different shared IP address and VHID. However, the passwords specified with must be identical as CARP will only listen to and accept advertisements from machines with the correct password. hostname="hostb.example.org" ifconfig_em0="inet 192.168.1.4 netmask 255.255.255.0" ifconfig_em0_alias0="vhid 2 pass testpass alias 192.168.1.51/32" The third machine, hostc.example.org, is configured to handle failover from either master. This machine is configured with two CARP VHIDs, one to handle the virtual IP address for each of the master hosts. The CARP advertising skew, , is set to ensure that the backup host advertises later than the master, since controls the order of precedence when there are multiple backup servers. 
hostname="hostc.example.org" ifconfig_em0="inet 192.168.1.5 netmask 255.255.255.0" ifconfig_em0_alias0="vhid 1 advskew 100 pass testpass alias 192.168.1.50/32" ifconfig_em0_alias1="vhid 2 advskew 100 pass testpass alias 192.168.1.51/32" Having two CARP VHIDs configured means that hostc.example.org will notice if either of the master servers becomes unavailable. If a master fails to advertise before the backup server, the backup server will pick up the shared IP address until the master becomes available again. Preemption is disabled by default. If preemption has been enabled, hostc.example.org might not release the virtual IP address back to the original master server. The administrator can force the backup server to return the IP address to the master with the command: &prompt.root; ifconfig em0 vhid 1 state backup Once the configuration is complete, either restart networking or reboot each system. High availability is now enabled. CARP functionality can be controlled via several &man.sysctl.8; variables documented in the &man.carp.4; manual pages. Other actions can be triggered from CARP events by using &man.devd.8;. Using <acronym>CARP</acronym> on &os; 9 and Earlier The configuration for these versions of &os; is similar to the one described in the previous section, except that a CARP device must first be created and referred to in the configuration. Enable boot-time support for CARP by loading the if_carp.ko kernel module in /boot/loader.conf: if_carp_load="YES" To load the module now without rebooting: &prompt.root; kldload carp For users who prefer to use a custom kernel, include the following line in the custom kernel configuration file and compile the kernel as described in : device carp Next, on each host, create a CARP device: &prompt.root; ifconfig carp0 create Set the hostname, management IP address, the shared IP address, and VHID by adding the required lines to /etc/rc.conf. Since a virtual CARP device is used instead of an alias, the actual subnet mask of /24 is used instead of /32. Here are the entries for hosta.example.org: hostname="hosta.example.org" ifconfig_fxp0="inet 192.168.1.3 netmask 255.255.255.0" cloned_interfaces="carp0" ifconfig_carp0="vhid 1 pass testpass 192.168.1.50/24" On hostb.example.org: hostname="hostb.example.org" ifconfig_fxp0="inet 192.168.1.4 netmask 255.255.255.0" cloned_interfaces="carp0" ifconfig_carp0="vhid 2 pass testpass 192.168.1.51/24" The third machine, hostc.example.org, is configured to handle failover from either of the master hosts: hostname="hostc.example.org" ifconfig_fxp0="inet 192.168.1.5 netmask 255.255.255.0" cloned_interfaces="carp0 carp1" ifconfig_carp0="vhid 1 advskew 100 pass testpass 192.168.1.50/24" ifconfig_carp1="vhid 2 advskew 100 pass testpass 192.168.1.51/24" Preemption is disabled in the GENERIC &os; kernel. If preemption has been enabled with a custom kernel, hostc.example.org may not release the IP address back to the original content server. The administrator can force the backup server to return the IP address to the master with the command: &prompt.root; ifconfig carp0 down && ifconfig carp0 up This should be done on the carp interface which corresponds to the correct host. Once the configuration is complete, either restart networking or reboot each system. High availability is now enabled.
Index: head/en_US.ISO8859-1/books/handbook/jails/chapter.xml =================================================================== --- head/en_US.ISO8859-1/books/handbook/jails/chapter.xml (revision 48481) +++ head/en_US.ISO8859-1/books/handbook/jails/chapter.xml (revision 48482) @@ -1,1611 +1,1611 @@ Jails MatteoRiondatoContributed by jails Synopsis Since system administration is a difficult task, many tools have been developed to make life easier for the administrator. These tools often enhance the way systems are installed, configured, and maintained. One of the tools which can be used to enhance the security of a &os; system is jails. Jails have been available since &os; 4.X and continue to be enhanced in their usefulness, performance, reliability, and security. Jails build upon the &man.chroot.2; concept, which is used to change the root directory of a set of processes. This creates a safe environment, separate from the rest of the system. Processes created in the chrooted environment can not access files or resources outside of it. For that reason, compromising a service running in a chrooted environment should not allow the attacker to compromise the entire system. However, a chroot has several limitations. It is suited to easy tasks which do not require much flexibility or complex, advanced features. Over time, many ways have been found to escape from a chrooted environment, making it a less than ideal solution for securing services. Jails improve on the concept of the traditional chroot environment in several ways. In a traditional chroot environment, processes are only limited in the part of the file system they can access. The rest of the system resources, system users, running processes, and the networking subsystem are shared by the chrooted processes and the processes of the host system. Jails expand this model by virtualizing access to the file system, the set of users, and the networking subsystem. More fine-grained controls are available for tuning the access of a jailed environment. Jails can be considered as a type of operating system-level virtualization. A jail is characterized by four elements: A directory subtree: the starting point from which a jail is entered. Once inside the jail, a process is not permitted to escape outside of this subtree. A hostname: which will be used by the jail. An IP address: which is assigned to the jail. The IP address of a jail is often an alias address for an existing network interface. A command: the path name of an executable to run inside the jail. The path is relative to the root directory of the jail environment. Jails have their own set of users and their own root account which are limited to the jail environment. The root account of a jail is not allowed to perform operations to the system outside of the associated jail environment. This chapter provides an overview of the terminology and commands for managing &os; jails. Jails are a powerful tool for both system administrators, and advanced users. After reading this chapter, you will know: What a jail is and what purpose it may serve in &os; installations. How to build, start, and stop a jail. The basics of jail administration, both from inside and outside the jail. Jails are a powerful tool, but they are not a security panacea. While it is not possible for a jailed process to break out on its own, there are several ways in which an unprivileged user outside the jail can cooperate with a privileged user inside the jail to obtain elevated privileges in the host environment. 
Most of these attacks can be mitigated by ensuring that the jail root is not accessible to unprivileged users in the host environment. As a general rule, untrusted users with privileged access to a jail should not be given access to the host environment. Terms Related to Jails To facilitate better understanding of parts of the &os; system related to jails, their internals and the way they interact with the rest of &os;, the following terms are used further in this chapter: &man.chroot.8; (command) Utility, which uses &man.chroot.2; &os; system call to change the root directory of a process and all its descendants. &man.chroot.2; (environment) The environment of processes running in a chroot. This includes resources such as the part of the file system which is visible, user and group IDs which are available, network interfaces and other IPC mechanisms, etc. &man.jail.8; (command) The system administration utility which allows launching of processes within a jail environment. host (system, process, user, etc.) The controlling system of a jail environment. The host system has access to all the hardware resources available, and can control processes both outside of and inside a jail environment. One of the important differences of the host system from a jail is that the limitations which apply to superuser processes inside a jail are not enforced for processes of the host system. hosted (system, process, user, etc.) A process, user or other entity, whose access to resources is restricted by a &os; jail. Creating and Controlling Jails Some administrators divide jails into the following two types: complete jails, which resemble a real &os; system, and service jails, dedicated to one application or service, possibly running with privileges. This is only a conceptual division and the process of building a jail is not affected by it. When creating a complete jail there are two options for the source of the userland: use prebuilt binaries (such as those supplied on an install media) or build from source. To install the userland from installation media, first create the root directory for the jail. This can be done by setting the DESTDIR variable to the proper location. Start a shell and define DESTDIR: &prompt.root; sh &prompt.root; export DESTDIR=/here/is/the/jail Mount the install media as covered in &man.mdconfig.8; when using the install ISO: &prompt.root; mount -t cd9660 /dev/`mdconfig -f cdimage.iso` /mnt Extract the binaries from the tarballs on the install media into the declared destination. Minimally, only the base set needs to be extracted, but a complete install can be performed when preferred. To install just the base system: &prompt.root; tar -xf /mnt/usr/freebsd-dist/base.txz -C $DESTDIR To install everything except the kernel: &prompt.root; for sets in BASE PORTS; do tar -xf /mnt/FREEBSD_INSTALL/USR/FREEBSD_DIST/$sets.TXZ -C $DESTDIR ; done The &man.jail.8; manual page explains the procedure for building a jail: &prompt.root; setenv D /here/is/the/jail &prompt.root; mkdir -p $D &prompt.root; cd /usr/src &prompt.root; make buildworld &prompt.root; make installworld DESTDIR=$D &prompt.root; make distribution DESTDIR=$D &prompt.root; mount -t devfs devfs $D/dev Selecting a location for a jail is the best starting point. This is where the jail will physically reside within the file system of the jail's host. A good choice can be /usr/jail/jailname, where jailname is the hostname identifying the jail. 
Usually, /usr/ has enough space for the jail file system, which for complete jails is, essentially, a replication of every file present in a default installation of the &os; base system. If you have already rebuilt your userland using make world or make buildworld, you can skip this step and install your existing userland into the new jail. This command will populate the directory subtree chosen as jail's physical location on the file system with the necessary binaries, libraries, manual pages and so on. The distribution target for make installs every needed configuration file. In simple words, it installs every installable file of /usr/src/etc/ to the /etc directory of the jail environment: $D/etc/. Mounting the &man.devfs.8; file system inside a jail is not required. On the other hand, any, or almost any application requires access to at least one device, depending on the purpose of the given application. It is very important to control access to devices from inside a jail, as improper settings could permit an attacker to do nasty things in the jail. Control over &man.devfs.8; is managed through rulesets which are described in the &man.devfs.8; and &man.devfs.conf.5; manual pages. Once a jail is installed, it can be started by using the &man.jail.8; utility. The &man.jail.8; utility takes four mandatory arguments which are described in the . Other arguments may be specified too, e.g., to run the jailed process with the credentials of a specific user. The argument depends on the type of the jail; for a virtual system, /etc/rc is a good choice, since it will replicate the startup sequence of a real &os; system. For a service jail, it depends on the service or application that will run within the jail. Jails are often started at boot time and the &os; rc mechanism provides an easy way to do this. A list of the jails which are enabled to start at boot time should be added to the &man.rc.conf.5; file: jail_enable="YES" # Set to NO to disable starting of any jails jail_list="www" # Space separated list of names of jails Jail names in jail_list should contain alphanumeric characters only. For each jail listed in jail_list, a group of &man.rc.conf.5; settings, which describe the particular jail, should be added: jail_www_rootdir="/usr/jail/www" # jail's root directory jail_www_hostname="www.example.org" # jail's hostname jail_www_ip="192.168.0.10" # jail's IP address jail_www_devfs_enable="YES" # mount devfs in the jail The default startup of jails configured in &man.rc.conf.5;, will run the /etc/rc script of the jail, which assumes the jail is a complete virtual system. For service jails, the default startup command of the jail should be changed, by setting the jail_jailname_exec_start option appropriately. For a full list of available options, please see the &man.rc.conf.5; manual page. &man.service.8; can be used to start or stop a jail by hand, if an entry for it exists in rc.conf: &prompt.root; service jail start www &prompt.root; service jail stop www Jails can be shut down with &man.jexec.8;. Use &man.jls.8; to identify the jail's JID, then use &man.jexec.8; to run the shutdown script in that jail. &prompt.root; jls JID IP Address Hostname Path 3 192.168.0.10 www /usr/jail/www &prompt.root; jexec 3 /etc/rc.shutdown More information about this can be found in the &man.jail.8; manual page. Fine Tuning and Administration There are several options which can be set for any jail, and various ways of combining a host &os; system with jails, to produce higher level applications. 
This section presents: Some of the options available for tuning the behavior and security restrictions implemented by a jail installation. Some of the high-level applications for jail management, which are available through the &os; Ports Collection, and can be used to implement overall jail-based solutions. System Tools for Jail Tuning in &os; Fine tuning of a jail's configuration is mostly done by setting &man.sysctl.8; variables. A special subtree of sysctl exists as a basis for organizing all the relevant options: the security.jail.* hierarchy of &os; kernel options. Here is a list of the main jail-related sysctls, complete with their default value. Names should be self-explanatory, but for more information about them, please refer to the &man.jail.8; and &man.sysctl.8; manual pages. security.jail.set_hostname_allowed: 1 security.jail.socket_unixiproute_only: 1 security.jail.sysvipc_allowed: 0 security.jail.enforce_statfs: 2 security.jail.allow_raw_sockets: 0 security.jail.chflags_allowed: 0 security.jail.jailed: 0 These variables can be used by the system administrator of the host system to add or remove some of the limitations imposed by default on the root user. Note that there are some limitations which cannot be removed. The root user is not allowed to mount or unmount file systems from within a &man.jail.8;. The root inside a jail may not load or unload &man.devfs.8; rulesets, set firewall rules, or do many other administrative tasks which require modifications of in-kernel data, such as setting the securelevel of the kernel. The base system of &os; contains a basic set of tools for viewing information about the active jails, and attaching to a jail to run administrative commands. The &man.jls.8; and &man.jexec.8; commands are part of the base &os; system, and can be used to perform the following simple tasks: Print a list of active jails and their corresponding jail identifier (JID), IP address, hostname and path. Attach to a running jail, from its host system, and run a command inside the jail or perform administrative tasks inside the jail itself. This is especially useful when the root user wants to cleanly shut down a jail. The &man.jexec.8; utility can also be used to start a shell in a jail to do administration in it; for example: &prompt.root; jexec 1 tcsh High-Level Administrative Tools in the &os; Ports Collection Among the many third-party utilities for jail administration, one of the most complete and useful is sysutils/ezjail. It is a set of scripts that contribute to &man.jail.8; management. Please refer to the handbook section on ezjail for more information. Keeping Jails Patched and up to Date Jails should be kept up to date from the host operating system as attempting to patch userland from within the jail - may likely fail as the default behaviour in FreeBSD is to + may likely fail as the default behavior in FreeBSD is to disallow the use of &man.chflags.1; in a jail which prevents the replacement of some files. It is possible to change this behavior but it is recommended to use &man.freebsd-update.8; to maintain jails instead. Use to specify the path of the jail to be updated. &prompt.root; freebsd-update -b /here/is/the/jail fetch &prompt.root; freebsd-update -b /here/is/the/jail install Updating Multiple Jails Daniel Gerzo Contributed by Simon L. B. Nielsen Based upon an idea presented by Ken Tom And an article written by The management of multiple jails can become problematic because every jail has to be rebuilt from scratch whenever it is upgraded. 
This can be time consuming and tedious if a lot of jails are created and manually updated. This section demonstrates one method to resolve this issue by safely sharing as much as is possible between jails using read-only &man.mount.nullfs.8; mounts, so that updating is simpler. This makes it more attractive to put single services, such as HTTP, DNS, and SMTP, into individual jails. Additionally, it provides a simple way to add, remove, and upgrade jails. Simpler solutions exist, such as ezjail, which provides an easier method of administering &os; jails but is less versatile than this setup. ezjail is covered in more detail in . The goals of the setup described in this section are: Create a simple and easy to understand jail structure that does not require running a full installworld on each and every jail. Make it easy to add new jails or remove existing ones. Make it easy to update or upgrade existing jails. Make it possible to run a customized &os; branch. Be paranoid about security, reducing as much as possible the possibility of compromise. Save space and inodes, as much as possible. This design relies on a single, read-only master template which is mounted into each jail and one read-write device per jail. A device can be a separate physical disc, a partition, or a vnode backed memory device. This example uses read-write nullfs mounts. The file system layout is as follows: The jails are based under the /home partition. Each jail will be mounted under the /home/j directory. The template for each jail and the read-only partition for all of the jails is /home/j/mroot. A blank directory will be created for each jail under the /home/j directory. Each jail will have a /s directory that will be linked to the read-write portion of the system. Each jail will have its own read-write system that is based upon /home/j/skel. The read-write portion of each jail will be created in /home/js. Creating the Template This section describes the steps needed to create the master template. It is recommended to first update the host &os; system to the latest -RELEASE branch using the instructions in . Additionally, this template uses the sysutils/cpdup package or port and portsnap will be used to download the &os; Ports Collection. First, create a directory structure for the read-only file system which will contain the &os; binaries for the jails. Then, change directory to the &os; source tree and install the read-only file system to the jail template: &prompt.root; mkdir /home/j /home/j/mroot &prompt.root; cd /usr/src &prompt.root; make installworld DESTDIR=/home/j/mroot Next, prepare a &os; Ports Collection for the jails as well as a &os; source tree, which is required for mergemaster: &prompt.root; cd /home/j/mroot &prompt.root; mkdir usr/ports &prompt.root; portsnap -p /home/j/mroot/usr/ports fetch extract &prompt.root; cpdup /usr/src /home/j/mroot/usr/src Create a skeleton for the read-write portion of the system: &prompt.root; mkdir /home/j/skel /home/j/skel/home /home/j/skel/usr-X11R6 /home/j/skel/distfiles &prompt.root; mv etc /home/j/skel &prompt.root; mv usr/local /home/j/skel/usr-local &prompt.root; mv tmp /home/j/skel &prompt.root; mv var /home/j/skel &prompt.root; mv root /home/j/skel Use mergemaster to install missing configuration files. 
Then, remove the extra directories that mergemaster creates: &prompt.root; mergemaster -t /home/j/skel/var/tmp/temproot -D /home/j/skel -i &prompt.root; cd /home/j/skel &prompt.root; rm -R bin boot lib libexec mnt proc rescue sbin sys usr dev Now, symlink the read-write file system to the read-only file system. Ensure that the symlinks are created in the correct s/ locations as the creation of directories in the wrong locations will cause the installation to fail. &prompt.root; cd /home/j/mroot &prompt.root; mkdir s &prompt.root; ln -s s/etc etc &prompt.root; ln -s s/home home &prompt.root; ln -s s/root root &prompt.root; ln -s s/usr-local usr/local &prompt.root; ln -s s/usr-X11R6 usr/X11R6 &prompt.root; ln -s s/distfiles usr/ports/distfiles &prompt.root; ln -s s/tmp tmp &prompt.root; ln -s s/var var As a last step, create a generic /home/j/skel/etc/make.conf containing this line: WRKDIRPREFIX?= /s/portbuild This makes it possible to compile &os; ports inside each jail. Remember that the ports directory is part of the read-only system. The custom path for WRKDIRPREFIX allows builds to be done in the read-write portion of every jail. Creating Jails The jail template can now be used to setup and configure the jails in /etc/rc.conf. This example demonstrates the creation of 3 jails: NS, MAIL and WWW. Add the following lines to /etc/fstab, so that the read-only template for the jails and the read-write space will be available in the respective jails: /home/j/mroot /home/j/ns nullfs ro 0 0 /home/j/mroot /home/j/mail nullfs ro 0 0 /home/j/mroot /home/j/www nullfs ro 0 0 /home/js/ns /home/j/ns/s nullfs rw 0 0 /home/js/mail /home/j/mail/s nullfs rw 0 0 /home/js/www /home/j/www/s nullfs rw 0 0 To prevent fsck from checking nullfs mounts during boot and dump from backing up the read-only nullfs mounts of the jails, the last two columns are both set to 0. Configure the jails in /etc/rc.conf: jail_enable="YES" jail_set_hostname_allow="NO" jail_list="ns mail www" jail_ns_hostname="ns.example.org" jail_ns_ip="192.168.3.17" jail_ns_rootdir="/usr/home/j/ns" jail_ns_devfs_enable="YES" jail_mail_hostname="mail.example.org" jail_mail_ip="192.168.3.18" jail_mail_rootdir="/usr/home/j/mail" jail_mail_devfs_enable="YES" jail_www_hostname="www.example.org" jail_www_ip="62.123.43.14" jail_www_rootdir="/usr/home/j/www" jail_www_devfs_enable="YES" The jail_name_rootdir variable is set to /usr/home instead of /home because the physical path of /home on a default &os; installation is /usr/home. The jail_name_rootdir variable must not be set to a path which includes a symbolic link, otherwise the jails will refuse to start. Create the required mount points for the read-only file system of each jail: &prompt.root; mkdir /home/j/ns /home/j/mail /home/j/www Install the read-write template into each jail using sysutils/cpdup: &prompt.root; mkdir /home/js &prompt.root; cpdup /home/j/skel /home/js/ns &prompt.root; cpdup /home/j/skel /home/js/mail &prompt.root; cpdup /home/j/skel /home/js/www In this phase, the jails are built and prepared to run. First, mount the required file systems for each jail, and then start them: &prompt.root; mount -a &prompt.root; service jail start The jails should be running now. To check if they have started correctly, use jls. 
Its output should be similar to the following: &prompt.root; jls JID IP Address Hostname Path 3 192.168.3.17 ns.example.org /home/j/ns 2 192.168.3.18 mail.example.org /home/j/mail 1 62.123.43.14 www.example.org /home/j/www At this point, it should be possible to log onto each jail, add new users, or configure daemons. The JID column indicates the jail identification number of each running jail. Use the following command to perform administrative tasks in the jail whose JID is 3: &prompt.root; jexec 3 tcsh Upgrading The design of this setup provides an easy way to upgrade existing jails while minimizing their downtime. Also, it provides a way to roll back to the older version should a problem occur. The first step is to upgrade the host system. Then, create a new temporary read-only template in /home/j/mroot2. &prompt.root; mkdir /home/j/mroot2 &prompt.root; cd /usr/src &prompt.root; make installworld DESTDIR=/home/j/mroot2 &prompt.root; cd /home/j/mroot2 &prompt.root; cpdup /usr/src usr/src &prompt.root; mkdir s The installworld creates a few unnecessary directories, which should be removed: &prompt.root; chflags -R 0 var &prompt.root; rm -R etc var root usr/local tmp Recreate the read-write symlinks for the master file system: &prompt.root; ln -s s/etc etc &prompt.root; ln -s s/root root &prompt.root; ln -s s/home home &prompt.root; ln -s ../s/usr-local usr/local &prompt.root; ln -s ../s/usr-X11R6 usr/X11R6 &prompt.root; ln -s s/tmp tmp &prompt.root; ln -s s/var var Next, stop the jails: &prompt.root; service jail stop Unmount the original file systems as the read-write systems are attached to the read-only system (/s): &prompt.root; umount /home/j/ns/s &prompt.root; umount /home/j/ns &prompt.root; umount /home/j/mail/s &prompt.root; umount /home/j/mail &prompt.root; umount /home/j/www/s &prompt.root; umount /home/j/www Move the old read-only file system and replace it with the new one. This will serve as a backup and archive of the old read-only file system should something go wrong. The naming convention used here corresponds to when a new read-only file system has been created. Move the original &os; Ports Collection over to the new file system to save some space and inodes: &prompt.root; cd /home/j &prompt.root; mv mroot mroot.20060601 &prompt.root; mv mroot2 mroot &prompt.root; mv mroot.20060601/usr/ports mroot/usr At this point the new read-only template is ready, so the only remaining task is to remount the file systems and start the jails: &prompt.root; mount -a &prompt.root; service jail start Use jls to check if the jails started correctly. Run mergemaster in each jail to update the configuration files. Managing Jails with <application>ezjail</application> Warren Block Originally contributed by Creating and managing multiple jails can quickly become tedious and error-prone. Dirk Engling's ezjail automates and greatly simplifies many jail tasks. A basejail is created as a template. Additional jails use &man.mount.nullfs.8; to share many of the basejail directories without using additional disk space. Each additional jail takes only a few megabytes of disk space before applications are installed. Upgrading the copy of the userland in the basejail automatically upgrades all of the other jails. Additional benefits and features are described in detail on the ezjail web site, . Installing <application>ezjail</application> Installing ezjail consists of adding a loopback interface for use in jails, installing the port or package, and enabling the service. 
To keep jail loopback traffic off the host's loopback network interface lo0, a second loopback interface is created by adding an entry to /etc/rc.conf: cloned_interfaces="lo1" The second loopback interface lo1 will be created when the system starts. It can also be created manually without a restart: &prompt.root; service netif cloneup Created clone interfaces: lo1. Jails can be allowed to use aliases of this secondary loopback interface without interfering with the host. Inside a jail, access to the loopback address 127.0.0.1 is redirected to the first IP address assigned to the jail. To make the jail loopback correspond with the new lo1 interface, that interface must be specified first in the list of interfaces and IP addresses given when creating a new jail. Give each jail a unique loopback address in the 127.0.0.0/8 netblock. Install sysutils/ezjail: &prompt.root; cd /usr/ports/sysutils/ezjail &prompt.root; make install clean Enable ezjail by adding this line to /etc/rc.conf: ezjail_enable="YES" The service will automatically start on system boot. It can be started immediately for the current session: &prompt.root; service ezjail start Initial Setup With ezjail installed, the basejail directory structure can be created and populated. This step is only needed once on the jail host computer. In both of these examples, causes the ports tree to be retrieved with &man.portsnap.8; into the basejail. That single copy of the ports directory will be shared by all the jails. Using a separate copy of the ports directory for jails isolates them from the host. The ezjail FAQ explains in more detail: . To Populate the Jail with &os;-RELEASE For a basejail based on the &os; RELEASE matching that of the host computer, use install. For example, on a host computer running &os; 10-STABLE, the latest RELEASE version of &os; -10 will be installed in the jail): &prompt.root; ezjail-admin install -p To Populate the Jail with <command>installworld</command> The basejail can be installed from binaries created by buildworld on the host with ezjail-admin update. In this example, &os; 10-STABLE has been built from source. The jail directories are created. Then installworld is executed, installing the host's /usr/obj into the basejail. &prompt.root; ezjail-admin update -i -p The host's /usr/src is used by default. A different source directory on the host can be specified with and a path, or set with ezjail_sourcetree in /usr/local/etc/ezjail.conf. The basejail's ports tree is shared by other jails. However, downloaded distfiles are stored in the jail that downloaded them. By default, these files are stored in /var/ports/distfiles within each jail. /var/ports inside each jail is also used as a work directory when building ports. Creating and Starting a New Jail New jails are created with ezjail-admin create. In these examples, the lo1 loopback interface is used as described above. Create and Start a New Jail Create the jail, specifying a name and the loopback and network interfaces to use, along with their IP addresses. In this example, the jail is named dnsjail. &prompt.root; ezjail-admin create dnsjail 'lo1|127.0.1.1,em0|192.168.1.50' Most network services run in jails without problems. A few network services, most notably &man.ping.8;, use raw network sockets. In jails, raw network sockets are disabled by default for security. Services that require them will not work. Occasionally, a jail genuinely needs raw sockets. 
For example, network monitoring applications often use &man.ping.8; to check the availability of other computers. When raw network sockets are actually needed in a jail, they can be enabled by editing the ezjail configuration file for the individual jail, /usr/local/etc/ezjail/jailname. Modify the parameters entry: export jail_jailname_parameters="allow.raw_sockets=1" Do not enable raw network sockets unless services in the jail actually require them. Start the jail: &prompt.root; ezjail-admin start dnsjail Use a console on the jail: &prompt.root; ezjail-admin console dnsjail The jail is operating and additional configuration can be completed. Typical settings added at this point include: Set the <systemitem class="username">root</systemitem> Password Connect to the jail and set the root user's password: &prompt.root; ezjail-admin console dnsjail &prompt.root; passwd Changing local password for root New Password: Retype New Password: Time Zone Configuration The jail's time zone can be set with &man.tzsetup.8;. To avoid spurious error messages, the &man.adjkerntz.8; entry in /etc/crontab can be commented or removed. This job attempts to update the computer's hardware clock with time zone changes, but jails are not allowed to access that hardware. <acronym>DNS</acronym> Servers Enter domain name server lines in /etc/resolv.conf so DNS works in the jail. Edit <filename>/etc/hosts</filename> Change the address and add the jail name to the localhost entries in /etc/hosts. Configure <filename>/etc/rc.conf</filename> Enter configuration settings in /etc/rc.conf. This is much like configuring a full computer. The host name and IP address are not set here. Those values are already provided by the jail configuration. With the jail configured, the applications for which the jail was created can be installed. Some ports must be built with special options to be used in a jail. For example, both of the network monitoring plugin packages net-mgmt/nagios-plugins and net-mgmt/monitoring-plugins have a JAIL option which must be enabled for them to work correctly inside a jail. Updating Jails Updating the Operating System Because the basejail's copy of the userland is shared by the other jails, updating the basejail automatically updates all of the other jails. Either source or binary updates can be used. To build the world from source on the host, then install it in the basejail, use: &prompt.root; ezjail-admin update -b If the world has already been compiled on the host, install it in the basejail with: &prompt.root; ezjail-admin update -i Binary updates use &man.freebsd-update.8;. These updates have the same limitations as if &man.freebsd-update.8; were being run directly. The most important one is that only -RELEASE versions of &os; are available with this method. Update the basejail to the latest patched release of the version of &os; on the host. For example, updating from RELEASE-p1 to RELEASE-p2. &prompt.root; ezjail-admin update -u To upgrade the basejail to a new version, first upgrade the host system as described in . Once the host has been upgraded and rebooted, the basejail can then be upgraded. &man.freebsd-update.8; has no way of determining which version is currently installed in the basejail, so the original version must be specified. 
Use &man.file.1; to determine the original version in the basejail: &prompt.root; file /usr/jails/basejail/bin/sh /usr/jails/basejail/bin/sh: ELF 64-bit LSB executable, x86-64, version 1 (FreeBSD), dynamically linked (uses shared libs), for FreeBSD 9.3, stripped Now use this information to perform the upgrade from 9.3-RELEASE to the current version of the host system: &prompt.root; ezjail-admin update -U -s 9.3-RELEASE After updating the basejail, &man.mergemaster.8; must be run to update each jail's configuration files. How to use &man.mergemaster.8; depends on the purpose and trustworthiness of a jail. If a jail's services or users are not trusted, then &man.mergemaster.8; should only be run from within that jail: &man.mergemaster.8; on Untrusted Jail Delete the link from the jail's /usr/src into the basejail and create a new /usr/src in the jail as a mountpoint. Mount the host computer's /usr/src read-only on the jail's new /usr/src mountpoint: &prompt.root; rm /usr/jails/jailname/usr/src &prompt.root; mkdir /usr/jails/jailname/usr/src &prompt.root; mount -t nullfs -o ro /usr/src /usr/jails/jailname/usr/src Get a console in the jail: &prompt.root; ezjail-admin console jailname Inside the jail, run mergemaster. Then exit the jail console: &prompt.root; cd /usr/src &prompt.root; mergemaster -U &prompt.root; exit Finally, unmount the jail's /usr/src: &prompt.root; umount /usr/jails/jailname/usr/src &man.mergemaster.8; on Trusted Jail If the users and services in a jail are trusted, &man.mergemaster.8; can be run from the host: &prompt.root; mergemaster -U -D /usr/jails/jailname Updating Ports The ports tree in the basejail is shared by the other jails. Updating that copy of the ports tree gives the other jails the updated version also. The basejail ports tree is updated with &man.portsnap.8;: &prompt.root; ezjail-admin update -P Controlling Jails Stopping and Starting Jails ezjail automatically starts jails when the computer is started. Jails can be manually stopped and restarted with stop and start: &prompt.root; ezjail-admin stop sambajail Stopping jails: sambajail. By default, jails are started automatically when the host computer starts. Autostarting can be disabled with config: &prompt.root; ezjail-admin config -r norun seldomjail This takes effect the next time the host computer is started. A jail that is already running will not be stopped. Enabling autostart is very similar: &prompt.root; ezjail-admin config -r run oftenjail Archiving and Restoring Jails Use archive to create a .tar.gz archive of a jail. The file name is composed from the name of the jail and the current date. Archive files are written to the archive directory, /usr/jails/ezjail_archives. A different archive directory can be chosen by setting ezjail_archivedir in the configuration file. The archive file can be copied elsewhere as a backup, or an existing jail can be restored from it with restore. A new jail can be created from the archive, providing a convenient way to clone existing jails. Stop and archive a jail named wwwserver: &prompt.root; ezjail-admin stop wwwserver Stopping jails: wwwserver. &prompt.root; ezjail-admin archive wwwserver &prompt.root; ls /usr/jails/ezjail-archives/ wwwserver-201407271153.13.tar.gz Create a new jail named wwwserver-clone from the archive created in the previous step. 
Use the em1 interface and assign a new IP address to avoid conflict with the original: &prompt.root; ezjail-admin create -a /usr/jails/ezjail_archives/wwwserver-201407271153.13.tar.gz wwwserver-clone 'lo1|127.0.3.1,em1|192.168.1.51' Full Example: <application>BIND</application> in a Jail Putting the BIND DNS server in a jail improves security by isolating it. This example creates a simple caching-only name server. The jail will be called dns1. The jail will use IP address 192.168.1.240 on the host's re0 interface. The upstream ISP's DNS servers are at 10.0.0.62 and 10.0.0.61. The basejail has already been created and a ports tree installed. Running BIND in a Jail Create a cloned loopback interface by adding a line to /etc/rc.conf: cloned_interfaces="lo1" Immediately create the new loopback interface: &prompt.root; service netif cloneup Created clone interfaces: lo1. Create the jail: &prompt.root; ezjail-admin create dns1 'lo1|127.0.2.1,re0|192.168.1.240' Start the jail, connect to a console running on it, and perform some basic configuration: &prompt.root; ezjail-admin start dns1 &prompt.root; ezjail-admin console dns1 &prompt.root; passwd Changing local password for root New Password: Retype New Password: &prompt.root; tzsetup &prompt.root; sed -i .bak -e '/adjkerntz/ s/^/#/' /etc/crontab &prompt.root; sed -i .bak -e 's/127.0.0.1/127.0.2.1/g; s/localhost.my.domain/dns1.my.domain dns1/' /etc/hosts Temporarily set the upstream DNS servers in /etc/resolv.conf so ports can be downloaded: nameserver 10.0.0.62 nameserver 10.0.0.61 Still using the jail console, install dns/bind99. &prompt.root; make -C /usr/ports/dns/bind99 install clean Configure the name server by editing /usr/local/etc/namedb/named.conf. Create an Access Control List (ACL) of addresses and networks that are permitted to send DNS queries to this name server. This section is added just before the options section already in the file: ... // or cause huge amounts of useless Internet traffic. acl "trusted" { 192.168.1.0/24; localhost; localnets; }; options { ... Use the jail IP address in the listen-on setting to accept DNS queries from other computers on the network: listen-on { 192.168.1.240; }; A simple caching-only DNS name server is created by changing the forwarders section. The original file contains: /* forwarders { 127.0.0.1; }; */ Uncomment the section by removing the /* and */ lines. Enter the IP addresses of the upstream DNS servers. Immediately after the forwarders section, add references to the trusted ACL defined earlier: forwarders { 10.0.0.62; 10.0.0.61; }; allow-query { any; }; allow-recursion { trusted; }; allow-query-cache { trusted; }; Enable the service in /etc/rc.conf: named_enable="YES" Start and test the name server: &prompt.root; service named start wrote key file "/usr/local/etc/namedb/rndc.key" Starting named. &prompt.root; /usr/local/bin/dig @192.168.1.240 freebsd.org A response that includes ;; Got answer; shows that the new DNS server is working. A long delay followed by a response including ;; connection timed out; no servers could be reached shows a problem. Check the configuration settings and make sure any local firewalls allow the new DNS access to the upstream DNS servers. The new DNS server can use itself for local name resolution, just like other local computers. 
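Once the name server answers queries correctly, the temporary upstream entries placed in the jail's own /etc/resolv.conf earlier can be replaced so that the jail also resolves names through the new service. This is a minimal sketch using the addresses from this example; the address must match the listen-on setting chosen above:
# /etc/resolv.conf inside the dns1 jail, after named is confirmed working
nameserver 192.168.1.240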
Set the address of the DNS server in the client computer's /etc/resolv.conf: nameserver 192.168.1.240 A local DHCP server can be configured to provide this address for a local DNS server, providing automatic configuration on DHCP clients. Index: head/en_US.ISO8859-1/books/handbook/network-servers/chapter.xml =================================================================== --- head/en_US.ISO8859-1/books/handbook/network-servers/chapter.xml (revision 48481) +++ head/en_US.ISO8859-1/books/handbook/network-servers/chapter.xml (revision 48482) @@ -1,5788 +1,5788 @@ Network Servers Synopsis This chapter covers some of the more frequently used network services on &unix; systems. This includes installing, configuring, testing, and maintaining many different types of network services. Example configuration files are included throughout this chapter for reference. By the end of this chapter, readers will know: How to manage the inetd daemon. How to set up the Network File System (NFS). How to set up the Network Information Server (NIS) for centralizing and sharing user accounts. How to set &os; up to act as an LDAP server or client How to set up automatic network settings using DHCP. How to set up a Domain Name Server (DNS). How to set up the Apache HTTP Server. How to set up a File Transfer Protocol (FTP) server. How to set up a file and print server for &windows; clients using Samba. How to synchronize the time and date, and set up a time server using the Network Time Protocol (NTP). How to set up iSCSI. This chapter assumes a basic knowledge of: /etc/rc scripts. Network terminology. Installation of additional third-party software (). The <application>inetd</application> Super-Server The &man.inetd.8; daemon is sometimes referred to as a Super-Server because it manages connections for many services. Instead of starting multiple applications, only the inetd service needs to be started. When a connection is received for a service that is managed by inetd, it determines which program the connection is destined for, spawns a process for that program, and delegates the program a socket. Using inetd for services that are not heavily used can reduce system load, when compared to running each daemon individually in stand-alone mode. Primarily, inetd is used to spawn other daemons, but several trivial protocols are handled internally, such as chargen, auth, time, echo, discard, and daytime. This section covers the basics of configuring inetd. Configuration File Configuration of inetd is done by editing /etc/inetd.conf. Each line of this configuration file represents an application which can be started by inetd. By default, every line starts with a comment (#), meaning that inetd is not listening for any applications. To configure inetd to listen for an application's connections, remove the # at the beginning of the line for that application. After saving your edits, configure inetd to start at system boot by editing /etc/rc.conf: inetd_enable="YES" To start inetd now, so that it listens for the service you configured, type: &prompt.root; service inetd start Once inetd is started, it needs to be notified whenever a modification is made to /etc/inetd.conf: Reloading the <application>inetd</application> Configuration File &prompt.root; service inetd reload Typically, the default entry for an application does not need to be edited beyond removing the #. In some situations, it may be appropriate to edit the default entry. 
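For instance, enabling the stock FTP service only requires removing the leading # from its entry and reloading the configuration. The following is a minimal sketch based on the default file; the entry itself is examined column by column next:
# Default entry in /etc/inetd.conf, disabled:
#ftp	stream	tcp	nowait	root	/usr/libexec/ftpd	ftpd -l
# After removing the leading #:
ftp	stream	tcp	nowait	root	/usr/libexec/ftpd	ftpd -l
&prompt.root; service inetd reload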
As an example, this is the default entry for &man.ftpd.8; over IPv4: ftp stream tcp nowait root /usr/libexec/ftpd ftpd -l The seven columns in an entry are as follows: service-name socket-type protocol {wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]] user[:group][/login-class] server-program server-program-arguments where: service-name The service name of the daemon to start. It must correspond to a service listed in /etc/services. This determines which port inetd listens on for incoming connections to that service. When using a custom service, it must first be added to /etc/services. socket-type Either stream, dgram, raw, or seqpacket. Use stream for TCP connections and dgram for UDP services. protocol Use one of the following protocol names: Protocol Name Explanation tcp or tcp4 TCP IPv4 udp or udp4 UDP IPv4 tcp6 TCP IPv6 udp6 UDP IPv6 tcp46 Both TCP IPv4 and IPv6 udp46 Both UDP IPv4 and IPv6 {wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]] In this field, wait or nowait must be specified. max-child, max-connections-per-ip-per-minute, and max-child-per-ip are optional. wait|nowait indicates whether or not the service is able to handle its own socket. dgram socket types must use wait, while stream daemons, which are usually multi-threaded, should use nowait. wait usually hands off multiple sockets to a single daemon, while nowait spawns a child daemon for each new socket. The maximum number of child daemons inetd may spawn is set by max-child. For example, to limit ten instances of the daemon, place a /10 after nowait. Specifying /0 allows an unlimited number of children. max-connections-per-ip-per-minute limits the number of connections from any particular IP address per minute. Once the limit is reached, further connections from this IP address will be dropped until the end of the minute. For example, a value of /10 would limit any particular IP address to ten connection attempts per minute. max-child-per-ip limits the number of child processes that can be started on behalf of any single IP address at any moment. These options can limit excessive resource consumption and help to prevent Denial of Service attacks. An example can be seen in the default settings for &man.fingerd.8;: finger stream tcp nowait/3/10 nobody /usr/libexec/fingerd fingerd -k -s user The username the daemon will run as. Daemons typically run as root, daemon, or nobody. server-program The full path to the daemon. If the daemon is a service provided by inetd internally, use internal. server-program-arguments Used to specify any command arguments to be passed to the daemon on invocation. If the daemon is an internal service, use internal. Command-Line Options Like most server daemons, inetd has a number of options that can be used to modify its - behaviour. By default, inetd is + behavior. By default, inetd is started with -wW -C 60. These options enable TCP wrappers for all services, including internal services, and prevent any IP address from requesting any service more than 60 times per minute. To change the default options which are passed to inetd, add an entry for inetd_flags in /etc/rc.conf. If inetd is already running, restart it with service inetd restart. The available rate limiting options are: -c maximum Specify the default maximum number of simultaneous invocations of each service, where the default is unlimited. May be overridden on a per-service basis by using max-child in /etc/inetd.conf. -C rate Specify the default maximum number of times a service can be invoked from a single IP address per minute. May be overridden on a per-service basis by using max-connections-per-ip-per-minute in /etc/inetd.conf.
-R rate Specify the maximum number of times a service can be invoked in one minute, where the default is 256. A rate of 0 allows an unlimited number. -s maximum Specify the maximum number of times a service can be invoked from a single IP address at any one time, where the default is unlimited. May be overridden on a per-service basis by using max-child-per-ip in /etc/inetd.conf. Additional options are available. Refer to &man.inetd.8; for the full list of options. Security Considerations Many of the daemons which can be managed by inetd are not security-conscious. Some daemons, such as fingerd, can provide information that may be useful to an attacker. Only enable the services which are needed and monitor the system for excessive connection attempts. max-connections-per-ip-per-minute, max-child and max-child-per-ip can be used to limit such attacks. By default, TCP wrappers are enabled. Consult &man.hosts.access.5; for more information on placing TCP restrictions on various inetd invoked daemons. Network File System (NFS) Tom Rhodes Reorganized and enhanced by Bill Swingle Written by NFS &os; supports the Network File System (NFS), which allows a server to share directories and files with clients over a network. With NFS, users and programs can access files on remote systems as if they were stored locally. NFS has many practical uses. Some of the more common uses include: Data that would otherwise be duplicated on each client can be kept in a single location and accessed by clients on the network. Several clients may need access to the /usr/ports/distfiles directory. Sharing that directory allows for quick access to the source files without having to download them to each client. On large networks, it is often more convenient to configure a central NFS server on which all user home directories are stored. Users can log into a client anywhere on the network and have access to their home directories. Administration of NFS exports is simplified. For example, there is only one file system where security or backup policies must be set. Removable media storage devices can be used by other machines on the network. This reduces the number of devices throughout the network and provides a centralized location to manage their security. It is often more convenient to install software on multiple machines from a centralized installation media. NFS consists of a server and one or more clients. The client remotely accesses the data that is stored on the server machine. In order for this to function properly, a few processes have to be configured and running. These daemons must be running on the server: NFS server file server UNIX clients rpcbind mountd nfsd Daemon Description nfsd The NFS daemon which services requests from NFS clients. mountd The NFS mount daemon which carries out requests received from nfsd. rpcbind This daemon allows NFS clients to discover which port the NFS server is using. Running &man.nfsiod.8; on the client can improve performance, but is not required. Configuring the Server NFS configuration The file systems which the NFS server will share are specified in /etc/exports. Each line in this file specifies a file system to be exported, which clients have access to that file system, and any access options. When adding entries to this file, each exported file system, its properties, and allowed hosts must occur on a single line. If no clients are listed in the entry, then any client on the network can mount that file system.
NFS export examples The following /etc/exports entries demonstrate how to export file systems. The examples can be modified to match the file systems and client names on the reader's network. There are many options that can be used in this file, but only a few will be mentioned here. See &man.exports.5; for the full list of options. This example shows how to export /cdrom to three hosts named alpha, bravo, and charlie: /cdrom -ro alpha bravo charlie The -ro flag makes the file system read-only, preventing clients from making any changes to the exported file system. This example assumes that the host names are either in DNS or in /etc/hosts. Refer to &man.hosts.5; if the network does not have a DNS server. The next example exports /home to three clients by IP address. This can be useful for networks without DNS or /etc/hosts entries. The -alldirs flag allows subdirectories to be mount points. In other words, it will not automatically mount the subdirectories, but will permit the client to mount the directories that are required as needed. /usr/home -alldirs 10.0.0.2 10.0.0.3 10.0.0.4 This next example exports /a so that two clients from different domains may access that file system. The -maproot=root flag allows root on the remote system to write data on the exported file system as root. If -maproot=root is not specified, the client's root user will be mapped to the server's nobody account and will be subject to the access limitations defined for nobody. /a -maproot=root host.example.com box.example.org A client can only be specified once per file system. For example, if /usr is a single file system, these entries would be invalid as both entries specify the same host: # Invalid when /usr is one file system /usr/src client /usr/ports client The correct format for this situation is to use one entry: /usr/src /usr/ports client The following is an example of a valid export list, where /usr and /exports are local file systems: # Export src and ports to client01 and client02, but only # client01 has root privileges on it /usr/src /usr/ports -maproot=root client01 /usr/src /usr/ports client02 # The client machines have root and can mount anywhere # on /exports. Anyone in the world can mount /exports/obj read-only /exports -alldirs -maproot=root client01 client02 /exports/obj -ro To enable the processes required by the NFS server at boot time, add these options to /etc/rc.conf: rpcbind_enable="YES" nfs_server_enable="YES" mountd_flags="-r" The server can be started now by running this command: &prompt.root; service nfsd start Whenever the NFS server is started, mountd also starts automatically. However, mountd only reads /etc/exports when it is started. To make subsequent /etc/exports edits take effect immediately, force mountd to reread it: &prompt.root; service mountd reload Configuring the Client To enable NFS clients, set this option in each client's /etc/rc.conf: nfs_client_enable="YES" Then, run this command on each NFS client: &prompt.root; service nfsclient start The client now has everything it needs to mount a remote file system. In these examples, the server's name is server and the client's name is client. To mount /home on server to the /mnt mount point on client: NFS mounting &prompt.root; mount server:/home /mnt The files and directories in /home will now be available on client, in the /mnt directory. To mount a remote file system each time the client boots, add it to /etc/fstab: server:/home /mnt nfs rw 0 0 Refer to &man.fstab.5; for a description of all available options.
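When an NFS entry in /etc/fstab should not stall the boot process if the server is unreachable, mount options can be combined in the fourth field. The following is only a sketch; the choice of bg and soft here is an assumption about the desired behavior rather than a requirement, and soft mounts trade reliability for responsiveness:
# Retry the mount in the background if the server is down at boot (bg),
# and let stalled requests eventually fail instead of hanging forever (soft).
server:/home /mnt nfs rw,bg,soft 0 0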
Locking Some applications require file locking to operate correctly. To enable locking, add these lines to /etc/rc.conf on both the client and server: rpc_lockd_enable="YES" rpc_statd_enable="YES" Then start the applications: &prompt.root; service lockd start &prompt.root; service statd start If locking is not required on the server, the NFS client can be configured to lock locally by including -L when running mount. Refer to &man.mount.nfs.8; for further details. Automating Mounts with &man.amd.8; Wylie Stilwell Contributed by Chern Lee Rewritten by amd automatic mounter daemon The automatic mounter daemon, amd, automatically mounts a remote file system whenever a file or directory within that file system is accessed. File systems that are inactive for a period of time will be automatically unmounted by amd. This daemon provides an alternative to modifying /etc/fstab to list every client. It operates by attaching itself as an NFS server to the /host and /net directories. When a file is accessed within one of these directories, amd looks up the corresponding remote mount and automatically mounts it. /net is used to mount an exported file system from an IP address while /host is used to mount an export from a remote hostname. For instance, an attempt to access a file within /host/foobar/usr would tell amd to mount the /usr export on the host foobar. Mounting an Export with <application>amd</application> In this example, showmount -e shows the exported file systems that can be mounted from the NFS server, foobar: &prompt.user; showmount -e foobar Exports list on foobar: /usr 10.10.10.0 /a 10.10.10.0 &prompt.user; cd /host/foobar/usr The output from showmount shows /usr as an export. When changing directories to /host/foobar/usr, amd intercepts the request and attempts to resolve the hostname foobar. If successful, amd automatically mounts the desired export. To enable amd at boot time, add this line to /etc/rc.conf: amd_enable="YES" To start amd now: &prompt.root; service amd start Custom flags can be passed to amd from the amd_flags environment variable. By default, amd_flags is set to: amd_flags="-a /.amd_mnt -l syslog /host /etc/amd.map /net /etc/amd.map" The default options with which exports are mounted are defined in /etc/amd.map. Some of the more advanced features of amd are defined in /etc/amd.conf. Consult &man.amd.8; and &man.amd.conf.5; for more information. Automating Mounts with &man.autofs.5; The &man.autofs.5; automount facility is supported starting with &os; 10.1-RELEASE. To use the automounter functionality in older versions of &os;, use &man.amd.8; instead. This chapter only describes the &man.autofs.5; automounter. autofs automounter subsystem The &man.autofs.5; facility is a common name for several components that, together, allow for automatic mounting of remote and local filesystems whenever a file or directory within that file system is accessed. It consists of the kernel component, &man.autofs.5;, and several userspace applications: &man.automount.8;, &man.automountd.8; and &man.autounmountd.8;. It serves as an alternative for &man.amd.8; from previous &os; releases. Amd is still provided for backward compatibility purposes, as the two use different map formats; the one used by autofs is the same as with other SVR4 automounters, such as the ones in Solaris, MacOS X, and Linux. The &man.autofs.5; virtual filesystem is mounted on specified mountpoints by &man.automount.8;, usually invoked during boot.
Whenever a process attempts to access a file within the &man.autofs.5; mountpoint, the kernel will notify the &man.automountd.8; daemon and pause the triggering process. The &man.automountd.8; daemon will handle kernel requests by finding the proper map and mounting the filesystem according to it, then signal the kernel to release the blocked process. The &man.autounmountd.8; daemon automatically unmounts automounted filesystems after some time, unless they are still being used. The primary autofs configuration file is /etc/auto_master. It assigns individual maps to top-level mounts. For an explanation of auto_master and the map syntax, refer to &man.auto.master.5;. There is a special automounter map mounted on /net. When a file is accessed within this directory, &man.autofs.5; looks up the corresponding remote mount and automatically mounts it. For instance, an attempt to access a file within /net/foobar/usr would tell &man.automountd.8; to mount the /usr export from the host foobar. Mounting an Export with &man.autofs.5; In this example, showmount -e shows the exported file systems that can be mounted from the NFS server, foobar: &prompt.user; showmount -e foobar Exports list on foobar: /usr 10.10.10.0 /a 10.10.10.0 &prompt.user; cd /net/foobar/usr The output from showmount shows /usr as an export. When changing directories to /net/foobar/usr, &man.automountd.8; intercepts the request and attempts to resolve the hostname foobar. If successful, &man.automountd.8; automatically mounts the desired export. To enable &man.autofs.5; at boot time, add this line to /etc/rc.conf: autofs_enable="YES" Then &man.autofs.5; can be started by running: &prompt.root; service automount start &prompt.root; service automountd start &prompt.root; service autounmountd start Since the &man.autofs.5; map format is the same as in other operating systems, it might be desirable to consult information from other operating systems, such as the Mac OS X documentation. Consult the &man.automount.8;, &man.automountd.8;, &man.autounmountd.8;, and &man.auto.master.5; manual pages for more information. Network Information System (<acronym>NIS</acronym>) NIS Solaris HP-UX AIX Linux NetBSD OpenBSD yellow pages NIS Network Information System (NIS) is designed to centralize administration of &unix;-like systems such as &solaris;, HP-UX, &aix;, Linux, NetBSD, OpenBSD, and &os;. NIS was originally known as Yellow Pages but the name was changed due to trademark issues. This is the reason why NIS commands begin with yp. NIS domains NIS is a Remote Procedure Call (RPC)-based client/server system that allows a group of machines within an NIS domain to share a common set of configuration files. This permits a system administrator to set up NIS client systems with only minimal configuration data and to add, remove, or modify configuration data from a single location. &os; uses version 2 of the NIS protocol. <acronym>NIS</acronym> Terms and Processes Table 28.1 summarizes the terms and important processes used by NIS: rpcbind portmap <acronym>NIS</acronym> Terminology Term Description NIS domain name NIS servers and clients share an NIS domain name. Typically, this name does not have anything to do with DNS. &man.rpcbind.8; This service enables RPC and must be running in order to run an NIS server or act as an NIS client. &man.ypbind.8; This service binds an NIS client to its NIS server. It will take the NIS domain name and use RPC to connect to the server. It is the core of client/server communication in an NIS environment.
If this service is not running on a client machine, it will not be able to access the NIS server. &man.ypserv.8; This is the process for the NIS server. If this service stops running, the server will no longer be able to respond to NIS requests so hopefully, there is a slave server to take over. Some non-&os; clients will not try to reconnect using a slave server and the ypbind process may need to be restarted on these clients. &man.rpc.yppasswdd.8; This process only runs on NIS master servers. This daemon allows NIS clients to change their NIS passwords. If this daemon is not running, users will have to login to the NIS master server and change their passwords there.
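In practice this means that, as long as &man.rpc.yppasswdd.8; is running on the master, an NIS user logged into any client can update the shared password entry without visiting the master. A minimal sketch, run as the user whose password is being changed; on most configurations the standard &man.passwd.1; detects NIS accounts and behaves the same way:
# Change the NIS password from a client; the request is handled by
# rpc.yppasswdd on the master server.
&prompt.user; yppasswd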
Machine Types NIS master server NIS slave server NIS client There are three types of hosts in an NIS environment: NIS master server This server acts as a central repository for host configuration information and maintains the authoritative copy of the files used by all of the NIS clients. The passwd, group, and other various files used by NIS clients are stored on the master server. While it is possible for one machine to be an NIS master server for more than one NIS domain, this type of configuration will not be covered in this chapter as it assumes a relatively small-scale NIS environment. NIS slave servers NIS slave servers maintain copies of the NIS master's data files in order to provide redundancy. Slave servers also help to balance the load of the master server as NIS clients always attach to the NIS server which responds first. NIS clients NIS clients authenticate against the NIS server during log on. Information in many files can be shared using NIS. The master.passwd, group, and hosts files are commonly shared via NIS. Whenever a process on a client needs information that would normally be found in these files locally, it makes a query to the NIS server that it is bound to instead. Planning Considerations This section describes a sample NIS environment which consists of 15 &os; machines with no centralized point of administration. Each machine has its own /etc/passwd and /etc/master.passwd. These files are kept in sync with each other only through manual intervention. Currently, when a user is added to the lab, the process must be repeated on all 15 machines. The configuration of the lab will be as follows: Machine name IP address Machine role ellington 10.0.0.2 NIS master coltrane 10.0.0.3 NIS slave basie 10.0.0.4 Faculty workstation bird 10.0.0.5 Client machine cli[1-11] 10.0.0.[6-17] Other client machines If this is the first time an NIS scheme is being developed, it should be thoroughly planned ahead of time. Regardless of network size, several decisions need to be made as part of the planning process. Choosing a <acronym>NIS</acronym> Domain Name NIS domain name When a client broadcasts its requests for info, it includes the name of the NIS domain that it is part of. This is how multiple servers on one network can tell which server should answer which request. Think of the NIS domain name as the name for a group of hosts. Some organizations choose to use their Internet domain name for their NIS domain name. This is not recommended as it can cause confusion when trying to debug network problems. The NIS domain name should be unique within the network and it is helpful if it describes the group of machines it represents. For example, the Art department at Acme Inc. might be in the acme-art NIS domain. This example will use the domain name test-domain. However, some non-&os; operating systems require the NIS domain name to be the same as the Internet domain name. If one or more machines on the network have this restriction, the Internet domain name must be used as the NIS domain name. Physical Server Requirements There are several things to keep in mind when choosing a machine to use as a NIS server. Since NIS clients depend upon the availability of the server, choose a machine that is not rebooted frequently. The NIS server should ideally be a stand alone machine whose sole purpose is to be an NIS server. If the network is not heavily used, it is acceptable to put the NIS server on a machine running other services. 
However, if the NIS server becomes unavailable, it will adversely affect all NIS clients. Configuring the <acronym>NIS</acronym> Master Server The canonical copies of all NIS files are stored on the master server. The databases used to store the information are called NIS maps. In &os;, these maps are stored in /var/yp/[domainname] where [domainname] is the name of the NIS domain. Since multiple domains are supported, it is possible to have several directories, one for each domain. Each domain will have its own independent set of maps. NIS master and slave servers handle all NIS requests through &man.ypserv.8;. This daemon is responsible for receiving incoming requests from NIS clients, translating the requested domain and map name to a path to the corresponding database file, and transmitting data from the database back to the client. NIS server configuration Setting up a master NIS server can be relatively straightforward, depending on environmental needs. Since &os; provides built-in NIS support, it only needs to be enabled by adding the following lines to /etc/rc.conf: nisdomainname="test-domain" nis_server_enable="YES" nis_yppasswdd_enable="YES" This line sets the NIS domain name to test-domain. This automates the startup of the NIS server processes when the system boots. This enables the &man.rpc.yppasswdd.8; daemon so that users can change their NIS password from a client machine. Care must be taken in a multi-server domain where the server machines are also NIS clients. It is generally a good idea to force the servers to bind to themselves rather than allowing them to broadcast bind requests and possibly become bound to each other. Strange failure modes can result if one server goes down and others are dependent upon it. Eventually, all the clients will time out and attempt to bind to other servers, but the delay involved can be considerable and the failure mode is still present since the servers might bind to each other all over again. A server that is also a client can be forced to bind to a particular server by adding these additional lines to /etc/rc.conf: nis_client_enable="YES" # run client stuff as well nis_client_flags="-S NIS domain,server" After saving the edits, type /etc/netstart to restart the network and apply the values defined in /etc/rc.conf. Before initializing the NIS maps, start &man.ypserv.8;: &prompt.root; service ypserv start Initializing the <acronym>NIS</acronym> Maps NIS maps NIS maps are generated from the configuration files in /etc on the NIS master, with one exception: /etc/master.passwd. This is to prevent the propagation of passwords to all the servers in the NIS domain. Therefore, before the NIS maps are initialized, configure the primary password files: &prompt.root; cp /etc/master.passwd /var/yp/master.passwd &prompt.root; cd /var/yp &prompt.root; vi master.passwd It is advisable to remove all entries for system accounts as well as any user accounts that do not need to be propagated to the NIS clients, such as the root and any other administrative accounts. Ensure that /var/yp/master.passwd is neither group nor world readable by setting its permissions to 600. After completing this task, initialize the NIS maps. &os; includes the &man.ypinit.8; script to do this. When generating maps for the master server, include -m and specify the NIS domain name: ellington&prompt.root; ypinit -m test-domain Server Type: MASTER Domain: test-domain Creating an YP server will require that you answer a few questions.
Questions will all be asked at the beginning of the procedure. Do you want this procedure to quit on non-fatal errors? [y/n: n] n Ok, please remember to go back and redo manually whatever fails. If not, something might not work. At this point, we have to construct a list of this domains YP servers. rod.darktech.org is already known as master server. Please continue to add any slave servers, one per line. When you are done with the list, type a <control D>. master server : ellington next host to add: coltrane next host to add: ^D The current list of NIS servers looks like this: ellington coltrane Is this correct? [y/n: y] y [..output from map generation..] NIS Map update completed. ellington has been setup as an YP master server without any errors. This will create /var/yp/Makefile from /var/yp/Makefile.dist. By default, this file assumes that the environment has a single NIS server with only &os; clients. Since test-domain has a slave server, edit this line in /var/yp/Makefile so that it begins with a comment (#): NOPUSH = "True" Adding New Users Every time a new user is created, the user account must be added to the master NIS server and the NIS maps rebuilt. Until this occurs, the new user will not be able to login anywhere except on the NIS master. For example, to add the new user jsmith to the test-domain domain, run these commands on the master server: &prompt.root; pw useradd jsmith &prompt.root; cd /var/yp &prompt.root; make test-domain The user could also be added using adduser jsmith instead of pw useradd jsmith. Setting up a <acronym>NIS</acronym> Slave Server NIS slave server To set up an NIS slave server, log on to the slave server and edit /etc/rc.conf as for the master server. Do not generate any NIS maps, as these already exist on the master server. When running ypinit on the slave server, use -s (for slave) instead of -m (for master). This option requires the name of the NIS master in addition to the domain name, as seen in this example: coltrane&prompt.root; ypinit -s ellington test-domain Server Type: SLAVE Domain: test-domain Master: ellington Creating an YP server will require that you answer a few questions. Questions will all be asked at the beginning of the procedure. Do you want this procedure to quit on non-fatal errors? [y/n: n] n Ok, please remember to go back and redo manually whatever fails. If not, something might not work. There will be no further questions. The remainder of the procedure should take a few minutes, to copy the databases from ellington. Transferring netgroup... ypxfr: Exiting: Map successfully transferred Transferring netgroup.byuser... ypxfr: Exiting: Map successfully transferred Transferring netgroup.byhost... ypxfr: Exiting: Map successfully transferred Transferring master.passwd.byuid... ypxfr: Exiting: Map successfully transferred Transferring passwd.byuid... ypxfr: Exiting: Map successfully transferred Transferring passwd.byname... ypxfr: Exiting: Map successfully transferred Transferring group.bygid... ypxfr: Exiting: Map successfully transferred Transferring group.byname... ypxfr: Exiting: Map successfully transferred Transferring services.byname... ypxfr: Exiting: Map successfully transferred Transferring rpc.bynumber... ypxfr: Exiting: Map successfully transferred Transferring rpc.byname... ypxfr: Exiting: Map successfully transferred Transferring protocols.byname... ypxfr: Exiting: Map successfully transferred Transferring master.passwd.byname... ypxfr: Exiting: Map successfully transferred Transferring networks.byname...
ypxfr: Exiting: Map successfully transferred Transferring networks.byaddr... ypxfr: Exiting: Map successfully transferred Transferring netid.byname... ypxfr: Exiting: Map successfully transferred Transferring hosts.byaddr... ypxfr: Exiting: Map successfully transferred Transferring protocols.bynumber... ypxfr: Exiting: Map successfully transferred Transferring ypservers... ypxfr: Exiting: Map successfully transferred Transferring hosts.byname... ypxfr: Exiting: Map successfully transferred coltrane has been setup as an YP slave server without any errors. Remember to update map ypservers on ellington. This will generate a directory on the slave server called /var/yp/test-domain which contains copies of the NIS master server's maps. Adding these /etc/crontab entries on each slave server will force the slaves to sync their maps with the maps on the master server: 20 * * * * root /usr/libexec/ypxfr passwd.byname 21 * * * * root /usr/libexec/ypxfr passwd.byuid These entries are not mandatory because the master server automatically attempts to push any map changes to its slaves. However, since clients may depend upon the slave server to provide correct password information, it is recommended to force frequent password map updates. This is especially important on busy networks where map updates might not always complete. To finish the configuration, run /etc/netstart on the slave server in order to start the NIS services. Setting Up an <acronym>NIS</acronym> Client An NIS client binds to an NIS server using &man.ypbind.8;. This daemon broadcasts RPC requests on the local network. These requests specify the domain name configured on the client. If an NIS server in the same domain receives one of the broadcasts, it will respond to ypbind, which will record the server's address. If there are several servers available, the client will use the address of the first server to respond and will direct all of its NIS requests to that server. The client will automatically ping the server on a regular basis to make sure it is still available. If it fails to receive a reply within a reasonable amount of time, ypbind will mark the domain as unbound and begin broadcasting again in the hopes of locating another server. NIS client configuration To configure a &os; machine to be an NIS client: Edit /etc/rc.conf and add the following lines in order to set the NIS domain name and start &man.ypbind.8; during network startup: nisdomainname="test-domain" nis_client_enable="YES" To import all possible password entries from the NIS server, use vipw to remove all user accounts except one from /etc/master.passwd. When removing the accounts, keep in mind that at least one local account should remain and this account should be a member of wheel. If there is a problem with NIS, this local account can be used to log in remotely, become the superuser, and fix the problem. Before saving the edits, add the following line to the end of the file: +::::::::: This line configures the client to provide anyone with a valid account in the NIS server's password maps an account on the client. There are many ways to configure the NIS client by modifying this line. One method is described in . For more detailed reading, refer to the book Managing NFS and NIS, published by O'Reilly Media. 
To import all possible group entries from the NIS server, add this line to /etc/group: +:*:: To start the NIS client immediately, execute the following commands as the superuser: &prompt.root; /etc/netstart &prompt.root; service ypbind start After completing these steps, running ypcat passwd on the client should show the server's passwd map. <acronym>NIS</acronym> Security Since RPC is a broadcast-based service, any system running ypbind within the same domain can retrieve the contents of the NIS maps. To prevent unauthorized transactions, &man.ypserv.8; supports a feature called securenets which can be used to restrict access to a given set of hosts. By default, this information is stored in /var/yp/securenets, unless &man.ypserv.8; is started with an option specifying an alternate path. This file contains entries that consist of a network specification and a network mask separated by white space. Lines starting with # are considered to be comments. A sample securenets might look like this: # allow connections from local host -- mandatory 127.0.0.1 255.255.255.255 # allow connections from any host # on the 192.168.128.0 network 192.168.128.0 255.255.255.0 # allow connections from any host # between 10.0.0.0 to 10.0.15.255 # this includes the machines in the testlab 10.0.0.0 255.255.240.0 If &man.ypserv.8; receives a request from an address that matches one of these rules, it will process the request normally. If the address fails to match a rule, the request will be ignored and a warning message will be logged. If the securenets file does not exist, ypserv will allow connections from any host. TCP Wrapper is an alternate mechanism for providing access control instead of securenets. While either access control mechanism adds some security, they are both vulnerable to IP spoofing attacks. All NIS-related traffic should be blocked at the firewall. Servers using securenets may fail to serve legitimate NIS clients with archaic TCP/IP implementations. Some of these implementations set all host bits to zero when doing broadcasts or fail to observe the subnet mask when calculating the broadcast address. While some of these problems can be fixed by changing the client configuration, other problems may force the retirement of these client systems or the abandonment of securenets. TCP Wrapper The use of TCP Wrapper increases the latency of the NIS server. The additional delay may be long enough to cause timeouts in client programs, especially in busy networks with slow NIS servers. If one or more clients suffer from latency, convert those clients into NIS slave servers and force them to bind to themselves. Barring Some Users In this example, the basie system is a faculty workstation within the NIS domain. The passwd map on the master NIS server contains accounts for both faculty and students. This section demonstrates how to allow faculty logins on this system while refusing student logins. To prevent specified users from logging on to a system, even if they are present in the NIS database, use vipw to add -username with the correct number of colons towards the end of /etc/master.passwd on the client, where username is the username of a user to bar from logging in. The line with the blocked user must be before the + line that allows NIS users.
In this example, bill is barred from logging on to basie: basie&prompt.root; cat /etc/master.passwd root:[password]:0:0::0:0:The super-user:/root:/bin/csh toor:[password]:0:0::0:0:The other super-user:/root:/bin/sh daemon:*:1:1::0:0:Owner of many system processes:/root:/sbin/nologin operator:*:2:5::0:0:System &:/:/sbin/nologin bin:*:3:7::0:0:Binaries Commands and Source,,,:/:/sbin/nologin tty:*:4:65533::0:0:Tty Sandbox:/:/sbin/nologin kmem:*:5:65533::0:0:KMem Sandbox:/:/sbin/nologin games:*:7:13::0:0:Games pseudo-user:/usr/games:/sbin/nologin news:*:8:8::0:0:News Subsystem:/:/sbin/nologin man:*:9:9::0:0:Mister Man Pages:/usr/share/man:/sbin/nologin bind:*:53:53::0:0:Bind Sandbox:/:/sbin/nologin uucp:*:66:66::0:0:UUCP pseudo-user:/var/spool/uucppublic:/usr/libexec/uucp/uucico xten:*:67:67::0:0:X-10 daemon:/usr/local/xten:/sbin/nologin pop:*:68:6::0:0:Post Office Owner:/nonexistent:/sbin/nologin nobody:*:65534:65534::0:0:Unprivileged user:/nonexistent:/sbin/nologin -bill::::::::: +::::::::: basie&prompt.root; Using Netgroups netgroups Barring specified users from logging on to individual systems becomes unscaleable on larger networks and quickly loses the main benefit of NIS: centralized administration. Netgroups were developed to handle large, complex networks with hundreds of users and machines. Their use is comparable to &unix; groups, where the main difference is the lack of a numeric ID and the ability to define a netgroup by including both user accounts and other netgroups. To expand on the example used in this chapter, the NIS domain will be extended to add the users and systems shown in Tables 28.2 and 28.3: Additional Users User Name(s) Description alpha, beta IT department employees charlie, delta IT department apprentices echo, foxtrott, golf, ... employees able, baker, ... interns
Additional Systems Machine Name(s) Description war, death, famine, pollution Only IT employees are allowed to log onto these servers. pride, greed, envy, wrath, lust, sloth All members of the IT department are allowed to login onto these servers. one, two, three, four, ... Ordinary workstations used by employees. trashcan A very old machine without any critical data. Even interns are allowed to use this system.
When using netgroups to configure this scenario, each user is assigned to one or more netgroups and logins are then allowed or forbidden for all members of the netgroup. When adding a new machine, login restrictions must be defined for all netgroups. When a new user is added, the account must be added to one or more netgroups. If the NIS setup is planned carefully, only one central configuration file needs modification to grant or deny access to machines. The first step is the initialization of the NIS netgroup map. In &os;, this map is not created by default. On the NIS master server, use an editor to create a map named /var/yp/netgroup. This example creates four netgroups to represent IT employees, IT apprentices, employees, and interns: IT_EMP (,alpha,test-domain) (,beta,test-domain) IT_APP (,charlie,test-domain) (,delta,test-domain) USERS (,echo,test-domain) (,foxtrott,test-domain) \ (,golf,test-domain) INTERNS (,able,test-domain) (,baker,test-domain) Each entry configures a netgroup. The first column in an entry is the name of the netgroup. Each set of brackets represents either a group of one or more users or the name of another netgroup. When specifying a user, the three comma-delimited fields inside each group represent: The name of the host(s) where the other fields representing the user are valid. If a hostname is not specified, the entry is valid on all hosts. The name of the account that belongs to this netgroup. The NIS domain for the account. Accounts may be imported from other NIS domains into a netgroup. If a group contains multiple users, separate each user with whitespace. Additionally, each field may contain wildcards. See &man.netgroup.5; for details. netgroups Netgroup names longer than 8 characters should not be used. The names are case sensitive and using capital letters for netgroup names is an easy way to distinguish between user, machine and netgroup names. Some non-&os; NIS clients cannot handle netgroups containing more than 15 entries. This limit may be circumvented by creating several sub-netgroups with 15 users or fewer and a real netgroup consisting of the sub-netgroups, as seen in this example: BIGGRP1 (,joe1,domain) (,joe2,domain) (,joe3,domain) [...] BIGGRP2 (,joe16,domain) (,joe17,domain) [...] BIGGRP3 (,joe31,domain) (,joe32,domain) BIGGROUP BIGGRP1 BIGGRP2 BIGGRP3 Repeat this process if more than 225 (15 times 15) users exist within a single netgroup. To activate and distribute the new NIS map: ellington&prompt.root; cd /var/yp ellington&prompt.root; make This will generate the three NIS maps netgroup, netgroup.byhost and netgroup.byuser. Use the map key option of &man.ypcat.1; to check if the new NIS maps are available: ellington&prompt.user; ypcat -k netgroup ellington&prompt.user; ypcat -k netgroup.byhost ellington&prompt.user; ypcat -k netgroup.byuser The output of the first command should resemble the contents of /var/yp/netgroup. The second command only produces output if host-specific netgroups were created. The third command is used to get the list of netgroups for a user. To configure a client, use &man.vipw.8; to specify the name of the netgroup. For example, on the server named war, replace this line: +::::::::: with +@IT_EMP::::::::: This specifies that only the users defined in the netgroup IT_EMP will be imported into this system's password database and only those users are allowed to login to this system. 
This configuration also applies to the ~ function of the shell and all routines which convert between user names and numerical user IDs. In other words, cd ~user will not work, ls -l will show the numerical ID instead of the username, and find . -user joe -print will fail with the message No such user. To fix this, import all user entries without allowing them to login into the servers. This can be achieved by adding an extra line: +:::::::::/sbin/nologin This line configures the client to import all entries but to replace the shell in those entries with /sbin/nologin. Make sure that extra line is placed after +@IT_EMP:::::::::. Otherwise, all user accounts imported from NIS will have /sbin/nologin as their login shell and no one will be able to login to the system. To configure the less important servers, replace the old +::::::::: on the servers with these lines: +@IT_EMP::::::::: +@IT_APP::::::::: +:::::::::/sbin/nologin The corresponding lines for the workstations would be: +@IT_EMP::::::::: +@USERS::::::::: +:::::::::/sbin/nologin NIS supports the creation of netgroups from other netgroups which can be useful if the policy regarding user access changes. One possibility is the creation of role-based netgroups. For example, one might create a netgroup called BIGSRV to define the login restrictions for the important servers, another netgroup called SMALLSRV for the less important servers, and a third netgroup called USERBOX for the workstations. Each of these netgroups contains the netgroups that are allowed to login onto these machines. The new entries for the NIS netgroup map would look like this: BIGSRV IT_EMP IT_APP SMALLSRV IT_EMP IT_APP ITINTERN USERBOX IT_EMP ITINTERN USERS This method of defining login restrictions works reasonably well when it is possible to define groups of machines with identical restrictions. Unfortunately, this is the exception and not the rule. Most of the time, the ability to define login restrictions on a per-machine basis is required. Machine-specific netgroup definitions are another possibility to deal with the policy changes. In this scenario, the /etc/master.passwd of each system contains two lines starting with +. The first line adds a netgroup with the accounts allowed to login onto this machine and the second line adds all other accounts with /sbin/nologin as shell. It is recommended to use the ALL-CAPS version of the hostname as the name of the netgroup: +@BOXNAME::::::::: +:::::::::/sbin/nologin Once this task is completed on all the machines, there is no longer a need to modify the local versions of /etc/master.passwd ever again. All further changes can be handled by modifying the NIS map. 
Here is an example of a possible netgroup map for this scenario: # Define groups of users first IT_EMP (,alpha,test-domain) (,beta,test-domain) IT_APP (,charlie,test-domain) (,delta,test-domain) DEPT1 (,echo,test-domain) (,foxtrott,test-domain) DEPT2 (,golf,test-domain) (,hotel,test-domain) DEPT3 (,india,test-domain) (,juliet,test-domain) ITINTERN (,kilo,test-domain) (,lima,test-domain) D_INTERNS (,able,test-domain) (,baker,test-domain) # # Now, define some groups based on roles USERS DEPT1 DEPT2 DEPT3 BIGSRV IT_EMP IT_APP SMALLSRV IT_EMP IT_APP ITINTERN USERBOX IT_EMP ITINTERN USERS # # And a groups for a special tasks # Allow echo and golf to access our anti-virus-machine SECURITY IT_EMP (,echo,test-domain) (,golf,test-domain) # # machine-based netgroups # Our main servers WAR BIGSRV FAMINE BIGSRV # User india needs access to this server POLLUTION BIGSRV (,india,test-domain) # # This one is really important and needs more access restrictions DEATH IT_EMP # # The anti-virus-machine mentioned above ONE SECURITY # # Restrict a machine to a single user TWO (,hotel,test-domain) # [...more groups to follow] It may not always be advisable to use machine-based netgroups. When deploying a couple of dozen or hundreds of systems, role-based netgroups instead of machine-based netgroups may be used to keep the size of the NIS map within reasonable limits.
Password Formats NIS password formats NIS requires that all hosts within an NIS domain use the same format for encrypting passwords. If users have trouble authenticating on an NIS client, it may be due to a differing password format. In a heterogeneous network, the format must be supported by all operating systems, where DES is the lowest common standard. To check which format a server or client is using, look at this section of /etc/login.conf: default:\ :passwd_format=des:\ :copyright=/etc/COPYRIGHT:\ [Further entries elided] In this example, the system is using the DES format. Other possible values are blf for Blowfish and md5 for MD5 encrypted passwords. If the format on a host needs to be edited to match the one being used in the NIS domain, the login capability database must be rebuilt after saving the change: &prompt.root; cap_mkdb /etc/login.conf The format of passwords for existing user accounts will not be updated until each user changes their password after the login capability database is rebuilt.
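A quick way to see which format an existing account uses is to inspect its hash in /etc/master.passwd. This is a sketch with a hypothetical username; the patterns themselves are the standard &man.crypt.3; conventions:
# A plain 13-character hash indicates DES, a hash starting with $1$
# indicates MD5, and a hash starting with $2 indicates Blowfish.
&prompt.root; grep '^jru:' /etc/master.passwd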
Lightweight Directory Access Protocol (<acronym>LDAP</acronym>) Tom Rhodes Written by LDAP The Lightweight Directory Access Protocol (LDAP) is an application layer protocol used to access, modify, and authenticate objects using a distributed directory information service. Think of it as a phone or record book which stores several levels of hierarchical, homogeneous information. It is used in Active Directory and OpenLDAP networks and allows users access to several levels of internal information utilizing a single account. For example, email authentication, pulling employee contact information, and internal website authentication might all make use of a single user account in the LDAP server's record base. This section provides a quick start guide for configuring an LDAP server on a &os; system. It assumes that the administrator already has a design plan which includes the type of information to store, what that information will be used for, which users should have access to that information, and how to secure this information from unauthorized access. <acronym>LDAP</acronym> Terminology and Structure LDAP uses several terms which should be understood before starting the configuration. All directory entries consist of a group of attributes. Each of these attribute sets contains a unique identifier known as a Distinguished Name (DN) which is normally built from several other attributes such as the common or Relative Distinguished Name (RDN). Similar to how directories have absolute and relative paths, consider a DN as an absolute path and the RDN as the relative path. An example LDAP entry looks like the following. This example searches for the entry for the specified user account (uid), organizational unit (ou), and organization (o): &prompt.user; ldapsearch -xb "uid=trhodes,ou=users,o=example.com" # extended LDIF # # LDAPv3 # base <uid=trhodes,ou=users,o=example.com> with scope subtree # filter: (objectclass=*) # requesting: ALL # # trhodes, users, example.com dn: uid=trhodes,ou=users,o=example.com mail: trhodes@example.com cn: Tom Rhodes uid: trhodes telephoneNumber: (123) 456-7890 # search result search: 2 result: 0 Success # numResponses: 2 # numEntries: 1 This example entry shows the values for the dn, mail, cn, uid, and telephoneNumber attributes. The cn attribute is the RDN. More information about LDAP and its terminology can be found at http://www.openldap.org/doc/admin24/intro.html. Configuring an <acronym>LDAP</acronym> Server LDAP Server &os; does not provide a built-in LDAP server. Begin the configuration by installing the net/openldap24-server package or port. Since the port has many configurable options, it is recommended that the default options are reviewed to see if the package is sufficient, and to instead compile the port if any options should be changed. In most cases, the defaults are fine. However, if SQL support is needed, this option must be enabled and the port compiled using the instructions in . Next, create the directories to hold the data and to store the certificates: &prompt.root; mkdir /var/db/openldap-data &prompt.root; mkdir /usr/local/etc/openldap/private Copy over the database configuration file: &prompt.root; cp /usr/local/etc/openldap/DB_CONFIG.example /var/db/openldap-data/DB_CONFIG The next phase is to configure the certificate authority. The following commands must be executed from /usr/local/etc/openldap/private. This is important as the file permissions need to be restrictive and users should not have access to these files.
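One way to ensure this, before any keys are generated, is to tighten the mode of the new private directory and work from inside it. A minimal sketch:
# Only root should be able to enter or read the private key directory.
&prompt.root; chmod 700 /usr/local/etc/openldap/private
&prompt.root; cd /usr/local/etc/openldap/private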
To create the certificate authority, start with this command and follow the prompts: &prompt.root; openssl req -days 365 -nodes -new -x509 -keyout ca.key -out ../ca.crt The entries for the prompts may be generic except for the Common Name. This entry must be different from the system hostname. If this will be a self-signed certificate, prefix the hostname with CA for certificate authority. The next task is to create a certificate signing request and a private key. Input this command and follow the prompts: &prompt.root; openssl req -days 365 -nodes -new -keyout server.key -out server.csr During the certificate generation process, be sure to correctly set the Common Name attribute. Once complete, sign the key: &prompt.root; openssl x509 -req -days 365 -in server.csr -out ../server.crt -CA ../ca.crt -CAkey ca.key -CAcreateserial The final part of the certificate generation process is to generate and sign the client certificates: &prompt.root; openssl req -days 365 -nodes -new -keyout client.key -out client.csr &prompt.root; openssl x509 -req -days 3650 -in client.csr -out ../client.crt -CA ../ca.crt -CAkey ca.key Remember to use the same Common Name attribute when prompted. When finished, ensure that a total of eight (8) new files have been generated through the preceding commands. If so, the next step is to edit /usr/local/etc/openldap/slapd.conf and add the following options: TLSCipherSuite HIGH:MEDIUM:+SSLv3 TLSCertificateFile /usr/local/etc/openldap/server.crt TLSCertificateKeyFile /usr/local/etc/openldap/private/server.key TLSCACertificateFile /usr/local/etc/openldap/ca.crt Then, edit /usr/local/etc/openldap/ldap.conf and add the following lines: TLS_CACERT /usr/local/etc/openldap/ca.crt TLS_CIPHER_SUITE HIGH:MEDIUM:+SSLv3 While editing this file, uncomment the following entries and set them to the desired values: BASE, URI, SIZELIMIT, and TIMELIMIT. Set the URI to contain ldap:// and ldaps://. Then, add two entries pointing to the certificate authority. When finished, the entries should look similar to the following: BASE dc=example,dc=com URI ldap:// ldaps:// SIZELIMIT 12 TIMELIMIT 15 TLS_CACERT /usr/local/etc/openldap/ca.crt TLS_CIPHER_SUITE HIGH:MEDIUM:+SSLv3 The default password for the server should then be changed: &prompt.root; slappasswd -h "{SHA}" >> /usr/local/etc/openldap/slapd.conf This command will prompt for the password and, if the process does not fail, a password hash will be added to the end of slapd.conf. Several hashing formats are supported. Refer to the manual page for slappasswd for more information. Next, edit /usr/local/etc/openldap/slapd.conf and add the following lines: password-hash {sha} allow bind_v2 The suffix in this file must be updated to match the BASE used in /usr/local/etc/openldap/ldap.conf, and rootdn should also be set. A recommended value for rootdn is something like cn=Manager,dc=example,dc=com. Before saving this file, place the rootpw in front of the password output from slappasswd and delete the old rootpw.
The end result should look similar to this: TLSCipherSuite HIGH:MEDIUM:+SSLv3 TLSCertificateFile /usr/local/etc/openldap/server.crt TLSCertificateKeyFile /usr/local/etc/openldap/private/server.key TLSCACertificateFile /usr/local/etc/openldap/ca.crt rootpw {SHA}W6ph5Mm5Pz8GgiULbPgzG37mj9g= Finally, enable the OpenLDAP service in /etc/rc.conf and set the URI: slapd_enable="YES" slapd_flags="-4 -h ldaps:///" At this point the server can be started and tested: &prompt.root; service slapd start If everything is configured correctly, a search of the directory should show a successful connection with a single response as in this example: &prompt.root; ldapsearch -Z # extended LDIF # # LDAPv3 # base <dc=example,dc=com> (default) with scope subtree # filter: (objectclass=*) # requesting: ALL # # search result search: 3 result: 32 No such object # numResponses: 1 If the command fails and the configuration looks correct, stop the slapd service and restart it with debugging options: &prompt.root; service slapd stop &prompt.root; /usr/local/libexec/slapd -d -1 Once the service is responding, the directory can be populated using ldapadd. In this example, a file containing this list of users is first created. Each user should use the following format: dn: dc=example,dc=com objectclass: dcObject objectclass: organization o: Example dc: Example dn: cn=Manager,dc=example,dc=com objectclass: organizationalRole cn: Manager To import this file, specify the file name. The following command will prompt for the password specified earlier and the output should look something like this: &prompt.root; ldapadd -Z -D "cn=Manager,dc=example,dc=com" -W -f import.ldif Enter LDAP Password: adding new entry "dc=example,dc=com" adding new entry "cn=Manager,dc=example,dc=com" Verify the data was added by issuing a search on the server using ldapsearch: &prompt.user; ldapsearch -Z # extended LDIF # # LDAPv3 # base <dc=example,dc=com> (default) with scope subtree # filter: (objectclass=*) # requesting: ALL # # example.com dn: dc=example,dc=com objectClass: dcObject objectClass: organization o: Example dc: Example # Manager, example.com dn: cn=Manager,dc=example,dc=com objectClass: organizationalRole cn: Manager # search result search: 3 result: 0 Success # numResponses: 3 # numEntries: 2 At this point, the server should be configured and functioning properly. Dynamic Host Configuration Protocol (<acronym>DHCP</acronym>) Dynamic Host Configuration Protocol DHCP Internet Systems Consortium (ISC) The Dynamic Host Configuration Protocol (DHCP) allows a system to connect to a network in order to be assigned the necessary addressing information for communication on that network. &os; includes the OpenBSD version of dhclient which is used by the client to obtain the addressing information. &os; does not install a DHCP server, but several servers are available in the &os; Ports Collection. The DHCP protocol is fully described in RFC 2131. Informational resources are also available at isc.org/downloads/dhcp/. This section describes how to use the built-in DHCP client. It then describes how to install and configure a DHCP server. In &os;, the &man.bpf.4; device is needed by both the DHCP server and DHCP client. This device is included in the GENERIC kernel that is installed with &os;. Users who prefer to create a custom kernel need to keep this device if DHCP is used. It should be noted that bpf also allows privileged users to run network packet sniffers on that system. 
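For reference, keeping this device in a custom kernel only requires the following configuration line, which is already present in GENERIC:

device bpf    # Berkeley packet filter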
Configuring a <acronym>DHCP</acronym> Client DHCP client support is included in the &os; installer, making it easy to configure a newly installed system to automatically receive its networking addressing information from an existing DHCP server. Refer to for examples of network configuration. UDP When dhclient is executed on the client machine, it begins broadcasting requests for configuration information. By default, these requests use UDP port 68. The server replies on UDP port 67, giving the client an IP address and other relevant network information such as a subnet mask, default gateway, and DNS server addresses. This information is in the form of a DHCP lease and is valid for a configurable time. This allows stale IP addresses for clients no longer connected to the network to automatically be reused. DHCP clients can obtain a great deal of information from the server. An exhaustive list may be found in &man.dhcp-options.5;. By default, when a &os; system boots, its DHCP client runs in the background, or asynchronously. Other startup scripts continue to run while the DHCP process completes, which speeds up system startup. Background DHCP works well when the DHCP server responds quickly to the client's requests. However, DHCP may take a long time to complete on some systems. If network services attempt to run before DHCP has assigned the network addressing information, they will fail. Using DHCP in synchronous mode prevents this problem as it pauses startup until the DHCP configuration has completed. This line in /etc/rc.conf is used to configure background or asynchronous mode: ifconfig_fxp0="DHCP" This line may already exist if the system was configured to use DHCP during installation. Replace the fxp0 shown in these examples with the name of the interface to be dynamically configured, as described in . To instead configure the system to use synchronous mode, and to pause during startup while DHCP completes, use SYNCDHCP: ifconfig_fxp0="SYNCDHCP" Additional client options are available. Search for dhclient in &man.rc.conf.5; for details. DHCP configuration files The DHCP client uses the following files: /etc/dhclient.conf The configuration file used by dhclient. Typically, this file contains only comments as the defaults are suitable for most clients. This configuration file is described in &man.dhclient.conf.5;. /sbin/dhclient More information about the command itself can be found in &man.dhclient.8;. /sbin/dhclient-script The &os;-specific DHCP client configuration script. It is described in &man.dhclient-script.8;, but should not need any user modification to function properly. /var/db/dhclient.leases.interface The DHCP client keeps a database of valid leases in this file, which is written as a log and is described in &man.dhclient.leases.5;. Installing and Configuring a <acronym>DHCP</acronym> Server This section demonstrates how to configure a &os; system to act as a DHCP server using the Internet Systems Consortium (ISC) implementation of the DHCP server. This implementation and its documentation can be installed using the net/isc-dhcp42-server package or port. DHCP server DHCP installation The installation of net/isc-dhcp42-server installs a sample configuration file. Copy /usr/local/etc/dhcpd.conf.example to /usr/local/etc/dhcpd.conf and make any edits to this new file. DHCP dhcpd.conf The configuration file is comprised of declarations for subnets and hosts which define the information that is provided to DHCP clients. 
For example, these lines configure the following: option domain-name "example.org"; option domain-name-servers ns1.example.org; option subnet-mask 255.255.255.0; default-lease-time 600; max-lease-time 72400; ddns-update-style none; subnet 10.254.239.0 netmask 255.255.255.224 { range 10.254.239.10 10.254.239.20; option routers rtr-239-0-1.example.org, rtr-239-0-2.example.org; } host fantasia { hardware ethernet 08:00:07:26:c0:a5; fixed-address fantasia.fugue.com; } This option specifies the default search domain that will be provided to clients. Refer to &man.resolv.conf.5; for more information. This option specifies a comma-separated list of DNS servers that the client should use. They can be listed by their Fully Qualified Domain Names (FQDN), as seen in the example, or by their IP addresses. The subnet mask that will be provided to clients. The default lease expiry time in seconds. A client can be configured to override this value. The maximum allowed length of time, in seconds, for a lease. Should a client request a longer lease, a lease will still be issued, but it will only be valid for max-lease-time. The default of none disables dynamic DNS updates. Changing this to interim configures the DHCP server to update a DNS server whenever it hands out a lease so that the DNS server knows which IP addresses are associated with which computers in the network. Do not change the default setting unless the DNS server has been configured to support dynamic DNS. This line creates a pool of available IP addresses which are reserved for allocation to DHCP clients. The range of addresses must be valid for the network or subnet specified in the previous line. Declares the default gateway that is valid for the network or subnet specified before the opening { bracket. Specifies the hardware MAC address of a client so that the DHCP server can recognize the client when it makes a request. Specifies that this host should always be given the same IP address. Using the hostname is correct, since the DHCP server will resolve the hostname before returning the lease information. This configuration file supports many more options. Refer to dhcpd.conf(5), installed with the server, for details and examples. Once the configuration of dhcpd.conf is complete, enable the DHCP server in /etc/rc.conf: dhcpd_enable="YES" dhcpd_ifaces="dc0" Replace dc0 with the interface (or interfaces, separated by whitespace) that the DHCP server should listen on for DHCP client requests. Start the server by issuing the following command: &prompt.root; service isc-dhcpd start Any future changes to the configuration of the server will require the dhcpd service to be stopped and then started using &man.service.8;. The DHCP server uses the following files. Note that the manual pages are installed with the server software. DHCP configuration files /usr/local/sbin/dhcpd More information about the dhcpd server can be found in dhcpd(8). /usr/local/etc/dhcpd.conf The server configuration file needs to contain all the information that should be provided to clients, along with information regarding the operation of the server. This configuration file is described in dhcpd.conf(5). /var/db/dhcpd.leases The DHCP server keeps a database of leases it has issued in this file, which is written as a log. Refer to dhcpd.leases(5), which gives a slightly longer description. /usr/local/sbin/dhcrelay This daemon is used in advanced environments where one DHCP server forwards a request from a client to another DHCP server on a separate network.
If this functionality is required, install the net/isc-dhcp42-relay package or port. The installation includes dhcrelay(8) which provides more detail. Domain Name System (<acronym>DNS</acronym>) DNS Domain Name System (DNS) is the protocol through which domain names are mapped to IP addresses, and vice versa. DNS is coordinated across the Internet through a somewhat complex system of authoritative root, Top Level Domain (TLD), and other smaller-scale name servers, which host and cache individual domain information. It is not necessary to run a name server to perform DNS lookups on a system. BIND In &os; 10, the Berkeley Internet Name Domain (BIND) has been removed from the base system and replaced with Unbound. Unbound as configured in the &os; Base is a local caching resolver. BIND is still available from The Ports Collection as dns/bind99 or dns/bind98. In &os; 9 and lower, BIND is included in &os; Base. The &os; version provides enhanced security features, a new file system layout, and automated &man.chroot.8; configuration. BIND is maintained by the Internet Systems Consortium. resolver reverse DNS root zone The following table describes some of the terms associated with DNS: <acronym>DNS</acronym> Terminology Term Definition Forward DNS Mapping of hostnames to IP addresses. Origin Refers to the domain covered in a particular zone file. named, BIND Common names for the BIND name server package within &os;. Resolver A system process through which a machine queries a name server for zone information. Reverse DNS Mapping of IP addresses to hostnames. Root zone The beginning of the Internet zone hierarchy. All zones fall under the root zone, similar to how all files in a file system fall under the root directory. Zone An individual domain, subdomain, or portion of the DNS administered by the same authority.
zones examples Examples of zones: . is how the root zone is usually referred to in documentation. org. is a Top Level Domain (TLD) under the root zone. example.org. is a zone under the org. TLD. 1.168.192.in-addr.arpa is a zone referencing all IP addresses which fall under the 192.168.1.* IP address space. As one can see, the more specific part of a hostname appears to its left. For example, example.org. is more specific than org., as org. is more specific than the root zone. The layout of each part of a hostname is much like a file system: the /dev directory falls within the root, and so on. Reasons to Run a Name Server Name servers generally come in two forms: authoritative name servers, and caching (also known as resolving) name servers. An authoritative name server is needed when: One wants to serve DNS information to the world, replying authoritatively to queries. A domain, such as example.org, is registered and IP addresses need to be assigned to hostnames under it. An IP address block requires reverse DNS entries (IP to hostname). A backup or second name server, called a slave, will reply to queries. A caching name server is needed when: A local DNS server may cache and respond more quickly than querying an outside name server. When one queries for www.FreeBSD.org, the resolver usually queries the uplink ISP's name server, and retrieves the reply. With a local, caching DNS server, the query only has to be made once to the outside world by the caching DNS server. Additional queries will not have to go outside the local network, since the information is cached locally. <acronym>DNS</acronym> Server Configuration in &os; 10.0 and Later In &os; 10.0, BIND has been replaced with Unbound. Unbound is a validating caching resolver only. If an authoritative server is needed, many are available from the Ports Collection. Unbound is provided in the &os; base system. By default, it will provide DNS resolution to the local machine only. While the base system package can be configured to provide resolution services beyond the local machine, it is recommended that such requirements be addressed by installing Unbound from the &os; Ports Collection. To enable Unbound, add the following to /etc/rc.conf: local_unbound_enable="YES" Any existing nameservers in /etc/resolv.conf will be configured as forwarders in the new Unbound configuration. If any of the listed nameservers do not support DNSSEC, local DNS resolution will fail. Be sure to test each nameserver and remove any that fail the test. The following command will show the trust tree or a failure for a nameserver running on 192.168.1.1: &prompt.user; drill -S FreeBSD.org @192.168.1.1 Once each nameserver is confirmed to support DNSSEC, start Unbound: &prompt.root; service local_unbound onestart This will take care of updating /etc/resolv.conf so that queries for DNSSEC secured domains will now work. For example, run the following to validate the FreeBSD.org DNSSEC trust tree: &prompt.user; drill -S FreeBSD.org ;; Number of trusted keys: 1 ;; Chasing: freebsd.org. A DNSSEC Trust tree: freebsd.org. (A) |---freebsd.org. (DNSKEY keytag: 36786 alg: 8 flags: 256) |---freebsd.org. (DNSKEY keytag: 32659 alg: 8 flags: 257) |---freebsd.org. (DS keytag: 32659 digest type: 2) |---org. (DNSKEY keytag: 49587 alg: 7 flags: 256) |---org. (DNSKEY keytag: 9795 alg: 7 flags: 257) |---org. (DNSKEY keytag: 21366 alg: 7 flags: 257) |---org. (DS keytag: 21366 digest type: 1) | |---. (DNSKEY keytag: 40926 alg: 8 flags: 256) | |---. 
(DNSKEY keytag: 19036 alg: 8 flags: 257) |---org. (DS keytag: 21366 digest type: 2) |---. (DNSKEY keytag: 40926 alg: 8 flags: 256) |---. (DNSKEY keytag: 19036 alg: 8 flags: 257) ;; Chase successful DNS Server Configuration in &os; 9.<replaceable>X</replaceable> In &os;, the BIND daemon is called named. File Description &man.named.8; The BIND daemon. &man.rndc.8; Name server control utility. /etc/namedb Directory where BIND zone information resides. /etc/namedb/named.conf Configuration file of the daemon. Depending on how a given zone is configured on the server, the files related to that zone can be found in the master, slave, or dynamic subdirectories of the /etc/namedb directory. These files contain the DNS information that will be given out by the name server in response to queries. Starting BIND BIND starting Since BIND is installed by default, configuring it is relatively simple. The default named configuration is that of a basic resolving name server, running in a &man.chroot.8; environment, and restricted to listening on the local IPv4 loopback address (127.0.0.1). To start the server one time with this configuration, use the following command: &prompt.root; service named onestart To ensure the named daemon is started at boot each time, put the following line into the /etc/rc.conf: named_enable="YES" There are many configuration options for /etc/namedb/named.conf that are beyond the scope of this document. Other startup options for named on &os; can be found in the named_* flags in /etc/defaults/rc.conf and in &man.rc.conf.5;. The section is also a good read. Configuration Files BIND configuration files Configuration files for named currently reside in /etc/namedb directory and will need modification before use unless all that is needed is a simple resolver. This is where most of the configuration will be performed. <filename>/etc/namedb/named.conf</filename> // $FreeBSD$ // // Refer to the named.conf(5) and named(8) man pages, and the documentation // in /usr/share/doc/bind9 for more details. // // If you are going to set up an authoritative server, make sure you // understand the hairy details of how DNS works. Even with // simple mistakes, you can break connectivity for affected parties, // or cause huge amounts of useless Internet traffic. options { // All file and path names are relative to the chroot directory, // if any, and should be fully qualified. directory "/etc/namedb/working"; pid-file "/var/run/named/pid"; dump-file "/var/dump/named_dump.db"; statistics-file "/var/stats/named.stats"; // If named is being used only as a local resolver, this is a safe default. // For named to be accessible to the network, comment this option, specify // the proper IP address, or delete this option. listen-on { 127.0.0.1; }; // If you have IPv6 enabled on this system, uncomment this option for // use as a local resolver. To give access to the network, specify // an IPv6 address, or the keyword "any". // listen-on-v6 { ::1; }; // These zones are already covered by the empty zones listed below. // If you remove the related empty zones below, comment these lines out. disable-empty-zone "255.255.255.255.IN-ADDR.ARPA"; disable-empty-zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA"; disable-empty-zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA"; // If you've got a DNS server around at your upstream provider, enter // its IP address here, and enable the line below. 
This will make you // benefit from its cache, thus reduce overall DNS traffic in the Internet. /* forwarders { 127.0.0.1; }; */ // If the 'forwarders' clause is not empty the default is to 'forward first' // which will fall back to sending a query from your local server if the name // servers in 'forwarders' do not have the answer. Alternatively you can // force your name server to never initiate queries of its own by enabling the // following line: // forward only; // If you wish to have forwarding configured automatically based on // the entries in /etc/resolv.conf, uncomment the following line and // set named_auto_forward=yes in /etc/rc.conf. You can also enable // named_auto_forward_only (the effect of which is described above). // include "/etc/namedb/auto_forward.conf"; Just as the comment says, to benefit from an uplink's cache, forwarders can be enabled here. Under normal circumstances, a name server will recursively query the Internet looking at certain name servers until it finds the answer it is looking for. Having this enabled will have it query the uplink's name server (or name server provided) first, taking advantage of its cache. If the uplink name server in question is a heavily trafficked, fast name server, enabling this may be worthwhile. 127.0.0.1 will not work here. Change this IP address to a name server at the uplink. /* Modern versions of BIND use a random UDP port for each outgoing query by default in order to dramatically reduce the possibility of cache poisoning. All users are strongly encouraged to utilize this feature, and to configure their firewalls to accommodate it. AS A LAST RESORT in order to get around a restrictive firewall policy you can try enabling the option below. Use of this option will significantly reduce your ability to withstand cache poisoning attacks, and should be avoided if at all possible. Replace NNNNN in the example with a number between 49160 and 65530. */ // query-source address * port NNNNN; }; // If you enable a local name server, don't forget to enter 127.0.0.1 // first in your /etc/resolv.conf so this server will be queried. // Also, make sure to enable it in /etc/rc.conf. // The traditional root hints mechanism. Use this, OR the slave zones below. zone "." { type hint; file "/etc/namedb/named.root"; }; /* Slaving the following zones from the root name servers has some significant advantages: 1. Faster local resolution for your users 2. No spurious traffic will be sent from your network to the roots 3. Greater resilience to any potential root server failure/DDoS On the other hand, this method requires more monitoring than the hints file to be sure that an unexpected failure mode has not incapacitated your server. Name servers that are serving a lot of clients will benefit more from this approach than individual hosts. Use with caution. To use this mechanism, uncomment the entries below, and comment the hint zone above. As documented at http://dns.icann.org/services/axfr/ these zones: "." (the root), ARPA, IN-ADDR.ARPA, IP6.ARPA, and ROOT-SERVERS.NET are available for AXFR from these servers on IPv4 and IPv6: xfr.lax.dns.icann.org, xfr.cjr.dns.icann.org */ /* zone "." { type slave; file "/etc/namedb/slave/root.slave"; masters { 192.5.5.241; // F.ROOT-SERVERS.NET. }; notify no; }; zone "arpa" { type slave; file "/etc/namedb/slave/arpa.slave"; masters { 192.5.5.241; // F.ROOT-SERVERS.NET. 
}; notify no; }; */ /* Serving the following zones locally will prevent any queries for these zones leaving your network and going to the root name servers. This has two significant advantages: 1. Faster local resolution for your users 2. No spurious traffic will be sent from your network to the roots */ // RFCs 1912 and 5735 (and BCP 32 for localhost) zone "localhost" { type master; file "/etc/namedb/master/localhost-forward.db"; }; zone "127.in-addr.arpa" { type master; file "/etc/namedb/master/localhost-reverse.db"; }; zone "255.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // RFC 1912-style zone for IPv6 localhost address zone "0.ip6.arpa" { type master; file "/etc/namedb/master/localhost-reverse.db"; }; // "This" Network (RFCs 1912 and 5735) zone "0.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // Private Use Networks (RFCs 1918 and 5735) zone "10.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "16.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "17.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "18.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "19.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "20.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "21.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "22.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "23.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "24.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "25.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "26.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "27.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "28.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "29.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "30.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "31.172.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "168.192.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // Link-local/APIPA (RFCs 3927 and 5735) zone "254.169.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IETF protocol assignments (RFCs 5735 and 5736) zone "0.0.192.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // TEST-NET-[1-3] for Documentation (RFCs 5735 and 5737) zone "2.0.192.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "100.51.198.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "113.0.203.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IPv6 Range for Documentation (RFC 3849) zone "8.b.d.0.1.0.0.2.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // Domain Names for Documentation and Testing (BCP 32) zone "test" { type master; file "/etc/namedb/master/empty.db"; }; zone "example" { type master; file "/etc/namedb/master/empty.db"; }; zone "invalid" { type master; file "/etc/namedb/master/empty.db"; }; zone "example.com" { type master; file "/etc/namedb/master/empty.db"; }; zone "example.net" { type master; file "/etc/namedb/master/empty.db"; }; zone "example.org" { type master; file "/etc/namedb/master/empty.db"; }; // Router Benchmark Testing 
(RFCs 2544 and 5735) zone "18.198.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "19.198.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IANA Reserved - Old Class E Space (RFC 5735) zone "240.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "241.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "242.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "243.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "244.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "245.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "246.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "247.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "248.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "249.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "250.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "251.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "252.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "253.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "254.in-addr.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IPv6 Unassigned Addresses (RFC 4291) zone "1.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "3.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "4.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "5.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "6.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "7.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "8.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "9.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "a.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "b.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "c.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "d.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "e.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "0.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "1.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "2.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "3.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "4.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "5.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "6.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "7.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "8.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "9.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "a.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "b.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "0.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "1.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "2.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "3.e.f.ip6.arpa" { type master; file 
"/etc/namedb/master/empty.db"; }; zone "4.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "5.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "6.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "7.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IPv6 ULA (RFC 4193) zone "c.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "d.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IPv6 Link Local (RFC 4291) zone "8.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "9.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "a.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "b.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IPv6 Deprecated Site-Local Addresses (RFC 3879) zone "c.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "d.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "e.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; zone "f.e.f.ip6.arpa" { type master; file "/etc/namedb/master/empty.db"; }; // IP6.INT is Deprecated (RFC 4159) zone "ip6.int" { type master; file "/etc/namedb/master/empty.db"; }; // NB: Do not use the IP addresses below, they are faked, and only // serve demonstration/documentation purposes! // // Example slave zone config entries. It can be convenient to become // a slave at least for the zone your own domain is in. Ask // your network administrator for the IP address of the responsible // master name server. // // Do not forget to include the reverse lookup zone! // This is named after the first bytes of the IP address, in reverse // order, with ".IN-ADDR.ARPA" appended, or ".IP6.ARPA" for IPv6. // // Before starting to set up a master zone, make sure you fully // understand how DNS and BIND work. There are sometimes // non-obvious pitfalls. Setting up a slave zone is usually simpler. // // NB: Don't blindly enable the examples below. :-) Use actual names // and addresses instead. /* An example dynamic zone key "exampleorgkey" { algorithm hmac-md5; secret "sf87HJqjkqh8ac87a02lla=="; }; zone "example.org" { type master; allow-update { key "exampleorgkey"; }; file "/etc/namedb/dynamic/example.org"; }; */ /* Example of a slave reverse zone zone "1.168.192.in-addr.arpa" { type slave; file "/etc/namedb/slave/1.168.192.in-addr.arpa"; masters { 192.168.1.1; }; }; */ In named.conf, these are examples of slave entries for a forward and reverse zone. For each new zone served, a new zone entry must be added to named.conf. For example, the simplest zone entry for example.org can look like: zone "example.org" { type master; file "master/example.org"; }; The zone is a master, as indicated by the statement, holding its zone information in /etc/namedb/master/example.org indicated by the statement. zone "example.org" { type slave; file "slave/example.org"; }; In the slave case, the zone information is transferred from the master name server for the particular zone, and saved in the file specified. If and when the master server dies or is unreachable, the slave name server will have the transferred zone information and will be able to serve it. Zone Files BIND zone files An example master zone file for example.org (existing within /etc/namedb/master/example.org) is as follows: $TTL 3600 ; 1 hour default TTL example.org. IN SOA ns1.example.org. admin.example.org. 
( 2006051501 ; Serial 10800 ; Refresh 3600 ; Retry 604800 ; Expire 300 ; Negative Response TTL ) ; DNS Servers IN NS ns1.example.org. IN NS ns2.example.org. ; MX Records IN MX 10 mx.example.org. IN MX 20 mail.example.org. IN A 192.168.1.1 ; Machine Names localhost IN A 127.0.0.1 ns1 IN A 192.168.1.2 ns2 IN A 192.168.1.3 mx IN A 192.168.1.4 mail IN A 192.168.1.5 ; Aliases www IN CNAME example.org. Note that every hostname ending in a . is an exact hostname, whereas everything without a trailing . is relative to the origin. For example, ns1 is translated into ns1.example.org. The format of a zone file follows: recordname IN recordtype value DNS records The most commonly used DNS records: SOA start of zone authority NS an authoritative name server A a host address CNAME the canonical name for an alias MX mail exchanger PTR a domain name pointer (used in reverse DNS) example.org. IN SOA ns1.example.org. admin.example.org. ( 2006051501 ; Serial 10800 ; Refresh after 3 hours 3600 ; Retry after 1 hour 604800 ; Expire after 1 week 300 ) ; Negative Response TTL example.org. the domain name, also the origin for this zone file. ns1.example.org. the primary/authoritative name server for this zone. admin.example.org. the responsible person for this zone, email address with @ replaced. (admin@example.org becomes admin.example.org) 2006051501 the serial number of the file. This must be incremented each time the zone file is modified. Nowadays, many admins prefer a yyyymmddrr format for the serial number. 2006051501 would mean last modified 05/15/2006, the latter 01 being the first time the zone file has been modified this day. The serial number is important as it alerts slave name servers for a zone when it is updated. IN NS ns1.example.org. This is an NS entry. Every name server that is going to reply authoritatively for the zone must have one of these entries. localhost IN A 127.0.0.1 ns1 IN A 192.168.1.2 ns2 IN A 192.168.1.3 mx IN A 192.168.1.4 mail IN A 192.168.1.5 The A record indicates machine names. As seen above, ns1.example.org would resolve to 192.168.1.2. IN A 192.168.1.1 This line assigns IP address 192.168.1.1 to the current origin, in this case example.org. www IN CNAME @ The canonical name record is usually used for giving aliases to a machine. In the example, www is aliased to the master machine whose name happens to be the same as the domain name example.org (192.168.1.1). CNAMEs can never be used together with another kind of record for the same hostname. MX record IN MX 10 mail.example.org. The MX record indicates which mail servers are responsible for handling incoming mail for the zone. mail.example.org is the hostname of a mail server, and 10 is the priority of that mail server. One can have several mail servers, with priorities of 10, 20 and so on. A mail server attempting to deliver to example.org would first try the highest priority MX (the record with the lowest priority number), then the second highest, etc, until the mail can be properly delivered. For in-addr.arpa zone files (reverse DNS), the same format is used, except with PTR entries instead of A or CNAME. $TTL 3600 1.168.192.in-addr.arpa. IN SOA ns1.example.org. admin.example.org. ( 2006051501 ; Serial 10800 ; Refresh 3600 ; Retry 604800 ; Expire 300 ) ; Negative Response TTL IN NS ns1.example.org. IN NS ns2.example.org. 1 IN PTR example.org. 2 IN PTR ns1.example.org. 3 IN PTR ns2.example.org. 4 IN PTR mx.example.org. 5 IN PTR mail.example.org. 
This file gives the proper IP address to hostname mappings for the above fictitious domain. It is worth noting that all names on the right side of a PTR record need to be fully qualified (i.e., end in a .). Caching Name Server BIND caching name server A caching name server is a name server whose primary role is to resolve recursive queries. It simply asks queries of its own, and remembers the answers for later use. <acronym role="Domain Name Security Extensions">DNSSEC</acronym> BIND DNS security extensions Domain Name System Security Extensions, or DNSSEC for short, is a suite of specifications to protect resolving name servers from forged DNS data, such as spoofed DNS records. By using digital signatures, a resolver can verify the integrity of the record. Note that DNSSEC only provides integrity via digitally signing the Resource Records (RRs). It provides neither confidentiality nor protection against false end-user assumptions. This means that it cannot protect against people going to example.net instead of example.com. The only thing DNSSEC does is authenticate that the data has not been compromised in transit. The security of DNS is an important step in securing the Internet in general. For more in-depth details of how DNSSEC works, the relevant RFCs are a good place to start. See the list in . The following sections will demonstrate how to enable DNSSEC for an authoritative DNS server and a recursive (or caching) DNS server running BIND 9. While all versions of BIND 9 support DNSSEC, it is necessary to have at least version 9.6.2 in order to be able to use the signed root zone when validating DNS queries. This is because earlier versions lack the required algorithms to enable validation using the root zone key. It is strongly recommended to use the latest version of BIND 9.7 or later to take advantage of automatic key updating for the root key, as well as other features to automatically keep zones signed and signatures up to date. Where configurations differ between 9.6.2 and 9.7 and later, differences will be pointed out. Recursive <acronym>DNS</acronym> Server Configuration Enabling DNSSEC validation of queries performed by a recursive DNS server requires a few changes to named.conf. Before making these changes the root zone key, or trust anchor, must be acquired. Currently the root zone key is not available in a file format BIND understands, so it has to be manually converted into the proper format. The key itself can be obtained by querying the root zone for it using dig. By running &prompt.user; dig +multi +noall +answer DNSKEY . > root.dnskey the key will end up in root.dnskey. The contents should look something like this: . 93910 IN DNSKEY 257 3 8 ( AwEAAagAIKlVZrpC6Ia7gEzahOR+9W29euxhJhVVLOyQ bSEW0O8gcCjFFVQUTf6v58fLjwBd0YI0EzrAcQqBGCzh /RStIoO8g0NfnfL2MTJRkxoXbfDaUeVPQuYEhg37NZWA JQ9VnMVDxP/VHL496M/QZxkjf5/Efucp2gaDX6RS6CXp oY68LsvPVjR0ZSwzz1apAzvN9dlzEheX7ICJBBtuA6G3 LQpzW5hOA2hzCTMjJPJ8LbqF6dsV6DoBQzgul0sGIcGO Yl7OyQdXfZ57relSQageu+ipAdTTJ25AsRTAoub8ONGc LmqrAmRLKBP1dfwhYB4N7knNnulqQxA+Uk1ihz0= ) ; key id = 19036 . 93910 IN DNSKEY 256 3 8 ( AwEAAcaGQEA+OJmOzfzVfoYN249JId7gx+OZMbxy69Hf UyuGBbRN0+HuTOpBxxBCkNOL+EJB9qJxt+0FEY6ZUVjE g58sRr4ZQ6Iu6b1xTBKgc193zUARk4mmQ/PPGxn7Cn5V EGJ/1h6dNaiXuRHwR+7oWh7DnzkIJChcTqlFrXDW3tjt ) ; key id = 34525 Do not be alarmed if the obtained keys differ from this example. They might have changed since these instructions were last updated. This output actually contains two keys. 
The first key in the listing, with the value 257 after the DNSKEY record type, is the one needed. This value indicates that this is a Secure Entry Point (SEP), commonly known as a Key Signing Key (KSK). The second key, with value 256, is a subordinate key, commonly called a Zone Signing Key (ZSK). More on the different key types later in . Now the key must be verified and formatted so that BIND can use it. To verify the key, generate a DS RR set. Create a file containing these RRs with &prompt.user; dnssec-dsfromkey -f root.dnskey . > root.ds These records use SHA-1 and SHA-256 respectively, and should look similar to the following example, where the longer is using SHA-256. . IN DS 19036 8 1 B256BD09DC8DD59F0E0F0D8541B8328DD986DF6E . IN DS 19036 8 2 49AAC11D7B6F6446702E54A1607371607A1A41855200FD2CE1CDDE32F24E8FB5 The SHA-256 RR can now be compared to the digest in https://data.iana.org/root-anchors/root-anchors.xml. To be absolutely sure that the key has not been tampered with the data in the XML file can be verified using the PGP signature in https://data.iana.org/root-anchors/root-anchors.asc. Next, the key must be formatted properly. This differs a little between BIND versions 9.6.2 and 9.7 and later. In version 9.7 support was added to automatically track changes to the key and update it as necessary. This is done using managed-keys as seen in the example below. When using the older version, the key is added using a trusted-keys statement and updates must be done manually. For BIND 9.6.2 the format should look like: trusted-keys { "." 257 3 8 "AwEAAagAIKlVZrpC6Ia7gEzahOR+9W29euxhJhVVLOyQbSEW0O8gcCjF FVQUTf6v58fLjwBd0YI0EzrAcQqBGCzh/RStIoO8g0NfnfL2MTJRkxoX bfDaUeVPQuYEhg37NZWAJQ9VnMVDxP/VHL496M/QZxkjf5/Efucp2gaD X6RS6CXpoY68LsvPVjR0ZSwzz1apAzvN9dlzEheX7ICJBBtuA6G3LQpz W5hOA2hzCTMjJPJ8LbqF6dsV6DoBQzgul0sGIcGOYl7OyQdXfZ57relS Qageu+ipAdTTJ25AsRTAoub8ONGcLmqrAmRLKBP1dfwhYB4N7knNnulq QxA+Uk1ihz0="; }; For 9.7 the format will instead be: managed-keys { "." initial-key 257 3 8 "AwEAAagAIKlVZrpC6Ia7gEzahOR+9W29euxhJhVVLOyQbSEW0O8gcCjF FVQUTf6v58fLjwBd0YI0EzrAcQqBGCzh/RStIoO8g0NfnfL2MTJRkxoX bfDaUeVPQuYEhg37NZWAJQ9VnMVDxP/VHL496M/QZxkjf5/Efucp2gaD X6RS6CXpoY68LsvPVjR0ZSwzz1apAzvN9dlzEheX7ICJBBtuA6G3LQpz W5hOA2hzCTMjJPJ8LbqF6dsV6DoBQzgul0sGIcGOYl7OyQdXfZ57relS Qageu+ipAdTTJ25AsRTAoub8ONGcLmqrAmRLKBP1dfwhYB4N7knNnulq QxA+Uk1ihz0="; }; The root key can now be added to named.conf either directly or by including a file containing the key. After these steps, configure BIND to do DNSSEC validation on queries by editing named.conf and adding the following to the options directive: dnssec-enable yes; dnssec-validation yes; To verify that it is actually working use dig to make a query for a signed zone using the resolver just configured. A successful reply will contain the AD flag to indicate the data was authenticated. Running a query such as &prompt.user; dig @resolver +dnssec se ds should return the DS RR for the .se zone. In the flags: section the AD flag should be set, as seen in: ... ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1 ... The resolver is now capable of authenticating DNS queries. Authoritative <acronym>DNS</acronym> Server Configuration In order to get an authoritative name server to serve a DNSSEC signed zone a little more work is required. A zone is signed using cryptographic keys which must be generated. It is possible to use only one key for this. 
The preferred method, however, is to have a strong, well-protected Key Signing Key (KSK) that is not rotated very often and a Zone Signing Key (ZSK) that is rotated more frequently. Information on recommended operational practices can be found in RFC 4641: DNSSEC Operational Practices. Practices regarding the root zone can be found in DNSSEC Practice Statement for the Root Zone KSK operator and DNSSEC Practice Statement for the Root Zone ZSK operator. The KSK is used to build a chain of authority to the data in need of validation and as such is also called a Secure Entry Point (SEP) key. A message digest of this key, called a Delegation Signer (DS) record, must be published in the parent zone to establish the trust chain. How this is accomplished depends on the parent zone owner. The ZSK is used to sign the zone, and only needs to be published there. To enable DNSSEC for the example.com zone depicted in previous examples, the first step is to use dnssec-keygen to generate the KSK and ZSK key pair. This key pair can utilize different cryptographic algorithms. It is recommended to use RSA/SHA256 for the keys, and a 2048-bit key length should be enough. To generate the KSK for example.com, run &prompt.user; dnssec-keygen -f KSK -a RSASHA256 -b 2048 -n ZONE example.com and to generate the ZSK, run &prompt.user; dnssec-keygen -a RSASHA256 -b 2048 -n ZONE example.com dnssec-keygen outputs two files, the public and the private keys, in files named similar to Kexample.com.+005+nnnnn.key (public) and Kexample.com.+005+nnnnn.private (private). The nnnnn part of the file name is a five-digit key ID. Keep track of which key ID belongs to which key. This is especially important when having more than one key in a zone. It is also possible to rename the keys. For each KSK file do: &prompt.user; mv Kexample.com.+005+nnnnn.key Kexample.com.+005+nnnnn.KSK.key &prompt.user; mv Kexample.com.+005+nnnnn.private Kexample.com.+005+nnnnn.KSK.private For the ZSK files, substitute KSK for ZSK as necessary. The files can now be included in the zone file, using the $include statement. It should look something like this: $include Kexample.com.+005+nnnnn.KSK.key ; KSK $include Kexample.com.+005+nnnnn.ZSK.key ; ZSK Finally, sign the zone and tell BIND to use the signed zone file. To sign a zone, dnssec-signzone is used. The command to sign the zone example.com, located in example.com.db, would look similar to &prompt.user; dnssec-signzone -o example.com -k Kexample.com.+005+nnnnn.KSK example.com.db Kexample.com.+005+nnnnn.ZSK.key The key supplied to the -k argument is the KSK and the other key file is the ZSK that should be used in the signing. It is possible to supply more than one KSK and ZSK, which will result in the zone being signed with all supplied keys. This can be needed to supply zone data signed using more than one algorithm. The output of dnssec-signzone is a zone file with all RRs signed. This output will end up in a file with the extension .signed, such as example.com.db.signed. The DS records will also be written to a separate file dsset-example.com. To use this signed zone just modify the zone directive in named.conf to use example.com.db.signed. By default, the signatures are only valid for 30 days, meaning that the zone needs to be resigned in about 15 days to be sure that resolvers are not caching records with stale signatures. It is possible to make a script and a cron job to do this. See relevant manuals for details. Be sure to keep private keys confidential, as with all cryptographic keys.
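As an illustration of such a script (a minimal sketch; the paths, key file names, and zone location are assumptions carried over from the examples above), the zone could be re-signed and reloaded with:

#!/bin/sh
# Re-sign the example.com zone and tell named to reload it.
# Replace nnnnn with the real key IDs and adjust paths as needed.
cd /etc/namedb/master || exit 1
dnssec-signzone -o example.com -k Kexample.com.+005+nnnnn.KSK example.com.db Kexample.com.+005+nnnnn.ZSK.key
rndc reload example.com

Running a script like this from &man.cron.8; every two weeks keeps the signatures from going stale.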
When changing a key, it is best to include the new key into the zone, while still signing with the old one, and then move over to using the new key to sign. After these steps are done, the old key can be removed from the zone. Failure to do this might render the DNS data unavailable for a time, until the new key has propagated through the DNS hierarchy. For more information on key rollovers and other DNSSEC operational issues, see RFC 4641: DNSSEC Operational Practices. Automation Using <acronym>BIND</acronym> 9.7 or Later Beginning with BIND version 9.7, a new feature called Smart Signing was introduced. This feature aims to make the key management and signing process simpler by automating parts of the task. By putting the keys into a directory called a key repository, and using the new option auto-dnssec, it is possible to create a dynamic zone which will be resigned as needed. To update this zone, use nsupdate with the new option -l. rndc has also grown the ability to sign zones with keys in the key repository, using the option sign. To tell BIND to use this automatic signing and zone updating for example.com, add the following to named.conf: zone example.com { type master; key-directory "/etc/named/keys"; update-policy local; auto-dnssec maintain; file "/etc/named/dynamic/example.com.zone"; }; After making these changes, generate keys for the zone as explained in , put those keys in the key repository given as the argument to the key-directory in the zone configuration, and the zone will be signed automatically. Updates to a zone configured this way must be done using nsupdate, which will take care of re-signing the zone with the new data added. For further details, see and the BIND documentation. Security Although BIND is the most common implementation of DNS, there is always the issue of security. Possible and exploitable security holes are sometimes found. While &os; automatically drops named into a &man.chroot.8; environment, there are several other security mechanisms in place which could help to fend off possible DNS service attacks. It is always a good idea to read CERT's security advisories and to subscribe to the &a.security-notifications; to stay up to date with the current Internet and &os; security issues. If a problem arises, keeping sources up to date and having a fresh build of named may help. Further Reading BIND/named manual pages: &man.rndc.8; &man.named.8; &man.named.conf.5; &man.nsupdate.1; &man.dnssec-signzone.8; &man.dnssec-keygen.8; Official ISC BIND Page Official ISC BIND Forum O'Reilly DNS and BIND 5th Edition Root DNSSEC DNSSEC Trust Anchor Publication for the Root Zone RFC 1034 - Domain Names - Concepts and Facilities RFC 1035 - Domain Names - Implementation and Specification RFC 4033 - DNS Security Introduction and Requirements RFC 4034 - Resource Records for the DNS Security Extensions RFC 4035 - Protocol Modifications for the DNS Security Extensions RFC 4641 - DNSSEC Operational Practices RFC 5011 - Automated Updates of DNS Security (DNSSEC Trust Anchors)
Apache HTTP Server Murray Stokely Contributed by web servers setting up Apache The open source Apache HTTP Server is the most widely used web server. &os; does not install this web server by default, but it can be installed from the www/apache24 package or port. This section summarizes how to configure and start version 2.x of the Apache HTTP Server on &os;. For more detailed information about Apache 2.x and its configuration directives, refer to httpd.apache.org. Configuring and Starting Apache Apache configuration file In &os;, the main Apache HTTP Server configuration file is installed as /usr/local/etc/apache2x/httpd.conf, where x represents the version number. This ASCII text file begins comment lines with a #. The most frequently modified directives are: ServerRoot "/usr/local" Specifies the default directory hierarchy for the Apache installation. Binaries are stored in the bin and sbin subdirectories of the server root and configuration files are stored in the etc/apache2x subdirectory. ServerAdmin you@example.com Change this to the email address that should receive reports of problems with the server. This address also appears on some server-generated pages, such as error documents. ServerName www.example.com:80 Allows an administrator to set a hostname which is sent back to clients for the server. For example, www can be used instead of the actual hostname. If the system does not have a registered DNS name, enter its IP address instead. If the server will listen on an alternate port, change 80 to the alternate port number. DocumentRoot "/usr/local/www/apache2x/data" The directory where documents will be served from. By default, all requests are taken from this directory, but symbolic links and aliases may be used to point to other locations. It is always a good idea to make a backup copy of the default Apache configuration file before making changes. When the configuration of Apache is complete, save the file and verify the configuration using apachectl. Running apachectl configtest should return Syntax OK. Apache starting or stopping To launch Apache at system startup, add the following line to /etc/rc.conf: apache24_enable="YES" If Apache should be started with non-default options, the following line may be added to /etc/rc.conf to specify the needed flags: apache24_flags="" If apachectl does not report configuration errors, start httpd now: &prompt.root; service apache24 start The httpd service can be tested by entering http://localhost in a web browser, replacing localhost with the fully-qualified domain name of the machine running httpd. The default web page that is displayed is /usr/local/www/apache24/data/index.html. The Apache configuration can be tested for errors after making subsequent configuration changes while httpd is running using the following command: &prompt.root; service apache24 configtest It is important to note that configtest is not an &man.rc.8; standard, and should not be expected to work for all startup scripts. Virtual Hosting Virtual hosting allows multiple websites to run on one Apache server. The virtual hosts can be IP-based or name-based. IP-based virtual hosting uses a different IP address for each website. Name-based virtual hosting uses the client's HTTP/1.1 headers to figure out the hostname, which allows the websites to share the same IP address. To set up Apache to use name-based virtual hosting, add a VirtualHost block for each website.
For example, for the webserver named www.domain.tld with a virtual domain of www.someotherdomain.tld, add the following entries to httpd.conf: <VirtualHost *> ServerName www.domain.tld DocumentRoot /www/domain.tld </VirtualHost> <VirtualHost *> ServerName www.someotherdomain.tld DocumentRoot /www/someotherdomain.tld </VirtualHost> For each virtual host, replace the values for ServerName and DocumentRoot with the values to be used. For more information about setting up virtual hosts, consult the official Apache documentation at: http://httpd.apache.org/docs/vhosts/. Apache Modules Apache modules Apache uses modules to augment the functionality provided by the basic server. Refer to http://httpd.apache.org/docs/current/mod/ for a complete listing of the available modules and their configuration details. In &os;, some modules can be compiled with the www/apache24 port. Type make config within /usr/ports/www/apache24 to see which modules are available and which are enabled by default. If the module is not compiled with the port, the &os; Ports Collection provides an easy way to install many modules. This section describes three of the most commonly used modules. <filename>mod_ssl</filename> web servers secure SSL cryptography The mod_ssl module uses the OpenSSL library to provide strong cryptography via the Secure Sockets Layer (SSLv3) and Transport Layer Security (TLSv1) protocols. This module provides everything necessary to request a signed certificate from a trusted certificate signing authority to run a secure web server on &os;. In &os;, the mod_ssl module is enabled by default in both the package and the port. The available configuration directives are explained at http://httpd.apache.org/docs/current/mod/mod_ssl.html. <filename>mod_perl</filename> mod_perl Perl The mod_perl module makes it possible to write Apache modules in Perl. In addition, the persistent interpreter embedded in the server avoids the overhead of starting an external interpreter and the penalty of Perl start-up time. mod_perl can be installed using the www/mod_perl2 package or port. Documentation for using this module can be found at http://perl.apache.org/docs/2.0/index.html. <filename>mod_php</filename> Tom Rhodes Written by mod_php PHP PHP: Hypertext Preprocessor (PHP) is a general-purpose scripting language that is especially suited for web development. Capable of being embedded into HTML, its syntax draws upon C, &java;, and Perl with the intention of allowing web developers to write dynamically generated webpages quickly. To gain support for PHP5 for the Apache web server, install the www/mod_php56 package or port. This will install and configure the modules required to support dynamic PHP applications. The installation will automatically add this line to /usr/local/etc/apache24/httpd.conf: LoadModule php5_module libexec/apache24/libphp5.so Then, perform a graceful restart to load the PHP module: &prompt.root; apachectl graceful The PHP support provided by www/mod_php56 is limited. Additional support can be installed using the lang/php56-extensions port which provides a menu-driven interface to the available PHP extensions. Alternatively, individual extensions can be installed using the appropriate port. For instance, to add PHP support for the MySQL database server, install databases/php56-mysql.
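For instance, the MySQL extension named above could be built and installed from the Ports Collection (a sketch; a binary package of the same name can usually be installed with pkg instead):

&prompt.root; cd /usr/ports/databases/php56-mysql
&prompt.root; make install clean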
After installing an extension, the Apache server must be reloaded to pick up the new configuration changes: &prompt.root; apachectl graceful Dynamic Websites web servers dynamic In addition to mod_perl and mod_php, other languages are available for creating dynamic web content. These include Django and Ruby on Rails. Django Python Django Django is a BSD-licensed framework designed to allow developers to write high performance, elegant web applications quickly. It provides an object-relational mapper so that data types are developed as Python objects. A rich dynamic database-access API is provided for those objects without the developer ever having to write SQL. It also provides an extensible template system so that the logic of the application is separated from the HTML presentation. Django depends on mod_python and an SQL database engine. In &os;, the www/py-django port automatically installs mod_python and supports the PostgreSQL, MySQL, or SQLite databases, with the default being SQLite. To change the database engine, type make config within /usr/ports/www/py-django, then install the port. Once Django is installed, the application will need a project directory along with the Apache configuration in order to use the embedded Python interpreter. This interpreter is used to call the application for specific URLs on the site. To configure Apache to pass requests for certain URLs to the web application, add the following to httpd.conf, specifying the full path to the project directory: <Location "/"> SetHandler python-program PythonPath "['/dir/to/the/django/packages/'] + sys.path" PythonHandler django.core.handlers.modpython SetEnv DJANGO_SETTINGS_MODULE mysite.settings PythonAutoReload On PythonDebug On </Location> Refer to https://docs.djangoproject.com/en/1.6/ for more information on how to use Django. Ruby on Rails Ruby on Rails Ruby on Rails is another open source web framework that provides a full development stack. It is optimized to make web developers more productive and capable of writing powerful applications quickly. On &os;, it can be installed using the www/rubygem-rails package or port. Refer to http://rubyonrails.org/documentation for more information on how to use Ruby on Rails. File Transfer Protocol (<acronym>FTP</acronym>) FTP servers The File Transfer Protocol (FTP) provides users with a simple way to transfer files to and from an FTP server. &os; includes FTP server software, ftpd, in the base system. &os; provides several configuration files for controlling access to the FTP server. This section summarizes these files. Refer to &man.ftpd.8; for more details about the built-in FTP server. Configuration The most important configuration step is deciding which accounts will be allowed access to the FTP server. A &os; system has a number of system accounts which should not be allowed FTP access. The list of users disallowed any FTP access can be found in /etc/ftpusers. By default, it includes system accounts. Additional users that should not be allowed access to FTP can be added. In some cases it may be desirable to restrict the access of some users without preventing them completely from using FTP. This can be accomplished by creating /etc/ftpchroot as described in &man.ftpchroot.5;. This file lists users and groups subject to FTP access restrictions. FTP anonymous To enable anonymous FTP access to the server, create a user named ftp on the &os; system. Users will then be able to log on to the FTP server with a username of ftp or anonymous. 
When prompted for the password, any input will be accepted, but by convention, an email address should be used as the password. The FTP server will call &man.chroot.2; when an anonymous user logs in, to restrict access to only the home directory of the ftp user. There are two text files that can be created to specify welcome messages to be displayed to FTP clients. The contents of /etc/ftpwelcome will be displayed to users before they reach the login prompt. After a successful login, the contents of /etc/ftpmotd will be displayed. Note that the path to this file is relative to the login environment, so the contents of ~ftp/etc/ftpmotd would be displayed for anonymous users. Once the FTP server has been configured, set the appropriate variable in /etc/rc.conf to start the service during boot: ftpd_enable="YES" To start the service now: &prompt.root; service ftpd start Test the connection to the FTP server by typing: &prompt.user; ftp localhost syslog log files FTP The ftpd daemon uses &man.syslog.3; to log messages. By default, the system log daemon will write messages related to FTP in /var/log/xferlog. The location of the FTP log can be modified by changing the following line in /etc/syslog.conf: ftp.info /var/log/xferlog FTP anonymous Be aware of the potential problems involved with running an anonymous FTP server. In particular, think twice about allowing anonymous users to upload files. It may turn out that the FTP site becomes a forum for the trade of unlicensed commercial software or worse. If anonymous FTP uploads are required, then verify the permissions so that these files can not be read by other anonymous users until they have been reviewed by an administrator. File and Print Services for µsoft.windows; Clients (Samba) Samba server Microsoft Windows file server Windows clients print server Windows clients Samba is a popular open source software package that provides file and print services using the SMB/CIFS protocol. This protocol is built into µsoft.windows; systems. It can be added to non-µsoft.windows; systems by installing the Samba client libraries. The protocol allows clients to access shared data and printers. These shares can be mapped as a local disk drive and shared printers can be used as if they were local printers. On &os;, the Samba client libraries can be installed using the net/samba-smbclient port or package. The client provides the ability for a &os; system to access SMB/CIFS shares in a µsoft.windows; network. A &os; system can also be configured to act as a Samba server. This allows the administrator to create SMB/CIFS shares on the &os; system which can be accessed by clients running µsoft.windows; or the Samba client libraries. In order to configure a Samba server on &os;, the net/samba36 port or package must first be installed. The rest of this section provides an overview of how to configure a Samba server on &os;. Configuration A default Samba configuration file is installed as /usr/local/share/examples/samba36/smb.conf.default. This file must be copied to /usr/local/etc/smb.conf and customized before Samba can be used. Runtime configuration information for Samba is found in smb.conf, such as definitions of the printers and file system shares that will be shared with &windows; clients. The Samba package includes a web based tool called swat which provides a simple way for configuring smb.conf. Using the Samba Web Administration Tool (SWAT) The Samba Web Administration Tool (SWAT) runs as a daemon from inetd. 
Therefore, inetd must be enabled as shown in . To enable swat, uncomment the following line in /etc/inetd.conf: swat stream tcp nowait/400 root /usr/local/sbin/swat swat As explained in , the inetd configuration must be reloaded after this configuration file is changed. Once swat has been enabled, use a web browser to connect to http://localhost:901. At first login, enter the credentials for root. Once logged in, the main Samba configuration page and the system documentation will be available. Begin configuration by clicking on the Globals tab. The Globals section corresponds to the variables that are set in the [global] section of /usr/local/etc/smb.conf. Global Settings Whether swat is used or /usr/local/etc/smb.conf is edited directly, the first directives encountered when configuring Samba are: workgroup The domain name or workgroup name for the computers that will be accessing this server. netbios name The NetBIOS name by which a Samba server is known. By default it is the same as the first component of the host's DNS name. server string The string that will be displayed in the output of net view and some other networking tools that seek to display descriptive text about the server. Security Settings Two of the most important settings in /usr/local/etc/smb.conf are the security model and the backend password format for client users. The following directives control these options: security The two most common options are security = share and security = user. If the clients use usernames that are the same as their usernames on the &os; machine, user level security should be used. This is the default security policy and it requires clients to first log on before they can access shared resources. In share level security, clients do not need to log onto the server with a valid username and password before attempting to connect to a shared resource. This was the default security model for older versions of Samba. passdb backend NIS+ LDAP SQL database Samba has several different backend authentication models. Clients may be authenticated with LDAP, NIS+, an SQL database, or a modified password file. The default authentication method is smbpasswd, and that is all that will be covered here. Assuming that the default smbpasswd backend is used, /usr/local/etc/samba/smbpasswd must be created to allow Samba to authenticate clients. To provide &unix; user accounts access from &windows; clients, use the following command to add each required user to that file: &prompt.root; smbpasswd -a username The recommended backend is now tdbsam. If this backend is selected, use the following command to add user accounts: &prompt.root; pdbedit -a -u username This section has only mentioned the most commonly used settings. Refer to the Official Samba HOWTO for additional information about the available configuration options. Starting <application>Samba</application> To enable Samba at boot time, add the following line to /etc/rc.conf: samba_enable="YES" Alternately, its services can be started separately: nmbd_enable="YES" smbd_enable="YES" To start Samba now: &prompt.root; service samba start Starting SAMBA: removing stale tdbs : Starting nmbd. Starting smbd. Samba consists of three separate daemons. Both the nmbd and smbd daemons are started by samba_enable. If winbind name resolution services are enabled in smb.conf, the winbindd daemon is started as well. 
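Putting the directives described above together, a minimal /usr/local/etc/smb.conf for a simple stand-alone file server might look like the following sketch; the workgroup, NetBIOS name, and share path are hypothetical and must be adapted to the local environment:

[global]
workgroup = WORKGROUP
netbios name = FILESERVER
server string = Samba file server
security = user
passdb backend = tdbsam

[data]
path = /home/samba/data
read only = no

Samba also ships a testparm utility which can be used to check smb.conf for syntax errors before the daemons are restarted.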
Samba may be stopped at any time by typing: &prompt.root; service samba stop Samba is a complex software suite with functionality that allows broad integration with µsoft.windows; networks. For more information about functionality beyond the basic configuration described here, refer to http://www.samba.org. Clock Synchronization with NTP NTP ntpd Over time, a computer's clock is prone to drift. This is problematic as many network services require the computers on a network to share the same accurate time. Accurate time is also needed to ensure that file timestamps stay consistent. The Network Time Protocol (NTP) is one way to provide clock accuracy in a network. &os; includes &man.ntpd.8; which can be configured to query other NTP servers in order to synchronize the clock on that machine or to provide time services to other computers in the network. The servers which are queried can be local to the network or provided by an ISP. In addition, an online list of publicly accessible NTP servers is available. When choosing a public NTP server, select one that is geographically close and review its usage policy. Choosing several NTP servers is recommended in case one of the servers becomes unreachable or its clock proves unreliable. As ntpd receives responses, it favors reliable servers over the less reliable ones. This section describes how to configure ntpd on &os;. Further documentation can be found in /usr/share/doc/ntp/ in HTML format. <acronym>NTP</acronym> Configuration NTP ntp.conf On &os;, the built-in ntpd can be used to synchronize a system's clock. To enable ntpd at boot time, add ntpd_enable="YES" to /etc/rc.conf. Additional variables can be specified in /etc/rc.conf. Refer to &man.rc.conf.5; and &man.ntpd.8; for details. This application reads /etc/ntp.conf to determine which NTP servers to query. Here is a simple example of an /etc/ntp.conf: Sample <filename>/etc/ntp.conf</filename> server ntplocal.example.com prefer server timeserver.example.org server ntp2a.example.net driftfile /var/db/ntp.drift The format of this file is described in &man.ntp.conf.5;. The server option specifies which servers to query, with one server listed on each line. If a server entry includes prefer, that server is preferred over other servers. A response from a preferred server will be discarded if it differs significantly from other servers' responses; otherwise it will be used. The prefer argument should only be used for NTP servers that are known to be highly accurate, such as those with special time monitoring hardware. The driftfile entry specifies which file is used to store the system clock's frequency offset. ntpd uses this to automatically compensate for the clock's natural drift, allowing it to maintain a reasonably correct setting even if it is cut off from all external time sources for a period of time. This file also stores information about previous responses from NTP servers. Since this file contains internal information for NTP, it should not be modified. By default, an NTP server is accessible to any network host. The restrict option in /etc/ntp.conf can be used to control which systems can access the server. For example, to deny all machines from accessing the NTP server, add the following line to /etc/ntp.conf: restrict default ignore This will also prevent access from other NTP servers. If there is a need to synchronize with an external NTP server, allow only that specific server. Refer to &man.ntp.conf.5; for more information. 
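As a sketch of that approach, to ignore all hosts by default while still allowing the local machine to exchange time with one specific external server, entries along these lines could be used; the address 203.0.113.5 is only a placeholder for the real server:

restrict default ignore
restrict 127.0.0.1
restrict 203.0.113.5 nomodify notrap nopeer noquery
server 203.0.113.5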
To allow machines within the network to synchronize their clocks with the server, but ensure they are not allowed to configure the server or be used as peers to synchronize against, instead use: restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap where 192.168.1.0 is the local network address and 255.255.255.0 is the network's subnet mask. Multiple restrict entries are supported. For more details, refer to the Access Control Support subsection of &man.ntp.conf.5;. Once ntpd_enable="YES" has been added to /etc/rc.conf, ntpd can be started now without rebooting the system by typing: &prompt.root; service ntpd start Using <acronym>NTP</acronym> with a <acronym>PPP</acronym> Connection ntpd does not need a permanent connection to the Internet to function properly. However, if a PPP connection is configured to dial out on demand, NTP traffic should be prevented from triggering a dial out or keeping the connection alive. This can be configured with filter directives in /etc/ppp/ppp.conf. For example: set filter dial 0 deny udp src eq 123 # Prevent NTP traffic from initiating dial out set filter dial 1 permit 0 0 set filter alive 0 deny udp src eq 123 # Prevent incoming NTP traffic from keeping the connection open set filter alive 1 deny udp dst eq 123 # Prevent outgoing NTP traffic from keeping the connection open set filter alive 2 permit 0/0 0/0 For more details, refer to the PACKET FILTERING section in &man.ppp.8; and the examples in /usr/share/examples/ppp/. Some Internet access providers block low-numbered ports, preventing NTP from functioning since replies never reach the machine. <acronym>iSCSI</acronym> Initiator and Target Configuration iSCSI is a way to share storage over a network. Unlike NFS, which works at the file system level, iSCSI works at the block device level. In iSCSI terminology, the system that shares the storage is known as the target. The storage can be a physical disk, or an area representing multiple disks or a portion of a physical disk. For example, if the disk(s) are formatted with ZFS, a zvol can be created to use as the iSCSI storage. The clients which access the iSCSI storage are called initiators. To initiators, the storage available through iSCSI appears as a raw, unformatted disk known as a LUN. Device nodes for the disk appear in /dev/ and the device must be separately formatted and mounted. Beginning with 10.0-RELEASE, &os; provides a native, kernel-based iSCSI target and initiator. This section describes how to configure a &os; system as a target or an initiator. Configuring an <acronym>iSCSI</acronym> Target The native iSCSI target is supported starting with &os; 10.0-RELEASE. To use iSCSI in older versions of &os;, install a userspace target from the Ports Collection, such as net/istgt. This chapter only describes the native target. To configure an iSCSI target, create the /etc/ctl.conf configuration file, add a line to /etc/rc.conf to make sure the &man.ctld.8; daemon is automatically started at boot, and then start the daemon. The following is an example of a simple /etc/ctl.conf configuration file. Refer to &man.ctl.conf.5; for a more complete description of this file's available options. portal-group pg0 { discovery-auth-group no-authentication listen 0.0.0.0 listen [::] } target iqn.2012-06.com.example:target0 { auth-group no-authentication portal-group pg0 lun 0 { path /data/target0-0 size 4G } } The first entry defines the pg0 portal group. Portal groups define which network addresses the &man.ctld.8; daemon will listen on. 
The discovery-auth-group no-authentication entry indicates that any initiator is allowed to perform iSCSI target discovery without authentication. Lines three and four configure &man.ctld.8; to listen on all IPv4 (listen 0.0.0.0) and IPv6 (listen [::]) addresses on the default port of 3260. It is not necessary to define a portal group as there is a built-in portal group called default. In this case, the difference between default and pg0 is that with default, target discovery is always denied, while with pg0, it is always allowed. The second entry defines a single target. Target has two possible meanings: a machine serving iSCSI or a named group of LUNs. This example uses the latter meaning, where iqn.2012-06.com.example:target0 is the target name. This target name is suitable for testing purposes. For actual use, change com.example to the real domain name, reversed. The 2012-06 represents the year and month of acquiring control of that domain name, and target0 can be any value. Any number of targets can be defined in this configuration file. The auth-group no-authentication line allows all initiators to connect to the specified target and portal-group pg0 makes the target reachable through the pg0 portal group. The next section defines the LUN. To the initiator, each LUN will be visible as a separate disk device. Multiple LUNs can be defined for each target. Each LUN is identified by a number, where LUN 0 is mandatory. The path /data/target0-0 line defines the full path to a file or zvol backing the LUN. That path must exist before starting &man.ctld.8;. The second line is optional and specifies the size of the LUN. Next, to make sure the &man.ctld.8; daemon is started at boot, add this line to /etc/rc.conf: ctld_enable="YES" To start &man.ctld.8; now, run this command: &prompt.root; service ctld start As the &man.ctld.8; daemon is started, it reads /etc/ctl.conf. If this file is edited after the daemon starts, use this command so that the changes take effect immediately: &prompt.root; service ctld reload Authentication The previous example is inherently insecure as it uses no authentication, granting anyone full access to all targets. To require a username and password to access targets, modify the configuration as follows: auth-group ag0 { chap username1 secretsecret chap username2 anothersecret } portal-group pg0 { discovery-auth-group no-authentication listen 0.0.0.0 listen [::] } target iqn.2012-06.com.example:target0 { auth-group ag0 portal-group pg0 lun 0 { path /data/target0-0 size 4G } } The auth-group section defines username and password pairs. An initiator trying to connect to iqn.2012-06.com.example:target0 must first specify a defined username and secret. However, target discovery is still permitted without authentication. To require target discovery authentication, set discovery-auth-group to a defined auth-group name instead of no-authentication. It is common to define a single exported target for every initiator. As a shorthand for the syntax above, the username and password can be specified directly in the target entry: target iqn.2012-06.com.example:target0 { portal-group pg0 chap username1 secretsecret lun 0 { path /data/target0-0 size 4G } } Configuring an <acronym>iSCSI</acronym> Initiator The iSCSI initiator described in this section is supported starting with &os; 10.0-RELEASE. To use the iSCSI initiator available in older versions, refer to &man.iscontrol.8;. The iSCSI initiator requires that the &man.iscsid.8; daemon is running. 
This daemon does not use a configuration file. To start it automatically at boot, add this line to /etc/rc.conf: iscsid_enable="YES" To start &man.iscsid.8; now, run this command: &prompt.root; service iscsid start Connecting to a target can be done with or without an /etc/iscsi.conf configuration file. This section demonstrates both types of connections. Connecting to a Target Without a Configuration File To connect an initiator to a single target, specify the IP address of the portal and the name of the target: &prompt.root; iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0 To verify if the connection succeeded, run iscsictl without any arguments. The output should look similar to this: Target name Target portal State iqn.2012-06.com.example:target0 10.10.10.10 Connected: da0 In this example, the iSCSI session was successfully established, with /dev/da0 representing the attached LUN. If the iqn.2012-06.com.example:target0 target exports more than one LUN, multiple device nodes will be shown in that section of the output: Connected: da0 da1 da2. Any errors will be reported in the output, as well as the system logs. For example, this message usually means that the &man.iscsid.8; daemon is not running: Target name Target portal State iqn.2012-06.com.example:target0 10.10.10.10 Waiting for iscsid(8) The following message suggests a networking problem, such as a wrong IP address or port: Target name Target portal State iqn.2012-06.com.example:target0 10.10.10.11 Connection refused This message means that the specified target name is wrong: Target name Target portal State iqn.2012-06.com.example:target0 10.10.10.10 Not found This message means that the target requires authentication: Target name Target portal State iqn.2012-06.com.example:target0 10.10.10.10 Authentication failed To specify a CHAP username and secret, use this syntax: &prompt.root; iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0 -u user -s secretsecret Connecting to a Target with a Configuration File To connect using a configuration file, create /etc/iscsi.conf with contents like this: t0 { TargetAddress = 10.10.10.10 TargetName = iqn.2012-06.com.example:target0 AuthMethod = CHAP chapIName = user chapSecret = secretsecret } The t0 specifies a nickname for the configuration file section. It will be used by the initiator to specify which configuration to use. The other lines specify the parameters to use during connection. The TargetAddress and TargetName are mandatory, whereas the other options are optional. In this example, the CHAP username and secret are shown. To connect to the defined target, specify the nickname: &prompt.root; iscsictl -An t0 Alternately, to connect to all targets defined in the configuration file, use: &prompt.root; iscsictl -Aa To make the initiator automatically connect to all targets in /etc/iscsi.conf, add the following to /etc/rc.conf: iscsictl_enable="YES" iscsictl_flags="-Aa"
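Once a session is established and the LUN appears on the initiator, it can be partitioned, formatted, and mounted like any local disk. A minimal sketch, assuming the new LUN showed up as /dev/da0:

&prompt.root; gpart create -s gpt da0
&prompt.root; gpart add -t freebsd-ufs da0
&prompt.root; newfs /dev/da0p1
&prompt.root; mount /dev/da0p1 /mnt

Keep in mind that a LUN should normally be mounted by only one initiator at a time unless a cluster-aware file system is used.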
Index: head/en_US.ISO8859-1/books/porters-handbook/uses/chapter.xml =================================================================== --- head/en_US.ISO8859-1/books/porters-handbook/uses/chapter.xml (revision 48481) +++ head/en_US.ISO8859-1/books/porters-handbook/uses/chapter.xml (revision 48482) @@ -1,1492 +1,1492 @@ Using <varname>USES</varname> Macros An Introduction to <varname>USES</varname> USES macros make it easy to declare requirements and settings for a port. They can add dependencies, change building behavior, add metadata to packages, and so on, all by selecting simple, preset values. Each section in this chapter describes a possible value for USES, along with its possible arguments. Arguments are appended to the value after a colon (:). Multiple arguments are separated by commas (,). Using Multiple Values USES= bison perl Adding an Argument USES= gmake:lite Adding Multiple Arguments USES= drupal:7,theme Mixing it All Together USES= pgsql:9.3+ cpe python:2.7,build <literal>ada</literal> Possible arguments: (none), 47, 49, 5 Depends on an Ada-capable compiler, and sets CC accordingly. Defaults to a gcc 4.9 based compiler, use :47 to use the older gcc 4.7 based one and :5 to use the newer gcc 5 based one. <literal>autoreconf</literal> Possible arguments: (none), build Runs autoreconf. It encapsulates the aclocal, autoconf, autoheader, automake, autopoint, and libtoolize commands. Each command applies to ${CONFIGURE_WRKSRC}/configure.ac or its old name, ${CONFIGURE_WRKSRC}/configure.in. If configure.ac defines subdirectories with their own configure.ac using AC_CONFIG_SUBDIRS, autoreconf will recursively update those as well. The :build argument only adds build time dependencies on those tools but does not run autoreconf. <literal>blaslapack</literal> Possible arguments: (none), atlas, netlib (default), gotoblas, openblas Adds dependencies on Blas / Lapack libraries. <literal>bison</literal> Possible arguments: (none), build, run, both Uses devel/bison. By default, with no arguments or with the build argument, it implies bison is a build-time dependency, run implies a run-time dependency, and both implies both run-time and build-time dependencies. <literal>charsetfix</literal> Possible arguments: (none) Prevents the port from installing charset.alias. This must be installed only by converters/libiconv. CHARSETFIX_MAKEFILEIN can be set to a path relative to WRKSRC if charset.alias is not installed by ${WRKSRC}/Makefile.in. <literal>cmake</literal> Possible arguments: (none), outsource, run Uses CMake for configuring and building. With the outsource argument, an out-of-source build will be performed. With the run argument, a run-time dependency is registered. For more information see . <literal>compiler</literal> Possible arguments: (none), c++14-lang, c++11-lang, gcc-c++11-lib, c++11-lib, c++0x, c11, openmp, nestedfct, features Determines which compiler to use based on any given wishes. Use c++14-lang if the port needs a C++14-capable compiler, gcc-c++11-lib if the port needs the g++ compiler with a C++11 library, or c++11-lib if the port needs a C++11-ready standard library. If the port needs a compiler understanding C++11, C++0X, C11, OpenMP, or nested functions, the corresponding parameters can be used. Use features to request a list of features supported by the default compiler. 
After including bsd.port.pre.mk the port can inspect the results using these variables: COMPILER_TYPE: the default compiler on the system, either gcc or clang ALT_COMPILER_TYPE: the alternative compiler on the system, either gcc or clang. Only set if two compilers are present in the base system. COMPILER_VERSION: the first two digits of the version of the default compiler. ALT_COMPILER_VERSION: the first two digits of the version of the alternative compiler, if present. CHOSEN_COMPILER_TYPE: the chosen compiler, either gcc or clang COMPILER_FEATURES: the features supported by the default compiler. It currently lists the C++ library. <literal>cpe</literal> Possible arguments: (none) Include Common Platform Enumeration (CPE) information in package manifest as a CPE 2.3 formatted string. See the CPE specification for details. To add CPE information to a port, follow these steps: Search for the official CPE para for the software product either by using the NVD's CPE search engine or in the official CPE dictionary (warning, very large XML file). Do not ever make up CPE data. Add cpe to USES and compare the result of make -V CPE_STR to the CPE dictionary para. Continue one step at a time until make -V CPE_STR is correct. If the product name (second field, defaults to PORTNAME) is incorrect, define CPE_PRODUCT. If the vendor name (first field, defaults to CPE_PRODUCT) is incorrect, define CPE_VENDOR. If the version field (third field, defaults to PORTVERSION) is incorrect, define CPE_VERSION. If the update field (fourth field, defaults to empty) is incorrect, define CPE_UPDATE. If it is still not correct, check Mk/Uses/cpe.mk for additional details, or contact the &a.ports-secteam;. Derive as much as possible of the CPE name from existing variables such as PORTNAME and PORTVERSION. Use variable modifiers to extract the relevant portions from these variables rather than hardcoding the name. Always run make -V CPE_STR and check the output before committing anything that changes PORTNAME or PORTVERSION or any other variable which is used to derive CPE_STR. <literal>cran</literal> Possible arguments: (none), auto-plist Uses the Comprehensive R Archive Network. Specify auto-plist to automatically generate pkg-plist. <literal>desktop-file-utils</literal> Possible arguments: (none) Uses update-desktop-database from devel/desktop-file-utils. An extra post-install step will be run without interfering with any post-install steps already in the port Makefile. A line with @desktop-file-utils will be added to the plist. <literal>desthack</literal> Possible arguments: (none) Changes the behavior of GNU configure to properly support DESTDIR in case the original software does not. <literal>display</literal> Possible arguments: (none), ARGS Set up a virtual display environment. If the environment variable DISPLAY is not set, then Xvfb is added as a build dependency, and CONFIGURE_ENV is extended with the port number of the currently running instance of Xvfb. The ARGS parameter defaults to install and controls the phase around which to start and stop the virtual display. <literal>dos2unix</literal> Possible arguments: (none) The port has files with line endings in DOS format which need to be converted. Several variables can be set to control which files will be converted. The default is to convert all files, including binaries. See for examples. DOS2UNIX_REGEX: match file names based on a regular expression. DOS2UNIX_FILES: match literal file names. DOS2UNIX_GLOB: match file names based on a glob pattern. 
DOS2UNIX_WRKSRC: the directory from which to start the conversions. Defaults to ${WRKSRC}. <literal>drupal</literal> Possible arguments: 6, 7, module, theme Automate installation of a port that is a Drupal theme or module. Use with the version of Drupal that the port is expecting. For example, USES=drupal:6,module says that this port creates a Drupal 6 module. A Drupal 7 theme can be specified with USES=drupal:7,theme. <literal>execinfo</literal> Possible arguments: (none) Add a library dependency on devel/libexecinfo if libexecinfo.so is not present in the base system. <literal>fakeroot</literal> Possible arguments: (none) - Changes some default behaviour of build systems to allow + Changes some default behavior of build systems to allow installing as a user. See for more information on fakeroot. <literal>fam</literal> Possible arguments: (none), fam, gamin Uses a File Alteration Monitor as a library dependency, either devel/fam or devel/gamin. End users can set WITH_FAM_SYSTEM to specify their preference. <literal>fmake</literal> Possible arguments: (none) Uses devel/fmake as a build-time dependency. <literal>fonts</literal> Possible arguments: (none), fc, fcfontsdir (default), fontsdir, none Adds a runtime dependency on tools needed to register fonts. Depending on the argument, add a @fc ${FONTSDIR} line, @fcfontsdir ${FONTSDIR} line, @fontsdir ${FONTSDIR} line, or no line if the argument is none, to the plist. FONTSDIR defaults to ${PREFIX}/share/fonts/${FONTNAME} and FONTNAME to ${PORTNAME}. Add FONTSDIR to PLIST_SUB and SUB_LIST <literal>fortran</literal> Possible arguments: gcc (default), ifort Uses the Fortran compiler from either GNU or Intel. <literal>fuse</literal> Possible arguments: (none) The port will depend on the FUSE library and handle the dependency on the kernel module depending on the version of &os;. <literal>gecko</literal> Possible arguments: libxul (default), firefox, seamonkey, thunderbird, build, XY, XY+ Add a dependency on different gecko based applications. If libxul is used, it is the only argument allowed. When the argument is not libxul, the firefox, seamonkey, or thunderbird arguments can be used, along with optional build and XY/XY+ version arguments. <literal>gettext</literal> Possible arguments: (none) Deprecated. Will include both gettext-runtime and gettext-tools. <literal>gettext-runtime</literal> Possible arguments: (none), lib (default), build, run Uses devel/gettext-runtime. By default, with no arguments or with the lib argument, implies a library dependency on libintl.so. build and run implies, respectively a build-time and a run-time dependency on gettext. <literal>gettext-tools</literal> Possible arguments: (none), build (default), run Uses devel/gettext-tools. By default, with no argument, or with the build argument, a build time dependency on msgfmt is registered. With the run argument, a run-time dependency is registered. <literal>ghostscript</literal> Possible arguments: X, build, run, nox11 A specific version X can be used. Possible versions are 7, 8, 9 (default), and agpl. nox11 indicates that the -nox11 version of the port is required. build and run add build- and run-time dependencies on Ghostscript. The default is both build- and run-time dependencies. <literal>gmake</literal> Possible arguments: (none), lite Uses devel/gmake, or devel/gmake-lite if the lite argument is used, as a build-time dependency and sets up the environment to use gmake as the default make for the build. 
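To illustrate how a USES value is used in practice, a port whose upstream build system requires GNU make might contain a Makefile along these lines; the port name, version, and distribution site are hypothetical:

PORTNAME=	example
PORTVERSION=	1.0
CATEGORIES=	devel
MASTER_SITES=	http://www.example.com/releases/

MAINTAINER=	ports@FreeBSD.org
COMMENT=	Example utility that builds with GNU make

USES=		gmake

.include <bsd.port.mk>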
<literal>gperf</literal> Possible arguments: (none) Add a buildtime dependency on devel/gperf if gperf is not present in the base system. <literal>gssapi</literal> Possible arguments: (none), base (default), heimdal, mit, flags, bootstrap Handle dependencies needed by consumers of the GSS-API. Only libraries that provide the Kerberos mechanism are available. By default, or set to base, the GSS-API library from the base system is used. Can also be set to heimdal to use security/heimdal, or mit to use security/krb5. When the local Kerberos installation is not in LOCALBASE, set HEIMDAL_HOME (for heimdal) or KRB5_HOME (for krb5) to the location of the Kerberos installation. These variables are exported for the ports to use: GSSAPIBASEDIR GSSAPICPPFLAGS GSSAPIINCDIR GSSAPILDFLAGS GSSAPILIBDIR GSSAPILIBS GSSAPI_CONFIGURE_ARGS The flags option can be given alongside base, heimdal, or mit to automatically add GSSAPICPPFLAGS, GSSAPILDFLAGS, and GSSAPILIBS to CFLAGS, LDFLAGS, and LDADD, respectively. For example, use base,flags. The bootstrap option is a special prefix only for use by security/krb5 and security/heimdal. For example, use bootstrap,mit. Typical Use OPTIONS_SINGLE= GSSAPI OPTIONS_SINGLE_GSSAPI= GSSAPI_BASE GSSAPI_HEIMDAL GSSAPI_MIT GSSAPI_NONE GSSAPI_BASE_USES= gssapi GSSAPI_BASE_CONFIGURE_ON= --with-gssapi=${GSSAPIBASEDIR} ${GSSAPI_CONFIGURE_ARGS} GSSAPI_HEIMDAL_USES= gssapi:heimdal GSSAPI_HEIMDAL_CONFIGURE_ON= --with-gssapi=${GSSAPIBASEDIR} ${GSSAPI_CONFIGURE_ARGS} GSSAPI_MIT_USES= gssapi:mit GSSAPI_MIT_CONFIGURE_ON= --with-gssapi=${GSSAPIBASEDIR} ${GSSAPI_CONFIGURE_ARGS} GSSAPI_NONE_CONFIGURE_ON= --without-gssapi <literal>horde</literal> Possible arguments: (none) Add buildtime and runtime dependencies on devel/pear-channel-horde. Other Horde dependencies can be added with USE_HORDE_BUILD and USE_HORDE_RUN. See for more information. <literal>iconv</literal> Possible arguments: (none), lib, build, patch, translit, wchar_t Uses iconv functions, either from the port converters/libiconv as a build-time and run-time dependency, or from the base system on 10-CURRENT after a native iconv was committed in 254273. By default, with no arguments or with the lib argument, implies iconv with build-time and run-time dependencies. build implies a build-time dependency, and patch implies a patch-time dependency. If the port uses the WCHAR_T or //TRANSLIT iconv extensions, add the relevant arguments so that the correct iconv is used. For more information see . <literal>imake</literal> Possible arguments: (none), env, notall, noman Add devel/imake as a build-time dependency and run xmkmf -a during the configure stage. If the env argument is given, the configure target is not set. If the flag is a problem for the port, add the notall argument. If xmkmf does not generate a install.man target, add the noman argument. <literal>kmod</literal> Possible arguments: (none) Fills in the boilerplate for kernel module ports, currently: Add kld to CATEGORIES. Set SSP_UNSAFE. Set IGNORE if the kernel sources are not found in SRC_BASE. Define KMODDIR to /boot/modules by default, add it to PLIST_SUB and MAKE_ENV, and create it upon installation. If KMODDIR is set to /boot/kernel, it will be rewritten to /boot/modules. This prevents breaking packages when upgrading the kernel due to /boot/kernel being renamed to /boot/kernel.old in the process. Handle cross-referencing kernel modules upon installation and deinstallation, using @kld. 
<literal>lha</literal> Possible arguments: (none) Set EXTRACT_SUFX to .lzh <literal>libarchive</literal> Possible arguments: (none) Registers a dependency on archivers/libarchive. Any ports depending on libarchive must include USES=libarchive. <literal>libedit</literal> Possible arguments: (none) Registers a dependency on devel/libedit. Any ports depending on libedit must include USES=libedit. <literal>libtool</literal> Possible arguments: (none), keepla, build Patches libtool scripts. This must be added to all ports that use libtool. The keepla argument can be used to keep .la files. Some ports do not ship with their own copy of libtool and need a build time dependency on devel/libtool, use the :build argument to add such dependency. <literal>localbase</literal> Possible arguments: (none) Ensures that libraries from dependencies in LOCALBASE are used instead of the ones from the base system. Ports that depend on libraries that are also present in the base system should use this. It is also used internally by a few other USES. <literal>lua</literal> Possible arguments: (none), XY+, XY, build, run Adds a dependency on Lua. By default this is a library dependency, unless overridden by the build or run option. The default version is 5.2, unless set by the XY parameter (for example, 51 or 52+). <literal>makeinfo</literal> Possible arguments: (none) Add a build-time dependency on makeinfo if it is not present in the base system. <literal>makeself</literal> Possible arguments: (none) Indicates that the distribution files are makeself archives and sets the appropriate dependencies. <literal>metaport</literal> Possible arguments: (none) Sets the following variables to make it easier to create a metaport: MASTER_SITES, DISTFILES, EXTRACT_ONLY, NO_BUILD, NO_INSTALL, NO_MTREE, NO_ARCH. <literal>mono</literal> Possible arguments: (none) Adds a dependency on the Mono (currently only C#) framework by setting the appropriate dependencies. <literal>motif</literal> Possible arguments: (none) Uses x11-toolkits/open-motif as a library dependency. End users can set WANT_LESSTIF for the dependency to be on x11-toolkits/lesstif instead of x11-toolkits/open-motif. <literal>ncurses</literal> Possible arguments: (none), base, port Uses ncurses, and causes some useful variables to be set. <literal>ninja</literal> Possible arguments: (none) Uses ninja to build the port. End users can set NINJA_VERBOSE for verbose output. <literal>objc</literal> Possible arguments: (none) Add objective C dependencies (compiler, runtime library) if the base system does not support it. <literal>openal</literal> Possible arguments: al, soft (default), si, alut Uses OpenAL. The backend can be specified, with the software implementation as the default. The user can specify a preferred backend with WANT_OPENAL. Valid values for this knob are soft (default) and si. <literal>pathfix</literal> Possible arguments: (none) Look for Makefile.in and configure in the port's associated sources and fix common paths to make sure they respect the &os; hierarchy. If the port uses automake, set PATHFIX_MAKEFILEIN to Makefile.am if needed. <literal>pear</literal> Possible arguments: (none) Adds a dependency on devel/pear. It will setup default behavior for software using the PHP Extension and Application Repository. See for more information. <literal>perl5</literal> Possible arguments: (none) Depends on Perl. 
These variables can be set: PERL_VERSION: Full version of Perl to use, or the default if not set PERL_ARCH: Directory name of architecture dependent libraries, defaults to mach PERL_PORT: Name of the Perl port to be installed, the default is derived from PERL_VERSION SITE_PERL: Directory name for site specific Perl packages USE_PERL5: Phases in which to use Perl, can be extract, patch, build, install, or run. It can also be configure, modbuild, or modbuildtiny when Makefile.PL, Build.PL, or the Module::Build::Tiny flavor of Build.PL is required. It defaults to build run. <literal>pgsql</literal> Possible arguments: (none), X.Y, X.Y+, X.Y- Provide support for PostgreSQL. Maintainer can set version required. Minimum and maximum versions can be specified; for example, 9.0-, 8.4+. Add PostgreSQL component dependency, using WANT_PGSQL=component[:target], for example, WANT_PGSQL=server:configure pltcl plperl. For the full list use make -V _USE_PGSQL_DEP. <literal>pkgconfig</literal> Possible arguments: (none), build (default), run, both Uses devel/pkgconf. With no arguments or with the build argument, it implies pkg-config as a build-time dependency. run implies a run-time dependency and both implies both run-time and build-time dependencies. <literal>pure</literal> Possible arguments: (none), ffi Uses lang/pure. Largely used for building related pure ports. With the ffi argument, it implies devel/pure-ffi as a run-time dependency. <literal>python</literal> Possible arguments: (none), X.Y, X.Y+, -X.Y, X.Y-Z.A, build, run Uses Python. A supported version or version range can be specified. If Python is only needed at build or run time, it can be set as a build or run dependency with build or run. See for more information. <literal>qmail</literal> Possible arguments: (none), build, run, both, vars Uses mail/qmail. With the build argument, it implies qmail as a build-time dependency. run implies a run-time dependency. Using no argument or the both argument implies both run-time and build-time dependencies. vars will only set QMAIL variables for the port to use. <literal>qmake</literal> Possible arguments: (none), norecursive, outsource Uses QMake for configuring. For more information see . <literal>readline</literal> Possible arguments: (none), port Uses readline as a library dependency, and sets CPPFLAGS and LDFLAGS as necessary. If the port argument is used or if readline is not present in the base system, add a dependency on devel/readline. <literal>scons</literal> Possible arguments: (none) Provide support for the use of devel/scons. <literal>shared-mime-info</literal> Possible arguments: (none) Uses update-mime-database from misc/shared-mime-info. This macro will automatically add a post-install step in such a way that the port itself can still specify its own post-install step if needed. It also adds an @shared-mime-info entry to the plist. <literal>shebangfix</literal> Possible arguments: (none) A lot of software uses incorrect locations for script interpreters, most notably /usr/bin/perl and /bin/bash. The shebangfix macro fixes shebang lines in scripts listed in SHEBANG_FILES. The shebangfix macro is run from ${WRKSRC}, so SHEBANG_FILES can contain paths that are relative to ${WRKSRC}. It can also deal with absolute paths if files outside of ${WRKSRC} require patching. For example: USES= shebangfix SHEBANG_FILES= scripts/foobar.pl scripts/*.sh Currently Bash, Java, Ksh, Lua, Perl, PHP, Python, Ruby, Tcl, and Tk are supported by default. 
To support another interpreter, set SHEBANG_LANG, interp_OLD_CMD, and interp_CMD. For example: SHEBANG_LANG= lua lua_OLD_CMD= /usr/bin/lua lua_CMD= ${LOCALBASE}/bin/lua interp_OLD_CMD will contain multiple values. Any entry with spaces must be quoted. For example, if it was not already defined, the Ksh entry could be defined as: SHEBANG_LANG= ksh ksh_OLD_CMD= "/usr/bin/env ksh" /bin/ksh /usr/bin/ksh ksh_CMD= ${LOCALBASE}/bin/ksh Some software uses strange locations for an interpreter. For example, an application might expect Python to be located in /opt/bin/python2.7. The strange path to be replaced can be declared in the port Makefile: python_OLD_CMD= /opt/bin/python2.7 The fixing of shebangs is done during the patch phase. If scripts are created with incorrect shebangs during the build phase, the build process (for example, the configure script or the Makefiles) must be patched to generate the right shebangs. Correct paths for supported interpreters are available in interp_CMD. <literal>tar</literal> Possible arguments: (none), Z, bz2, bzip2, lzma, tbz, tbz2, tgz, txz, xz Set EXTRACT_SUFX to .tar, .tar.Z, .tar.bz2, .tar.bz2, .tar.lzma, .tbz, .tbz2, .tgz, .txz or .tar.xz respectively. <literal>tcl</literal> Possible arguments: PORT Add a dependency on Tcl. The PORT parameter can be either tcl or tk. Either a version or wrapper dependency can be appended using PORT:version or PORT:wrapper. The version can be empty, one or more exact version numbers (currently 84, 85, or 86), or a minimal version number (currently 84+, 85+ or 86+). A build- or run-time only dependency can be specified using PORT,build or PORT,run. After including bsd.port.pre.mk the port can inspect the results using these variables: TCL_VER: chosen major.minor version of Tcl TCLSH: full path of the Tcl interpreter TCL_LIBDIR: path of the Tcl libraries TCL_INCLUDEDIR: path of the Tcl C header files TK_VER: chosen major.minor version of Tk WISH: full path of the Tk interpreter TK_LIBDIR: path of the Tk libraries TK_INCLUDEDIR: path of the Tk C header files <literal>terminfo</literal> Possible arguments: (none) Adds @terminfo to the plist. Use when the port installs *.terminfo files in ${PREFIX}/share/misc. <literal>tk</literal> Same as arguments for tcl Small wrapper when using both Tcl and Tk. The same variables are returned as when using Tcl. <literal>twisted</literal> Possible arguments: (none), ARGS Add a dependency on twistedCore. The list of required components can be specified as a value of this variable. ARGS can be one of: build: add twistedCore or any specified component as build dependency. run: add twistedCore or any specified component as run dependency. Besides build and run, one or more other supported twisted components can be specified. Supported values are listed in Uses/twisted.mk. <literal>uidfix</literal> Possible arguments: (none) Changes some default behavior (mostly variables) of the build system to allow installing this port as a normal user. Try this in the port before adding NEED_ROOT=yes. <literal>uniquefiles</literal> Possible arguments: (none), dirs Make files or directories 'unique' by adding a prefix or suffix. If the dirs argument is used, the port needs a prefix (and only a prefix) based on UNIQUE_PREFIX for standard directories DOCSDIR, EXAMPLESDIR, DATADIR, WWWDIR, ETCDIR. These variables are available for ports: UNIQUE_PREFIX: The prefix to be used for directories and files. Default: ${PKGNAMEPREFIX}. UNIQUE_PREFIX_FILES: A list of files that need to be prefixed. Default: empty. 
UNIQUE_SUFFIX: The suffix to be used for files. Default: ${PKGNAMESUFFIX}. UNIQUE_SUFFIX_FILES: A list of files that need to be suffixed. Default: empty. <literal>webplugin</literal> Possible arguments: (none), ARGS Automatically create and remove symbolic links for each application that supports the webplugin framework. ARGS can be one of: gecko: support plug-ins based on Gecko native: support plug-ins for Gecko, Opera, and WebKit-GTK linux: support Linux plug-ins all (default, implicit): support all plug-in types (individual entries): support only the browsers listed These variables can be adjusted: WEBPLUGIN_FILES: No default, must be set manually. The plug-in files to install. WEBPLUGIN_DIR: The directory to install the plug-in files to, default PREFIX/lib/browser_plugins/WEBPLUGIN_NAME. Set this if the port installs plug-in files outside of the default directory to prevent broken symbolic links. WEBPLUGIN_NAME: The final directory to install the plug-in files into, default PKGBASE. <literal>xfce</literal> Possible arguments: (none), gtk3 Provide support for Xfce related ports. See for details. The gtk3 argument specifies that the port requires GTK3 support. It adds additional features provided by some core components, for example, x11/libxfce4menu and x11-wm/xfce4-panel. <literal>zip</literal> Possible arguments: (none), infozip Indicates that the distribution files use the ZIP compression algorithm. For files using the InfoZip algorithm the infozip argument must be passed to set the appropriate dependencies. <literal>zope</literal> Possible arguments: (none) Uses www/zope. Mostly used for building zope related ports. ZOPE_VERSION can be used by a port to indicate that a specific version of zope shall be used.
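As a closing illustration, the values described in this chapter can be freely combined in a single declaration, just as in the introductory examples. For instance, a hypothetical port distributed as an InfoZip archive that also needs GNU make and pkg-config could declare:

USES=		gmake pkgconfig zip:infozip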