diff --git a/en_US.ISO8859-1/books/handbook/geom/chapter.sgml b/en_US.ISO8859-1/books/handbook/geom/chapter.sgml index 1a5f542b18..c55294ea6f 100644 --- a/en_US.ISO8859-1/books/handbook/geom/chapter.sgml +++ b/en_US.ISO8859-1/books/handbook/geom/chapter.sgml @@ -1,548 +1,674 @@ Tom Rhodes Written by GEOM: Modular Disk Transformation Framework Synopsis GEOM GEOM Disk Framework GEOM This chapter covers the use of disks under the GEOM framework in &os;. This includes the major RAID control utilities which use the framework for configuration. This chapter will not go into an in-depth discussion of how GEOM handles or controls I/O, the underlying subsystem, or code. This information is provided through the &man.geom.4; manual page and its various SEE ALSO references. This chapter is also not a definitive guide to RAID configurations. Only GEOM-supported RAID classifications will be discussed. After reading this chapter, you will know: What type of RAID support is available through GEOM. How to use the base utilities to configure, maintain, and manipulate the various RAID levels. How to mirror, stripe, encrypt, and remotely connect disk devices through GEOM. How to troubleshoot disks attached to the GEOM framework. Before reading this chapter, you should: Understand how &os; treats disk devices (). Know how to configure and install a new &os; kernel (). GEOM Introduction GEOM permits access and control to classes — Master Boot Records, BSD labels, etc. — through the use of providers, or the special files in /dev. Supporting various software RAID configurations, GEOM will transparently provide access to the operating system and operating system utilities. Tom Rhodes Written by Murray Stokely RAID0 - Striping GEOM Striping Striping is a method used to combine several disk drives into a single volume. In many cases, this is done through the use of hardware controllers. The GEOM disk subsystem provides software support for RAID0, also known as disk striping.
In a RAID0 system, data is split into blocks that are written across all the drives in the array. Instead of having to wait on the system to write 256k to one disk, a RAID0 system can simultaneously write 64k to each of four different disks, offering superior I/O performance. This performance can be enhanced further by using multiple disk controllers. Each disk in a RAID0 stripe must be of the same size, since I/O requests are interleaved to read or write to multiple disks in parallel. Disk Striping Illustration Creating a stripe of unformatted ATA disks Load the geom_stripe module: &prompt.root; kldload geom_stripe Ensure that a suitable mount point exists. If this volume will become a root partition, then temporarily use another mount point such as /mnt: &prompt.root; mkdir /mnt Determine the device names for the disks which will be striped, and create the new stripe device. For example, to stripe two unused and unpartitioned ATA disks such as /dev/ad2 and /dev/ad3: &prompt.root; gstripe label -v st0 /dev/ad2 /dev/ad3 Write a standard label, also known as a partition table, on the new volume and install the default bootstrap code: &prompt.root; bsdlabel -wB /dev/stripe/st0 This process should have created two other devices in the /dev/stripe directory in addition to the st0 device, including st0a and st0c. At this point a file system may be created on the st0a device with the newfs utility: &prompt.root; newfs -U /dev/stripe/st0a Many numbers will glide across the screen, and after a few seconds, the process will be complete. The volume has been created and is ready to be mounted.
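The interleaving described at the start of this section can be sketched numerically. The snippet below is a hypothetical illustration only, not part of gstripe: it assumes a 64 KB stripe size and maps a logical byte offset on the striped volume to the member disk that holds it, using the two disks from the example above.

```shell
# Illustration only (not a gstripe feature): map a logical byte offset
# on a striped volume to a member disk index, assuming a 64 KB stripe
# across the two disks used above (/dev/ad2 = disk 0, /dev/ad3 = disk 1).
STRIPE=$((64 * 1024))
NDISKS=2

offset_to_disk() {
    # Stripe number, modulo the number of member disks.
    echo $(( ($1 / STRIPE) % NDISKS ))
}

offset_to_disk 0        # first 64 KB stripe  -> 0 (/dev/ad2)
offset_to_disk 65536    # second stripe       -> 1 (/dev/ad3)
offset_to_disk 131072   # third stripe        -> 0 (/dev/ad2 again)
```

Because consecutive stripes land on alternating disks, large sequential transfers keep both spindles busy at once, which is where the performance gain comes from.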
To manually mount the created disk stripe: &prompt.root; mount /dev/stripe/st0a /mnt To mount this striped file system automatically during the boot process, place the volume information in the /etc/fstab file: &prompt.root; echo "/dev/stripe/st0a /mnt ufs rw 2 2" \ >> /etc/fstab The geom_stripe module must also be automatically loaded during system initialization, by adding a line to /boot/loader.conf: &prompt.root; echo 'geom_stripe_load="YES"' >> /boot/loader.conf RAID1 - Mirroring GEOM Disk Mirroring Mirroring is a technology used by many corporations and home users to back up data without interruption. When a mirror exists, it simply means that diskB replicates diskA. Or, perhaps diskC+D replicates diskA+B. Regardless of the disk configuration, the important aspect is that information on one disk or partition is being replicated. Later, that information could be more easily restored, backed up without causing service or access interruption, and even be physically stored in a data safe. To begin, ensure the system has two disk drives of equal size; this exercise assumes they are direct access (&man.da.4;) SCSI disks. Begin by installing &os; on the first disk with only two partitions. One should be a swap partition, double the RAM size, with all remaining space devoted to the root (/) file system. It is possible to have separate partitions for other mount points; however, this will increase the difficulty level tenfold due to manual alteration of the &man.bsdlabel.8; and &man.fdisk.8; settings. Reboot and wait for the system to fully initialize. Once this process has completed, log in as the root user. Create the /dev/mirror/gm0 device and link it with /dev/da1: &prompt.root; gmirror label -vnb round-robin gm0 /dev/da1 The system should respond with: Metadata value stored on /dev/da1. Done.
Initialize GEOM; this will load the /boot/kernel/geom_mirror.ko kernel module: &prompt.root; gmirror load This command should have created the gm0 device node under the /dev/mirror directory. Install a generic fdisk label and boot code to the new gm0 device: &prompt.root; fdisk -vBI /dev/mirror/gm0 Now install generic bsdlabel information: &prompt.root; bsdlabel -wB /dev/mirror/gm0s1 If multiple slices and partitions exist, the flags for the previous two commands will require alteration. They must match the slice and partition size of the other disk. Use the &man.newfs.8; utility to construct a default UFS file system on the gm0s1a device node: &prompt.root; newfs -U /dev/mirror/gm0s1a This should have caused the system to spit out some information and a bunch of numbers. This is good. Examine the screen for any error messages and mount the device to the /mnt mount point: &prompt.root; mount /dev/mirror/gm0s1a /mnt Now move all data from the boot disk over to this new file system. This example uses the &man.dump.8; and &man.restore.8; commands; however, &man.dd.1; would also work with this scenario. &prompt.root; dump -L -0 -f- / |(cd /mnt && restore -r -v -f-) This must be done for each file system. Simply place the appropriate file system in the correct location when running the aforementioned command. Now edit the replicated /mnt/etc/fstab file and remove or comment out the swap entry. Note that commenting out the swap entry in fstab will most likely require you to re-establish a different way of enabling swap space. Please refer to for more information. Change the other file system information to use the new disk as shown in the following example: # Device Mountpoint FStype Options Dump Pass# #/dev/da0s2b none swap sw 0 0 /dev/mirror/gm0s1a / ufs rw 1 1 Now create a boot.config file on both the current and new root partitions.
This file will help the system BIOS boot the correct drive: &prompt.root; echo "1:da(1,a)/boot/loader" > /boot.config &prompt.root; echo "1:da(1,a)/boot/loader" > /mnt/boot.config We have placed it on both root partitions to ensure proper boot up. If for some reason the system cannot read from the new root partition, a failsafe is available. Ensure the geom_mirror.ko module will load on boot by running the following command: &prompt.root; echo 'geom_mirror_load="YES"' >> /mnt/boot/loader.conf Reboot the system: &prompt.root; shutdown -r now If all has gone well, the system should have booted from the gm0s1a device and a login prompt should be waiting. If something went wrong, review the troubleshooting section below. Now add the da0 disk to the gm0 device: &prompt.root; gmirror configure -a gm0 &prompt.root; gmirror insert gm0 /dev/da0 The -a flag tells &man.gmirror.8; to use automatic synchronization; i.e., mirror the disk writes automatically. The manual page explains how to rebuild and replace disks, although it uses data in place of gm0. Troubleshooting System refuses to boot If the system boots up to a prompt similar to: ffs_mountroot: can't find rootvp Root mount failed: 6 mountroot> Reboot the machine using the power or reset button. At the boot menu, select option six (6). This will drop the system to a &man.loader.8; prompt. Load the kernel module manually: OK? load geom_mirror OK? boot If this works, then for whatever reason the module was not being loaded properly at boot. Place: options GEOM_MIRROR in the kernel configuration file, rebuild, and reinstall. That should remedy this issue. GEOM Gate Network Devices GEOM supports the remote use of devices, such as disks, CD-ROMs, files, etc., through the use of the gate utilities. This is similar to NFS. To begin, an exports file must be created. This file specifies who is permitted to access the exported resources and what level of access they are offered.
For example, to export the fourth slice on the first SCSI disk, the following /etc/gg.exports is more than adequate: 192.168.1.0/24 RW /dev/da0s4d It will allow all hosts inside the private network to access the file system on the da0s4d partition. To export this device, ensure it is not currently mounted, and start the &man.ggated.8; server daemon: &prompt.root; ggated Now to mount the device on the client machine, issue the following commands: &prompt.root; ggatec create -o rw 192.168.1.1 /dev/da0s4d ggate0 &prompt.root; mount /dev/ggate0 /mnt From here on, the device may be accessed through the /mnt mount point. It should be pointed out that this will fail if the device is currently mounted on either the server machine or any other machine on the network. When the device is no longer needed, it may be safely unmounted with the &man.umount.8; command, similar to any other disk device. + + + Labeling Disk Devices + + + GEOM + + + Disk Labels + + + During system initialization, the &os; kernel will create + device nodes as devices are found. This method of probing for + devices raises some issues; for instance, what if a new disk + device is added via USB? It is very likely + that a flash device may be handed the device name of + da0 and the original + da0 shifted to + da1. This will cause issues mounting + file systems if they are listed in + /etc/fstab; effectively, this may also + prevent the system from booting. + + One solution to this issue is to chain the + SCSI devices in order so a new device added to + the SCSI card will be issued unused device + numbers. But what about USB devices which may + replace the primary SCSI disk? This happens + because USB devices are usually + probed before the SCSI card. One solution + is to only insert these devices after the system has been + booted. Another method could be to use only a single + ATA drive and never list the + SCSI devices in + /etc/fstab. + + A better solution is available.
By using the + glabel utility, an administrator or user may + label their disk devices and use these labels in + /etc/fstab. Because + glabel stores the label in the last sector of + a given provider, the label will remain persistent across reboots. + By using this label as a device, the file system may always be + mounted regardless of what device node it is accessed + through. + + + It goes without saying that, for this purpose, the label must be + permanent. The glabel utility may be used to create both + transient and permanent labels. Only the permanent label will + remain consistent across reboots. See the &man.glabel.8; + manual page for more information on the differences between + labels. + + + + Label Types and Examples + + There are two types of labels: a generic label and a + file system label. The difference between them is that + file system labels are detected automatically and + remain persistent across reboots. + These labels are given a special directory in + /dev, which will be named + based on their file system type. For example, + UFS2 file system labels will be created in + the /dev/ufs2 + directory. + + A generic label will go away with the next reboot. These + labels will be created in the + /dev/label directory and + are perfect for experimentation. + + + + Permanent labels may be placed on the file system using the + tunefs or newfs + utilities. To create a permanent label for a + UFS2 file system without destroying any + data, issue the following command: + + &prompt.root; tunefs -L home /dev/da3 + + + If the file system is full, this may cause data + corruption; however, if the file system is full then the + main goal should be removing stale files and not adding + labels. + + + A label should now exist in + /dev/ufs2, which may be + added to /etc/fstab: + + /dev/ufs2/home /home ufs rw 2 2 + + + The file system must not be mounted while attempting + to run tunefs.
+ + + Now the file system may be mounted as normal: + + &prompt.root; mount /home + + The following command can be used to destroy the + label: + + &prompt.root; glabel destroy home + + From this point on, so long as the + geom_label.ko kernel module is loaded at + boot with /boot/loader.conf or the + GEOM_LABEL kernel option is present, + the device node may change without any ill effect on the + system. + + File systems may also be created with a default label + by using the -L flag with + newfs. See the &man.newfs.8; manual page + for more information. + +
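To tie the labeling examples together, the fstab entry shown earlier can be generated mechanically from a label name. The helper function below is hypothetical, not a glabel command; it simply follows the /dev/ufs2 path layout and fstab field order used in this section.

```shell
# Hypothetical helper (not part of glabel): emit the fstab line for a
# permanent UFS2 label, following the /dev/ufs2 path layout described
# above.  Fields: device, mount point, type, options, dump, pass.
fstab_line_for_label() {
    # $1 = label name, $2 = mount point
    printf '/dev/ufs2/%s %s ufs rw 2 2\n' "$1" "$2"
}

fstab_line_for_label home /home
# -> /dev/ufs2/home /home ufs rw 2 2
```

Because the entry references the label rather than a raw device node such as /dev/da3, it keeps working even when device probe order changes.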