diff --git a/en_US.ISO8859-1/books/handbook/vinum/chapter.sgml b/en_US.ISO8859-1/books/handbook/vinum/chapter.sgml index 4cbab40d9f..73e709555f 100644 --- a/en_US.ISO8859-1/books/handbook/vinum/chapter.sgml +++ b/en_US.ISO8859-1/books/handbook/vinum/chapter.sgml @@ -1,912 +1,912 @@ The Vinum Volume Manager Synopsis No matter what disks you have, there will always be limitations: They can be too small. They can be too slow. They can be too unreliable. Greg Lehey - Originally contributed by + Originally written by Disks are too small Vinum Volume Manager Vinum is a so-called Volume Manager, a virtual disk driver that addresses these three problems. Let's look at them in more detail. Various solutions to these problems have been proposed and implemented: Disks are getting bigger, but so are data storage requirements. Often you'll find you want a file system that is bigger than the disks you have available. Admittedly, this problem is not as acute as it was ten years ago, but it still exists. Some systems have solved this by creating an abstract device which stores its data on a number of disks. Access bottlenecks Modern systems frequently need to access data in a highly concurrent manner. For example, large FTP or HTTP servers can maintain thousands of concurrent sessions and have multiple 100 Mbit/s connections to the outside world, well beyond the sustained transfer rate of most disks. Current disk drives can transfer data sequentially at up to 30 MB/s, but this value is of little importance in an environment where many independent processes access a drive, and where they may achieve only a fraction of these values. In such cases it's more interesting to view the problem from the viewpoint of the disk subsystem: the important parameter is the load that a transfer places on the subsystem, in other words the time for which a transfer occupies the drives involved in the transfer. In any disk transfer, the drive must first position the heads, wait for the first sector to pass under the read head, and then perform the transfer. These actions can be considered to be atomic: it doesn't make any sense to interrupt them. Consider a typical transfer of about 10 kB: the current generation of high-performance disks can position the heads in an average of 6 ms. The fastest drives spin at 10,000 rpm, so the average rotational latency (half a revolution) is 3 ms. At 30 MB/s, the transfer itself takes about 350 μs, almost nothing compared to the positioning time. In such a case, the transfer occupies the drive for about 9.35 ms in total, so the effective transfer rate drops to a little over 1 MB/s and is clearly highly dependent on the transfer size. The traditional and obvious solution to this bottleneck is more spindles: rather than using one large disk, use several smaller disks with the same aggregate storage space. Each disk is capable of positioning and transferring independently, so the effective throughput increases by a factor close to the number of disks used. The exact throughput improvement is, of course, smaller than the number of disks involved: although each drive is capable of transferring in parallel, there is no way to ensure that the requests are evenly distributed across the drives. Inevitably the load on one drive will be higher than on another. concatenation Vinum Vinum concatenation The evenness of the load on the disks is strongly dependent on the way the data is shared across the drives.
In the following discussion, it's convenient to think of the disk storage as a large number of data sectors which are addressable by number, rather like the pages in a book. The most obvious method is to divide the virtual disk into groups of consecutive sectors the size of the individual physical disks and store them in this manner, rather like taking a large book and tearing it into smaller sections. This method is called concatenation and has the advantage that the disks are not required to have any specific size relationships. It works well when the access to the virtual disk is spread evenly about its address space. When access is concentrated on a smaller area, the improvement is less marked. The following figure illustrates the sequence in which storage units are allocated in a concatenated organization.
Concatenated organization
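As a taste of how such a layout is expressed in practice, here is a minimal sketch of a Vinum configuration for a concatenated volume spanning two drives of different sizes. The configuration syntax and the objects it names are explained later in this chapter, and the drive names and the /dev/da1h and /dev/da2h devices are purely illustrative:

drive small device /dev/da1h
drive large device /dev/da2h
volume concat
  plex org concat
    sd length 1024m drive small
    sd length 2048m drive large

Because a concatenated plex simply uses the address space of each subdisk in turn, the subdisks need not be the same size; the resulting volume is as large as the sum of its subdisks.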
striping Vinum Vinum striping An alternative mapping is to divide the address space into smaller, equal-sized components and store them sequentially on different devices. For example, the first 256 sectors may be stored on the first disk, the next 256 sectors on the next disk and so on. After filling the last disk, the process repeats until the disks are full. This mapping is called striping or RAID-0. RAID Redundant Array of Inexpensive Disks (RAID stands for Redundant Array of Inexpensive Disks and offers various forms of fault tolerance.) The latter term is somewhat misleading: striping provides no redundancy. Striping requires somewhat more effort to locate the data, and it can cause additional I/O load where a transfer is spread over multiple disks, but it can also provide a more constant load across the disks. The following figure illustrates the sequence in which storage units are allocated in a striped organization.
Striped organization
Data integrity The final problem with current disks is that they are unreliable. Although disk drive reliability has increased tremendously over the last few years, they are still the most likely core component of a server to fail. When they do, the results can be catastrophic: replacing a failed disk drive and restoring data to it can take days. mirroring Vinum Vinum mirroring RAID level 1 RAID-1 The traditional way to approach this problem has been mirroring, keeping two copies of the data on different physical hardware. Since the advent of the RAID levels, this technique has also been called RAID level 1 or RAID-1. Any write to the volume writes to both locations; a read can be satisfied from either, so if one drive fails, the data is still available on the other drive. Mirroring has two problems: The price. It requires twice as much disk storage as a non-redundant solution. The performance impact. Writes must be performed to both drives, so they take up twice the bandwidth of a non-mirrored volume. Reads do not suffer a performance penalty; they can even appear faster. RAID-5 An alternative solution is parity, implemented in the RAID levels 2, 3, 4 and 5. Of these, RAID-5 is the most interesting. As implemented in Vinum, it is a variant on a striped organization which dedicates one block of each stripe to parity of the other blocks. A RAID-5 plex is thus similar to a striped plex, except that each stripe includes a parity block. As required by RAID-5, the location of this parity block changes from one stripe to the next. The numbers in the data blocks indicate the relative block numbers.
RAID-5 organization
Compared to mirroring, RAID-5 has the advantage of requiring significantly less storage space. Read access is similar to that of striped organizations, but write access is significantly slower, approximately 25% of the read performance. If one drive fails, the array can continue to operate in degraded mode: a read from one of the remaining accessible drives continues normally, but a read from the failed drive must be recalculated from the corresponding blocks of all the remaining drives. Since the parity block is the exclusive OR of the data blocks in its stripe, any single missing block can be reconstructed from the others.
Vinum objects In order to address these problems, Vinum implements a four-level hierarchy of objects: The most visible object is the virtual disk, called a volume. Volumes have essentially the same properties as a UNIX™ disk drive, though there are some minor differences. For one, they have no size limitations. Volumes are composed of plexes, each of which represents the total address space of a volume. This level in the hierarchy thus provides redundancy. Think of plexes as individual disks in a mirrored array, each containing the same data. Since Vinum exists within the UNIX™ disk storage framework, it would be possible to use UNIX™ partitions as the building block for multi-disk plexes, but in fact this turns out to be too inflexible: UNIX™ disks can have only a limited number of partitions. Instead, Vinum subdivides a single UNIX™ partition (the drive) into contiguous areas called subdisks, which it uses as building blocks for plexes. Subdisks reside on Vinum drives, currently UNIX™ partitions. Vinum drives can contain any number of subdisks. With the exception of a small area at the beginning of the drive, which is used for storing configuration and state information, the entire drive is available for data storage. The following sections describe the way these objects provide the functionality required of Vinum. Volume size considerations Plexes can include multiple subdisks spread over all drives in the Vinum configuration. As a result, the size of an individual drive does not limit the size of a plex, and thus of a volume. Redundant data storage Vinum implements mirroring by attaching multiple plexes to a volume. Each plex is a representation of the data in a volume. A volume may contain between one and eight plexes. Although a plex represents the complete data of a volume, it is possible for parts of the representation to be physically missing, either by design (by not defining a subdisk for parts of the plex) or by accident (as a result of the failure of a drive). As long as at least one plex can provide the data for the complete address range of the volume, the volume is fully functional. Performance issues Vinum implements both concatenation and striping at the plex level: A concatenated plex uses the address space of each subdisk in turn. A striped plex stripes the data across each subdisk. The subdisks must all have the same size, and there must be at least two subdisks in order to distinguish a striped plex from a concatenated plex. Which plex organization? The version of Vinum supplied with FreeBSD &rel.current; implements two kinds of plex: Concatenated plexes are the most flexible: they can contain any number of subdisks, and the subdisks may be of different length. The plex may be extended by adding additional subdisks. They require less CPU time than striped plexes, though the difference in CPU overhead is not measurable. On the other hand, they are most susceptible to hot spots, where one disk is very active and others are idle. The greatest advantage of striped (RAID-0) plexes is that they reduce hot spots: by choosing an optimum-sized stripe (about 256 kB), you can even out the load on the component drives. The disadvantages of this approach are (fractionally) more complex code and restrictions on subdisks: they must all be the same size, and extending a plex by adding new subdisks is so complicated that Vinum currently does not implement it.
Vinum imposes an additional, trivial restriction: a striped plex must have at least two subdisks, since otherwise it is indistinguishable from a concatenated plex. The following table summarizes the advantages and disadvantages of each plex organization.

Vinum Plex organizations

  Plex type      Minimum subdisks   Can add subdisks   Must be equal size   Application
  concatenated   1                  yes                no                   Large data storage with maximum placement flexibility and moderate performance
  striped        2                  no                 yes                  High performance in combination with highly concurrent access
Some examples Vinum maintains a configuration database which describes the objects known to an individual system. Initially, the user creates the configuration database from one or more configuration files with the aid of the &man.vinum.8; utility program. Vinum stores a copy of its configuration database on each disk slice (which Vinum calls a device) under its control. This database is updated on each state change, so that a restart accurately restores the state of each Vinum object. The configuration file The configuration file describes individual Vinum objects. The definition of a simple volume might be:

drive a device /dev/da3h
volume myvol
  plex org concat
    sd length 512m drive a

This file describes four Vinum objects: The drive line describes a disk partition (drive) and its location relative to the underlying hardware. It is given the symbolic name a. This separation of the symbolic names from the device names allows disks to be moved from one location to another without confusion. The volume line describes a volume. The only required attribute is the name, in this case myvol. The plex line defines a plex. The only required parameter is the organization, in this case concat. No name is necessary: the system automatically generates a name from the volume name by adding the suffix .px, where x is the number of the plex in the volume. Thus this plex will be called myvol.p0. The sd line describes a subdisk. The minimum specifications are the name of a drive on which to store it, and the length of the subdisk. As with plexes, no name is necessary: the system automatically assigns names derived from the plex name by adding the suffix .sx, where x is the number of the subdisk in the plex. Thus Vinum gives this subdisk the name myvol.p0.s0. After processing this file, &man.vinum.8; produces the following output:

&prompt.root; vinum -> create config1
Configuration summary
Drives:         1 (4 configured)
Volumes:        1 (4 configured)
Plexes:         1 (8 configured)
Subdisks:       1 (16 configured)

D a             State: up       Device /dev/da3h        Avail: 2061/2573 MB (80%)

V myvol         State: up       Plexes:       1 Size:        512 MB

P myvol.p0    C State: up       Subdisks:     1 Size:        512 MB

S myvol.p0.s0   State: up       PO:        0  B Size:        512 MB

This output shows the brief listing format of &man.vinum.8;. It is represented graphically in the following figure.
A simple Vinum volume
This figure, and the ones which follow, represent a volume, which contains the plexes, which in turn contain the subdisks. In this trivial example, the volume contains one plex, and the plex contains one subdisk. This particular volume has no specific advantage over a conventional disk partition. It contains a single plex, so it is not redundant. The plex contains a single subdisk, so there is no difference in storage allocation from a conventional disk partition. The following sections illustrate various more interesting configuration methods.
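Before moving on to those, note that such a volume is used much like any other disk device: you create a file system on it and mount it. A hedged sketch of doing so (the /mnt mount point is just an example, and the -v option to &man.newfs.8; is explained in the section on creating file systems below) might be:

&prompt.root; newfs -v /dev/vinum/myvol
&prompt.root; mount /dev/vinum/myvol /mnt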
Increased resilience: mirroring The resilience of a volume can be increased by mirroring. When laying out a mirrored volume, it is important to ensure that the subdisks of each plex are on different drives, so that a drive failure will not take down both plexes. The following configuration mirrors a volume:

drive b device /dev/da4h
volume mirror
  plex org concat
    sd length 512m drive a
  plex org concat
    sd length 512m drive b

In this example, it was not necessary to specify a definition of drive a again, since Vinum keeps track of all objects in its configuration database. After processing this definition, the configuration looks like:

Drives:         2 (4 configured)
Volumes:        2 (4 configured)
Plexes:         3 (8 configured)
Subdisks:       3 (16 configured)

D a             State: up       Device /dev/da3h        Avail: 1549/2573 MB (60%)
D b             State: up       Device /dev/da4h        Avail: 2061/2573 MB (80%)

V myvol         State: up       Plexes:       1 Size:        512 MB
V mirror        State: up       Plexes:       2 Size:        512 MB

P myvol.p0    C State: up       Subdisks:     1 Size:        512 MB
P mirror.p0   C State: up       Subdisks:     1 Size:        512 MB
P mirror.p1   C State: initializing     Subdisks:     1 Size:        512 MB

S myvol.p0.s0   State: up       PO:        0  B Size:        512 MB
S mirror.p0.s0  State: up       PO:        0  B Size:        512 MB
S mirror.p1.s0  State: empty    PO:        0  B Size:        512 MB

The following figure shows the structure graphically.
A mirrored Vinum volume
In this example, each plex contains the full 512 MB of address space. As in the previous example, each plex contains only a single subdisk.
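Note that in the listing above the second plex, mirror.p1, is still initializing and its subdisk is empty: it does not yet hold a copy of the data. A hedged sketch of bringing it up to date (this assumes that &man.vinum.8; accepts a plex name as the argument to its start command, which revives the plex by copying the data from the other plex) might be:

&prompt.root; vinum start mirror.p1

Once the copy has completed, both plexes are up and either can satisfy reads.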
Optimizing performance The mirrored volume in the previous example is more resistant to failure than an unmirrored volume, but its performance is less: each write to the volume requires a write to both drives, using up a greater proportion of the total disk bandwidth. Performance considerations demand a different approach: instead of mirroring, the data is striped across as many disk drives as possible. The following configuration shows a volume with a plex striped across four disk drives:

drive c device /dev/da5h
drive d device /dev/da6h
volume striped
  plex org striped 512k
    sd length 128m drive a
    sd length 128m drive b
    sd length 128m drive c
    sd length 128m drive d

As before, it is not necessary to define the drives which are already known to Vinum. After processing this definition, the configuration looks like:

Drives:         4 (4 configured)
Volumes:        3 (4 configured)
Plexes:         4 (8 configured)
Subdisks:       7 (16 configured)

D a             State: up       Device /dev/da3h        Avail: 1421/2573 MB (55%)
D b             State: up       Device /dev/da4h        Avail: 1933/2573 MB (75%)
D c             State: up       Device /dev/da5h        Avail: 2445/2573 MB (95%)
D d             State: up       Device /dev/da6h        Avail: 2445/2573 MB (95%)

V myvol         State: up       Plexes:       1 Size:        512 MB
V mirror        State: up       Plexes:       2 Size:        512 MB
V striped       State: up       Plexes:       1 Size:        512 MB

P myvol.p0    C State: up       Subdisks:     1 Size:        512 MB
P mirror.p0   C State: up       Subdisks:     1 Size:        512 MB
P mirror.p1   C State: initializing     Subdisks:     1 Size:        512 MB
P striped.p0    State: up       Subdisks:     4 Size:        512 MB

S myvol.p0.s0   State: up       PO:        0  B Size:        512 MB
S mirror.p0.s0  State: up       PO:        0  B Size:        512 MB
S mirror.p1.s0  State: empty    PO:        0  B Size:        512 MB
S striped.p0.s0 State: up       PO:        0  B Size:        128 MB
S striped.p0.s1 State: up       PO:      512 kB Size:        128 MB
S striped.p0.s2 State: up       PO:     1024 kB Size:        128 MB
S striped.p0.s3 State: up       PO:     1536 kB Size:        128 MB
A striped Vinum volume
This volume is represented in the preceding figure. The darkness of the stripes indicates the position within the plex address space: the lightest stripes come first, the darkest last.
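Vinum can also organize a plex as RAID-5, trading some write performance for redundancy without the full cost of mirroring. No RAID-5 configuration appears in the examples above, although the device listing later in this chapter refers to a raid5 volume. The following is a hedged sketch of what such a configuration might look like; the fifth drive e and its device /dev/da7h are assumptions, a RAID-5 plex needs at least three subdisks, and the equivalent of one subdisk is consumed by parity:

drive e device /dev/da7h
volume raid5
  plex org raid5 512k
    sd length 128m drive a
    sd length 128m drive b
    sd length 128m drive c
    sd length 128m drive d
    sd length 128m drive e

With five 128 MB subdisks, the resulting volume stores 512 MB of data; the remaining subdisk's worth of space holds the parity blocks, distributed across all the drives.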
Resilience and performance With sufficient hardware, it is possible to build volumes which show both increased resilience and increased performance compared to standard UNIX™ partitions. A typical configuration file might be:

volume raid10
  plex org striped 512k
    sd length 102480k drive a
    sd length 102480k drive b
    sd length 102480k drive c
    sd length 102480k drive d
    sd length 102480k drive e
  plex org striped 512k
    sd length 102480k drive c
    sd length 102480k drive d
    sd length 102480k drive e
    sd length 102480k drive a
    sd length 102480k drive b

The subdisks of the second plex are offset by two drives from those of the first plex: this helps ensure that writes do not go to the same subdisks even if a transfer goes over two drives. The following figure represents the structure of this volume.
A mirrored, striped Vinum volume
Object naming As described above, Vinum assigns default names to plexes and subdisks, although they may be overridden. Overriding the default names is not recommended: experience with the VERITAS® volume manager, which allows arbitrary naming of objects, has shown that this flexibility does not bring a significant advantage, and it can cause confusion. Names may contain any non-blank character, but it is recommended to restrict them to letters, digits and the underscore character. The names of volumes, plexes and subdisks may be up to 64 characters long, and the names of drives may be up to 32 characters long. /dev/vinum Vinum objects are assigned device nodes in the hierarchy /dev/vinum. The configuration shown above would cause Vinum to create the following device nodes: The control devices /dev/vinum/control and /dev/vinum/controld, which are used by &man.vinum.8; and the Vinum daemon respectively. Block and character device entries for each volume. These are the main devices used by Vinum. The block device names are the name of the volume, while the character device names follow the BSD tradition of prepending the letter r to the name. Thus the configuration above would include the block devices /dev/vinum/myvol, /dev/vinum/mirror, /dev/vinum/striped, /dev/vinum/raid5 and /dev/vinum/raid10, and the character devices /dev/vinum/rmyvol, /dev/vinum/rmirror, /dev/vinum/rstriped, /dev/vinum/rraid5 and /dev/vinum/rraid10. There is obviously a problem here: it is possible to have two volumes called r and rr, but there will be a conflict creating the device node /dev/vinum/rr: is it a character device for volume r or a block device for volume rr? Currently Vinum does not address this conflict: the first-defined volume will get the name. A directory /dev/vinum/drive with entries for each drive. These entries are in fact symbolic links to the corresponding disk nodes. A directory /dev/vinum/volume with entries for each volume. It contains subdirectories for each plex, which in turn contain subdirectories for their component subdisks. The directories /dev/vinum/plex, /dev/vinum/sd and /dev/vinum/rsd, which contain block device nodes for each plex, and block and character device nodes respectively for each subdisk.
For example, consider the following configuration file:

drive drive1 device /dev/sd1h
drive drive2 device /dev/sd2h
drive drive3 device /dev/sd3h
drive drive4 device /dev/sd4h
volume s64 setupstate
  plex org striped 64k
    sd length 100m drive drive1
    sd length 100m drive drive2
    sd length 100m drive drive3
    sd length 100m drive drive4

After processing this file, &man.vinum.8; creates the following structure in /dev/vinum:

brwx------  1 root  wheel   25, 0x40000001 Apr 13 16:46 Control
brwx------  1 root  wheel   25, 0x40000002 Apr 13 16:46 control
brwx------  1 root  wheel   25, 0x40000000 Apr 13 16:46 controld
drwxr-xr-x  2 root  wheel        512 Apr 13 16:46 drive
drwxr-xr-x  2 root  wheel        512 Apr 13 16:46 plex
crwxr-xr--  1 root  wheel   91,   2 Apr 13 16:46 rs64
drwxr-xr-x  2 root  wheel        512 Apr 13 16:46 rsd
drwxr-xr-x  2 root  wheel        512 Apr 13 16:46 rvol
brwxr-xr--  1 root  wheel   25,   2 Apr 13 16:46 s64
drwxr-xr-x  2 root  wheel        512 Apr 13 16:46 sd
drwxr-xr-x  3 root  wheel        512 Apr 13 16:46 vol

/dev/vinum/drive:
total 0
lrwxr-xr-x  1 root  wheel  9 Apr 13 16:46 drive1 -> /dev/sd1h
lrwxr-xr-x  1 root  wheel  9 Apr 13 16:46 drive2 -> /dev/sd2h
lrwxr-xr-x  1 root  wheel  9 Apr 13 16:46 drive3 -> /dev/sd3h
lrwxr-xr-x  1 root  wheel  9 Apr 13 16:46 drive4 -> /dev/sd4h

/dev/vinum/plex:
total 0
brwxr-xr--  1 root  wheel   25, 0x10000002 Apr 13 16:46 s64.p0

/dev/vinum/rsd:
total 0
crwxr-xr--  1 root  wheel   91, 0x20000002 Apr 13 16:46 s64.p0.s0
crwxr-xr--  1 root  wheel   91, 0x20100002 Apr 13 16:46 s64.p0.s1
crwxr-xr--  1 root  wheel   91, 0x20200002 Apr 13 16:46 s64.p0.s2
crwxr-xr--  1 root  wheel   91, 0x20300002 Apr 13 16:46 s64.p0.s3

/dev/vinum/rvol:
total 0
crwxr-xr--  1 root  wheel   91,   2 Apr 13 16:46 s64

/dev/vinum/sd:
total 0
brwxr-xr--  1 root  wheel   25, 0x20000002 Apr 13 16:46 s64.p0.s0
brwxr-xr--  1 root  wheel   25, 0x20100002 Apr 13 16:46 s64.p0.s1
brwxr-xr--  1 root  wheel   25, 0x20200002 Apr 13 16:46 s64.p0.s2
brwxr-xr--  1 root  wheel   25, 0x20300002 Apr 13 16:46 s64.p0.s3

/dev/vinum/vol:
total 1
brwxr-xr--  1 root  wheel   25,   2 Apr 13 16:46 s64
drwxr-xr-x  3 root  wheel        512 Apr 13 16:46 s64.plex

/dev/vinum/vol/s64.plex:
total 1
brwxr-xr--  1 root  wheel   25, 0x10000002 Apr 13 16:46 s64.p0
drwxr-xr-x  2 root  wheel        512 Apr 13 16:46 s64.p0.sd

/dev/vinum/vol/s64.plex/s64.p0.sd:
total 0
brwxr-xr--  1 root  wheel   25, 0x20000002 Apr 13 16:46 s64.p0.s0
brwxr-xr--  1 root  wheel   25, 0x20100002 Apr 13 16:46 s64.p0.s1
brwxr-xr--  1 root  wheel   25, 0x20200002 Apr 13 16:46 s64.p0.s2
brwxr-xr--  1 root  wheel   25, 0x20300002 Apr 13 16:46 s64.p0.s3

Although it is recommended that plexes and subdisks should not be allocated specific names, Vinum drives must be named. This makes it possible to move a drive to a different location and still recognize it automatically. Drive names may be up to 32 characters long. Creating file systems Volumes appear to the system to be identical to disks, with one exception. Unlike UNIX™ drives, Vinum does not partition volumes, which thus do not contain a partition table. This has required modification to some disk utilities, notably newfs, which previously tried to interpret the last letter of a Vinum volume name as a partition identifier. For example, a disk drive may have a name like /dev/wd0a or /dev/da2h. These names represent the first partition (a) on the first (0) IDE disk (wd) and the eighth partition (h) on the third (2) SCSI disk (da) respectively. By contrast, a Vinum volume might be called /dev/vinum/concat, a name which has no relationship with a partition name.
Normally, &man.newfs.8; interprets the name of the disk and complains if it cannot understand it. For example:

&prompt.root; newfs /dev/vinum/concat
newfs: /dev/vinum/concat: can't figure out file system partition

In order to create a file system on this volume, use the -v option to &man.newfs.8;:

&prompt.root; newfs -v /dev/vinum/concat

Configuring Vinum The GENERIC kernel does not contain Vinum. It's possible to build a special kernel which includes Vinum, but this is not recommended. The standard way to start Vinum is as a kld. You don't even need to use &man.kldload.8; for Vinum: when you start &man.vinum.8;, it checks whether the module has been loaded, and if it isn't, it loads it automatically. Startup Vinum stores configuration information on the disk slices in essentially the same form as in the configuration files. When reading from the configuration database, Vinum recognizes a number of keywords which are not allowed in the configuration files. For example, a disk configuration might contain the following text:

volume myvol state up
volume bigraid state down
plex name myvol.p0 state up org concat vol myvol
plex name myvol.p1 state up org concat vol myvol
plex name myvol.p2 state init org striped 512b vol myvol
plex name bigraid.p0 state initializing org raid5 512b vol bigraid
sd name myvol.p0.s0 drive a plex myvol.p0 state up len 1048576b driveoffset 265b plexoffset 0b
sd name myvol.p0.s1 drive b plex myvol.p0 state up len 1048576b driveoffset 265b plexoffset 1048576b
sd name myvol.p1.s0 drive c plex myvol.p1 state up len 1048576b driveoffset 265b plexoffset 0b
sd name myvol.p1.s1 drive d plex myvol.p1 state up len 1048576b driveoffset 265b plexoffset 1048576b
sd name myvol.p2.s0 drive a plex myvol.p2 state init len 524288b driveoffset 1048841b plexoffset 0b
sd name myvol.p2.s1 drive b plex myvol.p2 state init len 524288b driveoffset 1048841b plexoffset 524288b
sd name myvol.p2.s2 drive c plex myvol.p2 state init len 524288b driveoffset 1048841b plexoffset 1048576b
sd name myvol.p2.s3 drive d plex myvol.p2 state init len 524288b driveoffset 1048841b plexoffset 1572864b
sd name bigraid.p0.s0 drive a plex bigraid.p0 state initializing len 4194304b driveoffset 1573129b plexoffset 0b
sd name bigraid.p0.s1 drive b plex bigraid.p0 state initializing len 4194304b driveoffset 1573129b plexoffset 4194304b
sd name bigraid.p0.s2 drive c plex bigraid.p0 state initializing len 4194304b driveoffset 1573129b plexoffset 8388608b
sd name bigraid.p0.s3 drive d plex bigraid.p0 state initializing len 4194304b driveoffset 1573129b plexoffset 12582912b
sd name bigraid.p0.s4 drive e plex bigraid.p0 state initializing len 4194304b driveoffset 1573129b plexoffset 16777216b

The obvious differences here are the presence of explicit location information and naming (both of which are also allowed, but discouraged, for use by the user) and the information on the states (which are not available to the user). Vinum does not store information about drives in the configuration information: it finds the drives by scanning the configured disk drives for partitions with a Vinum label. This enables Vinum to identify drives correctly even if they have been assigned different UNIX™ drive IDs. Automatic startup In order to start Vinum automatically when you boot the system, ensure that you have the following line in your /etc/rc.conf:

start_vinum="YES"		# set to YES to start vinum

If you don't have a file /etc/rc.conf, create one with this content.
This will cause the system to load the Vinum kld at startup, and to start any objects mentioned in the configuration. This is done before mounting file systems, so it's possible to automatically &man.fsck.8; and mount file systems on Vinum volumes. When you start Vinum with the vinum start command, Vinum reads the configuration database from one of the Vinum drives. Under normal circumstances, each drive contains an identical copy of the configuration database, so it does not matter which drive is read. After a crash, however, Vinum must determine which drive was updated most recently and read the configuration from this drive. It then updates the configuration if necessary from progressively older drives.
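After startup, it is worth checking that all drives, volumes, plexes and subdisks have come up in the expected states. A hedged sketch (the list command of &man.vinum.8; prints the current configuration in the same format as the listings shown earlier in this chapter; the exact output may vary between versions) is:

&prompt.root; vinum list

Any object which is not in the up state after startup points to a problem, such as a failed or missing drive, that should be investigated before the corresponding volumes are used.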