Index: head/lib/geom/part/gpart.8 =================================================================== --- head/lib/geom/part/gpart.8 (revision 364315) +++ head/lib/geom/part/gpart.8 (revision 364316) @@ -1,1468 +1,1522 @@ .\" Copyright (c) 2007, 2008 Marcel Moolenaar .\" All rights reserved. .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" 1. Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" 2. Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in the .\" documentation and/or other materials provided with the distribution. .\" .\" THIS SOFTWARE IS PROVIDED BY THE AUTHORS AND CONTRIBUTORS ``AS IS'' AND .\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE .\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE .\" ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE .\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL .\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS .\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) .\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT .\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY .\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF .\" SUCH DAMAGE. .\" .\" $FreeBSD$ .\" -.Dd December 23, 2019 +.Dd August 17, 2020 .Dt GPART 8 .Os .Sh NAME .Nm gpart .Nd "control utility for the disk partitioning GEOM class" .Sh SYNOPSIS .\" ==== ADD ==== .Nm .Cm add .Fl t Ar type .Op Fl a Ar alignment .Op Fl b Ar start .Op Fl s Ar size .Op Fl i Ar index .Op Fl l Ar label .Op Fl f Ar flags .Ar geom .\" ==== BACKUP ==== .Nm .Cm backup .Ar geom .\" ==== BOOTCODE ==== .Nm .Cm bootcode .Op Fl N .Op Fl b Ar bootcode .Op Fl p Ar partcode Fl i Ar index .Op Fl f Ar flags .Ar geom .\" ==== COMMIT ==== .Nm .Cm commit .Ar geom .\" ==== CREATE ==== .Nm .Cm create .Fl s Ar scheme .Op Fl n Ar entries .Op Fl f Ar flags .Ar provider .\" ==== DELETE ==== .Nm .Cm delete .Fl i Ar index .Op Fl f Ar flags .Ar geom .\" ==== DESTROY ==== .Nm .Cm destroy .Op Fl F .Op Fl f Ar flags .Ar geom .\" ==== MODIFY ==== .Nm .Cm modify .Fl i Ar index .Op Fl l Ar label .Op Fl t Ar type .Op Fl f Ar flags .Ar geom .\" ==== RECOVER ==== .Nm .Cm recover .Op Fl f Ar flags .Ar geom .\" ==== RESIZE ==== .Nm .Cm resize .Fl i Ar index .Op Fl a Ar alignment .Op Fl s Ar size .Op Fl f Ar flags .Ar geom .\" ==== RESTORE ==== .Nm .Cm restore .Op Fl lF .Op Fl f Ar flags .Ar provider .Op Ar ... .\" ==== SET ==== .Nm .Cm set .Fl a Ar attrib .Fl i Ar index .Op Fl f Ar flags .Ar geom .\" ==== SHOW ==== .Nm .Cm show .Op Fl l | r .Op Fl p .Op Ar geom ... .\" ==== UNDO ==== .Nm .Cm undo .Ar geom .\" ==== UNSET ==== .Nm .Cm unset .Fl a Ar attrib .Fl i Ar index .Op Fl f Ar flags .Ar geom .\" .Nm .Cm list .Nm .Cm status .Nm .Cm load .Nm .Cm unload .Sh DESCRIPTION The .Nm utility is used to partition GEOM providers, normally disks. The first argument is the action to be taken: .Bl -tag -width ".Cm bootcode" .\" ==== ADD ==== .It Cm add Add a new partition to the partitioning scheme given by .Ar geom . The partition type must be specified with .Fl t Ar type . 
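.Pp
For example, a minimal sketch of a typical invocation (the provider name
.Pa ada0 ,
the size, and the label are only illustrative and assume an existing
partition table):
.Bd -literal -offset indent
/sbin/gpart add -t freebsd-ufs -a 4k -s 10G -l data0 ada0
.Ed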
The partition's location, size, and other attributes will be calculated automatically if the corresponding options are not specified. .Pp The .Cm add command accepts these options: .Bl -tag -width 12n .It Fl a Ar alignment If specified, then the .Nm utility tries to align .Ar start offset and partition .Ar size to be multiple of .Ar alignment value. .It Fl b Ar start The logical block address where the partition will begin. A SI unit suffix is allowed. .It Fl f Ar flags Additional operational flags. See the section entitled .Sx "OPERATIONAL FLAGS" below for a discussion about its use. .It Fl i Ar index The index in the partition table at which the new partition is to be placed. The index determines the name of the device special file used to represent the partition. .It Fl l Ar label The label attached to the partition. This option is only valid when used on partitioning schemes that support partition labels. .It Fl s Ar size Create a partition of size .Ar size . A SI unit suffix is allowed. .It Fl t Ar type Create a partition of type .Ar type . Partition types are discussed below in the section entitled .Sx "PARTITION TYPES" . .El .\" ==== BACKUP ==== .It Cm backup Dump a partition table to standard output in a special format used by the .Cm restore action. .\" ==== BOOTCODE ==== .It Cm bootcode Embed bootstrap code into the partitioning scheme's metadata on the .Ar geom (using .Fl b Ar bootcode ) or write bootstrap code into a partition (using .Fl p Ar partcode and .Fl i Ar index ) . .Pp The .Cm bootcode command accepts these options: .Bl -tag -width 10n .It Fl N Don't preserve the Volume Serial Number for MBR. MBR bootcode contains Volume Serial Number by default, and .Nm tries to preserve it when installing new bootstrap code. This option allows to skip the preservation to help with some versions of .Xr boot0 8 that don't support Volume Serial Number. .It Fl b Ar bootcode Embed bootstrap code from the file .Ar bootcode into the partitioning scheme's metadata for .Ar geom . Not all partitioning schemes have embedded bootstrap code, so the .Fl b Ar bootcode option is scheme-specific in nature (see the section entitled .Sx BOOTSTRAPPING below). The .Ar bootcode file must match the partitioning scheme's requirements for file content and size. .It Fl f Ar flags Additional operational flags. See the section entitled .Sx "OPERATIONAL FLAGS" below for a discussion about its use. .It Fl i Ar index Specify the target partition for .Fl p Ar partcode . .It Fl p Ar partcode Write the bootstrap code from the file .Ar partcode into the .Ar geom partition specified by .Fl i Ar index . The size of the file must be smaller than the size of the partition. .El .\" ==== COMMIT ==== .It Cm commit Commit any pending changes for geom .Ar geom . All actions are committed by default and will not result in pending changes. Actions can be modified with the .Fl f Ar flags option so that they are not committed, but become pending. Pending changes are reflected by the geom and the .Nm utility, but they are not actually written to disk. The .Cm commit action will write all pending changes to disk. .\" ==== CREATE ==== .It Cm create Create a new partitioning scheme on a provider given by .Ar provider . The scheme to use must be specified with the .Fl s Ar scheme option. .Pp The .Cm create command accepts these options: .Bl -tag -width 10n .It Fl f Ar flags Additional operational flags. See the section entitled .Sx "OPERATIONAL FLAGS" below for a discussion about its use. 
.It Fl n Ar entries The number of entries in the partition table. Every partitioning scheme has a minimum and maximum number of entries. This option allows tables to be created with a number of entries that is within the limits. Some schemes have a maximum equal to the minimum and some schemes have a maximum large enough to be considered unlimited. By default, partition tables are created with the minimum number of entries. .It Fl s Ar scheme Specify the partitioning scheme to use. The kernel must have support for a particular scheme before that scheme can be used to partition a disk. .El .\" ==== DELETE ==== .It Cm delete Delete a partition from geom .Ar geom and further identified by the .Fl i Ar index option. The partition cannot be actively used by the kernel. .Pp The -.cm delete +.Cm delete command accepts these options: .Bl -tag -width 10n .It Fl f Ar flags Additional operational flags. See the section entitled .Sx "OPERATIONAL FLAGS" below for a discussion about its use. .It Fl i Ar index Specifies the index of the partition to be deleted. .El .\" ==== DESTROY ==== .It Cm destroy Destroy the partitioning scheme as implemented by geom .Ar geom . .Pp The .Cm destroy command accepts these options: .Bl -tag -width 10n .It Fl F Forced destroying of the partition table even if it is not empty. .It Fl f Ar flags Additional operational flags. See the section entitled .Sx "OPERATIONAL FLAGS" below for a discussion about its use. .El .\" ==== MODIFY ==== .It Cm modify Modify a partition from geom .Ar geom and further identified by the .Fl i Ar index option. Only the type and/or label of the partition can be modified. Not all partitioning schemes support labels and it is invalid to try to change a partition label in such cases. .Pp The .Cm modify command accepts these options: .Bl -tag -width 10n .It Fl f Ar flags Additional operational flags. See the section entitled .Sx "OPERATIONAL FLAGS" below for a discussion about its use. .It Fl i Ar index Specifies the index of the partition to be modified. .It Fl l Ar label Change the partition label to .Ar label . .It Fl t Ar type Change the partition type to .Ar type . .El .\" ==== RECOVER ==== .It Cm recover Recover a corrupt partition's scheme metadata on the geom .Ar geom . See the section entitled .Sx RECOVERING below for the additional information. .Pp The .Cm recover command accepts these options: .Bl -tag -width 10n .It Fl f Ar flags Additional operational flags. See the section entitled .Sx "OPERATIONAL FLAGS" below for a discussion about its use. .El .\" ==== RESIZE ==== .It Cm resize Resize a partition from geom .Ar geom and further identified by the .Fl i Ar index option. If the new size is not specified it is automatically calculated to be the maximum available from .Ar geom . .Pp The .Cm resize command accepts these options: .Bl -tag -width 12n .It Fl a Ar alignment If specified, then the .Nm utility tries to align partition .Ar size to be a multiple of the .Ar alignment value. .It Fl f Ar flags Additional operational flags. See the section entitled .Sx "OPERATIONAL FLAGS" below for a discussion about its use. .It Fl i Ar index Specifies the index of the partition to be resized. .It Fl s Ar size Specifies the new size of the partition, in logical blocks. A SI unit suffix is allowed. .El .\" ==== RESTORE ==== .It Cm restore Restore the partition table from a backup previously created by the .Cm backup action and read from standard input. Only the partition table is restored. This action does not affect the content of partitions. 
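.Pp
As a minimal sketch (the provider name
.Pa da0
and the file name are only illustrative), a table saved with the
.Cm backup
action can be put back with:
.Bd -literal -offset indent
/sbin/gpart backup da0 > da0.backup
/sbin/gpart restore -l da0 < da0.backup
.Ed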
After restoring the partition table and writing bootcode if needed, user data must be restored from backup. .Pp The .Cm restore command accepts these options: .Bl -tag -width 10n .It Fl F Destroy partition table on the given .Ar provider before doing restore. .It Fl f Ar flags Additional operational flags. See the section entitled .Sx "OPERATIONAL FLAGS" below for a discussion about its use. .It Fl l Restore partition labels for partitioning schemes that support them. .El .\" ==== SET ==== .It Cm set Set the named attribute on the partition entry. See the section entitled .Sx ATTRIBUTES below for a list of available attributes. .Pp The .Cm set command accepts these options: .Bl -tag -width 10n .It Fl a Ar attrib Specifies the attribute to set. .It Fl f Ar flags Additional operational flags. See the section entitled .Sx "OPERATIONAL FLAGS" below for a discussion about its use. .It Fl i Ar index Specifies the index of the partition on which the attribute will be set. .El .\" ==== SHOW ==== .It Cm show Show current partition information for the specified geoms, or all geoms if none are specified. The default output includes the logical starting block of each partition, the partition size in blocks, the partition index number, the partition type, and a human readable partition size. Block sizes and locations are based on the device's Sectorsize as shown by .Cm gpart list . .Pp The .Cm show command accepts these options: .Bl -tag -width 10n .It Fl l For partitioning schemes that support partition labels, print them instead of partition type. .It Fl p Show provider names instead of partition indexes. .It Fl r Show raw partition type instead of symbolic name. .El .\" ==== UNDO ==== .It Cm undo Revert any pending changes for geom .Ar geom . This action is the opposite of the .Cm commit action and can be used to undo any changes that have not been committed. .\" ==== UNSET ==== .It Cm unset Clear the named attribute on the partition entry. See the section entitled .Sx ATTRIBUTES below for a list of available attributes. .Pp The .Cm unset command accepts these options: .Bl -tag -width 10n .It Fl a Ar attrib Specifies the attribute to clear. .It Fl f Ar flags Additional operational flags. See the section entitled .Sx "OPERATIONAL FLAGS" below for a discussion about its use. .It Fl i Ar index Specifies the index of the partition on which the attribute will be cleared. .El .It Cm list See .Xr geom 8 . .It Cm status See .Xr geom 8 . .It Cm load See .Xr geom 8 . .It Cm unload See .Xr geom 8 . .El .Sh PARTITIONING SCHEMES Several partitioning schemes are supported by the .Nm utility: .Bl -tag -width ".Cm VTOC8" .It Cm APM Apple Partition Map, used by PowerPC(R) Macintosh(R) computers. Requires the .Cd GEOM_PART_APM kernel option. .It Cm BSD Traditional BSD disklabel, usually used to subdivide MBR partitions. .Po This scheme can also be used as the sole partitioning method, without an MBR. Partition editing tools from other operating systems often do not understand the bare disklabel partition layout, so this is sometimes called .Dq dangerously dedicated . .Pc Requires the .Cm GEOM_PART_BSD kernel option. .It Cm BSD64 64-bit implementation of BSD disklabel used in DragonFlyBSD to subdivide MBR or GPT partitions. Requires the .Cm GEOM_PART_BSD64 kernel option. .It Cm LDM The Logical Disk Manager is an implementation of volume manager for Microsoft Windows NT. Requires the .Cd GEOM_PART_LDM kernel option. 
.It Cm GPT GUID Partition Table is used on Intel-based Macintosh computers and gradually replacing MBR on most PCs and other systems. Requires the .Cm GEOM_PART_GPT kernel option. .It Cm MBR Master Boot Record is used on PCs and removable media. Requires the .Cm GEOM_PART_MBR kernel option. The .Cm GEOM_PART_EBR option adds support for the Extended Boot Record (EBR), which is used to define a logical partition. The .Cm GEOM_PART_EBR_COMPAT option enables backward compatibility for partition names in the EBR scheme. It also prevents any type of actions on such partitions. .It Cm VTOC8 Sun's SMI Volume Table Of Contents, used by .Tn SPARC64 and .Tn UltraSPARC computers. Requires the .Cm GEOM_PART_VTOC8 kernel option. .El .Sh PARTITION TYPES Partition types are identified on disk by particular strings or magic values. The .Nm utility uses symbolic names for common partition types so the user does not need to know these values or other details of the partitioning scheme in question. The .Nm utility also allows the user to specify scheme-specific partition types for partition types that do not have symbolic names. Symbolic names currently understood and used by .Fx are: .Bl -tag -width ".Cm dragonfly-disklabel64" .It Cm apple-boot The system partition dedicated to storing boot loaders on some Apple systems. The scheme-specific types are .Qq Li "!171" for MBR, .Qq Li "!Apple_Bootstrap" for APM, and .Qq Li "!426f6f74-0000-11aa-aa11-00306543ecac" for GPT. .It Cm bios-boot The system partition dedicated to second stage of the boot loader program. Usually it is used by the GRUB 2 loader for GPT partitioning schemes. The scheme-specific type is .Qq Li "!21686148-6449-6E6F-744E-656564454649" . .It Cm efi The system partition for computers that use the Extensible Firmware Interface (EFI). The scheme-specific types are .Qq Li "!239" for MBR, and .Qq Li "!c12a7328-f81f-11d2-ba4b-00a0c93ec93b" for GPT. .It Cm freebsd A .Fx partition subdivided into filesystems with a .Bx disklabel. This is a legacy partition type and should not be used for the APM or GPT schemes. The scheme-specific types are .Qq Li "!165" for MBR, .Qq Li "!FreeBSD" for APM, and .Qq Li "!516e7cb4-6ecf-11d6-8ff8-00022d09712b" for GPT. .It Cm freebsd-boot A .Fx partition dedicated to bootstrap code. The scheme-specific type is .Qq Li "!83bd6b9d-7f41-11dc-be0b-001560b84f0f" for GPT. .It Cm freebsd-swap A .Fx partition dedicated to swap space. The scheme-specific types are .Qq Li "!FreeBSD-swap" for APM, .Qq Li "!516e7cb5-6ecf-11d6-8ff8-00022d09712b" for GPT, and tag 0x0901 for VTOC8. .It Cm freebsd-ufs A .Fx partition that contains a UFS or UFS2 filesystem. The scheme-specific types are .Qq Li "!FreeBSD-UFS" for APM, .Qq Li "!516e7cb6-6ecf-11d6-8ff8-00022d09712b" for GPT, and tag 0x0902 for VTOC8. .It Cm freebsd-vinum A .Fx partition that contains a Vinum volume. The scheme-specific types are .Qq Li "!FreeBSD-Vinum" for APM, .Qq Li "!516e7cb8-6ecf-11d6-8ff8-00022d09712b" for GPT, and tag 0x0903 for VTOC8. .It Cm freebsd-zfs A .Fx partition that contains a ZFS volume. The scheme-specific types are .Qq Li "!FreeBSD-ZFS" for APM, .Qq Li "!516e7cba-6ecf-11d6-8ff8-00022d09712b" for GPT, and 0x0904 for VTOC8. .El .Pp Other symbolic names that can be used with the .Nm utility are: .Bl -tag -width ".Cm dragonfly-disklabel64" .It Cm apple-apfs An Apple macOS partition used for the Apple file system, APFS. .It Cm apple-core-storage An Apple Mac OS X partition used by logical volume manager known as Core Storage. 
The scheme-specific type is .Qq Li "!53746f72-6167-11aa-aa11-00306543ecac" for GPT. .It Cm apple-hfs An Apple Mac OS X partition that contains an HFS or HFS+ filesystem. The scheme-specific types are .Qq Li "!175" for MBR, .Qq Li "!Apple_HFS" for APM and .Qq Li "!48465300-0000-11aa-aa11-00306543ecac" for GPT. .It Cm apple-label An Apple Mac OS X partition dedicated to partition metadata that describes the disk device. The scheme-specific type is .Qq Li "!4c616265-6c00-11aa-aa11-00306543ecac" for GPT. .It Cm apple-raid An Apple Mac OS X partition used in a software RAID configuration. The scheme-specific type is .Qq Li "!52414944-0000-11aa-aa11-00306543ecac" for GPT. .It Cm apple-raid-offline An Apple Mac OS X partition used in a software RAID configuration. The scheme-specific type is .Qq Li "!52414944-5f4f-11aa-aa11-00306543ecac" for GPT. .It Cm apple-tv-recovery An Apple Mac OS X partition used by Apple TV. The scheme-specific type is .Qq Li "!5265636f-7665-11aa-aa11-00306543ecac" for GPT. .It Cm apple-ufs An Apple Mac OS X partition that contains a UFS filesystem. The scheme-specific types are .Qq Li "!168" for MBR, .Qq Li "!Apple_UNIX_SVR2" for APM and .Qq Li "!55465300-0000-11aa-aa11-00306543ecac" for GPT. +.It Cm apple-zfs +An Apple Mac OS X partition that contains a ZFS volume. +The scheme-specific type is +.Qq Li "!6a898cc3-1dd2-11b2-99a6-080020736631" +for GPT. The same GUID is also used for the +.Sy illumos/Solaris /usr partition . +See the +.Sx CAVEATS +section below. .It Cm dragonfly-label32 A DragonFlyBSD partition subdivided into filesystems with a .Bx disklabel. The scheme-specific type is .Qq Li "!9d087404-1ca5-11dc-8817-01301bb8a9f5" for GPT. .It Cm dragonfly-label64 A DragonFlyBSD partition subdivided into filesystems with a disklabel64. The scheme-specific type is .Qq Li "!3d48ce54-1d16-11dc-8696-01301bb8a9f5" for GPT. .It Cm dragonfly-legacy A legacy partition type used in DragonFlyBSD. The scheme-specific type is .Qq Li "!bd215ab2-1d16-11dc-8696-01301bb8a9f5" for GPT. .It Cm dragonfly-ccd A DragonFlyBSD partition used with Concatenated Disk driver. The scheme-specific type is .Qq Li "!dbd5211b-1ca5-11dc-8817-01301bb8a9f5" for GPT. .It Cm dragonfly-hammer A DragonFlyBSD partition that contains a Hammer filesystem. The scheme-specific type is .Qq Li "!61dc63ac-6e38-11dc-8513-01301bb8a9f5" for GPT. .It Cm dragonfly-hammer2 A DragonFlyBSD partition that contains a Hammer2 filesystem. The scheme-specific type is .Qq Li "!5cbb9ad1-862d-11dc-a94d-01301bb8a9f5" for GPT. .It Cm dragonfly-swap A DragonFlyBSD partition dedicated to swap space. The scheme-specific type is .Qq Li "!9d58fdbd-1ca5-11dc-8817-01301bb8a9f5" for GPT. .It Cm dragonfly-ufs A DragonFlyBSD partition that contains an UFS1 filesystem. The scheme-specific type is .Qq Li "!9d94ce7c-1ca5-11dc-8817-01301bb8a9f5" for GPT. .It Cm dragonfly-vinum A DragonFlyBSD partition used with Logical Volume Manager. The scheme-specific type is .Qq Li "!9dd4478f-1ca5-11dc-8817-01301bb8a9f5" for GPT. .It Cm ebr A partition subdivided into filesystems with an EBR. The scheme-specific type is .Qq Li "!5" for MBR. .It Cm fat16 A partition that contains a FAT16 filesystem. The scheme-specific type is .Qq Li "!6" for MBR. .It Cm fat32 A partition that contains a FAT32 filesystem. The scheme-specific type is .Qq Li "!11" for MBR. .It Cm fat32lba A partition that contains a FAT32 (LBA) filesystem. The scheme-specific type is .Qq Li "!12" for MBR. .It Cm linux-data A Linux partition that contains some filesystem with data.
The scheme-specific types are .Qq Li "!131" for MBR and .Qq Li "!0fc63daf-8483-4772-8e79-3d69d8477de4" for GPT. .It Cm linux-lvm A Linux partition dedicated to Logical Volume Manager. The scheme-specific types are .Qq Li "!142" for MBR and .Qq Li "!e6d6d379-f507-44c2-a23c-238f2a3df928" for GPT. .It Cm linux-raid A Linux partition used in a software RAID configuration. The scheme-specific types are .Qq Li "!253" for MBR and .Qq Li "!a19d880f-05fc-4d3b-a006-743f0f84911e" for GPT. .It Cm linux-swap A Linux partition dedicated to swap space. The scheme-specific types are .Qq Li "!130" for MBR and .Qq Li "!0657fd6d-a4ab-43c4-84e5-0933c84b4f4f" for GPT. .It Cm mbr A partition that is sub-partitioned by a Master Boot Record (MBR). This type is known as .Qq Li "!024dee41-33e7-11d3-9d69-0008c781f39f" by GPT. .It Cm ms-basic-data A basic data partition (BDP) for Microsoft operating systems. In the GPT this type is equivalent to the partition types .Cm fat16 , fat32 and .Cm ntfs in MBR. This type is used for GPT exFAT partitions. The scheme-specific type is .Qq Li "!ebd0a0a2-b9e5-4433-87c0-68b6b72699c7" for GPT. .It Cm ms-ldm-data A partition that contains Logical Disk Manager (LDM) volumes. The scheme-specific types are .Qq Li "!66" for MBR, .Qq Li "!af9b60a0-1431-4f62-bc68-3311714a69ad" for GPT. .It Cm ms-ldm-metadata A partition that contains the Logical Disk Manager (LDM) database. The scheme-specific type is .Qq Li "!5808c8aa-7e8f-42e0-85d2-e1e90434cfb3" for GPT. .It Cm netbsd-ccd A NetBSD partition used with Concatenated Disk driver. The scheme-specific type is .Qq Li "!2db519c4-b10f-11dc-b99b-0019d1879648" for GPT. .It Cm netbsd-cgd An encrypted NetBSD partition. The scheme-specific type is .Qq Li "!2db519ec-b10f-11dc-b99b-0019d1879648" for GPT. .It Cm netbsd-ffs A NetBSD partition that contains an UFS filesystem. The scheme-specific type is .Qq Li "!49f48d5a-b10e-11dc-b99b-0019d1879648" for GPT. .It Cm netbsd-lfs A NetBSD partition that contains an LFS filesystem. The scheme-specific type is .Qq Li "!49f48d82-b10e-11dc-b99b-0019d1879648" for GPT. .It Cm netbsd-raid A NetBSD partition used in a software RAID configuration. The scheme-specific type is .Qq Li "!49f48daa-b10e-11dc-b99b-0019d1879648" for GPT. .It Cm netbsd-swap A NetBSD partition dedicated to swap space. The scheme-specific type is .Qq Li "!49f48d32-b10e-11dc-b99b-0019d1879648" for GPT. .It Cm ntfs A partition that contains an NTFS or exFAT filesystem. The scheme-specific type is .Qq Li "!7" for MBR. .It Cm prep-boot The system partition dedicated to storing boot loaders on some PowerPC systems, notably those made by IBM. The scheme-specific types are .Qq Li "!65" for MBR and -.Qq Li "!0x9e1a2d38-c612-4316-aa26-8b49521e5a8b" +.Qq Li "!9e1a2d38-c612-4316-aa26-8b49521e5a8b" for GPT. +.It Cm solaris-boot +An illumos/Solaris partition dedicated to the boot loader. +The scheme-specific type is +.Qq Li "!6a82cb45-1dd2-11b2-99a6-080020736631" +for GPT. +.It Cm solaris-root +An illumos/Solaris partition dedicated to the root filesystem. +The scheme-specific type is +.Qq Li "!6a85cf4d-1dd2-11b2-99a6-080020736631" +for GPT. +.It Cm solaris-swap +An illumos/Solaris partition dedicated to swap space. +The scheme-specific type is +.Qq Li "!6a87c46f-1dd2-11b2-99a6-080020736631" +for GPT. +.It Cm solaris-backup +An illumos/Solaris partition dedicated to backup. +The scheme-specific type is +.Qq Li "!6a8b642b-1dd2-11b2-99a6-080020736631" +for GPT. +.It Cm solaris-var +An illumos/Solaris partition dedicated to the /var filesystem.
+The scheme-specific type is +.Qq Li "!6a8ef2e9-1dd2-11b2-99a6-080020736631" +for GPT. +.It Cm solaris-home +An illumos/Solaris partition dedicated to the /home filesystem. +The scheme-specific type is +.Qq Li "!6a90ba39-1dd2-11b2-99a6-080020736631" +for GPT. +.It Cm solaris-altsec +An illumos/Solaris partition dedicated to alternate sectors. +The scheme-specific type is +.Qq Li "!6a9283a5-1dd2-11b2-99a6-080020736631" +for GPT. +.It Cm solaris-reserved +An illumos/Solaris partition dedicated to reserved space. +The scheme-specific type is +.Qq Li "!6a945a3b-1dd2-11b2-99a6-080020736631" +for GPT. .It Cm vmware-vmfs A partition that contains a VMware File System (VMFS). The scheme-specific types are .Qq Li "!251" for MBR and .Qq Li "!aa31e02a-400f-11db-9590-000c2911d1b8" for GPT. .It Cm vmware-vmkdiag A partition that contains a VMware diagnostic filesystem. The scheme-specific types are .Qq Li "!252" for MBR and .Qq Li "!9d275380-40ad-11db-bf97-000c2911d1b8" for GPT. .It Cm vmware-reserved A VMware reserved partition. The scheme-specific type is .Qq Li "!9198effc-31c0-11db-8f78-000c2911d1b8" for GPT. .It Cm vmware-vsanhdr A partition claimed by VMware VSAN. The scheme-specific type is .Qq Li "!381cfccc-7288-11e0-92ee-000c2911d0b2" for GPT. .El .Sh ATTRIBUTES The scheme-specific attributes for EBR: .Bl -tag -width ".Cm active" .It Cm active .El .Pp The scheme-specific attributes for GPT: .Bl -tag -width ".Cm bootfailed" .It Cm bootme When set, the .Nm gptboot stage 1 boot loader will try to boot the system from this partition. Multiple partitions can be marked with the .Cm bootme attribute. See .Xr gptboot 8 for more details. .It Cm bootonce Setting this attribute automatically sets the .Cm bootme attribute. When set, the .Nm gptboot stage 1 boot loader will try to boot the system from this partition only once. Multiple partitions can be marked with the .Cm bootonce and .Cm bootme attribute pairs. See .Xr gptboot 8 for more details. .It Cm bootfailed This attribute should not be manually managed. It is managed by the .Nm gptboot stage 1 boot loader and the .Pa /etc/rc.d/gptboot start-up script. See .Xr gptboot 8 for more details. .It Cm lenovofix Setting this attribute overwrites the Protective MBR with a new one where the 0xee partition is the second, rather than the first record. This resolves a BIOS compatibility issue with some Lenovo models including the X220, T420, and T520, allowing them to boot from GPT partitioned disks without using EFI. .El .Pp The scheme-specific attributes for MBR: .Bl -tag -width ".Cm active" .It Cm active .El .Sh BOOTSTRAPPING .Fx supports several partitioning schemes and each scheme uses different bootstrap code. The bootstrap code is located in a specific disk area for each partitioning scheme, and may vary in size for different schemes. .Pp Bootstrap code can be separated into two types. The first type is embedded in the partitioning scheme's metadata, while the second type is located on a specific partition. Embedding bootstrap code should only be done with the .Cm gpart bootcode command with the .Fl b Ar bootcode option. The GEOM PART class knows how to safely embed bootstrap code into specific partitioning scheme metadata without causing any damage. .Pp The Master Boot Record (MBR) uses a 512-byte bootstrap code image, embedded into the partition table's metadata area. There are two variants of this bootstrap code: .Pa /boot/mbr and .Pa /boot/boot0 .
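Either image can be embedded with the
.Cm bootcode
action; a minimal sketch, assuming the disk
.Pa ada0 :
.Bd -literal -offset indent
/sbin/gpart bootcode -b /boot/mbr ada0
.Ed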
.Pa /boot/mbr searches for a partition with the .Cm active attribute (see the .Sx ATTRIBUTES section) in the partition table. Then it runs next bootstrap stage. The .Pa /boot/boot0 image contains a boot manager with some additional interactive functions for multi-booting from a user-selected partition. .Pp A BSD disklabel is usually created inside an MBR partition (slice) with type .Cm freebsd (see the .Sx "PARTITION TYPES" section). It uses 8 KB size bootstrap code image .Pa /boot/boot , embedded into the partition table's metadata area. .Pp Both types of bootstrap code are used to boot from the GUID Partition Table. First, a protective MBR is embedded into the first disk sector from the .Pa /boot/pmbr image. It searches through the GPT for a .Cm freebsd-boot partition (see the .Sx "PARTITION TYPES" section) and runs the next bootstrap stage from it. The .Cm freebsd-boot partition should be smaller than 545 KB. It can be located either before or after other .Fx partitions on the disk. There are two variants of bootstrap code to write to this partition: .Pa /boot/gptboot and .Pa /boot/gptzfsboot . .Pp .Pa /boot/gptboot is used to boot from UFS partitions. .Cm gptboot searches through .Cm freebsd-ufs partitions in the GPT and selects one to boot based on the .Cm bootonce and .Cm bootme attributes. If neither attribute is found, .Pa /boot/gptboot boots from the first .Cm freebsd-ufs partition. .Pa /boot/loader .Pq the third bootstrap stage is loaded from the first partition that matches these conditions. See .Xr gptboot 8 for more information. .Pp .Pa /boot/gptzfsboot is used to boot from ZFS. It searches through the GPT for .Cm freebsd-zfs partitions, trying to detect ZFS pools. After all pools are detected, .Pa /boot/loader is started from the first one found set as bootable. .Pp The VTOC8 scheme does not support embedding bootstrap code. Instead, the 8 KBytes bootstrap code image .Pa /boot/boot1 should be written with the .Cm gpart bootcode command with the .Fl p Ar bootcode option to all sufficiently large VTOC8 partitions. To do this the .Fl i Ar index option could be omitted. .Pp The APM scheme also does not support embedding bootstrap code. Instead, the 800 KBytes bootstrap code image .Pa /boot/boot1.hfs should be written with the .Cm gpart bootcode command to a partition of type .Cm apple-boot , which should also be 800 KB in size. .Sh OPERATIONAL FLAGS Actions other than the .Cm commit and .Cm undo actions take an optional .Fl f Ar flags option. This option is used to specify action-specific operational flags. By default, the .Nm utility defines the .Ql C flag so that the action is immediately committed. The user can specify .Dq Fl f Cm x to have the action result in a pending change that can later, with other pending changes, be committed as a single compound change with the .Cm commit action or reverted with the .Cm undo action. .Sh RECOVERING The GEOM PART class supports recovering of partition tables only for GPT. The GPT primary metadata is stored at the beginning of the device. For redundancy, a secondary .Pq backup copy of the metadata is stored at the end of the device. As a result of having two copies, some corruption of metadata is not fatal to the working of GPT. When the kernel detects corrupt metadata, it marks this table as corrupt and reports the problem. .Cm destroy and .Cm recover are the only operations allowed on corrupt tables. 
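.Pp
As a minimal sketch (the provider name
.Pa ada0
is only illustrative), a damaged table is typically inspected and then
repaired in place:
.Bd -literal -offset indent
/sbin/gpart show ada0
/sbin/gpart recover ada0
.Ed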
.Pp If one GPT header appears to be corrupt but the other copy remains intact, the kernel will log the following: .Bd -literal -offset indent GEOM: provider: the primary GPT table is corrupt or invalid. GEOM: provider: using the secondary instead -- recovery strongly advised. .Ed .Pp or .Bd -literal -offset indent GEOM: provider: the secondary GPT table is corrupt or invalid. GEOM: provider: using the primary only -- recovery suggested. .Ed .Pp Also .Nm commands such as .Cm show , status and .Cm list will report corrupt tables. .Pp If the size of the device has changed (e.g.,\& volume expansion), the secondary GPT header will no longer be located in the last sector. This is not a metadata corruption, but it is dangerous because any corruption of the primary GPT will lead to loss of the partition table. This problem is reported by the kernel with the message: .Bd -literal -offset indent GEOM: provider: the secondary GPT header is not in the last LBA. .Ed .Pp This situation can be recovered with the .Cm recover command. This command reconstructs the corrupt metadata using known valid metadata and relocates the secondary GPT to the end of the device. .Pp .Em NOTE : The GEOM PART class can detect the same partition table visible through different GEOM providers, and some of them will be marked as corrupt. Be careful when choosing a provider for recovery. If you choose incorrectly you can destroy the metadata of another GEOM class, e.g.,\& GEOM MIRROR or GEOM LABEL. .Sh SYSCTL VARIABLES The following .Xr sysctl 8 variables can be used to control the behavior of the .Nm PART GEOM class. The default value is shown next to each variable. .Bl -tag -width indent .It Va kern.geom.part.allow_nesting : No 0 By default, some schemes (currently BSD, BSD64 and VTOC8) do not permit further nested partitioning. This variable overrides this restriction and allows arbitrary nesting (except within partitions created at offset 0). Some schemes have their own separate checks, for which see below. .It Va kern.geom.part.auto_resize : No 1 This variable controls the automatic resize behavior of the .Nm PART GEOM class. When this variable is enabled and a new provider size is detected, the scheme metadata is resized, but the changes are not saved to disk until .Cm gpart commit is run to confirm them. This behavior is also reported with a diagnostic message: .Sy "GEOM_PART: (provider) was automatically resized." .Sy "Use `gpart commit (provider)` to save changes or `gpart undo (provider)`" .Sy "to revert them." .It Va kern.geom.part.check_integrity : No 1 This variable controls the behavior of metadata integrity checks. When integrity checks are enabled, the .Nm PART GEOM class verifies all generic partition parameters obtained from the disk metadata. If some inconsistency is detected, the partition table will be rejected with a diagnostic message: .Sy "GEOM_PART: Integrity check failed (provider, scheme)" . .It Va kern.geom.part.gpt.allow_nesting : No 0 By default the GPT scheme is allowed only at the outermost nesting level. This variable allows this restriction to be removed. .It Va kern.geom.part.ldm.debug : No 0 Debug level of the Logical Disk Manager (LDM) module. This can be set to a number between 0 and 2 inclusive. If set to 0, minimal debug information is printed, and if set to 2, the maximum amount of debug information is printed. .It Va kern.geom.part.ldm.show_mirrors : No 0 This variable controls how the Logical Disk Manager (LDM) module handles mirrored volumes.
By default mirrored volumes are shown as partitions with type .Cm ms-ldm-data (see the .Sx "PARTITION TYPES" section). If this variable set to 1 each component of the mirrored volume will be present as independent partition. .Em NOTE : This may break a mirrored volume and lead to data damage. .It Va kern.geom.part.mbr.enforce_chs : No 0 Specify how the Master Boot Record (MBR) module does alignment. If this variable is set to a non-zero value, the module will automatically recalculate the user-specified offset and size for alignment with the CHS geometry. Otherwise the values will be left unchanged. .It Va kern.geom.part.separator : No "" Specify an optional separator that will be inserted between the GEOM name and partition name. This variable is a .Xr loader 8 tunable. Note that setting this variable may break software which assumes a particular naming scheme. .El .Sh EXIT STATUS Exit status is 0 on success, and 1 if the command fails. .Sh EXAMPLES The examples below assume that the disk's logical block size is 512 bytes, regardless of its physical block size. .Ss GPT In this example, we will format .Pa ada0 with the GPT scheme and create boot, swap and root partitions. First, we need to create the partition table: .Bd -literal -offset indent /sbin/gpart create -s GPT ada0 .Ed .Pp Next, we install a protective MBR with the first-stage bootstrap code. The protective MBR lists a single, bootable partition spanning the entire disk, thus allowing non-GPT-aware BIOSes to boot from the disk and preventing tools which do not understand the GPT scheme from considering the disk to be unformatted. .Bd -literal -offset indent /sbin/gpart bootcode -b /boot/pmbr ada0 .Ed .Pp We then create a dedicated .Cm freebsd-boot partition to hold the second-stage boot loader, which will load the .Fx kernel and modules from a UFS or ZFS filesystem. This partition must be larger than the bootstrap code .Po either .Pa /boot/gptboot for UFS or .Pa /boot/gptzfsboot for ZFS .Pc , but smaller than 545 kB since the first-stage loader will load the entire partition into memory during boot, regardless of how much data it actually contains. We create a 472-block (236 kB) boot partition at offset 40, which is the size of the partition table (34 blocks or 17 kB) rounded up to the nearest 4 kB boundary. .Bd -literal -offset indent /sbin/gpart add -b 40 -s 472 -t freebsd-boot ada0 /sbin/gpart bootcode -p /boot/gptboot -i 1 ada0 .Ed .Pp We now create a 4 GB swap partition at the first available offset, which is 40 + 472 = 512 blocks (256 kB). .Bd -literal -offset indent /sbin/gpart add -s 4G -t freebsd-swap ada0 .Ed .Pp Aligning the swap partition and all subsequent partitions on a 256 kB boundary ensures optimal performance on a wide range of media, from plain old disks with 512-byte blocks, through modern .Dq advanced format disks with 4096-byte physical blocks, to RAID volumes with stripe sizes of up to 256 kB. .Pp Finally, we create and format an 8 GB .Cm freebsd-ufs partition for the root filesystem, leaving the rest of the slice free for additional filesystems: .Bd -literal -offset indent /sbin/gpart add -s 8G -t freebsd-ufs ada0 /sbin/newfs -Uj /dev/ada0p3 .Ed .Ss MBR In this example, we will format .Pa ada0 with the MBR scheme and create a single partition which we subdivide using a traditional .Bx disklabel. 
.Pp First, we create the partition table and a single 64 GB partition, then we mark that partition active (bootable) and install the first-stage boot loader: .Bd -literal -offset indent /sbin/gpart create -s MBR ada0 /sbin/gpart add -t freebsd -s 64G ada0 /sbin/gpart set -a active -i 1 ada0 /sbin/gpart bootcode -b /boot/boot0 ada0 .Ed .Pp Next, we create a disklabel in that partition .Po .Dq slice in disklabel terminology .Pc with room for up to 20 partitions: .Bd -literal -offset indent /sbin/gpart create -s BSD -n 20 ada0s1 .Ed .Pp We then create an 8 GB root partition and a 4 GB swap partition: .Bd -literal -offset indent /sbin/gpart add -t freebsd-ufs -s 8G ada0s1 /sbin/gpart add -t freebsd-swap -s 4G ada0s1 .Ed .Pp Finally, we install the appropriate boot loader for the .Bx label: .Bd -literal -offset indent /sbin/gpart bootcode -b /boot/boot ada0s1 .Ed .Ss VTOC8 .Pp Create a VTOC8 scheme on .Pa da0 : .Bd -literal -offset indent /sbin/gpart create -s VTOC8 da0 .Ed .Pp Create a 512MB-sized .Cm freebsd-ufs partition to contain a UFS filesystem from which the system can boot. .Bd -literal -offset indent /sbin/gpart add -s 512M -t freebsd-ufs da0 .Ed .Pp Create a 15GB-sized .Cm freebsd-ufs partition to contain a UFS filesystem and aligned on 4KB boundaries: .Bd -literal -offset indent /sbin/gpart add -s 15G -t freebsd-ufs -a 4k da0 .Ed .Pp After creating all required partitions, embed bootstrap code into them: .Bd -literal -offset indent /sbin/gpart bootcode -p /boot/boot1 da0 .Ed .Ss Deleting Partitions and Destroying the Partitioning Scheme If a .Em "Device busy" error is shown when trying to destroy a partition table, remember that all of the partitions must be deleted first with the .Cm delete action. In this example, .Pa da0 has three partitions: .Bd -literal -offset indent /sbin/gpart delete -i 3 da0 /sbin/gpart delete -i 2 da0 /sbin/gpart delete -i 1 da0 /sbin/gpart destroy da0 .Ed .Pp Rather than deleting each partition and then destroying the partitioning scheme, the .Fl F option can be given with .Cm destroy to delete all of the partitions before destroying the partitioning scheme. This is equivalent to the previous example: .Bd -literal -offset indent /sbin/gpart destroy -F da0 .Ed .Ss Backup and Restore .Pp Create a backup of the partition table from .Pa da0 : .Bd -literal -offset indent /sbin/gpart backup da0 > da0.backup .Ed .Pp Restore the partition table from the backup to .Pa da0 : .Bd -literal -offset indent /sbin/gpart restore -l da0 < /mnt/da0.backup .Ed .Pp Clone the partition table from .Pa ada0 to .Pa ada1 and .Pa ada2 : .Bd -literal -offset indent /sbin/gpart backup ada0 | /sbin/gpart restore -F ada1 ada2 .Ed .Sh SEE ALSO .Xr geom 4 , .Xr boot0cfg 8 , .Xr geom 8 , .Xr gptboot 8 .Sh HISTORY The .Nm utility appeared in .Fx 7.0 . .Sh AUTHORS .An Marcel Moolenaar Aq Mt marcel@FreeBSD.org +.Sh CAVEATS +Partition type +.Em apple-zfs +(6a898cc3-1dd2-11b2-99a6-080020736631) is also being used +on illumos/Solaris platforms for ZFS volumes. Index: head/sys/geom/part/g_part.c =================================================================== --- head/sys/geom/part/g_part.c (revision 364315) +++ head/sys/geom/part/g_part.c (revision 364316) @@ -1,2420 +1,2429 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 2002, 2005-2009 Marcel Moolenaar * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * 1. 
Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "g_part_if.h" static kobj_method_t g_part_null_methods[] = { { 0, 0 } }; static struct g_part_scheme g_part_null_scheme = { "(none)", g_part_null_methods, sizeof(struct g_part_table), }; TAILQ_HEAD(, g_part_scheme) g_part_schemes = TAILQ_HEAD_INITIALIZER(g_part_schemes); struct g_part_alias_list { const char *lexeme; enum g_part_alias alias; } g_part_alias_list[G_PART_ALIAS_COUNT] = { { "apple-apfs", G_PART_ALIAS_APPLE_APFS }, { "apple-boot", G_PART_ALIAS_APPLE_BOOT }, { "apple-core-storage", G_PART_ALIAS_APPLE_CORE_STORAGE }, { "apple-hfs", G_PART_ALIAS_APPLE_HFS }, { "apple-label", G_PART_ALIAS_APPLE_LABEL }, { "apple-raid", G_PART_ALIAS_APPLE_RAID }, { "apple-raid-offline", G_PART_ALIAS_APPLE_RAID_OFFLINE }, { "apple-tv-recovery", G_PART_ALIAS_APPLE_TV_RECOVERY }, { "apple-ufs", G_PART_ALIAS_APPLE_UFS }, + { "apple-zfs", G_PART_ALIAS_APPLE_ZFS }, { "bios-boot", G_PART_ALIAS_BIOS_BOOT }, { "chromeos-firmware", G_PART_ALIAS_CHROMEOS_FIRMWARE }, { "chromeos-kernel", G_PART_ALIAS_CHROMEOS_KERNEL }, { "chromeos-reserved", G_PART_ALIAS_CHROMEOS_RESERVED }, { "chromeos-root", G_PART_ALIAS_CHROMEOS_ROOT }, { "dragonfly-ccd", G_PART_ALIAS_DFBSD_CCD }, { "dragonfly-hammer", G_PART_ALIAS_DFBSD_HAMMER }, { "dragonfly-hammer2", G_PART_ALIAS_DFBSD_HAMMER2 }, { "dragonfly-label32", G_PART_ALIAS_DFBSD }, { "dragonfly-label64", G_PART_ALIAS_DFBSD64 }, { "dragonfly-legacy", G_PART_ALIAS_DFBSD_LEGACY }, { "dragonfly-swap", G_PART_ALIAS_DFBSD_SWAP }, { "dragonfly-ufs", G_PART_ALIAS_DFBSD_UFS }, { "dragonfly-vinum", G_PART_ALIAS_DFBSD_VINUM }, { "ebr", G_PART_ALIAS_EBR }, { "efi", G_PART_ALIAS_EFI }, { "fat16", G_PART_ALIAS_MS_FAT16 }, { "fat32", G_PART_ALIAS_MS_FAT32 }, { "fat32lba", G_PART_ALIAS_MS_FAT32LBA }, { "freebsd", G_PART_ALIAS_FREEBSD }, { "freebsd-boot", G_PART_ALIAS_FREEBSD_BOOT }, { "freebsd-nandfs", G_PART_ALIAS_FREEBSD_NANDFS }, { "freebsd-swap", G_PART_ALIAS_FREEBSD_SWAP }, { "freebsd-ufs", G_PART_ALIAS_FREEBSD_UFS }, { "freebsd-vinum", G_PART_ALIAS_FREEBSD_VINUM }, { "freebsd-zfs", G_PART_ALIAS_FREEBSD_ZFS }, { "linux-data", G_PART_ALIAS_LINUX_DATA }, { "linux-lvm", G_PART_ALIAS_LINUX_LVM }, { "linux-raid", G_PART_ALIAS_LINUX_RAID }, { "linux-swap", G_PART_ALIAS_LINUX_SWAP }, { "mbr", G_PART_ALIAS_MBR }, { "ms-basic-data", 
G_PART_ALIAS_MS_BASIC_DATA }, { "ms-ldm-data", G_PART_ALIAS_MS_LDM_DATA }, { "ms-ldm-metadata", G_PART_ALIAS_MS_LDM_METADATA }, { "ms-recovery", G_PART_ALIAS_MS_RECOVERY }, { "ms-reserved", G_PART_ALIAS_MS_RESERVED }, { "ms-spaces", G_PART_ALIAS_MS_SPACES }, { "netbsd-ccd", G_PART_ALIAS_NETBSD_CCD }, { "netbsd-cgd", G_PART_ALIAS_NETBSD_CGD }, { "netbsd-ffs", G_PART_ALIAS_NETBSD_FFS }, { "netbsd-lfs", G_PART_ALIAS_NETBSD_LFS }, { "netbsd-raid", G_PART_ALIAS_NETBSD_RAID }, { "netbsd-swap", G_PART_ALIAS_NETBSD_SWAP }, { "ntfs", G_PART_ALIAS_MS_NTFS }, { "openbsd-data", G_PART_ALIAS_OPENBSD_DATA }, { "prep-boot", G_PART_ALIAS_PREP_BOOT }, + { "solaris-boot", G_PART_ALIAS_SOLARIS_BOOT }, + { "solaris-root", G_PART_ALIAS_SOLARIS_ROOT }, + { "solaris-swap", G_PART_ALIAS_SOLARIS_SWAP }, + { "solaris-backup", G_PART_ALIAS_SOLARIS_BACKUP }, + { "solaris-var", G_PART_ALIAS_SOLARIS_VAR }, + { "solaris-home", G_PART_ALIAS_SOLARIS_HOME }, + { "solaris-altsec", G_PART_ALIAS_SOLARIS_ALTSEC }, + { "solaris-reserved", G_PART_ALIAS_SOLARIS_RESERVED }, { "vmware-reserved", G_PART_ALIAS_VMRESERVED }, { "vmware-vmfs", G_PART_ALIAS_VMFS }, { "vmware-vmkdiag", G_PART_ALIAS_VMKDIAG }, { "vmware-vsanhdr", G_PART_ALIAS_VMVSANHDR }, }; SYSCTL_DECL(_kern_geom); SYSCTL_NODE(_kern_geom, OID_AUTO, part, CTLFLAG_RW | CTLFLAG_MPSAFE, 0, "GEOM_PART stuff"); static u_int check_integrity = 1; SYSCTL_UINT(_kern_geom_part, OID_AUTO, check_integrity, CTLFLAG_RWTUN, &check_integrity, 1, "Enable integrity checking"); static u_int auto_resize = 1; SYSCTL_UINT(_kern_geom_part, OID_AUTO, auto_resize, CTLFLAG_RWTUN, &auto_resize, 1, "Enable auto resize"); static u_int allow_nesting = 0; SYSCTL_UINT(_kern_geom_part, OID_AUTO, allow_nesting, CTLFLAG_RWTUN, &allow_nesting, 0, "Allow additional levels of nesting"); char g_part_separator[MAXPATHLEN] = ""; SYSCTL_STRING(_kern_geom_part, OID_AUTO, separator, CTLFLAG_RDTUN, &g_part_separator, sizeof(g_part_separator), "Partition name separator"); /* * The GEOM partitioning class. */ static g_ctl_req_t g_part_ctlreq; static g_ctl_destroy_geom_t g_part_destroy_geom; static g_fini_t g_part_fini; static g_init_t g_part_init; static g_taste_t g_part_taste; static g_access_t g_part_access; static g_dumpconf_t g_part_dumpconf; static g_orphan_t g_part_orphan; static g_spoiled_t g_part_spoiled; static g_start_t g_part_start; static g_resize_t g_part_resize; static g_ioctl_t g_part_ioctl; static struct g_class g_part_class = { .name = "PART", .version = G_VERSION, /* Class methods. */ .ctlreq = g_part_ctlreq, .destroy_geom = g_part_destroy_geom, .fini = g_part_fini, .init = g_part_init, .taste = g_part_taste, /* Geom methods. */ .access = g_part_access, .dumpconf = g_part_dumpconf, .orphan = g_part_orphan, .spoiled = g_part_spoiled, .start = g_part_start, .resize = g_part_resize, .ioctl = g_part_ioctl, }; DECLARE_GEOM_CLASS(g_part_class, g_part); MODULE_VERSION(g_part, 0); /* * Support functions. 
*/ static void g_part_wither(struct g_geom *, int); const char * g_part_alias_name(enum g_part_alias alias) { int i; for (i = 0; i < G_PART_ALIAS_COUNT; i++) { if (g_part_alias_list[i].alias != alias) continue; return (g_part_alias_list[i].lexeme); } return (NULL); } void g_part_geometry_heads(off_t blocks, u_int sectors, off_t *bestchs, u_int *bestheads) { static u_int candidate_heads[] = { 1, 2, 16, 32, 64, 128, 255, 0 }; off_t chs, cylinders; u_int heads; int idx; *bestchs = 0; *bestheads = 0; for (idx = 0; candidate_heads[idx] != 0; idx++) { heads = candidate_heads[idx]; cylinders = blocks / heads / sectors; if (cylinders < heads || cylinders < sectors) break; if (cylinders > 1023) continue; chs = cylinders * heads * sectors; if (chs > *bestchs || (chs == *bestchs && *bestheads == 1)) { *bestchs = chs; *bestheads = heads; } } } static void g_part_geometry(struct g_part_table *table, struct g_consumer *cp, off_t blocks) { static u_int candidate_sectors[] = { 1, 9, 17, 33, 63, 0 }; off_t chs, bestchs; u_int heads, sectors; int idx; if (g_getattr("GEOM::fwsectors", cp, §ors) != 0 || sectors == 0 || g_getattr("GEOM::fwheads", cp, &heads) != 0 || heads == 0) { table->gpt_fixgeom = 0; table->gpt_heads = 0; table->gpt_sectors = 0; bestchs = 0; for (idx = 0; candidate_sectors[idx] != 0; idx++) { sectors = candidate_sectors[idx]; g_part_geometry_heads(blocks, sectors, &chs, &heads); if (chs == 0) continue; /* * Prefer a geometry with sectors > 1, but only if * it doesn't bump down the number of heads to 1. */ if (chs > bestchs || (chs == bestchs && heads > 1 && table->gpt_sectors == 1)) { bestchs = chs; table->gpt_heads = heads; table->gpt_sectors = sectors; } } /* * If we didn't find a geometry at all, then the disk is * too big. This means we can use the maximum number of * heads and sectors. */ if (bestchs == 0) { table->gpt_heads = 255; table->gpt_sectors = 63; } } else { table->gpt_fixgeom = 1; table->gpt_heads = heads; table->gpt_sectors = sectors; } } static void g_part_get_physpath_done(struct bio *bp) { struct g_geom *gp; struct g_part_entry *entry; struct g_part_table *table; struct g_provider *pp; struct bio *pbp; pbp = bp->bio_parent; pp = pbp->bio_to; gp = pp->geom; table = gp->softc; entry = pp->private; if (bp->bio_error == 0) { char *end; size_t len, remainder; len = strlcat(bp->bio_data, "/", bp->bio_length); if (len < bp->bio_length) { end = bp->bio_data + len; remainder = bp->bio_length - len; G_PART_NAME(table, entry, end, remainder); } } g_std_done(bp); } #define DPRINTF(...) 
if (bootverbose) { \ printf("GEOM_PART: " __VA_ARGS__); \ } static int g_part_check_integrity(struct g_part_table *table, struct g_consumer *cp) { struct g_part_entry *e1, *e2; struct g_provider *pp; off_t offset; int failed; failed = 0; pp = cp->provider; if (table->gpt_last < table->gpt_first) { DPRINTF("last LBA is below first LBA: %jd < %jd\n", (intmax_t)table->gpt_last, (intmax_t)table->gpt_first); failed++; } if (table->gpt_last > pp->mediasize / pp->sectorsize - 1) { DPRINTF("last LBA extends beyond mediasize: " "%jd > %jd\n", (intmax_t)table->gpt_last, (intmax_t)pp->mediasize / pp->sectorsize - 1); failed++; } LIST_FOREACH(e1, &table->gpt_entry, gpe_entry) { if (e1->gpe_deleted || e1->gpe_internal) continue; if (e1->gpe_start < table->gpt_first) { DPRINTF("partition %d has start offset below first " "LBA: %jd < %jd\n", e1->gpe_index, (intmax_t)e1->gpe_start, (intmax_t)table->gpt_first); failed++; } if (e1->gpe_start > table->gpt_last) { DPRINTF("partition %d has start offset beyond last " "LBA: %jd > %jd\n", e1->gpe_index, (intmax_t)e1->gpe_start, (intmax_t)table->gpt_last); failed++; } if (e1->gpe_end < e1->gpe_start) { DPRINTF("partition %d has end offset below start " "offset: %jd < %jd\n", e1->gpe_index, (intmax_t)e1->gpe_end, (intmax_t)e1->gpe_start); failed++; } if (e1->gpe_end > table->gpt_last) { DPRINTF("partition %d has end offset beyond last " "LBA: %jd > %jd\n", e1->gpe_index, (intmax_t)e1->gpe_end, (intmax_t)table->gpt_last); failed++; } if (pp->stripesize > 0) { offset = e1->gpe_start * pp->sectorsize; if (e1->gpe_offset > offset) offset = e1->gpe_offset; if ((offset + pp->stripeoffset) % pp->stripesize) { DPRINTF("partition %d on (%s, %s) is not " "aligned on %ju bytes\n", e1->gpe_index, pp->name, table->gpt_scheme->name, (uintmax_t)pp->stripesize); /* Don't treat this as a critical failure */ } } e2 = e1; while ((e2 = LIST_NEXT(e2, gpe_entry)) != NULL) { if (e2->gpe_deleted || e2->gpe_internal) continue; if (e1->gpe_start >= e2->gpe_start && e1->gpe_start <= e2->gpe_end) { DPRINTF("partition %d has start offset inside " "partition %d: start[%d] %jd >= start[%d] " "%jd <= end[%d] %jd\n", e1->gpe_index, e2->gpe_index, e2->gpe_index, (intmax_t)e2->gpe_start, e1->gpe_index, (intmax_t)e1->gpe_start, e2->gpe_index, (intmax_t)e2->gpe_end); failed++; } if (e1->gpe_end >= e2->gpe_start && e1->gpe_end <= e2->gpe_end) { DPRINTF("partition %d has end offset inside " "partition %d: start[%d] %jd >= end[%d] " "%jd <= end[%d] %jd\n", e1->gpe_index, e2->gpe_index, e2->gpe_index, (intmax_t)e2->gpe_start, e1->gpe_index, (intmax_t)e1->gpe_end, e2->gpe_index, (intmax_t)e2->gpe_end); failed++; } if (e1->gpe_start < e2->gpe_start && e1->gpe_end > e2->gpe_end) { DPRINTF("partition %d contains partition %d: " "start[%d] %jd > start[%d] %jd, end[%d] " "%jd < end[%d] %jd\n", e1->gpe_index, e2->gpe_index, e1->gpe_index, (intmax_t)e1->gpe_start, e2->gpe_index, (intmax_t)e2->gpe_start, e2->gpe_index, (intmax_t)e2->gpe_end, e1->gpe_index, (intmax_t)e1->gpe_end); failed++; } } } if (failed != 0) { printf("GEOM_PART: integrity check failed (%s, %s)\n", pp->name, table->gpt_scheme->name); if (check_integrity != 0) return (EINVAL); table->gpt_corrupt = 1; } return (0); } #undef DPRINTF struct g_part_entry * g_part_new_entry(struct g_part_table *table, int index, quad_t start, quad_t end) { struct g_part_entry *entry, *last; last = NULL; LIST_FOREACH(entry, &table->gpt_entry, gpe_entry) { if (entry->gpe_index == index) break; if (entry->gpe_index > index) { entry = NULL; break; } last = entry; } if 
(entry == NULL) { entry = g_malloc(table->gpt_scheme->gps_entrysz, M_WAITOK | M_ZERO); entry->gpe_index = index; if (last == NULL) LIST_INSERT_HEAD(&table->gpt_entry, entry, gpe_entry); else LIST_INSERT_AFTER(last, entry, gpe_entry); } else entry->gpe_offset = 0; entry->gpe_start = start; entry->gpe_end = end; return (entry); } static void g_part_new_provider(struct g_geom *gp, struct g_part_table *table, struct g_part_entry *entry) { struct g_consumer *cp; struct g_provider *pp; struct g_geom_alias *gap; off_t offset; cp = LIST_FIRST(&gp->consumer); pp = cp->provider; offset = entry->gpe_start * pp->sectorsize; if (entry->gpe_offset < offset) entry->gpe_offset = offset; if (entry->gpe_pp == NULL) { entry->gpe_pp = G_PART_NEW_PROVIDER(table, gp, entry, gp->name); /* * If our parent provider had any aliases, then copy them to our * provider so when geom DEV tastes things later, they will be * there for it to create the aliases with those name used in * place of the geom's name we use to create the provider. The * kobj interface that generates names makes this awkward. */ LIST_FOREACH(gap, &pp->aliases, ga_next) G_PART_ADD_ALIAS(table, entry->gpe_pp, entry, gap->ga_alias); entry->gpe_pp->flags |= G_PF_DIRECT_SEND | G_PF_DIRECT_RECEIVE; entry->gpe_pp->private = entry; /* Close the circle. */ } entry->gpe_pp->index = entry->gpe_index - 1; /* index is 1-based. */ entry->gpe_pp->mediasize = (entry->gpe_end - entry->gpe_start + 1) * pp->sectorsize; entry->gpe_pp->mediasize -= entry->gpe_offset - offset; entry->gpe_pp->sectorsize = pp->sectorsize; entry->gpe_pp->stripesize = pp->stripesize; entry->gpe_pp->stripeoffset = pp->stripeoffset + entry->gpe_offset; if (pp->stripesize > 0) entry->gpe_pp->stripeoffset %= pp->stripesize; entry->gpe_pp->flags |= pp->flags & G_PF_ACCEPT_UNMAPPED; g_error_provider(entry->gpe_pp, 0); } static struct g_geom* g_part_find_geom(const char *name) { struct g_geom *gp; LIST_FOREACH(gp, &g_part_class.geom, geom) { if ((gp->flags & G_GEOM_WITHER) == 0 && strcmp(name, gp->name) == 0) break; } return (gp); } static int g_part_parm_geom(struct gctl_req *req, const char *name, struct g_geom **v) { struct g_geom *gp; const char *gname; gname = gctl_get_asciiparam(req, name); if (gname == NULL) return (ENOATTR); if (strncmp(gname, _PATH_DEV, sizeof(_PATH_DEV) - 1) == 0) gname += sizeof(_PATH_DEV) - 1; gp = g_part_find_geom(gname); if (gp == NULL) { gctl_error(req, "%d %s '%s'", EINVAL, name, gname); return (EINVAL); } *v = gp; return (0); } static int g_part_parm_provider(struct gctl_req *req, const char *name, struct g_provider **v) { struct g_provider *pp; const char *pname; pname = gctl_get_asciiparam(req, name); if (pname == NULL) return (ENOATTR); if (strncmp(pname, _PATH_DEV, sizeof(_PATH_DEV) - 1) == 0) pname += sizeof(_PATH_DEV) - 1; pp = g_provider_by_name(pname); if (pp == NULL) { gctl_error(req, "%d %s '%s'", EINVAL, name, pname); return (EINVAL); } *v = pp; return (0); } static int g_part_parm_quad(struct gctl_req *req, const char *name, quad_t *v) { const char *p; char *x; quad_t q; p = gctl_get_asciiparam(req, name); if (p == NULL) return (ENOATTR); q = strtoq(p, &x, 0); if (*x != '\0' || q < 0) { gctl_error(req, "%d %s '%s'", EINVAL, name, p); return (EINVAL); } *v = q; return (0); } static int g_part_parm_scheme(struct gctl_req *req, const char *name, struct g_part_scheme **v) { struct g_part_scheme *s; const char *p; p = gctl_get_asciiparam(req, name); if (p == NULL) return (ENOATTR); TAILQ_FOREACH(s, &g_part_schemes, scheme_list) { if (s == 
&g_part_null_scheme) continue; if (!strcasecmp(s->name, p)) break; } if (s == NULL) { gctl_error(req, "%d %s '%s'", EINVAL, name, p); return (EINVAL); } *v = s; return (0); } static int g_part_parm_str(struct gctl_req *req, const char *name, const char **v) { const char *p; p = gctl_get_asciiparam(req, name); if (p == NULL) return (ENOATTR); /* An empty label is always valid. */ if (strcmp(name, "label") != 0 && p[0] == '\0') { gctl_error(req, "%d %s '%s'", EINVAL, name, p); return (EINVAL); } *v = p; return (0); } static int g_part_parm_intmax(struct gctl_req *req, const char *name, u_int *v) { const intmax_t *p; int size; p = gctl_get_param(req, name, &size); if (p == NULL) return (ENOATTR); if (size != sizeof(*p) || *p < 0 || *p > INT_MAX) { gctl_error(req, "%d %s '%jd'", EINVAL, name, *p); return (EINVAL); } *v = (u_int)*p; return (0); } static int g_part_parm_uint32(struct gctl_req *req, const char *name, u_int *v) { const uint32_t *p; int size; p = gctl_get_param(req, name, &size); if (p == NULL) return (ENOATTR); if (size != sizeof(*p) || *p > INT_MAX) { gctl_error(req, "%d %s '%u'", EINVAL, name, (unsigned int)*p); return (EINVAL); } *v = (u_int)*p; return (0); } static int g_part_parm_bootcode(struct gctl_req *req, const char *name, const void **v, unsigned int *s) { const void *p; int size; p = gctl_get_param(req, name, &size); if (p == NULL) return (ENOATTR); *v = p; *s = size; return (0); } static int g_part_probe(struct g_geom *gp, struct g_consumer *cp, int depth) { struct g_part_scheme *iter, *scheme; struct g_part_table *table; int pri, probe; table = gp->softc; scheme = (table != NULL) ? table->gpt_scheme : NULL; pri = (scheme != NULL) ? G_PART_PROBE(table, cp) : INT_MIN; if (pri == 0) goto done; if (pri > 0) { /* error */ scheme = NULL; pri = INT_MIN; } TAILQ_FOREACH(iter, &g_part_schemes, scheme_list) { if (iter == &g_part_null_scheme) continue; table = (void *)kobj_create((kobj_class_t)iter, M_GEOM, M_WAITOK); table->gpt_gp = gp; table->gpt_scheme = iter; table->gpt_depth = depth; probe = G_PART_PROBE(table, cp); if (probe <= 0 && probe > pri) { pri = probe; scheme = iter; if (gp->softc != NULL) kobj_delete((kobj_t)gp->softc, M_GEOM); gp->softc = table; if (pri == 0) goto done; } else kobj_delete((kobj_t)table, M_GEOM); } done: return ((scheme == NULL) ? ENXIO : 0); } /* * Control request functions. */ static int g_part_ctl_add(struct gctl_req *req, struct g_part_parms *gpp) { struct g_geom *gp; struct g_provider *pp; struct g_part_entry *delent, *last, *entry; struct g_part_table *table; struct sbuf *sb; quad_t end; unsigned int index; int error; gp = gpp->gpp_geom; G_PART_TRACE((G_T_TOPOLOGY, "%s(%s)", __func__, gp->name)); g_topology_assert(); pp = LIST_FIRST(&gp->consumer)->provider; table = gp->softc; end = gpp->gpp_start + gpp->gpp_size - 1; if (gpp->gpp_start < table->gpt_first || gpp->gpp_start > table->gpt_last) { gctl_error(req, "%d start '%jd'", EINVAL, (intmax_t)gpp->gpp_start); return (EINVAL); } if (end < gpp->gpp_start || end > table->gpt_last) { gctl_error(req, "%d size '%jd'", EINVAL, (intmax_t)gpp->gpp_size); return (EINVAL); } if (gpp->gpp_index > table->gpt_entries) { gctl_error(req, "%d index '%d'", EINVAL, gpp->gpp_index); return (EINVAL); } delent = last = NULL; index = (gpp->gpp_index > 0) ? 
gpp->gpp_index : 1; LIST_FOREACH(entry, &table->gpt_entry, gpe_entry) { if (entry->gpe_deleted) { if (entry->gpe_index == index) delent = entry; continue; } if (entry->gpe_index == index) index = entry->gpe_index + 1; if (entry->gpe_index < index) last = entry; if (entry->gpe_internal) continue; if (gpp->gpp_start >= entry->gpe_start && gpp->gpp_start <= entry->gpe_end) { gctl_error(req, "%d start '%jd'", ENOSPC, (intmax_t)gpp->gpp_start); return (ENOSPC); } if (end >= entry->gpe_start && end <= entry->gpe_end) { gctl_error(req, "%d end '%jd'", ENOSPC, (intmax_t)end); return (ENOSPC); } if (gpp->gpp_start < entry->gpe_start && end > entry->gpe_end) { gctl_error(req, "%d size '%jd'", ENOSPC, (intmax_t)gpp->gpp_size); return (ENOSPC); } } if (gpp->gpp_index > 0 && index != gpp->gpp_index) { gctl_error(req, "%d index '%d'", EEXIST, gpp->gpp_index); return (EEXIST); } if (index > table->gpt_entries) { gctl_error(req, "%d index '%d'", ENOSPC, index); return (ENOSPC); } entry = (delent == NULL) ? g_malloc(table->gpt_scheme->gps_entrysz, M_WAITOK | M_ZERO) : delent; entry->gpe_index = index; entry->gpe_start = gpp->gpp_start; entry->gpe_end = end; error = G_PART_ADD(table, entry, gpp); if (error) { gctl_error(req, "%d", error); if (delent == NULL) g_free(entry); return (error); } if (delent == NULL) { if (last == NULL) LIST_INSERT_HEAD(&table->gpt_entry, entry, gpe_entry); else LIST_INSERT_AFTER(last, entry, gpe_entry); entry->gpe_created = 1; } else { entry->gpe_deleted = 0; entry->gpe_modified = 1; } g_part_new_provider(gp, table, entry); /* Provide feedback if so requested. */ if (gpp->gpp_parms & G_PART_PARM_OUTPUT) { sb = sbuf_new_auto(); G_PART_FULLNAME(table, entry, sb, gp->name); if (pp->stripesize > 0 && entry->gpe_pp->stripeoffset != 0) sbuf_printf(sb, " added, but partition is not " "aligned on %ju bytes\n", (uintmax_t)pp->stripesize); else sbuf_cat(sb, " added\n"); sbuf_finish(sb); gctl_set_param(req, "output", sbuf_data(sb), sbuf_len(sb) + 1); sbuf_delete(sb); } return (0); } static int g_part_ctl_bootcode(struct gctl_req *req, struct g_part_parms *gpp) { struct g_geom *gp; struct g_part_table *table; struct sbuf *sb; int error, sz; gp = gpp->gpp_geom; G_PART_TRACE((G_T_TOPOLOGY, "%s(%s)", __func__, gp->name)); g_topology_assert(); table = gp->softc; sz = table->gpt_scheme->gps_bootcodesz; if (sz == 0) { error = ENODEV; goto fail; } if (gpp->gpp_codesize > sz) { error = EFBIG; goto fail; } error = G_PART_BOOTCODE(table, gpp); if (error) goto fail; /* Provide feedback if so requested. 
*/ if (gpp->gpp_parms & G_PART_PARM_OUTPUT) { sb = sbuf_new_auto(); sbuf_printf(sb, "bootcode written to %s\n", gp->name); sbuf_finish(sb); gctl_set_param(req, "output", sbuf_data(sb), sbuf_len(sb) + 1); sbuf_delete(sb); } return (0); fail: gctl_error(req, "%d", error); return (error); } static int g_part_ctl_commit(struct gctl_req *req, struct g_part_parms *gpp) { struct g_consumer *cp; struct g_geom *gp; struct g_provider *pp; struct g_part_entry *entry, *tmp; struct g_part_table *table; char *buf; int error, i; gp = gpp->gpp_geom; G_PART_TRACE((G_T_TOPOLOGY, "%s(%s)", __func__, gp->name)); g_topology_assert(); table = gp->softc; if (!table->gpt_opened) { gctl_error(req, "%d", EPERM); return (EPERM); } g_topology_unlock(); cp = LIST_FIRST(&gp->consumer); if ((table->gpt_smhead | table->gpt_smtail) != 0) { pp = cp->provider; buf = g_malloc(pp->sectorsize, M_WAITOK | M_ZERO); while (table->gpt_smhead != 0) { i = ffs(table->gpt_smhead) - 1; error = g_write_data(cp, i * pp->sectorsize, buf, pp->sectorsize); if (error) { g_free(buf); goto fail; } table->gpt_smhead &= ~(1 << i); } while (table->gpt_smtail != 0) { i = ffs(table->gpt_smtail) - 1; error = g_write_data(cp, pp->mediasize - (i + 1) * pp->sectorsize, buf, pp->sectorsize); if (error) { g_free(buf); goto fail; } table->gpt_smtail &= ~(1 << i); } g_free(buf); } if (table->gpt_scheme == &g_part_null_scheme) { g_topology_lock(); g_access(cp, -1, -1, -1); g_part_wither(gp, ENXIO); return (0); } error = G_PART_WRITE(table, cp); if (error) goto fail; LIST_FOREACH_SAFE(entry, &table->gpt_entry, gpe_entry, tmp) { if (!entry->gpe_deleted) { /* Notify consumers that provider might be changed. */ if (entry->gpe_modified && ( entry->gpe_pp->acw + entry->gpe_pp->ace + entry->gpe_pp->acr) == 0) g_media_changed(entry->gpe_pp, M_NOWAIT); entry->gpe_created = 0; entry->gpe_modified = 0; continue; } LIST_REMOVE(entry, gpe_entry); g_free(entry); } table->gpt_created = 0; table->gpt_opened = 0; g_topology_lock(); g_access(cp, -1, -1, -1); return (0); fail: g_topology_lock(); gctl_error(req, "%d", error); return (error); } static int g_part_ctl_create(struct gctl_req *req, struct g_part_parms *gpp) { struct g_consumer *cp; struct g_geom *gp; struct g_provider *pp; struct g_part_scheme *scheme; struct g_part_table *null, *table; struct sbuf *sb; int attr, error; pp = gpp->gpp_provider; scheme = gpp->gpp_scheme; G_PART_TRACE((G_T_TOPOLOGY, "%s(%s)", __func__, pp->name)); g_topology_assert(); /* Check that there isn't already a g_part geom on the provider. */ gp = g_part_find_geom(pp->name); if (gp != NULL) { null = gp->softc; if (null->gpt_scheme != &g_part_null_scheme) { gctl_error(req, "%d geom '%s'", EEXIST, pp->name); return (EEXIST); } } else null = NULL; if ((gpp->gpp_parms & G_PART_PARM_ENTRIES) && (gpp->gpp_entries < scheme->gps_minent || gpp->gpp_entries > scheme->gps_maxent)) { gctl_error(req, "%d entries '%d'", EINVAL, gpp->gpp_entries); return (EINVAL); } if (null == NULL) gp = g_new_geomf(&g_part_class, "%s", pp->name); gp->softc = kobj_create((kobj_class_t)gpp->gpp_scheme, M_GEOM, M_WAITOK); table = gp->softc; table->gpt_gp = gp; table->gpt_scheme = gpp->gpp_scheme; table->gpt_entries = (gpp->gpp_parms & G_PART_PARM_ENTRIES) ? 
gpp->gpp_entries : scheme->gps_minent; LIST_INIT(&table->gpt_entry); if (null == NULL) { cp = g_new_consumer(gp); cp->flags |= G_CF_DIRECT_SEND | G_CF_DIRECT_RECEIVE; error = g_attach(cp, pp); if (error == 0) error = g_access(cp, 1, 1, 1); if (error != 0) { g_part_wither(gp, error); gctl_error(req, "%d geom '%s'", error, pp->name); return (error); } table->gpt_opened = 1; } else { cp = LIST_FIRST(&gp->consumer); table->gpt_opened = null->gpt_opened; table->gpt_smhead = null->gpt_smhead; table->gpt_smtail = null->gpt_smtail; } g_topology_unlock(); /* Make sure the provider has media. */ if (pp->mediasize == 0 || pp->sectorsize == 0) { error = ENODEV; goto fail; } /* Make sure we can nest and if so, determine our depth. */ error = g_getattr("PART::isleaf", cp, &attr); if (!error && attr) { error = ENODEV; goto fail; } error = g_getattr("PART::depth", cp, &attr); table->gpt_depth = (!error) ? attr + 1 : 0; /* * Synthesize a disk geometry. Some partitioning schemes * depend on it and since some file systems need it even * when the partitition scheme doesn't, we do it here in * scheme-independent code. */ g_part_geometry(table, cp, pp->mediasize / pp->sectorsize); error = G_PART_CREATE(table, gpp); if (error) goto fail; g_topology_lock(); table->gpt_created = 1; if (null != NULL) kobj_delete((kobj_t)null, M_GEOM); /* * Support automatic commit by filling in the gpp_geom * parameter. */ gpp->gpp_parms |= G_PART_PARM_GEOM; gpp->gpp_geom = gp; /* Provide feedback if so requested. */ if (gpp->gpp_parms & G_PART_PARM_OUTPUT) { sb = sbuf_new_auto(); sbuf_printf(sb, "%s created\n", gp->name); sbuf_finish(sb); gctl_set_param(req, "output", sbuf_data(sb), sbuf_len(sb) + 1); sbuf_delete(sb); } return (0); fail: g_topology_lock(); if (null == NULL) { g_access(cp, -1, -1, -1); g_part_wither(gp, error); } else { kobj_delete((kobj_t)gp->softc, M_GEOM); gp->softc = null; } gctl_error(req, "%d provider", error); return (error); } static int g_part_ctl_delete(struct gctl_req *req, struct g_part_parms *gpp) { struct g_geom *gp; struct g_provider *pp; struct g_part_entry *entry; struct g_part_table *table; struct sbuf *sb; gp = gpp->gpp_geom; G_PART_TRACE((G_T_TOPOLOGY, "%s(%s)", __func__, gp->name)); g_topology_assert(); table = gp->softc; LIST_FOREACH(entry, &table->gpt_entry, gpe_entry) { if (entry->gpe_deleted || entry->gpe_internal) continue; if (entry->gpe_index == gpp->gpp_index) break; } if (entry == NULL) { gctl_error(req, "%d index '%d'", ENOENT, gpp->gpp_index); return (ENOENT); } pp = entry->gpe_pp; if (pp != NULL) { if (pp->acr > 0 || pp->acw > 0 || pp->ace > 0) { gctl_error(req, "%d", EBUSY); return (EBUSY); } pp->private = NULL; entry->gpe_pp = NULL; } if (pp != NULL) g_wither_provider(pp, ENXIO); /* Provide feedback if so requested. */ if (gpp->gpp_parms & G_PART_PARM_OUTPUT) { sb = sbuf_new_auto(); G_PART_FULLNAME(table, entry, sb, gp->name); sbuf_cat(sb, " deleted\n"); sbuf_finish(sb); gctl_set_param(req, "output", sbuf_data(sb), sbuf_len(sb) + 1); sbuf_delete(sb); } if (entry->gpe_created) { LIST_REMOVE(entry, gpe_entry); g_free(entry); } else { entry->gpe_modified = 0; entry->gpe_deleted = 1; } return (0); } static int g_part_ctl_destroy(struct gctl_req *req, struct g_part_parms *gpp) { struct g_consumer *cp; struct g_geom *gp; struct g_provider *pp; struct g_part_entry *entry, *tmp; struct g_part_table *null, *table; struct sbuf *sb; int error; gp = gpp->gpp_geom; G_PART_TRACE((G_T_TOPOLOGY, "%s(%s)", __func__, gp->name)); g_topology_assert(); table = gp->softc; /* Check for busy providers. 
*/ LIST_FOREACH(entry, &table->gpt_entry, gpe_entry) { if (entry->gpe_deleted || entry->gpe_internal) continue; if (gpp->gpp_force) { pp = entry->gpe_pp; if (pp == NULL) continue; if (pp->acr == 0 && pp->acw == 0 && pp->ace == 0) continue; } gctl_error(req, "%d", EBUSY); return (EBUSY); } if (gpp->gpp_force) { /* Destroy all providers. */ LIST_FOREACH_SAFE(entry, &table->gpt_entry, gpe_entry, tmp) { pp = entry->gpe_pp; if (pp != NULL) { pp->private = NULL; g_wither_provider(pp, ENXIO); } LIST_REMOVE(entry, gpe_entry); g_free(entry); } } error = G_PART_DESTROY(table, gpp); if (error) { gctl_error(req, "%d", error); return (error); } gp->softc = kobj_create((kobj_class_t)&g_part_null_scheme, M_GEOM, M_WAITOK); null = gp->softc; null->gpt_gp = gp; null->gpt_scheme = &g_part_null_scheme; LIST_INIT(&null->gpt_entry); cp = LIST_FIRST(&gp->consumer); pp = cp->provider; null->gpt_last = pp->mediasize / pp->sectorsize - 1; null->gpt_depth = table->gpt_depth; null->gpt_opened = table->gpt_opened; null->gpt_smhead = table->gpt_smhead; null->gpt_smtail = table->gpt_smtail; while ((entry = LIST_FIRST(&table->gpt_entry)) != NULL) { LIST_REMOVE(entry, gpe_entry); g_free(entry); } kobj_delete((kobj_t)table, M_GEOM); /* Provide feedback if so requested. */ if (gpp->gpp_parms & G_PART_PARM_OUTPUT) { sb = sbuf_new_auto(); sbuf_printf(sb, "%s destroyed\n", gp->name); sbuf_finish(sb); gctl_set_param(req, "output", sbuf_data(sb), sbuf_len(sb) + 1); sbuf_delete(sb); } return (0); } static int g_part_ctl_modify(struct gctl_req *req, struct g_part_parms *gpp) { struct g_geom *gp; struct g_part_entry *entry; struct g_part_table *table; struct sbuf *sb; int error; gp = gpp->gpp_geom; G_PART_TRACE((G_T_TOPOLOGY, "%s(%s)", __func__, gp->name)); g_topology_assert(); table = gp->softc; LIST_FOREACH(entry, &table->gpt_entry, gpe_entry) { if (entry->gpe_deleted || entry->gpe_internal) continue; if (entry->gpe_index == gpp->gpp_index) break; } if (entry == NULL) { gctl_error(req, "%d index '%d'", ENOENT, gpp->gpp_index); return (ENOENT); } error = G_PART_MODIFY(table, entry, gpp); if (error) { gctl_error(req, "%d", error); return (error); } if (!entry->gpe_created) entry->gpe_modified = 1; /* Provide feedback if so requested. */ if (gpp->gpp_parms & G_PART_PARM_OUTPUT) { sb = sbuf_new_auto(); G_PART_FULLNAME(table, entry, sb, gp->name); sbuf_cat(sb, " modified\n"); sbuf_finish(sb); gctl_set_param(req, "output", sbuf_data(sb), sbuf_len(sb) + 1); sbuf_delete(sb); } return (0); } static int g_part_ctl_move(struct gctl_req *req, struct g_part_parms *gpp) { gctl_error(req, "%d verb 'move'", ENOSYS); return (ENOSYS); } static int g_part_ctl_recover(struct gctl_req *req, struct g_part_parms *gpp) { struct g_part_table *table; struct g_geom *gp; struct sbuf *sb; int error, recovered; gp = gpp->gpp_geom; G_PART_TRACE((G_T_TOPOLOGY, "%s(%s)", __func__, gp->name)); g_topology_assert(); table = gp->softc; error = recovered = 0; if (table->gpt_corrupt) { error = G_PART_RECOVER(table); if (error == 0) error = g_part_check_integrity(table, LIST_FIRST(&gp->consumer)); if (error) { gctl_error(req, "%d recovering '%s' failed", error, gp->name); return (error); } recovered = 1; } /* Provide feedback if so requested. 
*/ if (gpp->gpp_parms & G_PART_PARM_OUTPUT) { sb = sbuf_new_auto(); if (recovered) sbuf_printf(sb, "%s recovered\n", gp->name); else sbuf_printf(sb, "%s recovering is not needed\n", gp->name); sbuf_finish(sb); gctl_set_param(req, "output", sbuf_data(sb), sbuf_len(sb) + 1); sbuf_delete(sb); } return (0); } static int g_part_ctl_resize(struct gctl_req *req, struct g_part_parms *gpp) { struct g_geom *gp; struct g_provider *pp; struct g_part_entry *pe, *entry; struct g_part_table *table; struct sbuf *sb; quad_t end; int error; off_t mediasize; gp = gpp->gpp_geom; G_PART_TRACE((G_T_TOPOLOGY, "%s(%s)", __func__, gp->name)); g_topology_assert(); table = gp->softc; /* check gpp_index */ LIST_FOREACH(entry, &table->gpt_entry, gpe_entry) { if (entry->gpe_deleted || entry->gpe_internal) continue; if (entry->gpe_index == gpp->gpp_index) break; } if (entry == NULL) { gctl_error(req, "%d index '%d'", ENOENT, gpp->gpp_index); return (ENOENT); } /* check gpp_size */ end = entry->gpe_start + gpp->gpp_size - 1; if (gpp->gpp_size < 1 || end > table->gpt_last) { gctl_error(req, "%d size '%jd'", EINVAL, (intmax_t)gpp->gpp_size); return (EINVAL); } LIST_FOREACH(pe, &table->gpt_entry, gpe_entry) { if (pe->gpe_deleted || pe->gpe_internal || pe == entry) continue; if (end >= pe->gpe_start && end <= pe->gpe_end) { gctl_error(req, "%d end '%jd'", ENOSPC, (intmax_t)end); return (ENOSPC); } if (entry->gpe_start < pe->gpe_start && end > pe->gpe_end) { gctl_error(req, "%d size '%jd'", ENOSPC, (intmax_t)gpp->gpp_size); return (ENOSPC); } } pp = entry->gpe_pp; if ((g_debugflags & G_F_FOOTSHOOTING) == 0 && (pp->acr > 0 || pp->acw > 0 || pp->ace > 0)) { if (entry->gpe_end - entry->gpe_start + 1 > gpp->gpp_size) { /* Deny shrinking of an opened partition. */ gctl_error(req, "%d", EBUSY); return (EBUSY); } } error = G_PART_RESIZE(table, entry, gpp); if (error) { gctl_error(req, "%d%s", error, error != EBUSY ? "": " resizing will lead to unexpected shrinking" " due to alignment"); return (error); } if (!entry->gpe_created) entry->gpe_modified = 1; /* update mediasize of changed provider */ mediasize = (entry->gpe_end - entry->gpe_start + 1) * pp->sectorsize; g_resize_provider(pp, mediasize); /* Provide feedback if so requested. */ if (gpp->gpp_parms & G_PART_PARM_OUTPUT) { sb = sbuf_new_auto(); G_PART_FULLNAME(table, entry, sb, gp->name); sbuf_cat(sb, " resized\n"); sbuf_finish(sb); gctl_set_param(req, "output", sbuf_data(sb), sbuf_len(sb) + 1); sbuf_delete(sb); } return (0); } static int g_part_ctl_setunset(struct gctl_req *req, struct g_part_parms *gpp, unsigned int set) { struct g_geom *gp; struct g_part_entry *entry; struct g_part_table *table; struct sbuf *sb; int error; gp = gpp->gpp_geom; G_PART_TRACE((G_T_TOPOLOGY, "%s(%s)", __func__, gp->name)); g_topology_assert(); table = gp->softc; if (gpp->gpp_parms & G_PART_PARM_INDEX) { LIST_FOREACH(entry, &table->gpt_entry, gpe_entry) { if (entry->gpe_deleted || entry->gpe_internal) continue; if (entry->gpe_index == gpp->gpp_index) break; } if (entry == NULL) { gctl_error(req, "%d index '%d'", ENOENT, gpp->gpp_index); return (ENOENT); } } else entry = NULL; error = G_PART_SETUNSET(table, entry, gpp->gpp_attrib, set); if (error) { gctl_error(req, "%d attrib '%s'", error, gpp->gpp_attrib); return (error); } /* Provide feedback if so requested. */ if (gpp->gpp_parms & G_PART_PARM_OUTPUT) { sb = sbuf_new_auto(); sbuf_printf(sb, "%s %sset on ", gpp->gpp_attrib, (set) ? 
"" : "un"); if (entry) G_PART_FULLNAME(table, entry, sb, gp->name); else sbuf_cat(sb, gp->name); sbuf_cat(sb, "\n"); sbuf_finish(sb); gctl_set_param(req, "output", sbuf_data(sb), sbuf_len(sb) + 1); sbuf_delete(sb); } return (0); } static int g_part_ctl_undo(struct gctl_req *req, struct g_part_parms *gpp) { struct g_consumer *cp; struct g_provider *pp; struct g_geom *gp; struct g_part_entry *entry, *tmp; struct g_part_table *table; int error, reprobe; gp = gpp->gpp_geom; G_PART_TRACE((G_T_TOPOLOGY, "%s(%s)", __func__, gp->name)); g_topology_assert(); table = gp->softc; if (!table->gpt_opened) { gctl_error(req, "%d", EPERM); return (EPERM); } cp = LIST_FIRST(&gp->consumer); LIST_FOREACH_SAFE(entry, &table->gpt_entry, gpe_entry, tmp) { entry->gpe_modified = 0; if (entry->gpe_created) { pp = entry->gpe_pp; if (pp != NULL) { pp->private = NULL; entry->gpe_pp = NULL; g_wither_provider(pp, ENXIO); } entry->gpe_deleted = 1; } if (entry->gpe_deleted) { LIST_REMOVE(entry, gpe_entry); g_free(entry); } } g_topology_unlock(); reprobe = (table->gpt_scheme == &g_part_null_scheme || table->gpt_created) ? 1 : 0; if (reprobe) { LIST_FOREACH(entry, &table->gpt_entry, gpe_entry) { if (entry->gpe_internal) continue; error = EBUSY; goto fail; } while ((entry = LIST_FIRST(&table->gpt_entry)) != NULL) { LIST_REMOVE(entry, gpe_entry); g_free(entry); } error = g_part_probe(gp, cp, table->gpt_depth); if (error) { g_topology_lock(); g_access(cp, -1, -1, -1); g_part_wither(gp, error); return (0); } table = gp->softc; /* * Synthesize a disk geometry. Some partitioning schemes * depend on it and since some file systems need it even * when the partitition scheme doesn't, we do it here in * scheme-independent code. */ pp = cp->provider; g_part_geometry(table, cp, pp->mediasize / pp->sectorsize); } error = G_PART_READ(table, cp); if (error) goto fail; error = g_part_check_integrity(table, cp); if (error) goto fail; g_topology_lock(); LIST_FOREACH(entry, &table->gpt_entry, gpe_entry) { if (!entry->gpe_internal) g_part_new_provider(gp, table, entry); } table->gpt_opened = 0; g_access(cp, -1, -1, -1); return (0); fail: g_topology_lock(); gctl_error(req, "%d", error); return (error); } static void g_part_wither(struct g_geom *gp, int error) { struct g_part_entry *entry; struct g_part_table *table; struct g_provider *pp; table = gp->softc; if (table != NULL) { gp->softc = NULL; while ((entry = LIST_FIRST(&table->gpt_entry)) != NULL) { LIST_REMOVE(entry, gpe_entry); pp = entry->gpe_pp; entry->gpe_pp = NULL; if (pp != NULL) { pp->private = NULL; g_wither_provider(pp, error); } g_free(entry); } G_PART_DESTROY(table, NULL); kobj_delete((kobj_t)table, M_GEOM); } g_wither_geom(gp, error); } /* * Class methods. 
*/ static void g_part_ctlreq(struct gctl_req *req, struct g_class *mp, const char *verb) { struct g_part_parms gpp; struct g_part_table *table; struct gctl_req_arg *ap; enum g_part_ctl ctlreq; unsigned int i, mparms, oparms, parm; int auto_commit, close_on_error; int error, modifies; G_PART_TRACE((G_T_TOPOLOGY, "%s(%s,%s)", __func__, mp->name, verb)); g_topology_assert(); ctlreq = G_PART_CTL_NONE; modifies = 1; mparms = 0; oparms = G_PART_PARM_FLAGS | G_PART_PARM_OUTPUT | G_PART_PARM_VERSION; switch (*verb) { case 'a': if (!strcmp(verb, "add")) { ctlreq = G_PART_CTL_ADD; mparms |= G_PART_PARM_GEOM | G_PART_PARM_SIZE | G_PART_PARM_START | G_PART_PARM_TYPE; oparms |= G_PART_PARM_INDEX | G_PART_PARM_LABEL; } break; case 'b': if (!strcmp(verb, "bootcode")) { ctlreq = G_PART_CTL_BOOTCODE; mparms |= G_PART_PARM_GEOM | G_PART_PARM_BOOTCODE; oparms |= G_PART_PARM_SKIP_DSN; } break; case 'c': if (!strcmp(verb, "commit")) { ctlreq = G_PART_CTL_COMMIT; mparms |= G_PART_PARM_GEOM; modifies = 0; } else if (!strcmp(verb, "create")) { ctlreq = G_PART_CTL_CREATE; mparms |= G_PART_PARM_PROVIDER | G_PART_PARM_SCHEME; oparms |= G_PART_PARM_ENTRIES; } break; case 'd': if (!strcmp(verb, "delete")) { ctlreq = G_PART_CTL_DELETE; mparms |= G_PART_PARM_GEOM | G_PART_PARM_INDEX; } else if (!strcmp(verb, "destroy")) { ctlreq = G_PART_CTL_DESTROY; mparms |= G_PART_PARM_GEOM; oparms |= G_PART_PARM_FORCE; } break; case 'm': if (!strcmp(verb, "modify")) { ctlreq = G_PART_CTL_MODIFY; mparms |= G_PART_PARM_GEOM | G_PART_PARM_INDEX; oparms |= G_PART_PARM_LABEL | G_PART_PARM_TYPE; } else if (!strcmp(verb, "move")) { ctlreq = G_PART_CTL_MOVE; mparms |= G_PART_PARM_GEOM | G_PART_PARM_INDEX; } break; case 'r': if (!strcmp(verb, "recover")) { ctlreq = G_PART_CTL_RECOVER; mparms |= G_PART_PARM_GEOM; } else if (!strcmp(verb, "resize")) { ctlreq = G_PART_CTL_RESIZE; mparms |= G_PART_PARM_GEOM | G_PART_PARM_INDEX | G_PART_PARM_SIZE; } break; case 's': if (!strcmp(verb, "set")) { ctlreq = G_PART_CTL_SET; mparms |= G_PART_PARM_ATTRIB | G_PART_PARM_GEOM; oparms |= G_PART_PARM_INDEX; } break; case 'u': if (!strcmp(verb, "undo")) { ctlreq = G_PART_CTL_UNDO; mparms |= G_PART_PARM_GEOM; modifies = 0; } else if (!strcmp(verb, "unset")) { ctlreq = G_PART_CTL_UNSET; mparms |= G_PART_PARM_ATTRIB | G_PART_PARM_GEOM; oparms |= G_PART_PARM_INDEX; } break; } if (ctlreq == G_PART_CTL_NONE) { gctl_error(req, "%d verb '%s'", EINVAL, verb); return; } bzero(&gpp, sizeof(gpp)); for (i = 0; i < req->narg; i++) { ap = &req->arg[i]; parm = 0; switch (ap->name[0]) { case 'a': if (!strcmp(ap->name, "arg0")) { parm = mparms & (G_PART_PARM_GEOM | G_PART_PARM_PROVIDER); } if (!strcmp(ap->name, "attrib")) parm = G_PART_PARM_ATTRIB; break; case 'b': if (!strcmp(ap->name, "bootcode")) parm = G_PART_PARM_BOOTCODE; break; case 'c': if (!strcmp(ap->name, "class")) continue; break; case 'e': if (!strcmp(ap->name, "entries")) parm = G_PART_PARM_ENTRIES; break; case 'f': if (!strcmp(ap->name, "flags")) parm = G_PART_PARM_FLAGS; else if (!strcmp(ap->name, "force")) parm = G_PART_PARM_FORCE; break; case 'i': if (!strcmp(ap->name, "index")) parm = G_PART_PARM_INDEX; break; case 'l': if (!strcmp(ap->name, "label")) parm = G_PART_PARM_LABEL; break; case 'o': if (!strcmp(ap->name, "output")) parm = G_PART_PARM_OUTPUT; break; case 's': if (!strcmp(ap->name, "scheme")) parm = G_PART_PARM_SCHEME; else if (!strcmp(ap->name, "size")) parm = G_PART_PARM_SIZE; else if (!strcmp(ap->name, "start")) parm = G_PART_PARM_START; else if (!strcmp(ap->name, "skip_dsn")) parm = 
G_PART_PARM_SKIP_DSN; break; case 't': if (!strcmp(ap->name, "type")) parm = G_PART_PARM_TYPE; break; case 'v': if (!strcmp(ap->name, "verb")) continue; else if (!strcmp(ap->name, "version")) parm = G_PART_PARM_VERSION; break; } if ((parm & (mparms | oparms)) == 0) { gctl_error(req, "%d param '%s'", EINVAL, ap->name); return; } switch (parm) { case G_PART_PARM_ATTRIB: error = g_part_parm_str(req, ap->name, &gpp.gpp_attrib); break; case G_PART_PARM_BOOTCODE: error = g_part_parm_bootcode(req, ap->name, &gpp.gpp_codeptr, &gpp.gpp_codesize); break; case G_PART_PARM_ENTRIES: error = g_part_parm_intmax(req, ap->name, &gpp.gpp_entries); break; case G_PART_PARM_FLAGS: error = g_part_parm_str(req, ap->name, &gpp.gpp_flags); break; case G_PART_PARM_FORCE: error = g_part_parm_uint32(req, ap->name, &gpp.gpp_force); break; case G_PART_PARM_GEOM: error = g_part_parm_geom(req, ap->name, &gpp.gpp_geom); break; case G_PART_PARM_INDEX: error = g_part_parm_intmax(req, ap->name, &gpp.gpp_index); break; case G_PART_PARM_LABEL: error = g_part_parm_str(req, ap->name, &gpp.gpp_label); break; case G_PART_PARM_OUTPUT: error = 0; /* Write-only parameter */ break; case G_PART_PARM_PROVIDER: error = g_part_parm_provider(req, ap->name, &gpp.gpp_provider); break; case G_PART_PARM_SCHEME: error = g_part_parm_scheme(req, ap->name, &gpp.gpp_scheme); break; case G_PART_PARM_SIZE: error = g_part_parm_quad(req, ap->name, &gpp.gpp_size); break; case G_PART_PARM_SKIP_DSN: error = g_part_parm_uint32(req, ap->name, &gpp.gpp_skip_dsn); break; case G_PART_PARM_START: error = g_part_parm_quad(req, ap->name, &gpp.gpp_start); break; case G_PART_PARM_TYPE: error = g_part_parm_str(req, ap->name, &gpp.gpp_type); break; case G_PART_PARM_VERSION: error = g_part_parm_uint32(req, ap->name, &gpp.gpp_version); break; default: error = EDOOFUS; gctl_error(req, "%d %s", error, ap->name); break; } if (error != 0) { if (error == ENOATTR) { gctl_error(req, "%d param '%s'", error, ap->name); } return; } gpp.gpp_parms |= parm; } if ((gpp.gpp_parms & mparms) != mparms) { parm = mparms - (gpp.gpp_parms & mparms); gctl_error(req, "%d param '%x'", ENOATTR, parm); return; } /* Obtain permissions if possible/necessary. */ close_on_error = 0; table = NULL; if (modifies && (gpp.gpp_parms & G_PART_PARM_GEOM)) { table = gpp.gpp_geom->softc; if (table != NULL && table->gpt_corrupt && ctlreq != G_PART_CTL_DESTROY && ctlreq != G_PART_CTL_RECOVER) { gctl_error(req, "%d table '%s' is corrupt", EPERM, gpp.gpp_geom->name); return; } if (table != NULL && !table->gpt_opened) { error = g_access(LIST_FIRST(&gpp.gpp_geom->consumer), 1, 1, 1); if (error) { gctl_error(req, "%d geom '%s'", error, gpp.gpp_geom->name); return; } table->gpt_opened = 1; close_on_error = 1; } } /* Allow the scheme to check or modify the parameters. */ if (table != NULL) { error = G_PART_PRECHECK(table, ctlreq, &gpp); if (error) { gctl_error(req, "%d pre-check failed", error); goto out; } } else error = EDOOFUS; /* Prevent bogus uninit. warning. 
*/ switch (ctlreq) { case G_PART_CTL_NONE: panic("%s", __func__); case G_PART_CTL_ADD: error = g_part_ctl_add(req, &gpp); break; case G_PART_CTL_BOOTCODE: error = g_part_ctl_bootcode(req, &gpp); break; case G_PART_CTL_COMMIT: error = g_part_ctl_commit(req, &gpp); break; case G_PART_CTL_CREATE: error = g_part_ctl_create(req, &gpp); break; case G_PART_CTL_DELETE: error = g_part_ctl_delete(req, &gpp); break; case G_PART_CTL_DESTROY: error = g_part_ctl_destroy(req, &gpp); break; case G_PART_CTL_MODIFY: error = g_part_ctl_modify(req, &gpp); break; case G_PART_CTL_MOVE: error = g_part_ctl_move(req, &gpp); break; case G_PART_CTL_RECOVER: error = g_part_ctl_recover(req, &gpp); break; case G_PART_CTL_RESIZE: error = g_part_ctl_resize(req, &gpp); break; case G_PART_CTL_SET: error = g_part_ctl_setunset(req, &gpp, 1); break; case G_PART_CTL_UNDO: error = g_part_ctl_undo(req, &gpp); break; case G_PART_CTL_UNSET: error = g_part_ctl_setunset(req, &gpp, 0); break; } /* Implement automatic commit. */ if (!error) { auto_commit = (modifies && (gpp.gpp_parms & G_PART_PARM_FLAGS) && strchr(gpp.gpp_flags, 'C') != NULL) ? 1 : 0; if (auto_commit) { KASSERT(gpp.gpp_parms & G_PART_PARM_GEOM, ("%s", __func__)); error = g_part_ctl_commit(req, &gpp); } } out: if (error && close_on_error) { g_access(LIST_FIRST(&gpp.gpp_geom->consumer), -1, -1, -1); table->gpt_opened = 0; } } static int g_part_destroy_geom(struct gctl_req *req, struct g_class *mp, struct g_geom *gp) { G_PART_TRACE((G_T_TOPOLOGY, "%s(%s,%s)", __func__, mp->name, gp->name)); g_topology_assert(); g_part_wither(gp, EINVAL); return (0); } static struct g_geom * g_part_taste(struct g_class *mp, struct g_provider *pp, int flags __unused) { struct g_consumer *cp; struct g_geom *gp; struct g_part_entry *entry; struct g_part_table *table; struct root_hold_token *rht; int attr, depth; int error; G_PART_TRACE((G_T_TOPOLOGY, "%s(%s,%s)", __func__, mp->name, pp->name)); g_topology_assert(); /* Skip providers that are already open for writing. */ if (pp->acw > 0) return (NULL); /* * Create a GEOM with consumer and hook it up to the provider. * With that we become part of the topology. Obtain read access * to the provider. */ gp = g_new_geomf(mp, "%s", pp->name); cp = g_new_consumer(gp); cp->flags |= G_CF_DIRECT_SEND | G_CF_DIRECT_RECEIVE; error = g_attach(cp, pp); if (error == 0) error = g_access(cp, 1, 0, 0); if (error != 0) { if (cp->provider) g_detach(cp); g_destroy_consumer(cp); g_destroy_geom(gp); return (NULL); } rht = root_mount_hold(mp->name); g_topology_unlock(); /* * Short-circuit the whole probing galore when there's no * media present. */ if (pp->mediasize == 0 || pp->sectorsize == 0) { error = ENODEV; goto fail; } /* Make sure we can nest and if so, determine our depth. */ error = g_getattr("PART::isleaf", cp, &attr); if (!error && attr) { error = ENODEV; goto fail; } error = g_getattr("PART::depth", cp, &attr); depth = (!error) ? attr + 1 : 0; error = g_part_probe(gp, cp, depth); if (error) goto fail; table = gp->softc; /* * Synthesize a disk geometry. Some partitioning schemes * depend on it and since some file systems need it even * when the partitition scheme doesn't, we do it here in * scheme-independent code. 
*/ g_part_geometry(table, cp, pp->mediasize / pp->sectorsize); error = G_PART_READ(table, cp); if (error) goto fail; error = g_part_check_integrity(table, cp); if (error) goto fail; g_topology_lock(); LIST_FOREACH(entry, &table->gpt_entry, gpe_entry) { if (!entry->gpe_internal) g_part_new_provider(gp, table, entry); } root_mount_rel(rht); g_access(cp, -1, 0, 0); return (gp); fail: g_topology_lock(); root_mount_rel(rht); g_access(cp, -1, 0, 0); g_detach(cp); g_destroy_consumer(cp); g_destroy_geom(gp); return (NULL); } /* * Geom methods. */ static int g_part_access(struct g_provider *pp, int dr, int dw, int de) { struct g_consumer *cp; G_PART_TRACE((G_T_ACCESS, "%s(%s,%d,%d,%d)", __func__, pp->name, dr, dw, de)); cp = LIST_FIRST(&pp->geom->consumer); /* We always gain write-exclusive access. */ return (g_access(cp, dr, dw, dw + de)); } static void g_part_dumpconf(struct sbuf *sb, const char *indent, struct g_geom *gp, struct g_consumer *cp, struct g_provider *pp) { char buf[64]; struct g_part_entry *entry; struct g_part_table *table; KASSERT(sb != NULL && gp != NULL, ("%s", __func__)); table = gp->softc; if (indent == NULL) { KASSERT(cp == NULL && pp != NULL, ("%s", __func__)); entry = pp->private; if (entry == NULL) return; sbuf_printf(sb, " i %u o %ju ty %s", entry->gpe_index, (uintmax_t)entry->gpe_offset, G_PART_TYPE(table, entry, buf, sizeof(buf))); /* * libdisk compatibility quirk - the scheme dumps the * slicer name and partition type in a way that is * compatible with libdisk. When libdisk is not used * anymore, this should go away. */ G_PART_DUMPCONF(table, entry, sb, indent); } else if (cp != NULL) { /* Consumer configuration. */ KASSERT(pp == NULL, ("%s", __func__)); /* none */ } else if (pp != NULL) { /* Provider configuration. */ entry = pp->private; if (entry == NULL) return; sbuf_printf(sb, "%s%ju\n", indent, (uintmax_t)entry->gpe_start); sbuf_printf(sb, "%s%ju\n", indent, (uintmax_t)entry->gpe_end); sbuf_printf(sb, "%s%u\n", indent, entry->gpe_index); sbuf_printf(sb, "%s%s\n", indent, G_PART_TYPE(table, entry, buf, sizeof(buf))); sbuf_printf(sb, "%s%ju\n", indent, (uintmax_t)entry->gpe_offset); sbuf_printf(sb, "%s%ju\n", indent, (uintmax_t)pp->mediasize); G_PART_DUMPCONF(table, entry, sb, indent); } else { /* Geom configuration. */ sbuf_printf(sb, "%s%s\n", indent, table->gpt_scheme->name); sbuf_printf(sb, "%s%u\n", indent, table->gpt_entries); sbuf_printf(sb, "%s%ju\n", indent, (uintmax_t)table->gpt_first); sbuf_printf(sb, "%s%ju\n", indent, (uintmax_t)table->gpt_last); sbuf_printf(sb, "%s%u\n", indent, table->gpt_sectors); sbuf_printf(sb, "%s%u\n", indent, table->gpt_heads); sbuf_printf(sb, "%s%s\n", indent, table->gpt_corrupt ? "CORRUPT": "OK"); sbuf_printf(sb, "%s%s\n", indent, table->gpt_opened ? "true": "false"); G_PART_DUMPCONF(table, NULL, sb, indent); } } /*- * This start routine is only called for non-trivial requests, all the * trivial ones are handled autonomously by the slice code. * For requests we handle here, we must call the g_io_deliver() on the * bio, and return non-zero to indicate to the slice code that we did so. * This code executes in the "DOWN" I/O path, this means: * * No sleeping. * * Don't grab the topology lock. 
* * Don't call biowait, g_getattr(), g_setattr() or g_read_data() */ static int g_part_ioctl(struct g_provider *pp, u_long cmd, void *data, int fflag, struct thread *td) { struct g_part_table *table; table = pp->geom->softc; return G_PART_IOCTL(table, pp, cmd, data, fflag, td); } static void g_part_resize(struct g_consumer *cp) { struct g_part_table *table; G_PART_TRACE((G_T_TOPOLOGY, "%s(%s)", __func__, cp->provider->name)); g_topology_assert(); if (auto_resize == 0) return; table = cp->geom->softc; if (table->gpt_opened == 0) { if (g_access(cp, 1, 1, 1) != 0) return; table->gpt_opened = 1; } if (G_PART_RESIZE(table, NULL, NULL) == 0) printf("GEOM_PART: %s was automatically resized.\n" " Use `gpart commit %s` to save changes or " "`gpart undo %s` to revert them.\n", cp->geom->name, cp->geom->name, cp->geom->name); if (g_part_check_integrity(table, cp) != 0) { g_access(cp, -1, -1, -1); table->gpt_opened = 0; g_part_wither(table->gpt_gp, ENXIO); } } static void g_part_orphan(struct g_consumer *cp) { struct g_provider *pp; struct g_part_table *table; pp = cp->provider; KASSERT(pp != NULL, ("%s", __func__)); G_PART_TRACE((G_T_TOPOLOGY, "%s(%s)", __func__, pp->name)); g_topology_assert(); KASSERT(pp->error != 0, ("%s", __func__)); table = cp->geom->softc; if (table != NULL && table->gpt_opened) g_access(cp, -1, -1, -1); g_part_wither(cp->geom, pp->error); } static void g_part_spoiled(struct g_consumer *cp) { G_PART_TRACE((G_T_TOPOLOGY, "%s(%s)", __func__, cp->provider->name)); g_topology_assert(); cp->flags |= G_CF_ORPHAN; g_part_wither(cp->geom, ENXIO); } static void g_part_start(struct bio *bp) { struct bio *bp2; struct g_consumer *cp; struct g_geom *gp; struct g_part_entry *entry; struct g_part_table *table; struct g_kerneldump *gkd; struct g_provider *pp; void (*done_func)(struct bio *) = g_std_done; char buf[64]; biotrack(bp, __func__); pp = bp->bio_to; gp = pp->geom; table = gp->softc; cp = LIST_FIRST(&gp->consumer); G_PART_TRACE((G_T_BIO, "%s: cmd=%d, provider=%s", __func__, bp->bio_cmd, pp->name)); entry = pp->private; if (entry == NULL) { g_io_deliver(bp, ENXIO); return; } switch(bp->bio_cmd) { case BIO_DELETE: case BIO_READ: case BIO_WRITE: if (bp->bio_offset >= pp->mediasize) { g_io_deliver(bp, EIO); return; } bp2 = g_clone_bio(bp); if (bp2 == NULL) { g_io_deliver(bp, ENOMEM); return; } if (bp2->bio_offset + bp2->bio_length > pp->mediasize) bp2->bio_length = pp->mediasize - bp2->bio_offset; bp2->bio_done = g_std_done; bp2->bio_offset += entry->gpe_offset; g_io_request(bp2, cp); return; case BIO_SPEEDUP: case BIO_FLUSH: break; case BIO_GETATTR: if (g_handleattr_int(bp, "GEOM::fwheads", table->gpt_heads)) return; if (g_handleattr_int(bp, "GEOM::fwsectors", table->gpt_sectors)) return; /* * allow_nesting overrides "isleaf" to false _unless_ the * provider offset is zero, since otherwise we would recurse. */ if (g_handleattr_int(bp, "PART::isleaf", table->gpt_isleaf && (allow_nesting == 0 || entry->gpe_offset == 0))) return; if (g_handleattr_int(bp, "PART::depth", table->gpt_depth)) return; if (g_handleattr_str(bp, "PART::scheme", table->gpt_scheme->name)) return; if (g_handleattr_str(bp, "PART::type", G_PART_TYPE(table, entry, buf, sizeof(buf)))) return; if (!strcmp("GEOM::physpath", bp->bio_attribute)) { done_func = g_part_get_physpath_done; break; } if (!strcmp("GEOM::kerneldump", bp->bio_attribute)) { /* * Check that the partition is suitable for kernel * dumps. Typically only swap partitions should be * used. 
If the request comes from the nested scheme * we allow dumping there as well. */ if ((bp->bio_from == NULL || bp->bio_from->geom->class != &g_part_class) && G_PART_DUMPTO(table, entry) == 0) { g_io_deliver(bp, ENODEV); printf("GEOM_PART: Partition '%s' not suitable" " for kernel dumps (wrong type?)\n", pp->name); return; } gkd = (struct g_kerneldump *)bp->bio_data; if (gkd->offset >= pp->mediasize) { g_io_deliver(bp, EIO); return; } if (gkd->offset + gkd->length > pp->mediasize) gkd->length = pp->mediasize - gkd->offset; gkd->offset += entry->gpe_offset; } break; default: g_io_deliver(bp, EOPNOTSUPP); return; } bp2 = g_clone_bio(bp); if (bp2 == NULL) { g_io_deliver(bp, ENOMEM); return; } bp2->bio_done = done_func; g_io_request(bp2, cp); } static void g_part_init(struct g_class *mp) { TAILQ_INSERT_HEAD(&g_part_schemes, &g_part_null_scheme, scheme_list); } static void g_part_fini(struct g_class *mp) { TAILQ_REMOVE(&g_part_schemes, &g_part_null_scheme, scheme_list); } static void g_part_unload_event(void *arg, int flag) { struct g_consumer *cp; struct g_geom *gp; struct g_provider *pp; struct g_part_scheme *scheme; struct g_part_table *table; uintptr_t *xchg; int acc, error; if (flag == EV_CANCEL) return; xchg = arg; error = 0; scheme = (void *)(*xchg); g_topology_assert(); LIST_FOREACH(gp, &g_part_class.geom, geom) { table = gp->softc; if (table->gpt_scheme != scheme) continue; acc = 0; LIST_FOREACH(pp, &gp->provider, provider) acc += pp->acr + pp->acw + pp->ace; LIST_FOREACH(cp, &gp->consumer, consumer) acc += cp->acr + cp->acw + cp->ace; if (!acc) g_part_wither(gp, ENOSYS); else error = EBUSY; } if (!error) TAILQ_REMOVE(&g_part_schemes, scheme, scheme_list); *xchg = error; } int g_part_modevent(module_t mod, int type, struct g_part_scheme *scheme) { struct g_part_scheme *iter; uintptr_t arg; int error; error = 0; switch (type) { case MOD_LOAD: TAILQ_FOREACH(iter, &g_part_schemes, scheme_list) { if (scheme == iter) { printf("GEOM_PART: scheme %s is already " "registered!\n", scheme->name); break; } } if (iter == NULL) { TAILQ_INSERT_TAIL(&g_part_schemes, scheme, scheme_list); g_retaste(&g_part_class); } break; case MOD_UNLOAD: arg = (uintptr_t)scheme; error = g_waitfor_event(g_part_unload_event, &arg, M_WAITOK, NULL); if (error == 0) error = arg; break; default: error = EOPNOTSUPP; break; } return (error); } Index: head/sys/geom/part/g_part.h =================================================================== --- head/sys/geom/part/g_part.h (revision 364315) +++ head/sys/geom/part/g_part.h (revision 364316) @@ -1,246 +1,256 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 2006-2008 Marcel Moolenaar * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * $FreeBSD$ */ #ifndef _GEOM_PART_H_ #define _GEOM_PART_H_ #define G_PART_TRACE(args) g_trace args #define G_PART_PROBE_PRI_LOW -10 #define G_PART_PROBE_PRI_NORM -5 #define G_PART_PROBE_PRI_HIGH 0 enum g_part_alias { G_PART_ALIAS_APPLE_APFS, /* An Apple APFS partition. */ G_PART_ALIAS_APPLE_BOOT, /* An Apple boot partition entry. */ G_PART_ALIAS_APPLE_CORE_STORAGE,/* An Apple Core Storage partition. */ G_PART_ALIAS_APPLE_HFS, /* An HFS+ file system entry. */ G_PART_ALIAS_APPLE_LABEL, /* An Apple label partition entry. */ G_PART_ALIAS_APPLE_RAID, /* An Apple RAID partition entry. */ G_PART_ALIAS_APPLE_RAID_OFFLINE,/* An Apple RAID (offline) part entry.*/ G_PART_ALIAS_APPLE_TV_RECOVERY, /* An Apple TV recovery part entry. */ G_PART_ALIAS_APPLE_UFS, /* An Apple UFS partition entry. */ + G_PART_ALIAS_APPLE_ZFS, /* An Apple ZFS partition entry. + Also used for Solaris /usr partition. */ G_PART_ALIAS_BIOS_BOOT, /* A GRUB 2 boot partition entry. */ G_PART_ALIAS_CHROMEOS_FIRMWARE, /* A ChromeOS firmware part. entry. */ G_PART_ALIAS_CHROMEOS_KERNEL, /* A ChromeOS Kernel part. entry. */ G_PART_ALIAS_CHROMEOS_RESERVED, /* ChromeOS. Reserved for future use. */ G_PART_ALIAS_CHROMEOS_ROOT, /* A ChromeOS root part. entry. */ G_PART_ALIAS_DFBSD, /* A DfBSD label32 partition entry */ G_PART_ALIAS_DFBSD64, /* A DfBSD label64 partition entry */ G_PART_ALIAS_DFBSD_CCD, /* A DfBSD CCD partition entry */ G_PART_ALIAS_DFBSD_HAMMER, /* A DfBSD HAMMER FS partition entry */ G_PART_ALIAS_DFBSD_HAMMER2, /* A DfBSD HAMMER2 FS partition entry */ G_PART_ALIAS_DFBSD_LEGACY, /* A DfBSD legacy partition entry */ G_PART_ALIAS_DFBSD_SWAP, /* A DfBSD swap partition entry */ G_PART_ALIAS_DFBSD_UFS, /* A DfBSD UFS partition entry */ G_PART_ALIAS_DFBSD_VINUM, /* A DfBSD Vinum partition entry */ G_PART_ALIAS_EBR, /* A EBR partition entry. */ G_PART_ALIAS_EFI, /* A EFI system partition entry. */ G_PART_ALIAS_FREEBSD, /* A BSD labeled partition entry. */ G_PART_ALIAS_FREEBSD_BOOT, /* A FreeBSD boot partition entry. */ G_PART_ALIAS_FREEBSD_NANDFS, /* A FreeBSD nandfs partition entry. */ G_PART_ALIAS_FREEBSD_SWAP, /* A swap partition entry. */ G_PART_ALIAS_FREEBSD_UFS, /* A UFS/UFS2 file system entry. */ G_PART_ALIAS_FREEBSD_VINUM, /* A Vinum partition entry. */ G_PART_ALIAS_FREEBSD_ZFS, /* A ZFS file system entry. */ G_PART_ALIAS_LINUX_DATA, /* A Linux data partition entry. */ G_PART_ALIAS_LINUX_LVM, /* A Linux LVM partition entry. */ G_PART_ALIAS_LINUX_RAID, /* A Linux RAID partition entry. */ G_PART_ALIAS_LINUX_SWAP, /* A Linux swap partition entry. */ G_PART_ALIAS_MBR, /* A MBR (extended) partition entry. */ G_PART_ALIAS_MS_BASIC_DATA, /* A Microsoft Data part. entry. */ G_PART_ALIAS_MS_FAT16, /* A Microsoft FAT16 partition entry. */ G_PART_ALIAS_MS_FAT32, /* A Microsoft FAT32 partition entry. */ G_PART_ALIAS_MS_FAT32LBA, /* A Microsoft FAT32 LBA partition entry */ G_PART_ALIAS_MS_LDM_DATA, /* A Microsoft LDM Data part. entry. */ G_PART_ALIAS_MS_LDM_METADATA, /* A Microsoft LDM Metadata entry. 
*/ G_PART_ALIAS_MS_NTFS, /* A Microsoft NTFS partition entry */ G_PART_ALIAS_MS_RECOVERY, /* A Microsoft recovery part. entry. */ G_PART_ALIAS_MS_RESERVED, /* A Microsoft Reserved part. entry. */ G_PART_ALIAS_MS_SPACES, /* A Microsoft Spaces part. entry. */ G_PART_ALIAS_NETBSD_CCD, /* A NetBSD CCD partition entry. */ G_PART_ALIAS_NETBSD_CGD, /* A NetBSD CGD partition entry. */ G_PART_ALIAS_NETBSD_FFS, /* A NetBSD FFS partition entry. */ G_PART_ALIAS_NETBSD_LFS, /* A NetBSD LFS partition entry. */ G_PART_ALIAS_NETBSD_RAID, /* A NetBSD RAID partition entry. */ G_PART_ALIAS_NETBSD_SWAP, /* A NetBSD swap partition entry. */ G_PART_ALIAS_OPENBSD_DATA, /* An OpenBSD data partition entry. */ G_PART_ALIAS_PREP_BOOT, /* A PREP/CHRP boot partition entry. */ + G_PART_ALIAS_SOLARIS_BOOT, /* A Solaris boot partition entry. */ + G_PART_ALIAS_SOLARIS_ROOT, /* A Solaris root partition entry. */ + G_PART_ALIAS_SOLARIS_SWAP, /* A Solaris swap partition entry. */ + G_PART_ALIAS_SOLARIS_BACKUP, /* A Solaris backup partition entry. */ + G_PART_ALIAS_SOLARIS_VAR, /* A Solaris /var partition entry. */ + G_PART_ALIAS_SOLARIS_HOME, /* A Solaris /home partition entry. */ + G_PART_ALIAS_SOLARIS_ALTSEC, /* A Solaris alternate sector partition entry. */ + G_PART_ALIAS_SOLARIS_RESERVED, /* A Solaris reserved partition entry. */ G_PART_ALIAS_VMFS, /* A VMware VMFS partition entry */ G_PART_ALIAS_VMKDIAG, /* A VMware vmkDiagnostic partition entry */ G_PART_ALIAS_VMRESERVED, /* A VMware reserved partition entry */ G_PART_ALIAS_VMVSANHDR, /* A VMware vSAN header partition entry */ /* Keep the following last */ G_PART_ALIAS_COUNT }; const char *g_part_alias_name(enum g_part_alias); /* G_PART scheme (KOBJ class). */ struct g_part_scheme { KOBJ_CLASS_FIELDS; size_t gps_entrysz; int gps_minent; int gps_maxent; int gps_bootcodesz; TAILQ_ENTRY(g_part_scheme) scheme_list; }; struct g_part_entry { LIST_ENTRY(g_part_entry) gpe_entry; struct g_provider *gpe_pp; /* Corresponding provider. */ off_t gpe_offset; /* Byte offset. */ quad_t gpe_start; /* First LBA of partition. */ quad_t gpe_end; /* Last LBA of partition. */ int gpe_index; int gpe_created:1; /* Entry is newly created. */ int gpe_deleted:1; /* Entry has been deleted. */ int gpe_modified:1; /* Entry has been modified. */ int gpe_internal:1; /* Entry is not a used entry. */ }; /* G_PART table (KOBJ instance). */ struct g_part_table { KOBJ_FIELDS; struct g_part_scheme *gpt_scheme; struct g_geom *gpt_gp; LIST_HEAD(, g_part_entry) gpt_entry; quad_t gpt_first; /* First allocatable LBA */ quad_t gpt_last; /* Last allocatable LBA */ int gpt_entries; /* * gpt_smhead and gpt_smtail are bitmaps representing the first * 32 sectors on the disk (gpt_smhead) and the last 32 sectors * on the disk (gpt_smtail). These maps are used by the commit * verb to clear sectors previously used by a scheme after the * partitioning scheme has been destroyed. */ uint32_t gpt_smhead; uint32_t gpt_smtail; /* * gpt_sectors and gpt_heads are the fixed or synchesized number * of sectors per track and heads (resp) that make up a disks * geometry. This is to support partitioning schemes as well as * file systems that work on a geometry. The MBR scheme and the * MS-DOS (FAT) file system come to mind. * We keep track of whether the geometry is fixed or synchesized * so that a partitioning scheme can correct the synthesized * geometry, based on the on-disk metadata. */ uint32_t gpt_sectors; uint32_t gpt_heads; int gpt_depth; /* Sub-partitioning level. */ int gpt_isleaf:1; /* Cannot be sub-partitioned. 
*/ int gpt_created:1; /* Newly created. */ int gpt_modified:1; /* Table changes have been made. */ int gpt_opened:1; /* Permissions obtained. */ int gpt_fixgeom:1; /* Geometry is fixed. */ int gpt_corrupt:1; /* Table is corrupt. */ }; struct g_part_entry *g_part_new_entry(struct g_part_table *, int, quad_t, quad_t); enum g_part_ctl { G_PART_CTL_NONE, G_PART_CTL_ADD, G_PART_CTL_BOOTCODE, G_PART_CTL_COMMIT, G_PART_CTL_CREATE, G_PART_CTL_DELETE, G_PART_CTL_DESTROY, G_PART_CTL_MODIFY, G_PART_CTL_MOVE, G_PART_CTL_RECOVER, G_PART_CTL_RESIZE, G_PART_CTL_SET, G_PART_CTL_UNDO, G_PART_CTL_UNSET }; /* G_PART ctlreq parameters. */ #define G_PART_PARM_ENTRIES 0x0001 #define G_PART_PARM_FLAGS 0x0002 #define G_PART_PARM_GEOM 0x0004 #define G_PART_PARM_INDEX 0x0008 #define G_PART_PARM_LABEL 0x0010 #define G_PART_PARM_OUTPUT 0x0020 #define G_PART_PARM_PROVIDER 0x0040 #define G_PART_PARM_SCHEME 0x0080 #define G_PART_PARM_SIZE 0x0100 #define G_PART_PARM_START 0x0200 #define G_PART_PARM_TYPE 0x0400 #define G_PART_PARM_VERSION 0x0800 #define G_PART_PARM_BOOTCODE 0x1000 #define G_PART_PARM_ATTRIB 0x2000 #define G_PART_PARM_FORCE 0x4000 #define G_PART_PARM_SKIP_DSN 0x8000 struct g_part_parms { unsigned int gpp_parms; unsigned int gpp_entries; const char *gpp_flags; struct g_geom *gpp_geom; unsigned int gpp_index; const char *gpp_label; struct g_provider *gpp_provider; struct g_part_scheme *gpp_scheme; quad_t gpp_size; quad_t gpp_start; const char *gpp_type; unsigned int gpp_version; const void *gpp_codeptr; unsigned int gpp_codesize; const char *gpp_attrib; unsigned int gpp_force; unsigned int gpp_skip_dsn; }; void g_part_geometry_heads(off_t, u_int, off_t *, u_int *); int g_part_modevent(module_t, int, struct g_part_scheme *); extern char g_part_separator[]; #define G_PART_SCHEME_DECLARE(name) \ static int name##_modevent(module_t mod, int tp, void *d) \ { \ return (g_part_modevent(mod, tp, d)); \ } \ static moduledata_t name##_mod = { \ #name, \ name##_modevent, \ &name##_scheme \ }; \ DECLARE_MODULE(name, name##_mod, SI_SUB_DRIVERS, SI_ORDER_ANY); \ MODULE_DEPEND(name, g_part, 0, 0, 0) #endif /* !_GEOM_PART_H_ */ Index: head/sys/geom/part/g_part_gpt.c =================================================================== --- head/sys/geom/part/g_part_gpt.c (revision 364315) +++ head/sys/geom/part/g_part_gpt.c (revision 364316) @@ -1,1420 +1,1438 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 2002, 2005-2007, 2011 Marcel Moolenaar * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "g_part_if.h" FEATURE(geom_part_gpt, "GEOM partitioning class for GPT partitions support"); SYSCTL_DECL(_kern_geom_part); static SYSCTL_NODE(_kern_geom_part, OID_AUTO, gpt, CTLFLAG_RW | CTLFLAG_MPSAFE, 0, "GEOM_PART_GPT GUID Partition Table"); static u_int allow_nesting = 0; SYSCTL_UINT(_kern_geom_part_gpt, OID_AUTO, allow_nesting, CTLFLAG_RWTUN, &allow_nesting, 0, "Allow GPT to be nested inside other schemes"); CTASSERT(offsetof(struct gpt_hdr, padding) == 92); CTASSERT(sizeof(struct gpt_ent) == 128); #define EQUUID(a,b) (memcmp(a, b, sizeof(struct uuid)) == 0) #define MBRSIZE 512 enum gpt_elt { GPT_ELT_PRIHDR, GPT_ELT_PRITBL, GPT_ELT_SECHDR, GPT_ELT_SECTBL, GPT_ELT_COUNT }; enum gpt_state { GPT_STATE_UNKNOWN, /* Not determined. */ GPT_STATE_MISSING, /* No signature found. */ GPT_STATE_CORRUPT, /* Checksum mismatch. */ GPT_STATE_INVALID, /* Nonconformant/invalid. */ GPT_STATE_OK /* Perfectly fine. */ }; struct g_part_gpt_table { struct g_part_table base; u_char mbr[MBRSIZE]; struct gpt_hdr *hdr; quad_t lba[GPT_ELT_COUNT]; enum gpt_state state[GPT_ELT_COUNT]; int bootcamp; }; struct g_part_gpt_entry { struct g_part_entry base; struct gpt_ent ent; }; static void g_gpt_printf_utf16(struct sbuf *, uint16_t *, size_t); static void g_gpt_utf8_to_utf16(const uint8_t *, uint16_t *, size_t); static void g_gpt_set_defaults(struct g_part_table *, struct g_provider *); static int g_part_gpt_add(struct g_part_table *, struct g_part_entry *, struct g_part_parms *); static int g_part_gpt_bootcode(struct g_part_table *, struct g_part_parms *); static int g_part_gpt_create(struct g_part_table *, struct g_part_parms *); static int g_part_gpt_destroy(struct g_part_table *, struct g_part_parms *); static void g_part_gpt_dumpconf(struct g_part_table *, struct g_part_entry *, struct sbuf *, const char *); static int g_part_gpt_dumpto(struct g_part_table *, struct g_part_entry *); static int g_part_gpt_modify(struct g_part_table *, struct g_part_entry *, struct g_part_parms *); static const char *g_part_gpt_name(struct g_part_table *, struct g_part_entry *, char *, size_t); static int g_part_gpt_probe(struct g_part_table *, struct g_consumer *); static int g_part_gpt_read(struct g_part_table *, struct g_consumer *); static int g_part_gpt_setunset(struct g_part_table *table, struct g_part_entry *baseentry, const char *attrib, unsigned int set); static const char *g_part_gpt_type(struct g_part_table *, struct g_part_entry *, char *, size_t); static int g_part_gpt_write(struct g_part_table *, struct g_consumer *); static int g_part_gpt_resize(struct g_part_table *, struct g_part_entry *, struct g_part_parms *); static int g_part_gpt_recover(struct g_part_table *); static kobj_method_t g_part_gpt_methods[] = { KOBJMETHOD(g_part_add, g_part_gpt_add), KOBJMETHOD(g_part_bootcode, g_part_gpt_bootcode), 
KOBJMETHOD(g_part_create, g_part_gpt_create), KOBJMETHOD(g_part_destroy, g_part_gpt_destroy), KOBJMETHOD(g_part_dumpconf, g_part_gpt_dumpconf), KOBJMETHOD(g_part_dumpto, g_part_gpt_dumpto), KOBJMETHOD(g_part_modify, g_part_gpt_modify), KOBJMETHOD(g_part_resize, g_part_gpt_resize), KOBJMETHOD(g_part_name, g_part_gpt_name), KOBJMETHOD(g_part_probe, g_part_gpt_probe), KOBJMETHOD(g_part_read, g_part_gpt_read), KOBJMETHOD(g_part_recover, g_part_gpt_recover), KOBJMETHOD(g_part_setunset, g_part_gpt_setunset), KOBJMETHOD(g_part_type, g_part_gpt_type), KOBJMETHOD(g_part_write, g_part_gpt_write), { 0, 0 } }; static struct g_part_scheme g_part_gpt_scheme = { "GPT", g_part_gpt_methods, sizeof(struct g_part_gpt_table), .gps_entrysz = sizeof(struct g_part_gpt_entry), .gps_minent = 128, .gps_maxent = 4096, .gps_bootcodesz = MBRSIZE, }; G_PART_SCHEME_DECLARE(g_part_gpt); MODULE_VERSION(geom_part_gpt, 0); static struct uuid gpt_uuid_apple_apfs = GPT_ENT_TYPE_APPLE_APFS; static struct uuid gpt_uuid_apple_boot = GPT_ENT_TYPE_APPLE_BOOT; static struct uuid gpt_uuid_apple_core_storage = GPT_ENT_TYPE_APPLE_CORE_STORAGE; static struct uuid gpt_uuid_apple_hfs = GPT_ENT_TYPE_APPLE_HFS; static struct uuid gpt_uuid_apple_label = GPT_ENT_TYPE_APPLE_LABEL; static struct uuid gpt_uuid_apple_raid = GPT_ENT_TYPE_APPLE_RAID; static struct uuid gpt_uuid_apple_raid_offline = GPT_ENT_TYPE_APPLE_RAID_OFFLINE; static struct uuid gpt_uuid_apple_tv_recovery = GPT_ENT_TYPE_APPLE_TV_RECOVERY; static struct uuid gpt_uuid_apple_ufs = GPT_ENT_TYPE_APPLE_UFS; +static struct uuid gpt_uuid_apple_zfs = GPT_ENT_TYPE_APPLE_ZFS; static struct uuid gpt_uuid_bios_boot = GPT_ENT_TYPE_BIOS_BOOT; static struct uuid gpt_uuid_chromeos_firmware = GPT_ENT_TYPE_CHROMEOS_FIRMWARE; static struct uuid gpt_uuid_chromeos_kernel = GPT_ENT_TYPE_CHROMEOS_KERNEL; static struct uuid gpt_uuid_chromeos_reserved = GPT_ENT_TYPE_CHROMEOS_RESERVED; static struct uuid gpt_uuid_chromeos_root = GPT_ENT_TYPE_CHROMEOS_ROOT; static struct uuid gpt_uuid_dfbsd_ccd = GPT_ENT_TYPE_DRAGONFLY_CCD; static struct uuid gpt_uuid_dfbsd_hammer = GPT_ENT_TYPE_DRAGONFLY_HAMMER; static struct uuid gpt_uuid_dfbsd_hammer2 = GPT_ENT_TYPE_DRAGONFLY_HAMMER2; static struct uuid gpt_uuid_dfbsd_label32 = GPT_ENT_TYPE_DRAGONFLY_LABEL32; static struct uuid gpt_uuid_dfbsd_label64 = GPT_ENT_TYPE_DRAGONFLY_LABEL64; static struct uuid gpt_uuid_dfbsd_legacy = GPT_ENT_TYPE_DRAGONFLY_LEGACY; static struct uuid gpt_uuid_dfbsd_swap = GPT_ENT_TYPE_DRAGONFLY_SWAP; static struct uuid gpt_uuid_dfbsd_ufs1 = GPT_ENT_TYPE_DRAGONFLY_UFS1; static struct uuid gpt_uuid_dfbsd_vinum = GPT_ENT_TYPE_DRAGONFLY_VINUM; static struct uuid gpt_uuid_efi = GPT_ENT_TYPE_EFI; static struct uuid gpt_uuid_freebsd = GPT_ENT_TYPE_FREEBSD; static struct uuid gpt_uuid_freebsd_boot = GPT_ENT_TYPE_FREEBSD_BOOT; static struct uuid gpt_uuid_freebsd_nandfs = GPT_ENT_TYPE_FREEBSD_NANDFS; static struct uuid gpt_uuid_freebsd_swap = GPT_ENT_TYPE_FREEBSD_SWAP; static struct uuid gpt_uuid_freebsd_ufs = GPT_ENT_TYPE_FREEBSD_UFS; static struct uuid gpt_uuid_freebsd_vinum = GPT_ENT_TYPE_FREEBSD_VINUM; static struct uuid gpt_uuid_freebsd_zfs = GPT_ENT_TYPE_FREEBSD_ZFS; static struct uuid gpt_uuid_linux_data = GPT_ENT_TYPE_LINUX_DATA; static struct uuid gpt_uuid_linux_lvm = GPT_ENT_TYPE_LINUX_LVM; static struct uuid gpt_uuid_linux_raid = GPT_ENT_TYPE_LINUX_RAID; static struct uuid gpt_uuid_linux_swap = GPT_ENT_TYPE_LINUX_SWAP; static struct uuid gpt_uuid_mbr = GPT_ENT_TYPE_MBR; static struct uuid gpt_uuid_ms_basic_data = GPT_ENT_TYPE_MS_BASIC_DATA; 
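/*
 * (Illustrative sketch only; not part of the committed sources.)  Every type
 * UUID declared in this block, including the Apple ZFS and Solaris entries
 * added by this change, is consumed the same way: gpt_map_type() below, for
 * example, walks the gpt_uuid_alias_match[] table and compares the on-disk
 * type GUID byte-for-byte -- EQUUID() is simply a memcmp() over
 * sizeof(struct uuid).  A minimal model of that lookup follows;
 * example_match[] and example_lookup() are hypothetical names, and the
 * two-entry table is an arbitrary subset chosen for illustration.
 */
#if 0	/* example only, never compiled */
static const struct {
	const struct uuid	*type;	/* GPT partition type GUID */
	int			 alias;	/* corresponding G_PART_ALIAS_* */
} example_match[] = {
	{ &gpt_uuid_solaris_root,	G_PART_ALIAS_SOLARIS_ROOT },
	{ &gpt_uuid_apple_zfs,		G_PART_ALIAS_APPLE_ZFS },
};

static int
example_lookup(const struct uuid *t)
{
	u_int i;

	for (i = 0; i < nitems(example_match); i++) {
		/* Same equality test as EQUUID(): all GUID bytes match. */
		if (memcmp(t, example_match[i].type,
		    sizeof(struct uuid)) == 0)
			return (example_match[i].alias);
	}
	return (-1);		/* unknown type */
}
#endif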
static struct uuid gpt_uuid_ms_ldm_data = GPT_ENT_TYPE_MS_LDM_DATA; static struct uuid gpt_uuid_ms_ldm_metadata = GPT_ENT_TYPE_MS_LDM_METADATA; static struct uuid gpt_uuid_ms_recovery = GPT_ENT_TYPE_MS_RECOVERY; static struct uuid gpt_uuid_ms_reserved = GPT_ENT_TYPE_MS_RESERVED; static struct uuid gpt_uuid_ms_spaces = GPT_ENT_TYPE_MS_SPACES; static struct uuid gpt_uuid_netbsd_ccd = GPT_ENT_TYPE_NETBSD_CCD; static struct uuid gpt_uuid_netbsd_cgd = GPT_ENT_TYPE_NETBSD_CGD; static struct uuid gpt_uuid_netbsd_ffs = GPT_ENT_TYPE_NETBSD_FFS; static struct uuid gpt_uuid_netbsd_lfs = GPT_ENT_TYPE_NETBSD_LFS; static struct uuid gpt_uuid_netbsd_raid = GPT_ENT_TYPE_NETBSD_RAID; static struct uuid gpt_uuid_netbsd_swap = GPT_ENT_TYPE_NETBSD_SWAP; static struct uuid gpt_uuid_openbsd_data = GPT_ENT_TYPE_OPENBSD_DATA; static struct uuid gpt_uuid_prep_boot = GPT_ENT_TYPE_PREP_BOOT; +static struct uuid gpt_uuid_solaris_boot = GPT_ENT_TYPE_SOLARIS_BOOT; +static struct uuid gpt_uuid_solaris_root = GPT_ENT_TYPE_SOLARIS_ROOT; +static struct uuid gpt_uuid_solaris_swap = GPT_ENT_TYPE_SOLARIS_SWAP; +static struct uuid gpt_uuid_solaris_backup = GPT_ENT_TYPE_SOLARIS_BACKUP; +static struct uuid gpt_uuid_solaris_var = GPT_ENT_TYPE_SOLARIS_VAR; +static struct uuid gpt_uuid_solaris_home = GPT_ENT_TYPE_SOLARIS_HOME; +static struct uuid gpt_uuid_solaris_altsec = GPT_ENT_TYPE_SOLARIS_ALTSEC; +static struct uuid gpt_uuid_solaris_reserved = GPT_ENT_TYPE_SOLARIS_RESERVED; static struct uuid gpt_uuid_unused = GPT_ENT_TYPE_UNUSED; static struct uuid gpt_uuid_vmfs = GPT_ENT_TYPE_VMFS; static struct uuid gpt_uuid_vmkdiag = GPT_ENT_TYPE_VMKDIAG; static struct uuid gpt_uuid_vmreserved = GPT_ENT_TYPE_VMRESERVED; static struct uuid gpt_uuid_vmvsanhdr = GPT_ENT_TYPE_VMVSANHDR; static struct g_part_uuid_alias { struct uuid *uuid; int alias; int mbrtype; } gpt_uuid_alias_match[] = { { &gpt_uuid_apple_apfs, G_PART_ALIAS_APPLE_APFS, 0 }, { &gpt_uuid_apple_boot, G_PART_ALIAS_APPLE_BOOT, 0xab }, { &gpt_uuid_apple_core_storage, G_PART_ALIAS_APPLE_CORE_STORAGE, 0 }, { &gpt_uuid_apple_hfs, G_PART_ALIAS_APPLE_HFS, 0xaf }, { &gpt_uuid_apple_label, G_PART_ALIAS_APPLE_LABEL, 0 }, { &gpt_uuid_apple_raid, G_PART_ALIAS_APPLE_RAID, 0 }, { &gpt_uuid_apple_raid_offline, G_PART_ALIAS_APPLE_RAID_OFFLINE, 0 }, { &gpt_uuid_apple_tv_recovery, G_PART_ALIAS_APPLE_TV_RECOVERY, 0 }, { &gpt_uuid_apple_ufs, G_PART_ALIAS_APPLE_UFS, 0 }, + { &gpt_uuid_apple_zfs, G_PART_ALIAS_APPLE_ZFS, 0 }, { &gpt_uuid_bios_boot, G_PART_ALIAS_BIOS_BOOT, 0 }, { &gpt_uuid_chromeos_firmware, G_PART_ALIAS_CHROMEOS_FIRMWARE, 0 }, { &gpt_uuid_chromeos_kernel, G_PART_ALIAS_CHROMEOS_KERNEL, 0 }, { &gpt_uuid_chromeos_reserved, G_PART_ALIAS_CHROMEOS_RESERVED, 0 }, { &gpt_uuid_chromeos_root, G_PART_ALIAS_CHROMEOS_ROOT, 0 }, { &gpt_uuid_dfbsd_ccd, G_PART_ALIAS_DFBSD_CCD, 0 }, { &gpt_uuid_dfbsd_hammer, G_PART_ALIAS_DFBSD_HAMMER, 0 }, { &gpt_uuid_dfbsd_hammer2, G_PART_ALIAS_DFBSD_HAMMER2, 0 }, { &gpt_uuid_dfbsd_label32, G_PART_ALIAS_DFBSD, 0xa5 }, { &gpt_uuid_dfbsd_label64, G_PART_ALIAS_DFBSD64, 0xa5 }, { &gpt_uuid_dfbsd_legacy, G_PART_ALIAS_DFBSD_LEGACY, 0 }, { &gpt_uuid_dfbsd_swap, G_PART_ALIAS_DFBSD_SWAP, 0 }, { &gpt_uuid_dfbsd_ufs1, G_PART_ALIAS_DFBSD_UFS, 0 }, { &gpt_uuid_dfbsd_vinum, G_PART_ALIAS_DFBSD_VINUM, 0 }, { &gpt_uuid_efi, G_PART_ALIAS_EFI, 0xee }, { &gpt_uuid_freebsd, G_PART_ALIAS_FREEBSD, 0xa5 }, { &gpt_uuid_freebsd_boot, G_PART_ALIAS_FREEBSD_BOOT, 0 }, { &gpt_uuid_freebsd_nandfs, G_PART_ALIAS_FREEBSD_NANDFS, 0 }, { &gpt_uuid_freebsd_swap, G_PART_ALIAS_FREEBSD_SWAP, 0 }, { 
&gpt_uuid_freebsd_ufs, G_PART_ALIAS_FREEBSD_UFS, 0 }, { &gpt_uuid_freebsd_vinum, G_PART_ALIAS_FREEBSD_VINUM, 0 }, { &gpt_uuid_freebsd_zfs, G_PART_ALIAS_FREEBSD_ZFS, 0 }, { &gpt_uuid_linux_data, G_PART_ALIAS_LINUX_DATA, 0x0b }, { &gpt_uuid_linux_lvm, G_PART_ALIAS_LINUX_LVM, 0 }, { &gpt_uuid_linux_raid, G_PART_ALIAS_LINUX_RAID, 0 }, { &gpt_uuid_linux_swap, G_PART_ALIAS_LINUX_SWAP, 0 }, { &gpt_uuid_mbr, G_PART_ALIAS_MBR, 0 }, { &gpt_uuid_ms_basic_data, G_PART_ALIAS_MS_BASIC_DATA, 0x0b }, { &gpt_uuid_ms_ldm_data, G_PART_ALIAS_MS_LDM_DATA, 0 }, { &gpt_uuid_ms_ldm_metadata, G_PART_ALIAS_MS_LDM_METADATA, 0 }, { &gpt_uuid_ms_recovery, G_PART_ALIAS_MS_RECOVERY, 0 }, { &gpt_uuid_ms_reserved, G_PART_ALIAS_MS_RESERVED, 0 }, { &gpt_uuid_ms_spaces, G_PART_ALIAS_MS_SPACES, 0 }, { &gpt_uuid_netbsd_ccd, G_PART_ALIAS_NETBSD_CCD, 0 }, { &gpt_uuid_netbsd_cgd, G_PART_ALIAS_NETBSD_CGD, 0 }, { &gpt_uuid_netbsd_ffs, G_PART_ALIAS_NETBSD_FFS, 0 }, { &gpt_uuid_netbsd_lfs, G_PART_ALIAS_NETBSD_LFS, 0 }, { &gpt_uuid_netbsd_raid, G_PART_ALIAS_NETBSD_RAID, 0 }, { &gpt_uuid_netbsd_swap, G_PART_ALIAS_NETBSD_SWAP, 0 }, { &gpt_uuid_openbsd_data, G_PART_ALIAS_OPENBSD_DATA, 0 }, { &gpt_uuid_prep_boot, G_PART_ALIAS_PREP_BOOT, 0x41 }, + { &gpt_uuid_solaris_boot, G_PART_ALIAS_SOLARIS_BOOT, 0 }, + { &gpt_uuid_solaris_root, G_PART_ALIAS_SOLARIS_ROOT, 0 }, + { &gpt_uuid_solaris_swap, G_PART_ALIAS_SOLARIS_SWAP, 0 }, + { &gpt_uuid_solaris_backup, G_PART_ALIAS_SOLARIS_BACKUP, 0 }, + { &gpt_uuid_solaris_var, G_PART_ALIAS_SOLARIS_VAR, 0 }, + { &gpt_uuid_solaris_home, G_PART_ALIAS_SOLARIS_HOME, 0 }, + { &gpt_uuid_solaris_altsec, G_PART_ALIAS_SOLARIS_ALTSEC, 0 }, + { &gpt_uuid_solaris_reserved, G_PART_ALIAS_SOLARIS_RESERVED, 0 }, { &gpt_uuid_vmfs, G_PART_ALIAS_VMFS, 0 }, { &gpt_uuid_vmkdiag, G_PART_ALIAS_VMKDIAG, 0 }, { &gpt_uuid_vmreserved, G_PART_ALIAS_VMRESERVED, 0 }, { &gpt_uuid_vmvsanhdr, G_PART_ALIAS_VMVSANHDR, 0 }, { NULL, 0, 0 } }; static int gpt_write_mbr_entry(u_char *mbr, int idx, int typ, quad_t start, quad_t end) { if (typ == 0 || start > UINT32_MAX || end > UINT32_MAX) return (EINVAL); mbr += DOSPARTOFF + idx * DOSPARTSIZE; mbr[0] = 0; if (start == 1) { /* * Treat the PMBR partition specially to maximize * interoperability with BIOSes. */ mbr[1] = mbr[3] = 0; mbr[2] = 2; } else mbr[1] = mbr[2] = mbr[3] = 0xff; mbr[4] = typ; mbr[5] = mbr[6] = mbr[7] = 0xff; le32enc(mbr + 8, (uint32_t)start); le32enc(mbr + 12, (uint32_t)(end - start + 1)); return (0); } static int gpt_map_type(struct uuid *t) { struct g_part_uuid_alias *uap; for (uap = &gpt_uuid_alias_match[0]; uap->uuid; uap++) { if (EQUUID(t, uap->uuid)) return (uap->mbrtype); } return (0); } static void gpt_create_pmbr(struct g_part_gpt_table *table, struct g_provider *pp) { bzero(table->mbr + DOSPARTOFF, DOSPARTSIZE * NDOSPART); gpt_write_mbr_entry(table->mbr, 0, 0xee, 1, MIN(pp->mediasize / pp->sectorsize - 1, UINT32_MAX)); le16enc(table->mbr + DOSMAGICOFFSET, DOSMAGIC); } /* * Under Boot Camp the PMBR partition (type 0xEE) doesn't cover the * whole disk anymore. Rather, it covers the GPT table and the EFI * system partition only. This way the HFS+ partition and any FAT * partitions can be added to the MBR without creating an overlap. 
*/ static int gpt_is_bootcamp(struct g_part_gpt_table *table, const char *provname) { uint8_t *p; p = table->mbr + DOSPARTOFF; if (p[4] != 0xee || le32dec(p + 8) != 1) return (0); p += DOSPARTSIZE; if (p[4] != 0xaf) return (0); printf("GEOM: %s: enabling Boot Camp\n", provname); return (1); } static void gpt_update_bootcamp(struct g_part_table *basetable, struct g_provider *pp) { struct g_part_entry *baseentry; struct g_part_gpt_entry *entry; struct g_part_gpt_table *table; int bootable, error, index, slices, typ; table = (struct g_part_gpt_table *)basetable; bootable = -1; for (index = 0; index < NDOSPART; index++) { if (table->mbr[DOSPARTOFF + DOSPARTSIZE * index]) bootable = index; } bzero(table->mbr + DOSPARTOFF, DOSPARTSIZE * NDOSPART); slices = 0; LIST_FOREACH(baseentry, &basetable->gpt_entry, gpe_entry) { if (baseentry->gpe_deleted) continue; index = baseentry->gpe_index - 1; if (index >= NDOSPART) continue; entry = (struct g_part_gpt_entry *)baseentry; switch (index) { case 0: /* This must be the EFI system partition. */ if (!EQUUID(&entry->ent.ent_type, &gpt_uuid_efi)) goto disable; error = gpt_write_mbr_entry(table->mbr, index, 0xee, 1ull, entry->ent.ent_lba_end); break; case 1: /* This must be the HFS+ partition. */ if (!EQUUID(&entry->ent.ent_type, &gpt_uuid_apple_hfs)) goto disable; error = gpt_write_mbr_entry(table->mbr, index, 0xaf, entry->ent.ent_lba_start, entry->ent.ent_lba_end); break; default: typ = gpt_map_type(&entry->ent.ent_type); error = gpt_write_mbr_entry(table->mbr, index, typ, entry->ent.ent_lba_start, entry->ent.ent_lba_end); break; } if (error) continue; if (index == bootable) table->mbr[DOSPARTOFF + DOSPARTSIZE * index] = 0x80; slices |= 1 << index; } if ((slices & 3) == 3) return; disable: table->bootcamp = 0; gpt_create_pmbr(table, pp); } static struct gpt_hdr * gpt_read_hdr(struct g_part_gpt_table *table, struct g_consumer *cp, enum gpt_elt elt) { struct gpt_hdr *buf, *hdr; struct g_provider *pp; quad_t lba, last; int error; uint32_t crc, sz; pp = cp->provider; last = (pp->mediasize / pp->sectorsize) - 1; table->state[elt] = GPT_STATE_MISSING; /* * If the primary header is valid look for secondary * header in AlternateLBA, otherwise in the last medium's LBA. */ if (elt == GPT_ELT_SECHDR) { if (table->state[GPT_ELT_PRIHDR] != GPT_STATE_OK) table->lba[elt] = last; } else table->lba[elt] = 1; buf = g_read_data(cp, table->lba[elt] * pp->sectorsize, pp->sectorsize, &error); if (buf == NULL) return (NULL); hdr = NULL; if (memcmp(buf->hdr_sig, GPT_HDR_SIG, sizeof(buf->hdr_sig)) != 0) goto fail; table->state[elt] = GPT_STATE_CORRUPT; sz = le32toh(buf->hdr_size); if (sz < 92 || sz > pp->sectorsize) goto fail; hdr = g_malloc(sz, M_WAITOK | M_ZERO); bcopy(buf, hdr, sz); hdr->hdr_size = sz; crc = le32toh(buf->hdr_crc_self); buf->hdr_crc_self = 0; if (crc32(buf, sz) != crc) goto fail; hdr->hdr_crc_self = crc; table->state[elt] = GPT_STATE_INVALID; hdr->hdr_revision = le32toh(buf->hdr_revision); if (hdr->hdr_revision < GPT_HDR_REVISION) goto fail; hdr->hdr_lba_self = le64toh(buf->hdr_lba_self); if (hdr->hdr_lba_self != table->lba[elt]) goto fail; hdr->hdr_lba_alt = le64toh(buf->hdr_lba_alt); if (hdr->hdr_lba_alt == hdr->hdr_lba_self || hdr->hdr_lba_alt > last) goto fail; /* Check the managed area. 
*/ hdr->hdr_lba_start = le64toh(buf->hdr_lba_start); if (hdr->hdr_lba_start < 2 || hdr->hdr_lba_start >= last) goto fail; hdr->hdr_lba_end = le64toh(buf->hdr_lba_end); if (hdr->hdr_lba_end < hdr->hdr_lba_start || hdr->hdr_lba_end >= last) goto fail; /* Check the table location and size of the table. */ hdr->hdr_entries = le32toh(buf->hdr_entries); hdr->hdr_entsz = le32toh(buf->hdr_entsz); if (hdr->hdr_entries == 0 || hdr->hdr_entsz < 128 || (hdr->hdr_entsz & 7) != 0) goto fail; hdr->hdr_lba_table = le64toh(buf->hdr_lba_table); if (hdr->hdr_lba_table < 2 || hdr->hdr_lba_table >= last) goto fail; if (hdr->hdr_lba_table >= hdr->hdr_lba_start && hdr->hdr_lba_table <= hdr->hdr_lba_end) goto fail; lba = hdr->hdr_lba_table + howmany(hdr->hdr_entries * hdr->hdr_entsz, pp->sectorsize) - 1; if (lba >= last) goto fail; if (lba >= hdr->hdr_lba_start && lba <= hdr->hdr_lba_end) goto fail; table->state[elt] = GPT_STATE_OK; le_uuid_dec(&buf->hdr_uuid, &hdr->hdr_uuid); hdr->hdr_crc_table = le32toh(buf->hdr_crc_table); /* save LBA for secondary header */ if (elt == GPT_ELT_PRIHDR) table->lba[GPT_ELT_SECHDR] = hdr->hdr_lba_alt; g_free(buf); return (hdr); fail: if (hdr != NULL) g_free(hdr); g_free(buf); return (NULL); } static struct gpt_ent * gpt_read_tbl(struct g_part_gpt_table *table, struct g_consumer *cp, enum gpt_elt elt, struct gpt_hdr *hdr) { struct g_provider *pp; struct gpt_ent *ent, *tbl; char *buf, *p; unsigned int idx, sectors, tblsz, size; int error; if (hdr == NULL) return (NULL); pp = cp->provider; table->lba[elt] = hdr->hdr_lba_table; table->state[elt] = GPT_STATE_MISSING; tblsz = hdr->hdr_entries * hdr->hdr_entsz; sectors = howmany(tblsz, pp->sectorsize); buf = g_malloc(sectors * pp->sectorsize, M_WAITOK | M_ZERO); for (idx = 0; idx < sectors; idx += MAXPHYS / pp->sectorsize) { size = (sectors - idx > MAXPHYS / pp->sectorsize) ? MAXPHYS: (sectors - idx) * pp->sectorsize; p = g_read_data(cp, (table->lba[elt] + idx) * pp->sectorsize, size, &error); if (p == NULL) { g_free(buf); return (NULL); } bcopy(p, buf + idx * pp->sectorsize, size); g_free(p); } table->state[elt] = GPT_STATE_CORRUPT; if (crc32(buf, tblsz) != hdr->hdr_crc_table) { g_free(buf); return (NULL); } table->state[elt] = GPT_STATE_OK; tbl = g_malloc(hdr->hdr_entries * sizeof(struct gpt_ent), M_WAITOK | M_ZERO); for (idx = 0, ent = tbl, p = buf; idx < hdr->hdr_entries; idx++, ent++, p += hdr->hdr_entsz) { le_uuid_dec(p, &ent->ent_type); le_uuid_dec(p + 16, &ent->ent_uuid); ent->ent_lba_start = le64dec(p + 32); ent->ent_lba_end = le64dec(p + 40); ent->ent_attr = le64dec(p + 48); /* Keep UTF-16 in little-endian. */ bcopy(p + 56, ent->ent_name, sizeof(ent->ent_name)); } g_free(buf); return (tbl); } static int gpt_matched_hdrs(struct gpt_hdr *pri, struct gpt_hdr *sec) { if (pri == NULL || sec == NULL) return (0); if (!EQUUID(&pri->hdr_uuid, &sec->hdr_uuid)) return (0); return ((pri->hdr_revision == sec->hdr_revision && pri->hdr_size == sec->hdr_size && pri->hdr_lba_start == sec->hdr_lba_start && pri->hdr_lba_end == sec->hdr_lba_end && pri->hdr_entries == sec->hdr_entries && pri->hdr_entsz == sec->hdr_entsz && pri->hdr_crc_table == sec->hdr_crc_table) ? 
1 : 0); } static int gpt_parse_type(const char *type, struct uuid *uuid) { struct uuid tmp; const char *alias; int error; struct g_part_uuid_alias *uap; if (type[0] == '!') { error = parse_uuid(type + 1, &tmp); if (error) return (error); if (EQUUID(&tmp, &gpt_uuid_unused)) return (EINVAL); *uuid = tmp; return (0); } for (uap = &gpt_uuid_alias_match[0]; uap->uuid; uap++) { alias = g_part_alias_name(uap->alias); if (!strcasecmp(type, alias)) { *uuid = *uap->uuid; return (0); } } return (EINVAL); } static int g_part_gpt_add(struct g_part_table *basetable, struct g_part_entry *baseentry, struct g_part_parms *gpp) { struct g_part_gpt_entry *entry; int error; entry = (struct g_part_gpt_entry *)baseentry; error = gpt_parse_type(gpp->gpp_type, &entry->ent.ent_type); if (error) return (error); kern_uuidgen(&entry->ent.ent_uuid, 1); entry->ent.ent_lba_start = baseentry->gpe_start; entry->ent.ent_lba_end = baseentry->gpe_end; if (baseentry->gpe_deleted) { entry->ent.ent_attr = 0; bzero(entry->ent.ent_name, sizeof(entry->ent.ent_name)); } if (gpp->gpp_parms & G_PART_PARM_LABEL) g_gpt_utf8_to_utf16(gpp->gpp_label, entry->ent.ent_name, sizeof(entry->ent.ent_name) / sizeof(entry->ent.ent_name[0])); return (0); } static int g_part_gpt_bootcode(struct g_part_table *basetable, struct g_part_parms *gpp) { struct g_part_gpt_table *table; size_t codesz; codesz = DOSPARTOFF; table = (struct g_part_gpt_table *)basetable; bzero(table->mbr, codesz); codesz = MIN(codesz, gpp->gpp_codesize); if (codesz > 0) bcopy(gpp->gpp_codeptr, table->mbr, codesz); return (0); } static int g_part_gpt_create(struct g_part_table *basetable, struct g_part_parms *gpp) { struct g_provider *pp; struct g_part_gpt_table *table; size_t tblsz; /* Our depth should be 0 unless nesting was explicitly enabled. */ if (!allow_nesting && basetable->gpt_depth != 0) return (ENXIO); table = (struct g_part_gpt_table *)basetable; pp = gpp->gpp_provider; tblsz = howmany(basetable->gpt_entries * sizeof(struct gpt_ent), pp->sectorsize); if (pp->sectorsize < MBRSIZE || pp->mediasize < (3 + 2 * tblsz + basetable->gpt_entries) * pp->sectorsize) return (ENOSPC); gpt_create_pmbr(table, pp); /* Allocate space for the header */ table->hdr = g_malloc(sizeof(struct gpt_hdr), M_WAITOK | M_ZERO); bcopy(GPT_HDR_SIG, table->hdr->hdr_sig, sizeof(table->hdr->hdr_sig)); table->hdr->hdr_revision = GPT_HDR_REVISION; table->hdr->hdr_size = offsetof(struct gpt_hdr, padding); kern_uuidgen(&table->hdr->hdr_uuid, 1); table->hdr->hdr_entries = basetable->gpt_entries; table->hdr->hdr_entsz = sizeof(struct gpt_ent); g_gpt_set_defaults(basetable, pp); return (0); } static int g_part_gpt_destroy(struct g_part_table *basetable, struct g_part_parms *gpp) { struct g_part_gpt_table *table; struct g_provider *pp; table = (struct g_part_gpt_table *)basetable; pp = LIST_FIRST(&basetable->gpt_gp->consumer)->provider; g_free(table->hdr); table->hdr = NULL; /* * Wipe the first 2 sectors and last one to clear the partitioning. * Wipe sectors only if they have valid metadata. 
*/ if (table->state[GPT_ELT_PRIHDR] == GPT_STATE_OK) basetable->gpt_smhead |= 3; if (table->state[GPT_ELT_SECHDR] == GPT_STATE_OK && table->lba[GPT_ELT_SECHDR] == pp->mediasize / pp->sectorsize - 1) basetable->gpt_smtail |= 1; return (0); } static void g_part_gpt_dumpconf(struct g_part_table *table, struct g_part_entry *baseentry, struct sbuf *sb, const char *indent) { struct g_part_gpt_entry *entry; entry = (struct g_part_gpt_entry *)baseentry; if (indent == NULL) { /* conftxt: libdisk compatibility */ sbuf_cat(sb, " xs GPT xt "); sbuf_printf_uuid(sb, &entry->ent.ent_type); } else if (entry != NULL) { /* confxml: partition entry information */ sbuf_printf(sb, "%s\n"); if (entry->ent.ent_attr & GPT_ENT_ATTR_BOOTME) sbuf_printf(sb, "%sbootme\n", indent); if (entry->ent.ent_attr & GPT_ENT_ATTR_BOOTONCE) { sbuf_printf(sb, "%sbootonce\n", indent); } if (entry->ent.ent_attr & GPT_ENT_ATTR_BOOTFAILED) { sbuf_printf(sb, "%sbootfailed\n", indent); } sbuf_printf(sb, "%s", indent); sbuf_printf_uuid(sb, &entry->ent.ent_type); sbuf_cat(sb, "\n"); sbuf_printf(sb, "%s", indent); sbuf_printf_uuid(sb, &entry->ent.ent_uuid); sbuf_cat(sb, "\n"); sbuf_printf(sb, "%s", indent); sbuf_printf(sb, "HD(%d,GPT,", entry->base.gpe_index); sbuf_printf_uuid(sb, &entry->ent.ent_uuid); sbuf_printf(sb, ",%#jx,%#jx)", (intmax_t)entry->base.gpe_start, (intmax_t)(entry->base.gpe_end - entry->base.gpe_start + 1)); sbuf_cat(sb, "\n"); } else { /* confxml: scheme information */ } } static int g_part_gpt_dumpto(struct g_part_table *table, struct g_part_entry *baseentry) { struct g_part_gpt_entry *entry; entry = (struct g_part_gpt_entry *)baseentry; return ((EQUUID(&entry->ent.ent_type, &gpt_uuid_freebsd_swap) || EQUUID(&entry->ent.ent_type, &gpt_uuid_linux_swap) || EQUUID(&entry->ent.ent_type, &gpt_uuid_dfbsd_swap)) ? 1 : 0); } static int g_part_gpt_modify(struct g_part_table *basetable, struct g_part_entry *baseentry, struct g_part_parms *gpp) { struct g_part_gpt_entry *entry; int error; entry = (struct g_part_gpt_entry *)baseentry; if (gpp->gpp_parms & G_PART_PARM_TYPE) { error = gpt_parse_type(gpp->gpp_type, &entry->ent.ent_type); if (error) return (error); } if (gpp->gpp_parms & G_PART_PARM_LABEL) g_gpt_utf8_to_utf16(gpp->gpp_label, entry->ent.ent_name, sizeof(entry->ent.ent_name) / sizeof(entry->ent.ent_name[0])); return (0); } static int g_part_gpt_resize(struct g_part_table *basetable, struct g_part_entry *baseentry, struct g_part_parms *gpp) { struct g_part_gpt_entry *entry; if (baseentry == NULL) return (g_part_gpt_recover(basetable)); entry = (struct g_part_gpt_entry *)baseentry; baseentry->gpe_end = baseentry->gpe_start + gpp->gpp_size - 1; entry->ent.ent_lba_end = baseentry->gpe_end; return (0); } static const char * g_part_gpt_name(struct g_part_table *table, struct g_part_entry *baseentry, char *buf, size_t bufsz) { struct g_part_gpt_entry *entry; char c; entry = (struct g_part_gpt_entry *)baseentry; c = (EQUUID(&entry->ent.ent_type, &gpt_uuid_freebsd)) ? 's' : 'p'; snprintf(buf, bufsz, "%c%d", c, baseentry->gpe_index); return (buf); } static int g_part_gpt_probe(struct g_part_table *table, struct g_consumer *cp) { struct g_provider *pp; u_char *buf; int error, index, pri, res; /* Our depth should be 0 unless nesting was explicitly enabled. */ if (!allow_nesting && table->gpt_depth != 0) return (ENXIO); pp = cp->provider; /* * Sanity-check the provider. Since the first sector on the provider * must be a PMBR and a PMBR is 512 bytes large, the sector size * must be at least 512 bytes. 
Also, since the theoretical minimum * number of sectors needed by GPT is 6, any medium that has less * than 6 sectors is never going to be able to hold a GPT. The * number 6 comes from: * 1 sector for the PMBR * 2 sectors for the GPT headers (each 1 sector) * 2 sectors for the GPT tables (each 1 sector) * 1 sector for an actual partition * It's better to catch this pathological case early than behaving * pathologically later on... */ if (pp->sectorsize < MBRSIZE || pp->mediasize < 6 * pp->sectorsize) return (ENOSPC); /* * Check that there's a MBR or a PMBR. If it's a PMBR, we return * as the highest priority on a match, otherwise we assume some * GPT-unaware tool has destroyed the GPT by recreating a MBR and * we really want the MBR scheme to take precedence. */ buf = g_read_data(cp, 0L, pp->sectorsize, &error); if (buf == NULL) return (error); res = le16dec(buf + DOSMAGICOFFSET); pri = G_PART_PROBE_PRI_LOW; if (res == DOSMAGIC) { for (index = 0; index < NDOSPART; index++) { if (buf[DOSPARTOFF + DOSPARTSIZE * index + 4] == 0xee) pri = G_PART_PROBE_PRI_HIGH; } g_free(buf); /* Check that there's a primary header. */ buf = g_read_data(cp, pp->sectorsize, pp->sectorsize, &error); if (buf == NULL) return (error); res = memcmp(buf, GPT_HDR_SIG, 8); g_free(buf); if (res == 0) return (pri); } else g_free(buf); /* No primary? Check that there's a secondary. */ buf = g_read_data(cp, pp->mediasize - pp->sectorsize, pp->sectorsize, &error); if (buf == NULL) return (error); res = memcmp(buf, GPT_HDR_SIG, 8); g_free(buf); return ((res == 0) ? pri : ENXIO); } static int g_part_gpt_read(struct g_part_table *basetable, struct g_consumer *cp) { struct gpt_hdr *prihdr, *sechdr; struct gpt_ent *tbl, *pritbl, *sectbl; struct g_provider *pp; struct g_part_gpt_table *table; struct g_part_gpt_entry *entry; u_char *buf; uint64_t last; int error, index; table = (struct g_part_gpt_table *)basetable; pp = cp->provider; last = (pp->mediasize / pp->sectorsize) - 1; /* Read the PMBR */ buf = g_read_data(cp, 0, pp->sectorsize, &error); if (buf == NULL) return (error); bcopy(buf, table->mbr, MBRSIZE); g_free(buf); /* Read the primary header and table. */ prihdr = gpt_read_hdr(table, cp, GPT_ELT_PRIHDR); if (table->state[GPT_ELT_PRIHDR] == GPT_STATE_OK) { pritbl = gpt_read_tbl(table, cp, GPT_ELT_PRITBL, prihdr); } else { table->state[GPT_ELT_PRITBL] = GPT_STATE_MISSING; pritbl = NULL; } /* Read the secondary header and table. */ sechdr = gpt_read_hdr(table, cp, GPT_ELT_SECHDR); if (table->state[GPT_ELT_SECHDR] == GPT_STATE_OK) { sectbl = gpt_read_tbl(table, cp, GPT_ELT_SECTBL, sechdr); } else { table->state[GPT_ELT_SECTBL] = GPT_STATE_MISSING; sectbl = NULL; } /* Fail if we haven't got any good tables at all. */ if (table->state[GPT_ELT_PRITBL] != GPT_STATE_OK && table->state[GPT_ELT_SECTBL] != GPT_STATE_OK) { printf("GEOM: %s: corrupt or invalid GPT detected.\n", pp->name); printf("GEOM: %s: GPT rejected -- may not be recoverable.\n", pp->name); if (prihdr != NULL) g_free(prihdr); if (pritbl != NULL) g_free(pritbl); if (sechdr != NULL) g_free(sechdr); if (sectbl != NULL) g_free(sectbl); return (EINVAL); } /* * If both headers are good but they disagree with each other, * then invalidate one. We prefer to keep the primary header, * unless the primary table is corrupt. 
*/ if (table->state[GPT_ELT_PRIHDR] == GPT_STATE_OK && table->state[GPT_ELT_SECHDR] == GPT_STATE_OK && !gpt_matched_hdrs(prihdr, sechdr)) { if (table->state[GPT_ELT_PRITBL] == GPT_STATE_OK) { table->state[GPT_ELT_SECHDR] = GPT_STATE_INVALID; table->state[GPT_ELT_SECTBL] = GPT_STATE_MISSING; g_free(sechdr); sechdr = NULL; } else { table->state[GPT_ELT_PRIHDR] = GPT_STATE_INVALID; table->state[GPT_ELT_PRITBL] = GPT_STATE_MISSING; g_free(prihdr); prihdr = NULL; } } if (table->state[GPT_ELT_PRITBL] != GPT_STATE_OK) { printf("GEOM: %s: the primary GPT table is corrupt or " "invalid.\n", pp->name); printf("GEOM: %s: using the secondary instead -- recovery " "strongly advised.\n", pp->name); table->hdr = sechdr; basetable->gpt_corrupt = 1; if (prihdr != NULL) g_free(prihdr); tbl = sectbl; if (pritbl != NULL) g_free(pritbl); } else { if (table->state[GPT_ELT_SECTBL] != GPT_STATE_OK) { printf("GEOM: %s: the secondary GPT table is corrupt " "or invalid.\n", pp->name); printf("GEOM: %s: using the primary only -- recovery " "suggested.\n", pp->name); basetable->gpt_corrupt = 1; } else if (table->lba[GPT_ELT_SECHDR] != last) { printf( "GEOM: %s: the secondary GPT header is not in " "the last LBA.\n", pp->name); basetable->gpt_corrupt = 1; } table->hdr = prihdr; if (sechdr != NULL) g_free(sechdr); tbl = pritbl; if (sectbl != NULL) g_free(sectbl); } basetable->gpt_first = table->hdr->hdr_lba_start; basetable->gpt_last = table->hdr->hdr_lba_end; basetable->gpt_entries = table->hdr->hdr_entries; for (index = basetable->gpt_entries - 1; index >= 0; index--) { if (EQUUID(&tbl[index].ent_type, &gpt_uuid_unused)) continue; entry = (struct g_part_gpt_entry *)g_part_new_entry( basetable, index + 1, tbl[index].ent_lba_start, tbl[index].ent_lba_end); entry->ent = tbl[index]; } g_free(tbl); /* * Under Mac OS X, the MBR mirrors the first 4 GPT partitions * if (and only if) any FAT32 or FAT16 partitions have been * created. This happens irrespective of whether Boot Camp is * used/enabled, though it's generally understood to be done * to support legacy Windows under Boot Camp. We refer to this * mirroring simply as Boot Camp. We try to detect Boot Camp * so that we can update the MBR if and when GPT changes have * been made. Note that we do not enable Boot Camp if not * previously enabled because we can't assume that we're on a * Mac alongside Mac OS X. */ table->bootcamp = gpt_is_bootcamp(table, pp->name); return (0); } static int g_part_gpt_recover(struct g_part_table *basetable) { struct g_part_gpt_table *table; struct g_provider *pp; table = (struct g_part_gpt_table *)basetable; pp = LIST_FIRST(&basetable->gpt_gp->consumer)->provider; gpt_create_pmbr(table, pp); g_gpt_set_defaults(basetable, pp); basetable->gpt_corrupt = 0; return (0); } static int g_part_gpt_setunset(struct g_part_table *basetable, struct g_part_entry *baseentry, const char *attrib, unsigned int set) { struct g_part_gpt_entry *entry; struct g_part_gpt_table *table; struct g_provider *pp; uint8_t *p; uint64_t attr; int i; table = (struct g_part_gpt_table *)basetable; entry = (struct g_part_gpt_entry *)baseentry; if (strcasecmp(attrib, "active") == 0) { if (table->bootcamp) { /* The active flag must be set on a valid entry. */ if (entry == NULL) return (ENXIO); if (baseentry->gpe_index > NDOSPART) return (EINVAL); for (i = 0; i < NDOSPART; i++) { p = &table->mbr[DOSPARTOFF + i * DOSPARTSIZE]; p[0] = (i == baseentry->gpe_index - 1) ? ((set) ? 0x80 : 0) : 0; } } else { /* The PMBR is marked as active without an entry. 
*/ if (entry != NULL) return (ENXIO); for (i = 0; i < NDOSPART; i++) { p = &table->mbr[DOSPARTOFF + i * DOSPARTSIZE]; p[0] = (p[4] == 0xee) ? ((set) ? 0x80 : 0) : 0; } } return (0); } else if (strcasecmp(attrib, "lenovofix") == 0) { /* * Write the 0xee GPT entry to slot #1 (2nd slot) in the pMBR. * This workaround allows Lenovo X220, T420, T520, etc to boot * from GPT Partitions in BIOS mode. */ if (entry != NULL) return (ENXIO); pp = LIST_FIRST(&basetable->gpt_gp->consumer)->provider; bzero(table->mbr + DOSPARTOFF, DOSPARTSIZE * NDOSPART); gpt_write_mbr_entry(table->mbr, ((set) ? 1 : 0), 0xee, 1, MIN(pp->mediasize / pp->sectorsize - 1, UINT32_MAX)); return (0); } if (entry == NULL) return (ENODEV); attr = 0; if (strcasecmp(attrib, "bootme") == 0) { attr |= GPT_ENT_ATTR_BOOTME; } else if (strcasecmp(attrib, "bootonce") == 0) { attr |= GPT_ENT_ATTR_BOOTONCE; if (set) attr |= GPT_ENT_ATTR_BOOTME; } else if (strcasecmp(attrib, "bootfailed") == 0) { /* * It should only be possible to unset BOOTFAILED, but it might * be useful for test purposes to also be able to set it. */ attr |= GPT_ENT_ATTR_BOOTFAILED; } if (attr == 0) return (EINVAL); if (set) attr = entry->ent.ent_attr | attr; else attr = entry->ent.ent_attr & ~attr; if (attr != entry->ent.ent_attr) { entry->ent.ent_attr = attr; if (!baseentry->gpe_created) baseentry->gpe_modified = 1; } return (0); } static const char * g_part_gpt_type(struct g_part_table *basetable, struct g_part_entry *baseentry, char *buf, size_t bufsz) { struct g_part_gpt_entry *entry; struct uuid *type; struct g_part_uuid_alias *uap; entry = (struct g_part_gpt_entry *)baseentry; type = &entry->ent.ent_type; for (uap = &gpt_uuid_alias_match[0]; uap->uuid; uap++) if (EQUUID(type, uap->uuid)) return (g_part_alias_name(uap->alias)); buf[0] = '!'; snprintf_uuid(buf + 1, bufsz - 1, type); return (buf); } static int g_part_gpt_write(struct g_part_table *basetable, struct g_consumer *cp) { unsigned char *buf, *bp; struct g_provider *pp; struct g_part_entry *baseentry; struct g_part_gpt_entry *entry; struct g_part_gpt_table *table; size_t tblsz; uint32_t crc; int error, index; pp = cp->provider; table = (struct g_part_gpt_table *)basetable; tblsz = howmany(table->hdr->hdr_entries * table->hdr->hdr_entsz, pp->sectorsize); /* Reconstruct the MBR from the GPT if under Boot Camp. */ if (table->bootcamp) gpt_update_bootcamp(basetable, pp); /* Write the PMBR */ buf = g_malloc(pp->sectorsize, M_WAITOK | M_ZERO); bcopy(table->mbr, buf, MBRSIZE); error = g_write_data(cp, 0, buf, pp->sectorsize); g_free(buf); if (error) return (error); /* Allocate space for the header and entries. 
*/ buf = g_malloc((tblsz + 1) * pp->sectorsize, M_WAITOK | M_ZERO); memcpy(buf, table->hdr->hdr_sig, sizeof(table->hdr->hdr_sig)); le32enc(buf + 8, table->hdr->hdr_revision); le32enc(buf + 12, table->hdr->hdr_size); le64enc(buf + 40, table->hdr->hdr_lba_start); le64enc(buf + 48, table->hdr->hdr_lba_end); le_uuid_enc(buf + 56, &table->hdr->hdr_uuid); le32enc(buf + 80, table->hdr->hdr_entries); le32enc(buf + 84, table->hdr->hdr_entsz); LIST_FOREACH(baseentry, &basetable->gpt_entry, gpe_entry) { if (baseentry->gpe_deleted) continue; entry = (struct g_part_gpt_entry *)baseentry; index = baseentry->gpe_index - 1; bp = buf + pp->sectorsize + table->hdr->hdr_entsz * index; le_uuid_enc(bp, &entry->ent.ent_type); le_uuid_enc(bp + 16, &entry->ent.ent_uuid); le64enc(bp + 32, entry->ent.ent_lba_start); le64enc(bp + 40, entry->ent.ent_lba_end); le64enc(bp + 48, entry->ent.ent_attr); memcpy(bp + 56, entry->ent.ent_name, sizeof(entry->ent.ent_name)); } crc = crc32(buf + pp->sectorsize, table->hdr->hdr_entries * table->hdr->hdr_entsz); le32enc(buf + 88, crc); /* Write primary meta-data. */ le32enc(buf + 16, 0); /* hdr_crc_self. */ le64enc(buf + 24, table->lba[GPT_ELT_PRIHDR]); /* hdr_lba_self. */ le64enc(buf + 32, table->lba[GPT_ELT_SECHDR]); /* hdr_lba_alt. */ le64enc(buf + 72, table->lba[GPT_ELT_PRITBL]); /* hdr_lba_table. */ crc = crc32(buf, table->hdr->hdr_size); le32enc(buf + 16, crc); for (index = 0; index < tblsz; index += MAXPHYS / pp->sectorsize) { error = g_write_data(cp, (table->lba[GPT_ELT_PRITBL] + index) * pp->sectorsize, buf + (index + 1) * pp->sectorsize, (tblsz - index > MAXPHYS / pp->sectorsize) ? MAXPHYS: (tblsz - index) * pp->sectorsize); if (error) goto out; } error = g_write_data(cp, table->lba[GPT_ELT_PRIHDR] * pp->sectorsize, buf, pp->sectorsize); if (error) goto out; /* Write secondary meta-data. */ le32enc(buf + 16, 0); /* hdr_crc_self. */ le64enc(buf + 24, table->lba[GPT_ELT_SECHDR]); /* hdr_lba_self. */ le64enc(buf + 32, table->lba[GPT_ELT_PRIHDR]); /* hdr_lba_alt. */ le64enc(buf + 72, table->lba[GPT_ELT_SECTBL]); /* hdr_lba_table. */ crc = crc32(buf, table->hdr->hdr_size); le32enc(buf + 16, crc); for (index = 0; index < tblsz; index += MAXPHYS / pp->sectorsize) { error = g_write_data(cp, (table->lba[GPT_ELT_SECTBL] + index) * pp->sectorsize, buf + (index + 1) * pp->sectorsize, (tblsz - index > MAXPHYS / pp->sectorsize) ? 
MAXPHYS: (tblsz - index) * pp->sectorsize); if (error) goto out; } error = g_write_data(cp, table->lba[GPT_ELT_SECHDR] * pp->sectorsize, buf, pp->sectorsize); out: g_free(buf); return (error); } static void g_gpt_set_defaults(struct g_part_table *basetable, struct g_provider *pp) { struct g_part_entry *baseentry; struct g_part_gpt_entry *entry; struct g_part_gpt_table *table; quad_t start, end, min, max; quad_t lba, last; size_t spb, tblsz; table = (struct g_part_gpt_table *)basetable; last = pp->mediasize / pp->sectorsize - 1; tblsz = howmany(basetable->gpt_entries * sizeof(struct gpt_ent), pp->sectorsize); table->lba[GPT_ELT_PRIHDR] = 1; table->lba[GPT_ELT_PRITBL] = 2; table->lba[GPT_ELT_SECHDR] = last; table->lba[GPT_ELT_SECTBL] = last - tblsz; table->state[GPT_ELT_PRIHDR] = GPT_STATE_OK; table->state[GPT_ELT_PRITBL] = GPT_STATE_OK; table->state[GPT_ELT_SECHDR] = GPT_STATE_OK; table->state[GPT_ELT_SECTBL] = GPT_STATE_OK; max = start = 2 + tblsz; min = end = last - tblsz - 1; LIST_FOREACH(baseentry, &basetable->gpt_entry, gpe_entry) { if (baseentry->gpe_deleted) continue; entry = (struct g_part_gpt_entry *)baseentry; if (entry->ent.ent_lba_start < min) min = entry->ent.ent_lba_start; if (entry->ent.ent_lba_end > max) max = entry->ent.ent_lba_end; } spb = 4096 / pp->sectorsize; if (spb > 1) { lba = start + ((start % spb) ? spb - start % spb : 0); if (lba <= min) start = lba; lba = end - (end + 1) % spb; if (max <= lba) end = lba; } table->hdr->hdr_lba_start = start; table->hdr->hdr_lba_end = end; basetable->gpt_first = start; basetable->gpt_last = end; } static void g_gpt_printf_utf16(struct sbuf *sb, uint16_t *str, size_t len) { u_int bo; uint32_t ch; uint16_t c; bo = LITTLE_ENDIAN; /* GPT is little-endian */ while (len > 0 && *str != 0) { ch = (bo == BIG_ENDIAN) ? be16toh(*str) : le16toh(*str); str++, len--; if ((ch & 0xf800) == 0xd800) { if (len > 0) { c = (bo == BIG_ENDIAN) ? be16toh(*str) : le16toh(*str); str++, len--; } else c = 0xfffd; if ((ch & 0x400) == 0 && (c & 0xfc00) == 0xdc00) { ch = ((ch & 0x3ff) << 10) + (c & 0x3ff); ch += 0x10000; } else ch = 0xfffd; } else if (ch == 0xfffe) { /* BOM (U+FEFF) swapped. */ bo = (bo == BIG_ENDIAN) ? LITTLE_ENDIAN : BIG_ENDIAN; continue; } else if (ch == 0xfeff) /* BOM (U+FEFF) unswapped. */ continue; /* Write the Unicode character in UTF-8 */ if (ch < 0x80) g_conf_printf_escaped(sb, "%c", ch); else if (ch < 0x800) g_conf_printf_escaped(sb, "%c%c", 0xc0 | (ch >> 6), 0x80 | (ch & 0x3f)); else if (ch < 0x10000) g_conf_printf_escaped(sb, "%c%c%c", 0xe0 | (ch >> 12), 0x80 | ((ch >> 6) & 0x3f), 0x80 | (ch & 0x3f)); else if (ch < 0x200000) g_conf_printf_escaped(sb, "%c%c%c%c", 0xf0 | (ch >> 18), 0x80 | ((ch >> 12) & 0x3f), 0x80 | ((ch >> 6) & 0x3f), 0x80 | (ch & 0x3f)); } } static void g_gpt_utf8_to_utf16(const uint8_t *s8, uint16_t *s16, size_t s16len) { size_t s16idx, s8idx; uint32_t utfchar; unsigned int c, utfbytes; s8idx = s16idx = 0; utfchar = 0; utfbytes = 0; bzero(s16, s16len << 1); while (s8[s8idx] != 0 && s16idx < s16len) { c = s8[s8idx++]; if ((c & 0xc0) != 0x80) { /* Initial characters. */ if (utfbytes != 0) { /* Incomplete encoding of previous char. */ s16[s16idx++] = htole16(0xfffd); } if ((c & 0xf8) == 0xf0) { utfchar = c & 0x07; utfbytes = 3; } else if ((c & 0xf0) == 0xe0) { utfchar = c & 0x0f; utfbytes = 2; } else if ((c & 0xe0) == 0xc0) { utfchar = c & 0x1f; utfbytes = 1; } else { utfchar = c & 0x7f; utfbytes = 0; } } else { /* Followup characters. 
*/ if (utfbytes > 0) { utfchar = (utfchar << 6) + (c & 0x3f); utfbytes--; } else if (utfbytes == 0) utfbytes = ~0; } /* * Write the complete Unicode character as UTF-16 when we * have all the UTF-8 characters collected. */ if (utfbytes == 0) { /* * If we need to write 2 UTF-16 characters, but * we only have room for 1, then we truncate the * string by writing a 0 instead. */ if (utfchar >= 0x10000 && s16idx < s16len - 1) { s16[s16idx++] = htole16(0xd800 | ((utfchar >> 10) - 0x40)); s16[s16idx++] = htole16(0xdc00 | (utfchar & 0x3ff)); } else s16[s16idx++] = (utfchar >= 0x10000) ? 0 : htole16(utfchar); } } /* * If our input string was truncated, append an invalid encoding * character to the output string. */ if (utfbytes != 0 && s16idx < s16len) s16[s16idx++] = htole16(0xfffd); } Index: head/sys/sys/disk/gpt.h =================================================================== --- head/sys/sys/disk/gpt.h (revision 364315) +++ head/sys/sys/disk/gpt.h (revision 364316) @@ -1,239 +1,256 @@ /*- * Copyright (c) 2002 Marcel Moolenaar * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * $FreeBSD$ */ #ifndef _SYS_DISK_GPT_H_ #define _SYS_DISK_GPT_H_ /* * Applications can define GPT_UUID_TYPE if they want the GPT structures * to use a particular type definition for UUIDs/GUIDs. This header uses * a generic (DCE 1.1 compatible) definition otherwise. */ #ifndef GPT_UUID_TYPE struct gpt_uuid { uint32_t time_low; uint16_t time_mid; uint16_t time_hi_and_version; uint8_t clock_seq_hi_and_reserved; uint8_t clock_seq_low; uint8_t node[6]; }; #define GPT_UUID_TYPE struct gpt_uuid #endif /* !GPT_UUID_TYPE */ typedef GPT_UUID_TYPE gpt_uuid_t; #ifdef CTASSERT CTASSERT(sizeof(gpt_uuid_t) == 16); #endif struct gpt_hdr { char hdr_sig[8]; #define GPT_HDR_SIG "EFI PART" uint32_t hdr_revision; #define GPT_HDR_REVISION 0x00010000 uint32_t hdr_size; uint32_t hdr_crc_self; uint32_t __reserved; uint64_t hdr_lba_self; uint64_t hdr_lba_alt; uint64_t hdr_lba_start; uint64_t hdr_lba_end; gpt_uuid_t hdr_uuid; uint64_t hdr_lba_table; uint32_t hdr_entries; uint32_t hdr_entsz; uint32_t hdr_crc_table; /* * The header as defined in the EFI spec is not a multiple of 8 bytes * and given that the alignment requirement is on an 8 byte boundary, * padding will happen.
We make the padding explicit so that we can * correct the value returned by sizeof() when we put the size of the * header in field hdr_size, or otherwise use offsetof(). */ uint32_t padding; }; #ifdef CTASSERT CTASSERT(offsetof(struct gpt_hdr, padding) == 92); #endif struct gpt_ent { gpt_uuid_t ent_type; gpt_uuid_t ent_uuid; uint64_t ent_lba_start; uint64_t ent_lba_end; uint64_t ent_attr; #define GPT_ENT_ATTR_PLATFORM (1ULL << 0) #define GPT_ENT_ATTR_BOOTME (1ULL << 59) #define GPT_ENT_ATTR_BOOTONCE (1ULL << 58) #define GPT_ENT_ATTR_BOOTFAILED (1ULL << 57) uint16_t ent_name[36]; /* UTF-16. */ }; #ifdef CTASSERT CTASSERT(sizeof(struct gpt_ent) == 128); #endif /* CTASSERT */ #define GPT_ENT_TYPE_UNUSED \ {0x00000000,0x0000,0x0000,0x00,0x00,{0x00,0x00,0x00,0x00,0x00,0x00}} #define GPT_ENT_TYPE_EFI \ {0xc12a7328,0xf81f,0x11d2,0xba,0x4b,{0x00,0xa0,0xc9,0x3e,0xc9,0x3b}} #define GPT_ENT_TYPE_MBR \ {0x024dee41,0x33e7,0x11d3,0x9d,0x69,{0x00,0x08,0xc7,0x81,0xf3,0x9f}} #define GPT_ENT_TYPE_FREEBSD \ {0x516e7cb4,0x6ecf,0x11d6,0x8f,0xf8,{0x00,0x02,0x2d,0x09,0x71,0x2b}} #define GPT_ENT_TYPE_FREEBSD_BOOT \ {0x83bd6b9d,0x7f41,0x11dc,0xbe,0x0b,{0x00,0x15,0x60,0xb8,0x4f,0x0f}} #define GPT_ENT_TYPE_FREEBSD_NANDFS \ {0x74ba7dd9,0xa689,0x11e1,0xbd,0x04,{0x00,0xe0,0x81,0x28,0x6a,0xcf}} #define GPT_ENT_TYPE_FREEBSD_SWAP \ {0x516e7cb5,0x6ecf,0x11d6,0x8f,0xf8,{0x00,0x02,0x2d,0x09,0x71,0x2b}} #define GPT_ENT_TYPE_FREEBSD_UFS \ {0x516e7cb6,0x6ecf,0x11d6,0x8f,0xf8,{0x00,0x02,0x2d,0x09,0x71,0x2b}} #define GPT_ENT_TYPE_FREEBSD_VINUM \ {0x516e7cb8,0x6ecf,0x11d6,0x8f,0xf8,{0x00,0x02,0x2d,0x09,0x71,0x2b}} #define GPT_ENT_TYPE_FREEBSD_ZFS \ {0x516e7cba,0x6ecf,0x11d6,0x8f,0xf8,{0x00,0x02,0x2d,0x09,0x71,0x2b}} #define GPT_ENT_TYPE_PREP_BOOT \ {0x9e1a2d38,0xc612,0x4316,0xaa,0x26,{0x8b,0x49,0x52,0x1e,0x5a,0x8b}} /* * The following are unused but documented here to avoid reuse. * * GPT_ENT_TYPE_FREEBSD_UFS2 \ * {0x516e7cb7,0x6ecf,0x11d6,0x8f,0xf8,{0x00,0x02,0x2d,0x09,0x71,0x2b}} */ /* * Foreign partition types that we're likely to encounter. Note that Linux * apparently chose to share data partitions with MS. I don't know what the * advantage might be. I can see how sharing swap partitions is advantageous * though.
*/ #define GPT_ENT_TYPE_MS_BASIC_DATA \ {0xebd0a0a2,0xb9e5,0x4433,0x87,0xc0,{0x68,0xb6,0xb7,0x26,0x99,0xc7}} #define GPT_ENT_TYPE_MS_LDM_DATA \ {0xaf9b60a0,0x1431,0x4f62,0xbc,0x68,{0x33,0x11,0x71,0x4a,0x69,0xad}} #define GPT_ENT_TYPE_MS_LDM_METADATA \ {0x5808c8aa,0x7e8f,0x42e0,0x85,0xd2,{0xe1,0xe9,0x04,0x34,0xcf,0xb3}} #define GPT_ENT_TYPE_MS_RECOVERY \ {0xde94bba4,0x06d1,0x4d40,0xa1,0x6a,{0xbf,0xd5,0x01,0x79,0xd6,0xac}} #define GPT_ENT_TYPE_MS_RESERVED \ {0xe3c9e316,0x0b5c,0x4db8,0x81,0x7d,{0xf9,0x2d,0xf0,0x02,0x15,0xae}} #define GPT_ENT_TYPE_MS_SPACES \ {0xe75caf8f,0xf680,0x4cee,0xaf,0xa3,{0xb0,0x01,0xe5,0x6e,0xfc,0x2d}} #define GPT_ENT_TYPE_LINUX_DATA \ {0x0fc63daf,0x8483,0x4772,0x8e,0x79,{0x3d,0x69,0xd8,0x47,0x7d,0xe4}} #define GPT_ENT_TYPE_LINUX_RAID \ {0xa19d880f,0x05fc,0x4d3b,0xa0,0x06,{0x74,0x3f,0x0f,0x84,0x91,0x1e}} #define GPT_ENT_TYPE_LINUX_SWAP \ {0x0657fd6d,0xa4ab,0x43c4,0x84,0xe5,{0x09,0x33,0xc8,0x4b,0x4f,0x4f}} #define GPT_ENT_TYPE_LINUX_LVM \ {0xe6d6d379,0xf507,0x44c2,0xa2,0x3c,{0x23,0x8f,0x2a,0x3d,0xf9,0x28}} #define GPT_ENT_TYPE_VMFS \ {0xaa31e02a,0x400f,0x11db,0x95,0x90,{0x00,0x0c,0x29,0x11,0xd1,0xb8}} #define GPT_ENT_TYPE_VMKDIAG \ {0x9d275380,0x40ad,0x11db,0xbf,0x97,{0x00,0x0c,0x29,0x11,0xd1,0xb8}} #define GPT_ENT_TYPE_VMRESERVED \ {0x9198effc,0x31c0,0x11db,0x8f,0x78,{0x00,0x0c,0x29,0x11,0xd1,0xb8}} #define GPT_ENT_TYPE_VMVSANHDR \ {0x381cfccc,0x7288,0x11e0,0x92,0xee,{0x00,0x0c,0x29,0x11,0xd0,0xb2}} #define GPT_ENT_TYPE_APPLE_BOOT \ {0x426F6F74,0x0000,0x11aa,0xaa,0x11,{0x00,0x30,0x65,0x43,0xec,0xac}} #define GPT_ENT_TYPE_APPLE_HFS \ {0x48465300,0x0000,0x11aa,0xaa,0x11,{0x00,0x30,0x65,0x43,0xec,0xac}} #define GPT_ENT_TYPE_APPLE_UFS \ {0x55465300,0x0000,0x11aa,0xaa,0x11,{0x00,0x30,0x65,0x43,0xec,0xac}} #define GPT_ENT_TYPE_APPLE_ZFS \ {0x6a898cc3,0x1dd2,0x11b2,0x99,0xa6,{0x08,0x00,0x20,0x73,0x66,0x31}} #define GPT_ENT_TYPE_APPLE_RAID \ {0x52414944,0x0000,0x11aa,0xaa,0x22,{0x00,0x30,0x65,0x43,0xec,0xac}} #define GPT_ENT_TYPE_APPLE_RAID_OFFLINE \ {0x52414944,0x5f4f,0x11aa,0xaa,0x22,{0x00,0x30,0x65,0x43,0xec,0xac}} #define GPT_ENT_TYPE_APPLE_LABEL \ {0x4C616265,0x6c00,0x11aa,0xaa,0x11,{0x00,0x30,0x65,0x43,0xec,0xac}} #define GPT_ENT_TYPE_APPLE_TV_RECOVERY \ {0x5265636f,0x7665,0x11AA,0xaa,0x11,{0x00,0x30,0x65,0x43,0xec,0xac}} #define GPT_ENT_TYPE_APPLE_CORE_STORAGE \ {0x53746f72,0x6167,0x11AA,0xaa,0x11,{0x00,0x30,0x65,0x43,0xec,0xac}} #define GPT_ENT_TYPE_APPLE_APFS \ {0x7c3457ef,0x0000,0x11aa,0xaa,0x11,{0x00,0x30,0x65,0x43,0xec,0xac}} #define GPT_ENT_TYPE_NETBSD_FFS \ {0x49f48d5a,0xb10e,0x11dc,0xb9,0x9b,{0x00,0x19,0xd1,0x87,0x96,0x48}} #define GPT_ENT_TYPE_NETBSD_LFS \ {0x49f48d82,0xb10e,0x11dc,0xb9,0x9b,{0x00,0x19,0xd1,0x87,0x96,0x48}} #define GPT_ENT_TYPE_NETBSD_SWAP \ {0x49f48d32,0xb10e,0x11dc,0xB9,0x9B,{0x00,0x19,0xd1,0x87,0x96,0x48}} #define GPT_ENT_TYPE_NETBSD_RAID \ {0x49f48daa,0xb10e,0x11dc,0xb9,0x9b,{0x00,0x19,0xd1,0x87,0x96,0x48}} #define GPT_ENT_TYPE_NETBSD_CCD \ {0x2db519c4,0xb10f,0x11dc,0xb9,0x9b,{0x00,0x19,0xd1,0x87,0x96,0x48}} #define GPT_ENT_TYPE_NETBSD_CGD \ {0x2db519ec,0xb10f,0x11dc,0xb9,0x9b,{0x00,0x19,0xd1,0x87,0x96,0x48}} #define GPT_ENT_TYPE_DRAGONFLY_LABEL32 \ {0x9d087404,0x1ca5,0x11dc,0x88,0x17,{0x01,0x30,0x1b,0xb8,0xa9,0xf5}} #define GPT_ENT_TYPE_DRAGONFLY_SWAP \ {0x9d58fdbd,0x1ca5,0x11dc,0x88,0x17,{0x01,0x30,0x1b,0xb8,0xa9,0xf5}} #define GPT_ENT_TYPE_DRAGONFLY_UFS1 \ {0x9d94ce7c,0x1ca5,0x11dc,0x88,0x17,{0x01,0x30,0x1b,0xb8,0xa9,0xf5}} #define GPT_ENT_TYPE_DRAGONFLY_VINUM \ {0x9dd4478f,0x1ca5,0x11dc,0x88,0x17,{0x01,0x30,0x1b,0xb8,0xa9,0xf5}} #define 
GPT_ENT_TYPE_DRAGONFLY_CCD \ {0xdbd5211b,0x1ca5,0x11dc,0x88,0x17,{0x01,0x30,0x1b,0xb8,0xa9,0xf5}} #define GPT_ENT_TYPE_DRAGONFLY_LABEL64 \ {0x3d48ce54,0x1d16,0x11dc,0x86,0x96,{0x01,0x30,0x1b,0xb8,0xa9,0xf5}} #define GPT_ENT_TYPE_DRAGONFLY_LEGACY \ {0xbd215ab2,0x1d16,0x11dc,0x86,0x96,{0x01,0x30,0x1b,0xb8,0xa9,0xf5}} #define GPT_ENT_TYPE_DRAGONFLY_HAMMER \ {0x61dc63ac,0x6e38,0x11dc,0x85,0x13,{0x01,0x30,0x1b,0xb8,0xa9,0xf5}} #define GPT_ENT_TYPE_DRAGONFLY_HAMMER2 \ {0x5cbb9ad1,0x862d,0x11dc,0xa9,0x4d,{0x01,0x30,0x1b,0xb8,0xa9,0xf5}} #define GPT_ENT_TYPE_CHROMEOS_FIRMWARE \ {0xcab6e88e,0xabf3,0x4102,0xa0,0x7a,{0xd4,0xbb,0x9b,0xe3,0xc1,0xd3}} #define GPT_ENT_TYPE_CHROMEOS_KERNEL \ {0xfe3a2a5d,0x4f32,0x41a7,0xb7,0x25,{0xac,0xcc,0x32,0x85,0xa3,0x09}} #define GPT_ENT_TYPE_CHROMEOS_RESERVED \ {0x2e0a753d,0x9e48,0x43b0,0x83,0x37,{0xb1,0x51,0x92,0xcb,0x1b,0x5e}} #define GPT_ENT_TYPE_CHROMEOS_ROOT \ {0x3cb8e202,0x3b7e,0x47dd,0x8a,0x3c,{0x7f,0xf2,0xa1,0x3c,0xfc,0xec}} #define GPT_ENT_TYPE_OPENBSD_DATA \ {0x824cc7a0,0x36a8,0x11e3,0x89,0x0a,{0x95,0x25,0x19,0xad,0x3f,0x61}} +#define GPT_ENT_TYPE_SOLARIS_BOOT \ + {0x6a82cb45,0x1dd2,0x11b2,0x99,0xa6,{0x08,0x00,0x20,0x73,0x66,0x31}} +#define GPT_ENT_TYPE_SOLARIS_ROOT \ + {0x6a85cf4d,0x1dd2,0x11b2,0x99,0xa6,{0x08,0x00,0x20,0x73,0x66,0x31}} +#define GPT_ENT_TYPE_SOLARIS_SWAP \ + {0x6a87c46f,0x1dd2,0x11b2,0x99,0xa6,{0x08,0x00,0x20,0x73,0x66,0x31}} +#define GPT_ENT_TYPE_SOLARIS_BACKUP \ + {0x6a8b642b,0x1dd2,0x11b2,0x99,0xa6,{0x08,0x00,0x20,0x73,0x66,0x31}} +#define GPT_ENT_TYPE_SOLARIS_VAR \ + {0x6a8ef2e9,0x1dd2,0x11b2,0x99,0xa6,{0x08,0x00,0x20,0x73,0x66,0x31}} +#define GPT_ENT_TYPE_SOLARIS_HOME \ + {0x6a90ba39,0x1dd2,0x11b2,0x99,0xa6,{0x08,0x00,0x20,0x73,0x66,0x31}} +#define GPT_ENT_TYPE_SOLARIS_ALTSEC \ + {0x6a9283a5,0x1dd2,0x11b2,0x99,0xa6,{0x08,0x00,0x20,0x73,0x66,0x31}} +#define GPT_ENT_TYPE_SOLARIS_RESERVED \ + {0x6a945a3b,0x1dd2,0x11b2,0x99,0xa6,{0x08,0x00,0x20,0x73,0x66,0x31}} + /* * Boot partition used by GRUB 2. */ #define GPT_ENT_TYPE_BIOS_BOOT \ {0x21686148,0x6449,0x6e6f,0x74,0x4e,{0x65,0x65,0x64,0x45,0x46,0x49}} #endif /* _SYS_DISK_GPT_H_ */
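/*
 * Editorial sketch, not part of the header: one way a consumer of this file
 * can test an entry's type against the GPT_ENT_TYPE_* initializers above,
 * mirroring the EQUUID() comparison used by g_part_gpt.c.  It assumes a
 * struct gpt_ent whose ent_type has already been decoded from its on-disk
 * little-endian form (as gpt_read_tbl() does); the function name is
 * illustrative only.
 */
#if 0
#include <stdint.h>
#include <string.h>
#include <sys/disk/gpt.h>

static int
is_freebsd_zfs(const struct gpt_ent *ent)
{
	static const gpt_uuid_t zfs_type = GPT_ENT_TYPE_FREEBSD_ZFS;

	/* gpt_uuid_t is exactly 16 bytes, so a raw byte compare suffices. */
	return (memcmp(&ent->ent_type, &zfs_type, sizeof(zfs_type)) == 0);
}
#endif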