
Teach /etc/rc.d/growfs to handle disks with ZFS
Closed, Public

Authored by cperciva on Feb 7 2019, 1:41 AM.

Details

Summary

Currently /etc/rc.d/growfs knows how to expand a disk which has a UFS filesystem on it, but doesn't know how to handle a disk which has ZFS. This fixes that, making it possible to have FreeBSD/EC2 ZFS images which DTRT.

Diff Detail

Repository
rS FreeBSD src repository - subversion

Event Timeline

allanjude added a subscriber: allanjude.

There should be a better way to get the list of disks in the pool from the zpool command, but there currently isn't.

zpool list -o name -v $poolname should print only the name column, but it includes the others as well. It probably also needs something like a -t leaf option, to include only the actual disks and not interior vdevs like 'raidz2' and 'mirror'.

So this is as good as it is going to get for now, and it works fine for the one-disk case, but I do plan to fix up the zpool command.

libexec/rc/rc.d/growfs
58 ↗(On Diff #53632)

A few things here.

  1. Use -H on zpool list to remove the headers and to change the field separator to a single hard tab, instead of printf-style human-friendly column alignment.
  2. tail -n 1 isn't safe here: in the case of a pool with a log or cache device, you'll get that device, not the root vdev. Although there might not be a better way to do it right now.
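The two suggestions above can be combined into a sketch like the following. The canned sample output and the interior-vdev name patterns (mirror/raidz*/spare) are assumptions based on the zpool list -v output shown later in this review; a real script would run zpool list -Hv "$poolname" instead of using canned text.

```shell
# Sketch only: filter `zpool list -Hv`-style tab-separated output down
# to leaf vdevs. The sample below stands in for real command output.
sample_output=$(printf 'zroot\t456G\t255G\t201G\t-\t-\t58%%\t55%%\t1.00x\tONLINE\t-\n\tmirror\t456G\t255G\t201G\t-\t-\t58%%\t55%%\n\t\tgpt/disk1\t-\t-\t-\t-\t-\t-\t-\n\t\tgpt/disk0\t-\t-\t-\t-\t-\t-\t-\n')

leaf_vdevs=$(printf '%s\n' "$sample_output" | awk '
    NR == 1 { next }                 # skip the pool row itself
    {
        line = $0
        gsub(/^[\t ]+/, "", line)    # strip the indentation
        split(line, f, "\t")
        name = f[1]
    }
    name == "logs" || name == "cache" { exit }   # stop before aux vdevs
    name ~ /^(mirror|raidz|spare)/ { next }      # skip interior vdevs
    { print name }')

echo "$leaf_vdevs"
```

This still doesn't replace a proper -t leaf option in zpool itself, but it avoids the tail -n 1 trap of picking up a log or cache device.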
This revision is now accepted and ready to land. Feb 7 2019, 5:59 AM
matthew added inline comments.
libexec/rc/rc.d/growfs
58 ↗(On Diff #53632)

Doesn't this need to loop over all the disks in the pool rather than just the last one? For instance, my zpool looks like this:

lucid-nonsense:~/src/namedb:% zpool list -v zroot
NAME            SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zroot           456G   255G   201G        -         -    58%    55%  1.00x  ONLINE  -
  mirror        456G   255G   201G        -         -    58%    55%
    gpt/disk1      -      -      -        -         -      -      -
    gpt/disk0      -      -      -        -         -      -      -

So I'd need to do

# zpool online -e zroot gpt/disk0
# zpool online -e zroot gpt/disk1

before the available space in the pool would grow.

What about something like this?

expandsize=$(zpool get -H expandsize $pool | cut -w -f 3)
case $expandsize in
   -)
      zpool set autoexpand=off $pool
      ;;
   *)
      zpool set autoexpand=on $pool
      ;;
esac
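The decision above can also be factored into a function so both branches are easy to exercise without a real pool; decide_autoexpand is a hypothetical name, and in a real script its argument would come from zpool get -H expandsize $pool rather than being passed in directly.

```shell
# Sketch only: decide the autoexpand setting from the EXPANDSZ value
# that `zpool get -H expandsize $pool` would report ("-" means the
# pool has no expandable space). decide_autoexpand is an illustrative
# name, not part of the actual growfs script.
decide_autoexpand() {
    case "$1" in
        -) echo "autoexpand=off" ;;
        *) echo "autoexpand=on" ;;
    esac
}

decide_autoexpand -       # pool has no room to grow
decide_autoexpand 200G    # pool can expand by 200G
```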

You're right that this doesn't handle multi-disk situations -- but it really isn't intended to, and the script currently doesn't handle those for UFS either. In an ideal world it would probably know how to grow GELI encrypted disks too, for that matter... but my goal right now is simply to provide "feature parity" between ZFS and UFS, i.e., to handle the case of a single unencrypted disk.

After all, this was never intended as a general "resize disks" script -- it's a firstboot script, intended specifically for the case of VMs being launched with different sizes of boot disk.

If you have to re-upload for a change, please include full context next time.

libexec/rc/rc.d/growfs
54 ↗(On Diff #53632)

This feels to me like it should be a case statement.

Use case statements to select between filesystem types.

This revision now requires review to proceed. Feb 7 2019, 7:56 PM

Updated patch, using a case statement instead of if/elif/else.
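The shape of that change can be sketched as follows. growfs_grow, its parameters, and the echoed (dry-run) commands are illustrative stand-ins for the real script's logic, not copies of the committed code.

```shell
# Sketch only: dispatch on filesystem type with a case statement, as
# the updated patch does. This echoes the command it would run rather
# than running it, so the structure can be shown without real disks.
growfs_grow() {
    fstype=$1    # e.g. "ufs" or "zfs"
    target=$2    # device for UFS, pool (and vdev) for ZFS
    case "$fstype" in
    ufs)
        echo "growfs -y $target"
        ;;
    zfs)
        echo "zpool online -e $target"
        ;;
    *)
        echo "growfs: don't know how to grow $fstype" >&2
        return 1
        ;;
    esac
}

growfs_grow ufs /dev/gpt/rootfs
growfs_grow zfs "zroot gpt/disk0"
```

Compared with an if/elif/else chain, the case statement makes it obvious where a new filesystem type would slot in, and the default branch catches anything unhandled.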

libexec/rc/rc.d/growfs
54 ↗(On Diff #53632)

Thanks, fixed.

This revision was not accepted when it landed; it landed in state Needs Review. Feb 8 2019, 7:19 PM
This revision was automatically updated to reflect the committed changes.