Before this change, the first probed member of a pool would initialize the
vdev tree for that pool. Now, imagine a machine with a disk that has been
removed from the pool, but whose ZFS label was not erased. That is a
typical scenario: a disk goes offline, it is replaced with a spare, and no
further data is written to the missing disk. Later the disk reappears at
boot time and is the first one probed by the loader. It carries the same
pool GUID as all other members, so a naive loader would not see a
conflict, and the stale disk would be used as the source of truth to read
the bootenv.
To fix that, provide vdev_free(), which allows rolling back an already
built vdev tree so that a new one can be built from scratch. Upon
encountering a newer configuration for an already known pool, call
vdev_free() on the top-level part of the pool and let vdev_probe() build
a new one. Note that we must preserve the spa pointer, as it has already
been returned to upper layers by the previous probe.

The change has been tested with loader_lua and userboot.so, but it should
have the same effect on the legacy boot1 loader.
While here, two cosmetic fixes:
- Don't hardcode the root vdev name; use the real pool name.
- Don't read and store ZPOOL_CONFIG_VDEV_CHILDREN in vdev_probe(): that
  is already done in vdev_init_from_nvlist(), and it is not needed in
  vdev_probe().