...in the UEFI case.
We should check for an error here.
I think we should additionally fall back to the old method of creating VMs when vmmctl_open() fails with errno == ENOENT. Consider a user who runs bhyve in a jail with devfs rules that hide /dev/vmmctl: after updating their jail, bhyve would stop working because vmmctl_open() fails.
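As a rough sketch of the fallback logic (the function and device names here are illustrative, not the actual bhyve identifiers), the idea is to treat ENOENT as "node hidden or not present, use the legacy path" and report any other errno as a real failure:

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/*
 * Hypothetical sketch: try the new control-node interface first and,
 * if the node is absent (ENOENT, e.g. hidden by a jail's devfs rules),
 * fall back to opening the legacy device.  Returns an fd, or -1 with
 * errno set on a genuine error.
 */
static int
vmm_open_with_fallback(const char *newdev, const char *legacydev)
{
	int fd;

	fd = open(newdev, O_RDWR);
	if (fd >= 0)
		return (fd);
	if (errno != ENOENT)
		return (-1);	/* real error: report it to the user */
	/* devfs rules hide the new node; fall back to the old method */
	return (open(legacydev, O_RDWR));
}
```

This keeps the error-reporting behavior for unexpected failures (EPERM, EACCES, ...) while silently degrading only in the specific jail-upgrade scenario described above.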
To simplify the code, don't try to create a VM in the non-bootrom case, since we expect it to exist already; non-existence will then be caught by the call to vm_open(). Note that an error message has been removed; if it is still needed, I can add an explicit check in the non-bootrom case.
There is a problem with this revision: when rebooting a VM, the error message "vm_open: Device not configured" appears. Rebooting works as follows: the bhyve process exits with code 0, which tells the caller (such as vmrun.sh) to invoke bhyve again. When the bhyve process exits, the VM is destroyed, but destruction is asynchronous. Since bhyve is re-invoked almost immediately, the old VM has likely not been destroyed yet, so a new one cannot be created.
An easy fix is to add a sleep in vmrun.sh to give the VM time to be destroyed, but that is not ideal: destroying and re-creating the VM on every reboot can be expensive if the VM is large. It would be better to avoid the destroy/re-create cycle entirely (basically, make VMs persistent), but the new /dev/vmmctl interface was created precisely so that VM lifetimes are tied to the file descriptor and thus to the bhyve process's lifetime.
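If we did settle for the interim workaround, a bounded retry loop is a bit more robust than a fixed sleep. This is only a sketch of that idea, with try_create standing in for the real creation call and the assumption (based on the race described above) that the stale VM makes creation fail until the asynchronous destroy completes:

```c
#include <errno.h>
#include <unistd.h>

/*
 * Hypothetical sketch: retry VM creation for a short while, since the
 * previous bhyve process's VM is torn down asynchronously after exit.
 * try_create is a stand-in for the real creation call; it returns 0 on
 * success or -1 with errno set.  We retry only on EEXIST (assumed here
 * to mean "old VM not yet destroyed") and bail out on anything else.
 */
static int
create_vm_with_retry(int (*try_create)(void), int max_tries)
{
	for (int i = 0; i < max_tries; i++) {
		if (try_create() == 0)
			return (0);
		if (errno != EEXIST)
			return (-1);	/* unrelated failure, don't loop */
		usleep(100 * 1000);	/* wait for the async destroy */
	}
	return (-1);		/* gave up: destroy never completed */
}
```

This bounds the worst-case delay instead of always paying a full sleep, but it still has the fundamental cost of destroying and re-creating the VM on every reboot.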
A solution we were considering is having bhyve execve() itself instead of having vmrun.sh invoke it again, so that the new process could inherit the /dev/vmmctl file descriptor, but this can't be done since bhyve runs in capability mode.