Advanced Search
Mar 31 2019
um, I am sorry to jump in late. but ... there is something similar I did too:
Mar 15 2019
In D19588#419505, @bcran wrote:
> I think what you need to establish first is what this script is for and what the use cases are. If we are running it from the installer, it must only install boot programs on the target disk(s). If it is meant to be a generic way to update the current boot disks, then it has to discover the disks related to the current root file system. The last thing we want is for some random disk to get updated.
I forgot to explain that. This script would be run from make installworld or freebsd-update to update an ESP that's expected to already have a FreeBSD boot1.efi or loader.efi on it.
I need to update the diff to look at the output of gmirror status, zpool status, etc. to work out which ESP(s) to update. In a mirror setup there may be multiple ESPs that should all be updated.
In D19588#419418, @bcran wrote:
> I'm thinking that instead of looking at all disks in the system it should perhaps only check the disk that contains the root filesystem.
> That would avoid the case where, for example, the script would try to update install media plugged into a USB port.
Feb 19 2019
In D19238#411586, @tsoome wrote:
> In D19238#411585, @ian wrote:
>> In D19238#411584, @tsoome wrote:
>>> Well, you have no place in the config to state that the slice or partition should be set to -1. However, to preserve working configs (where disk*: strings are set), what needs to be done is to update archsw.arch_getdev() to check for and add partition a if needed (that is, if we have an MBR freebsd slice with a BSD label in it). But that can and should be a separate patch.
>> If the device string is just disk0s1: then the partition is initialized to -1 in disk_parsedev(), which means that on all arches leaving off the partition on a BSD slice means (and has always meant) "use the first freebsd-ufs partition in the slice".
Which means we do have a place for the fix. And it has to be fixed where we translate the string descriptor to the device structure. Whether we can add the 'a' partition or not can easily be checked with a simple disk_open() call. As simple as that.
Feb 18 2019
In D19238#411585, @ian wrote:
> In D19238#411584, @tsoome wrote:
>> Well, you have no place in the config to state that the slice or partition should be set to -1. However, to preserve working configs (where disk*: strings are set), what needs to be done is to update archsw.arch_getdev() to check for and add partition a if needed (that is, if we have an MBR freebsd slice with a BSD label in it). But that can and should be a separate patch.
If the device string is just disk0s1: then the partition is initialized to -1 in disk_parsedev(), which means that on all arches leaving off the partition on a BSD slice means (and has always meant) "use the first freebsd-ufs partition in the slice".
In D19238#411581, @ian wrote:
> This is insufficient. Whether we like it or not, whether it's documented or not, it is an existing feature that disk_open() with the partition set to any negative number means "use the first freebsd-ufs partition in the slice". This isn't documented anywhere; even the loader(8) manpage only says "the syntax for devices is odd". But if we change the current behavior, people's existing configurations can break, leaving them without access to a remote system, etc.
Ubldr very explicitly relies on (and documents, at least in comments) the fact that partition == -1 means "probe for a good partition". It looks to me like the code in biosdisk.c for x86 also implicitly assumes and expects that (through a really twisty path that ultimately traces back to disk_parsedev() setting d_partition to -1 and leaving it that way if there is no pX or sX on the end).
It does look like the negative-number aspect is universally -1, so we should be free to give that value a name that means "give me what you got" and then define a new -2 or other value to mean "give me the raw slice".
BTW, FWIW, this isn't a problem for slices because -1 means "not initialized", 0 means "raw slice", and 1+ are slice numbers. It appears disk_open() with a negative slice number probably falls on its face, returning success with d_slice set to the negative number, but that's based on running the code in my head; I haven't tested it on a real machine.
Feb 13 2019
Thanks for working on this! :)
Feb 11 2019
In D19140#409630, @pkelsey wrote:
> In D19140#409629, @tsoome wrote:
>> The reason why I am asking is that I am trying to understand whether we actually *can* get into the overrun situation. I got the impression that the ZFS on-disk format should keep things sector-aligned, but it really is easy to get confused there... And it does feel safer if we have proper checks in place.
If the ZFS on-disk format keeps things sector-aligned / sector-multiple, then why was vdev_read complicated with code to handle non-sector-aligned / non-sector-multiple reads to begin with?
In D19140#409563, @pkelsey wrote:
> In D19140#409448, @tsoome wrote:
>> I have two questions:
>> - how was it tested - was there some corruption case?
I did not see any corruption happen. I found this while reading the code to root-cause an issue I was having with a machine I had upgraded to 11.2 (the issue described briefly in D19142). On the 11.2 machine, I did instrument vdev_read and found that no reads were performed there that required the bounce buffer. I imagine the bounce buffer only comes into use with 4K-sector drives; none of my gear is 4K.
I tested this patch using a VM booting from a ZFS mirror.
Jan 3 2019
In D18723#399420, @yuripv wrote:
> Are we *guaranteed* to have the floppy drives coming only at the start of the list?
Dec 18 2018
Small nits
Update the comment about alloc/free ordering.
Dec 17 2018
Since UEFI probes ZFS only by calling the ZFS probe with a partition, and not
with the whole disk, the ZFS probe needs to check this case as well.
Dec 16 2018
In D18558#396096, @alvisen_gmail.com wrote:
> This broke boot on my laptop. Single SATA SSD, UEFI boot.
> loader.efi does not find my ZFS pool, which is on GPT partition 3.

```
=>        40  250069600  ada0  GPT  (119G)
          40       1024     1  freebsd-boot  (512K)
        1064     131072     2  efi  (64M)
      132136        984        - free -  (492K)
      133120  249935872     3  freebsd-zfs  (119G)
   250068992        648        - free -  (324K)
```
Dec 14 2018
Small comment rewrite
Nov 30 2018
Rebase on r341328
In D18391#391262, @avg wrote:
> Maybe rather than removing the code it would be better to ifdef it out?
> But I'm not sure. What are the plans for illumos in this respect?
Nov 29 2018
Cleaning up a bit: