Indeed, do you have any bandwidth to do proper testing to prove it either way, or have you already done this?
Feb 24 2017
Feb 23 2017
LGTM
Feb 22 2017
Feb 21 2017
Just a few seemingly redundant assignments to error vars; sorry, I didn't spot them before.
Feb 20 2017
Disabled resetting receive buffer auto-scaling when not in bulk receive mode, which gives an extra 20% performance increase, bringing it closer to Linux.
Feb 19 2017
For reference, the fastpath version of this hasn't been done; it would also need the same changes to:
tcp_stacks/fastpath.c
Feb 15 2017
In D9611#198637, @avg wrote: Essentially, L2ARC periodically writes to every block on the device and that allows it to physically move the data around.
That's exactly the reason why I think that TRIM is not needed for L2ARC.
TRIM is useful when we don't need the data in some area but we are not going to overwrite that area, so we need a way to tell the storage system that it can reuse the physical cells without worrying about any data in them. But if we overwrite that area anyway, then the storage system is automatically aware that the data in those physical cells is obsolete. It's free to choose either those same cells or any different cells for the new data according to the wear-leveling algorithms, but that's beside the point.
In D9611#198616, @avg wrote: My understanding of how L2ARC writing works is this. The code maintains a "hand" (like a clock hand) that points to a disk offset. At regular intervals a certain amount of space in front of the hand is freed by discarding the L2 headers that point to that space, and then new buffers are written to that space. Then the hand is moved forward by the appropriate amount.
There is also some freeing of L2 headers when ARC headers are freed, etc. In any case, after some uptime almost the whole cache disk is usually filled with data. And the hand inevitably moves forward. So, every block gets written over sooner or later. I do not see how TRIM helps in that case.
The only scenario where it makes a difference, in my humble opinion, is the scenario that @mav described: a worn-out disk where the "cheese holes" behind the hand can make some difference for writing new blocks at the hand. But I think that's too marginal to be important.
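Since the clock-hand behaviour is the crux of the argument above, here is a minimal standalone sketch of that write pattern, assuming a simplified model (fixed device size, fixed write size per feed interval); the names and sizes are illustrative, not the actual ZFS identifiers.

#include <stdio.h>

/*
 * Minimal illustration (not ZFS code) of the L2ARC "clock hand" write
 * pattern described above: the space in front of the hand is freed by
 * dropping the L2 headers that point at it, new buffers are written
 * there, and the hand advances and eventually wraps, so every block
 * gets overwritten in turn.
 */
#define	DEV_BLOCKS	16	/* pretend cache device size, in blocks */
#define	WRITE_BLOCKS	4	/* blocks written per feed interval */

int
main(void)
{
	unsigned int hand = 0;			/* next block to overwrite */
	unsigned int generation[DEV_BLOCKS] = { 0 };

	for (int interval = 1; interval <= 8; interval++) {
		/* Overwrite the blocks in front of the hand. */
		for (int i = 0; i < WRITE_BLOCKS; i++)
			generation[(hand + i) % DEV_BLOCKS] = interval;
		hand = (hand + WRITE_BLOCKS) % DEV_BLOCKS;
		printf("interval %d: hand now at block %u\n", interval, hand);
	}

	/* After two sweeps every block has been rewritten at least once. */
	for (int blk = 0; blk < DEV_BLOCKS; blk++)
		printf("block %2d last written in interval %u\n",
		    blk, generation[blk]);
	return (0);
}

The point the sketch makes is the one @avg draws: because the hand sweeps the whole device, every block is overwritten sooner or later, so the only extra information TRIM could convey is that the blocks immediately in front of the hand are about to be replaced, which is the worn-disk case @mav described.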
In D9611#198582, @mav wrote: In D9611#198579, @smh wrote: In scenario #1 the performance of TRIM is also generally good, mitigating the need to avoid doing it.
Is there such a thing as good TRIM performance? On my new Samsung 950 NVMe I had to disable TRIM as it was unusable. Though yes, the NVMe driver probably still needs aggregation of TRIM requests to get better numbers.
In scenario #1 the performance of TRIM is also generally good, mitigating the need to avoid doing it.
This can cause excessive slowdown as the capacity of the disk is reached; however, it could be argued that a better mitigation for L2ARC devices would be to use an under-provisioned slice to ensure the SSD controller always has space to work with.
Jan 23 2017
Jan 16 2017
Jan 11 2017
Jan 9 2017
Nov 28 2016
A couple of little style nits, and I'd like to understand why the ENAMETOOLONG error gets turned into success in a few places.
Nov 2 2016
Oct 31 2016
Wouldn't it be better to have 0 (the default) be "use the PhyNum field as a fallback to the mapping logic", as that way at least the device would initialise when mpX_mapping_get_sas_id fails?
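Purely to illustrate what "use the PhyNum field as a fallback to the mapping logic" would look like, here is a hypothetical standalone sketch; mapping_get_sas_id_stub, MAP_FAILED_ID and resolve_target_id are made-up names standing in for the driver's actual mapping call, not the real mps/mpr API.

#include <stdio.h>

/*
 * Hypothetical sketch of the suggested fallback (illustrative names
 * only, not the real mps/mpr driver API): if the mapping logic cannot
 * produce a target ID, fall back to the device's PhyNum so the device
 * can still initialise.
 */
#define	MAP_FAILED_ID	0xffffu

/* Stand-in for mpX_mapping_get_sas_id(); pretend the lookup failed. */
static unsigned int
mapping_get_sas_id_stub(unsigned int phy_num)
{
	(void)phy_num;
	return (MAP_FAILED_ID);
}

static unsigned int
resolve_target_id(unsigned int phy_num, int use_phynum_fallback)
{
	unsigned int id = mapping_get_sas_id_stub(phy_num);

	if (id == MAP_FAILED_ID && use_phynum_fallback)
		id = phy_num;		/* fall back to the PhyNum field */
	return (id);
}

int
main(void)
{
	printf("with fallback:    target id %u\n", resolve_target_id(5, 1));
	printf("without fallback: target id %u\n", resolve_target_id(5, 0));
	return (0);
}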
Aug 15 2016
Aug 14 2016
Aug 11 2016
Aug 4 2016
Jul 22 2016
Thanks for attacking this, Andriy; it's a big task which really needed some attention.
Jul 13 2016
Apart from the comment this looks good in principle; however, I think we need to better understand why retrying the probe works, as it feels like we're just hiding the real error with this.
Jul 11 2016
This looks reasonable; however, as you say, it's potentially racy with multiple renames happening.
Jul 6 2016
Jun 29 2016
Jun 28 2016
Jun 14 2016
I'd like to know why finding the current device would ever fail.
Jun 3 2016
Jun 1 2016
May 9 2016
May 5 2016
Apr 11 2016
Remove redundant check on <= 0.
Apr 9 2016
Nice, do you have metrics on the typical dump size increase?
Apr 8 2016
This looks like it might leave old data in the zpool.cache?
Apr 7 2016
Seems reasonable to me
Mar 21 2016
Mar 19 2016
Mar 16 2016
Mar 10 2016
Mar 3 2016
Feb 26 2016
Feb 25 2016
Missed a few sub-indents on the whitespace fixup.
Fix invalid whitespace (7 spaces instead of tabs).
Define and init cpu_id using rss_getcpu.
Feb 22 2016
Feb 20 2016
We should get this upstreamed.
Feb 14 2016
Feb 11 2016
Feb 8 2016
Restructure Makefile fix to aid future merges as per ngie's suggestion.
Feb 6 2016
Fix pointer cast issue on arm.armeb.
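For context, the usual shape of this kind of fix is to round-trip pointers through uintptr_t rather than a fixed-width integer type; the snippet below is a generic illustration of that pattern only, and the actual arm.armeb change may well differ.

#include <stdint.h>
#include <stdio.h>

/*
 * Generic illustration only; the actual arm.armeb fix may differ.
 * Casting a pointer through a fixed-width integer type can trip
 * warnings when the sizes do not match on a given target; going via
 * uintptr_t keeps the intermediate the same width as the pointer on
 * every architecture.
 */
int
main(void)
{
	int value = 42;
	void *p = &value;

	/* Portable pointer <-> integer round trip. */
	uintptr_t as_int = (uintptr_t)p;
	int *back = (int *)as_int;

	printf("%d\n", *back);
	return (0);
}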
In D5212#110873, @ngie wrote: Was there any change to how the operations need to be done in order to call efi_handle_lookup in the amd64/arm64 case on head with a ZFS root?
Feb 5 2016
Feb 3 2016
Feb 1 2016
Almost there. Re-comparing with our diff, the only remaining missing block is the following, which comes from between r283882 and r283883:
Fix device_paths_match breakage in last diff caused by late check of media type match.
Jan 31 2016
- Comment new methods.
- Fix possible edge case in device_paths_match.
- Fix failure printf in first path case for load_loader.
- Small flow optimisation in try_boot.
- Rename devpath_strncat -> devpath_strlcat to more closely describe its function in terms of standard string functions (see the sketch after this list).
- Increase devpath_str buffer to 256 as 128 could be close.
- Optimise placement of device_paths_match in probe_handle.
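As a rough illustration of why the strlcat naming fits better, here is a minimal sketch of the semantics being referred to: a bounded concatenation that never overruns the destination, always NUL-terminates, and returns the length it tried to create so truncation can be detected. sketch_strlcat is a generic example, not the loader's actual devpath_strlcat.

#include <stdio.h>
#include <string.h>

/*
 * Generic sketch of strlcat-style semantics (not the loader's actual
 * devpath_strlcat): append src to dst without writing more than size
 * bytes in total, always NUL-terminate, and return the length the
 * combined string would have had so the caller can detect truncation.
 */
static size_t
sketch_strlcat(char *dst, const char *src, size_t size)
{
	size_t dlen = strlen(dst);
	size_t i;

	if (size > 0 && dlen < size - 1) {
		for (i = 0; src[i] != '\0' && dlen + i < size - 1; i++)
			dst[dlen + i] = src[i];
		dst[dlen + i] = '\0';
	}
	return (dlen + strlen(src));	/* >= size means truncation */
}

int
main(void)
{
	char devpath[16] = "pci0:";

	if (sketch_strlcat(devpath, "/ata0/disk0/part1",
	    sizeof(devpath)) >= sizeof(devpath))
		printf("truncated: ");
	printf("%s\n", devpath);
	return (0);
}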