Re: "the first attempt to READ/WRITE multiple blocks always return CRC error" -
Jun 10 2025
Apr 8 2025
Apr 7 2025
It does detect the card with your patch, but the size is 0 bytes (the CSD is zero).
mmc0: New card detected (CID 035344534236344780da29024a014400)
mmc0: New card detected (CSD 00000200000000000000000000000000)
I am looking to see if I can find the issue...
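To make the symptom concrete, here is a minimal sketch (function name and structure invented for illustration, not the driver's actual code) of decoding capacity from a CSD v2.0 register; a near-zero CSD like the one in the log has a zero CSD_STRUCTURE field and so decodes to 0 bytes:

```c
#include <stdint.h>

/*
 * Hypothetical helper: return device capacity for a CSD v2.0
 * (SDHC/SDXC) register, and 0 when the version field is anything
 * else. The mostly-zero CSD logged above has CSD_STRUCTURE == 0,
 * so a decoder like this reports a 0-byte device.
 */
static uint64_t
csd_capacity(const uint8_t csd[16])
{
	unsigned ver = csd[0] >> 6;	/* CSD_STRUCTURE: 1 means v2.0 */
	uint32_t c_size;

	if (ver != 1)
		return (0);
	/* C_SIZE occupies bits [69:48] of the 128-bit register. */
	c_size = ((uint32_t)(csd[7] & 0x3f) << 16) |
	    ((uint32_t)csd[8] << 8) | csd[9];
	/* Capacity is (C_SIZE + 1) * 512 KiB per the simplified spec. */
	return ((uint64_t)(c_size + 1) * 512 * 1024);
}
```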
Apr 4 2025
@pkelsey Could you add permissions to the uploaded file so I can download it?
Apr 3 2025
I've uploaded a version of this file that removes the R7 translation hack and fixes the underlying issue as I've proposed. I did not compile this code, so please excuse any mechanical issues you find. The SPI-to-SD/MMC response type mapping added here matches what the code implemented prior to the commit I referenced above, and I double-checked it for each command against the current version of the SD Physical Layer Simplified Specification.
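For readers following along, a sketch of the kind of correspondence being restored (the enum and function names here are invented for illustration, not the driver's identifiers): each native SD/MMC response type has an SPI-mode counterpart per the simplified spec, with the notable case that native R2 (the 136-bit CID/CSD response) arrives in SPI mode as an R1 response followed by a data block.

```c
/*
 * Hypothetical mapping from native SD/MMC response types to the
 * SPI-mode response the host should expect. All names are invented
 * for this sketch.
 */
enum sd_resp  { SD_R1, SD_R1B, SD_R2, SD_R3, SD_R6, SD_R7 };
enum spi_resp { SPI_R1, SPI_R1B, SPI_R2, SPI_R3, SPI_R7 };

static enum spi_resp
sd_to_spi_resp(enum sd_resp r)
{
	switch (r) {
	case SD_R1:	return (SPI_R1);
	case SD_R1B:	return (SPI_R1B);	/* busy-signalled commands */
	case SD_R2:	return (SPI_R1);	/* CID/CSD arrive as a data block */
	case SD_R3:	return (SPI_R3);	/* OCR: R1 + 32-bit payload */
	case SD_R6:	return (SPI_R1);	/* RCA has no meaning in SPI mode */
	case SD_R7:	return (SPI_R7);	/* CMD8: R1 + 32-bit echo */
	}
	return (SPI_R1);
}
```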
Mar 15 2025
@br Interesting to see this being revived. It has been quite some time since I wrote the base version of this code. Are you able to provide some description of the "fixed bugs" you noted in the summary?
Dec 1 2021
In D33189#750472, @avg wrote:
I thought a bit about gracefully handling too many (non-empty) fragments for a packet.
My only concern is what happens if we abort after seeing SOP and before reaching EOP.
Might that confuse the next call to vmxnet3_isc_rxd_available / vmxnet3_isc_rxd_pkt_get?
Nov 30 2021
We can only guess as to what may be really going on in the virtual device, but this seems like a correct defensive approach to what has been observed. Given the data at https://people.freebsd.org/~avg/vmxnet3-fragment-overrun.txt (thanks for collecting this!), I was concerned that there might be an issue with the fact that the virtual device is consuming free list entries for the zero-length fragments, but we are hiding this information from iflib, and that might break free list maintenance. I reviewed those paths, and it's fine (iflib tracks and replaces packet mbufs sent to the stack via a bitmap, provides the corresponding free list indexes to the driver during refill, and during refill, the vmx driver correctly identifies and handles gaps in those provided indexes).
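A minimal sketch of the refill pattern described above, with invented names (this is not the vmx or iflib code itself): iflib supplies the exact free-list slot indexes to repopulate, and those indexes may contain gaps, so the driver must index by the supplied values rather than assume a contiguous run.

```c
#include <stdint.h>

/*
 * Hypothetical refill request: iflib hands the driver an explicit
 * list of free-list slot indexes; they are not guaranteed to be
 * contiguous when some descriptors were consumed without producing
 * a packet mbuf (e.g. zero-length fragments).
 */
struct refill_req {
	const uint16_t *idxs;	/* slot indexes provided by iflib */
	int count;		/* number of buffers to place */
};

static void
refill_slots(uint64_t *slot_paddr, const uint64_t *buf_paddrs,
    const struct refill_req *req)
{
	/* Use each provided index; never a running counter. */
	for (int i = 0; i < req->count; i++)
		slot_paddr[req->idxs[i]] = buf_paddrs[i];
}
```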
May 19 2021
This is the right fix for what was a mistaken partial edit made when parent determination was pulled up into eval_pfqueue() in commit 1d34c9dac8624c5c315ae39ad3ae8e5879b23256.
Apr 26 2021
Mar 17 2020
Mar 16 2020
Mar 14 2020
Mar 10 2020
This kills the refill limit entirely.
Mar 4 2020
In D23948#526385, @gallatin wrote:
I get it, but that (totally arbitrary) limit is a pet peeve of mine. It's one of those things where you follow up several levels of code to an "XXX" comment and an arbitrary value. It's like reading a mystery novel and finding the butler really did do it.
And, to be honest, you already have the sysctl. If the user already has to touch something for decent performance, let's just have him set the budget to 65535 rather than making the running code even more complex.
In D23943#526402, @erj wrote:
One driver that is subject to the above scenario is the ixl driver.
Does that mean you also tested this patch with the ixl driver? I didn't see it mentioned in the "Test Plan" notes.
In D23947#526398, @erj wrote:
I think this looks okay, but I don't think it'll apply to the Intel drivers, right?
Mar 3 2020
In D23948#526345, @gallatin wrote:
Why not just remove the limit, rather than making things even more complex?
In D23946#526363, @gallatin wrote:
Funny how the comment right above the assert calls it out as bogus...
In D23945#526265, @avg wrote:
LGTM.
I assume that a change to vmxnet3 that takes advantage of this improvement is coming.
Jan 15 2020
Looks good to me.
Jan 13 2020
Jan 12 2020
I have updated bug 242890 to include the rationale for enabling (and also modifying) the #ifdef notyet RSS code when converting the driver to iflib, and also raised the question of whether this same issue exists for the bnxt driver.
Mar 14 2019
Feb 18 2019
In D19140#411495, @rozhuk.im-gmail.com wrote:
Not sure that this works.
I tried this on 11.2, and on a Hyper-V gen1 system it sticks at the ldelf loading stage when booting from a ZFS root.
Feb 17 2019
Feb 12 2019
Introduced a local char * buffer pointer to avoid casts on output buffer pointer updates.
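The pattern looks roughly like this (hypothetical function, not the patch itself): a local char * tracks the output position, so each pointer update needs no cast from void *.

```c
#include <stddef.h>
#include <string.h>

/*
 * Illustration of avoiding repeated casts on a void * output buffer:
 * convert once into a local char *, then do plain pointer arithmetic.
 * Names are invented for this sketch.
 */
static size_t
emit_two(void *out, const void *a, size_t alen, const void *b, size_t blen)
{
	char *p = out;		/* one implicit conversion, then plain arithmetic */

	memcpy(p, a, alen);
	p += alen;		/* no ((char *)out + alen) cast needed */
	memcpy(p, b, blen);
	p += blen;
	return ((size_t)(p - (char *)out));
}
```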
Feb 11 2019
In D19140#409631, @tsoome wrote:
In D19140#409630, @pkelsey wrote:
In D19140#409629, @tsoome wrote:
The reason why I am asking is, I am trying to understand if we actually *can* get into the situation with overrun. I got the impression the zfs on disk format should keep things sector aligned, but it really is easy to get confused there... And it does feel safer if we do have proper checks in place.
If the zfs on-disk format keeps things sector-aligned / sector-multiple, then why was vdev_read complicated with code to handle non-sector aligned / non-sector multiple reads to begin with?
Yea, you are right:)
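For context, a generic sketch of the pattern under discussion (names invented; this is not the loader's vdev_read): satisfying a byte-granular read on top of a device that only reads whole sectors, bouncing the unaligned head and tail through a sector-sized buffer so they never overrun the caller's buffer.

```c
#include <stdint.h>
#include <string.h>

#define SECSZ 512

/* Invented device abstraction: reads whole sectors only. */
typedef void (*sector_read_t)(void *dev, uint64_t lba, void *buf, int nsec);

static void
byte_read(void *dev, sector_read_t rd, uint64_t off, void *dst, size_t len)
{
	uint8_t bounce[SECSZ];
	uint8_t *p = dst;

	/* Unaligned head: bounce the containing sector, copy what we need. */
	if (off % SECSZ != 0) {
		size_t head = SECSZ - (size_t)(off % SECSZ);

		if (head > len)
			head = len;
		rd(dev, off / SECSZ, bounce, 1);
		memcpy(p, bounce + off % SECSZ, head);
		p += head; off += head; len -= head;
	}
	/* Aligned middle: whole sectors straight into the caller's buffer. */
	if (len >= SECSZ) {
		int nsec = (int)(len / SECSZ);

		rd(dev, off / SECSZ, p, nsec);
		p += (size_t)nsec * SECSZ;
		off += (uint64_t)nsec * SECSZ;
		len -= (size_t)nsec * SECSZ;
	}
	/* Unaligned tail: one more bounced sector. */
	if (len > 0) {
		rd(dev, off / SECSZ, bounce, 1);
		memcpy(p, bounce, len);
	}
}

/* Toy sector device for exercising the sketch: the byte at absolute
 * offset o has value (uint8_t)o. */
static void
toy_rd(void *dev, uint64_t lba, void *buf, int nsec)
{
	uint8_t *b = buf;

	(void)dev;
	for (int i = 0; i < nsec * SECSZ; i++)
		b[i] = (uint8_t)(lba * SECSZ + (uint64_t)i);
}
```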
In D19140#409629, @tsoome wrote:
The reason why I am asking is, I am trying to understand if we actually *can* get into the situation with overrun. I got the impression the zfs on disk format should keep things sector aligned, but it really is easy to get confused there... And it does feel safer if we do have proper checks in place.
In D19140#409448, @tsoome wrote:
I have two questions:
- how was it tested - was there some corruption case?
Feb 9 2019
In D19124#409229, @kristof wrote:
I have no objections to the patch, but I don't know enough about HFSC to meaningfully review this, I'm afraid.