Red Hat Bugzilla – Bug 1314687
4.4.x kernel reports wrong discard_granularity
Last modified: 2016-09-26 06:55:58 EDT
Description of problem:
See bug 1313377
Instead of reporting 4096, the kernel reports 1 for discard_granularity.
Version-Release number of selected component (if applicable):
Not affected: 4.3.5-300.fc23.x86_64
This makes lvremove skip discarding logical volumes, since it assumes that the granularity is always greater than or equal to 512 for devices supporting discard/trim.
Please see: https://www.redhat.com/archives/dm-devel/2016-March/msg00030.html
Can you confirm whether or not your underlying SCSI device supports LBPRZ?
Please report what you have in: /sys/block/<scsi_dev_name>/queue/discard_zeroes_data
$ cat /sys/block/sda/queue/discard_zeroes_data
Does this mean that LBPRZ is supported? What does LBPRZ mean? Is it something like Read Zero After Trim (RZAT)?
I use Debian, but this has happened to me as well. Their kernel 4.3.3-7
correctly reports 512 as the discard_granularity (although my understanding
is that the hardware actually has 8k erase blocks), while kernel 4.4.6-1
reports a discard_granularity of 1. This then causes device-mapper to report:
discard granularity unexpectedly less than sector size
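The failing check can be restated like this: for any device that supports discard, device-mapper (and lvremove, which relies on it) expects discard_granularity to be at least one logical sector. An affected 4.4.x kernel reporting 1 trips that check and discards are skipped. This is an illustrative restatement of the check, not the kernel's actual code:

```python
def discard_granularity_ok(granularity, sector_size=512):
    """Sanity check as described in the bug: for a discard-capable device,
    the reported granularity must be at least one logical sector.
    A granularity of 0 means the device does not support discard at all,
    so there is nothing to validate."""
    if granularity == 0:
        return True
    return granularity >= sector_size

# Affected 4.4.x kernels report 1 -> check fails, discard is skipped
assert discard_granularity_ok(1) is False
# Unaffected kernels report 512 (or the erase-block size) -> check passes
assert discard_granularity_ok(512) is True
assert discard_granularity_ok(4096) is True
```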
According to the vendor data sheet, the device does in fact zero discarded
blocks. This concurs with /sys/block/sda/queue/discard_zeroes_data on
my device ('1').
Model Family: Intel X18-M/X25-M/X25-V G2 SSDs
Device Model: INTEL SSDSA2M160G2GC
Firmware Version: 2CV102M3
User Capacity: 160,041,885,696 bytes [160 GB]
Sector Size: 512 bytes logical/physical
Rotation Rate: Solid State Device
ATA Version is: ATA/ATAPI-7 T13/1532D revision 1
SATA Version is: SATA 2.6, 3.0 Gb/s
I cannot seem to find the data sheet now, but I remember consulting it
when determining whether I needed to overwrite the blocks to sanitize the drive.
*********** MASS BUG UPDATE **************
We apologize for the inconvenience. There are a large number of bugs to go through, and several of them have gone stale. Because of this, we are doing a mass bug update across all of the Fedora 23 kernel bugs.
Fedora 23 has now been rebased to 4.7.4-100.fc23. Please test this kernel update (or newer) and let us know if your issue has been resolved or if it is still present with the newer kernel.
If you have moved on to Fedora 24 or 25, and are still experiencing this issue, please change the version to Fedora 24 or 25.
If you experience different issues, please open a new bug report for those.
kernel 4.7.4-100.fc23.x86_64 reports 512, thank you for fixing it:
$ cat /sys/dev/block/8:0/queue/discard_granularity
512