Bug 1400824 - "fstrim/discard" operation is not supported for lvm RAID1 in RHEL7.3
Summary: "fstrim/discard" operation is not supported for lvm RAID1 in RHEL7.3
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.3
Hardware: All
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Heinz Mauelshagen
QA Contact: cluster-qe@redhat.com
Docs Contact: Steven J. Levine
URL:
Whiteboard:
Depends On:
Blocks: 1385242
 
Reported: 2016-12-02 07:49 UTC by Ranjith ML
Modified: 2021-09-03 12:36 UTC
CC List: 11 users

Fixed In Version: lvm2-2.02.169-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1474462
Environment:
Last Closed: 2017-08-01 21:49:49 UTC
Target Upstream Version:
Embargoed:


Attachments: (none)


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:2222 0 normal SHIPPED_LIVE lvm2 bug fix and enhancement update 2017-08-01 18:42:41 UTC

Description Ranjith ML 2016-12-02 07:49:14 UTC
Description of problem:

"fstrim / discard" operation is not supported for lvm RAID1 in RHEL7.3

Version-Release number of selected component (if applicable):
kernel-3.10.0-514.el7.x86_64
lvm2-2.02.166-1.el7_3.1.x86_64

How reproducible:

1. Create an LVM RAID1 logical volume.

~~~
lvcreate --type raid1 -m 1 -L 90M -n testlv testvg

# lvs -ao +devices testvg
  LV                VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                              
  testlv            testvg rwi-aor--- 92.00m                                    100.00           testlv_rimage_0(0),testlv_rimage_1(0)
  [testlv_rimage_0] testvg iwi-aor--- 92.00m                                                     /dev/loop0(1)                        
  [testlv_rimage_1] testvg iwi-aor--- 92.00m                                                     /dev/loop1(1)                        
  [testlv_rmeta_0]  testvg ewi-aor---  4.00m                                                     /dev/loop0(0)                        
  [testlv_rmeta_1]  testvg ewi-aor---  4.00m                                                     /dev/loop1(0)                     
~~~

2. Create an ext4 filesystem.

~~~
mkfs.ext4 /dev/mapper/testvg-testlv
~~~

3. Update the RHEL 7 system to the latest version.

~~~
$ yum update
$ uname -r
3.10.0-514.el7.x86_64
~~~

4. Try to release free blocks from the filesystem using "fstrim":

~~~
# fstrim -v /test/
fstrim: /test/: the discard operation is not supported
~~~

Actual results:

"fstrim" failed with error "the discard operation is not supported"

Expected results:

"fstrim" should work without any error message.

# fstrim -v /test/
/test/: 76.3 MiB (79990784 bytes) trimmed

Additional info:
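One quick way to see whether the error comes from the underlying devices (as later diagnosed in comment 11) is to check each device's discard_max_bytes queue attribute in sysfs; a value of 0 means the device does not support discard. A minimal helper sketch; the SYSFS override is an assumption added purely so the function can be exercised against a fake sysfs tree, not a standard variable:

```shell
#!/bin/sh
# Report whether a block device advertises discard support by reading
# its queue/discard_max_bytes attribute. A value of 0 means discards
# are not supported, so fstrim on a filesystem stacked on the device
# fails with "the discard operation is not supported".
# SYSFS defaults to /sys; overridable only for offline testing.
supports_discard() {
    dev=$1
    sysfs=${SYSFS:-/sys}
    max=$(cat "$sysfs/block/$dev/queue/discard_max_bytes" 2>/dev/null || echo 0)
    if [ "${max:-0}" -gt 0 ]; then
        echo "$dev: discard supported (discard_max_bytes=$max)"
    else
        echo "$dev: discard NOT supported"
        return 1
    fi
}
```

For an LVM RAID1 LV, run this against every physical device backing the LV (the devices shown by `lvs -ao +devices`); if any one of them reports 0, fstrim on the composed LV will fail.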

Comment 5 Heinz Mauelshagen 2017-03-08 17:06:05 UTC
Works on kernel 3.10.0-587.el7.x86_64:

# lvs -aoname,size,segtype,devices
  LV           LSize   Type   Devices                    
  root          45.12g linear /dev/vda2(0)               
  swap           3.88g linear /dev/vda2(11550)           
  r            200.00t raid1  r_rimage_0(0),r_rimage_1(0)
  [r_rimage_0] 200.00t linear /dev/sdb(1)                
  [r_rimage_1] 200.00t linear /dev/sdc(1)                
  [r_rmeta_0]    4.00m linear /dev/sdb(0)                
  [r_rmeta_1]    4.00m linear /dev/sdc(0)

# time mkfs -t ext4 -E nodiscard /dev/t/r
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
...
real    1m36.392s
user    1m29.085s
sys     0m3.799s

# mount /dev/t/r /mnt/r

[root@rhel-7-4 ~]# df -h /mnt/r
Filesystem       Size  Used Avail Use% Mounted on
/dev/mapper/t-r  200T   20K  190T   1% /mnt/r

# time fstrim -v /mnt/r
/mnt/r: 199.2 TiB (219026630234112 bytes) trimmed

real    1m37.915s
user    0m0.001s
sys     0m39.112s

Comment 7 Roman Bednář 2017-05-24 11:18:04 UTC
Does not work with 3.10.0-663.el7.x86_64

Tried fstrim with linear, raid1, thin_lv and thin_lv on raid1 thinpool,
on both ext4 and xfs.

# lvs -aoname,pool_lv,segtype,devices
  LV                              Pool           Type      Devices                                                          
  root                                           linear    /dev/vda2(205)                                                   
  swap                                           linear    /dev/vda2(0)                                                     
  linear                                         linear    /dev/sda(257)                                                    
  [lvol0_pmspare]                                linear    /dev/sda(513)                                                    
  newraid1                                       raid1     newraid1_rimage_0(0),newraid1_rimage_1(0)                        
  [newraid1_rimage_0]                            linear    /dev/sda(1029)                                                   
  [newraid1_rimage_1]                            linear    /dev/sdb(515)                                                    
  [newraid1_rmeta_0]                             linear    /dev/sda(1028)                                                   
  [newraid1_rmeta_1]                             linear    /dev/sdb(514)                                                    
  raid1                                          raid1     raid1_rimage_0(0),raid1_rimage_1(0)                              
  [raid1_rimage_0]                               linear    /dev/sda(772)                                                    
  [raid1_rimage_1]                               linear    /dev/sdb(258)                                                    
  [raid1_rmeta_0]                                linear    /dev/sda(771)                                                    
  [raid1_rmeta_1]                                linear    /dev/sdb(257)                                                    
  thinlv                          thinp          thin                                                                       
  thinp                                          thin-pool thinp_tdata(0)                                                   
  thinp_on_raid1                                 thin-pool thinp_on_raid1_tdata(0)                                          
  [thinp_on_raid1_tdata]                         raid1     thinp_on_raid1_tdata_rimage_0(0),thinp_on_raid1_tdata_rimage_1(0)
  [thinp_on_raid1_tdata_rimage_0]                linear    /dev/sda(1)                                                      
  [thinp_on_raid1_tdata_rimage_1]                linear    /dev/sdb(1)                                                      
  [thinp_on_raid1_tdata_rmeta_0]                 linear    /dev/sda(0)                                                      
  [thinp_on_raid1_tdata_rmeta_1]                 linear    /dev/sdb(0)                                                      
  [thinp_on_raid1_tmeta]                         linear    /dev/sda(770)                                                    
  [thinp_tdata]                                  linear    /dev/sda(514)                                                    
  [thinp_tmeta]                                  linear    /dev/sdj(0)                                                      
  virt_raid1                      thinp_on_raid1 thin                                         


# mount
...
/dev/mapper/vg-raid1 on /mnt/raid1 type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/vg-thinlv on /mnt/thinlv type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/vg-linear on /mnt/linear type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/vg-virt_raid1 on /mnt/virt_raid1 type xfs (rw,relatime,seclabel,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota)
...


# fstrim -v /mnt/linear
fstrim: /mnt/linear: the discard operation is not supported

# fstrim -v /mnt/raid1
fstrim: /mnt/raid1: the discard operation is not supported

# fstrim -v /mnt/thinlv
/mnt/thinlv: 5 MiB (5267456 bytes) trimmed

# fstrim -v /mnt/virt_raid1
/mnt/virt_raid1: 96.5 MiB (101220352 bytes) trimmed



lvm2-2.02.171-1.el7    BUILT: Wed May  3 14:05:13 CEST 2017
lvm2-libs-2.02.171-1.el7    BUILT: Wed May  3 14:05:13 CEST 2017
lvm2-cluster-2.02.171-1.el7    BUILT: Wed May  3 14:05:13 CEST 2017
device-mapper-1.02.140-1.el7    BUILT: Wed May  3 14:05:13 CEST 2017
device-mapper-libs-1.02.140-1.el7    BUILT: Wed May  3 14:05:13 CEST 2017
device-mapper-event-1.02.140-1.el7    BUILT: Wed May  3 14:05:13 CEST 2017
device-mapper-event-libs-1.02.140-1.el7    BUILT: Wed May  3 14:05:13 CEST 2017
device-mapper-persistent-data-0.7.0-0.1.rc6.el7    BUILT: Mon Mar 27 17:15:46 CEST 2017
cmirror-2.02.171-1.el7    BUILT: Wed May  3 14:05:13 CEST 2017

Comment 11 Mike Snitzer 2017-06-06 15:43:04 UTC
In reply to comment 7, and after speaking with Roman on IRC: it looks like the underlying storage does _not_ support discard (at least not all of the devices).

DM thinp will disable discard passdown if the underlying storage doesn't support discards, but thinp itself will still handle the discards at the logical layer.

Please re-test lvm RAID1 only against storage that supports discards.  If you don't have physical storage that supports discards, you can use scsi_debug to create ramdisk-based test devices that support discard, e.g.:

modprobe scsi_debug lbpws=1 dev_size_mb=100
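Mike's suggestion can be rolled into a self-contained reproduction sketch. Everything below is an assumption-laden illustration, not a verified procedure: the device name /dev/sdk, the partition sizes, and the vg/raid1 names are made up, and the script defaults to a dry run that only prints the commands; set DRY_RUN=0 and run as root to execute it for real.

```shell
#!/bin/sh
# Sketch: exercise fstrim on an LVM RAID1 LV backed by a discard-capable
# scsi_debug ramdisk. Device, VG, and LV names are illustrative.
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}

run modprobe scsi_debug lbpws=1 dev_size_mb=100       # ramdisk with discard
DEV=/dev/sdk                                          # assumed scsi_debug device
# RAID1 needs two PVs, so split the ramdisk into two partitions.
run parted -s "$DEV" mklabel gpt mkpart p1 1MiB 50% mkpart p2 50% 100%
run pvcreate "${DEV}1" "${DEV}2"
run vgcreate vg "${DEV}1" "${DEV}2"
run lvcreate --type raid1 -m 1 -L 40M -n raid1 vg
run mkfs.ext4 /dev/vg/raid1
run mkdir -p /mnt/raid1
run mount /dev/vg/raid1 /mnt/raid1
run fstrim -v /mnt/raid1    # on discard-capable backing devices this
                            # should report bytes trimmed, not an error
```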

Comment 12 Roman Bednář 2017-06-09 07:20:20 UTC
Thanks Mike, that was exactly the case. I did the previous testing with VMs on a host that actually uses SSDs as backing devices, but the VMs' SCSI targets don't appear to have discard support, as shown below.

Marking verified; fstrim/discard on raid1 volumes is supported with the latest rpms.


==============================================================
# modprobe scsi_debug lbpws=1 dev_size_mb=100

# lsscsi
...
[62:0:0:0]   disk    LIO-ORG  cluster62595-di  4.0   /dev/sdj 
[70:0:0:0]   disk    LIO-ORG  cluster62595-di  4.0   /dev/sdi 
...
[72:0:0:0]   disk    Linux    scsi_debug       0004  /dev/sdk
...

## Partition /dev/sdk
# lsblk -D
NAME                    DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
...
sdi                            0        0B       0B         0
└─sdi1                         0        0B       0B         0
sdj                            0        0B       0B         0
└─sdj1                         0        0B       0B         0
sdk                            0      512B      32M         1
├─sdk1                         0      512B      32M         1
├─sdk2                         0      512B      32M         1
└─sdk3                         0      512B      32M         1
vda                            0        0B       0B         0
├─vda1                         0        0B       0B         0
...

# lvs -a -o lv_name,devices
  LV               Devices                            
  root             /dev/vda2(205)                     
  swap             /dev/vda2(0)                       
  raid1            raid1_rimage_0(0),raid1_rimage_1(0)
  [raid1_rimage_0] /dev/sdk1(1)                       
  [raid1_rimage_1] /dev/sdk2(1)                       
  [raid1_rmeta_0]  /dev/sdk1(0)                       
  [raid1_rmeta_1]  /dev/sdk2(0)                       

# mount | grep /mnt/raid1
/dev/mapper/vg-raid1 on /mnt/raid1 type ext4 (rw,relatime,seclabel,data=ordered)


# fstrim -v /mnt/raid1
/mnt/raid1: 18 MiB (18914304 bytes) trimmed
==============================================================
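Roman's `lsblk -D` check can also be scripted. A small sketch; it assumes the default util-linux column order (NAME DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO) and reads the output on stdin so it can be tried against a saved transcript:

```shell
#!/bin/sh
# Filter `lsblk -D` output (on stdin) down to the devices whose
# DISC-GRAN column is non-zero, i.e. the devices that advertise
# discard support. Child entries keep their tree prefix as printed.
discard_capable() {
    awk 'NR > 1 && $3 != "0B" && $3 != "0" { print $1 }'
}

# Example: lsblk -D | discard_capable
```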


3.10.0-675.el7.x86_64

lvm2-2.02.171-3.el7    BUILT: Wed May 31 15:36:29 CEST 2017
lvm2-libs-2.02.171-3.el7    BUILT: Wed May 31 15:36:29 CEST 2017
lvm2-cluster-2.02.171-3.el7    BUILT: Wed May 31 15:36:29 CEST 2017
device-mapper-1.02.140-3.el7    BUILT: Wed May 31 15:36:29 CEST 2017
device-mapper-libs-1.02.140-3.el7    BUILT: Wed May 31 15:36:29 CEST 2017
device-mapper-event-1.02.140-3.el7    BUILT: Wed May 31 15:36:29 CEST 2017
device-mapper-event-libs-1.02.140-3.el7    BUILT: Wed May 31 15:36:29 CEST 2017
device-mapper-persistent-data-0.7.0-0.1.rc6.el7    BUILT: Mon Mar 27 17:15:46 CEST 2017
cmirror-2.02.171-3.el7    BUILT: Wed May 31 15:36:29 CEST 2017

Comment 18 Steven J. Levine 2017-07-21 14:50:00 UTC
Putting in NEEDINFO re: Comment 17 to confirm that this does not belong in the release notes as a new feature.

Comment 19 Tom Coughlan 2017-07-22 01:06:05 UTC
Okay, I read through this more carefully and I think I understand the situation:

1. Support for discard on LVM RAID 1 is not a new feature in 7.4. So we do not need a new-feature release note for that.

2. The reason several customers raised the question is that they received the message "discard operation is not supported" when they ran fstrim on an LVM RAID 1 device. This error was apparently caused by the fact that one or more of the underlying hardware devices do not support discard. It was just a coincidence that RAID 1 was in the mix.

3. There is no need to single out the RAID section of the LVM guide for a discussion about the reason for discard, or instructions on how to use it. It is a generic storage management function, not tied to LVM RAID.

4. Discard is covered in the Storage Management Guide:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Storage_Administration_Guide/ch02s04.html

It seems to me the best way to avoid customer cases such as the ones attached to this BZ is to enhance that section of the Guide. For example, after the text: 

"Physical discard operations are supported if ... discard_max_bytes is not zero."

we could add "If you issue the fstrim command on a device that does not support discard, the message "the discard operation is not supported" will be displayed. You will also get this message if you use the fstrim command on a logical device (LVM or MD) comprised of multiple devices where any one of the devices does not support discard."

Then, someone will need to determine what message is displayed if you try to mount such a device with the -o discard option, and then add a similar paragraph to cover that scenario as well. (Or rearrange to provide one new paragraph that covers both batch discard (fstrim) and online discard (mount -o discard), assuming the behavior is similar.)

I am removing the Doc Type flag from this BZ. John, Steven, if you agree the proposed change is useful, then clone this for the Storage Mgmt. Guide.

Comment 20 Steven J. Levine 2017-07-24 17:00:32 UTC
I have cloned this as a documentation bug for the Storage Administration Guide, as per Comment 19.

Comment 21 Steven J. Levine 2017-07-24 17:20:52 UTC
The needinfo in Comment 18 was addressed in Comment 19 so I'm clearing it.

Comment 22 errata-xmlrpc 2017-08-01 21:49:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2222

