Bug 1853682 - [RFE] Enable discard support for pvmove command when "issue_discards = 1" set in /etc/lvm/lvm.conf
Keywords:
Status: NEW
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: lvm2
Version: 8.2
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: rc
: 8.0
Assignee: Zdenek Kabelac
QA Contact: cluster-qe
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-07-03 14:38 UTC by Rupesh Girase
Modified: 2023-08-10 15:40 UTC
CC: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Type: Bug
Target Upstream Version:
Embargoed:



Description Rupesh Girase 2020-07-03 14:38:16 UTC
Description of problem:

After a successful pvmove, discard I/O should be sent to the source PV (if the device supports discard) to reclaim its space, provided "issue_discards = 1" is set in /etc/lvm/lvm.conf.
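For reference, a minimal lvm.conf fragment enabling this option; it lives in the devices section, and the comment paraphrases the stock lvm.conf description of what the option covers today:

```
# /etc/lvm/lvm.conf
devices {
    # Issue discards to a PV's space when an LV stops using it
    # (e.g. lvremove, lvreduce). This RFE asks for the same behaviour
    # after pvmove evacuates the source PV.
    issue_discards = 1
}
```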

Version-Release number of selected component (if applicable):

RHEL-8
lvm2

How reproducible:

Always

Steps to Reproduce:

1. Run pvmove from one PV to another PV backed by a thinly provisioned device, such as a VMDK file
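A guarded reproduction sketch; /dev/sdb and /dev/sdc are placeholder devices (the destination should sit on thin-provisioned storage, e.g. a thin VMDK), and the VG/LV names are assumptions:

```shell
# Needs root and two spare block devices; otherwise it only prints a notice.
if [ "$(id -u)" -eq 0 ] && [ -b /dev/sdb ] && [ -b /dev/sdc ]; then
  pvcreate /dev/sdb /dev/sdc
  vgcreate testvg /dev/sdb /dev/sdc
  lvcreate -L 1G -n testlv testvg
  mkfs.xfs /dev/testvg/testlv
  # Move all extents off the source PV. No discard is issued to /dev/sdb
  # afterwards, so thin-provisioned backing storage never reclaims the space.
  pvmove /dev/sdb /dev/sdc
else
  echo "skipping: needs root and spare devices /dev/sdb, /dev/sdc"
fi
```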


Actual results:

Unused space is not reclaimed on the source device.


Expected results:

Discard I/O is issued and space is reclaimed on the source device.


Additional info:

We asked the customer to try the "blkdiscard" command on the source PV to reclaim space, but it removed all the data.

Reply in the customer's own words:

"blkdiscard does reclaim the space, but also removes all data from the PV. This doesn't really help because if the PV gets erased then we can just remove the lun from VMware and get the space back anyway. It would of been advantageous if pvmove issued a discard, just like lvremove does when moving specific LVS and not evacuating the whole PV. My main query is what LVM commands do support issuing discards. Do LVM discards only apply to the logical volume commands, such as lvremove and lvreduce?"

Comment 2 Zdenek Kabelac 2020-07-03 14:44:03 UTC
Clearly, discard can only be sent to 'UNUSED' devices - so if you want to discard a whole PV, the device should already be removed from the VG (aka vgreduce vg /dev/sdaX ; blkdiscard /dev/sdaX).

If the user wants to 'discard' all the free space left in the VG after pvmove, he can allocate a temporary LV and discard that LV - aka 'lvcreate -l100%FREE -n tmplv vg ; blkdiscard /dev/vg/tmplv ; lvremove vg/tmplv'.

Note: the lvm2 team discourages using issue_discards=1, as it makes any 'vgcfgrestore' operation mostly useless - the explicit workaround above makes the action much more 'explicit', and the admin is usually aware of the consequences in this case.
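The two workarounds above can be sketched as a guarded script; 'vg', 'tmplv', and /dev/sdaX are placeholders for the real VG, a temporary LV name, and the already-evacuated PV:

```shell
# Needs root and a VG named 'vg'; otherwise it only prints a notice.
if [ "$(id -u)" -eq 0 ] && vgs vg >/dev/null 2>&1; then
  # Workaround 1: discard a whole PV after evacuating it with pvmove.
  vgreduce vg /dev/sdaX             # remove the now-unused PV from the VG
  blkdiscard /dev/sdaX              # discard the entire device

  # Workaround 2: discard only the free space remaining inside the VG.
  lvcreate -l100%FREE -n tmplv vg   # temporary LV over all free extents
  blkdiscard /dev/vg/tmplv          # discard the space the LV occupies
  lvremove -y vg/tmplv              # drop the temporary LV again
else
  echo "skipping: needs root and a VG named 'vg'"
fi
```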

Comment 3 Zdenek Kabelac 2020-07-15 13:27:28 UTC
I've been looking into this issue for a while - it has several problems. If we want to support all modes of pvmove, it's not so easy, since in one commit we drop the mirror and replace it with the error target.

I'm still in the process of thinking about how to introduce discard without breaking compatibility.

A side-step could be to collect the regions and submit the discard after the commit - this has one drawback: if anything fails during these steps, the operation is not repeatable, since the mappings are already gone. But it clearly has the lowest complexity.

