
Bug 918647

Summary: dm-thin: Discarding blocks shared between thin devices may cause data loss
Product: Red Hat Enterprise Linux 6
Reporter: Jim Minter <jminter>
Component: kernel
Assignee: Mike Snitzer <msnitzer>
Status: CLOSED DUPLICATE
QA Contact: yanfu,wang <yanwang>
Severity: urgent
Docs Contact:
Priority: urgent
Version: 6.4
CC: agk, dhoward, dwysocha, heinzm, jbrassow, msnitzer, prajnoha, prockai, thornber, xiaoli, zkabelac
Target Milestone: pre-dev-freeze
Target Release: 6.5
Hardware: All
OS: All
Whiteboard:
Fixed In Version:
Doc Type: Known Issue
Doc Text:
Thin provisioning uses reference counts to indicate that data is shared between a thin volume and snapshots of the thin volume. There is a known issue with the way reference counts are managed in the case when a discard is issued to a thin volume that has snapshots. Creating snapshots of a thin volume and then issuing discards to the thin volume can therefore result in data loss in the snapshot volumes. Users are strongly encouraged to disable discard support on the thin-pool for the time being. To do so using lvm2 while the pool is offline, use the lvchange --discard ignore <pool> command. Any discards that might be issued to thin volumes will be ignored.
Story Points: ---
Clone Of:
: 919138 (view as bug list)
Environment:
Last Closed: 2013-09-19 19:42:35 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 919138
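
For reference, the workaround described in the Doc Text above as a concrete invocation - a minimal sketch, assuming an lvm2 thin pool named vg/pool that is currently inactive; note that lvchange(8) spells the option --discards:

    # Disable discard handling on the thin pool while it is offline; any
    # discards issued to thin volumes in this pool will then be ignored.
    lvchange --discards ignore vg/pool

    # Confirm the setting (the Discards column should read "ignore").
    lvs -o name,discards vg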

Description Jim Minter 2013-03-06 16:31:56 UTC
Bug originally reported at http://www.redhat.com/archives/dm-devel/2013-March/msg00033.html; the problem relates to issuing the BLKDISCARD ioctl to a thin volume.  If I create a thin volume, fill it with data, snapshot it, and then call BLKDISCARD on the thin volume, it looks like the kernel doesn't take into account that the underlying blocks are shared with the snapshot, and just goes ahead and discards them.  This appears to leave the metadata in an inconsistent state.
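
For illustration, the sequence above as a minimal reproduction sketch - assuming a volume group named vg with free space, and using util-linux blkdiscard(8) as a stand-in for calling the BLKDISCARD ioctl directly (the volume names pool, thin1 and snap1 are made up):

    # Thin pool plus a thin volume
    lvcreate --size 1G --thinpool pool vg
    lvcreate --thin vg/pool --virtualsize 512M --name thin1

    # Fill the thin volume so its blocks are actually provisioned
    dd if=/dev/urandom of=/dev/vg/thin1 bs=1M count=512 oflag=direct

    # Snapshot it - the provisioned blocks are now shared with the snapshot
    lvcreate --snapshot --name snap1 vg/thin1

    # Discard the origin; with this bug present the shared blocks are freed
    # without regard for the snapshot's reference counts, leaving the
    # pool metadata inconsistent
    blkdiscard /dev/vg/thin1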

Fix at https://github.com/jthornber/linux-2.6/commit/a42dfef751cb666d3274346c07dff655cb40cc5a ("Fix a bug in dm_btree_remove that could leave leaf values with incorrect reference counts").

Comment 1 Zdenek Kabelac 2013-03-07 08:13:48 UTC
Are we going to add 'workaround' support for this issue in the lvm2 tools - i.e. to disable the use of discard for thinp targets prior to some specific version?

If so - how do we detect a target with this bug? Will there be a version number increase?
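
For context, the kernel advertises a version for each device-mapper target, which is what a userspace check would have to key on - a sketch only; nothing here says which thin-pool target version actually carries the fix:

    # List the dm targets and versions known to the running kernel
    dmsetup targets | grep thin
    # illustrative output:
    #   thin-pool        v1.6.0
    #   thin             v1.6.0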