Bug 1006062

Summary: thin pool mda device swapping doesn't work
Product: Red Hat Enterprise Linux 6
Component: lvm2
Version: 6.5
Hardware: x86_64
OS: Linux
Status: CLOSED DUPLICATE
Severity: high
Priority: high
Target Milestone: rc
Keywords: Reopened
Reporter: Corey Marthaler <cmarthal>
Assignee: Zdenek Kabelac <zkabelac>
QA Contact: Cluster QE <mspqa-list>
CC: agk, dwysocha, heinzm, jbrassow, msnitzer, prajnoha, prockai, thornber, zkabelac
Doc Type: Bug Fix
Type: Bug
Clone Of: 973419
Bug Depends On: 973419
Last Closed: 2013-10-16 12:17:10 UTC

Comment 3 RHEL Program Management 2013-10-14 02:18:25 UTC
This request was not resolved in time for the current release.
Red Hat invites you to ask your support representative to
propose this request, if still desired, for consideration in
the next release of Red Hat Enterprise Linux.

Comment 4 Zdenek Kabelac 2013-10-16 12:17:10 UTC
I'm closing this one as a duplicate of Bug #1007074, which added a few more checks governing when metadata may be swapped.
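
For reference, the swap itself is done with lvconvert's --poolmetadata option; a minimal sketch, with vg/POOL and vg/newMDA as placeholder names:

  # swap vg/newMDA in as the pool's metadata LV; the previous
  # metadata LV is swapped out under the name newMDA
  lvconvert --thinpool vg/POOL --poolmetadata vg/newMDA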

It relates to this BZ as well, since if the thin pool is inactive, no activation of the swapped thin pool is performed.

However, if someone tries to activate a thin pool with incorrect metadata, they need to deal with the consequences, which currently may force the user to manually remove devices from the dm table.

So if 'newMDA' doesn't hold valid metadata, the user will need to swap valid metadata back into the snapper_thinp/POOL device, or else manually remove the stale entries from the dm table and vgcfgrestore a metadata version with no appearance of the POOL in it (thin data & metadata and thin volumes removed).
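
As a rough illustration of that manual cleanup (the dm device names and the archive file below are hypothetical; actual names follow the VG-LV pattern and can be listed with 'dmsetup ls'):

  # remove the pool's leftover dm devices by hand
  dmsetup remove snapper_thinp-POOL
  dmsetup remove snapper_thinp-POOL_tdata
  dmsetup remove snapper_thinp-POOL_tmeta
  # restore a VG metadata backup taken before the pool existed
  vgcfgrestore -f <archive-without-POOL>.vg snapper_thinp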

Swapping is only meant to be used for manual recovery of a damaged pool (which may possibly be stacked over RAID).
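
For context, the intended recovery flow looks roughly like this; a sketch assuming vg/tmp_meta and vg/fixed_meta are pre-created LVs at least as large as the pool metadata, and thin_check/thin_repair from the device-mapper-persistent-data tools:

  # with the pool inactive, swap the damaged metadata out into
  # a plain LV where it can be examined
  lvchange -an vg/POOL
  lvconvert --thinpool vg/POOL --poolmetadata vg/tmp_meta
  lvchange -ay vg/tmp_meta vg/fixed_meta
  # check the damaged metadata and write a repaired copy
  thin_check /dev/vg/tmp_meta
  thin_repair -i /dev/vg/tmp_meta -o /dev/vg/fixed_meta
  # swap the repaired copy back into the pool
  lvconvert --thinpool vg/POOL --poolmetadata vg/fixed_meta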

*** This bug has been marked as a duplicate of bug 1007074 ***