Bug 1889354

Summary: Regression in vgextend after dropping PV from a VG
Product: Red Hat Enterprise Linux 8
Reporter: Zdenek Kabelac <zkabelac>
Component: lvm2
Assignee: David Teigland <teigland>
lvm2 sub component: Command-line tools
QA Contact: cluster-qe <cluster-qe>
Status: CLOSED WONTFIX
Severity: unspecified
Priority: medium
CC: agk, heinzm, jbrassow, msnitzer, prajnoha, teigland, thornber, zkabelac
Version: 8.4
Target Milestone: rc
Target Release: 8.0
Hardware: Unspecified
OS: Unspecified
Type: Bug
Last Closed: 2022-04-19 07:27:29 UTC

Description Zdenek Kabelac 2020-10-19 13:01:49 UTC
In short, I think the easiest description of the problem is in 'test-suite' words:

aux prepare_vg 3
aux disable_dev "$dev3"
vgreduce --removemissing --force $vg
aux enable_dev "$dev3"

# The following command does not work
vgextend $vg "$dev3"

So after dropping a PV from a VG, when that PV reappears (the VG having already dropped its reference to it), the PV cannot easily be put back into the VG.

However, in this case 'vgextend --restoremissing' suddenly works, although there was no MISSING PV in the VG.

Current lvm2 (2.03.10git) reports this issue:
..........
 Physical volume '/dev/...pv3' is already in volume group 'vg'
 Unable to add physical volume '/dev/...pv3' to volume group 'vg'
 /dev/...pv3: physical volume not initialized.
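For context, one manual workaround would presumably be to force-wipe the stale PV label before re-adding the device (an untested sketch, not part of the original report; "$dev3" and $vg as in the test snippet above):

```shell
# Sketch only: force-recreate the PV label, discarding the stale VG
# metadata left on the reappeared device.
pvcreate -ff -y "$dev3"
# With the old metadata gone, the device looks like a fresh PV again:
vgextend $vg "$dev3"
```

This is exactly the kind of extra manual step the report argues should not be necessary.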

Comment 1 David Teigland 2020-10-19 16:53:17 UTC
> aux prepare_vg 3
> aux disable_dev "$dev3"
> vgreduce --removemissing --force $vg
> aux enable_dev "$dev3"
> 
> # The following command does not work
> vgextend $vg "$dev3"
> 
> So after dropping a PV from a VG, when that PV reappears (the VG having
> already dropped its reference to it), the PV cannot easily be put back
> into the VG.

This is because dev3 still contains old VG metadata.  It doesn't seem safe for vgextend to just ignore the metadata on dev3 and treat it like a new PV.  If vgextend were to treat dev3 like a new PV then any user data on dev3 would be lost.  pvs describes the state of dev3 along with a possible solution:

$ pvs
  WARNING: ignoring metadata seqno 6 on /dev/loop2 for seqno 7 on /dev/loop1 for VG foo.
  WARNING: Inconsistent metadata found for VG foo.
  See vgck --updatemetadata to correct inconsistency.
  WARNING: outdated PV /dev/loop2 seqno 6 has been removed in current VG foo seqno 7.
  See vgck --updatemetadata to clear outdated metadata.
  PV         VG           Fmt  Attr PSize   PFree  
  /dev/loop0 foo          lvm2 a--    4.00m      0 
  /dev/loop1 foo          lvm2 a--    4.00m      0 
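Following the hints printed by pvs, the recovery sequence would look roughly like this (a sketch based on the messages above, reusing the same VG name and loop device; not verified here):

```shell
# Clear the outdated seqno-6 metadata that /dev/loop2 still carries:
vgck --updatemetadata foo
# The reappeared device can then be re-added as a fresh PV:
vgextend foo /dev/loop2
```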


> However - in this case suddenly works 'vgextend --restoremissing' - although
> there was no MISSING PV in a VG.

I don't get that, I see the same messages that pvs reported.

Comment 2 Zdenek Kabelac 2020-10-19 17:04:57 UTC
Old metadata should be ignored: since newer metadata with a higher sequence number is present, any older copy is meant to be ignored by design.
So 'dev3' should be seen as unused when it reappears in the system, as the VG has already been reduced by this PV.

The only case worth reporting an error for is when the user has updated the lost device in parallel, so we would see two copies of a VG with the same
unique UUID and the same 'seqno' but different metadata.  In that case we cannot decide which copy is 'correct'.
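The decision rule sketched above can be written out as plain shell (a simplified illustration of the argument, not lvm2 code; the seqno and checksum values are hypothetical):

```shell
#!/bin/sh
# Classify metadata found on a reappeared PV relative to the live VG copy.
# vg_seqno/vg_cksum: seqno and checksum of the current VG metadata.
# pv_seqno/pv_cksum: seqno and checksum of the copy found on the PV.
classify_pv_metadata() {
    vg_seqno=$1; pv_seqno=$2; vg_cksum=$3; pv_cksum=$4
    if [ "$pv_seqno" -lt "$vg_seqno" ]; then
        echo outdated        # older copy: ignored by design
    elif [ "$pv_seqno" -gt "$vg_seqno" ]; then
        echo newer           # the PV holds a later copy than the live VG
    elif [ "$vg_cksum" = "$pv_cksum" ]; then
        echo current         # same seqno, same content
    else
        echo conflict        # same seqno, different metadata: undecidable
    fi
}
```

For example, a PV carrying seqno 6 while the VG is at seqno 7 classifies as 'outdated', which matches the pvs warning quoted in comment 1; only the 'conflict' case is genuinely undecidable.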

Comment 3 Zdenek Kabelac 2020-10-19 17:21:31 UTC
And yep, actually --restoremissing gives the same error; it must have been some other error in the script.

So the main regression is that vgextend should not require an extra vgck step, since it is OK to reuse a PV that is unused at this moment.
It just adds unnecessary steps for the user when the tool already knows what needs to be done.

Comment 4 David Teigland 2020-10-19 17:29:52 UTC
Ignoring old metadata shouldn't be difficult if that's what we wanted.  But it still seems that could lead to losing user data, no?  In that case it seems safer to require the user to make a choice about it.

Comment 5 Zdenek Kabelac 2020-10-19 17:49:52 UTC
I don't see a real danger of data loss here on the reappearance of a reduced PV: it must have been the user himself who reduced the VG, and the newer metadata does not refer to the PV, so the PV should simply be ignored and always treated as unused.

And on the other hand, it could later unnecessarily complicate things and rather confuse the user when he tries to reuse 'the lost and reduced PV' again for something else, e.g. during repair of some raid configuration (although we still have very 'weak' logic on the raid repair side).

Comment 6 David Teigland 2020-10-19 18:27:58 UTC
> I don't see a real danger of data loss here on the reappearance of a
> reduced PV: it must have been the user himself who reduced the VG, and
> the newer metadata does not refer to the PV, so the PV should simply be
> ignored and always treated as unused.

Yeah, based on strict command logic I can see what you mean, but from a user perspective I don't think it's quite so obvious.

The user may run vgreduce to put the VG into a nominally usable state, and may not fully agree to erasing all of the data on the disconnected PV.
The disconnected PV may contain numerous entire file systems full of user data, and I don't think silently erasing all of that data can be easily justified by pointing at a prior vgreduce, run when that PV wasn't present.  An extra user step addressing the reappeared PV seems to me like the minimum we can do to preserve any important data on it.

Comment 7 Zdenek Kabelac 2020-10-19 18:38:07 UTC
I think it needs to be said here: if the user ever counts on using the 'lost PV' again, he simply never runs 'vgreduce --removemissing'.
Secondly, if there is any allocated content, he even has to use '--force' to allow the removal of the associated LVs to proceed.

So if there is a 'mixture' of LVs that partially use the reduced PV, they are already gone from the VG when the PV reappears.

So IMHO there is almost never a correct condition for reusing the content of a reappeared PV; the only use-case that comes close to being useful is
when the PV would contain a standalone LV.

In this case I can probably see some 'minor' use-case for maybe 'prompting' the user that the content of the reappeared PV will be lost
when it is reattached to the VG.

But still, the operation should pass and should not require complicated 'checking': the user has done everything
correctly, so he should not be bothered with some sort of extra fsck-like operation.  That is IMHO the case when he correctly
executed 'vgreduce --removemissing' and later wants to use such a PV for something else.
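The 'prompting' behaviour suggested above could be sketched roughly like this (a hypothetical helper, not lvm2 code; the device name is made up):

```shell
#!/bin/sh
# Ask before wiping a reappeared PV that still carries old VG metadata.
# Returns 0 (proceed) only on an explicit 'y' answer.
confirm_reuse() {
    dev=$1
    printf "WARNING: %s carries metadata of a VG it was removed from.\n" "$dev"
    printf "Reuse it and destroy its old content? [y/n]: "
    read -r reply
    [ "$reply" = "y" ]
}
```

Real lvm2 prompts also honour -y/-q and related options, so an actual implementation would hook into that machinery rather than reading stdin directly.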

Comment 10 RHEL Program Management 2022-04-19 07:27:29 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.