Bug 707014

Summary: vgreduce --removemissing --force is activating some lvs
Product: Fedora
Component: lvm2
Version: 19
Status: CLOSED CANTFIX
Severity: unspecified
Priority: unspecified
Hardware: Unspecified
OS: Unspecified
Reporter: David Lehman <dlehman>
Assignee: LVM and device-mapper development team <lvm-team>
QA Contact: Fedora Extras Quality Assurance <extras-qa>
CC: agk, bmarzins, bmr, dwysocha, heinzm, jbrassow, jonathan, lvm-team, msnitzer, prajnoha, prockai, zkabelac
Doc Type: Bug Fix
Target Milestone: ---
Target Release: ---
Last Closed: 2014-10-06 14:22:11 UTC

Attachments:
logs of lvm output from vgreduce and vgchange -an

Description David Lehman 2011-05-23 18:36:56 UTC
Description of problem:
When a disk containing one or more PVs is removed and you then run 'vgreduce --removemissing --force $vgname', it appears that any LVs whose PEs are all on the still-present PVs get activated. I can't see any reason why vgreduce should be activating any LVs at all.

Version-Release number of selected component (if applicable):
lvm2-2.02.84-1.fc15

How reproducible:
Always

Steps to Reproduce:
1. Do a default anaconda install with two empty disks
2. Remove the second disk
3. Run vgreduce --removemissing --force $vgname
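
Outside anaconda, a rough approximation of these steps might look like the following (a sketch only; disk names, sizes, and the VG/LV names are illustrative, not taken from the actual installer setup):

  # assumption: sdb and sdc are spare disks; names are illustrative
  pvcreate /dev/sdb /dev/sdc
  vgcreate VolGroup /dev/sdb /dev/sdc
  lvcreate -L 1G -n lv_swap VolGroup /dev/sdb   # keep all PEs on the first disk
  vgchange -an VolGroup                         # nothing should be active at this point
  echo 1 > /sys/block/sdc/device/delete         # simulate removing the second disk
  vgreduce --removemissing --force VolGroup
  lvs -o name,lv_attr VolGroup                  # check whether anything became active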
  
Actual results:
The command completes, but afterward $vgname-lv_swap is active, which makes absolutely no sense.

Expected results:
No unexpectedly activated LVs since I didn't run any commands that should activate any LVs.

Additional info:

Comment 1 Peter Rajnoha 2011-05-26 10:16:53 UTC
I tried to reproduce this, but all LVs remain deactivated as expected. Can you post the -vvvv log and your lvm.conf so we can have a look at what's happening in your case? Thanks.

Comment 2 David Lehman 2011-05-26 17:49:42 UTC
I am trying to get this output for you, but apparently adding more 'v's leads to new deadlocks/hangs. I will attach a file containing the output with -vv, but with -vvv or -vvvv I have yet to get the vgreduce command to complete.

Comment 3 David Lehman 2011-05-26 17:53:36 UTC
Created attachment 501152 [details]
logs of lvm output from vgreduce and vgchange -an

The vgchange command is what we have added to work around failures in the following vgremove call. If you like I can remove it to demonstrate the original failure.
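
For context, the workaround sequence is roughly the following (a sketch; the VG name and the exact invocation in our code may differ):

  vgreduce --removemissing --force VolGroup
  vgchange -an VolGroup      # work around the unexpectedly active LVs
  vgremove --force VolGroup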

Comment 4 Milan Broz 2011-05-26 19:11:22 UTC
See logs:

16:51:08,311 ERR program:       VolGroup/lv_swap is active exclusive locally
...

16:51:08,314 ERR program:   Unable to deactivate open VolGroup-lv_swap (253:0)

This means that VolGroup-lv_swap was activated and is in use before vgreduce run.

Isn't there some magic "activate all swaps" before the command?
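
One way to confirm whether the LV is already held open before vgreduce runs (a sketch; the device name is taken from the log excerpt above) would be:

  dmsetup info VolGroup-lv_swap    # "Open count" > 0 means something is using it
  cat /proc/swaps                  # shows whether it is actually swapped on
  lvs -o name,lv_attr VolGroup     # attr field shows active/open state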

Comment 5 David Lehman 2011-05-26 21:06:16 UTC
No, there's nothing like that. /proc/swaps was empty.

Comment 6 Jonathan Earl Brassow 2011-05-26 21:25:21 UTC
Could you add either an 'lvs -a' or 'dmsetup status' before the vgreduce is run, so we can see whether the volume was active and open beforehand?

Comment 7 Milan Broz 2011-05-26 21:50:56 UTC
Can you please paste "dmsetup table" output from before the vgreduce is run? Or, even better, the output of lsblk from both before and after vgreduce, so we can prove whether vgreduce actually activated something.
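
Something along these lines would capture the relevant state (a sketch; file locations are illustrative, adjust to whatever fits the installer environment):

  lsblk         > /tmp/lsblk.before
  dmsetup table > /tmp/dmsetup.before
  vgreduce --removemissing --force $vgname
  lsblk         > /tmp/lsblk.after
  dmsetup table > /tmp/dmsetup.after
  diff -u /tmp/lsblk.before /tmp/lsblk.after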

I can easily prepare the same configuration here, and vgreduce of course does not activate anything.

BTW after "vgreduce --removemissing --force $vgname" is $vgname reapaired and lv_swap still available (but inactive) because it was on still existing PV.
Is it really what do you want to achieve here?

Comment 8 David Lehman 2011-05-27 15:59:32 UTC
Milan, I think you are right -- this seems to be caused by some systemd magic. Once I confirm I will close this bug. Sorry for the noise.

Comment 9 Fedora End Of Life 2013-04-03 15:59:56 UTC
This bug appears to have been reported against 'rawhide' during the Fedora 19 development cycle.
Changing version to '19'.

(As we did not run this process for some time, it could also affect pre-Fedora 19 development cycle bugs. We are very sorry. This will help us with cleanup during the Fedora 19 End Of Life. Thank you.)

More information and reason for this action is here:
https://fedoraproject.org/wiki/BugZappers/HouseKeeping/Fedora19