Bug 707014 - vgreduce --removemissing --force is activating some lvs
Product: Fedora
Classification: Fedora
Component: lvm2
Assigned To: LVM and device-mapper development team
QA Contact: Fedora Extras Quality Assurance
Reported: 2011-05-23 14:36 EDT by David Lehman
Modified: 2014-10-06 10:32 EDT (History)
Doc Type: Bug Fix
Last Closed: 2014-10-06 10:22:11 EDT

Attachments:
logs of lvm output from vgreduce and vgchange -an (19.04 KB, text/plain)
2011-05-26 13:53 EDT, David Lehman

Description David Lehman 2011-05-23 14:36:56 EDT
Description of problem:
When the disk containing one or more PVs is removed and you then run 'vgreduce --removemissing --force $vgname', it appears that any LV whose PEs all reside on the still-present disks' PVs gets activated. I can't see any reason why vgreduce should be activating any LVs at all.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Do a default anaconda install with two empty disks
2. Remove the second disk
3. Run vgreduce --removemissing --force $vgname
Actual results:
The command will complete, but then you have an active $vgname-lv_swap, which makes absolutely no sense.

Expected results:
No unexpectedly activated LVs since I didn't run any commands that should activate any LVs.

Additional info:
Comment 1 Peter Rajnoha 2011-05-26 06:16:53 EDT
I tried to reproduce this, but all LVs remain deactivated as expected. Can you post the -vvvv log plus the lvm.conf file so we can have a look at what's happening in your case? Thanks.
Comment 2 David Lehman 2011-05-26 13:49:42 EDT
I am trying to get this output for you, but apparently adding more 'v's leads to new deadlocks/hangs. I will attach a file containing the output with -vv, but with -vvv or -vvvv I have yet to get the vgreduce command to complete.
Comment 3 David Lehman 2011-05-26 13:53:36 EDT
Created attachment 501152
logs of lvm output from vgreduce and vgchange -an

The vgchange command is what we have added to work around failures in the following vgremove call. If you like I can remove it to demonstrate the original failure.
Comment 4 Milan Broz 2011-05-26 15:11:22 EDT
See logs:

16:51:08,311 ERR program:       VolGroup/lv_swap is active exclusive locally

16:51:08,314 ERR program:   Unable to deactivate open VolGroup-lv_swap (253:0)

This means that VolGroup-lv_swap was activated and in use before vgreduce ran.

Isn't there some magic "activate all swaps" before the command?
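Two quick ways to check for such magic before running vgreduce (a sketch; /proc/swaps and the systemctl call are standard Linux/systemd interfaces, not something this bug's logs confirm were used):

```shell
# List currently active swap devices; empty output means nothing
# has swapped on lv_swap yet.
cat /proc/swaps

# If systemd is present, list any .swap units it manages -- a generated
# swap unit could re-activate the LV behind the user's back.
systemctl list-units --type=swap --no-legend 2>/dev/null || true
```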
Comment 5 David Lehman 2011-05-26 17:06:16 EDT
No, there's nothing like that. /proc/swaps was empty.
Comment 6 Jonathan Earl Brassow 2011-05-26 17:25:21 EDT
Could you add either an 'lvs -a' or a 'dmsetup status' before the vgreduce is run, so we can see whether the volume was active and open beforehand?
Comment 7 Milan Broz 2011-05-26 17:50:56 EDT
Can you please paste the output of "dmsetup table" before the vgreduce is run?

Even better, the output of lsblk both before and after, so we can prove whether vgreduce activated something.
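A sketch of that kind of before/after capture (the capture_state wrapper and the /tmp paths are illustrative; dmsetup, lsblk, and vgreduce are the real tools):

```shell
# Snapshot the set of active device-mapper devices and block devices,
# sorted so the two snapshots diff cleanly.
capture_state() {
    { dmsetup ls 2>/dev/null; lsblk -rno NAME,TYPE 2>/dev/null; } | sort
}

capture_state > /tmp/state.before
# vgreduce --removemissing --force "$vgname"   # uncomment: the command under suspicion
capture_state > /tmp/state.after

# Any line prefixed with '>' is a device that appeared in between --
# i.e. evidence that vgreduce activated something.
diff /tmp/state.before /tmp/state.after && echo "no new devices"
```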

I can easily prepare the same configuration, and vgreduce of course does not activate anything.

BTW, after "vgreduce --removemissing --force $vgname", $vgname is repaired and lv_swap is still available (but inactive) because it was on a still-existing PV.
Is that really what you want to achieve here?
Comment 8 David Lehman 2011-05-27 11:59:32 EDT
Milan, I think you are right -- this seems to be caused by some systemd magic. Once I confirm I will close this bug. Sorry for the noise.
Comment 9 Fedora End Of Life 2013-04-03 11:59:56 EDT
This bug appears to have been reported against 'rawhide' during the Fedora 19 development cycle.
Changing version to '19'.

(As we had not run this process for some time, it may also affect pre-Fedora 19 development-cycle
bugs. We are very sorry. This will help us with cleanup during Fedora 19 End Of Life. Thank you.)

More information and the reason for this action can be found here:
