Bug 445022 - lvremove briefly activates (mirror) volumes, only to deactivate them again before removing
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: lvm2-cluster
Version: 5.2
Hardware: All
OS: Linux
Priority: low
Severity: low
Target Milestone: rc
Target Release: ---
Assigned To: Milan Broz
QA Contact: Corey Marthaler
Depends On:
Blocks:
 
Reported: 2008-05-02 15:13 EDT by Jonathan Earl Brassow
Modified: 2013-02-28 23:06 EST
CC List: 9 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2009-09-02 07:57:50 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Jonathan Earl Brassow 2008-05-02 15:13:56 EDT
I haven't checked if this is necessary or not yet...

If you run 'vgchange -an vg' and then 'lvremove -ff vg', lvremove re-activates the mirrors,
then deactivates them and removes them... seems redundant.
Comment 1 Dave Wysochanski 2008-05-13 11:35:28 EDT
The reason for this is to ensure the LV is not active on other nodes before
removing it.  To find out whether it is safe to remove, we must try to activate
the LV exclusively.  Currently this is the only way we can know it is safe to
remove.

I think we all agree this is not ideal but AFAIK this is the best we can do for now.

Here's the snippet of the logic we currently use from lv_remove_single():

		/*
		 * Check for confirmation prompts in the following cases:
		 * 1) Clustered VG, and some remote nodes have the LV active
		 * 2) Non-clustered VG, but LV active locally
		 */
		if (vg_is_clustered(vg) && !activate_lv_excl(cmd, lv) &&
		    (force == PROMPT)) {
			if (yes_no_prompt("Logical volume \"%s\" is active on other "
					  "cluster nodes.  Really remove? [y/n]: ",
					  lv->name) == 'n') {
				log_print("Logical volume \"%s\" not removed",
					  lv->name);
				return 0;
			}
		} else if (info.exists && (force == PROMPT)) {
			 if (yes_no_prompt("Do you really want to remove active "
					   "logical volume \"%s\"? [y/n]: ",
					   lv->name) == 'n') {
				log_print("Logical volume \"%s\" not removed",
					  lv->name);
				return 0;
			 }
		}
	}
Comment 2 Milan Broz 2009-05-27 11:22:43 EDT
I think this was already fixed in the code; it should be in lvm2-cluster-2.02.46-1.el5.
Comment 3 Alasdair Kergon 2009-05-27 14:26:45 EDT
For reference, the fix came from adding a new cluster-wide query mechanism so we can now find out if an LV is active anywhere without needing to activate it.
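
For illustration only, here is a minimal sketch of what the reworked check in lv_remove_single() might look like under that approach. It assumes a hypothetical helper, lv_is_active_remotely(), standing in for whatever query call the new cluster-wide lock-query mechanism actually exposes; the real function names in lvm2 may differ.

		/*
		 * Hypothetical sketch only: instead of calling
		 * activate_lv_excl(), which activates the LV as a side
		 * effect, ask the cluster locking layer whether any remote
		 * node currently has the LV active.
		 * lv_is_active_remotely() is a placeholder name for that query.
		 */
		if (vg_is_clustered(vg) && lv_is_active_remotely(lv) &&
		    (force == PROMPT)) {
			if (yes_no_prompt("Logical volume \"%s\" is active on other "
					  "cluster nodes.  Really remove? [y/n]: ",
					  lv->name) == 'n') {
				log_print("Logical volume \"%s\" not removed",
					  lv->name);
				return 0;
			}
		}

With a query like this, lvremove can prompt (or refuse) without ever activating the volume, which is what removes the brief activate/deactivate cycle reported above.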
Comment 6 Corey Marthaler 2009-07-01 12:15:23 EDT
Fix verified in lvm2-2.02.46-8.el5.

OLD lvm2 version:
# vgchange -an $vg
Jul  1 10:54:13 grant-02 lvm[14626]: No longer monitoring mirror device grant-mirror for events 

# lvremove -ff $vg
Jul  1 10:54:41 grant-02 lvm[14626]: Monitoring mirror device grant-mirror for events 
Jul  1 10:54:41 grant-02 lvm[14626]: grant-mirror is now in-sync 
Jul  1 10:54:41 grant-02 lvm[14626]: No longer monitoring mirror device grant-mirror for events 


NEW lvm2 version:
# vgchange -an $vg
Jul  1 11:12:56 grant-01 lvm[9879]: No longer monitoring mirror device grant-mirror for events 

# lvremove -ff $vg
Comment 8 errata-xmlrpc 2009-09-02 07:57:50 EDT
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2009-1394.html
