Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September as per pre-agreed dates. Bugs against components "kernel", "kernel-rt", and "kpatch" are only migrated if still in "NEW" or "ASSIGNED".

If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry; the e-mail creates a ServiceNow ticket with Red Hat.

Individual Bugzilla bugs that are migrated will be moved to status "CLOSED", resolution "MIGRATED", and set with "MigratedToJIRA" in "Keywords". The link to the successor Jira issue will be found under "Links", have a little "two-footprint" icon next to it, and direct you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). The same link will be available in a blue banner at the top of the page informing you that the bug has been migrated.

Bug 1124766

Summary: Wrong error displayed when trying to exclusively deactivate already inactive LV on a single host (non-clustered)
Product: Red Hat Enterprise Linux 6 Reporter: Nenad Peric <nperic>
Component: lvm2   Assignee: Peter Rajnoha <prajnoha>
lvm2 sub component: Activating existing Logical Volumes (RHEL6) QA Contact: Cluster QE <mspqa-list>
Status: CLOSED ERRATA Docs Contact:
Severity: unspecified    
Priority: unspecified CC: agk, heinzm, jbrassow, msnitzer, prajnoha, prockai, tlavigne, zkabelac
Version: 6.6   
Target Milestone: rc   
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: lvm2-2.02.109-2.el6 Doc Type: Bug Fix
Doc Text:
Cause: An incorrect check of the cluster status of a non-clustered snapshot origin LV before local deactivation (-aln). Consequence: An incorrect error message was issued: "Cannot deactivate remotely exclusive device locally." (newer versions) or just "Cannot deactivate <lv name> locally." (older versions) when the snapshot origin LV was non-clustered and already deactivated, and a local deactivation (-aln) was attempted. Fix: The check done before local deactivation was fixed to apply only to clustered LVs. Result: The incorrect error message "Cannot deactivate ... locally" is no longer displayed.
Story Points: ---
Clone Of: Environment:
Last Closed: 2014-10-14 08:25:35 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:

Description Nenad Peric 2014-07-30 09:23:17 UTC
Description of problem:

When trying to deactivate an LV which is currently inactive on a single-host server (non-clustered environment), the error displayed is ambiguous.


Version-Release number of selected component (if applicable):

lvm2-2.02.108-1.el6.x86_64

How reproducible:

Every time

Steps to Reproduce:

[root@tardis-01 ~]# lvchange -aln multi/snap_of_simple
Change of snapshot snap_of_simple will also change its origin simple. Proceed? [y/n]: y
  Cannot deactivate remotely exclusive device locally.


Actual results:

LVM complains that it cannot deactivate a REMOTELY exclusive device even though no remote server exists. This is confusing.

Expected results:

LVM should report that the mentioned LV is already inactive and thus cannot be deactivated again.

Comment 2 Peter Rajnoha 2014-08-07 14:49:41 UTC
The check for whether the VG is clustered was simply missing - the remote-exclusivity check doesn't make sense when the VG is not clustered. Patched with:

https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=c52c9a1e316b6a92a2475dfe3ad2aac92edc80c0

Comment 5 Nenad Peric 2014-08-20 12:08:13 UTC
[root@tardis-01 ~]# lvchange -aln new_vg/snap_of_simple
Change of snapshot snap_of_simple will also change its origin simple. Proceed? [y/n]: y
[root@tardis-01 ~]# 


Marking VERIFIED with:

lvm2-2.02.109-2.el6    BUILT: Tue Aug 19 16:32:25 CEST 2014
lvm2-libs-2.02.109-2.el6    BUILT: Tue Aug 19 16:32:25 CEST 2014
lvm2-cluster-2.02.109-2.el6    BUILT: Tue Aug 19 16:32:25 CEST 2014
udev-147-2.57.el6    BUILT: Thu Jul 24 15:48:47 CEST 2014
device-mapper-1.02.88-2.el6    BUILT: Tue Aug 19 16:32:25 CEST 2014
device-mapper-libs-1.02.88-2.el6    BUILT: Tue Aug 19 16:32:25 CEST 2014
device-mapper-event-1.02.88-2.el6    BUILT: Tue Aug 19 16:32:25 CEST 2014
device-mapper-event-libs-1.02.88-2.el6    BUILT: Tue Aug 19 16:32:25 CEST 2014
device-mapper-persistent-data-0.3.2-1.el6    BUILT: Fri Apr  4 15:43:06 CEST 2014
cmirror-2.02.109-2.el6    BUILT: Tue Aug 19 16:32:25 CEST 2014

Comment 6 errata-xmlrpc 2014-10-14 08:25:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-1387.html