Bug 677817 - vgchange returns success when exclusive activation fails
Summary: vgchange returns success when exclusive activation fails
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: LVM and device-mapper development team
QA Contact: Corey Marthaler
URL:
Whiteboard:
Depends On:
Blocks: 1191724
 
Reported: 2011-02-15 22:55 UTC by Nate Straz
Modified: 2015-02-26 10:38 UTC
CC List: 11 users

Fixed In Version: lvm2-2.02.83-3.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1191724 (view as bug list)
Environment:
Last Closed: 2011-05-03 14:56:23 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1196580 0 unspecified CLOSED [RFE] Add local exclusive activation to vgchange (-aeyl) 2021-02-22 00:41:40 UTC
Red Hat Bugzilla 1196585 0 unspecified CLOSED [RFE] Add local exclusive activation to vgchange (-aeyl) 2021-09-08 20:25:10 UTC

Internal Links: 1196580 1196585

Description Nate Straz 2011-02-15 22:55:51 UTC
Description of problem:

When an LV is activated exclusively on one node and another node then tries to activate it exclusively, the LV does not become active on the second node, but the vgchange command still returns success (exit status 0).
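To make the impact concrete, here is a hedged sketch (a hypothetical failover snippet; $VG and the LV name "data" are placeholders, not from this report) of how a caller that trusts the exit status is misled:

  #!/bin/sh
  # Hypothetical HA helper: trusts vgchange's exit status.
  # With this bug, the success branch runs even though the LV
  # never became active on this node.
  if vgchange -aey "$VG"; then
      mount "/dev/$VG/data" /mnt/data    # fails: LV not active here
  else
      echo "exclusive activation failed, not mounting" >&2
      exit 1
  fi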

Version-Release number of selected component (if applicable):
2.6.32-114.0.1.el6.x86_64

lvm2-2.02.83-2.el6    BUILT: Tue Feb  8 10:10:57 CST 2011
lvm2-libs-2.02.83-2.el6    BUILT: Tue Feb  8 10:10:57 CST 2011
lvm2-cluster-2.02.83-2.el6    BUILT: Tue Feb  8 10:10:57 CST 2011
udev-147-2.33.el6    BUILT: Wed Feb  9 09:56:24 CST 2011
device-mapper-1.02.62-2.el6    BUILT: Tue Feb  8 10:10:57 CST 2011
device-mapper-libs-1.02.62-2.el6    BUILT: Tue Feb  8 10:10:57 CST 2011
device-mapper-event-1.02.62-2.el6    BUILT: Tue Feb  8 10:10:57 CST 2011
device-mapper-event-libs-1.02.62-2.el6    BUILT: Tue Feb  8 10:10:57 CST 2011
cmirror-2.02.83-2.el6    BUILT: Tue Feb  8 10:10:57 CST 2011


How reproducible:
Every time

Steps to Reproduce:
1. On node A: vgchange -aye $VG
2. On node B: vgchange -aye $VG  <- this should return non-zero (see the sketch below)
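A consolidated reproduction sketch, assuming passwordless root ssh to both nodes (hostnames taken from the transcript below) and the clustered VG named in the actual results:

  VG=linear_9_5581
  ssh root@dash-01 "vgchange -aey $VG"                # exclusive activation succeeds
  ssh root@dash-02 "vgchange -aey $VG; echo rc=\$?"   # LV stays inactive, yet rc=0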

  
Actual results:
[root@dash-01 audit]# lvs
  LV             VG            Attr   LSize   Origin Snap%  Move Log Copy%  Conv
  linear_9_55810 linear_9_5581 -wima- 685.68g
  lv_home        vg_dash01     -wi-ao  31.87g
  lv_root        vg_dash01     -wi-ao  35.29g
  lv_swap        vg_dash01     -wi-ao   6.86g

[root@dash-02 audit]# lvs
  LV             VG            Attr   LSize   Origin Snap%  Move Log Copy%  Conv
  linear_9_55810 linear_9_5581 -wim-- 685.68g
  lv_home        vg_dash02     -wi-ao  31.87g
  lv_root        vg_dash02     -wi-ao  35.29g
  lv_swap        vg_dash02     -wi-ao   6.86g
[root@dash-02 audit]# vgchange -aye linear_9_5581; echo $?
  0 logical volume(s) in volume group "linear_9_5581" now active
0

Expected results:
vgchange should return non-zero
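Until a fix lands, a hedged workaround sketch: verify activation explicitly instead of trusting the exit status. In the lvs attr string the fifth character is 'a' for an LV that is active on the local node, so a caller can count locally active LVs after the vgchange:

  vgchange -aey "$VG"
  active=$(lvs --noheadings -o lv_attr "$VG" | awk 'substr($1,5,1)=="a"' | wc -l)
  if [ "$active" -eq 0 ]; then
      echo "no LV in $VG became active locally" >&2
      exit 1
  fi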

Additional info:

Comment 3 Alasdair Kergon 2011-02-16 01:24:45 UTC
That old chestnut!  If it's already active, the command has nothing to do, so should it therefore fail?  Or is it enough to say that you wanted it active and it is active, so return success?

I'm not sure we're ever going to resolve this to everyone's satisfaction.

Comment 4 Alasdair Kergon 2011-02-16 01:30:25 UTC
(BTW Remember that vgchange -a is a clustered command which acts symmetrically on all nodes unless 'l' is used.  vgchange -aey means activate it exclusively on any one node, subject to any tag and lvm.conf constraints.  We don't support '-aely' yet.)
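To make the modes described here concrete, a short hedged illustration (behavior as stated in this comment, not re-verified against every lvm2 build):

  vgchange -ay  $VG    # clustered: activate symmetrically on all nodes
  vgchange -aly $VG    # 'l': restrict the command to the local node
  vgchange -aey $VG    # 'e': activate exclusively on any one node
  # 'vgchange -aely' (local + exclusive) is not yet supported, per this comment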

Comment 5 Alasdair Kergon 2011-02-16 01:32:56 UTC
The "0 LVs active" message only queries local LVs.  We probably do have the infrastructure available now to include LVs active remotely in those totals now.

Comment 7 Nate Straz 2011-02-16 13:47:38 UTC
Alasdair, this is a regression.  We ran this test throughout the RHEL6.0 process.

Here is the test output from the RHEL6.0-20100818.0 tree which contained lvm2-2.02.72-8.el6.x86_64.

EXCLUSIVE VOLUME GROUP LOCKING
deactivating volume group
grabing the exclusive lock on dash-01
attempting to also grab an exclusive lock on dash-02
  Error locking on node dash-02: Volume is busy on another node
attempting to grab a non exclusive lock on dash-02
  Error locking on node dash-02: Volume is busy on another node
  Error locking on node dash-03: Volume is busy on another node
  Error locking on node dash-01: Device or resource busy
attempting to also grab an exclusive lock on dash-03
  Error locking on node dash-03: Volume is busy on another node
attempting to grab a non exclusive lock on dash-03
  Error locking on node dash-03: Volume is busy on another node
  Error locking on node dash-02: Volume is busy on another node
  Error locking on node dash-01: Device or resource busy
releasing the exclusive lock on dash-01

Comment 8 Tom Coughlan 2011-03-21 23:06:33 UTC
Does anyone know why this behavior appears to have changed between 6.0 and 6.1?

Comment 12 Nate Straz 2011-04-28 20:01:41 UTC
Looking through my test logs shows that this behavior was fixed at some point in the release.  Testing against lvm2-2.02.83-3.el6.x86_64 passed this part of our tests.

Comment 13 Milan Broz 2011-05-03 09:22:42 UTC
Nate, do I read comment #12 correctly that it is in fact fixed in current 6.1?

