That old chestnut! If it's already active, the command has nothing to do, so should it therefore fail? Or is it enough to say that you wanted it active, it is active, so return success?
I'm not sure we're ever going to resolve this to everyone's satisfaction.
(BTW, remember that vgchange -a is a clustered command which acts symmetrically on all nodes unless 'l' is used. vgchange -aey means activate it exclusively on any one node, subject to any tag and lvm.conf constraints. We don't support '-aely' yet.)
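For illustration only (using a hypothetical VG name "vg0"), the distinction between those activation modes looks roughly like this:

  vgchange -aey vg0   # activate exclusively on whichever single node obtains the lock
  vgchange -aly vg0   # activate on the local node only
  vgchange -an vg0    # deactivate (clustered: acts on all nodes unless 'l' is used)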
The "0 LVs active" message only queries local LVs. We probably do have the infrastructure available now to include LVs active remotely in those totals now.
Alasdair, this is a regression. We ran this test throughout the RHEL6.0 process.
Here is the test output from the RHEL6.0-20100818.0 tree, which contained lvm2-2.02.72-8.el6.x86_64.
EXCLUSIVE VOLUME GROUP LOCKING
deactivating volume group
grabbing the exclusive lock on dash-01
attempting to also grab an exclusive lock on dash-02
Error locking on node dash-02: Volume is busy on another node
attempting to grab a non exclusive lock on dash-02
Error locking on node dash-02: Volume is busy on another node
Error locking on node dash-03: Volume is busy on another node
Error locking on node dash-01: Device or resource busy
attempting to also grab an exclusive lock on dash-03
Error locking on node dash-03: Volume is busy on another node
attempting to grab a non exclusive lock on dash-03
Error locking on node dash-03: Volume is busy on another node
Error locking on node dash-02: Volume is busy on another node
Error locking on node dash-01: Device or resource busy
releasing the exclusive lock on dash-01
Looking through my test logs shows that this behavior was fixed at some point during the release cycle. Testing against lvm2-2.02.83-3.el6.x86_64 passed this part of our tests.