Description (Fabio Massimo Di Nitto, 2013-09-18 09:02:09 UTC)
Description of problem:
Regression from RHEL 6: it is impossible to use clvmd in RHEL 7 when cluster nodes are offline.
Version-Release number of selected component (if applicable):
lvm2-cluster-2.02.99-1.el7.x86_64
How reproducible:
always
Steps to Reproduce:
1. Start a 2-node cluster with clvmd and create a simple clustered VG/LV (a sketch of the setup commands follows this list).
2. Cleanly shut down one node (poweroff is fine), for example:
[root@rhel7-ha-node2 ~]# systemctl stop corosync
3. Verify that the cluster node has left the membership (important bit; see the check sketched after the list).
4. Try to remove the clustered LV.
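A minimal sketch of steps 1 and 3, using the cluster_vg/cluster_lv names from the output below; the physical volume /dev/sdb and the LV size are assumptions, not taken from the report:

# step 1: create a clustered VG and a simple LV in it
pvcreate /dev/sdb
vgcreate --clustered y cluster_vg /dev/sdb
lvcreate -L 1G -n cluster_lv cluster_vg

# step 3: confirm the stopped node has really left the corosync membership
corosync-quorumtool -l   # lists the current members
pcs status               # the stopped node should appear under "OFFLINE:"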
Actual results:
[root@rhel7-ha-node1 ~]# lvremove /dev/cluster_vg/cluster_lv
Do you really want to remove active clustered logical volume cluster_lv? [y/n]: y
cluster request failed: Host is down
Unable to deactivate logical volume "cluster_lv"
cluster request failed: Host is down
The LV is not removed.
Expected results:
Similar behaviour to RHEL 6:
[root@rhel6-ha-node1 ~]# lvremove /dev/cluster_vg/cluster_lv
Do you really want to remove active clustered logical volume cluster_lv? [y/n]: y
Logical volume "cluster_lv" successfully removed
Additional info:
Comment 3 (Fabio Massimo Di Nitto, 2013-09-18 10:45:05 UTC)
At agk's request:
It is not a regression within 6.5; it is a regression observed between RHEL 6.* and RHEL 7.
lvm.conf is the default, with the only exception of locking_type set to 3.
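For reference, a minimal sketch of that change in lvm.conf (everything else left at its default):

global {
    # 3 = built-in clustered locking through clvmd
    locking_type = 3
}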
Comment 4 (Fabio Massimo Di Nitto, 2013-09-18 11:12:45 UTC)
As long as the cluster is quorate, there are no issues removing the clustered LV.
Tested and verified with lvm2-2.02.103-5.el7:
[root@virt-002 pacemaker]# lvremove clustered/mirror
Do you really want to remove active clustered logical volume mirror? [y/n]: y
Logical volume "mirror" successfully removed
[root@virt-002 pacemaker]# pcs status
Cluster name: STSRHTS10638
Last updated: Wed Nov 20 15:20:31 2013
Last change: Wed Nov 20 14:41:21 2013 via cibadmin on virt-002.cluster-qe.lab.eng.brq.redhat.com
Stack: corosync
Current DC: virt-002.cluster-qe.lab.eng.brq.redhat.com (1) - partition with quorum
Version: 1.1.10-20.el7-368c726
3 Nodes configured
1 Resources configured
Online: [ virt-002.cluster-qe.lab.eng.brq.redhat.com ]
OFFLINE: [ virt-003.cluster-qe.lab.eng.brq.redhat.com virt-004.cluster-qe.lab.eng.brq.redhat.com ]
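For completeness, whether the partition is quorate can also be checked directly from corosync rather than through pcs (a sketch; output not reproduced here):

corosync-quorumtool   # prints the votequorum state, including whether the partition is quorate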
This request was resolved in Red Hat Enterprise Linux 7.0.
Contact your manager or support representative if you have further questions about the request.