Bug 1009341
Summary: clvmd no longer works when nodes are offline
Product: Red Hat Enterprise Linux 7
Reporter: Fabio Massimo Di Nitto <fdinitto>
Component: lvm2
Assignee: LVM and device-mapper development team <lvm-team>
lvm2 sub component: Default / Unclassified
QA Contact: cluster-qe <cluster-qe>
Status: CLOSED CURRENTRELEASE
Severity: urgent
Priority: urgent
CC: agk, cmarthal, heinzm, jbrassow, msnitzer, nperic, prajnoha, prockai, thornber, zkabelac
Version: 7.0
Keywords: Regression, Triaged
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Fixed In Version: lvm2-2.02.102-1.el7
Doc Type: Bug Fix
Last Closed: 2014-06-13 10:50:18 UTC
Type: Bug
Description
Fabio Massimo Di Nitto
2013-09-18 09:02:09 UTC
At agk's request: this is not a regression within 6.5; it is a regression observed between RHEL 6.* and RHEL 7. lvm.conf is the default, with the single exception of locking_type set to 3.

Created attachment 799327 [details]
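For reference, the one non-default lvm.conf setting mentioned above would look like this. This is a sketch of the relevant fragment of a stock RHEL lvm.conf, not copied from the reporter's machine:

```
# /etc/lvm/lvm.conf
global {
    # 1 = local, file-based locking (the default)
    # 3 = built-in clustered locking, dispatched through clvmd
    locking_type = 3
}
```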
lvremove -vvvv on rhel6
Created attachment 799328 [details]
lvremove -vvvv on rhel7
rhel7 is running the latest nightly build of lvm2: lvm2-cluster-2.02.101-0.157.el7.x86_64

Created attachment 799333 [details]
clvmd debugging logs from node1 (node2 was poweroff)
lvm client side:

Successful case:

    #locking/cluster_locking.c:502  Locking LV EQ4qhf7TgdAMeBaCOgZ0M57mqiIBTXIEhUdwleLadJmtgkYMEFu0Doqrw7k9OsAb NL (LV|NONBLOCK|CLUSTER) (0x98)

Failure case:

    #locking/cluster_locking.c:502  Locking LV yDC7vdTMn3TGdEdEBGD3DPBcTFzHdR0tnKBwNY62WULjrIf9fUZ6vvFvcSb7gwO7 NL (LV|NONBLOCK|CLUSTER) (0x98)
    #locking/cluster_locking.c:161  cluster request failed: Host is down

Created attachment 799356 [details]
rhel7 logs with syslog=1 loglevel debug
Created attachment 799383 [details]
another attempt to capture logs
Fixed upstream:

    commit 431eda63cc0ebff7c62dacb313cabcffbda6573a
    Author: Christine Caulfield <ccaulfie>
    Date:   Mon Sep 23 13:23:00 2013 +0100

        clvmd: Fix node up/down handing in corosync module

In release 2.02.102.

As long as the cluster is quorate, there are no issues removing the clustered LV. Tested and verified with lvm2-2.02.103-5.el7:

    [root@virt-002 pacemaker]# lvremove clustered/mirror
    Do you really want to remove active clustered logical volume mirror? [y/n]: y
      Logical volume "mirror" successfully removed

    [root@virt-002 pacemaker]# pcs status
    Cluster name: STSRHTS10638
    Last updated: Wed Nov 20 15:20:31 2013
    Last change: Wed Nov 20 14:41:21 2013 via cibadmin on virt-002.cluster-qe.lab.eng.brq.redhat.com
    Stack: corosync
    Current DC: virt-002.cluster-qe.lab.eng.brq.redhat.com (1) - partition with quorum
    Version: 1.1.10-20.el7-368c726
    3 Nodes configured
    1 Resources configured

    Online:  [ virt-002.cluster-qe.lab.eng.brq.redhat.com ]
    OFFLINE: [ virt-003.cluster-qe.lab.eng.brq.redhat.com virt-004.cluster-qe.lab.eng.brq.redhat.com ]

This request was resolved in Red Hat Enterprise Linux 7.0. Contact your manager or support representative if you have further questions about the request.