Bug 598495
| Summary: | service clvmd status shows "active clustered" volumes which are inactive | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Nate Straz <nstraz> |
| Component: | lvm2 | Assignee: | Milan Broz <mbroz> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Corey Marthaler <cmarthal> |
| Severity: | medium | Docs Contact: | |
| Priority: | low | | |
| Version: | 6.0 | CC: | agk, dwysocha, heinzm, jbrassow, joe.thornber, mbroz, prajnoha, prockai, pvrabec |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | lvm2-2.02.68-1.el6 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2010-11-10 21:08:03 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Nate Straz 2010-06-01 14:00:51 UTC
So we ought to distinguish between the different activation states as best we can? Active locally, active elsewhere, active exclusively locally, active exclusively elsewhere?

I'd be happy if `service clvmd status` only showed which LVs are active locally. I'd be really happy if a clustered LV was either active on all nodes or none.

Normally LVs are active everywhere, but you're free to choose otherwise, via lvm.conf, with `lvchange -aey` or `-aly`, or if you add nodes to the cluster later.

It's usually the last one that I hit. A node will join the cluster, start clvmd, hit bz 596977 where clvmd startup times out, vgchange never runs, and the cluster LVs are not activated on the node.

This request was evaluated by Red Hat Product Management for inclusion in a Red Hat Enterprise Linux major release. Product Management has requested further review of this request by Red Hat Engineering for potential inclusion in a Red Hat Enterprise Linux major release. This request is not yet committed for inclusion.

Fixed upstream; now it prints something like this:

```
# service clvmd status
clvmd (pid 1592) is running...
Clustered Volume Groups: vg_test
Active clustered Logical Volumes: /dev/vg_test/lv1 /dev/vg_test/lv3
```

No active cluster vols on this node:

```
[root@taft-01 ~]# lvs -a
  LV                VG   Attr   LSize   Move Log         Copy%
  mirror            taft mwi--- 100.00m      mirror_mlog
  [mirror_mimage_0] taft Iwi--- 100.00m
  [mirror_mimage_1] taft Iwi--- 100.00m
  [mirror_mlog]     taft lwi---   4.00m
[root@taft-01 ~]# service clvmd status
clvmd (pid 13783) is running...
Clustered Volume Groups: taft
Active clustered Logical Volumes: (none)
```

One active cluster vol on this node:

```
[root@taft-03 ~]# lvs -a
  LV                VG   Attr   LSize   Move Log         Copy%
  mirror            taft mwi-a- 100.00m      mirror_mlog 100.00
  [mirror_mimage_0] taft iwi-ao 100.00m
  [mirror_mimage_1] taft iwi-ao 100.00m
  [mirror_mlog]     taft lwi-ao   4.00m
[root@taft-03 ~]# service clvmd status
clvmd (pid 2469) is running...
Clustered Volume Groups: taft
Active clustered Logical Volumes: /dev/taft/mirror
```

Fix verified in the following build:

```
lvm2-2.02.69-1.el6                      BUILT: Wed Jun 30 11:00:37 CDT 2010
lvm2-libs-2.02.69-1.el6                 BUILT: Wed Jun 30 11:00:37 CDT 2010
lvm2-cluster-2.02.69-1.el6              BUILT: Wed Jun 30 11:00:37 CDT 2010
udev-147-2.18.el6                       BUILT: Fri Jun 11 07:47:21 CDT 2010
device-mapper-1.02.51-1.el6             BUILT: Wed Jun 30 11:00:37 CDT 2010
device-mapper-libs-1.02.51-1.el6        BUILT: Wed Jun 30 11:00:37 CDT 2010
device-mapper-event-1.02.51-1.el6       BUILT: Wed Jun 30 11:00:37 CDT 2010
device-mapper-event-libs-1.02.51-1.el6  BUILT: Wed Jun 30 11:00:37 CDT 2010
cmirror-2.02.69-1.el6                   BUILT: Wed Jun 30 11:00:37 CDT 2010
```

Red Hat Enterprise Linux 6.0 is now available and should resolve the problem described in this bug report. This report is therefore being closed with a resolution of CURRENTRELEASE. You may reopen this bug report if the solution does not work for you.
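A status listing like the one above can be derived by parsing the `vgs`/`lvs` attribute columns. The following is a hedged sketch, not the actual initscript code shipped in lvm2-2.02.68-1.el6; it assumes the standard attribute layouts, where the 6th `vg_attr` character is `c` for a clustered VG and the 5th `lv_attr` character is `a` for an LV that is active on this node. The helper names and the sample rows are illustrative only.

```shell
#!/bin/sh
# Hedged sketch (not the shipped initscript): derive a "service clvmd status"
# style listing from vgs/lvs attribute columns.
#   vg_attr char 6 == 'c'  ->  VG is clustered
#   lv_attr char 5 == 'a'  ->  LV is active on this node

# stdin: "vg_name vg_attr" rows,
# e.g. from: vgs --noheadings -o vg_name,vg_attr
clustered_vgs() {
  awk 'substr($2, 6, 1) == "c" { print $1 }'
}

# stdin: "vg_name lv_name vg_attr lv_attr" rows,
# e.g. from: lvs --noheadings -o vg_name,lv_name,vg_attr,lv_attr
active_clustered_lvs() {
  awk 'substr($3, 6, 1) == "c" && substr($4, 5, 1) == "a" { print "/dev/" $1 "/" $2 }'
}

# Demo on rows modeled after the taft transcripts above:
printf '%s\n' 'taft wz--nc' 'vg00 wz--n-' | clustered_vgs
# -> taft
printf '%s\n' 'taft mirror wz--nc mwi-a-' 'taft other wz--nc mwi---' | active_clustered_lvs
# -> /dev/taft/mirror
```

On a live node the two helpers would be fed from `vgs --noheadings` and `lvs --noheadings` directly, which is why the taft-01 node (all LVs `mwi---`/`Iwi---`, i.e. inactive) correctly reports `(none)` while taft-03 (`mwi-a-`) reports `/dev/taft/mirror`.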