Bug 598495

Summary: service clvmd status shows "active clustered" volumes which are inactive
Product: Red Hat Enterprise Linux 6
Reporter: Nate Straz <nstraz>
Component: lvm2
Assignee: Milan Broz <mbroz>
Status: CLOSED CURRENTRELEASE
QA Contact: Corey Marthaler <cmarthal>
Severity: medium
Docs Contact:
Priority: low
Version: 6.0
CC: agk, dwysocha, heinzm, jbrassow, joe.thornber, mbroz, prajnoha, prockai, pvrabec
Target Milestone: rc
Target Release: ---
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version: lvm2-2.02.68-1.el6
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Type: ---
Last Closed: 2010-11-10 21:08:03 UTC

Description Nate Straz 2010-06-01 14:00:51 UTC
Description of problem:

If the clustered volumes are not active on the current node, `service clvmd status` still lists them as active.

[root@west-04 ~]# service clvmd status
clvmd (pid 1651) is running...
Active clustered Volume Groups: west
Active clustered Logical Volumes: west0 west1 west2
[root@west-04 ~]# lvs
  LV      VG        Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  lv_home vg_west04 -wi-ao  88.64g
  lv_root vg_west04 -wi-ao  50.00g
  lv_swap vg_west04 -wi-ao   9.88g
  west0   west      -wi--- 767.68g
  west1   west      -wi--- 779.31g
  west2   west      -wi--- 779.31g
[root@west-04 ~]# vgs
  VG        #PV #LV #SN Attr   VSize   VFree
  vg_west04   1   3   0 wz--n- 148.52g    0
  west        1   3   0 wz--nc   2.27t    0
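(A side note on reading the output above: in the `lvs` Attr column the fifth character is the activation state, and in the `vgs` Attr column the sixth character marks a clustered VG, so west0-west2 show as inactive here while "west" is clustered. Assuming the standard LVM2 reporting fields, the same can be shown explicitly:)

lvs --noheadings -o vg_name,lv_name,lv_attr   # 5th attr char: 'a' = active, '-' = inactive
vgs --noheadings -o vg_name,vg_attr           # 6th attr char: 'c' = clustered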


Version-Release number of selected component (if applicable):
lvm2-2.02.66-2.el6.x86_64
lvm2-cluster-2.02.66-2.el6.x86_64


How reproducible:
Easily

Steps to Reproduce:
1. Mount a clustered LV on node A.
2. Try to deactivate the clustered LV on node B; the LV becomes inactive on the current node.
3. Run `service clvmd status` on node B (see the sketch below).
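A minimal reproduction sketch, assuming a clustered VG "west" with LV west0 carrying a filesystem (names taken from the report above; node names and mount point are illustrative):

[root@node-a ~]# mount /dev/west/west0 /mnt/west      # LV is open and active on node A
[root@node-b ~]# lvchange -an west/west0              # attempted deactivation; per step 2 the LV ends up inactive on node B
[root@node-b ~]# service clvmd status                 # still reports west0 as an active clustered LV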
  
Actual results:
`service clvmd status` on node B still lists the clustered VG and LVs as active.

Expected results:
Only volume groups and logical volumes that are actually active on the local node are reported as active.

Additional info:

Comment 2 Alasdair Kergon 2010-06-01 15:28:49 UTC
So we ought to distinguish between the different activation states as best we can?  Active locally, active elsewhere, active exclusively locally, active exclusively elsewhere?

Comment 3 Nate Straz 2010-06-01 19:03:18 UTC
I'd be happy if `service clvmd status` only showed which LVs are active locally.

I'd be really happy if a clustered LV was either active on all nodes or none.

Comment 4 Alasdair Kergon 2010-06-01 20:05:12 UTC
Normally LVs are active everywhere, but you're free to choose otherwise, via lvm.conf or with lvchange -aey or -aly, or if you add nodes to the cluster later.
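For reference, a short sketch of the activation modes mentioned above (VG/LV names are illustrative):

lvchange -ay  west/west0    # activate cluster-wide (the default for a clustered VG)
lvchange -aey west/west0    # activate exclusively, on one node only
lvchange -aly west/west0    # activate on the local node only
lvchange -aln west/west0    # deactivate on the local node only
(activation/volume_list in lvm.conf can additionally restrict which LVs a given host activates)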

Comment 5 Nate Straz 2010-06-01 20:30:42 UTC
It's usually the last case that I hit: a node joins the cluster, starts clvmd, hits bz 596977 where clvmd startup times out, vgchange never runs, and the clustered LVs are not activated on that node.

Comment 6 RHEL Program Management 2010-06-07 16:04:17 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux major release.  Product Management has requested further
review of this request by Red Hat Engineering, for potential inclusion in a Red
Hat Enterprise Linux Major release.  This request is not yet committed for
inclusion.

Comment 7 Milan Broz 2010-06-23 16:25:54 UTC
Fixed upstream; it now prints something like this:
# service clvmd status
clvmd (pid 1592) is running...
Clustered Volume Groups: vg_test
Active clustered Logical Volumes: /dev/vg_test/lv1 /dev/vg_test/lv3
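Not the actual init-script code, but one way to derive the same two lists with standard reporting options (attribute positions as documented in lvs(8)/vgs(8)):

# clustered VGs: 6th vg_attr character is 'c'
vgs --noheadings -o vg_name,vg_attr | awk 'substr($2,6,1)=="c" {print $1}'
# locally active LVs in clustered VGs: 5th lv_attr character is 'a'
lvs --noheadings -o vg_name,lv_name,lv_attr,vg_attr | \
  awk 'substr($4,6,1)=="c" && substr($3,5,1)=="a" {print "/dev/" $1 "/" $2}'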

Comment 8 Corey Marthaler 2010-06-30 20:02:53 UTC
# no active cluster vols on this node
[root@taft-01 ~]# lvs -a 
  LV                VG        Attr   LSize   Move Log         Copy%
  mirror            taft      mwi--- 100.00m      mirror_mlog
  [mirror_mimage_0] taft      Iwi--- 100.00m
  [mirror_mimage_1] taft      Iwi--- 100.00m
  [mirror_mlog]     taft      lwi---   4.00m
[root@taft-01 ~]#  service clvmd status
clvmd (pid 13783) is running...
Clustered Volume Groups: taft
Active clustered Logical Volumes: (none)

# one active cluster vol on this node
[root@taft-03 ~]# lvs -a 
  LV                VG        Attr   LSize   Move Log         Copy%
  mirror            taft      mwi-a- 100.00m      mirror_mlog 100.00
  [mirror_mimage_0] taft      iwi-ao 100.00m
  [mirror_mimage_1] taft      iwi-ao 100.00m
  [mirror_mlog]     taft      lwi-ao   4.00m
[root@taft-03 ~]# service clvmd status
clvmd (pid 2469) is running...
Clustered Volume Groups: taft
Active clustered Logical Volumes: /dev/taft/mirror

Fix verified in the following build:
lvm2-2.02.69-1.el6    BUILT: Wed Jun 30 11:00:37 CDT 2010
lvm2-libs-2.02.69-1.el6    BUILT: Wed Jun 30 11:00:37 CDT 2010
lvm2-cluster-2.02.69-1.el6    BUILT: Wed Jun 30 11:00:37 CDT 2010
udev-147-2.18.el6    BUILT: Fri Jun 11 07:47:21 CDT 2010
device-mapper-1.02.51-1.el6    BUILT: Wed Jun 30 11:00:37 CDT 2010
device-mapper-libs-1.02.51-1.el6    BUILT: Wed Jun 30 11:00:37 CDT 2010
device-mapper-event-1.02.51-1.el6    BUILT: Wed Jun 30 11:00:37 CDT 2010
device-mapper-event-libs-1.02.51-1.el6    BUILT: Wed Jun 30 11:00:37 CDT 2010
cmirror-2.02.69-1.el6    BUILT: Wed Jun 30 11:00:37 CDT 2010

Comment 9 releng-rhel@redhat.com 2010-11-10 21:08:03 UTC
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you.