Bug 598495 - service clvmd status shows "active clustered" volumes which are inactive
Summary: service clvmd status shows "active clustered" volumes which are inactive
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.0
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Milan Broz
QA Contact: Corey Marthaler
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2010-06-01 14:00 UTC by Nate Straz
Modified: 2013-03-01 04:09 UTC
CC List: 9 users

Fixed In Version: lvm2-2.02.68-1.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2010-11-10 21:08:03 UTC
Target Upstream Version:
Embargoed:



Description Nate Straz 2010-06-01 14:00:51 UTC
Description of problem:

Even if the clustered volumes are not active on the current node, they are still listed as active in `service clvmd status`.

[root@west-04 ~]# service clvmd status
clvmd (pid 1651) is running...
Active clustered Volume Groups: west
Active clustered Logical Volumes: west0 west1 west2
[root@west-04 ~]# lvs
  LV      VG        Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  lv_home vg_west04 -wi-ao  88.64g
  lv_root vg_west04 -wi-ao  50.00g
  lv_swap vg_west04 -wi-ao   9.88g
  west0   west      -wi--- 767.68g
  west1   west      -wi--- 779.31g
  west2   west      -wi--- 779.31g
[root@west-04 ~]# vgs
  VG        #PV #LV #SN Attr   VSize   VFree
  vg_west04   1   3   0 wz--n- 148.52g    0
  west        1   3   0 wz--nc   2.27t    0


Version-Release number of selected component (if applicable):
lvm2-2.02.66-2.el6.x86_64
lvm2-cluster-2.02.66-2.el6.x86_64


How reproducible:
Easily

Steps to Reproduce:
1. mount clustered LV on node A
2. try to deactivate the clustered LV on node B; the LV becomes inactive on that node (example commands sketched below)
3. service clvmd status on node B
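For illustration only, a rough command sequence for these steps. The VG/LV name west/west0 is taken from the report above; the hostnames, mount point, and the use of -aln (deactivate on the local node only) are assumptions, just one way to end up with the LV inactive on node B:

  [root@node-a ~]# mount /dev/west/west0 /mnt/west0    # step 1: LV in use on node A
  [root@node-b ~]# lvchange -aln west/west0            # step 2: LV now inactive on node B only
  [root@node-b ~]# service clvmd status                # step 3: west0 still listed as active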
  
Actual results:


Expected results:


Additional info:

Comment 2 Alasdair Kergon 2010-06-01 15:28:49 UTC
So we ought to distinguish between the different activation states as best we can?  Active locally, active elsewhere, active exclusively locally, active exclusively elsewhere?

Comment 3 Nate Straz 2010-06-01 19:03:18 UTC
I'd be happy if `service clvmd status` only showed the LVs that are active locally.

I'd be really happy if a clustered LV was either active on all nodes or none.

Comment 4 Alasdair Kergon 2010-06-01 20:05:12 UTC
Normally LVs are active everywhere, but you're free to choose otherwise, via lvm.conf or with lvchange -aey or -aly, or if you add nodes to the cluster later.
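For reference, a minimal illustration of the activation choices mentioned here; the VG/LV name west/west0 comes from the report above and is used only as an example:

  lvchange -aey west/west0    # activate exclusively, on one node only
  lvchange -aly west/west0    # activate on the local node only
  lvchange -aln west/west0    # deactivate on the local node only
  # The activation section of lvm.conf (e.g. the volume_list setting) can also
  # restrict which VGs/LVs get activated on a given node.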

Comment 5 Nate Straz 2010-06-01 20:30:42 UTC
It's usually the last one that I hit.  A node will join the cluster, start clvmd, hit bz 596977 where clvmd startup times out, vgchange never runs, and the cluster LVs are not activated on the node.

Comment 6 RHEL Program Management 2010-06-07 16:04:17 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux major release.  Product Management has requested further
review of this request by Red Hat Engineering for potential inclusion in a Red
Hat Enterprise Linux major release.  This request is not yet committed for
inclusion.

Comment 7 Milan Broz 2010-06-23 16:25:54 UTC
Fixed upstream; it now prints something like this:
# service clvmd status
clvmd (pid 1592) is running...
Clustered Volume Groups: vg_test
Active clustered Logical Volumes: /dev/vg_test/lv1 /dev/vg_test/lv3
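A minimal sketch of how the clustered/active state can be read from the vgs/lvs attribute bits. This only approximates what the status output reports and is not necessarily the exact init-script implementation:

  # clustered VGs: the 6th vg_attr character is 'c'
  lvm vgs --noheadings -o vg_name,vg_attr 2>/dev/null | awk '$2 ~ /c$/ {print $1}'
  # locally active clustered LVs: the 5th lv_attr character is 'a'
  lvm lvs --noheadings -o vg_name,lv_name,vg_attr,lv_attr 2>/dev/null | \
    awk '$3 ~ /c$/ && substr($4,5,1) == "a" {print "/dev/" $1 "/" $2}'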

Comment 8 Corey Marthaler 2010-06-30 20:02:53 UTC
# no active cluster vols on this node
[root@taft-01 ~]# lvs -a 
  LV                VG        Attr   LSize   Move Log         Copy%
  mirror            taft      mwi--- 100.00m      mirror_mlog
  [mirror_mimage_0] taft      Iwi--- 100.00m
  [mirror_mimage_1] taft      Iwi--- 100.00m
  [mirror_mlog]     taft      lwi---   4.00m
[root@taft-01 ~]#  service clvmd status
clvmd (pid 13783) is running...
Clustered Volume Groups: taft
Active clustered Logical Volumes: (none)

# one active cluster vol on this node
[root@taft-03 ~]# lvs -a 
  LV                VG        Attr   LSize   Move Log         Copy%
  mirror            taft      mwi-a- 100.00m      mirror_mlog 100.00
  [mirror_mimage_0] taft      iwi-ao 100.00m
  [mirror_mimage_1] taft      iwi-ao 100.00m
  [mirror_mlog]     taft      lwi-ao   4.00m
[root@taft-03 ~]# service clvmd status
clvmd (pid 2469) is running...
Clustered Volume Groups: taft
Active clustered Logical Volumes: /dev/taft/mirror

Fix verified in the following build:
lvm2-2.02.69-1.el6    BUILT: Wed Jun 30 11:00:37 CDT 2010
lvm2-libs-2.02.69-1.el6    BUILT: Wed Jun 30 11:00:37 CDT 2010
lvm2-cluster-2.02.69-1.el6    BUILT: Wed Jun 30 11:00:37 CDT 2010
udev-147-2.18.el6    BUILT: Fri Jun 11 07:47:21 CDT 2010
device-mapper-1.02.51-1.el6    BUILT: Wed Jun 30 11:00:37 CDT 2010
device-mapper-libs-1.02.51-1.el6    BUILT: Wed Jun 30 11:00:37 CDT 2010
device-mapper-event-1.02.51-1.el6    BUILT: Wed Jun 30 11:00:37 CDT 2010
device-mapper-event-libs-1.02.51-1.el6    BUILT: Wed Jun 30 11:00:37 CDT 2010
cmirror-2.02.69-1.el6    BUILT: Wed Jun 30 11:00:37 CDT 2010

Comment 9 releng-rhel@redhat.com 2010-11-10 21:08:03 UTC
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you.

