Bug 592362

Summary: service clvmd stop tried to deactivate mounted, local volumes
Product: Red Hat Enterprise Linux 6
Component: lvm2
Version: 6.0
Hardware: All
OS: Linux
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: low
Keywords: Regression
Reporter: Nate Straz <nstraz>
Assignee: Fabio Massimo Di Nitto <fdinitto>
QA Contact: Corey Marthaler <cmarthal>
CC: agk, antillon.maurizio, dwysocha, heinzm, jbrassow, joe.thornber, mbroz, prajnoha, prockai
Target Milestone: rc
Target Release: ---
Fixed In Version: lvm2-2.02.65-1.el6
Doc Type: Bug Fix
Last Closed: 2010-11-11 14:51:11 UTC

Attachments: proposed patch

Description Nate Straz 2010-05-14 16:10:24 UTC
Description of problem:

[root@morph-03 init.d]# vgs
  VG         #PV #LV #SN Attr   VSize  VFree
  vg_morph03   1   2   0 wz--n- 36.78g    0
[root@morph-03 init.d]# lvs
  LV      VG         Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  lv_root vg_morph03 -wi-ao 32.81g
  lv_swap vg_morph03 -wi-ao  3.97g
[root@morph-03 init.d]# service clvmd status
clvmd (pid 1533) is running...
Active clustered Volume Groups: (none)
Active clustered Logical Volumes: (none)
[root@morph-03 init.d]# service clvmd stop
Deactivating clusterd VG(s):   Can't deactivate volume group "vg_morph03" with 2 open logical volume(s)
                                                           [FAILED]
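
The only reason the root VG survives this is that its logical volumes are open: the trailing 'o' in the lv_attr column above (-wi-ao) marks an open device, and LVM refuses to deactivate open volumes. A standard lvs query makes the open state visible:

# The 6th lv_attr character is 'o' when the device is open; lv_root is
# mounted as / and lv_swap is in use as swap, so vgchange -an correctly
# refuses to deactivate them.
lvs --noheadings -o lv_name,lv_attr vg_morph03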


Version-Release number of selected component (if applicable):
lvm2-2.02.64-2.el6.i686

How reproducible:
easily

Steps to Reproduce:
1. service clvmd start
2. service clvmd stop
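
The buggy code path is taken only when no clustered VGs exist. That precondition can be confirmed beforehand by checking the vg_attr clustered bit (the 6th attribute character is 'c' on a clustered VG):

# Lists clustered VGs; empty output means the init script's VG
# detection will also come up empty, triggering the bug on stop.
vgs --noheadings -o vg_name,vg_attr | awk '$2 ~ /c$/ {print $1}'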
  
Actual results:
service clvmd stop tries to deactivate all VGs, including the local root
VG, and fails because the root VG's logical volumes are open (mounted).

Expected results:
service clvmd stop deactivates only clustered VGs and stops the daemon
cleanly.

Additional info:

Comment 3 Alasdair Kergon 2010-05-14 23:48:02 UTC
Nice bug!  If there are no clustered VGs, the detection defaults to an empty string and the script tries to deactivate everything!  (So skip the deactivation if clustered_vgs is empty.  Also fix the typo in the message: 'clusterd' should read 'clustered'.)
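
To make the failure mode concrete, here is a minimal sketch of the stop path; the helper and variable names are assumed for illustration and are not copied from the actual clvmd_init_red_hat script:

# Hypothetical shape of the pre-fix stop path.
clustered_vgs() {
    # Emit the names of VGs whose clustered attribute bit is set.
    vgs --noheadings -o vg_name,vg_attr 2>/dev/null | awk '$2 ~ /c$/ {print $1}'
}

stop() {
    cvgs=$(clustered_vgs)
    echo -n "Deactivating clusterd VG(s): "   # the typo noted above
    # BUG: when $cvgs is empty, vgchange receives no VG argument and
    # therefore tries to deactivate every VG on the system, mounted
    # local volumes included.
    vgchange -an $cvgs
}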

Comment 4 Fabio Massimo Di Nitto 2010-05-15 04:25:34 UTC
Created attachment 414208 [details]
proposed patch

with this patch applied:

[root@fedora12-node2 scripts]# sh clvmd_init_red_hat start
Starting clvmd:                                            [  OK  ]
Activating VG(s):   2 logical volume(s) in volume group "VolGroup" now active
                                                           [  OK  ]

[root@fedora12-node2 scripts]# sh clvmd_init_red_hat status
clvmd (pid 3083) is running...
Active clustered Volume Groups: (none)
Active clustered Logical Volumes: (none)

[root@fedora12-node2 scripts]# sh clvmd_init_red_hat stop  
Signaling clvmd to exit                                    [  OK  ]
clvmd terminated                                           [  OK  ]

It respects that LVM_VGS has higher priority than clustered_vgs and fixes the 'clusterd' typo in the message.
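
In the same assumed notation as the sketch under comment 3, the fixed stop path gives an explicit LVM_VGS priority and skips deactivation entirely when there is nothing clustered to deactivate:

stop() {
    # An LVM_VGS set by the administrator takes priority; otherwise
    # fall back to the detected clustered VGs.
    vgs_to_stop=${LVM_VGS:-$(clustered_vgs)}
    if [ -n "$vgs_to_stop" ]; then
        echo -n "Deactivating clustered VG(s): "   # typo fixed as well
        vgchange -an $vgs_to_stop || return $?
    fi
    # ...then signal clvmd itself to exit.
}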

Comment 7 Fabio Massimo Di Nitto 2010-05-17 03:19:21 UTC
--- WHATS_NEW   14 May 2010 15:19:42 -0000      1.1559
+++ WHATS_NEW   17 May 2010 03:17:23 -0000
@@ -1,5 +1,6 @@
 Version 2.02.65 - 
 =================================
+  Fix clvmd init script to not deactivate non-clustered volume groups.

Checking in WHATS_NEW;
/cvs/lvm2/LVM2/WHATS_NEW,v  <--  WHATS_NEW
new revision: 1.1560; previous revision: 1.1559
done
Checking in scripts/clvmd_init_red_hat.in;
/cvs/lvm2/LVM2/scripts/clvmd_init_red_hat.in,v  <--  clvmd_init_red_hat.in
new revision: 1.7; previous revision: 1.6
done

Comment 9 Nate Straz 2010-05-18 19:33:26 UTC
I tried out lvm2-2.02.65-1.el6 from brew and it is able to shut down clvmd without trying to deactivate local volumes.

Comment 11 Nate Straz 2010-06-07 19:26:32 UTC
I haven't hit this issue with recent trees.

Comment 12 releng-rhel@redhat.com 2010-11-11 14:51:11 UTC
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you.