Red Hat Bugzilla – Bug 638547
[RFE] Allow clvmd to be restarted when volumes are in use
Last modified: 2013-07-03 00:06:02 EDT
1. Customer Name
<not disclosed>

2. What is the nature and description of the request?
When the clvmd service is restarted while LVs are in use, the service deactivates the volume groups that are not in use and the start section of the script is never executed. This is expected behavior: "service clvmd restart" never actually shuts clvmd down because there are still active VGs. This is how the initscript is designed; in the restart case:

    restart)
        if stop
        ...

The stop never returns 0 due to the active VGs. From the stop function of the clvmd initscript:

    [ $rtrn -ne 0 ] && break

(A simplified sketch of this restart/stop flow is appended at the end of this comment.)

Here is the output of our reproducer:

    /dev/mapper/vg1-lv1 on /mnt/lv1 type ext3 (rw)

    [root@test ~]# service clvmd stop
    Deactivating VG vg2:   0 logical volume(s) in volume group "vg2" now active   [  OK  ]
    Deactivating VG vg1:   Can't deactivate volume group "vg1" with 1 open logical volume(s)   [FAILED]

    [root@test ~]# lvs
      LV       VG         Attr   LSize    Origin Snap%  Move Log Copy%  Convert
      LogVol00 VolGroup00 -wi-ao    3.38G
      LogVol01 VolGroup00 -wi-ao  512.00M
      lv1      vg1        -wi-ao    1.00G
      lv2      vg1        -wi-a- 1020.00M
      lv1      vg2        -wi---    1.00G
      lv2      vg2        -wi--- 1020.00M

    [root@test ~]# service clvmd status
    clvmd (pid 2054) is running...
    active volumes: LogVol00 LogVol01 lv1 lv2

The script sees the failed deactivation and does not stop the clvmd service. It does deactivate the other volume groups, but it does not run the start section of the script. While we understand the purpose of not deactivating volume groups that are in use, the restart command should perhaps print a message such as "some logical volumes are currently in use, unable to restart the clvmd service", and it should not deactivate the volume groups that are not in use.

3. Why does the customer need this? (List the business requirements here)
This affects every customer. When the restart is run there is no message explaining why only some of the volume groups are still activated, and "service clvmd start" has to be executed to reactivate the volume groups.

4. How would the customer like to achieve this? (List the functional requirements here)
Change the script: do not deactivate the other volume groups, and print a message such as "logical volumes in use, we are unable to restart the service at this time".

5. For each functional requirement listed in question 4, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.
[Blank]

6. Is there already an existing RFE upstream or in Red Hat bugzilla?
None found

7. How quickly does this need to be resolved? (desired target release)
Major

8. Does this request meet the RHEL Inclusion criteria (please review)
Yes

9. List the affected packages
lvm2-cluster-2.02.56-7.el5_5.4
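For context, a simplified sketch of the restart/stop flow described above. This is not the verbatim RHEL5 initscript; helper names such as clustered_vgs are abbreviations used only for illustration:

    # simplified, illustrative sketch of the described initscript logic
    stop() {
        rtrn=0
        for vg in $(clustered_vgs); do
            action "Deactivating VG $vg:" vgchange -anl $vg || rtrn=$?
            # a single open LV aborts the whole stop with a non-zero return
            [ $rtrn -ne 0 ] && break
        done
        return $rtrn
    }

    case "$1" in
        restart)
            # start is only reached when stop returns 0, so active VGs
            # leave the daemon running and the start section unexecuted
            if stop; then
                start
            fi
            ;;
    esac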
(In reply to comment #0)
> lvm2-cluster-2.02.56-7.el5_5.4

This is a RHEL5 package, but you requested the fix in RHEL 6.1 - is that what you want?

The new clvmd has the ability to restart without deactivating volumes; I think it is in RHEL6 already. The script just supports updating from an old version which does not understand that restart switch yet.
This request was evaluated by Red Hat Product Management for inclusion in the current release of Red Hat Enterprise Linux. Because the affected component is not scheduled to be updated in the current release, Red Hat is unfortunately unable to address this request at this time. Red Hat invites you to ask your support representative to propose this request, if appropriate and relevant, in the next release of Red Hat Enterprise Linux.
This request was erroneously denied for the current release of Red Hat Enterprise Linux. The error has been fixed and this request has been re-proposed for the current release.
For the stop command: probably the best approach here is simply to run the vgchange command in test mode and, if it doesn't fail (no open volumes), repeat it to really deactivate the volumes. Unfortunately, test mode was never implemented properly for cluster locking, so it must be fixed first (see bug 682793).

BTW, restart without deactivating volumes is already supported using the -S switch (and the initscript uses it as well).
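To make the test-mode idea concrete, a rough sketch of what the stop path could look like once --test is honoured by cluster locking. This is hypothetical: it depends on bug 682793 being fixed first, and clustered_vgs is only an illustrative helper name:

    # hypothetical stop logic, assuming vgchange --test works with
    # cluster locking (blocked on bug 682793)
    for vg in $(clustered_vgs); do
        if vgchange --test -anl $vg >/dev/null 2>&1; then
            # the dry run found no open LVs, so deactivate for real
            vgchange -anl $vg
        else
            echo "Volume group $vg has open logical volumes, leaving it active"
            exit 1
        fi
    done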
We need to extend the clvmd protocol to support a test bit; this is not going to happen in the 5.7 time frame, so I am postponing this to 5.8.
For RHEL5 it seems unrealistic that we can extend the cluster locking protocol, so I will try to fix at least part of this:
- add a -S flag for restart, so clvmd restart is possible while clustered volumes are active (see the sketch below)
- try to silence, and perhaps work around, the other messages

Once bug 682793 is fixed, we can create a better initscript.
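A minimal sketch of how the restart case could use the -S switch (an illustrative initscript fragment, not necessarily the exact code that will ship):

    restart)
        if [ -n "$(pidof clvmd)" ]; then
            # ask the running daemon to re-exec itself in place;
            # active clustered LVs stay untouched
            action "Restarting clvmd:" clvmd -S
        else
            start
        fi
        ;;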
RHEL 5.8 will include lvm2 2.02.88. I checked that clvmd -S works for the restart command (thus avoiding the messages when it cannot stop active LVs); that's perhaps all we can do in the RHEL5 timeframe.
I verified that clvmd is now able to restart when there are clustered volumes in use. That said, I'm not sure how that "Improves CLVM init script reporting"? Should this bug be retitled "Allow clvmd to be restarted when volumes are in use"?
Fixing title to describe real change in script.
Fix verified in the latest rpms.

2.6.18-274.el5
lvm2-2.02.88-4.el5                    BUILT: Wed Nov 16 09:40:55 CST 2011
lvm2-cluster-2.02.88-4.el5            BUILT: Wed Nov 16 09:46:51 CST 2011
device-mapper-1.02.67-2.el5           BUILT: Mon Oct 17 08:31:56 CDT 2011
device-mapper-event-1.02.67-2.el5     BUILT: Mon Oct 17 08:31:56 CDT 2011
cmirror-1.1.39-10.el5                 BUILT: Wed Sep  8 16:32:05 CDT 2010
kmod-cmirror-0.1.22-3.el5             BUILT: Tue Dec 22 13:39:47 CST 2009

[root@taft-01 ~]# lvs
  LV                    Attr   LSize   Log                        Copy%
  syncd_primary_4legs_1 mwi-ao 500.00M syncd_primary_4legs_1_mlog 100.00
  syncd_primary_4legs_2 mwi-ao 500.00M syncd_primary_4legs_2_mlog 100.00
  syncd_primary_4legs_3 mwi-ao 500.00M syncd_primary_4legs_3_mlog 100.00

[root@taft-01 ~]# mount
/dev/mapper/helter_skelter-syncd_primary_4legs_1 on /mnt/syncd_primary_4legs_1 type gfs2 (rw,hostdata=jid=0:id=44892161:first=1)
/dev/mapper/helter_skelter-syncd_primary_4legs_2 on /mnt/syncd_primary_4legs_2 type gfs2 (rw,hostdata=jid=0:id=45023233:first=1)
/dev/mapper/helter_skelter-syncd_primary_4legs_3 on /mnt/syncd_primary_4legs_3 type gfs2 (rw,hostdata=jid=0:id=45154305:first=1)

[root@taft-01 ~]# service clvmd restart
Restarting clvmd:                                          [  OK  ]
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2012-0223.html