Bug 638547 - [RFE] Allow clvmd to be restarted when volumes are in use

Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: lvm2-cluster
Version: 5.5.z
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: rc
Assigned To: Milan Broz
QA Contact: Cluster QE
Keywords: FutureFeature, Triaged
Blocks: 554476
Reported: 2010-09-29 06:10 EDT by J.H.M. Dassen (Ray)
Modified: 2013-07-03 00:06 EDT
Fixed In Version: lvm2-cluster-2.02.88-1.el5
Doc Type: Enhancement
Last Closed: 2012-02-21 01:02:16 EST

External Trackers:
Red Hat Product Errata RHBA-2012:0223 (normal, SHIPPED_LIVE): lvm2-cluster bug fix and enhancement update, last updated 2012-02-20 09:53:03 EST

Description J.H.M. Dassen (Ray) 2010-09-29 06:10:17 EDT
1. Customer Name

<not disclosed>

2. What is the nature and description of the request?

When the clvmd service is restarted while LVs are in use, the service
deactivates the volume groups that are not in use, but the start section of
the script is never executed.

This is expected behavior. The "service clvmd restart" command never
actually shuts clvmd down because there are still active VGs. This is the
way the initscript is designed:

restart)
	if stop; then
		start
	fi

The stop never gets a return of "0" due to the active VGs, so start is never called.

From the stop function of the clvmd initscript:
[ $rtrn -ne 0 ] && break
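
For context, a simplified sketch of such a stop function (illustrative only, not the literal RHEL5 initscript; the VG list variable and the vgchange options are assumptions):

stop() {
	rtrn=0
	# "action" is from /etc/init.d/functions; it prints [ OK ]/[FAILED].
	for vg in $clustered_vgs; do
		action "Deactivating VG $vg:" vgchange -anl $vg
		rtrn=$?
		# A VG with open LVs makes vgchange fail, the loop aborts
		# here, and stop returns non-zero, so restart never runs start.
		[ $rtrn -ne 0 ] && break
	done
	return $rtrn
}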

Here is the output of our reproducer:

/dev/mapper/vg1-lv1 on /mnt/lv1 type ext3 (rw)

[root@test ~]# service clvmd stop
Deactivating VG vg2: 0 logical volume(s) in volume group "vg2" now active
	[ OK ]
Deactivating VG vg1: Can't deactivate volume group "vg1" with 1 open logical volume(s)
	[FAILED]

[root@test ~]# lvs
  LV       VG         Attr   LSize    Origin Snap%  Move Log Copy%  Convert
  LogVol00 VolGroup00 -wi-ao    3.38G
  LogVol01 VolGroup00 -wi-ao  512.00M
  lv1      vg1        -wi-ao    1.00G
  lv2      vg1        -wi-a-  1020.00M
  lv1      vg2        -wi---    1.00G
  lv2      vg2        -wi---  1020.00M

[root@test ~]# service clvmd status
clvmd (pid 2054) is running... active volumes: LogVol00 LogVol01 lv1 lv2

The script sees the failed deactivation and does not stop the clvmd service.
It does deactivate the other volume groups; however, it does not run the
start section of the script.

While we understand the purpose of not deactivating volume groups that are
in use, perhaps the restart command should print a message stating "some
logical volumes are currently in use, unable to restart the clvmd service".
The command should not deactivate the volume groups that are not in use.

3. Why does the customer need this? (List the business requirements here)

This affects every customer. When the restart is executed, there is no message explaining why only some of the volume groups remain active, and the "service clvmd start" command then has to be executed to reactivate the deactivated volume groups.

4. How would the customer like to achieve this? (List the functional requirements here)

Change the script: do not deactivate the other volume groups; instead print
a message such as "logical volumes in use, we are unable to restart the
service at this time".
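
A minimal sketch of such a guard (hypothetical, not the shipped initscript; a real check would be limited to clustered VGs, and the message text is only a suggestion):

restart)
	# Hypothetical guard: refuse to touch any VGs while an LV is
	# open (the 6th lv_attr character is "o" for open devices).
	if lvs --noheadings -o lv_attr 2>/dev/null \
		| awk '{ print substr($1, 6, 1) }' | grep -q o; then
		echo "logical volumes in use, we are unable to restart the service at this time"
		exit 1
	fi
	stop && start
	;;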

5. For each functional requirement listed in question 4, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented. 

[Blank]

6. Is there already an existing RFE upstream or in Red Hat bugzilla?

None found 

7. How quickly does this need to be resolved? (desired target release)

Major 

8. Does this request meet the RHEL Inclusion criteria (please review)

Yes 

9. List the affected packages

lvm2-cluster-2.02.56-7.el5_5.4
Comment 1 Milan Broz 2010-09-29 06:40:31 EDT
(In reply to comment #0)
> lvm2-cluster-2.02.56-7.el5_5.4

This is a RHEL5 package, but you requested the fix in RHEL 6.1 - is that what you want?

The new clvmd has the ability to restart without deactivating volumes; I think it is in RHEL 6 already. The script just has to support updating from old versions which do not understand that restart switch yet.
Comment 7 RHEL Product and Program Management 2011-01-11 15:24:53 EST
This request was evaluated by Red Hat Product Management for
inclusion in the current release of Red Hat Enterprise Linux.
Because the affected component is not scheduled to be updated in the
current release, Red Hat is unfortunately unable to address this
request at this time. Red Hat invites you to ask your support
representative to propose this request, if appropriate and relevant,
in the next release of Red Hat Enterprise Linux.
Comment 8 RHEL Product and Program Management 2011-01-11 18:05:37 EST
This request was erroneously denied for the current release of
Red Hat Enterprise Linux.  The error has been fixed and this
request has been re-proposed for the current release.
Comment 11 Milan Broz 2011-03-07 13:18:18 EST
For the stop command: probably the best approach here is simply to run the vgchange command in test mode and, if it does not fail (no open volumes), repeat it, actually deactivating the volumes.

Unfortunately test mode was never implemented properly for cluster locking, so it must be fixed first (see bug 682793).
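
In initscript terms the idea would look roughly like this (a sketch assuming vgchange --test behaves correctly under cluster locking, which it will not until bug 682793 is fixed):

# Dry-run the deactivation first; only deactivate for real if the
# test pass reports no open logical volumes.
if vgchange --test -anl $vg >/dev/null 2>&1; then
	vgchange -anl $vg
else
	echo "VG $vg has open logical volumes, leaving it active"
fi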

BTW, restart without deactivating volumes is already supported using the -S
switch (and the initscript uses it as well).
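
For reference, the daemon-level restart is a single call; clvmd(8) documents -S as restarting the daemon while preserving exclusive locks, so active volumes stay up:

# Re-exec the running clvmd in place; clustered LVs remain active
# across the restart.
clvmd -S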
Comment 12 Milan Broz 2011-03-22 09:33:24 EDT
We need to extend the clvmd protocol to support a test bit; this is not going to happen in the 5.7 time frame, so I am postponing this to 5.8.
Comment 13 Milan Broz 2011-08-22 08:44:59 EDT
For RHEL5 it seems unrealistic that we can extend the cluster locking protocol, so I will try to fix at least part of this:

- adding a -S flag for restart (so clvmd restart is possible while clustered volumes are active)
- trying to silence and perhaps work around the other messages.

Once bug 682793 is fixed, we can create a better initscript.
Comment 14 Milan Broz 2011-10-18 07:06:36 EDT
RHEL 5.8 will include lvm2 2.02.88. I checked that clvmd -S works for the restart command (thus avoiding messages when it cannot stop active LVs); that's perhaps all we can do in the RHEL5 timeframe.
Comment 16 Corey Marthaler 2011-11-30 16:10:36 EST
I verified that clvmd is now able to restart when there are clustered volumes in use. That said, I'm not sure how that "Improves CLVM init script reporting"? Should this bug be retitled "Allow clvmd to be restarted when volumes are in use"?
Comment 17 Milan Broz 2011-11-30 16:22:01 EST
Fixing title to describe real change in script.
Comment 18 Corey Marthaler 2011-11-30 16:30:56 EST
Fix verified in the latest rpms.

2.6.18-274.el5

lvm2-2.02.88-4.el5    BUILT: Wed Nov 16 09:40:55 CST 2011
lvm2-cluster-2.02.88-4.el5    BUILT: Wed Nov 16 09:46:51 CST 2011
device-mapper-1.02.67-2.el5    BUILT: Mon Oct 17 08:31:56 CDT 2011
device-mapper-event-1.02.67-2.el5    BUILT: Mon Oct 17 08:31:56 CDT 2011
cmirror-1.1.39-10.el5    BUILT: Wed Sep  8 16:32:05 CDT 2010
kmod-cmirror-0.1.22-3.el5    BUILT: Tue Dec 22 13:39:47 CST 2009



[root@taft-01 ~]# lvs
  LV                    Attr   LSize   Log                        Copy% 
  syncd_primary_4legs_1 mwi-ao 500.00M syncd_primary_4legs_1_mlog 100.00
  syncd_primary_4legs_2 mwi-ao 500.00M syncd_primary_4legs_2_mlog 100.00
  syncd_primary_4legs_3 mwi-ao 500.00M syncd_primary_4legs_3_mlog 100.00

[root@taft-01 ~]# mount
/dev/mapper/helter_skelter-syncd_primary_4legs_1 on /mnt/syncd_primary_4legs_1 type gfs2 (rw,hostdata=jid=0:id=44892161:first=1)
/dev/mapper/helter_skelter-syncd_primary_4legs_2 on /mnt/syncd_primary_4legs_2 type gfs2 (rw,hostdata=jid=0:id=45023233:first=1)
/dev/mapper/helter_skelter-syncd_primary_4legs_3 on /mnt/syncd_primary_4legs_3 type gfs2 (rw,hostdata=jid=0:id=45154305:first=1)

[root@taft-01 ~]# service clvmd restart
Restarting clvmd:                                          [  OK  ]
Comment 19 errata-xmlrpc 2012-02-21 01:02:16 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2012-0223.html
