Description of problem:
In lvm2-cluster-2.02.56-7.el5 the /etc/init.d/clvmd init script includes new "INIT INFO" lines. In previous versions of lvm2-cluster those lines were empty. The "INIT INFO" lines take precedence over the chkconfig line, as noted in the man page for chkconfig:

$ man chkconfig
  --add name
    This option adds a new service for management by chkconfig. When a new
    service is added, chkconfig ensures that the service has either a start
    or a kill entry in every runlevel. If any runlevel is missing such an
    entry, chkconfig creates the appropriate entry as specified by the
    default values in the init script. Note that default entries in
    LSB-delimited 'INIT INFO' sections take precedence over the default
    runlevels in the initscript.

$ rpm -q lvm2-cluster
lvm2-cluster-2.02.46-8.el5_4.1
$ cat -n /etc/init.d/clvmd
     1  #!/bin/bash
     2  #
     3  # chkconfig: - 24 76
     4  # description: Starts and stops clvmd
     5  #
     6  # For Red-Hat-based distributions such as Fedora, RHEL, CentOS.
     7  #
     8  ### BEGIN INIT INFO
     9  # Provides:
    10  ### END INIT INFO
    11
    12  . /etc/init.d/functions

$ rpm -q lvm2-cluster
lvm2-cluster-2.02.56-7.el5
$ cat -n /etc/init.d/clvmd
     1  #!/bin/bash
     2  #
     3  # chkconfig: - 24 76
     4  # description: Starts and stops clvmd
     5  #
     6  # For Red-Hat-based distributions such as Fedora, RHEL, CentOS.
     7  #
     8  ### BEGIN INIT INFO
     9  # Provides: clvmd
    10  # Required-Start: $local_fs
    11  # Required-Stop: $local_fs
    12  # Default-Start:
    13  # Default-Stop: 0 1 6
    14  # Short-Description: Clustered LVM Daemon
    15  ### END INIT INFO

Version-Release number of selected component (if applicable):
lvm2-cluster-2.02.56-7.el5

How reproducible:
Every time

Steps to Reproduce:
1. Install lvm2-cluster
2. Mount a GFS filesystem on a vg/lv
3. Reboot

Actual results:
lvm2-cluster cannot deactivate the vg because it is still mounted. The clvmd init.d script runlevels are set incorrectly: they come out as K74/S26.
Expected results:
The clvmd init.d script levels should be K76/S24.

Additional info:
Removing those "INIT INFO" lines, then running chkconfig --del clvmd followed by chkconfig --add clvmd, fixed the issue by setting clvmd back to K76/S24.
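To make the precedence quoted above concrete, here is a minimal sketch (not chkconfig's actual implementation) that pulls both the "# chkconfig:" priorities and the LSB Default-Stop runlevels out of the 2.02.56-7.el5 header shown above. The chkconfig line asks for S24/K76, but once an INIT INFO block is present chkconfig reads its defaults from there instead; real chkconfig also derives ordering from the Required-Start/Required-Stop dependencies.

```python
import re

# Header of the 2.02.56-7.el5 clvmd script, as shown in the report above.
HEADER = """\
#!/bin/bash
#
# chkconfig: - 24 76
# description: Starts and stops clvmd
#
### BEGIN INIT INFO
# Provides: clvmd
# Required-Start: $local_fs
# Required-Stop: $local_fs
# Default-Start:
# Default-Stop: 0 1 6
# Short-Description: Clustered LVM Daemon
### END INIT INFO
"""

def chkconfig_line(text):
    """Return (runlevels, start_prio, stop_prio) from the '# chkconfig:' line."""
    m = re.search(r"^# chkconfig:\s+(\S+)\s+(\d+)\s+(\d+)", text, re.M)
    return m.group(1), int(m.group(2)), int(m.group(3))

def lsb_default_stop(text):
    """Return the runlevels listed on the LSB 'Default-Stop:' line, if any."""
    m = re.search(r"^# Default-Stop:\s*(.*)$", text, re.M)
    return m.group(1).split() if m else None

print(chkconfig_line(HEADER))   # the old-style line asks for S24/K76
print(lsb_default_stop(HEADER)) # but the LSB block's defaults take precedence
```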
One item I forgot to add. The reason chkconfig comes into play is that during the rpm install the package scriptlet calls chkconfig --add clvmd. --sbradley
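The workaround from the comment above (strip the INIT INFO lines, then re-register the service) can be sketched as follows. The sed expression is runnable anywhere against a sample file; the chkconfig calls only make sense on the affected RHEL 5 host, so they are shown commented out. The path and sample contents here are illustrative, not the full init script.

```shell
#!/bin/sh
# Work on a sample copy; on a real system this would be /etc/init.d/clvmd.
SCRIPT=clvmd.sample

printf '%s\n' \
  '#!/bin/bash' \
  '# chkconfig: - 24 76' \
  '### BEGIN INIT INFO' \
  '# Default-Stop: 0 1 6' \
  '### END INIT INFO' \
  '. /etc/init.d/functions' > "$SCRIPT"

# 1. Strip the LSB INIT INFO block so only the chkconfig line remains.
sed -i '/^### BEGIN INIT INFO/,/^### END INIT INFO/d' "$SCRIPT"
cat "$SCRIPT"

# 2. Re-register the service so the K76/S24 links are recreated
#    (run these on the affected host):
# chkconfig --del clvmd
# chkconfig --add clvmd
```

This is the same chkconfig --add the rpm %post scriptlet runs at install time, which is why a fresh install picks up the wrong K74/S26 links as long as the INIT INFO block is present.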
Fixed in the RHEL 5 branch in CVS.
*** Bug 592125 has been marked as a duplicate of this bug. ***
Alasdair, no, we don't have this scenario in the standard regression tests.
This doesn't appear to be properly fixed. The umount in the GFS script still runs after the clvmd script:

# reboot
Deactivating clustered VG(s):
  Can't deactivate volume group "taft" with 1 open logical volume(s)
                                                           [FAILED]
Unmounting GFS filesystems:                                [  OK  ]
Stopping HAL daemon:                                       [  OK  ]
Stopping monitoring for VG VolGroup00:
  2 logical volume(s) in volume group "VolGroup00" unmonitored
                                                           [  OK  ]
Stopping monitoring for VG taft:
  4 logical volume(s) in volume group "taft" unmonitored
                                                           [  OK  ]
Stopping cluster:
  Stopping fencing... done
  Stopping cman... failed
/usr/sbin/cman_tool: Error leaving cluster: Device or resource busy
                                                           [FAILED]
Stopping system message bus:                               [  OK  ]
Stopping RPC idmapd:                                       [  OK  ]

2.6.18-194.11.3.el5
lvm2-2.02.73-2.el5            BUILT: Mon Aug 30 06:36:20 CDT 2010
lvm2-cluster-2.02.73-2.el5    BUILT: Mon Aug 30 06:38:05 CDT 2010
device-mapper-1.02.54-2.el5   BUILT: Fri Sep 10 12:00:05 CDT 2010
cmirror-1.1.39-10.el5         BUILT: Wed Sep  8 16:32:05 CDT 2010
kmod-cmirror-0.1.22-3.el5     BUILT: Tue Dec 22 13:39:47 CST 2009
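The ordering failure above follows directly from how rc processes kill scripts at shutdown: the K?? symlinks run in ascending numeric order, so clvmd at the wrong K74 stops before the GFS umount script, while the intended K76 would stop it afterwards. A small sketch of that ordering; the K75gfs name and number are illustrative assumptions, not taken from the actual gfs package.

```python
# rc runs K* scripts in lexical (and therefore numeric) order at shutdown;
# S* links belong to startup and are ignored here.
def shutdown_order(links):
    return sorted(name for name in links if name.startswith("K"))

broken = ["K74clvmd", "K75gfs"]              # wrong: clvmd stops while GFS is mounted
fixed  = ["K75gfs", "K76clvmd", "S24clvmd"]  # intended: GFS unmounts first

print(shutdown_order(broken))  # ['K74clvmd', 'K75gfs']
print(shutdown_order(fixed))   # ['K75gfs', 'K76clvmd']
```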
The INIT INFO block has been removed from the clvmd initscript again in lvm2-cluster-2.02.74-1.el5 -> modified.
The clvmd init now runs after the gfs init. Fix verified in the latest rpms.

2.6.18-227.el5
lvm2-2.02.74-1.el5            BUILT: Fri Oct 15 10:26:21 CDT 2010
lvm2-cluster-2.02.74-2.el5    BUILT: Fri Oct 29 07:48:11 CDT 2010
device-mapper-1.02.55-1.el5   BUILT: Fri Oct 15 06:15:55 CDT 2010
cmirror-1.1.39-10.el5         BUILT: Wed Sep  8 16:32:05 CDT 2010
kmod-cmirror-0.1.22-3.el5     BUILT: Tue Dec 22 13:39:47 CST 2009

Unmounting GFS filesystems:                                [  OK  ]
Stopping HAL daemon:                                       [  OK  ]
Stopping monitoring for VG VolGroup00:
  2 logical volume(s) in volume group "VolGroup00" unmonitored
                                                           [  OK  ]
Stopping monitoring for VG taft:
  Couldn't find device with uuid SF8Edc-jS9V-QYeA-1YZP-0JII-xgLC-yvNX9Q.
  Cannot change VG taft while PVs are missing.
  Consider vgreduce --removemissing.
                                                           [FAILED]
Deactivating clustered VG(s):
  Couldn't find device with uuid SF8Edc-jS9V-QYeA-1YZP-0JII-xgLC-yvNX9Q.
  0 logical volume(s) in volume group "taft" now active
                                                           [  OK  ]
Signaling clvmd to exit                                    [  OK  ]
clvmd terminated                                           [  OK  ]
Stopping cluster:
  Stopping fencing... done
  Stopping cman... done
  Stopping ccsd...
dlm: closing connection to node 1
dlm: closing connection to node 4
dlm: closing connection to node 3
dlm: closing connection to node 2
done
  Unmounting configfs... done
                                                           [  OK  ]
An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2011-0053.html