Bug 157313 - managing all volumes on the system so we can't stop the service
Status: CLOSED CURRENTRELEASE
Product: Red Hat Cluster Suite
Classification: Red Hat
Component: lvm2-cluster
Version: 4
Hardware: x86_64 Linux
Priority: medium
Severity: medium
Assigned To: Christine Caulfield
QA Contact: Cluster QE
Reported: 2005-05-10 11:57 EDT by Tyson Webster
Modified: 2010-01-11 23:03 EST
CC: 3 users

Fixed In Version: U2?
Doc Type: Bug Fix
Last Closed: 2006-03-07 14:49:24 EST
Attachments: None

Description Tyson Webster 2005-05-10 11:57:03 EDT
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.7) Gecko/20050416 Red Hat/1.0.3-1.4.1 Firefox/1.0.3

Description of problem:
The system hangs on reboot because cman will not shut down while clvmd is still active. We investigated further, and it looks like clvmd is failing to stop because it is managing all lvm2 volumes on the system.

[root@suplx2 ~]# service clvmd status

clvmd (pid 2539) is running...

active volumes: SupLXLV00 SupLXLV01 SupLXLV03 LogVol00 LogVol01 LogVol02

LogVol00 is mounted on / type ext3 (rw)
LogVol01 is swap
LogVol02 is mounted on /data type ext3 (rw)

SupLXLV00 on /00 type gfs (rw,noatime)
SupLXLV01 on /01 type gfs (rw,noatime)
SupLXLV03 on /03 type gfs (rw,noatime)

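For reference, clvmd should only be concerned with volume groups that carry LVM2's clustered flag; a local root VG like the one above does not have it. A minimal shell sketch for telling the two apart (the helper name is hypothetical; the vg_attr layout is as reported by LVM2's vgs):

```shell
# vg_is_clustered ATTR: succeeds when the 6th character of an
# LVM2 vg_attr string is 'c' (the clustered flag), e.g. "wz--nc".
vg_is_clustered() {
    case "$1" in
        ?????c*) return 0 ;;
        *)       return 1 ;;
    esac
}

# On a live system one could then list only the clustered VGs:
#   vgs --noheadings -o vg_name,vg_attr | while read name attr; do
#       vg_is_clustered "$attr" && echo "$name"
#   done
```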
Version-Release number of selected component (if applicable):
lvm2-cluster-2.01.08-1.0.RHEL4

How reproducible:
Always

Steps to Reproduce:
We have two clusters, one with 3 nodes and one with 2 nodes, and we see this on all nodes.
1. Start clvmd.
2. service clvmd status shows all lvm2 file systems being managed.
3. Try to reboot, or stop the service with service clvmd stop.

Actual Results:  On a reboot the system hangs and has to be power cycled.
The service fails to stop when attempted manually.

Expected Results:  The server should reboot.
The service should stop cleanly.

Additional info:

This is what we see in messages when we try to reboot:

May  9 15:58:50 suplx2 gfs: Unmounting GFS filesystems:  succeeded
May  9 15:58:53 suplx2 clvmd: Deactivating lvms:
May  9 15:58:53 suplx2 kernel: cdrom: open failed.
May  9 15:58:53 suplx2 clvmd:  failed
May  9 15:58:53 suplx2 clvmd:
May  9 15:58:53 suplx2 clvmd:
May  9 15:58:53 suplx2 rc: Stopping clvmd:  failed
May  9 15:58:53 suplx2 fenced: Stopping fence domain:
May  9 15:58:53 suplx2 fenced:  succeeded
May  9 15:58:53 suplx2 fenced:
May  9 15:58:53 suplx2 fenced:
May  9 15:58:53 suplx2 rc: Stopping fenced:  succeeded
May  9 15:58:53 suplx2 cman: Stopping cman:
May  9 15:58:53 suplx2 cman: cman_tool: Can't leave cluster while there are 2 active subsystems
May  9 15:58:53 suplx2 cman:
May  9 15:58:56 suplx2 cman: FATAL: Module cman is in use.
May  9 15:58:56 suplx2 cman:  failed
May  9 15:58:56 suplx2 cman:
May  9 15:58:56 suplx2 rc: Stopping cman:  failed
May  9 15:58:56 suplx2 ccsd: Stopping ccsd:
May  9 15:58:56 suplx2 ccsd[2866]: Stopping ccsd, SIGTERM received.
May  9 15:58:57 suplx2 ccsd:  succeeded
Comment 1 AJ Lewis 2005-05-24 16:51:47 EDT
This should be fixed in the next lvm2-cluster release - on shutdown, the clvmd initscript now attempts to deactivate only the VGs designated as clustered.
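A rough sketch of that kind of filtering (assumed logic for illustration, not the actual initscript; the vgs and vgchange invocations are standard LVM2):

```shell
# Print only the VG names whose vg_attr string carries the
# clustered flag ('c' as the 6th attribute character).
list_clustered_vgs() {
    awk '$2 ~ /^.....c/ { print $1 }'
}

# On shutdown, deactivate clustered VGs only, leaving local VGs
# (root, swap, /data) alone so the node can finish rebooting:
#   vgs --noheadings -o vg_name,vg_attr | list_clustered_vgs |
#       while read vg; do vgchange -an "$vg"; done
```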
