Bug 128286 - sleep on mount gfs disk after reboot
Status: CLOSED NOTABUG
Product: Red Hat Cluster Suite
Classification: Red Hat
Component: dlm
Version: 4
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: David Teigland
QA Contact: Cluster QE
 
Reported: 2004-07-21 05:07 EDT by Anton Nekhoroshikh
Modified: 2009-04-16 16:29 EDT
CC List: 2 users

Doc Type: Bug Fix
Last Closed: 2006-02-02 09:51:10 EST


Attachments: None
Description Anton Nekhoroshikh 2004-07-21 05:07:19 EDT
Description of problem:

After the node reboots, the command "mount -t gfs /dev ..." goes to sleep
and all activity stops on all nodes.
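
A quick way to confirm the hang (a sketch; the grep pattern assumes the
mount command above is still running):

# show the stuck mount process and its kernel wait channel;
# "D" in the STAT column means uninterruptible sleep inside the kernel
ps -eo pid,stat,wchan:30,cmd | grep '[m]ount'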

dmesg:
Lock_Harness <CVS> (built Jul 15 2004 15:18:47) installed
GFS <CVS> (built Jul 15 2004 15:18:29) installed
CMAN <CVS> (built Jul 15 2004 15:31:01) installed
NET: Registered protocol family 31
DLM <CVS> (built Jul 15 2004 15:31:12) installed
Lock_DLM (built Jul 16 2004 20:15:21) installed
CMAN: Waiting to join or form a Linux-cluster
CMAN: sending membership request
CMAN: got node c2.310.ru
CMAN: got node c0.310.ru
CMAN: got node c4.310.ru
CMAN: got node c1.310.ru
CMAN: got node c5.310.ru
CMAN: got node master.310.ru
CMAN: quorum regained, resuming activity
dlm: gfs01: recover event 2 (first)
dlm: gfs01: add nodes
dlm: got connection from 1
dlm: got connection from 3
dlm: got connection from 4
dlm: got connection from 5

nodes:
[root@c3 root]# cat /proc/cluster/nodes
Node  Votes Exp Sts  Name
   1    1    2   M   master.310.ru
   2    1    2   M   c4.310.ru
   3    1    2   M   c1.310.ru
   4    1    2   M   c5.310.ru
   5    1    2   M   c2.310.ru
   6    1    2   M   c3.310.ru
   7    1    2   M   c0.310.ru

services on the rebooted node:
[root@c3 root]# cat /proc/cluster/services
Service          Name                              GID LID State     Code
Fence Domain:    "default"                           0   2 join     
S-1,280,7
[]

DLM Lock Space:  "gfs01"                             9   3 join     
S-6,20,6
[1 4 3 2 5 6]
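
Note that on the rebooted node the fence domain is stuck in the "join"
state with an empty member list, so the GFS mount cannot proceed. Whether
fenced is actually running there can be checked with (a sketch):

pidof fenced || echo "fenced is not running"
grep -A 2 'Fence Domain' /proc/cluster/services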

services on all other nodes:
[root@c4 root]# cat /proc/cluster/services

Service          Name                              GID LID State     Code
Fence Domain:    "default"                           0   2 join     
S-1,80,7
[]

DLM Lock Space:  "gfs01"                             9   3 update   
U-4,1,6
[1 2 3 5 4 6]

GFS Mount Group: "gfs01"                            10   4 run       -
[1 2 3 5 4]
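
The same state can be collected from every node in one pass (a sketch,
assuming passwordless ssh to the node names listed in /proc/cluster/nodes
above):

for n in master c0 c1 c2 c3 c4 c5; do
    echo "== $n =="
    ssh $n.310.ru cat /proc/cluster/services
done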

Version-Release number of selected component (if applicable):

kernel 2.6.7
cluster code from CVS
Comment 1 David Teigland 2004-08-19 00:15:58 EDT
Could you retry this one too? It looks like fencing may not have been
set up correctly, so this is not necessarily a bug.
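
For reference, the usual linux-cluster startup sequence on each node is
roughly the following (a sketch; the device path is hypothetical), and a
GFS mount blocks until the fence domain join completes:

ccsd                        # start the cluster configuration daemon
cman_tool join              # join the cluster
fence_tool join             # start fenced and join the default fence domain
mount -t gfs /dev/pool/gfs01 /mnt/gfs01   # hypothetical device path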
Comment 2 Kiersten (Kerri) Anderson 2004-11-04 10:15:13 EST
Updated with the proper version and component name.
