Bug 460233 - gfs mount attempt shouldn't hang if enough journals are not available
Status: CLOSED DUPLICATE of bug 425421
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: gfs-utils
Version: 5.3
Hardware: All
OS: Linux
Priority: low
Severity: low
Target Milestone: rc
Target Release: ---
Assigned To: Robert Peterson
QA Contact: Cluster QE
Depends On:
Blocks:
Reported: 2008-08-26 17:39 EDT by Corey Marthaler
Modified: 2010-01-11 22:34 EST
CC List: 1 user

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2008-08-27 14:58:28 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Corey Marthaler 2008-08-26 17:39:27 EDT
Description of problem:
I thought there was already a BZ open for this issue, but I was unable to find one in Bugzilla.

I created a gfs filesystem with only 3 journals, yet attempted to mount it on all 4 taft nodes. The last mount attempt hung, with the following messages on the console:

Trying to join cluster "lock_dlm", "TAFT:1"
Joined cluster. Now mounting FS...
GFS: fsid=TAFT:1.3: can't mount journal #3
GFS: fsid=TAFT:1.3: there are only 3 journals (0 - 2)
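For context, GFS needs one journal per node that mounts the filesystem, so a 4-node mount requires at least 4 journals. A minimal sketch of avoiding this condition with the gfs-utils tools; the device path below is a placeholder, not taken from this report:

```shell
# Create the filesystem with one journal per mounting node
# (4 taft nodes -> -j 4); device path is hypothetical.
gfs_mkfs -p lock_dlm -t TAFT:1 -j 4 /dev/taft/gfs1

# Or, if the filesystem already exists with too few journals,
# add a journal before mounting on the extra node:
gfs_jadd -j 1 /dev/taft/gfs1
```

Either way, the expected behavior (and the point of this bug) is that a mount attempt without a free journal should fail cleanly rather than hang.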


Version-Release number of selected component (if applicable):
2.6.18-98.el5
gfs2-utils-0.1.44-1.el5
gfs-utils-0.1.17-1.el5
kmod-gfs-0.1.23-5.el5
Comment 1 Corey Marthaler 2008-08-26 17:40:45 EDT
The following was later also dumped to the console:

GFS: fsid=TAFT:1.3: Unmount seems to be stalled. Dumping lock state...
Glock (2, 24)
  gl_flags = 
  gl_count = 2
  gl_state = 0
  req_gh = no
  req_bh = no
  lvb_count = 0
  object = yes
  new_le = no
  incore_le = no
  reclaim = no
  aspace = 0
  ail_bufs = no
  Inode:
    num = 24/24
    type = 1
    i_count = 1
    i_flags = 
    vnode = no
Glock (5, 24)
  gl_flags = 
  gl_count = 2
  gl_state = 3
  req_gh = no
  req_bh = no
  lvb_count = 0
  object = yes
  new_le = no
  incore_le = no
  reclaim = no
  aspace = no
  ail_bufs = no
  Holder
    owner = -1
    gh_state = 3
    gh_flags = 5 7 
    error = 0
    gh_iflags = 1 6 7
Comment 2 RHEL Product and Program Management 2008-08-26 17:42:35 EDT
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release.  Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products.  This request is not yet committed for inclusion in an Update
release.
Comment 3 Robert Peterson 2008-08-27 13:21:16 EDT
This looks to be the same as bug #425421.  Can you try this on
kmod-gfs-0.1.24-2.el5 and let me know if the problem still exists?
Comment 4 Corey Marthaler 2008-08-27 14:58:28 EDT
Ah, I thought I had filed this bug already, but for some reason I was unable to find it. Closing as a dupe...

*** This bug has been marked as a duplicate of bug 425421 ***
