Description of problem:
The problem is in libgfs2, which was not writing out indirect blocks
properly. This manifests itself most noticeably in mkfs.gfs2 because
journals are 32MB. If your block size is 1K, for example, you can only
fit 125 indirect block pointers on an indirect block (99 pointers on
the inode). A 32MB journal therefore needs a metadata height of 3.
The problem is that these indirect blocks were not getting written
out properly.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
mkfs.gfs2 -t smoke:aol -p lock_nolock -j 1 -b 1024 /dev/smokevg/aol
Actual results:
The indirect pointers will be all zeroes, so trying to mount the file
system will crash.
Expected results:
The indirect blocks should be non-zero and mount should not crash.
Created attachment 157633 [details]
patch to fix the problem
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release. Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products. This request is not yet committed for inclusion in an Update
release.
Fix tested on system salem and committed to CVS on the HEAD and RHEL5
branches. Setting status to MODIFIED.
I found another problem with small block sizes during testing.
It involves the variable sdp->bsize_shift. This value is the
number of bits 1 is shifted left to produce the block size
(i.e., log2 of the block size). It was not being adjusted to
reflect the correct block size and was therefore confusing file
I/O operations.
I eliminated the point of confusion by removing the variable
altogether and always using the equivalent field, sb_bsize_shift,
from the on-disk superblock structure.
Thus, I'm committing an addendum patch to fix the problem.
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.