Bug 620384 - fsck.gfs2 segfaults if journals are missing
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: gfs2-utils
5.5
x86_64 Linux
low Severity low
: rc
: 5.6
Assigned To: Robert Peterson
Cluster QE
:
Depends On: 575968
Blocks: 622576 624689 624691
Reported: 2010-08-02 07:52 EDT by Theophanis Kontogiannis
Modified: 2011-11-04 17:29 EDT (History)
6 users

See Also:
Fixed In Version: gfs2-utils-0.1.62-26.el5
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
: 622576
Environment:
Last Closed: 2011-01-13 18:21:07 EST
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:


Attachments
Preliminary patch (11.93 KB, patch)
2010-08-06 23:56 EDT, Robert Peterson
no flags Details | Diff
Try 3 patch (21.69 KB, patch)
2010-08-12 18:43 EDT, Robert Peterson
no flags Details | Diff
Final patch for 5.6 (25.75 KB, patch)
2010-08-18 13:56 EDT, Robert Peterson
no flags Details | Diff

Description Theophanis Kontogiannis 2010-08-02 07:52:50 EDT
Hello all,

Centos 5.5 
drbd --> lv --> gfs2
gfs2_tool 0.1.62 (built Mar 31 2010 07:34:45)

The filesystem was created in Centos5.2

Running gfs2_fsck seg faults:

    [root@tweety-2 /]# gfs2_fsck -v /dev/mapper/vg1-data1
    Initializing fsck
    Initializing lists...
    jid=0: Looking at journal...
    jid=0: Journal is clean.
    jid=1: Looking at journal...
    jid=1: Journal is clean.
    jid=2: Looking at journal...
    jid=2: Journal is clean.
    jid=3: Looking at journal...
    jid=3: Journal is clean.
    jid=4: Looking at journal...
    jid=4: Journal is clean.
    jid=5: Looking at journal...
    jid=5: Journal is clean.
    Segmentation fault
    gfs2_fsck[5131]: segfault at 00000000000000f0 rip 000000000040aefa rsp 00007fffd02c2d50 error 4


The nice thing is that it also alters the lock protocol defined for the fs to one that does not exist (fsck_dlm):

    [root@tweety-2 /]# mount /mounts
    /sbin/mount.gfs2: error mounting /dev/mapper/vg1-data1 on /mounts: No such file or directory

    GFS2: fsid=: Trying to join cluster "fsck_dlm", "tweety:gfs2-11"
    GFS2: can't find protocol fsck_dlm
    GFS2: fsid=: can't mount proto=fsck_dlm, table=tweety:gfs2-11, hostdata=
    GFS2: fsid=: Trying to join cluster "fsck_dlm", "tweety:gfs2-11"
    GFS2: can't find protocol fsck_dlm
    GFS2: fsid=: can't mount proto=fsck_dlm, table=tweety:gfs2-11, hostdata=
    GFS2: fsid=: Trying to join cluster "fsck_dlm", "tweety:gfs2-11"
    GFS2: can't find protocol fsck_dlm
    GFS2: fsid=: can't mount proto=fsck_dlm, table=tweety:gfs2-11, hostdata=


It gets restored with:

    [root@tweety-2 /]# gfs2_tool sb /dev/mapper/vg1-data1 proto lock_dlm
    You shouldn't change any of these values if the filesystem is mounted.

    Are you sure? [y/n] y

    current lock protocol name = "fsck_dlm"
    new lock protocol name = "lock_dlm"
    Done


    [root@tweety-2 /]#mount /mounts
    [root@tweety-2 /]#

    GFS2: fsid=: Trying to join cluster "lock_dlm", "tweety:gfs2-11"
    GFS2: fsid=tweety:gfs2-11.0: Joined cluster. Now mounting FS...
    GFS2: fsid=tweety:gfs2-11.0: jid=0, already locked for use
    GFS2: fsid=tweety:gfs2-11.0: jid=0: Looking at journal...
    GFS2: fsid=tweety:gfs2-11.0: jid=0: Done
    GFS2: fsid=tweety:gfs2-11.0: jid=1: Trying to acquire journal lock...
    GFS2: fsid=tweety:gfs2-11.0: jid=1: Looking at journal...
    GFS2: fsid=tweety:gfs2-11.0: jid=1: Done
    GFS2: fsid=tweety:gfs2-11.0: jid=2: Trying to acquire journal lock...
    GFS2: fsid=tweety:gfs2-11.0: jid=2: Looking at journal...
    GFS2: fsid=tweety:gfs2-11.0: jid=2: Done
    GFS2: fsid=tweety:gfs2-11.0: jid=3: Trying to acquire journal lock...
    GFS2: fsid=tweety:gfs2-11.0: jid=3: Looking at journal...
    GFS2: fsid=tweety:gfs2-11.0: jid=3: Done
    GFS2: fsid=tweety:gfs2-11.0: jid=4: Trying to acquire journal lock...
    GFS2: fsid=tweety:gfs2-11.0: jid=4: Looking at journal...
    GFS2: fsid=tweety:gfs2-11.0: jid=4: Done
    GFS2: fsid=tweety:gfs2-11.0: jid=5: Trying to acquire journal lock...
    GFS2: fsid=tweety:gfs2-11.0: jid=5: Looking at journal...
    GFS2: fsid=tweety:gfs2-11.0: jid=5: Done


I have saved all my data, but before destroying this GFS2, would any developer like me to assist in debugging?

Sincerely,

Theophanis Kontogiannis
Comment 1 Robert Peterson 2010-08-02 09:52:28 EDT
This may be a bug that I've previously found and fixed.  Can
you try (at your own risk) the fsck.gfs2 on my people page?

http://people.redhat.com/rpeterso/Experimental/RHEL5.x/gfs2/fsck.gfs2

If that doesn't work, please save off your file system metadata
with gfs2_edit savemeta and post it somewhere (private) where
I can download it and recreate the problem.
Comment 2 Theophanis Kontogiannis 2010-08-02 10:10:24 EDT
Hello Bob,

No change with the mentioned fsck.

[root@tweety-2 ~]# ./fsck.gfs2 -v /dev/mapper/vg1-data1 
Initializing fsck
Initializing lists...
jid=0: Looking at journal...
jid=0: Journal is clean.
jid=1: Looking at journal...
jid=1: Journal is clean.
jid=2: Looking at journal...
jid=2: Journal is clean.
jid=3: Looking at journal...
jid=3: Journal is clean.
jid=4: Looking at journal...
jid=4: Journal is clean.
jid=5: Looking at journal...
jid=5: Journal is clean.
Segmentation fault
fsck.gfs2[7436]: segfault at 00000000000000f8 rip 0000000000410ea3 rsp 00007fff0cd9de80 error 4

and again changed the lock proto.

GFS2: fsid=: Trying to join cluster "fsck_dlm", "tweety:gfs2-11"
GFS2: can't find protocol fsck_dlm
GFS2: fsid=: can't mount proto=fsck_dlm, table=tweety:gfs2-11, hostdata=

I am sending you an e-mail with the link for the metadata.

No worries about my data. I have backed up all of them so we can do whatever we like on this file system.

BR
TK
Comment 3 Robert Peterson 2010-08-04 15:29:10 EDT
I received a copy of Theophanis's metadata, restored it and
recreated the problem using my latest and greatest code.  The
problem seems to be that there are two journals mysteriously
deleted from the jindex.  The code that looks at the integrity
of the journals is apparently unable to cope with missing journals.
In this case there are supposed to be ten journals (journal0
through journal9) but journal6 and journal7 are gone for some reason.

I changed the code so that it just skips over the missing journals
but it encounters another problem in pass1 when it tries to
recreate them.  I'm investigating that now and hopefully it will
be easy to figure out.
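
The skip logic described above can be sketched as follows. This is a simplified Python model, not the actual gfs2-utils C code; the jindex lookup and `replay_if_dirty` are hypothetical stand-ins. The point is that a missing journalN entry should be recorded for later recreation instead of being dereferenced (the segfault addresses 0xf0/0xf8 in the reports above are small offsets from NULL, consistent with dereferencing a missing journal inode):

```python
def replay_if_dirty(ino):
    pass  # stand-in for journal replay on a valid journal dinode

def check_journals(jindex, n_journals):
    """Return the jids whose journal dinode is absent from the jindex,
    instead of crashing on the NULL lookup result."""
    missing = []
    for jid in range(n_journals):
        ino = jindex.get("journal%u" % jid)  # hypothetical lookup helper
        if ino is None:
            missing.append(jid)              # skip now, recreate in pass1
            continue
        replay_if_dirty(ino)
    return missing
```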

It should be noted that this file system has a block size of 512
bytes (1/2K) which I believe is an unsupported configuration.
Normal journals are driven to 4 levels of metadata indirection!
So far I haven't run into any code that can't deal with this block
size, and the missing journal problem would still be there even if
the block size was bigger, so that's not impacting me at the moment.
It might, however, have contributed to the fact that those two
journals are missing (under investigation as well).  It would be
helpful to know if Theophanis knows how the journals went missing.
Was the file system created with ten journals or fewer, and
gfs2_jadd run?
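
The 4 levels of indirection follow from the arithmetic: each indirect block holds (block_size - header) / 8 pointers, so a 512-byte block holds far fewer pointers than a 4K one. A rough model (assuming a 24-byte metadata header and 8-byte block pointers, and ignoring the dinode's own direct pointer area):

```python
def height_needed(file_size, block_size, header_bytes=24, ptr_bytes=8):
    """Levels of metadata indirection needed to address file_size bytes."""
    ppb = (block_size - header_bytes) // ptr_bytes  # pointers per block
    height, reach = 0, block_size
    while reach < file_size:
        reach *= ppb
        height += 1
    return height

# Under these assumptions, a default 128 MB journal needs 4 levels of
# indirection on 512-byte blocks, but only 2 on 4096-byte blocks.
```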
Comment 4 Theophanis Kontogiannis 2010-08-04 16:59:58 EDT
Hi Bob,

In fact until now I did not even know the journals were missing. 

After moving out all my files I ran gfs2_fsck for no reason, and this is how I ended up filing this bug.

The file system was created from the beginning with ten journals and no alterations were made throughout its lifecycle.

BR
TK
Comment 5 Robert Peterson 2010-08-05 10:45:45 EDT
I've got a prototype that seems to be working properly.  I'm
testing it now.  Hopefully we can get this into 5.6.  I'm going
to try to figure out how the journals went missing.
Comment 6 Robert Peterson 2010-08-06 23:56:23 EDT
Created attachment 437297 [details]
Preliminary patch

With this patch I was able to fix the broken file system.
This is still a preliminary patch and has not been tested properly.
Comment 7 Robert Peterson 2010-08-12 18:43:48 EDT
Created attachment 438553 [details]
Try 3 patch

I found some problems with the previous patch under more
rigorous testing.  The previous patch was also for upstream
code.  This version is more comprehensive in its cleaning up
of deleted, missing and destroyed journal dinodes.
Even so, it needs more testing.  This one is at least close.
Comment 8 Robert Peterson 2010-08-17 09:17:38 EDT
Yesterday I did more rigorous testing and discovered two more
bugs.  The first one affects mkfs.gfs2 and I opened it as bug
#624535.  The second bug I'm trying to decide what to do about.
Basically, the latest and greatest fsck.gfs2 doesn't like it when
directories get really big (i.e., lots of entries). This happens
more easily with a small block size like the 512B blocks from the
user's metadata for this bug.

For almost all normal directories, the metadata structure looks
like this:

height  structure
------  -------------------------------------------------
0.      dinode
1.      journaled data block (hash table block pointers)
2.      directory leaf blocks

When directories get really big their metadata structure gets
more complex and ends up looking like this:

height  structure
------  -------------------------------------------------
0.      dinode
1.      indirect block (block pointers to block pointers)
2.      journaled data block (hash table block pointers)
3.      directory leaf blocks

If there are enough directory entries, the structure can
reach more heights, with level 2 being another level of
indirect blocks:

height  structure
------  -------------------------------------------------
0.      dinode
1.      indirect block (block pointers to block pointers)
2.      indirect block (block pointers to block pointers)
3.      journaled data block (hash table block pointers)
4.      directory leaf blocks
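
Handling the deeper layouts generically amounts to descending indirect blocks until the hash-table level is reached, rather than assuming the two-level form. A sketch of that traversal (simplified Python, not the actual fsck.gfs2 C code; `read_pointers` is a caller-supplied stand-in for reading a block's pointer list):

```python
def collect_hash_blocks(read_pointers, blk, level, height):
    """Descend indirect pointer blocks from the dinode's pointers down to
    the level holding the journaled-data (hash table) blocks, whatever
    the directory's metadata height."""
    if level == height:
        return [blk]                   # hash table block reached
    out = []
    for child in read_pointers(blk):   # children of an indirect block
        out += collect_hash_blocks(read_pointers, child, level + 1, height)
    return out
```

With height 2 this degenerates to the simple case fsck.gfs2 already handled; heights 3 and 4 just add recursion levels.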

Right now, fsck.gfs2 can only handle directories of the first
form.  Large directories with four different metadata types
are flagged as errors and data is destroyed.  This is very
serious and needs to get fixed ASAP.  I've written a patch for
this issue and I'm testing it now.  So far the patch has passed
a simple unit test using a four-level directory.  Now I'm
running it against the metadata for this bug to see if it has
any issues.  Unfortunately, that takes a long time to complete.
I'm likely to open a new bugzilla record for this new problem.
Comment 9 Robert Peterson 2010-08-17 14:46:47 EDT
I opened up bugzilla records for the second problem listed
in comment #8.  The RHEL5 bug is bug #624689.  The RHEL6 bug
is bug #624691.  My combined patch that includes both fixes
ran successfully on the user's metadata.  That means my patch
works perfectly.  But since I separated out the fix for that
second problem, I need to rebase this patch.
Comment 10 Robert Peterson 2010-08-18 13:56:02 EDT
Created attachment 439455 [details]
Final patch for 5.6

This is an updated version of the patch that fixes some issues
I caught in testing.  Hopefully this is the final version I
will push to the git repo.  It was tested on system kool.
Comment 11 Robert Peterson 2010-08-18 14:29:55 EDT
The patch was pushed to the RHEL56 branch of the cluster git
tree for inclusion into 5.6.  Changing status to POST until
this gets built into a gfs2-utils package.
Comment 12 Robert Peterson 2010-09-20 10:49:26 EDT
Build 2770902 successful.  Changing status to Modified.
This fix is in gfs2-utils-0.1.62-26.el5.
Comment 17 errata-xmlrpc 2011-01-13 18:21:07 EST
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2011-0135.html
