Bug 250772 - GFS2: assertion failure gfs2_block_map, file = fs/gfs2/bmap.c, line = 475
Summary: GFS2: assertion failure gfs2_block_map, file = fs/gfs2/bmap.c, line = 475
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: kernel
Version: 5.0
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Don Zickus
QA Contact: GFS Bugs
URL:
Whiteboard:
Depends On: 252191
Blocks:
 
Reported: 2007-08-03 15:30 UTC by Wendy Cheng
Modified: 2007-11-30 22:07 UTC
CC List: 5 users

Fixed In Version: RHBA-2007-0959
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2007-11-07 19:57:47 UTC
Target Upstream Version:
Embargoed:


Attachments
Patch to fix the unstuff problem (3.31 KB, patch)
2007-08-13 17:06 UTC, Robert Peterson
Patch to forcibly unstuff the quota inode in gfs2_adjust_quota() (637 bytes, patch)
2007-08-13 22:42 UTC, Abhijith Das


Links
Red Hat Product Errata RHBA-2007:0959 (SHIPPED_LIVE): Updated kernel
packages for Red Hat Enterprise Linux 5 Update 1 (last updated 2007-11-08
00:47:37 UTC)

Description Wendy Cheng 2007-08-03 15:30:14 UTC
Description of problem:

On a newly installed 37.el5 kernel, I consistently hit the following panic
with *and* without the quota=on mount option:

Aug  3 11:19:18 dhcp143 kernel: GFS2: fsid=rhel5:spec.0: warning: assertion
"!gfs2_is_stuffed(ip)" failed
Aug  3 11:19:18 dhcp143 kernel: GFS2: fsid=rhel5:spec.0:   function =
gfs2_block_map, file = fs/gfs2/bmap.c, line = 475
Aug  3 11:19:18 dhcp143 kernel:
Aug  3 11:19:18 dhcp143 kernel: Call Trace:
Aug  3 11:19:18 dhcp143 kernel:  [<ffffffff8851782e>]
:gfs2:gfs2_assert_warn_i+0xa1/0xc3
Aug  3 11:19:18 dhcp143 kernel:  [<ffffffff884f9eaf>]
:gfs2:gfs2_block_map+0x85/0x346
Aug  3 11:19:18 dhcp143 kernel:  [<ffffffff80022b17>] alloc_buffer_head+0x31/0x36
Aug  3 11:19:18 dhcp143 kernel:  [<ffffffff8002e371>] alloc_page_buffers+0x81/0xd3
Aug  3 11:19:18 dhcp143 kernel:  [<ffffffff88511273>] :gfs2:do_sync+0x336/0x58a
Aug  3 11:19:18 dhcp143 kernel:  [<ffffffff88509433>] :gfs2:gfs2_meta_read+0x4d/0x6b
Aug  3 11:19:18 dhcp143 kernel:  [<ffffffff88511054>] :gfs2:do_sync+0x117/0x58a
Aug  3 11:19:18 dhcp143 kernel:  [<ffffffff88511cca>]
:gfs2:gfs2_quota_sync+0x1fa/0x25c
Aug  3 11:19:18 dhcp143 kernel:  [<ffffffff8009b291>]
keventd_create_kthread+0x0/0x61
Aug  3 11:19:18 dhcp143 kernel:  [<ffffffff884fb3e3>] :gfs2:gfs2_quotad+0xc7/0x154
Aug  3 11:19:18 dhcp143 kernel:  [<ffffffff884fb31c>] :gfs2:gfs2_quotad+0x0/0x154
Aug  3 11:19:18 dhcp143 kernel:  [<ffffffff80032161>] kthread+0xfe/0x132
Aug  3 11:19:18 dhcp143 kernel:  [<ffffffff8005bfb1>] child_rip+0xa/0x11
Aug  3 11:19:18 dhcp143 kernel:  [<ffffffff8009b291>]
keventd_create_kthread+0x0/0x61
Aug  3 11:19:18 dhcp143 kernel:  [<ffffffff80032063>] kthread+0x0/0x132

Version-Release number of selected component (if applicable):


How reproducible:
Always hit it if the quota=on mount option is used. However, I can also
hit it without quota on while running the kernel built for bugzilla 249905
under the NFS specsfs benchmark.

Steps to Reproduce:
1. mount gfs2 filesystem with quota on
2. do some write activities.
3. the assertion failure occurs shortly afterwards
  
Actual results:


Expected results:


Additional info:
Never saw this issue with older RHEL5 kernels (but the last kernel I ran
was 18.el5, so it is hard to say which patch introduced this issue).

Comment 1 Wendy Cheng 2007-08-03 15:36:03 UTC
I can't work on the data corruption bug found in 249905 since this new
issue always hits first. And no, I do not specifically turn on quota; I
use the default mount options.

Abhi reported exactly the same issue on #sistina IRC yesterday.

Comment 2 Wendy Cheng 2007-08-03 15:39:40 UTC
Was trying to see whether Barry's data corruption could be recreated on my small
cluster but consistently hit this assertion instead.

Steve, I'll take this bugzilla. 

Comment 3 Wendy Cheng 2007-08-03 16:25:23 UTC
hmm... sorry, I must have had quota on in previous runs, so the quota
records linger around and the journaling code just tries fruitlessly to
recover them. Will re-make the filesystem to get rid of this (hopefully),
but will keep troubleshooting this issue.

Comment 4 Wendy Cheng 2007-08-04 02:51:21 UTC
I have never worked on the quota code before - I need to discuss it with
Abhi and Steve on Monday.

The problem seems to be that sd_quota_inode is stuffed (at least right
after a fresh mkfs - btw, can this file grow? Anyway, that is not the
current issue). The issue here is that gfs2_adjust_quota tries to update
sd_quota_inode. Two things can go wrong here (I think):

1. It first grabs a page from the page cache that is used to host this
   inode. What happens if this page gets reclaimed by the VM? The quota
   code doesn't seem to expect to do any block allocation (bh mapping) in
   this part of the code. It will assert later in gfs2_block_map if this
   page is newly allocated.
2. Even if the page does exist, the buffer head can get released when the
   glock is released. So again, we would flow into gfs2_block_map and get
   an assert there, since the new buffer head is not mapped yet.

GFS1 doesn't have this issue since it always does a read first (where the
page can be re-allocated and the buffer head correctly mapped) before it
does this write.
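
For reference, the check that fires is essentially the following (a
minimal sketch around the assertion quoted in the trace above; the exact
code at fs/gfs2/bmap.c line 475 may differ):

        /* a stuffed inode keeps its data inside the dinode block itself,
         * so there is no block mapping to build; reaching gfs2_block_map
         * with a stuffed inode means some caller skipped the unstuff step */
        gfs2_assert_warn(sdp, !gfs2_is_stuffed(ip));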

Comment 5 Wendy Cheng 2007-08-04 02:57:54 UTC
Wait ... (2) can't happen since the glock is held in this part of the
code. (1) is apparently the issue.

Comment 6 Wendy Cheng 2007-08-04 03:18:51 UTC
Another wait :) .. (2) can happen since the buffer head can be released
by a previous write.

I think the fix here is to allow this code segment to do block allocation.
Will hack up a solution and discuss with the team on Monday.

Comment 7 Wendy Cheng 2007-08-04 17:33:38 UTC
Re-read the code (gfs2_adjust_quota) .. I'm surprised to see that the code
doesn't have any logic to handle a stuffed ip .. why? This looks very wrong.

Comment 8 Wendy Cheng 2007-08-04 17:41:23 UTC
The comment in gfs2_adjust_quota() says "this function was mostly borrowed
from gfs2_block_truncate_page which was in turn mostly borrowed from ext3".
Apparently it has been ignoring the stuffed inode case. Do we allow the
quota inode to get stuffed or not? Note that we assert at the first write,
while the inode is still stuffed.


Comment 9 Steve Whitehouse 2007-08-06 08:11:58 UTC
It seems rather unlikely that in any practical situation we'd end up with
a stuffed quota file. I'd suggest just adding a simple check (if stuffed,
then unstuff the inode) at the top of that function, so that we can then
always assume that it's not stuffed.
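
A minimal sketch of that suggestion (this assumes the existing
gfs2_unstuff_dinode() helper; the actual error handling would depend on
the surrounding code):

        /* at the top of gfs2_adjust_quota(): make sure the quota inode is
         * no longer stuffed before we try to map and write its blocks */
        if (gfs2_is_stuffed(ip)) {
                error = gfs2_unstuff_dinode(ip, NULL);
                if (error)
                        return error;
        }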


Comment 10 Wendy Cheng 2007-08-06 21:24:47 UTC
Transfer this bug to Abhi ... 

Comment 11 Robert Peterson 2007-08-10 22:02:54 UTC
Abhi tried doing an unstuff at the suggested location, but unfortunately
we removed some kludges from the code, so we're back to the problem where
unstuffing a journaled inode causes the kernel to panic.
Part of the removed kludging looked like:

        if (!bh){
                list_move(&bd->bd_ail_st_list, &ai->ai_ail2_list);
                continue;
        }

in the functions gfs2_ail1_start_one and gfs2_ail1_empty_one.  It was
removed as part of the recent 248176 work.  As an experiment, I tried
adding it back in, but that only pushed the problem further out and it
failed later on.

I discussed this at length with Steve Whitehouse and he agrees that
this was a kludge and was rightfully removed: A null bh should not be
on the ail lists to begin with.

I tried some experiments at Steve's suggestion, all of which failed,
but they told us more about the problem.

After Steve left for the day, I did a bunch of experiments and
debugging to find out exactly where these null entries are coming
from.  The sequence of events seems to be this:

As part of the write operation, gfs2_invalidatepage gets called when
it's done.  gfs2_invalidatepage calls discard_buffer, which sets:

bh->b_private = NULL;

But that buffer header hasn't been flushed out to disk yet.  So when
log_flush gets called and the bh gets passed down the active items
lists, that's where we encounter a problem.
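
For orientation, the relevant piece looks roughly like this (a sketch
reconstructed from the behaviour described above, not the literal
ops_address.c code):

        static void discard_buffer(struct gfs2_sbd *sdp,
                                   struct buffer_head *bh)
        {
                struct gfs2_bufdata *bd = bh->b_private;
                ...
                if (bd) {
                        bd->bd_bh = NULL;  /* the null bh that later turns
                                            * up on the ail lists */
                        bh->b_private = NULL;
                }
                ...
        }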

I checked the source code in git, and this call to discard_buffer and
its zeroing out of b_private have been in there since day one.
I tried making discard_buffer not do that, but of course, that also
failed, most likely because discard_buffer still does a bunch of
resetting of the bits in the bh.  It fails in gfs2_ail2_empty_one
on the statement:

bh_ip = GFS2_I(bd->bd_bh->b_page->mapping->host);

I think that perhaps we shouldn't call discard_buffer at all until the
log/lops functions (the gfs2_log_flush process) have finished with the
buffers/buffer descriptors.  Having the log code reference these pages
and bds, especially after discard_buffer calls kmem_cache_free on the
bd, is asking for corruption and kernel panics.

Perhaps Steve has some more thoughts on how to do this properly.


Comment 12 Robert Peterson 2007-08-13 17:06:14 UTC
Created attachment 161189 [details]
Patch to fix the unstuff problem

Abhi, can you test this patch with your quota unstuff patch?
This is half-Steve and half-me to fix these problems.

Comment 13 Abhijith Das 2007-08-13 22:42:24 UTC
Created attachment 161236 [details]
Patch to forcibly unstuff the quota inode in gfs2_adjust_quota()

Bob, I tried this unstuffing patch of mine on top of your patch in the previous
comment and things seem to work fine. Steve, can you look through this patch
and tell me if it looks good?

Comment 14 Robert Peterson 2007-08-13 22:56:32 UTC
I've been doing some more testing on the above patch.  I found another
test that fails, even with this patch.  This used to work a month or 
so ago.  It's test #4 listed here:

https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=243899#c12

It basically hits the stuck_releasepage code in ops_address.c, which
tells me that somewhere we forgot to do a brelse on a buffer header.
Either that, or someone did an extra get_bh().  It gives the following
values:

GFS2: fsid=bob_cluster2:test_gfs.0: stuck in gfs2_releasepage() d6c79ca0
GFS2: fsid=bob_cluster2:test_gfs.0: blkno = 147019, bh->b_count = 1
GFS2: fsid=bob_cluster2:test_gfs.0: pinned = 0
GFS2: fsid=bob_cluster2:test_gfs.0: bh->b_private = !NULL
GFS2: fsid=bob_cluster2:test_gfs.0: gl = (2, 99316)
GFS2: fsid=bob_cluster2:test_gfs.0: bd_list_tr = no, bd_le.le_list = no
GFS2: fsid=bob_cluster2:test_gfs.0: ip = 22 99316
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[0] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[1] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[2] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[3] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[4] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[5] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[6] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[7] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[8] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[9] = NULL

BTW, that block didn't look interesting, and it's not in an RG or
anything.  Just binary data in the middle (174MB) of the file,
and it's not always the same block.

I tried backing off my patch and had the same problem.
I tried some older versions of the code, but they all have various
problems doing this same test.

Since the buffer isn't pinned, the pins must match the unpins, so
that's apparently not it.  Since the i_cache entries are all NULL, that
tells me that gfs2_meta_cache_flush has already run and done its
brelses as well, so that's apparently not it either.

I did notice in gfs2_unpin that it will only do the brelse 
if (bd->bd_ail), so perhaps we have a bd_ail management issue?  
I'm still investigating.
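
The logic in question looks roughly like this (a sketch based on the
description above, assuming the gfs2_unpin() in lops.c of this era;
details may differ):

        /* in gfs2_unpin(): the extra bh reference is only dropped when
         * the buffer descriptor is already on an AIL list */
        if (bd->bd_ail) {
                list_del(&bd->bd_ail_st_list);
                brelse(bh);
        }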

Perhaps Steve can try this on his system, since he invented the test.


Comment 15 Steve Whitehouse 2007-08-14 08:18:23 UTC
Abhi, your patch from comment #13 looks ok to me.


Comment 16 Steve Whitehouse 2007-08-14 09:47:51 UTC
Bob, I tried the test with just my patch and the change to
gfs2_ail2_empty_one() and it worked OK for me. Perhaps that's a clue as to
where the problem lies?

To be honest, I've never really understood why we wait in
gfs2_releasepage() at all. There is a comment in the VFS code which
indicates that there is a plan to make releasepage non-blocking in the
future anyway. ext3 doesn't block in its releasepage, so if that becomes
a problem, then I think we should adopt the same solution.
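
Something like the ext3 behaviour would presumably amount to this (a
hedged sketch, not a tested change):

        /* instead of the wait loop in gfs2_releasepage(): if a buffer on
         * the page is still referenced, report failure and let the VM
         * retry later rather than blocking here */
        if (atomic_read(&bh->b_count))
                return 0;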


Comment 17 Robert Peterson 2007-08-14 14:38:06 UTC
I added a call to dump_stack() when the failure is detected and got this:

GFS2: fsid=bob_cluster2:test_gfs.0: stuck in gfs2_releasepage() c3d1eca0
 [<e02b9763>] gfs2_releasepage+0xef/0x3b1 [gfs2]
 [<e02b9674>] gfs2_releasepage+0x0/0x3b1 [gfs2]
 [<c0147e3f>] try_to_release_page+0x30/0x42
 [<c014d6ed>] shrink_inactive_list+0x4f9/0x7c7
 [<c01e49c0>] nfs_access_cache_shrinker+0x20/0x182
 [<c014da85>] shrink_zone+0xca/0xef
 [<c014ded7>] kswapd+0x288/0x405
 [<c0133e5d>] autoremove_wake_function+0x0/0x35
 [<c011dd60>] complete+0x39/0x48
 [<c014dc4f>] kswapd+0x0/0x405
 [<c0133d97>] kthread+0x38/0x5d
 [<c0133d5f>] kthread+0x0/0x5d
 [<c0105a6f>] kernel_thread_helper+0x7/0x10
 =======================
GFS2: fsid=bob_cluster2:test_gfs.0: blkno = 146511, bh->b_count = 1
GFS2: fsid=bob_cluster2:test_gfs.0: pinned = 1
GFS2: fsid=bob_cluster2:test_gfs.0: bh->b_private = !NULL
GFS2: fsid=bob_cluster2:test_gfs.0: gl = (2, 99316)
GFS2: fsid=bob_cluster2:test_gfs.0: bd_list_tr = no, bd_le.le_list = yes
GFS2: fsid=bob_cluster2:test_gfs.0: ip = 22 99316
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[0] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[1] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[2] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[3] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[4] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[5] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[6] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[7] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[8] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[9] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: stuck in gfs2_releasepage() c3d1eca0
 [<e02b9763>] gfs2_releasepage+0xef/0x3b1 [gfs2]
 [<e02b9674>] gfs2_releasepage+0x0/0x3b1 [gfs2]
 [<c0147e3f>] try_to_release_page+0x30/0x42
 [<c014d6ed>] shrink_inactive_list+0x4f9/0x7c7
 [<c041f1e2>] __sched_text_start+0x53a/0x5c9
 [<c012bb1e>] lock_timer_base+0x19/0x35
 [<c014da85>] shrink_zone+0xca/0xef
 [<c014e3f8>] try_to_free_pages+0x11d/0x1f1
 [<c014a2cf>] __alloc_pages+0x18f/0x28b
 [<c014732f>] generic_file_buffered_write+0x1b3/0x5d2
 [<c041f985>] __wait_on_bit_lock+0x4b/0x52
 [<c0147bd6>] __generic_file_aio_write_nolock+0x488/0x4e7
 [<c0147c87>] generic_file_aio_write+0x52/0xb0
 [<c01622f9>] do_sync_write+0xc7/0x10a
 [<c0128852>] tasklet_action+0x46/0x90
 [<c0133e5d>] autoremove_wake_function+0x0/0x35
 [<c012872d>] irq_exit+0x53/0x6b
 [<c011896e>] smp_apic_timer_interrupt+0x74/0x80
 [<c01058ec>] apic_timer_interrupt+0x28/0x30
 [<c0162232>] do_sync_write+0x0/0x10a
 [<c0162a72>] vfs_write+0x8a/0x10c
 [<c0162fde>] sys_write+0x41/0x67
 [<c0104e1e>] sysenter_past_esp+0x5f/0x85
 =======================
GFS2: fsid=bob_cluster2:test_gfs.0: blkno = 146535, bh->b_count = 1
GFS2: fsid=bob_cluster2:test_gfs.0: pinned = 1
GFS2: fsid=bob_cluster2:test_gfs.0: bh->b_private = !NULL
GFS2: fsid=bob_cluster2:test_gfs.0: gl = (2, 99316)
GFS2: fsid=bob_cluster2:test_gfs.0: bd_list_tr = no, bd_le.le_list = yes
GFS2: fsid=bob_cluster2:test_gfs.0: ip = 22 99316
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[0] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[1] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[2] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[3] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[4] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[5] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[6] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[7] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[8] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[9] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: stuck in gfs2_releasepage() c3d1eca0
 [<e02b9763>] gfs2_releasepage+0xef/0x3b1 [gfs2]
 [<c011dbcc>] dequeue_entity+0x75/0x92
 [<e02b9674>] gfs2_releasepage+0x0/0x3b1 [gfs2]
 [<c0147e3f>] try_to_release_page+0x30/0x42
 [<c014d6ed>] shrink_inactive_list+0x4f9/0x7c7
 [<c012bb1e>] lock_timer_base+0x19/0x35
 [<c012bb7e>] try_to_del_timer_sync+0x44/0x4a
 [<c041f902>] schedule_timeout+0x79/0x8d
 [<c012b88c>] process_timeout+0x0/0x5
 [<c041f8f4>] schedule_timeout+0x6b/0x8d
 [<c014f0cd>] congestion_wait+0x5b/0x64
 [<c0133e5d>] autoremove_wake_function+0x0/0x35
 [<c014da85>] shrink_zone+0xca/0xef
 [<c014e3f8>] try_to_free_pages+0x11d/0x1f1
 [<c014a2cf>] __alloc_pages+0x18f/0x28b
 [<c015f317>] __slab_alloc+0x177/0x4a7
 [<c015f75d>] kmem_cache_alloc+0x3c/0x7c
 [<c017c08f>] alloc_buffer_head+0x10/0x37
 [<c017c08f>] alloc_buffer_head+0x10/0x37
 [<e02b6265>] gfs2_log_fake_buf+0x56/0x10e [gfs2]
 [<e02b79ac>] databuf_lo_before_commit+0x39d/0x509 [gfs2]
 [<e02b6003>] gfs2_log_flush+0x110/0x2eb [gfs2]
 [<e02b5579>] gfs2_ail1_empty+0x2e/0x84 [gfs2]
 [<e02aa63a>] gfs2_logd+0x89/0x13b [gfs2]
 [<c011dd60>] complete+0x39/0x48
 [<e02aa5b1>] gfs2_logd+0x0/0x13b [gfs2]
 [<c0133d97>] kthread+0x38/0x5d
 [<c0133d5f>] kthread+0x0/0x5d
 [<c0105a6f>] kernel_thread_helper+0x7/0x10
 =======================
GFS2: fsid=bob_cluster2:test_gfs.0: blkno = 146543, bh->b_count = 1
GFS2: fsid=bob_cluster2:test_gfs.0: pinned = 1
GFS2: fsid=bob_cluster2:test_gfs.0: bh->b_private = !NULL
GFS2: fsid=bob_cluster2:test_gfs.0: gl = (2, 99316)
GFS2: fsid=bob_cluster2:test_gfs.0: bd_list_tr = no, bd_le.le_list = yes
GFS2: fsid=bob_cluster2:test_gfs.0: ip = 22 99316
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[0] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[1] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[2] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[3] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[4] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[5] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[6] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[7] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[8] = NULL
GFS2: fsid=bob_cluster2:test_gfs.0: ip->i_cache[9] = NULL

I'm not using NFS (despite the nfs_access_cache_shrinker frame above), so
this is apparently just the VFS doing its page maintenance.


Comment 18 Robert Peterson 2007-08-14 17:18:49 UTC
I split out the work for the new unstuff/journaled file problems
into bugzilla 252191.  I'm making this one depend on that one.


Comment 19 Steve Whitehouse 2007-08-15 09:30:31 UTC
I've pushed the patch for this bz upstream, so now we need a RHEL 5.1 version of
it as soon as possible, so we can get this into the POST state.


Comment 20 Abhijith Das 2007-08-17 17:59:58 UTC
Posted patch to rhkernel-list
http://post-office.corp.redhat.com/archives/rhkernel-list/2007-August/msg00622.html

Comment 21 Don Zickus 2007-08-21 18:36:14 UTC
in 2.6.18-42.el5
You can download this test kernel from http://people.redhat.com/dzickus/el5

Comment 24 errata-xmlrpc 2007-11-07 19:57:47 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2007-0959.html


