Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
For bugs related to the Red Hat Enterprise Linux 5 product line. The current stable release is 5.10. For Red Hat Enterprise Linux 6 and above, please visit Red Hat JIRA https://issues.redhat.com/secure/CreateIssue!default.jspa?pid=12332745 to report new issues.

Bug 475384

Summary: RAID0 (stripe) produces I/O errors on boot. The striped RAID system seems to boot and work normally except for the I/O errors shown at boot.
Product: Red Hat Enterprise Linux 5 Reporter: Ed Ciechanowski <ed.ciechanowski>
Component: mkinitrd Assignee: Hans de Goede <hdegoede>
Status: CLOSED ERRATA QA Contact: Alexander Todorov <atodorov>
Severity: high Docs Contact:
Priority: low    
Version: 5.3 CC: atodorov, borgan, cward, ddumas, fernando, hdegoede, heinzm, Jacek.Danecki, jane.lv, jgranado, jjarvis, jvillalo, keve.a.gabbert, krzysztof.wojcik, luyu, lvm-team, mgahagan, naveenr, nelhawar, pjones, rpacheco, syeghiay, tao
Target Milestone: rc Keywords: OtherQA
Target Release: ---   
Hardware: i386   
OS: Linux   
Whiteboard:
Fixed In Version: Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2009-09-02 11:15:34 UTC Type: ---
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 471689, 485400    
Bug Blocks: 480792    
Attachments:
Description Flags
Screen capture of i/o errors on RAID0 Boot
none
Screen capture from fdisk -l
none
df.txt from RAID0 system
none
output from the command "dmesg > bootmsg.log"
none
Screen capture of RAID0 boot error
none
RAID0 logs from Install of RHEL5.3 RC1
none

Description Ed Ciechanowski 2008-12-09 00:48:46 UTC
Created attachment 326233 [details]
Screen capture of i/o errors on RAID0 Boot

+++ This bug was initially created as a clone of Bug #471689 +++

Description of problem:
During the installation of RHEL 5.3 Snapshot 2, no SW RAID devices are active to install the OS to. Also, the version of dmraid being used during install is old.

After the keyboard type selection, and after one skips the registration number, the following error appears:
ERROR 
Error opening /dev/mapper/isw_cdecfjjff_Volume0:
No such device or address

If you hit Alt-F1
<the last few lines are as follows>
error: only one argument allowed for this option
Error
Error: error opening /dev/mapper/isw_cdecfjjff_Volume0: no such device or address 80

If you hit Alt-F2 and try to activate dmraid:
dmraid -ay
raidset "/dev/mapper/cdecfjjff_Volume0" was not activated

If you hit Alt-F3 you see this:
9:20:05 INFO:		moving (1) to step regkey
9:20:11 INFO:		Repopaths is {'base', 'server'}
9:20:11 INFO:		Moving (1) to step find root parts
9:20:11 ERROR:	Activating raid "/dev/mapper/isw_cdecfjjff_Volume0": failed
9:20:11 ERROR:	Table: 0 156295168 mirror core 2 131072 nosync 2 /dev/sda 0 /dev/sdb 0 1 handle_errors
9:20:11 ERROR:	Exception: device-mapper: reload ioctl failed: invalid argument
9:20:11 Critical:	parted exception: ERROR: error opening /dev/mapper/isw_cdecfjjff_Volume0: no such device or address

If you hit Alt-F4 (the last few lines are):
<6> device-mapper: multipath: ver 1.0.5 loaded
<6> device-mapper: round-robin: v1.0.0 loaded
<3> device-mapper: table: 253:0 mirror: wrong number of minor arguments
<4> device-mapper: ioctl: error adding target to table

Version-Release number of selected component (if applicable):
During install of RHEL 5.3 snapshot 2.
Also in Alt-F2 you run:
dmraid -V
dmraid version: 		1.0.0.rc13 (2007.09.17) static debug
dmraid library version:	1.0.0.rc13 (2006.09.17) 
device-mapper version:	4.11.5
"THESE ARE THE WRONG VERSIONS!"

How reproducible:
Run the install from the RHEL 5.3 Snapshot 2 DVD ISO with an ISW SW RAID mirror set up as the only two drives in the system; you can't miss it.

Steps to Reproduce:
1. Create RAID1 in OROM. Use default settings.
2. Boot to install DVD of RHEL 5.3 Snapshot2
3. Select a keyboard type and Skip the registration #.
4. The next screen that comes up shows the error.
  
Actual results:
RHEL 5.3 Snapshot 2 does not recognize the SW RAID drives set up in the BIOS OROM, so it cannot install the OS to the mirror.

Expected results:
Expected RHEL 5.3 Snapshot 2 to recognize the SW RAID mirror and install the OS to it.

Additional info:

--- Additional comment from clumens on 2008-11-16 22:45:49 EDT ---

Please attach /tmp/anaconda.log and /tmp/syslog to this bug report.  A picture or something of those error messages on tty1 would be pretty helpful too.  Thanks.

--- Additional comment from ed.ciechanowski on 2008-11-17 15:01:37 EDT ---

Created an attachment (id=323792)
Anaconda.log file

attached anaconda.log file

--- Additional comment from ed.ciechanowski on 2008-11-17 15:02:34 EDT ---

Created an attachment (id=323793)
syslog file

attached syslog file

--- Additional comment from ddumas on 2008-11-17 15:41:15 EDT ---

We are seeing device-mapper-related problems showing up in anaconda logs with
Snapshot 2.

05:13:57 ERROR   : Activating raid isw_cdecfjjff_Volume0 failed: 05:13:57 ERROR
  :   table: 0 156295168 mirror core 2 131072 nosync 2 /dev/sda 0 /dev/sdb 0 1
handle_errors
05:13:57 ERROR   :   table: 0 156295168 mirror core 2 131072 nosync 2 /dev/sda
0 /dev/sdb 0 1 handle_errors
05:13:57 ERROR   :   exception: device-mapper: reload ioctl failed: Invalid
argument
05:13:57 ERROR   :   exception: device-mapper: reload ioctl failed: Invalid
argument

lvm-team, could someone please take a look?

--- Additional comment from mbroz on 2008-11-17 16:57:31 EDT ---

Please retest with snapshot3, it should contain fixed dmraid package, also see
https://bugzilla.redhat.com/show_bug.cgi?id=471400#c6

--- Additional comment from ed.ciechanowski on 2008-11-19 17:29:30 EDT ---

Created an attachment (id=324109)
screen capture dmraid on first boot error

This is the screen capture of RHEL 5.3 Snapshot3 after installing to a mirror and first reboot will show this error.

--- Additional comment from ed.ciechanowski on 2008-11-19 17:31:37 EDT ---

Created an attachment (id=324110)
/var/log dir tar to show logs

Here are the latest log files from RHEL 5.3 Snapshot 3. The install goes further, but I still get an error on the first reboot; see the screen capture. Not sure if the logs will help. Please let me know what else I can provide to help resolve this issue.

--- Additional comment from hdegoede on 2008-11-20 09:21:34 EDT ---

Ed,

As the system does boot, can you please do the following:
mkdir t
cd t
zcat /boot/mkinitrd...... | cpio -i

After that you should have a file called init (amongst others) in the "t" directory. Can you please attach that here? Thanks!
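
For reference, a minimal sketch of that extraction (the image name below is an assumption; the truncated path above most likely refers to the initrd image, which on RHEL 5 is a gzip-compressed cpio archive under /boot):

mkdir t
cd t
# replace the version with the kernel actually installed on the system
zcat /boot/initrd-2.6.18-128.el5.img | cpio -id
# the extracted "init" script is the file to attach
ls -l init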

--- Additional comment from hdegoede on 2008-11-20 11:09:56 EDT ---

One more information request: can you please press a key when the initial Red RHEL5 bootloader screen is shown, then press A to append kernel cmdline arguments, and then remove "quiet" from the cmdline (and press Enter to boot).

And then take a screenshot of the machine when it fails to boot again.

Thank you.

--- Additional comment from ed.ciechanowski on 2008-11-21 14:59:51 EDT ---

Created an attachment (id=324337)
first screen shot of error

I took two screen shots, this is the first one.

--- Additional comment from ed.ciechanowski on 2008-11-21 15:01:47 EDT ---

Created an attachment (id=324339)
Second screen shoot

Second screen shot; I took two. Let me know if you need the previous messages that do not appear in screen shot 1 or 2.

--- Additional comment from ed.ciechanowski on 2008-11-21 15:03:34 EDT ---

Created an attachment (id=324340)
Here is the init file

I believe the command you wanted was /sbin/mkinitrd...... | cpio -i and not /boot/mkinitrd. Let me know if this is what you needed. Thanks again!

--- Additional comment from heinzm on 2008-11-24 06:47:40 EDT ---

Running "dmraid -ay -i -p $Name" on the command line works perfectly fine.

Do we have all the necessary blockdev nodes, created by the initrd, to access the component devices of the requested RAID set $Name?
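
A minimal sketch of how this could be checked from a debug shell inside the initrd (the /dev/sda and /dev/sdb names are taken from the logs earlier in this bug; everything else is an assumption):

# component devices dmraid needs to be able to scan
ls -l /dev/sda /dev/sdb
# device-mapper control node plus any sets that were already activated
ls -l /dev/mapper/
# ask dmraid which block devices carry RAID metadata
dmraid -r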

--- Additional comment from ed.ciechanowski on 2008-11-24 12:10:47 EDT ---

When installing RHEL 5.3 Snapshot 3 it looks like the mirror is being written to before the reboot. After the install reboots the first time, the above errors show up at boot. It seems that from this point the OS has defaulted back to running off /dev/sda only. After the OS boots (apparently from /dev/sda), running "dmraid -ay" from a terminal gives the message that the RAID set was not activated.

Was the question in comment 13 meant for me? Thanks. What more can I provide to help resolve this issue?

--- Additional comment from jgranado on 2008-11-24 12:28:18 EDT ---

Created an attachment (id=324509)
5 pictures containing the output of init (only the output relevant to the dmraid messages)

I believe we have all the necessary nodes. This attachment is a tar.gz of the pictures I took of the output of my machine when changing the init script to execute `dmraid -ay -i -p -vvv -ddd "isw_bhdbbaeebb_Volume0"` (sorry for the poor pictures, the only thing I could find was an iPhone).

As seen in the NOTICE messages at the beginning of the output, dmraid successfully identifies /dev/sdb and /dev/sda as containing isw metadata.

--- Additional comment from jgranado on 2008-11-24 12:35:52 EDT ---

(In reply to comment #14)
> When installing RHEL 5.3 Snapshot3 it looks like the mirror is being written to
> before the reboot. 

Can you please expand on this? What do you mean by "being written to"? It is normal that just before reboot we would want to use the device to which we installed: post-install scripts are running, the rpm installation is finishing, and so on. I don't see this as out of the ordinary.

> After the install reboots the first time the above errors
> show up at boot. It seems from this point the OS has defaulted back to running
> off /dev/sda only. After the OS boots, looks like to /dev/sda, 

Yes. This only happens with mirror RAID. If you install on striped RAID you will get a kernel panic. I assume that it is for the same reason; only with striping it is not that easy to default to using just one of the block devices.

> running the
> command from a terminal "dmraid -ay" Gives the message raid set as not
> activated. 

Same behaviour here.

> 
> What the question in comment 13 for me? Thanks. What more can I provide that
> will help resolve this issue?

--- Additional comment from jgranado on 2008-11-24 12:39:40 EDT ---

(In reply to comment #13)
> Running "dmraid -ay -i -p $Name" on the command line works perfectly fine.

What is your test case? I mean, do you install, and after the install you see that the command works as expected?

Are you testing on a running system? What special configuration do you have?

Thx for the info.

--- Additional comment from heinzm on 2008-11-24 12:49:41 EDT ---

Joel,

After install, the command works fine for me on a running system.
I can't open your attachment to comment #15.
Are you sure that all block devices (i.e. the component devices making up the RAID set in question) are there when the initrd runs?

Ed,

the question in comment #13 was meant for our anaconda/mkinitrd colleagues.

--- Additional comment from jgranado on 2008-11-24 13:21:12 EDT ---

Try http://jgranado.fedorapeople.org/temp/init.tar.gz; Bugzilla somehow screwed this up.

--- Additional comment from jgranado on 2008-11-24 13:24:30 EDT ---

(In reply to comment #17)
> (In reply to comment #13)
> > Running "dmraid -ay -i -p $Name" on the command line works perfectly fine.

I see the same behavior when I have the OS installed on a non-RAID device and try to activate the RAID device after boot. But when I do the install on the RAID device itself and try to use it, it does not work.

Heinz:
insight on the output of the init that is on http://jgranado.fedorapeople.org/temp/init.tar.gz would be greatly appreciated.

--- Additional comment from jgranado on 2008-11-24 13:36:36 EDT ---

Comparing what I see in the pictures with the output of "dmraid -ay -i -p $Name" on a running system, I noticed a slight difference:

Init output:
.
.
.
NOTICE: added DEV to RAID set "NAME"
NOTICE: dropping unwanted RAID set "NAME_Volume0"
.
.
.

Normal output:
.
.
.
NOTICE: added DEV to RAID set "NAME"
.
.
.

The normal output does not have the "dropping unwanted ...." message.

Any ideas?

--- Additional comment from hdegoede on 2008-11-24 14:28:41 EDT ---

(In reply to comment #21)
> On a comparison between what I see in the pictures and in the output of "dmraid
> -ay -i -p $Name" on a running sysmte.  I noticed a slight difference:
> 
> Init output:
> .
> .
> .
> NOTICE: added DEV to RAID set "NAME"
> NOTICE: dropping unwanted RAID set "NAME_Volume0"
> .
> .
> .
> 
> Normal output:
> .
> .
> .
> NOTICE: added DEV to RAID set "NAME"
> .
> .
> .
> 
> The normal output does not have the "dropping unwanted ...." message.
> 
> Any ideas?

Joel, when you run dmraid on a running system do you use:
"dmraid -ay" or "dmraid -ay -p NAME_Volume0" ?

Notice how dmraid says:
> NOTICE: added DEV to RAID set "NAME"
> NOTICE: dropping unwanted RAID set "NAME_Volume0"

Where in one case the _Volume0 is printed and in the other not. There have been several comments in other reports about the _Volume0 causing problems.

Joel, if you are using "dmraid -ay" (so without the " -p NAME_Volume0"), try changing the "init" script in the initrd to do the same (so remove the " -p NAME_Volume0"), and then see if the raid array gets recognized at boot.
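
For anyone wanting to try this on an installed system, a minimal sketch (the initrd file name, kernel version, and the exact dmraid line inside init are assumptions based on the comments above):

mkdir /tmp/initrd-work && cd /tmp/initrd-work
# unpack the current initrd (a gzip-compressed cpio archive)
zcat /boot/initrd-2.6.18-128.el5.img | cpio -id
# edit ./init and change, for example,
#   dmraid -ay -i -p "isw_cdecfjjff_Volume0"
# to
#   dmraid -ay -i
# then repack into a separate test image and boot it from a test grub entry
find . | cpio -o -H newc | gzip -9 > /boot/initrd-2.6.18-128.el5.test.img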

--- Additional comment from jgranado on 2008-11-25 06:06:28 EDT ---

(In reply to comment #22)

> Joel, when you run dmraid on a running system do you use:
> "dmraid -ay" or "dmraid -ay -p NAME_Volume0" ?

I use NAME_Volume0; it does not find any sets with just NAME. But it does print the mangled name in the NOTICE message.

> 
> Notice how dmraid says:
> > NOTICE: added DEV to RAID set "NAME"
> > NOTICE: dropping unwanted RAID set "NAME_Volume0"
> 
> Where in one case the _Volume0 is printed and in the other not. There have been
> several comments in other reports about the _Volume0 causing problems.
> 
> Joel, if you are using "dmraid -ay" (so without the " -p NAME_Volume0", try
> changing the "init" script in the initrd to do the same (so remove the " -p
> NAME_Volume0"), and then see if the raid array gets recognized at boot.

I'll give it a try.

--- Additional comment from heinzm on 2008-11-25 06:11:36 EDT ---

Hans' comment #22 is a workaround in case mkinitrd provides the wrong RAID set name to dmraid.

Our policy is to activate boot time mappings *only* in the initrd, hence mkinitrd needs fixing if it provides a wrong RAID set name.

--- Additional comment from jgranado on 2008-11-25 08:54:22 EDT ---

(In reply to comment #24)
> Our policy is to activate boot time mappings *only* in the initrd, hence
> mkinitrd needs fixing if it provides a wrong RAID set name.

The name is correct.  That is not the issue.

Heinz:
When I run `dmraid -ay` from init, the raid set starts correctly.  I think there is something missing from the environment at that point, but I have no idea what.  Any ideas?

--- Additional comment from jgranado on 2008-11-25 09:10:10 EDT ---

The snapshots with the name are in http://jgranado.fedorapeople.org/temp/init.tar.gz.

I'll post the snapshots without the name (the one that works) shortly.

--- Additional comment from jgranado on 2008-11-25 09:40:10 EDT ---

The snapshots with the command `dmraid -ay -ddd -vvv` are in http://jgranado.fedorapeople.org/temp/initWork.tar.gz

--- Additional comment from jgranado on 2008-11-25 10:43:42 EDT ---

(In reply to comment #18)
> Joel,
> 
> after install, the command works fine for me on a running system.

Heinz
can you send me, post somewhere, attach to the bug your initrd image for the test machine.
thx.

--- Additional comment from heinzm on 2008-11-25 11:20:13 EDT ---

Joel,

like I said, I only ran online, no initrd test.

The provided init*tar.gz snapshots show, with the name, that it is being dropped,
i.e. the dmraid library want_set() function drops it, which is only possible when the names in the RAID set and on the command line differ.

Could there be some strange, non-displayable character in the name?

Please provide the initrd being used to produce init.tar.gz (i.e. the one *with* the name), thanks.

--- Additional comment from hdegoede on 2008-11-25 16:51:37 EDT ---

*** Bug 472888 has been marked as a duplicate of this bug. ***

--- Additional comment from hdegoede on 2008-12-02 05:46:41 EDT ---

*** Bug 473244 has been marked as a duplicate of this bug. ***

--- Additional comment from hdegoede on 2008-12-02 06:14:55 EDT ---

We've managed to track down the cause of this to mkinitrd (nash). We've done a new build of mkinitrd / nash, 5.1.19.6-41, which we believe fixes this (it does on our test systems).

The new nash-5.1.19.6-41 will be in RHEL 5.3 Snapshot 5, which should become available for testing next Monday.

Please test this with snapshot5 when available and let us know how it goes. Thanks for your patience.

--- Additional comment from bmarzins on 2008-12-02 15:01:29 EDT ---

*** Bug 471879 has been marked as a duplicate of this bug. ***

--- Additional comment from pjones on 2008-12-02 15:45:56 EDT ---

*** Bug 446284 has been marked as a duplicate of this bug. ***

--- Additional comment from pjones on 2008-12-02 16:09:25 EDT ---

This should be fixed with nash-5.1.19.6-41 .

--- Additional comment from ddumas on 2008-12-05 13:18:42 EDT ---

*** Bug 474825 has been marked as a duplicate of this bug. ***

--- Additional comment from cward on 2008-12-08 06:53:21 EDT ---

~~ Snapshot 5 is now available @ partners.redhat.com ~~ 

Partners, RHEL 5.3 Snapshot 5 is now available for testing. Please send us your testing feedback on this important bug fix / feature request AS SOON AS POSSIBLE. If you are unable to test, indicate this in a comment or escalate to your Partner Manager. If we do not receive your test feedback, this bug will be AT RISK of being dropped from the release.

If you have VERIFIED the fix, please add PartnerVerified to the Bugzilla
Keywords field, along with a description of the test results. 

If you encounter a new bug, CLONE this bug and request your Partner Manager to review it. We are no longer accepting new bugs into the release, bar critical regressions.

RAID0 (stripe) produces I/O errors on boot. The striped RAID system seems to boot and work normally except for I/O errors on boot. SEE ATTACHED .JPG

Comment 1 Ed Ciechanowski 2008-12-09 00:59:21 UTC
RAID0 (stripe) produces I/O errors on boot. The striped RAID system seems to
boot and work normally except for I/O errors on boot. SEE ATTACHED .JPG

If logs are needed let me know which ones.

Comment 2 Chris Ward 2008-12-09 07:41:17 UTC
Is this a regression or critical error? It's getting very late to introduce new changes into the release. Please make your case as soon as possible; otherwise we'll be forced to defer to 5.4. If fixing for 5.4 is OK, please let me know that too.

Comment 3 Joel Andres Granados 2008-12-09 07:48:53 UTC
Can you also please post the error messages? Are the error messages seen during the installation at all, or just when the machine boots normally?

Comment 4 Joel Andres Granados 2008-12-09 17:30:05 UTC
Do you also see kernel messages saying that the partition table is busted?

Comment 5 Ed Ciechanowski 2008-12-09 17:58:01 UTC
Created attachment 326370 [details]
Screen capture from fdisk -l

Comment 6 Ed Ciechanowski 2008-12-09 17:58:36 UTC
Created attachment 326371 [details]
df.txt from RAID0 system

Comment 7 Joel Andres Granados 2008-12-09 18:56:55 UTC
I still need the boot messages.  These are the most important ones.  Also look into the installer itself and look for the same messages.
thx.

Comment 8 Ed Ciechanowski 2008-12-09 19:29:01 UTC
Created attachment 326382 [details]
output from the command "dmesg > bootmsg.log"

Comment 9 Joel Andres Granados 2008-12-10 10:25:41 UTC
Yes, this is what I see as well. Since this does not really prevent the machine from booting, we will probably issue a release note for this. Will come back with more info later in the day.

Comment 10 Ed Ciechanowski 2008-12-29 06:17:16 UTC
Created attachment 327910 [details]
Screen capture of RAID0 boot error

Tested RHEL 5.3 RC1. Attached logs and screen shot of boot errors.

Comment 11 Ed Ciechanowski 2008-12-29 06:18:13 UTC
Created attachment 327911 [details]
RAID0 logs from Install of RHEL5.3 RC1

Tested RHEL 5.3 RC1. Attached logs and screen shot of boot errors.

Comment 12 Krzysztof Wojcik 2009-01-26 16:04:12 UTC
We tested RHEL 5.3 RC2.
The issue still exists.

Comment 13 Gary Case 2009-01-27 22:38:46 UTC
I see the same errors as Ed detailed in comment 11. Do we know if these are actually harmful? It looks like the system's trying to read beyond the end of the disks.

Comment 14 Joel Andres Granados 2009-01-28 09:55:22 UTC
Gary:

The errors will not prevent the booting of the machine.  However, it would be nice to have it boot with no error messages.

Comment 15 Denise Dumas 2009-02-03 22:03:11 UTC
These errors are not informational only; they are "real" but harmless. With the striped setup we use two disks as one large disk, but because this is SW RAID the kernel sees the two separate disks too, and the first disk has a valid partition table spanning both disks. So when some tool tries to do something with those partitions we can get these errors, since the partitions are bigger than a single disk.
This is completely harmless but probably very hard to fix, especially in the RHEL5 stream where we are more limited in the amount of change allowed. So, assuming the kernel team agrees, we would like to address this with a release note. I'll post a suggested release note in the BZ for review.
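
To see the mismatch Denise describes on an affected system, a quick check (a sketch; /dev/sda is assumed to be the first member disk of the RAID0 set):

# size of the first member disk, in 512-byte sectors
blockdev --getsz /dev/sda
# partition table as read from that same disk; with a striped set the
# partition end sectors can exceed the disk size printed above, which is
# why tools touching those partitions trigger the I/O errors seen at boot
fdisk -lu /dev/sda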

Comment 16 Denise Dumas 2009-02-16 13:47:09 UTC
The patches attached to BZ 485400 should provide a real fix avoiding the display of these error messages, and Heinz plans to incorporate them into 5.4. At that point we can validate this BZ and close it. 

We will leave the release note plan as a fallback in case there are unintended side effects from the patches that would prevent their addition to 5.4.

Comment 17 RHEL Program Management 2009-02-16 15:43:08 UTC
Updating PM score.

Comment 18 RHEL Program Management 2009-04-13 13:09:52 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release.  Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products.  This request is not yet committed for inclusion in an Update
release.

Comment 19 Krzysztof Wojcik 2009-04-29 07:44:09 UTC
I have a question: are you going to resolve this issue in RHEL 5.4 or in another release?
Maybe the version and status of this bug should be changed?

Comment 20 Chris Ward 2009-04-29 08:08:16 UTC
This issue has been approved for 5.4.0, however the code is not yet complete so there is no guarantee at this point that it /will/ make it into the release. The status will be updated once a patch is available.

Comment 21 Hans de Goede 2009-05-06 13:04:20 UTC
This is not a kernel issue but a userspace issue. The fix for this consists of two parts: first dmraid needs a few changes, which is tracked in bug 485400, and once that is done we need to make a few small changes to mkinitrd. So I'm changing the component of this bug to mkinitrd and assigning it to me. As soon as the new dmraid is available I'll make the necessary changes to mkinitrd.

Comment 22 Heinz Mauelshagen 2009-05-06 13:09:29 UTC
Hans,
I'm waiting for flags to be set by PM/QE in order to be able to check in and build a new dmraid version.

Comment 23 Denise Dumas 2009-05-11 20:34:19 UTC
Heinz has 485400 in Modified now.

Comment 24 Hans de Goede 2009-05-13 10:27:54 UTC
This is fixed in mkinitrd 5.1.19.6-49.

Comment 26 Chris Ward 2009-06-14 23:17:31 UTC
~~ Attention Partners RHEL 5.4 Partner Alpha Released! ~~

RHEL 5.4 Partner Alpha has been released on partners.redhat.com. There should
be a fix present that addresses this particular request. Please test and report back your results here, at your earliest convenience. Our Public Beta release is just around the corner!

If you encounter any issues, please set the bug back to the ASSIGNED state and
describe the issues you encountered. If you have verified the request functions as expected, please set your Partner ID in the Partner field above to indicate successful test results. Do not flip the bug status to VERIFIED. Further questions can be directed to your Red Hat Partner Manager. Thanks!

Comment 27 Krzysztof Wojcik 2009-06-24 07:12:05 UTC
Issue verified in RHEL5.4 Alpha with PASS result.

Comment 28 Alexander Todorov 2009-07-03 10:23:06 UTC
Moving to VERIFIED as per comment #27

Comment 30 errata-xmlrpc 2009-09-02 11:15:34 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2009-1345.html