Bug 714891 - Install UEFI RHEL 6.0 x86_64 on Sandy Bridge IRST RAID : boot loader installation fails
Summary: Install UEFI RHEL 6.0 x86_64 on Sandy Bridge IRST RAID : boot loader installation fails
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: grub
Version: 6.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Peter Jones
QA Contact: Release Test Team
URL:
Whiteboard:
Depends On:
Blocks: 756082
 
Reported: 2011-06-21 08:33 UTC by Jaroslav Škarvada
Modified: 2011-12-23 17:21 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 709266
Environment:
Last Closed: 2011-12-02 18:50:33 UTC
Target Upstream Version:
Embargoed:



Description Jaroslav Škarvada 2011-06-21 08:33:46 UTC
+++ This bug was initially created as a clone of Bug #709266 +++

Description of problem:

   We install RHEL 6.0 x86_64 in UEFI mode on our Sandy Bridge platform with the Cougar Point chipset, with the HDDs configured as an IRST (Intel Rapid Storage Technology) RAID0 volume. The installer fails with "format failed: 1" while formatting device /dev/md127p1.

   We have tried IRST (Intel Rapid Storage Technology) RAID0, RAID1, and RAID10 configurations; all of them encounter the same failure.

   BTW, it is OK if we install RHEL 6.0 in Legacy BIOS mode on the same IRST RAID.

Version-Release number of selected component (if applicable):

   RHEL 6.0 x86_64.

How reproducible:


Steps to Reproduce:
1. Configure the BIOS to UEFI mode on the Sandy Bridge platform.
2. Configure the BIOS to use the onboard RAID function.
3. Press Ctrl+I during BIOS POST to configure the IRST (Intel Rapid Storage Technology) RAID.
4. Install RHEL 6.0 x86_64 in UEFI mode.
  
Actual results:


Expected results:


Additional info:

--- Additional comment from lavator on 2011-05-31 10:52:03 CEST ---

Created attachment 501941 [details]
BIOS: IRST RAID

As shown in "BIOS_IRST_RAID.jpg" , our Sandy Bridge platform is configure as IRST  ( Intel Rapid Storage Technology) RAID0 in this testing case.

--- Additional comment from lavator on 2011-05-31 10:56:22 CEST ---

Created attachment 501945 [details]
HDD partition table of the RHEL 6.0 x86_64 UEFI installation

  The HDD partitions of the RHEL 6.0 x86_64 UEFI installation are shown in "partition.jpg". The EFI partition is located on /dev/md127p1.
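
(For reference, the partition layout can also be listed from a shell in the installer environment; a sketch, assuming parted is available there:)

# parted /dev/md127 print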

--- Additional comment from lavator on 2011-05-31 10:59:20 CEST ---

Created attachment 501946 [details]
snapshot of format failure :1

 As shown in "failure.jpg", an error was encountered while formatting device /dev/md127p1 .

--- Additional comment from lavator on 2011-05-31 11:05:03 CEST ---

Created attachment 501948 [details]
the log file

Please check "raid.tgz" for the log files during installation. In particular, the "anaconda-tb-UQD_I0LjBU2H.xml" inside is generated by pressing the button "File Bug" .

--- Additional comment from lavator on 2011-05-31 11:11:28 CEST ---

Created attachment 501950 [details]
trouble shooting to format EFI partition

  As shown in "trouble_shooting.jpg" , we try to manually format the EFI partition /dev/md127p1. mkfs.vfat cannot recognize the device node /dev/md127p1 , and mkfs.ext4 ( and also mkfs.ext3, mkfs.ext2 ) can format /dev/md127p1 correctly. It seems to be the possible root cause.

   Please kindly help clarify this issue.

--- Additional comment from jskarvad on 2011-05-31 22:05:28 CEST ---

Created attachment 502092 [details]
Test code

This is a sort of "safety feature" in mkfs.vfat, but its detection is not perfect. Does it work with the -I switch? I think anaconda should be changed to use this switch by default.

Please provide the major and minor numbers of the failing device, or compile and run the attached source code and provide its output (preferred; it uses the same syscalls as the dosfstools detection):
$ gcc -o test test.c
# ./test /dev/md127p1
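
(If compiling is inconvenient, roughly the same device-number information can be read with coreutils stat; a sketch:)

$ stat -c 'rdev major: 0x%t minor: 0x%T' /dev/md127p1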

--- Additional comment from lavator on 2011-06-01 07:32:23 CEST ---

Hi Jaroslav ,

   "mkfs.vfat -I" works with IRST RAID0 volume /dev/md127p1 . Please check the response below.

# ./test /dev/md127p1
rdev: 10300

# mkfs.vfat -I /dev/md127p1
# mkdir /tmp/x
# mount /dev/md127p1 /tmp/x
# touch /tmp/x/file
# mkdir /tmp/x/dir

--- Additional comment from jskarvad on 2011-06-02 12:30:16 CEST ---

Created attachment 502487 [details]
Proposed fix

Thanks for the info. The attached patch should fix it, but I would still suggest that the anaconda team use the '-I' switch.

A scratch build is also available for testing:
http://jskarvad.fedorapeople.org/dosfstools/dosfstools-3.0.9-4.el6.x86_64.rpm
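
(A test run would look roughly like the following; a sketch, assuming the rpm is downloaded from the URL above:)

# rpm -Uvh dosfstools-3.0.9-4.el6.x86_64.rpm
# mkfs.vfat /dev/md127p1        <- should now succeed without -I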

Please test it and report results.

--- Additional comment from lavator on 2011-06-03 09:29:52 CEST ---

Created attachment 502742 [details]
test results of new dosfstools-3.0.9-4.el6.x86_64.rpm

Hi,
  As shown in "mkdosfs.jpg", dosfstools-3.0.9-4.el6.x86_64.rpm works well.
  ( where /tmp/mkdosfs comes from dosfstools-3.0.9-4.el6.x86_64.rpm , and /usr/sbin/mkdosfs is the original one ).

--- Additional comment from jskarvad on 2011-06-03 09:35:34 CEST ---

Thanks for the info. I will forward the patch upstream.

--- Additional comment from jskarvad on 2011-06-03 16:37:14 CEST ---

Reported upstream and cloned as Fedora bug 710480.

--- Additional comment from lavator on 2011-06-14 14:04:38 CEST ---

Created attachment 504663 [details]
Install UEFI RHEL 6.0 x86_64 on Sandy Bridge IRST RAID : error on installing bootloader

Hi,

   We retried the UEFI RHEL 6.0 x86_64 installation on the Sandy Bridge IRST RAID, but it hit an error at the last stage, installing the bootloader. Please check "i001.jpg".

--- Additional comment from jskarvad on 2011-06-20 09:28:36 CEST ---

Lin, could you provide logs? If this other error (comment 14) is not related to dosfstools, I suggest creating a new bug for it.

--- Additional comment from stuart_hayes on 2011-06-20 22:49:43 CEST ---

I'm seeing the same thing.  The system I'm working with has an Intel RAID1 across /dev/sda and /dev/sdb, which shows up as /dev/md127.  (/proc/partitions does not show any partitions on /dev/sda or /dev/sdb, though parted & fdisk see the partition tables on those devices.)
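
(Roughly how that can be checked from the installed system; a sketch:)

# cat /proc/partitions          <- shows the md device but no sda1/sdb1 entries
# parted /dev/sda print         <- the partition table on the member disk is still visible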

After I boot to an installed system, when I run "grub-install --grub-shell=/sbin/grub --no-floppy /dev/md127p1", I get this sort of thing:

Probing devices to guess BIOS drives. This may take a long time.
The file /boot/grub/stage1 not read correctly.

/boot/grub/device.map looks like this:

(hd0)   /dev/sda
(hd1)   /dev/sdb

And mdadm --query --detail /dev/md127p1 returns this:

/dev/md127p1:
      Container : /dev/md0, member 0
     Raid Level : raid1
...
    Number   Major   Minor   RaidDevice State
       1       8        0        0      active sync   /dev/sda
       0       8       16        1      active sync   /dev/sdb

It appears that the grub-install script thinks this is a software RAID: it uses an mdadm query to convert /dev/md127p1 into /dev/sda, and /boot/grub/device.map to convert that into (hd0).  I'm not sure how this would ever work with Intel firmware RAID...?
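
(A rough illustration of that two-step mapping; a sketch of the apparent logic, not the actual grub-install code:)

# mdadm --query --detail /dev/md127p1     <- step 1: md device resolves to members /dev/sda, /dev/sdb
# grep /dev/sda /boot/grub/device.map     <- step 2: member disk maps to BIOS drive (hd0)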



If I change /boot/grub/device.map to:

(hd0)   /dev/md127

and change /sbin/grub-install so that the function is_raid1_device always returns 0, grub installs successfully to /dev/md127p1.
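
(The grub-install change can be sketched as replacing the function body; the real function performs an actual check:)

is_raid1_device () {
    # forced to always return 0, per the workaround described above
    return 0
}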

--- Additional comment from lavator on 2011-06-21 03:45:11 CEST ---

Hi Jaroslav ,

   I tried to capture logs of the bootloader installation error, but in vain. Since this is a follow-up problem after patching dosfstools, I am afraid it would be hard to understand if I created a new bug for it.

Hi Stuart,

  Will the modification of is_raid1_device in /sbin/grub-install work for RAID0 and RAID10 too?

--- Additional comment from jskarvad on 2011-06-21 10:30:31 CEST ---

Lin, Stuart,

thanks for the info. I will use the original BZ entry (bug 709266) to fix dosfstools. I am cloning this bug to the grub package for further investigation of this follow-up (but independent) problem.

Comment 2 Stuart Hayes 2011-06-21 16:50:14 UTC
OK, I was working with a system that had already been installed by someone else.

I reinstalled RHEL 6.1 on this system, with the same RAID 1 on the Intel firmware RAID, and it installed correctly in BIOS mode.  After the install, /boot/grub/device.map shows only "(hd0) /dev/md127", which is different from the previous install, which showed "(hd0) /dev/sda (hd1) /dev/sdb".  It booted fine after the install, so presumably grub was installed correctly.  And now when I run "grub-install --grub-shell=/sbin/grub --no-floppy /dev/md127p1", it says "/dev/sda does not have any corresponding BIOS drive."

So perhaps the issue is just with the grub-install script.

I'll try UEFI now.

Comment 3 David Cantrell 2011-07-19 13:42:41 UTC
Stuart,

Waiting to hear back your findings from comment #2.

Comment 4 Stuart Hayes 2011-07-19 15:37:51 UTC
Sorry... I wasn't able to boot in UEFI mode on the system that was available to me for some reason, and it was a system I had borrowed just to look at this issue.  I'll see if I can find another that I can borrow.

Comment 5 David Cantrell 2011-12-02 18:50:33 UTC
Closing this one for now.  If you can provide more information, feel free to reopen it.

