Bug 236452 - bootrecord always written to MBR, when device is using the 'disk#p#' partition naming scheme
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: booty
Version: 5.0
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: Peter Jones
QA Contact: Alexander Todorov
Depends On: 225551
Reported: 2007-04-14 10:11 EDT by Bryn M. Reeves
Modified: 2010-10-22 10:22 EDT
CC: 3 users

Fixed In Version: RHBA-2008-0444
Doc Type: Bug Fix
Last Closed: 2008-05-21 10:33:32 EDT


Attachments (Terms of Use)
The patch which was applied (564 bytes, patch)
2008-04-07 12:31 EDT, Peter Jones

Description Bryn M. Reeves 2007-04-14 10:11:15 EDT
+++ This bug was initially created as a clone of Bug #225551 +++

Description of problem:

Even if writing the boot record to the boot sector is selected during
installation, the existing MBR is overwritten.

Version-Release number of selected component (if applicable):
anaconda-11.1.2.24 (RHEL5)


How reproducible:
100%


Steps to Reproduce:
1. Install RHEL5 on a system with SATA RAID.
2. Install a second RHEL5 installation on the same system and select
   writing the boot record to the boot sector.
3. Reboot.
  
Actual results:
The second installation will be booted.

Expected results:
Boot of the first Fedora installation.
To get access to the 2nd installation, you should only need to insert 3 lines in
/boot/grub/grub.conf of the 1st installation:

title Fedora (2nd Installation)
    root (hd0,x)
    chainloader +1 


Additional info:
Similar problems occur on openSUSE 10.2.
Comment 4 RHEL Product and Program Management 2007-10-16 00:02:01 EDT
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release.  Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products.  This request is not yet committed for inclusion in an Update
release.
Comment 7 Peter Jones 2008-02-07 19:02:26 EST
Fixed in booty-0.80.4-6 .
Comment 9 Alexander Todorov 2008-04-03 10:03:57 EDT
pjones,
can you clarify the exact steps to reproduce here?
In particular, looking at the original BZ, the reporter seems to use both
partitions and disks in his comments. How many disks and partitions do we have
here, and what is installed where?

Thanks.
Comment 10 Peter Jones 2008-04-03 10:53:52 EDT
In dmraid, you have multiple disks in a RAID, which we treat as one normal disk
(with a strange path) in the installer.  On the combined RAID, install as
normal, i.e. a /boot and an LVM PV with / in a logical volume, and ask the
installer to install the bootloader on the first partition of the RAID.  That
is, install it to /boot on /dev/mapper/isw_Volume_0p1 , not to the MBR on
/dev/mapper/isw_Volume_0 .
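The distinction above comes down to how a partition's device path is derived
from its disk: devices whose base name ends in a digit (cciss/c0d0, dmraid
volumes) get a 'p' separator before the partition number, while classic
/dev/sdX devices simply append it. A minimal sketch of that rule (the function
name and logic are illustrative, not booty's actual code):

```python
def partition_path(disk, num):
    """Return the device path for partition `num` of `disk`.

    Devices whose name ends in a digit (e.g. /dev/cciss/c0d0,
    /dev/mapper/isw_Volume_0) use the 'disk#p#' scheme, so a 'p' is
    inserted before the partition number; /dev/sdX devices do not.
    """
    if disk[-1].isdigit():
        return "%sp%d" % (disk, num)
    return "%s%d" % (disk, num)
```

Under this rule the bootloader target named in comment #10 comes out as
/dev/mapper/isw_Volume_0p1.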
Comment 11 Alexander Todorov 2008-04-07 11:33:33 EDT
Steps to reproduce with hardware RAID (disk cciss/c0d0 - Compaq Smart Array):
1) Install a system with 4 partitions: /boot, /data, / and swap
/dev/cciss/c0d0
     cciss/c0d0p1 - /boot
     cciss/c0d0p2 - /data
     cciss/c0d0p3 - /
     cciss/c0d0p4 - swap

2) Install GRUB on MBR [/dev/cciss/c0d0  Master Boot Record (MBR)]
3) Complete the install and boot into the system. Edit /boot/grub/grub.conf as
if there were another OS on c0d0p2 (/data); we'd like to chainload this OS
later. I also have an entry for Anaconda to boot into the installer (not
relevant for this bug):

# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/cciss/c0d0p3
#          initrd /initrd-version.img
#boot=/dev/cciss/c0d0
default=0
timeout=50
serial --unit=0 --speed=38400
terminal --timeout=5 serial console
title Red Hat Enterprise Linux Server (2.6.18-88.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-88.el5 ro root=LABEL=/ console=ttyS0,38400
        initrd /initrd-2.6.18-88.el5.img

title Second Installation
        rootnoverify (hd0,1)
        chainloader +1

title Anaconda
        root (hd0,0)
        kernel /vmlinuz-anaconda console=ttyS0,38400
        initrd /initrd.img-anaconda

4) Perform a second install and select c0d0p2 as / (format it if you want).
Leave the other partitions untouched.
/dev/cciss/c0d0
     cciss/c0d0p1 - untouched (no mount point, was previously /boot)
     cciss/c0d0p2 - / (was previously /data)
     cciss/c0d0p3 - untouched (no mount point, was previously /)
     cciss/c0d0p4 - swap
5) Install GRUB on the first sector of root partition 
   [/dev/cciss/c0d0p2 First sector of boot partition]
6) Complete install and reboot

Results:
* With RHEL 5.1:
The bootloader of the second install gets installed on the MBR, overwriting the
previous one. Hence the 2nd OS is loaded. The customizations are lost (actually
they are still in the c0d0p1 partition and can be seen if you mount it, but
that instance of GRUB is not loaded).

* With RHEL 5.2 snap #4:
The MBR is not overwritten. GRUB from the 1st install is loaded and all
modifications can be seen from the GRUB menu. The boot loader of the 2nd OS is
installed on the first sector of its root partition (where we told it to
install). Chainloading works as expected. Booting into the 1st OS also works as
expected.
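One way to check where each GRUB copy landed is to dump the first sector of
the MBR and of the partition (e.g. with dd) and look for the BIOS boot
signature plus the "GRUB" marker that stage1 embeds. A rough heuristic sketch,
illustrative only (the exact offset of the "GRUB" string varies between GRUB
versions):

```python
def looks_like_grub_boot_sector(sector):
    """Heuristic: does this 512-byte sector hold GRUB stage1?

    A BIOS boot sector ends with the 0x55 0xAA signature, and GRUB
    legacy stage1 embeds the literal string "GRUB" in its body.
    """
    return (len(sector) == 512
            and sector[510:512] == b"\x55\xaa"
            and b"GRUB" in sector)

# e.g. grab the sectors to compare:
#   dd if=/dev/cciss/c0d0   of=mbr.bin bs=512 count=1
#   dd if=/dev/cciss/c0d0p2 of=pbr.bin bs=512 count=1
```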
Comment 13 Bryn M. Reeves 2008-04-07 11:37:03 EDT
Did the system in comment #11 have a dmraid-capable controller? Most systems
with CCISS don't, and AFAIK it is a requirement to reproduce this bug.
Comment 16 Peter Jones 2008-04-07 12:31:39 EDT
Created attachment 301538 [details]
The patch which was applied

This is the patch applied to fix the problem; it should affect any device using
the 'disk#p#' partition naming scheme.
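The bug class the patch targets can be pictured as the inverse of the naming
rule: deriving the parent disk from a partition name. Naively stripping
trailing digits works for sda7 but leaves a stray 'p' on cciss/c0d0p2, so the
bootloader target can resolve to the wrong device. A hedged sketch of the
corrected parsing (illustrative only, not the literal patch):

```python
def disk_of(partition):
    """Return the parent disk name for a partition device name."""
    # Strip the trailing partition number first.
    base = partition.rstrip("0123456789")
    # For 'disk#p#' devices (cciss/c0d0p2, a dmraid volume's p7) also
    # drop the 'p' separator, but only when the disk name itself ends
    # in a digit.
    if base.endswith("p") and base[:-1] and base[-2].isdigit():
        base = base[:-1]
    return base
```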
Comment 17 Alexander Todorov 2008-04-07 12:49:57 EDT
breeves: test was correct but only because CCISS uses the same naming scheme
nstraz: so an install to an aoe or mpath device would exercise the code too

The test in comment #11 turned out to be enough. No need to test with dmraid
explicitly.
Comment 18 Winfrid Tschiedel 2008-04-08 03:22:31 EDT
Hello Alexander,

Yes, I have access to the RHEL 5.2 snapshots -
please also read bug 225548 (which concerns Fedora).
I will do additional tests; nevertheless, I found it very hard to deal with
dmraid - you get one item fixed and then you find the next problem.
You should also have a look at 164550 in Issue Tracker, where I saw that
my problem with 2 dmraids seems to be unique to FastTrak -
I could install a RHEL 5.2 Snapshot 3 on a Primergy TX200-S3 (ESB2) on 2 RAID1
sets, but after reboot I only saw the SATA disks ( /dev/sda .... /dev/sdd )
instead of /dev/mapper/ddf1_.....


Comment 19 Alexander Todorov 2008-04-08 04:41:44 EDT
Winfrid,
the root cause of the problem was a bug in the code which affected devices that
used the 'disk#p#' partition naming scheme; dmraid is one such case.
If you have the time you can retest this issue with a dmraid setup, although we
believe the test in comment #11 was enough to verify the fix for this issue.

Thanks.
Comment 21 Winfrid Tschiedel 2008-04-08 09:56:30 EDT
Alexander,

I have verified that a boot record is written to the requested partition,
but ...

on my booted system I see /dev/sda7 instead of /dev/mapper/ddf1_.......p7 !
On the same system I installed Fedora 8, and there everything looks good.
But Fedora 9 does not install at all!

Winfrid

P.S.: I know this belongs in a new bugzilla entry, but send me some advice so
      that I can prepare as much information as possible.
Comment 22 Alexander Todorov 2008-04-08 10:07:59 EDT
Winfrid,
if your partition showed up as /dev/sda7, then the code path that was fixed was
not executed (see comments #16 and #17).

Please open another bug with the following information:
* The version of RHEL/Fedora and the hardware setup where you saw differences
in disk naming between Fedora and RHEL. If you've been trying to use dmraid on
RHEL, it may have silently failed without you noticing.

* If you're trying to test the same environment with Fedora 9, then ensure you
have a tree that is installable in a default environment (e.g. LVM). If this is
the case, then file a bug against Rawhide with the information from the
failure. You'd probably want to post to fedora-test-list _at_ redhat.com to
seek more advice.


Please do not use this issue to track dmraid failures or other Fedora issues.
It was created to track the issue where the boot loader was installed on the
MBR instead of the requested partition.


Thanks.
Comment 23 errata-xmlrpc 2008-05-21 10:33:32 EDT
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2008-0444.html
