Bug 730727 - mkinitrd fails to package raid1 drivers when required
Summary: mkinitrd fails to package raid1 drivers when required
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: mkinitrd
Version: 5.5
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 5.8
Assignee: Brian Lane
QA Contact: Release Test Team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-08-15 14:31 UTC by Anthony Green
Modified: 2011-10-24 16:19 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-10-24 16:19:33 UTC
Target Upstream Version:
Embargoed:



Description Anthony Green 2011-08-15 14:31:29 UTC
Description of problem:
I was recently setting up software RAID1 on / post-installation, following one of the kbase articles (DOC-7355), although Google will point you at any number of similar documents.
On RHEL 5.5 (untested on newer versions, but believed to still be buggy), mkinitrd fails to package the required set of kernel modules.  The workaround is to force their inclusion.
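A quick way to confirm whether the modules made it into an image (a rough check, assuming the RHEL 5 initrd is a gzip-compressed cpio archive, which it normally is):

   $ zcat /boot/initrd-`uname -r`.img | cpio -it | grep -E 'raid1|dm-mirror'

If the grep comes back empty, the raid modules were not packaged into the initrd.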

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:

I was able to reproduce this in a VM with the steps below, which are known to work with RHEL 5.5 guests.  To reproduce the error, remove the --preload and --with options from mkinitrd.  I went for overkill with those options; it's almost certain that a subset of them would fix the problem, but I did not test that.

** Copy partition table.  I used dd if=/dev/vda of=/dev/vdb and Ctrl-C
   after a while.
** Change new partition types to fd (Linux raid autodetect)
** Create md devices with..
   mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/vdb1
   ..etc
** Run "mdadm --detail --scan > /etc/mdadm.conf"
** Edit /etc/fstab to use the raid devices (an illustrative fstab fragment follows these steps)
** Edit /boot/grub/grub.conf to set root=/dev/md1. 
   Also remove rhgb and quiet for now.
** In /boot run..
   $ mkinitrd -v -f --preload=raid1 --preload=dm-mirror --with=raid1 --with=dm-mirror initrd-`uname -r`.img `uname -r`
** Copy data from regular partitions to raid devices.  I used rsync...
   $ mount /dev/md1 /mnt
   $ rsync -aqxP / /mnt
   $ umount /mnt
   ..etc
** Run mkswap /dev/md3 (or wherever your swap partition will now be)
** Run these commands (or similar) in grub
   grub> device (hd0) /dev/vdb  
   grub> root (hd0,0)
   grub> setup (hd0)
   grub> quit
** reboot!
** Change all partition types to fd on the first disk
** Use mdadm to add first disk partitions to raid mirror...
   $ for i in 1 2 3; do mdadm --add /dev/md$i /dev/vda$i; done
** Wait for mirrors to sync..
   $ watch cat /proc/mdstat
** Update mdadm.conf..
   $ mdadm --detail --scan > /etc/mdadm.conf
** All done.
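For reference, an illustrative /etc/fstab fragment for a layout like the one above (the filesystem type and md numbering are assumptions; adjust to match your actual partitioning):

   /dev/md1   /      ext3   defaults   1 1
   /dev/md3   swap   swap   defaults   0 0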
  
Actual results:
raid device fails to mount and system will not boot.

Expected results:
a booting system

Additional info:

Comment 1 RHEL Program Management 2011-08-18 21:29:29 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release.  Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products.  This request is not yet committed for inclusion in an Update
release.

Comment 2 Brian Lane 2011-09-08 21:31:16 UTC
Could you run mkinitrd like this and attach the output to this bug?

bash -x mkinitrd -v -f initrd-`uname -r`.img `uname -r`

Also, please attach /etc/fstab and /proc/mdstat

Comment 3 Doug Ledford 2011-10-22 14:29:57 UTC
The problem here is that mkinitrd determines the modules to load in the initrd from the running system, not from the fstab.  So if your root device is currently running on a non-raid device, it won't load the raid modules; if your root device is on a raid device, it will.

There are two ways to get the raid modules into the initrd when changing from a non-raid root to a raid root.  The first you already know: manually tell mkinitrd to include the proper raid modules.  The second is to do the switch to raid a little differently: boot the VM from the rescue CD, create the raid devices, mount them under /mnt/sysroot, chroot into the /mnt/sysroot filesystem on the raid devices (with /proc and /dev also bind-mounted inside /mnt/sysroot), and run mkinitrd in the chroot environment.  In that case mkinitrd will pick up the fact that root is on raid and load the right modules.
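To see the mismatch concretely, compare what is currently mounted with what the fstab says; the device names below are just examples:

    mount | grep ' on / '        # e.g. /dev/vda2 on / type ext3 -> no raid modules pulled in
    awk '$2 == "/"' /etc/fstab   # e.g. /dev/md1  /  ext3 ...    -> raid1 needed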

Here are my recommendations for how to make this non-raid to raid transition you are trying to do:

1) Boot the system using the rescue CD
2) Copy the disk partition table from vda to vdb (plus enough data to get the boot loader too)
    dd if=/dev/vda of=/dev/vdb bs=512 count=128
3) Force the kernel to reload the partitions on /dev/vdb
    fdisk /dev/vdb
      w
4) Shrink the filesystems on the source drive.  This can get tricky if you are using lvm, but is much simpler if you just have a filesystem directly on each partition.  If you are using lvm, shrinking is beyond my knowledge, but I'm sure you can find it on the net.  For plain filesystems, something like ext2resize will work (or the appropriate tool for your specific filesystem).  I'm not going to get into the details here, only that you need to shrink the filesystem by 128KB (a rough sketch follows these steps).
5) Create a raid1 device using the existing /dev/vda partitions (this is why we shrunk the filesystem, to make room for the raid superblock at the end).  Make sure you use a superblock format that sits at the end of the device.  I highly recommend version 1.0 for this.
    mdadm -C /dev/md/boot -l1 -n2 -e1.0 --name=boot /dev/vda1 missing
    mdadm -C /dev/md/root -l1 -n2 -e1.0 --name=root /dev/vda2 missing
6) Add the vdb partitions to the raid devices.  Because the vdb partitions are added after the initial creation, the vda partitions are considered the good copies and the data will be rebuilt onto the vdb partitions
    mdadm /dev/md/boot -a /dev/vdb1
    mdadm /dev/md/root -a /dev/vdb2
7) Mount the new partitions
    mount /dev/md/root /mnt/sysimage
    mount /dev/md/boot /mnt/sysimage/boot
    mount --bind /proc /mnt/sysimage/proc
    mount --bind /dev /mnt/sysimage/dev
    mount --bind /sys /mnt/sysimage/sys
8) Chroot into the system root
    chroot /mnt/sysimage
9) Run mkinitrd
    mkinitrd -v -f /boot/<blah> <blah>
10) Exit chroot, unmount filesystems, reboot.
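For step 4, a rough sketch of the shrink using resize2fs rather than ext2resize (assumes a plain ext3 filesystem on /dev/vda2 and a 4096-byte block size; 128KB is 32 blocks, so leave a little margin):

    e2fsck -f /dev/vda2
    dumpe2fs -h /dev/vda2 | grep -E 'Block count|Block size'
    resize2fs /dev/vda2 <current block count minus 64>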

At this point you are done.  Because you dd'd the first 64KB of the vda disk to vdb, you got all the important bits of the boot loader, and the bits you didn't get were copied into the exact same spots because /boot is set up as raid1, so both disks are now perfectly bootable.  The fact that you created the raid arrays with the vda devices first and then added the vdb devices means your data will be copied from the existing disk to the new one.  The raid subsystem, especially with version 1.0 superblocks, is perfectly able to stop the resync part way through and pick up where it left off when the machine reboots, so there is no need to wait for the resync to finish.  And because we ran mkinitrd inside a root environment that included raid, the mkinitrd script will do the right thing.
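If you want to keep an eye on the resync after rebooting (the arrays are fully usable while they rebuild):

    cat /proc/mdstat
    mdadm --detail /dev/md/root | grep -iE 'state|rebuild'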

In any case, getting mkinitrd to pick up raid requirements from the fstab instead of from the running system is likely an RFE.  mkinitrd wasn't designed to do that, at least partially because raid devices can be arbitrarily named, so it's not always possible to tell from reading the fstab whether any given device in that file *must* be a raid device.

Comment 4 Brian Lane 2011-10-24 16:19:33 UTC
Thanks for the excellent description, Doug.  I don't see any need to make this kind of change to mkinitrd at this point.

