Bug 185674
| Summary: | /etc/mdadm.conf doesn't match /proc/mdstat | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 4 | Reporter: | Sean Dilda <agrajag> |
| Component: | anaconda | Assignee: | Joel Andres Granados <jgranado> |
| Status: | CLOSED ERRATA | QA Contact: | Milan Zázrivec <mzazrivec> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 4.0 | CC: | atodorov |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | RHBA-2008-0653 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2008-07-24 19:05:29 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Can you provide the partitioning snippet of the actual ks.cfg being used, instead of just the anaconda-ks.cfg (which doesn't end up being identical)?

```
clearpart --all
part raid.21 --size=2000 --ondisk=sda --asprimary
part raid.22 --size=2000 --ondisk=sdb --asprimary
part raid.30 --size=100 --grow --ondisk=sdf
part raid.29 --size=100 --grow --ondisk=sde
part raid.28 --size=100 --grow --ondisk=sdd
part raid.27 --size=100 --grow --ondisk=sdc
part raid.24 --size=100 --grow --ondisk=sdb
part raid.23 --size=100 --grow --ondisk=sda
raid /boot --fstype ext3 --level=RAID1 raid.21 raid.22
raid pv.26 --fstype "physical volume (LVM)" --level=RAID1 raid.23 raid.24
raid pv.31 --fstype "physical volume (LVM)" --level=RAID5 raid.27 raid.28 raid.29 raid.30
volgroup VolGroup00 --pesize=32768 pv.26
volgroup VolGroup01 --pesize=32768 pv.31
logvol swap --fstype swap --name=swap00 --vgname=VolGroup00 --size=2048
logvol / --fstype ext3 --name=root00 --vgname=VolGroup00 --size=30624
logvol /srv --fstype ext3 --name=srv00 --vgname=VolGroup01 --size=209952
```

Hrmm, the code that writes out mdadm.conf actually reads the minor info from the superblock, so I'm not sure how they could disagree :-/

This only happens when we reinstall the box multiple times. Could it be that mdadm is preserving the superblock/old info from the partitions and assembling the arrays that way?

Requested by James Antill.

Backported a change I did for RHEL 5. Didn't test. Should be available in anaconda 10.1.1.82 and later.

Verified in anaconda-10.1.1.89-1 / RHEL4-U7-re20080514.0

An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you. http://rhn.redhat.com/errata/RHBA-2008-0653.html
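If the stale-superblock theory from the comments above is right, one defensive workaround (a sketch only; this is not the backported anaconda fix) would be to wipe any old md superblocks before repartitioning, e.g. from a kickstart %pre script. The helper below only prints the `mdadm --zero-superblock` commands for review rather than running them; the function name and partition list are hypothetical:

```shell
#!/bin/bash
# Sketch: emit the commands that would wipe stale md superblocks from a
# previous install, so mdadm cannot reassemble old arrays. Printing
# instead of executing keeps this safe to review before pasting into a
# kickstart %pre section.
print_zero_superblock_cmds() {
    for part in "$@"; do
        echo "mdadm --zero-superblock $part"
    done
}

# Example with partitions named in this report:
print_zero_superblock_cmds /dev/sda1 /dev/sda2 /dev/sdb1 /dev/sdb2
```

Running the printed commands would require root and, of course, destroys the RAID metadata on those partitions.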
On a RHEL4u2 box installed via kickstart (with all disk configs done in kickstart), the /etc/mdadm.conf file doesn't match the arrays listed in /proc/mdstat. As an example, here is the partition info from /root/anaconda-ks.cfg:

```
#clearpart --all
#part raid.100000 --size=2000 --ondisk=sda --asprimary
#part raid.100001 --size=2000 --ondisk=sdb --asprimary
#part raid.100003 --size=1000 --ondisk=sdb
#part raid.100002 --size=1000 --ondisk=sda
#part raid.100005 --size=100 --grow --ondisk=sdb
#part raid.100004 --size=100 --grow --ondisk=sda
#raid pv.100006 --fstype "physical volume (LVM)" --level=RAID1 raid.100004 raid.100005
#raid swap --fstype swap --level=RAID1 raid.100002 raid.100003
#raid /boot --fstype ext3 --level=RAID1 raid.100000 raid.100001
#volgroup VolGroup00 --pesize=32768 pv.100006
#logvol / --fstype ext3 --name=rootvol00 --vgname=VolGroup00 --size=31680
```

Also:

```
[root@webhost-01 ~]# cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sdb1[1] sda1[0]
      2048192 blocks [2/2] [UU]

md1 : active raid1 sdb2[1] sda2[0]
      1020032 blocks [2/2] [UU]

md2 : active raid1 sdb3[1] sda3[0]
      32491392 blocks [2/2] [UU]

unused devices: <none>
[root@webhost-01 ~]# cat /etc/mdadm.conf
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 super-minor=0
ARRAY /dev/md3 super-minor=3
ARRAY /dev/md1 super-minor=1
```

You can see that /proc/mdstat lists md1, md2, and md3, whereas mdadm.conf lists md0, md1, and md3. In checking various machines with RHEL4 installed, I see this problem on most of them.
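The mismatch above can be checked mechanically by comparing the array names active in /proc/mdstat against the ARRAY entries in mdadm.conf. A minimal sketch, assuming bash and awk; the function name `check_md_config` and the idea of passing file paths (so saved copies can be compared) are this example's own conventions, not part of the errata fix:

```shell
#!/bin/bash
# Sketch: report whether the md arrays known to the kernel match the
# arrays declared in mdadm.conf.
check_md_config() {
    local mdstat=$1 conf=$2
    local active configured
    # /proc/mdstat lines look like: "md3 : active raid1 sdb1[1] sda1[0]"
    active=$(awk '/^md[0-9]+ :/ {print $1}' "$mdstat" | sort)
    # mdadm.conf lines look like: "ARRAY /dev/md3 super-minor=3"
    configured=$(awk '$1 == "ARRAY" {sub("/dev/", "", $2); print $2}' "$conf" | sort)
    if [ "$active" = "$configured" ]; then
        echo "match"
    else
        printf 'mismatch\nactive:\n%s\nconfigured:\n%s\n' "$active" "$configured"
    fi
}

# Usage on a live system:
#   check_md_config /proc/mdstat /etc/mdadm.conf
```

On the host shown above this would report a mismatch: md2 is active but absent from mdadm.conf, while md0 is configured but not assembled.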