Bug 246081 - anaconda upgrade writes incorrect mdadm.conf when using lvm-over-md
Status: CLOSED DUPLICATE of bug 242334
Product: Fedora
Classification: Fedora
Component: anaconda
Version: 7
Hardware/OS: All Linux
Priority: low    Severity: high
Assigned To: Anaconda Maintenance Team
QA Contact: Fedora Extras Quality Assurance
Reported: 2007-06-28 09:06 EDT by edwinh
Modified: 2007-11-30 17:12 EST

Last Closed: 2007-06-28 09:26:54 EDT
Description edwinh 2007-06-28 09:06:42 EDT
Description of problem:

I had an fc6 box with 2 md partitions (raid5) and for one of them had put lvm on
top. After the f7 upgrade, mdadm.conf had been rewritten by anaconda to contain
just the basic md device only... on boot it didn't init md1, didn't init the
lvm volumes, and my /usr, /var, etc. were just not there.

It took me a while (while panicking!) to figure out what had happened.

I put my old mdadm.conf back, rebooted, and life is happy.



Version-Release number of selected component (if applicable):

default f7 installer
/proc/mdstat:

Personalities : [raid6] [raid5] [raid4]
md1 : active raid5 sda2[0] sdc2[2] sdb2[1]
      36017536 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
md0 : active raid5 sda3[0] sdc3[2] sdb3[1]
      274904064 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
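For reference, both arrays show as active in /proc/mdstat, so the information anaconda needed was available at upgrade time. A minimal Python sketch (a hypothetical helper, not anaconda code) that pulls the active array names out of the text above:

```python
# Hypothetical sketch: list the md arrays named in /proc/mdstat-style output.
# Illustrates what a correct mdadm.conf rewrite would have to preserve.

MDSTAT = """\
Personalities : [raid6] [raid5] [raid4]
md1 : active raid5 sda2[0] sdc2[2] sdb2[1]
      36017536 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
md0 : active raid5 sda3[0] sdc3[2] sdb3[1]
      274904064 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
"""

def active_arrays(mdstat_text):
    """Return the md device names that appear on active-array status lines."""
    names = []
    for line in mdstat_text.splitlines():
        # Array status lines look like "md1 : active raid5 sda2[0] ..."
        if line.startswith("md") and " : active " in line:
            names.append(line.split()[0])
    return names

print(active_arrays(MDSTAT))  # -> ['md1', 'md0']
```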

old mdadm.conf:

ARRAY /dev/md0 level=raid5 num-devices=3 UUID=29e65c43:46ed5ea3:070286c3:c95faa84
ARRAY /dev/md1 level=raid5 num-devices=3 UUID=ff41b9b3:0d3af1a2:94b86470:3bb78438

anaconda's mdadm.conf after upgrade:

# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid5 num-devices=3 uuid=29e65c43:46ed5ea3:070286c3:c95faa84
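The difference between the two files is mechanical to spot: the rewritten config drops the ARRAY line for /dev/md1, which is exactly the device carrying the LVM physical volume shown below. A small Python sketch (a hypothetical comparison, not anaconda code) over the two texts quoted above:

```python
# Hypothetical sketch: find ARRAY entries present in the old mdadm.conf
# but missing from the file anaconda wrote during the upgrade.

OLD = """\
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=29e65c43:46ed5ea3:070286c3:c95faa84
ARRAY /dev/md1 level=raid5 num-devices=3 UUID=ff41b9b3:0d3af1a2:94b86470:3bb78438
"""

NEW = """\
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid5 num-devices=3 uuid=29e65c43:46ed5ea3:070286c3:c95faa84
"""

def array_devices(conf_text):
    """Devices named on ARRAY lines of an mdadm.conf-style text."""
    return {line.split()[1] for line in conf_text.splitlines()
            if line.startswith("ARRAY")}

missing = array_devices(OLD) - array_devices(NEW)
print(missing)  # /dev/md1 was dropped, so the PV on it never comes up at boot
```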

pvdisplay:
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               vg1
  PV Size               34.35 GB / not usable 5.38 MB
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              1099
  Free PE               0
  Allocated PE          1099
  PV UUID               1HuunI-1pc8-oazE-6VuF-3mfM-AQ3i-6dSm

fdisk -l:


Disk /dev/sda: 160.0 GB, 160000000000 bytes
255 heads, 63 sectors/track, 19452 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          98      787153+  83  Linux
/dev/sda2              99        2340    18008865   fd  Linux raid autodetect
/dev/sda3            2341       19452   137452140   fd  Linux raid autodetect

Disk /dev/sdb: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          98      787153+  83  Linux
/dev/sdb2              99        2340    18008865   fd  Linux raid autodetect
/dev/sdb3            2341       19457   137492302+  fd  Linux raid autodetect

Disk /dev/sdc: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *           1          98      787153+  82  Linux swap / Solaris
/dev/sdc2              99        2340    18008865   fd  Linux raid autodetect
/dev/sdc3            2341       19457   137492302+  fd  Linux raid autodetect
Comment 1 Paul Howarth 2007-06-28 09:26:54 EDT

*** This bug has been marked as a duplicate of 242334 ***
