Bug 186182 - Anaconda crash at end of upgrade (lvm-related?)
Summary: Anaconda crash at end of upgrade (lvm-related?)
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Fedora
Classification: Fedora
Component: anaconda
Version: 5
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Peter Jones
QA Contact: Mike McLean
URL:
Whiteboard: bzcl34nup
Depends On:
Blocks:
 
Reported: 2006-03-22 01:03 UTC by Brad Smith
Modified: 2008-05-06 15:36 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2008-05-06 15:36:14 UTC
Type: ---
Embargoed:


Attachments (Terms of Use)
Anaconda crash info (1.03 MB, text/plain)
2006-03-22 01:03 UTC, Brad Smith
no flags
anacdump.txt - written out on floppy at bugtime (70.58 KB, text/plain)
2006-03-22 15:06 UTC, Bob Gustafson
no flags

Description Brad Smith 2006-03-22 01:03:38 UTC
FC5 anaconda crashed at the end of an upgrade from FC4. Upon booting the system,
fedora-release showed FC5, and FC5 packages, including the kernel, were listed as
installed, but the FC5 kernel itself was not there.

One unusual thing I did was choose new install -> custom partitioning, remove a
volume group I wasn't using, then change my mind and go back to upgrade instead
of install. However, the VG that I removed was not in use by the Fedora install
that I upgraded.

Crash information is attached (that remote dump feature is great!)

Comment 1 Brad Smith 2006-03-22 01:03:39 UTC
Created attachment 126449 [details]
Anaconda crash info

Comment 2 Bob Gustafson 2006-03-22 15:06:37 UTC
Created attachment 126479 [details]
anacdump.txt - written out on floppy at bugtime

My problem may be related.

My machine has three disks: one 4GB disk with Fedora 4 on it, and two freshly
wiped 36GB disks (all SCSI).

I went through a new installation with custom partitioning and RAID:

md0 (100MB)    /boot
md1 (1428MB)   swap
md2 (~34000MB) LVM rootvg/root, mounted as /

(Why does the installer rename disk partitions? One has to create partitions
from biggest to smallest to work around the spontaneous renaming.)
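
For reference, a rough command-line sketch of that layout (the installer does
all of this through its own UI; the device names sdb/sdc and partition numbers
below are assumptions):

# Sketch only: RAID1 pairs plus LVM on md2, per the layout above.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1   # 100MB, /boot
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2   # 1428MB, swap
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdb3 /dev/sdc3   # ~34GB, LVM
pvcreate /dev/md2                      # md2 becomes the LVM physical volume
vgcreate rootvg /dev/md2               # volume group "rootvg"
lvcreate -l 100%FREE -n root rootvg    # logical volume "root", mounted as /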

I put in the network addresses and selected the installation type (Software
development). (Deferred installation of other stuff is great.)

And then I almost settled down with a cup of coffee to await the downloading and
installation of multiple CDs of Fedora 5. Then I got the bug alert, wrote the
state out to a floppy, and here I am.

When I rebooted Fedora 4 on the 4GB disk (untouched during the Fedora 5
install), the log file contained the following, pertaining to the RAIDed disks.

Before the Fedora 5 installation, one of the 36GB disks had RAID partitions set
up as described above, but with a 'missing' mirror disk. The other disk held a
previous Fedora 4 installation, which was backed up and available for wiping
during the Fedora 5 installation as the RAID mirror.
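
For what it's worth, a mirror with a deliberately absent member like that is
created roughly like this with mdadm (the partition name here is an assumption):

# "missing" is a literal mdadm keyword that reserves the second slot
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc3 missing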


md: Autodetecting RAID arrays.
md: invalid raid superblock magic on sdb1
md: sdb1 has invalid sb, not importing!
md: invalid raid superblock magic on sdb2
md: sdb2 has invalid sb, not importing!
md: autorun ...
md: considering sdc3 ...
md:  adding sdc3 ...
md: sdc2 has different UUID to sdc3
md: sdc1 has different UUID to sdc3
md:  adding sdb3 ...
md: created md2
md: bind<sdb3>
md: bind<sdc3>
md: running: <sdc3><sdb3>
md: md2: raid array is not clean -- starting background reconstruction
md: raid1 personality registered as nr 3
raid1: raid set md2 active with 2 out of 2 mirrors
md: considering sdc2 ...
md:  adding sdc2 ...
md: sdc1 has different UUID to sdc2
md: syncing RAID array md2
md: minimum _guaranteed_ reconstruction speed: 1000 KB/sec/disc.
md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for reconstruction.
md: using 128k window, over a total of 34275264 blocks.
md: resuming recovery of md2 from checkpoint.
md: created md1
md: bind<sdc2>
md: running: <sdc2>
raid1: raid set md1 active with 1 out of 2 mirrors
md: considering sdc1 ...
md:  adding sdc1 ...
md: created md0
md: bind<sdc1>
md: running: <sdc1>
raid1: raid set md0 active with 1 out of 2 mirrors
md: ... autorun DONE.
EXT3 FS on dm-0, internal journal
kjournald starting.  Commit interval 5 seconds
EXT3 FS on sda1, internal journal
EXT3-fs: mounted filesystem with ordered data mode.
SELinux: initialized (dev sda1, type ext3), uses xattr
SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs
Adding 1466296k swap on /dev/md1.  Priority:-1 extents:1 across:1466296k

Comment 4 Bob Gustafson 2006-03-22 18:22:03 UTC
OK, I think I fixed my problem.

Looking at the anaconda debug output, it looks like anaconda does not have the
smarts to assemble RAID arrays.
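
For comparison, hand-assembling arrays that already have valid superblocks is
just something like the following (a sketch; the member partitions are
assumptions):

mdadm --assemble --scan                          # assemble everything findable from superblocks
mdadm --assemble /dev/md2 /dev/sdb3 /dev/sdc3    # or assemble one array explicitly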

When I rebooted into Fedora 4, it was quite sluggish. More on that later.

I did

/sbin/mdadm --add /dev/md0 /dev/sdb1
/sbin/mdadm --add /dev/md1 /dev/sdb2
/sbin/mdadm --add /dev/md2 /dev/sdb3

The last command said the device was busy. I thought maybe LVM was causing
/dev/md2 to report busy. It wasn't; checking with

/sbin/mdadm --detail /dev/md2

showed that this RAID1 array was already assembled and the two disks were
already synced. That synchronization was probably the reason for the
sluggishness when Fedora 4 booted. It also means that anaconda did set a flag or
something during its activity with /dev/md2 (the LVM rootvg/root, mount point /).
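
One way to watch a resync like that (and confirm it is what's soaking up the
I/O) is:

cat /proc/mdstat          # shows [UU] vs [U_] per array, plus resync progress
mdadm --detail /dev/md2   # "State:" line and any rebuild/resync status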

Now that all of the RAID1 arrays were assembled and synced, I went back into
Fedora 5 install mode.

Fooling around with the partition/RAID/LVM user interface was a bit frustrating,
but at least 1) it seemed to prevent me from doing harm, and 2) it did (finally)
allow me to name the LVM logical volume (root) and set the / and /boot mount
points.

I was able to proceed beyond the previous failure point and it was looking like
it might be successful.

However, there was a read error (CD?), which brought up a dialog box asking if I
wanted to retry (yes), which then resulted in an unhandled exception and a
dialog box with three choices. One of them was to SAVE the state to floppy. I
knew that path ended in a [Reboot] dialog box, so I chose the DEBUG button
instead. I was in the Python debugger, and it seemed to confirm that, yes, it
was an unhandled exception. I did an 'exit' from the debugger; it did not go
back to the GUI screen, so I lost the opportunity to get another floppy of state
information.

My window of opportunity has ended here - I need to get on a plane for Denver
this afternoon.

I think the next time I try, I will unselect all of the checkboxes at the last
stage and go with installing a minimal system. Hopefully that will be quicker
and will only require disc 1 (I did not change discs before).

Can I install a minimum system? No office productivity, no software development,
no web server? And install some of these later?
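
If groups can be added after the fact, it would presumably be something like the
following with yum (the group names here are guesses at the comps group names):

yum groupinstall "Office/Productivity"
yum groupinstall "Development Tools"
yum groupinstall "Web Server"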

If this is possible, it might save a lot of time for a lot of folks, and provide
you with only one choice/path to debug.

Comment 5 Bob Gustafson 2006-03-24 22:44:45 UTC
see also bug #186312

Comment 6 Need Real Name 2006-03-28 14:14:03 UTC
I saw something similar when I upgraded a machine from FC2 to FC4 that had
RAID-1 partitions. In that case, the RAID-1 slices from one of the drives became
detached after the installation. The problem was easily resolved by just adding
the missing slices back to each md device.

Comment 7 petrosyan 2008-03-11 02:04:51 UTC
Fedora Core 5 is no longer maintained. Is this bug still present in Fedora 7 or
Fedora 8?

Comment 8 Bug Zapper 2008-04-04 02:11:24 UTC
Fedora apologizes that these issues have not been resolved yet. We're
sorry it's taken so long for your bug to be properly triaged and acted
on. We appreciate the time you took to report this issue and want to
make sure no important bugs slip through the cracks.

If you're currently running a version of Fedora Core between 1 and 6,
please note that Fedora no longer maintains these releases. We strongly
encourage you to upgrade to a current Fedora release. In order to
refocus our efforts as a project we are flagging all of the open bugs
for releases which are no longer maintained and closing them.
http://fedoraproject.org/wiki/LifeCycle/EOL

If this bug is still open against Fedora Core 1 through 6 thirty days from now,
it will be closed as WONTFIX. If you can reproduce this bug in the latest Fedora
version, please change the version field to the respective version. If you are
unable to do this, please add a comment to this bug requesting the change.

Thanks for your help, and we apologize again that we haven't handled
these issues to this point.

The process we are following is outlined here:
http://fedoraproject.org/wiki/BugZappers/F9CleanUp

We will be following the process here:
http://fedoraproject.org/wiki/BugZappers/HouseKeeping to ensure this
doesn't happen again.

And if you'd like to join the bug triage team to help make things
better, check out http://fedoraproject.org/wiki/BugZappers

Comment 9 Bug Zapper 2008-05-06 15:36:12 UTC
This bug is open for a Fedora version that is no longer maintained and
will not be fixed by Fedora. Therefore we are closing this bug.

If you can reproduce this bug against a currently maintained version of Fedora,
please feel free to reopen this bug against that version.

Thank you for reporting this bug and we are sorry it could not be fixed.

