Description of problem:
anaconda seems unable to delete LV filesystems and throws an exception, aborting the installation.

Version-Release number of selected component (if applicable):
11.1.2.87.1

How reproducible:
Unsure ... it happened on 3 out of 10 PCs in an RH133 course. A kickstart file is used to install these machines.

Actual results:
Exception thrown; the installation aborts.

Expected results:
Successful installation.

Additional info:
Temporary fix: remove all partition information before starting the installation, i.e. do a "dd if=/dev/zero of=/dev/sda" in rescue mode. Looking at the exception, I think anaconda is trying to delete LVM snapshots. This fails because no snapshots were used.
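A possibly faster variant of this workaround (my assumption, not what we actually did in the course) is to zero only the start of the disk; that wipes the partition table, so anaconda no longer finds the old LVM volumes:

dd if=/dev/zero of=/dev/sda bs=1M count=1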
Created attachment 295534: exception file for the bug
*** Bug 446646 has been marked as a duplicate of this bug. ***
Looks like this is still valid as of 5.3. Patch:

diff --git a/partitions.py b/partitions.py
index 47ae4df..f2292dc 100644
--- a/partitions.py
+++ b/partitions.py
@@ -1599,6 +1599,8 @@ class Partitions:
         lvm_parent_deletes = []
         tmp = {}
         def addSnap(name, vg):
+            if not snapshots.has_key(name):
+                return
             snaps = snapshots[name]
             for snap, snapvg in snaps:
                 addSnap(snap, snapvg)
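To illustrate why the guard is needed, here is a minimal standalone sketch of the failure mode (Python 2, matching the code above; the sample data and surrounding structure are simplified assumptions, the real logic lives in partitions.py):

# "snapshots" maps an origin LV name to the (snapshot, vg) pairs
# that are based on it (hypothetical data).
snapshots = {"root": [("root_snap", "vol0")]}
lvm_parent_deletes = []

def addSnap(name, vg):
    # The unpatched code did snapshots[name] unconditionally, so an LV
    # name picked up from stale metadata of a previous install raised
    # KeyError and aborted the installation.  The patch returns early:
    if not snapshots.has_key(name):
        return
    for snap, snapvg in snapshots[name]:
        addSnap(snap, snapvg)                  # nested snapshots first
        lvm_parent_deletes.append((snap, snapvg))

addSnap("stale_lv", "vol0")  # unknown LV: now a harmless no-op
addSnap("root", "vol0")
print lvm_parent_deletes     # -> [('root_snap', 'vol0')]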
This request was evaluated by Red Hat Product Management for inclusion in a Red Hat Enterprise Linux maintenance release. Product Management has requested further review of this request by Red Hat Engineering, for potential inclusion in a Red Hat Enterprise Linux Update release for currently deployed products. This request is not yet committed for inclusion in an Update release.
Fix is in anaconda-11.1.2.169-1 and later.
Lutz, can you attach the ks.cfg used during the install? What was previously on the system? Did the students use the same ks.cfg to install over and over again? Thanks.
Lutz, I can't reproduce this on 5.3. Can you provide the kickstart file used in the training?
Hi Alexander,

I think this bug was LVM snapshot based, and only popped up on machines that had an LVM snapshot set up during the last training cycle.

Try installing a RHEL 5 system with a vol0 volume group and create a few snapshots. After that, reinstall the system with a kickstart that uses the same volume group.

I don't know if I can provide the relevant kickstart, but I will check if you still need it.

Regards,
Lutz
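In case it helps with a reproducer, the snapshots in the exercise would have been created with commands along these lines (names and sizes are illustrative; I no longer have the exact lab instructions):

lvcreate -s -L 512M -n root_snap /dev/vol0/root
lvcreate -s -L 512M -n home_snap /dev/vol0/home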
Hi Lutz,

Thanks for your feedback. Due to time and schedule constraints, we're hoping that you can verify this issue is resolved, or provide us with specific information for a reproducer. Can you work with us to help verify this within the next week? Otherwise the concern is that, if the issue is not fully resolved, it will not be addressed in the current update.

Regards,
Brock
(In reply to comment #10)
> I think this bug was LVM snapshot based, and only popped up on machines
> that had an LVM snapshot set up during the last training cycle.
>
> Try installing a RHEL 5 system with a vol0 volume group and create a few
> snapshots. After that, reinstall the system with a kickstart that uses
> the same volume group.

I've tried the following:

1) Install a system and create two snapshots, one for vol0/home and one for vol0/root, named home_snap and root_snap. Reused anaconda-ks.cfg to re-install. No traceback.

2) Install a system and create two snapshots, one for vol0/home and one for vol0/root, named home_snap and root_snap. Reused anaconda-ks.cfg to re-install, but changed the names of the LVs in ks.cfg to match the names of the snapshots. No traceback.

> I don't know if I can provide the relevant kickstart, but I will check if
> you still need it.

Please provide the ks.cfg used in the class and any instructions used in the exercises (i.e. what snapshots to create and with what names).
An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2009-1306.html