From Bugzilla Helper:
User-Agent: Mozilla/5.0 Galeon/1.2.7 (X11; Linux i686; U;) Gecko/20030131
Description of problem:
When installing from NFS using manual partitioning I get the bogus Low Memory
warning as seen in Bug #100885. If my swap exists as a Logical Volume, I get
the following error as well:
"An error occurred trying to initialize swap on device BaseVol/swap0. This
problem is serious, and the install cannot continue."
BaseVol/swap0 is a name I used in place of the default Volume00/LogVol00.
The problem appears regardless of the LVM names. From this dialog the only
option is to reboot.
If installing from a method that doesn't produce the lowmem warning (such as
CD), there are no problems putting swap on LVM. It also works if swap is placed
on its own partition (I still get the lowmem warning, but the install
continues normally). If the system is partitioned with LVM but no swap at all,
the install will give the low memory warning and then continue normally. The
problem seems to lie in activating swap on LVM immediately after partitioning.
I will attach the anaconda.log from the LVM/swap failure case. There are a
few lines near the end that might provide clues.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Run installer, selecting NFS source and manual partitioning
2. Create an LVM VG and place swap on a Logical Volume
3. Select "Next", get "Must activate swap" warning
4. Get swap initialization error, must reboot
Actual Results: Forced to abort installation
Expected Results: LVM table written to disk, swap initialized and activated,
and the install continues.
Actually, the low memory error shouldn't occur at all. But that's another bug. ;)
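For reference, the layout described in the steps above corresponds roughly to this kickstart fragment. This is an illustrative sketch only; the report used the installer's manual partitioning UI, not kickstart. The VG/LV names and the 100MB /boot and 512MB swap sizes come from the report itself, while the root logvol line is an assumption:

```
# Illustrative kickstart equivalent of the manual layout from the report
part /boot --fstype ext3 --size=100
part pv.01 --size=1 --grow
volgroup BaseVol pv.01
logvol swap --fstype swap --name=swap0 --vgname=BaseVol --size=512
logvol / --fstype ext3 --name=root0 --vgname=BaseVol --size=1 --grow
```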
Created attachment 93975 [details]
Anaconda log from swap on LVM failure case
Problem still exists in Fedora Core test2.
Ditto for Fedora Core test3.
I'm guessing that I'm the only one who's tested this condition. This happens
with every install on any system I test. Maybe I'm just being weird.
Can you try the update disk at
http://people.redhat.com/~katzj/early-swap-lvm.img and see if it fixes it?
I tried the update on a fresh system, after verifying the problem exists
without the updates. With the update (loaded from the NFS ISO directory), the
installer makes it past partitioning and up to the install progress screen. It
then spews an exception, apparently in the "enablefilesystems" step. I'll
attach the crash dump that was produced.
Since this is a previously untested system I'll run a few more test cases to
ensure it's not broken in some other way. However, this appears to still be an
LVM issue, just in a different spot.
Created attachment 95236 [details]
Dump from Anaconda crash w/ updates disk
Oops, stupid typo. Fixed and uploaded a new version of the image file. Can you
give it another try?
Ok, the new image gives me a different set of problems. I'm getting exceptions
at either partitioning or just before the first RPM install. It seems to depend
on the state of the HD prior to loading the installer.
The first dump I'm attaching is from my first repeatable exception. This was
with a clean drive (partition table zeroed), 100MB /boot, 512MB LVM swap, ~19GB
LVM root. I've tried to stick to the default options wherever possible to limit
variables. The exception happened just after "Setting up RPM Transaction".
Created attachment 95243 [details]
Clean drive, exception at beginning of package install
Created attachment 95244 [details]
Crash dump after re-partitioning old drive
This is the Anaconda dump for the partitioning crash. In this case I re-used
the drive from the previous case without zeroing it first. I deleted all
partitions and recreated with the same names, but slightly different sizes. I
don't have a lot (any) time to examine it right now, but I'd guess that it's a
problem with re-using the LVM names/layout.
I've updated the image (again) and it looks like it's working for me, at least
in the cases I've tried so far. I'm going to go ahead and commit it as it is
now to CVS so that it will get into images and get beat on a bit more.
Sorry for the delay. I've had a chance to test the new image (Oct 16 18:44) on
several systems in a few different scenarios.
The new image seems to work reliably in the fresh install case, but still
falls down on a re-install over an existing VG. It looks to be the same problem
as in my most recent attachment, and it's definitely a problem with re-using the
same VG name.
The exception refers to a vgcreate failure. Running vgcreate from the shell
results in "volume group directory or file already exists", which it
(/dev/Volume00/*) indeed does. vgremove won't touch it, but manually removing
that tree will allow vgcreate to work.
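The manual workaround above (removing the stale /dev/Volume00 tree so vgcreate can succeed) can be sketched as a small helper. The function name and parameters are hypothetical, not taken from anaconda's code:

```python
import os
import shutil

def clear_stale_vg(dev_dir, vg_name, active_vgs):
    # Hypothetical helper: vgcreate refuses to run when a leftover
    # /dev/<VG> directory from a previous install is still present
    # ("volume group directory or file already exists").  Remove the
    # stale tree, but only when the VG is not actually active.
    path = os.path.join(dev_dir, vg_name)
    if os.path.isdir(path) and vg_name not in active_vgs:
        shutil.rmtree(path)  # equivalent to manually removing /dev/Volume00/*
        return True
    return False
```

This mirrors the shell workaround in the comment: vgremove would not touch the leftover directory, so the tree has to be deleted directly before re-running vgcreate.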
Maybe the problem now is actually with one of the LVM tools not cleaning up
after itself.
Ah, yeah... see it and fixed.
I've now had a chance to test the FC1 installer. Even when I limit
the available memory in order to trigger early swap activation, I
can't see any issues with LVM usage. It looks like this bug can be closed.
Great, thanks for verifying.