Bug 492264 - LVM error and crash on kickstart
Summary: LVM error and crash on kickstart
Keywords:
Status: CLOSED RAWHIDE
Alias: None
Product: Fedora
Classification: Fedora
Component: anaconda
Version: 11
Hardware: i386
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Assignee: Radek Vykydal
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2009-03-26 09:20 UTC by A.J. Werkman
Modified: 2009-06-25 15:28 UTC
CC: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2009-06-25 15:28:42 UTC
Type: ---
Embargoed:


Attachments
Tty1 (173.66 KB, image/gif)
2009-03-26 09:20 UTC, A.J. Werkman
Tty3 (206.68 KB, image/gif)
2009-03-26 09:21 UTC, A.J. Werkman
patch for comment #2 (2.23 KB, patch)
2009-03-27 15:38 UTC, Radek Vykydal

Description A.J. Werkman 2009-03-26 09:20:53 UTC
Created attachment 336766 [details]
Tty1

Anaconda 11.5.0.37 on a dual hard disk system, both disks 203 GB in size.

Using the following partitioning scheme in a kickstart file, anaconda crashes:

====================================================================
lang en_US.UTF-8
keyboard us
skipx
text

zerombr
bootloader --location=mbr --timeout=20 --append="nomodeset"
clearpart --all --initlabel
part /boot --fstype ext3 --size=512 --grow --maxsize=1024
part pv.7 --size=0 --grow
volgroup VolGroup00 --pesize=32768 pv.7
logvol swap --fstype swap --name=LogVol01 --vgname=VolGroup00 --size=1000 --grow --maxsize=1984
logvol / --fstype ext3 --name=LogVol00 --vgname=VolGroup00 --size=1024 --grow
=======================================================================

On Tty1 the error is: "new lv is too large to fit in free space".
On tty3 info says: "VolGroup00 size is -32Mb", "vg VolGroup00 has -32Mb free"

If I change "part pv.7 --size=0 --grow" to "part pv.7 --size=10 --grow", tty3 reports "vg VolGroup00 has 0Mb free".

This prevents me from reporting further on bug #485276.

I have attached screenshots of tty1 and tty3.

Comment 1 A.J. Werkman 2009-03-26 09:21:39 UTC
Created attachment 336767 [details]
Tty3

Comment 2 Radek Vykydal 2009-03-27 12:40:33 UTC
Despite the negative VG size, which is not the main problem,
PV growing hurts us here. We create the LV instance (request) while parsing
the ks, and during creation it fails the size check against the VG (the check
is done by the VG instance when adding the LV) because the VG has not actually
been grown yet.

The problem is that we don't know the size of a growing VG until we have
parsed all partition requests. Supposing we don't want to get rid of the
check, we could grow partitions (even call doPartitioning?) in KS before
creating the LV request on a growing VG (I don't like the idea much). This
way we get close to what we do in the UI. But passing the size check doesn't
protect us from failing later, when doing the final doPartitioning in
doAutoPartitioning for KS, because the size of the VG can change after
parsing all partition requests (i.e. when some other partition is added).
So maybe it is better to get rid of the size check and count only on the
final doPartitioning to fail if the LV doesn't fit into the grown VG.
At the moment, the final doPartitioning would succeed (perhaps some checks
are missing, or we count on the ctor size check?), and lvcreate fails.
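[Editorial note] The ordering problem described above can be sketched with simplified stand-in classes. These are illustrative only, not anaconda's real device API; all names here are hypothetical:

```python
# Minimal sketch of the ordering problem: the LV size check runs while
# the kickstart is being parsed, i.e. against the VG's *base* (pre-grow)
# size, so a request that would fit after growing is rejected.
# Illustrative stand-in classes, not anaconda's real API.

class VG:
    def __init__(self, base_size):
        self.size = base_size        # size before PV growing has run
        self.lvs = []

    @property
    def free_space(self):
        return self.size - sum(lv.size for lv in self.lvs)

    def add_lv(self, lv):
        # The check that fires during kickstart parsing: the VG still
        # has its pre-grow size here, so a reasonable LV request fails
        # even though it would fit once the PV has grown.
        if lv.size > self.free_space:
            raise ValueError("new lv is too large to fit in free space")
        self.lvs.append(lv)

class LV:
    def __init__(self, size):
        self.size = size

vg = VG(base_size=0)                 # part pv.7 --size=0 --grow
try:
    vg.add_lv(LV(1000))              # logvol swap --size=1000 --grow
except ValueError as e:
    print(e)                         # fails before the PV ever grows
```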

Comment 3 Radek Vykydal 2009-03-27 15:38:32 UTC
Created attachment 337027 [details]
patch for comment #2

The patch removes the check that the LV fits into the VG's free space at the
time the LV is created in ks (where it does harm for growing VGs)
and lets doPartitioning or growLVM do the failing in doAutoPartition (when
we already have all the needed information).

Its consequences are:

* for LVs added to growing VGs:
- It does not fail when the *base* size of the growing VG doesn't satisfy
  the LV request in ks (as it would without the patch).
- If the LV request can't be satisfied even after growing, it fails in growLVM
  in doAutoPartition - it tracebacks, but exceptions from growLVM are not
  handled at all yet, tbd.

* for LVs added to non-growing VGs:
- If the LV request can't be satisfied after VG growing, it fails in
  doAutoPartition, either in doPartitioning or in growLVM.

I was also thinking about removing the check only for growing VGs, but
then the check could still fail in KS with a DeviceError, which is worse
than having all possible failures in doAutoPartition.

I think the change shouldn't break manual partitioning, as we check size
limits in the GUI and not via the removed check.

Comment 4 David Lehman 2009-03-27 16:36:21 UTC
We do use the VG's space check in at least one place in the lvm dialog (clickCB). I don't like the idea of removing this check very much. If possible, I would rather find a way to make kickstart work without yanking sanity checks from the device classes.

Comment 5 Radek Vykydal 2009-03-31 15:06:47 UTC
(In reply to comment #4)
> We do use the VG's space check in at least one place in the lvm dialog
> (clickCB). I don't like the idea of removing this check very much. If possible,
> I would rather find a way to make kickstart work without yanking sanity checks
> from the device classes.



You are right about using the check in the UI. Generally we are using
the check in these places now (just to recall, the check is in
vg._addLogVol, called by LVMLogicalVolumeDevice):

1) default autopartitioning, where it is OK because LVs are created
after VGs have already been grown
2) device discovery, where growing VGs don't come into play
3) kickstart, where it causes this bug
4) the UI when running the VG dialog - at the beginning, when we create
the temporary VG to be edited. But it fails only in the specific
cases I'll give an example of below; in other cases we are protected by
the check in the LV size property (in _setSize), which takes place
after editing the LV.

The specific cases are when an LV request becomes unsatisfiable
due to changes other than editing the LV request, i.e. a change
of VG size due to partition growing, e.g.:
a) create a VG containing a growing PV
b) add an LV to the VG (use all free space)
c) create a new growing partition => the size of the VG gets smaller
d) edit the VG
Then there is an exception from the check when the VG dialog is run
(the VG is edited in the UI). If we disable the check, the VG dialog appears
with negative free space, and fails (taking the new VG size into account)
only if we edit an LV, which is not ideal either, but actually we should
fail (I think it is what we did in F10) in step c) if existing requests
can't be satisfied (or give a warning, or adjust the LV, ...). Failing when
running the VG edit dialog - that is, on the _addLogVol check - is useless.

So I'd still disable the check, although we can do it
only for growing (=> non-existing) VGs:

diff --git a/storage/devices.py b/storage/devices.py
index ccf4f87..447550b 100644
--- a/storage/devices.py
+++ b/storage/devices.py
@@ -1690,7 +1690,9 @@ class LVMVolumeGroupDevice(DMDevice):
             raise ValueError("lv is already part of this vg")

         # verify we have the space, then add it
-        if not lv.exists and lv.size > self.freeSpace:
+        if not lv.exists and \
+           not [pv for pv in self.pvs if pv.req_grow] and \
+           lv.size > self.freeSpace:
             raise DeviceError("new lv is too large to fit in free space")

         self._lvs.append(lv)


The only other ways I've been able to think of (apart from passing
and propagating a sort of ks flag when creating the LV, which is bad) are:
- ordering the ks requests so that we create LVs after all VGs are created
and grown, which to me seems to add too much of the logic we do in the
autopartitioning step into kickstart.py,
- not passing the size to the LV class, but setting it (size and req_size)
after class creation, which I tried but haven't succeeded at - setting the
size of the LV fails on the property check, and not setting the size causes
growLVM to fail because of badly computed free space on the VG.

I think that with this grow thing, this creation-time check makes sense
only if you ensure the ordering of device class creation (VGs created
before LVs), and even run the growing logic before creating the LV instance.
That's why I found it so hard to use device instances right in ks, without
creating ks requests which are processed later in a specific (i.e.
grow-logic determined) order (as we do with default autopartitioning's
PartSpec).
Comment 6 James Ralston 2009-04-03 02:22:33 UTC
This kickstart configuration:

bootloader --location=mbr --driveorder=vda --append="nomodeset"
clearpart --drives=vda --all --initlabel
part /boot --ondisk=vda --fstype ext3 --size=1024
part pv.0 --ondisk=vda --size=1024 --grow
volgroup vos --pesize=32768 pv.0
logvol swap --fstype swap --name=swap --vgname=vos --size=2048
logvol / --fstype ext4 --name=root --vgname=vos --size=1024
logvol /usr --fstype ext4 --name=usr --vgname=vos --size=4096
logvol /home --fstype ext4 --name=home --vgname=vos --size=1024
logvol /var --fstype ext4 --name=var --vgname=vos --size=1024
logvol /var/tmp --fstype ext4 --name=vtmp --vgname=vos --size=1024
logvol /tmp --fstype ext4 --name=tmp --vgname=vos --size=1024
logvol /var/log --fstype ext4 --name=log --vgname=vos --size=1024

...works perfectly for both F9 and F10 (with a 16GB disk), but for F11 beta, it dies with:

  File "/usr/lib/python2.6/site-packages/pykickstart/base.py", line 376, in dispatcher
    obj = self.commands[cmd].parse(args[1:])
  File "/usr/lib/anaconda/kickstart.py", line 419, in parse
    percent=lvd.percent)
  File "/usr/lib/anaconda/storage/__init__.py", line 604, in newLV
    return LVMLogicalVolumeDevice(name, vg, *args, **kwargs)
  File "/usr/lib/anaconda/storage/devices.py", line 1869, in __init__
    self.vg._addLogVol(self)
  File "/usr/lib/anaconda/storage/devices.py", line 1694, in _addLogVol
    raise DeviceError("new lv is too large to fit in free space")
storage.errors.DeviceError: new lv is too large to fit in free space

Am I being bitten by this bug, or should I file a new bug?

Comment 7 A.J. Werkman 2009-04-03 08:08:32 UTC
(In reply to comment #6)
> part pv.0 --ondisk=vda --size=1024 --grow

The problem is most probably the --grow part. You could try leaving it out of your kickstart file; then it will most probably work.

Comment 8 Radek Vykydal 2009-04-03 09:08:02 UTC
(In reply to comment #6)
> 
> Am I being bitten by this bug, or should I file a new bug?  
Most probably you are; the cause is the --grow option, which is broken.

The bug should be fixed in version 11.5.0.41 of anaconda.
Thanks for your reports.

Comment 9 Bug Zapper 2009-06-09 12:38:53 UTC
This bug appears to have been reported against 'rawhide' during the Fedora 11 development cycle.
Changing version to '11'.

More information and reason for this action is here:
http://fedoraproject.org/wiki/BugZappers/HouseKeeping

