From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.0.1) Gecko/20021003

Description of problem:
One of the physical volumes of a volume group has 4 MB physical extents on a partition of size 37244.41259765625 MB (as reported by `p pvsize' within lvm.clampPVSize). According to vgdisplay -v, this physical volume has 9310 physical extents, but clampPVSize clamps it such that it ends up with 9309 extents, as follows:

- the initial numpes is computed as 9311.0
- overhead is then computed as 164.37109375
- one is computed as 4425.0, and two is computed as 329.0
- usable is then computed based on one, giving 37239.41259765625
- finally, it returns 37236L, which, divided by 4 (MB/extent), gives 9309, not 9310

It doesn't seem like the overhead is being overcomputed (even though it could probably use an expression such as (pvsize * 1024 * 1024 - 128) / (pesize * 1024 + 4) for more accurate results). That wouldn't be enough to get the right number, though. There's probably some additional rounding that should take place, or perhaps some that shouldn't. For example, if I take out the math.ceil from the assignments to usable, I get the expected result.

Ideally, the numbers shouldn't be computed from the partition size, but rather read from the actual physical volume, so that we don't have to make rough estimates. For physical volumes yet to be created, we should use code closer to what the actual LVM tools use; otherwise we may waste disk space too.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Create a physical volume on a partition of 37244.41259765625 MB (cylinders 1089 to 5836 on a disk with 16065 * 512 bytes per cylinder)
2. Start the installer
3. Get to disk druid

Actual Results:
Instead of 37240 MB, anaconda says the physical volume has only 37236 MB.

Expected Results:
It should compute the right number of extents.

Additional info:
The workaround I use to get past disk druid (because it complains I'm using more space than is available in the volume group) is to temporarily lvreduce one of the logical volumes by one extent. As long as this logical volume is not mounted by the installer and you don't allocate more extents (to make sure that extent is not reused for something else), this should be fine.
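To make the rounding concrete, here is a minimal Python sketch that reproduces the numbers above. This is not the actual lvm.clampPVSize code; the formulas for overhead, one, two, and the final clamp are reconstructed from the intermediate values quoted in the description, so treat it only as an illustration of where the extent gets lost:

import math

def clamp_pv_size(pvsize_mb, pesize_kb):
    """Reconstruction (not the real code) of the clamping arithmetic.

    pvsize_mb -- partition size in MB (37244.41259765625 here)
    pesize_kb -- physical extent size in KB (4096, i.e. 4 MB extents)
    """
    # initial extent count: partition size over extent size
    numpes = math.floor(pvsize_mb * 1024 / pesize_kb)      # 9311

    # estimated metadata overhead in KB: 4 bytes per PE plus 128 KB
    overhead = (numpes * 4) / 1024 + 128                   # 164.37109375

    one = math.ceil(pesize_kb + 2 * overhead)              # 4425
    two = math.ceil(2 * overhead)                          # 329

    # this math.ceil is the one I suggest removing: it rounds the
    # 4425 KB of overhead up to a whole 5 MB instead of ~4.32 MB
    usable = pvsize_mb - math.ceil(one / 1024.0)           # 37239.41259765625

    # clamp down to a whole number of extents, in MB
    return int(usable / (pesize_kb / 1024.0)) * (pesize_kb // 1024)

print(clamp_pv_size(37244.41259765625, 4096))  # 37236, i.e. 9309 extents
# without the math.ceil on usable, the same call yields 37240,
# i.e. 9310 extents, matching vgdisplay -v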
We can't read them from the physical volumes because, in the new physical volume case, the volume hasn't been created yet, so there's nothing to read from :(
Which is why I wrote we should do something else for volumes yet to be created :-) We can be more conservative with those, but if the volume already exists, we have to be accurate.
Okay, added code that should do this. Still need to test it tomorrow. If you want to try it also, grab me on IRC and I'll make you an update disk that will work with Phoebe for testing it.
Ouch! Over the holidays, I had a disk failure on the very disk that exposed the problem, and I ended up having to re-create it all from scratch. Apparently I haven't done so in a way that still triggers the problem :-(
Got this again on phoebe2. One of the physical volumes has 9308 4 MB extents, for a total of 37232 MB, but the installer says it has only 37228 MB. According to parted, the partition extends from 8542.406 to 45788.974 MB. Per fdisk, it goes from cylinder 1090 to 5836, with 16065 * 512 bytes per cylinder.
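Plugging this comment's geometry into the clamp_pv_size sketch from the description above (same caveat: it's a reconstruction, not the real code) reproduces the discrepancy:

# 4747 cylinders (1090 through 5836) of 16065 * 512 bytes, in MB
pvsize = 4747 * 16065 * 512 / 1048576.0    # 37236.59912109375

print(clamp_pv_size(pvsize, 4096))         # 37228 -- what the installer shows
# dropping the math.ceil on usable yields 37232 = 9308 * 4 MB,
# the size the PV actually has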
Seems to be fixed in a recent tree, which actually reads LVM info from the kernel.
This is back in Fedora Core test2 (as well as the current tree). A volume group has two PVs of, respectively, 9060660 and 17518882 blocks (as reported by fdisk), containing 2211 and 4276 4 MB extents; anaconda reports them as containing 8840 MB and 17100 MB, respectively. I.e., it's off by one extent in each of the PVs. Since I have only 1 PE free in the VG, disk druid refuses to proceed. This is not a problem for kickstart installs, only for interactive installs.
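Feeding the fdisk block counts (assuming fdisk's 1 KB blocks) through the same reconstructed clamp_pv_size sketch accounts for the missing extent in each PV:

for blocks in (9060660, 17518882):         # 1 KB blocks, per fdisk
    print(clamp_pv_size(blocks / 1024.0, 4096))
# prints 8840 and 17100 -- each one 4 MB extent short of the real
# 2211 * 4 = 8844 and 4276 * 4 = 17104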
Created attachment 95273 [details]
update disk

Can you try with the attached update image -- it should add a lot more logging of the data we get -- and then grab /tmp/anaconda.log?
* looking at VG: all
* found size: 26570752
* found pesize: 4096
* looking at pv hda4: Existing Part Request -- mountpoint: None uniqueID: 6
    type: physical volume (LVM)  format: None  badblocks: None
    device: hda4 drive: hda  primary: None  size: 17108.2836914
    grow: 0  maxsize: None
    start: 23567355  end: 58605119  migrate: None
    origfstype: physical volume (LVM)
* looking at pv hda3: Existing Part Request -- mountpoint: None uniqueID: 5
    type: physical volume (LVM)  format: None  badblocks: None
    device: hda3 drive: hda  primary: None  size: 8848.30078125
    grow: 0  maxsize: None
    start: 5446035  end: 23567354  migrate: None
    origfstype: physical volume (LVM)
* looking at LV: all/severn
* lvsize is: 16777216 (8192.0 megs)
* looking at LV: all/swap
* lvsize is: 1048576 (512.0 megs)
* looking at LV: all/l
* lvsize is: 18530304 (9048.0 megs)
* looking at LV: all/shrike
* lvsize is: 16777216 (8192.0 megs)
Is this better with current fc3 trees?
Ugh. Hard to tell. I've since switched to LVM2 and have significantly different extent counts. I suppose we could just close this if the way the sizes are computed has changed in a significant way that's likely to fix the problem. I probably won't be able to recreate the scenario I had above. Feel free to close if you like.
I think it should be better now.
This problem still exists in current rawhide (2005-04-01)
Created attachment 112612 [details] anaconda.log
Created attachment 112613 [details] output of fdisk -l /dev/hda
Created attachment 112614 [details] output of parted /dev/hda print
Created attachment 112615 [details] output of vgdisplay -v
Created attachment 112616 [details] anaconda screenshot