Bug 895982
Summary: LVMError: lvcreate failed for vg_system1/LogVol00: 07:19:23,331 ERROR : Volume group "vg_system1" has insufficient free space (3774 extents): 3775 required.

Product: Red Hat Enterprise Linux 6
Component: anaconda
Status: CLOSED ERRATA
Severity: unspecified
Priority: low
Version: 6.4
Target Milestone: rc
Target Release: ---
Hardware: i686
OS: Unspecified
Reporter: Ľuboš Kardoš <lkardos>
Assignee: David Lehman <dlehman>
QA Contact: Ľuboš Kardoš <lkardos>
CC: charlieb-fedora-bugzilla, ddumas, jorton, joshua.kugler, sbueno
Whiteboard: abrt_hash:acd9f4a70acffca58c9ab4ea3543777ebca9256db3feff5bdbe0911f3f951c71
Fixed In Version: anaconda-13.21.202-1
Doc Type: Known Issue
Doc Text: A physical-extent size of less than 32 MB on top of an MD physical volume leads to problems calculating the capacity of a volume group. To work around this problem, use a physical-extent size of 32 MB, or leave free space equal to twice the physical-extent size when allocating logical volumes. Another option is to change the default physical-extent size from 4 MB to 32 MB.
Story Points: ---
Last Closed: 2013-11-21 09:58:29 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
Category: ---
oVirt Team: ---
Cloudforms Team: ---
Bug Blocks: 841211, 960065
Description — Ľuboš Kardoš, 2013-01-16 12:24:28 UTC

Created attachments:
- attachment 679507: anaconda-tb-o6SI8e
- attachment 679508: release
- attachment 679509: version
- attachment 679510: hashmarkername
- attachment 679511: type
- attachment 679512: exnFileName
- attachment 679513: product
- attachment 679514: environ
- attachment 679516: partitioning
Steps to reproduce:
1. Start graphical installation.
2. Proceed to the partitioning screen.
3. Create partitioning as in the attached "partitioning.png" image.
4. Click Next.

Actual results: A traceback is shown.

Expected results: Anaconda should successfully create the partitions and then show the package selection screen.

Additional info: The same problem is in bug 876450, but in my case I am not able to reproduce it by kickstart; I can reproduce this problem only in a graphical installation. Because bug 876450 was a blocker, I propose this bug as a blocker too.

(In reply to comment #11)
> Additional info:
> The same problem is in bug 876450 but in my case I am not able to reproduce
> it by kickstart. I am able to reproduce this problem only in graphical
> installation.

When you tried to reproduce with kickstart, how did you define the volgroup? Did you specify a pesize? Perhaps more interesting: do you still hit the error if you set the volume group's physical extent size to 32MB in the graphical install?

When I tried to reproduce with kickstart I didn't specify a pesize. I don't hit the error if I set the volume group's physical extent size to 32MB in the graphical install.

I'm trying to track down exactly what's going on here, but I don't think this is a blocker. Using a larger PE size works around it. I think this is an interaction between partition size and PE size and isn't new. I also think this has been a bug since earlier releases. I verified that the bug exists in a RHEL6 tree containing anaconda-13.21.181-1.

We have the option of just documenting the issue: a PE size of less than 32MB on top of MD PVs leads to problems calculating VG capacity. Workarounds include using a 32MB PE size or making sure to leave 2*pesize free when allocating LVs. Another option is to change the current default pesize of 4MB to 32MB.

Setting this to DevCond NAK capacity. This is definitely an issue that should be addressed, but there is a documented workaround to bypass it. If there is time at the end of the devel window, we will certainly make the effort to fix this.

> I also think this has been a bug since earlier releases. I verified that the
> bug exists in a RHEL6 tree containing anaconda-13.21.181-1.
Did you try 13.21.176? Something has changed since RHEL6.3.
We are using a custom setDefaultPartitioning and have LVM over RAID1, which worked fine for RHEL < 6.4 but now hits this bug; it looks like it might be a rounding issue. We have '/' and swap sharing a VG, with '/' at 2G but growable, and swap sized by:
(minswap, maxswap) = iutil.swapSuggestion()
autorequests.append(PartSpec(fstype="swap", size=minswap, maxSize=maxswap, grow=True, asVol=True))
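The shortfall in the summary (3774 extents free, 3775 required) is an off-by-one of exactly this rounding kind: the requested LV size is rounded up to whole physical extents, while the free-extent count was derived from an over-optimistic estimate of the MD array's usable size. A minimal sketch of the arithmetic, assuming the 4 MiB default PE size (the helper below is illustrative, not anaconda code):

```python
import math

PE_MIB = 4  # RHEL 6 anaconda's default physical-extent size

def extents_needed(size_mib):
    """Round a requested LV size up to a whole number of physical extents."""
    return math.ceil(size_mib / PE_MIB)

# Anaconda planned an LV of 15100 MiB (3775 extents), but lvm reported only
# 3774 extents free because md consumed slightly more metadata space than
# anaconda's size estimate allowed for.
assert extents_needed(15100) == 3775
assert extents_needed(15100) > 3774  # so lvcreate fails
```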
I notice that anaconda 13.21.195 introduces the variable superBlockSize. Does the algorithm have an off-by-one or a rounding error, or is it otherwise wrong?
With my failed install using a RHEL6.4-based system, anaconda creates lv_root using "-L 269736m" (67434 * 4) and then fails to create lv_swap using "-L 16104m" (4026 * 4). pvdisplay shows:

  PV Size        279.14 GiB / not usable 4.81 MiB
  PE Size        4.00 MiB
  Total PE       71459
  Free PE        4025
  Allocated PE   67434

With a successful RHEL6.3-based system, anaconda creates lv_root using "-L 278364m" (69591 * 4) and lv_swap using "-L 7600m" (1900 * 4). pvdisplay shows:

  ...
  PV Size        279.27 GiB / not usable 3.87 MiB
  Total PE       71491
  ...

So it looks to me as though anaconda is assuming there are 71460 PEs available. Could "not usable" being > 4 MiB be the factor here? Where does anaconda estimate/evaluate this "not usable" quantity?

> Workarounds include using 32MB PE size or making sure to leave
> 2*pesize free when allocating LVs.
In my case changing PE size from 4MB to 32MB seems sufficient on my hardware. Presumably the underlying bug is still there, however, so it would be good to have a proper diagnosis and proper fix.
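The pvdisplay figures above can be approximated: LVM computes "Total PE" by dividing the PV's usable size by the PE size and discarding the remainder (reported as "not usable"). A rough sketch with the numbers from the failing system; the helper is illustrative, and since 279.14 GiB is itself a rounded figure, the computed remainder only approximates pvdisplay's 4.81 MiB:

```python
PE_MIB = 4.0  # PE Size from pvdisplay

def total_pe(pv_size_gib):
    """Whole physical extents that fit in a PV of the given size."""
    return int(pv_size_gib * 1024 // PE_MIB)

assert total_pe(279.14) == 71459  # matches pvdisplay on the RHEL 6.4 system

# Anaconda's planned allocation was 67434 (lv_root) + 4026 (lv_swap)
# = 71460 extents, one more than the 71459 that actually exist.
assert 67434 + 4026 == total_pe(279.14) + 1
```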
I have a patch that could use some testing. If people tell me what version of anaconda they want to test with, I can provide an updates image containing the patch to enable you to try it.

Undoing unintentional changes to flags, etc. from the previous update.

I am using 13.21.195-1. A patch would be easier for me to use than an updates img (since I already make my own updates img).

(In reply to Charlie Brady from comment #29)
> I am using 13.21.195-1. A patch would be easier for me to use than an
> updates img (since I already make my own updates img).

http://dlehman.fedorapeople.org/20130717-lvm-padding-895982.1.patch

Let me know how it goes.

(In reply to David Lehman from comment #30)
> Let me know how it goes.

Wouldn't be for a week, I'm about to head off on vacation. I'd appreciate it if you could explain what you think is going wrong.

```
--- a/storage/devices.py
+++ b/storage/devices.py
@@ -2249,6 +2250,11 @@ class LVMVolumeGroupDevice(DMDevice):
         used = sum(lv.vgSpaceUsed for lv in self.lvs) + self.snapshotSpace
         used += self.reservedSpace
         free = self.size - used
+
+        pad = self.peSize * 2 * len(self.pvs)
+        if free >= pad:
+            free -= pad
+
         log.debug("vg %s has %dMB free" % (self.name, free))
         return free
```

Isn't the problem here that 'used' is being underestimated? In that case, shouldn't something just be added to 'used', or reservedSpace be corrected?

It looks like md is using more space for metadata than we expect. Rather than try to change the code in anaconda to match md's behavior exactly, only to have md (or lvm) change again in a few more weeks, I am just going to pad out the lvm calculations so there is some buffer.

I tested the patch from comment 30 and I can confirm that it fixes the problem described in comment 11.

Verified on anaconda-13.21.211-1 (RHEL6.5-20131009.0)

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1588.html

The patch from comment #30 is not included in anaconda-13.21.215, which was used in RHEL6.5. Neither is the change in default peSize from 4.0 to 32.0 included. How do we find out what code was changed in the errata?

* Mon Aug 12 2013 Samantha N. Bueno <sbueno+anaconda> - 13.21.202-1
- Add support for preexisting lvm using raid segment types. (dlehman)
  Resolves: rhbz#873281
- Add a bit of padding for metadata in new md size estimates. (dlehman)
  Resolves: rhbz#895982
- Expand warning text for package install issues. (sbueno+anaconda)
  Resolves: rhbz#895098
- Don't filter partitions on mpath devices in lvm (sbueno+anaconda)
  Resolves: rhbz#885755
- Move "nousbstorage" and "nousb" handling into init.c. (clumens)
  Resolves: rhbz#947704

I can answer my own question: https://lists.fedorahosted.org/pipermail/anaconda-patches/2013-August/005157.html

```
diff --git a/storage/devices.py b/storage/devices.py
index 42e3b6f..02d73ee 100644
--- a/storage/devices.py
+++ b/storage/devices.py
@@ -2789,7 +2789,9 @@ class MDRaidArrayDevice(StorageDevice):
             elif self.level == mdraid.RAID10:
                 size = (self.memberDevices / 2.0) * smallestMemberSize
                 size -= size % self.chunkSize
-            log.debug("non-existant RAID %s size == %s" % (self.level, size))
+
+            size -= 1 # account for unexpected metadata
+            log.debug("non-existent RAID %s size == %s" % (self.level, size))
         else:
             size = self.partedDevice.getSize()
             log.debug("existing RAID %s size == %s" % (self.level, size))
```

I am hitting this exact same bug in Red Hat 6.6, but in a really odd failure mode. I have this line in the kickstart:

logvol /export/backups --fstype=ext4 --name=lv_backup --vgname=vg_01 --size=4096 --grow

That works great.
But if I change --fstype to xfs, I get this error in the install:

lvcreate failed for vg_01/lv_backup: 15:31:17,673 ERROR Volume group "vg_01" has insufficient free space (2563336 extents): 2563337 required.

Is this related to the above bug? Or should I open a new bug?

(In reply to joshua.kugler from comment #41)
> I am hitting this exact same bug in Redhat 6.6, ...
...
> Is this related to the above bug? Or should I open a new bug?

It might be related to this bug, but you should open a new bug, since you can't re-open this one, and it might have a different root cause.
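The fix that shipped in anaconda-13.21.202-1 (the md size-estimate diff quoted in the comments above) can be exercised standalone. The function below mirrors the patched RAID10 branch with simplified names; it is a sketch, not the actual MDRaidArrayDevice method:

```python
def raid10_size_estimate(smallest_member_mb, member_devices, chunk_mb):
    """Estimate the usable size (MB) of a not-yet-created RAID10 array:
    halve the aggregate member capacity, round down to a whole number of
    chunks, then subtract 1 MB as a buffer for unexpected md metadata."""
    size = (member_devices / 2.0) * smallest_member_mb
    size -= size % chunk_mb  # round down to the chunk size
    size -= 1                # the patched line: account for unexpected metadata
    return size

# Two 102400 MB members with 0.5 MB chunks: the pre-patch estimate was
# 102400 MB; the padded estimate is 102399 MB, keeping lvcreate from
# coming up one extent short.
assert raid10_size_estimate(102400, 2, 0.5) == 102399.0
```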