Bug 587442
Summary: LVMError: lvcreate failed for vg_alma_fast/lv_home: 18:06:18,834 ERROR : Insufficient free extents (748) in volume group vg_alma_fast: 749 required

| Field | Value |
|---|---|
| Product | Red Hat Enterprise Linux 6 |
| Component | anaconda |
| Version | 6.0 |
| Reporter | Brian Lane <bcl> |
| Assignee | Brian Lane <bcl> |
| QA Contact | Release Test Team <release-test-team-automation> |
| CC | atodorov |
| Status | CLOSED CURRENTRELEASE |
| Severity | medium |
| Priority | medium |
| Target Milestone | rc |
| Keywords | Reopened |
| Hardware | i386 |
| OS | Linux |
| Whiteboard | anaconda_trace_hash:a3346164c6cf4ee1ecb6ba84bd86d41de8815266e42f71fface8b46aed7475d4 |
| Fixed In Version | anaconda-13.21.50-1 |
| Doc Type | Bug Fix |
| Last Closed | 2010-07-02 20:48:06 UTC |
Description
Brian Lane
2010-04-29 22:33:50 UTC
Created attachment 410254 [details]
Attached traceback automatically from anaconda.
This request was evaluated by Red Hat Product Management for inclusion in a Red Hat Enterprise Linux major release. Product Management has requested further review of this request by Red Hat Engineering, for potential inclusion in a Red Hat Enterprise Linux major release. This request is not yet committed for inclusion.

Brian, how did you perform the installation?

I was trying to confirm a bugfix for bug 585839, which used a RAID5 and a RAID0, so I set up the virtual machine with 8 1G drives. I'm not sure if I can reproduce this, but it looks like a rounding error when calculating extents.

I just tried to reproduce this and couldn't. Moving to MODIFIED per comment #6.

With RHEL6.0-20100511.3: I installed using the steps to reproduce from https://bugzilla.redhat.com/show_bug.cgi?id=585839#c0 (the bug Brian was testing; virtual system, 4 disks, same partitioning but with smaller disk sizes) and the install completed successfully. Per comment #6 I'll close this.

*** Bug 591941 has been marked as a duplicate of this bug. ***

I've hit this with the 0512.0 tree and a slightly different setup: I didn't leave any free space at the beginning, nor create any partitions other than the RAID ones. My setup is a KVM guest with 4 disks (10GB each) using the layout below. Because those disks had been used in previous installs, after starting stage2 I switched to tty2 and ran `dd if=/dev/zero of=/dev/vdX bs=1M count=10` for each disk, then let anaconda re-initialize all disks. I manually created this layout in anaconda:
vda1  200MB   /boot, ext4
vda2  4000MB  software RAID
vda3  4000MB  software RAID
vda4  -       extended
vda5  -       all available space, swap
vdb1  4000MB  software RAID
vdb2  4000MB  software RAID
      free space
vdc1  4000MB  software RAID
vdc2  4000MB  software RAID
      free space
vdd1  4000MB  software RAID
vdd2  4000MB  software RAID
      free space

md0, RAID 5 (vda2, vdb2, vdc2, vdd2), 3 active, 1 spare, PV
md1, RAID 0 (vda3, vdb1, vdc1, vdd1), 4 active, 0 spare, PV
vg_safe on md0 with lv_root, / - all space, ext4
vg_fast on md1 with lv_home - all space, ext4

Re-opening.

Created attachment 414125 [details]
anaconda traceback from failure reproducer
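The failure mode in the summary (748 free extents available, 749 required) is consistent with the rounding error suspected above: if the installer plans LV sizes from the raw RAID capacity while LVM sees that capacity minus the md superblock, the plan can come out one physical extent too optimistic. A minimal sketch of that mismatch, with hypothetical sizes and a purely illustrative superblock figure (this is not anaconda's code):

```python
MIB = 1024 * 1024
PE_SIZE = 4 * MIB  # LVM's default physical extent size


def free_extents(pv_bytes, pe_size=PE_SIZE):
    """Extents LVM can actually allocate: usable bytes rounded DOWN."""
    return pv_bytes // pe_size


# Hypothetical RAID5 set: 3 data members of 4000 MiB each.
raw_capacity = 3 * 4000 * MIB
superblock = 1 * MIB  # space consumed by md metadata (illustrative value)

# A planner that ignores the superblock requests this many extents...
requested = free_extents(raw_capacity)
# ...but the PV created on the real device only provides this many.
available = free_extents(raw_capacity - superblock)

print(requested, available)
assert requested == available + 1  # one extent short, as in the traceback
```

The exact numbers differ from the report, but the shape is the same: any unaccounted-for metadata that crosses an extent boundary makes the requested count exceed the available count by one.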
A somewhat simpler way to recreate this:

vda1  500M   /boot
vda2  4000M  SW RAID
vdb1  500M   swap
vdb2  4000M  SW RAID
vdc1  4000M  SW RAID
vdd1  4000M  SW RAID

md0, RAID5 (vda2, vdb2, vdc1, vdd1), 4 active, 0 spares
vg_safe on md0 with all space

md0's size is 11999 and the vg_safe size is 11996.

Created attachment 417297 [details]
Extra debugging
Added logging of PV size before and after creation. Search for 'size ==' to find the entries.
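For reference, the gap between md0 (11999) and vg_safe (11996) noted above is consistent with LVM rounding the PV down to a whole number of 4 MiB extents, with the PV metadata area living in the remainder. A rough check, assuming the reported figures are MiB (the exact reservation depends on pvcreate defaults):

```python
MIB = 1024 * 1024
PE_SIZE = 4 * MIB  # default LVM physical extent size

md0_size = 11999 * MIB  # array size from the comment above, assumed MiB

# LVM hands out whole extents only, so the usable VG size is the PV
# size rounded down to a multiple of the extent size.
extents = md0_size // PE_SIZE       # 2999 extents
vg_size_mib = extents * PE_SIZE // MIB

print(vg_size_mib)  # 11996, matching the reported vg_safe size
```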
Committed 7a9698b3cd93bf4f6d297b7bccf6f463aebe34e8 and 4a76ede46f1c8a5bb02a3055e186f4134d18342a to fix this. The solution was to adjust the estimated size of the superblocks to match mdadm's use of v1.1 metadata in the RAID.

Hi, tested with anaconda-13.21.50-9 (0622.1 build) and the partitioning scheme from comment #12. There was no traceback. Moving to VERIFIED.

Red Hat Enterprise Linux Beta 2 is now available and should resolve the problem described in this bug report. This report is therefore being closed with a resolution of CURRENTRELEASE. You may reopen this bug report if the solution does not work for you.
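The fix described above amounts to making the installer's estimate of the md superblock overhead match what mdadm actually reserves for v1.x metadata, so the predicted PV size is never larger than the real one. A hedged sketch of that idea — the function names, the growth rule, and all constants here are illustrative, not anaconda's or mdadm's exact values:

```python
MIB = 1024 * 1024


def md_superblock_overhead(member_size_bytes):
    """Hypothetical estimate of the space md v1.x metadata reserves on
    each member device. mdadm's real reservation grows with device
    size; the doubling rule and caps below are illustrative only."""
    overhead = 1 * MIB
    while overhead * 64 < member_size_bytes and overhead < 128 * MIB:
        overhead *= 2
    return overhead


def usable_raid5_size(member_size_bytes, n_members):
    """Capacity a planner should work with for RAID5: (n - 1) data
    members, each shrunk by the estimated metadata overhead."""
    per_member = member_size_bytes - md_superblock_overhead(member_size_bytes)
    return per_member * (n_members - 1)


# 4 x 4000 MiB members: plan with the shrunken size so a later
# lvcreate never requests more extents than the PV really has.
print(usable_raid5_size(4000 * MIB, 4) // MIB)
```

The key property is the direction of the error: overestimating the overhead merely leaves a little space unused, while underestimating it reproduces the "Insufficient free extents" traceback this bug was filed for.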