Bug 1072999

Summary: LVMVolumeGroupDevice cannot create new logical volume
Product: Red Hat Enterprise Linux 7
Reporter: Jan Safranek <jsafrane>
Component: python-blivet
Assignee: Vratislav Podzimek <vpodzime>
Status: CLOSED CURRENTRELEASE
QA Contact: Release Test Team <release-test-team-automation>
Severity: high
Docs Contact:
Priority: unspecified
Version: 7.0
CC: dlehman, mbanas, pjanda, svenkatr
Target Milestone: rc
Keywords: Regression
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: python-blivet-0.18.32-1
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 1083067 (view as bug list)
Environment:
Last Closed: 2014-06-13 09:23:16 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
  Description             Flags
  reproducer              none
  blivet log              none
  output from reproducer  none
  fdisk.output            none
  pvdisplay output        none
  vgdisplay.output        none
  lvdisplay.output        none
  lvdisplay.output        none
  parted.output           none
  pvdisplay.output        none
  reproducer2.output      none
  reproducer2.py          none
  vgdisplay.output        none

Description Jan Safranek 2014-03-05 15:19:45 UTC
This is somewhat related to bug #1021505.

I have a volume group on top of MD RAID, with 7 logical volumes:

$ vgdisplay 
  --- Volume group ---
  VG Name               test
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  8
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                7
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               184.00 MiB
  PE Size               4.00 MiB
  Total PE              46
  Alloc PE / Size       14 / 56.00 MiB
  Free  PE / Size       32 / 128.00 MiB
  VG UUID               R0vZWK-DH6U-F9KN-B5i7-6sGt-QHu8-sdZpEW

$ lvs
  LV   VG   Attr       LSize Pool Origin Data%  Move Log Cpy%Sync Convert
  1    test -wi-a----- 8.00m
  2    test -wi-a----- 8.00m
  3    test -wi-a----- 8.00m
  4    test -wi-a----- 8.00m
  5    test -wi-a----- 8.00m
  6    test -wi-a----- 8.00m
  7    test -wi-a----- 8.00m

(as you can see, the RAID is quite small).

Creating a new 8MB logical volume using blivet throws:

  File "/usr/lib/python2.7/site-packages/lmi/storage/LMI_StorageConfigurationService.py", line 212, in _create_lv
    lv = self.storage.newLV(**args)
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 1145, in newLV
    return device_class(name, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/blivet/devices.py", line 2595, in __init__
    self.parents[0]._addLogVol(self)
  File "/usr/lib/python2.7/site-packages/blivet/devices.py", line 2314, in _addLogVol
DeviceError: ('new lv is too large to fit in free space', 'test')
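
For reference, a minimal sketch of the failing call (blivet 0.18, Python 2). newLV and the vgs attribute appear elsewhere in this report; the exact keyword arguments are my assumption:

  import blivet

  # scan existing devices, then ask for a new 8MB LV in the "test" VG
  b = blivet.Blivet()
  b.reset()
  vg = b.vgs[0]
  lv = b.newLV(vg=vg, name="8", size=8)   # raises the DeviceError above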


The reason is that LVMVolumeGroupDevice.freeSpace and freeExtents are wrong:
>>> vg = blivet.vgs[0]
>>> print repr(vg)

LVMVolumeGroupDevice instance (0x20dc810) --
  name = test  status = True  kids = 7 id = 21
  parents = ['existing 188MB mdarray 1 (6) with existing lvmpv']
  uuid = R0vZWK-DH6U-F9KN-B5i7-6sGt-QHu8-sdZpEW  size = 184
  format = existing None
  major = 0  minor = 0  exists = True  protected = False
  sysfs path =   partedDevice = None
  target size = 0  path = /dev/mapper/test
  format args = []  originalFormat = None  target = None  dmUuid = None  free = 128.0  PE Size = 4.0  PE Count = 46
  PE Free = 32  PV Count = 1
  LV Names = ['1', '2', '3', '4', '5', '6', '7']  modified = False
  extents = 46.0  free space = -12.0
  free extents = -3.0  reserved percent = 0  reserved space = 0
  PVs = ['existing 188MB mdarray 1 (6) with existing lvmpv']
  LVs = ['existing 8MB lvmlv test-1 (22)',
 'existing 8MB lvmlv test-2 (23)',
 'existing 8MB lvmlv test-3 (24)',
 'existing 8MB lvmlv test-4 (25)',
 'existing 8MB lvmlv test-5 (26)',
 'existing 8MB lvmlv test-6 (27)',
 'existing 8MB lvmlv test-7 (28)']

Notice free space = -12.0 and free extents = -3.0; the free-space check in _addLogVol() therefore fails.
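
To make the mismatch visible, blivet's number can be compared directly against lvm's own accounting (a hypothetical cross-check, not part of the attached reproducer; vgs and its vg_free field are standard lvm2 CLI):

  import subprocess
  import blivet

  b = blivet.Blivet()
  b.reset()
  vg = next(v for v in b.vgs if v.name == "test")

  # lvm reports 128MB free; blivet reports -12.0
  lvm_free = subprocess.check_output(
      ["vgs", "--noheadings", "--units", "m", "-o", "vg_free", "test"])
  print "blivet: %s MB, lvm: %s" % (vg.freeSpace, lvm_free.strip())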

Version-Release number of selected component (if applicable):
python-blivet-0.18.28-1.el7.noarch

python-blivet-0.18.27-1.el7.noarch was working; I suspect commit b82b638aa53fbaa375e6e038c2b2251a8e145c52 introduced the regression.

How reproducible:
always

Steps to Reproduce:
1. get two 100MB devices (partitions)
2. create a raid0 array from them: mdadm -C -l 0 -n 2 /dev/md1 /dev/sdc{1,2}
3. create a VG on top of it: vgcreate test /dev/md1
4. create 7 LVs: for i in `seq 7`; do lvcreate -l 2 -n $i test; done
5. initialize blivet and inspect the LVMVolumeGroupDevice (see attachment and the sketch below)
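
The reproducer itself is attached rather than inlined; step 5 roughly amounts to the following (my approximation using the blivet 0.18 Python 2 API):

  import blivet

  b = blivet.Blivet()
  b.reset()
  for vg in b.vgs:
      print repr(vg)   # includes the "free space" / "free extents" lines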

Actual results:
blivet prints: free space = -12.0, free extents = -3.0

Expected results:
blivet prints positive values, indicating that there is plenty of free space in the VG. Bug 1021505 suggests the numbers may not be exact, but being off by 128MB is IMO too much.
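
The expected numbers follow directly from the vgdisplay output above:

  pe_size = 4                             # MiB per extent
  total_pe = 46
  alloc_pe = 7 * 2                        # seven 8MB LVs, 2 extents each
  print (total_pe - alloc_pe) * pe_size   # 128 MiB free, as lvm reports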

Comment 1 Jan Safranek 2014-03-05 15:21:07 UTC
Created attachment 871031 [details]
reproducer

Comment 2 Jan Safranek 2014-03-05 15:21:57 UTC
Created attachment 871032 [details]
blivet log

Comment 6 Petr Janda 2014-03-17 11:48:00 UTC
Created attachment 875448 [details]
output from reproducer

I am afraid it is not resolved. Tested on RHEL-7.0-20140314.0 ComputeNode, the reproducer still prints negative values for free space and free extents. I also hit bug 1076365.

Comment 7 Petr Janda 2014-03-17 11:49:34 UTC
Created attachment 875449 [details]
fdisk.output

Comment 8 Petr Janda 2014-03-17 11:50:18 UTC
Created attachment 875450 [details]
pvdisplay output

Comment 9 Petr Janda 2014-03-17 11:50:57 UTC
Created attachment 875451 [details]
vgdisplay.output

Comment 10 Petr Janda 2014-03-17 11:51:40 UTC
Created attachment 875452 [details]
lvdisplay.output

Comment 11 Vratislav Podzimek 2014-03-18 07:29:35 UTC
LVMVolumeGroupDevice instance (0x1abcb90) --
  name = rhel  status = True  kids = 3 id = 4
  parents = ['existing 10240MB partition sda2 (3) with existing lvmpv']
  uuid = eo1kec-Ekla-yDZY-awZ9-np3W-ZDL0-yIAumK  size = 10236
  format = existing None
  major = 0  minor = 0  exists = True  protected = False
  sysfs path =   partedDevice = None
  target size = 0  path = /dev/mapper/rhel
  format args = []  originalFormat = None  target = None  dmUuid = None  free = 0.0  PE Size = 4.0  PE Count = 238341
  PE Free = 0  PV Count = 1
  LV Names = ['home', 'root', 'swap']  modified = False
  extents = 2559.0  free space = -943128
  free extents = -235782.0  reserved percent = 0  reserved space = 0
  PVs = ['existing 10240MB partition sda2 (3) with existing lvmpv']
  LVs = ['existing 894212MB lvmlv rhel-home (5)',
 'existing 51200MB lvmlv rhel-root (6)',
 'existing 7952MB lvmlv rhel-swap (7) with existing swap']

This looks to me like a completely different issue. Dave, any idea? How can a ~10GB PV back a VG containing a ~900GB LV?

Comment 12 David Lehman 2014-03-18 13:57:13 UTC
The "rhel" VG is not what's being tested AFAICT. You should be looking at the "test" VG. It looks to me like sda2 (a PV in "rhel") is corrupt. The VG thinks it is 931.02 GiB, but udev and parted say it is 1 GiB.

Comment 15 Vratislav Podzimek 2014-03-27 16:25:33 UTC
Petr, do you have a reproducer for the issue you hit?

Comment 16 Petr Janda 2014-04-01 11:39:49 UTC
I cannot reproduce it.
But I tried to verify the fix, and when there is another volume group (rhel in my case), the volume group on raid0 (named test) is not recognized by blivet at all.

I used a modified reproducer to print all VGs.

my steps:
1) install a rhel system, use LVM partitioning, do not use the whole disk
2) boot the installation again, configure network (just for the record, I don't believe it matters)
3) switch to the 2nd console
4) create two 100MB partitions (using parted in my case)
5) mdadm -C -l 0 -n 2 /dev/md1 /dev/sda{4,5}
6) vgcreate test /dev/md1
7) for i in 1 2 3 4 5 6 7; do lvcreate -l 2 -n $i test; done
8) run reproducer2.py (attached; sketched below)
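
Per the description above, reproducer2.py prints all VGs; a minimal equivalent would be (same blivet 0.18 API assumptions as before):

  import blivet

  b = blivet.Blivet()
  b.reset()
  print [vg.name for vg in b.vgs]   # "test" is missing when "rhel" exists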

Comment 17 Petr Janda 2014-04-01 11:42:54 UTC
Created attachment 881308 [details]
lvdisplay.output

Comment 18 Petr Janda 2014-04-01 11:42:58 UTC
Created attachment 881309 [details]
parted.output

Comment 19 Petr Janda 2014-04-01 11:43:01 UTC
Created attachment 881310 [details]
pvdisplay.output

Comment 20 Petr Janda 2014-04-01 11:43:12 UTC
Created attachment 881311 [details]
reproducer2.output

Comment 21 Petr Janda 2014-04-01 11:43:16 UTC
Created attachment 881312 [details]
reproducer2.py

Comment 22 Petr Janda 2014-04-01 11:43:18 UTC
Created attachment 881313 [details]
vgdisplay.output

Comment 25 Ludek Smid 2014-06-13 09:23:16 UTC
This request was resolved in Red Hat Enterprise Linux 7.0.

Contact your manager or support representative in case you have further questions about the request.