Bug 709265 - empty vg storage pool can break GetVolumeByPath for all pools
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Osier Yang
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-05-31 08:38 UTC by Troels Arvin
Modified: 2012-06-20 06:28 UTC
CC List: 6 users

Fixed In Version: libvirt-0.9.9-1.el6
Doc Type: Bug Fix
Doc Text:
No documentation needed.
Clone Of:
Environment:
Last Closed: 2012-06-20 06:28:09 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links:
Red Hat Product Errata RHSA-2012:0748 (normal, SHIPPED_LIVE): Low: libvirt security, bug fix, and enhancement update (last updated 2012-06-19 19:31:38 UTC)

Description Troels Arvin 2011-05-31 08:38:46 UTC
Description of problem:
It seems that virt-manager's storage management feature gets confused if the system has a volume group with no logical volumes. The confusion makes it impossible to add managed storage.

Version-Release number of selected component (if applicable):
virt-manager-0.8.6-4.el6.noarch

How reproducible:
Every time.

Steps to Reproduce:
1. Create a volume group, but don't create any logical volumes from it yet.
2. Use virt-manager to create a new virtual server. (Or add to an existing one.)
3. In the storage selection step, choose "Select managed...", "Browse".
4. Select a storage pool from a VG which already has at least one LV.
5. Create a volume, and "Choose Volume"
  
Actual results:
Dialog box: "Storage parameter error. Name ... already in use by another volume". (Which is certainly not true.)

Expected results:
The new volume should have been assigned to the virtual machine.

Additional info:
See http://troels.arvin.dk/shots/virt-manager-storagepool-bug/ for screenshots.

In shot2: Notice the fc15b pool. It's an empty volume group.

In shot4: Notice the empty "Used By" field for the newly created g_testdb3__o_shared__d_sata_01.

Between clicking "Næste" (which means Next) in shot 5 and the error message in shot 6, a 20-second pause occurs.

In shots 7, 8 and 9, the same thing is tried, but "Browse Local" is used after g_testdb3__o_shared__d_sata_01 was manually added to the linvirt2_vg_ams2100_sata1 VG. Same problem.

Now, if I add at least one LV to the previously empty fc15b pool (based on a VG called "linvirt2_vg_fc15b") and try again, things work fine, even though I don't choose any storage from the fc15b pool, and the pause mentioned between shots 5 and 6 doesn't occur.

Comment 1 Troels Arvin 2011-05-31 08:46:55 UTC
Note: corresponding to the pause between shots 5 and 6, the following line appears in /var/log/messages:

May 31 09:45:35 linvirt2 libvirtd: 09:45:35.731: 14409: error : virStorageBackendStablePath:1320 : cannot read dir '/dev/linvirt2_vg_fc15b': No such file or directory

This problem seems to confuse virt-manager so much that it gives up(?).

Comment 3 Cole Robinson 2011-07-15 20:53:18 UTC
That error from the logs is the root problem. virt-manager is trying to detect whether the specified path is libvirt-managed storage. It does so by calling virStorageVolLookupByPath. The empty VG is causing that call to fail, even though the passed-in path is indeed a storage volume.

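For illustration, here is a minimal C sketch of that kind of lookup through the public libvirt API (the file name, connection URI, and volume path are only examples; virt-manager itself goes through the Python bindings):

  /* check_managed.c: ask libvirt whether a path is a managed storage
   * volume, roughly what virt-manager does before creating a new volume.
   * Build (illustrative): gcc check_managed.c -lvirt -o check_managed */
  #include <stdio.h>
  #include <libvirt/libvirt.h>
  #include <libvirt/virterror.h>

  int main(void)
  {
      virConnectPtr conn = virConnectOpen("qemu:///system");
      if (conn == NULL)
          return 1;

      /* On an unfixed libvirt, this call can fail when an empty VG pool
       * precedes the volume's pool in the driver's internal list. */
      virStorageVolPtr vol =
          virStorageVolLookupByPath(conn, "/var/lib/libvirt/images/foo.img");

      if (vol == NULL) {
          virErrorPtr err = virGetLastError();
          fprintf(stderr, "not managed storage: %s\n",
                  (err && err->message) ? err->message : "unknown error");
      } else {
          printf("managed volume: %s\n", virStorageVolGetName(vol));
          virStorageVolFree(vol);
      }

      virConnectClose(conn);
      return 0;
  }
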
virt-manager then does some more checks, sees that the root of the path /dev/existingvg is an existing storage pool, assumes you want to create a new volume named /dev/existingvg/existingvol, tries to use that name, and libvirt tells us it's taken.

I've committed an extra check in upstream virtinst that will make virt-manager check harder even if GetVolumeByPath fails, but libvirt needs to be fixed.
Reassigning this bug to libvirt.

And to clarify, an empty vg pool doesn't break GetVolumeByPath for every lookup, just for those pools that happen to come sequentially after the busted pool in libvirt's internal pool list, so just defining an empty vg pool might not make the issue easy to reproduce. The root cause is the error:

virStorageBackendStablePath:1320 : cannot read dir '/dev/linvirt2_vg_fc15b': No
such file or directory
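
In outline, the pre-fix lookup loop behaves like this (a simplified sketch, not the literal libvirt source; the loop structure is paraphrased and findVolByPath is a hypothetical stand-in for the driver's internal volume search):

  /* Sketch of virStorageVolLookupByPath's walk over every pool object.
   * An empty VG has no /dev/<vgname> directory, so the stable-path
   * helper fails for that pool. */
  for (i = 0; i < driver->pools.count; i++) {
      pool = driver->pools.objs[i];

      stablepath = virStorageBackendStablePath(pool, path);
      if (stablepath == NULL)
          goto cleanup;   /* bug: aborts the whole lookup, so pools that
                             come after the empty VG are never searched */

      vol = findVolByPath(pool, stablepath);   /* hypothetical helper */
      if (vol != NULL)
          break;
  }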

Comment 4 Osier Yang 2011-09-21 10:13:44 UTC
Patch posted upstream.

http://www.redhat.com/archives/libvir-list/2011-September/msg00820.html

Comment 5 Osier Yang 2011-09-27 01:37:06 UTC
Patch committed upstream.

commit 05e2fc51d1f4ba884a18c23c924d90cfd04384e3
Author: Osier Yang <jyang>
Date:   Mon Sep 26 14:30:44 2011 +0800

    storage: Do not break the whole vol lookup process in the middle
    
    * src/storage/storage_driver.c: As virStorageVolLookupByPath lookups
    all the pool objs of the drivers, breaking when failing on getting
    the stable path of the pool will just breaks the whole lookup process,
    it can cause the API fails even if the vol exists indeed. It won't get
    any benefit. This patch is to fix it.
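
In outline, the fix changes only the failure branch: instead of aborting the whole lookup, the driver warns and moves on to the next pool (again a simplified sketch, not the literal patch):

  stablepath = virStorageBackendStablePath(pool, path);
  if (stablepath == NULL) {
      /* post-fix: skip this pool and keep searching the rest; this is
       * the warning Alex observes during verification in comment 12 */
      VIR_WARN("Failed to get stable path for pool '%s'", pool->def->name);
      continue;
  }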

Comment 8 Alex Jia 2012-01-13 06:40:35 UTC
(In reply to comment #0)
> Description of problem:
> 
> Version-Release number of selected component (if applicable):
> virt-manager-0.8.6-4.el6.noarch
> 
> How reproducible:
> Every time.
> 
> Steps to Reproduce:
> 1. Create a volume group, but don't create any logical volumes from it yet.
> 2. Use virt-manager to create a new virtual server. (Or add to an existing
> one.)
> 3. In the storage selection step, choose "Select managed...", "Browse".
> 4. Select a storage pool from a VG which already has at least one LV.
> 5. Create a volume, and "Choose Volume"


I can't reproduce this issue on libvirt-0.9.4-1.el6.x86_64 with virt-manager-0.8.6-4.el6.noarch according to the above steps; could you give me your libvirt version?

In addition, IMHO, 0.9.4-1 (which predates the Sep 26 fix) should be enough to reproduce this bug, but I indeed can't reproduce it.

Osier, I guess you were able to reproduce the bug, right? If so, please comment with your steps, thanks.

# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               vg1
  PV Size               9.77 GiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              2499
  Free PE               499
  Allocated PE          2000
  PV UUID               mOej86-t93k-t2ai-E2XQ-BOJ1-e3AS-ZbTGbc

  "/dev/sda2" is a new physical volume of "9.77 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sda2
  VG Name
  PV Size               9.77 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               8gWeh1-rl08-yuDR-ZqPJ-sKFj-uQ7b-IRTz6c

# vgdisplay
  --- Volume group ---
  VG Name               vg1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               9.76 GiB
  PE Size               4.00 MiB
  Total PE              2499
  Alloc PE / Size       2000 / 7.81 GiB
  Free  PE / Size       499 / 1.95 GiB
  VG UUID               u2HMVM-9xOd-7d8a-yovr-TlTH-rVvV-lyOSW0

# lvdisplay
  --- Logical volume ---
  LV Name                /dev/vg1/test
  VG Name                vg1
  LV UUID                UEXCWp-MiN7-efF2-6SFY-913u-Wy1M-iRdBiA
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                7.81 GiB
  Current LE             2000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

Comment 10 Troels Arvin 2012-01-13 08:43:56 UTC
I have changed jobs, and I don't currently have access to any RHEL servers with libvirt.

Comment 11 Alex Jia 2012-01-13 09:00:13 UTC
(In reply to comment #10)
> I have changed jobs, and I don't currently have access to any RHEL servers
> with libvirt.

Hi Troels,
Thanks for your reply; no problem, I will check it again.

Alex

Comment 12 Alex Jia 2012-01-13 15:54:30 UTC
I can reproduce the bug on libvirt-0.9.4-1.el6.x86_64; I think it's helpful to show the test steps here:

Create an empty VG such as vg1 according to the original bug description, then create a volume such as foo.img in the 'default' pool:

# virsh pool-list
Name                 State      Autostart 
-----------------------------------------
default              active     yes       
vg1                  active     yes

# virsh vol-list vg1
Name                 Path                                    
-----------------------------------------


# virsh vol-list default
Name                 Path                                    
-----------------------------------------
foo.img           /var/lib/libvirt/images/foo.img      

# virsh vol-path vg1 /var/lib/libvirt/images/foo.img
error: failed to get pool '/var/lib/libvirt/images/foo.img'
error: failed to get vol 'vg1'
error: cannot read dir '/dev/vg1': No such file or directory

0.9.9-1 is fine: the lookup is no longer broken by GetVolumeByPath for any pool, and I can also see a new warning in the log file, so the patch is valid:

warning : storageVolumeLookupByPath:1255 : Failed to get stable path for pool 'vg1'

However, there is another issue; I will file a separate bug to track it.

Comment 13 Alex Jia 2012-01-17 03:34:00 UTC
(In reply to comment #12)
> However, there is another issue; I will file a separate bug to track it.

New issue: virsh vol-path can't correctly display which pool a volume belongs to
https://bugzilla.redhat.com/show_bug.cgi?id=781515

Comment 14 Osier Yang 2012-05-04 09:51:06 UTC
    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
No documentation needed.

Comment 16 errata-xmlrpc 2012-06-20 06:28:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2012-0748.html

