Bug 958702 - libvirt should skip not activated/suspended volumes during pool-create/pool-refresh for logical pools
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: rc
Assignee: John Ferlan
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 748282
Blocks:
 
Reported: 2013-05-02 09:13 UTC by Geyang Kong
Modified: 2014-06-18 00:50 UTC
CC: 8 users

Fixed In Version: libvirt-1.0.6-1.el7
Doc Type: Bug Fix
Doc Text:
Clone Of: 748282
Environment:
Last Closed: 2014-06-13 10:37:16 UTC
Target Upstream Version:
Embargoed:


Attachments
libvirt-storage-fix-clutered-lvm (1.74 KB, patch)
2013-05-08 13:52 UTC, Roman

Comment 1 Osier Yang 2013-05-08 04:14:00 UTC
commit 59750ed6ea12c4db5ba042fef1a39b963cbfb559
Author: Osier Yang <jyang>
Date:   Tue May 7 18:29:29 2013 +0800

    storage: Skip inactive lv volumes
    
    If the volume is of a clustered volume group, and not active, the
    related pool APIs fails on opening /dev/vg/lv. If the volume is
    suspended, it hangs on open(2) the volume.
    
    Though the best solution is to expose the volume status in volume
    XML, and even better to provide API to activate/deactivate the volume,
    but it's not the work I want to touch currently. Volume status in
    other status is just fine to skip.
    
    About the 5th field of lv_attr (from man lvs[8])
    <quote>
     5 State: (a)ctive, (s)uspended, (I)nvalid snapshot, invalid
       (S)uspended snapshot, snapshot (m)erge failed,suspended
       snapshot (M)erge failed, mapped (d)evice present without
       tables,  mapped device present with (i)nactive table
    </quote>
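For reference, the state the patch keys off is the 5th character of the lv_attr string quoted above. A minimal shell sketch of that check, using illustrative attribute strings (not output captured from this system):

```shell
# Map the 5th lv_attr character to an LV state, per lvs(8).
# The attribute strings passed below are illustrative examples.
lv_state() {
  # Extract the 5th character of the attribute string (POSIX cut).
  state=$(printf '%s' "$1" | cut -c5)
  case "$state" in
    a) echo "active" ;;
    s|S) echo "suspended" ;;
    -) echo "inactive" ;;
    *) echo "other" ;;
  esac
}

lv_state "-wi-a-----"   # an active volume
lv_state "-wi-------"   # an inactive volume, which the patch skips
```

A volume whose state character is anything other than "a" is simply skipped during pool refresh rather than opened.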

Comment 2 Roman 2013-05-08 13:52:17 UTC
Created attachment 745255 [details]
libvirt-storage-fix-clutered-lvm

My current patch for RHEL 6 implements this behavior.

Comment 3 zhe peng 2013-06-06 06:54:38 UTC
I can reproduce this issue with libvirt-1.0.4-1.1.el7.x86_64 and verified the fix with libvirt-1.0.6-1.el7.

Steps:
pool-refresh:
1) Create a PV:

# pvcreate ...

2) Create a VG:

# vgcreate ...

3) Define the logical pool in libvirt:
<pool type='logical'>
  <name>vg</name>
  <uuid>9af81e46-41e2-3a2a-bb6e-07c9c0ceab73</uuid>
  <capacity unit='bytes'>32208060416</capacity>
  <allocation unit='bytes'>419430400</allocation>
  <available unit='bytes'>31788630016</available>
  <source>
    <device path='/dev/sda6'/>
    <name>vg_virt</name>
    <format type='lvm2'/>
  </source>
  <target>
    <path>/dev/vg_virt</path>
    <permissions>
      <mode>0700</mode>
      <owner>-1</owner>
      <group>-1</group>
    </permissions>
  </target>
</pool>
4) Create two LVs (lv1, lv2) in the pool.

5) Deactivate lv1:

# lvchange -an /dev/vg_virt/lv1

6) Check lv1:

# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg_virt/lv1
  LV Name                lv1
  VG Name                vg_virt
  LV UUID                EsrAXn-dd0u-D8wc-YMB3-sTTE-kPXN-oMdILf
  LV Write Access        read/write
  LV Creation host, time intel-5310-32-2.englab.nay.redhat.com, 2013-06-06 14:29:03 +0800
  LV Status              NOT available
  LV Size                200.00 MiB
  Current LE             50
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

7) Refresh the pool:
#virsh pool-refresh vg
Pool vg refreshed

# virsh vol-list vg
Name                 Path                                    
-----------------------------------------
lv2                  /dev/vg_virt/lv2  

pool-create:

1) Create two LVs (lv1, lv2).
2) Disable lv1.
3) # virsh pool-create pool.xml
Pool vg created from pool.xml

Verification passed.
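The verified behavior can be approximated offline: filter an lvs-style listing down to volumes whose 5th attribute character is "a" (active), which is what pool-refresh now exposes. The volume names and attribute strings below are illustrative samples, not output from the test machine:

```shell
# Sample 'lvs --noheadings -o lv_name,lv_attr' style output (illustrative).
# lv1 is deactivated, lv2 is active, matching the scenario above.
sample_lvs="lv1 -wi-------
lv2 -wi-a-----"

# Keep only volumes whose 5th lv_attr character is 'a' (active),
# mirroring the skip logic that pool-refresh now applies.
active_lvs() {
  printf '%s\n' "$1" | awk 'substr($2, 5, 1) == "a" { print $1 }'
}

active_lvs "$sample_lvs"   # prints only lv2
```

This matches the vol-list output above, where only lv2 remains visible after lv1 is deactivated.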

Comment 6 Ludek Smid 2014-06-13 10:37:16 UTC
This request was resolved in Red Hat Enterprise Linux 7.0.

Contact your manager or support representative in case you have further questions about the request.

