Bug 748282 - libvirt should skip not activated/suspended volumes during pool-create/pool-refresh for logical pools
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: John Ferlan
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: 958702
 
Reported: 2011-10-23 22:55 UTC by Roman
Modified: 2016-04-26 15:30 UTC
CC List: 9 users

Fixed In Version: libvirt-0.10.2-33.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 958702 (view as bug list)
Environment:
Last Closed: 2014-10-14 04:13:37 UTC
Target Upstream Version:
Embargoed:


Attachments
libvirt-storage-fix-logical.patch (3.27 KB, patch)
2011-10-23 22:57 UTC, Roman
no flags


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2014:1374 0 normal SHIPPED_LIVE libvirt bug fix and enhancement update 2014-10-14 08:11:54 UTC

Description Roman 2011-10-23 22:55:44 UTC
Description of problem:
libvirt should skip not activated/suspended volumes during pool-create/pool-refresh for logical pools

Version-Release number of selected component (if applicable):
libvirt-0.8.7-18.el6_1.1
libvirt-0.9.6

How reproducible:
always

Steps to Reproduce:
1. Build 2-node cluster
2. Create clustered volume group
3. Activate exclusively any volume on the first node
4. Try to create pool on the second node
  
Actual results:
# virsh pool-create vg.xml 
error: Failed to create pool from vg.xml
error: internal error lvs command failed

Expected results:
# virsh pool-create vg.xml 
Pool vg created from vg.xml

Additional info:
The attached patch fixes the problem for me.
It also fixes this: https://bugzilla.redhat.com/show_bug.cgi?id=748248
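
For reference, a rough sketch of the reproduction on a two-node cluster (the VG/LV names, device path, and pool XML below are illustrative, and clustered LVM locking is assumed to already be configured):

# on node 1: create a clustered VG and exclusively activate an LV there
vgcreate -cy clustervg /dev/sdb1
lvcreate -n lv1 -L 1G clustervg
lvchange -aey /dev/clustervg/lv1    # -aey = activate exclusively on this node

# on node 2: define and try to create the pool
cat > vg.xml <<'EOF'
<pool type='logical'>
  <name>vg</name>
  <source>
    <name>clustervg</name>
    <format type='lvm2'/>
    <device path='/dev/sdb1'/>
  </source>
  <target>
    <path>/dev/clustervg</path>
  </target>
</pool>
EOF
virsh pool-create vg.xml    # fails with "internal error lvs command failed" before the fix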

Comment 1 Roman 2011-10-23 22:57:10 UTC
Created attachment 529712 [details]
libvirt-storage-fix-logical.patch

Comment 3 Osier Yang 2011-10-24 01:32:24 UTC
Hi, Roman

The patch looks good, though I'm not sure it still works well for non-clustered VGs after changing "-ay/-an" to "-aly/-aln". Have you tested it on non-clustered VGs? We might need to introduce flags if not. In any case, could you post the patch to libvirt upstream? If that is inconvenient for you, we can take it upstream and keep you as the author. Thanks

Comment 4 Roman 2011-10-24 13:07:03 UTC
I posted this patch to libvirt upstream:
https://bugzilla.redhat.com/show_bug.cgi?id=748437

vgchange -aly/-aln works for any type of vg: clustered or not
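
For anyone unfamiliar with the flags being discussed, roughly (VG name illustrative):

vgchange -ay  clustervg    # activate LVs; on a clustered VG this requests activation cluster-wide
vgchange -aly clustervg    # the added 'l' restricts activation to the local node only
vgchange -aln clustervg    # likewise, deactivate only on the local node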

Comment 6 Osier Yang 2011-12-12 14:06:54 UTC
The patch works because the pool is started with "-aly", not because the inactive volumes are skipped. We allow inactive volumes to exist in a pool, so this BZ should be closed as either "WONTFIX" or a "DUPLICATE" of 748248.

Comment 7 Osier Yang 2011-12-13 14:39:23 UTC

*** This bug has been marked as a duplicate of bug 748248 ***

Comment 8 Roman 2011-12-27 01:07:25 UTC
Sorry, but libvirtd should skip volumes that it cannot activate.

1) If I exclusively lock a volume of the clustered volume group on another node,
"vgchange -aly" completes without error, but the locked volume is still not activated.
"virsh pool-refresh" will then fail, because it tries to open /dev/vg/lv on refresh.

2) If a logical volume is in "suspended" status, libvirtd hangs in the open(2) syscall for that volume.

It should skip suspended and not-activated volumes.
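
A rough illustration of the two cases (VG/LV/pool names are illustrative; dmsetup is used here only to force the suspended state):

# case 1: LV exclusively activated on another node
lvchange -aey /dev/clustervg/lv1    # on node A
vgchange -aly clustervg             # on node B: succeeds, but lv1 stays inactive there
virsh pool-refresh clustervg-pool   # on node B: fails trying to open /dev/clustervg/lv1

# case 2: suspended LV
dmsetup suspend clustervg-lv1       # suspend the device-mapper device backing the LV
virsh pool-refresh clustervg-pool   # libvirtd blocks in open(2) on the suspended volume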

Comment 15 Jiri Denemark 2014-03-27 08:53:20 UTC
Fixed upstream by v1.0.5-47-g59750ed:

commit 59750ed6ea12c4db5ba042fef1a39b963cbfb559
Author: Osier Yang <jyang>
Date:   Tue May 7 18:29:29 2013 +0800

    storage: Skip inactive lv volumes
    
    If the volume is of a clustered volume group, and not active, the
    related pool APIs fails on opening /dev/vg/lv. If the volume is
    suspended, it hangs on open(2) the volume.
    
    Though the best solution is to expose the volume status in volume
    XML, and even better to provide API to activate/deactivate the volume,
    but it's not the work I want to touch currently. Volume status in
    other status is just fine to skip.
    
    About the 5th field of lv_attr (from man lvs[8])
    <quote>
     5 State: (a)ctive, (s)uspended, (I)nvalid snapshot, invalid
       (S)uspended snapshot, snapshot (m)erge failed,suspended
       snapshot (M)erge failed, mapped (d)evice present without
       tables,  mapped device present with (i)nactive table
    </quote>
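
In other words, the fix keys off that same State character while parsing "lvs" output. An equivalent manual check might look like this (illustrative, not the libvirt source; the VG name is a placeholder):

lvs --noheadings -o lv_name,lv_attr somevg | \
    awk 'substr($2, 5, 1) == "a"'    # keep only volumes whose 5th attr character is "a" (active)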

Comment 19 Xuesong Zhang 2014-07-18 09:26:24 UTC
Hi, Jiri,

While verifying this bug, there is one concern I need your help with.
As you can see in scenario 2 of this comment, pool-create works well and does not fail because of the inactive LV.
But the inactive LV is also listed in the vol-list of this pool.
Is that expected?
My understanding from the bug summary is that the inactive LV should be skipped during pool-create.


Verified this bug with the following packages:
libvirt-0.10.2-41.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.430.el6.x86_64
kernel-2.6.32-492.el6.x86_64


Steps:
Scenario 1: the inactive volume is skipped after pool-refresh
1. Prepare one active logical pool that contains 2 volumes.
# virsh pool-list --all
Name                 State      Autostart 
-----------------------------------------       
LVM-pool             active     no 

# virsh vol-list LVM-pool
Name                 Path                                    
-----------------------------------------
xuzhanglvm1          /dev/xuzhangVG/xuzhanglvm1              
xuzhanglvm2          /dev/xuzhangVG/xuzhanglvm2              

2. Deactivate one volume and make sure its LV Status is "NOT available".
# lvchange -an /dev/xuzhangVG/xuzhanglvm2

# lvdisplay /dev/xuzhangVG/xuzhanglvm2
  --- Logical volume ---
  LV Path                /dev/xuzhangVG/xuzhanglvm2
  LV Name                xuzhanglvm2
  VG Name                xuzhangVG
  LV UUID                skXq1t-rZ78-8Da9-BvNJ-6uQr-sV3e-iOQBZA
  LV Write Access        read/write
  LV Creation host, time xuzhangtest3, 2014-07-18 04:36:59 -0400
  LV Status              NOT available
  LV Size                1.00 GiB
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

3. Refresh the pool and check its volumes again; the inactive volume is no longer listed, as expected.
# virsh pool-refresh LVM-pool
Pool LVM-pool refreshed

# virsh vol-list LVM-pool
Name                 Path                                    
-----------------------------------------
xuzhanglvm1          /dev/xuzhangVG/xuzhanglvm1  


Scenario 2: pool-create will skip the inactive volume
1. Prepare one XML file like the following:
# cat LVM-pool.xml 
<pool type='logical'>
       <name>LVM-pool</name>
       <source>
         <name>xuzhangVG</name>
         <format type='lvm2'/>
         <device path='/dev/sda1'/>
       </source>
       <target>
         <path>/dev/xuzhangVG</path>
       </target>
     </pool>

2. Prepare 2 LVs on the host under the VG from step 1, and make sure one LV is active and the other is inactive.
For example:
xuzhanglvm1 status is "NOT available"
xuzhanglvm2 status is "available"

# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/xuzhangVG/xuzhanglvm1
  LV Name                xuzhanglvm1
  VG Name                xuzhangVG
  LV UUID                TmfOuo-ilzY-2vW3-zRTv-rjJZ-fXAT-as2qgl
  LV Write Access        read/write
  LV Creation host, time xuzhangtest3, 2014-07-18 04:36:55 -0400
  LV Status              NOT available
  LV Size                1.00 GiB
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
   
  --- Logical volume ---
  LV Path                /dev/xuzhangVG/xuzhanglvm2
  LV Name                xuzhanglvm2
  VG Name                xuzhangVG
  LV UUID                skXq1t-rZ78-8Da9-BvNJ-6uQr-sV3e-iOQBZA
  LV Write Access        read/write
  LV Creation host, time xuzhangtest3, 2014-07-18 04:36:59 -0400
  LV Status              available
  # open                 0
  LV Size                1.00 GiB
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

3. Create the pool from the XML file:
# virsh pool-create LVM-pool.xml 
Pool LVM-pool created from LVM-pool.xml

# virsh pool-list 
Name                 State      Autostart 
-----------------------------------------    
LVM-pool             active     no   


4. Check the volumes of this logical pool; both LVs are listed.
# virsh vol-list LVM-pool
Name                 Path                                    
-----------------------------------------
xuzhanglvm1          /dev/xuzhangVG/xuzhanglvm1              
xuzhanglvm2          /dev/xuzhangVG/xuzhanglvm2
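
(Side note: the state libvirt actually parses is the lv_attr column from "lvs", not the "LV Status" line of lvdisplay. A quick way to see it, using the names above:

# lvs -o lv_name,lv_attr xuzhangVG

The 5th character of the Attr column is the state: "a" = active, "-" = inactive, "s" = suspended.)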

Comment 20 Jiri Denemark 2014-07-18 09:39:41 UTC
Yeah, I think the result of step4 in scenario2 should be consistent with the result after pool-refresh (step 3 in scenario1). I guess running virsh pool-refresh LVM-pool at the end of scenario2 would make xuzhanglvm1 disappear, right?

Comment 21 Xuesong Zhang 2014-07-18 09:55:59 UTC
(In reply to Jiri Denemark from comment #20)
> Yeah, I think the result of step4 in scenario2 should be consistent with the
> result after pool-refresh (step 3 in scenario1). I guess running virsh
> pool-refresh LVM-pool at the end of scenario2 would make xuzhanglvm1
> disappear, right?

No, after pool-refresh, the inactive volume is still there.
And when I check the status of the inactive volume with the "lvdisplay" command, it seems its status turned to "available" after "pool-create".

# virsh pool-refresh LVM-pool
Pool LVM-pool refreshed
    

# virsh vol-list LVM-pool
Name                 Path                                    
-----------------------------------------
xuzhanglvm1          /dev/xuzhangVG/xuzhanglvm1              
xuzhanglvm2          /dev/xuzhangVG/xuzhanglvm2              
             

# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/xuzhangVG/xuzhanglvm1
  LV Name                xuzhanglvm1
  VG Name                xuzhangVG
  LV UUID                TmfOuo-ilzY-2vW3-zRTv-rjJZ-fXAT-as2qgl
  LV Write Access        read/write
  LV Creation host, time xuzhangtest3, 2014-07-18 04:36:55 -0400
  LV Status              available
  # open                 0
  LV Size                1.00 GiB
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
   
  --- Logical volume ---
  LV Path                /dev/xuzhangVG/xuzhanglvm2
  LV Name                xuzhanglvm2
  VG Name                xuzhangVG
  LV UUID                skXq1t-rZ78-8Da9-BvNJ-6uQr-sV3e-iOQBZA
  LV Write Access        read/write
  LV Creation host, time xuzhangtest3, 2014-07-18 04:36:59 -0400
  LV Status              available
  # open                 0
  LV Size                1.00 GiB
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

Comment 22 Jiri Denemark 2014-07-18 11:42:13 UTC
Hmm, interesting. John, do you have any insight into why this is happening and whether it is expected or not?

Comment 23 John Ferlan 2014-07-21 17:36:59 UTC
It would be better to see the 'lvs' output since that's what libvirt uses.

In any case, it seems I'm not seeing the same results, although I am using upstream. Here's what I have (I created/populated the lv_pool using what I've recently been working on for bz 1091866):

# lvs
  LV      VG      Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  home    fedora  -wi-ao---- 136.72g                                             
  root    fedora  -wi-ao----  50.00g                                             
  swap    fedora  -wi-ao----   7.64g                                             
  lv_test lv_pool -wi-a-----   4.00m
# virsh vol-list lv_pool
 Name                 Path                                    
------------------------------------------------------------------------------
 lv_test              /dev/lv_pool/lv_test                    

# virsh vol-info --pool lv_pool lv_test
Name:           lv_test
Type:           block
Capacity:       4.00 MiB
Allocation:     4.00 MiB                                   
# lvchange -an /dev/lv_pool/lv_test
# lvs
  LV      VG      Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  home    fedora  -wi-ao---- 136.72g                                             
  root    fedora  -wi-ao----  50.00g                                             
  swap    fedora  -wi-ao----   7.64g                                             
  lv_test lv_pool -wi-------   4.00m            
# virsh pool-refresh lv_pool
Pool lv_pool refreshed

# virsh vol-list lv_pool
 Name                 Path                                    
------------------------------------------------------------------------------

# virsh vol-info --pool lv_pool lv_test
error: failed to get vol 'lv_test'
error: Storage volume not found: no storage vol with matching path 'lv_test'
# lvdisplay
...
   
  --- Logical volume ---
  LV Path                /dev/lv_pool/lv_test
  LV Name                lv_test
  VG Name                lv_pool
  LV UUID                oZ4oAr-7U2g-EmOd-alBP-WADV-fhm9-ykKacn
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2014-07-21 13:26:17 -0400
  LV Status              NOT available
  LV Size                4.00 MiB
  Current LE             1
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
...
#

Show me what you do step by step... Then I can work on attempting to reproduce.

Comment 24 Xuesong Zhang 2014-07-22 10:00:18 UTC
Hi, John,

Here are the detailed steps so you can try to reproduce it.

The issue is: pool-create does not skip the inactive volume and, strangely, activates it.

1. Prepare one XML file like the following:
# cat LVM-pool.xml 
<pool type='logical'>
       <name>LVM-pool</name>
       <source>
         <name>xuzhangVG</name>
         <format type='lvm2'/>
         <device path='/dev/sda1'/>
       </source>
       <target>
         <path>/dev/xuzhangVG</path>
       </target>
     </pool>

2. Prepare 2 LVs on the host under the VG from step 1, and make sure one LV is active and the other is inactive.
For example:
xuzhanglvm1 status is "NOT available"
xuzhanglvm2 status is "available"

# lvs
  LV          VG        Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  xuzhanglvm1 xuzhangVG -wi------- 1.00g                                                    
  xuzhanglvm2 xuzhangVG -wi-a----- 1.00g                                        

3. Create the pool from the XML file:
# virsh pool-create LVM-pool.xml 
Pool LVM-pool created from LVM-pool.xml

# virsh pool-list 
Name                 State      Autostart 
-----------------------------------------    
LVM-pool             active     no   


4. Check the volumes of this logical pool; both LVs are listed.
# virsh vol-list LVM-pool
Name                 Path                                    
-----------------------------------------
xuzhanglvm1          /dev/xuzhangVG/xuzhanglvm1              
xuzhanglvm2          /dev/xuzhangVG/xuzhanglvm2

# lvs
  LV          VG        Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  xuzhanglvm1 xuzhangVG -wi-a----- 1.00g                                                    
  xuzhanglvm2 xuzhangVG -wi-a----- 1.00g

Comment 25 John Ferlan 2014-07-22 18:12:15 UTC
I see the same results with upstream. Whether this is a different process than the one originally described and resolved, I'm not quite sure. However, my investigation shows it's "expected".


# virsh pool-list --all
 Name                 State      Autostart 
-------------------------------------------
 default              active     no        
 images               active     yes       
 iscsi-net-pool       active     no        

# vgs
  VG      #PV #LV #SN Attr   VSize   VFree
  fedora    1   3   0 wz--n- 194.36g    0 
  lv_pool   1   0   0 wz--n-   7.32g 7.32g
# lvs
  LV   VG     Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  home fedora -wi-ao---- 136.72g                                             
  root fedora -wi-ao----  50.00g                                             
  swap fedora -wi-ao----   7.64g   
# lvcreate --name lv_test_vol1 -L 4096K lv_pool
  Logical volume "lv_test_vol1" created
# lvcreate --name lv_test_vol2 -L 4096K lv_pool
  Logical volume "lv_test_vol2" created
# lvs
  LV           VG      Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  home         fedora  -wi-ao---- 136.72g                                             
  root         fedora  -wi-ao----  50.00g                                             
  swap         fedora  -wi-ao----   7.64g                                             
  lv_test_vol1 lv_pool -wi-a-----   4.00m                                             
  lv_test_vol2 lv_pool -wi-a-----   4.00m         
# lvchange -an /dev/lv_pool/lv_test_vol1
# lvs
  LV           VG      Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  home         fedora  -wi-ao---- 136.72g                                             
  root         fedora  -wi-ao----  50.00g                                             
  swap         fedora  -wi-ao----   7.64g                                             
  lv_test_vol1 lv_pool -wi-------   4.00m                                             
  lv_test_vol2 lv_pool -wi-a-----   4.00m               
# cat lv_pool.xml
<pool type="logical">
  <name>lv_pool</name>
  <source>
      <device path="/dev/sdb1"/>
  </source>
  <target>
    <path>/dev/lv_pool</path>
  </target>
</pool>
# virsh pool-create lv_pool.xml
Pool lv_pool created from lv_pool.xml

# lvs
  LV           VG      Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  home         fedora  -wi-ao---- 136.72g                                             
  root         fedora  -wi-ao----  50.00g                                             
  swap         fedora  -wi-ao----   7.64g                                             
  lv_test_vol1 lv_pool -wi-a-----   4.00m                                             
  lv_test_vol2 lv_pool -wi-a-----   4.00m              
#

If I destroy the lv_pool, then both volumes become inactive...

# virsh pool-destroy lv_pool
Pool lv_pool destroyed

# lvs
  LV           VG      Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  home         fedora  -wi-ao---- 136.72g                                             
  root         fedora  -wi-ao----  50.00g                                             
  swap         fedora  -wi-ao----   7.64g                                             
  lv_test_vol1 lv_pool -wi-------   4.00m                                             
  lv_test_vol2 lv_pool -wi-------   4.00m
#

Of course, if I recreate the pool, both are active again. That leads me to believe pool creation is designed to activate LVs, which the code confirms: when the pool is started, virStorageBackendLogicalStartPool() calls virStorageBackendLogicalSetActive(), which runs "vgchange -aly lv_pool". This same behavior has been in the code since it was first added via commit id 'ac736602f'.
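
In other words, starting a logical pool effectively runs the following (shown as the equivalent commands rather than the literal C code; the deactivation on destroy is inferred from the behavior above):

# vgchange -aly lv_pool    (pool-create/pool-start: activate every LV in the VG locally)
# vgchange -aln lv_pool    (pool-destroy: deactivate them again)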

Comment 26 Xuesong Zhang 2014-07-23 03:05:47 UTC
It seems the result is as expected, so I'm marking this bug as verified.

Comment 28 errata-xmlrpc 2014-10-14 04:13:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-1374.html

