Bug 748437 - libvirt and clustered volume group problems
Summary: libvirt and clustered volume group problems
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Virtualization Tools
Classification: Community
Component: libvirt
Version: unspecified
Hardware: All
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Libvirt Maintainers
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-10-24 12:59 UTC by Roman
Modified: 2016-04-26 15:07 UTC
CC List: 4 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2015-03-16 16:56:28 UTC
Embargoed:


Attachments
libvirt-storage-fix-logical.patch (3.27 KB, patch)
2011-10-24 13:00 UTC, Roman
no flags

Description Roman 2011-10-24 12:59:43 UTC
Description of problem:
a) libvirt should use vgchange -aly/-aln instead of vgchange -ay/-an for clustered volume groups (see the illustration below)
b) libvirt should skip volumes that are not activated (or are suspended) during pool-create/pool-refresh for logical pools
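
For illustration, the difference between the two activation modes (assuming a clustered volume group named vg managed by clvmd; the name is only an example):

# vgchange -ay vg     (cluster-wide activation: clvmd activates the LVs on every node)
# vgchange -an vg     (cluster-wide deactivation: fails if another node still has an LV in use)
# vgchange -aly vg    (local activation: affects only the node running the command)
# vgchange -aln vg    (local deactivation: other nodes keep their LVs active)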

Version-Release number of selected component (if applicable):
libvirt-0.9.6

How reproducible:
always

Steps to Reproduce:

a) vgchange -ay/-an
1. Build a 2-node cluster
2. Create a clustered volume group
3. Create a logical storage pool in libvirt
4. Open/mount any volume on the first node
5. Try to destroy the storage pool on the second node (example commands below)
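
A possible command sequence for these steps (the device path /dev/sdb, VG/pool name vg, and LV name lv1 are illustrative; clvmd must be running and the shared storage visible on both nodes):

node1# vgcreate -cy vg /dev/sdb        (-cy marks the volume group as clustered)
node1# lvcreate -n lv1 -L 1G vg
node1# virsh pool-create vg.xml        (vg.xml defines a pool of type "logical" backed by vg)
node2# virsh pool-create vg.xml
node1# mkfs.ext4 /dev/vg/lv1
node1# mount /dev/vg/lv1 /mnt
node2# virsh pool-destroy vg           (fails as shown under "Actual results")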

b) not activated volumes:
1. Build a 2-node cluster
2. Create a clustered volume group
3. Exclusively activate any volume on the first node
4. Try to create the pool on the second node (example commands below)
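
A possible command sequence for these steps (again with illustrative names; lv1 must already exist in vg):

node1# lvchange -aey vg/lv1            (exclusive activation: the LV is locked to node1 and cannot be activated elsewhere)
node2# virsh pool-create vg.xml        (fails as shown under "Actual results")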

Actual results:

a) vgchange -ay/-an
# virsh pool-destroy vg
error: Failed to destroy pool vg
error: internal error '/sbin/vgchange -an vg' exited with non-zero status 5 and
signal 0:   Error locking on node node1: LV vg/lv1 in use: not deactivating

b) not activated volumes:
# virsh pool-create vg.xml 
error: Failed to create pool from vg.xml
error: internal error lvs command failed

Expected results:

a) vgchange -ay/-an
# virsh pool-destroy vg
Pool vg destroyed

b) not activated volumes:
# virsh pool-create vg.xml 
Pool vg created from vg.xml
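
For reference, the vg.xml used above could look roughly like this (a minimal sketch; the VG name and target path are assumptions matching the examples above):

<pool type='logical'>
  <name>vg</name>
  <source>
    <name>vg</name>
    <format type='lvm2'/>
  </source>
  <target>
    <path>/dev/vg</path>
  </target>
</pool>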

Additional info:
The attached patch fixes the problem for me.
Separate bugs have been filed for each issue:
a) https://bugzilla.redhat.com/show_bug.cgi?id=748248
b) https://bugzilla.redhat.com/show_bug.cgi?id=748282

Comment 1 Roman 2011-10-24 13:00:21 UTC
Created attachment 529864 [details]
libvirt-storage-fix-logical.patch

Comment 2 Eric Blake 2011-10-24 15:31:39 UTC
Can you please forward this patch upstream to libvir-list, for incorporation there?

Comment 3 Roman 2011-10-24 17:07:01 UTC
I forwarded this patch to libvir-list.

Comment 4 Osier Yang 2011-12-12 07:54:16 UTC
This should be closed, as two separate bugs have been filed. See BZ 748248 and BZ 748282.

Comment 5 Ján Tomko 2015-03-16 16:56:28 UTC
Per comment 4, this should be fixed upstream now:
commit 95ab4154178e41f92ebb16a2379c1ac6f99e6a89
Author: Rommer <rommer>
Date:   Mon Dec 12 15:40:52 2011 +0800

    storage: Activate/deactivate logical volumes only on local node
    
    Current "-ay | -an" has problems on pool starting/refreshing if
    the volumes are clustered. Rommer has posted a patch to list 2
    months ago.
    
    https://www.redhat.com/archives/libvir-list/2011-October/msg01116.html
    
    But IMO we shouldn't skip the inactived vols. So this is a squashed
    patch by Rommer.
    
    Signed-off-by: Rommer <rommer>


Fixed upstream by v1.0.5-47-g59750ed:

commit 59750ed6ea12c4db5ba042fef1a39b963cbfb559
Author: Osier Yang <jyang>
Date:   Tue May 7 18:29:29 2013 +0800

    storage: Skip inactive lv volumes
    
    If the volume is of a clustered volume group, and not active, the
    related pool APIs fails on opening /dev/vg/lv. If the volume is
    suspended, it hangs on open(2) the volume.
    
    Though the best solution is to expose the volume status in volume
    XML, and even better to provide API to activate/deactivate the volume,
    but it's not the work I want to touch currently. Volume status in
    other status is just fine to skip.
    
    About the 5th field of lv_attr (from man lvs[8])
    <quote>
 5 State: (a)ctive, (s)uspended, (I)nvalid snapshot, invalid
   (S)uspended snapshot, snapshot (m)erge failed, suspended
   snapshot (M)erge failed, mapped (d)evice present without
   tables, mapped device present with (i)nactive table
    </quote>
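
The state that gets a volume skipped can be checked by hand with lvs; for example (output is illustrative):

# lvs -o vg_name,lv_name,lv_attr vg
  VG   LV   Attr
  vg   lv1  -wi-a-----    (5th attr character "a": active, the volume is reported by the pool)
  vg   lv2  -wi-------    (5th attr character "-": not active on this node, skipped after the fix)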

