Red Hat Bugzilla – Bug 748248
libvirt should use vgchange -aly/-aln instead of vgchange -ay/-an for clustered volume groups
Last modified: 2012-06-20 02:35:31 EDT
Description of problem:
libvirt should use vgchange -aly/-aln instead of vgchange -ay/-an for clustered volume groups

Version-Release number of selected component (if applicable):
libvirt-0.8.7-18.el6_1.1

How reproducible:
always

Steps to Reproduce:
1. Build a 2-node cluster
2. Create a clustered volume group
3. Create a logical storage pool in libvirt
4. Open/mount any volume on the first node
5. Try to destroy the storage pool on the second node

Actual results:
# virsh pool-destroy vg
error: Failed to destroy pool vg
error: internal error '/sbin/vgchange -an vg' exited with non-zero status 5 and signal 0: Error locking on node node1: LV vg/lv1 in use: not deactivating

Expected results:
# virsh pool-destroy vg
Pool vg destroyed

Additional info:
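For reference, a minimal illustration of the difference (using the vg/lv1 names from the report and assuming clvmd is running for the clustered VG):

  # vgchange -an vg     <- cluster-wide: clvmd asks every node to deactivate, and fails while node1 has vg/lv1 open
  # vgchange -aln vg    <- local only: deactivates on this node, unaffected by LVs open on other nodes
  # vgchange -aly vg    <- local only: activates on this node without touching the rest of the cluster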
see also bug 748437 for a proposed patch
Patch committed to upstream.

commit 95ab4154178e41f92ebb16a2379c1ac6f99e6a89
Author: Rommer <rommer@active.by>
Date:   Mon Dec 12 15:40:52 2011 +0800

    storage: Activate/deactivate logical volumes only on local node

    Current "-ay | -an" has problems on pool starting/refreshing if the
    volumes are clustered. Rommer has posted a patch to list 2 months ago.

    https://www.redhat.com/archives/libvir-list/2011-October/msg01116.html

    But IMO we shouldn't skip the inactived vols. So this is a squashed
    patch by Rommer.

    Signed-off-by: Rommer <rommer@active.by>
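In effect, the external commands libvirt runs when starting/refreshing and destroying a logical pool change roughly as follows (a simplified sketch of the command line, not the patch itself):

  before:  /sbin/vgchange -ay vg    and    /sbin/vgchange -an vg
  after:   /sbin/vgchange -aly vg   and    /sbin/vgchange -aln vg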
*** Bug 748282 has been marked as a duplicate of this bug. ***
Verified this bug with: libvirt-0.9.9-1.el6.x86_64

1) host1 and host2 use the same lvm via iscsi

2) define the lvm pool in libvirt on both hosts:

<pool type='logical'>
  <name>vg</name>
  <uuid>7092394e-d5ee-301d-f2cc-21c0c0bb51a1</uuid>
  <capacity>1044381696</capacity>
  <allocation>104857600</allocation>
  <available>939524096</available>
  <source>
    <device path='/dev/sdf1'/>
    <name>vg</name>
    <format type='lvm2'/>
  </source>
  <target>
    <path>/dev/vg</path>
    <permissions>
      <mode>0700</mode>
      <owner>-1</owner>
      <group>-1</group>
    </permissions>
  </target>
</pool>

3) start the pool and create a lv:

# virsh vol-list vg
Name                 Path
-----------------------------------------
ctest                /dev/vg/ctest

4) on host1, mount the volume:

# mount /dev/vg/ctest /media

5) on host2, destroy the pool:

# virsh pool-destroy vg
Pool vg destroyed

On host2, check that the lv is NOT active:

# lvs
  LV    VG   Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  ctest vg   -wi--- 100.00m

On host1, check that the lv is still active:

# lvs
  LV    VG   Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  ctest vg   -wi-ao 100.00m

(In the lvs Attr column, the fifth character 'a' marks an active LV and the sixth character 'o' marks an open/mounted one, so "-wi-ao" on host1 versus "-wi---" on host2 confirms the deactivation stayed local to host2.)
Technical note added. If any revisions are required, please edit the "Technical Notes" field accordingly. All revisions will be proofread by the Engineering Content Services team. New Contents: No documentation needed.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHSA-2012-0748.html