Bug 748248
| Summary: | libvirt should use vgchange -aly/-aln instead of vgchange -ay/-an for clustered volume groups | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Roman <rommer> |
| Component: | libvirt | Assignee: | Osier Yang <jyang> |
| Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | high | Priority: | medium |
| Version: | 6.1 | CC: | acathrow, ajia, dallan, eblake, mzhan, rwu, whuang |
| Target Milestone: | rc | Target Release: | --- |
| Hardware: | Unspecified | OS: | Unspecified |
| Fixed In Version: | libvirt-0.9.9-1.el6 | Doc Type: | Bug Fix |
| Doc Text: | No documentation needed. | Story Points: | --- |
| Last Closed: | 2012-06-20 06:35:31 UTC | Type: | --- |
Description

Roman 2011-10-23 14:58:02 UTC

See also bug 748437 for a proposed patch.

Patch committed to upstream:
commit 95ab4154178e41f92ebb16a2379c1ac6f99e6a89
Author: Rommer <rommer>
Date: Mon Dec 12 15:40:52 2011 +0800
storage: Activate/deactivate logical volumes only on local node
Current "-ay | -an" has problems on pool starting/refreshing if
the volumes are clustered. Rommer posted a patch to the list 2
months ago:
https://www.redhat.com/archives/libvir-list/2011-October/msg01116.html
But IMO we shouldn't skip the inactive vols, so this is a squashed
patch by Rommer.
Signed-off-by: Rommer <rommer>
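The fix swaps cluster-wide activation for node-local activation. A minimal dry-run sketch of the flag change (the VG name is hypothetical, and the commands are built as strings rather than executed, since activating a clustered VG needs real shared storage):

```shell
# Sketch of the vgchange flags libvirt used before and after the fix.
VG=vg  # hypothetical volume group name

old_activate="vgchange -ay $VG"    # before: activates the VG on every cluster node
old_deactivate="vgchange -an $VG"  # before: deactivates it everywhere
new_activate="vgchange -aly $VG"   # after: activates on the local node only
new_deactivate="vgchange -aln $VG" # after: deactivates on the local node only

printf '%s\n' "$new_activate" "$new_deactivate"
```

With `-aly`/`-aln`, one host starting or destroying a pool no longer flips the activation state of logical volumes that another host may have mounted.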
*** Bug 748282 has been marked as a duplicate of this bug. ***

Verified this bug with:
libvirt-0.9.9-1.el6.x86_64

1) host1 and host2 access the same LVM volume group over iSCSI
2) define the volume group as a libvirt logical pool on both hosts:
<pool type='logical'>
<name>vg</name>
<uuid>7092394e-d5ee-301d-f2cc-21c0c0bb51a1</uuid>
<capacity>1044381696</capacity>
<allocation>104857600</allocation>
<available>939524096</available>
<source>
<device path='/dev/sdf1'/>
<name>vg</name>
<format type='lvm2'/>
</source>
<target>
<path>/dev/vg</path>
<permissions>
<mode>0700</mode>
<owner>-1</owner>
<group>-1</group>
</permissions>
</target>
</pool>
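The XML above can be loaded on each host with the usual pool commands. A hedged sketch (the file name `vg-pool.xml` is an assumption; `run` echoes instead of executing, so the sketch is safe without a libvirt host):

```shell
# Dry-run sketch of defining and starting the pool on host1 and host2.
# On a real host, replace the echo with "$@" to actually execute.
run() { echo "+ $*"; }

run virsh pool-define vg-pool.xml      # hypothetical file holding the XML above
run virsh pool-start vg
run virsh vol-create-as vg ctest 100M  # the 100 MiB LV used in step 3
```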
3) start the pool and create a logical volume, then list it:
# virsh vol-list vg
Name Path
-----------------------------------------
ctest /dev/vg/ctest
4) on host1, mount the volume:
# mount /dev/vg/ctest /media
5) on host2, destroy the pool:
# virsh pool-destroy vg
Pool vg destroyed
On host2, check that the LV is no longer active:
# lvs
LV VG Attr LSize Origin Snap% Move Log Copy% Convert
ctest vg -wi--- 100.00m
On host1, check that the LV is still active (and open, since it is mounted):
# lvs
LV VG Attr LSize Origin Snap% Move Log Copy% Convert
ctest vg -wi-ao 100.00m
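The `lvs` Attr strings above encode the state the verification checks for: the 5th character is the activation state (`a` = active) and the 6th is the open state (`o` = open, e.g. mounted). A minimal sketch of reading those bits, using the two attribute strings from the transcript:

```shell
# Interpret the lvs "Attr" column: char 5 = active, char 6 = open.
is_active() { [ "$(printf %s "$1" | cut -c5)" = "a" ]; }
is_open()   { [ "$(printf %s "$1" | cut -c6)" = "o" ]; }

is_active "-wi-ao" && echo "host1: ctest still active"   # the mounted copy
is_active "-wi---" || echo "host2: ctest deactivated"    # after pool-destroy
```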
Technical note added. If any revisions are required, please edit the "Technical Notes" field accordingly. All revisions will be proofread by the Engineering Content Services team.

New Contents:
No documentation needed.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2012-0748.html