Bug 1025230
| Summary: | libvirt activates pool with invalid source | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Hao Liu <hliu> |
| Component: | libvirt | Assignee: | John Ferlan <jferlan> |
| Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 7.0 | CC: | dyuan, eblake, hliu, lsu, mzhan, rbalakri, yanyang, yisun |
| Target Milestone: | rc | | |
| Target Release: | 7.0 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | libvirt-1.3.1-1.el7 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1025232 | Environment: | |
| Last Closed: | 2016-11-03 18:06:49 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1025232 | | |
Description
Hao Liu, 2013-10-31 09:50:30 UTC
The issue can also be reproduced with a netfs type pool.

```
# virsh pool-dumpxml netfs-1
<pool type='netfs'>
  <name>netfs-1</name>
  <uuid>9e28c716-ae4a-4efd-aba1-91e95bd5285e</uuid>
  <capacity unit='bytes'>1063256064</capacity>
  <allocation unit='bytes'>33722368</allocation>
  <available unit='bytes'>1029533696</available>
  <source>
    <host name='rhel7.redhat.com'/>
    <dir path='/var/lib/libvirt/images'/>
    <format type='nfs'/>
  </source>
  <target>
    <path>/opt</path>
    <permissions>
      <mode>0700</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>

# virsh pool-list --all
 Name                 State      Autostart
-------------------------------------------
 netfs-1              inactive   no

# mount /dev/sda7 /opt
# service libvirtd restart

# virsh pool-list --all
 Name                 State      Autostart
-------------------------------------------
 netfs-1              active     no
```

The pool netfs-1 is activated when libvirtd starts up because the target directory /opt is already mounted from some other source.

The issue can also be reproduced with a logical pool. A logical pool that uses a non-existent disk as its source device is activated when libvirtd starts up if the LV has been activated manually.

Steps to reproduce:

1. Prepare an inactive logical pool that uses a non-existent source device:

```
# virsh pool-list --all
 Name                 State      Autostart
-------------------------------------------
 HostVG               inactive   no

# virsh pool-dumpxml HostVG
<pool type='logical'>
  <name>HostVG</name>
  <uuid>5f8bb99c-716a-4233-a359-b9788e97fa35</uuid>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <device path='/dev/sde'/>
    <name>HostVG</name>
    <format type='lvm2'/>
  </source>
  <target>
    <path>/dev/HostVG</path>
    <permissions>
      <mode>0755</mode>
      <owner>-1</owner>
      <group>-1</group>
    </permissions>
  </target>
</pool>

# lsblk | grep sde
```

2. Create the volume group HostVG and an LV, then activate the LV:

```
# vgs
  VG     #PV #LV #SN Attr   VSize VFree
  HostVG   2   1   0 wz--n- 1.97g 1.87g
# lvs
  LV   VG     Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  vol1 HostVG -wi------- 100.00m
# vgchange -aly HostVG
  1 logical volume(s) in volume group "HostVG" now active
# lvs
  LV   VG     Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  vol1 HostVG -wi-a----- 100.00m
```

3. Restart libvirtd:

```
# service libvirtd restart
Redirecting to /bin/systemctl restart libvirtd.service
```

4. Check the pool status:

```
# virsh pool-list --all
 Name                 State      Autostart
-------------------------------------------
 HostVG               active     no
```

*** Bug 1180084 has been marked as a duplicate of this bug. ***

Posted some patches upstream to handle the issues: http://www.redhat.com/archives/libvir-list/2015-December/msg00270.html

The first 3 patches in the series deal with the FS/NFS issues, while the last 2 patches deal with the logical issue. As a reminder for future BZs: try to file separate BZs for separate backends; FS/NFS is considered one backend, while logical is another.

Patches pushed upstream.

FS/NFS:

```
$ git describe dae7007d6e445060afd987b14cc7431b67d60bed
CVE-2015-5313-16-gdae7007
$
```

Logical:

```
$ git describe 71b803ac9a9cadacf6eaca2028bbcebd05050a77
CVE-2015-5313-18-g71b803a
$
```

The logical patch resulted in a regression noted here: http://www.redhat.com/archives/libvir-list/2015-December/msg00656.html (follow the followups)

Issue resolved, patch pushed:

```
commit 8c865052b98f927fb3cc2d043e7ffff6fdcb2be9
Author: John Ferlan <jferlan>
Date:   Wed Dec 16 11:54:04 2015 -0500

    storage: Fix startup issue for logical pool

    Commit id '71b803ac' assumed that the storage pool source device path
    was required for a 'logical' pool. This resulted in a failure to start
    a pool without any device path defined. So, adjust the
    virStorageBackendLogicalMatchPoolSource logic to return success if at
    least the pool name matches the vgs output when no pool source device
    path is/are provided.
```
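The comparison described in the commit message can be approximated from the shell: put the pool's configured source device next to the physical volumes that actually back the volume group. This is a minimal sketch only, assuming the HostVG pool defined above; it mimics the kind of check being described and is not the libvirt implementation.

```sh
# Illustrative sketch (not libvirt code): approximate the source check for a
# 'logical' pool. Pool name and device path are taken from the HostVG XML above.
POOL_NAME=HostVG
POOL_SRC_DEV=/dev/sde      # leave empty to model a pool with no <device> path

# Physical volumes actually backing the volume group, as reported by LVM.
ACTUAL_PVS=$(vgs --noheadings -o pv_name "$POOL_NAME" 2>/dev/null | tr -d ' ')

if [ -z "$ACTUAL_PVS" ]; then
    echo "no volume group named $POOL_NAME: pool cannot be active"
elif [ -z "$POOL_SRC_DEV" ]; then
    echo "no source device configured: matching on the VG name alone is enough"
elif printf '%s\n' "$ACTUAL_PVS" | grep -qx "$POOL_SRC_DEV"; then
    echo "$POOL_SRC_DEV backs $POOL_NAME: pool source matches"
else
    echo "$POOL_NAME exists but is not backed by $POOL_SRC_DEV: pool should stay inactive"
fi
```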
Verified on libvirt-1.3.2-1.el7.x86_64. PASSED.

========= fs pool ============

```
1. # virsh pool-dumpxml fs
<pool type='fs'>
  <name>fs</name>
  <uuid>3a4312ff-bb48-4af3-acd9-19174a6a0d6e</uuid>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <device path='/dev/notexist'/>
    <format type='ext3'/>
  </source>
  <target>
    <path>/mnt</path>
    <permissions>
      <mode>0777</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>

2. # virsh pool-start fs
error: Failed to start pool fs
error: internal error: Child process (/usr/bin/mount -t ext3 /dev/notexist /mnt) unexpected exit status 32:
2016-03-10 09:54:23.892+0000: 28073: debug : virFileClose:103 : Closed fd 25
2016-03-10 09:54:23.892+0000: 28073: debug : virFileClose:103 : Closed fd 27
2016-03-10 09:54:23.892+0000: 28073: debug : virFileClose:103 : Closed fd 23
mount: special device /dev/notexist does not exist

3. # mount /dev/sde /mnt

4. # service libvirtd restart
Redirecting to /bin/systemctl restart libvirtd.service

5. # virsh pool-list --all
 Name                 State      Autostart
-------------------------------------------
 default              active     no
 fs                   inactive   no        <=== not started

6. # virsh pool-start fs
error: Failed to start pool fs
error: internal error: Child process (/usr/bin/mount -t ext3 /dev/notexist /mnt) unexpected exit status 32:
2016-03-10 09:54:23.892+0000: 28073: debug : virFileClose:103 : Closed fd 25
2016-03-10 09:54:23.892+0000: 28073: debug : virFileClose:103 : Closed fd 27
2016-03-10 09:54:23.892+0000: 28073: debug : virFileClose:103 : Closed fd 23
mount: special device /dev/notexist does not exist

7. # virsh pool-autostart fs
Pool fs marked as autostarted

8. # service libvirtd restart
Redirecting to /bin/systemctl restart libvirtd.service

9. # virsh pool-list --all
 Name                 State      Autostart
-------------------------------------------
 default              active     no
 fs                   inactive   yes
```
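For the FS/NFS side, the behavior being verified comes down to one question: is the filesystem mounted at the pool target actually the pool's configured source? A minimal shell sketch of that check, using the target and source from the fs pool above (illustrative only, not the libvirt implementation):

```sh
# Illustrative sketch: check whether what is mounted at the pool target is
# really the pool's configured source. Values taken from the 'fs' pool XML above.
TARGET=/mnt
POOL_SRC=/dev/notexist

# Source of the filesystem mounted exactly at the target, if any.
MOUNTED_SRC=$(findmnt -n -o SOURCE --mountpoint "$TARGET")

if [ -z "$MOUNTED_SRC" ]; then
    echo "nothing is mounted at $TARGET: pool is not active"
elif [ "$MOUNTED_SRC" = "$POOL_SRC" ]; then
    echo "$TARGET is mounted from the pool source ($POOL_SRC): pool can be reported active"
else
    echo "$TARGET is mounted from '$MOUNTED_SRC', not $POOL_SRC: pool must not be reported active"
fi
```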
================ logical pool ================

```
1. # virsh pool-dumpxml HostVG
<pool type='logical'>
  <name>HostVG</name>
  <uuid>5f8bb99c-716a-4233-a359-b9788e97fa35</uuid>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <device path='/dev/sdz'/>          <==== sdz not existing
    <name>HostVG</name>
    <format type='lvm2'/>
  </source>
  <target>
    <path>/dev/HostVG</path>
    <permissions>
      <mode>0755</mode>
    </permissions>
  </target>
</pool>

2. # vgs
  VG     #PV #LV #SN Attr   VSize    VFree
  HostVG   1   0   0 wz--n- 1020.00m 1020.00m

3. # lvs
  LV    VG     Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  lvol0 HostVG -wi-a----- 100.00m

4. # vgchange -aly HostVG
  1 logical volume(s) in volume group "HostVG" now active

5. # virsh pool-list --all
 Name                 State      Autostart
-------------------------------------------
 default              active     no
 fs                   inactive   yes
 HostVG               inactive   yes

6. # virsh pool-autostart HostVG
Pool HostVG marked as autostarted

7. # service libvirtd restart
Redirecting to /bin/systemctl restart libvirtd.service

8. # virsh pool-list --all
 Name                 State      Autostart
-------------------------------------------
 default              active     no
 fs                   inactive   yes
 HostVG               inactive   yes

9. # virsh pool-start HostVG
error: Failed to start pool HostVG
error: unsupported configuration: cannot find any matching source devices for logical volume group 'HostVG'
```

============= nfs pool ==============

```
1. # virsh pool-dumpxml nfs
<pool type='netfs'>
  <name>nfs</name>
  <uuid>e01e44dc-cb67-46df-bc12-33e83aaa00a1</uuid>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <host name='nowhere on line'/>
    <dir path='/vol/S3/libvirtmanual/yiyi'/>
    <format type='nfs'/>
  </source>
  <target>
    <path>/mnt</path>
    <permissions>
      <mode>0700</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>

2. # mount /dev/sde /mnt

3. # service libvirtd restart

4. # virsh pool-list --all
 Name                 State      Autostart
-------------------------------------------
 default              active     no
 fs                   inactive   yes
 HostVG               inactive   yes
 nfs                  inactive   no

5. # virsh pool-autostart nfs
Pool nfs marked as autostarted

6. # service libvirtd restart
Redirecting to /bin/systemctl restart libvirtd.service

7. # virsh pool-list --all
 Name                 State      Autostart
-------------------------------------------
 default              active     no
 fs                   inactive   yes
 HostVG               inactive   yes
 nfs                  inactive   yes
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2577.html
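The restart-and-recheck cycle repeated in the verification above can also be scripted. A minimal sketch, assuming the pool names fs, HostVG and nfs used in the steps above:

```sh
# Convenience sketch: after a libvirtd restart, confirm that the pools with
# invalid sources are still reported as inactive. Pool names are the ones
# used in the verification steps above (assumptions, not fixed names).
systemctl restart libvirtd

for pool in fs HostVG nfs; do
    # virsh pool-info prints a "State:" line, e.g. "State: inactive".
    state=$(virsh pool-info "$pool" | awk '/^State:/ {print $2}')
    if [ "$state" = "running" ]; then
        echo "FAIL: pool $pool came up active after the restart"
    else
        echo "OK: pool $pool is $state"
    fi
done
```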