Bug 1025230 - libvirt activates a pool with an invalid source
Summary: libvirt activates a pool with an invalid source
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 7.0
Assignee: John Ferlan
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Duplicates: 1180084
Depends On:
Blocks: 1025232
 
Reported: 2013-10-31 09:50 UTC by Hao Liu
Modified: 2016-11-03 18:06 UTC
CC List: 8 users

Fixed In Version: libvirt-1.3.1-1.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1025232
Environment:
Last Closed: 2016-11-03 18:06:49 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID: Red Hat Product Errata RHSA-2016:2577
Priority: normal
Status: SHIPPED_LIVE
Summary: Moderate: libvirt security, bug fix, and enhancement update
Last Updated: 2016-11-03 12:07:06 UTC

Description Hao Liu 2013-10-31 09:50:30 UTC
Description of problem:
libvirt activates a pool with an invalid source.

Version-Release number of selected component (if applicable):
Red Hat Enterprise Linux Server release 7.0 Beta
libvirt-1.1.1-10.el7.x86_64

How reproducible:
always

Steps to reproduce:
1. Define a pool with an invalid source.
# cat test_pool.xml 
<pool type='fs'>
  <name>test_pool</name>
  <source>
    <device path='/dev/notexist'/>
    <format type='ext3'/>
  </source>
  <target>
    <path>/mnt</path>
    <permissions>
      <mode>0777</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>

# virsh pool-define test_pool.xml

2. Try to start the pool.
# virsh pool-start test_pool
error: Failed to start pool test_pool
error: internal error: Child process (/usr/bin/mount -t ext3 /dev/notexist /mnt) unexpected exit status 32: mount: special device /dev/notexist does not exist

# virsh pool-list --all
Name                 State      Autostart 
-----------------------------------------
default              active     yes       
test_pool            inactive   no   

3. Mount a valid fs to target path.
# mount /dev/sda1 /mnt

4. Restart libvirt daemon.
# service libvirtd restart

5. The invalid pool is active.
# virsh pool-list --all
Name                 State      Autostart 
-----------------------------------------
default              active     yes       
test_pool            active     no    

Expected result:
When the libvirt daemon starts, the source of each pool should be checked, and a pool with an invalid source should not be activated.
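
For reference, the mismatch can be confirmed by hand (illustrative commands, not part of libvirt): compare what is actually mounted on the target path with the source device defined in the pool XML.

# findmnt --target /mnt -o SOURCE,FSTYPE
# ls -l /dev/notexist

With /dev/sda1 mounted on /mnt in step 3, findmnt reports that device rather than the pool's /dev/notexist, which suggests the startup check only considers whether the target path is mounted, not what is mounted there.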

Comment 4 Yang Yang 2015-01-12 03:09:57 UTC
The issue can also be reproduced with a netfs-type pool.

# virsh pool-dumpxml netfs-1
<pool type='netfs'>
  <name>netfs-1</name>
  <uuid>9e28c716-ae4a-4efd-aba1-91e95bd5285e</uuid>
  <capacity unit='bytes'>1063256064</capacity>
  <allocation unit='bytes'>33722368</allocation>
  <available unit='bytes'>1029533696</available>
  <source>
    <host name='rhel7.redhat.com'/>
    <dir path='/var/lib/libvirt/images'/>
    <format type='nfs'/>
  </source>
  <target>
    <path>/opt</path>
    <permissions>
      <mode>0700</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>

# virsh pool-list --all
 Name                 State      Autostart 
-------------------------------------------
netfs-1              inactive   no

# mount /dev/sda7 /opt

# service libvirtd restart

# virsh pool-list --all
 Name                 State      Autostart 
-------------------------------------------
netfs-1              active     no


The pool netfs-1 becomes active when libvirtd starts up if the target dir /opt is already mounted from some other source.
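
An illustrative manual check (not libvirt code): for a netfs pool, the filesystem mounted on the target path should be of type nfs with a host:path source.

# findmnt --target /opt -o SOURCE,FSTYPE

Here the SOURCE and FSTYPE reported for /opt come from the local device /dev/sda7 rather than rhel7.redhat.com:/var/lib/libvirt/images over nfs, yet the pool is still reported as active after the libvirtd restart.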

Comment 5 Yang Yang 2015-01-12 09:22:02 UTC
The issue can also be reproduced with a logical pool. A logical pool that uses a non-existent disk as its source device becomes active when libvirtd starts up if the LV has been activated manually. An illustrative manual check is shown after the steps below.

Steps to reproduce
1. Prepare an inactive logical pool using a non-existent source device
# virsh pool-list --all
 Name                 State      Autostart 
-------------------------------------------
HostVG               inactive   no

# virsh pool-dumpxml HostVG
<pool type='logical'>
  <name>HostVG</name>
  <uuid>5f8bb99c-716a-4233-a359-b9788e97fa35</uuid>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <device path='/dev/sde'/>
    <name>HostVG</name>
    <format type='lvm2'/>
  </source>
  <target>
    <path>/dev/HostVG</path>
    <permissions>
      <mode>0755</mode>
      <owner>-1</owner>
      <group>-1</group>
    </permissions>
  </target>
</pool>

# lsblk | grep sde

2. Create volume group HostVG and an LV, then activate the LV
# vgs
  VG     #PV #LV #SN Attr   VSize VFree
  HostVG   2   1   0 wz--n- 1.97g 1.87g
# lvs
  LV   VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  vol1 HostVG -wi------- 100.00m
# vgchange -aly HostVG
  1 logical volume(s) in volume group "HostVG" now active

# lvs
  LV   VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  vol1 HostVG -wi-a----- 100.00m

3. restart libvirtd
# service libvirtd restart
Redirecting to /bin/systemctl restart  libvirtd.service

4. check the pool status
# virsh pool-list --all
 Name                 State      Autostart 
-------------------------------------------
HostVG               active     no
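
Illustrative manual check (not libvirt code) of the source mismatch mentioned above: list the physical volumes that actually back the volume group and compare them with the pool's defined source device.

# pvs -o pv_name,vg_name | grep HostVG

The PVs reported for HostVG do not include the pool's /dev/sde (which does not exist on this host), so the pool should not be considered startable; this is roughly the comparison the later fix performs for logical pools.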

Comment 6 Yang Yang 2015-01-12 10:06:45 UTC
*** Bug 1180084 has been marked as a duplicate of this bug. ***

Comment 9 John Ferlan 2015-12-08 11:23:33 UTC
Posted some patches upstream to handle the issues:

http://www.redhat.com/archives/libvir-list/2015-December/msg00270.html

The first 3 patches in the series deal with the FS/NFS issues, while the last 2 patches deal with the logical issue.

As a reminder for future BZs: try to file separate BZs for separate backends; FS/NFS is considered one backend, while logical is another.

Comment 10 John Ferlan 2015-12-15 19:38:33 UTC
Patches pushed upstream

FS/NFS:

$ git describe dae7007d6e445060afd987b14cc7431b67d60bed
CVE-2015-5313-16-gdae7007
$

Logical:

$ git describe 71b803ac9a9cadacf6eaca2028bbcebd05050a77
CVE-2015-5313-18-g71b803a
$

Comment 11 John Ferlan 2015-12-17 13:29:19 UTC
The logical pool patch resulted in a regression, noted here:

http://www.redhat.com/archives/libvir-list/2015-December/msg00656.html

(follow the follow-ups)

Issue resolved, patch pushed:

commit 8c865052b98f927fb3cc2d043e7ffff6fdcb2be9
Author: John Ferlan <jferlan>
Date:   Wed Dec 16 11:54:04 2015 -0500

    storage: Fix startup issue for logical pool
    
    Commit id '71b803ac' assumed that the storage pool source device path
    was required for a 'logical' pool. This resulted in a failure to start
    a pool without any device path defined.
    
    So, adjust the virStorageBackendLogicalMatchPoolSource logic to
    return success if at least the pool name matches the vgs output
    when no pool source device path is/are provided.
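
In shell terms (an illustrative equivalent, not the actual libvirt code), when the pool XML defines no source device path, the adjusted logic only requires the pool name to show up in the vgs output. Using this bug's HostVG pool as an example:

# vgs --noheadings -o vg_name | grep -w HostVG

If a volume group exists under that name, the pool may start; when source device paths are defined, they are still matched against the VG's physical volumes as in commit '71b803ac'.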

Comment 13 yisun 2016-03-10 10:42:04 UTC
verified on libvirt-1.3.2-1.el7.x86_64
PASSED


========= fs pool ============
1. # virsh pool-dumpxml fs
<pool type='fs'>
  <name>fs</name>
  <uuid>3a4312ff-bb48-4af3-acd9-19174a6a0d6e</uuid>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <device path='/dev/notexist'/>
    <format type='ext3'/>
  </source>
  <target>
    <path>/mnt</path>
    <permissions>
      <mode>0777</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>

2. # virsh pool-start fs
error: Failed to start pool fs
error: internal error: Child process (/usr/bin/mount -t ext3 /dev/notexist /mnt) unexpected exit status 32: 2016-03-10 09:54:23.892+0000: 28073: debug : virFileClose:103 : Closed fd 25
2016-03-10 09:54:23.892+0000: 28073: debug : virFileClose:103 : Closed fd 27
2016-03-10 09:54:23.892+0000: 28073: debug : virFileClose:103 : Closed fd 23
mount: special device /dev/notexist does not exist

3. # mount /dev/sde /mnt
4. # service libvirtd restart
Redirecting to /bin/systemctl restart  libvirtd.service

5. # virsh pool-list --all
 Name                 State      Autostart 
-------------------------------------------
 default              active     no        
 fs                   inactive   no          <=== not started
  


6. # virsh pool-start fs
error: Failed to start pool fs
error: internal error: Child process (/usr/bin/mount -t ext3 /dev/notexist /mnt) unexpected exit status 32: 2016-03-10 09:54:23.892+0000: 28073: debug : virFileClose:103 : Closed fd 25
2016-03-10 09:54:23.892+0000: 28073: debug : virFileClose:103 : Closed fd 27
2016-03-10 09:54:23.892+0000: 28073: debug : virFileClose:103 : Closed fd 23
mount: special device /dev/notexist does not exist

7. # virsh pool-autostart fs
Pool fs marked as autostarted

8. # service libvirtd restart
Redirecting to /bin/systemctl restart  libvirtd.service

9. # virsh pool-list --all
 Name                 State      Autostart 
-------------------------------------------
 default              active     no        
 fs                   inactive   yes       
      


================ logical pool ================
1. # virsh pool-dumpxml HostVG
 <pool type='logical'>
  <name>HostVG</name>
  <uuid>5f8bb99c-716a-4233-a359-b9788e97fa35</uuid>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <device path='/dev/sdz'/>   <==== sdz not existing
    <name>HostVG</name>
    <format type='lvm2'/>
  </source>
  <target>
    <path>/dev/HostVG</path>
    <permissions>
      <mode>0755</mode>
    </permissions>
  </target>
</pool>

2. # vgs
  VG     #PV #LV #SN Attr   VSize    VFree   
  HostVG   1   0   0 wz--n- 1020.00m 1020.00m

3. # lvs
  LV    VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lvol0 HostVG -wi-a----- 100.00m   

4. # vgchange -aly HostVG
  1 logical volume(s) in volume group "HostVG" now active


5. # virsh pool-list --all
 Name                 State      Autostart 
-------------------------------------------
 default              active     no        
 fs                   inactive   yes       
 HostVG               inactive   yes 

6. # virsh pool-autostart HostVG
Pool HostVG marked as autostarted


7. # service libvirtd restart
Redirecting to /bin/systemctl restart  libvirtd.service

8. # virsh pool-list --all
 Name                 State      Autostart 
-------------------------------------------
 default              active     no        
 fs                   inactive   yes       
 HostVG               inactive   yes      

9. # virsh pool-start HostVG
error: Failed to start pool HostVG
error: unsupported configuration: cannot find any matching source devices for logical volume group 'HostVG'


============= nfs pool ==============
1. # virsh pool-dumpxml nfs
<pool type='netfs'>
  <name>nfs</name>
  <uuid>e01e44dc-cb67-46df-bc12-33e83aaa00a1</uuid>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <host name='nowhere on line'/>
    <dir path='/vol/S3/libvirtmanual/yiyi'/>
    <format type='nfs'/>
  </source>
  <target>
    <path>/mnt</path>
    <permissions>
      <mode>0700</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>


2. # mount /dev/sde /mnt
3. # service libvirtd restart
4. # virsh pool-list --all
 Name                 State      Autostart 
-------------------------------------------
 default              active     no        
 fs                   inactive   yes       
 HostVG               inactive   yes       
 nfs                  inactive   no      

5. # virsh pool-autostart nfs
Pool nfs marked as autostarted

6. # service libvirtd restart
Redirecting to /bin/systemctl restart  libvirtd.service

7. # virsh pool-list --all
 Name                 State      Autostart 
-------------------------------------------
 default              active     no        
 fs                   inactive   yes       
 HostVG               inactive   yes       
 nfs                  inactive   yes

Comment 15 errata-xmlrpc 2016-11-03 18:06:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2577.html

