Bug 1411600 - libvirtd unsupported configuration: cannot find any matching source devices for logical volume group
Keywords:
Status: NEW
Alias: None
Product: Virtualization Tools
Classification: Community
Component: libvirt
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Libvirt Maintainers
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-01-10 04:27 UTC by Chris Murphy
Modified: 2023-06-05 18:27 UTC
CC: 12 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Embargoed:



Description Chris Murphy 2017-01-10 04:27:46 UTC
Summary:

Recently I have been seeing the following messages in the journal:
libvirtd[4511]: unsupported configuration: cannot
find any matching source devices for logical volume group 'vg'

virt-manager complains when trying to make this VG active (it is now always listed as inactive).

This seems to be a regression, but I haven't done any testing to find out when it started to fail.

Reproduce steps:

1. Boot computer, check journal
2. Launch virt-manager and open Edit > Connection Details > Storage; the pool cannot be made active there.

Results:

journal contains:

Jan 09 14:20:25 f25h libvirtd[4511]: unsupported configuration: cannot
find any matching source devices for logical volume group 'vg'
Jan 09 14:20:25 f25h libvirtd[4511]: internal error: Failed to
autostart storage pool 'vg': unsupported configuration: cannot find
any matching source devices for logical vo

virt-manager's complaint is:

Error starting pool 'vg': unsupported configuration: cannot find any
matching source devices for logical volume group 'vg'

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 88, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 124, in tmpcb
    callback(*args, **kwargs)
  File "/usr/share/virt-manager/virtManager/libvirtobject.py", line 83, in newfn
    ret = fn(self, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/storagepool.py", line 164, in start
    self._backend.create(0)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3315, in create
    if ret == -1: raise libvirtError ('virStoragePoolCreate() failed',
pool=self)
libvirtError: unsupported configuration: cannot find any matching
source devices for logical volume group 'vg'


However, if I use virsh edit and just specify path to any LV on this
VG, the VM works and uses it without complaint.
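
For context, such a direct attachment via virsh edit is just a block-device <disk> element. A minimal sketch, assuming the LV named 'test' in VG 'vg' (per the lvs output in this report) is attached as a virtio disk 'vdb':

```xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/vg/test'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```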

Expected:

I don't expect these errors. The VG 'vg' does exist, there is nothing out of the ordinary about it, and I did not use to get these errors. I should be able to create and choose LVs from within virt-manager, but this problem prevents it. I can create them on the CLI and add them manually with virsh edit, but sometimes that is tedious.


Additional information:

[root@f25h ~]# pvs
  PV             VG Fmt  Attr PSize  PFree
  /dev/nvme0n1p7 vg lvm2 a--  80.43g 47.43g
[root@f25h ~]# lvs
  LV          VG Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  macossierra vg -wi-a----- 32.00g
  test        vg -wi-a-----  1.00g
[root@f25h ~]# vgs
  VG #PV #LV #SN Attr   VSize  VFree
  vg   1   2   0 wz--n- 80.43g 47.43g

Comment 1 Chris Murphy 2017-01-10 05:01:47 UTC
$ sudo virsh pool-start vg
error: Failed to start pool vg
error: unsupported configuration: cannot find any matching source devices for logical volume group 'vg'


journalctl -f reports

Jan 09 21:54:52 f25h sudo[14940]:    chris : TTY=pts/2 ; PWD=/home/chris ; USER=root ; COMMAND=/bin/virsh pool-start vg
Jan 09 21:54:52 f25h audit[14940]: USER_CMD pid=14940 uid=1000 auid=1000 ses=3 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='cwd="/home/chris" cmd=766972736820706F6F6C2D7374617274207667 terminal=pts/2 res=success'
Jan 09 21:54:52 f25h audit[14940]: CRED_REFR pid=14940 uid=0 auid=1000 ses=3 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_fprintd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/2 res=success'
Jan 09 21:54:52 f25h sudo[14940]: pam_systemd(sudo:session): Cannot create session: Already occupied by a session
Jan 09 21:54:52 f25h audit[14940]: USER_START pid=14940 uid=0 auid=1000 ses=3 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/2 res=success'
Jan 09 21:54:52 f25h sudo[14940]: pam_unix(sudo:session): session opened for user root by (uid=0)
Jan 09 21:54:52 f25h libvirtd[4511]: unsupported configuration: cannot find any matching source devices for logical volume group 'vg'
Jan 09 21:54:52 f25h sudo[14940]: pam_unix(sudo:session): session closed for user root
Jan 09 21:54:52 f25h audit[14940]: USER_END pid=14940 uid=0 auid=1000 ses=3 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:session_close grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/2 res=success'
Jan 09 21:54:52 f25h audit[14940]: CRED_DISP pid=14940 uid=0 auid=1000 ses=3 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_fprintd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/2 res=success'


virsh # pool-dumpxml vg
<pool type='logical'>
  <name>vg</name>
  <uuid>b7598e33-ecb1-4059-baa8-04f658f4a627</uuid>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <device path='/dev/nvme0n1p5'/>
    <name>vg</name>
    <format type='lvm2'/>
  </source>
  <target>
    <path>/dev/vg</path>
  </target>
</pool>

Comment 2 Chris Murphy 2017-01-10 05:07:34 UTC
Haha, user error!
/dev/nvme0n1p5 is wrong; a recent partitioning change moved this PV to /dev/nvme0n1p7. It seems to me the source device path is superfluous if the pool references an LVM VG, which in turn knows what its PVs are. But OK...
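
For reference, the pool source could be corrected with `virsh pool-edit vg` so the device path matches what `pvs` reports for this VG (/dev/nvme0n1p7 in the output above). A sketch of the corrected element:

```xml
<source>
  <device path='/dev/nvme0n1p7'/>
  <name>vg</name>
  <format type='lvm2'/>
</source>
```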

Comment 3 ariedederde 2017-01-18 07:02:34 UTC
I have the same problem and I don't think I am making an error.

- libvirt-2.0.0-10.el7_3.2

The problem occurred after adding a new logical volume to a pool. The storage pools became inactive.

lvcreate -L 96G -n rdvdb-test_disk_E vg_guest_lvmfast

virsh # pool-list --all
 Name                 State      Autostart 
-------------------------------------------
 default              active     yes       
 IsoImages            active     yes       
 SlowDiskOnFile       active     yes       
 SsdDiskOnFile        active     yes       
 vg_guest_lvmfast     inactive   yes       
 vg_guest_lvmslow     inactive   yes       

virsh # pool-start vg_guest_lvmfast                                                                                                    
error: Failed to start pool vg_guest_lvmfast                                                                                           
error: unsupported configuration: cannot find any matching source devices for logical volume group 'vg_guest_lvmfast'                  


virsh # pool-dumpxml vg_guest_lvmfast
<pool type='logical'>                                                                                                                  
  <name>vg_guest_lvmfast</name>                                                                                                        
  <uuid>d0c80fc1-c12e-4ed2-be7c-d8b84f2a48be</uuid>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <device path='/dev/vg_guest_lvmfast'/>
    <name>vg_guest_lvmfast</name>
    <format type='lvm2'/>
  </source>
  <target>
    <path>/dev/vg_guest_lvmfast</path>
  </target>
</pool>



vgdisplay vg_guest_lvmfast
  --- Volume group ---
  VG Name               vg_guest_lvmfast
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  9
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                4
  Open LV               4
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1,09 TiB
  PE Size               4,00 MiB
  Total PE              285887
  Alloc PE / Size       97024 / 379,00 GiB
  Free  PE / Size       188863 / 737,75 GiB
  VG UUID               KLcujo-eqL9-2D1e-dt74-8aP8-1xar-CdnbFN



ls -l /dev/vg_guest_lvmfast/
total 0
lrwxrwxrwx 1 root root 7 17. Jan 14:18 lv_kvm_aistatistics_diskE -> ../dm-2
lrwxrwxrwx 1 root root 7 17. Jan 14:18 lv_kvm_aistatistics_diskF -> ../dm-3
lrwxrwxrwx 1 root root 7 17. Jan 14:56 rdvdb-test_disk_D -> ../dm-4
lrwxrwxrwx 1 root root 7 17. Jan 14:03 rdvdb-test_disk_E -> ../dm-5


Using virsh edit I can add LVM (or any other) disks to VMs; that works fine.

Best regards,
Arie.

Comment 4 Peter Krempa 2017-01-18 08:53:39 UTC
The problem is that libvirt keeps enforcing the mapping of the physical volume paths to the volume group even after the pool has been built, which is not entirely correct. LVM is able to detect the presence of all PVs on its own and does not need such care.

As a workaround I'd suggest deleting the physical volume paths from the storage pool definition; the check will then be skipped and only the presence of the volume group will be verified.

Comment 5 ariedederde 2017-01-18 11:48:25 UTC
Hi Peter,

thanks for your reply and suggestion.

When you say "delete the physical volume paths from the storage pool definition", do you mean deleting the <device path> element from the <source> element?

best,
Arie.

Comment 6 Peter Krempa 2017-01-18 12:19:07 UTC
Yes, exactly. That data is only required for building and deleting the pool. For regular use, the volume group name should be sufficient.
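
A minimal sketch of this workaround, assuming the pool definition has been saved to a local file (e.g. via `virsh pool-dumpxml vg > pool.xml`); the sample XML below is taken from comment 1:

```shell
# Sample pool definition with a stale <device path> (from comment 1).
cat > pool.xml <<'EOF'
<pool type='logical'>
  <name>vg</name>
  <source>
    <device path='/dev/nvme0n1p5'/>
    <name>vg</name>
    <format type='lvm2'/>
  </source>
  <target>
    <path>/dev/vg</path>
  </target>
</pool>
EOF

# Drop the <device path> element; with it gone, libvirt only checks that
# the volume group itself exists before starting the pool.
sed -i '/<device path=/d' pool.xml

cat pool.xml
```

The edited definition could then be re-applied with `virsh pool-define pool.xml` followed by `virsh pool-start vg`; in practice `virsh pool-edit vg` achieves the same in one step.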

Comment 7 ariedederde 2017-01-18 13:20:02 UTC
Peter,

I commented out the <device path> XML element, which resulted in the line being removed entirely after a libvirtd restart. Well, OK.

The result:

virsh # pool-dumpxml vg_guest_lvmfast
<pool type='logical'>
  <name>vg_guest_lvmfast</name>
  <uuid>d0c80fc1-c12e-4ed2-be7c-d8b84f2a48be</uuid>
  <capacity unit='bytes'>1199096987648</capacity>
  <allocation unit='bytes'>406948151296</allocation>
  <available unit='bytes'>792148836352</available>
  <source>
    <name>vg_guest_lvmfast</name>
    <format type='lvm2'/>
  </source>
  <target>
    <path>/dev/vg_guest_lvmfast</path>
  </target>
</pool>

virsh # pool-list --all
 Name                 State      Autostart 
-------------------------------------------
 default              active     yes
 IsoImages            active     yes
 SlowDiskOnFile       active     yes
 SsdDiskOnFile        active     yes
 vg_guest_lvmfast     active     yes
 vg_guest_lvmslow     active     yes


Your workaround worked around!

Thanks again,
Arie.

