Bug 1370443 - Libvirt (and/or Libvirt-guests) FAILS to auto-start virtual machine with LVM volume created on thinpool.
Summary: Libvirt (and/or Libvirt-guests) FAILS to auto-start virtual machine with LVM volume created on thinpool.
Keywords:
Status: CLOSED EOL
Alias: None
Product: Fedora
Classification: Fedora
Component: libvirt
Version: 24
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Libvirt Maintainers
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-08-26 10:47 UTC by David Hlacik
Modified: 2019-11-01 11:11 UTC
CC List: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-08-08 16:48:48 UTC
Type: Bug



Description David Hlacik 2016-08-26 10:47:09 UTC
Description of problem:


libvirt-guests fails to auto-start a virtual machine whose LVM volume was created on a thin pool.


Config file:

vi /etc/sysconfig/libvirt-guests

# action taken on host boot
# - start   all guests which were running on shutdown are started on boot
#           regardless on their autostart settings
# - ignore  libvirt-guests init script won't start any guest on boot, however,
#           guests marked as autostart will still be automatically started by
#           libvirtd
ON_BOOT=start

Version-Release number of selected component (if applicable):

libvirt-daemon-driver-interface-1.3.3.2-1.fc24.x86_64
libvirt-daemon-driver-storage-1.3.3.2-1.fc24.x86_64
libvirt-daemon-kvm-1.3.3.2-1.fc24.x86_64
libvirt-client-1.3.3.2-1.fc24.x86_64
libvirt-python-1.3.3-3.fc24.x86_64
libvirt-daemon-1.3.3.2-1.fc24.x86_64
libvirt-daemon-driver-qemu-1.3.3.2-1.fc24.x86_64
libvirt-daemon-driver-nodedev-1.3.3.2-1.fc24.x86_64
libvirt-daemon-driver-secret-1.3.3.2-1.fc24.x86_64
libvirt-glib-0.2.3-2.fc24.x86_64
libvirt-daemon-driver-network-1.3.3.2-1.fc24.x86_64
libvirt-daemon-driver-nwfilter-1.3.3.2-1.fc24.x86_64
libvirt-daemon-config-network-1.3.3.2-1.fc24.x86_64



How reproducible:

Create a thin pool on the HDD:
lvcreate -l 100%FREE -T ssd/ssdpool
Create a volume on the pool:
lvcreate -l 100%FREE -T ssd/ssdpool
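
For reference, a thin volume on top of an existing pool is normally created with an explicit virtual size via -V; a minimal sketch (the volume name and size below are illustrative only, not taken from this report):

lvcreate -V 100G -T ssd/ssdpool -n examplevol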


Actual results:

Aug 26 12:20:31 brutus-coreos libvirtd[818]: Cannot access storage file '/dev/hdd/gamedata' (as uid:107, gid:107): No su

-- Reboot --
Aug 26 12:20:30 brutus-coreos systemd[1]: Starting Virtualization daemon...
Aug 26 12:20:30 brutus-coreos systemd[1]: Started Virtualization daemon.
Aug 26 12:20:31 brutus-coreos dnsmasq[969]: started, version 2.76 cachesize 150
Aug 26 12:20:31 brutus-coreos dnsmasq[969]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN DHCP DHCPv6 no-Lua TF
Aug 26 12:20:31 brutus-coreos dnsmasq-dhcp[969]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time 1h
Aug 26 12:20:31 brutus-coreos dnsmasq-dhcp[969]: DHCP, sockets bound exclusively to interface virbr0
Aug 26 12:20:31 brutus-coreos dnsmasq[969]: read /etc/hosts - 2 addresses
Aug 26 12:20:31 brutus-coreos dnsmasq[969]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Aug 26 12:20:31 brutus-coreos dnsmasq-dhcp[969]: read /var/lib/libvirt/dnsmasq/default.hostsfile
Aug 26 12:20:31 brutus-coreos libvirtd[818]: libvirt version: 1.3.3.2, package: 1.fc24 (Fedora Project, 2016-07-19-00:36
Aug 26 12:20:31 brutus-coreos libvirtd[818]: hostname: brutus-coreos
Aug 26 12:20:31 brutus-coreos libvirtd[818]: Cannot access storage file '/dev/hdd/gamedata' (as uid:107, gid:107): No su
Aug 26 12:20:33 brutus-coreos dnsmasq[969]: failed to access /etc/resolv.conf: No such file or directory
Aug 26 12:20:33 brutus-coreos dnsmasq[969]: reading /etc/resolv.conf
Aug 26 12:20:33 brutus-coreos dnsmasq[969]: using nameserver 192.168.31.1#53
Aug 26 12:29:38 brutus-coreos libvirtd[818]: Domain id=1 name='gameos' uuid=b545285d-ab2d-43b7-bed4-6cb6c9a51a63 is tain
Aug 26 12:29:40 brutus-coreos libvirtd[818]: host doesn't support hyperv 'relaxed' feature
Aug 26 12:29:40 brutus-coreos libvirtd[818]: host doesn't support hyperv 'vapic' feature
Aug 26 12:30:57 brutus-coreos libvirtd[818]: internal error: End of file from monitor
Aug 26 12:30:57 brutus-coreos systemd[1]: Stopping Virtualization daemon...
Aug 26 12:30:58 brutus-coreos systemd[1]: libvirtd.service: Main process exited, code=killed, status=11/SEGV
Aug 26 12:30:58 brutus-coreos systemd[1]: Stopped Virtualization daemon.
Aug 26 12:30:58 brutus-coreos systemd[1]: libvirtd.service: Unit entered failed state.
Aug 26 12:30:58 brutus-coreos systemd[1]: libvirtd.service: Failed with result 'signal'.
-- Reboot --


Expected results:

Presumably libvirt-guests should wait until the LVM volumes are properly initialized?
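
One conceivable direction, only a sketch: order libvirt-guests after LVM activation with a systemd drop-in. lvm2-monitor.service is one of the LVM units visible in the boot log later in this report, but ordering after it may still not cover volumes activated later by udev-triggered lvm2-pvscan@ instances.

# /etc/systemd/system/libvirt-guests.service.d/override.conf  (hypothetical)
[Unit]
After=lvm2-monitor.service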


Additional info:

When started manually, it works

[root@brutus-coreos ~]# virsh start gameos
Domain gameos started

Comment 1 David Hlacik 2016-08-26 11:19:17 UTC
[root@brutus-coreos ~]# journalctl -b -u libvirt-guests
-- Logs begin at Wed 2016-07-27 22:33:51 CEST, end at Fri 2016-08-26 13:17:53 CEST. --
Aug 26 13:16:30 brutus-coreos systemd[1]: Starting Suspend Active Libvirt Guests...
Aug 26 13:16:31 brutus-coreos libvirt-guests.sh[962]: Resuming guests on default URI...
Aug 26 13:16:31 brutus-coreos libvirt-guests.sh[962]: Resuming guest gameos: error: Failed to start domain gameos
Aug 26 13:16:31 brutus-coreos libvirt-guests.sh[962]: error: Cannot access storage file '/dev/hdd/gamedata' (as uid:107,
Aug 26 13:16:31 brutus-coreos systemd[1]: libvirt-guests.service: Main process exited, code=exited, status=1/FAILURE
Aug 26 13:16:31 brutus-coreos systemd[1]: Failed to start Suspend Active Libvirt Guests.
Aug 26 13:16:31 brutus-coreos systemd[1]: libvirt-guests.service: Unit entered failed state.
Aug 26 13:16:31 brutus-coreos systemd[1]: libvirt-guests.service: Failed with result 'exit-code'.

Comment 2 David Hlacik 2016-08-26 11:30:39 UTC
This also applies to autostart:

virsh autostart gameos

After a reboot:
Aug 26 13:27:44 brutus-coreos systemd[1]: Starting Virtualization daemon...
Aug 26 13:27:44 brutus-coreos systemd[1]: Started Virtualization daemon.
Aug 26 13:27:45 brutus-coreos dnsmasq[1064]: started, version 2.76 cachesize 150
Aug 26 13:27:45 brutus-coreos dnsmasq[1064]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN DHCP DHCPv6 no-Lua T
Aug 26 13:27:45 brutus-coreos dnsmasq-dhcp[1064]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time 1h
Aug 26 13:27:45 brutus-coreos dnsmasq-dhcp[1064]: DHCP, sockets bound exclusively to interface virbr0
Aug 26 13:27:45 brutus-coreos dnsmasq[1064]: read /etc/hosts - 2 addresses
Aug 26 13:27:45 brutus-coreos dnsmasq[1064]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Aug 26 13:27:45 brutus-coreos dnsmasq-dhcp[1064]: read /var/lib/libvirt/dnsmasq/default.hostsfile
Aug 26 13:27:45 brutus-coreos libvirtd[892]: libvirt version: 2.1.0, package: 1.fc24 (Unknown, 2016-08-03-03:08:34, cole
Aug 26 13:27:45 brutus-coreos libvirtd[892]: hostname: brutus-coreos
Aug 26 13:27:45 brutus-coreos libvirtd[892]: Cannot access storage file '/dev/hdd/gamedata' (as uid:107, gid:107): No su
Aug 26 13:27:45 brutus-coreos libvirtd[892]: internal error: Failed to autostart VM 'gameos': Cannot access storage file
Aug 26 13:27:47 brutus-coreos dnsmasq[1064]: failed to access /etc/resolv.conf: No such file or directory
Aug 26 13:27:47 brutus-coreos dnsmasq[1064]: reading /etc/resolv.conf
Aug 26 13:27:47 brutus-coreos dnsmasq[1064]: using nameserver 192.168.31.1#53

Comment 3 David Hlacik 2016-08-26 16:06:56 UTC
Adding vgchange -a y to the libvirtd service solves the issue. Is this the correct way to solve it?

# /etc/systemd/system/libvirtd.service.d/override.conf
[Service]
ExecStartPre=/usr/sbin/lvm vgchange -a y
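
(Note: a drop-in like this only takes effect once systemd re-reads its unit configuration, e.g. systemctl daemon-reload, followed by systemctl restart libvirtd.)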

Comment 5 David Hlacik 2016-08-26 17:29:27 UTC
However, it seems like a timing issue: the libvirtd service starts before LVM has successfully activated all volume groups and volumes.

You can see from journalctl -b that, because of my override, vgchange is called twice.




Aug 26 19:24:57 brutus-coreos dracut-cmdline[193]: Using kernel command line parameters: BOOT_IMAGE=/boot/vmlinuz-4.6.7-300.fc24.x86_64 root=/dev/mapper/ssd-root ro rd.lvm.lv=ssd/root rd.lvm.lv=ssd/swap rhgb quiet intel_iommu=on pci-stub.ids=1002:67df,1002:aaf0 hugepages=4096
Aug 26 19:24:59 brutus-coreos audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=lvm2-lvmetad comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 26 19:24:59 brutus-coreos audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=lvm2-monitor comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 26 19:24:59 brutus-coreos lvm[540]:   2 logical volume(s) in volume group "ssd" monitored
Aug 26 19:24:59 brutus-coreos systemd[1]: Created slice system-lvm2\x2dpvscan.slice.
Aug 26 19:25:00 brutus-coreos lvm[597]:   6 logical volume(s) in volume group "ssd" now active
Aug 26 19:25:00 brutus-coreos audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=lvm2-pvscan@8:2 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 26 19:25:00 brutus-coreos lvm[871]:   6 logical volume(s) in volume group "ssd" now active
Aug 26 19:25:03 brutus-coreos lvm[594]:   4 logical volume(s) in volume group "hdd" now active
Aug 26 19:25:03 brutus-coreos lvm[871]:   4 logical volume(s) in volume group "hdd" now active
Aug 26 19:25:03 brutus-coreos audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=lvm2-pvscan@8:3 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 26 19:25:03 brutus-coreos lvm[596]:   4 logical volume(s) in volume group "hdd" now active
Aug 26 19:25:03 brutus-coreos audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=lvm2-pvscan@8:17 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
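
One way to double-check this ordering suspicion (a diagnostic suggestion, not something run in this report) is to look at how libvirtd is ordered against the LVM activation units after boot:

systemd-analyze critical-chain libvirtd.service
systemctl list-dependencies --after libvirtd.service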

Comment 6 Cole Robinson 2017-05-03 20:53:15 UTC
Hi David, are you still seeing this with latest packages? Have you tried f25 or later?

Comment 7 Fedora End Of Life 2017-07-25 22:39:19 UTC
This message is a reminder that Fedora 24 is nearing its end of life.
Approximately 2 (two) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 24. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as EOL if it remains open with a Fedora 'version'
of '24'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version'
to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not
able to fix it before Fedora 24 reached end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's
lifetime, sometimes those efforts are overtaken by events. Often a
more recent Fedora release includes newer upstream software that fixes
bugs or makes them obsolete.

Comment 8 Fedora End Of Life 2017-08-08 16:48:48 UTC
Fedora 24 changed to end-of-life (EOL) status on 2017-08-08. Fedora 24 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.

Comment 9 David Hlacik 2019-11-01 11:11:52 UTC
I no longer have a setup where I can test this.

