Bug 1878724
Summary: | vdsm-tool configure is failing with error "dependency job for libvirtd.service failed" | |
---|---|---|---
Product: | Red Hat Enterprise Virtualization Manager | Reporter: | nijin ashok <nashok>
Component: | vdsm | Assignee: | Marcin Sobczyk <msobczyk>
Status: | CLOSED ERRATA | QA Contact: | Pavol Brilla <pbrilla>
Severity: | medium | Docs Contact: |
Priority: | medium | |
Version: | 4.4.1 | CC: | cshao, gdeolive, jortialc, lsurette, mavital, mkalinin, mperina, srevivo, ycui
Target Milestone: | ovirt-4.5.0 | Keywords: | TestOnly, ZStream
Target Release: | 4.5.0 | |
Hardware: | All | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | No Doc Update
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2022-05-26 17:22:44 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | Infra | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | 1889363 | |
Bug Blocks: | | |
Description nijin ashok 2020-09-14 12:01:03 UTC
Using vdsm-4.40.33-1.el8ev.x86_64 this still fails the first time I try to install the host (a reinstall passes, just as is said in the description). The change linked in this bug is apparently present when checking the changed file on the host. I installed RHEL 8.3, then ovirt-host and virt-who; I started virt-who as well as libvirt, and then tried to install the host in an engine, which failed on:

    fatal: [10.37.138.41]: FAILED! => {"changed": true, "cmd": ["vdsm-tool", "configure", "--force"], "delta": "0:00:46.909863", "start": "2020-10-14 15:18:42.213543", "end": "2020-10-14 15:19:29.123406", "rc": 1, "msg": "non-zero return code"}

    stderr:
    Error: ServiceOperationError: _systemctlStart failed
    b'Job for libvirtd.socket failed.\nSee "systemctl status libvirtd.socket" and "journalctl -xe" for details.\n'

    stdout:
    Checking configuration status...

    WARNING: LVM local configuration: /etc/lvm/lvmlocal.conf is not based on vdsm configuration
    lvm requires configuration
    libvirt is not configured for vdsm yet
    libvirtd.service doesn't have requirement on libvirtd-tls.socket unit
    DB file /var/lib/vdsm/storage/managedvolume.db doesn't exists
    Managed volume database requires configuration
    abrt is not configured for vdsm
    multipath requires configuration

    Running configure...
    Reconfiguration of sanlock is done.
    Reconfiguration of passwd is done.
    Reconfiguration of certificates is done.
    WARNING: LVM local configuration: /etc/lvm/lvmlocal.conf is not based on vdsm configuration
    Backing up /etc/lvm/lvmlocal.conf to /etc/lvm/lvmlocal.conf.202010141519
    Installing /usr/share/vdsm/lvmlocal.conf at /etc/lvm/lvmlocal.conf
    Reconfiguration of lvm is done.
    Reconfiguration of libvirt is done.
    DB file /var/lib/vdsm/storage/managedvolume.db doesn't exists
    Creating managed volumes database at /var/lib/vdsm/storage/managedvolume.db
    Setting up ownership of database file to vdsm:kvm
    Reconfiguration of managedvolumedb is done.
    Reconfiguration of bond_defaults is done.
    Reconfiguration of abrt is done.
    Reconfiguration of sebool is done.
    Reconfiguration of multipath is done.

Right, so it turns out that even though virt-who uses 'libvirtd-ro.socket' [1], it doesn't require it on a systemd unit level [2]. That means that even if we stop 'libvirtd-ro.socket', 'virt-who.service' will still be running, and depending on the implementation anything can really happen. This has to be fixed on the virt-who side first. Given that, plus the fact that we also dynamically depend on either 'libvirtd-tcp.socket' or 'libvirtd-tls.socket' (so we cannot prevent a similar scenario if someone uses one of these), and the gentle nature of socket activation, I would prefer to revert the patch and leave things as they are.

[1] https://github.com/candlepin/virt-who/blob/4c7fdb032a66e2fe3324cc2d7579101c699e3b00/virtwho/virt/libvirtd/libvirtd.py#L282
[2] https://github.com/candlepin/virt-who/blob/master/virt-who.service

virt-who-1.30.9-1.el8 should contain the fix; no code changes are required on the RHV side.
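For illustration only: the actual fix landed in virt-who itself, but a minimal sketch of the kind of "unit-level requirement" discussed above could be a systemd drop-in for virt-who.service (the file name and directives below are assumptions, not the shipped change):

```sh
# Hypothetical drop-in, shown only to illustrate requiring libvirtd-ro.socket
# at the systemd unit level; the real virt-who fix may differ.
mkdir -p /etc/systemd/system/virt-who.service.d
cat > /etc/systemd/system/virt-who.service.d/10-libvirtd-ro.conf <<'EOF'
[Unit]
# Requires= propagates an explicit stop of libvirtd-ro.socket to virt-who.service,
# so the service no longer keeps running against a socket vdsm-tool has stopped.
Requires=libvirtd-ro.socket
# After= orders virt-who.service to start only once the socket unit is up.
After=libvirtd-ro.socket
EOF
systemctl daemon-reload
```

Whether the shipped fix uses Requires=, BindsTo=, or another mechanism is not recorded in this bug.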
vdsm-tool configure now completes successfully (stderr_lines is empty):

    Checking configuration status...

    WARNING: LVM local configuration: /etc/lvm/lvmlocal.conf is not based on vdsm configuration
    lvm requires configuration
    DB file /var/lib/vdsm/storage/managedvolume.db doesn't exists
    Managed volume database requires configuration
    sanlock user needs groups: qemu, kvm
    multipath requires configuration
    libvirt is not configured for vdsm yet
    libvirtd.service doesn't have requirement on libvirtd-tls.socket unit

    Running configure...
    WARNING: LVM local configuration: /etc/lvm/lvmlocal.conf is not based on vdsm configuration
    Previous lvmlocal.conf copied to /etc/lvm/lvmlocal.conf.20220502123424
    Installing /usr/share/vdsm/lvmlocal.conf at /etc/lvm/lvmlocal.conf
    Reconfiguration of lvm is done.
    DB file /var/lib/vdsm/storage/managedvolume.db doesn't exists
    Creating managed volumes database at /var/lib/vdsm/storage/managedvolume.db
    Setting up ownership of database file to vdsm:kvm
    Reconfiguration of managedvolumedb is done.
    Reconfiguration of passwd is done.
    Configuring sanlock user groups
    Configuring sanlock config file
    Previous sanlock.conf copied to /etc/sanlock/sanlock.conf.20220502123424
    Reconfiguration of sanlock is done.
    Reconfiguration of multipath is done.
    Reconfiguration of libvirt is done.
    Reconfiguration of sebool is done.
    Reconfiguration of bond_defaults is done.

    Done configuring modules to VDSM.

    # yum list ovirt-engine
    Installed Packages
    ovirt-engine.noarch    4.5.0.5-0.7.el8ev

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Low: RHV RHEL Host (ovirt-host) [ovirt-4.5.0] security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:4764
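For reference, a minimal sketch of how one might confirm on an updated host that the failure from this bug no longer occurs (standard host commands; a sketch, not the exact verification procedure used above):

```sh
# Confirm the fixed virt-who build and current vdsm are installed, re-run the
# configuration step that used to fail, and check the libvirt socket units.
rpm -q virt-who vdsm
vdsm-tool configure --force
systemctl --no-pager status libvirtd.socket libvirtd-ro.socket
```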