Bug 1813961
| Summary: | vdsm-tool crashes on python3 change needed | | |
|---|---|---|---|
| Product: | [oVirt] vdsm | Reporter: | Sandro Bonazzola <sbonazzo> |
| Component: | Services | Assignee: | Marcin Sobczyk <msobczyk> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Petr Matyáš <pmatyas> |
| Severity: | urgent | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.40.5 | CC: | bugs, mperina, msobczyk |
| Target Milestone: | ovirt-4.4.0 | Flags: | sbonazzo: ovirt-4.4? |
| Target Release: | 4.40.7 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | vdsm-4.40.7 | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-05-20 20:04:06 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Infra | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
What are the verification steps for this one, given that HE deploy is still broken? Also, what was done to solve this bug? No patches are linked...

I can see a patch attached in the "links" table: https://gerrit.ovirt.org/107698

This bug was about a failure during exception handling in vdsm-tool. You don't actually need HE to reproduce it; any working host will do. To reproduce it, I guess you need to:

1. Break one of the services that vdsm-tool tries to restart when reconfiguring (e.g. libvirtd) on purpose. You can try doing this by, for example, adding `Requires=doesntexist.service` to `libvirtd.service` and running `systemctl daemon-reload`.
2. Try to reconfigure libvirtd with `vdsm-tool configure --force --module libvirt`.

Verified on vdsm-4.40.7-1.el8ev.x86_64: vdsm-tool no longer fails on type conversion during service configuration.

This bugzilla is included in the oVirt 4.4.0 release, published on May 20th 2020. Since the problem described in this bug report should be resolved in the oVirt 4.4.0 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.
Description of problem:

Hosted engine deployment fails while deploying the host. Looking at the engine ansible runner logs:

```
Checking configuration status...

DB file /var/lib/vdsm/storage/managedvolume.db doesn't exists
Managed volume database requires configuration
multipath requires configuration
WARNING: LVM local configuration: /etc/lvm/lvmlocal.conf is not based on vdsm configuration
lvm requires configuration
abrt is not configured for vdsm
libvirtd socket units status: [{'Names': 'libvirtd-admin.socket', 'LoadState': 'loaded'}, {'Names': 'libvirtd-tcp.socket', 'LoadState': 'loaded'}, {'Names': 'libvirtd.socket', 'LoadState': 'loaded'}, {'Names': 'libvirtd-tls.socket', 'LoadState': 'loaded'}, {'Names': 'libvirtd-ro.socket', 'LoadState': 'loaded'}]
libvirtd uses socket activation
libvirtd-tls.socket unit is disabled
libvirt is not configured for vdsm yet
FAILED: conflicting vdsm and libvirt-qemu tls configuration.
vdsm.conf with ssl=True requires the following changes:
libvirtd.conf: listen_tcp=0, auth_tcp="sasl", listen_tls=1
qemu.conf: spice_tls=1.

Running configure...
DB file /var/lib/vdsm/storage/managedvolume.db doesn't exists
Creating managed volumes database at /var/lib/vdsm/storage/managedvolume.db
Setting up ownership of database file to vdsm:kvm
Reconfiguration of managedvolumedb is done.
Reconfiguration of sanlock is done.
Reconfiguration of multipath is done.
Reconfiguration of certificates is done.
WARNING: LVM local configuration: /etc/lvm/lvmlocal.conf is not based on vdsm configuration
Backing up /etc/lvm/lvmlocal.conf to /etc/lvm/lvmlocal.conf.202003131619
Installing /usr/share/vdsm/lvmlocal.conf at /etc/lvm/lvmlocal.conf
Reconfiguration of lvm is done.
Reconfiguration of abrt is done.
Reconfiguration of sebool is done.
libvirtd socket units status: [{'Names': 'libvirtd-admin.socket', 'LoadState': 'loaded'}, {'Names': 'libvirtd-tcp.socket', 'LoadState': 'loaded'}, {'Names': 'libvirtd.socket', 'LoadState': 'loaded'}, {'Names': 'libvirtd-tls.socket', 'LoadState': 'loaded'}, {'Names': 'libvirtd-ro.socket', 'LoadState': 'loaded'}]
libvirtd uses socket activation
libvirtd socket units status: [{'Names': 'libvirtd-admin.socket', 'LoadState': 'loaded'}, {'Names': 'libvirtd-tcp.socket', 'LoadState': 'loaded'}, {'Names': 'libvirtd.socket', 'LoadState': 'loaded'}, {'Names': 'libvirtd-tls.socket', 'LoadState': 'loaded'}, {'Names': 'libvirtd-ro.socket', 'LoadState': 'loaded'}]
libvirtd uses socket activation
Reconfiguration of libvirt is done.
Reconfiguration of passwd is done.
```

The stderr of the task shows:

```
Error: Traceback (most recent call last):
  File "/usr/bin/vdsm-tool", line 209, in main
    return tool_command[cmd]["command"](*args)
  File "/usr/lib/python3.6/site-packages/vdsm/tool/__init__.py", line 40, in wrapper
    func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/tool/configurator.py", line 149, in configure
    service.service_start(s)
  File "/usr/lib/python3.6/site-packages/vdsm/tool/service.py", line 193, in service_start
    return _runAlts(_srvStartAlts, srvName)
  File "/usr/lib/python3.6/site-packages/vdsm/tool/service.py", line 172, in _runAlts
    "%s failed" % alt.__name__, out, err)
vdsm.tool.service.ServiceOperationError: <exception str() failed>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/bin/vdsm-tool", line 224, in <module>
    sys.exit(main())
  File "/usr/bin/vdsm-tool", line 214, in main
    print('Error: ', e, '\n', file=sys.stderr)
  File "/usr/lib/python3.6/site-packages/vdsm/tool/service.py", line 75, in __str__
    return '\n'.join(s)
TypeError: sequence item 1: expected str instance, bytes found
```

Version-Release number of selected component (if applicable):
- ovirt-node-ng-installer-4.4.0-2020031301.el8.iso
- ovirt-engine-appliance-4.4-20200305233855.1.el8.x86_64
- vdsm-4.40.5-1.el8.x86_64
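The root cause visible in the traceback: `ServiceOperationError.__str__` joins a list that mixes `str` and `bytes` (under Python 3, output captured from the failed service command is `bytes` unless explicitly decoded), so even rendering the original error crashes. A minimal sketch of the failure and one possible fix (decoding bytes before joining); the class and names below are illustrative, not the actual vdsm code or patch (see https://gerrit.ovirt.org/107698 for that):

```python
# Illustrative sketch of the bytes/str join crash behind this bug.
# This is NOT the actual vdsm implementation or fix.

class ServiceOperationError(Exception):
    def __init__(self, message, out, err):
        # Under Python 3, `out` and `err` captured from a subprocess
        # are bytes unless explicitly decoded.
        self.parts = [message, out, err]

    def __str__(self):
        # Decoding bytes items first avoids
        # "TypeError: sequence item 1: expected str instance, bytes found"
        return '\n'.join(
            p.decode('utf-8', errors='replace') if isinstance(p, bytes) else p
            for p in self.parts)


# Joining the raw mixed list reproduces the crash seen in the traceback:
try:
    '\n'.join(["libvirtd failed", b"some stdout", b"some stderr"])
except TypeError as e:
    print(e)  # sequence item 1: expected str instance, bytes found

# With the decode in __str__, formatting the exception works:
print(ServiceOperationError("libvirtd failed", b"some stdout", b"some stderr"))
```

Because the `TypeError` was raised inside `__str__` while `main()` was printing the original `ServiceOperationError`, Python reports it as a chained exception ("During handling of the above exception, another exception occurred"), which is exactly the shape of the traceback above.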