Description of problem:
Cannot start the vdsmd service after updating the libvirt packages.
Version-Release number of selected component (if applicable):
libvirt-0.9.13-2.el6.x86_64
vdsm-4.9.6-16.0.el6.x86_64
How reproducible:
100%
Steps to Reproduce:
[root@libn755 Downloads]# rpm -Uvh libvirt-0.9.13-2.el6.x86_64.rpm libvirt-client-0.9.13-2.el6.x86_64.rpm libvirt-python-0.9.13-2.el6.x86_64.rpm libvirt-daemon-0.9.13-2.el6.x86_64.rpm
Preparing... ########################################### [100%]
1:libvirt-client ########################################### [ 25%]
2:libvirt-daemon warning: /etc/libvirt/libvirtd.conf created as /etc/libvirt/libvirtd.conf.rpmnew
warning: /etc/libvirt/qemu.conf created as /etc/libvirt/qemu.conf.rpmnew
########################################### [ 50%]
3:libvirt ########################################### [ 75%]
4:libvirt-python ########################################### [100%]
[root@libn755 Downloads]# service libvirtd restart
Stopping libvirtd daemon: libvirtd: libvirtd is managed by upstart and started, use initctl instead
[root@libn755 Downloads]# service vdsmd restart
Shutting down vdsm daemon:
vdsm watchdog stop [ OK ]
vdsm stop [ OK ]
Stopping libvirtd daemon: libvirtd: libvirtd is managed by upstart and started, use initctl instead
vdsm: libvirt already configured for vdsm [ OK ]
Starting wdmd...
Starting wdmd: [ OK ]
Starting sanlock...
Starting sanlock: [ OK ]
Starting iscsid:
diff: : No such file or directory
/bin/cp: cannot stat `': No such file or directory
vdsm: one of the dependent services did not start, error co[FAILED]
And the host will disconnect from RHEVM.
The likely root cause is that the new packages no longer ship the libvirtd.upstart file:
# rpm -ql libvirt|grep upstart
This command produces no output.
Older packages, such as libvirt-0.9.10-21.el6, show:
# rpm -ql libvirt|grep upstart
/usr/share/doc/libvirt-0.9.10/libvirtd.upstart
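The "diff: :" and "cannot stat `'" errors in the transcript above are consistent with a path lookup that came back empty. A minimal sketch of that failure mode (the variable name, destination path, and structure here are assumptions for illustration, not the actual vdsm init script):

```shell
#!/bin/sh
# Hypothetical reconstruction: if the lookup for libvirtd.upstart
# returns nothing, an empty string is passed straight to diff and cp,
# producing errors like those seen in the vdsmd transcript.

# Stand-in for: rpm -ql libvirt | grep upstart
# With libvirt-0.9.13-2 this matches nothing, so the variable is empty.
UPSTART_SRC=""

# Both tools then receive "" as the source path and fail:
diff_err=$(diff "$UPSTART_SRC" /etc/hosts 2>&1)
cp_err=$(/bin/cp "$UPSTART_SRC" /tmp/libvirtd.conf.sketch 2>&1)

echo "$diff_err"   # diff complains about the empty path
echo "$cp_err"     # cp cannot stat the empty path
```

This would explain why the failure appears at the iscsid/dependency step rather than as an explicit "file missing" message: the script never checks whether the lookup produced a path before using it.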
Actual results:
As in the steps above.
Expected results:
The vdsmd service should start normally.
Additional info:
When downgrading to libvirt-0.9.10-21.el6.x86_64.rpm, it works well.
In 0.9.13 the libvirtd.upstart file is shipped by libvirt-daemon instead:
# ll /usr/share/doc/libvirt-daemon-0.9.13/libvirtd.upstart
-rw-r--r--. 1 root root 1181 Feb 3 16:51 /usr/share/doc/libvirt-daemon-0.9.13/libvirtd.upstart
There may be a coordination problem between the libvirt packaging and vdsm.
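Since the file moved from the libvirt doc directory (0.9.10) to the libvirt-daemon doc directory (0.9.13), a lookup that tolerates both layouts would avoid the empty-path failure. A sketch under that assumption (the `find_upstart` helper and the demo layout are hypothetical, not vdsm code):

```shell
#!/bin/sh
# Hypothetical lookup that tolerates libvirtd.upstart moving between
# the libvirt and libvirt-daemon documentation directories.
find_upstart() {
    base=$1
    # libvirt-* also matches libvirt-daemon-*, covering both layouts.
    for f in "$base"/libvirt-*/libvirtd.upstart; do
        [ -f "$f" ] && { printf '%s\n' "$f"; return 0; }
    done
    return 1
}

# Demo: simulate the 0.9.13 layout, where only libvirt-daemon
# ships the file, and show that the lookup still finds it.
demo=$(mktemp -d)
mkdir -p "$demo/libvirt-daemon-0.9.13"
: > "$demo/libvirt-daemon-0.9.13/libvirtd.upstart"

find_upstart "$demo"   # prints the path under libvirt-daemon-0.9.13
rm -rf "$demo"
```

Globbing the doc directories directly also sidesteps any dependence on which subpackage owns the file in a given libvirt release.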
Verified in libvirt-0.9.13-3.el6.
Versions:
libvirt-0.9.13-3.el6.x86_64
vdsm-4.9.6-21.0.el6.x86_64
The vdsmd service starts normally after upgrading the libvirt packages to libvirt-0.9.13-3.el6, and the host reconnects to RHEVM successfully.
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
http://rhn.redhat.com/errata/RHSA-2013-0276.html