Bug 728758 - [libvirt] vdsm fails to connect to libvirt socket in case libvirt is executed via initctl and running service libvirtd restart
Summary: [libvirt] vdsm fails to connect to libvirt socket in case libvirt is executed...
Keywords:
Status: CLOSED DUPLICATE of bug 728153
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.2
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 6.2
Assignee: Jiri Denemark
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-08-07 08:38 UTC by Haim
Modified: 2014-01-13 00:50 UTC
CC List: 12 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-08-26 15:43:30 UTC
Target Upstream Version:
Embargoed:


Attachments:

Description Haim 2011-08-07 08:38:38 UTC
Description of problem:

Case history:

After vdsm started configuring libvirt to run under initctl (upstart), we repeatedly hit a case where vdsm fails to connect to the libvirt socket and the system fails to initialize.

How does it happen?

The system is healthy after vdsm starts libvirtd via initctl. A user then restarts libvirt through the SysV init script (service libvirtd restart). Under some conditions this produces what looks like two running libvirt daemons: in reality there is one running libvirt process while the upstart watchdog keeps trying to start another one, and watching with 'watch' I see the PIDs changing constantly (the extra process presumably keeps coming up and dying).
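
For reference, one quick way to observe that churn (a rough sketch, not part of the original report; it assumes the daemon's process name is libvirtd):

watch -n 1 'pgrep -x libvirtd'   # the listed PID keeps changing while the two start mechanisms fight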

Repro steps:

[root@nott-vds1 core]# initctl stop libvirtd
initctl: Unknown instance:  
[root@nott-vds1 core]# initctl start libvirtd
libvirtd start/running, process 16772
[root@nott-vds1 core]# /etc/init.d/libvirtd start 
Starting libvirtd daemon: 

Since we are moving forward to beta2 and RC, I think we should find a better solution, or at least protect against such cases on the libvirt side: if a libvirt process is already running, do not try to start another one.
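
A minimal sketch of what such a guard in the SysV start() function could look like (the pidfile path /var/run/libvirtd.pid is an assumption for illustration, not the actual patch):

# refuse to start a second daemon if one already answers for the recorded pid
if [ -f /var/run/libvirtd.pid ] && kill -0 "$(cat /var/run/libvirtd.pid)" 2>/dev/null; then
    echo "libvirtd already running (pid $(cat /var/run/libvirtd.pid)); not starting another instance"
    exit 0
fi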

vdsm error log for the failed socket connection:

clientIFinit::ERROR::2011-08-07 11:20:43,811::clientIF::933::vds::(_recoverExistingVms) Vm's recovery failed
Traceback (most recent call last):
  File "/usr/share/vdsm/clientIF.py", line 898, in _recoverExistingVms
    vdsmVms = self.getVDSMVms()
  File "/usr/share/vdsm/clientIF.py", line 959, in getVDSMVms
    conn = libvirtconnection.get(self)
  File "/usr/share/vdsm/libvirtconnection.py", line 94, in get
    conn = libvirt.openAuth('qemu:///system', auth, 0)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 102, in openAuth
    if ret is None:raise libvirtError('virConnectOpenAuth() failed')
libvirtError: Cannot recv data: Connection reset by peer
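
A hedged diagnostic sketch for narrowing down this state (virsh and the default qemu:///system URI are assumptions, not part of the original report):

initctl status libvirtd            # what upstart thinks is running
service libvirtd status            # what the SysV script thinks is running
virsh -c qemu:///system version    # does the socket actually answer?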

libvirt log when the watchdog tries to start another process:

11:21:36.806: 19298: error : virNetSocketNewListenTCP:281 : Unable to bind to port: Address already in use
11:21:36.820: 19310: info : libvirt version: 0.9.4, package: 0rc1.2.el6 (Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>, 2011-08-01-23:37:12, x86-003.build.bos.redhat.com)
11:21:36.820: 19310: debug : virRegisterNetworkDriver:584 : registering Network as network driver 3
11:21:36.820: 19310: debug : virRegisterInterfaceDriver:617 : registering Interface as interface driver 3
11:21:36.820: 19310: debug : virRegisterStorageDriver:650 : registering storage as storage driver 3
11:21:36.820: 19310: debug : virRegisterDeviceMonitor:683 : registering udevDeviceMonitor as device driver 3
11:21:36.820: 19310: debug : virRegisterSecretDriver:716 : registering secret as secret driver 3
11:21:36.820: 19310: debug : virRegisterNWFilterDriver:749 : registering nwfilter as network filter driver 3
11:21:36.820: 19310: debug : virRegisterDriver:767 : driver=0x71d500 name=QEMU
11:21:36.820: 19310: debug : virRegisterDriver:791 : registering QEMU as driver 3
11:21:36.820: 19310: debug : virRegisterDriver:767 : driver=0x71db20 name=LXC
11:21:36.820: 19310: debug : virRegisterDriver:791 : registering LXC as driver 4
11:21:36.821: 19310: debug : virHookCheck:115 : No hook script /etc/libvirt/hooks/daemon
11:21:36.821: 19310: debug : virHookCheck:115 : No hook script /etc/libvirt/hooks/qemu
11:21:36.821: 19310: debug : virHookCheck:115 : No hook script /etc/libvirt/hooks/lxc
11:21:37.160: 19310: error : virNetSocketNewListenTCP:281 : Unable to bind to port: Address already in use
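
To see which process already holds the listening port, something like the following can be used (16509 is the default libvirtd TCP port and is an assumption here):

netstat -ltnp | grep 16509   # shows the PID/name of the daemon bound to the port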

Comment 2 Dave Allan 2011-08-08 19:31:56 UTC
Hi Haim, it seems like this could be solved with the same change as BZ 728153: having the init script check whether libvirtd is being managed by upstart or systemd and, if so, notifying the user and exiting.  If that works for you, I'll close this one as a dup and we can track the work through 728153.
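
A rough sketch of that check (command names and messages are illustrative, not the actual fix tracked in BZ 728153):

# in /etc/init.d/libvirtd start(): bail out if upstart already manages the daemon
if initctl status libvirtd 2>/dev/null | grep -q 'start/running'; then
    echo "libvirtd is managed by upstart; use 'initctl restart libvirtd' instead"
    exit 1
fi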

Comment 3 Jiri Denemark 2011-08-26 15:43:30 UTC

*** This bug has been marked as a duplicate of bug 728153 ***

