Bug 1308903

Summary: Renaming a VM results in an inability to restart it
Product: Red Hat Enterprise Linux 7
Component: libvirt
Version: 7.2
Hardware: Unspecified
OS: Unspecified
Status: CLOSED DUPLICATE
Severity: unspecified
Priority: unspecified
Keywords: Regression
Target Milestone: rc
Reporter: Vered Volansky <vered>
Assignee: Pavel Hrdina <phrdina>
QA Contact: Virtualization Bugs <virt-bugs>
CC: amureini, rbalakri
Doc Type: Bug Fix
Type: Bug
Last Closed: 2016-03-18 08:28:38 UTC

Description Vered Volansky 2016-02-16 12:19:53 UTC
Description of problem:
A VM cannot be started again after it has been shut down and renamed.

Version-Release number of selected component (if applicable):
Virtual Machine Manager, version 1.3.2
libvirt-1.2.17-13.el7_2.2.x86_64
qemu-kvm-ev-2.3.0-31.el7_2.7.1.x86_64
qemu-kvm 1.5.3-105.el7_2.3.x86_64


RHEL 7.2 installed on the host.

Kernel - 3.10.0-327.4.5.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Set up a VM with a Linux guest OS (tried with Fedora 22 and RHEL 7.2).
2. Shut down the VM (using Shut Down).
3. VM status is now Shutoff.
4. Rename the VM.
5. Press Run (fails; see the check sketched below).
6. Rename the VM back to the original name it was created with.
7. Press Run (succeeds...)
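
For anyone reproducing from the shell, here is a hedged check to run between steps 4 and 5; the domain name "fedora22-renamed" is hypothetical, so substitute whatever name was chosen in step 4. It should show that the guest-agent channel in the domain XML still points at a socket directory derived from the old name:

    # Hypothetical domain name; substitute the new name given in step 4.
    # Prints the guest-agent channel definition carried over from before the rename.
    virsh dumpxml fedora22-renamed | grep -A 3 'org.qemu.guest_agent.0'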

Actual results after step 5:
Error starting domain: internal error: process exited while connecting to monitor: 2016-02-16T11:58:29.173694Z qemu-kvm: -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain-fedora22/org.qemu.guest_agent.0,server,nowait: Failed to bind socket to /var/lib/libvirt/qemu/channel/target/domain-fedora22/org.qemu.guest_agent.0: No such file or directory


Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 90, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 126, in tmpcb
    callback(*args, **kwargs)
  File "/usr/share/virt-manager/virtManager/libvirtobject.py", line 83, in newfn
    ret = fn(self, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/domain.py", line 1402, in startup
    self._backend.create()
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1029, in create
    if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
libvirtError: internal error: process exited while connecting to monitor: 2016-02-16T11:58:29.173694Z qemu-kvm: -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain-fedora22/org.qemu.guest_agent.0,server,nowait: Failed to bind socket to /var/lib/libvirt/qemu/channel/target/domain-fedora22/org.qemu.guest_agent.0: No such file or directory
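
The failing path in the traceback comes from the guest-agent channel definition kept in the domain XML. As an illustration only (the path is copied from the error above; the element layout is the usual libvirt form, not a dump from this machine), that channel typically looks like:

    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-fedora22/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
    </channel>

After the rename, libvirt presumably prepares a channel directory named after the new domain name, while the XML still points at the old domain-fedora22 directory, which would explain the "No such file or directory" bind failure.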

BTW, libvirtd status, which was nice and clean before, now contains:

Feb 16 10:54:32 galactica.tlv.redhat.com libvirtd[28313]: Domain id=2 name='rhel7.2Template' uuid=8772bd23-058d-4a97-96b1-700e54912f81 is tainted: high-privileges
Feb 16 13:58:29 galactica.tlv.redhat.com libvirtd[28313]: Domain id=3 name='fedora22Template' uuid=799eaca8-1827-4543-8881-ace233b942d3 is tainted: high-privileges
Feb 16 13:58:29 galactica.tlv.redhat.com libvirtd[28313]: failed to connect to monitor socket: No such process
Feb 16 13:58:29 galactica.tlv.redhat.com libvirtd[28313]: internal error: process exited while connecting to monitor: 2016-02-16T11:58:29.173694Z qemu-kvm: -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain-fedora22/org.qemu.guest_agent.0,server,nowait: Failed to bind socket to /var/lib/libvirt/qemu/channel/target/domain-fedora22/org.qemu.guest_agent.0: No such file or directory

These log entries show the renamed VMs. The "tainted: high-privileges" message also appears when a restart does succeed; I believe it is caused by how qemu is run under libvirt and has no real bearing on this use case.

Expected results:
The renamed VM restarts successfully.

Let me know if any additional info is needed.
If you have a workaround that allows renaming a VM and then starting it, I would greatly appreciate it.
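
One possible workaround, sketched under the assumption that the stale guest-agent channel path is the only thing blocking startup (untested here): edit the renamed domain's XML and remove the explicit socket path so libvirt can generate a path that matches the current domain name on the next start.

    # Hypothetical domain name; "virsh edit" opens the persistent XML in an editor.
    virsh edit fedora22-renamed
    # In the <channel type='unix'> element whose target name is org.qemu.guest_agent.0,
    # delete the path='...' attribute from <source> (or the whole <source> element),
    # save, and press Run again.

Renaming the VM back to its original name, as in steps 6 and 7 above, of course also works around the problem.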

I did not encounter this behaviour in previous setups, so I tagged this as a regression. I have certainly renamed a VM before and could successfully restart it.

Comment 3 Pavel Hrdina 2016-03-18 08:28:38 UTC

*** This bug has been marked as a duplicate of bug 1278068 ***