Bug 1244564 - virtio-serial is successfully detached while the guest is in S3 state
Summary: virtio-serial is successfully detached while the guest is in S3 state
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm
Version: 7.2
Hardware: x86_64
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Amit Shah
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: Virt-S3/S4-7.0
 
Reported: 2015-07-20 02:30 UTC by zhenfeng wang
Modified: 2016-07-04 16:25 UTC (History)
9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-07-04 16:25:49 UTC
Target Upstream Version:
Embargoed:



Description zhenfeng wang 2015-07-20 02:30:45 UTC
Description of problem:
The guest agent channel always stays in disconnected status after re-attaching the guest agent to a guest that has just woken up from pmsuspended status.

Version-Release number of selected component (if applicable):
libvirt-1.2.17-1.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1.Start a guest with the guest agent installed
#virsh dumpxml 7.0
--
  <pm>
    <suspend-to-mem enabled='yes'/>
    <suspend-to-disk enabled='yes'/>
  </pm>

--
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/7.0.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>

2.After the guest starts successfully, suspend it to memory (S3)
# virsh dompmsuspend 7.0 --target mem
Domain 7.0 successfully suspended

# virsh list
 Id    Name                           State
----------------------------------------------------
 26    7.0                            pmsuspended

3.Prepare the guest agent xml
#cat agent.xml
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/7.0.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
      <alias name='channel1'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>


4.Detach the guest agent with the agent XML, then wake up the guest
# virsh detach-device 7.0 agent.xml 
Device detached successfully

# virsh dompmwakeup 7.0
Domain 7.0 successfully woken up

# virsh list --all
 Id    Name                           State
----------------------------------------------------
 26    7.0                            running

5.Re-attach the guest agent to the guest. The guest agent does not return to 'connected' status, and virsh commands that depend on the guest agent fail.
#virsh attach-device 7.0 agent.xml
#virsh dumpxml 7.0
--
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/7.0.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>

# virsh domtime 7.0
error: Guest agent is not responding: QEMU guest agent is not connected

6.Restart the libvirtd service; the guest agent then appears in 'disconnected' status
#systemctl restart libvirtd

# virsh dumpxml 7.0 |grep agent
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/7.0.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>

# time virsh domtime 7.0
error: Guest agent is not responding: Guest agent not available for now


real	0m5.009s
user	0m0.006s
sys	0m0.003s
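For scripting the checks in steps 5 and 6, the channel state can be read straight from the domain XML instead of eyeballing `virsh dumpxml` output. A minimal sketch using only the Python standard library; the two XML fragments are copied from the dumpxml output above (note that in step 5 libvirt emits no `state` attribute at all, while after the libvirtd restart it reports `state='disconnected'`):

```python
import xml.etree.ElementTree as ET

# Channel as reported in step 5, right after attach-device (no state attribute)
after_attach_xml = """
<channel type='unix'>
  <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/7.0.org.qemu.guest_agent.0'/>
  <target type='virtio' name='org.qemu.guest_agent.0'/>
  <alias name='channel0'/>
  <address type='virtio-serial' controller='0' bus='0' port='2'/>
</channel>
"""

# Channel as reported in step 6, after restarting libvirtd
after_restart_xml = """
<channel type='unix'>
  <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/7.0.org.qemu.guest_agent.0'/>
  <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
  <alias name='channel0'/>
  <address type='virtio-serial' controller='0' bus='0' port='2'/>
</channel>
"""

def agent_state(channel_fragment):
    """Return the guest-agent channel state ('connected'/'disconnected'),
    or None when libvirt reports no state attribute (as in step 5)."""
    target = ET.fromstring(channel_fragment).find("target")
    return target.get("state")

print(agent_state(after_attach_xml))   # -> None
print(agent_state(after_restart_xml))  # -> disconnected
```

In a real check the fragment would come from `virsh dumpxml <domain>`; the hard-coded strings here just mirror the outputs shown above.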

7.Log in to the guest; the qemu-guest-agent service is still running.

Actual results:
The guest agent channel always stays disconnected after re-attaching the guest agent to the guest.

Expected results:
The guest agent should be in 'connected' status and work as expected after re-attaching it to the guest.

Additional info:

Comment 3 Peter Krempa 2015-11-30 16:12:22 UTC
I've tried to reproduce the bug. An attempt to detach the device while the guest was in S3 state succeeded, so libvirt cleared the device from the XML.

2015-11-30 15:52:14.133+0000: 1548142: info : qemuMonitorIOWrite:526 : QEMU_MONITOR_IO_WRITE: mon=0x7f7e84004410 buf={"execute":"device_del","arguments":{"id":"channel0"},"id":"libvirt-99"}
 len=74 ret=74 errno=0
2015-11-30 15:52:14.134+0000: 1548142: info : qemuMonitorIOProcess:421 : QEMU_MONITOR_IO_PROCESS: mon=0x7f7e84004410 buf={"timestamp": {"seconds": 1448898734, "microseconds": 134281}, "event": "DEVICE_DELETED", "data": {"device": "channel0", "path": "/machine/peripheral/channel0"}}
 len=163
2015-11-30 15:52:14.134+0000: 1548142: debug : qemuMonitorJSONIOProcessLine:186 : Line [{"timestamp": {"seconds": 1448898734, "microseconds": 134281}, "event": "DEVICE_DELETED", "data": {"device": "channel0", "path": "/machine/peripheral/channel0"}}]
2015-11-30 15:52:14.134+0000: 1548142: info : qemuMonitorJSONIOProcessLine:201 : QEMU_MONITOR_RECV_EVENT: mon=0x7f7e84004410 event={"timestamp": {"seconds": 1448898734, "microseconds": 134281}, "event": "DEVICE_DELETED", "data": {"device": "channel0", "path": "/machine/peripheral/channel0"}}

The DEVICE_DELETED event is returned immediately. After resuming the guest, the /dev/virtio-ports/ entry for the unplugged device was still present; thus the device was removed without the guest OS knowing about it.
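The DEVICE_DELETED notification in the log above is a plain QMP JSON event emitted by QEMU on the monitor socket. A small sketch (Python stdlib; the event line is copied verbatim from the libvirt monitor log above) showing the fields libvirt acts on:

```python
import json

# QMP event line as received by libvirt's QEMU monitor (from the log above)
event_line = ('{"timestamp": {"seconds": 1448898734, "microseconds": 134281}, '
              '"event": "DEVICE_DELETED", '
              '"data": {"device": "channel0", "path": "/machine/peripheral/channel0"}}')

event = json.loads(event_line)

# libvirt matches the event name and the device alias it assigned ("channel0")
# before removing the device from the domain XML
assert event["event"] == "DEVICE_DELETED"
print(event["data"]["device"])  # -> channel0
print(event["data"]["path"])   # -> /machine/peripheral/channel0
```

The point of the analysis is that this event arrives immediately even though the suspended guest never processed the hot-unplug, so libvirt's view and the guest's view diverge.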

Reassigning to qemu.

Comment 5 Amit Shah 2016-07-04 16:25:49 UTC
It looks like you're doing:

1. Start guest
2. Put guest in S3 state
3. Detach virtio-serial port
4. Resume guest

This isn't expected to work, as Linux does not expect hardware to change while it is in the suspend state.

Marking this as NOTABUG, but please reopen if there's something else that's happening.

