Bug 985749 - VM can't be shut down completely
Summary: VM can't be shut down completely
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Virtualization Maintenance
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-07-18 07:04 UTC by shalong228
Modified: 2014-10-26 10:48 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-10-26 10:48:08 UTC
Target Upstream Version:
Embargoed:


Attachments:

Description shalong228 2013-07-18 07:04:57 UTC
Description of problem:
We run a SUSE VM on a hypervisor; the hypervisor is RHEL 6.3. When I run shutdown inside the VM, the guest shuts down completely, but the VM's state on the hypervisor changes from running to paused and appears to stay paused forever. The expected final state is "shut off".

information on vm:

menu.lst:
title vm
    root (hd0,0)
    kernel /bzImage ro root=LABEL=/2 acpi=force apm=power_off psmouse.proto=bare console=ttyS0,115200 console=tty0
    initrd /vm_initrd.gz
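
Since the guest relies on "acpi=force apm=power_off" on its kernel command line to power off, one optional sanity check inside the guest (not something captured in this report, just a suggestion) is to confirm that ACPI actually initialized:

# cat /proc/cmdline
# dmesg | grep -i acpi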

information on hypervisor:

# uname -a
Linux vm117 2.6.32-279.el6.x86_64 #1 SMP Wed Jun 13 18:24:36 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux

# cat /etc/redhat-release
Red Hat Enterprise Linux Workstation release 6.3 (Santiago)

Version-Release number of selected component (if applicable):

# cat /etc/redhat-release
Red Hat Enterprise Linux Workstation release 6.3 (Santiago)


How reproducible:


Steps to Reproduce:
1. virsh start vm
2. In the VM, issue the shutdown command to shut down the guest
3. On the hypervisor, check the VM state with "virsh list --all"

Actual results:
The VM is in the paused state, rather than "shut off".

Expected results:

The VM should be in the "shut off" state after the guest shuts down.


Additional info:
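
A minimal way to see on the hypervisor why the domain is stuck in the paused state (assuming the domain is named "vm" as in the steps above, and that this virsh supports the --reason flag):

# virsh list --all
# virsh domstate vm --reason

The reason reported alongside "paused" should help tell a stalled guest-initiated shutdown apart from a host-side suspend.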

Comment 2 shalong228 2013-07-26 09:12:47 UTC
I deployed the VM on RHEL 6.1, 6.2, 6.3, and 6.4; the shutdown issue only occurs on RHEL 6.3. So I think the root cause is not in the VM but in RHEL 6.3.

Comment 3 caobbu 2013-07-31 02:56:38 UTC
I think you should supply the domain XML of the VM, and also provide more information about libvirt and the SUSE guest.

What versions are you using?

Comment 4 shalong228 2013-07-31 06:31:06 UTC
# rpm -qa|grep libvirt
libvirt-devel-0.9.10-21.el6.x86_64
libvirt-java-devel-0.4.7-1.el6.noarch
libvirt-client-0.9.10-21.el6.x86_64
libvirt-java-0.4.7-1.el6.noarch
libvirt-0.9.10-21.el6.x86_64
libvirt-python-0.9.10-21.el6.x86_64

domain.xml: 

<domain type='kvm'>
  <name>sha</name>
  <uuid>eb18e6fa-3f6d-7a0c-97c2-a5efb261008d</uuid>
  <memory unit='KiB'>6000000</memory>
  <currentMemory unit='KiB'>6000000</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <os>
    <type arch='x86_64' machine='rhel6.3.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/home/shalong/vmdisk1.img'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/home/shalong/dvmdisk2.img'/>
      <target dev='vdb' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>

    <controller type='usb' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='00:1a:6a:32:19:a2'/>
      <source bridge='br0'/>
      <model type='e1000'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <interface type='bridge'>
      <mac address='00:1a:b4:21:37:62'/>
      <source bridge='br0'/>
      <model type='e1000'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </interface>
    <serial type='file'>
      <source path='/home/shalong/boot.log'/>
      <target port='0'/>
    </serial>
    <console type='file'>
      <source path='/home/shalong/boot.log'/>
      <target type='serial' port='0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='cirrus' vram='9216' heads='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0c' function='0x0'/>
    </memballoon>
  </devices>
</domain>
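
Given <on_poweroff>destroy</on_poweroff> above, libvirt is expected to tear the domain down once QEMU reports that the guest powered off, so it can help to check what QEMU itself thinks the state is. A rough sketch (assuming the domain name "sha" from this XML and that this libvirt build exposes qemu-monitor-command):

# virsh qemu-monitor-command sha --hmp 'info status'
# tail -n 50 /var/log/libvirt/qemu/sha.log

If QEMU still reports the VM as paused, the problem is likely below libvirt; if QEMU already reported a shutdown, the hang is more likely in how libvirt handled the event.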

Comment 5 RHEL Program Management 2013-10-13 23:22:16 UTC
This request was evaluated by Red Hat Product Management for
inclusion in the current release of Red Hat Enterprise Linux.
Because the affected component is not scheduled to be updated
in the current release, Red Hat is unable to address this
request at this time.

Red Hat invites you to ask your support representative to
propose this request, if appropriate, in the next release of
Red Hat Enterprise Linux.

Comment 6 Jaroslav Škarvada 2014-10-24 12:21:06 UTC
It doesn't seem to be acpid bug, as the event is propagated. Reassigning to qemu-kvm for further investigation.
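
One way to double-check the event path on the host would be to turn up libvirt's QEMU-driver logging and reproduce the in-guest shutdown. The settings below are illustrative assumptions (they go into /etc/libvirt/libvirtd.conf and libvirtd must be restarted), not something already tried in this bug:

log_filters="1:qemu"
log_outputs="1:file:/var/log/libvirt/libvirtd-debug.log"

# service libvirtd restart
# grep -iE 'shutdown|stop|paused' /var/log/libvirt/libvirtd-debug.log

This should show whether libvirt received a shutdown event from QEMU and then left the domain paused instead of destroying it.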

Comment 7 Ronen Hod 2014-10-26 10:48:08 UTC
Dear shalong228,

Thank you for taking the time to enter a bug report with us. We appreciate the feedback and look to use reports such as this to guide our efforts at improving our products. That being said, this bug tracking system is not a mechanism for requesting support, and we are not able to guarantee the timeliness or suitability of a resolution.

If this issue is critical or in any way time sensitive, please raise a ticket through your regular Red Hat support channels to make certain it receives the proper attention and prioritization to assure a timely resolution.
 
For information on how to contact the Red Hat production support team, please visit:
https://www.redhat.com/support/process/production/#howto

In any case, Red Hat does not provide fixes for RHEL 6.3.

