Bug 1191353 - "dracut Warning: poweroff failed" when trying to shut down a "q35" guest after S3
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Amit Shah
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: Virt-S3/S4-7.0
 
Reported: 2015-02-11 06:12 UTC by Qian Guo
Modified: 2016-03-28 08:30 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-03-04 06:02:37 UTC
Target Upstream Version:
Embargoed:



Description Qian Guo 2015-02-11 06:12:59 UTC
Description of problem:
Do S3 and resume inside a RHEL 7.1 guest running on the q35 machine type, then try to shut the guest down; it prompts:

dracut Warning: poweroff failed
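
For reference, the warning can be watched from the host over the guest's serial console, which the reproducer command below exposes as a UNIX socket at /tmp/s1 (the socat invocation is only an illustrative sketch; nc -U /tmp/s1 works as well):

# socat -,raw,echo=0 unix-connect:/tmp/s1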

Version-Release number of selected component (if applicable):
qemu-kvm-rhev-2.1.2-23.el7.x86_64

guest/host kernel-3.10.0-229.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Boot a RHEL 7.1 guest with the q35 machine type:
# /usr/libexec/qemu-kvm -name rhel7 -S -machine pc-q35-rhel7.1.0,accel=kvm,usb=off \
    -m 4096 -cpu SandyBridge -realtime mlock=on -sandbox on \
    -smp 4,maxcpus=4,sockets=4,cores=1,threads=1 -no-user-config -nodefaults -boot menu=on \
    -drive file=/home/rhel7.1q35/rhel7.1q35.qcow2,if=none,id=drive-blk-disk,format=qcow2,cache=writethrough,werror=stop,rerror=stop \
    -device virtio-blk-pci,scsi=off,bus=pcie.0,addr=0x2,drive=drive-blk-disk,id=virtio-disk0 \
    -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vgamem_mb=16,bus=pcie.0,addr=0x3 \
    -device ich9-intel-hda,id=sound0,bus=pcie.0,addr=0x4 \
    -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 \
    -device virtio-balloon-pci,id=balloon0,bus=pcie.0,addr=0x6 \
    -monitor stdio -qmp tcp:0:4444,server,nowait -serial unix:/tmp/s1,server,nowait \
    -spice port=5900,disable-ticketing \
    -netdev tap,id=hostnet0,vhost=on,fds=5:15:25:35 5<>/dev/tap5 15<>/dev/tap5 25<>/dev/tap5 35<>/dev/tap5 \
    -device virtio-net-pci,netdev=hostnet0,id=virtio-net-pci0,mac=0e:49:5d:6e:f9:f5,mq=on,vectors=10,bus=pcie.0,addr=0x7 \
    -global ICH9-LPC.disable_s3=0 -global ICH9-LPC.disable_s4=0

2. Inside the guest, do S3 and then resume (a sketch of the guest-side commands follows these steps)

3. Shut down the guest:
guest# shutdown -h now
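
A minimal sketch of steps 2 and 3, assuming S3 is triggered through /sys/power/state (pm-suspend or systemctl suspend should behave the same) and the guest is woken up from the QEMU monitor running on stdio:

guest# echo mem > /sys/power/state    # enter S3 (suspend-to-RAM)
(qemu) system_wakeup                  # resume the guest from the HMP monitor
guest# shutdown -h now                # the shutdown that then fails with the dracut warning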

Actual results:
The guest cannot be shut down; it prompts:

"dracut Warning: poweroff failed"

and the qemu-kvm process does not exit.

Expected results:
The guest shuts down successfully.

Additional info:
1. With -machine pc, this issue is not hit.

2. If S3 is not done first, the issue is not hit (a quick check that S3 is actually advertised in the guest is sketched below).
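
A quick guest-side check (just a sketch, assuming a standard RHEL 7.1 guest) to confirm that S3 is actually exposed to the guest when the ICH9-LPC disable_s3=0 global is set:

guest# cat /sys/power/state                  # "mem" should be listed when S3 is available
guest# dmesg | grep -i "ACPI: (supports"     # ACPI-reported sleep states, e.g. "S0 S3 S4 S5"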

