Bug 714811
| Summary: | Resumed VM consumes 100% CPU. Console frozen, not pingable. | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 5 | Reporter: | Michael Closson <closms> |
| Component: | kvm | Assignee: | Amit Shah <amit.shah> |
| Status: | CLOSED WONTFIX | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 5.6 | CC: | bcao, gyue, juzhang, mkenneth, quintela, rhod, shuang, tburke, virt-maint |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2012-04-09 10:33:14 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 580946, 807971 | | |
Description

Michael Closson, 2011-06-20 20:51:27 UTC
Guest s3/s4 with virtio devices isn't yet supported, and is unlikely to be supported in RHEL5 releases. From your command line, it looks like you only have the balloon virtio device. If you're not using it, can you disable it and then check whether suspend/resume works as expected?

Amit.

Amit, what is guest s3/s4? Also, I used virsh edit to remove <memballoon model='virtio'/>, but when I power on the VM it is added back again automatically. I'll find out why.

http://fossplanet.com/f13/%5Blibvirt%5D-%5Bpatch%5D-docs-document-how-disable-memballoon-63997/

Google knows everything.

Just to confirm that the balloon param was removed:

```
[root@blue07 qemu]# ps -ef | grep kvm
root 25297 1 27 00:12 ? 00:00:07 /usr/libexec/qemu-kvm -S -M rhel5.4.0 -m 512 -smp 1,sockets=1,cores=1,threads=1 -name _vm_lsf_dyn__385 -uuid 5c7bb027-acb9-4e41-9ea9-48576466a7f7 -monitor unix:/var/lib/libvirt/qemu/_vm_lsf_dyn__385.monitor,server,nowait -no-kvm-pit-reinjection -boot c -drive file=/VMOxen/storage/share/SR/1c0b78ea-0580-4b51-8ec9-9b95d0543033/VM/images/_vm_lsf_dyn__385_1.img,if=ide,bus=0,unit=0,format=raw,cache=none -drive file=/VMOxen/storage/share/SR/1c0b78ea-0580-4b51-8ec9-9b95d0543033/VM/images/_vm_lsf_dyn__385.iso,if=ide,media=cdrom,bus=1,unit=0,readonly=on,format=raw -net nic,macaddr=00:16:3e:55:59:82,vlan=0 -net tap,fd=20,vlan=0 -serial pty -parallel none -usb -vnc 127.0.0.1:0 -k en-us -vga cirrus
root 25334 1 27 00:12 ? 00:00:07 /usr/libexec/qemu-kvm -S -M rhel5.4.0 -m 512 -smp 1,sockets=1,cores=1,threads=1 -name _vm_lsf_dyn__392 -uuid 828606b9-fcc4-4159-873e-bcc132e1869e -monitor unix:/var/lib/libvirt/qemu/_vm_lsf_dyn__392.monitor,server,nowait -no-kvm-pit-reinjection -boot c -drive file=/VMOxen/storage/share/SR/1c0b78ea-0580-4b51-8ec9-9b95d0543033/VM/images/_vm_lsf_dyn__392_1.img,if=ide,bus=0,unit=0,format=raw,cache=none -drive file=/VMOxen/storage/share/SR/1c0b78ea-0580-4b51-8ec9-9b95d0543033/VM/images/_vm_lsf_dyn__392.iso,if=ide,media=cdrom,bus=1,unit=0,readonly=on,format=raw -net nic,macaddr=00:16:3e:ff:dc:ce,vlan=0 -net tap,fd=20,vlan=0 -serial pty -parallel none -usb -vnc 127.0.0.1:1 -k en-us -vga cirrus
root 25374 1 29 00:12 ? 00:00:07 /usr/libexec/qemu-kvm -S -M rhel5.4.0 -m 512 -smp 1,sockets=1,cores=1,threads=1 -name _vm_lsf_dyn__394 -uuid cd9c1617-2d8a-4efe-afc3-31d3e56696f2 -monitor unix:/var/lib/libvirt/qemu/_vm_lsf_dyn__394.monitor,server,nowait -no-kvm-pit-reinjection -boot c -drive file=/VMOxen/storage/share/SR/1c0b78ea-0580-4b51-8ec9-9b95d0543033/VM/images/_vm_lsf_dyn__394_1.img,if=ide,bus=0,unit=0,format=raw,cache=none -drive file=/VMOxen/storage/share/SR/1c0b78ea-0580-4b51-8ec9-9b95d0543033/VM/images/_vm_lsf_dyn__394.iso,if=ide,media=cdrom,bus=1,unit=0,readonly=on,format=raw -net nic,macaddr=00:16:3e:d8:b9:86,vlan=0 -net tap,fd=22,vlan=0 -serial pty -parallel none -usb -vnc 127.0.0.1:2 -k en-us -vga cirrus
root 25405 1 28 00:12 ? 00:00:07 /usr/libexec/qemu-kvm -S -M rhel5.4.0 -m 512 -smp 1,sockets=1,cores=1,threads=1 -name _vm_lsf_dyn__395 -uuid 5121750a-96d7-4bf3-80f3-da85f428152d -monitor unix:/var/lib/libvirt/qemu/_vm_lsf_dyn__395.monitor,server,nowait -no-kvm-pit-reinjection -boot c -drive file=/VMOxen/storage/share/SR/1c0b78ea-0580-4b51-8ec9-9b95d0543033/VM/images/_vm_lsf_dyn__395_1.img,if=ide,bus=0,unit=0,format=raw,cache=none -drive file=/VMOxen/storage/share/SR/1c0b78ea-0580-4b51-8ec9-9b95d0543033/VM/images/_vm_lsf_dyn__395.iso,if=ide,media=cdrom,bus=1,unit=0,readonly=on,format=raw -net nic,macaddr=00:16:3e:68:88:47,vlan=0 -net tap,fd=22,vlan=0 -serial pty -parallel none -usb -vnc 127.0.0.1:3 -k en-us -vga cirrus
```
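For reference, the patch discussed at the fossplanet link above documents a way to keep the device out without patching libvirt: rather than deleting the element (which libvirt silently re-adds as a default), a sufficiently new libvirt accepts an explicit "none" model. A minimal sketch, assuming a libvirt recent enough to honor model='none' (the domain name is taken from the ps output above):

```bash
# Hypothetical: disable the balloon device explicitly instead of removing the
# element. libvirt re-inserts a default <memballoon> when the element is
# absent, but honors an explicit 'none' model on versions that support it.
virsh edit _vm_lsf_dyn__385
#   ...in the <devices> section, change:
#     <memballoon model='virtio'/>
#   to:
#     <memballoon model='none'/>

# Confirm the running command line no longer carries "-balloon virtio":
ps -ef | grep qemu-kvm | grep -v grep
```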
(In reply to comment #2)
> Amit, what is guest s3/s4?

It means suspend-to-memory or suspend-to-disk from within a guest. After removing the virtio-balloon device, does suspend/resume work fine?

It seems that the libvirt that comes with RHEL 5.6 _always_ enables the balloon device. The output in comment #4 is with libvirt-0.9.1; in that test environment the problem occurred. I rolled back the libvirt RPMs and installed the standard libvirt that comes with RHEL 5.6, but I cannot get that libvirt to disable the balloon device (unless I make a code change and rebuild the RPMs). I think the test with libvirt 0.9.2 is still valid: the same kvm RPMs without the balloon device cause the same behaviour. As before, I had to let the stress test run for an hour before the bug happened.

Just blacklisting the virtio-balloon module in the guest will work as well.

I disabled the virtio_balloon module by renaming the file and then rebooting.

In the VM:

```
[root@localhost ~]# uptime
09:08:44 up 1 min, 2 users, load average: 0.48, 0.22, 0.08
[root@localhost ~]# ls -l /lib/modules/2.6.18-238.el5/kernel/drivers/virtio/
total 192
-rwxr--r-- 1 root root 44608 Dec 19 2010 virtio_balloon.ko.XXX
-rwxr--r-- 1 root root 43024 Dec 19 2010 virtio.ko
-rwxr--r-- 1 root root 45944 Dec 19 2010 virtio_pci.ko
-rwxr--r-- 1 root root 40808 Dec 19 2010 virtio_ring.ko
[root@localhost ~]# lsmod | grep virtio
virtio_net  48193  0
virtio_blk  41673  3
virtio_pci  41545  0
virtio_ring 37953  1 virtio_pci
virtio      39365  3 virtio_net,virtio_blk,virtio_pci
```

On the hypervisor:

```
[root@delamd06 ~]# ps -ef | grep kvm
root 24905 1 25 08:57 ? 00:03:18 /usr/libexec/qemu-kvm -S -M rhel5.4.0 -m 1024 -smp 1,sockets=1,cores=1,threads=1 -name tmpl -uuid 6dca4b19-e48b-04ae-d3d8-f206e57a75a8 -monitor unix:/var/lib/libvirt/qemu/tmpl.monitor,server,nowait -no-kvm-pit-reinjection -boot c -drive file=/VMOxen/storage/share/SR/755d80a2-d9fb-43cc-ba3e-a83409796669/template/images/RHEL56_32G_1.img,if=virtio,boot=on,format=qcow2,cache=none -net nic,macaddr=54:52:00:74:78:9f,vlan=0,model=virtio -net tap,fd=18,vlan=0 -serial pty -parallel none -usb -vnc 127.0.0.1:0 -k en-us -vga cirrus -balloon virtio
[root@delamd06 ~]# rpm -qa | grep libvirt
libvirt-0.8.2-15.el5
libvirt-python-0.8.2-15.el5
libvirt-0.8.2-15.el5
```

I made the change in the VM template, then set up the test case and let it run. After about 6 hours I didn't see the bug again. I'll continue to monitor it.
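For what it's worth, Amit's blacklist suggestion above achieves the same thing more durably, since a renamed .ko has to be renamed again after every kernel update. A minimal sketch of the persistent variant, run inside the guest, assuming the stock RHEL 5 module-init-tools layout (the file name is hypothetical):

```bash
# Hypothetical persistent alternative to renaming virtio_balloon.ko:
# blacklist the module so it is never auto-loaded in the guest.
echo "blacklist virtio_balloon" >> /etc/modprobe.d/blacklist-virtio-balloon
reboot

# After the reboot, verify the module is gone:
lsmod | grep virtio_balloon || echo "virtio_balloon not loaded"
```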
You need to disable all virtio devices -- net, blk. No virtio devices can handle hibernate yet.

Amit, I want to make sure we're on the same page here. I'm suspending the VM by running the command "virsh save <domain id> <state file>", not via the suspend-to-memory/suspend-to-disk features of Linux/Windows that also work on a physical machine. Does virtio support this?

(In reply to comment #10)
> I'm suspending the VM by running the command "virsh save <domain id>
> <state file>", not via the suspend-to-memory/suspend-to-disk features of
> Linux/Windows that also work on a physical machine.

Aha, I hadn't seen that mentioned anywhere yet, or I missed it. That indeed shouldn't cause a problem with virtio. Do you have the logs for the qemu process corresponding to the guest that gets stuck in /var/log/libvirt/qemu/? Please upload them here.

Can you re-try with -M rhel5.6? rhel5.4 had issues with not saving some kvmclock fields.

Dor, I started testing the case you suggest now. I restored the virtio_balloon module in the template and changed the config from:

```
<os>
  <type arch='x86_64' machine='rhel5.4.0'>hvm</type>
  <boot dev='hd'/>
</os>
```

to:

```
<os>
  <type arch='x86_64' machine='rhel5.6.0'>hvm</type>
  <boot dev='hd'/>
</os>
```

I'll post the results later.

```
[root@delamd05 qemu]# ps -ef | grep kvm
root 14012 1 36 12:31 ? 00:01:16 /usr/libexec/qemu-kvm -S -M rhel5.6.0 -m 2048 -smp 1,sockets=1,cores=1,threads=1 -name _vm_lsf_dyn__74 -uuid 5c3872c2-5538-485a-a603-1bda5ca59598 -monitor unix:/var/lib/libvirt/qemu/_vm_lsf_dyn__74.monitor,server,nowait -no-kvm-pit-reinjection -boot c -drive file=/VMOxen/storage/share/SR/755d80a2-d9fb-43cc-ba3e-a83409796669/VM/images/_vm_lsf_dyn__74_1.img,if=virtio,boot=on,format=qcow2,cache=none -drive file=/VMOxen/storage/share/SR/755d80a2-d9fb-43cc-ba3e-a83409796669/VM/images/_vm_lsf_dyn__74.iso,if=ide,media=cdrom,bus=1,unit=0,readonly=on,format=raw -net nic,macaddr=00:16:3e:f8:66:e6,vlan=0,model=virtio -net tap,fd=54,vlan=0 -serial pty -parallel none -usb -vnc 127.0.0.1:3 -k en-us -vga cirrus -balloon virtio
root 14599 1 11 12:33 ? 00:00:08 /usr/libexec/qemu-kvm -S -M rhel5.6.0 -m 2048 -smp 1,sockets=1,cores=1,threads=1 -name _vm_lsf_dyn__70 -uuid 50e7a6d8-baf5-4c54-99ea-13ac1ee08066 -monitor unix:/var/lib/libvirt/qemu/_vm_lsf_dyn__70.monitor,server,nowait -no-kvm-pit-reinjection -boot c -drive file=/VMOxen/storage/share/SR/755d80a2-d9fb-43cc-ba3e-a83409796669/VM/images/_vm_lsf_dyn__70_1.img,if=virtio,boot=on,format=qcow2,cache=none -drive file=/VMOxen/storage/share/SR/755d80a2-d9fb-43cc-ba3e-a83409796669/VM/images/_vm_lsf_dyn__70.iso,if=ide,media=cdrom,bus=1,unit=0,readonly=on,format=raw -net nic,macaddr=00:16:3e:c8:6c:d3,vlan=0,model=virtio -net tap,fd=55,vlan=0 -serial pty -parallel none -usb -vnc 127.0.0.1:0 -k en-us -vga cirrus -incoming exec:cat -balloon virtio
root 14799 1 12 12:33 ? 00:00:09 /usr/libexec/qemu-kvm -S -M rhel5.6.0 -m 2048 -smp 1,sockets=1,cores=1,threads=1 -name _vm_lsf_dyn__68 -uuid 3f9a2f77-7601-4424-8302-43070f8943f2 -monitor unix:/var/lib/libvirt/qemu/_vm_lsf_dyn__68.monitor,server,nowait -no-kvm-pit-reinjection -boot c -drive file=/VMOxen/storage/share/SR/755d80a2-d9fb-43cc-ba3e-a83409796669/VM/images/_vm_lsf_dyn__68_1.img,if=virtio,boot=on,format=qcow2,cache=none -drive file=/VMOxen/storage/share/SR/755d80a2-d9fb-43cc-ba3e-a83409796669/VM/images/_vm_lsf_dyn__68.iso,if=ide,media=cdrom,bus=1,unit=0,readonly=on,format=raw -net nic,macaddr=00:16:3e:50:a6:6a,vlan=0,model=virtio -net tap,fd=63,vlan=0 -serial pty -parallel none -usb -vnc 127.0.0.1:2 -k en-us -vga cirrus -incoming exec:cat -balloon virtio
root 14989 4726 0 12:35 pts/0 00:00:00 grep kvm
```

Sometimes virt-manager freezes up.
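The backtrace below was presumably captured by attaching gdb to the frozen process; a minimal sketch of that kind of capture (process lookup and output paths are hypothetical, and readable symbols require the matching debuginfo packages):

```bash
# Hypothetical capture of the stack traces shown below: attach gdb to the
# hung process and dump every thread's backtrace non-interactively.
gdb -batch -ex "info threads" -ex "thread apply all bt" \
    -p "$(pgrep -f virt-manager | head -n1)" > /tmp/virt-manager-bt.txt

# The same recipe applies to libvirtd for the daemon-side trace further down:
gdb -batch -ex "thread apply all bt" -p "$(pidof libvirtd)" > /tmp/libvirtd-bt.txt
```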
```
(gdb) info threads
* 1 Thread 0x2b59462ef170 (LWP 4461)  0x0000003a802cb2e6 in poll () from /lib64/libc.so.6
(gdb) bt
#0  0x0000003a802cb2e6 in poll () from /lib64/libc.so.6
#1  0x000000333feb05d2 in remoteIOEventLoop (conn=0x1ffcdb20, priv=0x1ffd1020, in_open=0, thiscall=0x20556db0) at remote/remote_driver.c:9657
#2  0x000000333feb10bd in remoteIO (conn=0x1ffcdb20, priv=0x1ffd1020, flags=0, thiscall=0x20556db0) at remote/remote_driver.c:9901
#3  0x000000333feb178b in call (conn=0x1ffcdb20, priv=0x1ffd1020, flags=0, proc_nr=16, args_filter=0x333feb3d0e <xdr_remote_domain_get_info_args>, args=0x7fff1d64eb50 "@\206\001 ", ret_filter=0x333feb3d44 <xdr_remote_domain_get_info_ret>, ret=0x7fff1d64eb20 "") at remote/remote_driver.c:9990
#4  0x000000333fea06e0 in remoteDomainGetInfo (domain=0x2001c580, info=0x7fff1d64ec00) at remote/remote_driver.c:2297
#5  0x000000333fe7ed12 in virDomainGetInfo (domain=0x2001c580, info=0x7fff1d64ec00) at libvirt.c:3050
#6  0x00002b594990daac in libvirt_virDomainGetInfo (self=0x0, args=0x1fe4fc50) at libvirt-override.c:1025
#7  0x0000003a8129639a in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0
#8  0x0000003a81295e46 in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0
#9  0x0000003a81295e46 in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0
#10 0x0000003a812972c5 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.4.so.1.0
#11 0x0000003a81295a1f in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0
#12 0x0000003a81295e46 in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0
#13 0x0000003a812972c5 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.4.so.1.0
#14 0x0000003a8124c6d7 in ?? () from /usr/lib64/libpython2.4.so.1.0
#15 0x0000003a81236430 in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0
#16 0x0000003a8123c52f in ?? () from /usr/lib64/libpython2.4.so.1.0
#17 0x0000003a81236430 in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0
#18 0x0000003a81290f1d in PyEval_CallObjectWithKeywords () from /usr/lib64/libpython2.4.so.1.0
#19 0x00002b594b2025cf in ?? () from /usr/lib64/python2.4/site-packages/gtk-2.0/gobject/_gobject.so
#20 0x0000003a8222d2bb in ?? () from /lib64/libglib-2.0.so.0
#21 0x0000003a8222cdb4 in g_main_context_dispatch () from /lib64/libglib-2.0.so.0
#22 0x0000003a8222fc0d in ?? () from /lib64/libglib-2.0.so.0
#23 0x0000003a8222ff1a in g_main_loop_run () from /lib64/libglib-2.0.so.0
#24 0x0000003a8df2aa63 in gtk_main () from /usr/lib64/libgtk-x11-2.0.so.0
#25 0x00002b594b81d684 in ?? () from /usr/lib64/python2.4/site-packages/gtk-2.0/gtk/_gtk.so
#26 0x0000003a81296167 in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0
#27 0x0000003a81295e46 in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0
#28 0x0000003a812972c5 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.4.so.1.0
#29 0x0000003a81297312 in PyEval_EvalCode () from /usr/lib64/libpython2.4.so.1.0
#30 0x0000003a812b39f9 in ?? () from /usr/lib64/libpython2.4.so.1.0
#31 0x0000003a812b4ea8 in PyRun_SimpleFileExFlags () from /usr/lib64/libpython2.4.so.1.0
#32 0x0000003a812bb33d in Py_Main () from /usr/lib64/libpython2.4.so.1.0
#33 0x0000003a8021d994 in __libc_start_main () from /lib64/libc.so.6
#34 0x0000000000400629 in _start ()
```

Looks like it is waiting for libvirtd.
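The libvirtd-side trace that follows shows a worker thread stuck sending "info balloon" to the guest's human monitor. The same hang can presumably be reproduced by hand against the monitor socket from the ps output above; a sketch, assuming socat is available (stop libvirtd first, since it normally owns the monitor connection):

```bash
# Hypothetical manual probe of the qemu human monitor libvirtd is stuck on.
# If the guest is wedged, this should hang (or hit the timeout) the same way.
service libvirtd stop    # libvirtd normally holds the monitor connection
MON=/var/lib/libvirt/qemu/_vm_lsf_dyn__385.monitor   # path from the ps output
echo "info balloon" | socat -t 10 - UNIX-CONNECT:"$MON"
service libvirtd start
```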
```
Thread 26 (Thread 0x50b15940 (LWP 18696)):
#0  0x0000003a80e0d4c4 in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x0000003a80e101b1 in _L_cond_lock_989 () from /lib64/libpthread.so.0
#2  0x0000003a80e1007f in __pthread_mutex_cond_lock () from /lib64/libpthread.so.0
#3  0x0000003a80e0af84 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#4  0x000000333fe399c2 in virCondWait (c=0x2aaab407e368, m=0x2aaab407e340) at util/threads-pthread.c:100
#5  0x000000000046ef16 in qemuMonitorSend (mon=0x2aaab407e340, msg=0x50b14b70) at qemu/qemu_monitor.c:728
#6  0x0000000000472e54 in qemuMonitorCommandWithHandler (mon=0x2aaab407e340, cmd=0x4c332e "info balloon", passwordHandler=0, passwordOpaque=0x0, scm_fd=-1, reply=0x50b14c70) at qemu/qemu_monitor_text.c:340
#7  0x0000000000472fcb in qemuMonitorCommandWithFd (mon=0x2aaab407e340, cmd=0x4c332e "info balloon", scm_fd=-1, reply=0x50b14c70) at qemu/qemu_monitor_text.c:374
#8  0x0000000000472ff7 in qemuMonitorCommand (mon=0x2aaab407e340, cmd=0x4c332e "info balloon", reply=0x50b14c70) at qemu/qemu_monitor_text.c:381
#9  0x0000000000473996 in qemuMonitorTextGetBalloonInfo (mon=0x2aaab407e340, currmem=0x50b14d38) at qemu/qemu_monitor_text.c:680
#10 0x000000000046fb4e in qemuMonitorGetBalloonInfo (mon=0x2aaab407e340, currmem=0x50b14d38) at qemu/qemu_monitor.c:1014
#11 0x0000000000440755 in qemudDomainGetInfo (dom=0x1950800, info=0x50b14e20) at qemu/qemu_driver.c:4886
#12 0x000000333fe7ed12 in virDomainGetInfo (domain=0x1950800, info=0x50b14e20) at libvirt.c:3050
#13 0x00000000004216d0 in remoteDispatchDomainGetInfo (server=0x18ea910, client=0x2aaaac001140, conn=0x19aaa80, hdr=0x2aaaaca94c20, rerr=0x50b14f50, args=0x50b14f00, ret=0x50b14ea0) at remote.c:1485
#14 0x000000000042b3c3 in remoteDispatchClientCall (server=0x18ea910, client=0x2aaaac001140, msg=0x2aaaaca54c10) at dispatch.c:508
#15 0x000000000042afe8 in remoteDispatchClientRequest (server=0x18ea910, client=0x2aaaac001140, msg=0x2aaaaca54c10) at dispatch.c:390
#16 0x000000000041a4a8 in qemudWorker (data=0x18f0a18) at libvirtd.c:1574
#17 0x0000003a80e0673d in start_thread () from /lib64/libpthread.so.0
#18 0x0000003a802d40cd in clone () from /lib64/libc.so.6
```

Perhaps libvirtd is blocking on the guest OS, querying the memory balloon? Just a guess.

I let the test run over the weekend. It stopped working Saturday afternoon. The agent process in our software that links to libvirt was blocked in a call to virDomainGetInfo(), which didn't return at all. I let it run until Monday morning, at which point I restarted libvirtd to get things going again. I checked the stack trace before restarting, and I think 4 threads were doing "info balloon", just like the stack trace in the previous comment. There are 2 hypervisors in my environment, and both stopped processing LSF jobs (and VM requests) because our agent was blocked in virDomainGetInfo(). The good news is that I didn't observe a VM frozen after a resume. I will disable the balloon module to see if that prevents virDomainGetInfo() from hanging, and reset the test.

There is another bug that blocks the retest for this one: the libvirt client locks up in virDomainGetInfo(). I'll log a separate issue to track that one.

From comment #10, I can confirm this is the "migrate exec:dd" action: repeating the migrate-and-load cycle leads to a VM stuck at 100% CPU. According to the comments above, this needs a long-running test. I will write a script to test it (a sketch of such a loop follows) and update with the result when it completes.
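The script itself was not attached to the bug; a minimal sketch of what such a save/resume stress loop might look like at the virsh level, per comment #10 (the domain name, state path, guest IP, and iteration count are hypothetical):

```bash
#!/bin/bash
# Hypothetical stress loop for the save/resume path ("virsh save" drives the
# same "migrate exec:" code path in qemu-kvm). Names below are placeholders.
DOM=_vm_lsf_dyn__385
STATE=/tmp/${DOM}.state
GUEST_IP=192.0.2.10    # hypothetical guest address for the liveness check

for i in $(seq 1 200); do
    virsh save "$DOM" "$STATE" || { echo "save failed at iteration $i"; exit 1; }
    virsh restore "$STATE"     || { echo "restore failed at iteration $i"; exit 1; }
    sleep 30    # give the guest time to settle after resume
    # Failure mode under test: resumed guest spins at 100% CPU, stops pinging.
    if ! ping -c 3 -w 10 "$GUEST_IP" >/dev/null; then
        echo "guest unresponsive after iteration $i"
        exit 1
    fi
done
echo "completed $i iterations without a hang"
```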
Can you check whether you can reproduce this with only AMD hosts or only Intel hosts? We don't support migration (and save/resume is the same code path) between architectures on RHEL5/6.

Could you check whether the issue only happens when migrating the guest from an Intel host to an AMD host, or from an AMD host to an Intel host? If so, this is a scenario we do not support.

Best Regards,
Mike

(In reply to comment #18)
> Can you check whether you can reproduce this with only AMD hosts or only
> Intel hosts? We don't support migration (and save/resume is the same code
> path) between architectures on RHEL5/6.

Yes, I am testing with only AMD hosts and only Intel hosts.

Tested migration with RHEL5.6-32 and RHEL5.6-64 guests 200 times; didn't hit this bug.

Steps:
1. migrate -d "exec:dd of=/tmp/rhel5.6img.test bs=4096 seek=1"
2. boot the VM with -incoming "exec:dd if=/tmp/rhel5.6img.test bs=4096 skip=1"

Job links:
- Intel host: https://virtlab.englab.nay.redhat.com/job/41536/details/
- AMD host: https://virtlab.englab.nay.redhat.com/job/41537/details/

Host info:
- kernel-2.6.18-294.el5
- kvm-83-243.el5

Command line:

```
/home/autotest-devel/client/tests/kvm/qemu -name 'vm1' -monitor unix:'/tmp/monitor-humanmonitor1-20111116-130535-rKwp',server,nowait -serial unix:'/tmp/serial-20111116-130535-rKwp',server,nowait -drive file='/home/autotest-devel/client/tests/kvm/images/RHEL-Server-5.6-32.raw',index=0,if=ide,media=disk,cache=none,format=raw -net nic,vlan=0,model=rtl8139,macaddr='9a:86:29:6f:18:a3' -net tap,vlan=0,fd=36 -m 1024 -smp 2,cores=1,threads=1,sockets=2 -cpu qemu64,+sse2 -vnc :1 -rtc-td-hack -boot c -no-kvm-pit-reinjection -M rhel5.6.0 -usbdevice tablet -S -incoming "exec:dd if=/tmp/rhel5.6img.test bs=4096 skip=1"
```

This request was evaluated by Red Hat Product Management for inclusion in a Red Hat Enterprise Linux release. Product Management has requested further review of this request by Red Hat Engineering, for potential inclusion in a Red Hat Enterprise Linux release for currently deployed products. This request is not yet committed for inclusion in a release.

Hi Michael Closson,

Thank you for taking the time to enter a bug report with us. We do appreciate the feedback and look to use reports such as this to guide our efforts at improving our products. That being said, this bug tracking system is not a mechanism for getting support, and as such we are not able to make any guarantees as to the timeliness or suitability of a resolution.

If this issue is critical or in any way time sensitive, please raise a ticket through your regular Red Hat support channels to make certain that it gets the proper attention and prioritization to assure a timely resolution. For information on how to contact the Red Hat production support team, please see: https://www.redhat.com/support/process/production/#howto

For now, this bug is not reproducible here, so I am closing it for RHEL5.

Thanks,
Ronen.