Description of problem:
Trying to run a VM which was imported from a VMware environment using webadmin failed with the following VDSM ERROR log:

libvirtError: Hook script execution failed: internal error: Child process (LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin /etc/libvirt/hooks/qemu bbb_rhevh prepare begin -) unexpected fatal signal 13
Thread-17920::INFO::2016-04-03 14:09:22,140::vm::1330::virt.vm::(setDownStatus) vmId=`3c4a8a3a-bbf9-476e-9553-12922912b991`::Changed state to Down: Hook script execution failed: internal error: Child process (LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin /etc/libvirt/hooks/qemu bbb_rhevh prepare begin -) unexpected fatal signal 13 (code=1)

Verified a few times with different VM OS types. The virt-v2v import completed without errors (see attached logs). Running the same source VMs is possible when they are imported using a RHEL host instead.

Version-Release number of selected component (if applicable):
rhevm-3.6.5-0.1.el6
RHEV Hypervisor - 7.2 - 20160330.0.el7ev
libvirt-client-1.2.17-13.el7_2.3.x86_64
qemu-kvm-rhev-2.3.0-31.el7_2.7.x86_64
vdsm-4.17.25-0.el7ev.noarch
sanlock-3.2.4-1.el7.x86_64
virt-v2v-1.28.1-1.55.el7.x86_64
libguestfs-1.28.1-1.55.el7.x86_64

How reproducible:
Consistently.

Steps to Reproduce:
1. Browse webadmin -> Virtual Machines tab -> Import.
2. Enter the VMware details, select a VM to import and start the import.
3. Wait for the import to complete and try to run the imported VM.

Actual results:
The imported VM fails to run. The following event is logged in webadmin:
VM bbb_rhevh is down with error. Exit message: Hook script execution failed: internal error: Child process (LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin /etc/libvirt/hooks/qemu bbb_rhevh prepare begin -) unexpected fatal signal 13.

Expected results:
The VM should run normally after the import completes.

Additional info:
VDSM and engine logs attached. The issue occurred at: 2016-04-03 14:09:22,138
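For context on the error message: signal 13 is SIGPIPE, i.e. the hook process was killed while writing to a pipe whose read end had already been closed. As a hedged illustration (this is a hypothetical reproduction, not the actual vdsm hook), the following Python sketch shows how a child killed by SIGPIPE surfaces with that signal number:

```python
import signal
import subprocess

# Signal 13 is SIGPIPE on Linux.
assert signal.SIGPIPE == 13

# Hypothetical reproduction: start a child that writes to a pipe,
# then close our read end so the child's next write raises SIGPIPE.
proc = subprocess.Popen(["cat", "/dev/zero"], stdout=subprocess.PIPE)
proc.stdout.close()   # close the read end while the child keeps writing
proc.wait()

# A negative return code means the child died from a signal.
print(proc.returncode)   # -13, i.e. killed by SIGPIPE
```

This matches the "unexpected fatal signal 13" wording libvirt uses when a hook script dies from a signal rather than exiting cleanly.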
Created attachment 1142959 [details] engine log
Created attachment 1142960 [details] VDSM log
This is a libvirt error, so the qemu and libvirt logs might be needed as well. Also, what does "Running same source VMs which was imported using RHEL host is possible." mean?
Created attachment 1143284 [details] libvirt log
Created attachment 1143285 [details] qemu log
qemu and libvirt logs attached. As for the source VM, I meant that the same VMware VM can be imported and run properly using a RHEL 7.2 host.
This should not be relevant to v2v... Are you sure other VMs run fine? Or, are you able to run this converted VM on a RHEL host?
You are correct. Other VMs (which were created manually) cannot run either. The converted VM can run on a RHEL host.
Seems to be a general node issue then. Fabian, any ideas? That hook was done in 3.4 by msivak... but I find it hard to believe there's anything wrong with it. It may be an environmental issue.
I cannot reproduce this bug with:
RHEV Hypervisor - 7.2 - 20160330.0.el7ev
rhevm-3.6.5.1-0.1.el6.noarch
Nisim, can you provide more details on the steps you took? It would be helpful if Tingting could get a second reproducer to identify the issue.
1. Add rhevh to a 3.6.5 engine and upgrade it to rhevh 7.2 (20160330.0.el7ev).
2. Verify the virt-v2v package is included.
3. Using the webadmin import dialog, import a VMware VM with a RHEL 7 OS.
4. Wait for the import to complete and run the VM.

I left the related rhevh host as is for further observation. Please contact me if you want to take a look at the rhevh host.
Thanks Nisim. Chen, can you reproduce it with those steps?
Hi Tingting, could you help to reply to #c13? Thanks.
(In reply to Nisim Simsolo from comment #12)
> 1. Add rhevh to 3.6.5 engine and upgrade rhevh to rhevh 7.2
> (20160330.0.el7ev)
> 2. verify virt-v2v package included.
> 3. using webadmin importdialog, import VMware VM with RHEL7 OS.
> 4. Wait for import completion and run VM.
>
> I left the related rhevh host as is for further observation.
> Please contact me if you want to take a look at the rhevh host.

The steps are the same, and in our environment I cannot reproduce this bug. Referring to comment 8, the guest created manually also cannot run, so this issue is not related to importing guests via virt-v2v.
Hm, pasting from the vdsm log:

Thread-17312::ERROR::2016-04-03 11:48:15,192::vm::759::virt.vm::(_startUnderlyingVm) vmId=`45b0f42e-654b-4f14-b3aa-def92e7bc3e3`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 703, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/virt/vm.py", line 1941, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 124, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1313, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3611, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: Hook script execution failed: internal error: Child process (LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin /etc/libvirt/hooks/qemu bbb_rhevh prepare begin -) unexpected fatal signal 13
Thread-17312::INFO::2016-04-03 11:48:15,194::vm::1330::virt.vm::(setDownStatus) vmId=`45b0f42e-654b-4f14-b3aa-def92e7bc3e3`::Changed state to Down: Hook script execution failed: internal error: Child process (LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin /etc/libvirt/hooks/qemu bbb_rhevh prepare begin -) unexpected fatal signal 13 (code=1)
Thread-17312::DEBUG::2016-04-03 11:48:15,194::vmchannels::229::vds::(unregister) Delete fileno 35 from listener.
Thread-17312::DEBUG::2016-04-03 11:48:15,195::vmchannels::59::vds::(_unregister_fd) Failed to unregister FD from epoll (ENOENT): 35
Thread-17312::DEBUG::2016-04-03 11:48:15,196::__init__::206::jsonrpc.Notification::(emit) Sending event {"params": {"45b0f42e-654b-4f14-b3aa-def92e7bc3e3": {"status": "Down", "timeOffset": "0", "exitReason": 1, "exitMessage": "Hook script execution failed: internal error: Child process (LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin /etc/libvirt/hooks/qemu bbb_rhevh prepare begin -) unexpected fatal signal 13", "exitCode": 1}, "notify_time": 4537346350}, "jsonrpc": "2.0", "method": "|virt|VM_status|45b0f42e-654b-4f14-b3aa-def92e7bc3e3"}
mailbox.SPMMonitor::DEBUG::2016-04-03 11:48:15,463:

Maybe this is related to bug 1324016.

Picking up from the initial description:

2016-04-03 07:23:28.956+0000: 9632: error : virCommandWait:2552 : internal error: Child process (LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin /etc/libvirt/hooks/qemu ccc release end -) unexpected fatal signal 13

This hook (/etc/libvirt/hooks/qemu) is owned by vdsm.

Michal, could someone support us in finding the root cause of this libvirt or hook error?
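For debugging on the affected host, the hook can be exercised by hand the same way libvirt invokes it: libvirt passes the guest name, the operation, the sub-operation and "-" as arguments, and writes the domain XML to the hook's stdin. The sketch below uses a temporary stand-in script (hypothetical, not the real vdsm hook) just to show the calling convention; on the problematic node one would point it at /etc/libvirt/hooks/qemu instead:

```python
import os
import subprocess
import tempfile
import textwrap

# Stand-in for /etc/libvirt/hooks/qemu (the real hook is owned by vdsm).
hook_body = textwrap.dedent("""\
    #!/bin/sh
    # libvirt passes: <guest name> <operation> <sub-operation> -
    # and feeds the domain XML to the hook on stdin.
    cat > /dev/null
    exit 0
""")

with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    f.write(hook_body)
    hook = f.name
os.chmod(hook, 0o755)

# Same calling convention as the failing command from the log:
# LC_ALL=C PATH=... /etc/libvirt/hooks/qemu bbb_rhevh prepare begin -
result = subprocess.run(
    [hook, "bbb_rhevh", "prepare", "begin", "-"],
    input="<domain type='kvm'/>", capture_output=True, text=True)
print(result.returncode)   # 0 here; -13 would mean the hook died of SIGPIPE
os.unlink(hook)
```

A return code of -13 from such a manual invocation would confirm the hook itself is dying from SIGPIPE, independent of libvirt.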
The libvirt hook error does not make sense; it looks like an environmental or hardware issue. Is it the same host as in bug 1324016?
To ease debugging. Douglas, could you provide an rpm-manifest diff between the RHEV-H 3.6.4 and last RHEV-H 3.6.5 build?
(In reply to Fabian Deutsch from comment #18)
> To ease debugging.
>
> Douglas, could you provide an rpm-manifest diff between the RHEV-H 3.6.4 and
> last RHEV-H 3.6.5 build?

SRPM manifest diff:

--- rhev-hypervisor7-7.2-20160328.0.iso.d/isolinux/manifest-srpm.txt 2016-03-28 08:57:17.000000000 -0400
+++ rhev-hypervisor7-7.2-20160330.0.iso.d/isolinux/manifest-srpm.txt 2016-03-30 15:37:34.000000000 -0400
+gtk2-2.24.28-8.el7.src.rpm
-ovirt-hosted-engine-ha-1.3.5.1-1.el7ev.src.rpm
-ovirt-hosted-engine-setup-1.3.4.0-1.el7ev.src.rpm
-ovirt-node-3.6.1-8.0.el7ev.src.rpm
+ovirt-hosted-engine-ha-1.3.5.2-1.el7ev.src.rpm
+ovirt-hosted-engine-setup-1.3.5.0-1.el7ev.src.rpm
+ovirt-node-3.6.1-9.0.el7ev.src.rpm
-rhevm-sdk-python-3.6.3.0-1.el7ev.src.rpm
+rhevm-sdk-python-3.6.5.0-1.el7ev.src.rpm
-safelease-1.0-5.el7ev.src.rpm
+safelease-1.0-7.el7ev.src.rpm
-vdsm-4.17.23.2-1.el7ev.src.rpm
+vdsm-4.17.25-0.el7ev.src.rpm
The issue cannot be reproduced using a different RHEV-H in the same cluster as the problematic RHEV-H.
RHEV-H build: 20160407.0.el7ev
Lowering the priority of this issue and moving it out to 3.6.6 according to comment 21. Let's monitor it for a while, and close it once it's not faced again.
This looks like an abnormal (hardware/environment) issue, as comment #17 reported. Additionally, the reporter ran new tests with an updated RHEV-H and it's not possible to reproduce. For now, closing the bug; please reopen in case you see it again for further investigation. Thanks!