Bug 1478712 - Powering off a VM via the engine raises a traceback in the vdsm log
Summary: Powering off a VM via the engine raises a traceback in the vdsm log
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: vdsm
Classification: oVirt
Component: Core
Version: 4.20.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ovirt-4.2.0
Assignee: Milan Zamazal
QA Contact: Raz Tamir
URL:
Whiteboard:
Duplicates: 1478715 (view as bug list)
Depends On:
Blocks:
 
Reported: 2017-08-06 12:27 UTC by Artyom
Modified: 2017-08-28 09:07 UTC
CC List: 4 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2017-08-24 14:30:43 UTC
oVirt Team: Virt
Embargoed:
rule-engine: ovirt-4.2+
rule-engine: blocker+


Attachments
vdsm log (5.93 MB, text/plain)
2017-08-06 12:27 UTC, Artyom


Links
oVirt gerrit 81085 (master, MERGED): API: Don't log tracebacks on non-existent VMs (last updated 2020-09-07 13:57:54 UTC)

Description Artyom 2017-08-06 12:27:05 UTC
Created attachment 1309649 [details]
vdsm log

Description of problem:
Powering off a VM via the engine raises a traceback in the vdsm log:
2017-08-06 15:19:44,002+0300 ERROR (jsonrpc/3) [api] FINISH destroy error=Virtual machine does not exist: {'vmId': u'4854aff0-263e-43e9-ac83-6e55f5d49ef7'} (api:119)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 117, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 301, in destroy
    res = self.vm.destroy(gracefulAttempts)
  File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 129, in vm
    raise exception.NoSuchVM(vmId=self._UUID)
NoSuchVM: Virtual machine does not exist: {'vmId': u'4854aff0-263e-43e9-ac83-6e55f5d49ef7'}

Version-Release number of selected component (if applicable):
vdsm-4.20.1-269.git4afc67d.el7.centos.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Start VM via the engine
2. Wait until the VM reaches the UP state
3. Power-off the VM

Actual results:
The power-off action succeeds and the VM is destroyed, but a traceback appears in vdsm.log

Expected results:
The power-off action succeeds and the VM is destroyed without any traceback in the vdsm log

Additional info:
It looks like VDSM handles two concurrent JSON-RPC requests to destroy the VM. I checked the VDSM log in 4.1 and can see two destroy command outputs for the same VM:
2017-08-06 15:13:01,209+0300 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call VM.destroy succeeded in 3.25 seconds (__init__:539)
2017-08-06 15:13:03,242+0300 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call VM.destroy failed (error 1) in 0.00 seconds (__init__:539)

Comment 1 Red Hat Bugzilla Rules Engine 2017-08-07 04:57:27 UTC
This bug report has Keywords: Regression or TestBlocker.
Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release. Please resolve ASAP.

Comment 2 Milan Zamazal 2017-08-07 12:46:43 UTC
I think what happens here is: Engine calls Destroy to power off the VM. Vdsm destroys the VM and sends an event to Engine about the VM being down. Then Engine calls another Destroy on the VM and that raises the exception since the VM has already been destroyed by the first Destroy call.

I'm not sure which of the actions is superfluous and should be avoided to prevent the error.
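
To make the sequence above concrete, here is a minimal Python sketch (not the actual vdsm source; names are simplified) of the lookup that produces the traceback: a successful destroy removes the VM from the running-VM container, so when the second Destroy arrives the API.vm lookup finds nothing and raises NoSuchVM.

# Minimal sketch, not the actual vdsm source: why the second Destroy call
# fails once the first one has removed the VM from the running-VM container.

class NoSuchVM(Exception):
    def __init__(self, vmId):
        super(NoSuchVM, self).__init__(
            "Virtual machine does not exist: %r" % ({'vmId': vmId},))


class API(object):
    def __init__(self, uuid, vm_container):
        self._UUID = uuid
        self._container = vm_container   # shared mapping of running VMs

    @property
    def vm(self):
        vm = self._container.get(self._UUID)
        if vm is None:
            # This corresponds to the raise seen at API.py line 129
            # in the traceback above.
            raise NoSuchVM(vmId=self._UUID)
        return vm

    def destroy(self, gracefulAttempts=1):
        # The first Destroy succeeds; once the VM is gone from the
        # container, a later Destroy fails in the self.vm lookup.
        return self.vm.destroy(gracefulAttempts)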

Comment 3 Tomas Jelinek 2017-08-09 07:41:39 UTC
*** Bug 1478715 has been marked as a duplicate of this bug. ***

Comment 4 Tomas Jelinek 2017-08-24 14:30:43 UTC
This is actually not a bug. When the monitoring realizes that the VM has gone down, the first thing it does is call destroy on it to make sure it gets cleaned up properly.

It is indeed redundant in this case, but it does not cause any harm (other than an ugly-looking error in the log). In other cases (such as a shutdown from inside the guest), this destroy needs to be called to make sure the VM does not appear again as an external VM.

We could theoretically exclude this call to destroy in some cases but it would complicate the code and would risk regressions.

Since the outcome is not harmful at all, I will close this bug. If it is for some reason an important issue for you, please feel free to reopen it.
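
For illustration only, a hypothetical Python sketch of the monitoring behaviour described above (the real engine code is Java; all names here are invented): every transition to Down triggers a Destroy so cleanup always happens, and a "no such VM" reply from an already-destroyed VM is simply treated as success.

# Hypothetical illustration of the monitoring flow described above;
# not actual ovirt-engine code (which is Java), all names are invented.

class NoSuchVMError(Exception):
    pass  # stand-in for the "Virtual machine does not exist" reply


def on_vm_status_change(vdsm, vm_id, new_status):
    if new_status != 'Down':
        return
    try:
        # Always destroy a VM that went down, so it is cleaned up and
        # cannot reappear as an external VM (e.g. after an in-guest
        # shutdown).
        vdsm.destroy_vm(vm_id)
    except NoSuchVMError:
        # Already destroyed (e.g. by an explicit power-off); the
        # redundant call is harmless and can be ignored.
        pass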

Comment 5 Artyom 2017-08-26 20:32:14 UTC
But can we at least catch the exception and give a friendlier message? Right now this traceback floods the whole vdsm log.

Comment 6 Milan Zamazal 2017-08-28 09:07:42 UTC
Indeed, we probably don't need to log tracebacks in similar situations; patch posted.
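
The posted patch is gerrit 81085, "API: Don't log tracebacks on non-existent VMs". Roughly, as a simplified sketch of the idea rather than the merged change itself, the API method wrapper can log expected errors such as NoSuchVM as a single line and reserve full tracebacks for unexpected failures:

# Simplified sketch of the idea, not the merged vdsm patch itself.
import functools
import logging

log = logging.getLogger('api')


class NoSuchVM(Exception):
    pass  # stand-in for vdsm.common.exception.NoSuchVM


def method(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except NoSuchVM as e:
            # Expected error: one concise line, no traceback.
            log.error("FINISH %s error=%s", func.__name__, e)
            raise
        except Exception:
            # Unexpected error: keep the full traceback for debugging.
            log.exception("FINISH %s failed", func.__name__)
            raise
    return wrapper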

