Bug 1382583 - Periodic functions/monitor start before VM is run.
Summary: Periodic functions/monitor start before VM is run.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 3.6.8
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ovirt-4.1.0-alpha
Target Release: ---
Assignee: Francesco Romani
QA Contact: Nisim Simsolo
URL:
Whiteboard:
Duplicates: 1395916
Depends On:
Blocks: 1398415
 
Reported: 2016-10-07 05:41 UTC by Germano Veit Michel
Modified: 2020-03-11 15:18 UTC
CC List: 14 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Previously, if a VM shutdown was too slow, the state of the VM could be misreported as unresponsive, even though the VM was operating correctly, albeit too slowly. This was caused by a too-aggressive check on startup and shutdown. This patch takes into account slowdowns in startup and shutdown, avoiding false positive reports.
Clone Of:
: 1398415 (view as bug list)
Environment:
Last Closed: 2017-04-25 00:55:45 UTC
oVirt Team: Virt
Target Upstream Version:
Embargoed:


Links
Red Hat Knowledge Base (Solution) 2749551 (last updated 2016-11-04 05:50:57 UTC)
Red Hat Product Errata RHEA-2017:0998, SHIPPED_LIVE: VDSM bug fix and enhancement update 4.1 GA (last updated 2017-04-18 20:11:39 UTC)
oVirt gerrit 65814, MERGED: vm: introduce a `monitorable' attribute (last updated 2020-02-20 04:44:54 UTC)
oVirt gerrit 66011, MERGED: vm: reformat setUnresponsiveIfTimeout (last updated 2020-02-20 04:44:54 UTC)
oVirt gerrit 66012, MERGED: vm: rename statsAge parameter (last updated 2020-02-20 04:44:54 UTC)
oVirt gerrit 66013, MERGED: vm: introduce a `monitorable' attribute (last updated 2020-02-20 04:44:54 UTC)

Description Germano Veit Michel 2016-10-07 05:41:41 UTC
Description of problem:

This is the opposite of BZ 1382578.

Apparently, when a VM is starting, the periodic tasks/monitor may run before the qemu process is actually up (or before libvirt is able to respond). The VM might then be seen by the engine as Not Responding while powering up.

In the Engine, instead of 
'Powering Up' -> 'Up'

It is seen as:
'Powering Up' -> 'Not responding' -> 'Up'
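
For context, Vdsm's periodic monitoring polls every VM it tracks on a fixed schedule, so a VM whose qemu process is not up yet can already be polled. A rough, illustrative sketch of that pattern (made-up names; not the actual Vdsm periodic module):

    # Rough illustration of the periodic polling pattern; names are made up
    # and this is not the actual Vdsm periodic module.
    import time

    def dispatch_periodic_checks(tracked_vms, check, interval=2.0):
        # Polls every tracked VM on a fixed schedule, whether or not its
        # qemu process is up yet; this is why warnings such as
        # "could not run ... DriveWatermarkMonitor" can appear during startup.
        while True:
            for vm in list(tracked_vms.values()):
                try:
                    check(vm)
                except Exception:
                    print("could not run %s on %s" % (check, vm))
            time.sleep(interval)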

Version-Release number of selected component (if applicable):
Red Hat Enterprise Virtualization Hypervisor release 7.2 (20160920.1.el7ev)
rhevm-3.6.8.1-0.1.el6.noarch

How reproducible:

Sometimes.

Steps to Reproduce:
1. Start VM from Administration Portal

Actual results:
'Powering Up' -> 'Not responding' -> 'Up'

Expected results:
'Powering Up' -> 'Up'

Additional info:

 2016-10-07 13:41:45.74+10  | VM ntptest was started by admin@internal (Host: Germano-Test).
 2016-10-07 13:41:46.439+10 | VM ntptest is not responding.
 2016-10-07 13:42:26.36+10  | VM ntptest started on Host Germano-Test

jsonrpc.Executor/3::DEBUG::2016-10-07 03:41:42,264::__init__::557::jsonrpc.JsonRpcServer::(_handle_request) Return 'VM.create' .....

Thread-89::DEBUG::2016-10-07 03:41:42,266::vm::697::virt.vm::(_startUnderlyingVm) vmId=`4759032f-680f-4a5f-9f26-604a8fac2808`::Start

Thread-89::INFO::2016-10-07 03:41:42,266::vm::1850::virt.vm::(_run) vmId=`4759032f-680f-4a5f-9f26-604a8fac2808`::VM wrapper has started

Monitor already running...

jsonrpc.Executor/5::WARNING::2016-10-07 03:41:43,773::vm::5177::virt.vm::(_setUnresponsiveIfTimeout) vmId=`4759032f-680f-4a5f-9f26-604a8fac2808`::monitor become unresponsive (command timeout, age=4296317.33)

So is the periodic watermark function:

periodic/6::WARNING::2016-10-07 03:41:44,317::periodic::268::virt.periodic.VmDispatcher::(__call__) could not run <class 'virt.periodic.DriveWatermarkMonitor'> on [u'4759032f-680f-4a5f-9f26-604a8fac2808']

Thread-89::INFO::2016-10-07 03:41:45,722::vm::1944::virt.vm::(_run) vmId=`4759032f-680f-4a5f-9f26-604a8fac2808`::<?xml version="1.0" encoding="utf-8"?>
<domain type="kvm" xmlns:ovirt="http://ovirt.org/vm/tune/1.0">
	<name>ntptest</name>
	<uuid>4759032f-680f-4a5f-9f26-604a8fac2808</uuid>
	<memory>1048576</memory>
	<currentMemory>1048576</currentMemory>
	<maxMemory slots="16">4294967296</maxMemory>


periodic/4::WARNING::2016-10-07 03:41:46,318::periodic::268::virt.periodic.VmDispatcher::(__call__) could not run <class 'virt.periodic.DriveWatermarkMonitor'> on [u'4759032f-680f-4a5f-9f26-604a8fac2808']

libvirtEventLoop::DEBUG::2016-10-07 03:41:46,352::vm::4648::virt.vm::(onLibvirtLifecycleEvent) vmId=`4759032f-680f-4a5f-9f26-604a8fac2808`::event Resumed detail 0 opaque None

libvirtEventLoop::INFO::2016-10-07 03:41:46,353::vm::5169::virt.vm::(_logGuestCpuStatus) vmId=`4759032f-680f-4a5f-9f26-604a8fac2808`::CPU running: onResume

VM Channels Listener::DEBUG::2016-10-07 03:42:23,017::__init__::207::jsonrpc.Notification::(emit) Sending event {"params": {"notify_time": 4296356580, "4759032f-680f-4a5f-9f26-604a8fac2808": {"status": "Up" .....

Comment 2 Francesco Romani 2016-10-19 05:53:33 UTC
(In reply to Germano Veit Michel from comment #0)
> Description of problem:
> 
> This is the opposite of BZ 1382578.
> 
> Apparently when a VM is starting periodic tasks/monitor may run before the
> qemu process is actually up (or libvirt able to respond). VM might be seen
> by the engine as Not responding during powering up.

The root cause is the same as in BZ 1382578: slow operation on the host (too slow with respect to expectations) combined with a small change of behaviour in Vdsm.
This changes little from the user's perspective, but it is important for understanding the cause of the issue.

Both the issue and the fix are in Vdsm; the Engine just relies on this information.

> Steps to Reproduce:
> 1. Start VM from Administration Portal
> 
> Actual results:
> 'Powering Up' -> 'Not responding' -> 'Up'
> 
> Expected results:
> 'Powering Up' -> 'Up'

Again, for the sake of clarity: the problem is that this report is a false positive, not that Vdsm reports the VM as "not responding" at all: it is entirely possible for a VM to go rogue already during the "powering up" phase.

Fixing this issue isn't hard.
Fixing it in a way that doesn't hide this scenario has some challenges.
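
The gerrit patches listed in the Links section introduce a `monitorable' attribute for this. A simplified, hypothetical sketch of how such a guard could look inside the responsiveness check (illustrative names only, not the actual patch):

    # Hypothetical, simplified guard; not the actual Vdsm patch.
    def set_unresponsive_if_timeout(vm, stats_age, timeout):
        if not vm.monitorable:
            # The VM is still powering up or shutting down: skip the
            # responsiveness check instead of raising a false positive.
            return
        if stats_age > timeout:
            vm.set_status_unresponsive()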

> jsonrpc.Executor/5::WARNING::2016-10-07
> 03:41:43,773::vm::5177::virt.vm::(_setUnresponsiveIfTimeout)
> vmId=`4759032f-680f-4a5f-9f26-604a8fac2808`::monitor become unresponsive
> (command timeout, age=4296317.33)

This strangely high value is caused by the default value of the timestamp (zero); the real issue here is that the VM was last seen too far in the past.
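
For illustration, with the timestamp defaulting to zero the computed age is effectively measured against the monotonic clock's origin rather than against the last stats sample, which is why it comes out in the millions of seconds. A rough sketch (illustrative values and names, not the actual Vdsm code):

    # Rough illustration of why a zero default timestamp yields a huge "age";
    # not the actual Vdsm code.
    import time

    last_stats_timestamp = 0.0          # default: no stats sample collected yet
    now = time.monotonic()              # e.g. ~4296317 s on this host
    age = now - last_stats_timestamp    # effectively "seconds since the clock origin"
    timeout = 60                        # illustrative threshold
    print(age > timeout)                # True -> reported as unresponsive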


> So so is the periodic function watermark
> 
> periodic/6::WARNING::2016-10-07
> 03:41:44,317::periodic::268::virt.periodic.VmDispatcher::(__call__) could
> not run <class 'virt.periodic.DriveWatermarkMonitor'> on
> [u'4759032f-680f-4a5f-9f26-604a8fac2808']

This is related, but not part of this issue; if you want this behaviour to change, it should be tracked and discussed separately.

Comment 3 Germano Veit Michel 2016-10-19 06:00:13 UTC
(In reply to Francesco Romani from comment #2)
> (In reply to Germano Veit Michel from comment #0)
> > periodic/6::WARNING::2016-10-07
> > 03:41:44,317::periodic::268::virt.periodic.VmDispatcher::(__call__) could
> > not run <class 'virt.periodic.DriveWatermarkMonitor'> on
> > [u'4759032f-680f-4a5f-9f26-604a8fac2808']
> 
> This is related but not part of this issue, if you want this behaviour to
> change it should be tracked and discussed separately.

Hi Francesco. Thank you for owning this BZ as well.

AFAIK, the watermark monitor running before the VM is up is harmless, so I am not sure it's worth spending resources on fixing/hiding it. If you think it can cause a problem I'll gladly open a new BZ for it, but for now I just see it as a harmless warning.

Thanks!

Comment 4 Francesco Romani 2016-10-25 12:55:49 UTC
This is not yet POST. Patches have been posted against the master branch only. They are backport-friendly, but not there yet.

Comment 5 Francesco Romani 2016-10-25 16:15:50 UTC
no POST yet - still ASSIGNED

Comment 6 Francesco Romani 2016-10-26 15:36:20 UTC
The patches against the master branch are in the final stages of verification and should be merged soon.
The fixes are simple and backportable, and for master they look good.
I'd like to do additional testing for the stable branches.
I've verified locally the sequence of events that triggers the bug, and I'm confident the patches should help. Any chance of testing them, perhaps in a bigger QE environment?
I can provide RPMs/scratch builds for 4.0.z or even 3.6.z.

Comment 7 Francesco Romani 2016-10-26 15:36:46 UTC
please check https://bugzilla.redhat.com/show_bug.cgi?id=1382583#c6

Comment 8 Germano Veit Michel 2016-10-28 01:04:44 UTC
Hi Francesco,

The logs I used to open both BZs are from my own env, which is really small (and slow). It reproduces this VERY easily. It's currently on 4.0.4 + RHEL 7.2. If you provide me with the patches or the RPMs/scratch builds, I can run some tests for you.

Cheers!

Comment 13 Francesco Romani 2016-11-03 11:55:59 UTC
Will be fixed in 4.1.0 and 4.0.6. The backport to 3.6.10 is still pending (patches 66011, 66012, 66013).

Comment 14 Michal Skrivanek 2016-11-03 14:38:39 UTC
Note: this bug is missing acks for a proper downstream clone.

Comment 18 Tomas Jelinek 2016-12-14 15:12:48 UTC
*** Bug 1395916 has been marked as a duplicate of this bug. ***

Comment 21 Nisim Simsolo 2017-02-02 15:10:42 UTC
Verification builds: 
ovirt-engine-4.1.0.3-0.1.el7
libvirt-client-2.0.0-10.el7_3.4.x86_64
vdsm-4.19.4-1.el7ev.x86_64
qemu-kvm-rhev-2.6.0-28.el7_3.3.x86_64
sanlock-3.4.0-1.el7.x86_64

Verification scenario:
1. Overload the host CPU (see the sketch after this list).
2. On the overloaded host, create a VM pool of 10 VMs with 10 prestarted VMs.
3. Verify the VMs do not transition to the "not responding" state before going to the Up state.
4. Power off all VMs at the same time.
5. Verify the VMs do not transition to the "not responding" state before going to the Down state.
6. Repeat steps 2-5 a few times.
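
For step 1, a throwaway way to overload the host CPU, assuming Python is available on the host (any CPU stress tool works just as well):

    # Throwaway CPU-burn helper for step 1: one busy-looping worker per core.
    # Stop it with Ctrl+C.
    import multiprocessing
    import os

    def burn():
        while True:
            pass

    if __name__ == "__main__":
        workers = [multiprocessing.Process(target=burn)
                   for _ in range(os.cpu_count() or 1)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()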

