Bug 1693628

Summary: Engine generates too many updates to vm_dynamic table due to the session change
Product: Red Hat Enterprise Virtualization Manager
Reporter: Roman Hodain <rhodain>
Component: ovirt-engine
Assignee: Shmuel Melamud <smelamud>
Status: CLOSED ERRATA
QA Contact: Liran Rotenberg <lrotenbe>
Severity: high
Docs Contact:
Priority: unspecified
Version: 4.2.8-2
CC: dfediuck, mavital, michal.skrivanek, rdlugyhe, smelamud
Target Milestone: ovirt-4.4.0
Keywords: ZStream
Target Release: ---
Flags: lrotenbe: testing_plan_complete+
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Previously, the state of the user session was not saved correctly in the Engine database, causing many unnecessary database updates to be performed. The current release fixes this issue: Now, the user session state is saved correctly on the first update.
Story Points: ---
Clone Of:
Cloned To: 1712243 (view as bug list)
Environment:
Last Closed: 2020-08-04 13:16:58 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Virt
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1712243

Description Roman Hodain 2019-03-28 11:36:05 UTC
Description of problem:
When analyzing the VM data reported by vdsm, the engine checks whether it differs from the data stored in the DB; if it does, the engine updates the DB.

One of the fields reported by vdsm is the session state (UserLoggedOn, LoggedOff, Locked, ...). The state is always evaluated as changed, because the state in the DB is initially Unknown, so the engine tries to update the DB; however, VmDynamic.updateRuntimeData does not write the session field. As a result, the engine issues a DB update on every monitoring cycle.
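The broken update pattern described above can be sketched as follows. This is an illustration only, not the actual engine code (which is Java); the class and field names here are hypothetical stand-ins for the vm_dynamic row handling.

```python
# Illustrative sketch of the bug: the dirty check compares the 'session'
# field, but the runtime-data update never writes it, so the row is
# considered changed on every monitoring cycle. Hypothetical names.

class VmDynamicRow:
    """Simulates the vm_dynamic DB row."""
    def __init__(self):
        self.session = "Unknown"   # initial state in the DB
        self.status = "Up"

def is_changed(db_row, reported):
    # The engine compares every reported field against the DB copy.
    return (db_row.session != reported["session"]
            or db_row.status != reported["status"])

def update_runtime_data(db_row, reported):
    # Bug: the runtime-data update writes some fields but skips
    # 'session', so the DB value never converges to the reported one.
    db_row.status = reported["status"]
    # db_row.session = reported["session"]   # <-- missing in the buggy version

row = VmDynamicRow()
reported = {"session": "UserLoggedOn", "status": "Up"}

updates = 0
for _ in range(5):                 # five monitoring cycles
    if is_changed(row, reported):
        update_runtime_data(row, reported)
        updates += 1

print(updates)      # 5 -- an update is issued on every cycle
print(row.session)  # still 'Unknown', matching the '0' seen in the DB
```

The fix referenced later in this report makes the update write the session field as well, so the second cycle's dirty check finds nothing to do.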

Version-Release number of selected component (if applicable):
4.2.8

How reproducible:
100%

Steps to Reproduce:
1. Stop guest agent on a RHEL guest system
2. Run echo '{"__name__": "session-logon"}' > /dev/virtio-ports/ovirt-guest-agent.0
3. vdsm-client VM getStats vmID=VMUUID | grep -i session

You should see:

        "session": "UserLoggedOn", 

4. Run the following SQL query against the engine DB:
select session from vm_dynamic where vm_guid='VMUUID';

You will always see '0'

Actual results:
Always 0 (Unknown)

Expected results:
DB is updated

Additional info:
This is very critical for big environments, where this behavior can generate hundreds of unnecessary updates to the DB every hour.

Comment 1 Michal Skrivanek 2019-03-29 07:06:38 UTC
can you please add versions of qemu-guest-agent and ovirt-guest-agent running in that guest?

Comment 2 Michal Skrivanek 2019-03-29 07:11:07 UTC
and the entire output of vdsm-client VM getStats vmID=VMUUID please

Comment 3 Roman Hodain 2019-03-29 10:34:04 UTC
I do not have any agent installed, as I trigger the change manually with the echo command. The issue is not agent related.
The vdsm-client output is in [1]. This should fix the problem:

    https://gerrit.ovirt.org/#/c/99028/


[1]:
# vdsm-client VM getStats vmID=a7664445-87ce-4492-8f9e-1b376e881696 
[
    {
        "displayInfo": [
            {
                "tlsPort": "5901", 
                "ipAddress": "xxxxxxx", 
                "port": "5900", 
                "type": "spice"
            }
        ], 
        "memUsage": "0", 
        "acpiEnable": "true", 
        "vmId": "a7664445-87ce-4492-8f9e-1b376e881696", 
        "guestIPs": "xxxxxxxx", 
        "session": "UserLoggedOn", 
        "netIfaces": [
            {
                "name": "eth0", 
                "inet6": [
                    "fe80::546f:7dff:feec:0", 
                    "2620:52:0:25c0:546f:7dff:feec:0"
                ], 
                "inet": [
                    "xxxxxxxx"
                ], 
                "hw": "56:6f:7d:ec:00:00"
            }
        ], 
        "timeOffset": "-1", 
        "memoryStats": {
            "swap_out": 0, 
            "majflt": 0, 
            "minflt": 546, 
            "mem_cached": "682104", 
            "mem_free": "897244", 
            "mem_buffers": "2108", 
            "swap_in": 0, 
            "pageflt": 546, 
            "mem_total": "1813960", 
            "mem_unused": "897244"
        }, 
        "balloonInfo": {
            "balloon_max": "2097152", 
            "balloon_cur": "2097152", 
            "balloon_target": "2097152", 
            "balloon_min": "2097152"
        }, 
        "pauseCode": "NOERR", 
        "disksUsage": [
            {
                "path": "/", 
                "total": "6641680384", 
                "used": "1494687744", 
                "fs": "xfs"
            }, 
            {
                "path": "/boot", 
                "total": "1063256064", 
                "used": "151805952", 
                "fs": "xfs"
            }
        ], 
        "network": {
            "vnet0": {
                "macAddr": "56:6f:7d:ec:00:00", 
                "rxDropped": "1947", 
                "tx": "10083143", 
                "txDropped": "0", 
                "rxErrors": "0", 
                "rx": "1067332937", 
                "txErrors": "0", 
                "state": "unknown", 
                "sampleTime": 4630469.1, 
                "speed": "1000", 
                "name": "vnet0"
            }
        }, 
        "vmType": "kvm", 
        "cpuUser": "5.70", 
        "elapsedTime": "92635", 
        "vmJobs": {}, 
        "cpuSys": "1.00", 
        "appsList": [
            "ovirt-guest-agent-common-1.0.14-3.el7ev", 
            "kernel-3.10.0-957.el7"
        ], 
        "guestOs": "3.10.0-957.el7.x86_64", 
        "vmName": "test", 
        "guestFQDN": "unused", 
        "hash": "-4853325321511922985", 
        "lastLogin": 1553765872.655569, 
        "cpuUsage": "861670000000", 
        "vcpuPeriod": 100000, 
        "lastLogout": 1553762889.5671, 
        "lastUser": "Unknown", 
        "guestTimezone": {
            "zone": "Europe/Prague", 
            "offset": 60
        }, 
        "vcpuQuota": "-1", 
        "guestContainers": [], 
        "kvmEnable": "true", 
        "disks": {
            "hdc": {
                "readLatency": "0", 
                "writtenBytes": "0", 
                "writeOps": "0", 
                "apparentsize": "0", 
                "readOps": "12", 
                "writeLatency": "0", 
                "readBytes": "368", 
                "flushLatency": "0", 
                "readRate": "0.0", 
                "truesize": "0", 
                "writeRate": "0.0"
            }, 
            "sda": {
                "readLatency": "0", 
                "writtenBytes": "2590196224", 
                "writeOps": "638134", 
                "apparentsize": "3221225472", 
                "readOps": "116890", 
                "writeLatency": "3483024", 
                "imageID": "f8a12645-1501-45e5-b59b-cca1fa0a54df", 
                "readBytes": "496647680", 
                "flushLatency": "90658937", 
                "readRate": "0.0", 
                "truesize": "3221225472", 
                "writeRate": "14848.0"
            }
        }, 
        "monitorResponse": "0", 
        "guestOsInfo": {
            "kernel": "3.10.0-957.el7.x86_64", 
            "type": "linux", 
            "version": "7.6", 
            "distribution": "Red Hat Enterprise Linux Server", 
            "arch": "x86_64", 
            "codename": "Maipo"
        }, 
        "username": "root", 
        "guestName": "unused", 
        "status": "Up", 
        "guestCPUCount": 1, 
        "vcpuCount": "1", 
        "clientIp": "xxxxxxxxxxx", 
        "statusTime": "4630469100"
    }
]

Comment 4 Michal Skrivanek 2019-03-29 10:58:56 UTC
(In reply to Roman Hodain from comment #3)
> I do not have any agent installed as I trigger the change manually by the
> echo command. The issue is not agent related.
> The vdsm-client out is  [1]. This should fix the problem:
> 
>     https://gerrit.ovirt.org/#/c/99028/

it indeed should!

Comment 6 Liran Rotenberg 2019-05-21 11:09:31 UTC
Verified on:
ovirt-engine-4.4.0-0.0.master.20190519192123.gitd51360f.el7.noarch
vdsm-4.30.15-1.el7.x86_64

Steps:
1. Stop guest agent on a RHEL guest system
2. Run echo '{"__name__": "session-logon"}' > /dev/virtio-ports/ovirt-guest-agent.0
3. vdsm-client VM getStats vmID=VMUUID | grep -i session
4. Check the engine's DB
$ su - postgres
$ psql -d engine
# select session from vm_dynamic where vm_guid='VMUUID';

I tried both with the qemu-guest-agent on and off, and with the echo message in step 2 set to session-logon and to session-logoff.

Results:
$ vdsm-client VM getStats vmID=VMUUID | grep -i session
    "session": "UserLoggedOn",
engine=# select session from vm_dynamic where vm_guid='VMUUID';
 session 
---------
       1
(1 row)

In logoff it will show:
$ vdsm-client VM getStats vmID=af4ef4e2-f570-4c32-8089-adfd38c1edb9 | grep -i session
    "session": "LoggedOff", 
engine=# select session from vm_dynamic where vm_guid='VMUUID';
 session 
---------
       4
(1 row)
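The session column stores an integer ordinal rather than the string reported by vdsm. From the values observed in this report (0 for Unknown before the fix, and 1 and 4 in the verification output above), a partial mapping can be inferred; this sketch is an assumption based only on those observed values, and the full enum (likely defined in the engine code) may contain additional states such as Locked.

```python
# Hypothetical mapping between the 'session' strings reported by vdsm and
# the integer stored in vm_dynamic.session, inferred from the values seen
# in this bug report. Not the authoritative enum definition.

SESSION_ORDINALS = {
    "Unknown": 0,       # initial DB value, seen before the fix
    "UserLoggedOn": 1,  # seen in the verification output
    "LoggedOff": 4,     # seen in the verification output
}

def to_db_value(session_string):
    """Translate a vdsm-reported session string to the stored integer."""
    return SESSION_ORDINALS.get(session_string, SESSION_ORDINALS["Unknown"])

print(to_db_value("UserLoggedOn"))  # 1, as in the verification output
print(to_db_value("LoggedOff"))     # 4
```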

Comment 8 RHV bug bot 2019-12-13 13:15:33 UTC
WARN: Bug status (VERIFIED) wasn't changed but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 9 RHV bug bot 2019-12-20 17:45:12 UTC
WARN: Bug status (VERIFIED) wasn't changed but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 10 RHV bug bot 2020-01-08 14:49:18 UTC
WARN: Bug status (VERIFIED) wasn't changed but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 11 RHV bug bot 2020-01-08 15:16:45 UTC
WARN: Bug status (VERIFIED) wasn't changed but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 12 RHV bug bot 2020-01-24 19:51:07 UTC
WARN: Bug status (VERIFIED) wasn't changed but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 16 errata-xmlrpc 2020-08-04 13:16:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: RHV Manager (ovirt-engine) 4.4 security, bug fix, and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:3247