Bug 1693628
| Summary: | Engine generates too many updates to vm_dynamic table due to the session change |
|---|---|
| Product: | Red Hat Enterprise Virtualization Manager |
| Component: | ovirt-engine |
| Version: | 4.2.8-2 |
| Status: | CLOSED ERRATA |
| Severity: | high |
| Priority: | unspecified |
| Reporter: | Roman Hodain <rhodain> |
| Assignee: | Shmuel Melamud <smelamud> |
| QA Contact: | Liran Rotenberg <lrotenbe> |
| CC: | dfediuck, mavital, michal.skrivanek, rdlugyhe, smelamud |
| Target Milestone: | ovirt-4.4.0 |
| Target Release: | --- |
| Keywords: | ZStream |
| Flags: | lrotenbe: testing_plan_complete+ |
| Hardware: | Unspecified |
| OS: | Unspecified |
| Doc Type: | Bug Fix |
| Doc Text: | Previously, the state of the user session was not saved correctly in the Engine database, causing many unnecessary database updates to be performed. The current release fixes this issue: now the user session state is saved correctly on the first update. |
| Clones: | 1712243 (view as bug list) |
| Bug Blocks: | 1712243 |
| oVirt Team: | Virt |
| Type: | Bug |
| Last Closed: | 2020-08-04 13:16:58 UTC |
Description
Roman Hodain
2019-03-28 11:36:05 UTC
can you please add versions of qemu-guest-agent and ovirt-guest-agent running in that guest? and the entire output of vdsm-client VM getStats vmID=VMUUID please

I do not have any agent installed, as I trigger the change manually with the echo command. The issue is not agent related. The vdsm-client output is [1].

This should fix the problem:

https://gerrit.ovirt.org/#/c/99028/

[1]:

    # vdsm-client VM getStats vmID=a7664445-87ce-4492-8f9e-1b376e881696
    [
        {
            "displayInfo": [
                {"tlsPort": "5901", "ipAddress": "xxxxxxx", "port": "5900", "type": "spice"}
            ],
            "memUsage": "0",
            "acpiEnable": "true",
            "vmId": "a7664445-87ce-4492-8f9e-1b376e881696",
            "guestIPs": "xxxxxxxx",
            "session": "UserLoggedOn",
            "netIfaces": [
                {"name": "eth0", "inet6": ["fe80::546f:7dff:feec:0", "2620:52:0:25c0:546f:7dff:feec:0"], "inet": ["xxxxxxxx"], "hw": "56:6f:7d:ec:00:00"}
            ],
            "timeOffset": "-1",
            "memoryStats": {"swap_out": 0, "majflt": 0, "minflt": 546, "mem_cached": "682104", "mem_free": "897244", "mem_buffers": "2108", "swap_in": 0, "pageflt": 546, "mem_total": "1813960", "mem_unused": "897244"},
            "balloonInfo": {"balloon_max": "2097152", "balloon_cur": "2097152", "balloon_target": "2097152", "balloon_min": "2097152"},
            "pauseCode": "NOERR",
            "disksUsage": [
                {"path": "/", "total": "6641680384", "used": "1494687744", "fs": "xfs"},
                {"path": "/boot", "total": "1063256064", "used": "151805952", "fs": "xfs"}
            ],
            "network": {"vnet0": {"macAddr": "56:6f:7d:ec:00:00", "rxDropped": "1947", "tx": "10083143", "txDropped": "0", "rxErrors": "0", "rx": "1067332937", "txErrors": "0", "state": "unknown", "sampleTime": 4630469.1, "speed": "1000", "name": "vnet0"}},
            "vmType": "kvm",
            "cpuUser": "5.70",
            "elapsedTime": "92635",
            "vmJobs": {},
            "cpuSys": "1.00",
            "appsList": ["ovirt-guest-agent-common-1.0.14-3.el7ev", "kernel-3.10.0-957.el7"],
            "guestOs": "3.10.0-957.el7.x86_64",
            "vmName": "test",
            "guestFQDN": "unused",
            "hash": "-4853325321511922985",
            "lastLogin": 1553765872.655569,
            "cpuUsage": "861670000000",
            "vcpuPeriod": 100000,
            "lastLogout": 1553762889.5671,
            "lastUser": "Unknown",
            "guestTimezone": {"zone": "Europe/Prague", "offset": 60},
            "vcpuQuota": "-1",
            "guestContainers": [],
            "kvmEnable": "true",
            "disks": {
                "hdc": {"readLatency": "0", "writtenBytes": "0", "writeOps": "0", "apparentsize": "0", "readOps": "12", "writeLatency": "0", "readBytes": "368", "flushLatency": "0", "readRate": "0.0", "truesize": "0", "writeRate": "0.0"},
                "sda": {"readLatency": "0", "writtenBytes": "2590196224", "writeOps": "638134", "apparentsize": "3221225472", "readOps": "116890", "writeLatency": "3483024", "imageID": "f8a12645-1501-45e5-b59b-cca1fa0a54df", "readBytes": "496647680", "flushLatency": "90658937", "readRate": "0.0", "truesize": "3221225472", "writeRate": "14848.0"}
            },
            "monitorResponse": "0",
            "guestOsInfo": {"kernel": "3.10.0-957.el7.x86_64", "type": "linux", "version": "7.6", "distribution": "Red Hat Enterprise Linux Server", "arch": "x86_64", "codename": "Maipo"},
            "username": "root",
            "guestName": "unused",
            "status": "Up",
            "guestCPUCount": 1,
            "vcpuCount": "1",
            "clientIp": "xxxxxxxxxxx",
            "statusTime": "4630469100"
        }
    ]

(In reply to Roman Hodain from comment #3)
> I do not have any agent installed as I trigger the change manually by the
> echo command. The issue is not agent related.
> The vdsm-client out is [1]. This should fix the problem:
>
> https://gerrit.ovirt.org/#/c/99028/

it indeed should!

Verified on:
ovirt-engine-4.4.0-0.0.master.20190519192123.gitd51360f.el7.noarch
vdsm-4.30.15-1.el7.x86_64

Steps:
1. Stop the guest agent on a RHEL guest system.
2. Run: echo '{"__name__": "session-logon"}' > /dev/virtio-ports/ovirt-guest-agent.0
3. Run: vdsm-client VM getStats vmID=VMUUID | grep -i session
4. Check the engine's DB:

    $ su - postgres
    $ psql -d engine
    # select session from vm_dynamic where vm_guid='VMUUID';

I tried both with the qemu-guest-agent on and off, and with step one set as session-logon and as session-logoff.
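Instead of grepping, the session state in step 3 can also be pulled out of the getStats JSON directly, since the payload is a list with one stats object per VM. A minimal sketch; the sample payload below is abridged from the dump above, and in practice it would be the captured stdout of `vdsm-client VM getStats`:

```python
import json

# Abridged sample of `vdsm-client VM getStats` output (see the full dump
# above); in practice this string comes from running the command.
sample_output = '''
[
    {
        "vmId": "a7664445-87ce-4492-8f9e-1b376e881696",
        "session": "UserLoggedOn",
        "status": "Up"
    }
]
'''

def get_session(getstats_json: str) -> str:
    """Return the guest session state from a getStats JSON payload.

    getStats returns a JSON list; the stats for the queried VM are the
    first (and only) element.
    """
    stats = json.loads(getstats_json)
    return stats[0]["session"]

print(get_session(sample_output))  # UserLoggedOn
```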
Results:

    $ vdsm-client VM getStats vmID=VMUUID | grep -i session
        "session": "UserLoggedOn",

    engine=# select session from vm_dynamic where vm_guid='VMUUID';
     session
    ---------
           1
    (1 row)

On logoff it shows:

    $ vdsm-client VM getStats vmID=af4ef4e2-f570-4c32-8089-adfd38c1edb9 | grep -i session
        "session": "LoggedOff",

    engine=# select session from vm_dynamic where vm_guid='VMUUID';
     session
    ---------
           4
    (1 row)

WARN: Bug status (VERIFIED) wasn't changed but the following should be fixed: [Found non-acked flags: '{}', ] For more info please contact: rhv-devops

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: RHV Manager (ovirt-engine) 4.4 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:3247
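The verification output shows that VDSM reports the session as a string while the engine stores an integer code in vm_dynamic.session. A small sketch of that correspondence, covering only the two values actually observed in this bug (the full set of states is defined in the engine source, so any other mapping would be an assumption):

```python
# String-to-integer session mapping, based solely on the two values observed
# in this verification ("UserLoggedOn" -> 1, "LoggedOff" -> 4); other states
# exist in the engine but are deliberately not guessed at here.
SESSION_CODES = {
    "UserLoggedOn": 1,
    "LoggedOff": 4,
}

def session_code(vdsm_session: str) -> int:
    """Translate a VDSM session string to the engine's numeric code."""
    try:
        return SESSION_CODES[vdsm_session]
    except KeyError:
        raise ValueError(f"unmapped session state: {vdsm_session}")

print(session_code("UserLoggedOn"))  # 1
print(session_code("LoggedOff"))     # 4
```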