Bug 1548845 - HotPlug succeeds but ERROR seen in VDSM: VM metrics collection failed with KeyError: 'readOps'
Summary: HotPlug succeeds but ERROR seen in VDSM: VM metrics collection failed with KeyError: 'readOps'
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: vdsm
Classification: oVirt
Component: General
Version: ---
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ovirt-4.2.3
Assignee: Daniel Erez
QA Contact: Evelina Shames
URL:
Whiteboard:
Duplicates: 1556964
Depends On:
Blocks:
 
Reported: 2018-02-25 13:01 UTC by Avihai
Modified: 2018-05-10 06:30 UTC
CC: 5 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2018-05-10 06:30:22 UTC
oVirt Team: Storage
Embargoed:
rule-engine: ovirt-4.2+


Attachments
engine, vdsm, libvirt logs (2.22 MB, application/x-gzip)
2018-02-25 13:01 UTC, Avihai
no flags


Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 89069 0 master MERGED virt: vmstats - disk_info add default values 2020-12-22 01:00:37 UTC
oVirt gerrit 89137 0 ovirt-4.2 MERGED virt: vmstats - disk_info add default values 2020-12-22 01:01:09 UTC

Description Avihai 2018-02-25 13:01:35 UTC
Created attachment 1400568 [details]
engine , vdsm ,libvirt logs

Description of problem:
Each time we hot-plug a disk to a VM (running with an OS), the following error appears in the relevant host's vdsm.log:
 
2018-02-25 14:33:15,858+0200 ERROR (periodic/2) [virt.vmstats] VM metrics collection failed (vmstats:264)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vmstats.py", line 221, in send_metrics
    diskinfo['readOps']
KeyError: 'readOps'

It looks like the 'readOps' key is missing from the 'diskinfo' dictionary.

The disk hotplug itself succeeds, so the practical impact of the error is unclear.
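
For context, the failing code path indexes the per-disk stats dictionary directly, so a disk whose counters have not been populated yet (for example, one that was just hot-plugged) raises KeyError. The sketch below illustrates the pattern and the kind of fix suggested by the gerrit change title ("virt: vmstats - disk_info add default values"); it is illustrative Python only, and the counter names other than 'readOps' are assumptions, not the actual vdsm code:

# Illustrative sketch only -- not the actual vdsm/virt/vmstats.py code.
DISK_STAT_DEFAULTS = {
    'readOps': 0,       # the key that raised KeyError in this bug
    'writeOps': 0,      # assumed additional counter names
    'readBytes': 0,
    'writtenBytes': 0,
}

def build_disk_info(raw_stats):
    # Seeding the result with defaults means a freshly hot-plugged disk,
    # whose counters have not been reported yet, no longer breaks the
    # metrics send path.
    diskinfo = dict(DISK_STAT_DEFAULTS)
    diskinfo.update(raw_stats)
    return diskinfo

def send_metrics(diskinfo):
    # Original failing access pattern: a plain diskinfo['readOps'] lookup,
    # which raises KeyError when the key is absent. With defaults in place
    # the lookup always succeeds.
    return diskinfo['readOps']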

VDSM:
2018-02-25 14:33:15,764+0200 WARN  (jsonrpc/5) [virt.vm] (vmId='05361b2e-1ae3-40df-a159-cb4688b303c5') Found unknown drive: {'alias': 'scsi0-0-0-3', 'readonly': 'False', 'iface': 'ide', 'name': 'sdi', 'address': {'bus': '0', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '3'}, 'device': 'disk', 'path': '/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Storage__NFS_storage__local__ge4__nfs__0/5390765b-eaa9-4254-958a-1bef94820d9d/images/111ac326-4372-46ed-8c92-4192ebeab679/c791d837-236a-4f0c-ab34-984c8c35659b', 'type': 'disk'} (storage:218)
2018-02-25 14:33:15,808+0200 WARN  (periodic/0) [virt.periodic.VmDispatcher] could not run <class 'vdsm.virt.periodic.UpdateVolumes'> on ['05361b2e-1ae3-40df-a159-cb4688b303c5'] (periodic:316)
2018-02-25 14:33:15,858+0200 ERROR (periodic/2) [virt.vmstats] VM metrics collection failed (vmstats:264)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vmstats.py", line 221, in send_metrics
    diskinfo['readOps']
KeyError: 'readOps'



libvirt:
2018-02-25 12:33:15.559+0000: 16304: debug : qemuMonitorAddDrive:3244 : drive=file=/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Storage__NFS_storage__local__ge4__nfs__0/5390765b-eaa9-4254-958a-1bef94820d9d/images/111ac326-4372-46ed-8c92-4192ebeab679/c791d837-236a-4f0c-ab34-984c8c35659b,format=raw,if=none,id=drive-scsi0-0-0-3,serial=111ac326-4372-46ed-8c92-4192ebeab679,cache=none,werror=stop,rerror=stop,aio=threads
2018-02-25 12:33:15.559+0000: 16304: info : qemuMonitorSend:1061 : QEMU_MONITOR_SEND_MSG: mon=0x7fd7280263d0 msg={"execute":"__com.redhat_drive_add","arguments":{"file":"/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Storage__NFS_storage__local__ge4__nfs__0/5390765b-eaa9-4254-958a-1bef94820d9d/images/111ac326-4372-46ed-8c92-4192ebeab679/c791d837-236a-4f0c-ab34-984c8c35659b","format":"raw","id":"drive-scsi0-0-0-3","serial":"111ac326-4372-46ed-8c92-4192ebeab679","cache":"none","werror":"stop","rerror":"stop","aio":"threads"},"id":"libvirt-437"}
2018-02-25 12:33:15.559+0000: 16293: info : qemuMonitorIOWrite:543 : QEMU_MONITOR_IO_WRITE: mon=0x7fd7280263d0 buf={"execute":"__com.redhat_drive_add","arguments":{"file":"/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Storage__NFS_storage__local__ge4__nfs__0/5390765b-eaa9-4254-958a-1bef94820d9d/images/111ac326-4372-46ed-8c92-4192ebeab679/c791d837-236a-4f0c-ab34-984c8c35659b","format":"raw","id":"drive-scsi0-0-0-3","serial":"111ac326-4372-46ed-8c92-4192ebeab679","cache":"none","werror":"stop","rerror":"stop","aio":"threads"},"id":"libvirt-437"}


Engine:
2018-02-25 14:33:15,393+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-207618) [dc480934-9541-44e6-b40c-2d8c9ce68958] START, HotPlugDiskVDSCommand(HostName = host_mixed_3, HotPlugDiskVDSParameters:{hostId='7ca55bcf-258c-4fe0-ba34-d7f6dd72c020', vmId='05361b2e-1ae3-40df-a159-cb4688b303c5', diskId='111ac326-4372-46ed-8c92-4192ebeab679', addressMap='[bus=0, controller=0, unit=3, type=drive, target=0]'}), log id: 6e68b139


Version-Release number of selected component (if applicable):
vdsm-4.20.18-1.el7ev.x86_64
libvirt-daemon-3.9.0-13.el7.x86_64
ovirt-engine-4.2.2-0.1.el7.noarch
qemu-guest-agent-2.8.0-2.el7.x86_64

How reproducible:
100%


Steps to Reproduce:
1. Start a VM with an OS installed.
2. Create a new disk (disk1) on the VM and wait until it is active.
3. Deactivate disk1.
4. Activate disk1 (a scripted version of steps 3-4 is sketched below).
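
A scripted version of steps 3-4, useful for reproducing the error in a loop, can be sketched with the oVirt Python SDK (ovirt-engine-sdk-python 4). The connection details, VM name, and disk alias below are placeholders, not values from this environment:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details -- adjust to the environment under test.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=test_vm')[0]
attachments_service = vms_service.vm_service(vm.id).disk_attachments_service()

# Locate the attachment of the hot-plugged disk (matched here by alias 'disk1').
attachment = next(
    a for a in attachments_service.list()
    if connection.follow_link(a.disk).alias == 'disk1'
)
attachment_service = attachments_service.attachment_service(attachment.id)

# Step 3: deactivate disk1, then step 4: activate it again.
attachment_service.update(types.DiskAttachment(active=False))
attachment_service.update(types.DiskAttachment(active=True))

connection.close()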

Actual results:
The error appears in the VM's host vdsm.log:
2018-02-25 14:33:15,858+0200 ERROR (periodic/2) [virt.vmstats] VM metrics collection failed (vmstats:264)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vmstats.py", line 221, in send_metrics
    diskinfo['readOps']
KeyError: 'readOps'

Expected results:
No ERROR should appear.

Additional info:

Comment 1 Francesco Romani 2018-03-15 15:36:17 UTC
I want to clarify that the only negative effect is noise in the logs.

Comment 2 Douglas Schilling Landgraf 2018-03-15 16:59:07 UTC
*** Bug 1556964 has been marked as a duplicate of this bug. ***

Comment 3 Evelina Shames 2018-04-18 07:30:23 UTC
Verified.
vdsm-4.20.25-1
ovirt-engine-4.2.3-0.1

Comment 4 Sandro Bonazzola 2018-05-10 06:30:22 UTC
This bug is included in the oVirt 4.2.3 release, published on May 4th 2018.

Since the problem described in this bug report should be resolved in the
oVirt 4.2.3 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

