Bug 1219903 - 20150421 build floods vdsm.log with "message: metadata not found: Requested metadata element is not present"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 3.5.0
Hardware: All
OS: Linux
Priority: medium
Severity: high
Target Milestone: ovirt-3.6.1
Target Release: 3.6.0
Assignee: Martin Sivák
QA Contact: Shira Maximov
URL:
Whiteboard:
Depends On: 1220474 1260864
Blocks: 1289007
 
Reported: 2015-05-08 16:18 UTC by Robert McSwain
Modified: 2019-09-12 08:27 UTC
CC: 23 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Previously, libvirt reported a "metadata not found" error in vdsm.log when a query was made to get the missing metadata element. This was not actually an error, but a misleading message issued by VDSM. An empty metadata element has been added to the code so that this message will no longer appear in the log.
Clone Of:
Cloned to: 1289007
Environment:
Last Closed: 2016-03-09 19:39:35 UTC
oVirt Team: SLA
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 1453913 0 None None None Never
Red Hat Product Errata RHBA-2016:0362 0 normal SHIPPED_LIVE vdsm 3.6.0 bug fix and enhancement update 2016-03-09 23:49:32 UTC
oVirt gerrit 45664 0 master MERGED Add an empty metadata qos element to the created domain Never
oVirt gerrit 45799 0 ovirt-3.6 MERGED Add an empty metadata qos element to the created domain Never
oVirt gerrit 49655 0 ovirt-3.5 MERGED Add an empty metadata qos element to the created domain Never
oVirt gerrit 50341 0 ovirt-3.5 MERGED Handle empty QoS section with no cpu limit information Never

Description Robert McSwain 2015-05-08 16:18:14 UTC
Description of problem:
After upgrading to the 20150421 build of RHEV-H, /var/log is being filled with the following messages, requiring manual cleaning of the logs:

Thread-116::DEBUG::2015-05-04 19:53:42,344::libvirtconnection::143::root::(wrapper) Unknown libvirterror: ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata element is not present
VM Channels Listener::DEBUG::2015-05-04 19:53:44,195::vmchannels::96::vds::(_handle_timeouts) Timeout on fileno 38.
Thread-120::DEBUG::2015-05-04 19:53:44,308::libvirtconnection::143::root::(wrapper) Unknown libvirterror: ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata element is not present

No other changes to the environment were made other than the host upgrade


How reproducible:
Unknown; the issue occurred after installing the 20150421 build.

Comment 2 Fabian Deutsch 2015-05-08 16:34:04 UTC
Jiri, have you seen these errors before? Or do you know what is causing them?

Comment 3 Fabian Deutsch 2015-05-08 16:37:29 UTC
I'm also seeing vdsm-related errors in /var/log/messages:

17452 May  4 19:36:59 localhost vdsm vm.Vm ERROR vmId=`7a226fc5-847a-4b1f-be90-2ada589e17a6`::Alias not found for device type graphics during migration at destination host
17453 May  4 19:37:00 localhost kernel: device vnet3 entered promiscuous mode
17454 May  4 19:37:00 localhost kernel: SERVER_VLAN3144: port 3(vnet3) entering forwarding state
17455 May  4 19:37:00 localhost sanlock[5618]: 2015-05-04 19:37:00+0000 1133 [5618]: cmd 9 target pid 11132 not found
17456 May  4 19:37:01 localhost ntpd[4758]: Listen normally on 32 vnet2 fe80::fc1a:4aff:fe01:4275 UDP 123
17457 May  4 19:37:01 localhost ntpd[4758]: Listen normally on 33 vnet0 fe80::fc1a:4aff:fe01:d12 UDP 123
17458 May  4 19:37:01 localhost ntpd[4758]: Listen normally on 34 vnet1 fe80::fc1a:4aff:fe01:c59f UDP 123
17459 May  4 19:37:01 localhost ntpd[4758]: peers refreshed
17460 May  4 19:37:03 localhost ntpd[4758]: Listen normally on 35 vnet3 fe80::fc1a:4aff:fe01:46a8 UDP 123
17461 May  4 19:37:03 localhost ntpd[4758]: peers refreshed
17462 May  4 19:37:41 localhost vdsm vm.Vm WARNING vmId=`8346d408-12bb-4739-982e-fe48b80fac60`::Unknown type found, device: '{'device': 'unix', 'alias': 'channel0', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '1'}}' found
17463 May  4 19:37:41 localhost vdsm vm.Vm WARNING vmId=`8346d408-12bb-4739-982e-fe48b80fac60`::Unknown type found, device: '{'device': 'unix', 'alias': 'channel1', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '2'}}' found
17464 May  4 19:37:41 localhost vdsm vm.Vm WARNING vmId=`8346d408-12bb-4739-982e-fe48b80fac60`::Unknown type found, device: '{'device': 'spicevmc', 'alias': 'channel2', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '3'}}' found
17465 May  4 19:37:42 localhost vdsm vm.Vm ERROR vmId=`8346d408-12bb-4739-982e-fe48b80fac60`::Alias not found for device type graphics during migration at destination host
17466 May  4 19:37:43 localhost kernel: device vnet4 entered promiscuous mode
17467 May  4 19:37:43 localhost kernel: SERVER_VLAN3144: port 4(vnet4) entering forwarding state
17468 May  4 19:37:43 localhost sanlock[5618]: 2015-05-04 19:37:43+0000 1176 [5618]: cmd 9 target pid 11334 not found
17469 May  4 19:37:46 localhost ntpd[4758]: Listen normally on 36 vnet4 fe80::fc1a:4aff:fe01:c59c UDP 123
17470 May  4 19:37:46 localhost ntpd[4758]: peers refreshed
17471 May  4 19:38:58 localhost vdsm vm.Vm WARNING vmId=`a7c67ab0-767d-46e8-99f9-6a293f4e208e`::Unknown type found, device: '{'device': 'unix', 'alias': 'channel0', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '1'}}' found
17472 May  4 19:38:58 localhost vdsm vm.Vm WARNING vmId=`a7c67ab0-767d-46e8-99f9-6a293f4e208e`::Unknown type found, device: '{'device': 'unix', 'alias': 'channel1', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '2'}}' found
17473 May  4 19:38:58 localhost vdsm vm.Vm ERROR vmId=`a7c67ab0-767d-46e8-99f9-6a293f4e208e`::Alias not found for device type graphics during migration at destination host
17474 May  4 19:38:58 localhost kernel: device vnet5 entered promiscuous mode
17475 May  4 19:38:58 localhost kernel: SERVER_VLAN3144: port 5(vnet5) entering forwarding state
17476 May  4 19:38:58 localhost sanlock[5618]: 2015-05-04 19:38:58+0000 1252 [5618]: cmd 9 target pid 11594 not found
17477 May  4 19:38:58 localhost vdsm vm.Vm WARNING vmId=`283b6d8a-1e65-4010-b583-aebb3be50dd0`::Unknown type found, device: '{'device': 'unix', 'alias': 'channel0', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '1'}}' found
17478 May  4 19:38:58 localhost vdsm vm.Vm WARNING vmId=`283b6d8a-1e65-4010-b583-aebb3be50dd0`::Unknown type found, device: '{'device': 'unix', 'alias': 'channel1', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '2'}}' found
17479 May  4 19:38:58 localhost vdsm vm.Vm WARNING vmId=`283b6d8a-1e65-4010-b583-aebb3be50dd0`::Unknown type found, device: '{'device': 'spicevmc', 'alias': 'channel2', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '3'}}' found
17480 May  4 19:38:58 localhost vdsm vm.Vm ERROR vmId=`283b6d8a-1e65-4010-b583-aebb3be50dd0`::Alias not found for device type graphics during migration at destination host


Michal, can you possibly identify a cause?

Comment 4 Roy Golan 2015-05-10 07:43:37 UTC
its a libvirt issues basically. cluster 3.5 should have metadata support. I'm not sure about lower version  ( SLA uses it for cpu tune) 

what is the libvirt issue there?

Comment 5 Roy Golan 2015-05-12 11:23:41 UTC
(In reply to Roy Golan from comment #4)
> its a libvirt issues basically. cluster 3.5 should have metadata support.
> I'm not sure about lower version  ( SLA uses it for cpu tune) 
> 
> what is the libvirt issue there?

typo s/issue/version/

Comment 6 Doron Fediuck 2015-05-27 13:18:25 UTC
The issue is indeed libvirt based, and should be resolved as a part of bug 1220474.

Comment 7 Fabian Deutsch 2015-05-27 13:55:57 UTC
Moving this bug to vdsm for now, as it can be tracked there.

Comment 8 Peter Krempa 2015-05-27 18:42:45 UTC
While bug 1220474 fixes the issue of the libvirtd log (or system log) being flooded with the message, the issue with VDSM will require adding a filter to its logging code that will ignore libvirt errors with code "VIR_ERR_NO_DOMAIN_METADATA = 80".

The error is reported by libvirt so that users are able to differentiate between internal (fatal) errors encountered while retrieving the requested metadata element and the element simply not being defined for the selected VM. The error code is unique to the case of the requested metadata entry not being present, and applications are welcome to ignore it if they are able to cope with that.

Since the error code is now part of the libvirt API it cannot be changed without causing regressions in other possible applications using the metadata subsystem.
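
The filtering approach comment 8 describes could look roughly like this. This is a hypothetical, standalone sketch, not the actual vdsm code: FakeLibvirtError stands in for libvirt.libvirtError (only get_error_code() is modeled), and the wrapper shape is loosely based on the libvirtconnection wrapper mentioned in the log excerpts.

```python
import logging

VIR_ERR_NO_DOMAIN_METADATA = 80  # libvirt's code for "metadata not found"

class FakeLibvirtError(Exception):
    """Stand-in for libvirt.libvirtError; only get_error_code() is modeled."""
    def __init__(self, code, message):
        super().__init__(message)
        self._code = code

    def get_error_code(self):
        return self._code

def wrapper(func, log=logging.getLogger("root")):
    """Wrap a libvirt call: log unknown libvirt errors, but stay silent
    when the error only means the requested metadata element is absent."""
    def wrapped(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except FakeLibvirtError as e:
            if e.get_error_code() != VIR_ERR_NO_DOMAIN_METADATA:
                log.debug("Unknown libvirterror: %s", e)
            raise  # callers still see the exception either way
    return wrapped
```

The error still propagates to the caller; only the log line is suppressed, which is the behavior comment 8 says applications may safely opt into.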

Comment 9 Doron Fediuck 2015-06-01 14:06:48 UTC
Dan,
    any thoughts on the right way to handle it in vdsm?

Comment 10 Dan Kenigsberg 2015-06-01 15:44:56 UTC
I'm all for removing all kinds of logging from libvirtconnection:143. If mskrivan thinks that this is still useful, we can skip logging of this specific error.

Comment 11 Michal Skrivanek 2015-06-15 08:00:49 UTC
I think a better solution was proposed by Martin: use an empty <qos> tag inside the metadata. Then it won't fail.
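
For illustration, a sketch of the domain XML shape this approach aims at: a <metadata> section that already contains an empty, namespaced qos element, so libvirt's metadata lookup finds something and never raises error 80. The "ovirt" prefix and tune URI below are assumptions for illustration, not values taken from the patches.

```python
import xml.etree.ElementTree as ET

TUNE_URI = "http://ovirt.org/vm/tune/1.0"  # assumed namespace, for illustration

# Domain XML with the empty, namespaced qos element pre-populated.
domain_xml = """<domain type='kvm'>
  <name>vm1</name>
  <metadata>
    <ovirt:qos xmlns:ovirt='%s'/>
  </metadata>
</domain>""" % TUNE_URI

root = ET.fromstring(domain_xml)
# ElementTree addresses namespaced tags as {uri}localname.
qos = root.find("./metadata/{%s}qos" % TUNE_URI)
# qos exists but carries no children and no text: an empty placeholder.
```

With the element present, a metadata query returns the (empty) element instead of failing, which is why the message disappears from the log.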

Comment 12 Michal Skrivanek 2015-06-17 06:49:45 UTC
Removing TestOnly, as some change is required.
Raising Severity, as this is incredibly annoying and complicates troubleshooting.

Comment 16 Martin Sivák 2015-08-11 13:13:09 UTC
An empty QoS element will solve this nicely. Unfortunately, the XML code we have in vdsm does not allow creating a namespaced (custom prefix + URI) XML element at the moment :/

The message is totally harmless with regard to functionality and "only" pollutes the log.

Comment 17 Pavel Zhukov 2015-08-11 15:22:40 UTC
(In reply to Martin Sivák from comment #16)
> An empty QoS element will solve this nicely. Unfortunately, the XML code we
> have in vdsm does not allow creating a namespaced (custom prefix + URI) XML
> element at the moment :/
> 
> The message is totally harmless with regard to functionality and "only"
> pollutes the log.

Not really. Logs are rotated frequently, and GSS loses the ability to debug issues.
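
On comment 16's point about namespaced elements: with the Python standard library alone, a namespaced (custom prefix + URI) element can be built. A minimal sketch; the prefix and URI are assumed illustrative values, not taken from the vdsm source:

```python
import xml.etree.ElementTree as ET

TUNE_URI = "http://ovirt.org/vm/tune/1.0"  # assumed namespace, for illustration
ET.register_namespace("ovirt", TUNE_URI)   # keep the 'ovirt' prefix on output

metadata = ET.Element("metadata")
# The {uri}localname form is ElementTree's syntax for a qualified tag.
ET.SubElement(metadata, "{%s}qos" % TUNE_URI)

xml_bytes = ET.tostring(metadata)
# Serializes with the registered prefix and an xmlns:ovirt declaration.
```

Whether vdsm's own XML helpers at the time could be extended this way is exactly the limitation comment 16 describes; the sketch only shows the target output is achievable with stdlib tools.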

Comment 18 Roy Golan 2015-09-02 06:34:30 UTC
Wouldn't a log filter using the LIBVIRT_LOG_FILTERS env var suppress that and save the need for a build and so on? Just a libvirtd restart?

Comment 19 Roy Golan 2015-09-02 06:57:24 UTC
> with the message, the issue with VDSM will require adding a filter to its
> logging code that will ignore libvirt errors with code
> "VIR_ERR_NO_DOMAIN_METADATA = 80".

Sorry, you already wrote that.

I think we should close this bug with this solution.

Comment 21 Shira Maximov 2015-11-05 14:49:36 UTC
Verified on:
Red Hat Enterprise Virtualization Manager Version: 3.6.0.3-0.1.el6

Verification steps:
Look at /var/log/vdsm/vdsm.log and make sure that the logs are not polluted with:
metadata not found: Requested metadata element is not present
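
The check above can be scripted; a small hypothetical helper (the log path comes from the verification step, everything else is illustrative):

```python
# The exact libvirt message the verification step greps for.
NEEDLE = "metadata not found: Requested metadata element is not present"

def count_metadata_errors(log_path="/var/log/vdsm/vdsm.log"):
    """Return how many lines of the given log contain the libvirt message.

    A fixed build should report 0; any positive count means the flood
    is still present."""
    with open(log_path, errors="replace") as f:
        return sum(1 for line in f if NEEDLE in line)
```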

Comment 25 errata-xmlrpc 2016-03-09 19:39:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0362.html

