Description of problem: After upgrading to the 20150421 build of RHEV-H, /var/log is being filled up with messages, leading to manual cleaning of the logs:

Thread-116::DEBUG::2015-05-04 19:53:42,344::libvirtconnection::143::root::(wrapper) Unknown libvirterror: ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata element is not present
VM Channels Listener::DEBUG::2015-05-04 19:53:44,195::vmchannels::96::vds::(_handle_timeouts) Timeout on fileno 38.
Thread-120::DEBUG::2015-05-04 19:53:44,308::libvirtconnection::143::root::(wrapper) Unknown libvirterror: ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata element is not present

No other changes to the environment were made other than the host upgrade.

How reproducible: Unknown; the issue occurred after installing the 20150421 build.
Jiri, have you maybe seen these errors before? Or do you know what is causing them?
I'm also seeing vdsm related errors in /var/log/messages:

May 4 19:36:59 localhost vdsm vm.Vm ERROR vmId=`7a226fc5-847a-4b1f-be90-2ada589e17a6`::Alias not found for device type graphics during migration at destination host
May 4 19:37:00 localhost kernel: device vnet3 entered promiscuous mode
May 4 19:37:00 localhost kernel: SERVER_VLAN3144: port 3(vnet3) entering forwarding state
May 4 19:37:00 localhost sanlock[5618]: 2015-05-04 19:37:00+0000 1133 [5618]: cmd 9 target pid 11132 not found
May 4 19:37:01 localhost ntpd[4758]: Listen normally on 32 vnet2 fe80::fc1a:4aff:fe01:4275 UDP 123
May 4 19:37:01 localhost ntpd[4758]: Listen normally on 33 vnet0 fe80::fc1a:4aff:fe01:d12 UDP 123
May 4 19:37:01 localhost ntpd[4758]: Listen normally on 34 vnet1 fe80::fc1a:4aff:fe01:c59f UDP 123
May 4 19:37:01 localhost ntpd[4758]: peers refreshed
May 4 19:37:03 localhost ntpd[4758]: Listen normally on 35 vnet3 fe80::fc1a:4aff:fe01:46a8 UDP 123
May 4 19:37:03 localhost ntpd[4758]: peers refreshed
May 4 19:37:41 localhost vdsm vm.Vm WARNING vmId=`8346d408-12bb-4739-982e-fe48b80fac60`::Unknown type found, device: '{'device': 'unix', 'alias': 'channel0', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '1'}}' found
May 4 19:37:41 localhost vdsm vm.Vm WARNING vmId=`8346d408-12bb-4739-982e-fe48b80fac60`::Unknown type found, device: '{'device': 'unix', 'alias': 'channel1', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '2'}}' found
May 4 19:37:41 localhost vdsm vm.Vm WARNING vmId=`8346d408-12bb-4739-982e-fe48b80fac60`::Unknown type found, device: '{'device': 'spicevmc', 'alias': 'channel2', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '3'}}' found
May 4 19:37:42 localhost vdsm vm.Vm ERROR vmId=`8346d408-12bb-4739-982e-fe48b80fac60`::Alias not found for device type graphics during migration at destination host
May 4 19:37:43 localhost kernel: device vnet4 entered promiscuous mode
May 4 19:37:43 localhost kernel: SERVER_VLAN3144: port 4(vnet4) entering forwarding state
May 4 19:37:43 localhost sanlock[5618]: 2015-05-04 19:37:43+0000 1176 [5618]: cmd 9 target pid 11334 not found
May 4 19:37:46 localhost ntpd[4758]: Listen normally on 36 vnet4 fe80::fc1a:4aff:fe01:c59c UDP 123
May 4 19:37:46 localhost ntpd[4758]: peers refreshed
May 4 19:38:58 localhost vdsm vm.Vm WARNING vmId=`a7c67ab0-767d-46e8-99f9-6a293f4e208e`::Unknown type found, device: '{'device': 'unix', 'alias': 'channel0', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '1'}}' found
May 4 19:38:58 localhost vdsm vm.Vm WARNING vmId=`a7c67ab0-767d-46e8-99f9-6a293f4e208e`::Unknown type found, device: '{'device': 'unix', 'alias': 'channel1', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '2'}}' found
May 4 19:38:58 localhost vdsm vm.Vm ERROR vmId=`a7c67ab0-767d-46e8-99f9-6a293f4e208e`::Alias not found for device type graphics during migration at destination host
May 4 19:38:58 localhost kernel: device vnet5 entered promiscuous mode
May 4 19:38:58 localhost kernel: SERVER_VLAN3144: port 5(vnet5) entering forwarding state
May 4 19:38:58 localhost sanlock[5618]: 2015-05-04 19:38:58+0000 1252 [5618]: cmd 9 target pid 11594 not found
May 4 19:38:58 localhost vdsm vm.Vm WARNING vmId=`283b6d8a-1e65-4010-b583-aebb3be50dd0`::Unknown type found, device: '{'device': 'unix', 'alias': 'channel0', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '1'}}' found
May 4 19:38:58 localhost vdsm vm.Vm WARNING vmId=`283b6d8a-1e65-4010-b583-aebb3be50dd0`::Unknown type found, device: '{'device': 'unix', 'alias': 'channel1', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '2'}}' found
May 4 19:38:58 localhost vdsm vm.Vm WARNING vmId=`283b6d8a-1e65-4010-b583-aebb3be50dd0`::Unknown type found, device: '{'device': 'spicevmc', 'alias': 'channel2', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '3'}}' found
May 4 19:38:58 localhost vdsm vm.Vm ERROR vmId=`283b6d8a-1e65-4010-b583-aebb3be50dd0`::Alias not found for device type graphics during migration at destination host

Michal, can you possibly identify a cause?
It's a libvirt issue, basically. Cluster 3.5 should have metadata support; I'm not sure about lower versions (SLA uses it for cpu tune). What is the libvirt issue there?
(In reply to Roy Golan from comment #4)
> its a libvirt issues basically. cluster 3.5 should have metadata support.
> I'm not sure about lower version ( SLA uses it for cpu tune)
>
> what is the libvirt issue there?

typo: s/issue/version/
The issue is indeed libvirt-based, and should be resolved as part of bug 1220474.
Moving this bug to vdsm for now, as it can be tracked there.
While bug 1220474 fixes the issue of the libvirtd log (or system log) being flooded with the message, the issue with VDSM will require adding a filter to its logging code that ignores libvirt errors with code VIR_ERR_NO_DOMAIN_METADATA = 80. The error is reported by libvirt so that users are able to differentiate between internal (fatal) errors while retrieving the requested metadata element and the requested metadata element simply not being defined for the selected VM. The error code is unique to the case of the requested metadata entry not being present, and applications are welcome to ignore it if they are able to cope with that. Since the error code is now part of the libvirt API, it cannot be changed without causing regressions in other applications using the metadata subsystem.
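For illustration, such a filter can be sketched with plain Python logging machinery. This is only a sketch of the approach described above, not vdsm's actual code: the class name, logger name, and message format matching are assumptions based on the log excerpts in this report.

```python
import logging

# Error code from libvirt; the value 80 is quoted in the comment above.
VIR_ERR_NO_DOMAIN_METADATA = 80

class MetadataErrorFilter(logging.Filter):
    """Drop the harmless 'metadata not found' records (hypothetical;
    the real vdsm wrapper in libvirtconnection.py may hook in differently)."""

    def filter(self, record):
        msg = record.getMessage()
        # Keep every record except the specific, ignorable error code 80.
        return ('ecode: %d ' % VIR_ERR_NO_DOMAIN_METADATA) not in msg

# Attach the filter to the logger that emits the noisy records.
logger = logging.getLogger('root')
logger.addFilter(MetadataErrorFilter())
```

A filter attached this way runs before handlers see the record, so the suppressed messages never reach vdsm.log at all.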
Dan, any thoughts on the right way to handle it in vdsm?
I'm all for removing all kinds of logging from libvirtconnection:143. If mskrivan thinks that this is still useful, we can skip logging of this specific error.
I think a better solution was proposed by Martin: use an empty <qos> tag inside the metadata. Then it won't fail.
Removing TestOnly; some change is required. Raising Severity, as this is incredibly annoying and complicates troubleshooting.
An empty QoS element will solve this nicely. Unfortunately, the XML code we have at vdsm does not allow creating a namespaced (custom prefix + URI) XML element at the moment :/ The message is totally harmless with regards to functionality and "only" pollutes the log.
(In reply to Martin Sivák from comment #16)
> Empty QoS element will solve this nicely. Unfortunately the XML code we have
> at vdsm does not allow creating an namespaced (custom prefix + uri) XML
> element at the moment :/
>
> The message is totally harmless with regards to the functionality and "only"
> pollutes the log.

Not really. Logs are rotated frequently, and GSS loses the ability to debug issues.
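For reference, the kind of namespaced empty element Martin describes can be produced with Python's standard library alone; a minimal sketch, assuming a placeholder prefix and namespace URI (not necessarily the ones oVirt actually uses):

```python
import xml.etree.ElementTree as ET

# Hypothetical namespace URI; the real oVirt tune namespace may differ.
OVIRT_TUNE_URI = 'http://example.org/vm/tune/1.0'

# Register the prefix so the serializer emits it instead of an
# auto-generated 'ns0:' prefix.
ET.register_namespace('ovirt-tune', OVIRT_TUNE_URI)

# An empty, namespaced <qos/> element, as suggested in the comments above.
qos = ET.Element('{%s}qos' % OVIRT_TUNE_URI)
xml_text = ET.tostring(qos, encoding='unicode')
print(xml_text)
```

Pre-seeding the domain metadata with such an element would mean libvirt finds it and never raises error 80 in the first place, rather than the error being raised and filtered out later.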
Wouldn't a log filter using the LIBVIRT_LOG_FILTERS env var suppress that, and save the need for a build and so on? Just a libvirtd restart?
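For the record, libvirt log filters can be set either through that environment variable or persistently in libvirtd's config; a sketch of the mechanism (the module names here are illustrative, and note that filters can only raise a module's threshold up to 4 = errors, so an error-level message may still get through):

```ini
# /etc/libvirt/libvirtd.conf — filter format is "<level>:<match>",
# where level 1=debug, 2=info, 3=warning, 4=error.
log_filters="3:qemu 3:libvirt"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"

# One-shot equivalent without editing the config (shell):
#   LIBVIRT_LOG_FILTERS="3:qemu" libvirtd
```

This also only addresses libvirtd's own log; the flood reported here is in vdsm.log, which this setting would not touch.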
> with the message, the issue with VDSM will require adding a filter to it's
> logging code that will ignore libvirt errors with code
> "VIR_ERR_NO_DOMAIN_METADATA = 80".

Sorry, you already wrote that. I think we should close this bug with this solution.
Verified on: Red Hat Enterprise Virtualization Manager Version 3.6.0.3-0.1.el6

Verification steps: look at /var/log/vdsm/vdsm.log and make sure that the logs are not polluted with this:

metadata not found: Requested metadata element is not present
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-0362.html