Bug 1181157 - libvirtError: argument unsupported: QEMU driver does not support <metadata> element
Summary: libvirtError: argument unsupported: QEMU driver does not support <metadata> e...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Peter Krempa
QA Contact: Virtualization Bugs
URL:
Whiteboard: sla
Depends On:
Blocks: rhev35gablocker 1179592 1184929
 
Reported: 2015-01-12 13:47 UTC by Martin Sivák
Modified: 2015-06-24 11:19 UTC
CC List: 24 users

Fixed In Version: libvirt-1.2.8-1.el7
Doc Type: Bug Fix
Doc Text:
Cause: The initial implementation of the APIs used for manipulating/requesting guest metadata didn't implement the actions for custom metadata stored in the <metadata> element of the guest XML.
Consequence: Users were not able to modify the custom metadata element at runtime via the provided APIs, and there was no workaround to modify the custom metadata of a running VM.
Fix: The missing functionality was implemented.
Result: Users are now able to use the full potential of the virDomain(Set|Get)Metadata API.
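
For illustration, a minimal sketch of exercising this API from libvirt-python on a running guest (the domain name and metadata namespace URI below are hypothetical):

import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("rhel7abc")  # hypothetical running guest

URI = "http://example.org/app/1.0"   # hypothetical metadata namespace
MTYPE = libvirt.VIR_DOMAIN_METADATA_ELEMENT

# Store custom XML under <metadata> on the live domain...
dom.setMetadata(MTYPE, "<app><state>ok</state></app>", "app", URI,
                libvirt.VIR_DOMAIN_AFFECT_LIVE)
# ...and read it back.
print(dom.metadata(MTYPE, URI, libvirt.VIR_DOMAIN_AFFECT_LIVE))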
Clone Of: 1179592
Clones: 1184929
Environment:
Last Closed: 2015-03-05 07:48:56 UTC
Target Upstream Version:
Embargoed:


Attachments
The script to reproduce the bug (2.37 KB, text/plain)
2015-02-05 05:19 UTC, zhenfeng wang


Links
System ID: Red Hat Product Errata RHSA-2015:0323
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Low: libvirt security, bug fix, and enhancement update
Last Updated: 2015-03-05 12:10:54 UTC

Description Martin Sivák 2015-01-12 13:47:27 UTC
Can we please get the necessary functionality for metadata XML support backported to RHEL 7.0?

+++ This bug was initially created as a clone of Bug #1179592 +++

Description of problem:
I created two CPU profiles, one with a QoS limitation value of 50 and a second with a limitation value of 25, and attached these profiles to a VM one by one, but I see that the quota and period values in the dumpxml output stay the same.
I filed this bug under mom because I know that we use the mom policy to apply quota and period for the VM.

Version-Release number of selected component (if applicable):
rhevm-3.5.0-0.27.el6ev.noarch
vdsm-4.16.8.1-4.el7ev.x86_64
mom-0.4.1-4.el7ev.noarch

How reproducible:
Always

Steps to Reproduce:
1. Create two CPU QoS entries under the same datacenter, one with limitation value 25 and a second with 50:
<qoss>
<qos type="cpu" href= "/ovirt-engine/api/datacenters/f8f5eaee-8fd0-4b45-87db-62d61b03a916/qoss/2571eeea-25a7-4b09-9c37-d82591733f26" id="2571eeea-25a7-4b09-9c37-d82591733f26">
<name>test_1</name>
<data_center href= "/ovirt-engine/api/datacenters/f8f5eaee-8fd0-4b45-87db-62d61b03a916" id="f8f5eaee-8fd0-4b45-87db-62d61b03a916"/>
<cpu_limit>50</cpu_limit>
</qos>
<qos type="cpu" href= "/ovirt-engine/api/datacenters/f8f5eaee-8fd0-4b45-87db-62d61b03a916/qoss/883d876e-2038-4cd4-8c35-e9b52f2f4380" id="883d876e-2038-4cd4-8c35-e9b52f2f4380">
<name>test_2</name>
<data_center href= "/ovirt-engine/api/datacenters/f8f5eaee-8fd0-4b45-87db-62d61b03a916" id="f8f5eaee-8fd0-4b45-87db-62d61b03a916"/>
<cpu_limit>25</cpu_limit>
</qos>
</qoss>
2. Create two CPU profiles with different QoS in the same cluster:
<cpu_profiles>
<cpu_profile href= "/ovirt-engine/api/cpuprofiles/5be5c0b7-5b91-4ac4-9d53-ef6f987bff05" id="5be5c0b7-5b91-4ac4-9d53-ef6f987bff05">
<name>test_1</name>
<qos href= "/ovirt-engine/api/datacenters/f8f5eaee-8fd0-4b45-87db-62d61b03a916/qoss/2571eeea-25a7-4b09-9c37-d82591733f26" id="2571eeea-25a7-4b09-9c37-d82591733f26"/>
<cluster href= "/ovirt-engine/api/clusters/67866b36-fd68-4106-8758-34cf31b0c3d4" id="67866b36-fd68-4106-8758-34cf31b0c3d4"/>
</cpu_profile>
<cpu_profile href= "/ovirt-engine/api/cpuprofiles/b015da68-b7a5-4a4b-8389-5cbc8ce58f73" id="b015da68-b7a5-4a4b-8389-5cbc8ce58f73">
<name>test_2</name>
<qos href= "/ovirt-engine/api/datacenters/f8f5eaee-8fd0-4b45-87db-62d61b03a916/qoss/883d876e-2038-4cd4-8c35-e9b52f2f4380" id="883d876e-2038-4cd4-8c35-e9b52f2f4380"/>
<cluster href= "/ovirt-engine/api/clusters/67866b36-fd68-4106-8758-34cf31b0c3d4" id="67866b36-fd68-4106-8758-34cf31b0c3d4"/>
</cpu_profile>
</cpu_profiles>
3. Create a VM and run it, first with the first QoS and then with the second.

First run:

<cpu_profile href= "/ovirt-engine/api/cpuprofiles/5be5c0b7-5b91-4ac4-9d53-ef6f987bff05" id="5be5c0b7-5b91-4ac4-9d53-ef6f987bff05"/>

dumpxml
<vcpu placement='static' current='4'>32</vcpu>
  <cputune>
    <shares>1020</shares>
    <period>12500</period>
    <quota>25000</quota>
  </cputune>

Second run:
<cpu_profile href= "/ovirt-engine/api/cpuprofiles/b015da68-b7a5-4a4b-8389-5cbc8ce58f73" id="b015da68-b7a5-4a4b-8389-5cbc8ce58f73"/>

dumpxml
<vcpu placement='static' current='4'>32</vcpu>
  <cputune>
    <shares>1020</shares>
    <period>12500</period>
    <quota>25000</quota>
  </cputune>

Actual results:
<quota> and <period> under <cputune> have the same values in dumpxml for different limitation values

Expected results:
<quota> and <period> have different values for different limitation values

Additional info:
OK, I will start with this: I do not really understand why we use this kind of formula:
period = anchor / #NumOfCpuInHost
quota = (anchor * (#userSelection / 100)) / #numOfVcpusInVm
Why do we need this anchor, and why do we change the period from its default (1000000)?
I played a little with the period and quota values and virsh create, and the limitation works pretty well with this formula:
period = default_value
quota = period * (pcpu / vcpu) * (limitation / 100)
I checked it on a VM with 4 CPUs on a host with 8 CPUs,
with limitations of 10, 25 and 50.
For some reason the same proportion with a small period is either imprecise or does not work at all (seems to me like a bug in cgroups).

--- Additional comment from Martin Sivák on 2015-01-07 10:46:06 EST ---

RHEL 7 uses libvirt-1.1.1, and the metadata XML feature that is needed for this to work seems to be missing from that version. I was told that it was originally included in libvirt-1.1.3, which did not make it into RHEL 7.

So currently the quota is always treated as 100% on RHEL 7 and the computed numbers (quota 25000, period 12500) cause no cpu usage throttling at all.

RHEL 6.6 should have the necessary libvirt feature backported and should therefore work properly.

Artyom: can you please retest with RHEL 6.6 hosts?

--- Additional comment from Martin Sivák on 2015-01-07 10:48:45 EST ---

I should add that I saw the proper values bubble through the VDSM APIs, so it is really only an issue with the libvirt call:

Thread-4352::DEBUG::2015-01-07 16:47:01,450::__init__::469::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'VM.updateVmPolicy' in bridge with {u'params': {u'vmId': u'4d7aa507-1b32-4618-a5a2-884500dbbbc1', u'vcpuLimit': u'2'}, u'vmID': u'4d7aa507-1b32-4618-a5a2-884500dbbbc1'}

Thread-4352::DEBUG::2015-01-07 16:47:01,454::libvirtconnection::143::root::(wrapper) Unknown libvirterror: ecode: 74 edom: 10 level: 2 message: argument unsupported: QEMU driver does not support <metadata> element

Thread-4352::ERROR::2015-01-07 16:47:01,454::vm::3821::vm.Vm::(_getVmPolicy) vmId=`4d7aa507-1b32-4618-a5a2-884500dbbbc1`::getVmPolicy failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 3818, in _getVmPolicy
    METADATA_VM_TUNE_URI, 0)
  File "/usr/share/vdsm/virt/vm.py", line 689, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 111, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 942, in metadata
    if ret is None: raise libvirtError ('virDomainGetMetadata() failed', dom=self)
libvirtError: argument unsupported: QEMU driver does not support <metadata> element
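
For reference, a minimal sketch of the failing call pattern from the traceback above (the connection URI and domain name are hypothetical; METADATA_VM_TUNE_URI is assumed to be the ovirt tune namespace shown later in this bug):

import libvirt

METADATA_VM_TUNE_URI = "http://ovirt.org/vm/tune/1.0"  # assumed value

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("some-vm")  # hypothetical domain name

try:
    # The call that fails on libvirt 1.1.1 (RHEL 7.0):
    print(dom.metadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT,
                       METADATA_VM_TUNE_URI, 0))
except libvirt.libvirtError as e:
    # On unfixed hosts: "argument unsupported: QEMU driver does not
    # support <metadata> element"
    print(e)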

--- Additional comment from Doron Fediuck on 2015-01-08 08:39:42 EST ---

No reason to block RC on a wrong libvirt version.

--- Additional comment from Michal Skrivanek on 2015-01-08 10:06:45 EST ---

What's the libvirt dependency?
Do we expect a 7.0.z update? If so, when?

--- Additional comment from Artyom on 2015-01-11 05:38:23 EST ---

On RHEL 6.6 it also does not work:
Thread-131338::DEBUG::2015-01-11 12:35:18,673::libvirtconnection::143::root::(wrapper) Unknown libvirterror: ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata element is not present
Thread-131338::ERROR::2015-01-11 12:35:18,675::__init__::493::jsonrpc.JsonRpcServer::(_serveRequest) Internal server error
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/yajsonrpc/__init__.py", line 488, in _serveRequest
    res = method(**params)
  File "/usr/share/vdsm/rpc/Bridge.py", line 284, in _dynamicMethod
    return self._fixupRet(className, methodName, ret)
  File "/usr/share/vdsm/rpc/Bridge.py", line 234, in _fixupRet
    self._typeFixup('return', retType, result)
  File "/usr/share/vdsm/rpc/Bridge.py", line 214, in _typeFixup
    if k in item:
TypeError: argument of type 'NoneType' is not iterable

So it also does not receive the limit from the engine.

vdsm-4.16.8.1-5.el6ev.x86_64
libvirt-0.10.2-46.el6_6.2.x86_64

--- Additional comment from Artyom on 2015-01-11 06:03:05 EST ---

(Applies only to RHEL 6.6.)
After one minute I see that the parameter is updated to the correct value, so the error above is not related to the QoS.
I also see that the metadata is passed correctly:
<metadata>
    <ovirt:qos xmlns:ovirt="http://ovirt.org/vm/tune/1.0">
        <ovirt:vcpuLimit>10</ovirt:vcpuLimit>
    </ovirt:qos>
</metadata>
And period and quota have correct values:

<period>12500</period>
<quota>2500</quota>

Tested with 5, 10, 25 and 50 percent.
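
For illustration, a minimal sketch of reading the vcpuLimit back out of the metadata XML above (purely illustrative, not VDSM's actual parsing code):

import xml.etree.ElementTree as ET

QOS_XML = """<ovirt:qos xmlns:ovirt="http://ovirt.org/vm/tune/1.0">
    <ovirt:vcpuLimit>10</ovirt:vcpuLimit>
</ovirt:qos>"""

# The element is namespaced, so match the fully qualified tag.
limit = int(ET.fromstring(QOS_XML)
              .find("{http://ovirt.org/vm/tune/1.0}vcpuLimit").text)
print(limit)  # 10

Note also that quota 2500 and period 12500 are consistent with the engine formula from the initial report, assuming an anchor of 100000: 100000 / 8 = 12500 and 100000 * 10/100 / 4 = 2500.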

--- Additional comment from Artyom on 2015-01-11 06:19:45 EST ---

I see that we already have a bug for the error above:
https://bugzilla.redhat.com/show_bug.cgi?id=1142851

--- Additional comment from Doron Fediuck on 2015-01-12 03:29:43 EST ---

Comment 2 Jiri Denemark 2015-01-12 14:04:09 UTC
Already in 7.1 thanks to rebase.

Comment 9 Jiri Denemark 2015-01-22 14:01:02 UTC
BTW, this is the equivalent of 6.6 bug 1115039, which can be consulted for verification steps.

Comment 10 zhenfeng wang 2015-01-23 12:27:57 UTC
I can reproduce this bug with libvirt-1.1.1-29.el7

[root@zhwangrhel71 ~]# python testmetadata.py -n rhel7abc -o set -t 2 -u "http://herp.derp/" -k test -m "<foo><bar fooish='blurb'>baz</bar></foo>"
libvirt: QEMU Driver error : argument unsupported: QEMU driver does not support <metadata> element
Traceback (most recent call last):
  File "testmetadata.py", line 85, in <module>
    main()
  File "testmetadata.py", line 78, in main
    do_set(dom, mtype, metadata, key, uri, flags)
  File "testmetadata.py", line 9, in do_set
    if dom.setMetadata(mtype, metadata, key, uri, flags):
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1597, in setMetadata
    if ret == -1: raise libvirtError ('virDomainSetMetadata() failed', dom=self)
libvirt.libvirtError: argument unsupported: QEMU driver does not support <metadata> element
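
The attached script itself is not inlined in this bug; below is a minimal hypothetical sketch of such a reproducer using only documented libvirt-python calls (the option handling and names do not necessarily match the real attachment):

#!/usr/bin/env python
# Hypothetical reproducer sketch along the lines of the attached
# testmetadata.py; not the actual attachment.
import sys
import libvirt

def main():
    name, op = sys.argv[1], sys.argv[2]          # e.g. "rhel7abc set"
    uri = "http://herp.derp/"                    # namespace URI from the steps above
    mtype = libvirt.VIR_DOMAIN_METADATA_ELEMENT  # the "-t 2" in the commands above
    dom = libvirt.open("qemu:///system").lookupByName(name)

    if op == "set":
        dom.setMetadata(mtype, "<foo><bar fooish='blurb'>baz</bar></foo>",
                        "test", uri, 0)
        print("Okay")
    elif op == "get":
        print(dom.metadata(mtype, uri, 0))
    elif op == "remove":
        dom.setMetadata(mtype, None, None, uri, 0)  # None metadata removes the element
        print("Okay")

if __name__ == "__main__":
    main()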

Verify this bug with libvirt-1.2.8-15.el7.x86_64
1.# python testmetadata.py -n rhel7abc -o set -t 2 -u "http://herp.derp/" -k test -m "<foo><bar fooish='blurb'>baz</bar></foo>"
Okay
2.# python testmetadata.py -n rhel7abc -o get -t 2 -u "http://herp.derp/"
<foo>
  <bar fooish="blurb">baz</bar>
</foo>
3.# virsh dumpxml rhel7abc

  <metadata>
    <test:foo xmlns:test="http://herp.derp/">
      <test:bar fooish="blurb">baz</test:bar>
    </test:foo>
  </metadata>


4.# python testmetadata.py -n rhel7abc -o remove -t 2 -u "http://herp.derp/"
Okay
5.# virsh dumpxml rhel7abc

  <metadata/>

Edit the guest's metadata with the metadata command
1.virsh -c qemu://rhel7abc/system

2.virsh # metadata rhel7abc --uri http://herp.derp/ --key herp --set  "<derp xmlns:foobar='http://foo.bar/'></derp>"
Metadata modified

3.virsh # metadata rhel7abc --uri http://herp.derp/ 
<derp xmlns:foobar="http://foo.bar/"/>

4.virsh # dumpxml rhel7abc

  <metadata>
    <herp:derp xmlns:foobar="http://foo.bar/" xmlns:herp="http://herp.derp/"/>
  </metadata>

5.virsh # metadata rhel7abc --uri http://herp.derp/ --remove
Metadata removed

6.virsh # metadata rhel7abc --uri http://herp.derp/ 
error: metadata not found: Requested metadata element is not present

7.virsh # dumpxml rhel7abc

  <metadata/>

According to the steps above, mark this bug verified.

Comment 11 zhenfeng wang 2015-02-05 05:19:48 UTC
Created attachment 988387 [details]
The script to reproduce the bug

Comment 13 errata-xmlrpc 2015-03-05 07:48:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-0323.html

