Bug 1814565
| Field | Value |
|---|---|
| Summary | Report disk.usage for VMs with RHEL 8 guests |
| Product | [oVirt] ovirt-engine |
| Component | BLL.Virt |
| Version | 4.4.0 |
| Hardware | x86_64 |
| OS | Linux |
| Status | CLOSED CURRENTRELEASE |
| Severity | medium |
| Priority | high |
| Keywords | TestOnly |
| Target Milestone | ovirt-4.4.3 |
| Target Release | --- |
| Fixed In Version | ovirt-engine-4.4.3.2 |
| Doc Type | No Doc Update |
| Reporter | Polina <pagranat> |
| Assignee | Tomáš Golembiovský <tgolembi> |
| QA Contact | Polina <pagranat> |
| CC | ahadas, bugs, mavital, mtessun, oliel, tgolembi |
| Flags | pm-rhel: ovirt-4.4+, mtessun: planning_ack+, ahadas: devel_ack+, mavital: testing_ack+ |
| oVirt Team | Virt |
| Type | Bug |
| Bug Depends On | 1823729, 1877675 |
| Last Closed | 2020-11-11 06:42:46 UTC |
Description
Polina
2020-03-18 09:37:04 UTC
Happens in both 4.3 and 4.4. Tried with guest template 7.7, with the same result. Sometimes after the VM start the value looks like this:

```xml
<statistic>
  <name>disks.usage</name>
  <description>Disk usage, in bytes, per filesystem as JSON (agent)</description>
  <kind>gauge</kind>
  <type>string</type>
  <unit>none</unit>
  <values>
    <value>
      <detail>[]</detail>
    </value>
  </values>
  <vm href="/ovirt-engine/api/vms/08268d86-ad8a-4220-8283-d0fe3fa66ea6" id="08268d86-ad8a-4220-8283-d0fe3fa66ea6"/>
</statistic>
```

Guest agent on EL7? Is this just qemu-ga or ovirt-guest-agent? Could you run `vdsm-client Host getAllVmStats` and report the result?

Created attachment 1671486 [details]
getAllVmStats
Only qemu-guest-agent:

```
[root@dhcp163-76 ~]# rpm -qa | grep guest
qemu-guest-agent-2.12.0-88.module+el8.1.0+5013+4f99814c.1.x86_64
[root@dhcp163-76 ~]# rpm -qa | grep ovirt
[root@dhcp163-76 ~]#
```

getAllVmStats output attached.
I have opened a bug on platform for the inclusion of the feature in qemu-ga. It is only opened for RHEL 8; if we want that also fixed in RHEL 7 we need another bug.

Hi, I've re-tested it in the latest 4.4.3 with the same result; nothing changed in this behavior.

```
vdsm-4.40.29-1.el8ev.x86_64
ovirt-engine-4.4.3.2-0.19.el8ev.noarch
libvirt-6.6.0-4.module+el8.3.0+7883+3d717aa8.x86_64
```

Hi Polina, you need to update qemu-ga in your guest. Based on bug 1823729 it is fixed in qemu-guest-agent version 4.2.0-4.module+el8.2.0+5220+e82621dc.

(In reply to Tomáš Golembiovský from comment #8)
> Hi Polina, you need to update qemu-ga in your guest. Based on bug 1823729 it
> is fixed in qemu-guest-agent version 4.2.0-4.module+el8.2.0+5220+e82621dc.

+1

Polina, the RHEL 8 templates I see in some of your environments are installed with RHEL 8.2; note that we need to test this one with a RHEL 8.3 guest.

Created attachment 1720971 [details]
statistics responses xml for started/stopped vm
Tested on qemu-guest-agent-4.2.0-33.module+el8.3.0+7705+f09d73e4.x86_64.

The disks.usage for a running VM looks like this (the full XML statistics responses are attached):

```xml
<statistic href="/ovirt-engine/api/vms/97f5d16d-d7c9-4742-81e3-ae0c81cbb52f/statistics/c5dd0086-d5f0-3abf-9628-4674abb3f5ac" id="c5dd0086-d5f0-3abf-9628-4674abb3f5ac">
  <name>disks.usage</name>
  <description>Disk usage, in bytes, per filesystem as JSON (agent)</description>
  <kind>gauge</kind>
  <type>string</type>
  <unit>none</unit>
  <values>
    <value>
      <detail>[{"path":"/","total":"10618929152","used":"2271281152","fs":"xfs"},{"path":"/boot/efi","total":"104634368","used":"7168000","fs":"vfat"}]</detail>
    </value>
  </values>
  <vm href="/ovirt-engine/api/vms/97f5d16d-d7c9-4742-81e3-ae0c81cbb52f" id="97f5d16d-d7c9-4742-81e3-ae0c81cbb52f"/>
</statistic>
```

The disks.usage for a VM that has not been started looks the same as described in https://bugzilla.redhat.com/show_bug.cgi?id=1814565#c0:

```xml
<statistic href="/ovirt-engine/api/vms/6fae30e1-4d24-4b92-892a-3547237dd8f2/statistics/c5dd0086-d5f0-3abf-9628-4674abb3f5ac" id="c5dd0086-d5f0-3abf-9628-4674abb3f5ac">
  <name>disks.usage</name>
  <description>Disk usage, in bytes, per filesystem as JSON (agent)</description>
  <kind>gauge</kind>
  <type>string</type>
  <unit>none</unit>
  <values>
    <value>
      <detail></detail>
    </value>
  </values>
  <vm href="/ovirt-engine/api/vms/6fae30e1-4d24-4b92-892a-3547237dd8f2" id="6fae30e1-4d24-4b92-892a-3547237dd8f2"/>
</statistic>
```

I don't know what the requirements for the format are; my expectation is based on the other statistics in the response. For a VM that is not running I would expect something like:

```xml
<statistic href="/ovirt-engine/api/vms/6fae30e1-4d24-4b92-892a-3547237dd8f2/statistics/c5dd0086-d5f0-3abf-9628-4674abb3f5ac" id="c5dd0086-d5f0-3abf-9628-4674abb3f5ac">
  <name>disks.usage</name>
  <description>Disk usage, in bytes, per filesystem as JSON (agent)</description>
  <kind>gauge</kind>
  <type>string</type>
  <unit>bytes</unit>
  <values/>
  <vm href="/ovirt-engine/api/vms/6fae30e1-4d24-4b92-892a-3547237dd8f2" id="6fae30e1-4d24-4b92-892a-3547237dd8f2"/>
</statistic>
```

but I'm not sure. Please look at the XML statistics responses and confirm that we want to verify on the basis of them.

(In reply to Polina from comment #11)
> the disks.usage for running VM looks like that (full xml statistics
> responces are attached): [...]

So this seems correct.

> the disks.usage for the not started VM looks like the same as described in
> https://bugzilla.redhat.com/show_bug.cgi?id=1814565#c0. [...]
> I don't know what are requirements for the format. My expectation is based
> on other statistics in the response. For not running VM I would expect like
> [...] but I'm not sure. please look at the xml statistics responses and
> confirm that we want to verify on the base of it

From what I gathered by looking at the stats, it looks like `<values/>` is used in cases where there is a list of values but it does not contain anything at the moment (an empty list), whereas in the case of disks.usage we set it to the empty string "", so I would say the XML is correct in this regard. But it still does not feel right, as an empty string is not strictly a valid JSON value. I am also not sure whether using unit "none" is OK, or whether it should have been "bytes". I haven't found any documentation describing the types.
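As an aside for anyone consuming this statistic: the `<detail>` payload is a JSON array of per-filesystem entries whose byte counts are encoded as strings. A minimal sketch (not oVirt SDK code; the sample XML is abbreviated from the running-VM response above) of extracting it with Python's standard library:

```python
import json
import xml.etree.ElementTree as ET

# Abbreviated <statistic> element from the running-VM response in comment #11.
STATISTIC_XML = """
<statistic id="c5dd0086-d5f0-3abf-9628-4674abb3f5ac">
  <name>disks.usage</name>
  <type>string</type>
  <values>
    <value>
      <detail>[{"path":"/","total":"10618929152","used":"2271281152","fs":"xfs"},
               {"path":"/boot/efi","total":"104634368","used":"7168000","fs":"vfat"}]</detail>
    </value>
  </values>
</statistic>
"""

def parse_disks_usage(statistic_xml: str):
    """Return a list of per-filesystem dicts with integer byte counts."""
    root = ET.fromstring(statistic_xml)
    detail = root.findtext("./values/value/detail")
    entries = json.loads(detail)
    # total/used are reported as JSON strings; convert them to integers.
    return [
        {"path": e["path"], "fs": e["fs"],
         "total": int(e["total"]), "used": int(e["used"])}
        for e in entries
    ]

usage = parse_disks_usage(STATISTIC_XML)
print(usage[0])  # entry for the root filesystem
```

The function name and the string-to-int conversion are illustrative choices; the field names (`path`, `total`, `used`, `fs`) come from the agent output shown in this report.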
I discussed the XML format with Ori (our REST API maintainer):

- he agrees that we should change how we represent "no value", and that it should behave as Polina mentioned (i.e. an empty `<values/>` element);
- we should keep the unit as "none" -- it represents the content format, not the encoding, and even though the disk sizes are in fact in bytes it would be far-reaching to extend the format to the whole string.

Arik, in light of the above (https://bugzilla.redhat.com/show_bug.cgi?id=1814565#c13), could you please let me know whether I should move the bug to re-assign?

(In reply to Tomáš Golembiovský from comment #13)
> - he agrees that we should change how we represent "no value" and that it
> should behave as Polina mentioned (i.e. empty <values /> element);

Yes, I also agree that it's the right way to represent "no value" in this case. I'm a bit concerned about changing it now, though, as this might be considered a change that breaks backward compatibility (for clients that are not prepared for an empty <values/> element for the disk statistics).

> - we should keep the units to "none" -- it represents the content format not
> the encoding, and even though the disk sizes are in fact in bytes it would
> be far-reaching to extend the format to the whole string

+1

Verifying this bug on the basis of https://bugzilla.redhat.com/show_bug.cgi?id=1814565#c11 for the running-VM part. For the empty element, the new bug https://bugzilla.redhat.com/show_bug.cgi?id=1892291 was opened.

This bugzilla is included in the oVirt 4.4.3 release, published on November 10th 2020. Since the problem described in this bug report should be resolved in oVirt 4.4.3, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.
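Editorial note: the backward-compatibility concern raised above suggests clients should tolerate both "no value" shapes. A hypothetical defensive parser (an illustration only, not part of any oVirt client) could treat an empty `<values/>`, a missing `<value>`, and an empty-string `<detail>` uniformly as "no data":

```python
import json
import xml.etree.ElementTree as ET

def disks_usage_or_empty(statistic_xml: str):
    """Parse a disks.usage statistic, returning [] for any 'no value' shape.

    Handles: an empty <values/> element, a missing <value>/<detail>,
    and the empty-string <detail></detail> seen in comment #0.
    """
    root = ET.fromstring(statistic_xml)
    detail = root.findtext("./values/value/detail")
    # findtext returns None when <detail> is absent and "" when it is empty.
    if not detail or not detail.strip():
        return []
    return json.loads(detail)

# Old representation: empty string inside <detail> (comment #0).
old_shape = "<statistic><values><value><detail></detail></value></values></statistic>"
# Proposed representation: empty <values/> element (comment #13).
new_shape = "<statistic><values/></statistic>"

print(disks_usage_or_empty(old_shape), disks_usage_or_empty(new_shape))
```

The helper name and the decision to normalize all three cases to an empty list are assumptions made for this sketch; the two XML shapes themselves are taken from the discussion in this report.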