Bug 1576442 - KeyError: 'sizeTotal' in gluster volume status monitoring
Summary: KeyError: 'sizeTotal' in gluster volume status monitoring
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: vdsm
Classification: oVirt
Component: Gluster
Version: 4.30.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ovirt-4.2.4
Target Release: ---
Assignee: Sahina Bose
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-05-09 13:04 UTC by Sahina Bose
Modified: 2018-06-26 08:45 UTC
CC List: 2 users

Fixed In Version: vdsm v4.20.29
Doc Type: Bug Fix
Doc Text:
Cause: Missing elements in "gluster volume status detail" output.
Consequence: vdsm monitoring throws an exception, and status updates are not reflected correctly in the engine.
Fix: Handle missing elements gracefully.
Result: Monitoring works for volumes that report status correctly.
Clone Of:
Environment:
Last Closed: 2018-06-26 08:45:42 UTC
oVirt Team: Gluster
rule-engine: ovirt-4.2?
sasundar: planning_ack?
sabose: devel_ack+
sasundar: testing_ack+


Links
oVirt gerrit 91093 (master, MERGED): gluster: Handle missing elements in volume status output. Last updated: 2018-05-13 18:09:11 UTC
oVirt gerrit 91218 (ovirt-4.2, MERGED): gluster: Handle missing elements in volume status output. Last updated: 2018-05-29 14:56:04 UTC

Description Sahina Bose 2018-05-09 13:04:13 UTC
Description of problem:

An error is seen when monitoring 'gluster volume status detail' output, resulting in capacity and size monitoring not working for some volumes.

This is seen when the gluster output does not contain the size-related fields:
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volStatus>
    <volumes>
      <volume>
        <volName>vmstore</volName>
        <nodeCount>3</nodeCount>
        <node>
          <hostname>10.70.37.28</hostname>
          <path>/gluster_bricks/vmstore/vmstore</path>
          <peerid>61924995-48fb-487b-bc8c-5dba9b041dc1</peerid>
          <status>1</status>
          <port>49154</port>
          <ports>
            <tcp>49154</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>3531</pid>
          <sizeTotal>9661531029504</sizeTotal>
          <sizeFree>9658771755008</sizeFree>
          <device>/dev/mapper/gluster_vg_sdd-gluster_lv_vmstore</device>
          <blockSize>4096</blockSize>
          <mntOptions>rw,seclabel,noatime,nodiratime,attr2,inode64,sunit=512,swidth=2048,noquota</mntOptions>
          <fsName>xfs</fsName>
          <inodeSize>xfs</inodeSize>
          <inodesTotal>943717568</inodesTotal>
          <inodesFree>943715561</inodesFree>
        </node>
        <node>
          <hostname>10.70.37.29</hostname>
          <path>/gluster_bricks/vmstore/vmstore</path>
          <peerid>bd08ff35-c56c-4d6c-aff3-58b9eaaf1f55</peerid>
          <status>1</status>
          <port>49153</port>
          <ports>
            <tcp>49153</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>20266</pid>
        </node>
        <node>
          <hostname>10.70.37.30</hostname>
          <path>/gluster_bricks/vmstore/vmstore</path>
          <peerid>a03dfa62-5766-4431-a0be-e46b81d2e7af</peerid>
          <status>0</status>
          <port>N/A</port>
          <ports>
            <tcp>N/A</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>-1</pid>
        </node>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>
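
The second and third <node> entries above omit sizeTotal, sizeFree, and the other size-related elements, so code that reads those fields with direct key access fails with KeyError. As a minimal Python sketch of the graceful handling described in the fix (illustrative only, not vdsm's actual parser; parse_brick_status is a hypothetical helper using the element names from the XML above):

<snip>
import xml.etree.ElementTree as ET

def parse_brick_status(cli_output_xml):
    """Parse 'gluster volume status detail' XML output, tolerating
    missing size elements instead of failing the whole volume."""
    bricks = []
    for node in ET.fromstring(cli_output_xml).iter('node'):
        brick = {
            'hostname': node.findtext('hostname'),
            'path': node.findtext('path'),
            'status': node.findtext('status'),
        }
        # findtext() returns None when the element is absent, so
        # bricks without sizeTotal/sizeFree are still reported,
        # just without size statistics.
        size_total = node.findtext('sizeTotal')
        size_free = node.findtext('sizeFree')
        if size_total is not None and size_free is not None:
            brick['sizeTotal'] = int(size_total)
            brick['sizeFree'] = int(size_free)
        bricks.append(brick)
    return bricks
</snip>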


Version-Release number of selected component (if applicable):


How reproducible:
Sometimes

Steps to Reproduce:
Not sure under what conditions gluster fails to report these.

Comment 1 SATHEESARAN 2018-06-22 08:27:57 UTC
Tested on a hyperconverged setup with:
vdsm-4.20.31-1.el7ev.x86_64
vdsm-gluster-4.20.31-1.el7ev.x86_64

Repeated the command 'gluster volume status vmstore detail' around 100 times;
size values were returned correctly each time.

Here is the sample output:
<snip>
[root@ ~]# gluster volume status vmstore detail --xml | grep size
          <sizeTotal>4395911086080</sizeTotal>
          <sizeFree>4359812784128</sizeFree>
          <sizeTotal>4395911086080</sizeTotal>
          <sizeFree>4359812784128</sizeFree>
          <sizeTotal>4395911086080</sizeTotal>
          <sizeFree>4359773298688</sizeFree>

</snip>
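
A hypothetical Python loop to automate the same check (the volume name 'vmstore' and the 100 iterations mirror the manual test above; this is not part of vdsm):

<snip>
import subprocess
import xml.etree.ElementTree as ET

# Hypothetical automation of the manual check described above.
for run in range(100):
    out = subprocess.check_output(
        ['gluster', 'volume', 'status', 'vmstore', 'detail', '--xml'])
    for node in ET.fromstring(out).iter('node'):
        # Fail loudly if any brick omits its size elements.
        assert node.find('sizeTotal') is not None, 'run %d: missing sizeTotal' % run
        assert node.find('sizeFree') is not None, 'run %d: missing sizeFree' % run
print('size fields present on every run')
</snip>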

Comment 2 Sandro Bonazzola 2018-06-26 08:45:42 UTC
This bug is included in the oVirt 4.2.4 release, published on June 26th 2018.

Since the problem described in this bug report should be resolved in the
oVirt 4.2.4 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

