Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1576442

Summary: KeyError: 'sizeTotal' in gluster volume status monitoring
Product: [oVirt] vdsm Reporter: Sahina Bose <sabose>
Component: Gluster Assignee: Sahina Bose <sabose>
Status: CLOSED CURRENTRELEASE QA Contact: SATHEESARAN <sasundar>
Severity: medium Docs Contact:
Priority: high    
Version: 4.30.0 CC: bugs, lveyde
Target Milestone: ovirt-4.2.4 Flags: rule-engine: ovirt-4.2?
sasundar: planning_ack?
sabose: devel_ack+
sasundar: testing_ack+
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: vdsm v4.20.29 Doc Type: Bug Fix
Doc Text:
Cause: Missing elements in "gluster volume status detail" output.
Consequence: vdsm monitoring throws an exception and status updates are not correctly reflected in the engine.
Fix: Handle missing elements gracefully.
Result: Monitoring works for volumes that report status correctly.
Story Points: ---
Clone Of: Environment:
Last Closed: 2018-06-26 08:45:42 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: Gluster RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:

Description Sahina Bose 2018-05-09 13:04:13 UTC
Description of problem:

An error is seen when monitoring 'gluster volume status detail', resulting in capacity and size monitoring not working for some volumes.

This is seen when the gluster output does not contain the size-related fields, as in the example below (a tolerant-parsing sketch follows the XML):
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volStatus>
    <volumes>
      <volume>
        <volName>vmstore</volName>
        <nodeCount>3</nodeCount>
        <node>
          <hostname>10.70.37.28</hostname>
          <path>/gluster_bricks/vmstore/vmstore</path>
          <peerid>61924995-48fb-487b-bc8c-5dba9b041dc1</peerid>
          <status>1</status>
          <port>49154</port>
          <ports>
            <tcp>49154</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>3531</pid>
          <sizeTotal>9661531029504</sizeTotal>
          <sizeFree>9658771755008</sizeFree>
          <device>/dev/mapper/gluster_vg_sdd-gluster_lv_vmstore</device>
          <blockSize>4096</blockSize>
          <mntOptions>rw,seclabel,noatime,nodiratime,attr2,inode64,sunit=512,swidth=2048,noquota</mntOptions>
          <fsName>xfs</fsName>
          <inodeSize>xfs</inodeSize>
          <inodesTotal>943717568</inodesTotal>
          <inodesFree>943715561</inodesFree>
        </node>
        <node>
          <hostname>10.70.37.29</hostname>
          <path>/gluster_bricks/vmstore/vmstore</path>
          <peerid>bd08ff35-c56c-4d6c-aff3-58b9eaaf1f55</peerid>
          <status>1</status>
          <port>49153</port>
          <ports>
            <tcp>49153</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>20266</pid>
        </node>
        <node>
          <hostname>10.70.37.30</hostname>
          <path>/gluster_bricks/vmstore/vmstore</path>
          <peerid>a03dfa62-5766-4431-a0be-e46b81d2e7af</peerid>
          <status>0</status>
          <port>N/A</port>
          <ports>
            <tcp>N/A</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>-1</pid>
        </node>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>
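
For illustration only, here is a minimal sketch of the "handle missing elements gracefully" approach described in the Doc Text, assuming Python and xml.etree; this is not vdsm's actual parser, and the names parse_brick_status and cli_output.xml are invented for this example.

# Hypothetical sketch: parse "gluster volume status detail --xml" output while
# tolerating bricks that do not report size elements. Not the vdsm implementation.
import xml.etree.ElementTree as ET


def parse_brick_status(xml_text):
    """Return one dict per <node>; size fields are None when absent."""
    root = ET.fromstring(xml_text)
    bricks = []
    for node in root.iter('node'):
        brick = {
            'hostname': node.findtext('hostname'),
            'path': node.findtext('path'),
            'status': node.findtext('status'),
        }
        # The reported KeyError came from assuming sizeTotal/sizeFree are always
        # present; findtext() simply returns None for missing elements, so the
        # bricks that do report sizes can still be monitored.
        for field in ('sizeTotal', 'sizeFree', 'device', 'fsName'):
            brick[field] = node.findtext(field)
        bricks.append(brick)
    return bricks


if __name__ == '__main__':
    with open('cli_output.xml') as f:          # e.g. the XML pasted above
        for brick in parse_brick_status(f.read()):
            print(brick['hostname'], brick['sizeTotal'], brick['sizeFree'])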


Version-Release number of selected component (if applicable):


How reproducible:
Sometimes

Steps to Reproduce:
It is not clear under what conditions gluster fails to report these fields.

Comment 1 SATHEESARAN 2018-06-22 08:27:57 UTC
Tested on a hyperconverged setup with:
vdsm-4.20.31-1.el7ev.x86_64
vdsm-gluster-4.20.31-1.el7ev.x86_64

Repeated the command 'gluster volume status vmstore detail' around 100 times.
Size values were returned correctly each time (see the sketch after the sample output below).

Here is the sample output:
<snip>
[root@ ~]# gluster volume status vmstore detail --xml | grep size
          <sizeTotal>4395911086080</sizeTotal>
          <sizeFree>4359812784128</sizeFree>
          <sizeTotal>4395911086080</sizeTotal>
          <sizeFree>4359812784128</sizeFree>
          <sizeTotal>4395911086080</sizeTotal>
          <sizeFree>4359773298688</sizeFree>

</snip>
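
As a rough sketch of this kind of repeated check, assuming the gluster CLI is on PATH and the volume is named vmstore (the loop below is an illustration, not the exact verification procedure used):

# Hypothetical re-run of the check above: execute the status command repeatedly
# and flag any run whose XML output lacks the size elements.
import subprocess
import xml.etree.ElementTree as ET

VOLUME = 'vmstore'  # assumed volume name, as in the comment above

for i in range(100):
    out = subprocess.run(
        ['gluster', 'volume', 'status', VOLUME, 'detail', '--xml'],
        capture_output=True, text=True, check=True,
    ).stdout
    for node in ET.fromstring(out).iter('node'):
        if node.find('sizeTotal') is None or node.find('sizeFree') is None:
            print('run %d: brick on %s is missing size fields'
                  % (i, node.findtext('hostname')))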

Comment 2 Sandro Bonazzola 2018-06-26 08:45:42 UTC
This bugzilla is included in the oVirt 4.2.4 release, published on June 26th 2018.

Since the problem described in this bug report should be
resolved in the oVirt 4.2.4 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.