Bug 1410283

Summary: gluster cli: Exception when brick resides on a btrfs subvolume
Product: [oVirt] vdsm
Reporter: George Joseph <g.devel>
Component: Gluster
Assignee: Gobinda Das <godas>
Status: CLOSED CURRENTRELEASE
QA Contact: SATHEESARAN <sasundar>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 4.18.15.2
CC: bugs, g.devel, godas, lveyde, sabose
Target Milestone: ovirt-4.2.2
Flags: rule-engine: ovirt-4.2+
       rule-engine: planning_ack+
       rule-engine: devel_ack+
       rule-engine: testing_ack+
Target Release: 4.20.18
Hardware: x86_64
OS: Linux
URL: https://gerrit.ovirt.org/#/c/69668/
Whiteboard:
Fixed In Version: vdsm v4.20.18
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-05-10 06:23:24 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Gluster
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1406569
Bug Blocks:

Description George Joseph 2017-01-05 01:01:16 UTC
Description of problem:

When a gluster brick resides on a btrfs subvolume, an exception is thrown when attempting to retrieve the volume status detail.

MainProcess|jsonrpc.Executor/7::ERROR::2017-01-04 17:50:18,828::supervdsmServer::96::SuperVdsm.ServerCallback::(wrapper) Error in volumeStatus
Traceback (most recent call last):
  File "/usr/share/vdsm/supervdsmServer", line 94, in wrapper
    res = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/cli.py", line 353, in volumeStatus
    return _parseVolumeStatusDetail(xmltree)
  File "/usr/share/vdsm/gluster/cli.py", line 217, in _parseVolumeStatusDetail
    'device': value['device'],
KeyError: 'device'

The exception happens because gluster doesn't report device (or blockSize, mntOptions or fsName) for bricks that reside on btrfs subvolumes, but _parseVolumeStatusDetail expects all of them to be present.
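
As an illustration only (this is not the patch on gerrit), here is a minimal sketch of treating those fields as optional, assuming 'value' is the per-<node> dictionary that _parseVolumeStatusDetail builds from the XML and that empty/zero defaults are acceptable to the callers:

    # Hedged sketch, not the merged vdsm fix: fall back to neutral defaults
    # for brick-detail fields that gluster may omit (e.g. for bricks on
    # btrfs subvolumes) instead of indexing them unconditionally.
    def _brickDetailFrom(value):
        return {
            'device': value.get('device', ''),        # absent on btrfs subvolumes
            'fsName': value.get('fsName', ''),
            'mntOptions': value.get('mntOptions', ''),
            'blockSize': value.get('blockSize', '0'),
        }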

Version-Release number of selected component (if applicable):

4.18.15.3, but the issue also occurs in the 'master' vdsm branch.

How reproducible:

Consistently: both when the engine periodically polls the bricks and on demand when a user selects "Advanced Details" from the Volumes tab in the UI.

Steps to Reproduce:
1.  Create a Gluster volume with bricks in btrfs subvolumes on any host in the cluster.
2.  In the web UI, navigate to Volumes and select the new volume.
3.  In the brick list, select a brick and press Advanced Details.

Actual results:

Error message: "Error in fetching the brick details, please try again."
In the supervdsm log on the host, you'll see the above exception.

Expected results:

The details dialog should be populated.

Additional info:

Patch forthcoming

Comment 1 Sahina Bose 2017-01-09 09:39:37 UTC
Ramesh, can you check if this is related to the vdsm issue that you fixed?

Comment 2 Ramesh N 2017-01-09 09:57:58 UTC
(In reply to Sahina Bose from comment #1)
> Ramesh, can you check if this is related to the vdsm issue that you fixed?

We saw a similar issue from a community user earlier: the 'device' field was missing for one arbiter brick in an arbiter volume.


George Joseph: Can you tell us which gluster version you are running? It would also be useful to have the gluster CLI output of 'gluster volume status <vol-name> detail --xml'.

Comment 3 Ramesh N 2017-01-09 10:25:44 UTC
This is a side effect of gluster bz#1406569. Please follow up in bz#1406569 for more details.

Comment 4 George Joseph 2017-01-14 17:59:26 UTC
[root@vmhost1 ~]# gluster --version
glusterfs 3.7.18 built on Dec  8 2016 12:16:01
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

[root@vmhost1 ~]# gluster volume status gvm1 detail --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volStatus>
    <volumes>
      <volume>
        <volName>gvm1</volName>
        <nodeCount>4</nodeCount>
        <node>
          <hostname>vmhostg1</hostname>
          <path>/gfs1/gvm1-brick1/brick</path>
          <peerid>afc90e43-c1d3-4c88-b077-d8597a68abcd</peerid>
          <status>1</status>
          <port>49169</port>
          <ports>
            <tcp>49169</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>2541</pid>
          <sizeTotal>750171193344</sizeTotal>
          <sizeFree>631472906240</sizeFree>
          <blockSize>4096</blockSize>
        </node>
        <node>
          <hostname>vmhostg2</hostname>
          <path>/gfs1/gvm1-brick1/brick</path>
          <peerid>386f553f-4902-4031-9b76-8ea98a63502d</peerid>
          <status>1</status>
          <port>49172</port>
          <ports>
            <tcp>49172</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>3619</pid>
          <sizeTotal>750171193344</sizeTotal>
          <sizeFree>634974404608</sizeFree>
          <blockSize>4096</blockSize>
        </node>
        <node>
          <hostname>vmhostg3</hostname>
          <path>/gfs1/gvm1-brick1/brick</path>
          <peerid>af65f817-a655-403d-b5d1-f61ea1aab5fc</peerid>
          <status>1</status>
          <port>49172</port>
          <ports>
            <tcp>49172</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>2324</pid>
          <sizeTotal>762173194240</sizeTotal>
          <sizeFree>682148352000</sizeFree>
          <blockSize>4096</blockSize>
        </node>
        <node>
          <hostname>vmhostg4</hostname>
          <path>/gfs1/gvm1-brick1/brick</path>
          <peerid>9811a9a7-1b25-470b-a18b-5e5a8178f395</peerid>
          <status>1</status>
          <port>49172</port>
          <ports>
            <tcp>49172</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>3332</pid>
          <sizeTotal>762173194240</sizeTotal>
          <sizeFree>678508535808</sizeFree>
          <blockSize>4096</blockSize>
        </node>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>

Comment 5 George Joseph 2017-01-14 18:06:04 UTC
The referenced issue is specific to arbiter bricks. In my situation, there is no device on any brick.

Actually, I have two volumes; the detail above is from a 2 x 2 volume, and no brick has a device.

Here's the detail from a 4 x (2 + 1) = 12 volume:

[root@vmhost1 ~]# gluster volume status gvm0 detail --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volStatus>
    <volumes>
      <volume>
        <volName>gvm0</volName>
        <nodeCount>12</nodeCount>
        <node>
          <hostname>vmhostg1</hostname>
          <path>/gfs1/gvm0-brick1/brick</path>
          <peerid>afc90e43-c1d3-4c88-b077-d8597a68abcd</peerid>
          <status>1</status>
          <port>49160</port>
          <ports>
            <tcp>49160</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>2478</pid>
          <sizeTotal>750171193344</sizeTotal>
          <sizeFree>631473377280</sizeFree>
          <blockSize>4096</blockSize>
        </node>
        <node>
          <hostname>vmhostg2</hostname>
          <path>/gfs1/gvm0-brick1/brick</path>
          <peerid>386f553f-4902-4031-9b76-8ea98a63502d</peerid>
          <status>1</status>
          <port>49163</port>
          <ports>
            <tcp>49163</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>3599</pid>
          <sizeTotal>750171193344</sizeTotal>
          <sizeFree>634974969856</sizeFree>
          <blockSize>4096</blockSize>
        </node>
        <node>
          <hostname>vmhostg3</hostname>
          <path>/gfs1/gvm0-brick1/brick</path>
          <peerid>af65f817-a655-403d-b5d1-f61ea1aab5fc</peerid>
          <status>1</status>
          <port>49163</port>
          <ports>
            <tcp>49163</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>2290</pid>
          <sizeTotal>762173194240</sizeTotal>
          <sizeFree>682126888960</sizeFree>
          <blockSize>4096</blockSize>
        </node>
        <node>
          <hostname>vmhostg4</hostname>
          <path>/gfs1/gvm0-brick1/brick</path>
          <peerid>9811a9a7-1b25-470b-a18b-5e5a8178f395</peerid>
          <status>1</status>
          <port>49163</port>
          <ports>
            <tcp>49163</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>3282</pid>
          <sizeTotal>762173194240</sizeTotal>
          <sizeFree>678477139968</sizeFree>
          <blockSize>4096</blockSize>
        </node>
        <node>
          <hostname>vmhostg1</hostname>
          <path>/gfs1/gvm0-brick2/brick</path>
          <peerid>afc90e43-c1d3-4c88-b077-d8597a68abcd</peerid>
          <status>1</status>
          <port>49161</port>
          <ports>
            <tcp>49161</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>2536</pid>
          <sizeTotal>750171193344</sizeTotal>
          <sizeFree>631473377280</sizeFree>
          <blockSize>4096</blockSize>
        </node>
        <node>
          <hostname>vmhostg2</hostname>
          <path>/gfs1/gvm0-brick2/brick</path>
          <peerid>386f553f-4902-4031-9b76-8ea98a63502d</peerid>
          <status>1</status>
          <port>49164</port>
          <ports>
            <tcp>49164</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>3606</pid>
          <sizeTotal>750171193344</sizeTotal>
          <sizeFree>634974969856</sizeFree>
          <blockSize>4096</blockSize>
        </node>
        <node>
          <hostname>vmhostg3</hostname>
          <path>/gfs1/gvm0-brick2/brick</path>
          <peerid>af65f817-a655-403d-b5d1-f61ea1aab5fc</peerid>
          <status>1</status>
          <port>49164</port>
          <ports>
            <tcp>49164</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>2297</pid>
          <sizeTotal>762173194240</sizeTotal>
          <sizeFree>682126888960</sizeFree>
          <blockSize>4096</blockSize>
        </node>
        <node>
          <hostname>vmhostg4</hostname>
          <path>/gfs1/gvm0-brick2/brick</path>
          <peerid>9811a9a7-1b25-470b-a18b-5e5a8178f395</peerid>
          <status>1</status>
          <port>49164</port>
          <ports>
            <tcp>49164</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>3296</pid>
          <sizeTotal>762173194240</sizeTotal>
          <sizeFree>678477139968</sizeFree>
          <blockSize>4096</blockSize>
        </node>
        <node>
          <hostname>vmhostg1</hostname>
          <path>/gfs1/gvm0-brick3/brick</path>
          <peerid>afc90e43-c1d3-4c88-b077-d8597a68abcd</peerid>
          <status>1</status>
          <port>49162</port>
          <ports>
            <tcp>49162</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>2527</pid>
          <sizeTotal>750171193344</sizeTotal>
          <sizeFree>631473377280</sizeFree>
          <blockSize>4096</blockSize>
        </node>
        <node>
          <hostname>vmhostg2</hostname>
          <path>/gfs1/gvm0-brick3/brick</path>
          <peerid>386f553f-4902-4031-9b76-8ea98a63502d</peerid>
          <status>1</status>
          <port>49165</port>
          <ports>
            <tcp>49165</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>3612</pid>
          <sizeTotal>750171193344</sizeTotal>
          <sizeFree>634974969856</sizeFree>
          <blockSize>4096</blockSize>
        </node>
        <node>
          <hostname>vmhostg3</hostname>
          <path>/gfs1/gvm0-brick3/brick</path>
          <peerid>af65f817-a655-403d-b5d1-f61ea1aab5fc</peerid>
          <status>1</status>
          <port>49165</port>
          <ports>
            <tcp>49165</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>2304</pid>
          <sizeTotal>762173194240</sizeTotal>
          <sizeFree>682126888960</sizeFree>
          <blockSize>4096</blockSize>
        </node>
        <node>
          <hostname>vmhostg4</hostname>
          <path>/gfs1/gvm0-brick3/brick</path>
          <peerid>9811a9a7-1b25-470b-a18b-5e5a8178f395</peerid>
          <status>1</status>
          <port>49165</port>
          <ports>
            <tcp>49165</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>3308</pid>
          <sizeTotal>762173194240</sizeTotal>
          <sizeFree>678477139968</sizeFree>
          <blockSize>4096</blockSize>
        </node>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>


Either way, vdsm shouldn't barf on a missing element.
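
For diagnosis, a small stand-alone check (illustrative only, not vdsm code) can list which per-brick elements are absent from a saved 'gluster volume status <vol-name> detail --xml' output like the ones above:

    # Illustrative only: report which optional brick-detail elements are
    # missing for each <node> in a saved '--xml' status output.
    import sys
    import xml.etree.ElementTree as ET

    OPTIONAL = ('device', 'fsName', 'mntOptions', 'blockSize')

    tree = ET.parse(sys.argv[1])  # path to the saved XML output
    for node in tree.iter('node'):
        missing = [tag for tag in OPTIONAL if node.find(tag) is None]
        if missing:
            print('%s:%s is missing %s'
                  % (node.findtext('hostname'), node.findtext('path'),
                     ', '.join(missing)))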

Comment 6 Sahina Bose 2018-01-31 07:06:48 UTC
Gobinda, can you follow up to make sure it gets merged? Thanks!

Comment 7 Gobinda Das 2018-02-05 05:26:18 UTC
Sahina, it is merged.

Comment 8 SATHEESARAN 2018-05-10 02:30:19 UTC
Tested with RHV 4.2.3. When using a btrfs-backed gluster brick, 'Advanced Details' for the bricks could be fetched.

Comment 9 Sandro Bonazzola 2018-05-10 06:23:24 UTC
This bugzilla is included in the oVirt 4.2.2 release, published on March 28th 2018.

Since the problem described in this bug report should be resolved in the oVirt 4.2.2 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.