Bug 1410283
| Summary: | gluster cli: Exception when brick resides on a btrfs subvolume | | |
|---|---|---|---|
| Product: | [oVirt] vdsm | Reporter: | George Joseph <g.devel> |
| Component: | Gluster | Assignee: | Gobinda Das <godas> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | SATHEESARAN <sasundar> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.18.15.2 | CC: | bugs, g.devel, godas, lveyde, sabose |
| Target Milestone: | ovirt-4.2.2 | Flags: | rule-engine: ovirt-4.2+, rule-engine: planning_ack+, rule-engine: devel_ack+, rule-engine: testing_ack+ |
| Target Release: | 4.20.18 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| URL: | https://gerrit.ovirt.org/#/c/69668/ | | |
| Whiteboard: | | | |
| Fixed In Version: | vdsm v4.20.18 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-05-10 06:23:24 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Gluster | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1406569 | | |
| Bug Blocks: | | | |
Description
George Joseph 2017-01-05 01:01:16 UTC
Ramesh, can you check if this is related to the vdsm issue that you fixed?

(In reply to Sahina Bose from comment #1)
> Ramesh, can you check if this is related to the vdsm issue that you fixed?

We saw a similar issue from a community user earlier: the 'device' field was missing for one arbiter brick in the arbiter volume. George Joseph: can you tell us which gluster version you are running? It would also be useful to get the gluster CLI output of 'gluster volume status <vol-name> detail --xml'.

This is a side effect of gluster bz#1406569. Please follow up in bz#1406569 for more details.

[root@vmhost1 ~]# gluster --version
glusterfs 3.7.18 built on Dec 8 2016 12:16:01
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

[root@vmhost1 ~]# gluster volume status gvm1 detail --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volStatus>
    <volumes>
      <volume>
        <volName>gvm1</volName>
        <nodeCount>4</nodeCount>
        <node> <hostname>vmhostg1</hostname> <path>/gfs1/gvm1-brick1/brick</path> <peerid>afc90e43-c1d3-4c88-b077-d8597a68abcd</peerid> <status>1</status> <port>49169</port> <ports> <tcp>49169</tcp> <rdma>N/A</rdma> </ports> <pid>2541</pid> <sizeTotal>750171193344</sizeTotal> <sizeFree>631472906240</sizeFree> <blockSize>4096</blockSize> </node>
        <node> <hostname>vmhostg2</hostname> <path>/gfs1/gvm1-brick1/brick</path> <peerid>386f553f-4902-4031-9b76-8ea98a63502d</peerid> <status>1</status> <port>49172</port> <ports> <tcp>49172</tcp> <rdma>N/A</rdma> </ports> <pid>3619</pid> <sizeTotal>750171193344</sizeTotal> <sizeFree>634974404608</sizeFree> <blockSize>4096</blockSize> </node>
        <node> <hostname>vmhostg3</hostname> <path>/gfs1/gvm1-brick1/brick</path> <peerid>af65f817-a655-403d-b5d1-f61ea1aab5fc</peerid> <status>1</status> <port>49172</port> <ports> <tcp>49172</tcp> <rdma>N/A</rdma> </ports> <pid>2324</pid> <sizeTotal>762173194240</sizeTotal> <sizeFree>682148352000</sizeFree> <blockSize>4096</blockSize> </node>
        <node> <hostname>vmhostg4</hostname> <path>/gfs1/gvm1-brick1/brick</path> <peerid>9811a9a7-1b25-470b-a18b-5e5a8178f395</peerid> <status>1</status> <port>49172</port> <ports> <tcp>49172</tcp> <rdma>N/A</rdma> </ports> <pid>3332</pid> <sizeTotal>762173194240</sizeTotal> <sizeFree>678508535808</sizeFree> <blockSize>4096</blockSize> </node>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>

The referenced issue is specific to arbiter bricks. In my situation, there is no device on any brick. Actually I have two volumes; the detail above is from a 2x2 volume. No device on any brick.
Here's the detail from a 4 x (2 + 1) = 12 volume:

[root@vmhost1 ~]# gluster volume status gvm0 detail --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volStatus>
    <volumes>
      <volume>
        <volName>gvm0</volName>
        <nodeCount>12</nodeCount>
        <node> <hostname>vmhostg1</hostname> <path>/gfs1/gvm0-brick1/brick</path> <peerid>afc90e43-c1d3-4c88-b077-d8597a68abcd</peerid> <status>1</status> <port>49160</port> <ports> <tcp>49160</tcp> <rdma>N/A</rdma> </ports> <pid>2478</pid> <sizeTotal>750171193344</sizeTotal> <sizeFree>631473377280</sizeFree> <blockSize>4096</blockSize> </node>
        <node> <hostname>vmhostg2</hostname> <path>/gfs1/gvm0-brick1/brick</path> <peerid>386f553f-4902-4031-9b76-8ea98a63502d</peerid> <status>1</status> <port>49163</port> <ports> <tcp>49163</tcp> <rdma>N/A</rdma> </ports> <pid>3599</pid> <sizeTotal>750171193344</sizeTotal> <sizeFree>634974969856</sizeFree> <blockSize>4096</blockSize> </node>
        <node> <hostname>vmhostg3</hostname> <path>/gfs1/gvm0-brick1/brick</path> <peerid>af65f817-a655-403d-b5d1-f61ea1aab5fc</peerid> <status>1</status> <port>49163</port> <ports> <tcp>49163</tcp> <rdma>N/A</rdma> </ports> <pid>2290</pid> <sizeTotal>762173194240</sizeTotal> <sizeFree>682126888960</sizeFree> <blockSize>4096</blockSize> </node>
        <node> <hostname>vmhostg4</hostname> <path>/gfs1/gvm0-brick1/brick</path> <peerid>9811a9a7-1b25-470b-a18b-5e5a8178f395</peerid> <status>1</status> <port>49163</port> <ports> <tcp>49163</tcp> <rdma>N/A</rdma> </ports> <pid>3282</pid> <sizeTotal>762173194240</sizeTotal> <sizeFree>678477139968</sizeFree> <blockSize>4096</blockSize> </node>
        <node> <hostname>vmhostg1</hostname> <path>/gfs1/gvm0-brick2/brick</path> <peerid>afc90e43-c1d3-4c88-b077-d8597a68abcd</peerid> <status>1</status> <port>49161</port> <ports> <tcp>49161</tcp> <rdma>N/A</rdma> </ports> <pid>2536</pid> <sizeTotal>750171193344</sizeTotal> <sizeFree>631473377280</sizeFree> <blockSize>4096</blockSize> </node>
        <node> <hostname>vmhostg2</hostname> <path>/gfs1/gvm0-brick2/brick</path> <peerid>386f553f-4902-4031-9b76-8ea98a63502d</peerid> <status>1</status> <port>49164</port> <ports> <tcp>49164</tcp> <rdma>N/A</rdma> </ports> <pid>3606</pid> <sizeTotal>750171193344</sizeTotal> <sizeFree>634974969856</sizeFree> <blockSize>4096</blockSize> </node>
        <node> <hostname>vmhostg3</hostname> <path>/gfs1/gvm0-brick2/brick</path> <peerid>af65f817-a655-403d-b5d1-f61ea1aab5fc</peerid> <status>1</status> <port>49164</port> <ports> <tcp>49164</tcp> <rdma>N/A</rdma> </ports> <pid>2297</pid> <sizeTotal>762173194240</sizeTotal> <sizeFree>682126888960</sizeFree> <blockSize>4096</blockSize> </node>
        <node> <hostname>vmhostg4</hostname> <path>/gfs1/gvm0-brick2/brick</path> <peerid>9811a9a7-1b25-470b-a18b-5e5a8178f395</peerid> <status>1</status> <port>49164</port> <ports> <tcp>49164</tcp> <rdma>N/A</rdma> </ports> <pid>3296</pid> <sizeTotal>762173194240</sizeTotal> <sizeFree>678477139968</sizeFree> <blockSize>4096</blockSize> </node>
        <node> <hostname>vmhostg1</hostname> <path>/gfs1/gvm0-brick3/brick</path> <peerid>afc90e43-c1d3-4c88-b077-d8597a68abcd</peerid> <status>1</status> <port>49162</port> <ports> <tcp>49162</tcp> <rdma>N/A</rdma> </ports> <pid>2527</pid> <sizeTotal>750171193344</sizeTotal> <sizeFree>631473377280</sizeFree> <blockSize>4096</blockSize> </node>
        <node> <hostname>vmhostg2</hostname> <path>/gfs1/gvm0-brick3/brick</path> <peerid>386f553f-4902-4031-9b76-8ea98a63502d</peerid> <status>1</status> <port>49165</port> <ports> <tcp>49165</tcp> <rdma>N/A</rdma> </ports> <pid>3612</pid> <sizeTotal>750171193344</sizeTotal> <sizeFree>634974969856</sizeFree> <blockSize>4096</blockSize> </node>
        <node> <hostname>vmhostg3</hostname> <path>/gfs1/gvm0-brick3/brick</path> <peerid>af65f817-a655-403d-b5d1-f61ea1aab5fc</peerid> <status>1</status> <port>49165</port> <ports> <tcp>49165</tcp> <rdma>N/A</rdma> </ports> <pid>2304</pid> <sizeTotal>762173194240</sizeTotal> <sizeFree>682126888960</sizeFree> <blockSize>4096</blockSize> </node>
        <node> <hostname>vmhostg4</hostname> <path>/gfs1/gvm0-brick3/brick</path> <peerid>9811a9a7-1b25-470b-a18b-5e5a8178f395</peerid> <status>1</status> <port>49165</port> <ports> <tcp>49165</tcp> <rdma>N/A</rdma> </ports> <pid>3308</pid> <sizeTotal>762173194240</sizeTotal> <sizeFree>678477139968</sizeFree> <blockSize>4096</blockSize> </node>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>

Either way, vdsm shouldn't barf on a missing element.

Gobinda, can you follow up to make sure it gets merged? Thanks!

Sahina, it is merged.

Tested with RHV 4.2.3. When using btrfs FS backed gluster bricks, the 'Advanced details' for the bricks could be fetched.

This bugzilla is included in the oVirt 4.2.2 release, published on March 28th 2018.

Since the problem described in this bug report should be resolved in the oVirt 4.2.2 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.
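For illustration only, here is a minimal, standalone sketch of the kind of tolerant parsing asked for above: read the 'gluster volume status <vol> detail --xml' output and treat missing per-brick elements (such as 'device', which is absent for bricks on btrfs subvolumes) as optional instead of raising. This is not the actual vdsm code; the function name volume_status_detail and the fallback defaults are assumptions made for the example.

```python
# Hypothetical sketch, not vdsm's parser: collect per-brick details from
# `gluster volume status <vol> detail --xml` without assuming that every
# element is present in each <node>.
import subprocess
import xml.etree.ElementTree as ET


def volume_status_detail(volume):
    xml_out = subprocess.check_output(
        ["gluster", "volume", "status", volume, "detail", "--xml"])
    root = ET.fromstring(xml_out)
    bricks = []
    for node in root.findall("./volStatus/volumes/volume/node"):
        # findtext() returns the supplied default when an element is
        # missing, so a brick without <device> no longer breaks parsing.
        bricks.append({
            "hostname": node.findtext("hostname", default=""),
            "path": node.findtext("path", default=""),
            "status": node.findtext("status", default="0"),
            "port": node.findtext("port", default="N/A"),
            "pid": node.findtext("pid", default="-1"),
            "device": node.findtext("device", default=""),
            "fsName": node.findtext("fsName", default=""),
            "sizeTotal": int(node.findtext("sizeTotal", default="0")),
            "sizeFree": int(node.findtext("sizeFree", default="0")),
        })
    return bricks


if __name__ == "__main__":
    for brick in volume_status_detail("gvm1"):
        print(brick["hostname"], brick["path"],
              brick["device"] or "<no device>")
```

With output like the dumps pasted above, each brick would simply be reported with an empty 'device' field instead of the parser raising an exception.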