Description of problem:

When I try to restart the glusterfs service from oVirt Manager, or create a GlusterFS storage domain, the following error is logged in supervdsm.log (GlusterFS 10.1):

MainProcess|jsonrpc/4::ERROR::2022-04-25 19:17:45,985::supervdsm_server::98::SuperVdsm.ServerCallback::(wrapper) Error in volumeInfo
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/gluster/cli.py", line 544, in volumeInfo
    return _parseVolumeInfo(xmltree)
  File "/usr/lib/python3.6/site-packages/vdsm/gluster/cli.py", line 429, in _parseVolumeInfo
    value['stripeCount'] = el.find('stripeCount').text
AttributeError: 'NoneType' object has no attribute 'text'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/supervdsm_server.py", line 96, in wrapper
    res = func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/gluster/cli.py", line 546, in volumeInfo
    raise ge.GlusterXmlErrorException(err=[etree.tostring(xmltree)])
vdsm.gluster.exception.GlusterXmlErrorException: XML error: rc=0 out=() err=[<XML payload, decoded below>]

The XML returned by the gluster CLI (decoded from the err payload of the exception above):

<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr />
  <volInfo>
    <volumes>
      <volume>
        <name>VM_IMAGES</name>
        <id>7af6d997-f238-4a61-bd9f-fb5d1f3783e7</id>
        <status>1</status>
        <statusStr>Started</statusStr>
        <snapshotCount>0</snapshotCount>
        <brickCount>1</brickCount>
        <distCount>1</distCount>
        <replicaCount>1</replicaCount>
        <arbiterCount>0</arbiterCount>
        <disperseCount>0</disperseCount>
        <redundancyCount>0</redundancyCount>
        <type>0</type>
        <typeStr>Distribute</typeStr>
        <transport>0</transport>
        <bricks>
          <brick uuid="045fd2cd-6fee-4082-9ed7-e7a13a2fcbe9">vmcore1.san.rener:/mnt/DATA/VM_IMAGES/brick<name>vmcore1.san.rener:/mnt/DATA/VM_IMAGES/brick</name><hostUuid>045fd2cd-6fee-4082-9ed7-e7a13a2fcbe9</hostUuid><isArbiter>0</isArbiter></brick>
        </bricks>
        <optCount>11</optCount>
        <options>
          <option><name>storage.fips-mode-rchecksum</name><value>on</value></option>
          <option><name>transport.address-family</name><value>inet</value></option>
          <option><name>nfs.disable</name><value>on</value></option>
          <option><name>auth.allow</name><value>10.233.2.2,10.233.2.3,10.233.2.4,10.224.2.8,10.233.0.8,10.233.0.9,10.233.0.10</value></option>
          <option><name>performance.strict-o-direct</name><value>on</value></option>
          <option><name>network.remote-dio</name><value>on</value></option>
          <option><name>storage.owner-uid</name><value>36</value></option>
          <option><name>storage.owner-gid</name><value>36</value></option>
          <option><name>network.ping-timeout</name><value>5</value></option>
          <option><name>cluster.quorum-type</name><value>none</value></option>
          <option><name>cluster.server-quorum-type</name><value>none</value></option>
        </options>
      </volume>
      <count>1</count>
    </volumes>
  </volInfo>
</cliOutput>

Note that this XML contains no <stripeCount> element (GlusterFS 10 removed striped volumes), so el.find('stripeCount') in _parseVolumeInfo returns None, and the subsequent .text access raises the AttributeError shown above.
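The failure mode can be reproduced with plain ElementTree, since Element.find() returns None for a missing child. Below is a minimal sketch of the defensive pattern that avoids the crash; the find_int helper and the trimmed XML sample are illustrations based on the output above, not vdsm's actual code:

```python
import xml.etree.ElementTree as etree

# Trimmed sample of the GlusterFS 10.1 output above: note that it has
# no <stripeCount> element, which is what trips up _parseVolumeInfo.
SAMPLE = """<volume>
  <name>VM_IMAGES</name>
  <replicaCount>1</replicaCount>
</volume>"""


def find_int(el, tag, default=0):
    """Hypothetical helper: return the tag's text as an int, or a
    default when the element is absent (as stripeCount is here)."""
    child = el.find(tag)
    return int(child.text) if child is not None else default


vol = etree.fromstring(SAMPLE)
print(find_int(vol, 'replicaCount'))  # 1
print(find_int(vol, 'stripeCount'))   # 0 (element absent, default used)
```

The buggy pattern, by contrast, chains .text directly onto find(): `el.find('stripeCount').text` raises AttributeError on None instead of falling back to a default.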
A workaround is described here: https://lists.ovirt.org/archives/list/users@ovirt.org/message/66PWY6JFI63QOSHWX7XBQZQRLJHEI7YU/
Ritesh is working on it.
Upstream issue: https://github.com/oVirt/vdsm/issues/155