Gluster 6 has been released: https://www.gluster.org/announcing-gluster-6/ Gluster 7 is expected to be released in July 2019: https://github.com/gluster/glusterfs/milestone/9 Gluster 5 is expected to go EOL once Gluster 8 is released: https://www.gluster.org/release-schedule/ So it may happen that Gluster 5 goes EOL while oVirt 4.3 is still supported.
Just a short note: running 4.3.3.7 with Gluster v6.1 is OK (for now), but I have noticed that my performance used to be better, though mine is an unusual setup. I am using a team device with load balancing (eth + l3 + l4 for tx hashing). On v5.6 (with cluster op-version 31200, as I never knew there was such an option that needs to be raised), throughput was around 120 MB/s (tested with dd if=/dev/zero of=file/on/fuse/mountpoint bs=4M count=1000), and now it is about 80 MB/s (same test). Most probably some default option was changed, which altered the results. On the other hand, v5.6 had an issue where dom_md/ids was always pending a heal, while on v6.1 everything is fine (I set the storage domain to maintenance during the upgrade).
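For anyone hitting the same op-version surprise, the setting mentioned above can be inspected and raised from the Gluster CLI, and the pending heal on dom_md/ids can be checked per volume. A minimal sketch; the volume name "data" is a placeholder for your actual volume, and 31200 is the level mentioned above (pick the one matching your installed release):

```shell
# Show the cluster's current operating version
gluster volume get all cluster.op-version

# Raise the op-version after ALL nodes are upgraded
# (31200 is the level mentioned above; use the value for your release)
gluster volume set all cluster.op-version 31200

# List entries still pending heal on a volume ("data" is a placeholder)
gluster volume heal data info
```

Note that the op-version is never raised automatically on upgrade; it stays at the old level until set by hand, which is why the option can go unnoticed.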
Update: on oVirt 4.3.4 (RC4) I have noticed that the arbiter brick's Advanced Details (Storage -> Volumes -> Volume -> Bricks -> select brick -> Advanced Details) are displayed properly, while the data bricks report 'Error in fetching the brick details, please try again.'
Tested RHV 4.3.5 with gluster-6.0-7 and everything worked well.
The previously reported issue is most probably related to a non-standard network setup.
This bugzilla is included in oVirt 4.3.5 release, published on July 30th 2019. Since the problem described in this bug report should be resolved in oVirt 4.3.5 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.