Tested with tendrl-monitoring-integration-1.5.4-14. There is no healing info for a volume as described in comment #2; there is now only healing info per brick. I installed and configured a Gluster cluster with a volume (both supported types), installed and configured WA, and imported the Gluster cluster into WA. After more than 50 minutes there is still no healing info on the brick dashboard. I reloaded the web page with the cache cleared. There is also enough free memory and free disk space on the server node. I see correct info in the output of "gluster volume heal volume_name info" on every node:

Brick mkudlej-usm2-gl1:/mnt/brick_beta_arbiter_1/1
Status: Connected
Number of entries: 0

Brick mkudlej-usm2-gl2:/mnt/brick_beta_arbiter_1/1
Status: Connected
Number of entries: 0
....

--> ASSIGNED
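For cross-checking the dashboard against the CLI, the per-brick entry counts in heal output like the above can be summed with a short script. A minimal sketch, assuming the exact "Number of entries: N" line format shown in the sample output (a real check would pipe `gluster volume heal <volname> info` instead of the captured text):

```shell
#!/bin/sh
# Sample heal-info output captured above; in practice this would come from:
#   gluster volume heal volume_name info
heal_output='Brick mkudlej-usm2-gl1:/mnt/brick_beta_arbiter_1/1
Status: Connected
Number of entries: 0

Brick mkudlej-usm2-gl2:/mnt/brick_beta_arbiter_1/1
Status: Connected
Number of entries: 0'

# Sum the N from every "Number of entries: N" line ($4 is the count field).
total=$(printf '%s\n' "$heal_output" \
  | awk '/^Number of entries:/ {s += $4} END {print s + 0}')
echo "$total"   # → 0 for the sample above
```

A non-zero total here with an empty dashboard chart would point at the monitoring integration rather than at Gluster itself.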
Created attachment 1366000 [details] there is no healing info for brick
I've filed a documentation bug because the healing info chart has moved from the volume dashboard to the brick dashboard and its content has probably changed as well. See https://bugzilla.redhat.com/show_bug.cgi?id=1524431
Martin, after debugging the setup we found that the volume you used as an example is of type `Distributed-Disperse`. Heal info is applicable only to replicate volumes. I just tried `gluster v heal <your vol name> info split-brain`, and it gives the error `Volume volume_gama_disperse_4_plus_2x2 is not of type replicate`. Please create a volume of replicate type and then verify the scenario. Attached is a screenshot from your own setup showing the working details.
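The distinction above can be encoded in a small guard. A minimal sketch (the helper name `is_heal_applicable` is hypothetical, not part of Gluster or Tendrl), assuming the volume type string is taken from the "Type:" line of `gluster volume info`:

```shell
#!/bin/sh
# Hypothetical helper: heal info only applies to replicate-based volume
# types (e.g. Replicate, Distributed-Replicate), not to disperse types.
is_heal_applicable() {
  case "$1" in
    *Replicate*) echo yes ;;
    *)           echo no ;;
  esac
}

is_heal_applicable "Distributed-Disperse"    # → no  (the volume in this bug)
is_heal_applicable "Distributed-Replicate"   # → yes
```

This mirrors the CLI behavior: `gluster v heal <vol> info split-brain` refuses to run on the disperse volume with "is not of type replicate".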
Created attachment 1366030 [details] brick level heal info for replicate volume
@martin, since the split-brain related information is not available for the dispersed volume type, the scenario based on which this bug was moved to FailedQA is invalid. Moving this bug back to ON_QA for further verification. Please do the needful.
Tested with:

etcd-3.2.7-1.el7.x86_64
glusterfs-3.8.4-18.4.el7.x86_64
glusterfs-3.8.4-52.el7_4.x86_64
glusterfs-3.8.4-52.el7rhgs.x86_64
glusterfs-api-3.8.4-52.el7rhgs.x86_64
glusterfs-cli-3.8.4-52.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-18.4.el7.x86_64
glusterfs-client-xlators-3.8.4-52.el7_4.x86_64
glusterfs-client-xlators-3.8.4-52.el7rhgs.x86_64
glusterfs-events-3.8.4-52.el7rhgs.x86_64
glusterfs-fuse-3.8.4-18.4.el7.x86_64
glusterfs-fuse-3.8.4-52.el7_4.x86_64
glusterfs-fuse-3.8.4-52.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-52.el7rhgs.x86_64
glusterfs-libs-3.8.4-18.4.el7.x86_64
glusterfs-libs-3.8.4-52.el7_4.x86_64
glusterfs-libs-3.8.4-52.el7rhgs.x86_64
glusterfs-rdma-3.8.4-52.el7rhgs.x86_64
glusterfs-server-3.8.4-52.el7rhgs.x86_64
gluster-nagios-addons-0.2.10-2.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.5.x86_64
python-etcd-0.4.5-1.el7rhgs.noarch
python-gluster-3.8.4-52.el7rhgs.noarch
rubygem-etcd-0.3.0-1.el7rhgs.noarch
tendrl-ansible-1.5.4-6.el7rhgs.noarch
tendrl-api-1.5.4-4.el7rhgs.noarch
tendrl-api-httpd-1.5.4-4.el7rhgs.noarch
tendrl-collectd-selinux-1.5.4-1.el7rhgs.noarch
tendrl-commons-1.5.4-9.el7rhgs.noarch
tendrl-gluster-integration-1.5.4-13.el7rhgs.noarch
tendrl-grafana-plugins-1.5.4-14.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-1.el7rhgs.noarch
tendrl-monitoring-integration-1.5.4-14.el7rhgs.noarch
tendrl-node-agent-1.5.4-15.el7rhgs.noarch
tendrl-notifier-1.5.4-6.el7rhgs.noarch
tendrl-selinux-1.5.4-1.el7rhgs.noarch
tendrl-ui-1.5.4-6.el7rhgs.noarch
vdsm-gluster-4.17.33-1.2.el7rhgs.noarch

and it works for replicated volumes. --> VERIFIED
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2017:3478