Description of problem
======================
The description of the Subvolumes panel on the Volume dashboard doesn't explain what a gluster subvolume is. Moreover, it talks about status information, while only the total number of subvolumes is actually shown on the panel.

Version-Release number of selected component
============================================
tendrl-monitoring-integration-1.6.3-7.el7rhgs.noarch

Steps to Reproduce
==================
1. Install RHGS WA using tendrl-ansible
2. Import a Trusted storage pool with at least one volume
3. Go to the Volume dashboard and check the Subvolumes panel's description (available via the little "i" icon in the top left corner of the panel).

Actual results
==============
The panel reports just the number of subvolumes. The description states:

> The Subvolumes panel displays subvolume status information for a given volume.

See screenshot 1.

Expected results
================
The description explains what a gluster subvolume is, so that the customer is able to understand the information reported by this panel. Moreover, the description mentions status information, but only a total number is reported on the panel. If this is the only information provided here, the description should clearly state that.
Created attachment 1474618 [details] screenshot 1: Subvolumes panel with its description
Proposed description:

> The Subvolumes panel displays the number of subvolumes in a given volume. A subvolume is a set of bricks which form a replica set for the volume.
The above description may be wrong; we could go with:

> The Subvolumes panel displays the number of subvolumes in a given volume. A subvolume is a brick after it has been processed by at least one translator, or in other words, a set of one or more xlators stacked together.
This is more than a tooltip BZ. Looking at the Subvolumes panel on the Volume dashboard, it is currently showing the number of subvolumes and not giving status information. MartinB asked about the relevance of showing the number of subvolumes. The original intent was to show the count along with the health of the subvolumes (e.g. ok, degraded, not ok, or whatever values we get from Gluster or derive by calculating this information) in order to identify whether there are availability concerns. If we're only showing the number of subvolumes, it's not really useful to show it. However, if we can show the health information of the subvolumes, which was the original intent, then I recommend we fix this accordingly. Regarding comment 3 and comment 4 above, the proposed description is only applicable if the volume is not an EC volume, and we'd probably also need to state when it is not applicable.
I agree with Ju's comments.
Need to discuss this with Nishanth.
This is more than a tooltip BZ. Looking at the Subvolumes panel on the Volume dashboard, it is currently showing the number of subvolumes and not giving status information. MartinB asked about the relevance of showing the number of subvolumes. The original intent was to show the count along with the health of the subvolumes (e.g. ok, degraded, not ok) in order to identify whether there are availability concerns. If we're only showing the number of subvolumes, then I agree it's not useful to show it, and it should therefore be removed. Per discussions, we would prefer to get the health information of the subvolumes from the underlying Gluster, which does not provide this information (vs. calculating it in WA/tendrl). Therefore, let's remove this panel for now.
I agree with Ju that we should resolve this problem by removing this panel, and create another BZ which will track adding this panel back as originally intended, with information which makes sense.
PR is under review: https://github.com/Tendrl/monitoring-integration/pull/571
(In reply to Nishanth Thomas from comment #8)
> we need to create a separate RFE BZ and prioritize that in a later release

The RFE bugzilla was created as BZ 1627828.
Verified: there is no Subvolumes panel on the Volume dashboard now.
Looks good to me.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:3427