Bug 1048425

Summary: [RHSC][Scale] - Rebalance status dialog does not get rendered properly when rebalance is stopped from status dialog
Product: [Red Hat Storage] Red Hat Gluster Storage
Component: rhsc
Version: 2.1
Status: CLOSED EOL
Severity: unspecified
Priority: unspecified
Reporter: RamaKasturi <knarra>
Assignee: Ramesh N <rnachimu>
QA Contact: storage-qa-internal <storage-qa-internal>
Docs Contact:
CC: knarra, mmahoney, mmccune, rhs-bugs, ssampat
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-12-03 17:20:27 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Attachments:
  Attached is the actual results (flags: none)

Description RamaKasturi 2014-01-04 06:52:07 UTC
Created attachment 845206 [details]
Attached is the actual results

Description of problem:
Two rebalance status dialogs are displayed when rebalance is stopped from the rebalance status dialog.

Version-Release number of selected component (if applicable):
rhsc-2.1.2-0.28.beta.el6_5.noarch

How reproducible:
Always

Steps to Reproduce:
1. Set up the scale configuration.
2. Create a distribute volume and start rebalance on the volume (the equivalent gluster CLI operations are sketched after these steps).
3. Open the status dialog from the rebalance drop-down icon.
4. Stop rebalance from the status dialog.
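For reference, a rough sketch of the underlying gluster operations driven by the console; the volume name "distvol" is hypothetical, and in this bug the stop is issued from the RHSC status dialog rather than the CLI:

  # Start rebalance on the volume (what the RHSC "Rebalance" action triggers)
  gluster volume rebalance distvol start

  # Query rebalance progress (the status dialog shows the equivalent of this output)
  gluster volume rebalance distvol status

  # Stop rebalance (here performed from the status dialog itself)
  gluster volume rebalance distvol stop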

Actual results:
The actual results are shown in the attached screenshot: two rebalance status dialogs are rendered.

Expected results:
Only one rebalance status dialog should be displayed at a time.

Additional info:

Comment 2 RamaKasturi 2014-01-07 13:21:34 UTC
The above issue is also seen with CB14 (rhsc-2.1.2-0.32.el6rhs.noarch) and glusterfs version glusterfs-server-3.4.0.54rhs-2.el6rhs.x86_64.

Comment 3 RamaKasturi 2014-01-08 06:06:31 UTC
Scale configuration details are below:

Create a volume with 16 servers, 1 brick per server.
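
A minimal sketch of creating such a volume from the gluster CLI, assuming hypothetical hostnames server1..server16 and brick path /rhs/brick1 (no replica or stripe count, so the volume is pure distribute):

  gluster volume create distvol \
      server1:/rhs/brick1 server2:/rhs/brick1 server3:/rhs/brick1 server4:/rhs/brick1 \
      server5:/rhs/brick1 server6:/rhs/brick1 server7:/rhs/brick1 server8:/rhs/brick1 \
      server9:/rhs/brick1 server10:/rhs/brick1 server11:/rhs/brick1 server12:/rhs/brick1 \
      server13:/rhs/brick1 server14:/rhs/brick1 server15:/rhs/brick1 server16:/rhs/brick1
  gluster volume start distvol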

Comment 4 Vivek Agarwal 2015-12-03 17:20:27 UTC
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release for which you requested review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.