Bug 1313934 - adding new disk
adding new disk
Status: CLOSED NOTABUG
Product: Red Hat Storage Console
Classification: Red Hat
Component: UI
Severity: medium
Assigned To: Kanagaraj
sds-qe-bugs
:
Depends On:
Blocks:
Reported: 2016-03-02 11:09 EST by Lubos Trilety
Modified: 2016-03-08 11:14 EST
CC List: 3 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-03-08 11:14:37 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Lubos Trilety 2016-03-02 11:09:29 EST
Description of problem:
If a host has more than two disks after a new disk is added, the host disk inventory is not properly updated in the UI: it still shows 2 disks on the host instead of 3. The REST API seems to work properly, though; the new disk is listed in api/v1/nodes.

Version-Release number of selected component (if applicable):
rhscon-ui-0.0.19-1.el7.noarch
rhscon-ceph-0.0.6-10.el7.x86_64
rhscon-core-0.0.8-9.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Prepare some hosts with two unused disks available for ceph storage
2. Accept the hosts from step 1 in the console UI
3. Create a cluster from some of those hosts
4. Create a pool
5. Add a new disk to those hosts

Actual results:
The disk is partitioned and marked as ceph_data. It is listed in the api/v1/nodes GET response, but it is not shown in the UI. The number of cluster OSDs is not increased, the pool size is not changed, the number of disks for hosts that are not part of a cluster is not increased, etc. The new disks are also missing from the responses of the http://<server>:8080/api/v1/nodes?state=free and http://<server>:8080/api/v1/clusters/<ID>/nodes GET requests.
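The API inconsistency described above (disk present in the full node inventory but absent from the filtered listings) can be checked mechanically by diffing the disk sets returned by the two endpoints. A hypothetical helper; the JSON shape (nodes carrying a "disks" list with "name" entries) is an assumption, not taken from the console's API documentation:

```python
def missing_disks(all_nodes, free_nodes):
    """Return disk names present in the full api/v1/nodes inventory but
    absent from the filtered listing -- the symptom reported above.
    The nodes -> "disks" -> "name" JSON layout is assumed."""
    def disk_names(nodes):
        return {d["name"] for n in nodes for d in n.get("disks", [])}
    return sorted(disk_names(all_nodes) - disk_names(free_nodes))
```

Feeding it the parsed responses of api/v1/nodes and api/v1/nodes?state=free for the same host would surface the missing disk directly.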

Expected results:
The new disk is properly added and counted in the UI.

Additional info:
Comment 2 Nishanth Thomas 2016-03-03 22:59:00 EST
Is it a UI refresh issue?

What do you mean by "size of the pool is not increased"? What do you expect here?

What do you mean by "number of disks for hosts which are not part of cluster is not increased"?

Could you please elaborate on the issues found and the expected result for each of them?
Comment 3 Lubos Trilety 2016-03-04 03:26:33 EST
(In reply to Nishanth Thomas from comment #2)
> Is it a UI refresh issue?

If by that you mean that the new disk is not seen anywhere in the UI, in other words that it has no impact on any number there, then yes, it is a UI refresh issue.

> 
> What you mean by size of the pool is not increased? what you expect here?

During the sprint planning call I asked whether the pool size should change when a new disk is added, and the answer was yes. So that is what I expect.

> 
> what you mean by number of disks for hosts which are not part of cluster is
> not increased?

There is no host dashboard, so during cluster creation/expansion I simply check whether the number of disks for the host is correct. It was not; moreover, I checked the console and found that the response for http://<server>:8080/api/v1/nodes?state=free is wrong. The new disk was not there.

> 
> Could you please elaborate the issues found and expected result for each of
> those issues?

To have all the issues listed here: another thing that should change is the number of OSDs on the cluster dashboard, and on the main dashboard too.
Comment 4 Nishanth Thomas 2016-03-04 05:21:14 EST
(In reply to Lubos Trilety from comment #3)
> (In reply to Nishanth Thomas from comment #2)
> > Is it a UI refresh issue?
> 
> If by that you mean that the new disk is not seen anywhere on UI, in other
> words it has no impact on any number there, then yes it is UI refresh issue.
> 
> > 
> > What you mean by size of the pool is not increased? what you expect here
> 
> During the sprint planning call I asked if the pool size should be changed,
> when a new disk is added and the answer was yes. So that's what I expect.
> 

Pool size will not change when you add an OSD. Only the PG num will change, and even that depends on the current number of OSDs.
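For reference, the dependence of PG count on the number of OSDs mentioned above follows the general Ceph sizing guideline (total PGs roughly equals OSDs * 100 / replica count, rounded up to a power of two). A rough sketch of that guideline; this is general Ceph advice, not code from the console:

```python
def recommended_pg_num(num_osds: int, replica_count: int,
                       target_pgs_per_osd: int = 100) -> int:
    """Suggest a pg_num per the common Ceph rule of thumb (an assumption
    here, not taken from this bug): (OSDs * target PGs per OSD) / replicas,
    rounded up to the next power of two."""
    raw = (num_osds * target_pgs_per_osd) / replica_count
    pg = 1
    while pg < raw:
        pg *= 2
    return pg
```

So adding an OSD to a small cluster only bumps pg_num once the raw value crosses the next power-of-two boundary, which is consistent with the comment that pool size itself does not change.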

> > 
> > what you mean by number of disks for hosts which are not part of cluster is
> > not increased?
> 
> There no host dashboard, so simply during cluster creation/expand I check if
> the number of disks for the host is correct. However it was not, moreover I
> checked the console and found that the response for
> http://<server>:8080/api/v1/nodes?state=free is wrong. The new disk was not
> there.
> 

I don't think this is an issue. We are not able to reproduce it as such. Could you please provide a setup where we can reproduce it? Please make sure you are not hitting https://bugzilla.redhat.com/show_bug.cgi?id=1312265

> > 
> > Could you please elaborate the issues found and expected result for each of
> > those issues?
> 
> To have here all issues, another thing which should change is number of OSDs
> for cluster dashboard and on main dashboard too.

This page is not refreshed periodically at this point. Also, that feature is not complete.
