Description of problem:
After a new disk is added, the host disk inventory is not properly updated in the UI: the host ends up with more than two disks, but the UI still shows 2 disks instead of 3. The REST API seems to work properly, though; the new disk is listed in api/v1/nodes.

Version-Release number of selected component (if applicable):
rhscon-ui-0.0.19-1.el7.noarch
rhscon-ceph-0.0.6-10.el7.x86_64
rhscon-core-0.0.8-9.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Prepare some hosts with two unused disks available for Ceph storage
2. Accept the hosts from step 1 in the console UI
3. Create a cluster from some of those hosts
4. Create a pool
5. Add a new disk to those hosts

Actual results:
The disk is partitioned and marked as ceph_data. It is listed in the api/v1/nodes GET response, but it is not visible in the UI. The number of cluster OSDs is not increased, the size of the pool is not changed, the number of disks for hosts which are not part of a cluster is not increased, etc. The new disks are also not listed in the responses of the http://<server>:8080/api/v1/nodes?state=free and http://<server>:8080/api/v1/clusters/<ID>/nodes GET requests.

Expected results:
The new disk is properly added and counted in the UI.

Additional info:
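The API discrepancy described above can also be checked with a small script. A minimal sketch, comparing the full node listing against the state=free listing; the payload field names (hostname, storage_disks, name) and the sample data are assumptions for illustration, not taken from the actual rhscon API schema:

```python
# Sketch: find disks that appear in the full node listing
# (api/v1/nodes) but are missing from the 'free' listing
# (api/v1/nodes?state=free). The payloads below are hypothetical
# samples; a real check would fetch them over HTTP.

def disk_names(nodes):
    """Map hostname -> set of disk names from a node-list payload."""
    return {n["hostname"]: {d["name"] for d in n.get("storage_disks", [])}
            for n in nodes}

def missing_disks(all_nodes, free_nodes):
    """Disks present in the full listing but absent from the free listing."""
    full = disk_names(all_nodes)
    free = disk_names(free_nodes)
    return {host: sorted(full[host] - free.get(host, set()))
            for host in full
            if full[host] - free.get(host, set())}

# Hypothetical payloads mirroring the report: /dev/vdc was added to
# host1 but does not show up in the state=free response.
all_nodes = [{"hostname": "host1",
              "storage_disks": [{"name": "/dev/vdb"}, {"name": "/dev/vdc"}]}]
free_nodes = [{"hostname": "host1",
               "storage_disks": [{"name": "/dev/vdb"}]}]

print(missing_disks(all_nodes, free_nodes))  # {'host1': ['/dev/vdc']}
```

An empty result from `missing_disks` would mean both endpoints agree; the non-empty result here reproduces the inconsistency reported above.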
Is it a UI refresh issue? What do you mean by "size of the pool is not increased"? What do you expect here? What do you mean by "number of disks for hosts which are not part of cluster is not increased"? Could you please elaborate on the issues found and the expected result for each of them?
(In reply to Nishanth Thomas from comment #2)
> Is it a UI refresh issue?

If by that you mean that the new disk is not seen anywhere in the UI, in other words it has no impact on any number there, then yes, it is a UI refresh issue.

> What you mean by size of the pool is not increased? what you expect here?

During the sprint planning call I asked whether the pool size should change when a new disk is added, and the answer was yes. So that is what I expect.

> what you mean by number of disks for hosts which are not part of cluster is
> not increased?

There is no host dashboard, so during cluster creation/expansion I simply check whether the number of disks for the host is correct. It was not; moreover, I checked the console and found that the response for http://<server>:8080/api/v1/nodes?state=free is wrong. The new disk was not there.

> Could you please elaborate the issues found and expected result for each of
> those issues?

To list all the issues here: another thing which should change is the number of OSDs on the cluster dashboard, and on the main dashboard too.
(In reply to Lubos Trilety from comment #3)
> (In reply to Nishanth Thomas from comment #2)
> > Is it a UI refresh issue?
>
> If by that you mean that the new disk is not seen anywhere on UI, in other
> words it has no impact on any number there, then yes it is UI refresh issue.
>
> > What you mean by size of the pool is not increased? what you expect here?
>
> During the sprint planning call I asked if the pool size should be changed,
> when a new disk is added and the answer was yes. So that's what I expect.

Pool size will not change when you add an OSD. Only the PG num will change, and that also depends on the current number of OSDs.

> > what you mean by number of disks for hosts which are not part of cluster is
> > not increased?
>
> There no host dashboard, so simply during cluster creation/expand I check if
> the number of disks for the host is correct. However it was not, moreover I
> checked the console and found that the response for
> http://<server>:8080/api/v1/nodes?state=free is wrong. The new disk was not
> there.

I don't think this is an issue. We are not able to reproduce it as such. Could you please provide a setup where we can reproduce this issue? Please make sure you are not hitting this issue: https://bugzilla.redhat.com/show_bug.cgi?id=1312265

> > Could you please elaborate the issues found and expected result for each of
> > those issues?
>
> To have here all issues, another thing which should change is number of OSDs
> for cluster dashboard and on main dashboard too.

This page is not refreshed periodically at this point. Also, that feature is not complete.
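For context on the PG-num remark above: the commonly documented Ceph rule of thumb is pg_num ≈ (number of OSDs × ~100) / replica count, rounded up to the nearest power of two. A minimal sketch of that guideline; the 100-PGs-per-OSD target is the usual upstream recommendation, not something stated in this bug:

```python
# Rule-of-thumb PG count (standard Ceph sizing guideline, not
# specific to this bug): target ~100 PGs per OSD, divided by the
# pool's replica count, rounded up to the next power of two.

def suggested_pg_num(num_osds, replica_count, pgs_per_osd=100):
    target = (num_osds * pgs_per_osd) / replica_count
    pg_num = 1
    while pg_num < target:
        pg_num *= 2
    return pg_num

# Adding OSDs raises the suggested pg_num, which is why the PG
# number (not the pool size) depends on the current OSD count.
print(suggested_pg_num(6, 3))  # 6*100/3 = 200 -> 256
print(suggested_pg_num(9, 3))  # 9*100/3 = 300 -> 512
```

This illustrates the point in the comment: growing the cluster by an OSD changes the appropriate PG count, while the pool's replicated size stays the same.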