Bug 1310746 - "Select Hosts" page of "Create Cluster" wizard reports wrong number of disks for node which wasn't initialized when the wizard was started
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat Storage
Component: core
Version: 2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 2
Assignee: Nishanth Thomas
QA Contact: sds-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-02-22 15:29 UTC by Martin Bukatovic
Modified: 2018-11-19 05:30 UTC
CC List: 1 user

Fixed In Version: rhscon-ceph-0.0.23-1.el7scon.x86_64, rhscon-core-0.0.24-1.el7scon.x86_64, rhscon-ui-0.0.39-1.el7scon.noarch
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-11-19 05:30:48 UTC
Embargoed:


Attachments (Terms of Use)
screenshot of Select Hosts page with a node with 0 disks reported by mistake (53.12 KB, image/png)
2016-02-22 15:29 UTC, Martin Bukatovic

Description Martin Bukatovic 2016-02-22 15:29:30 UTC
Created attachment 1129367 [details]
screenshot of Select Hosts page with a node with 0 disks reported by mistake

Description of problem
======================

When "Create Cluster" task is started before the "Initialize Node" task
completes on all nodes, "2. Select Hosts" page of "Create Cluster" wizard
reports 0 disks detected for this node, even though this node actually
contains a disk drive. This happens because this node wasn't initialized when
the "Create Cluster" operation was started.

Version-Release number of selected component
============================================

rhscon-core-0.0.8-4.el7.x86_64
rhscon-ceph-0.0.6-4.el7.x86_64
rhscon-ui-0.0.14-1.el7.noarch

How reproducible
================

100 %

Steps to Reproduce
==================

1. Install skyring on the server and prepare a few hosts for cluster setup.
2. Log in to the web interface and accept one node on the main page.
    * Make sure you use a node which has at least one disk available for USM
      to use.
    * One node is enough to reproduce the issue; moreover, it allows you
      to retry with the same cluster again.
3. Immediately click the "Create Cluster" button (so that the "Initialize Node"
   task is not completed yet).
4. Stop on the "2. Select Hosts" page of the "Create Cluster" wizard and check
   the number of disks reported for the node which has just been accepted.

Actual results
==============

The page reports "0 Disks" for the new node even though the machine has one
disk available and has been successfully initialized in the meantime.

Expected results
================

There are multiple possible solutions and I'm not sure which one would make
more sense with respect to other USM use cases:

* Don't allow starting "Create Cluster" until all "Initialize Node" tasks are
  finished (a minimal sketch of this option follows below).
* The page reports that the "Initialize Node" task is still running for this
  node and that the information for the new node is not available yet. In
  addition to that, it would be good to refresh the list of nodes when all
  tasks finish.

Additional info
===============

Going "Back" in the wizard doesn't help. One has to start "Create Cluster"
again to see actual state of the new host.

Interesting fact: the "Storage Profiles" modal window shows correct number
of disks every time.

Comment 1 Martin Bukatovic 2016-02-23 13:24:58 UTC
Note that USM doesn't even report the IP address of the node.

Comment 4 Nishanth Thomas 2016-05-24 11:05:19 UTC
Only nodes which are in the 'active' state will be listed on the cluster creation screen, hence the issue is resolved.
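To illustrate the approach (not the actual rhscon-core code; the type and
field names below are assumptions), filtering the node list down to 'active'
nodes in Go could look like this:

package main

import "fmt"

// Node is a simplified stand-in for the storage node model; the real
// rhscon-core types differ, so the fields here are illustrative assumptions.
type Node struct {
	Hostname string
	State    string // e.g. "unaccepted", "initializing", "active"
	Disks    int
}

// activeNodes keeps only nodes whose initialization has finished, mirroring
// the fix described above: the cluster creation screen lists 'active' nodes
// only, so the disk counts shown there are always populated.
func activeNodes(nodes []Node) []Node {
	var out []Node
	for _, n := range nodes {
		if n.State == "active" {
			out = append(out, n)
		}
	}
	return out
}

func main() {
	nodes := []Node{
		{Hostname: "node1.example.com", State: "active", Disks: 1},
		{Hostname: "node2.example.com", State: "initializing", Disks: 0}, // inventory not collected yet
	}
	for _, n := range activeNodes(nodes) {
		fmt.Printf("%s: %d disks\n", n.Hostname, n.Disks)
	}
}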

Comment 5 Martin Kudlej 2016-07-26 09:25:06 UTC
I haven't seen this for a long time. Last tested with:
ceph-ansible-1.0.5-31.el7scon.noarch
ceph-installer-1.0.14-1.el7scon.noarch
rhscon-ceph-0.0.36-1.el7scon.x86_64
rhscon-core-0.0.36-1.el7scon.x86_64
rhscon-core-selinux-0.0.36-1.el7scon.noarch
rhscon-ui-0.0.50-1.el7scon.noarch

