Bug 1315698 - skyring doesn't lock nodes when cluster creation starts [NEEDINFO]
Status: CLOSED WONTFIX
Product: Red Hat Storage Console
Classification: Red Hat
Component: core
Version: 2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 3
Assigned To: Nishanth Thomas
QA Contact: sds-qe-bugs
Keywords: Reopened, TestBlocker
Depends On:
Blocks:
 
Reported: 2016-03-08 07:33 EST by Martin Kudlej
Modified: 2017-03-23 00:05 EDT
CC List: 3 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-03-23 00:05:46 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Flags: mkudlej: needinfo? (nthomas)


Attachments
logs from server (260.38 KB, application/x-gzip)
2016-03-08 07:33 EST, Martin Kudlej
logs from another server (208.49 KB, application/x-gzip)
2016-03-08 07:34 EST, Martin Kudlej

Description Martin Kudlej 2016-03-08 07:33:44 EST
Created attachment 1134131 [details]
logs from server

Description of problem:
I've tried to create the first cluster:

{"name":"testcl1","type":"ceph","nodes":[
{"nodeid":"d1178975-a308-4ded-8e99-e659472261e6","nodetype":["MON"],"disks":[]},
{"nodeid":"7c947505-8adc-4920-989c-a815c9d20144","nodetype":["OSD"],"disks":[{"name":"/dev/vdc","fstype":"xfs"},{"name":"/dev/vdb","fstype":"xfs"},{"name":"/dev/vdd","fstype":"xfs"}]},
{"nodeid":"4260bac6-7898-4b2d-8f7a-a963331e0572","nodetype":["OSD"],"disks":[{"name":"/dev/vdc","fstype":"xfs"},{"name":"/dev/vdb","fstype":"xfs"}]}],"networks":{"cluster":"172.16.180.0/24","public":"172.16.180.0/24"}}

and a short while after submitting this task I've tried to create another one:

{"name":"testcl2","type":"ceph","nodes":[
{"nodeid":"d1178975-a308-4ded-8e99-e659472261e6","nodetype":["MON"],"disks":[]},
{"nodeid":"02d012b9-8201-40b2-8336-d0eed02d2ede","nodetype":["MON"],"disks":[]},
{"nodeid":"fa5518bd-9563-4521-86dd-e1d13d44a454","nodetype":["MON"],"disks":[]},
{"nodeid":"7c947505-8adc-4920-989c-a815c9d20144","nodetype":["OSD"],"disks":[{"name":"/dev/vdc","fstype":"xfs"},{"name":"/dev/vdb","fstype":"xfs"},{"name":"/dev/vdd","fstype":"xfs"}]},
{"nodeid":"4260bac6-7898-4b2d-8f7a-a963331e0572","nodetype":["OSD"],"disks":[{"name":"/dev/vdc","fstype":"xfs"},{"name":"/dev/vdb","fstype":"xfs"}]},{"nodeid":"7173a316-4abb-4aed-9b5d-a9d7c9d9bf5d","nodetype":["OSD"],"disks":[{"name":"/dev/vdb","fstype":"xfs"}]},
{"nodeid":"b6087f7c-2beb-4f17-908f-8df62772e14f","nodetype":["OSD"],"disks":[{"name":"/dev/vdb","fstype":"xfs"}]}],"networks":{"cluster":"172.16.180.0/24","public":"172.16.180.0/24"}}

As you can see, I was able to select in the UI the same nodes for cluster creation as in the first case.
I expect that this is not a problem of the UI, but rather that the nodes from the previous request are not marked as reserved for "testcl1" from the beginning of the task.

I've tried it twice and got these errors (I expect they differ because I hit a different task phase each time):
- Failed. error: Unable to Acquire the lock for d1178975-a308-4ded-8e99-e659472261e6 Message [POST_Clusters : dahorak-usm1-mon1]
- Failed. error: CreateCluster(): exception happened in python side

Version-Release number of selected component (if applicable):
rhscon-core-0.0.8-11.el7.x86_64
rhscon-ui-0.0.20-1.el7.noarch
rhscon-ceph-0.0.6-11.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. submit cluster creation for a subset of nodes
2. while the first cluster creation is still in progress, submit creation of another cluster with all available nodes (see the sketch after this list)
3. check the result
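
A minimal reproduction sketch in Go (illustrative only; the endpoint path, server address, and payload file names are assumptions, not confirmed skyring details — the "POST_Clusters" task name in the error above only suggests a clusters POST endpoint):

// repro.go - illustrative reproduction only, not part of skyring.
package main

import (
    "bytes"
    "fmt"
    "net/http"
    "os"
    "time"
)

// submit posts a cluster-create payload (the JSON bodies quoted in this
// report, saved to a file) and prints the HTTP status.
func submit(server, payloadFile string) {
    body, err := os.ReadFile(payloadFile)
    if err != nil {
        fmt.Println("read payload:", err)
        return
    }
    // The endpoint path is an assumption, not taken from skyring docs.
    resp, err := http.Post(server+"/api/v1/clusters", "application/json", bytes.NewReader(body))
    if err != nil {
        fmt.Println("request failed:", err)
        return
    }
    defer resp.Body.Close()
    fmt.Println(payloadFile, "->", resp.Status)
}

func main() {
    server := "http://usm-server:8080" // hypothetical server address
    submit(server, "testcl1.json")     // step 1: first cluster creation
    time.Sleep(5 * time.Second)        // step 2: first task is still running...
    submit(server, "testcl2.json")     // ...submit the second creation re-using its nodes
}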

Actual results:
It is possible to start a cluster creation with nodes which are already part of another cluster creation that is in progress.

Expected results:
It should not be possible to use nodes which are already part of an in-progress cluster creation for creating another cluster.
Comment 2 Martin Kudlej 2016-03-08 07:34 EST
Created attachment 1134132 [details]
logs from another server
Comment 3 Nishanth Thomas 2016-03-09 00:19:33 EST
I don't think this is a bug. There is no requirement to lock the nodes from the UI as such. The locking of nodes happens in the back-end just before the cluster creation. Ideally, if you attempt cluster creation from two different UIs with the same set of nodes, one request will go through and the second one will fail because it cannot acquire the lock on the nodes.

Are you seeing a difference in behaviour from the above?
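
For reference, a minimal sketch of the kind of all-or-nothing node locking described above (illustrative only, not the actual skyring locking framework; the type and method names are assumptions):

package main

import (
    "fmt"
    "sync"
)

// nodeLocks is an illustrative in-memory lock table over node IDs.
type nodeLocks struct {
    mu   sync.Mutex
    held map[string]string // node ID -> task that holds the lock
}

func newNodeLocks() *nodeLocks {
    return &nodeLocks{held: make(map[string]string)}
}

// Acquire locks every node for the given task, or fails without locking
// anything if one of the nodes is already held by another task.
func (l *nodeLocks) Acquire(task string, nodes []string) error {
    l.mu.Lock()
    defer l.mu.Unlock()
    for _, n := range nodes {
        if owner, busy := l.held[n]; busy {
            return fmt.Errorf("unable to acquire the lock for %s (held by %s)", n, owner)
        }
    }
    for _, n := range nodes {
        l.held[n] = task
    }
    return nil
}

// Release frees the nodes once the task finishes.
func (l *nodeLocks) Release(nodes []string) {
    l.mu.Lock()
    defer l.mu.Unlock()
    for _, n := range nodes {
        delete(l.held, n)
    }
}

func main() {
    locks := newNodeLocks()
    mon := "d1178975-a308-4ded-8e99-e659472261e6"
    fmt.Println(locks.Acquire("create testcl1", []string{mon})) // <nil>
    fmt.Println(locks.Acquire("create testcl2", []string{mon})) // lock error, second request fails
}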
Comment 4 Nishanth Thomas 2016-03-09 00:20:40 EST
Need info on the above question
Comment 5 Martin Kudlej 2016-03-09 09:48:53 EST
I would like to describe it again. There are 2 real scenarios:

a) Creating a cluster from 2 different browsers.
Browser 1: the user has submitted cluster creation from the UI.
Browser 2: the "Select Hosts" page of the cluster creation wizard is loaded a few seconds after the event on the previous line, while the cluster 1 task from Browser 1 is not yet finished. At that point the UI loads the list of all available (accepted and free) nodes from the backend.

What I expect is that Browser 2 loads the list of free nodes WITHOUT the nodes used by Browser 1 for creating its cluster.

b) Creating 2 clusters from the same browser.
Browser 1, cluster 1: the user has submitted cluster creation from the UI.
Browser 1, cluster 2: the "Select Hosts" page of the cluster creation wizard is loaded before the cluster 1 creation task is finished. At that point the UI loads the list of all available (accepted and free) nodes from the backend.

What I expect is that Browser 1, cluster 2 loads the list of free nodes WITHOUT the nodes used by Browser 1, cluster 1.


I agree that if 2 users from 2 browsers try to create clusters from the same nodes (or there is an intersection), one of the tasks should fail. In that case the "Select Hosts" page of the cluster creation wizard loads the same list of nodes in both browsers. But this is not the case described by this BZ.
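
A rough sketch of the filtering being requested here (illustrative only; the types and field names are assumptions, not the actual skyring data model): when the UI asks for selectable hosts, the backend would also exclude nodes referenced by any still-running cluster-creation task.

package main

import "fmt"

// Node and Task are simplified stand-ins for the real skyring types.
type Node struct {
    ID   string
    Free bool // accepted and not yet part of any cluster
}

type Task struct {
    Done    bool
    NodeIDs []string // nodes used by an in-progress cluster creation
}

// selectableNodes returns nodes that are free AND not reserved by a
// cluster-creation task that is still running.
func selectableNodes(nodes []Node, tasks []Task) []Node {
    reserved := make(map[string]bool)
    for _, t := range tasks {
        if t.Done {
            continue
        }
        for _, id := range t.NodeIDs {
            reserved[id] = true
        }
    }
    var out []Node
    for _, n := range nodes {
        if n.Free && !reserved[n.ID] {
            out = append(out, n)
        }
    }
    return out
}

func main() {
    nodes := []Node{
        {ID: "d1178975-a308-4ded-8e99-e659472261e6", Free: true}, // used by the testcl1 task
        {ID: "02d012b9-8201-40b2-8336-d0eed02d2ede", Free: true},
    }
    tasks := []Task{{Done: false, NodeIDs: []string{"d1178975-a308-4ded-8e99-e659472261e6"}}}
    fmt.Println(selectableNodes(nodes, tasks)) // only the second node stays selectable
}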
Comment 6 Nishanth Thomas 2016-03-10 06:21:18 EST
USM will not mark a node as participating in a cluster until the cluster is created and the node is added to it. The UI is in no position to figure out whether a node has already been selected for creating another cluster. In the core, the locking framework will ensure that a node is not allowed to be used for creating multiple clusters and will error out appropriately. This is how it behaves and it is not a bug.
Comment 7 Martin Kudlej 2016-03-10 06:39:03 EST
I understand how it is implemented.
But from my perspective the UI should show in the cluster creation wizard only nodes which are free and are not planned to be part of any cluster (i.e. not part of a cluster creation that is in progress). Otherwise the user has to remember which nodes he marked for the previous cluster.
I think this should be implemented somehow. If you think this is a problem of the UI, feel free to change the BZ component.

If you think this is not a bug, there should be a note in the documentation that only one cluster creation task should be in progress at a time, so the user has to wait until the cluster creation task ends before starting to create another cluster.
Comment 8 Nishanth Thomas 2016-03-10 07:02:20 EST
I think there is a misunderstanding here. I am not saying you cannot create two clusters in parallel. If you initiate two cluster creations in parallel and the same set of nodes is used, the second request will fail with a clear error message.
This is exactly what the requirement talks about (MVP-004 - Modal / Atomic Operations - Login Axis).
Comment 9 Martin Kudlej 2016-03-10 07:50:14 EST
I don't say that you cannot create 2 clusters in parallel. I say that this is not effectively possible from the UI. If a user creates a cluster via the API, it is his responsibility to supply the right list of nodes, but in the case of the UI this is not true. It is the purpose of the UI to provide a list of nodes excluding nodes already in use and nodes which were already marked for creating another cluster.
Comment 10 RHEL Product and Program Management 2016-03-22 06:15:55 EDT
Development Management has reviewed and declined this request.
You may appeal this decision by reopening this request.
