Bug 1165226 - [engine-backend] Adding a RHEL-X host to a cluster where there is already a RHEL-Y host fails and the error isn't visible to the user
Keywords:
Status: CLOSED DUPLICATE of bug 1167827
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.5.0
Hardware: x86_64
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.5.0
Assignee: Tomas Jelinek
QA Contact:
URL:
Whiteboard: virt
Duplicates: 1167096 (view as bug list)
Depends On:
Blocks:
TreeView+ depends on / blocked
 
Reported: 2014-11-18 15:26 UTC by Elad
Modified: 2014-11-26 07:24 UTC (History)
15 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-11-26 07:24:49 UTC
oVirt Team: ---
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
engine.log (103.04 KB, application/x-gzip)
2014-11-18 15:26 UTC, Elad
no flags Details
screenshot (115.03 KB, image/png)
2014-11-23 10:46 UTC, Elad
no flags Details
screenshot2 (111.94 KB, image/png)
2014-11-23 11:27 UTC, Elad
no flags Details

Description Elad 2014-11-18 15:26:02 UTC
Created attachment 958634 [details]
engine.log

Description of problem:
I tried to add a RHEL7 host to a cluster that already contained a RHEL6.6 host. The operation wasn't blocked by the engine, although it should be, because hosts with different OS versions are not allowed in the same cluster. The RHEL7 host didn't move to Up since it has a different OS; instead it became Non-Operational, with no explanation to the user in webadmin.

Version-Release number of selected component (if applicable):
rhevm-3.5.0-0.20.el6ev.noarch


How reproducible:
Always

Steps to Reproduce:
1. Have a DC with cluster with RHEL6 host in it
2. Try to add a RHEL7 host to the cluster and activate it

Actual results:
The host was added to the cluster and failed to move to Up. It moved to Non-Operational with this message in engine.log:

2014-11-18 16:42:36,553 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-36) [11f8f365] Host 458b0be9-3f95-4e3f-a8b6-d8629394c512 : green-vdsc is already in NonOperational status for reason MIXING_RHEL_VERSIONS_IN_CLUSTER. SetNonOperationalVds command is skipped.

The operation wasn't blocked by CDA (CanDoAction) validation

Expected results:
Adding a RHEL-X host to a cluster where there is already a RHEL-Y host should be blocked.
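By way of illustration only — the class and method names below are hypothetical, not the real ovirt-engine API — a minimal Java sketch of the kind of up-front check the reporter is asking for: derive the OS family of the candidate host and reject the add if it differs from any host already in the cluster, instead of letting the host go Non-Operational afterwards.

```java
import java.util.List;

// Hypothetical sketch of an up-front validation that could block adding
// a host whose OS family differs from hosts already in the cluster.
// Names are illustrative; this is not the actual ovirt-engine code.
public class OsFamilyCheck {

    // Extracts the major OS family from a version string such as
    // "RHEL - 6.6" or "RHEL - 7.0", yielding "RHEL6" or "RHEL7".
    static String osFamily(String osVersion) {
        String[] parts = osVersion.replace(" ", "").split("-");
        String name = parts[0];
        String major = parts[1].split("\\.")[0];
        return name + major;
    }

    // Returns true if the candidate host's OS family matches every host
    // already in the cluster (an empty cluster always passes).
    static boolean canAddHost(String candidateOs, List<String> clusterHostOs) {
        String candidateFamily = osFamily(candidateOs);
        return clusterHostOs.stream()
                .map(OsFamilyCheck::osFamily)
                .allMatch(candidateFamily::equals);
    }

    public static void main(String[] args) {
        // A RHEL7 host joining a cluster that already holds a RHEL6.6 host
        // would be rejected here, before it can turn Non-Operational.
        System.out.println(canAddHost("RHEL - 7.0", List.of("RHEL - 6.6"))); // false
        System.out.println(canAddHost("RHEL - 6.5", List.of("RHEL - 6.6"))); // true
    }
}
```

In a real CanDoAction-style validator the failing case would surface a translated message (e.g. the reason behind MIXING_RHEL_VERSIONS_IN_CLUSTER) to the user rather than only logging it.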

Additional info:
engine.log

Comment 1 Barak 2014-11-20 14:14:27 UTC
This is working as designed.
There are many flows in which we move a host between clusters and it then becomes non-operational.

The host is moved first, then its capabilities are queried, and the decision is made based on that.

Moving to CLOSED NOTABUG

Comment 2 Elad 2014-11-20 14:19:23 UTC
Barak, 
The whole experience in this flow is bad. Not only is the addition of the host not blocked on CDA, the message shown to the user is not informative; it doesn't indicate the reason the host became non-operational. Only after digging into engine.log did I see the reason.

If the operation isn't blocked, at least add an informative message for the user in the Events tab.

Comment 4 Barak 2014-11-23 10:31:45 UTC
I'm sure this also appears in the event log (and in the host's General subtab).
The reason is clear: "...MIXING_RHEL_VERSIONS_IN_CLUSTER"

Moving to CLOSED NOTABUG.

Please reopen only if the reason does not appear in the events.

Comment 5 Elad 2014-11-23 10:46:10 UTC
Created attachment 960414 [details]
screenshot

2014-11-23 12:43:18,425 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-4) [2b9c99b5] Host 7bc2052c-a47d-4f95-b151-5d5196e4400c : nott-vds2 is already in NonOperational status for reason MIXING_RHEL_VERSIONS_IN_CLUSTER. SetNonOperationalVds command is skipped.


Screenshot from webadmin attached

Comment 6 Elad 2014-11-23 11:27:12 UTC
Created attachment 960416 [details]
screenshot2

Also, no reason is shown in the host's Events tab.

Attached screenshot2

Comment 7 Oved Ourfali 2014-11-25 07:56:36 UTC
*** Bug 1167096 has been marked as a duplicate of this bug. ***

Comment 8 Michal Skrivanek 2014-11-26 07:24:49 UTC
This is being tracked by bug 1167827, which already has a patch attached.

*** This bug has been marked as a duplicate of bug 1167827 ***

