Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1165226

Summary: [engine-backend] Adding a RHEL-X host to a cluster where there is already a RHEL-Y host fails and the error isn't visible to the user
Product: Red Hat Enterprise Virtualization Manager
Reporter: Elad <ebenahar>
Component: ovirt-engine
Assignee: Tomas Jelinek <tjelinek>
Status: CLOSED DUPLICATE
QA Contact:
Severity: high
Docs Contact:
Priority: unspecified
Version: 3.5.0
CC: bazulay, ebenahar, ecohen, fdeutsch, gklein, iheim, kfryklun, lpeer, lsurette, mgoldboi, michal.skrivanek, oourfali, rbalakri, Rhev-m-bugs, yeylon
Target Milestone: ---
Keywords: Reopened
Target Release: 3.5.0
Hardware: x86_64
OS: Unspecified
Whiteboard: virt
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-11-26 07:24:49 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
engine.log
screenshot
screenshot2

Description Elad 2014-11-18 15:26:02 UTC
Created attachment 958634 [details]
engine.log

Description of problem:
I tried to add a RHEL7 host to a cluster where there was already a RHEL6.6 host. The operation wasn't blocked by the engine. It should be blocked, because hosts with different OS versions are not allowed in the same cluster. The RHEL7 host didn't move to Up since it has a different OS; it moved to Non-Operational, with no explanation to the user in webadmin.

Version-Release number of selected component (if applicable):
rhevm-3.5.0-0.20.el6ev.noarch


How reproducible:
Always

Steps to Reproduce:
1. Have a DC with cluster with RHEL6 host in it
2. Try to add a RHEL7 host to the cluster and activate it
3.

Actual results:
The host was added to the cluster and it failed to move to Up. It moved to Non-Operational with this message in engine.log:

2014-11-18 16:42:36,553 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-36) [11f8f365] Host 458b0be9-3f95-4e3f-a8b6-d8629394c512 : green-vdsc is already in NonOperational status for reason MIXING_RHEL_VERSIONS_IN_CLUSTER. SetNonOperationalVds command is skipped.

The operation wasn't blocked on CDA (CanDoAction).

Expected results:
Adding a RHEL-X host to a cluster where there is already a RHEL-Y host should be blocked.

Additional info:
engine.log

Comment 1 Barak 2014-11-20 14:14:27 UTC
This works as designed.
There are many flows in which we move a host between clusters and it then becomes non-operational.

This is done after the move: the host's capabilities are queried, and the decision is made based on that.

Moving to CLOSED NOTABUG.
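[Editor's illustration, not part of the original report: the comment above explains that the mixed-OS check only runs after the host joins the cluster, whereas the reporter expected an up-front CanDoAction (CDA) block. The following is a minimal hypothetical sketch of such a CDA-style check; the names `HostOs` and `canAddHostToCluster` are illustrative assumptions and are not the actual ovirt-engine code.]

```java
import java.util.List;

// Hypothetical sketch of an up-front CDA-style validation that would
// block adding a host whose major OS version differs from hosts
// already in the cluster, instead of letting it go Non-Operational
// afterwards. All names here are assumptions for illustration.
public class ClusterOsCheck {
    record HostOs(String family, int majorVersion) {}

    // Returns null when the add is allowed, otherwise a failure
    // reason analogous to MIXING_RHEL_VERSIONS_IN_CLUSTER.
    static String canAddHostToCluster(HostOs newHost, List<HostOs> existing) {
        for (HostOs h : existing) {
            if (h.family().equals(newHost.family())
                    && h.majorVersion() != newHost.majorVersion()) {
                return "MIXING_RHEL_VERSIONS_IN_CLUSTER";
            }
        }
        return null; // allowed
    }

    public static void main(String[] args) {
        List<HostOs> cluster = List.of(new HostOs("RHEL", 6));
        // A RHEL7 host joining a RHEL6 cluster would be rejected here,
        // with the reason surfaced to the user before any state change.
        System.out.println(canAddHostToCluster(new HostOs("RHEL", 7), cluster));
        System.out.println(canAddHostToCluster(new HostOs("RHEL", 6), cluster));
    }
}
```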

Comment 2 Elad 2014-11-20 14:19:23 UTC
Barak,
The whole experience in this flow is bad. Not only is adding the host not blocked on CDA, but the message shown to the user is not informative; it doesn't indicate the reason the host became non-operational. Only after digging into engine.log did I see the reason.

If the operation isn't blocked, at least show an informative message to the user in the Events tab.

Comment 4 Barak 2014-11-23 10:31:45 UTC
I'm sure this also appears in the event log (and in the host's General subtab).
The reason is clear: "...MIXING_RHEL_VERSIONS_IN_CLUSTER".

Moving to CLOSED NOTABUG.

Please reopen only if the reason does not appear in the events.

Comment 5 Elad 2014-11-23 10:46:10 UTC
Created attachment 960414 [details]
screenshot

2014-11-23 12:43:18,425 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-4) [2b9c99b5] Host 7bc2052c-a47d-4f95-b151-5d5196e4400c : nott-vds2 is already in NonOperational status for reason MIXING_RHEL_VERSIONS_IN_CLUSTER. SetNonOperationalVds command is skipped.


A screenshot from webadmin is attached.

Comment 6 Elad 2014-11-23 11:27:12 UTC
Created attachment 960416 [details]
screenshot2

Also, no reason is shown in the host's Events tab.

Attached as screenshot2.

Comment 7 Oved Ourfali 2014-11-25 07:56:36 UTC
*** Bug 1167096 has been marked as a duplicate of this bug. ***

Comment 8 Michal Skrivanek 2014-11-26 07:24:49 UTC
This is being tracked by bug 1167827, which already has a patch attached.

*** This bug has been marked as a duplicate of bug 1167827 ***