Bug 1004675

Summary: [Admin Portal] Attempt to add a 3.2 host into 3.3 env fails but the host cannot be removed
Product: Red Hat Enterprise Virtualization Manager
Reporter: Jiri Belka <jbelka>
Component: ovirt-engine-webadmin-portal
Assignee: Ravi Nori <rnori>
Status: CLOSED CURRENTRELEASE
QA Contact: Tareq Alayan <talayan>
Severity: high
Docs Contact:
Priority: unspecified
Version: 3.3.0
CC: aberezin, acathrow, bazulay, ecohen, eedri, iheim, jbelka, Rhev-m-bugs, talayan, yeylon, yzaslavs
Target Milestone: ---
Keywords: Triaged
Target Release: 3.3.0
Hardware: Unspecified
OS: Unspecified
Whiteboard: infra
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Bug
Regression: ---
oVirt Team: Infra
Bug Depends On:
Bug Blocks: 881055

Attachments (flags: none):
- engine.log, server.log, vdsm.log
- logs.tar.gz

Description Jiri Belka 2013-09-05 08:07:54 UTC
Created attachment 794070 [details]
engine.log, server.log, vdsm.log

Description of problem:

I tried to add a second host into a 3.3 setup (is13). This host is 3.2 with sf20.1. The attempt to add the host fails and the host ends up in the 'Non Operational' state in the Admin Portal. I cannot remove it, as neither the 'Remove' nor the 'Maintenance' button/action is active, so I cannot do anything with this host.

While adding this host, the following event is logged:

2013-Sep-05, 09:49
Host dell-r210ii-03 is compatible with versions (3.0,3.1,3.2) and cannot join Cluster Default which is set to version 3.3.
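For context on that event: the engine refuses the host because the cluster's compatibility version is not among the versions the host reports as supported. A minimal sketch of that kind of check, using hypothetical names rather than the actual ovirt-engine code:

import java.util.Set;

// Illustrative only: models the check behind the event above.
// Class and method names are hypothetical, not ovirt-engine API.
public class HostCompatibilityCheck {

    // The host reports the cluster levels it supports (here 3.0, 3.1, 3.2);
    // it may only join a cluster whose compatibility version is in that set.
    static boolean canJoinCluster(Set<String> hostSupportedVersions,
                                  String clusterCompatibilityVersion) {
        return hostSupportedVersions.contains(clusterCompatibilityVersion);
    }

    public static void main(String[] args) {
        Set<String> hostVersions = Set.of("3.0", "3.1", "3.2");
        System.out.println(canJoinCluster(hostVersions, "3.3")); // false -> cannot join
    }
}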

Version-Release number of selected component (if applicable):
is13

How reproducible:
100%

Steps to Reproduce:
1. is13 setup with 3.3 host and nfs storage
2. add 3.2 host (sf20.1)
3. try to delete 3.2 host

Actual results:
The 3.2 host is in the 'Non Operational' state and cannot be deleted from the setup.

Expected results:
If adding a host fails, it should always be possible to remove that host.
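
To illustrate the expectation, here is a hypothetical sketch (not the actual webadmin-portal logic) of status-driven action enablement: the Remove and Maintenance actions are enabled or greyed out based on the host's status, so a status like 'Non Operational' after a failed add must map to at least one enabled path back to removal.

import java.util.EnumSet;
import java.util.Set;

// Hypothetical illustration; the real webadmin-portal/engine rules differ in detail.
public class HostActionAvailability {

    enum HostStatus { UP, NON_OPERATIONAL, PREPARING_FOR_MAINTENANCE, MAINTENANCE, INSTALL_FAILED }

    // Assumption for this sketch: Maintenance can be requested from any
    // non-transitional state, and Remove only once the host is in
    // Maintenance (or its install failed).
    static final Set<HostStatus> MAINTENANCE_FROM =
            EnumSet.of(HostStatus.UP, HostStatus.NON_OPERATIONAL, HostStatus.INSTALL_FAILED);
    static final Set<HostStatus> REMOVE_FROM =
            EnumSet.of(HostStatus.MAINTENANCE, HostStatus.INSTALL_FAILED);

    static boolean maintenanceEnabled(HostStatus s) { return MAINTENANCE_FROM.contains(s); }
    static boolean removeEnabled(HostStatus s)      { return REMOVE_FROM.contains(s); }

    public static void main(String[] args) {
        // The reported bug: a Non Operational host where neither action was
        // enabled, leaving no path to removal.
        HostStatus s = HostStatus.NON_OPERATIONAL;
        System.out.println("Maintenance enabled: " + maintenanceEnabled(s)); // expected: true
        System.out.println("Remove enabled: " + removeEnabled(s));           // expected: false
    }
}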

Additional info:

Comment 1 Tareq Alayan 2013-11-14 17:34:32 UTC
Created attachment 824076 [details]
logs.tar.gz

Comment 2 Tareq Alayan 2013-11-14 17:38:14 UTC
Tested on rhevm-3.3.0-0.33.beta1.el6ev.noarch.

Failed QA; reassigning.

Added a 3.2 vdsm host to a 3.3 engine, in a DC with 3.3 compatibility mode.
The host became Non Operational.
The Remove button was disabled.
The Maintenance button was enabled; I pressed it to put the host into maintenance, but it stayed in 'Preparing for Maintenance' and never moved to the Maintenance state.

logs.tar.gz attached.

Comment 3 Ravi Nori 2013-11-14 17:45:02 UTC
The problem reported in FailedQA is different from the original bug. The original bug was that both the Maintenance and Remove buttons were disabled; the new problem is that the host doesn't move to Maintenance after clicking the Maintenance button.
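
To make the distinction concrete, an illustrative state-transition sketch (names are assumptions, not engine code): clicking Maintenance first moves a host to 'Preparing for Maintenance', and it should reach 'Maintenance' once nothing blocks the transition; the FailedQA report is about this second step never completing.

// Illustrative sketch only; state names and logic are assumptions.
public class MaintenanceTransition {

    enum HostStatus { NON_OPERATIONAL, PREPARING_FOR_MAINTENANCE, MAINTENANCE }

    // Clicking the Maintenance button requests the transition ...
    static HostStatus requestMaintenance(HostStatus current) {
        return current == HostStatus.NON_OPERATIONAL
                ? HostStatus.PREPARING_FOR_MAINTENANCE
                : current;
    }

    // ... and the host only reaches Maintenance once nothing blocks it
    // (e.g. no VMs left to migrate, no pending operations).
    static HostStatus tick(HostStatus current, boolean blockingWorkRemains) {
        if (current == HostStatus.PREPARING_FOR_MAINTENANCE && !blockingWorkRemains) {
            return HostStatus.MAINTENANCE;
        }
        return current; // comment 2's symptom: stuck here when this never fires
    }
}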

Is this 100% reproducible?

Comment 4 Barak 2013-11-18 08:57:41 UTC
Per comment #14, moving this bug back to ON_QA.

A separate bug, Bug 1031536, was opened for the different issue reported in comment #2.

Comment 5 Tareq Alayan 2013-11-21 11:21:41 UTC
Verified on rhevm-3.3.0-0.33.beta1.el6ev.noarch:

1. Create a 3.3 DC and Cluster
2. Add a 3.3 host (vdsm from is23.1)
3. Set up NFS storage on the DC
4. Add a 3.2 host (vdsm from sf21.1)
5. After installation, the 3.2 host became Non Operational
6. Move the 3.2 host to Maintenance -> host successfully moved to Maintenance
7. Remove the 3.2 host -> OK

Comment 6 Itamar Heim 2014-01-21 22:31:23 UTC
Closing - RHEV 3.3 Released
