Bug 1031536

Summary: Failure to move a 3.2 host to maintenance after adding it to a 3.3 cluster (the host moved to non-operational)
Product: Red Hat Enterprise Virtualization Manager
Component: ovirt-engine
Version: 3.3.0
Target Release: 3.3.0
Status: CLOSED WORKSFORME
Severity: unspecified
Priority: unspecified
Hardware: Unspecified
OS: Unspecified
Whiteboard: infra
oVirt Team: Infra
Reporter: Barak <bazulay>
Assignee: Martin Perina <mperina>
CC: acathrow, iheim, lpeer, Rhev-m-bugs, yeylon
Type: Bug
Doc Type: Bug Fix
Last Closed: 2013-11-19 10:02:30 UTC

Description Barak 2013-11-18 08:56:09 UTC
This issue was initially reported by Jiri Belka on Bug 1004675.
As it is a different issue from the original bug, this bug has been opened separately.



Description of problem:

Add a 3.2 vdsm host to a 3.3 engine, into a DC with 3.3 compatibility mode.
The host became non-operational.
The Remove button was disabled.
The Maintenance button was enabled. I pressed it to put the host into maintenance, but it stayed in "Preparing for Maintenance" and never moved to the Maintenance state.
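
For anyone reproducing this from a script instead of the Admin Portal, below is a minimal sketch that triggers the same "move to maintenance" action through the oVirt Python SDK (ovirtsdk4, which postdates this report); the engine URL, credentials, and host name are placeholders, not values from this bug.

# Minimal sketch, assuming the oVirt Python SDK v4 (ovirtsdk4).
# The engine URL, credentials, and host name are hypothetical placeholders.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder engine
    username='admin@internal',
    password='password',
    insecure=True,  # skip TLS verification; acceptable for a test setup only
)

hosts_service = connection.system_service().hosts_service()
host = hosts_service.list(search='name=host32')[0]  # look up the 3.2 host by name

# deactivate() is the API counterpart of the Maintenance button: it asks the
# engine to move the host into maintenance mode.
hosts_service.host_service(host.id).deactivate()

connection.close()

A host exhibiting this bug should hang in the same "Preparing for Maintenance" state after the deactivate() call as it does after pressing the button.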



Version-Release number of selected component (if applicable):

rhevm-3.3.0-0.33.beta1.el6ev.noarch

Comment 1 Martin Perina 2013-11-19 10:02:30 UTC
I was unable to reproduce this on oVirt master or on RHEVM is23.1. Here are the steps:

1) Create a 3.3 DC and Cluster
2) Add a 3.3 host (vdsm from is23.1)
3) Set up NFS storage on the DC
4) Everything is OK
5) Add a 3.2 host (vdsm from sf21.1)
6) After installation, the 3.2 host became NonResponsive
7) Move the 3.2 host to Maintenance -> the host successfully moved to Maintenance
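
To check step 7 without watching the UI, one could poll the host status until it reaches Maintenance. The sketch below again assumes ovirtsdk4 with placeholder connection details and host name; a host hitting the failure described above would instead stay in PREPARING_FOR_MAINTENANCE until the timeout.

# Minimal polling sketch, assuming ovirtsdk4; connection details and the
# host name 'host32' are hypothetical placeholders.
import time
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)
hosts_service = connection.system_service().hosts_service()
host = hosts_service.list(search='name=host32')[0]
host_service = hosts_service.host_service(host.id)

# Poll for up to 5 minutes; a healthy flow ends in MAINTENANCE.
deadline = time.time() + 300
status = host_service.get().status
while time.time() < deadline and status != types.HostStatus.MAINTENANCE:
    time.sleep(5)
    status = host_service.get().status

if status == types.HostStatus.MAINTENANCE:
    print('host reached Maintenance')
else:
    # The failure in this bug would surface here, with the host still in
    # PREPARING_FOR_MAINTENANCE after the timeout.
    print('timed out, last status: %s' % status)

connection.close()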