Bug 1031536 - Failing to move a 3.2 host to maintenance after adding it to a 3.3 cluster (and the host moved to non-operational)
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 3.3.0
Assignee: Martin Perina
QA Contact:
URL:
Whiteboard: infra
Depends On:
Blocks:
 
Reported: 2013-11-18 08:56 UTC by Barak
Modified: 2016-02-10 19:32 UTC
5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-11-19 10:02:30 UTC
oVirt Team: Infra
Target Upstream Version:


Attachments: none

Description Barak 2013-11-18 08:56:09 UTC
This issue was initially reported by Jiri Belka on Bug 1004675.
Since it is a different issue than the original bug, this separate bug has been opened.



Description of problem:

Add a 3.2 vdsm host to a 3.3 engine, in a DC with 3.3 compatibility mode.
The host became non-operational.
The Remove button was disabled.
The Maintenance button was enabled. I pressed it to put the host into maintenance, but it stayed in "Preparing for Maintenance" and never reached the Maintenance state.



Version-Release number of selected component (if applicable):

rhevm-3.3.0-0.33.beta1.el6ev.noarch

Comment 1 Martin Perina 2013-11-19 10:02:30 UTC
I was unable to reproduce this on oVirt master and RHEVM is23.1. Here are the steps:

1) Create 3.3 DC and Cluster
2) Add 3.3 host (vdsm from is23.1)
3) Set up NFS storage on the DC
4) Everything is OK
5) Add 3.2 host (vdsm from sf21.1)
6) After installation 3.2 host became NonResponsive
7) Move 3.2 host to Maintenance -> host successfully moved to Maintenance

