Bug 1261788 - Unclear error message received while adding a RHEL 6.7 host to a RHEVM 3.6 default cluster.
Status: CLOSED WORKSFORME
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.6.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ovirt-3.6.1
Target Release: 3.6.0
Assignee: Moti Asayag
QA Contact: Pavol Brilla
Whiteboard: infra
Duplicates: 1259186
 
Reported: 2015-09-10 07:27 UTC by Nikolai Sednev
Modified: 2016-02-10 19:14 UTC
CC List: 12 users

Doc Type: Bug Fix
Last Closed: 2015-11-02 07:57:25 UTC
oVirt Team: Infra


Attachments
engine logs (293.84 KB, application/x-gzip)
2015-09-10 07:27 UTC, Nikolai Sednev

Description Nikolai Sednev 2015-09-10 07:27:34 UTC
Created attachment 1072039 [details]
engine logs

Description of problem:
While adding a RHEL 6.7 host to a RHEVM 3.6 default host cluster with 3.6 compatibility mode, an unclear error message is received.
 
 
Version-Release number of selected component (if applicable):
On engine:
qemu-guest-agent-0.12.1.2-2.479.el6.x86_64
rhevm-3.6.0-0.12.master.el6.noarch
ovirt-vmconsole-1.0.0-0.0.master.el6ev.noarch
 
 
How reproducible:
100%
 
Steps to Reproduce:
1. Install RHEVM 3.6.
2. Add a RHEL 6.7 host to the 3.6 default host cluster.
 
Actual results:
The reported error is unclear: "Host <FQDN of the host> installation failed. Command returned failure code 1 during SSH session 'root@<FQDN of the host>'.".
 
Expected results:
A meaningful error message should be reported instead, such as "Host's OS is lower than RHEL/RHEVH7.2" or "Incompatible OS on host".
 
Additional info:
Logs from the engine are attached.

Comment 1 meital avital 2015-09-10 09:22:07 UTC
*** Bug 1259186 has been marked as a duplicate of this bug. ***

Comment 2 Moti Asayag 2015-10-25 18:47:45 UTC
By examining the log, it seems the vdsmd service was already running and somehow "busy":

"2015-08-31 15:57:19 DEBUG otopi.plugins.otopi.services.rhel plugin.execute:935 execute-output: ('/sbin/service', 'vdsmd', 'stop') stdout:
cannot stop vdsm, operation is locked[FAILED]"

and the stacktrace contains:
"RuntimeError: Command '/sbin/service' failed to execute
2015-08-31 15:57:19 ERROR otopi.context context._executeMethod:164 Failed to execute stage 'Package installation': Command '/sbin/service' failed to execute"

These errors indicate that restarting vdsm failed, regardless of the cluster levels supported by that vdsm.

Were you able to add the same host into a 3.5 cluster on the same engine?
Please try to reproduce with a host that does not have vdsm installed.
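
A minimal diagnostic sketch of that step, assuming it is run as root directly on the candidate RHEL 6 host (only the '/sbin/service vdsmd' commands come from the log above; everything else is illustrative):

#!/usr/bin/env python
# Diagnostic sketch (assumption: run as root on the candidate RHEL 6 host).
# It mimics the otopi host-deploy step quoted above -- '/sbin/service vdsmd
# stop' -- which failed with "cannot stop vdsm, operation is locked".
# Note that, like host-deploy, it really does stop vdsmd if nothing holds
# the lock.
from __future__ import print_function
import subprocess


def run(cmd):
    """Run a command and return (exit code, combined stdout/stderr)."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    out, _ = proc.communicate()
    return proc.returncode, out.decode('utf-8', 'replace')


if __name__ == '__main__':
    for action in ('status', 'stop'):
        rc, out = run(['/sbin/service', 'vdsmd', action])
        print('vdsmd %s -> rc=%d\n%s' % (action, rc, out))
        if 'operation is locked' in out:
            print('vdsm refuses to stop: another operation still holds its '
                  'lock, so host-deploy would fail exactly as in the '
                  'attached log.')

If the lock message shows up here as well, the failure is on the host side and retrying Add Host from the engine will keep failing until vdsm can be stopped cleanly.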

Comment 4 Oved Ourfali 2015-11-02 07:57:25 UTC
Due to comment #2 and the lack of response, closing due to insufficient data.

Comment 5 Nikolai Sednev 2015-11-10 14:24:46 UTC
(In reply to Moti Asayag from comment #2)
> By examining the log, it seems the vdsmd service was already running and
> somehow "busy":
> 
> "2015-08-31 15:57:19 DEBUG otopi.plugins.otopi.services.rhel
> plugin.execute:935 execute-output: ('/sbin/service', 'vdsmd', 'stop') stdout:
> cannot stop vdsm, operation is locked[FAILED]"
> 
> and the stacktrace contains:
> "RuntimeError: Command '/sbin/service' failed to execute
> 2015-08-31 15:57:19 ERROR otopi.context context._executeMethod:164 Failed to
> execute stage 'Package installation': Command '/sbin/service' failed to
> execute"
> 
> These errors indicate that restarting vdsm failed, regardless of the
> cluster levels supported by that vdsm.
> 
> Were you able to add the same host into a 3.5 cluster on the same engine?
> Please try to reproduce with a host that does not have vdsm installed.

I did not reproduce this error on 3.6 while trying to add the RHEL 6.7 host into the default 3.6 host cluster.
I received this error instead: "Nov 10, 2015 4:05:01 PM Host black-vdsb.qa.lab.tlv.redhat.com is compatible with versions (3.0,3.1,3.2,3.3,3.4,3.5) and cannot join Cluster Default which is set to version 3.6", which seems logical to me, so the bug should now be closed as WORKSFORME.

When adding the RHEL 6.7 host directly to a 3.5-compatible host cluster on the same 3.6 engine, it was added easily and without errors, exactly as it should be.
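
For illustration, a rough sketch of the version check behind that message; the real validation is inside ovirt-engine and is not shown in this report, so the function name and wiring below are hypothetical:

from __future__ import print_function

# Hypothetical illustration of the compatibility check behind the event
# quoted above; names and message wording are assumptions, not engine code.


def check_cluster_level(host_name, supported_levels, cluster_name,
                        cluster_level):
    """Return None if the host can join the cluster, else a clear error."""
    if cluster_level in supported_levels:
        return None
    return ('Host %s is compatible with versions (%s) and cannot join '
            'Cluster %s which is set to version %s'
            % (host_name, ','.join(supported_levels), cluster_name,
               cluster_level))


if __name__ == '__main__':
    # vdsm-4.16.x on the RHEL 6.7 host reports cluster levels up to 3.5.
    error = check_cluster_level('black-vdsb.qa.lab.tlv.redhat.com',
                                ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5'],
                                'Default', '3.6')
    print(error or 'Host is compatible with the cluster.')

The point is simply that comparing the host's reported cluster levels against the cluster's compatibility version yields a self-explanatory message, unlike the generic SSH failure from the original report.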

RHEL6.7 host:
sanlock-2.8-2.el6_5.x86_64
qemu-kvm-rhev-0.12.1.2-2.479.el6_7.2.x86_64
libvirt-client-0.10.2-54.el6.x86_64
vdsm-4.16.29-1.el6ev.x86_64
mom-0.4.1-5.el6ev.noarch
Linux version 2.6.32-573.7.1.el6.x86_64 (mockbuild.eng.bos.redhat.com) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-16) (GCC) ) #1 SMP Thu Sep 10 13:42:16 EDT 2015

Engine:
ovirt-vmconsole-proxy-1.0.0-1.el6ev.noarch
rhevm-3.6.0.3-0.1.el6.noarch
ovirt-vmconsole-1.0.0-1.el6ev.noarch
rhevm-guest-agent-common-1.0.11-2.el6ev.noarch
Linux version 2.6.32-573.7.1.el6.x86_64 (mockbuild.eng.bos.redhat.com) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-16) (GCC) ) #1 SMP Thu Sep 10 13:42:16 EDT 2015

