Bug 1059435 - PRD35 - [RFE] RHEVM Self Hosted Engine on RHEV-H
Summary: PRD35 - [RFE] RHEVM Self Hosted Engine on RHEV-H
Keywords:
Status: CLOSED DUPLICATE of bug 1250199
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: RFEs
Version: 3.3.0
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 3.5.4
Assignee: Fabian Deutsch
QA Contact: Nikolai Sednev
URL:
Whiteboard: node
Duplicates: 1091188 1124920 1198639
Depends On: 1094842 rebase-ovirt-node-3.1 1139019 1139020 1144917 1151342 1151347 1159166 1168486 1168601 1171614 1172061 1200474 1204470 1205225 1206884 1208780 1209861 1218634 1230096 1230614 1230638 1231614 1235350 1235591 1236518 1239285 1241470 1244636 1244740 1245143
Blocks: 1123938 1241478 1250199
 
Reported: 2014-01-29 21:04 UTC by Anand Nande
Modified: 2019-08-15 03:45 UTC
CC: 25 users

Fixed In Version: vt2.2
Doc Type: Release Note
Doc Text:
This feature allows administrators to take full advantage of Self-Hosted Engine by implementing it on a Red Hat Enterprise Virtualization Hypervisor host.
Clone Of:
Cloned To: 1250199
Environment:
Last Closed: 2015-08-04 20:44:30 UTC
oVirt Team: Node
sherold: Triaged+


Attachments
answers-20150706114639.conf (2.42 KB, text/plain)
2015-07-09 15:24 UTC, Nikolai Sednev
answers-20150706123207.conf (2.54 KB, text/plain)
2015-07-09 15:26 UTC, Nikolai Sednev
answers.conf (2.54 KB, text/plain)
2015-07-09 15:27 UTC, Nikolai Sednev
answers-20150721142009.conf (2.43 KB, text/plain)
2015-07-21 14:24 UTC, Nikolai Sednev


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:0158 normal SHIPPED_LIVE Important: Red Hat Enterprise Virtualization Manager 3.5.0 2015-02-11 22:38:50 UTC
oVirt gerrit 39819 master MERGED spec: Enable HE plugin on Fedora only Never
Red Hat Knowledge Base (Solution) 875583 None None None Never
Red Hat Bugzilla 1091188 None CLOSED PRD35 - [RFE] RHEV-H Support for hosted engine 2019-08-27 11:01:09 UTC

Internal Links: 1091188

Comment 1 Zach Musselman 2014-02-01 17:41:33 UTC
One option that may help until this feature is implemented is the ability to install RHEL with a package set as close as possible to the RHEV-H image. Would it be possible to post a kickstart file and/or a list of the RPMs contained in the RHEV-H image, so one could install the bare minimum packages needed to get as close to an RHEV-H image as possible, with the addition of the ovirt-hosted-engine-setup and ovirt-hosted-engine-ha packages and their dependencies?

Comment 2 Fabian Deutsch 2014-02-05 08:36:59 UTC
The packages used by RHEV-H are given in the isolinux/manifest-rpm.txt file within the RHEV-H iso.

Please note that some files/directories get removed during the RHEV-H build process to minimize the required disk space.
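To follow up on the request in comment 1, the manifest can be compared against a RHEL host's installed packages. A minimal sketch (not an official tool; it assumes manifest-rpm.txt lists one NVR per line, as produced by the build, and compares package names only, since exact versions will differ between builds):

```python
# Hypothetical helper: report which RHEV-H manifest packages are missing
# from a RHEL host. Assumes manifest-rpm.txt lists one NVR per line, e.g.
# "vdsm-4.16.20-1.el6ev.x86_64"; the installed list comes from `rpm -qa`.

def package_name(nvr):
    """Strip version-release.arch from an NVR string, keeping the name.

    An RPM NVR has the form name-version-release.arch, where the version
    starts at the first dash-separated field beginning with a digit.
    """
    parts = nvr.split("-")
    for i, part in enumerate(parts[1:], start=1):
        if part[:1].isdigit():
            return "-".join(parts[:i])
    return nvr

def missing_packages(manifest_lines, installed_lines):
    """Return manifest package names not present in the installed set."""
    installed = {package_name(l.strip()) for l in installed_lines if l.strip()}
    wanted = {package_name(l.strip()) for l in manifest_lines if l.strip()}
    return sorted(wanted - installed)

if __name__ == "__main__":
    manifest = ["vdsm-4.16.20-1.el6ev.x86_64", "sanlock-2.8-2.el6_5.x86_64"]
    installed = ["vdsm-4.16.20-1.el6ev.x86_64"]
    print(missing_packages(manifest, installed))  # ['sanlock']
```

In practice one would feed it the real files, e.g. `missing_packages(open("manifest-rpm.txt"), subprocess.check_output(["rpm", "-qa"], text=True).splitlines())`.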

Comment 3 Itamar Heim 2014-06-25 14:26:18 UTC
*** Bug 1091188 has been marked as a duplicate of this bug. ***

Comment 4 Fabian Deutsch 2014-07-31 11:53:38 UTC
*** Bug 1124920 has been marked as a duplicate of this bug. ***

Comment 9 Ying Cui 2014-09-26 07:29:56 UTC
So far QE has tested the basic ovirt-node-plugin-hosted-engine functionality, but there is still no new RHEV-H 7.0 build for RHEV 3.5; the latest is rhev-hypervisor7-7.0-20140904.0.el7ev, so we still cannot test this feature fully due to the bugs in comment 8.

Testing of this feature is therefore pending until a RHEV-H 7.0 build for 3.5 is available to QE.

Comment 11 Ying Cui 2014-10-10 10:56:51 UTC
This feature is included in the RHEV-H 7.0 build for 3.5, but feature testing still failed: the basic functions cannot meet customers' needs, so I will verify this bug later, once basic self-hosted-engine works well on RHEV-H.

Several bugs are found:
Bug 1151339 - Lost all the configuration for hosted engine after reboot rhevh 
Bug 1151344 - Can not launch rhevm as a monitored service as it says after shutdown engine vm
Bug 1144915 - TUI messed up when downloading ISO/OVA file in hosted-engine page
Bug 1147467 - Report error about downloading OVA Image although actually it is succeed 
Bug 1151342 - Should display in status page that rhevh is managed by hosted engine after finish configured it

Comment 12 Ying Cui 2014-11-20 04:57:50 UTC
So far QE has not received an updated build with this feature for testing. As far as I know, the development side is still working on this feature, so I have to move this bug back to ASSIGNED due to comment 11. When a build lands with QE, please move this bug to ON_QA again and QE will test the feature promptly.

Comment 13 Ying Cui 2014-11-20 05:00:49 UTC
Resetting the assignee to Fabian's team, since Joey has moved to the rel-eng team.

Comment 22 Ying Cui 2014-12-10 12:49:21 UTC
(In reply to Ryan Barry from comment #20)
> New 6.6 and 7.0 images:
> 
> http://download.devel.redhat.com/brewroot/work/tasks/9984/8319984/rhev-hypervisor7-7.0-20141202.0.iso

Self Hosted Engine on rhev-hypervisor7-7.0-20141202.0.iso - Test Failed due to these bugs:

1172061 - RHEVM setup failed due to the minimum requirements for memory size can't be set during configuring hosted engine with OVA type
1171614 - Change the OVA file format from gzip to tar while download which leads the error during configuring hosted engine with OVA type
1151344 - Can not launch rhevm as a monitored service as it says after shutdown engine vm
1172511 - Switch to Hosted Engine TUI menu so slowly due to failed to connect to broker

Comment 23 Ying Cui 2014-12-18 08:56:50 UTC
According to comment 22, I have to move this bug back to ASSIGNED to follow the Bugzilla workflow. After more patches are merged to fix these bugs and a new build is delivered to QE, you can move this bug back to ON_QA, and QE will test and verify it.

Comment 28 errata-xmlrpc 2015-02-11 17:57:52 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-0158.html

Comment 40 Fabian Deutsch 2015-05-22 14:29:22 UTC
*** Bug 1198639 has been marked as a duplicate of this bug. ***

Comment 42 Nikolai Sednev 2015-07-09 13:19:43 UTC

RHEV-H 6.7 (0609) deployment failed the first time on both hosts with the following error:
"[ ERROR ] Failed to execute stage 'Misc configuration': Failed to setup networks {'rhevm': {'nic': 'p1p1', 'bootproto': 'dhcp', 'blockingdhcp': True}}. Error code: "16" message: "Unexpected exception".

As a workaround, I ran the deployment twice, and only then was the network bridge created. I already opened a bug on this in the past (https://bugzilla.redhat.com/show_bug.cgi?id=1206884).

I used PXE over a RHEL 6.7 VM with the guest agent and a pair of RHEV-H 6.7 hosts (20150609.0.el6ev).

During addition of the second host I also hit this one:
[ INFO ] Still waiting for VDSM host to become operational...
The host hosted_engine_2 is in non-operational state.
Please try to activate it via the engine webadmin UI.
Retry checking host status or ignore this and continue (Retry, Ignore)[Retry]?
[ INFO ] The VDSM Host is now operational
[ INFO ] Enabling and starting HA services
Hosted Engine successfully set up
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20150706123207.conf'
[ INFO ] Generating answer file '/etc/ovirt-hosted-engine/answers.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination

So I activated it manually in the engine's web UI and the host came online; I then hit "Retry" in the CLI and succeeded in adding the second host.



Components were used on hosts:
qemu-kvm-rhev-tools-0.12.1.2-2.448.el6_6.3.x86_64
ovirt-node-3.2.3-3.el6.noarch
sanlock-python-2.8-2.el6_5.x86_64
ovirt-hosted-engine-ha-1.2.6-2.el6ev.noarch
ovirt-node-plugin-snmp-3.2.3-3.el6.noarch
sanlock-lib-2.8-2.el6_5.x86_64
ovirt-host-deploy-1.3.0-2.el6ev.noarch
ovirt-node-selinux-3.2.3-3.el6.noarch
mom-0.4.1-5.el6ev.noarch
qemu-kvm-rhev-0.12.1.2-2.448.el6_6.3.x86_64
ovirt-hosted-engine-setup-1.2.4-2.el6ev.noarch
ovirt-host-deploy-offline-1.3.0-3.el6ev.x86_64
ovirt-node-plugin-vdsm-0.2.0-25.el6ev.noarch
ovirt-node-plugin-cim-3.2.3-3.el6.noarch
ovirt-node-branding-rhev-3.2.3-3.el6.noarch
qemu-img-rhev-0.12.1.2-2.448.el6_6.3.x86_64
sanlock-2.8-2.el6_5.x86_64
libvirt-0.10.2-54.el6.x86_64
ovirt-node-plugin-rhn-3.2.3-3.el6.noarch
vdsm-4.16.20-1.el6ev.x86_64
ovirt-node-plugin-hosted-engine-0.2.0-15.0.el6ev.noarch

Components were used on engine's VM:
rhevm-3.5.4-1.1.el6ev.noarch
ovirt-host-deploy-1.3.0-2.el6ev.noarch
ovirt-hosted-engine-setup-1.2.5.1-1.el6ev.noarch
ovirt-hosted-engine-ha-1.2.6-2.el6ev.noarch
qemu-img-rhev-0.12.1.2-2.475.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.475.el6.x86_64
ovirt-host-deploy-java-1.3.0-2.el6ev.noarch
qemu-guest-agent-0.12.1.2-2.479.el6.x86_64

Comment 43 Sandro Bonazzola 2015-07-09 14:38:03 UTC
Nikolai, can you please provide logs from the hosted_engine_2 host?

Comment 44 Nikolai Sednev 2015-07-09 15:24:38 UTC
Created attachment 1050353 [details]
answers-20150706114639.conf

Comment 45 Nikolai Sednev 2015-07-09 15:26:19 UTC
Created attachment 1050354 [details]
answers-20150706123207.conf

Comment 46 Nikolai Sednev 2015-07-09 15:27:06 UTC
Created attachment 1050356 [details]
answers.conf

Comment 47 Ilanit Stein 2015-07-15 09:23:45 UTC
RHEV-H 6.7 and 7.1 for HE on rhevm 3.5.4 test results:
==================================================


Deployment \ NFS on RHEVH 6.7 (20150609.0.el6ev), rhevm-3.5.4-1.1.el6ev: 
=======================================================================
2 issues found by nsednev:

1. Deployment failed the first time on both hosts with the following error:

 "[ ERROR ] Failed to execute stage 'Misc configuration': Failed to setup
 networks {'rhevm': {'nic': 'p1p1', 'bootproto': 'dhcp', 'blockingdhcp':
 True}}. Error code: "16" message: "Unexpected exception".

 As a workaround the deployment was run twice, and only then was the network
 bridge created; this was already filed as
 https://bugzilla.redhat.com/show_bug.cgi?id=1206884. See comment 42 for the
 full console output, including the non-operational second host that had to
 be activated manually from the webadmin UI before "Retry" succeeded.
 

2. It was not possible to get the engine appliance working on RHEV-H: the tmp
 dir is too small. See bug 1242441.

Deployment \ NFS on RHEVH 7.1 (20150609.0.el7ev), rhevm-3.5.3.1-1.4.el6ev: 
==========================================================================
By nsednev:
I re-provisioned both hosts via Foreman, using firstboot=1 in the extra boot options.
Deployment - PASSED.

Hosted Engine High availability test \ NFS on RHEVH 6.7 (20150707.0.el6ev), rhevm-3.5.4: 
==========================================================================
PASS

ovirt-hosted-engine-setup-1.2.5.1-1.el6ev.noarch
ovirt-hosted-engine-ha-1.2.6-2.el6ev.noarch

Hosted Engine High availability test \ NFS on RHEVH 7.1 (20150709.0.el7ev), rhevm-3.5.4:
===========================================================================
PASS

ovirt-hosted-engine-setup-1.2.5.1-1.el7ev.noarch
ovirt-hosted-engine-ha-1.2.6-2.el7ev.noarch

Comment 48 Nikolai Sednev 2015-07-21 14:22:02 UTC
Failed again to deploy on Red Hat Enterprise Virtualization Hypervisor release 6.7 (20150707.0.el6ev) while using a flat network:
[root@alma04 ~]# rpm -qa vdsm* sanlock* qemu* mom* libvirt* ovirt* gluster*
vdsm-jsonrpc-4.16.21-1.el6ev.noarch
libvirt-python-0.10.2-54.el6.x86_64
ovirt-node-plugin-rhn-3.2.3-11.el6.noarch
libvirt-lock-sanlock-0.10.2-54.el6.x86_64
vdsm-hook-vhostmd-4.16.21-1.el6ev.noarch
sanlock-python-2.8-2.el6_5.x86_64
qemu-kvm-rhev-0.12.1.2-2.479.el6.x86_64
vdsm-hook-ethtool-options-4.16.21-1.el6ev.noarch
ovirt-node-plugin-hosted-engine-0.2.0-16.0.el6ev.noarch
ovirt-node-plugin-cim-3.2.3-11.el6.noarch
vdsm-cli-4.16.21-1.el6ev.noarch
glusterfs-devel-3.6.0.53-1.el6.x86_64
ovirt-host-deploy-1.3.0-2.el6ev.noarch
ovirt-node-branding-rhev-3.2.3-11.el6.noarch
vdsm-yajsonrpc-4.16.21-1.el6ev.noarch
glusterfs-api-3.6.0.53-1.el6.x86_64
qemu-img-rhev-0.12.1.2-2.479.el6.x86_64
sanlock-lib-2.8-2.el6_5.x86_64
glusterfs-libs-3.6.0.53-1.el6.x86_64
vdsm-python-zombiereaper-4.16.21-1.el6ev.noarch
vdsm-xmlrpc-4.16.21-1.el6ev.noarch
libvirt-0.10.2-54.el6.x86_64
ovirt-node-selinux-3.2.3-11.el6.noarch
libvirt-cim-0.6.1-12.el6.x86_64
vdsm-4.16.21-1.el6ev.x86_64
vdsm-reg-4.16.21-1.el6ev.noarch
ovirt-hosted-engine-setup-1.2.5.1-1.el6ev.noarch
ovirt-host-deploy-offline-1.3.0-3.el6ev.x86_64
glusterfs-rdma-3.6.0.53-1.el6.x86_64
ovirt-node-plugin-snmp-3.2.3-11.el6.noarch
glusterfs-fuse-3.6.0.53-1.el6.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.479.el6.x86_64
glusterfs-3.6.0.53-1.el6.x86_64
sanlock-2.8-2.el6_5.x86_64
libvirt-client-0.10.2-54.el6.x86_64
mom-0.4.1-5.el6ev.noarch
ovirt-node-3.2.3-11.el6.noarch
ovirt-hosted-engine-ha-1.2.6-2.el6ev.noarch
ovirt-node-plugin-vdsm-0.2.0-25.el6ev.noarch
vdsm-python-4.16.21-1.el6ev.noarch



[root@alma04 ~]# ifconfig                                                  
lo        Link encap:Local Loopback                                        
          inet addr:127.0.0.1  Mask:255.0.0.0                              
          inet6 addr: ::1/128 Scope:Host                                   
          UP LOOPBACK RUNNING  MTU:65536  Metric:1                         
          RX packets:102754 errors:0 dropped:0 overruns:0 frame:0          
          TX packets:102754 errors:0 dropped:0 overruns:0 carrier:0        
          collisions:0 txqueuelen:0                                        
          RX bytes:18145895 (17.3 MiB)  TX bytes:18145895 (17.3 MiB)       

p1p1      Link encap:Ethernet  HWaddr A0:36:9F:3B:16:7C  
          inet addr:10.35.117.26  Bcast:10.35.117.255  Mask:255.255.255.0
          inet6 addr: fe80::a236:9fff:fe3b:167c/64 Scope:Link            
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1             
          RX packets:696845 errors:3 dropped:0 overruns:0 frame:3        
          TX packets:4272 errors:0 dropped:0 overruns:0 carrier:0        
          collisions:0 txqueuelen:1000                                   
          RX bytes:47110831 (44.9 MiB)  TX bytes:420354 (410.5 KiB)      

[root@alma04 ~]# hosted-engine --deploy
[ INFO  ] Stage: Initializing          
[ INFO  ] Generating a temporary VNC password.
[ INFO  ] Stage: Environment setup            
          Continuing will configure this host for serving as hypervisor and create a VM where you have to install oVirt Engine afterwards.
          Are you sure you want to continue? (Yes, No)[Yes]:                                                                              
          Configuration files: []                                                                                                         
          Log file: /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20150721141812-nzwhc3.log                                
          Version: otopi-1.3.2 (otopi-1.3.2-1.el6ev)                                                                                      
          It has been detected that this program is executed through an SSH connection without using screen.                              
          Continuing with the installation may lead to broken installation if the network connection fails.                               
          It is highly recommended to abort the installation and run it inside a screen session using command "screen".                   
          Do you want to continue anyway? (Yes, No)[No]: yes                                                                              
[ INFO  ] Hardware supports virtualization                                                                                                
[ INFO  ] Stage: Environment packages setup                                                                                               
[ INFO  ] Stage: Programs detection                                                                                                       
[ INFO  ] Stage: Environment setup                                                                                                        
[ INFO  ] Stage: Environment customization                                                                                                
                                                                                                                                          
          --== STORAGE CONFIGURATION ==--                                                                                                 
                                                                                                                                          
          During customization use CTRL-D to abort.                                                                                       
          Please specify the storage you would like to use (iscsi, nfs3, nfs4)[nfs3]:                                                     
          Please specify the full shared storage connection path to use (example: host:/path): 10.35.160.108:/RHEV/nsednev_HE_3_6_second_deployment_setup
[ INFO  ] Installing on first host                                                                                                                       
          Please provide storage domain name. [hosted_storage]:                                                                                          
          Local storage datacenter name is an internal name and currently will not be shown in engine's admin UI.                                        
          Please enter local datacenter name [hosted_datacenter]:                                                                                        
                                                                                                                                                         
          --== SYSTEM CONFIGURATION ==--                                                                                                                 
                                                                                                                                                         
                                                                                                                                                         
          --== NETWORK CONFIGURATION ==--                                                                                                                
                                                                                                                                                         
          Please indicate a nic to set rhevm bridge on: (em1, em2, p1p1, p1p2) [em1]: p1p1                                                               
          iptables was detected on your computer, do you wish setup to configure it? (Yes, No)[Yes]:                                                     
          Please indicate a pingable gateway IP address [10.35.117.254]:                                                                                 
                                                                                                                                                         
          --== VM CONFIGURATION ==--                                                                                                                     
                                                                                                                                                         
          Please specify the device to boot the VM from (cdrom, disk, pxe) [cdrom]: pxe                                                                  
          Please specify an alias for the Hosted Engine image [hosted_engine]:                                                                           
          The following CPU types are supported by this host:                                                                                            
                 - model_SandyBridge: Intel SandyBridge Family                                                                                           
                 - model_Westmere: Intel Westmere Family                                                                                                 
                 - model_Nehalem: Intel Nehalem Family                                                                                                   
                 - model_Penryn: Intel Penryn Family                                                                                                     
                 - model_Conroe: Intel Conroe Family                                                                                                     
          Please specify the CPU type to be used by the VM [model_SandyBridge]:                                                                          
          Please specify the number of virtual CPUs for the VM [Defaults to minimum requirement: 2]:                                                     
          Please specify the disk size of the VM in GB [Defaults to minimum requirement: 25]:                                                            
          You may specify a unicast MAC address for the VM or accept a randomly generated default [00:16:3e:0a:7a:2b]: 00:16:3E:7B:BB:BB                 
          Please specify the memory size of the VM in MB [Defaults to minimum requirement: 4096]:                                                        
          Please specify the console type you would like to use to connect to the VM (vnc, spice) [vnc]:                                                 
                                                                                                                                                         
          --== HOSTED ENGINE CONFIGURATION ==--                                                                                                          
                                                                                                                                                         
          Enter the name which will be used to identify this host inside the Administrator Portal [hosted_engine_1]:                                     
          Enter 'admin@internal' user password that will be used for accessing the Administrator Portal:                                                 
          Confirm 'admin@internal' user password:                                                                                                        
          Please provide the FQDN for the engine you would like to use.                                                                                  
          This needs to match the FQDN that you will use for the engine installation within the VM.                                                      
          Note: This will be the FQDN of the VM you are now going to create,                                                                             
          it should not point to the base host or to any other existing machine.                                                                         
          Engine FQDN: nsednev-he-2.qa.lab.tlv.redhat.com
          Please provide the name of the SMTP server through which we will send notifications [localhost]:
          Please provide the TCP port number of the SMTP server [25]:
          Please provide the email address from which notifications will be sent [root@localhost]:
          Please provide a comma-separated list of email addresses which will get notifications [root@localhost]:
[ INFO  ] Stage: Setup validation

          --== CONFIGURATION PREVIEW ==--

          Bridge interface                   : p1p1
          Engine FQDN                        : nsednev-he-2.qa.lab.tlv.redhat.com
          Bridge name                        : rhevm
          SSH daemon port                    : 22
          Firewall manager                   : iptables
          Gateway address                    : 10.35.117.254
          Host name for web application      : hosted_engine_1
          Host ID                            : 1
          Image alias                        : hosted_engine
          Image size GB                      : 25
          Storage connection                 : 10.35.160.108:/RHEV/nsednev_HE_3_6_second_deployment_setup
          Console type                       : vnc
          Memory size MB                     : 4096
          MAC address                        : 00:16:3E:7B:BB:BB
          Boot type                          : pxe
          Number of CPUs                     : 2
          CPU Type                           : model_SandyBridge

          Please confirm installation settings (Yes, No)[Yes]:
[ INFO  ] Stage: Transaction setup
[ INFO  ] Stage: Misc configuration
[ INFO  ] Stage: Package installation
[ INFO  ] Stage: Misc configuration
[ INFO  ] Configuring libvirt
[ INFO  ] Configuring VDSM
[ INFO  ] Starting vdsmd
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Configuring the management bridge
[ ERROR ] Failed to execute stage 'Misc configuration': Failed to setup networks {'rhevm': {'nic': 'p1p1', 'bootproto': 'dhcp', 'blockingdhcp': True}}. Error code: "16" message: "Unexpected exception"
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20150721142009.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination



See logs attached.
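When comparing runs like the ones above, the generated answer files (the attached answers-*.conf) can be inspected or diffed programmatically rather than by eye. A minimal sketch of a parser for the otopi-style answer-file format, under the assumption that entries look like `OVEHOSTED_NETWORK/bridgeName=str:rhevm` (i.e. SECTION/key=type:value), with section headers and comments skipped; the key names shown in the sample are illustrative, not verified against these exact attachments:

```python
# Hypothetical sketch: parse otopi-style answer files such as the
# answers-*.conf files attached to this bug. Assumes entries of the form
#   OVEHOSTED_NETWORK/bridgeName=str:rhevm
# i.e. SECTION/key=type:value; "[...]" section headers and "#" comments
# are skipped, and values containing ":" (paths, URLs) are kept intact.

def parse_answers(text):
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith(("#", "[")):
            continue
        key, _, typed_value = line.partition("=")
        vtype, _, value = typed_value.partition(":")
        if vtype == "bool":
            env[key] = value.lower() == "true"
        elif vtype == "int":
            env[key] = int(value)
        else:  # str, none, and anything unrecognized kept as raw text
            env[key] = value
    return env

if __name__ == "__main__":
    sample = """\
[environment:default]
OVEHOSTED_NETWORK/bridgeName=str:rhevm
OVEHOSTED_STORAGE/storageDomainConnection=str:10.35.160.108:/RHEV/path
OVEHOSTED_VM/vmMemSizeMB=int:4096
"""
    answers = parse_answers(sample)
    print(answers["OVEHOSTED_NETWORK/bridgeName"])  # rhevm
```

With two parsed files, `set(a.items()) ^ set(b.items())` gives the settings that differ between a failing and a passing deployment.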

Comment 49 Nikolai Sednev 2015-07-21 14:24:06 UTC
Created attachment 1054419 [details]
answers-20150721142009.conf

Comment 50 Nikolai Sednev 2015-07-29 15:21:53 UTC
Performed deployment on two RHEV-H 6.7 hosts with HE 3.5.z running on RHEL 6.7 over NFS - PASS.

RHEVH6.7:
[root@localhost ~]# rpm -qa libvirt* sanlock* qemu* mom vdsm* ovirt*
vdsm-yajsonrpc-4.16.23-1.el6ev.noarch
sanlock-2.8-2.el6_5.x86_64
libvirt-0.10.2-54.el6.x86_64
libvirt-cim-0.6.1-12.el6.x86_64
sanlock-python-2.8-2.el6_5.x86_64
vdsm-4.16.23-1.el6ev.x86_64
ovirt-node-plugin-hosted-engine-0.2.0-16.0.el6ev.noarch
vdsm-python-zombiereaper-4.16.23-1.el6ev.noarch
ovirt-node-branding-rhev-3.2.3-14.el6.noarch
vdsm-jsonrpc-4.16.23-1.el6ev.noarch
sanlock-lib-2.8-2.el6_5.x86_64
qemu-img-rhev-0.12.1.2-2.479.el6.x86_64
vdsm-python-4.16.23-1.el6ev.noarch
vdsm-cli-4.16.23-1.el6ev.noarch
ovirt-host-deploy-1.3.0-2.el6ev.noarch
qemu-kvm-rhev-tools-0.12.1.2-2.479.el6.x86_64
libvirt-client-0.10.2-54.el6.x86_64
libvirt-python-0.10.2-54.el6.x86_64
mom-0.4.1-5.el6ev.noarch
ovirt-node-plugin-rhn-3.2.3-14.el6.noarch
ovirt-node-3.2.3-14.el6.noarch
libvirt-lock-sanlock-0.10.2-54.el6.x86_64
vdsm-hook-vhostmd-4.16.23-1.el6ev.noarch
qemu-kvm-rhev-0.12.1.2-2.479.el6.x86_64
vdsm-reg-4.16.23-1.el6ev.noarch
ovirt-hosted-engine-setup-1.2.5.2-1.el6ev.noarch
vdsm-hook-ethtool-options-4.16.23-1.el6ev.noarch
ovirt-node-plugin-vdsm-0.2.0-25.el6ev.noarch
ovirt-node-plugin-cim-3.2.3-14.el6.noarch
ovirt-node-selinux-3.2.3-14.el6.noarch
ovirt-hosted-engine-ha-1.2.6-2.el6ev.noarch
ovirt-host-deploy-offline-1.3.0-3.el6ev.x86_64
ovirt-node-plugin-snmp-3.2.3-14.el6.noarch
vdsm-xmlrpc-4.16.23-1.el6ev.noarch
Linux version 2.6.32-573.1.1.el6.x86_64 (mockbuild@x86-032.build.eng.bos.redhat.com) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-16) (GCC) ) #1 SMP Tue Jul 14 02:46:51 EDT 2015

Engine:
Linux version 2.6.32-573.el6.x86_64 (mockbuild@x86-027.build.eng.bos.redhat.com) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-16) (GCC) ) #1 SMP Wed Jul 1 18:23:37 EDT 2015
rhevm-3.5.4-1.2.el6ev.noarch

Comment 51 Nikolai Sednev 2015-07-29 15:23:26 UTC
Forgot to list this package for the engine in the previous comment, so adding it here:
rhevm-guest-agent-common-1.0.10-2.el6ev.noarch

Comment 52 Eyal Edri 2015-08-04 20:44:30 UTC
Closing as a duplicate of 1250199, since we can't add it to the errata for 3.5.4.
Please use 1250199 for verification/errata.

*** This bug has been marked as a duplicate of bug 1250199 ***

