Description of problem:

Failed to deploy HC-HE with error:

[ ERROR ] Failed to execute stage 'Closing up': The VM is not powering up: please check VDSM logs

From vdsm.log:

Thread-104::ERROR::2015-05-14 10:07:48,227::vm::741::vm.Vm::(_startUnderlyingVm) vmId=`ab39d4b6-bab9-42af-8955-e992bed816a9`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 689, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/virt/vm.py", line 1798, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 122, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3427, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: operation failed: domain is no longer running
libvirtEventLoop::DEBUG::2015-05-14 10:07:48,237::resourceManager::649::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.b968e16c-cb9a-42cd-8e3f-5dfd4c43de27', Clearing records.
libvirtEventLoop::DEBUG::2015-05-14 10:07:48,238::task::990::Storage.TaskManager.Task::(_decref) Task=`aafc2f7a-4632-4264-93eb-7764e221c7cf`::ref 0 aborting False
Thread-104::INFO::2015-05-14 10:07:48,238::vm::1218::vm.Vm::(setDownStatus) vmId=`ab39d4b6-bab9-42af-8955-e992bed816a9`::Changed state to Down: operation failed: domain is no longer running (code=1)
libvirtEventLoop::WARNING::2015-05-14 10:07:48,238::utils::142::root::(rmFile) File: /var/lib/libvirt/qemu/channels/ab39d4b6-bab9-42af-8955-e992bed816a9.com.redhat.rhevm.vdsm already removed
libvirtEventLoop::WARNING::2015-05-14 10:07:48,240::utils::142::root::(rmFile) File: /var/lib/libvirt/qemu/channels/ab39d4b6-bab9-42af-8955-e992bed816a9.org.qemu.guest_agent.0 already removed
libvirtEventLoop::DEBUG::2015-05-14 10:07:48,240::task::592::Storage.TaskManager.Task::(_updateState) Task=`3bbf26c2-99c5-4825-b8d0-d441ca95cdf5`::moving from state init -> state preparing
libvirtEventLoop::INFO::2015-05-14 10:07:48,240::logUtils::48::dispatcher::(wrapper) Run and protect: inappropriateDevices(thiefId='ab39d4b6-bab9-42af-8955-e992bed816a9')
libvirtEventLoop::INFO::2015-05-14 10:07:48,243::logUtils::51::dispatcher::(wrapper) Run and protect: inappropriateDevices, Return response: None
libvirtEventLoop::DEBUG::2015-05-14 10:07:48,243::task::1188::Storage.TaskManager.Task::(prepare) Task=`3bbf26c2-99c5-4825-b8d0-d441ca95cdf5`::finished: None
libvirtEventLoop::DEBUG::2015-05-14 10:07:48,243::task::592::Storage.TaskManager.Task::(_updateState) Task=`3bbf26c2-99c5-4825-b8d0-d441ca95cdf5`::moving from state preparing -> state finished
libvirtEventLoop::DEBUG::2015-05-14 10:07:48,243::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
libvirtEventLoop::DEBUG::2015-05-14 10:07:48,243::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll

Version-Release number of selected component (if applicable):

[root@alma02 ~]# rpm -qa vdsm* libvirt sanlock qemu* mom gluster* ovirt-hosted-engine-setup
vdsm-gluster-4.17.0-786.git07dec6d.el7.centos.noarch
glusterfs-api-3.7.0beta2-0.2.gitc1cd4fa.el7.centos.x86_64
glusterfs-server-3.7.0beta2-0.2.gitc1cd4fa.el7.centos.x86_64
mom-0.4.3-1.el7.noarch
vdsm-yajsonrpc-4.17.0-786.git07dec6d.el7.centos.noarch
glusterfs-cli-3.7.0beta2-0.2.gitc1cd4fa.el7.centos.x86_64
vdsm-infra-4.17.0-786.git07dec6d.el7.centos.noarch
qemu-kvm-common-ev-2.1.2-23.el7_1.2.x86_64
glusterfs-fuse-3.7.0beta2-0.2.gitc1cd4fa.el7.centos.x86_64
glusterfs-libs-3.7.0beta2-0.2.gitc1cd4fa.el7.centos.x86_64
qemu-img-ev-2.1.2-23.el7_1.2.x86_64
vdsm-python-4.17.0-786.git07dec6d.el7.centos.noarch
vdsm-4.17.0-786.git07dec6d.el7.centos.x86_64
qemu-kvm-ev-2.1.2-23.el7_1.2.x86_64
vdsm-xmlrpc-4.17.0-786.git07dec6d.el7.centos.noarch
glusterfs-rdma-3.7.0beta2-0.2.gitc1cd4fa.el7.centos.x86_64
ovirt-hosted-engine-setup-1.3.0-0.0.master.20150511134154.gitf764a6b.el7.noarch
vdsm-cli-4.17.0-786.git07dec6d.el7.centos.noarch
sanlock-3.2.2-2.el7.x86_64
glusterfs-client-xlators-3.7.0beta2-0.2.gitc1cd4fa.el7.centos.x86_64
glusterfs-geo-replication-3.7.0beta2-0.2.gitc1cd4fa.el7.centos.x86_64
vdsm-jsonrpc-4.17.0-786.git07dec6d.el7.centos.noarch
glusterfs-3.7.0beta2-0.2.gitc1cd4fa.el7.centos.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Deploy HC-HE on a RHEL 7.1 host with all repos for 3.6 and the other required rpms installed.

Actual results:
Deployment fails with:
[ ERROR ] Failed to execute stage 'Closing up': The VM is not powering up: please check VDSM logs

Expected results:
Deployment should succeed.

Additional info:
logs attached.
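Since the failure message points at the VDSM logs, a quick first triage step is to pull only the ERROR-level records out of vdsm.log. A minimal sketch, run here against an embedded sample in the same `THREAD::LEVEL::TIMESTAMP::...` layout as the excerpt above; on a real host you would point grep at /var/log/vdsm/vdsm.log instead:

```shell
# Sample records in the vdsm.log layout shown above; a stand-in for
# /var/log/vdsm/vdsm.log on a real host.
cat > /tmp/vdsm-sample.log <<'EOF'
Thread-104::ERROR::2015-05-14 10:07:48,227::vm::741::vm.Vm::(_startUnderlyingVm) vmId=`ab39d4b6-bab9-42af-8955-e992bed816a9`::The vm start process failed
libvirtEventLoop::DEBUG::2015-05-14 10:07:48,237::resourceManager::649::Storage.ResourceManager::(releaseResource) No one is waiting for resource
EOF

# Keep only ERROR-level records; the level is the second ::-separated field.
grep '::ERROR::' /tmp/vdsm-sample.log
```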
Created attachment 1025292 [details]
alma02 logs
Can you please attach a full sosreport? Since you're using rpms that include the fix for bug #1201355, I'd like to rule out that this is still the same issue.
This bug can probably be closed, as I failed to reproduce it with the components listed below. Instead, the deployment got stuck on "Still waiting for VDSM host to become operational...", while the engine became unreachable: I could not connect to it via ssh or via http.

[root@alma02 ~]# hosted-engine --deploy
[ INFO ] Stage: Initializing
[ INFO ] Generating a temporary VNC password.
[ INFO ] Stage: Environment setup
Continuing will configure this host for serving as hypervisor and create a VM where you have to install oVirt Engine afterwards.
Are you sure you want to continue? (Yes, No)[Yes]:
Configuration files: []
Log file: /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20150521133424-7qbgp9.log
Version: otopi-1.4.0_master (otopi-1.4.0-0.0.master.20150423125505.git08ea44e.el7)
It has been detected that this program is executed through an SSH connection without using screen.
Continuing with the installation may lead to broken installation if the network connection fails.
It is highly recommended to abort the installation and run it inside a screen session using command "screen".
Do you want to continue anyway? (Yes, No)[No]: yes
[ INFO ] Hardware supports virtualization
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Generating libvirt-spice certificates
[ INFO ] Stage: Environment customization

--== STORAGE CONFIGURATION ==--

During customization use CTRL-D to abort.
Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs3, nfs4)[nfs3]: glusterfs
Do you want to configure this host for providing GlusterFS storage (will start with no replica requires to grow to replica 3 later)? (Yes, No)[No]: yes
Please provide a path to be used for the brick on this host: /root/HC
[ INFO ] Installing on first host
Please provide storage domain name.
[hosted_storage]:
Local storage datacenter name is an internal name and currently will not be shown in engine's admin UI.
Please enter local datacenter name [hosted_datacenter]:

--== SYSTEM CONFIGURATION ==--

--== NETWORK CONFIGURATION ==--

Please indicate a nic to set ovirtmgmt bridge on: (enp3s0f1, enp3s0f0, eno2, eno1) [enp3s0f1]: enp3s0f0
iptables was detected on your computer, do you wish setup to configure it? (Yes, No)[Yes]:
Please indicate a pingable gateway IP address [10.35.117.254]:

--== VM CONFIGURATION ==--

Please specify the device to boot the VM from (cdrom, disk, pxe) [cdrom]: pxe
Please specify an alias for the Hosted Engine image [hosted_engine]:
The following CPU types are supported by this host:
- model_SandyBridge: Intel SandyBridge Family
- model_Westmere: Intel Westmere Family
- model_Nehalem: Intel Nehalem Family
- model_Penryn: Intel Penryn Family
- model_Conroe: Intel Conroe Family
Please specify the CPU type to be used by the VM [model_SandyBridge]:
Please specify the number of virtual CPUs for the VM [Defaults to minimum requirement: 2]:
Please specify the disk size of the VM in GB [Defaults to minimum requirement: 25]:
You may specify a unicast MAC address for the VM or accept a randomly generated default [00:16:3e:76:cd:ea]: 00:16:3E:7B:B8:53
Please specify the memory size of the VM in MB [Defaults to minimum requirement: 4096]:
Please specify the console type you would like to use to connect to the VM (vnc, spice) [vnc]:

--== HOSTED ENGINE CONFIGURATION ==--

Enter the name which will be used to identify this host inside the Administrator Portal [hosted_engine_1]:
Enter 'admin@internal' user password that will be used for accessing the Administrator Portal:
Confirm 'admin@internal' user password:
Please provide the FQDN for the engine you would like to use.
This needs to match the FQDN that you will use for the engine installation within the VM.
Note: This will be the FQDN of the VM you are now going to create, it should not point to the base host or to any other existing machine.
Engine FQDN: nsednev-he-1.qa.lab.tlv.redhat.com
Please provide the name of the SMTP server through which we will send notifications [localhost]:
Please provide the TCP port number of the SMTP server [25]:
Please provide the email address from which notifications will be sent [root@localhost]:
Please provide a comma-separated list of email addresses which will get notifications [root@localhost]:
[ INFO ] Stage: Setup validation

--== CONFIGURATION PREVIEW ==--

Bridge interface : enp3s0f0
Engine FQDN : nsednev-he-1.qa.lab.tlv.redhat.com
Bridge name : ovirtmgmt
SSH daemon port : 22
Firewall manager : iptables
Gateway address : 10.35.117.254
Host name for web application : hosted_engine_1
Host ID : 1
GlusterFS Brick : alma02.qa.lab.tlv.redhat.com:/root/HC
Image alias : hosted_engine
Image size GB : 25
GlusterFS Share Name : hosted_engine_glusterfs
GlusterFS Brick Provisioning : True
Storage connection : alma02.qa.lab.tlv.redhat.com:/hosted_engine_glusterfs
Console type : vnc
Memory size MB : 4096
MAC address : 00:16:3E:7B:B8:53
Boot type : pxe
Number of CPUs : 2
CPU Type : model_SandyBridge

Please confirm installation settings (Yes, No)[Yes]:
[ INFO ] Stage: Transaction setup
[ INFO ] Stage: Misc configuration
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ INFO ] Configuring libvirt
[ INFO ] Configuring VDSM
[ INFO ] Starting vdsmd
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Configuring the management bridge
[ INFO ] Starting GlusterFS services
[ INFO ] Creating GlusterFS Volume
[ INFO ] Creating Storage Domain
[ INFO ] Creating Storage Pool
[ INFO ] Connecting Storage Pool
[ INFO ] Verifying sanlock lockspace initialization
[ INFO ] Creating VM Image
[ INFO ] Disconnecting Storage Pool
[ INFO ] Start monitoring domain
[ INFO ] Configuring VM
[ INFO ] Updating hosted-engine configuration
[ INFO ] Stage: Transaction commit
[ INFO ] Stage: Closing up
[ INFO ] Enabling GlusterFS services
[ INFO ] Creating VM
You can now connect to the VM with the following command:
/bin/remote-viewer vnc://localhost:5900
Use temporary password "7148HlHa" to connect to vnc console.
Please note that in order to use remote-viewer you need to be able to run graphical applications.
This means that if you are using ssh you have to supply the -Y flag (enables trusted X11 forwarding).
Otherwise you can run the command from a terminal in your preferred desktop environment.
If you cannot run graphical applications you can connect to the graphic console from another host or connect to the console using the following command:
virsh -c qemu+tls://Test/system console HostedEngine
If you need to reboot the VM you will need to start it manually using the command:
hosted-engine --vm-start
You can then set a temporary password using the command:
hosted-engine --add-console-password
The VM has been started. Install the OS and shut down or reboot it.
To continue please make a selection:
(1) Continue setup - VM installation is complete
(2) Reboot the VM and restart installation
(3) Abort setup
(4) Destroy VM and abort setup
(1, 2, 3, 4)[1]: 2
[ INFO ] Creating VM
You can now connect to the VM with the following command:
/bin/remote-viewer vnc://localhost:5900
Use temporary password "7148HlHa" to connect to vnc console.
Please note that in order to use remote-viewer you need to be able to run graphical applications.
This means that if you are using ssh you have to supply the -Y flag (enables trusted X11 forwarding).
Otherwise you can run the command from a terminal in your preferred desktop environment.
If you cannot run graphical applications you can connect to the graphic console from another host or connect to the console using the following command:
virsh -c qemu+tls://Test/system console HostedEngine
If you need to reboot the VM you will need to start it manually using the command:
hosted-engine --vm-start
You can then set a temporary password using the command:
hosted-engine --add-console-password
The VM has been started. Install the OS and shut down or reboot it.
To continue please make a selection:
(1) Continue setup - VM installation is complete
(2) Reboot the VM and restart installation
(3) Abort setup
(4) Destroy VM and abort setup
(1, 2, 3, 4)[1]:
Waiting for VM to shut down...
[ INFO ] Creating VM
You can now connect to the VM with the following command:
/bin/remote-viewer vnc://localhost:5900
Use temporary password "7148HlHa" to connect to vnc console.
Please note that in order to use remote-viewer you need to be able to run graphical applications.
This means that if you are using ssh you have to supply the -Y flag (enables trusted X11 forwarding).
Otherwise you can run the command from a terminal in your preferred desktop environment.
If you cannot run graphical applications you can connect to the graphic console from another host or connect to the console using the following command:
virsh -c qemu+tls://Test/system console HostedEngine
If you need to reboot the VM you will need to start it manually using the command:
hosted-engine --vm-start
You can then set a temporary password using the command:
hosted-engine --add-console-password
Please install and setup the engine in the VM.
You may also be interested in installing ovirt-guest-agent-common package in the VM.
To continue make a selection from the options below:
(1) Continue setup - engine installation is complete
(2) Power off and restart the VM
(3) Abort setup
(4) Destroy VM and abort setup
(1, 2, 3, 4)[1]: 2
[ INFO ] Creating VM
You can now connect to the VM with the following command:
/bin/remote-viewer vnc://localhost:5900
Use temporary password "7148HlHa" to connect to vnc console.
Please note that in order to use remote-viewer you need to be able to run graphical applications.
This means that if you are using ssh you have to supply the -Y flag (enables trusted X11 forwarding).
Otherwise you can run the command from a terminal in your preferred desktop environment.
If you cannot run graphical applications you can connect to the graphic console from another host or connect to the console using the following command:
virsh -c qemu+tls://Test/system console HostedEngine
If you need to reboot the VM you will need to start it manually using the command:
hosted-engine --vm-start
You can then set a temporary password using the command:
hosted-engine --add-console-password
To continue make a selection from the options below:
(1) Continue setup - engine installation is complete
(2) Power off and restart the VM
(3) Abort setup
(4) Destroy VM and abort setup
(1, 2, 3, 4)[1]:
[ INFO ] Engine replied: DB Up!Welcome to Health Status!
[ INFO ] Connecting to the Engine
Enter the name of the cluster to which you want to add the host (Default) [Default]:
[ INFO ] Waiting for the host to become operational in the engine. This may take several minutes...
[ INFO ] Still waiting for VDSM host to become operational...
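The deployment hangs at this wait stage. For reference, the general shape of such a wait-until-operational loop with a timeout is roughly the sketch below; `check_host_up` is a stub standing in for whatever real check applies (e.g. querying the engine's REST API), not the actual setup code:

```shell
# Stub check: "operational" once a flag file exists. A real check would
# query the engine instead; the stub only makes the sketch runnable.
check_host_up() { [ -e /tmp/host-up ]; }

timeout=5
elapsed=0
touch /tmp/host-up   # simulate the host coming up immediately
until check_host_up; do
  if [ "$elapsed" -ge "$timeout" ]; then
    echo "timed out waiting for host"
    exit 1
  fi
  sleep 1
  elapsed=$((elapsed + 1))
done
echo "host operational"
```

In the hang reported here, the equivalent loop apparently never saw the host reach the operational state, so the setup kept printing the "Still waiting" message.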
Host's components:

[root@alma02 ~]# rpm -qa vdsm sanlock qemu* libvirt* mom gluster* ovirt*
libvirt-daemon-driver-nwfilter-1.2.8-16.el7_1.3.x86_64
ovirt-hosted-engine-ha-1.3.0-0.0.master.20150424113553.20150424113551.git7c14f4c.el7.noarch
glusterfs-3.7.0beta2-0.2.gitc1cd4fa.el7.centos.x86_64
libvirt-daemon-driver-nodedev-1.2.8-16.el7_1.3.x86_64
qemu-img-ev-2.1.2-23.el7_1.3.1.x86_64
libvirt-daemon-config-nwfilter-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-qemu-1.2.8-16.el7_1.3.x86_64
ovirt-release-master-001-0.8.master.noarch
glusterfs-api-3.7.0beta2-0.2.gitc1cd4fa.el7.centos.x86_64
glusterfs-geo-replication-3.7.0beta2-0.2.gitc1cd4fa.el7.centos.x86_64
libvirt-daemon-1.2.8-16.el7_1.3.x86_64
libvirt-lock-sanlock-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-secret-1.2.8-16.el7_1.3.x86_64
mom-0.4.4-0.0.master.20150515133332.git2d32797.el7.noarch
sanlock-3.2.2-2.el7.x86_64
qemu-kvm-tools-ev-2.1.2-23.el7_1.3.1.x86_64
ovirt-host-deploy-1.4.0-0.0.master.20150505205623.giteabc23b.el7.noarch
qemu-kvm-common-ev-2.1.2-23.el7_1.3.1.x86_64
glusterfs-client-xlators-3.7.0beta2-0.2.gitc1cd4fa.el7.centos.x86_64
glusterfs-cli-3.7.0beta2-0.2.gitc1cd4fa.el7.centos.x86_64
ovirt-engine-sdk-python-3.6.0.0-0.14.20150520.git8420a90.el7.centos.noarch
libvirt-daemon-driver-interface-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-kvm-1.2.8-16.el7_1.3.x86_64
vdsm-4.17.0-834.gitd066d4a.el7.noarch
glusterfs-fuse-3.7.0beta2-0.2.gitc1cd4fa.el7.centos.x86_64
libvirt-python-1.2.8-7.el7_1.1.x86_64
libvirt-daemon-driver-network-1.2.8-16.el7_1.3.x86_64
glusterfs-libs-3.7.0beta2-0.2.gitc1cd4fa.el7.centos.x86_64
qemu-kvm-ev-2.1.2-23.el7_1.3.1.x86_64
glusterfs-server-3.7.0beta2-0.2.gitc1cd4fa.el7.centos.x86_64
libvirt-client-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-storage-1.2.8-16.el7_1.3.x86_64
ovirt-hosted-engine-setup-1.3.0-0.0.master.20150518075146.gitdd9741f.el7.noarch
glusterfs-rdma-3.7.0beta2-0.2.gitc1cd4fa.el7.centos.x86_64

The engine was running on the latest RHEL 6.6 with all repos compatible with 3.6.0-2, and I finished the engine's setup on the VM successfully.
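Since the two runs used different vdsm builds (4.17.0-786 in the original report vs. 4.17.0-834 here), diffing the two hosts' package lists is a quick way to narrow down which component changed the behaviour. A sketch using two hypothetical one-package-per-line excerpts; the real lists would come from `rpm -qa | sort` on each host:

```shell
# Hypothetical excerpts of the failing (run1) and retest (run2) package lists.
printf '%s\n' \
  'sanlock-3.2.2-2.el7.x86_64' \
  'vdsm-4.17.0-786.git07dec6d.el7.centos.x86_64' | sort > /tmp/run1.txt
printf '%s\n' \
  'sanlock-3.2.2-2.el7.x86_64' \
  'vdsm-4.17.0-834.gitd066d4a.el7.noarch' | sort > /tmp/run2.txt

# comm requires sorted input; -3 suppresses lines common to both files,
# leaving only the packages that differ between the two runs.
comm -3 /tmp/run1.txt /tmp/run2.txt
```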