Bug 1242554 - [hosted-engine-setup] [FC support] HE VM is not started automatically once HE deployment is finished
Summary: [hosted-engine-setup] [FC support] HE VM is not started automatically once HE deployment is finished
Keywords:
Status: CLOSED DUPLICATE of bug 1251752
Alias: None
Product: oVirt
Classification: Retired
Component: ovirt-hosted-engine-setup
Version: 3.6
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 3.6.0
Assignee: Simone Tiraboschi
QA Contact: Elad
URL:
Whiteboard: integration
Depends On:
Blocks: 1036731 1153278
 
Reported: 2015-07-13 14:57 UTC by Elad
Modified: 2015-08-24 12:15 UTC (History)
CC: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-08-24 12:15:56 UTC
oVirt Team: ---
Embargoed:


Attachments: None

Description Elad 2015-07-13 14:57:10 UTC
Description of problem:
Once the HE deployment over FC is finished, the HE VM doesn't start automatically as it should.

Version-Release number of selected component (if applicable):
ovirt-3.6.0-3
ovirt-hosted-engine-setup-1.3.0-0.0.master.20150623153111.git68138d4.el7.noarch
vdsm-4.17.0-1054.git562e711.el7.noarch
sanlock-3.2.2-2.el7.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Deploy hosted-engine using FC


Actual results:
The VM doesn't start. I checked its status and got the following exception:

[root@green-vdsb ovirt-hosted-engine-setup]# hosted-engine --vm-status
Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/vm_status.py", line 117, in <module>
    if not status_checker.print_status():
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/vm_status.py", line 60, in print_status
    all_host_stats = ha_cli.get_all_host_stats()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 157, in get_all_host_stats
    return self.get_all_stats(self.StatModes.HOST)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 102, in get_all_stats
    stats = broker.get_stats_from_storage(service)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 232, in get_stats_from_storage
    result = self._checked_communicate(request)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 260, in _checked_communicate
    .format(message or response))
ovirt_hosted_engine_ha.lib.exceptions.RequestError: Request failed: <type 'exceptions.OSError'>
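
For reference, a minimal sketch of the call chain the traceback goes through, not the actual vm_status.py code. It assumes the client class in ovirt_hosted_engine_ha.client.client is HAClient and that get_all_host_stats() returns a dict keyed by host id (the traceback only shows the instance name ha_cli); the OSError is raised while the broker reads the host stats from shared storage:

# Minimal sketch, assuming the client class is HAClient and that
# get_all_host_stats() returns a dict keyed by host id; only the instance
# name (ha_cli) appears in the traceback above.
from ovirt_hosted_engine_ha.client import client
from ovirt_hosted_engine_ha.lib import exceptions as ha_exceptions


def print_host_stats():
    ha_cli = client.HAClient()
    try:
        # get_all_host_stats() -> get_all_stats() -> broker.get_stats_from_storage(),
        # which is where the OSError surfaces on this FC setup.
        all_host_stats = ha_cli.get_all_host_stats()
    except ha_exceptions.RequestError as err:
        print('Cannot read host stats from the HA broker: %s' % err)
        return False
    for host_id, stats in all_host_stats.items():
        print('Host %s: %s' % (host_id, stats))
    return True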


So I started the VM manually:

[root@green-vdsb ovirt-hosted-engine-setup]# hosted-engine --vm-start

a7567b69-c29c-46a2-a643-799acf0b1a87
        Status = WaitForLaunch
        nicModel = rtl8139,pv
        statusTime = 4300682480
        emulatedMachine = rhel6.5.0
        pid = 0
        vmName = HostedEngine
        devices = [{'index': '2', 'iface': 'ide', 'specParams': {}, 'readonly': 'true', 'deviceId': '1a732367-113d-4e6a-8dcb-9adb45e3e1de', 'address': {'bus': '1', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'device': 'cdrom', 'shared': 'false', 'path': '', 'type': 'disk'}, {'index': '0', 'iface': 'virtio', 'format': 'raw', 'bootOrder': '1', 'poolID': '00000000-0000-0000-0000-000000000000', 'volumeID': '6a80ef55-6f15-492d-b962-123615bf27cf', 'imageID': 'df02f4f1-e1c7-474b-8075-b839e4bc1c95', 'specParams': {}, 'readonly': 'false', 'domainID': '4a5d3450-655b-452f-8dda-2ef7e051b1a8', 'optional': 'false', 'deviceId': 'df02f4f1-e1c7-474b-8075-b839e4bc1c95', 'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'disk', 'shared': 'exclusive', 'propagateErrors': 'off', 'type': 'disk'}, {'device': 'scsi', 'model': 'virtio-scsi', 'type': 'controller'}, {'nicModel': 'pv', 'macAddr': '00:16:3E:76:D5:D5', 'linkActive': 'true', 'network': 'ovirtmgmt', 'filter': 'vdsm-no-mac-spoofing', 'specParams': {}, 'deviceId': 'a4c22ecc-0e5b-4548-b10a-5ca884d22946', 'address': {'slot': '0x03', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'bridge', 'type': 'interface'}, {'device': 'console', 'specParams': {}, 'type': 'console', 'deviceId': '131f7a43-a609-4795-ba03-9f25f327f6f9', 'alias': 'console0'}]
        guestDiskMapping = {}
        vmType = kvm
        clientIp = 
        displaySecurePort = -1
        memSize = 4096
        displayPort = -1
        cpuType = Conroe
        spiceSecureChannels = smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
        smp = 2
        displayIp = 0
        display = vnc
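
As a side note (not part of the product), the devices field in that output is a plain Python literal, so it can be parsed to pick out the storage identifiers of the HE disk on the FC domain. A minimal sketch, with the literal abridged to the virtio disk entry shown above:

# Minimal sketch: inspect the HE disk entry from the `devices = [...]` field
# printed by `hosted-engine --vm-start` above. The literal below is an
# abridged copy of the virtio disk device from that output.
import ast

devices_literal = (
    "[{'index': '0', 'iface': 'virtio', 'format': 'raw', "
    "'volumeID': '6a80ef55-6f15-492d-b962-123615bf27cf', "
    "'imageID': 'df02f4f1-e1c7-474b-8075-b839e4bc1c95', "
    "'domainID': '4a5d3450-655b-452f-8dda-2ef7e051b1a8', "
    "'device': 'disk'}]"
)

for dev in ast.literal_eval(devices_literal):
    if dev.get('device') == 'disk' and dev.get('iface') == 'virtio':
        print('HE disk: domainID=%s imageID=%s volumeID=%s' % (
            dev['domainID'], dev['imageID'], dev['volumeID']))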


Checked the status again and got the same exception.

** Deployment was done with SELinux in permissive mode as a workaround for https://bugzilla.redhat.com/show_bug.cgi?id=1242448


Expected results:
The HE VM should start automatically once the deployment is finished.

Additional info:
sosreport: 
http://file.tlv.redhat.com/ebenahar/sosreport-green-vdsb.qa.lab.tlv.redhat.com-20150713144907.tar.xz

Comment 1 Simone Tiraboschi 2015-08-24 12:15:56 UTC

*** This bug has been marked as a duplicate of bug 1251752 ***

