Description of problem:
The environment was built from 2 HE hosts and an engine over NFS, plus an NFS SD for VMs. After several attempts, the HE-SD import eventually succeeded, but the subsequent HE-VM import failed:

2015-12-28 11:36:07,457 INFO  [org.ovirt.engine.core.bll.HostedEngineImporter] (org.ovirt.thread.pool-7-thread-12) [] Try to import the Hosted Engine VM 'VM [HostedEngine]'
2015-12-28 11:36:07,463 INFO  [org.ovirt.engine.core.bll.ImportVmCommand] (org.ovirt.thread.pool-7-thread-12) [5be2dc1d] Lock Acquired to object 'EngineLock:{exclusiveLocks='[HostedEngine=<VM_NAME, ACTION_TYPE_FAILED_NAME_ALREADY_USED>, d653d7c5-f09e-4ec3-8ba3-3595d34c48f5=<VM, ACTION_TYPE_FAILED_VM_IS_BEING_IMPORTED$VmName HostedEngine>]', sharedLocks='[d653d7c5-f09e-4ec3-8ba3-3595d34c48f5=<REMOTE_VM, ACTION_TYPE_FAILED_VM_IS_BEING_IMPORTED$VmName HostedEngine>]'}'
2015-12-28 11:36:07,477 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand] (org.ovirt.thread.pool-7-thread-12) [5be2dc1d] START, DoesImageExistVDSCommand( GetImageInfoVDSCommandParameters:{runAsync='true', storagePoolId='00000001-0001-0001-0001-0000000001bd', ignoreFailoverLimit='false', storageDomainId='df2356f7-8272-401a-97f7-63c14f37ec7a', imageGroupId='b7101f23-aaed-4a22-84c7-45e5c735a0fd', imageId='925942e8-e01e-4199-b881-bb70dcf6ee01'}), log id: 1640161f
2015-12-28 11:36:08,656 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (org.ovirt.thread.pool-7-thread-12) [5be2dc1d] IrsBroker::getImageInfo::Failed getting image info imageId='925942e8-e01e-4199-b881-bb70dcf6ee01' does not exist on domainName='hosted_storage', domainId='df2356f7-8272-401a-97f7-63c14f37ec7a', error code: 'VolumeDoesNotExist', message: Volume does not exist: (u'925942e8-e01e-4199-b881-bb70dcf6ee01',)
2015-12-28 11:36:08,656 WARN  [org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand] (org.ovirt.thread.pool-7-thread-12) [5be2dc1d] executeIrsBrokerCommand: getImageInfo on '925942e8-e01e-4199-b881-bb70dcf6ee01' threw an exception - assuming image doesn't exist: IRSGenericException: IRSErrorException: VolumeDoesNotExist
2015-12-28 11:36:08,656 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand] (org.ovirt.thread.pool-7-thread-12) [5be2dc1d] FINISH, DoesImageExistVDSCommand, return: false, log id: 1640161f
2015-12-28 11:36:08,656 WARN  [org.ovirt.engine.core.bll.ImportVmCommand] (org.ovirt.thread.pool-7-thread-12) [5be2dc1d] CanDoAction of action 'ImportVm' failed for user SYSTEM. Reasons: VAR__ACTION__IMPORT,VAR__TYPE__VM,ACTION_TYPE_FAILED_VM_IMAGE_DOES_NOT_EXIST
2015-12-28 11:36:08,656 INFO  [org.ovirt.engine.core.bll.ImportVmCommand] (org.ovirt.thread.pool-7-thread-12) [5be2dc1d] Lock freed to object 'EngineLock:{exclusiveLocks='[HostedEngine=<VM_NAME, ACTION_TYPE_FAILED_NAME_ALREADY_USED>, d653d7c5-f09e-4ec3-8ba3-3595d34c48f5=<VM, ACTION_TYPE_FAILED_VM_IS_BEING_IMPORTED$VmName HostedEngine>]', sharedLocks='[d653d7c5-f09e-4ec3-8ba3-3595d34c48f5=<REMOTE_VM, ACTION_TYPE_FAILED_VM_IS_BEING_IMPORTED$VmName HostedEngine>]'}'
2015-12-28 11:36:08,656 ERROR [org.ovirt.engine.core.bll.HostedEngineImporter] (org.ovirt.thread.pool-7-thread-12) [5be2dc1d] Failed importing the Hosted Engine VM
2015-12-28 11:36:22,564 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler_Worker-74) [1e4b7e01] START, FullListVDSCommand(HostName = , FullListVDSCommandParameters:{runAsync='true', hostId='bbf2916a-833d-476c-92a6-bee6ab753657', vds='Host[,bbf2916a-833d-476c-92a6-bee6ab753657]', vmIds='[d653d7c5-f09e-4ec3-8ba3-3595d34c48f5]'}), log id: 21c20489
2015-12-28 11:36:22,608 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler_Worker-74) [1e4b7e01] FINISH, FullListVDSCommand, return: [{guestFQDN=, emulatedMachine=rhel6.5.0, pid=46589, guestDiskMapping={QEMU_DVD-ROM_={name=/dev/sr0}, b7101f23-aaed-4a22-8={name=/dev/vda}}, displaySecurePort=-1, cpuType=SandyBridge, pauseCode=NOERR, smp=2, vmType=kvm, memSize=4096, vmName=HostedEngine, username=Unknown, vmId=d653d7c5-f09e-4ec3-8ba3-3595d34c48f5, displayIp=0, displayPort=5900, guestIPs=, spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir, nicModel=rtl8139,pv, devices=[Ljava.lang.Object;@7063d417, status=Up, clientIp=, statusTime=4785679250, display=vnc}], log id: 21c20489

Checked from the host that was not running the engine VM, but was SPM:

lvs | grep 925942e8-e01e-4199-b881-bb70dcf6ee01
  925942e8-e01e-4199-b881-bb70dcf6ee01 d356ba06-ca65-4afb-882f-f56fa092c155 -wi-a----- 25.00g

Version-Release number of selected component (if applicable):

Engine:
ovirt-engine-extension-aaa-jdbc-1.0.4-1.el6ev.noarch
ovirt-vmconsole-1.0.0-1.el6ev.noarch
ovirt-host-deploy-1.4.1-1.el6ev.noarch
ovirt-vmconsole-proxy-1.0.0-1.el6ev.noarch
ovirt-host-deploy-java-1.4.1-1.el6ev.noarch
rhevm-3.6.1.3-0.1.el6.noarch
Linux version 2.6.32-573.8.1.el6.x86_64 (mockbuild.eng.bos.redhat.com) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-16) (GCC) ) #1 SMP Fri Sep 25 19:24:22 EDT 2015

Hosts:
qemu-kvm-rhev-2.3.0-31.el7_2.5.x86_64
ovirt-host-deploy-1.4.1-1.el7ev.noarch
libvirt-client-1.2.17-13.el7_2.2.x86_64
ovirt-vmconsole-1.0.0-1.el7ev.noarch
vdsm-4.17.13-1.el7ev.noarch
sanlock-3.2.4-2.el7_2.x86_64
mom-0.5.1-1.el7ev.noarch
ovirt-vmconsole-host-1.0.0-1.el7ev.noarch
ovirt-hosted-engine-ha-1.3.3.5-1.el7ev.noarch
ovirt-hosted-engine-setup-1.3.1.3-1.el7ev.noarch
ovirt-setup-lib-1.0.0-1.el7ev.noarch
Linux version 3.10.0-327.4.4.el7.x86_64 (mockbuild.eng.bos.redhat.com) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-4) (GCC) ) #1 SMP Thu Dec 17 15:51:24 EST 2015

How reproducible:

Steps to Reproduce:
1. Deploy the HE-VM over NFS with 2 hosts.
2. Add one additional NFS SD to the environment.
3. Attach the HE-SD to the DC.
4. Check that when the HE-SD is attached and imported, the HE-VM import fails.

Actual results:
HE-VM import fails.

Expected results:
HE-VM import should succeed.

Additional info:
Sosreport from engine and hosts attached.
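For anyone triaging a similar failure, the engine's DoesImageExist probe can be re-run by hand against the same UUIDs taken from the log above. A minimal sketch, assuming the vdsClient getVolumeInfo verb with argument order sdUUID spUUID imgUUID volUUID:

# Ask VDSM directly whether the volume the engine is probing exists on
# the imported hosted_storage domain (UUIDs copied from the log above):
vdsClient -s 0 getVolumeInfo df2356f7-8272-401a-97f7-63c14f37ec7a \
    00000001-0001-0001-0001-0000000001bd \
    b7101f23-aaed-4a22-84c7-45e5c735a0fd \
    925942e8-e01e-4199-b881-bb70dcf6ee01

If this returns VolumeDoesNotExist while lvs on a host still shows the LV, the volume lives on a different, identically named domain, which is exactly the root cause identified later in this bug.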
Created attachment 1109995 [details] engine logs
Look for the "2015-12-28 11:38:10,136 ERROR [org.ovirt.engine.core.bll.HostedEngineImporter] (org.ovirt.thread.pool-7-thread-2) [34da26da] Failed importing the Hosted Engine VM" within the engine log.
For sosreport from the host follow the link: https://drive.google.com/a/redhat.com/file/d/0B85BEaDBcF88Y3VHeGRpNVR2UGs/view?usp=sharing
Please run the following on your host:

$ vdsClient -s 0 list
$ vdsClient -s 0 getImagesList <your hosted_domain ID>

I want to see which disk image ID we try to import, because this info comes from VDSM; I do not generate the ID myself.
[root@alma03 ~]# vdsClient -s 0 list

d653d7c5-f09e-4ec3-8ba3-3595d34c48f5
	Status = Up
	guestFQDN =
	emulatedMachine = rhel6.5.0
	spiceSecureChannels = smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
	guestDiskMapping = {'QEMU_DVD-ROM_': {'name': '/dev/sr0'}, 'b7101f23-aaed-4a22-8': {'name': '/dev/vda'}}
	displaySecurePort = -1
	cpuType = SandyBridge
	pauseCode = NOERR
	smp = 2
	vmType = kvm
	memSize = 4096
	vmName = HostedEngine
	username = Unknown
	pid = 46589
	displayIp = 0
	displayPort = 5900
	guestIPs =
	nicModel = rtl8139,pv
	devices = [{'device': 'console', 'specParams': {}, 'type': 'console', 'deviceId': 'b4033338-e298-47bc-b285-f29cc9a30ee2', 'alias': 'console0'}, {'device': 'memballoon', 'specParams': {'model': 'none'}, 'type': 'balloon', 'alias': 'balloon0'}, {'device': 'scsi', 'alias': 'scsi0', 'model': 'virtio-scsi', 'type': 'controller', 'address': {'slot': '0x04', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}}, {'device': 'vnc', 'specParams': {'spiceSecureChannels': 'smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir', 'displayIp': '0'}, 'type': 'graphics', 'port': '5900'}, {'nicModel': 'pv', 'macAddr': '00:16:3E:7B:B8:53', 'linkActive': True, 'network': 'ovirtmgmt', 'alias': 'net0', 'filter': 'vdsm-no-mac-spoofing', 'specParams': {}, 'deviceId': 'b5ba27f5-cda6-48cd-af82-144e6c2b44ff', 'address': {'slot': '0x03', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'bridge', 'type': 'interface', 'name': 'vnet0'}, {'index': '2', 'iface': 'ide', 'name': 'hdc', 'alias': 'ide0-1-0', 'specParams': {}, 'readonly': 'True', 'deviceId': '4ae909f0-d332-4152-8d4e-e0f91d4ba588', 'address': {'bus': '1', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'device': 'cdrom', 'shared': 'false', 'path': '', 'type': 'disk'}, {'poolID': '00000000-0000-0000-0000-000000000000', 'reqsize': '0', 'index': '0', 'iface': 'virtio', 'apparentsize': '26843545600', 'alias': 'virtio-disk0', 'imageID': 'b7101f23-aaed-4a22-84c7-45e5c735a0fd', 'readonly': 'False', 'shared': 'exclusive', 'truesize': '26843545600', 'type': 'disk', 'domainID': 'd356ba06-ca65-4afb-882f-f56fa092c155', 'volumeInfo': {'domainID': 'd356ba06-ca65-4afb-882f-f56fa092c155', 'volType': 'path', 'leaseOffset': 112197632, 'volumeID': '925942e8-e01e-4199-b881-bb70dcf6ee01', 'leasePath': '/dev/d356ba06-ca65-4afb-882f-f56fa092c155/leases', 'imageID': 'b7101f23-aaed-4a22-84c7-45e5c735a0fd', 'path': '/rhev/data-center/mnt/blockSD/d356ba06-ca65-4afb-882f-f56fa092c155/images/b7101f23-aaed-4a22-84c7-45e5c735a0fd/925942e8-e01e-4199-b881-bb70dcf6ee01'}, 'format': 'raw', 'deviceId': 'b7101f23-aaed-4a22-84c7-45e5c735a0fd', 'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'disk', 'path': '/var/run/vdsm/storage/d356ba06-ca65-4afb-882f-f56fa092c155/b7101f23-aaed-4a22-84c7-45e5c735a0fd/925942e8-e01e-4199-b881-bb70dcf6ee01', 'propagateErrors': 'off', 'optional': 'false', 'name': 'vda', 'bootOrder': '1', 'volumeID': '925942e8-e01e-4199-b881-bb70dcf6ee01', 'specParams': {}, 'volumeChain': [{'domainID': 'd356ba06-ca65-4afb-882f-f56fa092c155', 'volType': 'path', 'leaseOffset': 112197632, 'volumeID': '925942e8-e01e-4199-b881-bb70dcf6ee01', 'leasePath': '/dev/d356ba06-ca65-4afb-882f-f56fa092c155/leases', 'imageID': 'b7101f23-aaed-4a22-84c7-45e5c735a0fd', 'path': '/rhev/data-center/mnt/blockSD/d356ba06-ca65-4afb-882f-f56fa092c155/images/b7101f23-aaed-4a22-84c7-45e5c735a0fd/925942e8-e01e-4199-b881-bb70dcf6ee01'}]}, {'device': 'usb', 'alias': 'usb', 'type': 'controller', 'address': {'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x2'}}, {'device': 'ide', 'alias': 'ide', 'type': 'controller', 'address': {'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x1'}}, {'device': 'virtio-serial', 'alias': 'virtio-serial0', 'type': 'controller', 'address': {'slot': '0x05', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}}, {'device': 'unix', 'alias': 'channel0', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '1'}}, {'device': 'unix', 'alias': 'channel1', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '2'}}, {'device': 'unix', 'alias': 'channel2', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '3'}}, {'device': '', 'alias': 'video0', 'type': 'video', 'address': {'slot': '0x02', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}}]
	clientIp =
	statusTime = 5316668340
	display = vnc

# vdsClient -s 0 getImagesList 514f0c5a516000f4
Storage domain does not exist: ('514f0c5a516000f4',)

Looks like the auto-import simply used the wrong ID; it should have been importing from 514f0c5a516000f4, but it imported the wrong one.
> 'format': 'raw', 'deviceId': 'b7101f23-aaed-4a22-84c7-45e5c735a0fd',
> 'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x0000', 'type':
> 'pci', 'function': '0x0'}, 'device': 'disk', 'path':
> '/var/run/vdsm/storage/d356ba06-ca65-4afb-882f-f56fa092c155/b7101f23-aaed-
> 4a22-84c7-45e5c735a0fd/925942e8-e01e-4199-b881-bb70dcf6ee01',

This path is clearly pointing at domain d356ba06..., yet this is what the engine imported and activated, from engine.log:

2015-12-28 11:32:40,075 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] (org.ovirt.thread.pool-7-thread-11) [42f80da0] START, ActivateStorageDomainVDSCommand( ActivateStorageDomainVDSCommandParameters:{runAsync='true', storagePoolId='00000001-0001-0001-0001-0000000001bd', ignoreFailoverLimit='false', storageDomainId='df2356f7-8272-401a-97f7-63c14f37ec7a'}), log id: 1e4b6490
2015-12-28 11:32:42,919 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] (org.ovirt.thread.pool-7-thread-11) [42f80da0] FINISH, ActivateStorageDomainVDSCommand, return: true, log id: 1e4b6490

Please use df2356f7-8272-401a-97f7-63c14f37ec7a for the getImagesList. We should also look into the vdsm log to understand why it threw this error, while VDSM itself clearly reports that this image is the hosted engine disk image.
(In reply to Roy Golan from comment #6)
> This path is clearly pointing at domain d356ba06..., yet this is what the
> engine imported and activated, from engine.log:
>
> Please use df2356f7-8272-401a-97f7-63c14f37ec7a for the getImagesList. We
> should also look into the vdsm log to understand why it threw this error,
> while VDSM itself clearly reports that this image is the hosted engine
> disk image.

# vdsClient -s 0 getImagesList df2356f7-8272-401a-97f7-63c14f37ec7a
cbba0291-afbe-4713-b9d1-1af36363ce5c
621c6b1f-e5e1-4488-aec6-44b4d807f569
eb8b6c74-abef-4b90-834c-05d91ea56b39
874df8ef-295d-4167-b343-c5ae181c3258
32ede857-3e34-4dd3-b8c3-e9ef269a2659
ea0fd41d-5555-419a-9cde-d39aacf8a49b

# vdsClient -s 0 list

d653d7c5-f09e-4ec3-8ba3-3595d34c48f5
	Status = Up
	guestFQDN =
	emulatedMachine = rhel6.5.0
	spiceSecureChannels = smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
	guestDiskMapping = {'QEMU_DVD-ROM_': {'name': '/dev/sr0'}, 'b7101f23-aaed-4a22-8': {'name': '/dev/vda'}}
	displaySecurePort = -1
	cpuType = SandyBridge
	pauseCode = NOERR
	smp = 2
	vmType = kvm
	memSize = 4096
	vmName = HostedEngine
	username = Unknown
	pid = 46589
	displayIp = 0
	displayPort = 5900
	guestIPs =
	nicModel = rtl8139,pv
	devices = [{'device': 'console', 'specParams': {}, 'type': 'console', 'deviceId': 'b4033338-e298-47bc-b285-f29cc9a30ee2', 'alias': 'console0'}, {'device': 'memballoon', 'specParams': {'model': 'none'}, 'type': 'balloon', 'alias': 'balloon0'}, {'device': 'scsi', 'alias': 'scsi0', 'model': 'virtio-scsi', 'type': 'controller', 'address': {'slot': '0x04', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}}, {'device': 'vnc', 'specParams': {'spiceSecureChannels': 'smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir', 'displayIp': '0'}, 'type': 'graphics', 'port': '5900'}, {'nicModel': 'pv', 'macAddr': '00:16:3E:7B:B8:53', 'linkActive': True, 'network': 'ovirtmgmt', 'alias': 'net0', 'filter': 'vdsm-no-mac-spoofing', 'specParams': {}, 'deviceId': 'b5ba27f5-cda6-48cd-af82-144e6c2b44ff', 'address': {'slot': '0x03', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'bridge', 'type': 'interface', 'name': 'vnet0'}, {'index': '2', 'iface': 'ide', 'name': 'hdc', 'alias': 'ide0-1-0', 'specParams': {}, 'readonly': 'True', 'deviceId': '4ae909f0-d332-4152-8d4e-e0f91d4ba588', 'address': {'bus': '1', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'device': 'cdrom', 'shared': 'false', 'path': '', 'type': 'disk'}, {'poolID': '00000000-0000-0000-0000-000000000000', 'reqsize': '0', 'index': '0', 'iface': 'virtio', 'apparentsize': '26843545600', 'alias': 'virtio-disk0', 'imageID': 'b7101f23-aaed-4a22-84c7-45e5c735a0fd', 'readonly': 'False', 'shared': 'exclusive', 'truesize': '26843545600', 'type': 'disk', 'domainID': 'd356ba06-ca65-4afb-882f-f56fa092c155', 'volumeInfo': {'domainID': 'd356ba06-ca65-4afb-882f-f56fa092c155', 'volType': 'path', 'leaseOffset': 112197632, 'volumeID': '925942e8-e01e-4199-b881-bb70dcf6ee01', 'leasePath': '/dev/d356ba06-ca65-4afb-882f-f56fa092c155/leases', 'imageID': 'b7101f23-aaed-4a22-84c7-45e5c735a0fd', 'path': '/rhev/data-center/mnt/blockSD/d356ba06-ca65-4afb-882f-f56fa092c155/images/b7101f23-aaed-4a22-84c7-45e5c735a0fd/925942e8-e01e-4199-b881-bb70dcf6ee01'}, 'format': 'raw', 'deviceId': 'b7101f23-aaed-4a22-84c7-45e5c735a0fd', 'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'disk', 'path': '/var/run/vdsm/storage/d356ba06-ca65-4afb-882f-f56fa092c155/b7101f23-aaed-4a22-84c7-45e5c735a0fd/925942e8-e01e-4199-b881-bb70dcf6ee01', 'propagateErrors': 'off', 'optional': 'false', 'name': 'vda', 'bootOrder': '1', 'volumeID': '925942e8-e01e-4199-b881-bb70dcf6ee01', 'specParams': {}, 'volumeChain': [{'domainID': 'd356ba06-ca65-4afb-882f-f56fa092c155', 'volType': 'path', 'leaseOffset': 112197632, 'volumeID': '925942e8-e01e-4199-b881-bb70dcf6ee01', 'leasePath': '/dev/d356ba06-ca65-4afb-882f-f56fa092c155/leases', 'imageID': 'b7101f23-aaed-4a22-84c7-45e5c735a0fd', 'path': '/rhev/data-center/mnt/blockSD/d356ba06-ca65-4afb-882f-f56fa092c155/images/b7101f23-aaed-4a22-84c7-45e5c735a0fd/925942e8-e01e-4199-b881-bb70dcf6ee01'}]}, {'device': 'usb', 'alias': 'usb', 'type': 'controller', 'address': {'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x2'}}, {'device': 'ide', 'alias': 'ide', 'type': 'controller', 'address': {'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x1'}}, {'device': 'virtio-serial', 'alias': 'virtio-serial0', 'type': 'controller', 'address': {'slot': '0x05', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}}, {'device': 'unix', 'alias': 'channel0', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '1'}}, {'device': 'unix', 'alias': 'channel1', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '2'}}, {'device': 'unix', 'alias': 'channel2', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '3'}}, {'device': '', 'alias': 'video0', 'type': 'video', 'address': {'slot': '0x02', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}}]
	clientIp =
	statusTime = 5323943570
	display = vnc
Created attachment 1111183 [details] vdsm.log from host with active HE-VM on it.
Further to my discussion with Roy, I've found the root cause:

The same 3 hosts "see" 3 different LUNs, each with an HE-VM deployed on it (nsednev-he-1, nsednev-he-2, nsednev-he-3). The auto-import should have imported its SD from 3514f0c5a516000f4 (which belongs to nsednev-he-1), but it instead imported from 3514f0c5a51600810 (which belongs to nsednev-he-2). The problem is that the auto-import can't differentiate between several identically named HE-SDs, as they all carry the default name "hosted_storage". As a result, an HE-SD was eventually imported, but the VM was not found there, as it never existed on that domain.

# vdsClient -s 0 getStorageDomainInfo d356ba06-ca65-4afb-882f-f56fa092c155
	uuid = d356ba06-ca65-4afb-882f-f56fa092c155
	vguuid = 4MuaSU-Iymt-yFAi-Ph2J-LhgE-xX3o-Z1AQxf
	state = OK
	version = 3
	role = Regular
	type = ISCSI
	class = Data
	pool = []
	name = hosted_storage

Such a scenario might happen if:
1) A customer deploys multiple HE-VMs, and all of their hosts have connectivity to every iSCSI LUN on which the HE-VMs are deployed.
2) More than one HE-VM deployment was made with the default name "hosted_storage".

Please consider choosing the HE-SD by a unique identifier, or, if more than one HE-SD exists with the same name ("hosted_storage") as the one being imported, warn the customer or provide the ability to select which LUN to import from. A quick way to spot the collision is shown below.
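To make the collision visible, one can dump the name of every storage domain a host sees. A minimal sketch built on the same vdsClient verbs used above (getStorageDomainsList is assumed to print one SD UUID per line):

# List every SD visible to this host with its UUID, name and type:
for sd in $(vdsClient -s 0 getStorageDomainsList); do
    echo "=== $sd ==="
    vdsClient -s 0 getStorageDomainInfo "$sd" | egrep 'name|type'
done

Two or more UUIDs reporting name = hosted_storage would confirm the ambiguity described above.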
I see this mainly as a dirty environment. In a realistic scenario we should expect clean storage with a single hosted_storage LUN.
Created attachment 1115588 [details] sosreport from the engine
This is still a corner case that requires all of the following conditions:
1. Multiple HE setups.
2. Shared storage used across those setups.
3. iSCSI storage.

Hence, I'd settle for a proper KB article for this corner case.
Workaround:
- At ovirt-hosted-engine-setup, set the name of the hosted engine SD to be other than the default "hosted_storage".
- Once the engine setup is done, use:
  $ engine-config -s setHostedEngineStorageDomainName=SD_NAME
- Restart ovirt-engine to reload the config.
> $ engine-config -s setHostedEngineStorageDomainName=SD_NAME

Correction: the key name is HostedEngineStorageDomainName, i.e.

$ engine-config -s HostedEngineStorageDomainName=SD_NAME
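Putting the correction together, the intended post-setup sequence would look like the sketch below. Note that, as the following comments show, the key is not yet shipped in 3.6.2 builds, so this only works on an engine build that has it:

# Corrected workaround sequence (sketch; requires an engine build that
# ships the HostedEngineStorageDomainName config key):
engine-config -s HostedEngineStorageDomainName=SD_NAME
service ovirt-engine restart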
(In reply to Roy Golan from comment #15)
> Correction: the key name is HostedEngineStorageDomainName

# engine-config -s HostedEngineStorageDomainName="hosted_storage2"
Error setting HostedEngineStorageDomainName's value. No such entry.
# engine-config -s setHostedEngineStorageDomainName="hosted_storage2"
Error setting setHostedEngineStorageDomainName's value. No such entry.
# engine-config -s setHostedEngineStorageDomainName=hosted_storage2
Error setting setHostedEngineStorageDomainName's value. No such entry.
# engine-config -s HostedEngineStorageDomainName=hosted_storage2
Error setting HostedEngineStorageDomainName's value. No such entry.

There is no such parameter in engine-config, so the workaround is not working. As a result, I've got an HE-VM installed on Red Hat Enterprise Virtualization Hypervisor (Beta) release 7.2 (20160113.0.el7ev) with rhevm-appliance-20160120.0-1, taken manually during the TUI HE deployment. The iSCSI HE-SD was not imported, but a data SD was successfully added and I've created a working guest VM for further tasks.

Engine:
rhevm-3.6.2.6-0.1.el6.noarch

Host:
rhevm-sdk-python-3.6.2.1-1.el7ev.noarch
sanlock-3.2.4-1.el7.x86_64
libvirt-client-1.2.17-13.el7_2.2.x86_64
mom-0.5.1-1.el7ev.noarch
vdsm-4.17.17-0.el7ev.noarch
(In reply to Nikolai Sednev from comment #16)
> There is no such parameter in engine-config, so the workaround is not
> working.

My bad, the patch[1] that makes this key usable via engine-config is in for 3.6.3, for Bug 1290478. For now, you can try to apply it manually:

- $ su - postgres
- $ psql engine -c "select fn_db_add_config_value('HostedEngineStorageDomainName','hosted_storage','general');"
- $ exit
- $ engine-config -s HostedEngineStorageDomainName=SD_NAME
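If engine-config still reports "No such entry", it may help to check what actually landed in the engine's config table, since fn_db_add_config_value writes to vdc_options. A sketch, assuming the standard vdc_options table with option_name/option_value/version columns:

# Inspect the row added by fn_db_add_config_value (run as root):
su - postgres -c "psql engine -c \"select option_name, option_value, version \
  from vdc_options where option_name = 'HostedEngineStorageDomainName';\""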
Hi Roy,
Can you please fill in the "Fixed In Version" field accordingly?

This did not work for me:

# su - postgres
-bash-4.1$ psql engine -c "select fn_db_add_config_value('HostedEngineStorageDomainName','hosted_storage','general');"
 fn_db_add_config_value
------------------------

(1 row)

-bash-4.1$ exit
logout
# engine-config -s HostedEngineStorageDomainName="hosted_storage2"
Error setting HostedEngineStorageDomainName's value. No such entry with version general.
Proper zoning would have eliminated this, reducing severity.
There is a more robust and simple way to detect the domain, based on its ID rather than its name: on VM import we already have all the disk details from VDSM, and the disk carries the domain ID. We simply need to use that instead of a name; the idea is sketched below. Since this bug is medium severity and the fix will touch the auto-import flow, it should land in the 2nd z-stream version.
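At the command line, the same principle can be demonstrated with the data already in this bug: the HE VM's disk device carries the domain ID, so the right SD is identifiable without ever consulting its name. A rough sketch (the grep pattern assumes the dict-style dump format shown in the comments above):

# Pull the domainID of the HE VM's disk straight from the VDSM dump;
# this is the SD the auto-import should pick, whatever it is named.
vdsClient -s 0 list | grep -o "'domainID': '[0-9a-f-]*'" | sort -u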
*** Bug 1304611 has been marked as a duplicate of this bug. ***
(In reply to Nikolai Sednev from comment #18)
> -bash-4.1$ exit
> logout
> # engine-config -s HostedEngineStorageDomainName="hosted_storage2"
> Error setting HostedEngineStorageDomainName's value. No such entry with
> version general.

Can you try:

echo HostedEngineStorageDomainName | engine-config -s HostedEngineStorageDomainName="hosted_storage2" -p /dev/stdin
Hello,

A database query of the config gives no result:

# su - postgres
-bash-4.1$ psql engine -c "select fn_db_add_config_value ('HostedEngineStorageDomainName', 'hosted_storage', 'general');"
 fn_db_add_config_value
------------------------

(1 row)

Also, engine-config seems not to have the indicated key (HostedEngineStorageDomainName). The only two keys it reports are these:

# engine-config -l | grep Hosted
HostedEngineVmName: The name of the Hosted Engine VM. That name will be used to perform exclusive operation by ovirt-engine on that VM. (Value Type: String)
AutoImportHostedEngine: "Try to automatically import the hosted engine VM and its storage domain" (Value Type: Boolean)

The attempt to add the HostedEngineStorageDomainName key finishes with the following error:

# echo HostedEngineStorageDomainName | engine-config -s HostedEngineStorageDomainName="stg-data-fc-he-0001" -p /dev/stdin
Key for add operation must be defined!
*** Bug 1311693 has been marked as a duplicate of this bug. ***
I no longer have an environment with this issue reproduced. Please take a look at the logs and, if no problem is found, please close the bug.
Hi all, hosted-engine --deploy no longer asks for a name for the storage domain, or even for a name for the HostedEngine VM. As these values cannot be changed anymore, I think this bug can be closed.
As mentioned in comment 26, the latest installer no longer allows this error flow. Closing the issue.
*** This bug has been marked as a duplicate of bug 1301105 ***