Bug 1294457 - Failed importing the Hosted Engine VM | VolumeDoesNotExist
Status: CLOSED DUPLICATE of bug 1301105
Product: ovirt-engine
Classification: oVirt
Component: BLL.HostedEngine
3.6.1.3
x86_64 Linux
medium Severity medium (vote)
: ovirt-4.1.0-alpha
: ---
Assigned To: Roy Golan
Nikolai Sednev
https://drive.google.com/a/redhat.com...
: Reopened
: 1304611 1311693
Depends On:
Blocks:
 
Reported: 2015-12-28 04:57 EST by Nikolai Sednev
Modified: 2016-07-17 06:48 EDT (History)
14 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-07-17 06:05:50 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: SLA
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
dfediuck: ovirt-4.1+
mgoldboi: planning_ack+
rgolan: devel_ack+
mavital: testing_ack+


Attachments
engine logs (9.04 MB, application/x-xz)
2015-12-28 05:01 EST, Nikolai Sednev
no flags Details
vdsm.log from host with active HE-VM on it. (7.18 MB, text/plain)
2016-01-03 10:13 EST, Nikolai Sednev
no flags Details
sosreport from the engine (9.32 MB, application/x-xz)
2016-01-17 07:37 EST, Nikolai Sednev
no flags Details

Description Nikolai Sednev 2015-12-28 04:57:27 EST
Description of problem:
The environment was built from 2 HE-hosts and the engine over NFS, plus an NFS SD for VMs.
After several attempts to import the HE-SD it eventually succeeded, but the subsequent HE-VM import failed:
2015-12-28 11:36:07,457 INFO  [org.ovirt.engine.core.bll.HostedEngineImporter] (org.ovirt.thread.pool-7-thread-12) [] Try to import the Hosted Engine VM 'VM [HostedEngine]'
2015-12-28 11:36:07,463 INFO  [org.ovirt.engine.core.bll.ImportVmCommand] (org.ovirt.thread.pool-7-thread-12) [5be2dc1d] Lock Acquired to object 'EngineLock:{exclusiveLocks='[HostedEngine=<VM_NAME, ACTION_TYPE_FAILED_NAME_ALREADY_USED>, d653d7c5-f09e-4ec3-8ba3-3595d34c48f5=<VM, ACTION_TYPE_FAILED_VM_IS_BEING_IMPORTED$VmName HostedEngine>]', sharedLocks='[d653d7c5-f09e-4ec3-8ba3-3595d34c48f5=<REMOTE_VM, ACTION_TYPE_FAILED_VM_IS_BEING_IMPORTED$VmName HostedEngine>]'}'
2015-12-28 11:36:07,477 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand] (org.ovirt.thread.pool-7-thread-12) [5be2dc1d] START, DoesImageExistVDSCommand( GetImageInfoVDSCommandParameters:{runAsync='true', storagePoolId='00000001-0001-0001-0001-0000000001bd', ignoreFailoverLimit='false', storageDomainId='df2356f7-8272-401a-97f7-63c14f37ec7a', imageGroupId='b7101f23-aaed-4a22-84c7-45e5c735a0fd', imageId='925942e8-e01e-4199-b881-bb70dcf6ee01'}), log id: 1640161f
2015-12-28 11:36:08,656 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (org.ovirt.thread.pool-7-thread-12) [5be2dc1d] IrsBroker::getImageInfo::Failed getting image info imageId='925942e8-e01e-4199-b881-bb70dcf6ee01' does not exist on domainName='hosted_storage', domainId='df2356f7-8272-401a-97f7-63c14f37ec7a', error code: 'VolumeDoesNotExist', message: Volume does not exist: (u'925942e8-e01e-4199-b881-bb70dcf6ee01',)
2015-12-28 11:36:08,656 WARN  [org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand] (org.ovirt.thread.pool-7-thread-12) [5be2dc1d] executeIrsBrokerCommand: getImageInfo on '925942e8-e01e-4199-b881-bb70dcf6ee01' threw an exception - assuming image doesn't exist: IRSGenericException: IRSErrorException: VolumeDoesNotExist
2015-12-28 11:36:08,656 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand] (org.ovirt.thread.pool-7-thread-12) [5be2dc1d] FINISH, DoesImageExistVDSCommand, return: false, log id: 1640161f
2015-12-28 11:36:08,656 WARN  [org.ovirt.engine.core.bll.ImportVmCommand] (org.ovirt.thread.pool-7-thread-12) [5be2dc1d] CanDoAction of action 'ImportVm' failed for user SYSTEM. Reasons: VAR__ACTION__IMPORT,VAR__TYPE__VM,ACTION_TYPE_FAILED_VM_IMAGE_DOES_NOT_EXIST
2015-12-28 11:36:08,656 INFO  [org.ovirt.engine.core.bll.ImportVmCommand] (org.ovirt.thread.pool-7-thread-12) [5be2dc1d] Lock freed to object 'EngineLock:{exclusiveLocks='[HostedEngine=<VM_NAME, ACTION_TYPE_FAILED_NAME_ALREADY_USED>, d653d7c5-f09e-4ec3-8ba3-3595d34c48f5=<VM, ACTION_TYPE_FAILED_VM_IS_BEING_IMPORTED$VmName HostedEngine>]', sharedLocks='[d653d7c5-f09e-4ec3-8ba3-3595d34c48f5=<REMOTE_VM, ACTION_TYPE_FAILED_VM_IS_BEING_IMPORTED$VmName HostedEngine>]'}'
2015-12-28 11:36:08,656 ERROR [org.ovirt.engine.core.bll.HostedEngineImporter] (org.ovirt.thread.pool-7-thread-12) [5be2dc1d] Failed importing the Hosted Engine VM
2015-12-28 11:36:22,564 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler_Worker-74) [1e4b7e01] START, FullListVDSCommand(HostName = , FullListVDSCommandParameters:{runAsync='true', hostId='bbf2916a-833d-476c-92a6-bee6ab753657', vds='Host[,bbf2916a-833d-476c-92a6-bee6ab753657]', vmIds='[d653d7c5-f09e-4ec3-8ba3-3595d34c48f5]'}), log id: 21c20489
2015-12-28 11:36:22,608 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler_Worker-74) [1e4b7e01] FINISH, FullListVDSCommand, return: [{guestFQDN=, emulatedMachine=rhel6.5.0, pid=46589, guestDiskMapping={QEMU_DVD-ROM_={name=/dev/sr0}, b7101f23-aaed-4a22-8={name=/dev/vda}}, displaySecurePort=-1, cpuType=SandyBridge, pauseCode=NOERR, smp=2, vmType=kvm, memSize=4096, vmName=HostedEngine, username=Unknown, vmId=d653d7c5-f09e-4ec3-8ba3-3595d34c48f5, displayIp=0, displayPort=5900, guestIPs=, spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir, nicModel=rtl8139,pv, devices=[Ljava.lang.Object;@7063d417, status=Up, clientIp=, statusTime=4785679250, display=vnc}], log id: 21c20489

Checked from the host that was not running the engine VM, but was the SPM:
lvs | grep 925942e8-e01e-4199-b881-bb70dcf6ee01
  925942e8-e01e-4199-b881-bb70dcf6ee01 d356ba06-ca65-4afb-882f-f56fa092c155 -wi-a-----  25.00g    


Version-Release number of selected component (if applicable):
Engine:
ovirt-engine-extension-aaa-jdbc-1.0.4-1.el6ev.noarch
ovirt-vmconsole-1.0.0-1.el6ev.noarch
ovirt-host-deploy-1.4.1-1.el6ev.noarch
ovirt-vmconsole-proxy-1.0.0-1.el6ev.noarch
ovirt-host-deploy-java-1.4.1-1.el6ev.noarch
rhevm-3.6.1.3-0.1.el6.noarch
Linux version 2.6.32-573.8.1.el6.x86_64 (mockbuild@x86-033.build.eng.bos.redhat.com) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-16) (GCC) ) #1 SMP Fri Sep 25 19:24:22 EDT 2015

Hosts:
qemu-kvm-rhev-2.3.0-31.el7_2.5.x86_64
ovirt-host-deploy-1.4.1-1.el7ev.noarch
libvirt-client-1.2.17-13.el7_2.2.x86_64
ovirt-vmconsole-1.0.0-1.el7ev.noarch
vdsm-4.17.13-1.el7ev.noarch
sanlock-3.2.4-2.el7_2.x86_64
mom-0.5.1-1.el7ev.noarch
ovirt-vmconsole-host-1.0.0-1.el7ev.noarch
ovirt-hosted-engine-ha-1.3.3.5-1.el7ev.noarch
ovirt-hosted-engine-setup-1.3.1.3-1.el7ev.noarch
ovirt-setup-lib-1.0.0-1.el7ev.noarch
Linux version 3.10.0-327.4.4.el7.x86_64 (mockbuild@x86-019.build.eng.bos.redhat.com) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-4) (GCC) ) #1 SMP Thu Dec 17 15:51:24 EST 2015


How reproducible:


Steps to Reproduce:
1. Deploy the HE-VM over NFS with 2 hosts.
2. Add one additional NFS SD to the environment.
3. Attach the HE-SD to the DC.
4. Check that when the HE-SD is attached and imported, the HE-VM import fails.

Actual results:
HE-VM import fails.

Expected results:
HE-VM import should succeed.

Additional info:
Sosreport from engine and hosts attached.
Comment 1 Nikolai Sednev 2015-12-28 05:01 EST
Created attachment 1109995 [details]
engine logs
Comment 2 Nikolai Sednev 2015-12-28 05:10:20 EST
Look for the "2015-12-28 11:38:10,136 ERROR [org.ovirt.engine.core.bll.HostedEngineImporter] (org.ovirt.thread.pool-7-thread-2) [34da26da] Failed importing the Hosted Engine VM" within the engine log.
Comment 3 Nikolai Sednev 2015-12-28 05:15:54 EST
For sosreport from the host follow the link: https://drive.google.com/a/redhat.com/file/d/0B85BEaDBcF88Y3VHeGRpNVR2UGs/view?usp=sharing
Comment 4 Roy Golan 2016-01-03 04:06:22 EST
Please post the output of these commands from your host:
  $ vdsClient -s 0 list

and

  $ vdsClient -s 0 getImagesList <your hosted_domain ID>

I want to see which disk image ID we try to import, because this info comes from vdsm; I do not generate the ID myself.
Comment 5 Nikolai Sednev 2016-01-03 08:23:39 EST
[root@alma03 ~]# vdsClient -s 0 list

d653d7c5-f09e-4ec3-8ba3-3595d34c48f5
        Status = Up
        guestFQDN = 
        emulatedMachine = rhel6.5.0
        spiceSecureChannels = smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
        guestDiskMapping = {'QEMU_DVD-ROM_': {'name': '/dev/sr0'}, 'b7101f23-aaed-4a22-8': {'name': '/dev/vda'}}
        displaySecurePort = -1
        cpuType = SandyBridge
        pauseCode = NOERR
        smp = 2
        vmType = kvm
        memSize = 4096
        vmName = HostedEngine
        username = Unknown
        pid = 46589
        displayIp = 0
        displayPort = 5900
        guestIPs = 
        nicModel = rtl8139,pv
        devices = [{'device': 'console', 'specParams': {}, 'type': 'console', 'deviceId': 'b4033338-e298-47bc-b285-f29cc9a30ee2', 'alias': 'console0'}, {'device': 'memballoon', 'specParams': {'model': 'none'}, 'type': 'balloon', 'alias': 'balloon0'}, {'device': 'scsi', 'alias': 'scsi0', 'model': 'virtio-scsi', 'type': 'controller', 'address': {'slot': '0x04', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}}, {'device': 'vnc', 'specParams': {'spiceSecureChannels': 'smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir', 'displayIp': '0'}, 'type': 'graphics', 'port': '5900'}, {'nicModel': 'pv', 'macAddr': '00:16:3E:7B:B8:53', 'linkActive': True, 'network': 'ovirtmgmt', 'alias': 'net0', 'filter': 'vdsm-no-mac-spoofing', 'specParams': {}, 'deviceId': 'b5ba27f5-cda6-48cd-af82-144e6c2b44ff', 'address': {'slot': '0x03', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'bridge', 'type': 'interface', 'name': 'vnet0'}, {'index': '2', 'iface': 'ide', 'name': 'hdc', 'alias': 'ide0-1-0', 'specParams': {}, 'readonly': 'True', 'deviceId': '4ae909f0-d332-4152-8d4e-e0f91d4ba588', 'address': {'bus': '1', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'device': 'cdrom', 'shared': 'false', 'path': '', 'type': 'disk'}, {'poolID': '00000000-0000-0000-0000-000000000000', 'reqsize': '0', 'index': '0', 'iface': 'virtio', 'apparentsize': '26843545600', 'alias': 'virtio-disk0', 'imageID': 'b7101f23-aaed-4a22-84c7-45e5c735a0fd', 'readonly': 'False', 'shared': 'exclusive', 'truesize': '26843545600', 'type': 'disk', 'domainID': 'd356ba06-ca65-4afb-882f-f56fa092c155', 'volumeInfo': {'domainID': 'd356ba06-ca65-4afb-882f-f56fa092c155', 'volType': 'path', 'leaseOffset': 112197632, 'volumeID': '925942e8-e01e-4199-b881-bb70dcf6ee01', 'leasePath': '/dev/d356ba06-ca65-4afb-882f-f56fa092c155/leases', 'imageID': 'b7101f23-aaed-4a22-84c7-45e5c735a0fd', 'path': 
'/rhev/data-center/mnt/blockSD/d356ba06-ca65-4afb-882f-f56fa092c155/images/b7101f23-aaed-4a22-84c7-45e5c735a0fd/925942e8-e01e-4199-b881-bb70dcf6ee01'}, 'format': 'raw', 'deviceId': 'b7101f23-aaed-4a22-84c7-45e5c735a0fd', 'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'disk', 'path': '/var/run/vdsm/storage/d356ba06-ca65-4afb-882f-f56fa092c155/b7101f23-aaed-4a22-84c7-45e5c735a0fd/925942e8-e01e-4199-b881-bb70dcf6ee01', 'propagateErrors': 'off', 'optional': 'false', 'name': 'vda', 'bootOrder': '1', 'volumeID': '925942e8-e01e-4199-b881-bb70dcf6ee01', 'specParams': {}, 'volumeChain': [{'domainID': 'd356ba06-ca65-4afb-882f-f56fa092c155', 'volType': 'path', 'leaseOffset': 112197632, 'volumeID': '925942e8-e01e-4199-b881-bb70dcf6ee01', 'leasePath': '/dev/d356ba06-ca65-4afb-882f-f56fa092c155/leases', 'imageID': 'b7101f23-aaed-4a22-84c7-45e5c735a0fd', 'path': '/rhev/data-center/mnt/blockSD/d356ba06-ca65-4afb-882f-f56fa092c155/images/b7101f23-aaed-4a22-84c7-45e5c735a0fd/925942e8-e01e-4199-b881-bb70dcf6ee01'}]}, {'device': 'usb', 'alias': 'usb', 'type': 'controller', 'address': {'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x2'}}, {'device': 'ide', 'alias': 'ide', 'type': 'controller', 'address': {'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x1'}}, {'device': 'virtio-serial', 'alias': 'virtio-serial0', 'type': 'controller', 'address': {'slot': '0x05', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}}, {'device': 'unix', 'alias': 'channel0', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '1'}}, {'device': 'unix', 'alias': 'channel1', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '2'}}, {'device': 'unix', 'alias': 'channel2', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '3'}}, {'device': 
'', 'alias': 'video0', 'type': 'video', 'address': {'slot': '0x02', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}}]
        clientIp = 
        statusTime = 5316668340
        display = vnc
# vdsClient -s 0 getImagesList 514f0c5a516000f4
Storage domain does not exist: ('514f0c5a516000f4',)

Looks like auto-import simply imported the wrong volume: it should have been importing from 514f0c5a516000f4, but it imported the wrong one.
Comment 6 Roy Golan 2016-01-03 08:50:58 EST
> 'format': 'raw', 'deviceId': 'b7101f23-aaed-4a22-84c7-45e5c735a0fd',
> 'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x0000', 'type':
> 'pci', 'function': '0x0'}, 'device': 'disk', 'path':
> '/var/run/vdsm/storage/d356ba06-ca65-4afb-882f-f56fa092c155/b7101f23-aaed-
> 4a22-84c7-45e5c735a0fd/925942e8-e01e-4199-b881-bb70dcf6ee01',

this path is clearly pointing at domain d356ba06...

and this is what the engine imported and activated, from the engine.log:


2015-12-28 11:32:40,075 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] (org.ovirt.thread.pool-7-thread-11) [42f80da0] START, ActivateStorageDomainVDSCommand( ActivateStorageDomainVDSCommandParameters:{runAsync='true', storagePoolId='00000001-0001-0001-0001-0000000001bd', ignoreFailoverLimit='false', storageDomainId='df2356f7-8272-401a-97f7-63c14f37ec7a'}), log id: 1e4b6490
2015-12-28 11:32:42,919 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] (org.ovirt.thread.pool-7-thread-11) [42f80da0] FINISH, ActivateStorageDomainVDSCommand, return: true, log id: 1e4b6490


Please use df2356f7-8272-401a-97f7-63c14f37ec7a for the getImagesList. And we should look into the vdsm log to understand why it threw this error, while vdsm itself obviously reports that this image is the hosted engine disk image.
Comment 7 Nikolai Sednev 2016-01-03 10:07:42 EST
(In reply to Roy Golan from comment #6)
> > 'format': 'raw', 'deviceId': 'b7101f23-aaed-4a22-84c7-45e5c735a0fd',
> > 'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x0000', 'type':
> > 'pci', 'function': '0x0'}, 'device': 'disk', 'path':
> > '/var/run/vdsm/storage/d356ba06-ca65-4afb-882f-f56fa092c155/b7101f23-aaed-
> > 4a22-84c7-45e5c735a0fd/925942e8-e01e-4199-b881-bb70dcf6ee01',
> 
> this path is clearly pointing at domain d356ba06...
> 
> and this is what the engine imported and activated- from the engine.log: 
> 
> 
> 2015-12-28 11:32:40,075 INFO 
> [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand]
> (org.ovirt.thread.pool-7-thread-11) [42f80da0] START,
> ActivateStorageDomainVDSCommand(
> ActivateStorageDomainVDSCommandParameters:{runAsync='true',
> storagePoolId='00000001-0001-0001-0001-0000000001bd',
> ignoreFailoverLimit='false',
> storageDomainId='df2356f7-8272-401a-97f7-63c14f37ec7a'}), log id: 1e4b6490
> 2015-12-28 11:32:42,919 INFO 
> [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand]
> (org.ovirt.thread.pool-7-thread-11) [42f80da0] FINISH,
> ActivateStorageDomainVDSCommand, return: true, log id: 1e4b6490
> 
> 
> please use df2356f7-8272-401a-97f7-63c14f37ec7a for the getImagesList. And
> we should look into vdsm log to understand why it threw this error while its
> obviously reported by vdsm that this images is the hosted engine disk image.

vdsClient -s 0 getImagesList df2356f7-8272-401a-97f7-63c14f37ec7a
cbba0291-afbe-4713-b9d1-1af36363ce5c
621c6b1f-e5e1-4488-aec6-44b4d807f569
eb8b6c74-abef-4b90-834c-05d91ea56b39
874df8ef-295d-4167-b343-c5ae181c3258
32ede857-3e34-4dd3-b8c3-e9ef269a2659
ea0fd41d-5555-419a-9cde-d39aacf8a49b

]# vdsClient -s 0 list

d653d7c5-f09e-4ec3-8ba3-3595d34c48f5
        Status = Up
        guestFQDN = 
        emulatedMachine = rhel6.5.0
        spiceSecureChannels = smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
        guestDiskMapping = {'QEMU_DVD-ROM_': {'name': '/dev/sr0'}, 'b7101f23-aaed-4a22-8': {'name': '/dev/vda'}}
        displaySecurePort = -1
        cpuType = SandyBridge
        pauseCode = NOERR
        smp = 2
        vmType = kvm
        memSize = 4096
        vmName = HostedEngine
        username = Unknown
        pid = 46589
        displayIp = 0
        displayPort = 5900
        guestIPs = 
        nicModel = rtl8139,pv
        devices = [{'device': 'console', 'specParams': {}, 'type': 'console', 'deviceId': 'b4033338-e298-47bc-b285-f29cc9a30ee2', 'alias': 'console0'}, {'device': 'memballoon', 'specParams': {'model': 'none'}, 'type': 'balloon', 'alias': 'balloon0'}, {'device': 'scsi', 'alias': 'scsi0', 'model': 'virtio-scsi', 'type': 'controller', 'address': {'slot': '0x04', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}}, {'device': 'vnc', 'specParams': {'spiceSecureChannels': 'smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir', 'displayIp': '0'}, 'type': 'graphics', 'port': '5900'}, {'nicModel': 'pv', 'macAddr': '00:16:3E:7B:B8:53', 'linkActive': True, 'network': 'ovirtmgmt', 'alias': 'net0', 'filter': 'vdsm-no-mac-spoofing', 'specParams': {}, 'deviceId': 'b5ba27f5-cda6-48cd-af82-144e6c2b44ff', 'address': {'slot': '0x03', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'bridge', 'type': 'interface', 'name': 'vnet0'}, {'index': '2', 'iface': 'ide', 'name': 'hdc', 'alias': 'ide0-1-0', 'specParams': {}, 'readonly': 'True', 'deviceId': '4ae909f0-d332-4152-8d4e-e0f91d4ba588', 'address': {'bus': '1', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'device': 'cdrom', 'shared': 'false', 'path': '', 'type': 'disk'}, {'poolID': '00000000-0000-0000-0000-000000000000', 'reqsize': '0', 'index': '0', 'iface': 'virtio', 'apparentsize': '26843545600', 'alias': 'virtio-disk0', 'imageID': 'b7101f23-aaed-4a22-84c7-45e5c735a0fd', 'readonly': 'False', 'shared': 'exclusive', 'truesize': '26843545600', 'type': 'disk', 'domainID': 'd356ba06-ca65-4afb-882f-f56fa092c155', 'volumeInfo': {'domainID': 'd356ba06-ca65-4afb-882f-f56fa092c155', 'volType': 'path', 'leaseOffset': 112197632, 'volumeID': '925942e8-e01e-4199-b881-bb70dcf6ee01', 'leasePath': '/dev/d356ba06-ca65-4afb-882f-f56fa092c155/leases', 'imageID': 'b7101f23-aaed-4a22-84c7-45e5c735a0fd', 'path': 
'/rhev/data-center/mnt/blockSD/d356ba06-ca65-4afb-882f-f56fa092c155/images/b7101f23-aaed-4a22-84c7-45e5c735a0fd/925942e8-e01e-4199-b881-bb70dcf6ee01'}, 'format': 'raw', 'deviceId': 'b7101f23-aaed-4a22-84c7-45e5c735a0fd', 'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'disk', 'path': '/var/run/vdsm/storage/d356ba06-ca65-4afb-882f-f56fa092c155/b7101f23-aaed-4a22-84c7-45e5c735a0fd/925942e8-e01e-4199-b881-bb70dcf6ee01', 'propagateErrors': 'off', 'optional': 'false', 'name': 'vda', 'bootOrder': '1', 'volumeID': '925942e8-e01e-4199-b881-bb70dcf6ee01', 'specParams': {}, 'volumeChain': [{'domainID': 'd356ba06-ca65-4afb-882f-f56fa092c155', 'volType': 'path', 'leaseOffset': 112197632, 'volumeID': '925942e8-e01e-4199-b881-bb70dcf6ee01', 'leasePath': '/dev/d356ba06-ca65-4afb-882f-f56fa092c155/leases', 'imageID': 'b7101f23-aaed-4a22-84c7-45e5c735a0fd', 'path': '/rhev/data-center/mnt/blockSD/d356ba06-ca65-4afb-882f-f56fa092c155/images/b7101f23-aaed-4a22-84c7-45e5c735a0fd/925942e8-e01e-4199-b881-bb70dcf6ee01'}]}, {'device': 'usb', 'alias': 'usb', 'type': 'controller', 'address': {'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x2'}}, {'device': 'ide', 'alias': 'ide', 'type': 'controller', 'address': {'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x1'}}, {'device': 'virtio-serial', 'alias': 'virtio-serial0', 'type': 'controller', 'address': {'slot': '0x05', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}}, {'device': 'unix', 'alias': 'channel0', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '1'}}, {'device': 'unix', 'alias': 'channel1', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '2'}}, {'device': 'unix', 'alias': 'channel2', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '3'}}, {'device': 
'', 'alias': 'video0', 'type': 'video', 'address': {'slot': '0x02', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}}]
        clientIp = 
        statusTime = 5323943570
        display = vnc
Comment 8 Nikolai Sednev 2016-01-03 10:13 EST
Created attachment 1111183 [details]
vdsm.log from host with active HE-VM on it.
Comment 9 Nikolai Sednev 2016-01-03 11:14:49 EST
Further to our discussion with Roy, I've found the root cause:
The same 3 hosts "see" 3 different LUNs, each with an HE-VM deployed on it (nsednev-he-1, nsednev-he-2, nsednev-he-3). The auto-import should have imported its SD from 3514f0c5a516000f4 (which belongs to nsednev-he-1), but instead it imported from 3514f0c5a51600810 (which belongs to nsednev-he-2). The problem is that auto-import cannot differentiate between several identically named HE-SDs, as they all carry the same default name "hosted_storage". As a result, an HE-SD was eventually imported, but the VM was not found there, since it never existed on that domain.

# vdsClient -s 0 getStorageDomainInfo d356ba06-ca65-4afb-882f-f56fa092c155
        uuid = d356ba06-ca65-4afb-882f-f56fa092c155
        vguuid = 4MuaSU-Iymt-yFAi-Ph2J-LhgE-xX3o-Z1AQxf
        state = OK
        version = 3
        role = Regular
        type = ISCSI
        class = Data
        pool = []
        name = hosted_storage

Such a scenario might happen if:
1) A customer deploys multiple HE-VMs, and all of the hosts have connectivity to every iSCSI LUN on which the HE-VMs are deployed.
2) More than one HE-VM deployment was made with the default name hosted_storage.

Please consider choosing the HE-SD by a unique identifier, or checking whether more than one HE-SD exists with the same name (hosted_storage) as the HE-SD being imported; in that case, warn the customer or provide the ability to select which LUN to import from.
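The collision described above can be sketched as follows (a minimal illustration with hypothetical domain IDs, not the engine's actual lookup code): every deployment's SD carries the same default name, so a name-based lookup returns multiple candidates and cannot safely pick one.

```python
# Minimal sketch of the name-collision problem: three hosted-engine storage
# domains (hypothetical IDs) all carry the default name "hosted_storage",
# so a lookup by name cannot tell them apart.
domains = [
    {"id": "11111111-0000-0000-0000-000000000001", "name": "hosted_storage"},
    {"id": "22222222-0000-0000-0000-000000000002", "name": "hosted_storage"},
    {"id": "33333333-0000-0000-0000-000000000003", "name": "hosted_storage"},
]

def find_he_domains_by_name(domains, name="hosted_storage"):
    """Return every domain whose name matches the configured HE-SD name."""
    return [d for d in domains if d["name"] == name]

matches = find_he_domains_by_name(domains)
# More than one match means auto-import may pick the wrong LUN; a safer
# importer would warn the user or ask which LUN to import from.
ambiguous = len(matches) > 1
```

With several matches the importer has no reliable way to choose, which is exactly why the import landed on the wrong LUN here.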
Comment 10 Doron Fediuck 2016-01-10 04:01:32 EST
I see this mainly as a dirty environment. In a realistic scenario we should expect
clean storage with a single hosted_storage LUN.
Comment 12 Nikolai Sednev 2016-01-17 07:37 EST
Created attachment 1115588 [details]
sosreport from the engine
Comment 13 Doron Fediuck 2016-01-24 03:29:24 EST
This is still a corner case that requires the following accumulated conditions:
1. Multiple HE setups.
2. Using a shared storage for the above setups.
3. Using iSCSI storage.

Hence, I'd settle for a proper KB article for this corner case.
Comment 14 Roy Golan 2016-01-24 03:33:21 EST
Workaround: 
 - at ovirt-hosted-engine-setup time, set the name of the hosted engine SD to something other than the default "hosted_storage"
 - once the engine setup is done, use $ engine-config -s setHostedEngineStorageDomainName=SD_NAME
 - restart ovirt-engine to reload the config
Comment 15 Roy Golan 2016-01-24 03:34:59 EST
> $ engine-config -s setHostedEngineStorageDomainName=SD_NAME

-s HostedEngineStorageDomainName
Comment 16 Nikolai Sednev 2016-01-25 10:14:36 EST
(In reply to Roy Golan from comment #15)
> > $ engine-config -s setHostedEngineStorageDomainName=SD_NAME
> 
> -s HostedEngineStorageDomainName

engine-config -s HostedEngineStorageDomainName="hosted_storage2"
Error setting HostedEngineStorageDomainName's value. No such entry.
engine-config -s setHostedEngineStorageDomainName="hosted_storage2"
Error setting setHostedEngineStorageDomainName's value. No such entry.
engine-config -s setHostedEngineStorageDomainName=hosted_storage2
Error setting setHostedEngineStorageDomainName's value. No such entry.
engine-config -s HostedEngineStorageDomainName=hosted_storage2
Error setting HostedEngineStorageDomainName's value. No such entry.



There is no such parameter in engine-config.
The workaround is not working.

As a result I've got the HE-VM installed on Red Hat Enterprise Virtualization Hypervisor (Beta) release 7.2 (20160113.0.el7ev) with rhevm-appliance-20160120.0-1, taken manually during the TUI HE-deployment. The iSCSI HE-SD was not imported, but a data SD was successfully added and I've created a working guest VM for further tasks.

Engine:
rhevm-3.6.2.6-0.1.el6.noarch

Host:
rhevm-sdk-python-3.6.2.1-1.el7ev.noarch
sanlock-3.2.4-1.el7.x86_64
libvirt-client-1.2.17-13.el7_2.2.x86_64
mom-0.5.1-1.el7ev.noarch
vdsm-4.17.17-0.el7ev.noarch
Comment 17 Roy Golan 2016-01-25 14:41:51 EST
(In reply to Nikolai Sednev from comment #16)
> (In reply to Roy Golan from comment #15)
> > > $ engine-config -s setHostedEngineStorageDomainName=SD_NAME
> > 
> > -s HostedEngineStorageDomainName

My bad, the patch[1] to use it in engine-config is in for 3.6.3 for Bug 1290478.

You can, for now, try to apply it:

 - $ su - postgres
 - $ psql engine -c "select fn_db_add_config_value('HostedEngineStorageDomainName','hosted_storage','general');"
 - $ exit
 - engine-config -s HostedEngineStorageDomainName=SD_NAME
Comment 18 Nikolai Sednev 2016-01-26 08:42:09 EST
Hi Roy,
Can you please fill in the "Fixed In Version:" accordingly?

Did not work for me:
su - postgres
-bash-4.1$ psql engine -c "select fn_db_add_config_value('HostedEngineStorageDomainName','hosted_storage','general');"
 fn_db_add_config_value 
------------------------
 
(1 row)

-bash-4.1$  exit
logout
# engine-config -s HostedEngineStorageDomainName="hosted_storage2"
Error setting HostedEngineStorageDomainName's value. No such entry with version general.
Comment 19 Yaniv Kaul 2016-01-27 13:43:26 EST
Proper zoning would have eliminated this, so reducing severity.
Comment 20 Roy Golan 2016-02-02 07:50:21 EST
There is a more robust and simpler way to detect the domain, based on ID rather than name: on VM import we already have all the disk details from VDSM, and the disk carries the domain ID. We simply need to use that instead of a name. Since this bug is medium severity and the fix will touch the auto-import flow, it should land in the 2nd Z version.
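The ID-based approach can be sketched as follows (a hypothetical illustration, not the actual engine code): the disk details VDSM reports for the running HE-VM already include a domainID field, which is unique even when several domains share the name "hosted_storage", so the importer can resolve the storage domain directly.

```python
# Hypothetical sketch of resolving the HE storage domain by the unique
# domain ID carried in the VM's disk info (field names mirror the
# vdsClient output earlier in this bug), instead of by the ambiguous name.
vm_disk = {
    "imageID": "b7101f23-aaed-4a22-84c7-45e5c735a0fd",
    "volumeID": "925942e8-e01e-4199-b881-bb70dcf6ee01",
    "domainID": "d356ba06-ca65-4afb-882f-f56fa092c155",
}

# Two same-named domains; only the ID disambiguates them.
domains_by_id = {
    "d356ba06-ca65-4afb-882f-f56fa092c155": {"name": "hosted_storage"},
    "df2356f7-8272-401a-97f7-63c14f37ec7a": {"name": "hosted_storage"},
}

def resolve_he_domain(disk, domains_by_id):
    """Look up the storage domain by the ID embedded in the disk details."""
    return domains_by_id[disk["domainID"]]

sd = resolve_he_domain(vm_disk, domains_by_id)
```

Because the lookup key comes from the VM's own disk metadata, the import can never land on a different deployment's LUN, regardless of how the domains are named.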
Comment 21 Roy Golan 2016-02-07 03:03:30 EST
*** Bug 1304611 has been marked as a duplicate of this bug. ***
Comment 22 Roy Golan 2016-02-10 04:20:57 EST
(In reply to Nikolai Sednev from comment #18)
> -bash-4.1$  exit
> logout
> # engine-config -s HostedEngineStorageDomainName="hosted_storage2"
> Error setting HostedEngineStorageDomainName's value. No such entry with
> version general.

can you try:
echo HostedEngineStorageDomainName | engine-config  -s HostedEngineStorageDomainName="hosted_storage2" -p /dev/stdin
Comment 23 Sistemas Amtega 2016-02-11 07:48:51 EST
Hello,

A database query of the config gives no result:

# su - postgres

-bash-4.1$ psql engine -c "select fn_db_add_config_value ('HostedEngineStorageDomainName', 'hosted_storage', 'general');"

fn_db_add_config_value

------------------------
 
(1 row)



Also, the engine config seems not to have the indicated key (HostedEngineStorageDomainName). The only two keys it reports are these:

# engine-config -l | grep Hosted

HostedEngineVmName: The name of the Hosted Engine VM. That name will be used to perform exclusive operation by ovirt-engine on that VM. (Value Type: String)

AutoImportHostedEngine: "Try to automatically import the hosted engine VM and its storage domain" (Value Type: Boolean)



The attempt to add the HostedEngineStorageDomainName key finishes with the following error:

# echo HostedEngineStorageDomainName | engine-config  -s HostedEngineStorageDomainName="stg-data-fc-he-0001" -p /dev/stdin

Key for add operation must be defined!
Comment 24 Roy Golan 2016-02-25 06:34:27 EST
*** Bug 1311693 has been marked as a duplicate of this bug. ***
Comment 25 Nikolai Sednev 2016-03-21 04:52:52 EDT
I don't have the environment with this issue reproduced any more.
Please take a look at the logs and, if no problem is found, please close the bug.
Comment 26 Martin Tessun 2016-05-19 06:28:18 EDT
Hi all,

hosted-engine --deploy no longer asks for a name for the Storage Domain, or even for a name for the HostedEngine VM.

As these values cannot be changed anymore, I think that this bug can be closed.
Comment 27 Doron Fediuck 2016-07-17 06:05:50 EDT
As mentioned in comment 26, the latest installer no longer allows this error flow.
Closing the issue.
Comment 28 Roy Golan 2016-07-17 06:48:49 EDT

*** This bug has been marked as a duplicate of bug 1301105 ***
