Bug 1294457

Summary: Failed importing the Hosted Engine VM | VolumeDoesNotExist

Field | Value | Field | Value
---|---|---|---
Product: | [oVirt] ovirt-engine | Reporter: | Nikolai Sednev <nsednev>
Component: | BLL.HostedEngine | Assignee: | Roy Golan <rgolan>
Status: | CLOSED DUPLICATE | QA Contact: | Nikolai Sednev <nsednev>
Severity: | medium | Priority: | medium
Version: | 3.6.1.3 | Hardware: | x86_64
OS: | Linux | Type: | Bug
Target Milestone: | ovirt-4.1.0-alpha | Target Release: | ---
Keywords: | Reopened | oVirt Team: | SLA
Doc Type: | Bug Fix | Last Closed: | 2016-07-17 10:05:50 UTC
Flags: | dfediuck: ovirt-4.1+, mgoldboi: planning_ack+, rgolan: devel_ack+, mavital: testing_ack+ | |
CC: | bugs, dfediuck, didi, lveyde, mavital, mgoldboi, mtessun, nsednev, rgolan, rmartins, sbonazzo, sistemas-soporte-linux, stirabos, zbrown | |
URL: | https://drive.google.com/a/redhat.com/file/d/0B85BEaDBcF88Y3VHeGRpNVR2UGs/view?usp=sharing | |
Description: Nikolai Sednev, 2015-12-28 09:57:27 UTC
Created attachment 1109995 [details]
engine logs
Look for the "2015-12-28 11:38:10,136 ERROR [org.ovirt.engine.core.bll.HostedEngineImporter] (org.ovirt.thread.pool-7-thread-2) [34da26da] Failed importing the Hosted Engine VM" within the engine log. For sosreport from the host follow the link: https://drive.google.com/a/redhat.com/file/d/0B85BEaDBcF88Y3VHeGRpNVR2UGs/view?usp=sharing please output this from your host: $ vdsClient -s 0 list and $ vdsClient -s 0 getImagesList <your hosted_domain ID> I want to see what is the disk image id we try to import because this info comes from vdsm, I do not generate the ID or something.z [root@alma03 ~]# vdsClient -s 0 list d653d7c5-f09e-4ec3-8ba3-3595d34c48f5 Status = Up guestFQDN = emulatedMachine = rhel6.5.0 spiceSecureChannels = smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir guestDiskMapping = {'QEMU_DVD-ROM_': {'name': '/dev/sr0'}, 'b7101f23-aaed-4a22-8': {'name': '/dev/vda'}} displaySecurePort = -1 cpuType = SandyBridge pauseCode = NOERR smp = 2 vmType = kvm memSize = 4096 vmName = HostedEngine username = Unknown pid = 46589 displayIp = 0 displayPort = 5900 guestIPs = nicModel = rtl8139,pv devices = [{'device': 'console', 'specParams': {}, 'type': 'console', 'deviceId': 'b4033338-e298-47bc-b285-f29cc9a30ee2', 'alias': 'console0'}, {'device': 'memballoon', 'specParams': {'model': 'none'}, 'type': 'balloon', 'alias': 'balloon0'}, {'device': 'scsi', 'alias': 'scsi0', 'model': 'virtio-scsi', 'type': 'controller', 'address': {'slot': '0x04', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}}, {'device': 'vnc', 'specParams': {'spiceSecureChannels': 'smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir', 'displayIp': '0'}, 'type': 'graphics', 'port': '5900'}, {'nicModel': 'pv', 'macAddr': '00:16:3E:7B:B8:53', 'linkActive': True, 'network': 'ovirtmgmt', 'alias': 'net0', 'filter': 'vdsm-no-mac-spoofing', 'specParams': {}, 'deviceId': 'b5ba27f5-cda6-48cd-af82-144e6c2b44ff', 'address': {'slot': '0x03', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'bridge', 'type': 'interface', 'name': 'vnet0'}, {'index': '2', 'iface': 'ide', 'name': 'hdc', 'alias': 'ide0-1-0', 'specParams': {}, 'readonly': 'True', 'deviceId': '4ae909f0-d332-4152-8d4e-e0f91d4ba588', 'address': {'bus': '1', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'device': 'cdrom', 'shared': 'false', 'path': '', 'type': 'disk'}, {'poolID': '00000000-0000-0000-0000-000000000000', 'reqsize': '0', 'index': '0', 'iface': 'virtio', 'apparentsize': '26843545600', 'alias': 'virtio-disk0', 'imageID': 'b7101f23-aaed-4a22-84c7-45e5c735a0fd', 'readonly': 'False', 'shared': 'exclusive', 'truesize': '26843545600', 'type': 'disk', 'domainID': 'd356ba06-ca65-4afb-882f-f56fa092c155', 'volumeInfo': {'domainID': 'd356ba06-ca65-4afb-882f-f56fa092c155', 'volType': 'path', 'leaseOffset': 112197632, 'volumeID': '925942e8-e01e-4199-b881-bb70dcf6ee01', 'leasePath': '/dev/d356ba06-ca65-4afb-882f-f56fa092c155/leases', 'imageID': 'b7101f23-aaed-4a22-84c7-45e5c735a0fd', 'path': '/rhev/data-center/mnt/blockSD/d356ba06-ca65-4afb-882f-f56fa092c155/images/b7101f23-aaed-4a22-84c7-45e5c735a0fd/925942e8-e01e-4199-b881-bb70dcf6ee01'}, 'format': 'raw', 'deviceId': 'b7101f23-aaed-4a22-84c7-45e5c735a0fd', 'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'disk', 'path': '/var/run/vdsm/storage/d356ba06-ca65-4afb-882f-f56fa092c155/b7101f23-aaed-4a22-84c7-45e5c735a0fd/925942e8-e01e-4199-b881-bb70dcf6ee01', 
'propagateErrors': 'off', 'optional': 'false', 'name': 'vda', 'bootOrder': '1', 'volumeID': '925942e8-e01e-4199-b881-bb70dcf6ee01', 'specParams': {}, 'volumeChain': [{'domainID': 'd356ba06-ca65-4afb-882f-f56fa092c155', 'volType': 'path', 'leaseOffset': 112197632, 'volumeID': '925942e8-e01e-4199-b881-bb70dcf6ee01', 'leasePath': '/dev/d356ba06-ca65-4afb-882f-f56fa092c155/leases', 'imageID': 'b7101f23-aaed-4a22-84c7-45e5c735a0fd', 'path': '/rhev/data-center/mnt/blockSD/d356ba06-ca65-4afb-882f-f56fa092c155/images/b7101f23-aaed-4a22-84c7-45e5c735a0fd/925942e8-e01e-4199-b881-bb70dcf6ee01'}]}, {'device': 'usb', 'alias': 'usb', 'type': 'controller', 'address': {'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x2'}}, {'device': 'ide', 'alias': 'ide', 'type': 'controller', 'address': {'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x1'}}, {'device': 'virtio-serial', 'alias': 'virtio-serial0', 'type': 'controller', 'address': {'slot': '0x05', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}}, {'device': 'unix', 'alias': 'channel0', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '1'}}, {'device': 'unix', 'alias': 'channel1', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '2'}}, {'device': 'unix', 'alias': 'channel2', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '3'}}, {'device': '', 'alias': 'video0', 'type': 'video', 'address': {'slot': '0x02', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}}] clientIp = statusTime = 5316668340 display = vnc # vdsClient -s 0 getImagesList 514f0c5a516000f4 Storage domain does not exist: ('514f0c5a516000f4',) Looks like autoimport simply imported the wrong volume ID, it should have been importing the 514f0c5a516000f4, but it imported the wrong one.
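For reference, one could hunt down which domain actually holds this disk image with a small loop; this is only a sketch, assuming the `getStorageDomainsList` verb is available in this vdsClient (the `getImagesList` verb and the image ID are taken from the output above):

```bash
#!/bin/bash
# Ask every storage domain this host can see whether it contains the
# hosted-engine disk image from the listing above.
IMAGE_ID="b7101f23-aaed-4a22-84c7-45e5c735a0fd"
for sd in $(vdsClient -s 0 getStorageDomainsList); do
    if vdsClient -s 0 getImagesList "$sd" 2>/dev/null | grep -q "$IMAGE_ID"; then
        echo "image $IMAGE_ID found in storage domain $sd"
    fi
done
```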
> 'format': 'raw', 'deviceId': 'b7101f23-aaed-4a22-84c7-45e5c735a0fd',
> 'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x0000', 'type':
> 'pci', 'function': '0x0'}, 'device': 'disk', 'path':
> '/var/run/vdsm/storage/d356ba06-ca65-4afb-882f-f56fa092c155/b7101f23-aaed-
> 4a22-84c7-45e5c735a0fd/925942e8-e01e-4199-b881-bb70dcf6ee01',
This path is clearly pointing at domain d356ba06...
And this is what the engine imported and activated, from engine.log:
2015-12-28 11:32:40,075 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] (org.ovirt.thread.pool-7-thread-11) [42f80da0] START, ActivateStorageDomainVDSCommand( ActivateStorageDomainVDSCommandParameters:{runAsync='true', storagePoolId='00000001-0001-0001-0001-0000000001bd', ignoreFailoverLimit='false', storageDomainId='df2356f7-8272-401a-97f7-63c14f37ec7a'}), log id: 1e4b6490
2015-12-28 11:32:42,919 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] (org.ovirt.thread.pool-7-thread-11) [42f80da0] FINISH, ActivateStorageDomainVDSCommand, return: true, log id: 1e4b6490
Please use df2356f7-8272-401a-97f7-63c14f37ec7a for the getImagesList. We should also look into the vdsm log to understand why it threw this error, when vdsm obviously reports that this image is the hosted engine disk image.
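A quick way to list every storage domain activation the engine performed, with counts, is the grep below; a sketch only, assuming the default engine log location:

```bash
# Count ActivateStorageDomainVDSCommand occurrences per storage domain ID.
# /var/log/ovirt-engine/engine.log is the default path, assumed here.
grep ActivateStorageDomainVDSCommand /var/log/ovirt-engine/engine.log \
  | grep -o "storageDomainId='[0-9a-f-]*'" | sort | uniq -c
```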
(In reply to Roy Golan from comment #6)

> please use df2356f7-8272-401a-97f7-63c14f37ec7a for the getImagesList.

    # vdsClient -s 0 getImagesList df2356f7-8272-401a-97f7-63c14f37ec7a
    cbba0291-afbe-4713-b9d1-1af36363ce5c
    621c6b1f-e5e1-4488-aec6-44b4d807f569
    eb8b6c74-abef-4b90-834c-05d91ea56b39
    874df8ef-295d-4167-b343-c5ae181c3258
    32ede857-3e34-4dd3-b8c3-e9ef269a2659
    ea0fd41d-5555-419a-9cde-d39aacf8a49b

    # vdsClient -s 0 list
    [same listing for VM d653d7c5-f09e-4ec3-8ba3-3595d34c48f5 as in the description, with statusTime = 5323943570]

Created attachment 1111183 [details]
vdsm.log from host with active HE-VM on it.
Further to our discussion with Roy, I've found the root cause: the same 3 hosts "see" 3 different LUNs, each with an HE-VM deployed on it (nsednev-he-1, nsednev-he-2, nsednev-he-3). The auto-import should have imported its SD from 3514f0c5a516000f4 (which belongs to nsednev-he-1), but it instead imported from 3514f0c5a51600810 (which belongs to nsednev-he-2). The problem is that the auto-import can't differentiate between several identically named HE SDs, as they all have the same name, "hosted_storage". As a result, an HE SD was eventually imported, but the VM was not found there, as it never existed there.

    # vdsClient -s 0 getStorageDomainInfo d356ba06-ca65-4afb-882f-f56fa092c155
        uuid = d356ba06-ca65-4afb-882f-f56fa092c155
        vguuid = 4MuaSU-Iymt-yFAi-Ph2J-LhgE-xX3o-Z1AQxf
        state = OK
        version = 3
        role = Regular
        type = ISCSI
        class = Data
        pool = []
        name = hosted_storage

Such a scenario might happen if:

1. The customer deploys multiple HE-VMs, and their hosts all have connectivity to every iSCSI LUN on which the HE-VMs are deployed.
2. More than one HE-VM deployment was made with the default name "hosted_storage".

Please consider choosing the HE SD by a unique name, or checking whether more than one HE SD exists with the same name ("hosted_storage") as the HE SD being imported; in that case, warn the customer or provide the ability to select which of the multiple LUNs to import from.

I see this mainly as a dirty environment. In a realistic scenario we should expect clean storage with a single hosted_storage LUN.

Created attachment 1115588 [details]
sosreport from the engine
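To make the ambiguity described above concrete, a small sketch that prints every storage domain the host reports together with its name; several rows all named "hosted_storage" reproduce exactly this situation. It assumes the `getStorageDomainsList` verb is available, the way `getStorageDomainInfo` is used above:

```bash
#!/bin/bash
# Print "<SD UUID>  <SD name>" for every domain the host can see; duplicate
# "hosted_storage" names are the ambiguity the auto-import stumbles over.
for sd in $(vdsClient -s 0 getStorageDomainsList); do
    name=$(vdsClient -s 0 getStorageDomainInfo "$sd" | awk '$1 == "name" {print $3}')
    echo "$sd  $name"
done
```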
This is still a corner case that requires the following accumulated conditions:

1. Multiple HE setups.
2. Using shared storage for the above setups.
3. Using iSCSI storage.

Hence, I'd settle for a proper KB article for this corner case.

Workaround:

- At ovirt-hosted-engine-setup time, set the name of the hosted engine SD to something other than the default "hosted_storage".
- Once the engine setup is done, use: $ engine-config -s setHostedEngineStorageDomainName=SD_NAME
- Restart ovirt-engine to reload the config.

> $ engine-config -s setHostedEngineStorageDomainName=SD_NAME

Correction, the key is:

    -s HostedEngineStorageDomainName
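Put together, the intended sequence would look like the sketch below. As the following comments show, the key is not actually exposed on the 3.6.2 build, so treat this as the intent rather than a working recipe:

```bash
# Intended workaround once the key is exposed by engine-config (3.6.3+).
# "my_hosted_sd" is a placeholder SD name, not a value from this report.
engine-config -g HostedEngineStorageDomainName      # inspect the current value
engine-config -s HostedEngineStorageDomainName=my_hosted_sd
service ovirt-engine restart                        # reload the configuration
```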
(In reply to Roy Golan from comment #15)

> > $ engine-config -s setHostedEngineStorageDomainName=SD_NAME
>
> -s HostedEngineStorageDomainName

None of the variants work:

    # engine-config -s HostedEngineStorageDomainName="hosted_storage2"
    Error setting HostedEngineStorageDomainName's value. No such entry.
    # engine-config -s setHostedEngineStorageDomainName="hosted_storage2"
    Error setting setHostedEngineStorageDomainName's value. No such entry.
    # engine-config -s setHostedEngineStorageDomainName=hosted_storage2
    Error setting setHostedEngineStorageDomainName's value. No such entry.
    # engine-config -s HostedEngineStorageDomainName=hosted_storage2
    Error setting HostedEngineStorageDomainName's value. No such entry.

There is no such parameter inside engine-config; the workaround is not working. As a result, I've got an HE-VM installed on Red Hat Enterprise Virtualization Hypervisor (Beta) release 7.2 (20160113.0.el7ev) with rhevm-appliance-20160120.0-1, taken manually during the TUI HE deployment. The iSCSI HE SD was not imported, but a data SD was successfully added and I've created a working guest VM for further tasks.

Engine: rhevm-3.6.2.6-0.1.el6.noarch
Host: rhevm-sdk-python-3.6.2.1-1.el7ev.noarch, sanlock-3.2.4-1.el7.x86_64, libvirt-client-1.2.17-13.el7_2.2.x86_64, mom-0.5.1-1.el7ev.noarch, vdsm-4.17.17-0.el7ev.noarch

(In reply to Nikolai Sednev from comment #16)

> > -s HostedEngineStorageDomainName

My bad, the patch[1] to use it in engine-config is in for 3.6.3, for Bug 1290478. For now, you can try to apply it manually:

    $ su - postgres
    $ psql engine -c "select fn_db_add_config_value('HostedEngineStorageDomainName','hosted_storage','general');"
    $ exit
    $ engine-config -s HostedEngineStorageDomainName=SD_NAME

Hi Roy, can you please fill in the "Fixed In Version:" field accordingly?

Did not work for me:

    # su - postgres
    -bash-4.1$ psql engine -c "select fn_db_add_config_value('HostedEngineStorageDomainName','hosted_storage','general');"
     fn_db_add_config_value
    ------------------------

    (1 row)
    -bash-4.1$ exit
    logout
    # engine-config -s HostedEngineStorageDomainName="hosted_storage2"
    Error setting HostedEngineStorageDomainName's value. No such entry with version general.

Proper zoning would have eliminated this, reducing its severity.

There is a more robust and simpler way to detect the domain, based on ID and not name: on VM import, we have all the disk details from VDSM, and the disk carries the storage domain ID. We simply need to use that instead of a name, as simple as that. Since this bug is medium and the fix will touch the auto-import flow, it should land in the 2nd z-stream version.
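As a rough illustration of that by-ID approach (a sketch only, not the actual importer code; it simply greps the vdsClient output shown earlier in this report):

```bash
# The disk details VDSM reports for the HostedEngine VM already carry the
# owning storage domain's UUID; keying the import on this ID side-steps the
# "several SDs named hosted_storage" ambiguity entirely.
vdsClient -s 0 list | grep -o "'domainID': '[0-9a-f-]*'" | sort -u
# -> 'domainID': 'd356ba06-ca65-4afb-882f-f56fa092c155'
```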
*** Bug 1304611 has been marked as a duplicate of this bug. ***

(In reply to Nikolai Sednev from comment #18)

> # engine-config -s HostedEngineStorageDomainName="hosted_storage2"
> Error setting HostedEngineStorageDomainName's value. No such entry with version general.

Can you try:

    echo HostedEngineStorageDomainName | engine-config -s HostedEngineStorageDomainName="hosted_storage2" -p /dev/stdin

Hello, a database query of the config gives no result:

    # su - postgres
    -bash-4.1$ psql engine -c "select fn_db_add_config_value('HostedEngineStorageDomainName', 'hosted_storage', 'general');"
     fn_db_add_config_value
    ------------------------

    (1 row)

Also, engine-config seems not to have the indicated key (HostedEngineStorageDomainName). The only two keys it reports are these:

    # engine-config -l | grep Hosted
    HostedEngineVmName: The name of the Hosted Engine VM. That name will be used to perform exclusive operation by ovirt-engine on that VM. (Value Type: String)
    AutoImportHostedEngine: "Try to automatically import the hosted engine VM and its storage domain" (Value Type: Boolean)

The attempt to add the HostedEngineStorageDomainName key finishes with the following error:

    # echo HostedEngineStorageDomainName | engine-config -s HostedEngineStorageDomainName="stg-data-fc-he-0001" -p /dev/stdin
    Key for add operation must be defined!

*** Bug 1311693 has been marked as a duplicate of this bug. ***

I don't have the environment with this issue reproduced any more. Please take a look at the logs and, if no problem is found, please close the bug.

Hi all, hosted-engine --deploy no longer asks for a name for the Storage Domain, or even for a name for the HostedEngine VM. As these values cannot be changed anymore, I think this bug can be closed.

As mentioned in comment 26, the latest installer no longer allows this error flow. Closing the issue.

*** This bug has been marked as a duplicate of bug 1301105 ***