Bug 1566111
Summary: Failed to auto-import HE-VM after vintage deployment of SHE.

| Field | Value |
|---|---|
| Product | [oVirt] ovirt-engine |
| Component | BLL.HostedEngine |
| Version | 4.2.2.6 |
| Hardware / OS | x86_64 / Linux |
| Status | CLOSED WORKSFORME |
| Severity / Priority | urgent / high |
| Reporter | Nikolai Sednev <nsednev> |
| Assignee | Doron Fediuck <dfediuck> |
| QA Contact | meital avital <mavital> |
| CC | bugs, msivak, nsednev, stirabos, ylavi |
| Target Milestone | ovirt-4.2.3 |
| Keywords | Regression |
| Flags | ylavi: ovirt-4.2+, ykaul: blocker+ |
| oVirt Team | Integration |
| Bug Blocks | 1560666 |
| Type | Bug |
| Doc Type | If docs needed, set a value |
| Last Closed | 2018-04-20 09:18:29 UTC |
Description (Nikolai Sednev, 2018-04-11 14:26:43 UTC)

Created attachment 1420359 [details]: sosreport from alma04
Why would you use vintage deployment? (Just to try it out as plan B to node zero deployment?)

(In reply to Yaniv Kaul from comment #2)
> Why would you use vintage deployment? (Just to try it out as plan B to node
> zero deployment?)

Because the verification of other bugs requires it. This one, for example:
https://bugzilla.redhat.com/show_bug.cgi?id=1560666

(In reply to Nikolai Sednev from comment #3)
> Because the verification of other bugs requires it. This one, for example:
> https://bugzilla.redhat.com/show_bug.cgi?id=1560666

And by the way, some specific flows, like migrating to hosted-engine, still rely on vintage deployment to let the user restore an engine backup before executing engine-setup.

Yaniv, we still have it as a fallback in 4.2.

Upstream 4.2 in OST is still working:
http://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_he-basic-suite-4.2/96/artifact/exported-artifacts/test_logs/he-basic-suite-4.2/post-006_network_by_label.py/lago-he-basic-suite-4-2-engine/_var_log/ovirt-engine/engine.log

```
2018-04-11 21:48:06,131-04 INFO [org.ovirt.engine.core.bll.HostedEngineImporter] (EE-ManagedThreadFactory-engine-Thread-20) [5bbf633c] Try to import the Hosted Engine VM 'VM [HostedEngine]'
2018-04-11 21:48:06,260-04 INFO [org.ovirt.engine.core.bll.exportimport.ImportVmCommand] (EE-ManagedThreadFactory-engine-Thread-20) [200277e4] Lock Acquired to object 'EngineLock:{exclusiveLocks='[37b940bd-e487-4036-a3a2-d40f2afb9fd5=VM, HostedEngine=VM_NAME]', sharedLocks='[37b940bd-e487-4036-a3a2-d40f2afb9fd5=REMOTE_VM]'}'
2018-04-11 21:48:06,328-04 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (EE-ManagedThreadFactory-engine-Thread-20) [200277e4] START, GetImageInfoVDSCommand( GetImageInfoVDSCommandParameters:{storagePoolId='4175b61e-3df2-11e8-9a08-5452c0a8c863', ignoreFailoverLimit='false', storageDomainId='97b3ec99-c788-48d2-a8f2-85c5cfd48b4f', imageGroupId='f5b81a12-db53-4671-87a1-3ba17be698a7', imageId='39be3abc-3e66-4e9d-8ebe-972b05fb4ffa'}), log id: 6674c459
2018-04-11 21:48:06,330-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedThreadFactory-engine-Thread-20) [200277e4] START, GetVolumeInfoVDSCommand(HostName = lago-he-basic-suite-4-2-host-0, GetVolumeInfoVDSCommandParameters:{hostId='328c5290-7939-474e-894f-b5f766366435', storagePoolId='4175b61e-3df2-11e8-9a08-5452c0a8c863', storageDomainId='97b3ec99-c788-48d2-a8f2-85c5cfd48b4f', imageGroupId='f5b81a12-db53-4671-87a1-3ba17be698a7', imageId='39be3abc-3e66-4e9d-8ebe-972b05fb4ffa'}), log id: 75a14e5a
2018-04-11 21:48:06,353-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedThreadFactory-engine-Thread-20) [200277e4] FINISH, GetVolumeInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.DiskImage@9a66e055, log id: 75a14e5a
2018-04-11 21:48:06,353-04 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (EE-ManagedThreadFactory-engine-Thread-20) [200277e4] FINISH, GetImageInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.DiskImage@9a66e055, log id: 6674c459
2018-04-11 21:48:06,842-04 INFO [org.ovirt.engine.core.bll.exportimport.ImportVmCommand] (EE-ManagedThreadFactory-engine-Thread-20) [200277e4] Running command: ImportVmCommand internal: true. Entities affected : ID: 97b3ec99-c788-48d2-a8f2-85c5cfd48b4f Type: StorageAction group IMPORT_EXPORT_VM with role type ADMIN, ID: 97b3ec99-c788-48d2-a8f2-85c5cfd48b4f Type: StorageAction group IMPORT_EXPORT_VM with role type ADMIN
2018-04-11 21:48:07,012-04 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-20) [200277e4] EVENT_ID: MAC_ADDRESS_IS_EXTERNAL(925), VM HostedEngine has MAC address(es) 54:52:c0:a8:c8:63, which is/are out of its MAC pool definitions.
2018-04-11 21:48:07,088-04 WARN [org.ovirt.engine.core.bll.exportimport.ImportVmCommand] (EE-ManagedThreadFactory-engine-Thread-20) [200277e4] VM '37b940bd-e487-4036-a3a2-d40f2afb9fd5' doesn't have active snapshot in export domain
2018-04-11 21:48:07,225-04 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-20) [200277e4] START, SetVmStatusVDSCommand( SetVmStatusVDSCommandParameters:{vmId='37b940bd-e487-4036-a3a2-d40f2afb9fd5', status='Down', exitStatus='Normal'}), log id: 35da80e5
2018-04-11 21:48:07,247-04 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-20) [200277e4] FINISH, SetVmStatusVDSCommand, log id: 35da80e5
2018-04-11 21:48:07,567-04 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-20) [200277e4] EVENT_ID: IMPORTEXPORT_STARTING_IMPORT_VM(1,165), Starting to import Vm HostedEngine to Data Center Default, Cluster Default
2018-04-11 21:48:07,604-04 INFO [org.ovirt.engine.core.bll.HostedEngineImporter] (EE-ManagedThreadFactory-engine-Thread-20) [200277e4] Successfully imported the Hosted Engine VM
2018-04-11 21:48:07,610-04 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-20) [200277e4] EVENT_ID: HOSTED_ENGINE_VM_IMPORT_SUCCEEDED(10,456), Hosted Engine VM was imported successfully
2018-04-11 21:48:08,959-04 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-79) [200277e4] Command 'ImportVm' id: '8d03deda-b36c-45c4-8f39-8ac4164a003b' child commands '[]' executions were completed, status 'SUCCEEDED'
2018-04-11 21:48:10,014-04 INFO [org.ovirt.engine.core.bll.exportimport.ImportVmCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-13) [200277e4] Ending command 'org.ovirt.engine.core.bll.exportimport.ImportVmCommand' successfully.
```

Can this be another case of 7.4 vs 7.5?

Nikolai, is there anything specific about your machine? Some strange networking or something?

The nic device on Nikolai's test:

```xml
<interface type='bridge'>
  <mac address='00:16:3e:7b:b8:54'/>
  <source bridge='ovirtmgmt'/>
  <target dev='vnet0'/>
  <model type='virtio'/>
  <link state='up'/>
  <alias name='net0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
```

and the OST one:

```xml
<interface type='bridge'>
  <mac address='54:52:c0:a8:c8:63'/>
  <source bridge='ovirtmgmt'/>
  <target dev='vnet0'/>
  <model type='virtio'/>
  <link state='up'/>
  <alias name='net0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
```

They look absolutely comparable.

(In reply to Martin Sivák from comment #7)
> Can this be another case of 7.4 vs 7.5?
>
> Nikolai, is there anything specific about your machine? Some strange
> networking or something?

Nothing. I'm running on RHEL 7.5 from scratch over the same untagged native VLAN topology, using the same host that works just fine with the new Node 0 appliance but fails with the vintage one.

I just tested this using a pure host with no VLANs (just eth0 and the ovirtmgmt bridge) and all went fine.

(In reply to Martin Sivák from comment #10)
> I just tested this using a pure host with no VLANs (just eth0 and the
> ovirtmgmt bridge) and all went fine.

Please try doing so while deploying SHE over iSCSI instead of NFS, as that is what I did differently in my environment.
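The eyeball comparison of the two `<interface>` definitions quoted above can also be double-checked programmatically. The following is an illustrative sketch, not part of the bug: the XML strings are copied from the comments in this bug, and `nic_profile` is just a hypothetical helper name. Only the `<mac>` element is expected to differ between hosts, so it is excluded from the comparison.

```python
# Sketch: structurally compare the two libvirt <interface> definitions
# quoted in this bug, ignoring <mac> (which legitimately differs per host).
import xml.etree.ElementTree as ET

NIKOLAI_NIC = """<interface type='bridge'>
  <mac address='00:16:3e:7b:b8:54'/>
  <source bridge='ovirtmgmt'/>
  <target dev='vnet0'/>
  <model type='virtio'/>
  <link state='up'/>
  <alias name='net0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>"""

OST_NIC = """<interface type='bridge'>
  <mac address='54:52:c0:a8:c8:63'/>
  <source bridge='ovirtmgmt'/>
  <target dev='vnet0'/>
  <model type='virtio'/>
  <link state='up'/>
  <alias name='net0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>"""

def nic_profile(xml_text):
    """Flatten an <interface> element into (tag, sorted-attrs) pairs, minus <mac>."""
    root = ET.fromstring(xml_text)
    profile = [("interface", tuple(sorted(root.attrib.items())))]
    profile += [(child.tag, tuple(sorted(child.attrib.items())))
                for child in root if child.tag != "mac"]
    return profile

# Identical profiles support the "absolutely comparable" conclusion above.
print(nic_profile(NIKOLAI_NIC) == nic_profile(OST_NIC))  # True
```

On a live host the same XML could be obtained with `virsh dumpxml <domain>` instead of hard-coded strings.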
Still does not reproduce. I can try once more in case it is a race condition, but please do the same.

Setting conditional NAK on the reproducer. We can't do much without being able to reproduce the issue.

CLOSING as WORKSFORME; please reopen if we find a reproducer.

Works for me on these components:

```
ovirt-hosted-engine-setup-2.2.18-1.el7ev.noarch
ovirt-hosted-engine-ha-2.2.10-1.el7ev.noarch
rhvm-appliance-4.2-20180420.0.el7.noarch
Linux 3.10.0-862.el7.x86_64 #1 SMP Wed Mar 21 18:14:51 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux Server release 7.5 (Maipo)
```

Successfully deployed vintage over iSCSI and also added an NFS data storage domain, then received hosted-storage as auto-import without any issues.
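For a verification like the one above, the auto-import outcome can be read straight from engine.log. The following is a minimal sketch, not an official tool: `sample.log` is a stand-in created here from two lines copied out of the log excerpt earlier in this bug; on a real engine host you would grep `/var/log/ovirt-engine/engine.log` instead.

```shell
#!/bin/sh
# Minimal sketch: count HostedEngineImporter start/success events in an
# engine log. sample.log stands in for /var/log/ovirt-engine/engine.log;
# both lines below are copied from the log excerpt in this bug.
cat > sample.log <<'EOF'
2018-04-11 21:48:06,131-04 INFO [org.ovirt.engine.core.bll.HostedEngineImporter] (EE-ManagedThreadFactory-engine-Thread-20) [5bbf633c] Try to import the Hosted Engine VM 'VM [HostedEngine]'
2018-04-11 21:48:07,604-04 INFO [org.ovirt.engine.core.bll.HostedEngineImporter] (EE-ManagedThreadFactory-engine-Thread-20) [200277e4] Successfully imported the Hosted Engine VM
EOF

# Both events present => auto-import started and succeeded.
grep -c -E 'HostedEngineImporter.*(Try to import|Successfully imported)' sample.log
```

A count of 2 (one "Try to import", one "Successfully imported") indicates the flow completed; the failing environment in this bug would be expected to show the first message without the second.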