+++ This bug is a downstream clone. The original bug is: +++
+++ bug 1620314 +++
======================================================================

Description of problem:

Since 4.2, the ansible-based SHE deployment creates a Data Center that is already initialized with hosted_storage as the master storage domain, and there is currently no way to move the master role to another SD [1].

Having hosted_storage as master breaks the Backup/Restore and Move HE SD procedures, because the master SD is wiped from the DB by the --he-remove-storage-vm option passed to engine-backup. As a result, the Backup+Restore, Disaster Recovery and Move HE SD procedures are all broken: running any of them leaves an unusable environment, many operations fail due to the missing master domain, and reinitializing the DC fails with an NPE.

In more detail:

1) All previous Storage Domains are in Down state (no master):

2018-08-23 11:07:26,350+10 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStatusVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-34) [4396d107] Command 'SpmStatusVDSCommand(HostName = host2.rhvlab, SpmStatusVDSCommandParameters:{hostId='28e4cef6-6721-41b3-b15d-9906f46a8a0a', storagePoolId='e94f9b5a-9ac4-11e8-8748-52540015c1ff'})' execution failed: IRSGenericException: IRSErrorException: IRSNoMasterDomainException: Error validating master storage domain: ('MD read error',)

2) The new HE SD fails to import (no master):

2018-08-23 11:06:44,922+10 WARN  [org.ovirt.engine.core.bll.storage.domain.ImportHostedEngineStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-39) [7a221e26] Validation of action 'ImportHostedEngineStorageDomain' failed for user SYSTEM. Reasons: VAR__ACTION__ADD,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_MASTER_STORAGE_DOMAIN_NOT_ACTIVE

3) Hosts are in Non-Operational state (cannot connect to the storage pool):

2018-08-23 11:06:31,471+10 ERROR [org.ovirt.engine.core.bll.eventqueue.EventQueueMonitor] (EE-ManagedThreadFactory-engine-Thread-32) [3a009015] Exception during process of events for pool 'e94f9b5a-9ac4-11e8-8748-52540015c1ff': java.lang.IndexOutOfBoundsException: Index: 0, Size: 0

2018-08-23 11:06:34,708+10 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-94) [] Command 'ConnectStoragePoolVDSCommand(HostName = host2.rhvlab, ConnectStoragePoolVDSCommandParameters:{hostId='28e4cef6-6721-41b3-b15d-9906f46a8a0a', vdsId='28e4cef6-6721-41b3-b15d-9906f46a8a0a', storagePoolId='e94f9b5a-9ac4-11e8-8748-52540015c1ff', masterVersion='1'})' execution failed: IRSGenericException: IRSErrorException: IRSNoMasterDomainException: Cannot find master domain: u'spUUID=e94f9b5a-9ac4-11e8-8748-52540015c1ff, msdUUID=00000000-0000-0000-0000-000000000000'

4) The DC cannot be reinitialized: "There are no compatible Storage Domains to attach to this Data Center. Please add new Storage from the Storage tab.". Then, after adding new storage, it hits an NPE:

2018-08-23 11:28:48,403+10 ERROR [org.ovirt.engine.core.bll.storage.pool.RecoveryStoragePoolCommand] (default task-8) [266ee7fc-73c9-4b4a-bd32-ca69a9b42b8f] Error during ValidateFailure.: java.lang.NullPointerException
	at org.ovirt.engine.core.bll.validator.storage.StorageDomainValidator.isInProcess(StorageDomainValidator.java:387) [bll.jar:]
	at org.ovirt.engine.core.bll.storage.pool.RecoveryStoragePoolCommand.validate(RecoveryStoragePoolCommand.java:67) [bll.jar:]
	at org.ovirt.engine.core.bll.CommandBase.internalValidate(CommandBase.java:779) [bll.jar:]
	at org.ovirt.engine.core.bll.CommandBase.validateOnly(CommandBase.java:368) [bll.jar:]
	at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner.canRunActions(PrevalidatingMultipleActionsRunner.java:113) [bll.jar:]
	at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner.invokeCommands(PrevalidatingMultipleActionsRunner.java:99) [bll.jar:]
	at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner.execute(PrevalidatingMultipleActionsRunner.java:76) [bll.jar:]
	at org.ovirt.engine.core.bll.Backend.runMultipleActionsImpl(Backend.java:596) [bll.jar:]
	at org.ovirt.engine.core.bll.Backend.runMultipleActions(Backend.java:566) [bll.jar:]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [rt.jar:1.8.0_181]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) [rt.jar:1.8.0_181]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_181]
	at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_181]
	at org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptor.java:52)
	at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
	at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509)
	at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.delegateInterception(Jsr299BindingsInterceptor.java:78)

5) Not sure if this one is related, but a command sent via API from hosted-engine --deploy on the host failed with an NPE on the engine:

[ ERROR ] Cannot automatically set CPU level of cluster Default: General command validation failure.

2018-08-23 11:06:29,961+10 ERROR [org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-4) [67d94cdc-8a73-4be8-b314-8b04aaa16b4c] Error during ValidateFailure.: java.lang.NullPointerException
	at org.ovirt.engine.core.bll.UpdateClusterCommand.getEmulatedMachineOfHostInCluster(UpdateClusterCommand.java:446) [bll.jar:]
	at org.ovirt.engine.core.bll.UpdateClusterCommand.isSupportedEmulatedMachinesMatchClusterLevel(UpdateClusterCommand.java:769) [bll.jar:]
	at org.ovirt.engine.core.bll.UpdateClusterCommand.validate(UpdateClusterCommand.java:640) [bll.jar:]
	at org.ovirt.engine.core.bll.CommandBase.internalValidate(CommandBase.java:779) [bll.jar:]
	at org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:393) [bll.jar:]
	at org.ovirt.engine.core.bll.executor.DefaultBackendActionExecutor.execute(DefaultBackendActionExecutor.java:13) [bll.jar:]
	at org.ovirt.engine.core.bll.Backend.runAction(Backend.java:468) [bll.jar:]
	at org.ovirt.engine.core.bll.Backend.runActionImpl(Backend.java:450) [bll.jar:]
	at org.ovirt.engine.core.bll.Backend.runAction(Backend.java:403) [bll.jar:]
	at sun.reflect.GeneratedMethodAccessor135.invoke(Unknown Source) [:1.8.0_181]

Version-Release number of selected component (if applicable):
ovirt-engine-4.2.5.3-1.el7.noarch

How reproducible:
100%

Steps to Reproduce:
1. Deploy 4.2 SHE (ansible)
2. Run the Backup+Restore SHE procedure [2] or the Move SHE SD procedure [3]

Additional info:
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1576923
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1420604#c71
[3] https://access.redhat.com/solutions/2998291

(Originally by Germano Veit Michel)
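For reference, the restore step of the procedure above looks roughly like the sketch below. File paths are placeholders, and the exact engine-backup option set may differ by RHV version; only --he-remove-storage-vm is taken from this report.

```shell
# Sketch of the Backup+Restore flow that hits this bug; paths are
# placeholders and options may vary per RHV version.

# On the old engine VM: take a full backup.
engine-backup --mode=backup --scope=all \
    --file=/root/engine-backup.tar.gz --log=/root/engine-backup.log

# On the freshly deployed engine VM: restore the backup.
# --he-remove-storage-vm drops the hosted-engine SD (and the HE VM)
# from the restored DB. On a 4.2 ansible deployment that SD is the
# MASTER domain, so the Data Center is left with no master at all.
engine-backup --mode=restore --scope=all \
    --file=/root/engine-backup.tar.gz --log=/root/engine-restore.log \
    --he-remove-storage-vm \
    --provision-db --restore-permissions
```

On a legacy (non-ansible) deployment the hosted-engine SD is a regular, non-master domain, so removing it from the DB is harmless; here it takes the master role with it.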
Created attachment 1478022 [details]
engine logs

(Originally by Germano Veit Michel)
hosted engine deployment - moving to Integration (Originally by michal.skrivanek)
For backup/restore operations we currently ask the user to run hosted-engine-setup with the --noansible option to force the old flow, where the hosted-engine SD is not the master one and the user can still manually run engine-backup before engine-setup. Issues of this kind related to the new flow will be handled as in https://bugzilla.redhat.com/1469908 (Originally by Simone Tiraboschi)
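The workaround mentioned above amounts to deploying with the legacy (vdsm-based) flow, roughly as sketched below; the flag name is taken from this report, but the exact invocation may vary by ovirt-hosted-engine-setup version, so verify with `hosted-engine --help` first.

```shell
# Deploy with the legacy (non-ansible) flow so the hosted-engine SD
# is not created as the master domain. Flag name per the 4.2-era
# ovirt-hosted-engine-setup discussed in this bug.
hosted-engine --deploy --noansible
```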
(In reply to Simone Tiraboschi from comment #3)
> For backup/restore operations we are currently asking the user to run
> hosted-engine-setup with --noansible option to force the old flow where the
> hosted-engine SD is not the master one and the user can still manually run
> engine-backup before engine-setup.

Hi Simone,

This is exactly what was done: the old --no-ansible flow, and, as per comment #0, --he-remove-storage-vm was used to restore the backup before engine-setup. This works fine if the HE SD is not the master. But on fresh 4.2 deployments the HE SD is always the master and cannot be changed, so the procedure is now broken.

(Originally by Germano Veit Michel)
This also breaks SHE > baremetal migration. (Originally by Germano Veit Michel)
WARN: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason: [Found non-acked flags: '{'rhevm-4.2.z': '?'}', ] For more info please contact: rhv-devops (Originally by rhv-bugzilla-bot)
Setting blocker according to comment #7
Works for me on these components:
ovirt-hosted-engine-setup-2.2.32-1.el7ev.noarch
ovirt-hosted-engine-ha-2.2.18-1.el7ev.noarch

Moving to verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2019:1050
sync2jira