Bug 1956263 - Moving the master storage domain from one Gluster SD to another is not forbidden on the second try when all other SD types are in maintenance mode.
Summary: Moving the master storage domain from one Gluster SD to another is not forbidden on the second try when all other SD types are in maintenance mode.
Keywords:
Status: CLOSED DUPLICATE of bug 1913764
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 4.4.6.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
: ---
Assignee: Eyal Shenitzky
QA Contact: Avihai
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-05-03 10:36 UTC by sshmulev
Modified: 2021-05-04 05:36 UTC (History)
1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-05-04 05:36:48 UTC
oVirt Team: Storage


Attachments (Terms of Use)
Screenshot (162.28 KB, image/png)
2021-05-03 10:36 UTC, sshmulev
no flags Details

Description sshmulev 2021-05-03 10:36:39 UTC
Created attachment 1778903 [details]
Screenshot

When all the SDs except the Gluster SDs are in maintenance, switching the master domain between Gluster SDs is still possible on the second try, which sends all the other SDs into reconstruction (screenshot attached).
In the UI only the button is disabled; moving the master between SDs via maintenance mode is still available.

Version:
ovirt-engine-4.4.6.6-0.10.el8ev.noarch


Steps to reproduce:
1) In Storage Domains, go one by one and set each non-Gluster SD to maintenance (except the current master SD).
2) Set the master SD to maintenance -> one of the Gluster SDs will be set as the master SD.
3) Set another Gluster SD to maintenance.

Actual results:
* The operation succeeds the first time.
* The operation causes a long reconstruction of the Gluster SD, with errors in the engine log.

Engine Log:
2021-05-03 13:17:50,698+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-63095) [46fdeaa7] EVENT_ID: RECONSTRUCT_MASTER_FAILED(985), Failed to Reconstruct Master Domain for Data Center golden_env_mixed.

2021-05-03 13:17:50,718+03 ERROR [org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-63096) [27e43ab1] Command 'org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand' failed: EngineException: Cannot allocate IRS server (Failed with error IRS_REPOSITORY_NOT_FOUND and code 5009)
2021-05-03 13:17:50,718+03 INFO  [org.ovirt.engine.core.bll.CommandCompensator] (EE-ManagedThreadFactory-engine-Thread-63096) [27e43ab1] Command [id=ceaf4a89-1d3e-4f26-8560-e1db801714b7]: Compensating CHANGED_STATUS_ONLY of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot: EntityStatusSnapshot:{id='StoragePoolIsoMapId:{storagePoolId='2587e8ed-f1ee-46d1-be06-37556a726d4c', storageId='5a602dd3-9cc9-4c98-b955-23eb332a3c1d'}', status='Maintenance'}.

2021-05-03 13:17:50,723+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-63096) [27e43ab1] EVENT_ID: USER_ACTIVATE_STORAGE_DOMAIN_FAILED(967), Failed to activate Storage Domain test_gluster_1 (Data Center golden_env_mixed) by admin@internal-authz

VDSM Log:
2021-05-02 16:06:26,427+0300 INFO  (jsonrpc/2) [vdsm.api] FINISH deactivateStorageDomain error=(1, 0, b'', b'') from=::ffff:10.46.16.85,38824, flow_id=158fae60, task_id=31ba0da5-451e-4a8e-8278-803c9565468b (api:52)
2021-05-02 16:06:26,427+0300 ERROR (jsonrpc/2) [storage.TaskManager.Task] (Task='31ba0da5-451e-4a8e-8278-803c9565468b') Unexpected error (task:880)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/storage/task.py", line 887, in _run
    return fn(*args, **kargs)
  File "<decorator-gen-41>", line 2, in deactivateStorageDomain
  File "/usr/lib/python3.6/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/hsm.py", line 1273, in deactivateStorageDomain
    pool.deactivateSD(sdUUID, msdUUID, masterVersion)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/securable.py", line 79, in wrapper
    return method(self, *args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/sp.py", line 1227, in deactivateSD
    self.masterMigrate(sdUUID, newMsdUUID, masterVersion)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/securable.py", line 79, in wrapper
    return method(self, *args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/sp.py", line 896, in masterMigrate
    exclude=('./lost+found',))
  File "/usr/lib/python3.6/site-packages/vdsm/storage/fileUtils.py", line 71, in tarCopy
    raise TarCopyFailed(tsrc.returncode, tdst.returncode, out, err)
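The traceback ends in vdsm's fileUtils.tarCopy, which masterMigrate uses to copy the master domain's filesystem tree (excluding ./lost+found) to the new master SD. For context, the pattern is a tar-to-tar pipe that raises when either side exits non-zero. The following is a simplified, illustrative sketch of that pattern, not vdsm's actual implementation; the names tar_copy and TarCopyFailed here are modeled on the traceback:

```python
import subprocess


class TarCopyFailed(RuntimeError):
    """Raised when either side of the tar pipe exits non-zero."""

    def __init__(self, src_rc, dst_rc, out, err):
        self.src_rc = src_rc
        self.dst_rc = dst_rc
        self.out = out
        self.err = err
        super().__init__((src_rc, dst_rc, out, err))


def tar_copy(src, dst, exclude=()):
    """Copy the tree under src into dst via a tar pipe."""
    excludes = ["--exclude=%s" % e for e in exclude]
    # Writer side: archive the source tree to stdout.
    tsrc = subprocess.Popen(
        ["tar", "cf", "-"] + excludes + ["-C", src, "."],
        stdout=subprocess.PIPE)
    # Reader side: unpack the stream into the destination.
    tdst = subprocess.Popen(
        ["tar", "xf", "-", "-C", dst],
        stdin=tsrc.stdout,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE)
    tsrc.stdout.close()  # let tdst be the only reader of the pipe
    out, err = tdst.communicate()
    tsrc.wait()
    if tsrc.returncode != 0 or tdst.returncode != 0:
        # A failure here is what surfaces in the VDSM log above.
        raise TarCopyFailed(tsrc.returncode, tdst.returncode, out, err)
```

In the log above this copy fails with returncodes (1, 0) and empty output, i.e. the source-side tar itself failed while reading the Gluster-backed master tree, which then aborts masterMigrate and leaves the pool without a usable master.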

Expected results:
Switching the master domain between Gluster SDs should be blocked with a proper message until the issue is fixed by the Gluster team (https://bugzilla.redhat.com/show_bug.cgi?id=1913764).

Comment 1 Eyal Shenitzky 2021-05-04 05:36:48 UTC
This needs to be fixed by Gluster; closing as a duplicate of bug 1913764.

*** This bug has been marked as a duplicate of bug 1913764 ***

