Description of problem:
The master storage domain fails to regress to a regular domain after its storage pool is destroyed.

Version-Release number of selected component (if applicable):
vdsm-4.9.4-0.18

How reproducible:
Often

Steps to Reproduce:
1. Create storage domain sd1.
2. Create a storage pool with sd1 as the master domain.
3. Destroy the storage pool.
4. Get sd1 info.

Actual results:
The storage domain's role is still "Master" and its pool reference is not cleared.

Expected results:
The storage domain's role regresses to "Regular" and its pool reference is cleared.

Additional info:
{
  "resource": "storagedomain",
  "id": "ae4d6a96-d3da-419c-8905-b5eec55c4500",
  "href": "/vdsm-api/storagedomains/ae4d6a96-d3da-419c-8905-b5eec55c4500",
  "name": "Test Domain",
  "type": "LOCALFS",
  "class": "Data",
  "role": "Master",                  <-- wrong
  "remotePath": "/storagedomain7",
  "version": "0",
  "master_ver": "1",
  "lver": "0",
  "spm_id": "1",
  "storagepool": {
    "id": "1ef32ac7-1e12-4823-8e8c-8c887333fe50",
    "href": "/vdsm-api/storagepools/1ef32ac7-1e12-4823-8e8c-8c887333fe50"
  },                                 <-- wrong
  "actions": {}
}
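The stale state can be spotted mechanically from the domain-info response. The following is a minimal sketch (not part of vdsm; `find_stale_fields` is a hypothetical helper) that flags the two fields the report marks as wrong, assuming a destroyed pool should leave the domain with role "Regular" and no storage pool reference:

```python
import json

def find_stale_fields(domain_info):
    """Return the fields that still reflect a destroyed storage pool.

    Assumption from the bug's Expected results: after the pool is
    destroyed, the former master domain should report role "Regular"
    and its "storagepool" reference should be cleared.
    """
    stale = []
    if domain_info.get("role") == "Master":
        stale.append("role")          # should have regressed to "Regular"
    if domain_info.get("storagepool"):
        stale.append("storagepool")   # should have been cleared
    return stale

# Abbreviated form of the buggy response shown in Additional info:
actual = json.loads("""{
  "role": "Master",
  "storagepool": {"id": "1ef32ac7-1e12-4823-8e8c-8c887333fe50"}
}""")
print(find_stale_fields(actual))  # -> ['role', 'storagepool']
```

Against the response in Additional info, both fields come back stale, matching the two "wrong" markers; a fixed vdsm would yield an empty list.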
This bug was found in vdsm-4.9.4-0.18 and has been fixed in vdsm-4.9.6-0.44.