Bug 806781

Summary: master domain fails to revert to a regular domain after the storage pool is destroyed
Product: [Retired] oVirt
Component: vdsm
Version: unspecified
Hardware: x86_64
OS: Linux
Status: CLOSED CURRENTRELEASE
Severity: unspecified
Priority: unspecified
Assignee: Royce Lv <lvroyce>
Reporter: Royce Lv <lvroyce>
QA Contact: yeylon <yeylon>
CC: srevivo
Fixed In Version: vdsm-4.9.6-0.44
Doc Type: Bug Fix
Last Closed: 2012-03-28 07:27:58 UTC

Description Royce Lv 2012-03-26 08:34:32 UTC
Description of problem:
The master domain fails to revert to a regular domain after the storage pool is destroyed.

Version-Release number of selected component (if applicable):
vdsm-4.9.4-0.18

How reproducible:
often

Steps to Reproduce:
(1) Create storage domain sd1.
(2) Create a storage pool with sd1 as its master domain.
(3) Destroy the storage pool.
(4) Get sd1's info (see the sketch below).
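
For reference, a minimal verification sketch in Python against the local vdsmd, using vdsm XML-RPC verbs of that era (destroyStoragePool, getStorageDomainInfo). The connection call, host ID, empty scsiKey and the 'info'/'role'/'pool' keys are assumptions that mirror the REST output shown under Additional info; exact argument lists vary between vdsm versions, so treat this as an outline, not the reporter's actual test:

from vdsm import vdscli

SD_UUID = 'ae4d6a96-d3da-419c-8905-b5eec55c4500'  # sd1, taken from the output below
SP_UUID = '1ef32ac7-1e12-4823-8e8c-8c887333fe50'  # pool created in step (2)
HOST_ID = 1                                       # assumed host id used when connecting the pool

conn = vdscli.connect()                           # talk to the local vdsmd

# Step (3): destroy the pool, then step (4): re-read the domain metadata.
conn.destroyStoragePool(SP_UUID, HOST_ID, '')     # scsiKey is unused for LOCALFS
info = conn.getStorageDomainInfo(SD_UUID)['info']

print(info['role'])   # buggy versions still report 'Master'
print(info['pool'])   # buggy versions still list SP_UUID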
  
Actual results:
The storage domain's role is still "Master" and its pool link is not cleared.

Expected results:
The storage domain's role should revert to "Regular" and its pool link should be cleared.
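
Continuing the sketch above, the same query should then satisfy checks along these lines (field names assumed as before):

info = conn.getStorageDomainInfo(SD_UUID)['info']
assert info['role'] == 'Regular'   # demoted from 'Master'
assert info['pool'] == []          # the destroyed pool's UUID is no longer listed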

Additional info:
 { "resource": "storagedomain",
   "id": "ae4d6a96-d3da-419c-8905-b5eec55c4500",
"href": "/vdsm-api/storagedomains/ae4d6a96-d3da-419c-8905-b5eec55c4500",
"name": "Test Domain",
"type": "LOCALFS",
"class": "Data",
"role": "Master",----------------------------------->wrong
"remotePath": "/storagedomain7",
"version": "0",
"master_ver": "1",
"lver": "0",
"spm_id": "1",
"storagepool": { "id": "1ef32ac7-1e12-4823-8e8c-8c887333fe50", "href": "/vdsm-api/storagepools/1ef32ac7-1e12-4823-8e8c-8c887333fe50" },------------------------------->wrong
"actions": {}

}

Comment 1 Royce Lv 2012-03-28 07:25:52 UTC
This bug was found in:
vdsm-4.9.4-0.18

It has been fixed in vdsm-4.9.6-0.44.
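
For context, the fix amounts to demoting the master domain while the pool is being torn down. Below is a rough, hypothetical sketch of that shape; the identifiers (changeRole, REGULAR_DOMAIN, setMetaParam, DMDK_POOLS) follow vdsm's storage-domain module conventions of that era, and the actual patch in vdsm-4.9.6-0.44 may well look different:

# Hypothetical cleanup that destroyStoragePool should perform on its master
# domain; not the actual vdsm-4.9.6-0.44 patch.
import sd  # vdsm's storage-domain module (import path differs across versions)

def demote_master_domain(master_dom):
    # Drop the Master role so getStorageDomainInfo reports 'Regular' again...
    master_dom.changeRole(sd.REGULAR_DOMAIN)
    # ...and clear the pool association so the destroyed pool's UUID is no
    # longer advertised.
    master_dom.setMetaParam(sd.DMDK_POOLS, [])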