Bug 806781 - master domain fails to regress to regular domain after storage pool is destroyed
Summary: master domain fails to regress to regular domain after storage pool is destroyed
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: oVirt
Classification: Retired
Component: vdsm
Version: unspecified
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Royce Lv
QA Contact: yeylon@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-03-26 08:34 UTC by Royce Lv
Modified: 2016-04-18 06:44 UTC
CC List: 1 user

Fixed In Version: vdsm-4.9.6-0.44
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-03-28 07:27:58 UTC
oVirt Team: ---
Embargoed:



Description Royce Lv 2012-03-26 08:34:32 UTC
Description of problem:
master domain fails to regress to regular domain after the storage pool is destroyed

Version-Release number of selected component (if applicable):
vdsm-4.9.4-0.18

How reproducible:
often

Steps to Reproduce:
(1) Create storage domain sd1.
(2) Create a storage pool with sd1 as the master domain.
(3) Destroy the storage pool.
(4) Get sd1's info (see the sketch below).
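For reference, a minimal Python sketch of these four steps. The vdsm_call stub below is hypothetical and stands in for a real vdsm XML-RPC client; the verb names match vdsm's storage verbs, but the argument lists here are illustrative, not the real signatures.

import uuid

def vdsm_call(verb, *args):
    """Hypothetical stand-in for a vdsm XML-RPC client call.

    A real reproduction would connect to vdsmd and invoke the verb;
    here the call is only logged so the sequence runs as-is.
    """
    print("vdsm_call:", verb, args)
    return {}

sd_uuid = str(uuid.uuid4())
sp_uuid = str(uuid.uuid4())

# (1) create storage domain sd1
vdsm_call("createStorageDomain", sd_uuid, "Test Domain", "/storagedomain7")
# (2) create a storage pool with sd1 as its master domain
vdsm_call("createStoragePool", sp_uuid, "Test Pool", sd_uuid, [sd_uuid])
# (3) destroy the storage pool
vdsm_call("destroyStoragePool", sp_uuid)
# (4) query sd1's info; the bug shows up in the returned role/pool fields
info = vdsm_call("getStorageDomainInfo", sd_uuid)
print("role:", info.get("role"), "pool:", info.get("storagepool"))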
  
Actual results:
The storage domain's role is still Master and its pool reference is not cleared.

Expected results:
The storage domain's role regresses to Regular and its pool reference is cleared.

Additional info:
 { "resource": "storagedomain",
   "id": "ae4d6a96-d3da-419c-8905-b5eec55c4500",
"href": "/vdsm-api/storagedomains/ae4d6a96-d3da-419c-8905-b5eec55c4500",
"name": "Test Domain",
"type": "LOCALFS",
"class": "Data",
"role": "Master",----------------------------------->wrong
"remotePath": "/storagedomain7",
"version": "0",
"master_ver": "1",
"lver": "0",
"spm_id": "1",
"storagepool": { "id": "1ef32ac7-1e12-4823-8e8c-8c887333fe50", "href": "/vdsm-api/storagepools/1ef32ac7-1e12-4823-8e8c-8c887333fe50" },------------------------------->wrong
"actions": {}

}
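The mismatch can be checked mechanically. Below is a small Python sketch, assuming the payload above has been parsed into a dict and assuming "Regular" is the non-master role string (the report only says the role should regress to regular). It prints FAIL against the buggy output above and should print PASS once the domain regresses correctly.

import json

# The buggy response from the report above, trimmed to the two fields at issue.
payload = json.loads("""
{
  "id": "ae4d6a96-d3da-419c-8905-b5eec55c4500",
  "role": "Master",
  "storagepool": {"id": "1ef32ac7-1e12-4823-8e8c-8c887333fe50"}
}
""")

# Per "Expected results": after destroyStoragePool the domain should have
# regressed to a regular domain with no pool reference.
problems = []
if payload.get("role") != "Regular":
    problems.append("role is %r, expected 'Regular'" % payload.get("role"))
if payload.get("storagepool"):
    problems.append("storagepool reference not cleared")
print("FAIL: " + "; ".join(problems) if problems else "PASS")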

Comment 1 Royce Lv 2012-03-28 07:25:52 UTC
This bug was found in vdsm-4.9.4-0.18 and has been fixed in vdsm-4.9.6-0.44.

