Bug 709604 - [RHEL.6][VDSM] - Storage Pool keeps detached storage domain info in Metadata
Summary: [RHEL.6][VDSM] - Storage Pool keeps detached storage domain info in Metadata
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: vdsm
Version: 6.1
Hardware: x86_64
OS: Linux
Priority: low
Severity: medium
Target Milestone: rc
Assignee: Dan Kenigsberg
QA Contact: David Botzer
URL:
Whiteboard: storage
Depends On:
Blocks:
 
Reported: 2011-06-01 07:36 UTC by David Botzer
Modified: 2014-01-13 23:59 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-06-19 15:05:04 UTC
Target Upstream Version:


Attachments
SPM_vdsm_Log (500.05 KB, application/x-gzip), uploaded 2011-06-01 07:36 UTC by David Botzer
RHEVMlog (64.86 KB, application/x-gzip), uploaded 2011-06-01 07:40 UTC by David Botzer

Description David Botzer 2011-06-01 07:36:38 UTC
Created attachment 502179 [details]
SPM_vdsm_Log

Description of problem:
Storage Pool keeps detached storage domain info in Metadata

Version-Release number of selected component (if applicable):
IC119

How reproducible:
always

Steps to Reproduce:
1. Create a storage domain (Export, Data, or ISO; iSCSI or NFS).
2. Delete and remove the storage domain from the setup.
3. Look at the logs and run vgscan: errors and log entries still refer to the deleted storage domain (see the command sketch below).
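
A minimal command-line sketch of the reproduction, assuming the vdsClient verbs available in vdsm 4.9; the UUIDs in angle brackets are placeholders for illustration, not values from this report:

  # Detach and then remove the domain (placeholder UUIDs; formatStorageDomain stands in
  # here for the removal step)
  vdsClient -s 0 detachStorageDomain <sdUUID> <spUUID> <msdUUID> <masterVersion>
  vdsClient -s 0 formatStorageDomain <sdUUID>
  # The detached domain should be gone from the pool metadata, but getStoragePoolInfo
  # still lists it in its 'domains =' line:
  vdsClient -s 0 getStoragePoolInfo <spUUID>
  # Stale device-mapper/LVM entries for the removed VG show up here:
  vgscan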
  
Actual results:
RHEVM keeps looking for the detached storage domain, because vdsClient still reports that it exists in the pool.

Expected results:
All information about deleted storage domains, and even about domains that are detached but not deleted, should be cleaned from the pool metadata.

Additional info:
===========================
Haim A:
The problem here is not the LVM errors; those are a known issue with device-mapper that vdsm can clean up, and they are not serious.
The real problem is that the pool metadata still contains the detached storage domain, so when the backend asks for getStoragePoolInfo, vdsm raises an exception that the VG does not exist, which in fact it does not.
The bug here is that vdsm does not update the pool metadata during the export-domain detach process.
===========================
vdsClient -s 0 getStoragePoolInfo ac790cf8-f6f3-450a-97d8-1f0eea1b9e27
        name = DC_23_IC119
        isoprefix =
        pool_status = connected
        lver = 14
        domains = 9156c903-1770-41ab-86e2-e85d40c94819:Active,f806b532-340f-4f04-a579-cd9e52a1fbb1:Active
        master_uuid = f806b532-340f-4f04-a579-cd9e52a1fbb1
        version = 0
        spm_id = 4
        type = ISCSI
        master_ver = 1
        9156c903-1770-41ab-86e2-e85d40c94819 = {'status': 'Active', 'diskfree': '439848796160', 'disktotal': '792670765056'}
        f806b532-340f-4f04-a579-cd9e52a1fbb1 = {'status': 'Active', 'diskfree': '173409304576', 'disktotal': '268301238272'}
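
As a quick check, the pool metadata reported by vdsClient can be grepped for the UUID of a domain that was just detached; this is an illustrative sketch, and the <sdUUID> placeholder is not a value from this report:

  # Sketch: after detaching <sdUUID>, it should no longer appear in the pool metadata
  vdsClient -s 0 getStoragePoolInfo ac790cf8-f6f3-450a-97d8-1f0eea1b9e27 | grep '<sdUUID>'
  # A non-empty result means the pool metadata still references the detached domain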

Comment 1 David Botzer 2011-06-01 07:40:14 UTC
Created attachment 502181 [details]
RHEVMlog

Comment 4 David Botzer 2011-06-01 07:56:27 UTC
vdsm version --> VDSM70

vdsm-reg-4.9-70.el6.x86_64
vdsm-hook-faqemu-4.9-70.el6.x86_64
vdsm-4.9-70.el6.x86_64
vdsm-debug-plugin-4.9-70.el6.x86_64
vdsm-cli-4.9-70.el6.x86_64
vdsm-debuginfo-4.9-70.el6.x86_64
vdsm-bootstrap-4.9-70.el6.x86_64

Comment 5 Ayal Baron 2011-06-05 05:32:03 UTC
The only detach request in the attached log is of domain f4809bf2-a0aa-47cd-9d47-6036966e43b5 (see log excerpt below) which does not appear in the domains listed in the command output above so I really don't understand the problem...

Thread-83877::INFO::2011-05-31 09:55:09,110::dispatcher::94::Storage.Dispatcher.Protect::(run) Run and protect: detachStorageDomain, args: ( sdUUID=f4809bf2-a0aa-47cd-9d47-6036966e43b5 spUUID=ac790cf8-f6f3-450a-97d8-1f0eea1b9e27 msdUUID=00000000-0000-0000-0000-000000000000 masterVersion=1)
...
Thread-83877::INFO::2011-05-31 09:55:22,924::dispatcher::100::Storage.Dispatcher.Protect::(run) Run and protect: detachStorageDomain, Return response: {'status': {'message': 'OK', 'code': 0}}
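
For reference, a sketch of how such detach requests can be located in the attached SPM log; the log path below is the vdsm default and is an assumption, not taken from this report:

  # List all detachStorageDomain requests and their responses in the SPM vdsm log
  grep -n 'detachStorageDomain' /var/log/vdsm/vdsm.log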

