Note: This bug is displayed in read-only format because
the product is no longer active in Red Hat Bugzilla.
Created attachment 502179 [details]
SPM_vdsm_Log
Description of problem:
Storage Pool keeps detached storage domain info in Metadata
Version-Release number of selected component (if applicable):
IC119
How reproducible:
always
Steps to Reproduce:
1. Create a storage domain (EXP, Data, or ISO; iSCSI or NFS)
2. Delete and remove the SD from the setup
3. Check the logs and run vgscan to see errors and log entries for the deleted SD
Actual results:
RHEVM keeps looking for the detached storage domain, because vdsClient still reports that it exists.
Expected results:
vdsm should clean up all information about deleted storage domains, and even about domains that are detached but not deleted.
Additional info:
===========================
Haim A:
The problem here is not the LVM errors; that is a known issue with device-mapper, vdsm can clean them up, and it is not serious.
The real problem is that the pool metadata still contains the detached storage domain, so when the backend asks for getStoragePoolInfo, vdsm raises an exception that the VG doesn't exist, which in fact it doesn't.
The bug here is that vdsm does not update the pool metadata during the detach (EXP) process.
===========================
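The fix described above amounts to removing the detached domain's entry from the pool's "domains" metadata when the detach completes. A minimal sketch in Python (vdsm's own language), assuming a simple dict-based metadata model; detach_domain and the key layout here are illustrative, not vdsm's actual API:

```python
def detach_domain(pool_metadata, sd_uuid):
    """Drop a storage domain from the pool's 'domains' metadata entry.

    pool_metadata: dict whose 'domains' key holds a comma-separated
    string of 'uuid:status' pairs, e.g. 'uuid1:Active,uuid2:Active'
    (the same shape getStoragePoolInfo reports in this bug).
    """
    entries = [e for e in pool_metadata.get("domains", "").split(",") if e]
    # Keep every pair whose UUID part does not match the detached domain.
    kept = [e for e in entries if e.split(":", 1)[0] != sd_uuid]
    pool_metadata["domains"] = ",".join(kept)
    return pool_metadata
```

If the pool metadata were rewritten like this at detach time, a later getStoragePoolInfo would no longer mention a VG that no longer exists.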
===========================
vdsClient -s 0 getStoragePoolInfo ac790cf8-f6f3-450a-97d8-1f0eea1b9e27
    name = DC_23_IC119
    isoprefix =
    pool_status = connected
    lver = 14
    domains = 9156c903-1770-41ab-86e2-e85d40c94819:Active,f806b532-340f-4f04-a579-cd9e52a1fbb1:Active
    master_uuid = f806b532-340f-4f04-a579-cd9e52a1fbb1
    version = 0
    spm_id = 4
    type = ISCSI
    master_ver = 1
    9156c903-1770-41ab-86e2-e85d40c94819 = {'status': 'Active', 'diskfree': '439848796160', 'disktotal': '792670765056'}
    f806b532-340f-4f04-a579-cd9e52a1fbb1 = {'status': 'Active', 'diskfree': '173409304576', 'disktotal': '268301238272'}
The only detach request in the attached log is for domain f4809bf2-a0aa-47cd-9d47-6036966e43b5 (see the log excerpt below), which does not appear among the domains listed in the command output above, so I really don't understand the problem.
Thread-83877::INFO::2011-05-31 09:55:09,110::dispatcher::94::Storage.Dispatcher.Protect::(run) Run and protect: detachStorageDomain, args: ( sdUUID=f4809bf2-a0aa-47cd-9d47-6036966e43b5 spUUID=ac790cf8-f6f3-450a-97d8-1f0eea1b9e27 msdUUID=00000000-0000-0000-0000-000000000000 masterVersion=1)
...
Thread-83877::INFO::2011-05-31 09:55:22,924::dispatcher::100::Storage.Dispatcher.Protect::(run) Run and protect: detachStorageDomain, Return response: {'status': {'message': 'OK', 'code': 0}}
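One way to verify the inconsistency this bug describes is to compare the "domains" field from getStoragePoolInfo against the VGs that actually exist on the host. A sketch, assuming the domains string and the list of VG names have already been collected (e.g. from vdsClient and vgs output); stale_domains is a hypothetical helper, not part of vdsm:

```python
def stale_domains(domains_field, existing_vgs):
    """Return pool-metadata domain UUIDs that have no backing VG.

    domains_field: the 'domains' value from getStoragePoolInfo,
    a comma-separated string of 'uuid:status' pairs.
    existing_vgs: iterable of VG names present on the host.
    """
    uuids = [e.split(":", 1)[0] for e in domains_field.split(",") if e]
    present = set(existing_vgs)
    return [u for u in uuids if u not in present]
```

A non-empty result would mean the pool metadata still references a detached or deleted domain, which is exactly the state reported here.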