Bug 1544370 - vdsm does not deactivate all LVs if a LUN is removed from the Storage Domain
Summary: vdsm does not deactivate all LVs if a LUN is removed from the Storage Domain
Keywords:
Status: CLOSED DUPLICATE of bug 1163890
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 4.1.6
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: ovirt-4.4.0
Assignee: Vojtech Juranek
QA Contact: Evelina Shames
URL:
Whiteboard:
Duplicates: 1778291
Depends On:
Blocks: 902971 1310330 1602776
 
Reported: 2018-02-12 10:07 UTC by Ron van der Wees
Modified: 2023-10-06 17:43 UTC
CC List: 16 users

Fixed In Version: vdsm-4.40.9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-05-12 12:54:00 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:
lsvaty: testing_plan_complete-




Links
- Red Hat Knowledge Base (Solution) 3350701 (last updated 2018-02-12 10:48:38 UTC)
- oVirt gerrit 56876, master, ABANDONED: blockSD: Storage domain life cycle management (last updated 2021-02-10 13:24:18 UTC)
- oVirt gerrit 105816, master, MERGED: blockSD: Storage domain life cycle management (last updated 2021-02-10 13:24:18 UTC)

Description Ron van der Wees 2018-02-12 10:07:09 UTC
Description of problem:
Follow-up issue from bz#1310330 comment 74.
When a LUN is removed from the Storage Domain, vdsm deactivates the LVs only
on the hypervisor that was used to remove the LUN; on all other hypervisors
the LVs stay active. As a result, after the removal of the LUN, the sysadmin
cannot remove the multipath device with 'multipath -f "WWN"' because the LVs
are still active.

Version-Release number of selected component (if applicable):
vdsm-4.19.31-1.el7ev.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Remove the LUN / Storage Domain from RHV
2. Unexport the LUN from all hypervisors
3. On the hypervisor with the SPM role, remove the mpath device:
   ~~~ 
   multipath -f "WWN"
   ~~~
4. On all other hypervisors remove the mpath device using the same command

Actual results:
On the hypervisor with the SPM role, the mpath device can be removed
successfully while on the other hypervisors the mpath removal fails because
there are active LVs. As an example:
lrwxrwxrwx. 1 root root 8 Feb  1 11:20 /dev/mapper/0a4f3a32--a64a--4c51--96cf--ca60c1a8393c-ids -> ../dm-72                                                                        
lrwxrwxrwx. 1 root root 8 Feb  1 11:20 /dev/mapper/0a4f3a32--a64a--4c51--96cf--ca60c1a8393c-inbox -> ../dm-73
lrwxrwxrwx. 1 root root 8 Feb  1 11:20 /dev/mapper/0a4f3a32--a64a--4c51--96cf--ca60c1a8393c-leases -> ../dm-71
lrwxrwxrwx. 1 root root 8 Feb  1 11:20 /dev/mapper/0a4f3a32--a64a--4c51--96cf--ca60c1a8393c-master -> ../dm-74
lrwxrwxrwx. 1 root root 8 Feb  1 11:20 /dev/mapper/0a4f3a32--a64a--4c51--96cf--ca60c1a8393c-metadata -> ../dm-68
lrwxrwxrwx. 1 root root 8 Feb  1 11:20 /dev/mapper/0a4f3a32--a64a--4c51--96cf--ca60c1a8393c-outbox -> ../dm-69
lrwxrwxrwx. 1 root root 8 Feb  1 11:20 /dev/mapper/0a4f3a32--a64a--4c51--96cf--ca60c1a8393c-xleases -> ../dm-70
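
The double dashes in the listing above come from device-mapper name escaping:
each '-' in the VG name (the storage-domain UUID) is written as '--' in the
/dev/mapper name. A minimal bash sketch of that mapping (the `dm_name` helper
is hypothetical, not part of vdsm; pure string manipulation, safe to run
anywhere):

```shell
# Sketch: build the /dev/mapper name for a given storage-domain UUID (VG name)
# and LV name. device-mapper escapes every '-' inside a name component as '--'
# so that the single '-' separating VG and LV stays unambiguous.
dm_name() {
    local vg="${1//-/--}"   # escape dashes in the VG (domain UUID)
    local lv="${2//-/--}"   # escape dashes in the LV name
    printf '/dev/mapper/%s-%s\n' "$vg" "$lv"
}

dm_name 0a4f3a32-a64a-4c51-96cf-ca60c1a8393c ids
```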

Expected results:
The LVs are deactivated on all hypervisors, and removal of the mpath device
completes successfully.

Additional info:
Manually removing the LVs with "dmsetup remove" when decommissioning a LUN is
cumbersome and error-prone.
The customer uses 3Par with VLUNs, but this issue should apply to any SAN.
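
The manual cleanup described above can be sketched as follows. This is a
hedged workaround sketch, not vdsm's own code; the `deactivate_domain_lvs`
function is hypothetical. It assumes the vdsm block-domain convention that the
VG name equals the storage-domain UUID, and it covers only the seven special
LVs from the listing above (a real domain may also have image LVs; `lvs <vg>`
lists them all). It only prints the commands, so it is safe to run; pipe the
output to `sudo sh` to actually execute them.

```shell
# Sketch: print the lvchange commands that would deactivate the special LVs of
# a block storage domain (VG name = storage-domain UUID). Printing instead of
# executing keeps this side-effect free.
deactivate_domain_lvs() {
    local sd_uuid="$1" lv
    for lv in ids inbox leases master metadata outbox xleases; do
        echo lvchange -an "${sd_uuid}/${lv}"
    done
}

deactivate_domain_lvs 0a4f3a32-a64a-4c51-96cf-ca60c1a8393c
```

Once all LVs of the VG are inactive, `multipath -f "WWN"` should no longer
fail with a "map in use" error.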

Comment 1 Nir Soffer 2018-02-23 15:35:02 UTC
Ron, you wrote:

> 1. Remove the LUN / Storage Domain from RHV

Do you mean removing a storage domain?

To remove a storage domain, the storage domain must be deactivated first. When
we deactivate a storage domain, we should deactivate all of its LVs on all
hosts.

Is this the flow you describe in comment 0?

Can you provide vdsm logs from a host that was not used to remove the storage
domain and has leftover LVs?

If you don't have such logs, I guess QE can reproduce this issue.
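
To confirm on a given host which of a domain's LVs are still active, one can
look at the lv_attr column of `lvs --noheadings -o lv_name,lv_attr <vg>`: per
the lvs(8) man page, the 5th attribute character is the state bit, 'a' when
the LV is active. A minimal sketch of that check (the `lv_is_active` helper is
hypothetical; pure string matching, runnable without LVM installed):

```shell
# Sketch: decide from an lv_attr string whether an LV is active.
# The 5th character of lv_attr is the state: 'a' means active.
lv_is_active() {
    case "$1" in
        ????a*) return 0 ;;   # 5th character is 'a' => LV is active
        *)      return 1 ;;
    esac
}

# Example attr strings for an active vs. an inactive linear LV.
lv_is_active "-wi-a-----" && echo "active"
lv_is_active "-wi-------" || echo "inactive"
```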

Comment 8 Sandro Bonazzola 2019-01-28 09:40:47 UTC
This bug has not been marked as a blocker for oVirt 4.3.0.
Since we are releasing it tomorrow, January 29th, this bug has been re-targeted to 4.3.1.

Comment 11 Daniel Gur 2019-08-28 13:14:12 UTC
sync2jira

Comment 12 Daniel Gur 2019-08-28 13:18:29 UTC
sync2jira

Comment 14 nijin ashok 2019-12-17 13:35:22 UTC
*** Bug 1778291 has been marked as a duplicate of this bug. ***

Comment 16 Avihai 2020-04-16 07:10:30 UTC
Hi Vojtech, 

Can you please provide a clear verification scenario for this bug?

Comment 17 Vojtech Juranek 2020-04-20 09:02:03 UTC
(In reply to Avihai from comment #16)
> Hi Vojtech, 
> 
> Can you please provide a clear verification scenario for this bug?

it seems this duplicates BZ #1163890; I provided test scenarios there

Comment 20 Avihai 2020-05-10 14:22:44 UTC
(In reply to Vojtech Juranek from comment #17)
> (In reply to Avihai from comment #16)
> > Hi Vojtech, 
> > 
> > Can you please provide a clear verification scenario for this bug?
> 
> it seems this duplicates BZ #1163890; I provided test scenarios there

Wait, if this is a duplicate bug, please close this bug as a duplicate.
There is no need to verify it again.

Comment 21 Vojtech Juranek 2020-05-12 12:54:00 UTC
Yes, this is a duplicate of BZ #1163890, which was already verified.

*** This bug has been marked as a duplicate of bug 1163890 ***

