Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1863058

Summary: LVs still active after putting host into maintenance
Product: [oVirt] vdsm
Component: Core
Version: 4.40.23
Hardware: Unspecified
OS: Unspecified
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: unspecified
Reporter: Vojtech Juranek <vjuranek>
Assignee: Vojtech Juranek <vjuranek>
QA Contact: Ilan Zuckerman <izuckerm>
Docs Contact:
CC: bugs, dfodor, sfishbai, tnisan
Target Milestone: ovirt-4.4.3
Target Release: 4.40.32
Fixed In Version: vdsm-4.40.32
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Whiteboard:
Last Closed: 2020-11-11 06:42:33 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Vojtech Juranek 2020-08-03 14:56:08 UTC
Description of problem:
When tearing down a block SD on a host, we deactivate the given VG [1]. However, it seems that in some cases there are still active LVs, which have to be removed manually by calling dmsetup remove escaped--vg--name-escaped--lv--name.

We probably need to add dmsetup remove into teardown [1] to make sure all the links are really removed.

[1] https://github.com/oVirt/vdsm/blob/v4.40.23/lib/vdsm/storage/blockSD.py#L988
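
A rough sketch of the kind of cleanup suggested above, assuming a hypothetical helper run after the VG is deactivated (this is not the actual vdsm patch; the names are made up for illustration):

import os
import subprocess

DM_DIR = "/dev/mapper"

def remove_leftover_lvs(vg_name):
    # device-mapper escapes "-" in VG/LV names by doubling it and joins the
    # escaped VG and LV names with a single "-", e.g. "my--vg-my--lv".
    prefix = vg_name.replace("-", "--") + "-"
    for name in os.listdir(DM_DIR):
        if name.startswith(prefix):
            # best-effort "dmsetup remove"; check=False keeps teardown going
            # even if a device is still busy.
            subprocess.run(["dmsetup", "remove", name], check=False)

Whether to fail or only log when a device is still busy would have to follow the error-handling conventions of the existing teardown code.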

Comment 1 Tal Nisan 2020-08-10 14:16:32 UTC
Vojta, is this a regression?

Comment 2 Vojtech Juranek 2020-08-10 19:23:39 UTC
(In reply to Tal Nisan from comment #1)
> Vojta, is this a regression?

AFAICT no. We improved block SD teardown in BZ #1850458, but in some edge cases it doesn't work fully, so this is rather a request for improvement (I believe this could also have happened before the changes done in BZ #1850458).

Comment 3 Avihai 2020-10-04 08:16:38 UTC
Hi Vojtech,
Can you please provide a clear reproduction/verification scenario?

Comment 4 Vojtech Juranek 2020-10-05 07:19:03 UTC
(In reply to Avihai from comment #3)
> Hi Vojtech, 
> Can you please provide a clear reproduction/verification scenario?

1. create a block storage domain using an iSCSI target on a remote server/VM
2. check that the host connected to the cluster has links to the LVM volumes in /dev/mapper
3. put the host into maintenance
4. at the same time, or very soon after putting the host into maintenance, kill the connection to the storage server (or kill the VM running the iSCSI target)
5. check that the LVM links in /dev/mapper (mentioned in step 2) are removed; a scripted check is sketched below
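
For step 5, a minimal sketch of a scripted check, assuming the block SD's VG is named after the storage-domain UUID (the script and function names are made up for illustration):

import os
import sys

def leftover_links(sd_uuid):
    # device-mapper names double the dashes of the VG name and append "-<lv>",
    # so any entry with this prefix is an LV belonging to the domain's VG.
    prefix = sd_uuid.replace("-", "--") + "-"
    return sorted(n for n in os.listdir("/dev/mapper") if n.startswith(prefix))

if __name__ == "__main__":
    links = leftover_links(sys.argv[1])
    print("\n".join(links) if links else "no leftover LV links")

Run it with the storage-domain UUID (e.g. one of the UUIDs visible in the /dev/mapper listing in comment 5); printing "no leftover LV links" after maintenance means the teardown cleaned up correctly.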

Comment 5 Ilan Zuckerman 2020-10-05 09:24:38 UTC
Verified on rhv-4.4.3-7 build with vdsm-4.40.32-1.el8ev.x86_64

2. check that the host connected to the cluster has links to the LVM volumes in /dev/mapper

[root@storage-ge5-vdsm3 ~]# ll /dev/mapper/
total 0
lrwxrwxrwx. 1 root root       7 Oct  5 12:17 213f80c1--973d--4bf2--a8a9--b9f8490d0260-ids -> ../dm-7
lrwxrwxrwx. 1 root root       8 Oct  5 12:17 213f80c1--973d--4bf2--a8a9--b9f8490d0260-inbox -> ../dm-10
lrwxrwxrwx. 1 root root       7 Oct  5 12:17 213f80c1--973d--4bf2--a8a9--b9f8490d0260-leases -> ../dm-8
lrwxrwxrwx. 1 root root       8 Oct  5 12:17 213f80c1--973d--4bf2--a8a9--b9f8490d0260-master -> ../dm-13
lrwxrwxrwx. 1 root root       7 Oct  5 12:17 213f80c1--973d--4bf2--a8a9--b9f8490d0260-metadata -> ../dm-9
lrwxrwxrwx. 1 root root       8 Oct  5 12:17 213f80c1--973d--4bf2--a8a9--b9f8490d0260-outbox -> ../dm-11
lrwxrwxrwx. 1 root root       8 Oct  5 12:17 213f80c1--973d--4bf2--a8a9--b9f8490d0260-xleases -> ../dm-12
lrwxrwxrwx. 1 root root       8 Oct  5 12:17 287e4573--22a3--4a4e--b537--d107d8a30c30-ids -> ../dm-14
lrwxrwxrwx. 1 root root       8 Oct  5 12:17 287e4573--22a3--4a4e--b537--d107d8a30c30-inbox -> ../dm-19
lrwxrwxrwx. 1 root root       8 Oct  5 12:17 287e4573--22a3--4a4e--b537--d107d8a30c30-leases -> ../dm-16
lrwxrwxrwx. 1 root root       8 Oct  5 12:17 287e4573--22a3--4a4e--b537--d107d8a30c30-master -> ../dm-22
lrwxrwxrwx. 1 root root       8 Oct  5 12:17 287e4573--22a3--4a4e--b537--d107d8a30c30-metadata -> ../dm-18
lrwxrwxrwx. 1 root root       8 Oct  5 12:17 287e4573--22a3--4a4e--b537--d107d8a30c30-outbox -> ../dm-20
lrwxrwxrwx. 1 root root       8 Oct  5 12:17 287e4573--22a3--4a4e--b537--d107d8a30c30-xleases -> ../dm-21
lrwxrwxrwx. 1 root root       7 Oct  5 12:17 360002ac0000000000000003100021f6b -> ../dm-1
lrwxrwxrwx. 1 root root       7 Oct  5 12:17 360002ac0000000000000003200021f6b -> ../dm-4
lrwxrwxrwx. 1 root root       7 Oct  5 12:17 360002ac0000000000000003300021f6b -> ../dm-2
lrwxrwxrwx. 1 root root       7 Oct  5 12:17 360002ac0000000000000003400021f6b -> ../dm-5
lrwxrwxrwx. 1 root root       7 Oct  5 12:17 360002ac0000000000000003500021f6b -> ../dm-3
lrwxrwxrwx. 1 root root       7 Oct  5 12:17 360002ac0000000000000003600021f6b -> ../dm-6
lrwxrwxrwx. 1 root root       8 Oct  5 12:17 a676cc44--3280--4cfb--bc50--1a1228db2dc5-ids -> ../dm-15
lrwxrwxrwx. 1 root root       8 Oct  5 12:17 a676cc44--3280--4cfb--bc50--1a1228db2dc5-inbox -> ../dm-24
lrwxrwxrwx. 1 root root       8 Oct  5 12:17 a676cc44--3280--4cfb--bc50--1a1228db2dc5-leases -> ../dm-17
lrwxrwxrwx. 1 root root       8 Oct  5 12:17 a676cc44--3280--4cfb--bc50--1a1228db2dc5-master -> ../dm-27
lrwxrwxrwx. 1 root root       8 Oct  5 12:17 a676cc44--3280--4cfb--bc50--1a1228db2dc5-metadata -> ../dm-23
lrwxrwxrwx. 1 root root       8 Oct  5 12:17 a676cc44--3280--4cfb--bc50--1a1228db2dc5-outbox -> ../dm-25
lrwxrwxrwx. 1 root root       8 Oct  5 12:17 a676cc44--3280--4cfb--bc50--1a1228db2dc5-xleases -> ../dm-26
crw-------. 1 root root 10, 236 Oct  4 18:08 control
lrwxrwxrwx. 1 root root       7 Oct  4 18:08 VolGroup01-root -> ../dm-0


3. put the host into maintenance and block the connection to storage with:
iptables -A OUTPUT -d 3par-iscsiXXX -j DROP


4. check that the LVM links in /dev/mapper (mentioned in step 2) are removed:

[root@storage-ge5-vdsm3 ~]# ll /dev/mapper/
total 0
crw-------. 1 root root 10, 236 Oct  4 18:08 control
lrwxrwxrwx. 1 root root       7 Oct  4 18:08 VolGroup01-root -> ../dm-0


The LVM links are removed as expected.

Comment 6 Sandro Bonazzola 2020-11-11 06:42:33 UTC
This bugzilla is included in the oVirt 4.4.3 release, published on November 10th, 2020.

Since the problem described in this bug report should be resolved in the oVirt 4.4.3 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.