Description of problem:

Removing a block SD from the cluster leaves stale device-mapper links on the host. The example below shows two iSCSI SDs on a host, one of them (formed by the two LUNs 36001405c56ca5d9b4674f2dbb9d6d174 and 36001405ce319a580db84df7bf50517fd) being removed. After putting the domain into maintenance, the VG is properly deactivated. However, during domain detach the VG is activated again (in StoragePool.detachSD()), which results in stale DM links. These can cause various issues, e.g. removing the multipath map fails because the device is in use.

Before removal:
===============
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 10G 0 disk
`-3600140577f11d1c4ec2418292f164b7e 252:2 0 10G 0 mpath
sdb 8:16 0 10G 0 disk
`-360014057825e344174146f385cf3c2bd 252:0 0 10G 0 mpath
sdc 8:32 0 10G 0 disk
`-36001405042dd41f40334273abc579952 252:1 0 10G 0 mpath
  |-5f2f58dd--4c46--4f38--88d3--d6a78973c0e2-ids 252:3 0 128M 0 lvm
  |-5f2f58dd--4c46--4f38--88d3--d6a78973c0e2-leases 252:4 0 2G 0 lvm
  |-5f2f58dd--4c46--4f38--88d3--d6a78973c0e2-metadata 252:5 0 128M 0 lvm
  |-5f2f58dd--4c46--4f38--88d3--d6a78973c0e2-inbox 252:6 0 128M 0 lvm
  |-5f2f58dd--4c46--4f38--88d3--d6a78973c0e2-outbox 252:7 0 128M 0 lvm
  |-5f2f58dd--4c46--4f38--88d3--d6a78973c0e2-xleases 252:8 0 1G 0 lvm
  `-5f2f58dd--4c46--4f38--88d3--d6a78973c0e2-master 252:9 0 1G 0 lvm /rhev/data-center/mnt/blockSD/5f2f58dd-4c46-4f38-88d3-d6a78973c0e2/master
sdd 8:48 0 5G 0 disk
`-36001405c56ca5d9b4674f2dbb9d6d174 252:10 0 5G 0 mpath
  |-529421ed--fc1d--47c5--a11b--22fc608ae9ad-ids 252:12 0 128M 0 lvm
  |-529421ed--fc1d--47c5--a11b--22fc608ae9ad-leases 252:13 0 2G 0 lvm
  |-529421ed--fc1d--47c5--a11b--22fc608ae9ad-metadata 252:14 0 128M 0 lvm
  |-529421ed--fc1d--47c5--a11b--22fc608ae9ad-inbox 252:15 0 128M 0 lvm
  |-529421ed--fc1d--47c5--a11b--22fc608ae9ad-outbox 252:16 0 128M 0 lvm
  |-529421ed--fc1d--47c5--a11b--22fc608ae9ad-xleases 252:17 0 1G 0 lvm
  `-529421ed--fc1d--47c5--a11b--22fc608ae9ad-master 252:18 0 1G 0 lvm
sdf 8:80 0 5G 0 disk
`-36001405ce319a580db84df7bf50517fd 252:11 0 5G 0 mpath
vda 253:0 0 20G 0 disk
|-vda1 253:1 0 1M 0 part
|-vda2 253:2 0 1G 0 part /boot
|-vda3 253:3 0 615M 0 part [SWAP]
`-vda4 253:4 0 18.4G 0 part /

After putting into maintenance:
===============================
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 10G 0 disk
`-3600140577f11d1c4ec2418292f164b7e 252:2 0 10G 0 mpath
sdb 8:16 0 10G 0 disk
`-360014057825e344174146f385cf3c2bd 252:0 0 10G 0 mpath
sdc 8:32 0 10G 0 disk
`-36001405042dd41f40334273abc579952 252:1 0 10G 0 mpath
  |-5f2f58dd--4c46--4f38--88d3--d6a78973c0e2-ids 252:3 0 128M 0 lvm
  |-5f2f58dd--4c46--4f38--88d3--d6a78973c0e2-leases 252:4 0 2G 0 lvm
  |-5f2f58dd--4c46--4f38--88d3--d6a78973c0e2-metadata 252:5 0 128M 0 lvm
  |-5f2f58dd--4c46--4f38--88d3--d6a78973c0e2-inbox 252:6 0 128M 0 lvm
  |-5f2f58dd--4c46--4f38--88d3--d6a78973c0e2-outbox 252:7 0 128M 0 lvm
  |-5f2f58dd--4c46--4f38--88d3--d6a78973c0e2-xleases 252:8 0 1G 0 lvm
  `-5f2f58dd--4c46--4f38--88d3--d6a78973c0e2-master 252:9 0 1G 0 lvm /rhev/data-center/mnt/blockSD/5f2f58dd-4c46-4f38-88d3-d6a78973c0e2/master
vda 253:0 0 20G 0 disk
|-vda1 253:1 0 1M 0 part
|-vda2 253:2 0 1G 0 part /boot
|-vda3 253:3 0 615M 0 part [SWAP]
`-vda4 253:4 0 18.4G 0 part /

After detach:
=============
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 10G 0 disk
`-3600140577f11d1c4ec2418292f164b7e 252:2 0 10G 0 mpath
sdb 8:16 0 10G 0 disk
`-360014057825e344174146f385cf3c2bd 252:0 0 10G 0 mpath
sdc 8:32 0 10G 0 disk
`-36001405042dd41f40334273abc579952 252:1 0 10G 0 mpath
  |-5f2f58dd--4c46--4f38--88d3--d6a78973c0e2-ids 252:3 0 128M 0 lvm
  |-5f2f58dd--4c46--4f38--88d3--d6a78973c0e2-leases 252:4 0 2G 0 lvm
  |-5f2f58dd--4c46--4f38--88d3--d6a78973c0e2-metadata 252:5 0 128M 0 lvm
  |-5f2f58dd--4c46--4f38--88d3--d6a78973c0e2-inbox 252:6 0 128M 0 lvm
  |-5f2f58dd--4c46--4f38--88d3--d6a78973c0e2-outbox 252:7 0 128M 0 lvm
  |-5f2f58dd--4c46--4f38--88d3--d6a78973c0e2-xleases 252:8 0 1G 0 lvm
  `-5f2f58dd--4c46--4f38--88d3--d6a78973c0e2-master 252:9 0 1G 0 lvm /rhev/data-center/mnt/blockSD/5f2f58dd-4c46-4f38-88d3-d6a78973c0e2/master
36001405c56ca5d9b4674f2dbb9d6d174 252:10 0 5G 0 mpath
|-529421ed--fc1d--47c5--a11b--22fc608ae9ad-ids 252:12 0 128M 0 lvm
|-529421ed--fc1d--47c5--a11b--22fc608ae9ad-leases 252:13 0 2G 0 lvm
|-529421ed--fc1d--47c5--a11b--22fc608ae9ad-metadata 252:14 0 128M 0 lvm
|-529421ed--fc1d--47c5--a11b--22fc608ae9ad-inbox 252:15 0 128M 0 lvm
|-529421ed--fc1d--47c5--a11b--22fc608ae9ad-outbox 252:16 0 128M 0 lvm
|-529421ed--fc1d--47c5--a11b--22fc608ae9ad-xleases 252:17 0 1G 0 lvm
`-529421ed--fc1d--47c5--a11b--22fc608ae9ad-master 252:18 0 1G 0 lvm
vda 253:0 0 20G 0 disk
|-vda1 253:1 0 1M 0 part
|-vda2 253:2 0 1G 0 part /boot
|-vda3 253:3 0 615M 0 part [SWAP]
`-vda4 253:4 0 18.4G 0 part /

How reproducible: always

Steps to Reproduce:
1. Create an iSCSI storage domain.
2. Put it into maintenance.
3. Detach the SD from the cluster.

Actual results:
Stale DM links are present and removing the multipath map fails. In the example above:

[root@localhost ~]# multipath -f 36001405c56ca5d9b4674f2dbb9d6d174
Feb 10 08:03:30 | 36001405c56ca5d9b4674f2dbb9d6d174: map in use

Expected results:
No stale links are present and removing the multipath map works.
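A stale map like the one above can be recognized mechanically: once its backing SCSI disks are gone, the map shows up at the top level of lsblk's tree instead of under an sdX disk. A minimal diagnostic sketch (illustrative only, not part of vdsm; the helper name is made up) that filters `lsblk --ascii --output NAME,TYPE` output:

```shell
# find_stale_mpath: read `lsblk --ascii --output NAME,TYPE` on stdin and print
# the names of multipath maps sitting at the top level of the tree, i.e. maps
# whose underlying SCSI disks have already been removed.
find_stale_mpath() {
    # NR > 1 skips the header; child rows are prefixed with |- or `- glyphs
    awk 'NR > 1 && $1 !~ /^[|`]/ && $2 == "mpath" { print $1 }'
}
```

Usage: `lsblk --ascii --output NAME,TYPE | find_stale_mpath`. For each reported map, deactivating the domain's VG (its name is the domain UUID, e.g. `lvchange -an 529421ed-fc1d-47c5-a11b-22fc608ae9ad`) should release the leftover LV nodes, after which `multipath -f <WWID>` no longer fails with "map in use".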
Verified! See the details below:

Before putting in maintenance:
[root@storage-ge3-vdsm2 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 30G 0 disk
├─sda1 8:1 0 700M 0 part /boot
├─sda2 8:2 0 3G 0 part [SWAP]
└─sda3 8:3 0 26.3G 0 part
  └─VolGroup01-root 253:0 0 26.3G 0 lvm /
sdb 8:16 0 75G 0 disk
└─3600a098038304479363f4c4870454f79 253:1 0 75G 0 mpath
  ├─9fbf1764--74df--4c5d--bb41--346ba7725db3-ids 253:7 0 128M 0 lvm
  ├─9fbf1764--74df--4c5d--bb41--346ba7725db3-leases 253:8 0 2G 0 lvm
  ├─9fbf1764--74df--4c5d--bb41--346ba7725db3-metadata 253:9 0 128M 0 lvm
  ├─9fbf1764--74df--4c5d--bb41--346ba7725db3-inbox 253:10 0 128M 0 lvm
  ├─9fbf1764--74df--4c5d--bb41--346ba7725db3-outbox 253:11 0 128M 0 lvm
  ├─9fbf1764--74df--4c5d--bb41--346ba7725db3-xleases 253:12 0 1G 0 lvm
  └─9fbf1764--74df--4c5d--bb41--346ba7725db3-master 253:13 0 1G 0 lvm
sdc 8:32 0 75G 0 disk
└─3600a098038304479363f4c4870454f7a 253:4 0 75G 0 mpath
  ├─9add0a5d--c9e0--48bc--a53a--060f6b27f342-ids 253:14 0 128M 0 lvm
  ├─9add0a5d--c9e0--48bc--a53a--060f6b27f342-leases 253:15 0 2G 0 lvm
  ├─9add0a5d--c9e0--48bc--a53a--060f6b27f342-metadata 253:16 0 128M 0 lvm
  ├─9add0a5d--c9e0--48bc--a53a--060f6b27f342-inbox 253:17 0 128M 0 lvm
  ├─9add0a5d--c9e0--48bc--a53a--060f6b27f342-outbox 253:18 0 128M 0 lvm
  ├─9add0a5d--c9e0--48bc--a53a--060f6b27f342-xleases 253:19 0 1G 0 lvm
  └─9add0a5d--c9e0--48bc--a53a--060f6b27f342-master 253:20 0 1G 0 lvm
sdd 8:48 0 75G 0 disk
└─3600a098038304479363f4c4870455030 253:3 0 75G 0 mpath
  ├─e813621a--fda7--4a74--a736--a477914c2c1e-ids 253:21 0 128M 0 lvm
  ├─e813621a--fda7--4a74--a736--a477914c2c1e-leases 253:22 0 2G 0 lvm
  ├─e813621a--fda7--4a74--a736--a477914c2c1e-metadata 253:23 0 128M 0 lvm
  ├─e813621a--fda7--4a74--a736--a477914c2c1e-inbox 253:24 0 128M 0 lvm
  ├─e813621a--fda7--4a74--a736--a477914c2c1e-outbox 253:25 0 128M 0 lvm
  ├─e813621a--fda7--4a74--a736--a477914c2c1e-xleases 253:26 0 1G 0 lvm
  └─e813621a--fda7--4a74--a736--a477914c2c1e-master 253:27 0 1G 0 lvm
sde 8:64 0 50G 0 disk
└─3600a098038304479363f4c4870455031 253:5 0 50G 0 mpath
sdf 8:80 0 50G 0 disk
└─3600a098038304479363f4c4870455032 253:2 0 50G 0 mpath
sdg 8:96 0 50G 0 disk
└─3600a098038304479363f4c4870455033 253:6 0 50G 0 mpath
  ├─abed1ee9--728a--4c69--a29e--b553ed13ab2b-metadata 253:28 0 128M 0 lvm
  ├─abed1ee9--728a--4c69--a29e--b553ed13ab2b-ids 253:29 0 128M 0 lvm
  ├─abed1ee9--728a--4c69--a29e--b553ed13ab2b-inbox 253:30 0 128M 0 lvm
  ├─abed1ee9--728a--4c69--a29e--b553ed13ab2b-outbox 253:31 0 128M 0 lvm
  ├─abed1ee9--728a--4c69--a29e--b553ed13ab2b-leases 253:32 0 2G 0 lvm
  ├─abed1ee9--728a--4c69--a29e--b553ed13ab2b-xleases 253:33 0 1G 0 lvm
  └─abed1ee9--728a--4c69--a29e--b553ed13ab2b-master 253:34 0 1G 0 lvm
sr0

After putting in maintenance:
[root@storage-ge3-vdsm2 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 30G 0 disk
├─sda1 8:1 0 700M 0 part /boot
├─sda2 8:2 0 3G 0 part [SWAP]
└─sda3 8:3 0 26.3G 0 part
  └─VolGroup01-root 253:0 0 26.3G 0 lvm /
sdb 8:16 0 75G 0 disk
└─3600a098038304479363f4c4870454f79 253:1 0 75G 0 mpath
  ├─9fbf1764--74df--4c5d--bb41--346ba7725db3-ids 253:7 0 128M 0 lvm
  ├─9fbf1764--74df--4c5d--bb41--346ba7725db3-leases 253:8 0 2G 0 lvm
  ├─9fbf1764--74df--4c5d--bb41--346ba7725db3-metadata 253:9 0 128M 0 lvm
  ├─9fbf1764--74df--4c5d--bb41--346ba7725db3-inbox 253:10 0 128M 0 lvm
  ├─9fbf1764--74df--4c5d--bb41--346ba7725db3-outbox 253:11 0 128M 0 lvm
  ├─9fbf1764--74df--4c5d--bb41--346ba7725db3-xleases 253:12 0 1G 0 lvm
  └─9fbf1764--74df--4c5d--bb41--346ba7725db3-master 253:13 0 1G 0 lvm
sdc 8:32 0 75G 0 disk
└─3600a098038304479363f4c4870454f7a 253:4 0 75G 0 mpath
  ├─9add0a5d--c9e0--48bc--a53a--060f6b27f342-ids 253:14 0 128M 0 lvm
  ├─9add0a5d--c9e0--48bc--a53a--060f6b27f342-leases 253:15 0 2G 0 lvm
  ├─9add0a5d--c9e0--48bc--a53a--060f6b27f342-metadata 253:16 0 128M 0 lvm
  ├─9add0a5d--c9e0--48bc--a53a--060f6b27f342-inbox 253:17 0 128M 0 lvm
  ├─9add0a5d--c9e0--48bc--a53a--060f6b27f342-outbox 253:18 0 128M 0 lvm
  ├─9add0a5d--c9e0--48bc--a53a--060f6b27f342-xleases 253:19 0 1G 0 lvm
  └─9add0a5d--c9e0--48bc--a53a--060f6b27f342-master 253:20 0 1G 0 lvm
sdd 8:48 0 75G 0 disk
└─3600a098038304479363f4c4870455030 253:3 0 75G 0 mpath
  ├─e813621a--fda7--4a74--a736--a477914c2c1e-ids 253:21 0 128M 0 lvm
  ├─e813621a--fda7--4a74--a736--a477914c2c1e-leases 253:22 0 2G 0 lvm
  ├─e813621a--fda7--4a74--a736--a477914c2c1e-metadata 253:23 0 128M 0 lvm
  ├─e813621a--fda7--4a74--a736--a477914c2c1e-inbox 253:24 0 128M 0 lvm
  ├─e813621a--fda7--4a74--a736--a477914c2c1e-outbox 253:25 0 128M 0 lvm
  ├─e813621a--fda7--4a74--a736--a477914c2c1e-xleases 253:26 0 1G 0 lvm
  └─e813621a--fda7--4a74--a736--a477914c2c1e-master 253:27 0 1G 0 lvm
sde 8:64 0 50G 0 disk
└─3600a098038304479363f4c4870455031 253:5 0 50G 0 mpath
sdf 8:80 0 50G 0 disk
└─3600a098038304479363f4c4870455032 253:2 0 50G 0 mpath
sdg 8:96 0 50G 0 disk
└─3600a098038304479363f4c4870455033 253:6 0 50G 0 mpath
sr0 11:0 1 1024M 0 rom

After detach:
[root@storage-ge3-vdsm2 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 30G 0 disk
├─sda1 8:1 0 700M 0 part /boot
├─sda2 8:2 0 3G 0 part [SWAP]
└─sda3 8:3 0 26.3G 0 part
  └─VolGroup01-root 253:0 0 26.3G 0 lvm /
sdb 8:16 0 75G 0 disk
└─3600a098038304479363f4c4870454f79 253:1 0 75G 0 mpath
  ├─9fbf1764--74df--4c5d--bb41--346ba7725db3-ids 253:7 0 128M 0 lvm
  ├─9fbf1764--74df--4c5d--bb41--346ba7725db3-leases 253:8 0 2G 0 lvm
  ├─9fbf1764--74df--4c5d--bb41--346ba7725db3-metadata 253:9 0 128M 0 lvm
  ├─9fbf1764--74df--4c5d--bb41--346ba7725db3-inbox 253:10 0 128M 0 lvm
  ├─9fbf1764--74df--4c5d--bb41--346ba7725db3-outbox 253:11 0 128M 0 lvm
  ├─9fbf1764--74df--4c5d--bb41--346ba7725db3-xleases 253:12 0 1G 0 lvm
  └─9fbf1764--74df--4c5d--bb41--346ba7725db3-master 253:13 0 1G 0 lvm
sdc 8:32 0 75G 0 disk
└─3600a098038304479363f4c4870454f7a 253:4 0 75G 0 mpath
  ├─9add0a5d--c9e0--48bc--a53a--060f6b27f342-ids 253:14 0 128M 0 lvm
  ├─9add0a5d--c9e0--48bc--a53a--060f6b27f342-leases 253:15 0 2G 0 lvm
  ├─9add0a5d--c9e0--48bc--a53a--060f6b27f342-metadata 253:16 0 128M 0 lvm
  ├─9add0a5d--c9e0--48bc--a53a--060f6b27f342-inbox 253:17 0 128M 0 lvm
  ├─9add0a5d--c9e0--48bc--a53a--060f6b27f342-outbox 253:18 0 128M 0 lvm
  ├─9add0a5d--c9e0--48bc--a53a--060f6b27f342-xleases 253:19 0 1G 0 lvm
  └─9add0a5d--c9e0--48bc--a53a--060f6b27f342-master 253:20 0 1G 0 lvm
sdd 8:48 0 75G 0 disk
└─3600a098038304479363f4c4870455030 253:3 0 75G 0 mpath
  ├─e813621a--fda7--4a74--a736--a477914c2c1e-ids 253:21 0 128M 0 lvm
  ├─e813621a--fda7--4a74--a736--a477914c2c1e-leases 253:22 0 2G 0 lvm
  ├─e813621a--fda7--4a74--a736--a477914c2c1e-metadata 253:23 0 128M 0 lvm
  ├─e813621a--fda7--4a74--a736--a477914c2c1e-inbox 253:24 0 128M 0 lvm
  ├─e813621a--fda7--4a74--a736--a477914c2c1e-outbox 253:25 0 128M 0 lvm
  ├─e813621a--fda7--4a74--a736--a477914c2c1e-xleases 253:26 0 1G 0 lvm
  └─e813621a--fda7--4a74--a736--a477914c2c1e-master 253:27 0 1G 0 lvm
sde 8:64 0 50G 0 disk
└─3600a098038304479363f4c4870455031 253:5 0 50G 0 mpath
sdf 8:80 0 50G 0 disk
└─3600a098038304479363f4c4870455032 253:2 0 50G 0 mpath
sdg 8:96 0 50G 0 disk
└─3600a098038304479363f4c4870455033 253:6 0 50G 0 mpath
sr0

Versions:
vdsm-4.40.50.8-1.el8ev.x86_64
ovirt-engine-4.4.5.9-0.1.el8ev.noarch
This bugzilla is included in oVirt 4.4.5 release, published on March 18th 2021. Since the problem described in this bug report should be resolved in oVirt 4.4.5 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.
*** Bug 1948952 has been marked as a duplicate of this bug. ***