Created attachment 1536914 [details]
Raw device showing as already in use

Description of problem:
RHV-M wrongly reports "device already in use" for an unused device when creating a brick through the RHV-M portal.

Version-Release number of selected component (if applicable):
vdsm-gluster-4.20.35-1.el7ev.x86_64
vdsm-4.20.35-1.el7ev.x86_64
glusterfs-server-3.8.4-54.15.el7rhgs.x86_64

How reproducible:
Every time. "Device already in use" is shown for all types of devices: raw disk, LVM, or mount point.
From an email communication by Kaustav:
~~
oVirt issues a command to vdsm to get the storageDevicesList, in which each disk carries a 'canCreateBrick' attribute. vdsm calls into python-blivet to get the device list and filters out the devices that cannot be used to create a brick. I ran the code in a Python REPL on the affected host to find which condition filters out our device: device.kids > 0 is the check that makes nvme1n ineligible to create a brick.

As a workaround you can change the code in the vdsm installed on the host and restart supervdsmd, although that is only acceptable in a test environment, since the _canCreateBrick function is used in a lot of places.

def _canCreateBrick(device):
    if not device or device.kids > 0 or device.format.type or \
            hasattr(device.format, 'mountpoint') or \
            device.type in ['cdrom', 'lvmvg', 'lvmthinpool', 'lvmlv', 'lvmthinlv']:
        return False
    return True
~~
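For reference, a minimal sketch of how the same checks can be exercised from a Python REPL on the host to see why blivet considers a given device ineligible. This assumes the blivet version shipped with RHEL 7 (where devices expose a `kids` count, as the vdsm code above relies on); it is a debugging aid, not part of vdsm:

import blivet

b = blivet.Blivet()
b.reset()  # scan the host's storage devices (needs root)

for device in b.devices:
    reasons = []
    if device.kids > 0:
        reasons.append('has child devices (kids=%d)' % device.kids)
    if device.format.type:
        reasons.append('already formatted (%s)' % device.format.type)
    if hasattr(device.format, 'mountpoint'):
        reasons.append('format exposes a mountpoint')
    if device.type in ('cdrom', 'lvmvg', 'lvmthinpool', 'lvmlv', 'lvmthinlv'):
        reasons.append('excluded type (%s)' % device.type)
    print(device.name, 'eligible' if not reasons else '; '.join(reasons))

Running this against the affected device should show which of the _canCreateBrick conditions trips, which is essentially what was done to identify device.kids > 0 as the culprit.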
Kaustav, can you take a look at this? If needed, please check with the blivet team.
4.3.2 was released a while ago; re-targeting to 4.3.3 for re-evaluation.
Which devices in those images are showing incorrect information? It might be useful to have the blivet logs that go with the failures as well.
Kaustav, can you please provide the requested info?
Would this bug be fixed by BZ#1670722?
Hi Marina,

In short, BZ#1670722 was about device names changing across reboots while gluster_ansible referred to devices by name, which is what caused that bug; the fix changed gluster_ansible to use the UUID, which stays the same regardless of the disk name. Since this bug has not been root-caused to that level, it cannot be said that it fixes this issue. The logs mentioned above would help with debugging. The ovirt-engine code looks good; in the meantime I will try to dig deeper, debug, and check.
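To illustrate the idea behind that fix, a small stdlib-only sketch (the helper name is hypothetical, not part of gluster_ansible) showing how a device can be addressed by filesystem UUID via the stable /dev/disk/by-uuid symlinks instead of its kernel name, which may change per boot:

import os

def device_for_uuid(uuid):
    """Resolve /dev/disk/by-uuid/<uuid> to the current kernel device node."""
    link = os.path.join('/dev/disk/by-uuid', uuid)
    return os.path.realpath(link)  # e.g. '/dev/sdb1'; the name may differ on the next boot

if __name__ == '__main__':
    for uuid in os.listdir('/dev/disk/by-uuid'):
        print(uuid, '->', device_for_uuid(uuid))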
Thank you.
This bug has been inactive for a while and the logs provided are insufficient, so closing it.