Bug 1679458 - [GSS] RHV-M wrongly shows device already in use.
Summary: [GSS] RHV-M wrongly shows device already in use.
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: vdsm
Classification: oVirt
Component: Gluster
Version: ---
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Prajith
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-02-21 07:54 UTC by Abhishek Kumar
Modified: 2020-12-21 12:44 UTC
CC List: 11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-09-02 07:54:28 UTC
oVirt Team: Gluster
Embargoed:
sbonazzo: ovirt-4.4?
pm-rhel: devel_ack?



Description Abhishek Kumar 2019-02-21 07:54:18 UTC
Created attachment 1536914 [details]
Raw device showing as already in use

Description of problem:

RHV-M wrongly shows "device already in use" for an unused device during brick creation through the RHV-M portal.


Version-Release number of selected component (if applicable):

vdsm-gluster-4.20.35-1.el7ev.x86_64

vdsm-4.20.35-1.el7ev.x86_64

glusterfs-server-3.8.4-54.15.el7rhgs.x86_64

How reproducible:

Every time

The "device already in use" error appears for all types of devices: raw disk, LVM, or mount point.

Comment 3 Abhishek Kumar 2019-02-21 07:57:21 UTC
From an email communication by Kaustav:

~~~
oVirt issues a command to vdsm to get the storageDevicesList, in which each disk has a 'canCreateBrick' attribute; vdsm calls into python-blivet to build the list and to filter out the devices that cannot be used to create a brick.
I ran the code in a Python REPL on the said host to find which condition filters out our required device.
device.kids > 0 is the condition that makes nvme1n ineligible to createBrick.
As a workaround you can change the code in the vdsm installed on the host and restart supervdsmd, although this is only fine in a testing environment, since the _canCreateBrick function is used in a lot of places.

def _canCreateBrick(device):
    # A device can back a new brick only if it exists, has no child
    # devices (device.kids counts partitions/holders), carries no
    # existing format (filesystem, LVM PV, ...), is not mountable,
    # and is not a CD-ROM or an LVM VG/LV/thin pool/thin LV.
    if not device or device.kids > 0 or device.format.type or \
       hasattr(device.format, 'mountpoint') or \
       device.type in ['cdrom', 'lvmvg', 'lvmthinpool', 'lvmlv', 'lvmthinlv']:
        return False
    return True

~~~
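
(For reference, a minimal REPL sketch of the inspection described in the email above. It assumes the el7 python-blivet that this vdsm build runs against; 'sdb' is a placeholder for the device being wrongly rejected.)

import blivet

b = blivet.Blivet()
b.reset()  # scan the host and populate the device tree

# el7 blivet spells the lookup getDeviceByName; blivet 3.x renamed it
# get_device_by_name. 'sdb' is a placeholder device name.
dev = b.devicetree.getDeviceByName('sdb')

# The same attributes _canCreateBrick tests:
print(dev.kids)          # > 0 means the device has child devices
print(dev.format.type)   # non-None means an existing format (fs, LVM PV, ...)
print(dev.type)          # e.g. 'disk', 'cdrom', 'lvmlv'
print(hasattr(dev.format, 'mountpoint'))  # True for mountable formats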

Comment 4 Sahina Bose 2019-03-21 12:03:16 UTC
Kaustav, can you take a look at this? If needed, please check with the blivet team.

Comment 5 Sandro Bonazzola 2019-03-22 10:47:17 UTC
4.3.2 was released a while ago; re-targeting to 4.3.3 for re-evaluation.

Comment 6 David Lehman 2019-04-02 13:35:26 UTC
Which devices in those images are showing incorrect information? It might be useful to have the blivet logs that go with the failures as well.

Comment 8 Sahina Bose 2019-04-15 10:29:32 UTC
Kaustav, can you please provide the requested info?

Comment 10 Marina Kalinin 2019-11-26 20:10:51 UTC
Would this bug be fixed by BZ#1670722?

Comment 11 Prajith 2020-01-06 14:23:37 UTC
Hi Marina,

BZ#1670722, in short, is about device names changing across reboots while gluster_ansible referred to devices by name, which caused that bug; the fix changed gluster_ansible to use the UUID instead, which stays the same regardless of the disk name.
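
(To illustrate the idea only, not the actual gluster_ansible change: a minimal sketch that maps a kernel device name to its persistent /dev/disk/by-uuid symlink; stable_id_for is a hypothetical helper name.)

import os

def stable_id_for(device_path):
    # Resolve a kernel device name (e.g. /dev/sdb1) to its persistent
    # /dev/disk/by-uuid symlink, which survives reboots and renames.
    by_uuid = '/dev/disk/by-uuid'
    target = os.path.realpath(device_path)
    for link in os.listdir(by_uuid):
        full = os.path.join(by_uuid, link)
        if os.path.realpath(full) == target:
            return full
    return None  # device carries no filesystem UUID yet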

Since this bug has not been root-caused to that level, it cannot be said that that fix addresses this issue. The logs requested above would be helpful for debugging.

The ovirt-engine code looks good; in the meantime I will dive deep, debug, and check.

Comment 12 Marina Kalinin 2020-01-23 19:52:53 UTC
Thank you.

Comment 13 Prajith 2020-09-02 07:54:28 UTC
This bug has been inactive for a while and the requested logs were not provided, so closing this bug.

