Bug 1116585 - [vdsm] Create disks on gluster storage domain fails with OSError although the domain is reported as active
Summary: [vdsm] Create disks on gluster storage domain fails with OSError although the domain is reported as active
Alias: None
Product: oVirt
Classification: Retired
Component: vdsm
Version: 3.5
Hardware: x86_64
OS: Unspecified
Target Milestone: ---
: 3.5.0
Assignee: Federico Simoncelli
QA Contact: Elad
Whiteboard: storage
: 1124397 (view as bug list)
Depends On:
Blocks: 1045842 1073943 1105513
TreeView+ depends on / blocked
Reported: 2014-07-06 13:43 UTC by Elad
Modified: 2016-02-10 17:30 UTC (History)
10 users

Fixed In Version: v4.16.2
Doc Type: Bug Fix
Doc Text:
Clone Of:
Last Closed: 2014-10-17 12:23:02 UTC
oVirt Team: Storage

Attachments
logs from engine and vdsm (1.75 MB, application/x-gzip)
2014-07-06 13:43 UTC, Elad
no flags Details

System ID Private Priority Status Summary Last Updated
oVirt gerrit 30934 0 master MERGED fileSD: include gluster in getMountsList 2020-09-10 12:51:08 UTC
oVirt gerrit 31158 0 ovirt-3.5 MERGED fileSD: include gluster in getMountsList 2020-09-10 12:51:08 UTC

Description Elad 2014-07-06 13:43:05 UTC
Created attachment 914954 [details]
logs from engine and vdsm

Description of problem:
I tried to create a disk on a gluster storage domain, and the operation failed with the following error in vdsm.log:

43487e4b-817c-46f0-a062-4317dacb370e::ERROR::2014-07-06 15:43:59,398::task::866::Storage.TaskManager.Task::(_setError) Task=`43487e4b-817c-46f0-a062-4317dacb370e`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 873, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/storage/task.py", line 334, in run
    return self.cmd(*self.argslist, **self.argsdict)
  File "/usr/share/vdsm/storage/securable.py", line 77, in wrapper
    return method(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 1796, in createVolume
    desc=desc, srcImgUUID=srcImgUUID, srcVolUUID=srcVolUUID)
  File "/usr/share/vdsm/storage/sd.py", line 429, in createVolume
    preallocate, diskType, volUUID, desc, srcImgUUID, srcVolUUID)
  File "/usr/share/vdsm/storage/volume.py", line 375, in create
    imgPath = image.Image(repoPath).create(sdUUID, imgUUID)
  File "/usr/share/vdsm/storage/image.py", line 126, in create
OSError: [Errno 2] No such file or directory: '/rhev/data-center/b6869cda-8e14-410f-b323-4fe17b521a9f/1b1ae61b-bd6a-40c7-97ea-818a99668e9c/images/8a8c0c60-ddba-4b4e-944e-ce489320a6dc'

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Create a gluster storage domain, attach and activate it in a shared DC 
2. Create a disk on the gluster domain

Actual results:
The "Add disk" operation fails with the error above in vdsm.log.

The symlink for the mount path of the gluster storage domain doesn't exist, even though the linking operation for that domain was reported as successful in vdsm.log during storage domain activation.

Linking operation for domain 1b1ae61b-bd6a-40c7-97ea-818a99668e9c:

Thread-88::INFO::2014-07-06 15:55:29,042::sp::1120::Storage.StoragePool::(_linkStorageDomain) Linking /rhev/data-center/mnt/glusterSD/orion.qa.lab.tlv.redhat.com:_elad-ovirt/1b1ae61b-bd6a-40c7-97ea-818a99668e9c to
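For context, the linking step logged above creates one symlink per domain under the pool directory, pointing at the domain's mount path. A minimal sketch of that operation (hypothetical helper and throwaway paths, not vdsm's actual code):

```python
import os
import tempfile

def link_storage_domain(domain_path, pool_dir, sd_uuid):
    """Create (or refresh) the per-pool symlink for a storage domain."""
    link = os.path.join(pool_dir, sd_uuid)
    if os.path.islink(link):
        os.unlink(link)  # drop a stale link before recreating it
    os.symlink(domain_path, link)
    return link

# Usage with throwaway directories standing in for the real mount path:
pool_dir = tempfile.mkdtemp()
target = tempfile.mkdtemp()
link = link_storage_domain(target, pool_dir,
                           "1b1ae61b-bd6a-40c7-97ea-818a99668e9c")
```

If this link is missing, any path built as /rhev/data-center/&lt;pool&gt;/&lt;sdUUID&gt;/images/... fails with ENOENT, which matches the OSError in the traceback.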

Link doesn't exist under /rhev/data-center/pool/ :

[root@green-vdsb b6869cda-8e14-410f-b323-4fe17b521a9f]# pwd
[root@green-vdsb b6869cda-8e14-410f-b323-4fe17b521a9f]# ll
total 20
lrwxrwxrwx. 1 vdsm kvm 105 Jul  6 15:57 146f410f-f906-4d4a-bb29-75f2341a3612 -> /rhev/data-center/mnt/lion.qa.lab.tlv.redhat.com:_export_rhevm-3-iso/146f410f-f906-4d4a-bb29-75f2341a3612
lrwxrwxrwx. 1 vdsm kvm 100 Jul  6 15:57 3370be57-fe1b-4bed-bd18-afb58a6d40af -> /rhev/data-center/mnt/lion.qa.lab.tlv.redhat.com:_export_elad_1/3370be57-fe1b-4bed-bd18-afb58a6d40af
lrwxrwxrwx. 1 vdsm kvm  66 Jul  6 15:57 81647cc2-d186-4f99-b480-9aa31861675b -> /rhev/data-center/mnt/blockSD/81647cc2-d186-4f99-b480-9aa31861675b
lrwxrwxrwx. 1 vdsm kvm 114 Jul  6 15:57 f5966475-1b14-40c0-96cc-c303590e1ed8 -> /rhev/data-center/mnt/vserver-spider.eng.lab.tlv.redhat.com:_vol__pnfs_acanan/f5966475-1b14-40c0-96cc-c303590e1ed8
lrwxrwxrwx. 1 vdsm kvm 100 Jul  6 15:57 mastersd -> /rhev/data-center/mnt/lion.qa.lab.tlv.redhat.com:_export_elad_1/3370be57-fe1b-4bed-bd18-afb58a6d40af
[root@green-vdsb b6869cda-8e14-410f-b323-4fe17b521a9f]

I tried to deactivate and activate the domain several times; it didn't help.

Expected results:
1) The link should exist for the gluster domain.
2) If the link doesn't exist, the user cannot perform basic operations on the storage domain, so the domain must be reported as inactive.

Additional info: logs from engine and vdsm
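Based on the title of the merged patch ("fileSD: include gluster in getMountsList"), the likely root cause is that the mount-scanning helper only matched direct children of /rhev/data-center/mnt, while gluster domains mount one level deeper, under /rhev/data-center/mnt/glusterSD/, and were therefore filtered out, so the pool symlink was never created. A hedged sketch of that pattern (the function and filtering logic are illustrative, not vdsm's real implementation; only the directory layout follows the logs above):

```python
import os

MNT_DIR = "/rhev/data-center/mnt"
GLUSTER_MNT_DIR = os.path.join(MNT_DIR, "glusterSD")

def get_mounts_list(mounts):
    """Return mount points that can host file storage domains.

    Before the fix, only direct children of MNT_DIR matched, which
    excluded gluster mounts living under MNT_DIR/glusterSD/.
    """
    return [m for m in mounts
            if os.path.dirname(m) in (MNT_DIR, GLUSTER_MNT_DIR)]

mounts = [
    "/rhev/data-center/mnt/lion.qa.lab.tlv.redhat.com:_export_elad_1",
    "/rhev/data-center/mnt/glusterSD/orion.qa.lab.tlv.redhat.com:_elad-ovirt",
]
found = get_mounts_list(mounts)
```

With the glusterSD subdirectory included in the scan, both mounts are returned and the gluster domain gets its pool symlink like any other file domain.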

Comment 1 Federico Simoncelli 2014-08-01 16:26:29 UTC
*** Bug 1124397 has been marked as a duplicate of this bug. ***

Comment 2 Elad 2014-08-27 10:18:30 UTC
Performed basic operations on a gluster domain:
activate, deactivate, attach, detach, disk creation and running a VM with a disk from the gluster domain attached to it.
All went fine.

Verified using ovirt-3.5-RC1.1

Comment 3 Sandro Bonazzola 2014-10-17 12:23:02 UTC
oVirt 3.5 has been released and should include the fix for this issue.
