Bug 1024686
Summary: [HC] - self hosted engine | vdsm doesn't list existing domain on connected storage if the storage is glusterfs
Product: Red Hat Enterprise Virtualization Manager
Component: vdsm
Version: 3.3.0
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: medium
Reporter: Leonid Natapov <lnatapov>
Assignee: Ala Hino <ahino>
QA Contact: Elad <ebenahar>
Docs Contact:
CC: acanan, ahino, amureini, aneil2, bazulay, fsimonce, josh, lpeer, sbonazzo, scohen, tnisan, yeylon, ylavi
Target Milestone: ovirt-3.6.0-rc
Target Release: 3.6.0
Keywords: DevelBlocker
Flags: amureini: Triaged+
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1208458, 1213878, 1227466
Bug Blocks: 1173669
Attachments: vdsm log (attachment 817348)
Description
Leonid Natapov
2013-10-30 09:11:20 UTC
Created attachment 817348 [details]
vdsm log
Here is the HE part of the log:
------------------------------------------------------
> 2013-10-29 17:53:02 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage plugin.execute:446 execute-output: ('/bin/mount', '-tglusterfs', '10.35.161.249:/hosted_engine', '/tmp/tmpDf8peI')
> stderr:
>
> 2013-10-29 17:53:02 DEBUG otopi.ovirt_hosted_engine_setup.domains domains.check_valid_path:76 validate '/tmp/tmpDf8peI' as a valid mount point
> 2013-10-29 17:53:02 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage plugin.executeRaw:366 execute: ('/usr/bin/sudo', '-u', 'vdsm', '-g', 'kvm', 'test', '-r', '/tmp/tmpDf8peI', '-a', '-w', '/tmp/tmpDf8peI', '-a', '-x', '/tmp/tmpDf8peI'), executable='None', cwd='None', env=None
> 2013-10-29 17:53:02 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage plugin.executeRaw:383 execute-result: ('/usr/bin/sudo', '-u', 'vdsm', '-g', 'kvm', 'test', '-r', '/tmp/tmpDf8peI', '-a', '-w', '/tmp/tmpDf8peI', '-a', '-x', '/tmp/tmpDf8peI'), rc=0
> 2013-10-29 17:53:02 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage plugin.execute:441 execute-output: ('/usr/bin/sudo', '-u', 'vdsm', '-g', 'kvm', 'test', '-r', '/tmp/tmpDf8peI', '-a', '-w', '/tmp/tmpDf8peI', '-a', '-x', '/tmp/tmpDf8peI') stdout:
>
> 2013-10-29 17:53:02 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage plugin.execute:446 execute-output: ('/usr/bin/sudo', '-u', 'vdsm', '-g', 'kvm', 'test', '-r', '/tmp/tmpDf8peI', '-a', '-w', '/tmp/tmpDf8peI', '-a', '-x', '/tmp/tmpDf8peI') stderr:
>
> 2013-10-29 17:53:02 DEBUG otopi.ovirt_hosted_engine_setup.domains domains.check_base_writable:90 Attempting to write temp file to /tmp/tmpDf8peI
> 2013-10-29 17:53:02 DEBUG otopi.ovirt_hosted_engine_setup.domains domains.check_available_space:108 Checking available space on /tmp/tmpDf8peI
> 2013-10-29 17:53:02 DEBUG otopi.ovirt_hosted_engine_setup.domains domains.check_available_space:115 Available space on /tmp/tmpDf8peI is 178729Mb
> 2013-10-29 17:53:02 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage plugin.executeRaw:366 execute: ('/bin/umount', '/tmp/tmpDf8peI'), executable='None', cwd='None', env=None
> 2013-10-29 17:53:02 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage plugin.executeRaw:383 execute-result: ('/bin/umount', '/tmp/tmpDf8peI'), rc=0
> 2013-10-29 17:53:02 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage plugin.execute:441 execute-output: ('/bin/umount', '/tmp/tmpDf8peI') stdout:
>
> 2013-10-29 17:53:02 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage plugin.execute:446 execute-output: ('/bin/umount', '/tmp/tmpDf8peI') stderr:
>
> 2013-10-29 17:53:02 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._storageServerConnection:400 connectStorageServer
> 2013-10-29 17:53:03 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._getStorageDomainsList:365 getStorageDomainsList
> 2013-10-29 17:53:06 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._getStorageDomainsList:368 {'status': {'message': 'OK', 'code': 0}, 'domlist': []}

Bottom line here is that getStorageDomainsList is not reporting gluster storage domains.
I haven't investigated this much but given (sdc.py):

    def getUUIDs(self):
        import blockSD
        import fileSD

        uuids = []
        for mod in (blockSD, fileSD):
            uuids.extend(mod.getStorageDomainsList())

        return uuids

I think that fileSD.getStorageDomainsList() can't find any gluster storage domain as they're in a different path (/rhev/data-center/mnt/glusterSD vs /rhev/data-center/mnt).

(In reply to Federico Simoncelli from comment #4)
> Bottom line here is that getStorageDomainsList is not reporting gluster
> storage domains.
>
> I haven't investigated this much but given (sdc.py):
>
>     def getUUIDs(self):
>         import blockSD
>         import fileSD
>
>         uuids = []
>         for mod in (blockSD, fileSD):
>             uuids.extend(mod.getStorageDomainsList())
>
>         return uuids
>
> I think that fileSD.getStorageDomainsList() can't find any gluster storage
> domain as they're in a different path (/rhev/data-center/mnt/glusterSD vs
> /rhev/data-center/mnt).

Just curious... Looking further up in sdc.py, I see imports for glusterSD and nfsSD. Are these omitted from getUUIDs for some reason? Would a fix for this bug be something like the following?

    -        for mod in (blockSD, fileSD):
    +        for mod in (blockSD, glusterSD, fileSD):

Also, is this the main blocker behind the hosted engine installer's "storage.py" having glusterfs commented out as an option? (I'm using 3.4.0rc2.) I have a GlusterFS 3.5beta3 environment set up, if you'd like me to test anything.

Thanks,
Joshua

P.S. On the other hand, I just looked; that function isn't defined in glusterSD.py. Oh well. Please let me know if I can help.

    [root@core-n1 storage]# grep getStorageDomainsList *
    blockSD.py:def getStorageDomainsList():
    Binary file blockSD.pyc matches
    Binary file blockSD.pyo matches
    fileSD.py:def getStorageDomainsList():
    Binary file fileSD.pyc matches
    Binary file fileSD.pyo matches
    hsm.py:    uuids = tuple(blockSD.getStorageDomainsList())
    hsm.py:    def getStorageDomainsList(
    Binary file hsm.pyc matches
    Binary file hsm.pyo matches
    grep: imageRepository: Is a directory
    sdc.py:    uuids.extend(mod.getStorageDomainsList())
    Binary file sdc.pyc matches
    Binary file sdc.pyo matches
    [root@core-n1 storage]#

Any progress on this?

(In reply to Sandro Bonazzola from comment #9)
> Any progress on this?

Not yet, unfortunately.

Please raise the priority of this bug, since it's causing issues in Hosted Engine support for GlusterFS.

Can you explain why it moved to "on_qa"? Do we have a fix? If so, please add the patch to the bug. Does it work now, or did it simply not reproduce?

(In reply to Aharon Canan from comment #12)
> Can you explain why it moved to "on_qa"?
> Do we have a fix? If so, please add the patch to the bug.
> Does it work now, or did it simply not reproduce?

It is working for me now using a nightly build. Maybe Sandro can provide more info regarding the fix/patch.

No fix or patch was provided by me. If it works now, it has been fixed in vdsm.

Gluster domains are now listed by getStorageDomainsList. Also, as bug 1083025 (HE on Gluster) is now verified, we can move this bug to VERIFIED.
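For context, a minimal sketch of what a gluster-aware getStorageDomainsList() could look like, assuming gluster domains are mounted under /rhev/data-center/mnt/glusterSD/<server:_volume>/ with one directory per domain UUID, as described in the comments above. This is not the patch that fixed the bug; the mount-root constant, the UUID pattern, and the function body are illustrative assumptions only.

    # Illustrative sketch only, not the actual vdsm fix. Assumes gluster
    # storage domains are mounted under /rhev/data-center/mnt/glusterSD/
    # and that each domain is a directory named by its UUID.
    import os
    import re

    GLUSTER_MOUNT_ROOT = '/rhev/data-center/mnt/glusterSD'  # assumed path

    _UUID_RE = re.compile(
        r'^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$')

    def getStorageDomainsList():
        """Return the UUIDs of gluster storage domains found on this host."""
        uuids = []
        if not os.path.isdir(GLUSTER_MOUNT_ROOT):
            return uuids
        # One subdirectory per gluster mount (e.g. "10.35.160.6:_elad1")
        for mount in os.listdir(GLUSTER_MOUNT_ROOT):
            mountPath = os.path.join(GLUSTER_MOUNT_ROOT, mount)
            if not os.path.isdir(mountPath):
                continue
            # Domain directories are named by their UUID
            for entry in os.listdir(mountPath):
                path = os.path.join(mountPath, entry)
                if _UUID_RE.match(entry) and os.path.isdir(path):
                    uuids.append(entry)
        return uuids

With a function like this available in glusterSD.py, the change suggested above (adding glusterSD to the tuple iterated in sdc.getUUIDs()) would let the cache report gluster domains alongside block and file domains.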
Gluster domain (6f2b9433-5b58-420a-978c-7f5d7fa63b8b) is listed:

    [root@green-vdsa 10.35.160.6:_elad1]# pwd
    /rhev/data-center/mnt/glusterSD/10.35.160.6:_elad1
    [root@green-vdsa 10.35.160.6:_elad1]# vdsClient -s 0 getStorageDomainsList
    b8b386af-7453-465d-a3ce-4b747cae8032
    992fa11e-d046-4911-a898-13a5db4f0457
    6f2b9433-5b58-420a-978c-7f5d7fa63b8b

Verified using vdsm-4.17.10-5.el7ev.noarch.

RHEV 3.6.0 has been released, setting status to CLOSED CURRENTRELEASE.