Bug 1257506
Summary: [storage] [RFE] Add the ability to use mount flags when creating POSIX compliant FS ISO domains

Product: [oVirt] ovirt-engine
Component: General
Version: ---
Hardware: Unspecified
OS: Unspecified
Status: CLOSED DEFERRED
Severity: low
Priority: unspecified
Reporter: Jiri Belka <jbelka>
Assignee: Idan Shaby <ishaby>
QA Contact: Raz Tamir <ratamir>
CC: ahino, amureini, bazulay, bhughes, bugs, cinglese, ishaby, kilduff, lsurette, srevivo, tnisan, ycui, ykaul, ylavi
Target Milestone: ---
Target Release: ---
Keywords: FutureFeature
Flags: amureini: ovirt-future?; rule-engine: planning_ack?; rule-engine: devel_ack?; rule-engine: testing_ack?
Doc Type: Enhancement
Story Points: ---
Type: Bug
Regression: ---
oVirt Team: Storage
Last Closed: 2017-11-16 13:40:33 UTC
This is an automated message. oVirt 3.6.0 RC3 has been released and GA is targeted for next week, Nov 4th 2015. Please review this bug and, if it is not a blocker, postpone it to a later release. All bugs not postponed at GA release will be automatically re-targeted to:
- 3.6.1 if severity >= high
- 4.0 if severity < high

I would also bump this issue; a bind mount could be very useful. Think of a path backed by whatever underlying FS technology, with a bind mount being the simple way to integrate any FS into oVirt. It actually does partially work, and I think the reason it fails for me is the check that tests whether the mount succeeded.

vdsm.log:

  File "/usr/share/vdsm/storage/mount.py", line 280, in getRecord
    (self.fs_spec, self.fs_file))
OSError: [Errno 2] Mount of `/data/test` at `/rhev/data-center/mnt/_data_test` does not exist

However, oVirt has in fact mounted the bind mount; output from `mount | grep test`:

/dev/sda2 on /rhev/data-center/mnt/_data_test type xfs (rw,noatime,attr2,inode64,noquota)

This is a CentOS 7 quirk: the original folder is not displayed as the mount source. On CentOS 6, without this quirk, the mount would look like:

/data/test on /rhev/data-center/mnt/_data_test type xfs (rw,noatime,attr2,inode64,noquota)

I think this is why mount.py at line 280 gets confused.

Idan - aren't you working on a similar issue?

Smells like it. I will check it against my patches when they are ready.

(In reply to Idan Shaby from comment #4)
> Smells like it.
> I will check it against my patches when they are ready.

Is there any update on this ticket? I am having the same issue and am very interested in a fix for this as well.

Thanks

(In reply to Bryan from comment #5)
> (In reply to Idan Shaby from comment #4)
> > Smells like it.
> > I will check it against my patches when they are ready.
>
> Is there any update on this ticket? I am having the same issue and am very
> interested in a fix for this as well.
>
> Thanks

The ticket is up to date.
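For illustration, here is a minimal sketch (not the actual VDSM code) of why a lookup keyed on both the mount source and the mount target fails for a bind mount on CentOS 7. The kernel's /proc/mounts there reports the underlying block device (`/dev/sda2`) as the source rather than the bind-mounted directory (`/data/test`), so a strict `(fs_spec, fs_file)` match raises, while matching the target alone would succeed. The `parse_mounts` and `find_record` helpers and the sample records are hypothetical, chosen to mirror the error seen in the traceback above.

```python
# Hypothetical sketch of a mount-record lookup similar in spirit to
# vdsm/storage/mount.py getRecord(). Not VDSM code.
from collections import namedtuple

MountRecord = namedtuple("MountRecord", "fs_spec fs_file fs_vfstype")


def parse_mounts(text):
    """Parse /proc/mounts-style lines into MountRecord tuples."""
    records = []
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 3:
            records.append(MountRecord(fields[0], fields[1], fields[2]))
    return records


def find_record(records, fs_spec, fs_file):
    """Strict lookup: both the source and the target must match."""
    for rec in records:
        if (rec.fs_spec, rec.fs_file) == (fs_spec, fs_file):
            return rec
    raise OSError(2, "Mount of `%s` at `%s` does not exist"
                     % (fs_spec, fs_file))


# On CentOS 7 the kernel lists the underlying device as the source of a
# bind mount, so the strict (spec, file) lookup fails even though the
# target is mounted:
el7 = "/dev/sda2 /rhev/data-center/mnt/_data_test xfs rw,noatime 0 0"
try:
    find_record(parse_mounts(el7),
                "/data/test", "/rhev/data-center/mnt/_data_test")
except OSError as e:
    print(e)  # the mount exists, but the recorded source differs

# On CentOS 6 the bind source itself was listed, and the lookup works:
el6 = "/data/test /rhev/data-center/mnt/_data_test xfs rw,noatime 0 0"
rec = find_record(parse_mounts(el6),
                  "/data/test", "/rhev/data-center/mnt/_data_test")
print(rec.fs_file)
```

A lookup that matched on the mount target alone (or resolved the bind source before comparing) would accept both forms, which is roughly what the comments above suggest the fix needs to do.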
Severity is low, it's in NEW state, tentatively planned for 4.0, but no devel-ack or planning-ack has been given yet.

Hi Jiri,

You tried to use a mount flag ("--bind") in the mount options field. Currently we don't support adding mount flags, but we definitely can.

Yaniv, can you please target this RFE?

(In reply to Idan Shaby from comment #7)
> Hi Jiri,
>
> You tried to use a mount flag ("--bind") in the mount options field.
> Currently we don't support adding mount flags, but we definitely can.
>
> Yaniv, can you please target this RFE?

The target is good for now; I don't see this happening for oVirt 4.0. If anyone would like to submit patches to allow this, we can help with reviews. We can revisit this in 4.1.

We are looking into deprecating the ISO domain (and instead allowing ISOs to be uploaded to data domains). Therefore, I don't see this getting implemented.
Created attachment 1067595 [details] engine.log, vdsm.log

Description of problem:
Why can't I add a local storage dir as an ISO domain? The Admin Portal allows me to add POSIXFS... (One could think about this as "huh, I refused to have a local ISO domain during engine-setup but I changed my mind now...")

- engine.log
...
2015-08-27 10:07:52,701 INFO [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand] (ajp-/127.0.0.1:8702-3) [792924bd] Lock Acquired to object 'EngineLock:{exclusiveLocks='[/iso=<STORAGE_CONNECTION, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2015-08-27 10:07:52,721 INFO [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand] (ajp-/127.0.0.1:8702-3) [792924bd] Running command: AddStorageServerConnectionCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN
2015-08-27 10:07:52,723 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (ajp-/127.0.0.1:8702-3) [792924bd] START, ConnectStorageServerVDSCommand(HostName = dell-r210ii-04.rhev.lab.eng.brq.redhat.com, StorageServerConnectionManagementVDSParameters:{runAsync='true', hostId='57ac2ae0-1a10-472e-97f0-a8907bced766', storagePoolId='00000000-0000-0000-0000-000000000000', storageType='POSIXFS', connectionList='[StorageServerConnections:{id='null', connection='/iso', iqn='null', vfsType='xfs', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 7692e43d
2015-08-27 10:07:52,770 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (ajp-/127.0.0.1:8702-3) [792924bd] FINISH, ConnectStorageServerVDSCommand, return: {00000000-0000-0000-0000-000000000000=477}, log id: 7692e43d
2015-08-27 10:07:52,773 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp-/127.0.0.1:8702-3) [792924bd] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: The error message for connection /iso returned by VDSM was: Problem while trying to mount target
2015-08-27 10:07:52,773 ERROR [org.ovirt.engine.core.bll.storage.BaseFsStorageHelper] (ajp-/127.0.0.1:8702-3) [792924bd] The connection with details '/iso' failed because of error code '477' and error message is: problem while trying to mount target

- vdsm.log
...
Thread-323340::DEBUG::2015-08-27 10:08:28,560::task::595::Storage.TaskManager.Task::(_updateState) Task=`31ebf34c-479d-46e6-9d88-9a29ec9028d0`::moving from state init -> state preparing
Thread-323340::INFO::2015-08-27 10:08:28,560::logUtils::48::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=6, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'mnt_options': u'bind', u'id': u'00000000-0000-0000-0000-000000000000', u'connection': u'/iso', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'xfs', u'password': '********', u'port': u''}], options=None)
Thread-323340::DEBUG::2015-08-27 10:08:28,562::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt/_iso mode: None
Thread-323340::WARNING::2015-08-27 10:08:28,562::fileUtils::152::Storage.fileUtils::(createdir) Dir /rhev/data-center/mnt/_iso already exists
Thread-323340::DEBUG::2015-08-27 10:08:28,562::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/sudo -n /usr/bin/mount -t xfs -o bind /iso /rhev/data-center/mnt/_iso (cwd None)
Thread-323340::ERROR::2015-08-27 10:08:28,577::hsm::2454::Storage.HSM::(connectStorageServer) Could not connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2451, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 236, in connect
    self.getMountObj().getRecord().fs_file)
  File "/usr/share/vdsm/storage/mount.py", line 280, in getRecord
    (self.fs_spec, self.fs_file))
OSError: [Errno 2] Mount of `/iso` at `/rhev/data-center/mnt/_iso` does not exist
Thread-323340::DEBUG::2015-08-27 10:08:28,577::hsm::2478::Storage.HSM::(connectStorageServer) knownSDs: {0c78b4d6-ba00-4d3e-9f9f-65c7d5899d71: storage.nfsSD.findDomain, 2834fba3-6200-489d-9868-7b8c162749ca: storage.nfsSD.findDomain}

Version-Release number of selected component (if applicable):
vdsm-4.17.2-1.el7ev.noarch

How reproducible:
100%

Steps to Reproduce:
1. (host) install -d -o vdsm -g kvm -m 755 /iso
2. (engine) add a new ISO domain as POSIXFS - /iso as path, mount options 'bind' ('xfs' as type)
3.

Actual results:
failure

Expected results:
should work (even if this is dumb, maybe add a check/warning that this is probably not a clustered FS shared between all/future hosts in the DC?)

Additional info: