Description of problem: While adding a glusterfs volume from a Red Hat Storage (RHS) server as a Storage Domain (POSIX compliant FS) in RHEV-M, the following glusterfs mount options were not honoured when passed either individually or together, although the Storage Domain was added successfully:

acl
entry-timeout=<SECONDS>
attribute-timeout=<SECONDS>
gid-timeout=<SECONDS>
background-qlen=<N>
direct-io-mode[=BOOL]

Taking the option 'attribute-timeout' as an example, the vdsm logs show that the option appears on the 'validateStorageServerConnection' line, but does not appear on the line where 'mount' is called. The same holds for all the other options listed above.

-----------------------------------------------------------
Thread-158323::INFO::2013-03-25 13:54:54,645::logUtils::37::dispatcher::(wrapper) Run and protect: validateStorageServerConnection(domType=6, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '', 'connection': 'rhs-client45.lab.eng.blr.redhat.com:/RHS_VMadd_store', 'mnt_options': 'attribute-timeout=2', 'portal': '', 'user': '', 'iqn': '', 'vfs_type': 'glusterfs', 'password': '******', 'id': '00000000-0000-0000-0000-000000000000'}], options=None)
---------
Thread-158324::DEBUG::2013-03-25 13:54:54,696::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /bin/mount -t glusterfs rhs-client45.lab.eng.blr.redhat.com:/RHS_VMadd_store /rhev/data-center/mnt/rhs-client45.lab.eng.blr.redhat.com:_RHS__VMadd__store' (cwd None)
-----------------------------------------------------------

Additional Info: An interesting special case exists with the mount option 'backupvolfile-server=<SECONDARY SERVER NAME>'. If this option is given either individually, or combined with the other mount options, and the primary RHS server is available, the result in the vdsm logs is the same as reported above.
However, if the mount option 'backupvolfile-server=<SECONDARY SERVER NAME>' is given either individually, or combined with the other mount options, and the primary RHS server is not available, the mount option(s) do appear on the line where 'mount' is called in the vdsm logs. Even then, the mount fails in this case, although the secondary RHS server is available. That particular issue is reported and worked on in BZ 922744.

-----------------------------------------------------------
When the primary server is available:

Thread-169133::INFO::2013-03-25 18:55:51,822::logUtils::37::dispatcher::(wrapper) Run and protect: validateStorageServerConnection(domType=6, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '', 'connection': 'rhs-client45.lab.eng.blr.redhat.com:/RHS_VMadd_store', 'mnt_options': 'acl,attribute-timeout=2,background-qlen=128,direct-io-mode=on,entry-timeout=2,gid-timeout=2,backupvolfile-server=rhs-client15.lab.eng.blr.redhat.com', 'portal': '', 'user': '', 'iqn': '', 'vfs_type': 'glusterfs', 'password': '******', 'id': '00000000-0000-0000-0000-000000000000'}], options=None)
-----
Thread-169134::DEBUG::2013-03-25 18:55:51,883::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /bin/mount -t glusterfs rhs-client45.lab.eng.blr.redhat.com:/RHS_VMadd_store /rhev/data-center/mnt/rhs-client45.lab.eng.blr.redhat.com:_RHS__VMadd__store' (cwd None)
==========================
When the primary server is not available:

Thread-169103::INFO::2013-03-25 18:55:08,905::logUtils::37::dispatcher::(wrapper) Run and protect: validateStorageServerConnection(domType=6, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '', 'connection': 'rhs-client15.lab.eng.blr.redhat.com:/RHS_VMadd_store', 'mnt_options': 'acl,attribute-timeout=2,background-qlen=128,direct-io-mode=on,entry-timeout=2,gid-timeout=2,backupvolfile-server=rhs-client45.lab.eng.blr.redhat.com', 'portal': '', 'user': '', 'iqn': '', 'vfs_type': 'glusterfs', 'password': '******', 'id': '00000000-0000-0000-0000-000000000000'}], options=None)
Thread-169104::INFO::2013-03-25 18:55:08,955::logUtils::37::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=6, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '', 'connection': 'rhs-client15.lab.eng.blr.redhat.com:/RHS_VMadd_store', 'mnt_options': 'acl,attribute-timeout=2,background-qlen=128,direct-io-mode=on,entry-timeout=2,gid-timeout=2,backupvolfile-server=rhs-client45.lab.eng.blr.redhat.com', 'portal': '', 'user': '', 'iqn': '', 'vfs_type': 'glusterfs', 'password': '******', 'id': '00000000-0000-0000-0000-000000000000'}], options=None)
Thread-169104::DEBUG::2013-03-25 18:55:08,958::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /bin/mount -t glusterfs -o acl,attribute-timeout=2,background-qlen=128,direct-io-mode=on,entry-timeout=2,gid-timeout=2,backupvolfile-server=rhs-client45.lab.eng.blr.redhat.com rhs-client15.lab.eng.blr.redhat.com:/RHS_VMadd_store /rhev/data-center/mnt/rhs-client15.lab.eng.blr.redhat.com:_RHS__VMadd__store' (cwd None)
Thread-169107::INFO::2013-03-25 18:55:13,318::logUtils::37::dispatcher::(wrapper) Run and protect: disconnectStorageServer(domType=6, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '', 'connection': 'rhs-client15.lab.eng.blr.redhat.com:/RHS_VMadd_store', 'mnt_options': 'acl,attribute-timeout=2,background-qlen=128,direct-io-mode=on,entry-timeout=2,gid-timeout=2,backupvolfile-server=rhs-client45.lab.eng.blr.redhat.com', 'portal': '', 'user': '', 'iqn': '', 'vfs_type': 'glusterfs', 'password': '******', 'id': '00000000-0000-0000-0000-000000000000'}], options=None)
-----------------------------------------------------------

Version-Release number of selected component (if applicable):
RHEV-M: rhevm-3.1.0-50.el6ev
Hypervisors: RHEV-H 6.4, RHEL 6.4, RHEL 6.3
RHS: RHS-2.0-20130320.2-RHS-x86_64
Gluster versions: glusterfs-3.3.0.7rhs-1.el6rhs.x86_64, glusterfs-fuse-3.3.0.7rhs-1.el6rhs.x86_64

How reproducible:
Steps to Reproduce:
1. Set up a RHEV-M environment with a POSIX compliant FS Data Center
2. Add a gluster volume from the RHS server to the Data Center, with additional mount options given individually or combined together, separated by commas
3. Watch the vdsm logs

Actual results:
The glusterfs mount options given through RHEV-M while adding the Storage Domain are not honoured at the 'mount' call by vdsm.

Expected results:
The glusterfs mount options given through RHEV-M while adding the Storage Domain should be honoured at the 'mount' call by vdsm.

Additional info:
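Step 3 above can be done with a simple filter on the vdsm log. This is a minimal sketch: the log path in the comment is the usual default on a hypervisor and may differ on other setups, and the sample line is copied from the excerpt in the description so the snippet is runnable as-is.

```shell
# On a live hypervisor you would typically watch the log directly, e.g.:
#   tail -F /var/log/vdsm/vdsm.log | grep 'mount -t glusterfs'
# Here the same filter is demonstrated against a sample line from this report.
sample="Thread-158324::DEBUG::2013-03-25 13:54:54,696::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /bin/mount -t glusterfs rhs-client45.lab.eng.blr.redhat.com:/RHS_VMadd_store /rhev/data-center/mnt/rhs-client45.lab.eng.blr.redhat.com:_RHS__VMadd__store' (cwd None)"
printf '%s\n' "$sample" | grep -o 'mount -t glusterfs'
# prints: mount -t glusterfs
```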
I'm not sure where you're looking. There is a series of connect calls from engine which don't have any additional options, and then there are calls which do contain them, and accordingly in vdsm:

$ xzgrep "mount -t glusterfs -o " vdsm.log*
vdsm.log.1.xz:Thread-162755::DEBUG::2013-03-25 16:01:44,022::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /bin/mount -t glusterfs -o backupvolfile-server=rhs-client45.lab.eng.blr.redhat.com rhs-client15.lab.eng.blr.redhat.com:/RHS_VMadd_store /rhev/data-center/mnt/rhs-client15.lab.eng.blr.redhat.com:_RHS__VMadd__store' (cwd None)
vdsm.log.1.xz:Thread-167515::DEBUG::2013-03-25 18:13:04,537::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /bin/mount -t glusterfs -o acl,attribute-timeout=2,background-qlen=128,direct-io-mode=enable,entry-timeout=2,gid-timeout=2,backupvolfile-server=rhs-client45.lab.eng.blr.redhat.com RHS_VMadd_store /rhev/data-center/mnt/RHS__VMadd__store' (cwd None)
vdsm.log.1.xz:Thread-167521::DEBUG::2013-03-25 18:13:10,023::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /bin/mount -t glusterfs -o acl,attribute-timeout=2,background-qlen=128,direct-io-mode=enable,entry-timeout=2,gid-timeout=2,backupvolfile-server=rhs-client45.lab.eng.blr.redhat.com RHS_VMadd_store /rhev/data-center/mnt/RHS__VMadd__store' (cwd None)
vdsm.log.1.xz:Thread-169104::DEBUG::2013-03-25 18:55:08,958::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /bin/mount -t glusterfs -o acl,attribute-timeout=2,background-qlen=128,direct-io-mode=on,entry-timeout=2,gid-timeout=2,backupvolfile-server=rhs-client45.lab.eng.blr.redhat.com rhs-client15.lab.eng.blr.redhat.com:/RHS_VMadd_store /rhev/data-center/mnt/rhs-client15.lab.eng.blr.redhat.com:_RHS__VMadd__store' (cwd None)
vdsm.log.4.xz:Thread-103959::DEBUG::2013-03-24 13:15:21,595::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /bin/mount -t glusterfs -o backupvolfile-server=rhs-client15.lab.eng.blr.redhat.com rhs-client37.lab.eng.blr.redhat.com:/RHS_VMadd_store /rhev/data-center/mnt/rhs-client37.lab.eng.blr.redhat.com:_RHS__VMadd__store' (cwd None)
vdsm.log.4.xz:Thread-104094::DEBUG::2013-03-24 13:18:13,968::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /bin/mount -t glusterfs -o backupvolfile-server=rhs-client15.lab.eng.blr.redhat.com rhs-client37.lab.eng.blr.redhat.com:/RHS_VMadd_store /rhev/data-center/mnt/rhs-client37.lab.eng.blr.redhat.com:_RHS__VMadd__store' (cwd None)
vdsm.log.8.xz:Thread-7855::DEBUG::2013-03-22 17:58:14,005::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /bin/mount -t glusterfs -o backupvolfile-server=rhs-client15.lab.eng.blr.redhat.com rhs-client37.lab.eng.blr.redhat.com:/RHS_VMadd_store /rhev/data-center/mnt/rhs-client37.lab.eng.blr.redhat.com:_RHS__VMadd__store' (cwd None)
vdsm.log.8.xz:Thread-7871::DEBUG::2013-03-22 17:58:35,730::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /bin/mount -t glusterfs -o backupvolfile-server=rhs-client15.lab.eng.blr.redhat.com rhs-client37.lab.eng.blr.redhat.com:/RHS_VMadd_store /rhev/data-center/mnt/rhs-client37.lab.eng.blr.redhat.com:_RHS__VMadd__store' (cwd None)
(In reply to comment #7)
> I'm not sure where you're looking.
> There is a series of connect calls from engine which don't have any
> additional options and then there are calls which do contain it and
> accordingly in vdsm:
>
> $ xzgrep "mount -t glusterfs -o " vdsm.log*
> vdsm.log.1.xz:Thread-162755::DEBUG::2013-03-25
> 16:01:44,022::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n
> /bin/mount -t glusterfs -o
> backupvolfile-server=rhs-client45.lab.eng.blr.redhat.com
> rhs-client15.lab.eng.blr.redhat.com:/RHS_VMadd_store
> /rhev/data-center/mnt/rhs-client15.lab.eng.blr.redhat.com:
> _RHS__VMadd__store' (cwd None)
> vdsm.log.1.xz:Thread-167515::DEBUG::2013-03-25
> 18:13:04,537::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n
> /bin/mount -t glusterfs -o
> acl,attribute-timeout=2,background-qlen=128,direct-io-mode=enable,entry-
> timeout=2,gid-timeout=2,backupvolfile-server=rhs-client45.lab.eng.blr.redhat.
> com RHS_VMadd_store /rhev/data-center/mnt/RHS__VMadd__store' (cwd None)
> vdsm.log.1.xz:Thread-167521::DEBUG::2013-03-25
> 18:13:10,023::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n
> /bin/mount -t glusterfs -o
> acl,attribute-timeout=2,background-qlen=128,direct-io-mode=enable,entry-
> timeout=2,gid-timeout=2,backupvolfile-server=rhs-client45.lab.eng.blr.redhat.
> com RHS_VMadd_store /rhev/data-center/mnt/RHS__VMadd__store' (cwd None)
> vdsm.log.1.xz:Thread-169104::DEBUG::2013-03-25
> 18:55:08,958::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n
> /bin/mount -t glusterfs -o
> acl,attribute-timeout=2,background-qlen=128,direct-io-mode=on,entry-
> timeout=2,gid-timeout=2,backupvolfile-server=rhs-client45.lab.eng.blr.redhat.
> com rhs-client15.lab.eng.blr.redhat.com:/RHS_VMadd_store
> /rhev/data-center/mnt/rhs-client15.lab.eng.blr.redhat.com:
> _RHS__VMadd__store' (cwd None)
> vdsm.log.4.xz:Thread-103959::DEBUG::2013-03-24
> 13:15:21,595::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n
> /bin/mount -t glusterfs -o
> backupvolfile-server=rhs-client15.lab.eng.blr.redhat.com
> rhs-client37.lab.eng.blr.redhat.com:/RHS_VMadd_store
> /rhev/data-center/mnt/rhs-client37.lab.eng.blr.redhat.com:
> _RHS__VMadd__store' (cwd None)
> vdsm.log.4.xz:Thread-104094::DEBUG::2013-03-24
> 13:18:13,968::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n
> /bin/mount -t glusterfs -o
> backupvolfile-server=rhs-client15.lab.eng.blr.redhat.com
> rhs-client37.lab.eng.blr.redhat.com:/RHS_VMadd_store
> /rhev/data-center/mnt/rhs-client37.lab.eng.blr.redhat.com:
> _RHS__VMadd__store' (cwd None)
> vdsm.log.8.xz:Thread-7855::DEBUG::2013-03-22
> 17:58:14,005::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n
> /bin/mount -t glusterfs -o
> backupvolfile-server=rhs-client15.lab.eng.blr.redhat.com
> rhs-client37.lab.eng.blr.redhat.com:/RHS_VMadd_store
> /rhev/data-center/mnt/rhs-client37.lab.eng.blr.redhat.com:
> _RHS__VMadd__store' (cwd None)
> vdsm.log.8.xz:Thread-7871::DEBUG::2013-03-22
> 17:58:35,730::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n
> /bin/mount -t glusterfs -o
> backupvolfile-server=rhs-client15.lab.eng.blr.redhat.com
> rhs-client37.lab.eng.blr.redhat.com:/RHS_VMadd_store
> /rhev/data-center/mnt/rhs-client37.lab.eng.blr.redhat.com:
> _RHS__VMadd__store' (cwd None)

Ayal Baron, I think there is some confusion about the issue, so I will try to explain it better.

If the mount option 'backupvolfile-server=<SECONDARY SERVER NAME>' is provided through RHEV-M, by itself or together with other mount options, and the primary RHS server is unreachable, all the given mount options come up in the mount call logged by vdsm.
However, if any mount option(s) other than 'backupvolfile-server=<SECONDARY SERVER NAME>' are provided, either alone or together, through RHEV-M, none of the mount options come up in the mount calls logged by vdsm.

Since there are a lot of entries in the vdsm.log* files on the system, I have provided 'tail' snips of the vdsm.log file from each attempt using different mount option(s). They are provided in attachment 3 [details], labelled as 'snips from vdsm logs, for each mount option attempt'.

Hope I have been able to provide enough clarification. If you need any more information, please get back to me.

Cheers!
rejy (rmc)
I'm still missing something. Here you see that a lot more options are passed:

> vdsm.log.1.xz:Thread-167515::DEBUG::2013-03-25
> 18:13:04,537::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n
> /bin/mount -t glusterfs -o
> acl,attribute-timeout=2,background-qlen=128,direct-io-mode=enable,entry-
> timeout=2,gid-timeout=2,backupvolfile-server=rhs-client45.lab.eng.blr.redhat.
> com RHS_VMadd_store /rhev/data-center/mnt/RHS__VMadd__store' (cwd None)
(In reply to comment #9)
> I'm still missing something. Here you see that a lot more options are
> passed:
>
> > vdsm.log.1.xz:Thread-167515::DEBUG::2013-03-25
> > 18:13:04,537::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n
> > /bin/mount -t glusterfs -o
> > acl,attribute-timeout=2,background-qlen=128,direct-io-mode=enable,entry-
> > timeout=2,gid-timeout=2,backupvolfile-server=rhs-client45.lab.eng.blr.redhat.
> > com RHS_VMadd_store /rhev/data-center/mnt/RHS__VMadd__store' (cwd None)

The options I tried are:
acl
entry-timeout=<SECONDS>
attribute-timeout=<SECONDS>
gid-timeout=<SECONDS>
background-qlen=<N>
direct-io-mode[=BOOL]
by themselves, and together with the 'backupvolfile-server=<server>' option.

The examples you gave are from the time when the above 6 options were used together with the 'backupvolfile-server=<server>' option, and the primary RHS server was not reachable. In all other instances, when the above 6 options were used without the 'backupvolfile-server=<server>' option, the options do not appear in the mount call.

- rejy (rmc)
Indeed, engine is passing the parameters to the validate command but not to the connect command:

Thread-156004::INFO::2013-03-25 12:55:14,854::logUtils::37::dispatcher::(wrapper) Run and protect: validateStorageServerConnection(domType=6, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '', 'connection': 'rhs-client45.lab.eng.blr.redhat.com:/RHS_VMadd_store', 'mnt_options': 'acl', 'portal': '', 'user': '', 'iqn': '', 'vfs_type': 'glusterfs', 'password': '******', 'id': '00000000-0000-0000-0000-000000000000'}], options=None)
Thread-156004::INFO::2013-03-25 12:55:14,854::logUtils::39::dispatcher::(wrapper) Run and protect: validateStorageServerConnection, Return response: {'statuslist': [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-156004::DEBUG::2013-03-25 12:55:14,854::task::1151::TaskManager.Task::(prepare) Task=`451a8f53-9c3e-426e-b448-579264fb5db5`::finished: {'statuslist': [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-156004::DEBUG::2013-03-25 12:55:14,854::task::568::TaskManager.Task::(_updateState) Task=`451a8f53-9c3e-426e-b448-579264fb5db5`::moving from state preparing -> state finished
Thread-156004::DEBUG::2013-03-25 12:55:14,854::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-156004::DEBUG::2013-03-25 12:55:14,855::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-156004::DEBUG::2013-03-25 12:55:14,855::task::957::TaskManager.Task::(_decref) Task=`451a8f53-9c3e-426e-b448-579264fb5db5`::ref 0 aborting False
Thread-156005::DEBUG::2013-03-25 12:55:14,903::BindingXMLRPC::161::vds::(wrapper) [10.70.34.108]
Thread-156005::DEBUG::2013-03-25 12:55:14,904::task::568::TaskManager.Task::(_updateState) Task=`3f822635-fcd9-4fdd-8222-d82d8bc1ac1b`::moving from state init -> state preparing
Thread-156005::INFO::2013-03-25 12:55:14,904::logUtils::37::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=6, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '', 'connection': 'rhs-client45.lab.eng.blr.redhat.com:/RHS_VMadd_store', 'iqn': '', 'portal': '', 'user': '', 'vfs_type': 'glusterfs', 'password': '******', 'id': '822bd949-666f-4aa2-89a5-a24f8aaf5c70'}], options=None)
Thread-156005::DEBUG::2013-03-25 12:55:14,906::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /bin/mount -t glusterfs rhs-client45.lab.eng.blr.redhat.com:/RHS_VMadd_store /rhev/data-center/mnt/rhs-client45.lab.eng.blr.redhat.com:_RHS__VMadd__store' (cwd None)
Thread-156005::INFO::2013-03-25 12:55:19,357::logUtils::39::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': '822bd949-666f-4aa2-89a5-a24f8aaf5c70'}]}

Note that the connectStorageServer call carries no 'mnt_options' key at all, so vdsm mounts without '-o'.
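The discrepancy above can be checked mechanically: a mount invocation that honours the options must carry an '-o <options>' argument between '-t glusterfs' and the volume spec. A minimal sketch, using the connect-path mount line quoted from the log excerpt above:

```shell
# The connect-path mount call from the log above; note the absence of
# ' -o <options>' after '-t glusterfs'.
line='/usr/bin/sudo -n /bin/mount -t glusterfs rhs-client45.lab.eng.blr.redhat.com:/RHS_VMadd_store /rhev/data-center/mnt/rhs-client45.lab.eng.blr.redhat.com:_RHS__VMadd__store'

case "$line" in
  *' -o '*) echo 'mount options were passed'  ;;
  *)        echo 'mount options were dropped' ;;
esac
# prints: mount options were dropped
```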
Note that this happens only when adding a POSIX domain that is part of a data center. I guess that is what Rejy meant by 'primary server'. In cases where you add a new POSIX domain and mark the data center as "none", the mount options are sent to VDSM correctly.
Rejy, can you tell me what was the data center compatibility version you used? Was it 3.0, 3.1 etc.?
(In reply to comment #13)
> Rejy, can you tell me what was the data center compatibility version you
> used? Was it 3.0, 3.1 etc.?

Tal,

To answer your query, the Data Center compatibility version used was 3.1.

Referring to your observations in comment 12: I only tested adding a POSIX domain to an existing Data Center, and never tried adding the POSIX domain with the data center marked as "none".

To clarify my usage of the term 'primary server': I used that term in relation to the glusterfs mount option 'backupvolfile-server=<server>'. This mount option is used to specify a fallback RHS server, which is used in the eventuality that the RHS server given in the original mount command is not reachable. I referred to the RHS server given in the original mount command as the primary RHS server, and the fallback RHS server given in the mount option as the secondary RHS server.

Regards,
rejy (rmc)
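For reference, the primary/secondary behaviour described above corresponds to a glusterfs mount of the shape sketched below. The command is only echoed, not executed, since an actual mount would need a live hypervisor and reachable RHS servers; the hostnames and volume name are the ones from this report, and the mount point is an arbitrary placeholder.

```shell
# The glusterfs client fetches the volfile from PRIMARY; if PRIMARY is
# unreachable, it falls back to the server named in backupvolfile-server.
PRIMARY=rhs-client45.lab.eng.blr.redhat.com
SECONDARY=rhs-client15.lab.eng.blr.redhat.com
VOLUME=RHS_VMadd_store

# echo instead of executing, so the sketch is safe to run anywhere
echo mount -t glusterfs -o backupvolfile-server="$SECONDARY" \
  "$PRIMARY:/$VOLUME" /mnt/gluster
```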
Ok, now it's clearer. We have made changes to the connectStorageServer command since 3.1; it had a bug with sending the mount options and vfs_type. Tested in 3.2, and it is working as it should.
Mount options of glusterfs are being honoured:

Thread-65951::INFO::2013-05-08 16:01:14,439::logUtils::40::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=6, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '', 'connection': 'filer01.qa.lab.tlv.redhat.com:/elad2', 'mnt_options': 'entry-timeout=123 attribute-timeout=123 gid-timeout=123'

Verified on RHEVM-3.2-SF15:
rhevm-3.2.0-10.21.master.el6ev.noarch
vdsm-4.10.2-17.0.el6ev.x86_64
glusterfs-3.3.0-22.el6rhs.x86_64
glusterfs-fuse-3.3.0-22.el6rhs.x86_64
rhsc-2.0.techpreview1-4.el6rhs.noarch
3.2 has been released