Description of problem: When a Gluster volume is started, both the NFS and CIFS server processes are started automatically by default. But since we do not support simultaneous CIFS and NFS access to the same volume, we should not auto-start CIFS and NFS for the same Gluster volume at the same time. I suggest that we either leave both disabled by default, or only auto-start the NFS server.
> "But since we do not support simultaneous CIFS and NFS access to the same volume, we should not autostart CIFS and NFS for the Gluster volume at the same time." Jin, where did you get the above confirmation (any doc, someone stating it in email, etc.)? Because, from a storage philosophy standpoint, we think that once a volume is started it should be made available over as many access protocols as possible, and hence we start both CIFS and NFS when a volume starts.
Hi Amar, certain protocols are not compatible when enabled at the same time on the same volume. While I'm still trying to get a more complete picture, here are two email threads you can refer to: http://post-office.corp.redhat.com/archives/sme-storage/2012-August/msg00163.html http://post-office.corp.redhat.com/archives/sme-storage/2012-November/msg00097.html Thanks, Jin
Note that the volume is only exported by Samba when the samba service is enabled and running. If the samba service is disabled, the volume will only be exported over NFS. I guess we could change the hook scripts and have them check 'nfs.disable' before adding the volume to smb.conf and restarting samba.
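The hook-script change suggested above could look roughly like the sketch below. This is only an illustration of the proposed logic, not the actual hook script: the `get_volume_option` helper here is a mock standing in for a real query such as `gluster volume get <VOLNAME> nfs.disable`, and the smb.conf handling is elided to a comment.

```shell
#!/bin/sh
# Sketch: only export a started volume via Samba when NFS export
# has been disabled on it, so both protocols are never auto-enabled
# on the same volume. Names and helper are hypothetical.

VOL="testvol"

# Mocked lookup; a real hook would query the gluster CLI, e.g.:
#   gluster volume get "$1" "$2"
get_volume_option() {
    # Pretend nfs.disable is "on" for this volume.
    echo "on"
}

NFS_DISABLED=$(get_volume_option "$VOL" nfs.disable)

if [ "$NFS_DISABLED" = "on" ]; then
    echo "exporting $VOL via Samba"
    # ...append a share section for $VOL to smb.conf and reload samba...
else
    echo "skipping Samba export for $VOL (NFS is still enabled)"
fi
```

Running the sketch with the mocked value prints `exporting testvol via Samba`; with `nfs.disable` off, the Samba export would be skipped instead.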
Adding two more developers from the Samba team to confirm the behavior described in comment #3.
Per the 03/05 email exchange with PM, targeting Big Bend.
Re-confirming the behavior described in comment #3. At this point in time, we do not consider NFS and SMB services to be compatible on top of GlusterFS. Work is ongoing to make SMB compatible with other access methods. Fixes to Gluster byte-range locking support, which were applied several months ago, are a major step toward protocol interoperability, but new features (such as OpLock support) will be needed in order to fully synchronize behaviors.
The product version of Red Hat Storage on which this issue was reported has reached End Of Life (EOL) [1], hence this bug report is being closed. If the issue is still observed on a current version of Red Hat Storage, please file a new bug report on the current version. [1] https://rhn.redhat.com/errata/RHSA-2014-0821.html