+++ This bug was initially created as a clone of Bug #1330079 +++

Description of problem:
Currently, the gluster VFS plugin can take only one volfile server in its config settings. gfapi recently merged a feature that allows one to specify multiple volfile servers. We should make use of that feature in the VFS plugin.
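For illustration, a minimal smb.conf share using the new option could look like the sketch below. The share name, volume name, log path, and addresses are hypothetical; the option names are the standard vfs_glusterfs ones.

    [gluster-share]
        vfs objects = glusterfs
        path = /
        glusterfs:volume = testvol
        glusterfs:logfile = /var/log/samba/glusterfs-testvol.log
        ; list of volfile servers, tried in order
        glusterfs:volfile_server = 192.168.122.13 192.168.122.14 192.168.122.15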
How to test
=================================================================
The glusterfs:volfile_server key can be a list of whitespace-separated elements, where each element is either unix+/path/to/socket/file or [tcp+]IP|hostname|\[IPv6\][:port]. Note the restriction on naming an IPv6 host: it follows the same IPv6-in-URL naming rules as RFC 2732.

Some tests:

1. Setup: a Gluster cluster of 192.168.122.13, 192.168.122.14, 192.168.122.15. In smb.conf on server 192.168.122.13, you can have:

a. glusterfs:volfile_server = google.com 192.168.122.13 192.168.122.14 192.168.122.15
   Expected result: connection should succeed with 192.168.122.13.

b. glusterfs:volfile_server = google.com 192.168.122.31 192.168.122.14 192.168.122.15
   Expected result: connection should succeed with 192.168.122.14.

c. glusterfs:volfile_server = tcp+192.168.122.14:24007 192.168.122.15
   Expected result: connection should succeed with 192.168.122.14.

d. glusterfs:volfile_server = unix+/var/run/glusterd.socket tcp+192.168.122.14:24007 192.168.122.15
   Expected result: connection should succeed with the local glusterd over the unix socket.

e. glusterfs:volfile_server = 192.168.122.31 192.168.122.44 192.168.122.45
   Expected result: connection should fail with a log saying the volfile could not be fetched.

For all the above cases, please use netstat to verify connections and perform I/O from the SMB clients to ensure that the connection is fully operational (see the verification sketch after this list). Logging at the default level 0 will only happen in the total-failure case where all volfile servers are exhausted; all other logs are at level 3 onwards. I will add more cases later.
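One way to do that verification, as a sketch: this assumes the default glusterd port 24007, that net-tools is installed on the Samba node, and the hypothetical share/credentials from the earlier smb.conf example.

    # show established TCP connections from smbd to a glusterd volfile port
    netstat -tnp | grep 24007

    # for the unix socket case (test d), look for the socket path instead
    netstat -xnp | grep /var/run/glusterd.socket

    # basic I/O from an SMB client to confirm the share is fully operational
    smbclient //192.168.122.13/gluster-share -U user%password \
        -c 'put /etc/hosts hosts; ls; del hosts'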
NOTE: This list is used only for the initial connection setup with Gluster versions <= 3.7. In Gluster 3.8, a mechanism was introduced that uses the list to connect to glusterd on a different node as a failover response when the connection to the current glusterd is lost. For example, with

glusterfs:volfile_server = 192.168.122.13 192.168.122.14 192.168.122.15

the expected result is that the connection succeeds with 192.168.122.13. Now kill glusterd on 192.168.122.13; a connection should automatically be initiated with 192.168.122.14. This happens only on 3.8 builds and later.
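A possible way to exercise that failover path, as a sketch: this assumes glusterd runs as a systemd service and the same addresses as above.

    # on 192.168.122.13, while an SMB client is connected:
    systemctl stop glusterd

    # back on the Samba node, confirm smbd re-established the management
    # connection to the next server in the list
    netstat -tnp | grep 24007    # should now show 192.168.122.14:24007

    # restore the original glusterd afterwards
    systemctl start glusterd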
Tested the basic functionality of the VFS plugin on a two node setup; it is working. Cases covered:

1. Setup: a Gluster cluster of 192.168.122.13, 192.168.122.14, 192.168.122.15. In smb.conf on server 192.168.122.13, you can have:

a. glusterfs:volfile_server = <invalid server> 192.168.122.13 192.168.122.14 192.168.122.15
   Expected result: connection should succeed with 192.168.122.13.

b. glusterfs:volfile_server = <invalid server> 192.168.122.31 192.168.122.14 192.168.122.15
   Expected result: connection should succeed with 192.168.122.14.

c. glusterfs:volfile_server = tcp+192.168.122.14:24007 192.168.122.15
   Expected result: connection should succeed with 192.168.122.14.

e. glusterfs:volfile_server = 192.168.122.31 192.168.122.44 192.168.122.45
   Expected result: connection should fail with a log saying the volfile could not be fetched.

Versions
--------
samba-client-4.4.6-2
glusterfs-3.8.4-4
Doc looks good.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2017-0494.html