Description of problem:
I have a trusted storage pool consisting of 3 nodes.

When auth.allow is set on a volume to only the first node's FQDN (executed from that node):
the volume mounts on that node using that same FQDN (which is expected);
it can also be mounted on the 2nd host using that node's FQDN (which shouldn't be the case).

When auth.allow is set on a volume to the FQDNs of the first 2 nodes (executed from the first node):
the first 2 nodes are able to mount the volume with their respective FQDNs (which is expected);
the volume can be mounted from the 3rd node as well with its own FQDN (which shouldn't be the case).

When auth.reject is set on a volume to the FQDNs of the first 2 nodes (executed from the first node):
the first node is not able to mount the volume (which is expected);
the 2nd node mounts with its own FQDN (which shouldn't be the case);
the 3rd node mounts with its own FQDN (which is expected).
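The first scenario above can be sketched with the gluster CLI. This is a reproduction sketch, not output from the reporter's system; the volume name (test-vol), hostnames (node1/node2.example.com), and mount point are placeholders:

```shell
# Assumed setup: a 3-node TSP with volume test-vol already created and started.
# On node1, allow only node1's FQDN:
gluster volume set test-vol auth.allow node1.example.com

# FUSE-mount from node1 (expected to succeed):
mount -t glusterfs node1.example.com:/test-vol /mnt/test-vol

# FUSE-mount from node2 (per this report, this also succeeds,
# even though node2's FQDN is not in auth.allow):
mount -t glusterfs node2.example.com:/test-vol /mnt/test-vol
```
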
Version-Release number of selected component (if applicable):
[root@dhcp41-156 ~]# rpm -qa | grep gluster
Below is my observation w.r.t. setting auth.allow/auth.reject on a volume in a trusted storage pool (consider a TSP of 3 nodes [H1, H2, H3], with a volume already created):
1. All nodes which are part of the TSP can mount the volume.
2. Nodes (i.e. IPs/FQDNs) outside of the TSP can also mount the volume without any issue.
3. If, immediately after creating the volume, the first command is auth.reject *.*.*.*, mounts fail for all clients; however, mounts still succeed on nodes which are part of the TSP.
4. After step 3, running gluster v set test-vol auth.allow IP1,IP2,IP3 will allow only IP1, IP2, and IP3 to mount the volume.
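The reject-then-allow flow in observations 3 and 4 can be sketched as follows; test-vol and the client IPs are placeholder values, not taken from the reporter's setup:

```shell
# Immediately after creating and starting the volume, reject all clients.
# (Quoted so the shell does not glob the pattern.)
gluster volume set test-vol auth.reject '*.*.*.*'
# Mounts now fail for all external clients,
# but nodes that are part of the TSP can still mount.

# Then allow a specific set of client IPs (placeholders):
gluster volume set test-vol auth.allow 10.0.0.1,10.0.0.2,10.0.0.3
# Only these addresses (plus TSP members) can now mount the volume.
```
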
I believe this is not a bug. Could you please check and confirm?
This actually doesn't require any change in the glusterd code base. Moving to core component at this moment.
Raised an issue to include the flow of auth.allow post auth.reject - https://bugzilla.redhat.com/show_bug.cgi?id=1655579