Bug 1546991

Summary: Define the workflow of auth.allow post auth.reject
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Rochelle <rallan>
Component: protocol
Assignee: Sheetal Pamecha <spamecha>
Status: CLOSED NOTABUG
QA Contact: Rahul Hinduja <rhinduja>
Severity: low
Docs Contact:
Priority: low
Version: rhgs-3.4
CC: amukherj, atumball, rallan, rhinduja, rhs-bugs, rkavunga, sankarshan, sasundar, spamecha, storage-qa-internal, vbellur
Target Milestone: ---
Keywords: EasyFix, ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-12-03 13:36:15 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1319271, 1546738    

Description Rochelle 2018-02-20 09:58:25 UTC
Description of problem:
=======================
I have a storage pool consisting of 3 nodes.

Scenario 1:
-----------
When auth.allow is set on a volume to only the first node's FQDN (executed from that node),
the volume mounts successfully on that node using that same FQDN (which is expected).

The volume can also be mounted on the 2nd host using the first node's FQDN (which shouldn't be the case).
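
As a rough sketch, the scenario corresponds to commands like the following (the volume name test-vol, the FQDNs node1/node2.example.com, and the mount point are placeholders, not taken from the actual setup):

[root@node1 ~]# gluster volume set test-vol auth.allow node1.example.com
[root@node1 ~]# mount -t glusterfs node1.example.com:/test-vol /mnt/test-vol   # succeeds, as expected
[root@node2 ~]# mount -t glusterfs node1.example.com:/test-vol /mnt/test-vol   # also succeeds, although node2 is not in auth.allow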


Scenario 2:
-----------
When auth.allow is set on a volume to the FQDNs of the first 2 nodes (executed from the first node),
the first 2 nodes are able to mount the volume with their respective FQDNs (which is expected).

The volume can also be mounted from the third node with its own FQDN (which shouldn't be the case).
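
The same sketch with both placeholder FQDNs in the allow list, each node mounting via its own FQDN:

[root@node1 ~]# gluster volume set test-vol auth.allow node1.example.com,node2.example.com
[root@node1 ~]# mount -t glusterfs node1.example.com:/test-vol /mnt/test-vol   # succeeds, as expected
[root@node2 ~]# mount -t glusterfs node2.example.com:/test-vol /mnt/test-vol   # succeeds, as expected
[root@node3 ~]# mount -t glusterfs node3.example.com:/test-vol /mnt/test-vol   # also succeeds, although node3 is not in auth.allow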

Scenario 3:
-----------
When auth.reject is set on a volume to the FQDNs of the first 2 nodes (executed from the first node),
the first node is not able to mount the volume (which is expected).

The 2nd node can mount with its own FQDN (which shouldn't be the case).
The 3rd node can mount with its own FQDN (which is expected).
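
And the auth.reject variant, with the same placeholder names as above:

[root@node1 ~]# gluster volume set test-vol auth.reject node1.example.com,node2.example.com
[root@node1 ~]# mount -t glusterfs node1.example.com:/test-vol /mnt/test-vol   # fails, as expected
[root@node2 ~]# mount -t glusterfs node2.example.com:/test-vol /mnt/test-vol   # succeeds, although node2 is in auth.reject
[root@node3 ~]# mount -t glusterfs node3.example.com:/test-vol /mnt/test-vol   # succeeds, as expected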

Version-Release number of selected component (if applicable):
==============================================================
[root@dhcp41-156 ~]# rpm -qa | grep gluster
vdsm-gluster-4.17.33-1.2.el7rhgs.noarch
glusterfs-3.12.2-4.el7rhgs.x86_64
glusterfs-cli-3.12.2-4.el7rhgs.x86_64
glusterfs-rdma-3.12.2-4.el7rhgs.x86_64
libvirt-daemon-driver-storage-gluster-3.9.0-12.el7.x86_64
glusterfs-libs-3.12.2-4.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-client-xlators-3.12.2-4.el7rhgs.x86_64
glusterfs-api-3.12.2-4.el7rhgs.x86_64
glusterfs-server-3.12.2-4.el7rhgs.x86_64
glusterfs-geo-replication-3.12.2-4.el7rhgs.x86_64
gluster-nagios-addons-0.2.10-2.el7rhgs.x86_64
glusterfs-fuse-3.12.2-4.el7rhgs.x86_64
python2-gluster-3.12.2-4.el7rhgs.x86_64


How reproducible:
=================
Always

Comment 2 Gaurav Yadav 2018-02-22 11:13:05 UTC
Below is my observation w.r.t. setting auth.allow/auth.reject on a volume in a trusted storage pool (let us consider a TSP of 3 nodes [H1, H2, H3], with a volume already created):

1. All nodes which are part of the TSP can mount the volume.
2. Nodes (i.e. IPs/FQDNs) which are outside of the TSP can also mount the volume without any issue.
3. If, after creating the volume, the first command run is auth.reject *.*.*.*, the mount will fail for all clients; however, it will still succeed on nodes which are part of the TSP.
4. After step 3, gluster v set test-vol auth.allow IP1,IP2,IP3 will allow only IP1, IP2, and IP3 to mount the volume (sketched as commands below).
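
Put as commands (test-vol, IP1, IP2, IP3 are the placeholders from the steps above; the wildcard is quoted here only so the shell does not expand it):

[root@H1 ~]# gluster volume set test-vol auth.reject "*.*.*.*"    # locks out all clients outside the TSP
[root@H1 ~]# gluster volume set test-vol auth.allow IP1,IP2,IP3   # re-admits only these addresses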

Rochelle,

I believe it is not a bug. Could you please check and confirm the same.

Comment 7 Atin Mukherjee 2018-10-06 13:20:48 UTC
This actually doesn't require any change in the glusterd code base. Moving to the core component at this point.

Comment 8 Sheetal Pamecha 2018-12-03 13:36:15 UTC
Raised an issue to include the flow of auth.allow post auth.reject - https://bugzilla.redhat.com/show_bug.cgi?id=1655579