Bug 765524 - (GLUSTER-3792) secondary group owner limited to 16 groups
secondary group owner limited to 16 groups
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: access-control
3.2.4
x86_64 Linux
medium Severity urgent
: ---
: ---
Assigned To: shishir gowda
:
Depends On:
Blocks: 817967
Reported: 2011-11-08 04:21 EST by hurdmann
Modified: 2013-12-08 20:27 EST (History)
4 users

See Also:
Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-07-24 13:09:24 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
an attempt to add a constant (5.40 KB, patch)
2011-11-08 06:31 EST, hurdmann

Description hurdmann 2011-11-08 02:12:10 EST
I think I have found the source of the number 16:

struct auth_glusterfs_parms {
	uint64_t lk_owner;
	u_int pid;
	u_int uid;
	u_int gid;
	u_int ngrps;
	u_int groups[16];
} __attribute__((packed));
typedef struct auth_glusterfs_parms auth_glusterfs_parms;

in xdr-common.h:64.

I will try changing it to 64 to test...
Comment 1 shishir gowda 2011-11-08 03:13:18 EST
Hi Hurdmann,

That is a known limitation: releases 3.1/3.2 support up to 16 aux groups. We plan to raise the limit in later releases.

Changing the severity of the bug from blocker to major.
Comment 2 hurdmann 2011-11-08 03:28:36 EST
Hi,
thanks for your response. I'm writing a patch to add a #define to the source code; I'll post it here after some tests ;)

regards
hm
Comment 3 hurdmann 2011-11-08 04:21:12 EST
Hello,
I use Gluster with an Apache server that has a lot of secondary groups (one per website) for security reasons (one user / one SCP account / one website).

I have upgraded from 3.2.3 to 3.2.4 to get the patch on rpc/rpc-lib/src/rpc-clnt.c

from:
1221	        memcpy (au.groups, call_frame->root->groups, 16);
to:
1221	        memcpy (au.groups, call_frame->root->groups, sizeof (au.groups));

But after some tests, the first 16 groups are OK but not the ones after (17, 18, ...).

The errors and the logs are the same as before the patch.

Any idea or patch ?

I'm on IRC for questions ;)

thanks,

hm
Comment 4 hurdmann 2011-11-08 06:30:25 EST
There are a lot of duplicated constants for the same number 16:

like :

RPCSVC_MAX_AUTH_BYTES
GF_REQUEST_MAXGROUPS
NGRPS
literal 16 in array sizes and in for loops.

So I need help to complete my patch, because I run into something like this:

[2011-11-08 15:23:23.961992] I [server-resolve.c:571:server_resolve] 0-test-volume-server: pure path resolution for �q`O�(����}��ㅷ`O���! (OPENDIR)

probably a buffer overflow.
Comment 5 hurdmann 2011-11-08 06:31:06 EST
Created attachment 719 [details]
Stack trace of the coredump
Comment 6 hurdmann 2011-12-15 06:40:14 EST
Hi,
I see that there's a commit to rpc/rpc-lib/src/auth-glusterfs.c adding a limit:
+        if (req->auxgidcount > 16) {
+                ret = RPCSVC_AUTH_REJECT;
+                goto err;
+        }
+
So that's worse than just taking the first 16, isn't it?
Comment 7 hurdmann 2011-12-20 05:16:33 EST
I have tried a lot of modifications in rpc/fuse and other source files,
but I still end up with my GID array leaking into my paths...

So I need help, or perhaps an ETA for a patch?
Comment 8 Amar Tumballi 2011-12-20 06:15:38 EST
http://review.gluster.com/779 increases the max count of aux GIDs to 500 from the current 16. Try that patch on the master branch; it should work for you. If this patch makes it upstream, you can expect a release sometime in Feb/March 2012.
Comment 9 Amar Tumballi 2012-01-25 00:38:30 EST
Currently it's just 200 aux GIDs on the wire. If we hit this limit, we will consider extending the protocol.
Comment 10 Anush Shetty 2012-05-19 01:31:05 EDT
The limit is 32 aux GIDs. This limitation is tracked through another bug, bug 789961. So moving this bug to VERIFIED for release-3.3.
