Bug 789961 - [635f3bc0f8a05ad1280f8ab7d55181502bcad700] proc status shows only 32 groups even though the protocol supports more
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: fuse
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Brian Foster
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 848333
 
Reported: 2012-02-13 11:10 UTC by Anush Shetty
Modified: 2015-10-22 15:46 UTC
CC: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 848333
Environment:
Last Closed: 2015-10-22 15:46:38 UTC
Regression: RTP
Mount Type: fuse
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Anush Shetty 2012-02-13 11:10:33 UTC
Description of problem: On a FUSE mount, I created 40 files, each owned by a different group. I also created a user and added all 40 groups as secondary groups for that user. The groups were given rw access by setting an ACL entry. When I log in as that user and try to read/write the files, it only works for the first 31 files. On further examination, the proc status showed only 32 groups.

The read and writes were successful for all the 40 files when tried on the backend directly.


How reproducible:

MOUNT=/mnt/gluster
useradd testing_random

mkdir -p $MOUNT/testing
chmod -R 777 $MOUNT/testing


for i in `seq 1 40`; do
   groupadd test_grps_$i
   newgrp test_grps_$i <<EOF
touch $MOUNT/testing/file_$i
EOF
done

for i in `seq 1 40`; do 
   usermod -a -G test_grps_$i testing_random; 
done

for i in `seq 1 40`; do 
    setfacl -m o::---,g::rw $MOUNT/testing/file_$i; 
done

for i in `seq 1 40`; do su testing_random -c "echo '234343' > $MOUNT/testing/file_$i"; done


getfacl testing/file_1
# file: testing/file_1
# owner: root
# group: test_grps_1
user::rw-
group::rw-
other::---





Additional info:

# id testing_random
uid=1010(testing_random) gid=1010(testing_random) groups=1010(testing_random),1022(test_grps_1),1023(test_grps_2),1024(test_grps_3),1025(test_grps_4),1026(test_grps_5),1027(test_grps_6),1028(test_grps_7),1029(test_grps_8),1030(test_grps_9),1031(test_grps_10),1032(test_grps_11),1033(test_grps_12),1034(test_grps_13),1035(test_grps_14),1036(test_grps_15),1037(test_grps_16),1038(test_grps_17),1039(test_grps_18),1040(test_grps_19),1041(test_grps_20),1042(test_grps_21),1043(test_grps_22),1044(test_grps_23),1045(test_grps_24),1046(test_grps_25),1047(test_grps_26),1048(test_grps_27),1049(test_grps_28),1050(test_grps_29),1051(test_grps_30),1052(test_grps_31),1053(test_grps_32),1054(test_grps_33),1055(test_grps_34),1056(test_grps_35),1057(test_grps_36),1058(test_grps_37),1059(test_grps_38),1060(test_grps_39),1061(test_grps_40)



cat /proc/17747/status | grep Group
Groups:	1011 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 


(gdb) b io_stats_open
Breakpoint 1 at 0x7fcc6620ec3c: file io-stats.c, line 2027.
(gdb) c
Continuing.
[Switching to Thread 0x7fcc65065700 (LWP 3362)]

Breakpoint 1, io_stats_open (frame=0x7fcc69f5b0d8, this=0x2607110, loc=0x7fcc60012f50, flags=32769, fd=0x7fcc6506f384, wbflags=0)
    at io-stats.c:2027
2027	        frame->local = gf_strdup (loc->path);
(gdb) p *frame->local
Attempt to dereference a generic pointer.
(gdb) p *frame->root
$1 = {{all_frames = {next = 0x7fcc68f12850, prev = 0x25fc5a0}, {next_call = 0x7fcc68f12850, prev_call = 0x25fc5a0}}, pool = 0x25fc5a0, 
  stack_lock = 1, trans = 0x0, unique = 4858, state = 0x7fcc60012f30, uid = 1011, gid = 1011, pid = 17747, ngrps = 32, groups = {1011, 
    1022, 1023, 1024, 1025, 1026, 1027, 1028, 1029, 1030, 1031, 1032, 1033, 1034, 1035, 1036, 1037, 1038, 1039, 1040, 1041, 1042, 1043, 
    1044, 1045, 1046, 1047, 1048, 1049, 1050, 1051, 1052, 0 <repeats 168 times>}, lk_owner = {len = 8, data = '\000' <repeats 1023 times>}, 
  frames = {root = 0x7fcc68f1202c, parent = 0x0, next = 0x7fcc69f5b0d8, prev = 0x0, local = 0x0, this = 0x25fceb0, ret = 0, ref_count = 1, 
    lock = 1, cookie = 0x0, complete = _gf_false, op = GF_FOP_OPEN, begin = {tv_sec = 0, tv_usec = 0}, end = {tv_sec = 0, tv_usec = 0}, 
    wind_from = 0x0, wind_to = 0x0, unwind_from = 0x0, unwind_to = 0x0}, op = 11, type = 1 '\001'}


Comment 1 Amar Tumballi 2012-09-18 10:22:16 UTC
Brian, if you get any insight into this issue, it would be great to know. I don't know how we can get aux gids if there are more than 32 at the moment.

Comment 2 Brian Foster 2012-09-18 13:40:03 UTC
(In reply to comment #1)
> Brian, if you get any insight into this issue, it would be great to know. I
> don't know how we can get aux gids if there are more than 32 at the moment.

Hi Amar,

Right, I recall looking into this briefly in the past and don't recall a direct mechanism to get the aux group list for a separate process. getgroups() works for the current process and getgrouplist() is based on a user. I suppose we could get the aux gids for the user of a process, but that's probably not correct for a process that might have changed its group membership at runtime (setgroups()?). Some testing in that regard against a local filesystem might be informative.

A correct approach might be to enhance fuse to include aux group data in fuse requests (if requested by the fuse fs). Actually, in doing some googling, I see that Emmanuel attempted such an approach in the past (for NetBSD), but it appears not to have made it upstream (at least the libfuse bits):

http://old.nabble.com/-PATCH--send-secondary-groups-through-FUSE-tt32937662.html#a32937662

Perhaps we should revisit this.

BTW, is this based on a user report, or otherwise something we see users complain about?

Comment 3 meltingrobot 2013-11-05 16:33:45 UTC
I ran into this issue with the latest Gluster 3.4.1. We have a few users who happen to be in more than 32 AD groups and are troubled by this limitation.

Comment 4 csb sysadmin 2013-12-12 20:30:25 UTC
Comment 3 seconded.

Comment 5 Kaleb KEITHLEY 2015-10-22 15:46:38 UTC
Because of the large number of bugs filed against it, the "mainline" version is ambiguous and about to be removed as a choice.

If you believe this is still a bug, please change the status back to NEW and choose the appropriate, applicable version for it.

