Comment 1 Raghavendra Talur
2020-01-20 14:25:39 UTC
Test case:
Create more than 250 (around 300) 1x3 volumes and have brick mux on (see the setup sketch after these steps).
Mount all volumes on a client machine
Reboot any one of the 3 gluster nodes
All the mounts should continue to work.
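For reference, a minimal sketch of such a setup on a three-node cluster (the node names, volume names, and brick paths are illustrative assumptions, not from the original report):

# Enable brick multiplexing cluster-wide before creating the volumes.
gluster volume set all cluster.brick-multiplex on
# Create and start ~300 replica-3 (1x3) volumes.
for i in $(seq 1 300); do
  gluster volume create vol$i replica 3 node1:/bricks/vol$i node2:/bricks/vol$i node3:/bricks/vol$i force
  gluster volume start vol$i
done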
Observation:
Some of the mounts go to a failed state, and even new mount attempts fail with the error:
[2020-01-14 17:27:54.009038] E [MSGID: 114058] [client-handshake.c:1449:client_query_portmap_cbk] 0-ocs_glusterfs_claim0735_e7683536-362a-11ea-901b-068049869906-client-2: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
[2020-01-14 17:28:34.607674] E [MSGID: 114044] [client-handshake.c:1031:client_setvolume_cbk] 0-ocs_glusterfs_claim0735_e7683536-362a-11ea-901b-068049869906-client-2: SETVOLUME on remote-host failed: Authentication failed [Permission denied]
[2020-01-14 17:28:34.607731] E [fuse-bridge.c:6292:notify] 0-fuse: Server authenication failed. Shutting down.
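As the first message suggests, the brick state can be checked on a server with 'gluster volume status'. A minimal sketch (the volume name is taken from the log above):

gluster volume status ocs_glusterfs_claim0735_e7683536-362a-11ea-901b-068049869906
# A healthy brick row shows Online "Y" and a TCP port number; a port of
# "N/A" matches the portmap failure seen in the client log above.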
Hi,
I have tried to reproduce it in a non-OCS environment, but I am not able to reproduce it:
1) Set up 300 volumes with brick mux enabled and start all the volumes
2) for i in {1..300}; do mkdir /mnt$i; mount -t glusterfs <gluster-1>:/test$i /mnt$i; done
3) Reboot gluster-1 node
4) grep -il "permission" /var/log/glusterfs/mnt*
I am not getting any error specific to 'Permission denied'.
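For completeness, a quick mount-health check after the reboot (my own illustrative addition; a dead FUSE mount typically errors with 'Transport endpoint is not connected'):

for i in {1..300}; do stat /mnt$i >/dev/null 2>&1 || echo "/mnt$i is not responding"; done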
Regards,
Mohit Agrawal
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2020:0288
Comment 28 Red Hat Bugzilla
2023-09-14 05:50:18 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.