Bug 1793035 - Mounts fail after reboot of 1/3 gluster nodes
Summary: Mounts fail after reboot of 1/3 gluster nodes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: rhgs-3.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.5.z Batch Update 1
Assignee: Mohit Agrawal
QA Contact: Nag Pavan Chilakam
URL:
Whiteboard:
Depends On:
Blocks: 1788913 1793852 1794019 1794020 1804512
 
Reported: 2020-01-20 14:22 UTC by Raghavendra Talur
Modified: 2023-09-14 05:50 UTC
CC: 12 users

Fixed In Version: glusterfs-6.0-29
Doc Type: No Doc Update
Doc Text:
Clone Of:
Cloned to: 1793852
Environment:
Last Closed: 2020-01-30 06:42:48 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2020:0288 0 None None None 2020-01-30 06:42:58 UTC

Comment 1 Raghavendra Talur 2020-01-20 14:25:39 UTC
Test case:

Create more than 250 (around 300) 1x3 volumes with brick multiplexing (brick mux) enabled; a scripted sketch of this setup follows after these steps.
Mount all volumes on a client machine
Reboot any one of the 3 gluster nodes
All the mounts should continue to work.
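
A minimal scripted sketch of this setup (not from the report; it assumes three nodes named gluster-1/2/3, brick directories under /bricks, and volumes named test1..test300 to match the mount loop in comment 3):

  # enable brick multiplexing cluster-wide before the bricks are started
  gluster volume set all cluster.brick-multiplex on

  # create and start ~300 1x3 (replica 3) volumes; 'force' is only needed if
  # the brick directories live on the root filesystem
  for i in {1..300}; do
    gluster volume create test$i replica 3 \
      gluster-1:/bricks/test$i gluster-2:/bricks/test$i gluster-3:/bricks/test$i force
    gluster volume start test$i
  done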

Observation:
Some of the mounts go to a failed state, and new mount attempts also fail with the following errors:
[2020-01-14 17:27:54.009038] E [MSGID: 114058] [client-handshake.c:1449:client_query_portmap_cbk] 0-ocs_glusterfs_claim0735_e7683536-362a-11ea-901b-068049869906-client-2: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
[2020-01-14 17:28:34.607674] E [MSGID: 114044] [client-handshake.c:1031:client_setvolume_cbk] 0-ocs_glusterfs_claim0735_e7683536-362a-11ea-901b-068049869906-client-2: SETVOLUME on remote-host failed: Authentication failed [Permission denied]
[2020-01-14 17:28:34.607731] E [fuse-bridge.c:6292:notify] 0-fuse: Server authenication failed. Shutting down.
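
As the first error suggests, one way to confirm whether the brick process is running and has a port assigned is to query the volume status on a server node (volume name taken from the log above):

  gluster volume status ocs_glusterfs_claim0735_e7683536-362a-11ea-901b-068049869906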

Comment 3 Mohit Agrawal 2020-01-20 15:27:55 UTC
Hi,

I have tried to reproduce this on a non-OCS environment, but I am not able to reproduce it:

1) Set up 300 volumes, enable brick mux, and start all the volumes
2) for i in {1..300}; do mkdir /mnt$i; mount -t glusterfs <gluster-1>:/test$i /mnt$i; done
3) Reboot gluster-1 node
4) grep -il "permission" /var/log/glusterfs/mnt*
   No error specific to permission denied is seen. (A possible write check over all the mounts is sketched below.)
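
Not part of the original steps: one possible extra check after the reboot is to verify that every mount is still writable, reusing the /mnt$i mount points from step 2:

  for i in {1..300}; do
    touch /mnt$i/.healthcheck 2>/dev/null || echo "mount /mnt$i is not writable"
  done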

Regards,
Mohit Agrawal

Comment 27 errata-xmlrpc 2020-01-30 06:42:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0288

Comment 28 Red Hat Bugzilla 2023-09-14 05:50:18 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

