Bug 1163750 - [USS]: Accessing .snaps from NFS mount, caused snapd and NFS server to stop running on one node
Summary: [USS]: Accessing .snaps from NFS mount, caused snapd and NFS server to stop running on one node
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: snapshot
Version: rhgs-3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: rjoseph
QA Contact: Anoop
URL:
Whiteboard: USS
Depends On:
Blocks:
 
Reported: 2014-11-13 12:27 UTC by senaik
Modified: 2023-09-14 02:50 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-01-05 10:24:43 UTC
Embargoed:



Description senaik 2014-11-13 12:27:49 UTC
Description of problem:
=======================
While trying to access .snaps from the NFS mount, snapd and the NFS server stopped on the node (from which the volume was mounted).
Snapd stopped due to "Request received from non-privileged port"
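
This message is gluster's RPC layer rejecting connections that originate from non-privileged ports (>= 1024). A sketch of the usual knobs for allowing such connections; treating them as applicable to snapd in this setup is an assumption, not something verified in this report:

 # volume-level option ("vol1" is the volume name from the logs below)
 gluster volume set vol1 server.allow-insecure on
 # plus, in /etc/glusterfs/glusterd.vol (followed by a glusterd restart):
 #   option rpc-auth-allow-insecure on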


Version-Release number of selected component (if applicable):
============================================================
glusterfs 3.6.0.32 

How reproducible:
=================
1/1


Steps to Reproduce:
==================
1) Fuse and NFS mount a 2x2 dist-rep volume, and enable USS
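
 A minimal sketch of this step, assuming hostnames server1..server4, brick path /bricks/brick1 and mount points /mnt/fuse and /mnt/nfs (none of these are given in the report):

 gluster volume create vol1 replica 2 server{1..4}:/bricks/brick1
 gluster volume start vol1
 gluster volume set vol1 features.uss enable
 mount -t glusterfs server1:/vol1 /mnt/fuse
 mount -t nfs -o vers=3 server1:/vol1 /mnt/nfs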

2) Create 256 snapshots in a loop while IO is going on 
 for i in {1..150} ; do cp -rvf /var/log/glusterfs f_log.$i ; done
 for i in {1..150} ; do cp -rvf /var/log/glusterfs n_log.$i ; done
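
 The snapshot loop itself is not shown above; a sketch of what it would look like, assuming the hypothetical name prefix "snap" and that the snapshot limit allows 256 snapshots:

 for i in {1..256} ; do gluster snapshot create snap$i vol1 ; done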

3) After snapshot creation is complete, cd to .snaps from the fuse and NFS mounts
 From the fuse mount, .snaps was accessible; then, while accessing .snaps from the NFS mount, it failed with an IO error
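
 With the mount points assumed in step 1, that amounts to:

 cd /mnt/fuse/.snaps    # accessible
 cd /mnt/nfs/.snaps     # fails with an IO error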

4) Checked 'gluster volume status' for the volume; it showed that snapd on the server (through which the volume was mounted) was down
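
 The check, using the volume name from the logs below:

 gluster volume status vol1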

Log messages reported :
~~~~~~~~~~~~~~~~~~~~~~
[2014-11-12 13:32:35.074996] E [rpcsvc.c:617:rpcsvc_handle_rpc_call] 0-glusterd: Request received from non-privileged port. Failing request
[2014-11-12 13:32:35.106171] I [glusterd-pmap.c:271:pmap_registry_remove] 0-pmap: removing brick snapd-vol1 on port 49170
[2014-11-12 13:32:35.957462] W [socket.c:529:__socket_rwv] 0-management: readv on /var/run/22f16287a2b97835e475c3bbf5501834.socket failed (No data available)
[2014-11-12 13:32:36.109356] I [MSGID: 106006] [glusterd-handler.c:4238:__glusterd_snapd_rpc_notify] 0-management: snapd for volume vol1 has disconnected from glusterd.

5) Restarted glusterd and accessed .snaps - successful
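
 On an RHGS 3.0 (RHEL 6 based) node, this would be something like:

 service glusterd restart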

6) Accessed .snaps from the fuse and NFS mounts again; while trying to cd to .snaps from the NFS mount, snapd on the server always went down

7) Tried to stop the volume, start it again, and then access .snaps. From the fuse mount it was successful, but from the NFS mount, cd to .snaps hung
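
 A sketch of the stop/start cycle ('gluster volume stop' asks for confirmation):

 gluster volume stop vol1
 gluster volume start vol1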

Actual results:
==============
While trying to access .snaps from the NFS mount, snapd and the NFS server stopped on one node


Expected results:
=================
Accessing .snaps from the fuse and NFS mounts should be successful


Additional info:

Comment 8 Red Hat Bugzilla 2023-09-14 02:50:47 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

