Bug 1221941
| Summary: | glusterfsd: bricks crash while executing ls on nfs-ganesha vers=3 | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Saurabh <saujain> |
| Component: | upcall | Assignee: | bugs <bugs> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.7.0 | CC: | amukherj, ansubram, bugs, gluster-bugs, kkeithle, mmadhusu, mzywusko, ndevos, skoduri |
| Target Milestone: | --- | Keywords: | Patch, Triaged |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.7.2 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1227204 (view as bug list) | Environment: | |
| Last Closed: | 2015-06-20 09:48:20 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1227204 | | |
| Bug Blocks: | 1227206 | | |
| Attachments: | | | |
Description (Saurabh, 2015-05-15 10:02:22 UTC)
```
[root@nfs3 ~]# gluster volume status
Status of volume: gluster_shared_storage
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.148:/rhs/brick1/d1r1-share   49156     0          Y       3549
Brick 10.70.37.77:/rhs/brick1/d1r2-share    49155     0          Y       3329
Brick 10.70.37.76:/rhs/brick1/d2r1-share    49155     0          Y       3081
Brick 10.70.37.69:/rhs/brick1/d2r2-share    49155     0          Y       3346
Brick 10.70.37.148:/rhs/brick1/d3r1-share   49157     0          Y       3566
Brick 10.70.37.77:/rhs/brick1/d3r2-share    49156     0          Y       3346
Brick 10.70.37.76:/rhs/brick1/d4r1-share    49156     0          Y       3098
Brick 10.70.37.69:/rhs/brick1/d4r2-share    49156     0          Y       3363
Brick 10.70.37.148:/rhs/brick1/d5r1-share   49158     0          Y       3583
Brick 10.70.37.77:/rhs/brick1/d5r2-share    49157     0          Y       3363
Brick 10.70.37.76:/rhs/brick1/d6r1-share    49157     0          Y       3115
Brick 10.70.37.69:/rhs/brick1/d6r2-share    49157     0          Y       3380
Self-heal Daemon on localhost               N/A       N/A        Y       28389
Self-heal Daemon on 10.70.37.148            N/A       N/A        Y       22717
Self-heal Daemon on 10.70.37.77             N/A       N/A        Y       4784
Self-heal Daemon on 10.70.37.76             N/A       N/A        Y       25893

Task Status of Volume gluster_shared_storage
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: vol2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.148:/rhs/brick1/d1r1         49153     0          Y       22219
Brick 10.70.37.77:/rhs/brick1/d1r2          49152     0          Y       4321
Brick 10.70.37.76:/rhs/brick1/d2r1          N/A       N/A        N       25654
Brick 10.70.37.69:/rhs/brick1/d2r2          49152     0          Y       27914
Brick 10.70.37.148:/rhs/brick1/d3r1         49154     0          Y       18842
Brick 10.70.37.77:/rhs/brick1/d3r2          49153     0          Y       4343
Brick 10.70.37.76:/rhs/brick1/d4r1          N/A       N/A        N       25856
Brick 10.70.37.69:/rhs/brick1/d4r2          N/A       N/A        N       27934
Brick 10.70.37.148:/rhs/brick1/d5r1         49155     0          Y       22237
Brick 10.70.37.77:/rhs/brick1/d5r2          49154     0          Y       4361
Brick 10.70.37.76:/rhs/brick1/d6r1          N/A       N/A        N       25874
Brick 10.70.37.69:/rhs/brick1/d6r2          N/A       N/A        N       27952
Self-heal Daemon on localhost               N/A       N/A        Y       28389
Self-heal Daemon on 10.70.37.77             N/A       N/A        Y       4784
Self-heal Daemon on 10.70.37.148            N/A       N/A        Y       22717
Self-heal Daemon on 10.70.37.76             N/A       N/A        Y       25893

Task Status of Volume vol2
------------------------------------------------------------------------------
There are no active volume tasks
```
```
cat /etc/ganesha/exports/export.vol2.conf
# WARNING : Using Gluster CLI will overwrite manual
# changes made to this file. To avoid it, edit the
# file, copy it over to all the NFS-Ganesha nodes
# and run ganesha-ha.sh --refresh-config.
EXPORT{
    Export_Id = 2;
    Path = "/vol2";
    FSAL {
        name = GLUSTER;
        hostname = "localhost";
        volume = "vol2";
    }
    Access_type = RW;
    Squash = "No_root_squash";
    Pseudo = "/vol2";
    Protocols = "3", "4";
    Transports = "UDP", "TCP";
    SecType = "sys";
    Disable_ACL = True;
}
```
- Created attachment 1025731 [details]: coredump of the brick
- Created attachment 1025733 [details]: sosreport of node2
- Created attachment 1025735 [details]: sosreport of node3
http://review.gluster.org/10909 has been merged in the master branch; backporting can be done now.

Thanks Niels. I shall backport the fix.

COMMIT: http://review.gluster.org/11141 committed in release-3.7 by Kaleb KEITHLEY (kkeithle)

```
commit 922f9df5d7cdb7775dfa6fac4874105d5cc85c98
Author: Soumya Koduri <skoduri>
Date:   Thu Jun 4 11:25:35 2015 +0530

    Upcall/cache-invalidation: Ignore fops with frame->root->client not set

    Server-side internally generated fops like 'quota/marker' will not
    have any client associated with the frame. Hence we need a check for
    clients to be valid before processing for upcall cache invalidation.

    Also fixed an issue with initializing reaper-thread.

    Added a testcase to test the fix.

    Change-Id: If7419b98aca383f4b80711c10fef2e0b32498c57
    BUG: 1221941
    Signed-off-by: Soumya Koduri <skoduri>
    Reviewed-on: http://review.gluster.org/10909
    Reviewed-by: Kaleb KEITHLEY <kkeithle>
    Reviewed-by: jiffin tony Thottan <jthottan>
    Reviewed-by: Niels de Vos <ndevos>
    Reviewed-on: http://review.gluster.org/11141
    Tested-by: NetBSD Build System <jenkins.org>
```

This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.2, please reopen this bug report.

glusterfs-3.7.2 has been announced on the Gluster Packaging mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/packaging/2015-June/000006.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user