Bug 1262776
| Field | Value |
|---|---|
| Summary | nfs-ganesha: ganesha process coredump with "pub_glfs_fsync (glfd=0x7f1078018e70) at glfs-fops.c" |
| Product | [Red Hat Storage] Red Hat Gluster Storage |
| Component | nfs-ganesha |
| Version | rhgs-3.1 |
| Hardware | x86_64 |
| OS | Linux |
| Status | CLOSED ERRATA |
| Severity | high |
| Priority | unspecified |
| Reporter | Saurabh <saujain> |
| Assignee | Jiffin <jthottan> |
| QA Contact | Shashank Raj <sraj> |
| CC | jthottan, kkeithle, mzywusko, ndevos, nlevinki, rcyriac, rhinduja, sashinde, skoduri, smohan |
| Keywords | ZStream |
| Target Release | RHGS 3.1.3 |
| Fixed In Version | nfs-ganesha-2.3.1-1 |
| Doc Type | Bug Fix |
| Type | Bug |
| Bug Blocks | 1299184 |
| Last Closed | 2016-06-23 05:35:58 UTC |
Description (Saurabh, 2015-09-14 10:23:13 UTC)
Created attachment 1073179 [details]
nfs11 nfs-ganesha coredump
The fix has been posted upstream: https://review.gerrithub.io/#/c/246586/

Verified this bug with glusterfs-3.7.9-1 and ganesha-2.3.1-3 using the steps below (illustrative command sketches for the individual steps follow at the end of this report):

1) Create a 4-node cluster and configure nfs-ganesha on it.

2) Create a 6x2 distributed-replicate volume and start it:

```
Volume Name: testvolume
Type: Distributed-Replicate
Volume ID: 814b88fe-30a4-47cc-841b-beaf7b348254
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.180:/bricks/brick0/b0
Brick2: 10.70.37.158:/bricks/brick0/b0
Brick3: 10.70.37.127:/bricks/brick0/b0
Brick4: 10.70.37.174:/bricks/brick0/b0
Brick5: 10.70.37.180:/bricks/brick1/b1
Brick6: 10.70.37.158:/bricks/brick1/b1
Brick7: 10.70.37.127:/bricks/brick1/b1
Brick8: 10.70.37.174:/bricks/brick1/b1
Brick9: 10.70.37.180:/bricks/brick2/b2
Brick10: 10.70.37.158:/bricks/brick2/b2
Brick11: 10.70.37.127:/bricks/brick2/b2
Brick12: 10.70.37.174:/bricks/brick2/b2
Options Reconfigured:
ganesha.enable: on
features.cache-invalidation: on
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
nfs.disable: on
performance.readdir-ahead: on
cluster.enable-shared-storage: enable
nfs-ganesha: enable
```

3) Enable ACLs on the volume by setting `Disable_ACL = false` in the export file:

```
[root@dhcp37-180 exports]# cat export.testvolume.conf
# WARNING : Using Gluster CLI will overwrite manual
# changes made to this file. To avoid it, edit the
# file and run ganesha-ha.sh --refresh-config.

EXPORT{
    Export_Id = 2;
    Path = "/testvolume";
    FSAL {
        name = GLUSTER;
        hostname = "localhost";
        volume = "testvolume";
    }
    Access_type = RW;
    Disable_ACL = false;
    Squash = "No_root_squash";
    Pseudo = "/testvolume";
    Protocols = "3", "4";
    Transports = "UDP", "TCP";
    SecType = "sys";
}
```

4) Enable quota on the volume and set limit-usage to 25GB on /:

```
[root@dhcp37-180 exports]# gluster volume quota testvolume list
                  Path                   Hard-limit  Soft-limit     Used  Available  Soft-limit exceeded?  Hard-limit exceeded?
--------------------------------------------------------------------------------------------------------------------------------
/                                          25.0GB    80%(20.0GB)  10.9GB     14.1GB                    No                    No
```

5) Mount the volume with version=4 and start creating I/O on the mount point.

6) While the I/O is in progress, perform add-brick and rebalance on the volume.

7) No crash is seen on any node; the I/O continues throughout the process and does not hang.

Based on the above observations, marking this bug as Verified.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2016:1288
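A minimal sketch of steps 1 and 2, using the node addresses and brick paths from the volume info above. The exact peer-probe and ganesha-setup sequence is not recorded in the report, so this is an approximation with the standard gluster CLI:

```sh
# From one node (10.70.37.180), form the 4-node trusted pool.
gluster peer probe 10.70.37.158
gluster peer probe 10.70.37.127
gluster peer probe 10.70.37.174

# Create the 6x2 distributed-replicate volume (12 bricks, replica 2) and start it.
# Brick order follows the listing above, so replica pairs span two nodes.
gluster volume create testvolume replica 2 \
  10.70.37.180:/bricks/brick0/b0 10.70.37.158:/bricks/brick0/b0 \
  10.70.37.127:/bricks/brick0/b0 10.70.37.174:/bricks/brick0/b0 \
  10.70.37.180:/bricks/brick1/b1 10.70.37.158:/bricks/brick1/b1 \
  10.70.37.127:/bricks/brick1/b1 10.70.37.174:/bricks/brick1/b1 \
  10.70.37.180:/bricks/brick2/b2 10.70.37.158:/bricks/brick2/b2 \
  10.70.37.127:/bricks/brick2/b2 10.70.37.174:/bricks/brick2/b2
gluster volume start testvolume

# Enable shared storage, bring up nfs-ganesha cluster-wide, then export the volume
# (matches the cluster.enable-shared-storage / nfs-ganesha / ganesha.enable options above).
gluster volume set all cluster.enable-shared-storage enable
gluster nfs-ganesha enable
gluster volume set testvolume ganesha.enable on
```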
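For step 3, ACLs are enabled by editing `Disable_ACL = false` into the export file shown above and reloading the export. The script path, HA config directory, and argument order below are assumptions based on the warning header in that file and may differ by version:

```sh
# After editing export.testvolume.conf on the shared storage, reload the export.
# Path and arguments are assumptions; see the warning header in the export file.
/usr/libexec/ganesha/ganesha-ha.sh --refresh-config \
  /var/run/gluster/shared_storage/nfs-ganesha testvolume
```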
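Step 4 maps to the standard quota CLI. Only the resulting `list` output appears in the report, so the enable and limit-usage commands are inferred:

```sh
gluster volume quota testvolume enable
gluster volume quota testvolume limit-usage / 25GB
gluster volume quota testvolume list   # should show the 25.0GB hard limit on /
```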
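Step 5, assuming a hypothetical HA virtual IP fronting the ganesha cluster and /mnt/testvolume as the mount point (neither is recorded in the report):

```sh
# Placeholder HA virtual IP for the ganesha cluster (an assumption, not from the report).
VIP=10.70.37.200
mkdir -p /mnt/testvolume
mount -t nfs -o vers=4 ${VIP}:/testvolume /mnt/testvolume

# Keep I/O running while the next step executes, e.g. a simple dd loop.
for i in $(seq 1 100); do
    dd if=/dev/zero of=/mnt/testvolume/file.$i bs=1M count=100
done
```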
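Step 6, with hypothetical new brick paths (the report does not record which bricks were added). For a replica-2 volume, bricks must be added in multiples of two:

```sh
gluster volume add-brick testvolume \
  10.70.37.180:/bricks/brick3/b3 10.70.37.158:/bricks/brick3/b3
gluster volume rebalance testvolume start
gluster volume rebalance testvolume status   # watch progress while the I/O continues
```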