Bug 1263581 - nfs-ganesha: nfsd coredumps once quota limits cross while creating a file larger than the quota limit set
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: nfs-ganesha
Version: 3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.1.1
Assigned To: Soumya Koduri
QA Contact: Saurabh
Keywords: ZStream
Depends On: 1263094
Blocks: 1251815
Reported: 2015-09-16 04:50 EDT by Saurabh
Modified: 2016-01-19 01:15 EST
CC List: 9 users

See Also:
Fixed In Version: nfs-ganesha-2.2.0-9
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-10-05 03:27:07 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


External Trackers
Tracker: Red Hat Product Errata
Tracker ID: RHSA-2015:1845
Priority: normal
Status: SHIPPED_LIVE
Summary: Moderate: Red Hat Gluster Storage 3.1 update
Last Updated: 2015-10-05 07:06:22 EDT

Description Saurabh 2015-09-16 04:50:59 EDT
Description of problem:
nfsd dumps core once the quota limit is crossed while creating a file larger than the configured quota limit.

# gluster volume info vol1
 
Volume Name: vol1
Type: Distributed-Replicate
Volume ID: 3176319c-c033-4d81-a1c2-e46d92a94e9c
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.44.108:/rhs/brick1/d1r11
Brick2: 10.70.44.109:/rhs/brick1/d1r21
Brick3: 10.70.44.110:/rhs/brick1/d2r11
Brick4: 10.70.44.111:/rhs/brick1/d2r21
Brick5: 10.70.44.108:/rhs/brick1/d3r11
Brick6: 10.70.44.109:/rhs/brick1/d3r21
Brick7: 10.70.44.110:/rhs/brick1/d4r11
Brick8: 10.70.44.111:/rhs/brick1/d4r21
Brick9: 10.70.44.108:/rhs/brick1/d5r11
Brick10: 10.70.44.109:/rhs/brick1/d5r21
Brick11: 10.70.44.110:/rhs/brick1/d6r11
Brick12: 10.70.44.111:/rhs/brick1/d6r21
Options Reconfigured:
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
ganesha.enable: on
features.cache-invalidation: on
nfs.disable: on
performance.readdir-ahead: on
nfs-ganesha: enable
cluster.enable-shared-storage: enable
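
For context, the quota- and ganesha-related options above typically come from the following CLI operations (a hedged mapping; the exact commands run on this setup are not captured in the report):

# gluster volume quota vol1 enable                       # turns on features.quota and features.inode-quota
# gluster volume quota vol1 limit-usage / 2GB            # hard limit that is exceeded in this test
# gluster volume set vol1 features.quota-deem-statfs on  # make df on the mount honour the quota
# gluster volume set vol1 ganesha.enable on              # export vol1 through nfs-ganesha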

Version-Release number of selected component (if applicable):
glusterfs-3.7.1-15.el7rhgs.x86_64
nfs-ganesha-2.2.0-7.el7rhgs.x86_64

How reproducible:
always

Steps to Reproduce:
1. Create a volume of 6x2 type.
2. Enable quota on the volume.
3. Set a quota limit of 2 GB.
4. Configure nfs-ganesha.
5. Mount the volume using vers=3.
6. Use dd to create a file of 3 GB.
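
A rough sketch of the client-side trigger (steps 5 and 6); the server address and mount point here are assumptions, and the dd invocation mirrors the one in comment 4:

# mount -t nfs -o vers=3 10.70.44.108:/vol1 /mnt/nfs
# cd /mnt/nfs
# dd if=/dev/urandom of=f.1 bs=1024 count=3145728    # ~3 GiB, well past the 2 GB quota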

Actual results:
(gdb) bt
#0  0x00007f74c81c6b22 in pub_glfs_pwritev (glfd=0x7f74a832b930, iovec=iovec@entry=0x7f74c97f87f0, iovcnt=iovcnt@entry=1, offset=2352373760, flags=0) at glfs-fops.c:936
#1  0x00007f74c81c6e7a in pub_glfs_pwrite (glfd=<optimized out>, buf=<optimized out>, count=<optimized out>, offset=<optimized out>, flags=<optimized out>) at glfs-fops.c:1051
#2  0x00007f74c85ebbe0 in file_write () from /usr/lib64/ganesha/libfsalgluster.so
#3  0x00000000004d458e in cache_inode_rdwr_plus ()
#4  0x00000000004d53a9 in cache_inode_rdwr ()
#5  0x000000000045db41 in nfs3_write ()
#6  0x0000000000453a01 in nfs_rpc_execute ()
#7  0x00000000004545ad in worker_run ()
#8  0x000000000050afeb in fridgethr_start_routine ()
#9  0x00007f74d94f4df5 in start_thread () from /lib64/libpthread.so.0
#10 0x00007f74d901a1ad in clone () from /lib64/libc.so.6
(gdb) f 0
#0  0x00007f74c81c6b22 in pub_glfs_pwritev (glfd=0x7f74a832b930, iovec=iovec@entry=0x7f74c97f87f0, iovcnt=iovcnt@entry=1, offset=2352373760, flags=0) at glfs-fops.c:936
936		__GLFS_ENTRY_VALIDATE_FD (glfd, invalid_fs);
(gdb) p * glfd
$1 = {openfds = {next = 0x0, prev = 0x7f74a000ce90}, fs = 0x7f74a8324f20, offset = 140139014803232, fd = 0x7f74a8324f20, entries = {next = 0x78, prev = 0x78}, next = 0x7800000001, 
  readdirbuf = 0x10200000002000}
(gdb) p * glfd->fd
$2 = {pid = 0, flags = -1473070784, refcount = 32628, inode_list = {next = 0x1, prev = 0x7f74a832baa0}, inode = 0x7800000078, lock = 8192, _ctx = 0x0, xl_count = 0, lk_ctx = 0x0, anonymous = _gf_false}
(gdb) p * glfd->fd->inode
Cannot access memory at address 0x7800000078
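
For anyone re-running this analysis, a hedged example of opening such a core (the binary path and core location are assumptions; adjust to wherever abrt or the kernel placed the dump):

# debuginfo-install nfs-ganesha glusterfs    # install symbols for ganesha.nfsd and libgfapi
# gdb /usr/bin/ganesha.nfsd /path/to/core    # then: bt, f 0, p *glfd, p *glfd->fd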

Expected results:
No coredump; the server should respond with "Disk quota exceeded".

Additional info:
Comment 2 Jiffin 2015-09-16 07:25:07 EDT
The patch has been posted at https://review.gerrithub.io/#/c/246586/
Comment 4 Saurabh 2015-09-22 02:00:30 EDT
Executed the same test as described in the Description section.

Output from the server:
# gluster volume quota vol1 list
                  Path                   Hard-limit Soft-limit   Used  Available  Soft-limit exceeded? Hard-limit exceeded?
---------------------------------------------------------------------------------------------------------------------------
/                                          2.0GB       80%       2.1GB  0Bytes             Yes                  Yes


Output from the client:
# time dd if=/dev/urandom of=f.1 bs=1024 count=3145728
dd: error writing ‘f.1’: Disk quota exceeded
2231918+0 records in
2231917+0 records out
2285483008 bytes (2.3 GB) copied, 270.861 s, 8.4 MB/s

real	4m30.929s
user	0m0.716s
sys	4m1.892s

# rpm -qa | grep glusterfs
glusterfs-3.7.1-15.el7rhgs.x86_64
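
As an additional check (suggested here, not part of the original verification), the absence of a crash can be confirmed on the server:

# pgrep -a ganesha.nfsd          # the nfs-ganesha daemon should still be running
# systemctl status nfs-ganesha   # and the service should not report a failure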
Comment 6 errata-xmlrpc 2015-10-05 03:27:07 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1845.html
