Bug 1263581 - nfs-ganesha: nfsd coredumps once the quota limit is crossed while creating a file larger than the set quota limit
Summary: nfs-ganesha: nfsd coredumps once the quota limit is crossed while creating a file larger than the set quota limit
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: nfs-ganesha
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.1.1
Assignee: Soumya Koduri
QA Contact: Saurabh
URL:
Whiteboard:
Depends On: 1263094
Blocks: 1251815
 
Reported: 2015-09-16 08:50 UTC by Saurabh
Modified: 2016-01-19 06:15 UTC
CC: 9 users

Fixed In Version: nfs-ganesha-2.2.0-9
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-10-05 07:27:07 UTC
Embargoed:




Links:
System ID: Red Hat Product Errata RHSA-2015:1845
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Moderate: Red Hat Gluster Storage 3.1 update
Last Updated: 2015-10-05 11:06:22 UTC

Description Saurabh 2015-09-16 08:50:59 UTC
Description of problem:
nfsd coredumps once the quota limit is crossed while creating a file larger than the set quota limit.

# gluster volume info vol1
 
Volume Name: vol1
Type: Distributed-Replicate
Volume ID: 3176319c-c033-4d81-a1c2-e46d92a94e9c
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.44.108:/rhs/brick1/d1r11
Brick2: 10.70.44.109:/rhs/brick1/d1r21
Brick3: 10.70.44.110:/rhs/brick1/d2r11
Brick4: 10.70.44.111:/rhs/brick1/d2r21
Brick5: 10.70.44.108:/rhs/brick1/d3r11
Brick6: 10.70.44.109:/rhs/brick1/d3r21
Brick7: 10.70.44.110:/rhs/brick1/d4r11
Brick8: 10.70.44.111:/rhs/brick1/d4r21
Brick9: 10.70.44.108:/rhs/brick1/d5r11
Brick10: 10.70.44.109:/rhs/brick1/d5r21
Brick11: 10.70.44.110:/rhs/brick1/d6r11
Brick12: 10.70.44.111:/rhs/brick1/d6r21
Options Reconfigured:
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
ganesha.enable: on
features.cache-invalidation: on
nfs.disable: on
performance.readdir-ahead: on
nfs-ganesha: enable
cluster.enable-shared-storage: enable

Version-Release number of selected component (if applicable):
glusterfs-3.7.1-15.el7rhgs.x86_64
nfs-ganesha-2.2.0-7.el7rhgs.x86_64

How reproducible:
always

Steps to Reproduce:
1. Create a volume of 6x2 type.
2. Enable quota on the volume.
3. Set a quota limit of 2 GB.
4. Configure nfs-ganesha.
5. Mount the volume using vers=3.
6. Use dd to create a 3 GB file (see the command sketch after these steps).
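
A command-level sketch of these steps (a minimal sketch, not the exact QE procedure: the hostname is taken from the brick list above, while the /mnt/vol1 mount point and the choice of 10.70.44.108 as the NFS endpoint are assumptions):

On a storage node (assumes the nfs-ganesha cluster from step 4 is already configured):
# gluster volume quota vol1 enable
# gluster volume quota vol1 limit-usage / 2GB
# gluster volume set vol1 ganesha.enable on

On the client:
# mount -t nfs -o vers=3 10.70.44.108:/vol1 /mnt/vol1
# dd if=/dev/urandom of=/mnt/vol1/f.1 bs=1M count=3072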

Actual results:
(gdb) bt
#0  0x00007f74c81c6b22 in pub_glfs_pwritev (glfd=0x7f74a832b930, iovec=iovec@entry=0x7f74c97f87f0, iovcnt=iovcnt@entry=1, offset=2352373760, flags=0) at glfs-fops.c:936
#1  0x00007f74c81c6e7a in pub_glfs_pwrite (glfd=<optimized out>, buf=<optimized out>, count=<optimized out>, offset=<optimized out>, flags=<optimized out>) at glfs-fops.c:1051
#2  0x00007f74c85ebbe0 in file_write () from /usr/lib64/ganesha/libfsalgluster.so
#3  0x00000000004d458e in cache_inode_rdwr_plus ()
#4  0x00000000004d53a9 in cache_inode_rdwr ()
#5  0x000000000045db41 in nfs3_write ()
#6  0x0000000000453a01 in nfs_rpc_execute ()
#7  0x00000000004545ad in worker_run ()
#8  0x000000000050afeb in fridgethr_start_routine ()
#9  0x00007f74d94f4df5 in start_thread () from /lib64/libpthread.so.0
#10 0x00007f74d901a1ad in clone () from /lib64/libc.so.6
(gdb) f 0
#0  0x00007f74c81c6b22 in pub_glfs_pwritev (glfd=0x7f74a832b930, iovec=iovec@entry=0x7f74c97f87f0, iovcnt=iovcnt@entry=1, offset=2352373760, flags=0) at glfs-fops.c:936
936		__GLFS_ENTRY_VALIDATE_FD (glfd, invalid_fs);
(gdb) p * glfd
$1 = {openfds = {next = 0x0, prev = 0x7f74a000ce90}, fs = 0x7f74a8324f20, offset = 140139014803232, fd = 0x7f74a8324f20, entries = {next = 0x78, prev = 0x78}, next = 0x7800000001, 
  readdirbuf = 0x10200000002000}
(gdb) p * glfd->fd
$2 = {pid = 0, flags = -1473070784, refcount = 32628, inode_list = {next = 0x1, prev = 0x7f74a832baa0}, inode = 0x7800000078, lock = 8192, _ctx = 0x0, xl_count = 0, lk_ctx = 0x0, anonymous = _gf_false}
(gdb) p * glfd->fd->inode
Cannot access memory at address 0x7800000078

The glfd contents are plainly garbage: offset holds a pointer-sized value, the entries list points to 0x78, and glfd->fd->inode cannot be dereferenced at all. This is consistent with the glfs_fd having been freed, and its memory reused, before pub_glfs_pwritev dereferenced it.

Expected results:
No coredump; the server should respond with "Disk quota exceeded" (NFS3ERR_DQUOT) once the limit is crossed.
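
Concretely (hypothetical session, same assumed mount point as in the sketch above), the write should fail cleanly on the client:

# dd if=/dev/urandom of=/mnt/vol1/f.1 bs=1M count=3072
dd: error writing '/mnt/vol1/f.1': Disk quota exceeded

while ganesha.nfsd on the server keeps running and dumps no core:

# pgrep -a ganesha.nfsd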

Additional info:

Comment 2 Jiffin 2015-09-16 11:25:07 UTC
The patch has been posted at https://review.gerrithub.io/#/c/246586/

Comment 4 Saurabh 2015-09-22 06:00:30 UTC
Executing a test similar to the one described in the Description section:

logs from the server,
# gluster volume quota vol1 list
                  Path                   Hard-limit Soft-limit   Used  Available  Soft-limit exceeded? Hard-limit exceeded?
---------------------------------------------------------------------------------------------------------------------------
/                                          2.0GB       80%       2.1GB  0Bytes             Yes                  Yes


logs from the client,
# time dd if=/dev/urandom of=f.1 bs=1024 count=3145728
dd: error writing ‘f.1’: Disk quota exceeded
2231918+0 records in
2231917+0 records out
2285483008 bytes (2.3 GB) copied, 270.861 s, 8.4 MB/s

real	4m30.929s
user	0m0.716s
sys	4m1.892s

# rpm -qa | grep glusterfs
glusterfs-3.7.1-15.el7rhgs.x86_64
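
The nfs-ganesha build is not captured above; per the Fixed In Version field, the server under verification should be running nfs-ganesha-2.2.0-9 or later, which can be confirmed with:

# rpm -qa | grep nfs-ganesha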

Comment 6 errata-xmlrpc 2015-10-05 07:27:07 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1845.html

