Bug 1095267 - glusterfsd crashed when quota was enabled, then disabled and enabled again
Summary: glusterfsd crashed when quota was enabled, then disabled and enabled again
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: quota
Version: rhgs-3.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 3.0.0
Assignee: Raghavendra G
QA Contact: Shruti Sampat
URL:
Whiteboard:
Duplicates: 1103152
Depends On:
Blocks: 1103636
 
Reported: 2014-05-07 11:44 UTC by Shruti Sampat
Modified: 2016-09-17 12:40 UTC
CC List: 8 users

Fixed In Version: glusterfs-3.6.0.13-1
Doc Type: Bug Fix
Doc Text:
Previously, when quota was enabled again after having been disabled, the brick process tried to access a NULL transport object, leading to a crash. With this fix, a new transport connection is created every time quota is enabled.
Clone Of:
Clones: 1103636
Environment:
Last Closed: 2014-09-22 19:36:49 UTC
Embargoed:


Attachments
gdb backtrace (18.99 KB, text/plain), 2014-05-07 11:44 UTC, Shruti Sampat


Links
Red Hat Product Errata RHEA-2014:1278 (SHIPPED_LIVE): Red Hat Storage Server 3.0 bug fix and enhancement update, last updated 2014-09-22 23:26:55 UTC

Description Shruti Sampat 2014-05-07 11:44:00 UTC
Created attachment 893229 [details]
gdb backtrace

Description of problem:
--------------------------

glusterfsd (brick) processes of a distributed-replicate volume crashed after quota was re-enabled on a volume on which quota had previously been enabled and then disabled.

The following was seen when the core was examined with gdb -
-------------------------------------------------------------------------------

(gdb) p *rpc->conn.rpc_clnt
$4 = {lock = {__data = {__lock = 0, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __list = {__prev = 0x0, __next = 0x0}}, 
    __size = '\000' <repeats 39 times>, __align = 0}, notifyfn = 0x7f34b77c2690 <quota_enforcer_notify>, conn = {lock = {__data = {__lock = 0, __count = 0, 
        __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = '\000' <repeats 39 times>, __align = 0}, trans = 0x0, 
    config = {rpc_timeout = 0, remote_port = 0, remote_host = 0x0}, reconnect = 0x0, timer = 0x0, ping_timer = 0x0, rpc_clnt = 0x2344ef0, connected = 0 '\000', 
    saved_frames = 0x233ecc0, frame_timeout = 1800, last_sent = {tv_sec = 1399457050, tv_usec = 294940}, last_received = {tv_sec = 1399456455, tv_usec = 40203}, 
    ping_started = 0, name = 0x233c240 "dis_rep_vol-quota"}, mydata = 0x22ebd60, xid = 160030, programs = {next = 0x2344fd8, prev = 0x2344fd8}, reqpool = 0x233c1c0, 
  saved_frames_pool = 0x2347520, ctx = 0x22ab010, refcount = 1, auth_null = 0, disabled = 1 '\001'}

Note that 'trans' is NULL in the output above.

The full gdb backtrace is attached.
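
To illustrate the failure mode (and the fix later described in the Doc Text), here is a minimal, self-contained C sketch. The types and helpers (struct transport, make_quota_transport, quota_rpc) are hypothetical stand-ins, not the actual glusterfs rpc_clnt/rpc_transport code; it only shows the shape of the bug (NULL trans reused on re-enable) and of the fix (a fresh transport built every time quota is enabled).

#include <stdio.h>
#include <stdlib.h>

struct transport { int fd; };           /* stand-in for rpc_transport_t */

struct rpc_client {                     /* stand-in for the rpc_clnt seen in gdb */
    struct transport *trans;            /* NULL after quota is disabled */
    int disabled;
};

static struct rpc_client quota_rpc;     /* quota enforcer client, reused across enable/disable */

static struct transport *make_quota_transport(void)
{
    struct transport *t = calloc(1, sizeof(*t));
    if (t)
        t->fd = 1;                      /* placeholder for a real connection */
    return t;
}

static void quota_disable(void)
{
    free(quota_rpc.trans);              /* transport torn down ...          */
    quota_rpc.trans = NULL;             /* ... but the client object is kept */
    quota_rpc.disabled = 1;
}

/* Buggy pattern: re-enabling only flips the flag; the next RPC submitted
 * dereferences quota_rpc.trans, which is still NULL, and the brick crashes. */
static void quota_enable_buggy(void)
{
    quota_rpc.disabled = 0;
}

/* Fixed pattern (per the Doc Text): create a new transport connection every
 * time quota is enabled instead of trusting leftover state. */
static void quota_enable_fixed(void)
{
    free(quota_rpc.trans);
    quota_rpc.trans = make_quota_transport();
    quota_rpc.disabled = 0;
}

int main(void)
{
    quota_enable_fixed();
    quota_disable();
    quota_enable_buggy();               /* trans stays NULL here */
    printf("after buggy re-enable: trans = %p\n", (void *)quota_rpc.trans);
    quota_enable_fixed();               /* trans is valid again */
    printf("after fixed re-enable: trans = %p\n", (void *)quota_rpc.trans);
    free(quota_rpc.trans);
    return 0;
}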

Version-Release number of selected component (if applicable):
glusterfs-3.5qa2-0.369.git500a656.el6rhs.x86_64

How reproducible:
Saw it once in my setup.

Steps to Reproduce:
1. Enable quota on a distributed-replicate volume and set the limit. The volume is mounted on a client and data written to cross the soft limit set on a particular directory.
2. Quota is disabled after a while.
3. Enabled quota again after a while.
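
For reference, the steps above correspond roughly to commands like the following. The volume name is taken from the gdb output above; the server, directory, and size are placeholders, and the soft limit is assumed to be the default (a percentage of the hard limit):

# gluster volume quota dis_rep_vol enable
# gluster volume quota dis_rep_vol limit-usage /dir 1GB
# mount -t glusterfs <server>:/dis_rep_vol /mnt
# dd if=/dev/zero of=/mnt/dir/datafile bs=1M count=900
# gluster volume quota dis_rep_vol disable
# gluster volume quota dis_rep_vol enable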

Actual results:
Brick processes crashed.

Expected results:
Brick processes should not crash.

Additional info:

Comment 3 Raghavendra G 2014-06-02 08:41:28 UTC
*** Bug 1103152 has been marked as a duplicate of this bug. ***

Comment 5 Shruti Sampat 2014-06-09 11:20:03 UTC
Verified as fixed in glusterfs-3.6.0.14-1.el6rhs.x86_64

Performed the following steps for both distribute and distributed-replicate volumes -

1. Enable quota on the volume, and set limit on a directory.
2. Cause the usage of the directory to exceed the limit set.
3. Disable quota on the volume and enable it again.

The glusterfsd crash was not seen.

Comment 6 Pavithra 2014-07-23 07:19:35 UTC
Hi Du,

Please review the edited doc text for technical accuracy and sign off.

Comment 8 errata-xmlrpc 2014-09-22 19:36:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-1278.html

