Bug 1186657 - Inode infinite loop leads to glusterfsd segfault
Summary: Inode infinite loop leads to glusterfsd segfault
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: core
Version: rhgs-3.0
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.0.4
Assignee: rjoseph
QA Contact: Amit Chaurasia
URL:
Whiteboard:
Depends On: 1156178 1158226 1159225
Blocks: 1155395 1182947 1193757
 
Reported: 2015-01-28 09:17 UTC by Nithya Balachandran
Modified: 2016-09-17 14:37 UTC
CC List: 16 users

Fixed In Version: glusterfs-3.6.0.46-1
Doc Type: Bug Fix
Doc Text:
Clone Of: 1158226
Clones: 1193757
Environment:
Last Closed: 2015-03-26 06:35:44 UTC
Target Upstream Version:




Links
  System ID:    Red Hat Product Errata RHBA-2015:0682
  Private:      0
  Priority:     normal
  Status:       SHIPPED_LIVE
  Summary:      Red Hat Storage 3.0 enhancement and bug fix update #4
  Last Updated: 2015-03-26 10:32:55 UTC

Comment 4 Amit Chaurasia 2015-03-10 10:45:38 UTC
The issue was that a dentry in the inode table was skewed such that a file's dentry occurred before its parent's, turning the parent chain into a cycle: parent > child > parent.

This triggered an infinite loop in the inode table, leaving loc->path NULL; since there was no way to handle this NULL path, a crash dump was triggered.

This dentry issue in the inode table was caused by a race in readdirp.
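
For illustration, here is a minimal C sketch of why such a cycle is fatal (this is not the actual glusterfs inode.c code; struct dentry and the function names below are simplified, hypothetical stand-ins): building a path means walking parent pointers towards the root, and a parent > child > parent chain never reaches NULL, so an unguarded walk spins forever. A guard such as Floyd's two-pointer cycle detection could at least fail the resolution cleanly:

#include <stdio.h>

/* Hypothetical, simplified stand-in for the inode-table dentry;
   the real structures live in libglusterfs/src/inode.c. */
struct dentry {
    const char    *name;
    struct dentry *parent;   /* NULL only for the root dentry */
};

/* Unguarded walk, as the path resolution effectively behaves: with
   a sane chain (child -> ... -> root) this terminates, but with
   parent > child > parent it never sees NULL and loops forever. */
static int depth_unguarded(const struct dentry *d)
{
    int depth = 0;
    while (d != NULL) {
        depth++;
        d = d->parent;
    }
    return depth;
}

/* Guarded walk: Floyd's two-pointer cycle detection.  Returns 1 if
   the parent chain loops, letting the caller fail the lookup instead
   of ending up with a NULL loc->path and crashing later. */
static int parent_chain_has_cycle(const struct dentry *d)
{
    const struct dentry *slow = d, *fast = d;

    while (fast != NULL && fast->parent != NULL) {
        slow = slow->parent;
        fast = fast->parent->parent;
        if (slow == fast)
            return 1;   /* parent > child > parent */
    }
    return 0;           /* reached the root: chain is sane */
}

int main(void)
{
    struct dentry root = { "/",    NULL  };
    struct dentry dir  = { "dir",  &root };
    struct dentry file = { "file", &dir  };

    printf("sane chain loops?   %d\n", parent_chain_has_cycle(&file));
    printf("sane chain depth:   %d\n", depth_unguarded(&file));

    dir.parent = &file;  /* corrupt the table: parent > child > parent */
    printf("skewed chain loops? %d\n", parent_chain_has_cycle(&file));
    /* depth_unguarded(&file) would now never return. */
    return 0;
}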

I followed the simple steps suggested:

1. Create a 3-level directory structure:
/fuse_mnt1/test_1186657/bug1/sub_dir

2. cd into the directory and start creating and deleting files:

while true; do touch a; rm -f a; done

3. From another mount point, start a recursive lookup:
while true; do ls -lR > /dev/null; done

4. I started this file and link creation from multiple terminals, for different files, on different mount points.

The idea was to try to create a race in readdirp while entries were being added to and deleted from the inode table; a single-program C sketch of the same stress pattern follows below.
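
For reference, the shell loops above can be folded into one POSIX C driver (a minimal sketch; the file name "a" matches the loop above, but the single-directory layout and the program name stress.c are illustrative only, since the actual verification used separate terminals and separate mount points):

/* stress.c: the child creates and deletes a file while the parent
   repeatedly lists the same directory, widening the window for the
   readdirp race.  Build and run against a FUSE mount, e.g.:
     cc -o stress stress.c
     ./stress /fuse_mnt1/test_1186657/bug1/sub_dir                 */
#include <dirent.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    char path[4096];

    if (argc != 2) {
        fprintf(stderr, "usage: %s <dir-on-fuse-mount>\n", argv[0]);
        return 1;
    }
    snprintf(path, sizeof(path), "%s/a", argv[1]);

    if (fork() == 0) {
        /* child: while true; do touch a; rm -f a; done */
        for (;;) {
            int fd = open(path, O_CREAT | O_WRONLY, 0644);
            if (fd >= 0)
                close(fd);
            unlink(path);
        }
    }

    /* parent: while true; do ls -lR > /dev/null; done */
    for (;;) {
        DIR *dir = opendir(argv[1]);
        struct dirent *de;

        if (dir == NULL)
            continue;
        while ((de = readdir(dir)) != NULL)
            ;   /* each pass is intended to drive readdirp */
        closedir(dir);
    }
}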

I saw neither the crash nor the error messages in the log files.

Note: I could see the following quota messages in the log files:
==> /var/log/glusterfs/bricks/rhs-brick1-gv0.log <==
[2015-03-10 15:22:47.771144] W [quota.c:1773:quota_unlink_cbk] 0-gv0-quota: quota context not set in inode (gfid:51a0154d-318c-42dc-9d31-12e3eaa13d15)

==> /var/log/glusterfs/bricks/rhs-brick3-gv0.log <==
[2015-03-10 15:22:47.774715] W [quota.c:1773:quota_unlink_cbk] 0-gv0-quota: quota context not set in inode (gfid:51a0154d-318c-42dc-9d31-12e3eaa13d15)


Marking the bug verified.

Comment 6 errata-xmlrpc 2015-03-26 06:35:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-0682.html

