Bug 1005063 - fd leaks observed when "rm -rf" is interrupted by ctrl+c on cifs mount
Status: CLOSED EOL
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: samba
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assigned To: Poornima G
QA Contact: Lalatendu Mohanty
Sub Component: core
Keywords: ZStream
Depends On:
Blocks:
Reported: 2013-09-06 03:30 EDT by spandura
Modified: 2015-12-03 12:21 EST
CC: 4 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-12-03 12:21:11 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description spandura 2013-09-06 03:30:26 EDT
Description of problem:
======================
In a distributed-replicate volume, fd leaks were observed in the brick process when "rm -rf *", run from a CIFS mount, was interrupted with "ctrl+c".

Version-Release number of selected component (if applicable):
==============================================================
glusterfs 3.4.0.31rhs built on Sep  5 2013 08:23:16

How reproducible:
================
Often

Steps to Reproduce:
====================
1. Create a distributed-replicate volume (2x2) and start the volume.

2. Create a CIFS mount. From the CIFS mount, execute: dbench -s -F -S --one-byte-write-fix --stat-check 10

3. After some time, stop dbench with "ctrl+c".

4. From the mount point, execute: rm -rf *

5. While rm is in progress, interrupt it with "ctrl+c". (See the consolidated shell sketch below.)
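
A minimal shell sketch of steps 1-5, using the brick list shown in the "Additional info" section below; the Samba share name, mount point, and credentials are assumptions and depend on how the volume is exported over Samba:

# On a storage node: create and start the 2x2 distributed-replicate volume
# (hostnames and brick paths as listed under "gluster v info" below).
gluster volume create vol_dis_1_rep_2 replica 2 \
    fan.lab.eng.blr.redhat.com:/rhs/bricks/vol_dis_1_rep_2_b0 \
    mia.lab.eng.blr.redhat.com:/rhs/bricks/vol_dis_1_rep_2_b1 \
    fan:/rhs/bricks/vol_dis_1_rep_2_b2 \
    mia:/rhs/bricks/vol_dis_1_rep_2_b3
gluster volume start vol_dis_1_rep_2

# On the client: mount the volume over CIFS (share name and credentials are
# assumptions; adjust to the local Samba configuration).
mkdir -p /mnt/cifs
mount -t cifs //mia/gluster-vol_dis_1_rep_2 /mnt/cifs -o username=testuser,password=testpass

# Generate load, interrupt it, then interrupt the cleanup.
cd /mnt/cifs
dbench -s -F -S --one-byte-write-fix --stat-check 10   # stop with ctrl+c after some time
rm -rf *                                               # interrupt with ctrl+c while in progress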

Actual results:
==================
fd leaks were observed in the brick process.

root@mia [Sep-06-2013- 7:24:59] >ls -l /proc/`cat /var/lib/glusterd/vols/vol_dis_1_rep_2/run/mia-rhs-bricks-vol_dis_1_rep_2_b3.pid`/fd  ; ls -l /proc/`cat /var/lib/glusterd/vols/vol_dis_1_rep_2/run/mia.lab.eng.blr.redhat.com-rhs-bricks-vol_dis_1_rep_2_b1.pid`/fd | grep "deleted"
total 0
lr-x------ 1 root root 64 Sep  6 07:00 0 -> /dev/null
l-wx------ 1 root root 64 Sep  6 07:00 1 -> /dev/null
lrwx------ 1 root root 64 Sep  6 07:00 10 -> socket:[3179384]
lr-x------ 1 root root 64 Sep  6 07:00 11 -> /dev/urandom
lr-x------ 1 root root 64 Sep  6 07:00 12 -> /rhs/bricks/vol_dis_1_rep_2_b3
lrwx------ 1 root root 64 Sep  6 07:00 13 -> socket:[3179578]
lrwx------ 1 root root 64 Sep  6 07:00 14 -> socket:[3179708]
lrwx------ 1 root root 64 Sep  6 07:00 15 -> socket:[3179727]
lrwx------ 1 root root 64 Sep  6 07:00 16 -> socket:[3179746]
lrwx------ 1 root root 64 Sep  6 07:01 17 -> socket:[3184259]
lrwx------ 1 root root 64 Sep  6 07:01 18 -> /rhs/bricks/vol_dis_1_rep_2_b3/testdir_cifs_mount/clients/client8/~dmtmp/WORD/~WRL0004.TMP (deleted)
l-wx------ 1 root root 64 Sep  6 07:00 2 -> /dev/null
lrwx------ 1 root root 64 Sep  6 07:00 3 -> anon_inode:[eventpoll]
l-wx------ 1 root root 64 Sep  6 07:00 4 -> /var/log/glusterfs/bricks/rhs-bricks-vol_dis_1_rep_2_b3.log
lrwx------ 1 root root 64 Sep  6 07:00 5 -> /var/lib/glusterd/vols/vol_dis_1_rep_2/run/mia-rhs-bricks-vol_dis_1_rep_2_b3.pid
lrwx------ 1 root root 64 Sep  6 07:00 6 -> socket:[3179367]
lrwx------ 1 root root 64 Sep  6 07:00 7 -> socket:[3179403]
lrwx------ 1 root root 64 Sep  6 07:00 8 -> socket:[3179376]
lrwx------ 1 root root 64 Sep  6 07:00 9 -> socket:[3179544]
root@mia [Sep-06-2013- 7:25:10] >
root@mia [Sep-06-2013- 7:25:11] >
root@mia [Sep-06-2013- 7:25:12] >
root@mia [Sep-06-2013- 7:25:12] >
root@mia [Sep-06-2013- 7:25:12] >ls /rhs/bricks/vol_dis_1_rep_2_b3/testdir_cifs_mount/clients/client8/~dmtmp/WORD/~WRL0004.TMP
ls: cannot access /rhs/bricks/vol_dis_1_rep_2_b3/testdir_cifs_mount/clients/client8/~dmtmp/WORD/~WRL0004.TMP: No such file or directory
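
As a quick check, a minimal sketch for counting lingering "(deleted)" fds per brick process of this volume; the pid-file glob is an assumption based on the paths used in the transcript above:

# Count "(deleted)" fd entries for each brick process of vol_dis_1_rep_2.
for pidfile in /var/lib/glusterd/vols/vol_dis_1_rep_2/run/*.pid; do
    pid=$(cat "$pidfile")
    echo "$pidfile: $(ls -l /proc/$pid/fd 2>/dev/null | grep -c deleted) deleted fd(s)"
done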

Expected results:
==================
There shouldn't be any fd leaks. 

Additional info:
====================

root@mia [Sep-06-2013- 7:25:15] >gluster v info
 
Volume Name: vol_dis_1_rep_2
Type: Distributed-Replicate
Volume ID: f5c43519-b5eb-4138-8219-723c064af71c
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: fan.lab.eng.blr.redhat.com:/rhs/bricks/vol_dis_1_rep_2_b0
Brick2: mia.lab.eng.blr.redhat.com:/rhs/bricks/vol_dis_1_rep_2_b1
Brick3: fan:/rhs/bricks/vol_dis_1_rep_2_b2
Brick4: mia:/rhs/bricks/vol_dis_1_rep_2_b3
Options Reconfigured:
cluster.self-heal-daemon: on
Comment 2 Poornima G 2013-10-15 01:14:25 EDT
A "(deleted)" fd entry in /proc/<pid>/fd exists when the file has been deleted but the fd for that file is still open.

For every fd opened (on the gluster volume) by dbench, there is a corresponding fd opened by the brick process. This fd entry in the brick process remains until the fd is closed by dbench or dbench is stopped completely.

In this case, it looks like the dbench process wasn't completely stopped, so an entry in the brick process is expected.
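
For illustration only, a minimal, gluster-independent shell demonstration of this behaviour (the scratch path is hypothetical): an fd held open on an unlinked file keeps a "(deleted)" entry in /proc/<pid>/fd until the fd is closed.

exec 9>/tmp/fd_leak_demo            # open fd 9 on a scratch file (hypothetical path)
rm /tmp/fd_leak_demo                # delete the file while fd 9 is still open
ls -l /proc/$$/fd | grep deleted    # fd 9 now shows ".../fd_leak_demo (deleted)"
exec 9>&-                           # closing the fd removes the "(deleted)" entry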
Comment 3 Vivek Agarwal 2015-12-03 12:21:11 EST
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.
