Bug 1133436 - Creation of files on mount failing with "Cannot lock - Bad file descriptor" , "Cannot unlock - No such file or directory", "Cannot write to file3 - Transport endpoint is not connected"
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: replicate
Version: rhgs-3.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Krutika Dhananjay
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-08-25 07:16 UTC by spandura
Modified: 2017-11-28 09:55 UTC (History)
5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-11-28 09:55:39 UTC
Embargoed:


Attachments (Terms of Use)
Scripts required to execute the case (2.51 KB, application/x-gzip)
2014-08-25 07:35 UTC, spandura

Description spandura 2014-08-25 07:16:22 UTC
Description of problem:
============================
The cluster consists of 4 storage nodes and 1 management node. We have 4 (2 x 2) distribute-replicate volumes running in the cluster. The management node has a cron job that runs every 15 minutes from 9 A.M. to 6 P.M. to take snapshots of all the volumes.

We have 2 mount points for each volume, mounted from 2 client machines.

Each mount point is creating files/directories. File creation on the mount points failed with different errors.

1) For vol1, the following failures were observed on the mount point:

Creating file : user18/file15.....
Exiting Thread-73
Exiting Thread-47
Exiting Thread-38
Cannot unlock - No such file or directory
Cannot unlock - No such file or directory
Exiting Thread-58
Exiting Thread-90
Exiting Thread-7
Exiting Thread-62
Cannot unlock - No such file or directory
Cannot unlock - No such file or directory
Exiting Thread-43
Cannot unlock - No such file or directory
Exiting Thread-32
Cannot unlock - No such file or directory
Cannot unlock - No such file or directory
Exiting Thread-80
Exiting Thread-50


2) For vol3, the following failures were observed on the mount point:

Creating file : user96/file20.....
Cannot open file: No such file or directory
flock() on closed filehandle FH at ./create_dirs_files.pl line 76.
Cannot lock - Bad file descriptor
Exiting Thread-95
Cannot write to file3 - Transport endpoint is not connected
Cannot write to file19 - Transport endpoint is not connected
Cannot write to file24 - Transport endpoint is not connected
Cannot write to file5 - Transport endpoint is not connected
Cannot write to file5 - Transport endpoint is not connected
Cannot write to file6 - Transport endpoint is not connected
Cannot write to file5 - Transport endpoint is not connected

3) For vol2, the mount logs reported:

[2014-08-25 06:38:13.426088] E [afr-self-heal-common.c:2869:afr_log_self_heal_completion_status] 0-vol2-replicate-1:  metadata self heal  failed,   on /
[2014-08-25 06:38:13.426550] E [afr-self-heal-common.c:2869:afr_log_self_heal_completion_status] 0-vol2-replicate-1:  metadata self heal  failed,   on /

Version-Release number of selected component (if applicable):
==============================================================
glusterfs 3.6.0.27 built on Aug  4 2014 11:49:25

How reproducible:
===================
Observed for the first time. 

Steps to Reproduce:
===================
1. Create 4 distribute-replicate (2 x 2) volumes. Start the volumes.

2. On the management node, have a cron job running to create snapshots of all the volumes every 15 minutes. ( */15 9,10,11,12,13,14,15,16,17,18 * * * cd /root && ./snaps.sh create >>/tmp/crontab_snaps_output 2>&1 )

3. Create 4 mounts for each of the volume from 2 different clients.

4. Copy the attached scripts "create_dirs_files_multi_thread.py" and "create_dirs_files.pl" to all the mount points from one of the clients.

5. From all 4 mounts on client1, execute the following:
./create_dirs_files_multi_thread.py --number-of-threads 100 --num-files-per-dir 25 --min-file-size 1024 --max-file-size 10240 --starting-dir-num 1 --dir-depth 5 --dir-width 4

6. From all 4 mounts on client2, execute the following:
./create_dirs_files_multi_thread.py --number-of-threads 100 --num-files-per-dir 25 --min-file-size 1024 --max-file-size 5120 --starting-dir-num 101 --dir-depth 5 --dir-width 4
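The attachment itself is not reproduced in this report, but the error messages ("Cannot lock", "Cannot unlock", "Cannot write") suggest a create/flock/write/unlock loop per file. A minimal multi-threaded Python analogue of that workload can be sketched as follows; this is a hypothetical sketch for illustration, not the attached create_dirs_files_multi_thread.py, and the `create_files` helper is an assumed name:

```python
import fcntl
import os
import sys
import tempfile
import threading

def create_files(base, user, num_files, size=1024):
    """Create num_files files under base/user, holding an exclusive
    flock on each file while its contents are written."""
    os.makedirs(os.path.join(base, user), exist_ok=True)
    for i in range(num_files):
        path = os.path.join(base, user, "file%d" % i)
        try:
            with open(path, "wb") as fh:
                fcntl.flock(fh, fcntl.LOCK_EX)   # "Cannot lock" errors would surface here
                fh.write(os.urandom(size))       # "Cannot write" errors would surface here
                fcntl.flock(fh, fcntl.LOCK_UN)   # "Cannot unlock" errors would surface here
        except OSError as e:
            print("Failed on %s: %s" % (path, e), file=sys.stderr)

if __name__ == "__main__":
    # Run a few worker threads against a scratch directory, one
    # per-user subtree each, mimicking the reproduction workload.
    base = tempfile.mkdtemp()
    threads = [threading.Thread(target=create_files, args=(base, "user%d" % t, 5))
               for t in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

On a healthy local filesystem this completes silently; on the affected FUSE mounts, the flock and write calls are where EBADF, ENOENT, and ENOTCONN were reported.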

Actual results:
=================
Observed failures in creating files from the mount points.

Expected results:
=====================
File creation shouldn't fail. 

Additional info:
====================
root@mia [Aug-25-2014-12:42:36] >gluster v info
 
Volume Name: vol1
Type: Distributed-Replicate
Volume ID: 86fb01ba-be09-4734-87ab-bd77b926c1e5
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: rhs-client11:/rhs/device0/b1
Brick2: rhs-client12:/rhs/device0/b2
Brick3: rhs-client13:/rhs/device0/b3
Brick4: rhs-client14:/rhs/device0/b4
Options Reconfigured:
features.barrier: disable
performance.readdir-ahead: on
snap-max-hard-limit: 256
snap-max-soft-limit: 90
auto-delete: disable
 
Volume Name: vol2
Type: Distributed-Replicate
Volume ID: f9740540-88d7-4ee6-b720-024bd827bcac
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: rhs-client11:/rhs/device1/b1
Brick2: rhs-client12:/rhs/device1/b2
Brick3: rhs-client13:/rhs/device1/b3
Brick4: rhs-client14:/rhs/device1/b4
Options Reconfigured:
features.barrier: disable
performance.readdir-ahead: on
snap-max-hard-limit: 256
snap-max-soft-limit: 90
auto-delete: disable
 
Volume Name: vol3
Type: Distributed-Replicate
Volume ID: 2dc6c55c-e945-4f44-bd4a-e0da47041b78
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: rhs-client11:/rhs/device2/b1
Brick2: rhs-client12:/rhs/device2/b2
Brick3: rhs-client13:/rhs/device2/b3
Brick4: rhs-client14:/rhs/device2/b4
Options Reconfigured:
features.barrier: disable
performance.readdir-ahead: on
snap-max-hard-limit: 256
snap-max-soft-limit: 90
auto-delete: disable
 
Volume Name: vol4
Type: Distributed-Replicate
Volume ID: 224dc91f-530c-4dea-a289-c7b5f1239133
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: rhs-client11:/rhs/device3/b1
Brick2: rhs-client12:/rhs/device3/b2
Brick3: rhs-client13:/rhs/device3/b3
Brick4: rhs-client14:/rhs/device3/b4
Options Reconfigured:
features.barrier: disable
performance.readdir-ahead: on
snap-max-hard-limit: 256
snap-max-soft-limit: 90
auto-delete: disable
root@mia [Aug-25-2014-12:42:42] >


root@mia [Aug-25-2014-12:42:56] >gluster v status vol1
Status of volume: vol1
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick rhs-client11:/rhs/device0/b1			49152	Y	7991
Brick rhs-client12:/rhs/device0/b2			49152	Y	18408
Brick rhs-client13:/rhs/device0/b3			49152	Y	25248
Brick rhs-client14:/rhs/device0/b4			49152	Y	2898
NFS Server on localhost					2049	Y	1570
Self-heal Daemon on localhost				N/A	Y	1578
NFS Server on rhs-client12				2049	Y	18664
Self-heal Daemon on rhs-client12			N/A	Y	18673
NFS Server on rhs-client14				2049	Y	2906
Self-heal Daemon on rhs-client14			N/A	Y	2916
NFS Server on rhs-client13				2049	Y	21453
Self-heal Daemon on rhs-client13			N/A	Y	21463
NFS Server on rhs-client11				2049	Y	7998
Self-heal Daemon on rhs-client11			N/A	Y	8005
 
Task Status of Volume vol1
------------------------------------------------------------------------------
There are no active volume tasks
 
root@mia [Aug-25-2014-12:43:13] >gluster v status vol2
Status of volume: vol2
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick rhs-client11:/rhs/device1/b1			49153	Y	4079
Brick rhs-client12:/rhs/device1/b2			49153	Y	18508
Brick rhs-client13:/rhs/device1/b3			49153	Y	25348
Brick rhs-client14:/rhs/device1/b4			49153	Y	31190
NFS Server on localhost					2049	Y	1570
Self-heal Daemon on localhost				N/A	Y	1578
NFS Server on rhs-client13				2049	Y	21453
Self-heal Daemon on rhs-client13			N/A	Y	21463
NFS Server on rhs-client14				2049	Y	2906
Self-heal Daemon on rhs-client14			N/A	Y	2916
NFS Server on rhs-client12				2049	Y	18664
Self-heal Daemon on rhs-client12			N/A	Y	18673
NFS Server on rhs-client11				2049	Y	7998
Self-heal Daemon on rhs-client11			N/A	Y	8005
 
Task Status of Volume vol2
------------------------------------------------------------------------------
There are no active volume tasks
 
root@mia [Aug-25-2014-12:43:15] >gluster v status vol3
Status of volume: vol3
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick rhs-client11:/rhs/device2/b1			49154	Y	4150
Brick rhs-client12:/rhs/device2/b2			49154	Y	18583
Brick rhs-client13:/rhs/device2/b3			49154	Y	25417
Brick rhs-client14:/rhs/device2/b4			49154	Y	31257
NFS Server on localhost					2049	Y	1570
Self-heal Daemon on localhost				N/A	Y	1578
NFS Server on rhs-client13				2049	Y	21453
Self-heal Daemon on rhs-client13			N/A	Y	21463
NFS Server on rhs-client14				2049	Y	2906
Self-heal Daemon on rhs-client14			N/A	Y	2916
NFS Server on rhs-client11				2049	Y	7998
Self-heal Daemon on rhs-client11			N/A	Y	8005
NFS Server on rhs-client12				2049	Y	18664
Self-heal Daemon on rhs-client12			N/A	Y	18673
 
Task Status of Volume vol3
------------------------------------------------------------------------------
There are no active volume tasks
 
root@mia [Aug-25-2014-12:43:17] >gluster v status vol4
Status of volume: vol4
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick rhs-client11:/rhs/device3/b1			49155	Y	4220
Brick rhs-client12:/rhs/device3/b2			49155	Y	18651
Brick rhs-client13:/rhs/device3/b3			49155	Y	25493
Brick rhs-client14:/rhs/device3/b4			49155	Y	31327
NFS Server on localhost					2049	Y	1570
Self-heal Daemon on localhost				N/A	Y	1578
NFS Server on rhs-client14				2049	Y	2906
Self-heal Daemon on rhs-client14			N/A	Y	2916
NFS Server on rhs-client13				2049	Y	21453
Self-heal Daemon on rhs-client13			N/A	Y	21463
NFS Server on rhs-client12				2049	Y	18664
Self-heal Daemon on rhs-client12			N/A	Y	18673
NFS Server on rhs-client11				2049	Y	7998
Self-heal Daemon on rhs-client11			N/A	Y	8005
 
Task Status of Volume vol4
------------------------------------------------------------------------------
There are no active volume tasks
 
root@mia [Aug-25-2014-12:43:19] >


root@mia [Aug-25-2014-12:43:36] >gluster snapshot list
snap_1
snap_2
snap_3
snap_4
root@mia [Aug-25-2014-12:43:39] >

Comment 2 spandura 2014-08-25 07:32:33 UTC
SOS Reports : http://rhsqe-repo.lab.eng.blr.redhat.com/bugs_necessary_info/1133436/

Client1: dj

Client2: king

BRICK1 : rhs-client11

BRICK2 : rhs-client12

BRICK3 : rhs-client13

BRICK4 : rhs-client14

MGMT_NODE : mia

Comment 3 spandura 2014-08-25 07:35:47 UTC
Created attachment 930350 [details]
Scripts required to execute the case

Comment 4 Pranith Kumar K 2014-08-28 04:32:27 UTC
With Shwetha's help we were able to re-create the issue every time within a matter of minutes. The interesting thing we found is that fsyncs are taking very long, on the order of minutes. We need to find out why that is the case.
Here are some numbers:
root@rhs-client11 [Aug-28-2014- 9:53:05] >grep -i fsync info.txt
     92.94 26650150.15 us 1522857.00 us 48115604.00 us            104       FSYNC
     97.80 35502796.12 us 12597704.00 us 48115604.00 us             34       FSYNC
     86.48 28516896.79 us  370091.00 us 56845990.00 us             71       FSYNC
     86.98 42081453.95 us 20249447.00 us 56845990.00 us             21       FSYNC
     89.59 19004670.68 us 5092117.00 us 38607219.00 us            109       FSYNC
     86.36 23177027.56 us 10103503.00 us 31206852.00 us             32       FSYNC
     55.51 37115151.02 us  913235.00 us 85055222.00 us             64       FSYNC
     71.77 59096848.44 us 50639193.00 us 85055222.00 us             16       FSYNC

Next steps are to figure out why fsyncs are taking so long.
1) Some of the disks failed after the runs, so I am not sure whether the issue is with the hardware or with gluster.
2) Once we get the hardware in order, we want to strace the brick process to see the relative time between syscalls and figure out where the bottleneck is.

Pranith
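To make the magnitude of the numbers above concrete: assuming the columns follow the usual `gluster volume profile <vol> info` layout (%-latency, avg/min/max latency in microseconds, call count, FOP name), a small parser converts them to seconds. The `parse_fsync` helper is an illustrative name, not a gluster tool:

```python
# Parse FSYNC rows as emitted by `gluster volume profile <vol> info`
# (assumed columns: %-latency, avg us, min us, max us, calls, fop).
SAMPLE = "     92.94 26650150.15 us 1522857.00 us 48115604.00 us            104       FSYNC"

def parse_fsync(line):
    parts = line.split()
    # parts: [pct, avg, 'us', min, 'us', max, 'us', calls, fop]
    return {
        "pct": float(parts[0]),
        "avg_s": float(parts[1]) / 1e6,   # microseconds -> seconds
        "max_s": float(parts[5]) / 1e6,
        "calls": int(parts[7]),
    }

if __name__ == "__main__":
    row = parse_fsync(SAMPLE)
    print("FSYNC avg %.1fs, max %.1fs over %d calls"
          % (row["avg_s"], row["max_s"], row["calls"]))
```

For the first row above this works out to an average fsync latency of roughly 26.7 seconds with a worst case near 48 seconds, which is what makes the client-side timeouts and ENOTCONN errors plausible.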

Comment 5 Pranith Kumar K 2014-09-01 07:06:14 UTC
(In reply to Pranith Kumar K from comment #4)
> With shwetha's help we were able to re-create the issue every time with in a
> matter of minutes. The interesting thing we found is that fsyncs are taking
> very long, close to the order of minutes. Need to find out why that is the
> case.
> Here are some numbers:
> root@rhs-client11 [Aug-28-2014- 9:53:05] >grep -i fsync info.txt
>      92.94 26650150.15 us 1522857.00 us 48115604.00 us            104      
> FSYNC
>      97.80 35502796.12 us 12597704.00 us 48115604.00 us             34      
> FSYNC
>      86.48 28516896.79 us  370091.00 us 56845990.00 us             71      
> FSYNC
>      86.98 42081453.95 us 20249447.00 us 56845990.00 us             21      
> FSYNC
>      89.59 19004670.68 us 5092117.00 us 38607219.00 us            109      
> FSYNC
>      86.36 23177027.56 us 10103503.00 us 31206852.00 us             32      
> FSYNC
>      55.51 37115151.02 us  913235.00 us 85055222.00 us             64      
> FSYNC
>      71.77 59096848.44 us 50639193.00 us 85055222.00 us             16      
> FSYNC
> 
> Next steps are to figure out why fsyncs are taking so long.
> 1) Some of the disks failed after the runs so now I am not sure if the issue
> is with hardware or gluster.

I took some servers from Kaleb and tried the test there. None of the issues reported in this bug were observed on that setup.

> 2) Once we get hardware in order. We want to strace the brick process to see
> the relative time between each syscall to figure out what is the bottle neck.

It continued to fail on the setup provided by Shwetha. Interestingly, we observed the following messages in /var/log/messages, which point to an ata8 hardware issue. We are debugging what it can be with Humble's help.
sas: ata8: end_device-6:1: cmd error handler
sas: ata8: end_device-6:1: dev error handler
ata8.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6
ata8.00: failed command: SET FEATURES
ata8.00: cmd ef/05:3c:00:00:00/00:00:00:00:00/40 tag 0
ata8.00: status: { ERR }
ata8.00: error: { ABRT }
ata8: hard resetting link
ata8.00: configured for UDMA/133
ata8: EH complete

Will update the bug as soon as we isolate the problem.
> 
> Pranith

Comment 9 Ravishankar N 2017-11-28 09:53:10 UTC
Nag, given that this bug was raised 4 releases back on AFR-v1, and neither dev nor QE has seen this issue in subsequent dev/testing phases, can we go ahead and close it as WORKSFORME?

Comment 10 Nag Pavan Chilakam 2017-11-28 09:54:58 UTC
I haven't seen this specific issue in recent times, so we can close it and open a new bug with the latest info if we hit it again.

Comment 11 Ravishankar N 2017-11-28 09:55:39 UTC
Closing based on comment #10

