Description of problem:
The customer is seeing a very high volume of EBADF errors that appear to stem from a defect in one of their applications. While they track that down, they need to suppress the messages, because the logs are filling up storage very rapidly (1 GB/hour). This is heavily impacting production.
The client log is filling up with W-level messages:
[2018-02-09 19:34:38.225605] W [MSGID: 114031] [client-rpc-fops.c:3002:client3_3_readv_cbk] 0-home-client-3: remote operation failed [Bad file descriptor]
[2018-02-09 19:34:38.226525] W [MSGID: 114031] [client-rpc-fops.c:3002:client3_3_readv_cbk] 0-home-client-2: remote operation failed [Bad file descriptor]
[2018-02-09 19:34:38.226633] W [fuse-bridge.c:2228:fuse_readv_cbk] 0-glusterfs-fuse: 7223: READ => -1 gfid=ef78d2ed-382a-4c22-a410-b100a3dc6ecd fd=0x7fa16401206c (Bad file descriptor)
[2018-02-09 19:34:38.227251] W [MSGID: 114031] [client-rpc-fops.c:3002:client3_3_readv_cbk] 0-home-client-3: remote operation failed [Bad file descriptor]
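Until the source of the warnings is fixed, the disk-space impact can at least be capped with rotation. A minimal sketch of a logrotate drop-in, assuming the default client-log directory /var/log/glusterfs (the path, size, and rotation count are illustrative, not values from this case):

```
# Hypothetical /etc/logrotate.d/glusterfs-client: cap each client log
# at 100 MB and keep three compressed rotations (values illustrative).
/var/log/glusterfs/*.log {
    size 100M
    rotate 3
    compress
    missingok
    # copytruncate truncates in place, so the glusterfs client can keep
    # writing to its already-open log file descriptor
    copytruncate
}
```

This does not reduce the write volume, only the retained size, so it is a stopgap rather than a fix.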
The customer has tried the log-level=ERROR mount option, but it does not suppress these messages. They also tried "log-level=NONE", which did not work either.
Normally the customer uses autofs, but mounting manually made no difference:
mount -t glusterfs -o rw,root-squash=0,nosuid,nodev,noatime,log-file=/var/log/gluster-mount2.log,log-level=ERROR <redacted>:/home /mnt/gluster-test
mount -t glusterfs -o rw,root-squash=0,nosuid,nodev,noatime,log-file=/var/log/gluster-mount3.log,log-level=NONE <redacted>:/home /mnt/gluster-test
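Since the log-level mount options do not suppress these warnings, a filter in front of the log file is another possible stopgap. A minimal sketch, matched to the log format quoted above (the E-level sample line and the pattern are illustrative assumptions, not from the customer's logs):

```shell
# Two W-level "Bad file descriptor" lines in the client-log format
# quoted above, plus one invented E-level line for contrast.
cat > /tmp/gluster-sample.log <<'EOF'
[2018-02-09 19:34:38.225605] W [MSGID: 114031] [client-rpc-fops.c:3002:client3_3_readv_cbk] 0-home-client-3: remote operation failed [Bad file descriptor]
[2018-02-09 19:34:38.226633] W [fuse-bridge.c:2228:fuse_readv_cbk] 0-glusterfs-fuse: 7223: READ => -1 gfid=ef78d2ed-382a-4c22-a410-b100a3dc6ecd fd=0x7fa16401206c (Bad file descriptor)
[2018-02-09 19:34:39.000000] E [MSGID: 101046] [some-xlator.c:123:some_fn] 0-home-client-3: illustrative unrelated error
EOF

# Drop only W-level "Bad file descriptor" lines; keep everything else,
# including any genuine E-level errors.
grep -v '] W .*Bad file descriptor' /tmp/gluster-sample.log
```

The same pattern could feed a pipe or a periodic cleanup job; it hides the symptom without touching the EBADF defect itself.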
Version-Release number of selected component (if applicable): RHGS 3.3
How reproducible: We have not been able to reproduce this issue using the following mount command:
# mount -t glusterfs -o log-level=ERROR 127.0.0.2:testvol /mnt/fuse_mnt
Additional info: This bug is being opened as requested in: https://bugzilla.redhat.com/show_bug.cgi?id=1540282
I was not sure which component to file this under, so please change it if necessary. I will follow up in a private comment with some more information. Please let me know if any other information is needed. Thank you.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.