Bug 1138229 - Disconnections from glusterfs through libgfapi
Summary: Disconnections from glusterfs through libgfapi
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: glusterfs
Version: 6.6
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: pre-dev-freeze
Target Release: 6.7
Assignee: sankarshan
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-09-04 10:30 UTC by Alvaro Flores
Modified: 2017-12-06 11:44 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-12-06 11:44:55 UTC
Target Upstream Version:
Embargoed:


Attachments
Logs of the three gluster servers (717.15 KB, application/octet-stream)
2014-09-04 10:30 UTC, Alvaro Flores

Description Alvaro Flores 2014-09-04 10:30:40 UTC
Created attachment 934366 [details]
Logs of the three gluster servers

Description of problem:
 
Disconnections from glusterfs through libgfapi.
 
Version-Release number of selected component (if applicable):
 
glusterfs-3.6.0.22-1.el6rhs.x86_64
 
How reproducible:
 
Open some connections to glusterfs through libgfapi and write 50K files and read 100K files of 1 MB each.
 
Steps to Reproduce:
1. Open multiple connections (e.g. 64) to a gluster volume using libgfapi.
2. Open, write, and close files using different threads over the multiple connections.
3. The client gets disconnected from the gluster volume (open fails). A sketch of such a reproducer follows below.
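
To make the steps concrete, here is a minimal sketch of such a reproducer against the public libgfapi C API (glfs_new/glfs_set_volfile_server/glfs_init/glfs_creat/glfs_write/glfs_close). This is not the reporter's actual script; the volume name "vol02" is taken from the logs below, while the server hostname, directory layout, and counts are placeholder assumptions. Build with the glusterfs-api headers, e.g. gcc repro.c -lgfapi -lpthread.

/* Sketch: one libgfapi connection per thread, each looping open-write-close
 * on 1 MB files.  Hostname, paths, and counts are placeholders. */
#include <glusterfs/api/glfs.h>
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define NTHREADS 64           /* "multiple connections (e.g. 64)" */
#define FILES_PER_THREAD 100  /* scale up toward the 50K files of the report */
#define FILE_SIZE (1 << 20)   /* 1 MB per file, as in the report */

static void *writer(void *arg)
{
    long id = (long)arg;
    char dir[64], path[128], buf[4096];
    memset(buf, 'x', sizeof(buf));

    /* Step 1: a separate connection (glfs_t handle) per thread. */
    glfs_t *fs = glfs_new("vol02");   /* volume name from the logs */
    if (!fs || glfs_set_volfile_server(fs, "tcp", "gluster1.example.com", 24007)
            || glfs_init(fs)) {
        fprintf(stderr, "thread %ld: connection failed\n", id);
        return NULL;
    }

    snprintf(dir, sizeof(dir), "/rtest_%ld", id);
    glfs_mkdir(fs, dir, 0755);        /* ignore EEXIST in this sketch */

    /* Step 2: open-write-close in a loop; a NULL return from glfs_creat
     * is the "open fails" symptom from step 3. */
    for (int i = 0; i < FILES_PER_THREAD; i++) {
        snprintf(path, sizeof(path), "%s/dptest.%d", dir, i);
        glfs_fd_t *fd = glfs_creat(fs, path, O_WRONLY, 0644);
        if (!fd) {
            fprintf(stderr, "thread %ld: open failed for %s\n", id, path);
            break;
        }
        for (size_t done = 0; done < FILE_SIZE; done += sizeof(buf))
            glfs_write(fd, buf, sizeof(buf), 0);
        glfs_close(fd);
    }

    glfs_fini(fs);
    return NULL;
}

int main(void)
{
    pthread_t tids[NTHREADS];
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&tids[i], NULL, writer, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tids[i], NULL);
    return 0;
}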
 
Actual results:
 
I am having an issue with glusterfs disconnections while creating and writing files.
 
The environment is 3 RHEV hypervisors, each one running 2 virtual machines (one client and one server), all of them with RHEL 6.5.
 
I am running a script that opens n threads from the clients to write or read files, using libgfapi, from the glusterfs volume shared by the three servers. Before doing anything else, the script removes the writers' destination folders if they exist; it simply mounts the volume over NFS and runs rm -rf on the destination folder.
Here I have the first strange log entry:
 
nfs.log:[2014-09-04 07:46:16.260087] W [dht-layout.c:180:dht_layout_search] 0-vol02-dht: no subvolume for hash (value) = 3004083931
 
It only occurs on one of the three servers, and, puzzlingly, it occurs for exactly 1000 files; since I have to remove 3 folders, I have exactly 3000 events like this in the logs.
 
After that, the script begins to open connections to the gluster volume, and it connects successfully:
 
[2014-09-04 07:56:59.609786] I [server-handshake.c:578:server_setvolume] 0-vol02-server: accepted client from vxoa01.vna-2273-2014/09/04-07:56:53:865815-vol02-client-22-0-0 (version: 3.6.0.22)
 
but after one or two minutes the errors begin:
 
[2014-09-04 07:58:17.290542] I [server-helpers.c:291:do_fd_cleanup] 0-vol02-server: fd cleanup on /rtest_3/dir_026/dptest.26008
[2014-09-04 07:58:17.290796] E [client_t.c:384:gf_client_unref] (-->/usr/lib64/glusterfs/3.6.0.22/xlator/features/locks.so(pl_flush_cbk+0xb9) [0x7f0b1a546399] (-->/usr/lib64/libglusterfs.so.0(default_flush_cbk+0xb9) [0x3be322d919] (-->/usr/lib64/glusterfs/3.6.0.22/xlator/debug/io-stats.so(io_stats_flush_cbk+0xed) [0x7f0b198d952d]))) 0-client_t: client is NULL
[2014-09-04 07:58:17.353584] I [server-resolve.c:519:server_resolve_fd] 0-: fd not found in context
[2014-09-04 07:58:17.353679] E [server-rpc-fops.c:1336:server_flush_cbk] 0-vol02-server: 831: FLUSH 4 (d35cf248-fa0b-4b9d-b4ca-733c4d457ea2) ==> (Wrong file descriptor)
 
In the next 2 seconds I get the same message for a lot of different files in different volumes, and the sessions are disconnected.
 
The gluster volume is online before and after, everything in online state, and there are no messages in /var/log except the glusterfs ones. The kernel limit on the number of open files is far from being reached (cat /proc/sys/fs/file-nr never goes beyond 3500 files, and ulimit -n shows 32000); a small check for both is sketched below.
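
For reference, both numbers can be checked programmatically at the start of a run. A minimal sketch, assuming getrlimit(RLIMIT_NOFILE) as the per-process counterpart of "ulimit -n" and /proc/sys/fs/file-nr as the system-wide counter:

/* Sketch: print the fd limits the reporter checked by hand. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) == 0)    /* per-process, cf. ulimit -n */
        printf("fd limit: soft=%llu hard=%llu\n",
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);

    FILE *f = fopen("/proc/sys/fs/file-nr", "r");  /* system-wide counter */
    if (f) {
        unsigned long allocated, unused, max;
        if (fscanf(f, "%lu %lu %lu", &allocated, &unused, &max) == 3)
            printf("file-nr: allocated=%lu max=%lu\n", allocated, max);
        fclose(f);
    }
    return 0;
}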
 
Expected results:
 
The connections should remain stable; no disconnections.
 
Additional info:

Logs of the three servers are attached to this bug.

Comment 1 krishnan parthasarathi 2014-09-09 12:37:25 UTC
Alvaro,
Could you attach to this bug the libgfapi-based script that experiences the disconnections when the "Steps to Reproduce" are followed?

Comment 2 krishnan parthasarathi 2014-09-09 12:53:25 UTC
Alvaro,

The version of glusterfs you are using is not the community version. Could you confirm if you are seeing this issue on a corresponding upstream version and share the corresponding logs?

Comment 3 Alvaro Flores 2014-09-10 12:15:52 UTC
Hello Krishnan.

I sent you the source code of the tool that we use by email (to the @redhat.com address shown in your name).

I think the problem is the number of threads overwhelming glusterd, but the machine has enough capacity to manage them (the file descriptor limit is not reached, the CPU limit is not reached, there is enough RAM, etc.). It fails with only 8 threads, which is not a big number, so we are hitting the error very quickly.

Comment 4 Niels de Vos 2015-02-10 12:41:44 UTC
KP, did you follow up on this?

Alvaro, is this still an issue with the latest 3.6.2 packages?

Comment 6 krishnan parthasarathi 2015-08-18 07:15:38 UTC
(In reply to Niels de Vos from comment #4)
> KP, did you follow up on this?
> 
> Alvaro, is this still an issue with the latest 3.6.2 packages?

Niels, I am not an expert on libgfapi. I tried helping Alvaro a while back. I am assigning it to the default assignee for libgfapi.

Comment 10 Jan Kurik 2017-12-06 11:44:55 UTC
Red Hat Enterprise Linux 6 is in the Production 3 Phase. During the Production 3 Phase, Critical impact Security Advisories (RHSAs) and selected Urgent Priority Bug Fix Advisories (RHBAs) may be released as they become available.

The official life cycle policy can be reviewed here:

http://redhat.com/rhel/lifecycle

This issue does not meet the inclusion criteria for the Production 3 Phase and will be marked as CLOSED/WONTFIX. If this remains a critical requirement, please contact Red Hat Customer Support to request a re-evaluation of the issue, citing a clear business justification. Note that a strong business justification will be required for re-evaluation. Red Hat Customer Support can be contacted via the Red Hat Customer Portal at the following URL:

https://access.redhat.com/

