Bug 1215189 - timeout/expiry of group-cache should be set to 300 seconds
Summary: timeout/expiry of group-cache should be set to 300 seconds
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: 3.7.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: ---
Assignee: Niels de Vos
QA Contact:
URL: https://lists.fedorahosted.org/pipermail/sssd-devel/2014-November/021451.html
Whiteboard:
Depends On: 1215187
Blocks: glusterfs-3.7.0
 
Reported: 2015-04-24 14:05 UTC by Niels de Vos
Modified: 2015-05-14 17:46 UTC (History)
2 users

Fixed In Version: glusterfs-3.7.0
Doc Type: Bug Fix
Doc Text:
Clone Of: 1215187
Environment:
Last Closed: 2015-05-14 17:29:29 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Description Niels de Vos 2015-04-24 14:05:26 UTC
+++ This bug was initially created as a clone of Bug #1215187 +++
+++                                                           +++
+++ Use this bug to backport the change to release-3.7        +++

Description of problem:
The current timeout/expiry of the group-cache on the bricks is set to 5 (?) seconds. When sssd is used to request all the groups of a user and the request requires network access (e.g. an LDAP lookup), the cache expires far too often.
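For context, the bricks resolve group membership through NSS (here backed by sssd). A quick way to gauge how many groups a user carries, assuming a hypothetical user jdoe:

    id -G jdoe | wc -w

With hundreds of groups behind LDAP, every cache miss turns into a slow network round-trip.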

sssd uses a default of 300 seconds for its in-memory cache (current sssd versions cache groups only on disk). Gluster should use the same timeout for its group cache, making it more sssd-friendly and preventing high CPU usage in environments where fetching groups is slow.
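As a workaround until the default is raised, the cache timeout can be aligned with sssd per volume. A minimal sketch, assuming the brick-side group cache is exposed as the server.gid-timeout volume option and VOLNAME is a placeholder:

    gluster volume set VOLNAME server.manage-gids on
    gluster volume set VOLNAME server.gid-timeout 300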

Version-Release number of selected component (if applicable):
3.7

How reproducible:
100%

Steps to Reproduce:
1. have a user that belongs to many (hundreds of) groups in an LDAP directory
2. enable server-side group fetching with server.manage-gids=on for the volume
3. do some I/O as that user (a condensed sketch follows this list)
4. see the slowness when the groups need to be refreshed constantly
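A condensed sketch of the reproducer; VOLNAME, /mnt/VOLNAME and the LDAP user lduser are placeholders:

    # server-side group resolution, so the bricks (not the client) fetch the groups
    gluster volume set VOLNAME server.manage-gids on
    # generate I/O as the many-groups user; with a short cache expiry, each
    # refresh triggers a full group lookup against LDAP and throughput drops
    sudo -u lduser dd if=/dev/zero of=/mnt/VOLNAME/testfile bs=1M count=512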

Actual results:
Gluster performs poorly.

Expected results:
The number of groups that a user belongs to should not affect performance *that* much.

Additional info:
https://lists.fedorahosted.org/pipermail/sssd-devel/2014-November/021451.html

Comment 1 Anand Avati 2015-05-04 11:05:19 UTC
REVIEW: http://review.gluster.org/10523 (protocol: increase default group-cache-timeout to 300 seconds) posted (#1) for review on release-3.7 by Niels de Vos (ndevos)
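Once the change lands, the new default applies without reconfiguration. Whether a volume overrides it can be checked with a sketch like the following (VOLNAME is a placeholder; gluster volume get is assumed to be available, as in the 3.7 series):

    gluster volume get VOLNAME server.gid-timeout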

Comment 2 Niels de Vos 2015-05-14 17:29:29 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
