Bug 1565654 - /var/log/glusterfs/bricks/export_vdb.log flooded with this error message "Not able to add to index [Too many links]"
Summary: /var/log/glusterfs/bricks/export_vdb.log flooded with this error message "Not...
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: index
Version: 4.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On: 1553777 1559004 1565655
Blocks:
 
Reported: 2018-04-10 13:36 UTC by Pranith Kumar K
Modified: 2018-05-07 15:15 UTC
CC List: 5 users

Fixed In Version: glusterfs-4.0.2
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1559004
Environment:
Last Closed: 2018-05-07 15:15:28 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Pranith Kumar K 2018-04-10 13:36:09 UTC
+++ This bug was initially created as a clone of Bug #1559004 +++

+++ This bug was initially created as a clone of Bug #1553777 +++

Description of problem:

I have just upgraded from 3.8.15 to 3.10.11 (after another bug was fixed - Bug 1544461). Everything was fine for a while; I think it started when I added a new server (replicate) to the pool. Checking the log files, I saw /var/log/glusterfs/bricks/export_vdb.log flooded with the following error message:


[2018-03-09 12:57:19.544372] E [MSGID: 138003] [index.c:610:index_link_to_base] 0-gluster_volume-index: /export_vdb/.glusterfs/indices/xattrop/4f8ed955-6a22-4311-baf6-9e38088dbabc: Not able to add to index [Too many links]
[2018-03-09 12:57:19.544810] E [MSGID: 138003] [index.c:610:index_link_to_base] 0-gluster_volume-index: /export_vdb/.glusterfs/indices/xattrop/4a451bd4-5992-4896-94ee-94cf6c5b17d4: Not able to add to index [Too many links]
[2018-03-09 12:57:19.545229] E [MSGID: 138003] [index.c:610:index_link_to_base] 0-gluster_volume-index: /export_vdb/.glusterfs/indices/xattrop/4a451bd4-5992-4896-94ee-94cf6c5b17d4: Not able to add to index [Too many links]

This error appears only on the existing servers/bricks; the newly created one does not have these errors. (The sync is still in progress.)

I am using Ubuntu 14, a 5 x replicated cluster, and ext4. (I read https://github.com/gluster/glusterfs/issues/132.)
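The "Too many links" errno (EMLINK) means the index xlator's base file has hit the filesystem's per-inode hard-link limit (roughly 65,000 on ext4). A quick way to check how many hard links a file has accumulated is `stat -c '%h'`; on a real brick you would point this at the files under /export_vdb/.glusterfs/indices/xattrop (paths here are illustrative, demonstrated on a temp dir):

```shell
# Create a scratch file and add hard links to it, the same mechanism
# the index xlator uses for xattrop entries.
dir=$(mktemp -d)
touch "$dir/xattrop-base"
ln "$dir/xattrop-base" "$dir/entry-1"
ln "$dir/xattrop-base" "$dir/entry-2"

# %h prints the inode's hard-link count; ext4 fails link(2) with
# EMLINK once this count reaches the filesystem limit.
links=$(stat -c '%h' "$dir/xattrop-base")
echo "link count: $links"    # prints "link count: 3"

rm -rf "$dir"
```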



Version-Release number of selected component (if applicable):

Old: 3.8.15
New: 3.10.11


How reproducible:


Steps to Reproduce:
1. Existing 4 x replicated Gluster cluster of 3.8.15
2. Upgrading those to 3.10.11 and clients and op version
3. Add new brick/server to the pool (2-gls-dus21-ci-efood-real-de) => 5 x replicated Gluster cluster
4. /var/log/glusterfs/bricks/export_vdb.log full of errors: Not able to add to index [Too many links] 

Actual results:

/var/log/glusterfs/bricks/export_vdb.log log is flooded with errors only for existing bricks:

[2018-03-09 14:05:18.183135] E [MSGID: 138003] [index.c:610:index_link_to_base] 0-gluster_volume-index: /export_vdb/.glusterfs/indices/xattrop/7d4c9f23-75c3-4162-a315-df52ef878d60: Not able to add to index [Too many links]
[2018-03-09 14:05:18.514930] E [MSGID: 138003] [index.c:610:index_link_to_base] 0-gluster_volume-index: /export_vdb/.glusterfs/indices/xattrop/bb30cd15-d86f-478b-94da-91cffab732b7: Not able to add to index [Too many links]
[2018-03-09 14:05:18.515105] E [MSGID: 138003] [index.c:610:index_link_to_base] 0-gluster_volume-index: /export_vdb/.glusterfs/indices/xattrop/bb30cd15-d86f-478b-94da-91cffab732b7: Not able to add to index [Too many links]
[2018-03-09 14:05:18.646843] E [MSGID: 138003] [index.c:610:index_link_to_base] 0-gluster_volume-index: /export_vdb/.glusterfs/indices/xattrop/bea06bc5-cc7b-4a50-801a-9bef757e5364: Not able to add to index [Too many links]
[2018-03-09 14:05:18.647213] E [MSGID: 138003] [index.c:610:index_link_to_base] 0-gluster_volume-index: /export_vdb/.glusterfs/indices/xattrop/bea06bc5-cc7b-4a50-801a-9bef757e5364: Not able to add to index [Too many links]
[2018-03-09 14:05:18.675816] E [MSGID: 138003] [index.c:610:index_link_to_base] 0-gluster_volume-index: /export_vdb/.glusterfs/indices/xattrop/95860b45-8976-4a01-a818-3dc70d507c11: Not able to add to index [Too many links]
[2018-03-09 14:05:18.676011] E [MSGID: 138003] [index.c:610:index_link_to_base] 0-gluster_volume-index: /export_vdb/.glusterfs/indices/xattrop/95860b45-8976-4a01-a818-3dc70d507c11: Not able to add to index [Too many links]


Expected results:

The newly added gluster server should sync, with no errors flooding /var/log/glusterfs/bricks/export_vdb.log.


Additional info:



Status of volume: gluster_volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 2-gls-dus10-ci-efood-real-de.openstack.local:/export_vdb    49153  0    Y  24166
Brick 1-gls-dus10-ci-efood-real-de.openstack.local:/export_vdb    49153  0    Y  3364
Brick 1-gls-dus21-ci-efood-real-de:/export_vdb                    49153  0    Y  30337
Brick 3-gls-dus10-ci-efood-real-de.openstack.local:/export_vdb    49153  0    Y  3223
Brick 2-gls-dus21-ci-efood-real-de.openstacklocal:/export_vdb     49152  0    Y  12426
Self-heal Daemon on localhost                                     N/A    N/A  Y  21907
Self-heal Daemon on 1-gls-dus21-ci-efood-real-de.openstacklocal   N/A    N/A  Y  16837
Self-heal Daemon on 2-gls-dus21-ci-efood-real-de.openstacklocal   N/A    N/A  Y  17551
Self-heal Daemon on 1-gls-dus10-ci-efood-real-de.openstack.local  N/A    N/A  Y  23096
Self-heal Daemon on 2-gls-dus10-ci-efood-real-de.openstack.local  N/A    N/A  Y  10407

Task Status of Volume gluster_volume
------------------------------------------------------------------------------
There are no active volume tasks



root@3-gls-dus10-ci-efood-real-de:/var/log/glusterfs/bricks# gluster volume info

Volume Name: gluster_volume
Type: Replicate
Volume ID: 2e6bd6ba-37c8-4808-9156-08545cea3e3e
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 5 = 5
Transport-type: tcp
Bricks:
Brick1: 2-gls-dus10-ci-efood-real-de.openstack.local:/export_vdb
Brick2: 1-gls-dus10-ci-efood-real-de.openstack.local:/export_vdb
Brick3: 1-gls-dus21-ci-efood-real-de:/export_vdb
Brick4: 3-gls-dus10-ci-efood-real-de.openstack.local:/export_vdb
Brick5: 2-gls-dus21-ci-efood-real-de.openstacklocal:/export_vdb
Options Reconfigured:
performance.io-thread-count: 32
cluster.self-heal-window-size: 64
performance.cache-max-file-size: 1MB
performance.cache-size: 2GB
nfs.disable: on
auth.allow: 10.96.214.95,10.97.177.128,10.96.214.103,10.96.214.101,10.97.177.122,10.97.177.127,10.96.215.197,10.96.215.201,10.97.177.132,10.97.177.124,10.96.214.93,10.97.177.139,10.96.214.119,10.97.177.106,10.96.210.69,10.96.214.94,10.97.177.118,10.97.177.145,10.96.214.98
performance.readdir-ahead: on
features.barrier: off
transport.address-family: inet


I have attached the glusterfs log directory of an old existing server and of the new server. PS: I have restarted the services, etc.; the old one has the upgrade also. The new server is 2-gls-dus21-ci-efood-real-de.

--- Additional comment from Atin Mukherjee on 2018-03-11 22:10:44 EDT ---



--- Additional comment from Marc on 2018-03-21 04:59:22 EDT ---

Hello Atin,
Do you know at least whether this is a known issue or something specific to my setup? Is ext4 still supported by 3.10.11?

--- Additional comment from Atin Mukherjee on 2018-03-21 06:47:22 EDT ---

Pranith/Ravi - could one of you take a look at this?

--- Additional comment from Worker Ant on 2018-03-21 09:43:57 EDT ---

REVIEW: https://review.gluster.org/19754 (features/index: Choose different base file on EMLINK error) posted (#1) for review on master by Pranith Kumar Karampuri

--- Additional comment from Worker Ant on 2018-03-26 05:43:02 EDT ---

REVIEW: https://review.gluster.org/19754 (features/index: Choose different base file on EMLINK error) posted (#2) for review on master by Pranith Kumar Karampuri

--- Additional comment from Worker Ant on 2018-04-06 20:09:59 EDT ---

COMMIT: https://review.gluster.org/19754 committed in master by "Pranith Kumar Karampuri" <pkarampu@redhat.com> with a commit message- features/index: Choose different base file on EMLINK error

Change-Id: I4648816af908539efdc2528608aa2ebf7f0d0e2f
fixes: bz#1559004
BUG: 1559004
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>

Comment 1 Worker Ant 2018-04-10 13:38:14 UTC
REVIEW: https://review.gluster.org/19842 (features/index: Choose different base file on EMLINK error) posted (#1) for review on release-4.0 by Pranith Kumar Karampuri

Comment 2 Shyamsundar 2018-05-07 15:15:28 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-4.0.2, please open a new bug report.

glusterfs-4.0.2 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-April/000097.html
[2] https://www.gluster.org/pipermail/gluster-users/

