Bug 1553777 - /var/log/glusterfs/bricks/export_vdb.log flooded with this error message "Not able to add to index [Too many links]"
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: index
Version: 3.10
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Duplicates: 1553778
Depends On:
Blocks: 1559004 1565654 1565655
 
Reported: 2018-03-09 14:17 UTC by Marc
Modified: 2018-05-07 15:05 UTC (History)
4 users

Fixed In Version: glusterfs-3.10.12
Clone Of:
Clones: 1559004
Environment:
Last Closed: 2018-05-07 15:05:04 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments
Glusterfs log directory archived for a new server and an old server (3.35 MB, application/x-7z-compressed)
2018-03-09 14:17 UTC, Marc

Description Marc 2018-03-09 14:17:55 UTC
Created attachment 1406267 [details]
Glusterfs log directory archived for a new server and an old server

Description of problem:

I have just upgraded from 3.8.15 to 3.10.11 (after another bug was fixed - Bug 1544461). Everything was fine for a while; I think when I added a new server (replicate) to the pool and checked the log files, I saw /var/log/glusterfs/bricks/export_vdb.log flooded with the following error message:


[2018-03-09 12:57:19.544372] E [MSGID: 138003] [index.c:610:index_link_to_base] 0-gluster_volume-index: /export_vdb/.glusterfs/indices/xattrop/4f8ed955-6a22-4311-baf6-9e38088dbabc: Not able to add to index [Too many links]
[2018-03-09 12:57:19.544810] E [MSGID: 138003] [index.c:610:index_link_to_base] 0-gluster_volume-index: /export_vdb/.glusterfs/indices/xattrop/4a451bd4-5992-4896-94ee-94cf6c5b17d4: Not able to add to index [Too many links]
[2018-03-09 12:57:19.545229] E [MSGID: 138003] [index.c:610:index_link_to_base] 0-gluster_volume-index: /export_vdb/.glusterfs/indices/xattrop/4a451bd4-5992-4896-94ee-94cf6c5b17d4: Not able to add to index [Too many links]

This error appears only on the existing servers/bricks; the newly added one does not have these errors. (The sync is still in progress.)

I am using Ubuntu 14, a 5 x replicated cluster, and ext4. (I read https://github.com/gluster/glusterfs/issues/132.)
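For context on the error text: "[Too many links]" is strerror(EMLINK). The index xlator records pending entries as hard links to a base file under .glusterfs/indices/xattrop, and ext4 caps hard links per inode (at roughly 65000), so once a base file's link count reaches the filesystem limit, link(2) fails. As a rough illustration (not GlusterFS code; the file names here are made up), the link count of any file can be inspected via st_nlink:

```python
import os
import tempfile

def link_count(path):
    """Return the number of hard links pointing at `path`'s inode."""
    return os.stat(path).st_nlink

# Demonstrate on a scratch file: each os.link() bumps st_nlink by one.
tmpdir = tempfile.mkdtemp()
base = os.path.join(tmpdir, "xattrop-base")
open(base, "w").close()
for i in range(3):
    os.link(base, os.path.join(tmpdir, f"link-{i}"))

print(link_count(base))  # → 4 (the original name plus 3 links)
```

On a real brick, `stat -c %h` on the xattrop base file would show whether it is anywhere near the filesystem's hard-link ceiling.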



Version-Release number of selected component (if applicable):

Old: 3.8.15
New: 3.10.11


How reproducible:


Steps to Reproduce:
1. Existing 4 x replicated Gluster cluster of 3.8.15
2. Upgrading those to 3.10.11 and clients and op version
3. Add new brick/server to the pool (2-gls-dus21-ci-efood-real-de) => 5 x replicated Gluster cluster
4. /var/log/glusterfs/bricks/export_vdb.log full of errors: Not able to add to index [Too many links] 

Actual results:

/var/log/glusterfs/bricks/export_vdb.log log is flooded with errors only for existing bricks:

[2018-03-09 14:05:18.183135] E [MSGID: 138003] [index.c:610:index_link_to_base] 0-gluster_volume-index: /export_vdb/.glusterfs/indices/xattrop/7d4c9f23-75c3-4162-a315-df52ef878d60: Not able to add to index [Too many links]
[2018-03-09 14:05:18.514930] E [MSGID: 138003] [index.c:610:index_link_to_base] 0-gluster_volume-index: /export_vdb/.glusterfs/indices/xattrop/bb30cd15-d86f-478b-94da-91cffab732b7: Not able to add to index [Too many links]
[2018-03-09 14:05:18.515105] E [MSGID: 138003] [index.c:610:index_link_to_base] 0-gluster_volume-index: /export_vdb/.glusterfs/indices/xattrop/bb30cd15-d86f-478b-94da-91cffab732b7: Not able to add to index [Too many links]
[2018-03-09 14:05:18.646843] E [MSGID: 138003] [index.c:610:index_link_to_base] 0-gluster_volume-index: /export_vdb/.glusterfs/indices/xattrop/bea06bc5-cc7b-4a50-801a-9bef757e5364: Not able to add to index [Too many links]
[2018-03-09 14:05:18.647213] E [MSGID: 138003] [index.c:610:index_link_to_base] 0-gluster_volume-index: /export_vdb/.glusterfs/indices/xattrop/bea06bc5-cc7b-4a50-801a-9bef757e5364: Not able to add to index [Too many links]
[2018-03-09 14:05:18.675816] E [MSGID: 138003] [index.c:610:index_link_to_base] 0-gluster_volume-index: /export_vdb/.glusterfs/indices/xattrop/95860b45-8976-4a01-a818-3dc70d507c11: Not able to add to index [Too many links]
[2018-03-09 14:05:18.676011] E [MSGID: 138003] [index.c:610:index_link_to_base] 0-gluster_volume-index: /export_vdb/.glusterfs/indices/xattrop/95860b45-8976-4a01-a818-3dc70d507c11: Not able to add to index [Too many links]


Expected results:

The newly added brick syncs, with no errors flooding /var/log/glusterfs/bricks/export_vdb.log.


Additional info:



Status of volume: gluster_volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 2-gls-dus10-ci-efood-real-de.openstack.local:/export_vdb    49153     0          Y       24166
Brick 1-gls-dus10-ci-efood-real-de.openstack.local:/export_vdb    49153     0          Y       3364
Brick 1-gls-dus21-ci-efood-real-de:/export_vdb                    49153     0          Y       30337
Brick 3-gls-dus10-ci-efood-real-de.openstack.local:/export_vdb    49153     0          Y       3223
Brick 2-gls-dus21-ci-efood-real-de.openstacklocal:/export_vdb     49152     0          Y       12426
Self-heal Daemon on localhost                                     N/A       N/A        Y       21907
Self-heal Daemon on 1-gls-dus21-ci-efood-real-de.openstacklocal   N/A       N/A        Y       16837
Self-heal Daemon on 2-gls-dus21-ci-efood-real-de.openstacklocal   N/A       N/A        Y       17551
Self-heal Daemon on 1-gls-dus10-ci-efood-real-de.openstack.local  N/A       N/A        Y       23096
Self-heal Daemon on 2-gls-dus10-ci-efood-real-de.openstack.local  N/A       N/A        Y       10407

Task Status of Volume gluster_volume
------------------------------------------------------------------------------
There are no active volume tasks



root@3-gls-dus10-ci-efood-real-de:/var/log/glusterfs/bricks# gluster volume info

Volume Name: gluster_volume
Type: Replicate
Volume ID: 2e6bd6ba-37c8-4808-9156-08545cea3e3e
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 5 = 5
Transport-type: tcp
Bricks:
Brick1: 2-gls-dus10-ci-efood-real-de.openstack.local:/export_vdb
Brick2: 1-gls-dus10-ci-efood-real-de.openstack.local:/export_vdb
Brick3: 1-gls-dus21-ci-efood-real-de:/export_vdb
Brick4: 3-gls-dus10-ci-efood-real-de.openstack.local:/export_vdb
Brick5: 2-gls-dus21-ci-efood-real-de.openstacklocal:/export_vdb
Options Reconfigured:
performance.io-thread-count: 32
cluster.self-heal-window-size: 64
performance.cache-max-file-size: 1MB
performance.cache-size: 2GB
nfs.disable: on
auth.allow: 10.96.214.95,10.97.177.128,10.96.214.103,10.96.214.101,10.97.177.122,10.97.177.127,10.96.215.197,10.96.215.201,10.97.177.132,10.97.177.124,10.96.214.93,10.97.177.139,10.96.214.119,10.97.177.106,10.96.210.69,10.96.214.94,10.97.177.118,10.97.177.145,10.96.214.98
performance.readdir-ahead: on
features.barrier: off
transport.address-family: inet


I have attached the glusterfs log directory of an old existing server and of the new server. PS: I have restarted the services etc.; the old server has the upgrade as well. The new server is 2-gls-dus21-ci-efood-real-de.

Comment 1 Atin Mukherjee 2018-03-12 02:10:44 UTC
*** Bug 1553778 has been marked as a duplicate of this bug. ***

Comment 2 Marc 2018-03-21 08:59:22 UTC
Hello Atin, 
Do you know at least whether this is a known issue or something specific to my setup? Is ext4 still supported by 3.10.11?

Comment 3 Atin Mukherjee 2018-03-21 10:47:22 UTC
Pranith/Ravi - could one of you take a look at this?

Comment 4 Pranith Kumar K 2018-03-21 13:44:07 UTC
This is a regression. I sent https://review.gluster.org/19754, which fixes this issue. I will also work on an automated test case tomorrow so that we can catch such issues before they make it into a release.

Comment 5 Marc 2018-04-05 10:15:01 UTC
Hello Pranith,

Do you know if there will be a 3.10.x release for March and, if so, whether it will include this fix?

Thank you

Comment 6 Worker Ant 2018-04-10 13:40:24 UTC
REVIEW: https://review.gluster.org/19844 (features/index: Choose different base file on EMLINK error) posted (#1) for review on release-3.10 by Pranith Kumar Karampuri

Comment 7 Worker Ant 2018-04-13 13:10:40 UTC
COMMIT: https://review.gluster.org/19844 committed in release-3.10 by "Pranith Kumar Karampuri" <pkarampu> with a commit message- features/index: Choose different base file on EMLINK error

Change-Id: I4648816af908539efdc2528608aa2ebf7f0d0e2f
fixes: bz#1553777
BUG: 1553777
Signed-off-by: Pranith Kumar K <pkarampu>
(cherry picked from commit bb12f2109a01856e8184e13cf984210d20155b13)
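The strategy named in the commit subject ("Choose different base file on EMLINK error") can be sketched roughly as follows. This is an illustrative Python model, not the actual C code in the index xlator; the function names, the `state` dict, and the tiny `MAX_LINKS` stand-in for ext4's ~65000-link cap are all invented for the demonstration. The idea: when linking an entry to the current base file fails with EMLINK, roll over to a fresh base file and retry rather than failing (and logging) for every subsequent entry.

```python
import errno
import os
import tempfile

MAX_LINKS = 3  # toy stand-in for the filesystem's per-inode hard-link limit

def link_with_limit(base, target):
    """Wrapper around os.link() that simulates ext4's link cap by
    raising EMLINK once the base inode has MAX_LINKS names."""
    if os.stat(base).st_nlink >= MAX_LINKS:
        raise OSError(errno.EMLINK, os.strerror(errno.EMLINK))
    os.link(base, target)

def index_link_to_base(indexdir, gfid, state):
    """Link an index entry to the current base file; on EMLINK,
    switch to a new numbered base file and retry."""
    while True:
        base = os.path.join(indexdir, f"xattrop-base-{state['serial']}")
        if not os.path.exists(base):
            open(base, "w").close()
        try:
            link_with_limit(base, os.path.join(indexdir, gfid))
            return base
        except OSError as e:
            if e.errno != errno.EMLINK:
                raise
            state["serial"] += 1  # current base is saturated; pick a new one

indexdir = tempfile.mkdtemp()
state = {"serial": 0}
used = {index_link_to_base(indexdir, f"gfid-{i}", state) for i in range(5)}
print(len(used))  # → 3: the entries spread across three base files
```

With the limit set to 3, the five entries land on three base files instead of erroring out once the first base file fills up, which is the behavior the log flood in this report corresponds to.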

Comment 8 Pranith Kumar K 2018-04-16 09:33:19 UTC
(In reply to Marc from comment #5)
> Hello Pranith,
> 
> Do you know if there will be a 3.10.x released for March and if so, it it
> will include this fix?
> 
> Thank you

Marc,
     The patch is merged on 13th April. It will be available in the next 3.10 release. This bug will be closed with the release information when that happens.

Comment 9 Shyamsundar 2018-05-07 15:05:04 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.10.12, please open a new bug report.

glusterfs-3.10.12 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-April/000095.html
[2] https://www.gluster.org/pipermail/gluster-users/

