Bug 1225571 - [geo-rep]: client-rpc-fops.c:172:client3_3_symlink_cbk can be handled better/or ignore these messages in the slave cluster log
Summary: [geo-rep]: client-rpc-fops.c:172:client3_3_symlink_cbk can be handled better/...
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: mainline
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: ---
Assignee: Saravanakumar
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1223636 1225574
 
Reported: 2015-05-27 16:55 UTC by Aravinda VK
Modified: 2016-06-16 13:06 UTC (History)
9 users

Fixed In Version: glusterfs-3.8rc2
Clone Of: 1224662
: 1225574 (view as bug list)
Environment:
Last Closed: 2016-06-16 13:06:19 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Aravinda VK 2015-05-27 16:55:22 UTC
+++ This bug was initially created as a clone of Bug #1224662 +++

Description of problem:
=======================

During the history crawl, if a file already exists on the slave we log warnings such as:

[2015-05-25 15:38:21.290046] W [client-rpc-fops.c:172:client3_3_symlink_cbk] 0-slave-client-3: remote operation failed: File exists. Path: (/.gfid/128c11ed-ab96-4404-a197-d209f3ef05e5/rc4.d/S90crond to ../init.d/crond)

This message is logged for every symlink that already exists. Since the condition is expected, it can be handled better: either ignore these errors or log them only at debug level.
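The fix that was eventually merged treats these as safe errors. A minimal sketch of the idea, in Python (the language of the gsyncd worker): when replaying entry creation on the slave, treat EEXIST as a no-op logged at DEBUG instead of a WARNING. The helper name `ensure_symlink` is hypothetical, not the actual gsyncd function.

```python
import errno
import logging
import os

log = logging.getLogger("geo-rep-sketch")

def ensure_symlink(target, link_path):
    """Create link_path -> target, treating EEXIST as a safe no-op.

    During a history-crawl replay the entry often already exists on
    the slave; that is expected, so it is logged at DEBUG, not WARNING.
    """
    try:
        os.symlink(target, link_path)
    except OSError as e:
        if e.errno == errno.EEXIST:
            log.debug("symlink already exists, skipping: %s -> %s",
                      link_path, target)
        else:
            raise
```

Calling the helper a second time for the same entry then simply returns instead of flooding the slave log with "File exists" warnings.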

On my system they were logged 68k times:

[root@georep4 ~]# cat /var/log/glusterfs/geo-replication-slaves/002d5e5d-123e-47c2-8d2f-d3ca4ec21912\:gluster%3A%2F%2F127.0.0.1%3Aslave.gluster.log | grep "client-rpc-fops.c:172:client3_3_symlink_cbk" | wc
  68510  959140 15602612



Version-Release number of selected component (if applicable):
=============================================================



How reproducible:
=================

1/1

Steps Carried:
==============

1. Create a master volume (2x3) from 3 nodes N1, N2, N3 with 2 bricks each.
2. Start the master volume.
3. Create a slave volume (2x2) from 2 nodes S1, S2.
4. Start the slave volume.
5. Mount the master volume on the client.
6. Create and start the geo-rep session between master and slave.
7. Copy a large data set from the client onto the master volume.
8. While the copy is in progress, bring bricks offline and online on nodes N1 and N2. Do not take any brick on node N3 offline, so that one brick of each 3-way replica set stays up throughout.
9. Check the slave-side log file.


Actual results:
===============

These warning messages are printed for every existing symlink.

Expected results:
=================

These logs should be handled better; since the condition is expected, they can be demoted to debug level or ignored.

Comment 1 Saravanakumar 2015-05-28 06:42:19 UTC
Although the bug mentions softlink creation errors, the same errors apply to hardlinks as well, so both cases are fixed.

Note:
When copying test data for hard links, use cp -af so that the hard links are preserved in the copy.
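A quick way to confirm that `cp -af` keeps hard links intact (paths here are temporary directories created for the check, not anything from the bug):

```shell
# Verify that `cp -af` preserves hard links within the copied tree.
src=$(mktemp -d); dst=$(mktemp -d)
echo data > "$src/a"
ln "$src/a" "$src/b"            # a and b now share one inode
cp -af "$src/." "$dst/"
stat -c %h "$dst/a"             # link count is 2: the pair survived the copy
```

With plain `cp -rf` (no `--preserve=links`, which `-a` implies) the two names would be copied as independent files instead.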

Comment 2 Anand Avati 2015-05-29 05:50:58 UTC
REVIEW: http://review.gluster.org/10984 (geo-rep: ignore symlink and harlink errors in geo-rep) posted (#1) for review on release-3.7 by Saravanakumar Arumugam (sarumuga)

Comment 3 Anand Avati 2015-06-02 07:35:03 UTC
REVIEW: http://review.gluster.org/10957 (geo-rep: ignore symlink and harlink errors in geo-rep) posted (#2) for review on master by Saravanakumar Arumugam (sarumuga)

Comment 4 Anand Avati 2015-06-11 19:22:20 UTC
COMMIT: http://review.gluster.org/10957 committed in master by Vijay Bellur (vbellur) 
------
commit 07df9b56fd0eb4abf1ef6a45318519c9d7ae08e3
Author: Saravanakumar Arumugam <sarumuga>
Date:   Thu May 28 11:54:04 2015 +0530

    geo-rep: ignore symlink and harlink errors in geo-rep
    
    Ignore logging in case of symlink and hardlink creation
    errors, as these are safe errors with respect to
    geo-replication.
    
    Change-Id: I3b863fb03ae23dbea907e4dd9477ff85feecad70
    BUG: 1225571
    Signed-off-by: Saravanakumar Arumugam <sarumuga>
    Reviewed-on: http://review.gluster.org/10957
    Tested-by: NetBSD Build System <jenkins.org>
    Reviewed-by: Aravinda VK <avishwan>
    Reviewed-by: Vijay Bellur <vbellur>

Comment 5 Nagaprasad Sathyanarayana 2015-10-25 15:22:55 UTC
The fix for this BZ is already present in a GlusterFS release; a clone of this BZ was fixed in that release and closed. Hence this mainline BZ is being closed as well.

Comment 6 Niels de Vos 2016-06-16 13:06:19 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

