Bug 1225574 - [geo-rep]: client-rpc-fops.c:172:client3_3_symlink_cbk can be handled better/or ignore these messages in the slave cluster log
Summary: [geo-rep]: client-rpc-fops.c:172:client3_3_symlink_cbk can be handled better/...
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: 3.7.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: ---
Assignee: Saravanakumar
QA Contact:
URL:
Whiteboard:
Depends On: 1225571
Blocks: glusterfs-3.7.2
 
Reported: 2015-05-27 16:56 UTC by Aravinda VK
Modified: 2015-06-20 09:48 UTC
CC List: 9 users

Fixed In Version: glusterfs-3.7.2
Doc Type: Bug Fix
Doc Text:
Clone Of: 1225571
Environment:
Last Closed: 2015-06-20 09:48:42 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Aravinda VK 2015-05-27 16:56:18 UTC
+++ This bug was initially created as a clone of Bug #1225571 +++

+++ This bug was initially created as a clone of Bug #1224662 +++

Description of problem:
=======================

As part of the history crawl, if the file already exists, warnings like the following are logged:

[2015-05-25 15:38:21.290046] W [client-rpc-fops.c:172:client3_3_symlink_cbk] 0-slave-client-3: remote operation failed: File exists. Path: (/.gfid/128c11ed-ab96-4404-a197-d209f3ef05e5/rc4.d/S90crond to ../init.d/crond)

This message is printed for every symlink present, and it can be handled better: the failure is expected, so these errors could either be ignored or logged only in debug mode.
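The failure is expected because the history crawl re-applies entry operations that may already have reached the slave, so a second attempt to create the same symlink returns EEXIST while the slave is already in the desired state. A minimal Python illustration of that replay pattern (the paths are made up; this is not gsyncd code):

    import errno
    import os
    import tempfile

    # Replaying a symlink creation, as the history crawl effectively does
    # on the slave, fails with EEXIST the second time even though the
    # slave already holds the desired state.
    workdir = tempfile.mkdtemp()
    link = os.path.join(workdir, "S90crond")

    os.symlink("../init.d/crond", link)      # first apply: succeeds

    try:
        os.symlink("../init.d/crond", link)  # replay: "File exists"
    except OSError as e:
        assert e.errno == errno.EEXIST
        print("EEXIST on replay; safe to treat as a no-op")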

On my system these were logged 68k times:

[root@georep4 ~]# cat /var/log/glusterfs/geo-replication-slaves/002d5e5d-123e-47c2-8d2f-d3ca4ec21912\:gluster%3A%2F%2F127.0.0.1%3Aslave.gluster.log | grep "client-rpc-fops.c:172:client3_3_symlink_cbk" | wc
  68510  959140 15602612



Version-Release number of selected component (if applicable):
=============================================================



How reproducible:
=================

1/1

Steps Carried:
==============

1. Create a master volume (2x3) from 3 nodes N1, N2, N3, with 2 bricks each.
2. Start the master volume.
3. Create a slave volume (2x2) from 2 nodes S1, S2.
4. Start the slave volume.
5. Mount the master volume on the client.
6. Create and start the geo-rep session between master and slave.
7. Copy a huge set of data from the client onto the master volume.
8. While the copy is in progress, bring bricks offline and online on nodes N1 and N2. Do not bring any bricks offline on node N3, so that one brick of each 3-way replica stays up.
9. Check the slave-side log file.


Actual results:
===============

These warning messages are printed in large numbers in the slave log.

Expected results:
=================

The logs should be handled better: since these failures are expected, the messages could be moved to DEBUG level.
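A minimal sketch of that suggestion in Python, assuming a worker-side wrapper (apply_entry_op and SAFE_ERRNOS are illustrative names, not gsyncd's actual API): expected errnos get a DEBUG entry, everything else keeps its WARNING.

    import errno
    import logging
    import os

    log = logging.getLogger("georep.worker")

    # errnos that are expected while re-applying entry operations and are
    # safe for geo-replication (illustrative list)
    SAFE_ERRNOS = {errno.EEXIST, errno.ENOENT}

    def apply_entry_op(op, *args):
        # Apply one replayed entry operation; expected failures go to
        # DEBUG, anything else stays a WARNING and is re-raised.
        try:
            op(*args)
        except OSError as e:
            if e.errno in SAFE_ERRNOS:
                log.debug("ignoring safe error %s for %s%r",
                          errno.errorcode[e.errno], op.__name__, args)
            else:
                log.warning("remote operation failed: %s", e)
                raise

    # e.g. apply_entry_op(os.symlink, "../init.d/crond", "/slave/rc4.d/S90crond")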

Comment 1 Aravinda VK 2015-06-01 10:34:38 UTC
Patch is available. Moving this to POST
http://review.gluster.org/#/c/10984/

Comment 2 Niels de Vos 2015-06-02 08:20:21 UTC
The required changes to fix this bug have not made it into glusterfs-3.7.1. This bug is now getting tracked for glusterfs-3.7.2.

Comment 3 Anand Avati 2015-06-13 07:33:11 UTC
REVIEW: http://review.gluster.org/10984 (geo-rep: ignore symlink and harlink errors in geo-rep) posted (#3) for review on release-3.7 by Aravinda VK (avishwan)

Comment 4 Anand Avati 2015-06-13 09:23:18 UTC
COMMIT: http://review.gluster.org/10984 committed in release-3.7 by Vijay Bellur (vbellur) 
------
commit 1675c4ccf6d515198bd732a129ad117aca1c74e3
Author: Saravanakumar Arumugam <sarumuga>
Date:   Thu May 28 11:54:04 2015 +0530

    geo-rep: ignore symlink and harlink errors in geo-rep
    
    Ignore logging in case of symlink and hardlink creation
    errors, as these are safe errors with respect to
    geo-replication.
    
    Change-Id: I3b863fb03ae23dbea907e4dd9477ff85feecad70
    BUG: 1225574
    Signed-off-by: Saravanakumar Arumugam <sarumuga>
    Reviewed-on: http://review.gluster.org/10957
    Reviewed-on: http://review.gluster.org/10984
    Reviewed-by: Vijay Bellur <vbellur>
    Tested-by: Vijay Bellur <vbellur>
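Per its message, the patch takes the stronger option and ignores these errors outright rather than demoting them. gsyncd wraps replayed entry operations in an errno_wrap helper for exactly this pattern; the version below is a simplified sketch of that pattern under assumed names, not the actual syncdaemon code.

    import errno
    import os

    def errno_wrap(call, args=(), ignore_errnos=()):
        # Simplified sketch: run `call` and swallow OSErrors whose errno
        # is listed, without logging, since the operation was already
        # applied. The real gsyncd helper (syncdaemon/syncdutils.py)
        # differs in signature and behaviour.
        try:
            return call(*args)
        except OSError as e:
            if e.errno not in ignore_errnos:
                raise

    # Symlink and hardlink creation errors are "safe" for
    # geo-replication (illustrative paths):
    errno_wrap(os.symlink, ("../init.d/crond", "/tmp/S90crond"),
               (errno.EEXIST,))
    errno_wrap(os.link, ("/tmp/src", "/tmp/dst"),
               (errno.EEXIST, errno.ENOENT))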

Comment 5 Niels de Vos 2015-06-20 09:48:42 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.2, please reopen this bug report.

glusterfs-3.7.2 has been announced on the Gluster Packaging mailing list [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/packaging/2015-June/000006.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

