Bug 1224662 - [geo-rep]: client-rpc-fops.c:172:client3_3_symlink_cbk can be handled better/or ignore these messages in the slave cluster log
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: geo-replication
Version: 3.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: low
Target Milestone: ---
Target Release: RHGS 3.1.0
Assigned To: Saravanakumar
QA Contact: Rahul Hinduja
Blocks: 1202842 1223636
Reported: 2015-05-25 05:28 EDT by Rahul Hinduja
Modified: 2015-07-29 00:53 EDT
CC: 7 users

Fixed In Version: glusterfs-3.7.1-3
Doc Type: Bug Fix
Clones: 1225571
Last Closed: 2015-07-29 00:53:08 EDT
Type: Bug


External Trackers
Tracker: Red Hat Product Errata
Tracker ID: RHSA-2015:1495
Priority: normal
Status: SHIPPED_LIVE
Summary: Important: Red Hat Gluster Storage 3.1 update
Last Updated: 2015-07-29 04:26:26 EDT

Description Rahul Hinduja 2015-05-25 05:28:34 EDT
Description of problem:
=======================

As part of the history crawl, if the file already exists we print warning logs such as:

[2015-05-25 15:38:21.290046] W [client-rpc-fops.c:172:client3_3_symlink_cbk] 0-slave-client-3: remote operation failed: File exists. Path: (/.gfid/128c11ed-ab96-4404-a197-d209f3ef05e5/rc4.d/S90crond to ../init.d/crond)

This log message is printed for every symlink present, and it can be handled better. Since the failure is expected, we can either ignore these errors or print them only in debug mode.
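
For context, this is plain symlink(2) semantics: replaying a create that already happened fails with EEXIST. A minimal standalone C demo of that behaviour (the /tmp scratch path is invented for illustration; this is not glusterfs code):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *target = "../init.d/crond";  /* link contents, as in the log above */
    const char *path   = "/tmp/S90crond";    /* scratch path, demo only */

    unlink(path);                            /* start clean */

    /* First create succeeds: the symlink does not exist yet. */
    if (symlink(target, path) != 0) {
        perror("first symlink");
        return 1;
    }

    /* Replaying the same create (which is effectively what the history
     * crawl does) fails with EEXIST, which is harmless here. */
    if (symlink(target, path) != 0 && errno == EEXIST)
        printf("replay failed as expected: %s\n", strerror(errno));

    unlink(path);
    return 0;
}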

On my system it has logged this message 68,510 times (the first column of the wc output below is the line count):

[root@georep4 ~]# cat /var/log/glusterfs/geo-replication-slaves/002d5e5d-123e-47c2-8d2f-d3ca4ec21912\:gluster%3A%2F%2F127.0.0.1%3Aslave.gluster.log | grep "client-rpc-fops.c:172:client3_3_symlink_cbk" | wc
  68510  959140 15602612



Version-Release number of selected component (if applicable):
=============================================================

glusterfs-3.7.0-2.el6rhs.x86_64


How reproducible:
=================

1/1

Steps Carried:
==============

1. Create a master volume (2x3) from 3 nodes N1,N2,N3, consisting of 2 bricks each.
2. Start the master volume
3. Create a slave volume (2x2) from 2 nodes S1,S2
4. Start the slave volume
5. Mount the master volume on the client
6. Create and start the georep session between master and slave
7. Copy a large set of data from the client onto the master volume
8. While the copy is in progress, bring bricks offline and online on nodes N1 and N2. Ensure that no bricks are brought offline on node N3, keeping one brick of the x3 replica up at all times.
9. Check the slave side log file


Actual results:
===============

The warning messages shown above are printed in the slave cluster log.

Expected results:
=================

The logs should be handled better; since this failure is expected, they can be moved to debug mode if needed.
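
A minimal sketch, in C, of the kind of handling asked for here: derive the log level from op_errno so that expected failures land at debug. All names below (symlink_cbk_log_level, log_symlink_failure, the enum) are hypothetical; this is not the real client3_3_symlink_cbk signature, nor the actual downstream patch referenced in comment 3:

#include <errno.h>
#include <stdio.h>

/* Hypothetical log levels, standing in for gf_loglevel_t. */
typedef enum { LOG_DEBUG, LOG_WARNING } log_level_t;

/* Pick a level for a failed symlink callback: EEXIST is expected
 * during a history-crawl replay, so it is only worth DEBUG. */
static log_level_t
symlink_cbk_log_level(int op_errno)
{
    if (op_errno == EEXIST)
        return LOG_DEBUG;
    return LOG_WARNING;
}

/* Sketch of the callback's logging decision (illustrative only). */
static void
log_symlink_failure(int op_errno, const char *path, const char *target)
{
    if (symlink_cbk_log_level(op_errno) == LOG_DEBUG)
        fprintf(stderr, "D [symlink_cbk] expected failure (EEXIST): %s -> %s\n",
                path, target);
    else
        fprintf(stderr, "W [symlink_cbk] remote operation failed (errno=%d): %s -> %s\n",
                op_errno, path, target);
}

int main(void)
{
    /* Replaying a symlink create during the history crawl: expected. */
    log_symlink_failure(EEXIST, "/.gfid/<gfid>/rc4.d/S90crond", "../init.d/crond");
    /* A genuinely unexpected failure still surfaces at warning level. */
    log_symlink_failure(EIO, "/.gfid/<gfid>/some/path", "target");
    return 0;
}

The point of the gating is that unexpected errors still surface at warning level, while the expected EEXIST replay noise drops out of the default log.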
Comment 3 Aravinda VK 2015-06-12 08:26:44 EDT
Downstream patch: https://code.engineering.redhat.com/gerrit/#/c/50523/
Comment 5 Rahul Hinduja 2015-07-16 07:04:52 EDT
Verified with build: glusterfs-3.7.1-10.el6rhs.x86_64

[root@georep1 ~]# cat /var/log/glusterfs/geo-replication/master/ssh%3A%2F%2Froot%4010.70.46.154%3Agluster%3A%2F%2F127.0.0.1%3Aslave.* | grep -i "symlink_cbk"
[root@georep1 ~]# 


[root@georep5 ~]# cat /var/log/glusterfs/geo-replication-slaves/7d58d4b3-2e0e-4cd5-ac9b-dde5bccb40d7\:gluster%3A%2F%2F127.0.0.1%3Aslave.* | grep -i "symlink_cbk"
[root@georep5 ~]#

Moving the bug to verified state.
Comment 6 errata-xmlrpc 2015-07-29 00:53:08 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html
