Bug 1224662

Summary: [geo-rep]: client-rpc-fops.c:172:client3_3_symlink_cbk can be handled better/or ignore these messages in the slave cluster log
Product: [Red Hat Storage] Red Hat Gluster Storage Reporter: Rahul Hinduja <rhinduja>
Component: geo-replication Assignee: Saravanakumar <sarumuga>
Status: CLOSED ERRATA QA Contact: Rahul Hinduja <rhinduja>
Severity: low Docs Contact:
Priority: medium    
Version: rhgs-3.1 CC: aavati, annair, asrivast, avishwan, csaba, khiremat, nlevinki
Target Milestone: ---   
Target Release: RHGS 3.1.0   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: glusterfs-3.7.1-3 Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 1225571 (view as bug list) Environment:
Last Closed: 2015-07-29 04:53:08 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1202842, 1223636    

Description Rahul Hinduja 2015-05-25 09:28:34 UTC
Description of problem:
=======================

As part of the history crawl, if the file already exists we print warning logs such as:

[2015-05-25 15:38:21.290046] W [client-rpc-fops.c:172:client3_3_symlink_cbk] 0-slave-client-3: remote operation failed: File exists. Path: (/.gfid/128c11ed-ab96-4404-a197-d209f3ef05e5/rc4.d/S90crond to ../init.d/crond)

This log message is printed for every symlink present, and it can be handled better. Since the condition is expected, these errors should either be ignored or printed only in debug mode.
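
For context, the failure itself is just the normal symlink() behaviour when the entry already exists: replaying the same entry creation a second time reproduces the error even outside gluster. A minimal illustration (paths are made up for this sketch, not taken from the slave):

# mkdir -p /tmp/rc4.d
# ln -s ../init.d/crond /tmp/rc4.d/S90crond
# ln -s ../init.d/crond /tmp/rc4.d/S90crond
ln: failed to create symbolic link '/tmp/rc4.d/S90crond': File exists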

On my system the message was logged 68k times:

[root@georep4 ~]# cat /var/log/glusterfs/geo-replication-slaves/002d5e5d-123e-47c2-8d2f-d3ca4ec21912\:gluster%3A%2F%2F127.0.0.1%3Aslave.gluster.log | grep "client-rpc-fops.c:172:client3_3_symlink_cbk" | wc
  68510  959140 15602612



Version-Release number of selected component (if applicable):
=============================================================

glusterfs-3.7.0-2.el6rhs.x86_64


How reproducible:
=================

1/1

Steps Carried:
==============

1. Create a master volume (2x3) from 3 nodes N1, N2, N3, each contributing 2 bricks.
2. Start the master volume
3. Create a slave volume (2x2) from 2 nodes S1, S2.
4. Start the slave volume.
5. Mount the master volume on the client.
6. Create and start the geo-rep session between the master and slave volumes (a rough CLI sketch follows this list).
7. Copy a large data set from the client onto the master volume.
8. While the data copy is in progress, bring bricks on nodes N1 and N2 offline and back online. Do not bring any brick on node N3 offline, so that one brick in each x3 replica set stays up throughout.
9. Check the slave side log file
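
For reference, a rough CLI sketch of steps 1-6 and 8. Hostnames N1-N3/S1-S2, brick paths and the mount point are placeholders, and passwordless SSH from the master node to root@S1 is assumed to be already set up:

gluster volume create master replica 3 \
        N1:/bricks/b1 N2:/bricks/b1 N3:/bricks/b1 \
        N1:/bricks/b2 N2:/bricks/b2 N3:/bricks/b2      # step 1: 2x3
gluster volume start master                             # step 2

gluster volume create slave replica 2 \
        S1:/bricks/b1 S2:/bricks/b1 \
        S1:/bricks/b2 S2:/bricks/b2                     # step 3: 2x2
gluster volume start slave                              # step 4

mount -t glusterfs N1:/master /mnt/master               # step 5, on the client

gluster system:: execute gsec_create                    # step 6, on a master node
gluster volume geo-replication master S1::slave create push-pem
gluster volume geo-replication master S1::slave start
gluster volume geo-replication master S1::slave status

# step 8: one way is to kill individual brick processes on N1/N2 and
# restart them with: gluster volume start master force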


Actual results:
===============

These warning messages are printed repeatedly in the slave cluster log.

Expected results:
=================

The logs should be handled better; since this condition is expected, the message can be demoted to debug level if needed.

Comment 3 Aravinda VK 2015-06-12 12:26:44 UTC
Downstream patch: https://code.engineering.redhat.com/gerrit/#/c/50523/

Comment 5 Rahul Hinduja 2015-07-16 11:04:52 UTC
Verified with build: glusterfs-3.7.1-10.el6rhs.x86_64

[root@georep1 ~]# cat /var/log/glusterfs/geo-replication/master/ssh%3A%2F%2Froot%4010.70.46.154%3Agluster%3A%2F%2F127.0.0.1%3Aslave.* | grep -i "symlink_cbk"
[root@georep1 ~]# 


[root@georep5 ~]# cat /var/log/glusterfs/geo-replication-slaves/7d58d4b3-2e0e-4cd5-ac9b-dde5bccb40d7\:gluster%3A%2F%2F127.0.0.1%3Aslave.* | grep -i "symlink_cbk"
[root@georep5 ~]#

Moving the bug to verified state.

Comment 6 errata-xmlrpc 2015-07-29 04:53:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html