Bug 1224662 - [geo-rep]: client-rpc-fops.c:172:client3_3_symlink_cbk can be handled better/or ignore these messages in the slave cluster log
Summary: [geo-rep]: client-rpc-fops.c:172:client3_3_symlink_cbk can be handled better/...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: geo-replication
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: low
Target Milestone: ---
Target Release: RHGS 3.1.0
Assignee: Saravanakumar
QA Contact: Rahul Hinduja
URL:
Whiteboard:
Depends On:
Blocks: 1202842 1223636
 
Reported: 2015-05-25 09:28 UTC by Rahul Hinduja
Modified: 2015-07-29 04:53 UTC (History)
7 users (show)

Fixed In Version: glusterfs-3.7.1-3
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1225571
Environment:
Last Closed: 2015-07-29 04:53:08 UTC
Embargoed:




Links
System: Red Hat Product Errata
ID: RHSA-2015:1495
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Important: Red Hat Gluster Storage 3.1 update
Last Updated: 2015-07-29 08:26:26 UTC

Description Rahul Hinduja 2015-05-25 09:28:34 UTC
Description of problem:
=======================

As part of the history crawl, if the file already exists we print warning logs such as:

[2015-05-25 15:38:21.290046] W [client-rpc-fops.c:172:client3_3_symlink_cbk] 0-slave-client-3: remote operation failed: File exists. Path: (/.gfid/128c11ed-ab96-4404-a197-d209f3ef05e5/rc4.d/S90crond to ../init.d/crond)

This log message is printed for every symlink present, and it can be handled better. Since this is expected behaviour, we can either ignore these errors or print them only in DEBUG mode.

On my system these messages have been logged about 68k times:

[root@georep4 ~]# cat /var/log/glusterfs/geo-replication-slaves/002d5e5d-123e-47c2-8d2f-d3ca4ec21912\:gluster%3A%2F%2F127.0.0.1%3Aslave.gluster.log | grep "client-rpc-fops.c:172:client3_3_symlink_cbk" | wc
  68510  959140 15602612
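
Until these messages are moved to DEBUG, one rough way to triage the slave log is to split the symlink_cbk warnings into the expected "File exists" replays and anything unexpected. This is only a sketch; LOGFILE below is a placeholder for the actual slave session log:

LOGFILE="/var/log/glusterfs/geo-replication-slaves/<session>.gluster.log"   # placeholder path
grep "client3_3_symlink_cbk" "$LOGFILE" | grep -c  "File exists"   # expected replays during history crawl
grep "client3_3_symlink_cbk" "$LOGFILE" | grep -cv "File exists"   # anything else is worth a closer look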



Version-Release number of selected component (if applicable):
=============================================================

glusterfs-3.7.0-2.el6rhs.x86_64


How reproducible:
=================

1/1

Steps Carried:
==============

1. Create a master volume (2x3) from 3 nodes N1, N2, N3, consisting of 2 bricks each.
2. Start the master volume.
3. Create a slave volume (2x2) from 2 nodes S1, S2.
4. Start the slave volume.
5. Mount the master volume to the client
6. Create and start the georep session between master and slave
7. Copy the huge set of data from the client on master volume
8. While the data transfer is in progress, bring bricks offline and back online on nodes N1 and N2. Ensure that bricks on node N3 are not brought offline, so that one brick in each 3-way replica set stays up throughout.
9. Check the slave side log file; a rough CLI sketch of these steps follows below.
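
A rough CLI sketch of steps 1-9, assuming placeholder hostnames (N1-N3, S1, S2), brick paths and data set; none of these exact values come from the report:

# Master cluster: 2x3 volume across N1, N2, N3
gluster volume create master replica 3 \
    N1:/rhgs/b1 N2:/rhgs/b1 N3:/rhgs/b1 \
    N1:/rhgs/b2 N2:/rhgs/b2 N3:/rhgs/b2
gluster volume start master

# Slave cluster: 2x2 volume across S1, S2
gluster volume create slave replica 2 \
    S1:/rhgs/b1 S2:/rhgs/b1 S1:/rhgs/b2 S2:/rhgs/b2
gluster volume start slave

# Client: mount the master volume
mount -t glusterfs N1:/master /mnt/master

# Master node: create and start the geo-rep session
# (passwordless SSH from master to slave is assumed to be in place)
gluster system:: execute gsec_create
gluster volume geo-replication master S1::slave create push-pem
gluster volume geo-replication master S1::slave start

# Client: copy a large data set onto the master volume
cp -a /usr /mnt/master/data1 &

# While the copy is running, bounce bricks on N1 and N2 only (never N3):
pkill -f 'glusterfsd.*rhgs.b1'        # run on N1, then on N2
gluster volume start master force     # brings the killed bricks back online

# Slave node: check the session log for the symlink warnings
grep "client3_3_symlink_cbk" \
    /var/log/glusterfs/geo-replication-slaves/*.gluster.log | wc -l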


Actual results:
===============

The slave log prints these warning messages for every symlink.

Expected results:
=================

The logs should be handled better; since this condition is expected, these messages can be moved to DEBUG level if needed.

Comment 3 Aravinda VK 2015-06-12 12:26:44 UTC
Downstream patch: https://code.engineering.redhat.com/gerrit/#/c/50523/

Comment 5 Rahul Hinduja 2015-07-16 11:04:52 UTC
Verified with build: glusterfs-3.7.1-10.el6rhs.x86_64

[root@georep1 ~]# cat /var/log/glusterfs/geo-replication/master/ssh%3A%2F%2Froot%4010.70.46.154%3Agluster%3A%2F%2F127.0.0.1%3Aslave.* | grep -i "symlink_cbk"
[root@georep1 ~]# 


[root@georep5 ~]# cat /var/log/glusterfs/geo-replication-slaves/7d58d4b3-2e0e-4cd5-ac9b-dde5bccb40d7\:gluster%3A%2F%2F127.0.0.1%3Aslave.* | grep -i "symlink_cbk"
[root@georep5 ~]#

Moving the bug to verified state.

Comment 6 errata-xmlrpc 2015-07-29 04:53:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html

