Bug 860246 - afr: misleading log message "I [afr-self-heal-entry.c:638:afr_sh_entry_expunge_entry_cbk] 0-vol-entries-replicate-0: missing entry"
Summary: afr: misleading log message "I [afr-self-heal-entry.c:638:afr_sh_entry_expunge_entry_cbk] 0-vol-entries-replicate-0: missing entry"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfs
Version: 2.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: vsomyaju
QA Contact: Rahul Hinduja
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-09-25 11:35 UTC by Rahul Hinduja
Modified: 2015-03-05 00:06 UTC
CC List: 3 users

Fixed In Version: glusterfs-3.4.0qa5
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-09-23 22:33:25 UTC
Embargoed:


Attachments (Terms of Use)
brick1 glustershd.log messages (246.47 KB, text/x-log)
2012-09-25 11:35 UTC, Rahul Hinduja
no flags

Description Rahul Hinduja 2012-09-25 11:35:48 UTC
Created attachment 616991 [details]
brick1 glustershd.log messages

Description of problem:

Currently, the following message is logged for the source brick during self-heal when it finds that there are entries to be deleted from the sink brick.


"[2011-09-25 07:28:47.632951] I [afr-self-heal-entry.c:638:afr_sh_entry_expunge_entry_cbk] 0-vol-entries-replicate-0: missing entry <gfid:917eae48-3773-4829-9dde-bb91e5753256>/login.defs on vol-entries-client-0"

This message is misleading, as it suggests that entries went missing from the source brick during self-heal.
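A quick way to confirm on the source machine that nothing has actually been lost (a sketch only; the default glustershd.log location is assumed, and /rhs/brick1/etc.1 is a hypothetical brick path used purely for illustration):

# On the source machine (brick1), list the expunge messages logged by the
# self-heal daemon (default shd log location assumed):
grep 'afr_sh_entry_expunge_entry_cbk' /var/log/glusterfs/glustershd.log | tail -5

# The named entries no longer exist on the source brick because they were
# removed from the volume, so "missing" here does not indicate data loss.
# /rhs/brick1/etc.1 is a hypothetical brick path for illustration only.
ls /rhs/brick1/etc.1/login.defs 2>/dev/null || echo "absent on source brick (expected)"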

Version-Release number of selected component (if applicable):

[root@hicks entries]# gluster --version 
glusterfs 3.3.0rhs built on Sep 10 2012 00:49:11

(glusterfs-rdma-3.3.0rhs-28.el6rhs.x86_64)

How reproducible:

1/1

Steps to Reproduce:
1. Create a replica volume (1x2) from brick1 and brick2.
2. Mount the volume on the client.
3. Add entries to the volume from the mount point (for i in {1..10} ; do cp -rf /etc etc.$i; done)
4. Kill brick2.
5. Remove the entries from the mount point.
6. Bring brick2 back up (gluster volume start vol force).
7. Check "glustershd.log" on the brick1 machine (a consolidated command sketch follows these steps).
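
For reference, a rough command-line sketch of these steps (the volume name "vol", hostnames server1/server2, brick paths, and the brick PID are assumptions for illustration, not taken from this report):

# 1-2. Create and start a 1x2 replica volume, then mount it on a client
gluster volume create vol replica 2 server1:/rhs/brick1 server2:/rhs/brick2
gluster volume start vol
mount -t glusterfs server1:/vol /mnt/vol

# 3. Populate the volume from the mount point
cd /mnt/vol && for i in {1..10}; do cp -rf /etc etc.$i; done

# 4. Kill the brick2 process (PID taken from 'gluster volume status vol')
kill -KILL <brick2-glusterfsd-pid>

# 5. Remove the entries while brick2 is down
rm -rf /mnt/vol/etc.*

# 6. Bring brick2 back up so self-heal kicks in
gluster volume start vol force

# 7. Inspect the self-heal daemon log on the brick1 machine
less /var/log/glusterfs/glustershd.log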
  
Actual results:

"glustershd.log" displays this message on the machine1(brick1). This message is misleading that the entries are missing on brick1. But in actual these entries are being deleted from brick1 when brick2 was done. So the message should be very clear stating that these entries are missing on source(brick1) and to be deleted from brick2.

[2011-09-25 07:28:47.632047] I [afr-self-heal-entry.c:638:afr_sh_entry_expunge_entry_cbk] 0-vol-entries-replicate-0: missing entry <gfid:917eae48-3773-4829-9dde-bb91e5753256>/abrt on vol-entries-client-0
[2011-09-25 07:28:47.632276] I [afr-self-heal-entry.c:638:afr_sh_entry_expunge_entry_cbk] 0-vol-entries-replicate-0: missing entry <gfid:917eae48-3773-4829-9dde-bb91e5753256>/cron.weekly on vol-entries-client-0
[2011-09-25 07:28:47.632589] I [afr-self-heal-entry.c:638:afr_sh_entry_expunge_entry_cbk] 0-vol-entries-replicate-0: missing entry <gfid:917eae48-3773-4829-9dde-bb91e5753256>/rc0.d on vol-entries-client-0
[2011-09-25 07:28:47.632799] I [afr-self-heal-entry.c:638:afr_sh_entry_expunge_entry_cbk] 0-vol-entries-replicate-0: missing entry <gfid:917eae48-3773-4829-9dde-bb91e5753256>/group- on vol-entries-client-0
[2011-09-25 07:28:47.632951] I [afr-self-heal-entry.c:638:afr_sh_entry_expunge_entry_cbk] 0-vol-entries-replicate-0: missing entry <gfid:917eae48-3773-4829-9dde-bb91e5753256>/login.defs on vol-entries-client-0
[2011-09-25 07:28:47.633124] I [afr-self-heal-entry.c:638:afr_sh_entry_expunge_entry_cbk] 0-vol-entries-replicate-0: missing entry <gfid:917eae48-3773-4829-9dde-bb91e5753256>/virc on vol-entries-client-0
[2011-09-25 07:28:47.633476] I [afr-self-heal-entry.c:638:afr_sh_entry_expunge_entry_cbk] 0-vol-entries-replicate-0: missing entry <gfid:917eae48-3773-4829-9dde-bb91e5753256>/hp on vol-entries-client-0



Expected results:


Additional info:

Comment 2 Vijay Bellur 2012-10-12 16:17:51 UTC
CHANGE: http://review.gluster.org/4052 (cluster/afr : Edited log message in afr_sh_entry_expunge_entry_cbk) merged in master by Anand Avati (avati)

Comment 3 Rahul Hinduja 2013-01-11 11:49:40 UTC
Verified with the build: glusterfs-3.4.0qa5-1.el6rhs.x86_64

The log message has been changed to:

[2013-01-11 11:09:31.136371] I [afr-self-heal-entry.c:586:afr_sh_entry_expunge_entry_cbk] 0-vol-rep-replicate-0: Entry <gfid:00000000-0000-0000-0000-000000000001>/etc.9 is missing on vol-rep-client-0 and deleting from replica's other bricks

Moving the bug to the verified state.

Comment 4 Scott Haines 2013-09-23 22:33:25 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html

