Bug 1009852 - AFR : Improvements needed in log messages
Status: CLOSED EOL
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: replicate
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Assigned To: Ravishankar N
QA Contact: storage-qa-internal@redhat.com
Keywords: ZStream
Depends On:
Blocks:
 
Reported: 2013-09-19 06:13 EDT by spandura
Modified: 2016-09-17 08:15 EDT
CC: 4 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-12-03 12:16:54 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description spandura 2013-09-19 06:13:53 EDT
Description of problem:
=========================
This is a generic bug used to track improvements needed in AFR log messages. 

Version-Release number of selected component (if applicable):
=============================================================
glusterfs 3.4.0.33rhs built on Sep  8 2013 13:22:46
Comment 2 spandura 2013-09-19 06:24:54 EDT
Adding the child-id to the log messages:
=========================================

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Case1:- "afr_sh_print_pending_matrix" 
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Currently we are not printing the child id in the "afr_sh_print_pending_matrix" log message. 

Actual output:
============
[2013-09-10 12:35:58.802127] D [afr-self-heal-common.c:148:afr_sh_print_pending_matrix] 0-vol_dis_1_rep_2-replicate-0: pending_matrix: [ 0 0 ]
[2013-09-10 12:35:58.802145] D [afr-self-heal-common.c:148:afr_sh_print_pending_matrix] 0-vol_dis_1_rep_2-replicate-0: pending_matrix: [ 0 0 ]
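
A minimal sketch of the kind of change being asked for here (assuming the usual AFR layout where priv is the afr_private_t, priv->children[i]->name carries the client xlator name, and pending_matrix is the per-child matrix already being printed; this is illustrative, not an actual patch):

        /* sketch: log each pending-matrix row along with the child it belongs
         * to, so the two otherwise identical lines above become distinguishable */
        int i = 0, j = 0;

        for (i = 0; i < priv->child_count; i++) {
                char  row[256] = {0,};
                char *ptr      = row;

                for (j = 0; j < priv->child_count; j++)
                        ptr += sprintf (ptr, "%d ", pending_matrix[i][j]);

                gf_log (this->name, GF_LOG_DEBUG,
                        "pending_matrix of %s: [ %s]",
                        priv->children[i]->name, row);
        }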

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Case2 :- "afr_lookup_set_self_heal_params_by_xattr"
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Currently we are not printing the child id in the "afr_lookup_set_self_heal_params_by_xattr" log message. 

Actual Output:
=============
[2013-09-10 12:35:58.797387] D [afr-common.c:1202:afr_lookup_set_self_heal_params_by_xattr] 0-vol_dis_1_rep_2-replicate-0: metadata self-heal is pending for /b.
[2013-09-10 12:35:58.797423] D [afr-common.c:1202:afr_lookup_set_self_heal_params_by_xattr] 0-vol_dis_1_rep_2-replicate-0: metadata self-heal is pending for /b.

This applies to data and entry self-heal as well. 

Expected Output:
================
Add the child-id to the log message.
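
A sketch of one way the lookup-time message could carry the child id (again assuming priv->children[i]->name and local->loc.path are available at that point, and that "i" is a hypothetical index for the child whose changelog xattr flagged the heal):

        /* sketch: say which child's changelog xattr indicated the pending heal */
        gf_log (this->name, GF_LOG_DEBUG,
                "metadata self-heal is pending for %s, as indicated by %s",
                local->loc.path, priv->children[i]->name);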
Comment 3 spandura 2013-09-19 06:47:06 EDT
afr_launch_self_heal 
=========================================================================
Reporting "self-heal triggered" message in "afr_launch_self_heal" is quiet confusing. This message should be reported when we actually start the sync process. 

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Log message: 
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
[2013-09-10 12:35:58.797530] D [afr-common.c:1434:afr_launch_self_heal] 0-vol_dis_1_rep_2-replicate-0: background  meta-data self-heal triggered. path: /b, reason: lookup detected pending operations
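
One illustrative way to address this (a sketch, not the actual fix): keep a low-key message at launch time and emit "started" only from the point where the sync actually begins. Here sh_type and reason are hypothetical placeholders for strings the function already has on hand:

        /* in afr_launch_self_heal: the heal has only been queued at this point */
        gf_log (this->name, GF_LOG_DEBUG,
                "background %s self-heal queued on %s, reason: %s",
                sh_type, local->loc.path, reason);

        /* ...and later, where the sync actually starts */
        gf_log (this->name, GF_LOG_DEBUG,
                "%s self-heal started on %s", sh_type, local->loc.path);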
Comment 4 spandura 2013-09-19 06:53:01 EDT
Adding the child-id to the log messages:
=========================================

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Case3:- "afr_sh_data_post_nonblocking_inodelk_cbk" 
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Currently we are not printing the child id in the "afr_sh_data_post_nonblocking_inodelk_cbk" log message. 

Actual Output:
=============
[2013-09-10 12:35:58.799866] D [afr-self-heal-data.c:1337:afr_sh_data_post_nonblocking_inodelk_cbk] 0-vol_dis_1_rep_2-replicate-0: Non Blocking data inodelks done for /b by 9461cf76647f0000. Proceeding to self-heal
[2013-09-10 12:35:58.800626] D [afr-self-heal-data.c:1337:afr_sh_data_post_nonblocking_inodelk_cbk] 0-vol_dis_1_rep_2-replicate-0: Non Blocking data inodelks done for /b by 9461cf76647f0000. Proceeding to self-heal
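
A sketch of how the child information could be added here (assuming int_lock->locked_nodes[] holds a per-child flag for the inodelks that were acquired, as AFR's internal-lock code does; names are illustrative, not the actual patch):

        /* sketch: name the children on which the non-blocking inodelks are held */
        int i = 0;

        for (i = 0; i < priv->child_count; i++) {
                if (!int_lock->locked_nodes[i])
                        continue;
                gf_log (this->name, GF_LOG_DEBUG,
                        "non-blocking data inodelk held on %s for %s",
                        priv->children[i]->name, local->loc.path);
        }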
Comment 5 spandura 2013-09-19 08:30:22 EDT
afr_log_self_heal_completion_status: 
=======================================
Currently the following is the completion status log message:

[2013-09-10 12:35:58.803508] I [afr-self-heal-common.c:2840:afr_log_self_heal_completion_status] 0-vol_dis_1_rep_2-replicate-0:  metadata self heal  is successfully completed, backgroung data self heal  is successfully completed,  from vol_dis_1_rep_2-client-0 with 2 2  sizes - Pending matrix:  [ [ 0 0 ] [ 0 0 ] ] on /b

Expected result:
================
I [afr-self-heal-common.c:2798:afr_log_self_heal_completion_status] 0-volume1-replicate-0: on <gfid:e03553c7-4484-4e00-8f57-6dac8962f8c5> : 

metadata - Pending matrix: [ [ 0 0 0 0 0 0 ] [ 0 0 0 0 0 0 ] [ 0 0 0 0 0 0 ] [ 3 2 2 0 0 0 ] [ 2 2 2 0 0 0 ] [ 2 2 2 0 0 0 ] ]
metadata self heal is successfully completed from source volume1-client-3 to volume1-client-0, currently down subvolumes are volume1-client-1, volume1-client-2.

data - Pending matrix: [ [ 0 0 0 0 0 0 ] [ 0 0 0 0 0 0 ] [ 0 0 0 0 0 0 ] [ 3 2 2 0 0 0 ] [ 2 2 2 0 0 0 ] [ 2 2 2 0 0 0 ] ], with 0 bytes on volume1-client-0, 2 bytes on volume1-client-3, 2 bytes on volume1-client-4, 2 bytes on volume1-client-5,

foreground data self heal is successfully completed, data self heal from volume1-client-3 to sinks volume1-client-0,  currently down subvolumes are volume1-client-1, volume1-client-2,

Since the pending matrix and the file sizes on the bricks reflect the state before self-heal, they should be reported before the data|meta-data|entry self-heal completion status in the log message.
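
A rough sketch of the proposed ordering, building the message so that the pre-heal pending matrix and sizes come before the completion status. All of pending_matrix_str, sizes_str, status_str, source_name, sinks_str and down_str are hypothetical placeholders for strings the function would assemble; this is not the actual implementation:

        /* sketch: pre-heal state first, completion status (source/sinks/down) last */
        char msg[1024] = {0,};
        int  off       = 0;

        off += snprintf (msg + off, sizeof (msg) - off,
                         "data - Pending matrix: %s, sizes: %s, ",
                         pending_matrix_str, sizes_str);
        off += snprintf (msg + off, sizeof (msg) - off,
                         "data self-heal %s from source %s to sinks %s, "
                         "currently down subvolumes are %s",
                         status_str, source_name, sinks_str, down_str);

        gf_log (this->name, GF_LOG_INFO, "on %s : %s", local->loc.path, msg);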
Comment 6 Vivek Agarwal 2015-12-03 12:16:54 EST
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release against which this issue was reported is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.
