Bug 993649 - afr: glustershd.log prints the gfid instead of the actual entry
Status: CLOSED WONTFIX
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: replicate
Version: 2.1
Hardware: x86_64 Linux
Priority: unspecified   Severity: medium
Assigned To: Bug Updates Notification Mailing List
storage-qa-internal@redhat.com
Depends On:
Blocks:
Reported: 2013-08-06 07:00 EDT by Rahul Hinduja
Modified: 2016-09-17 08:10 EDT (History)
CC List: 5 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-08-26 17:41:11 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Rahul Hinduja 2013-08-06 07:00:27 EDT
Description of problem:
=======================

Sometimes glustershd.log prints the gfid instead of the actual entry name, as follows:

[2013-08-06 10:52:13.853477] I [afr-self-heal-common.c:2741:afr_log_self_heal_completion_status] 0-dis_rep_vol-replicate-5:  metadata self heal  is successfully completed, foreground data self heal  is successfully completed,  from dis_rep_vol-client-10 with 2048 0  sizes - Pending matrix:  [ [ 0 3 ] [ 0 0 ] ] on <gfid:0a95c4da-8fe9-4c50-979e-ffcb7c1a270f>
[2013-08-06 10:52:13.857491] I [afr-self-heal-common.c:2741:afr_log_self_heal_completion_status] 0-dis_rep_vol-replicate-3:  metadata self heal  is successfully completed, foreground data self heal  is successfully completed,  from dis_rep_vol-client-6 with 6144 0  sizes - Pending matrix:  [ [ 0 3 ] [ 0 0 ] ] on <gfid:a0adaa66-7aa4-48da-826f-41521effff4f>
[2013-08-06 10:52:13.860258] I [afr-self-heal-common.c:2741:afr_log_self_heal_completion_status] 0-dis_rep_vol-replicate-4:  metadata self heal  is successfully completed, foreground data self heal  is successfully completed,  from dis_rep_vol-client-8 with 2048 0  sizes - Pending matrix:  [ [ 0 4 ] [ 0 0 ] ] on <gfid:006bdd37-b498-4d07-a25b-b351cf3be1eb>
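
While triaging, a gfid from these messages can usually be resolved to a path by hand on any brick that holds the entry: every regular file on a brick has a hard link under .glusterfs/<first two hex chars>/<next two hex chars>/<gfid>, and a directory has a symlink there instead. A rough sketch using the first gfid from the excerpt above; the brick path /rhs/brick1 is a placeholder:

# on a brick that holds the entry (brick path is hypothetical)
ls -i /rhs/brick1/.glusterfs/0a/95/0a95c4da-8fe9-4c50-979e-ffcb7c1a270f
# take the inode number printed above and look for its other hard link(s)
find /rhs/brick1 -inum <inode-number> -not -path "*/.glusterfs/*"
# for a directory gfid the .glusterfs entry is a symlink, so readlink is enough
readlink /rhs/brick1/.glusterfs/0a/95/0a95c4da-8fe9-4c50-979e-ffcb7c1a270f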


Version-Release number of selected component (if applicable):
=============================================================

glusterfs-server-3.4.0.15rhs-1.el6rhs.x86_64
glusterfs-debuginfo-3.4.0.15rhs-1.el6rhs.x86_64
glusterfs-fuse-3.4.0.15rhs-1.el6rhs.x86_64
glusterfs-geo-replication-3.4.0.15rhs-1.el6rhs.x86_64
glusterfs-rdma-3.4.0.15rhs-1.el6rhs.x86_64
glusterfs-3.4.0.15rhs-1.el6rhs.x86_64
glusterfs-devel-3.4.0.15rhs-1.el6rhs.x86_64


Steps Carried:
==============
1. Created and started a 6*2 distributed-replicate volume from 4 servers.
2. Mounted the volume on a fuse client and an nfs client.
3. Created directories f and n from the fuse mount.
4. cd'd into f from the fuse mount and into n from the nfs mount.
5. Created a huge number of directories and files from both the fuse and nfs mounts.
6. While file and directory creation was in progress, ran killall glusterd glusterfsd glusterfs on server2 and killed the brick processes on server4.
7. After a while, brought the bricks online on server4 and restarted glusterd on server2 (a rough command sketch follows this list).
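
Roughly, the steps above correspond to commands like the following; hostnames (server1-server4), brick paths and mount points are placeholders, while the volume name dis_rep_vol is taken from the logs:

# create and start a 6*2 distributed-replicate volume (12 bricks, replica 2)
gluster volume create dis_rep_vol replica 2 \
    server1:/rhs/brick1 server2:/rhs/brick1 server3:/rhs/brick1 server4:/rhs/brick1 \
    server1:/rhs/brick2 server2:/rhs/brick2 server3:/rhs/brick2 server4:/rhs/brick2 \
    server1:/rhs/brick3 server2:/rhs/brick3 server3:/rhs/brick3 server4:/rhs/brick3
gluster volume start dis_rep_vol

# fuse mount on one client, gluster nfs (v3) mount on another
mount -t glusterfs server1:/dis_rep_vol /mnt/fuse
mount -t nfs -o vers=3 server1:/dis_rep_vol /mnt/nfs

# while the file/directory creation is running
killall glusterd glusterfsd glusterfs      # on server2
kill <brick-pids>                          # on server4

# later, bring the bricks back and restart glusterd
gluster volume start dis_rep_vol force     # restarts the killed bricks
service glusterd start                     # on server2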
 

Actual results:
===============

[2013-08-06 10:59:01.412976] I [afr-self-heal-data.c:817:afr_sh_data_fix] 0-dis_rep_vol-replicate-0: no active sinks for performing self-heal on file <gfid:12533ad7-9aea-4f03-8599-6597b0fffd99>
[2013-08-06 10:59:01.413713] I [afr-self-heal-common.c:2741:afr_log_self_heal_completion_status] 0-dis_rep_vol-replicate-0:  foreground data self heal  is successfully completed,  from dis_rep_vol-client-0 with 1024 1024  sizes - Pending matrix:  [ [ 1 1 ] [ 1 1 ] ] on <gfid:12533ad7-9aea-4f03-8599-6597b0fffd99>
[2013-08-06 10:59:01.415470] I [afr-self-heal-data.c:817:afr_sh_data_fix] 0-dis_rep_vol-replicate-0: no active sinks for performing self-heal on file <gfid:82aa69e0-3aa8-4562-b453-c2be43657ab6>
[2013-08-06 10:59:01.416277] I [afr-self-heal-common.c:2741:afr_log_self_heal_completion_status] 0-dis_rep_vol-replicate-0:  foreground data self heal  is successfully completed,  from dis_rep_vol-client-0 with 4096 4096  sizes - Pending matrix:  [ [ 1 1 ] [ 1 1 ] ] on <gfid:82aa69e0-3aa8-4562-b453-c2be43657ab6>
[2013-08-06 10:59:01.418424] I [afr-self-heal-data.c:817:afr_sh_data_fix] 0-dis_rep_vol-replicate-0: no active sinks for performing self-heal on file <gfid:a0f06b42-80f6-44e2-b94a-49a8491b0ac8>
[2013-08-06 10:59:01.419122] I [afr-self-heal-common.c:2741:afr_log_self_heal_completion_status] 0-dis_rep_vol-replicate-0:  foreground data self heal  is successfully completed,  from dis_rep_vol-client-0 with 5120 5120  sizes - Pending matrix:  [ [ 1 1 ] [ 1 1 ] ] on <gfid:a0f06b42-80f6-44e2-b94a-49a8491b0ac8>
[2013-08-06 10:59:01.420872] I [afr-self-heal-data.c:817:afr_sh_data_fix] 0-dis_rep_vol-replicate-0: no active sinks for performing self-heal on file <gfid:17ab0ced-4b8c-44d6-83f3-06601b80fa19>
[2013-08-06 10:59:01.421429] I [afr-self-heal-common.c:2741:afr_log_self_heal_completion_status] 0-dis_rep_vol-replicate-0:  foreground data self heal  is successfully completed,  from dis_rep_vol-client-0 with 8192 8192  sizes - Pending matrix:  [ [ 1 1 ] [ 1 1 ] ] on <gfid:17ab0ced-4b8c-44d6-83f3-06601b80fa19>



Expected results:
=================

Logs should be self-explanatory and print the names of the entries rather than their gfids.
Comment 2 Pranith Kumar K 2015-08-26 17:41:11 EDT
Storing the mapping of gfid to path in an xattr led to a severe drop in performance in the I/O path, so I don't think we will go that route; we will have to live with this inconvenience.
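
A possible on-demand alternative, for releases whose fuse client supports the aux-gfid-mount option, is to resolve a logged gfid only when needed rather than storing the mapping; the mount point below is a placeholder and availability of the option in this build is an assumption:

# mount the volume with gfid access enabled (option availability in this build is an assumption)
mount -t glusterfs -o aux-gfid-mount server1:/dis_rep_vol /mnt/gfid-access
# ask for the ancestry path of the gfid seen in the log
getfattr -n glusterfs.ancestry.path -e text /mnt/gfid-access/.gfid/0a95c4da-8fe9-4c50-979e-ffcb7c1a270f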
