Bug 965407 - "glustershd.log" logs D and T messages even when the volume is reset
"glustershd.log" logs D and T messages even when the volume is reset
Status: CLOSED EOL
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: 2.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Assigned To: Bug Updates Notification Mailing List
QA Contact: storage-qa-internal@redhat.com
Depends On:
Blocks:
Reported: 2013-05-21 04:06 EDT by Rahul Hinduja
Modified: 2015-12-03 12:15 EST
CC List: 3 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-12-03 12:15:59 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Rahul Hinduja 2013-05-21 04:06:37 EDT
Description of problem:
=======================

Created a volume and set the client log level to TRACE. After some time, the volume was reset using "gluster volume reset <vol-name>", but "glustershd.log" still continues to log D (debug) and T (trace) messages.
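
For reference, a minimal sketch of the command sequence involved (the "volume set" line is an assumption about how TRACE was originally enabled; the reset command is shown verbatim further below):

# assumed: TRACE was enabled via the client log-level option (glustershd runs as an internal client)
[root@rhs-client11 ~]# gluster volume set vol-dis-rep diagnostics.client-log-level TRACE
# reset all reconfigured options back to their defaults
[root@rhs-client11 ~]# gluster volume reset vol-dis-rep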


Initial volume info:
====================

[root@rhs-client11 ~]# gluster volume info 
 
Volume Name: vol-dis-rep
Type: Distributed-Replicate
Volume ID: 063a15d4-e0fb-4fb6-8ee7-957ac3af5974
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.36.35:/rhs/brick1/b1
Brick2: 10.70.36.36:/rhs/brick1/b2
Brick3: 10.70.36.35:/rhs/brick1/b3
Brick4: 10.70.36.36:/rhs/brick1/b4
Brick5: 10.70.36.35:/rhs/brick1/b5
Brick6: 10.70.36.36:/rhs/brick1/b6
Brick7: 10.70.36.37:/rhs/brick1/b7
Brick8: 10.70.36.38:/rhs/brick1/b8
Brick9: 10.70.36.37:/rhs/brick1/b9
Brick10: 10.70.36.38:/rhs/brick1/b10
Brick11: 10.70.36.37:/rhs/brick1/b11
Brick12: 10.70.36.38:/rhs/brick1/b12
Options Reconfigured:
diagnostics.client-log-level: TRACE
[root@rhs-client11 ~]# 

Log level is reset:
====================
[root@rhs-client11 ~]# gluster volume reset vol-dis-rep 
volume reset: success
[root@rhs-client11 ~]#

After reset volume info:
======================== 
[root@rhs-client11 ~]# gluster volume info 
 
Volume Name: vol-dis-rep
Type: Distributed-Replicate
Volume ID: 063a15d4-e0fb-4fb6-8ee7-957ac3af5974
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.36.35:/rhs/brick1/b1
Brick2: 10.70.36.36:/rhs/brick1/b2
Brick3: 10.70.36.35:/rhs/brick1/b3
Brick4: 10.70.36.36:/rhs/brick1/b4
Brick5: 10.70.36.35:/rhs/brick1/b5
Brick6: 10.70.36.36:/rhs/brick1/b6
Brick7: 10.70.36.37:/rhs/brick1/b7
Brick8: 10.70.36.38:/rhs/brick1/b8
Brick9: 10.70.36.37:/rhs/brick1/b9
Brick10: 10.70.36.38:/rhs/brick1/b10
Brick11: 10.70.36.37:/rhs/brick1/b11
Brick12: 10.70.36.38:/rhs/brick1/b12
[root@rhs-client11 ~]# 


Log snippet after volume reset:
===============================

[2013-05-21 17:29:27.852857] T [syncop.c:908:syncop_readdir_cbk] 0-vol-dis-rep-replicate-2: adding entry=., count=0
[2013-05-21 17:29:27.852881] T [syncop.c:908:syncop_readdir_cbk] 0-vol-dis-rep-replicate-2: adding entry=.., count=1
[2013-05-21 17:29:27.852956] T [rpc-clnt.c:1301:rpc_clnt_record] 0-vol-dis-rep-client-4: Auth Info: pid: 4294967295, uid: 0, gid: 0, owner: 9491ece84a7f0000
[2013-05-21 17:29:27.852981] T [rpc-clnt.c:1181:rpc_clnt_record_build_header] 0-rpc-clnt: Request fraglen 108, payload: 40, rpc hdr: 68
[2013-05-21 17:29:27.853026] T [rpc-clnt.c:1498:rpc_clnt_submit] 0-rpc-clnt: submitted request (XID: 0x47x Program: GlusterFS 3.3, ProgVers: 330, Proc: 28) to rpc-transport (vol-dis-rep-client-4)
[2013-05-21 17:29:27.853272] T [rpc-clnt.c:669:rpc_clnt_reply_init] 0-vol-dis-rep-client-4: received rpc message (RPC XID: 0x47x Program: GlusterFS 3.3, ProgVers: 330, Proc: 28) from rpc-transport (vol-dis-rep-client-4)
[2013-05-21 17:29:27.853325] D [afr-self-heald.c:1138:afr_dir_crawl] 0-vol-dis-rep-replicate-2: Crawl completed on vol-dis-rep-client-4
[2013-05-21 17:30:09.856185] D [client-handshake.c:185:client_start_ping] 0-vol-dis-rep-client-0: returning as transport is already disconnected OR there are no frames (0 || 0)
[2013-05-21 17:30:09.856236] D [client-handshake.c:185:client_start_ping] 0-vol-dis-rep-client-2: returning as transport is already disconnected OR there are no frames (0 || 0)
[2013-05-21 17:30:09.856245] D [client-handshake.c:185:client_start_ping] 0-vol-dis-rep-client-4: returning as transport is already disconnected OR there are no frames (0 || 0)


Version-Release number of selected component (if applicable):
==============================================================


[root@rhs-client11 ~]# rpm -qa | grep gluster | grep 3.4.0
glusterfs-fuse-3.4.0.8rhs-1.el6rhs.x86_64
glusterfs-geo-replication-3.4.0.8rhs-1.el6rhs.x86_64
glusterfs-rdma-3.4.0.8rhs-1.el6rhs.x86_64
glusterfs-3.4.0.8rhs-1.el6rhs.x86_64
glusterfs-server-3.4.0.8rhs-1.el6rhs.x86_64
[root@rhs-client11 ~]# 

Steps to Reproduce:
===================
1. Create a 6x2 distributed-replicate volume and mount it on a client (FUSE and NFS).
2. Set diagnostics.client-log-level to TRACE.
3. Run "gluster volume heal <vol-name> info"; glustershd.log logs D and T messages as expected.
4. Reset the volume using "gluster volume reset <vol-name>"; the reset succeeds.
5. Run "gluster volume heal <vol-name> info" again; glustershd.log still logs D and T messages (see the check sketched after this list).
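
A quick way to confirm step 5, as a sketch (assumes the default self-heal daemon log path /var/log/glusterfs/glustershd.log and the "] D [" / "] T [" level field visible in the log snippet above):

[root@rhs-client11 ~]# gluster volume heal vol-dis-rep info
[root@rhs-client11 ~]# grep -c ' [DT] \[' /var/log/glusterfs/glustershd.log
# a count that keeps growing after the reset indicates D/T messages are still being written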

Actual results:
===============

[2013-05-21 17:29:27.853272] T [rpc-clnt.c:669:rpc_clnt_reply_init] 0-vol-dis-rep-client-4: received rpc message (RPC XID: 0x47x Program: GlusterFS 3.3, ProgVers: 330, Proc: 28) from rpc-transport (vol-dis-rep-client-4)
[2013-05-21 17:29:27.853325] D [afr-self-heald.c:1138:afr_dir_crawl] 0-vol-dis-rep-replicate-2: Crawl completed on vol-dis-rep-client-4

Expected results:
=================
Since the volume reset succeeds, glustershd.log should no longer log D (debug) and T (trace) messages.
Comment 3 Vivek Agarwal 2015-12-03 12:15:59 EST
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.
