Bug 1212942

Summary: client-log-level defaults to DEBUG
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Vivek Agarwal <vagarwal>
Component: core
Assignee: Gaurav Kumar Garg <ggarg>
Status: CLOSED CURRENTRELEASE
QA Contact: SATHEESARAN <sasundar>
Severity: medium
Docs Contact:
Priority: medium
Version: 2.1
CC: amukherj, hamiller, nlevinki, nsathyan, rhs-bugs, sankarshan, sasundar, smohan, storage-qa-internal, vagarwal, vbellur
Target Milestone: ---
Keywords: Patch, ZStream
Target Release: ---
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1188835
Environment:
Last Closed: 2015-12-31 07:04:26 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1188835, 1212943
Bug Blocks:

Description Vivek Agarwal 2015-04-17 17:50:57 UTC
+++ This bug was initially created as a clone of Bug #1188835 +++

Description of problem: cli.log has many 'D' (DEBUG) and 'T' (TRACE) entries.


Version-Release number of selected component (if applicable): glusterfs-cli-3.6.0.28-1.el6rhs.x86_64


How reproducible: Constant


Steps to Reproduce:
1. Look in /var/log/glusterfs/cli.log
2. Note the many 'D' and 'T' entries (a quick grep sketch follows below)
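
A quick way to gauge how many such entries are present (an illustrative sketch; it assumes the standard gluster log format, where the single-letter severity follows the timestamp):

# grep -c '] D ' /var/log/glusterfs/cli.log
# grep -c '] T ' /var/log/glusterfs/cli.log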

Actual results: cli.log fills up with 'D' (DEBUG) and 'T' (TRACE) entries even though no log level was explicitly configured.


Expected results: With the default settings, cli.log should contain no 'D' or 'T' entries.


Additional info: From the customer -

If gluster volume info only lists non-default settings, then the log level must be set to trace or debug by default.

Before setting it to WARNING the volume shows no diagnostics settings at all.
After setting it to WARNING it shows up in the volume info:

# gluster volume info SCC

Volume Name: SCC
Type: Replicate
Volume ID: 5c07dc05-0179-4424-840e-2546009b6ae2
Status: Started
Snap Volume: no
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: rhs-01:/SITE15/SCC/brick1
Brick2: rhs-02:/SITE15/SCC/brick2
Options Reconfigured:
diagnostics.client-log-level: WARNING
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.write-behind: off
network.ping-timeout: 20
performance.readdir-ahead: on
snap-max-hard-limit: 256
snap-max-soft-limit: 90
auto-delete: disable

--- Additional comment from Harold Miller on 2015-02-03 14:39:27 EST ---

This may also be a case where diagnostics.client-log-level was set, and the graph failed to display that modification.

--- Additional comment from Atin Mukherjee on 2015-04-17 10:12:42 EDT ---

The bug heading and description do not really match: are we talking about the cli's default log level or about client-log-level? The two are different. If the concern is the debug and trace entries in the cli log, then the upstream patch http://review.gluster.org/9383 already addresses it. The fix will be available in RHGS 3.1. Based on your input we can clone this bug for 3.1.
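
To illustrate the difference (the volume name is taken from the output above, and WARNING is only an example level): diagnostics.client-log-level is a per-volume option that controls the log level of client (mount) processes, while the cli has its own log level that can be passed per invocation:

# gluster volume set SCC diagnostics.client-log-level WARNING
# gluster --log-level=WARNING volume info SCC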

--- Additional comment from Harold Miller on 2015-04-17 11:44:14 EDT ---

The customer complaint is regarding the /var/log/glusterfs/cli.log file.
I was mistaken in assuming that cli stood for client.
The cli.log does not appear to be listed in table 15.2 of our current Administrators Guide for RHS 3.x, so I was unsure.

What is the correct way to set the cli.log log level?

--- Additional comment from Vivek Agarwal on 2015-04-17 11:50:08 EDT ---

Since this is merged upstream and will be available in 3.1, marking this for 3.1 and moving the state to POST.

Can be cloned for 2.1/3.0, based on Harold's feedback.

--- Additional comment from Harold Miller on 2015-04-17 12:58:19 EDT ---

Vivek - if this is a short/easy fix, then please backport it as needed. We've seen at least two customer tickets regarding the 'excessive logging' complaint.

--- Additional comment from Vivek Agarwal on 2015-04-17 13:41:50 EDT ---

Which branches do you need this fix for?

--- Additional comment from Harold Miller on 2015-04-17 13:46:12 EDT ---

Vivek - 2.1.x, 3.0.x, 3.1

Comment 1 Vivek Agarwal 2015-04-17 17:54:09 UTC
Atin, Please port this to 2.1.

Comment 2 SATHEESARAN 2015-12-30 13:49:13 UTC
This bug was fixed in RHGS 3.1.
Since RHGS 2.1 is EOL'ed, the fix will not be available for RHGS 2.1.z.

This bug could be CLOSED - CURRENTRELEASE

Comment 3 Atin Mukherjee 2015-12-31 07:04:26 UTC
Based on #c2, closing this bug.