Bug 1231150 - After resetting diagnostics.client-log-level, DEBUG messages are still logged in the scrubber log
Summary: After resetting diagnostics.client-log-level, DEBUG messages are still logged in the scrubber log
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: bitrot
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.1.3
Assignee: Satish Mohan
QA Contact: Sweta Anandpara
URL:
Whiteboard:
Depends On:
Blocks: 1216951 1252696 1299184
 
Reported: 2015-06-12 09:34 UTC by RajeshReddy
Modified: 2016-09-17 14:23 UTC
CC: 11 users

Fixed In Version: glusterfs-3.7.9-2
Doc Type: Bug Fix
Doc Text:
If the diagnostics.client-log-level option was set to DEBUG and then reset to its default value, logging continued to occur at the DEBUG level in the scrubber and the bitrot log files. This update ensures that log levels revert to the default levels as expected.
Clone Of:
: 1252696 (view as bug list)
Environment:
Last Closed: 2016-06-23 04:54:08 UTC
Embargoed:


Attachments: None


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:1240 0 normal SHIPPED_LIVE Red Hat Gluster Storage 3.1 Update 3 2016-06-23 08:51:28 UTC

Description RajeshReddy 2015-06-12 09:34:07 UTC
Description of problem:
=======================
After resetting diagnostics.client-log-level, DEBUG messages are still logged in the scrubber log.

Version-Release number of selected component (if applicable):
==========================
glusterfs-server-3.7.1-1

How reproducible:


Steps to Reproduce:
================
1. Create a gluster volume and enable bitrot on it.
2. Set the client log level to DEBUG using gluster volume set <vol> diagnostics.client-log-level DEBUG, and after some time reset it using gluster volume reset <vol> diagnostics.client-log-level (see the command sketch below).
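A minimal command sequence for the steps above (the volume name, brick paths, and replica count are illustrative assumptions, not taken from this report):

# Step 1: create and start a volume, then enable bitrot on it
gluster volume create testvol replica 3 server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1
gluster volume start testvol
gluster volume bitrot testvol enable

# Step 2: raise the client log level to DEBUG, wait for the scrubber to log for a while,
# then reset the option back to its default
gluster volume set testvol diagnostics.client-log-level DEBUG
gluster volume reset testvol diagnostics.client-log-level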


Actual results:
=============
After the reset, DEBUG messages still appear in the scrubber log.
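One way to check this, assuming the default scrubber log location (/var/log/glusterfs/scrub.log is an assumption, adjust for your installation) and the usual gluster log format in which DEBUG lines carry the level letter D after the timestamp:

# show recent DEBUG entries in the scrubber log; new hits after the reset indicate the bug
grep "] D \[" /var/log/glusterfs/scrub.log | tail -n 5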


Expected results:
After the reset, the scrubber log should revert to the default log level (INFO) and no new DEBUG messages should appear.

Additional info:

Comment 3 Gaurav Kumar Garg 2015-07-27 05:42:32 UTC
hi,

this doc text looks good to me.

Comment 4 Gaurav Kumar Garg 2015-08-12 06:20:12 UTC
An upstream patch is already available for this bug: http://review.gluster.org/#/c/11887/

Comment 6 Mike McCune 2016-03-28 22:38:41 UTC
This bug was accidentally moved from POST to MODIFIED via an error in automation. Please see mmccune with any questions.

Comment 7 Gaurav Kumar Garg 2016-04-11 08:54:51 UTC
downstream patch url: https://code.engineering.redhat.com/gerrit/71848

Comment 9 Sweta Anandpara 2016-04-27 04:18:44 UTC
Tested and verified this on build 3.7.9-2.

Setting diagnostics.client-log-level to DEBUG and then resetting it back to the default now changes the logging of the bitrot logs as expected: the DEBUG messages that would otherwise appear stop once the option is set back to its default, i.e., INFO.

Moving this to fixed in 3.1.3
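A hedged sketch of the log-side check behind this verification (the bitd and scrubber log paths are assumed defaults and are not shown in the transcript below):

# count DEBUG entries before the reset, reset the option, trigger some bitrot/scrubber
# activity, then confirm the DEBUG counts no longer grow
grep -c "] D \[" /var/log/glusterfs/bitd.log /var/log/glusterfs/scrub.log
gluster volume reset ozone diagnostics.client-log-level
gluster volume bitrot ozone scrub-throttle normal
grep -c "] D \[" /var/log/glusterfs/bitd.log /var/log/glusterfs/scrub.log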


[root@dhcp35-210 glusterfs]# 
[root@dhcp35-210 glusterfs]# gluster peer status
Number of Peers: 3

Hostname: 10.70.35.85
Uuid: c9550322-c0ef-45e6-ad20-f38658a5ce54
State: Peer in Cluster (Connected)

Hostname: 10.70.35.137
Uuid: 35426000-dad1-416f-b145-f25049f5036e
State: Peer in Cluster (Connected)

Hostname: 10.70.35.13
Uuid: a756f3da-7896-4970-a77d-4829e603f773
State: Peer in Cluster (Connected)
[root@dhcp35-210 glusterfs]# 
[root@dhcp35-210 glusterfs]# 
[root@dhcp35-210 glusterfs]# rpm -qa | grep gluster
glusterfs-api-3.7.9-2.el7rhgs.x86_64
glusterfs-3.7.9-2.el7rhgs.x86_64
glusterfs-geo-replication-3.7.9-2.el7rhgs.x86_64
glusterfs-client-xlators-3.7.9-2.el7rhgs.x86_64
glusterfs-server-3.7.9-2.el7rhgs.x86_64
glusterfs-rdma-3.7.9-2.el7rhgs.x86_64
python-gluster-3.7.5-19.el7rhgs.noarch
glusterfs-fuse-3.7.9-2.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
vdsm-gluster-4.16.30-1.3.el7rhgs.noarch
glusterfs-libs-3.7.9-2.el7rhgs.x86_64
gluster-nagios-addons-0.2.6-1.el7rhgs.x86_64
glusterfs-cli-3.7.9-2.el7rhgs.x86_64
[root@dhcp35-210 glusterfs]#
[root@dhcp35-210 glusterfs]# gluster v info 
 
Volume Name: ozone
Type: Distributed-Replicate
Volume ID: 62e8dd2d-75bc-4b77-aafb-a961f8010839
Status: Started
Number of Bricks: 4 x 3 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.35.210:/bricks/brick0/ozone
Brick2: 10.70.35.85:/bricks/brick0/ozone
Brick3: 10.70.35.137:/bricks/brick0/ozone
Brick4: 10.70.35.210:/bricks/brick1/ozone
Brick5: 10.70.35.85:/bricks/brick1/ozone
Brick6: 10.70.35.137:/bricks/brick1/ozone
Brick7: 10.70.35.210:/bricks/brick2/ozone
Brick8: 10.70.35.85:/bricks/brick2/ozone
Brick9: 10.70.35.137:/bricks/brick2/ozone
Brick10: 10.70.35.210:/bricks/brick3/ozone
Brick11: 10.70.35.85:/bricks/brick3/ozone
Brick12: 10.70.35.137:/bricks/brick3/ozone
Options Reconfigured:
diagnostics.client-log-level: DEBUG
features.scrub-throttle: normal
features.scrub-freq: hourly
features.scrub: Active
features.bitrot: on
performance.readdir-ahead: on
[root@dhcp35-210 glusterfs]# 
[root@dhcp35-210 glusterfs]# 
[root@dhcp35-210 glusterfs]# gluster v bitrot ozone scrub-throttle aggressive
volume bitrot: success
[root@dhcp35-210 glusterfs]# gluster v bitrot ozone scrub-throttle normal
volume bitrot: success
[root@dhcp35-210 glusterfs]# gluster v bitrot ozone scrub-throttle aggressive
volume bitrot: success
[root@dhcp35-210 glusterfs]#
[root@dhcp35-210 glusterfs]# gluster v get ozone all | grep log-level
diagnostics.brick-log-level             INFO                                    
diagnostics.client-log-level            DEBUG                                   
diagnostics.brick-sys-log-level         CRITICAL                                
diagnostics.client-sys-log-level        CRITICAL                                
[root@dhcp35-210 glusterfs]# 
[root@dhcp35-210 glusterfs]# 
[root@dhcp35-210 glusterfs]# gluster v reset ozone diagnostics.client-log-level
volume reset: success: reset volume successful
[root@dhcp35-210 glusterfs]# 
[root@dhcp35-210 glusterfs]# 
[root@dhcp35-210 glusterfs]# gluster v get ozone all | grep log-level
diagnostics.brick-log-level             INFO                                    
diagnostics.client-log-level            INFO                                    
diagnostics.brick-sys-log-level         CRITICAL                                
diagnostics.client-sys-log-level        CRITICAL                                
[root@dhcp35-210 glusterfs]# 
[root@dhcp35-210 glusterfs]# 
[root@dhcp35-210 glusterfs]# gluster v bitrot ozone scrub-throttle normal
volume bitrot: success
[root@dhcp35-210 glusterfs]#

Comment 13 Atin Mukherjee 2016-06-09 04:24:54 UTC
LGTM :)

Comment 15 errata-xmlrpc 2016-06-23 04:54:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240

