Bug 1034143

Summary: Even though the volume file is changed, the log message reports "No change in volfile, continuing"
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: spandura
Component: glusterd
Assignee: Bug Updates Notification Mailing List <rhs-bugs>
Status: CLOSED EOL
QA Contact: storage-qa-internal <storage-qa-internal>
Severity: medium
Priority: unspecified
Version: 2.1
CC: rwheeler, vbellur
Hardware: Unspecified
OS: Unspecified
Whiteboard: glusterd
Doc Type: Bug Fix
Type: Bug
Cloned As: 1284386 (view as bug list)
Bug Blocks: 1284386
Last Closed: 2015-12-03 17:16:01 UTC

Description spandura 2013-11-25 10:32:57 UTC
Description of problem:
=========================
When a volume set operation changes the volume file, the log messages do not report anything about the change. Consider the case of setting "read-subvolume" to "client-0" on a replicate volume: this change is not reported in the client log files. The following messages are logged:

[2013-11-25 09:29:19.642710] I [glusterfsd-mgmt.c:56:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2013-11-25 09:29:19.644174] I [glusterfsd-mgmt.c:1559:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing

The following messages are logged when "brick-log-level" is set to "DEBUG".

[2013-11-25 10:26:56.243497] I [glusterfsd-mgmt.c:56:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2013-11-25 10:26:57.566316] I [glusterfsd-mgmt.c:56:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2013-11-25 10:26:57.570608] D [io-stats.c:2579:reconfigure] 0-/rhs/bricks/b1-rep1: reconfigure returning 0
[2013-11-25 10:26:57.570654] D [options.c:991:xlator_reconfigure_rec] 0-/rhs/bricks/b1-rep1: reconfigured
[2013-11-25 10:26:57.570754] D [server.c:897:server_init_grace_timer] 0-vol_rep-server: lk-heal = off
[2013-11-25 10:26:57.570779] D [server.c:906:server_init_grace_timer] 0-vol_rep-server: Server grace timeout value = 10
[2013-11-25 10:26:57.570807] D [server.c:1011:reconfigure] 0-: returning 0
[2013-11-25 10:26:57.570895] D [glusterfsd-mgmt.c:1587:mgmt_getspec_cbk] 0-glusterfsd-mgmt: No need to re-load volfile, reconfigure done
[2013-11-25 10:26:57.571043] I [glusterfsd-mgmt.c:1559:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing


The first two INFO messages show "Volume file changed", while the last INFO message reports "No change in volfile, continuing".

Version-Release number of selected component (if applicable):
=============================================================
glusterfs 3.4.0.43.1u2rhs built on Nov 12 2013 07:38:20

How reproducible:
================
Often

Steps to Reproduce:
===================
1. Create a replicate volume.

2. Set any volume option that changes the client and brick vol files.

Expected results:
===================
The volume options set/reset should be reported in the log messages.

Comment 2 Amar Tumballi 2013-12-02 10:19:43 UTC
[2013-11-25 10:26:57.570895] D [glusterfsd-mgmt.c:1587:mgmt_getspec_cbk] 0-glusterfsd-mgmt: No need to re-load volfile, reconfigure done

is the reason for the subsequent "No change in volfile" message. In the reconfigure case, we do not load the new volume file into memory; instead, we change the existing volume's options in place during reconfigure.

Not really a bug in behavior, but I still feel the report is valid, as the log message could be a bit clearer.

Comment 3 Vivek Agarwal 2015-12-03 17:16:01 UTC
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.