Bug 1034143 - Even though the volume file is changed, the log message reports "No change in volfile, continuing"
Summary: Even though the volume file is changed, the log message reports "No change in volfile, continuing"
Keywords:
Status: CLOSED EOL
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard: glusterd
Depends On:
Blocks: 1284386
 
Reported: 2013-11-25 10:32 UTC by spandura
Modified: 2015-12-03 17:16 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Cloned to: 1284386
Environment:
Last Closed: 2015-12-03 17:16:01 UTC
Embargoed:


Attachments: none

Description spandura 2013-11-25 10:32:57 UTC
Description of problem:
=========================
When we perform any volume set operation that changes the volfile, the log messages do not report anything about the change. Consider the case of setting "read-subvolume" to "client-0" on a replicate volume: this change is not reported in the client log files. Following are the log messages reported (the command presumably used is shown after the excerpt):

[2013-11-25 09:29:19.642710] I [glusterfsd-mgmt.c:56:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2013-11-25 09:29:19.644174] I [glusterfsd-mgmt.c:1559:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
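
For reference, the change described above would have been applied with something like the commands below; the volume name "vol_rep" is taken from the brick-log excerpt further down, and the exact client xlator name and mount-log path are assumptions, not taken from this report:

gluster volume set vol_rep cluster.read-subvolume vol_rep-client-0
tail -f /var/log/glusterfs/mnt-vol_rep.log   # client mount log, assuming a mount on /mnt/vol_rep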

Following are the messages received when "brick-log-level" is set to "DEBUG".

[2013-11-25 10:26:56.243497] I [glusterfsd-mgmt.c:56:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2013-11-25 10:26:57.566316] I [glusterfsd-mgmt.c:56:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2013-11-25 10:26:57.570608] D [io-stats.c:2579:reconfigure] 0-/rhs/bricks/b1-rep1: reconfigure returning 0
[2013-11-25 10:26:57.570654] D [options.c:991:xlator_reconfigure_rec] 0-/rhs/bricks/b1-rep1: reconfigured
[2013-11-25 10:26:57.570754] D [server.c:897:server_init_grace_timer] 0-vol_rep-server: lk-heal = off
[2013-11-25 10:26:57.570779] D [server.c:906:server_init_grace_timer] 0-vol_rep-server: Server grace timeout value = 10
[2013-11-25 10:26:57.570807] D [server.c:1011:reconfigure] 0-: returning 0
[2013-11-25 10:26:57.570895] D [glusterfsd-mgmt.c:1587:mgmt_getspec_cbk] 0-glusterfsd-mgmt: No need to re-load volfile, reconfigure done
[2013-11-25 10:26:57.571043] I [glusterfsd-mgmt.c:1559:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing


The first two info messages show "Volume file changed", but the last info message reports "No change in volfile, continuing".

Version-Release number of selected component (if applicable):
=============================================================
glusterfs 3.4.0.43.1u2rhs built on Nov 12 2013 07:38:20

How reproducible:
================
Often

Steps to Reproduce:
===================
1. Create a replicate volume.

2. Set any volume option that changes the client and brick volfiles (example commands below).
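
A rough end-to-end reproduction would look like the following (server names, mount point and volume name are placeholders; the brick path matches the one seen in the debug log above; the option names are the long forms of the options mentioned in this report):

gluster volume create vol_rep replica 2 server1:/rhs/bricks/b1-rep1 server2:/rhs/bricks/b1-rep1
gluster volume start vol_rep
mount -t glusterfs server1:/vol_rep /mnt/vol_rep

# option changes that regenerate the client and brick volfiles
gluster volume set vol_rep cluster.read-subvolume vol_rep-client-0
gluster volume set vol_rep diagnostics.brick-log-level DEBUG

# watch the client mount log and the brick logs under /var/log/glusterfs/ while setting the options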

Expected results:
===================
The volume options set/reset should be reported in the log messages.

Comment 2 Amar Tumballi 2013-12-02 10:19:43 UTC
[2013-11-25 10:26:57.570895] D [glusterfsd-mgmt.c:1587:mgmt_getspec_cbk] 0-glusterfsd-mgmt: No need to re-load volfile, reconfigure done

This debug message is the reason for the subsequent "No change in volfile" message: in the reconfigure case we do not load the new volume file into memory; we only change the existing volume's options via reconfigure.

Not really a bug in behaviour, but I still feel the report is valid, as the log message could be a bit better.
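
One way to see from the outside that the volfile generated by glusterd did change, and that only the in-memory graph was reconfigured rather than re-loaded (the volume name and the on-disk volfile path are assumptions and may differ by release):

gluster volume info vol_rep        # the new option shows up under "Options Reconfigured"
grep read-subvolume /var/lib/glusterd/vols/vol_rep/vol_rep-fuse.vol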

Comment 3 Vivek Agarwal 2015-12-03 17:16:01 UTC
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you asked us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.

