Bug 1436614

Summary: Brick Multiplexing: dedicated brick logs not available anymore
Product: [Community] GlusterFS
Reporter: Nag Pavan Chilakam <nchilaka>
Component: core
Assignee: Jeff Darcy <jeff>
Status: CLOSED EOL
QA Contact:
Severity: high
Docs Contact:
Priority: unspecified
Version: 3.10
CC: bugs, jeff
Target Milestone: ---
Keywords: Triaged
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-06-20 18:29:26 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Nag Pavan Chilakam 2017-03-28 10:15:13 UTC
Description of problem:
=======================
With brick multiplexing enabled, if I have volume{1..5}, I no longer find a dedicated brick log for each brick of each of these volumes in /var/log/glusterfs/bricks.
Instead I find only one brick log for all the bricks on that node.


I understand this is because there is now a single glusterfsd per node, but for an end user or admin it makes it difficult to dig into the logs for a particular volume.
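
For reference, a minimal reproduction sketch (illustrative commands; cluster.brick-multiplex is the global option that turns this behaviour on):

# enable multiplexing cluster-wide, then create and start more than one volume
gluster volume set all cluster.brick-multiplex on

# every brick on a node now reports the same glusterfsd PID ...
gluster volume status | grep '^Brick'

# ... and only a single brick process is running per node
pgrep -a glusterfsd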




Version-Release number of selected component (if applicable):
=====================
[root@dhcp35-192 bricks]# rpm -qa|grep gluster
glusterfs-fuse-3.10.0-1.el7.x86_64
glusterfs-rdma-3.10.0-1.el7.x86_64
glusterfs-libs-3.10.0-1.el7.x86_64
glusterfs-client-xlators-3.10.0-1.el7.x86_64
glusterfs-api-3.10.0-1.el7.x86_64
glusterfs-server-3.10.0-1.el7.x86_64
glusterfs-debuginfo-3.10.0-1.el7.x86_64
glusterfs-3.10.0-1.el7.x86_64
glusterfs-cli-3.10.0-1.el7.x86_64

In my case I have the volumes below:

[root@dhcp35-192 bricks]# gluster v status
Status of volume: distrep
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.192:/rhs/brick2/distrep      49156     0          Y       6301 
Brick 10.70.35.214:/rhs/brick2/distrep      49154     0          Y       20321
Brick 10.70.35.192:/rhs/brick3/distrep      49156     0          Y       6301 
Brick 10.70.35.215:/rhs/brick3/distrep      49154     0          Y       13393
Self-heal Daemon on localhost               N/A       N/A        Y       6752 
Self-heal Daemon on 10.70.35.214            N/A       N/A        Y       21013
Self-heal Daemon on 10.70.35.215            N/A       N/A        Y       14199
 
Task Status of Volume distrep
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: ecvol_1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.192:/rhs/brick1/ecvol_1      49156     0          Y       6301 
Brick 10.70.35.214:/rhs/brick1/ecvol_1      49154     0          Y       20321
Brick 10.70.35.215:/rhs/brick1/ecvol_1      49154     0          Y       13393
Brick 10.70.35.192:/rhs/brick2/ecvol_1      49156     0          Y       6301 
Brick 10.70.35.214:/rhs/brick2/ecvol_1      49154     0          Y       20321
Brick 10.70.35.215:/rhs/brick2/ecvol_1      49154     0          Y       13393
Brick 10.70.35.192:/rhs/brick3/ecvol_1      49156     0          Y       6301 
Brick 10.70.35.214:/rhs/brick3/ecvol_1      49154     0          Y       20321
Brick 10.70.35.215:/rhs/brick3/ecvol_1      49154     0          Y       13393
Brick 10.70.35.192:/rhs/brick4/ecvol_1      49156     0          Y       6301 
Brick 10.70.35.214:/rhs/brick4/ecvol_1      49154     0          Y       20321
Brick 10.70.35.215:/rhs/brick4/ecvol_1      49154     0          Y       13393
Self-heal Daemon on localhost               N/A       N/A        Y       6752 
Self-heal Daemon on 10.70.35.214            N/A       N/A        Y       21013
Self-heal Daemon on 10.70.35.215            N/A       N/A        Y       14199
 
Task Status of Volume ecvol_1
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: spencer
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.192:/rhs/brick4/spencer      49156     0          Y       6301 
Brick 10.70.35.214:/rhs/brick4/spencer      49154     0          Y       20321
Brick 10.70.35.215:/rhs/brick4/spencer      49154     0          Y       13393
Self-heal Daemon on localhost               N/A       N/A        Y       6752 
Self-heal Daemon on 10.70.35.215            N/A       N/A        Y       14199
Self-heal Daemon on 10.70.35.214            N/A       N/A        Y       21013
 
Task Status of Volume spencer
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: test
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.214:/rhs/brick1/test         49154     0          Y       20321
Brick 10.70.35.215:/rhs/brick1/test         49154     0          Y       13393
Self-heal Daemon on localhost               N/A       N/A        Y       6752 
Self-heal Daemon on 10.70.35.214            N/A       N/A        Y       21013
Self-heal Daemon on 10.70.35.215            N/A       N/A        Y       14199
 
Task Status of Volume test
------------------------------------------------------------------------------
There are no active volume tasks
 




But the only brick logs available are:
[root@dhcp35-192 bricks]# pwd
/var/log/glusterfs/bricks
[root@dhcp35-192 bricks]# ls
rhs-brick1-test.log  rhs-brick2-distrep.log  rhs-brick3-distrep.log


Note: the test volume was created before brick multiplexing was enabled, hence it still has the dedicated log file for its brick.
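
As a sketch of how the mapping works (the awk filter below just pulls the PID column from the Brick lines of the status output above):

# brick log names are derived from the brick path, with '/' replaced by '-',
# e.g. /rhs/brick1/test -> /var/log/glusterfs/bricks/rhs-brick1-test.log
ls /var/log/glusterfs/bricks/

# under multiplexing, later bricks attach to an already-running glusterfsd and
# log into that process's existing file, so no new per-brick log appears for them
gluster volume status | awk '/^Brick/ {print $NF}' | sort -u    # one brick PID per node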

Comment 1 Jeff Darcy 2017-03-28 12:15:46 UTC
Some bugs are easier to debug with separate logs.  Some bugs are easier to debug with a single log, as there's no need to edit multiple files and correlate timestamps between them.  Also, some messages refer to the daemon itself and not to any particular brick within it.  Much of this was already discussed here.

http://lists.gluster.org/pipermail/gluster-devel/2017-February/052086.html

As multiplexing was being developed, a single log was certainly more efficient than multiple logs would have been.  I don't see why we would expect that to be any different for users.  IMO this *feature request* should be prioritized accordingly.
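
As a possible interim workaround for the admin-facing concern, the shared log can usually be filtered per volume, since gluster log messages carry a 0-<volname>-<xlator> source tag (the file name below is just the one from the listing above; the exact tag format is an assumption to verify on the affected build):

# pull one volume's messages out of a shared, multiplexed brick log
# (matches tags such as 0-distrep-server or 0-distrep-posix)
grep ' 0-distrep-' /var/log/glusterfs/bricks/rhs-brick1-test.log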

Comment 2 Shyamsundar 2018-06-20 18:29:26 UTC
This bug was reported against a version of Gluster that is no longer maintained
(or has been EOL'd). See https://www.gluster.org/release-schedule/ for the
versions currently maintained.

As a result, this bug is being closed.

If the bug persists on a maintained version of Gluster or against the mainline
Gluster repository, please request that it be reopened and mark the Version field
appropriately.