Description of problem:
sosreport currently does not collect geo-replication master or slave logs located in /var/log/glusterfs/geo-replication and /var/log/glusterfs/geo-replication-slaves.

Version-Release number of selected component (if applicable):
sos-2.2-68

How reproducible:
Consistently, on gluster nodes with geo-replication configured.

Steps to Reproduce:
1. Configure a glusterfs volume as a geo-replication master or slave.
2. Run sosreport with no flags.
3. The resulting tarball does not include geo-replication log files.

Actual results:
No geo-rep log files are collected.

Expected results:
Recent geo-rep log files are collected.

Additional info:
This may be a regression introduced by BZ 1002619.
It grabs the whole of '/var/log/glusterfs' unless log size limits are in effect:

128         if limit:
129             # collect logs last as some of the other actions create log entries
130             self.addCopySpecLimit("/var/log/glusterfs/cli.log", limit)
131             self.addCopySpecLimit("/var/log/glusterfs/*.vol.log", limit)
132             self.addCopySpecLimit("/var/log/glusterfs/gluster-lock.log", limit)
133             self.addCopySpecLimit("/var/log/glusterfs/glustershd.log", limit)
134             self.addCopySpecLimit("/var/log/glusterfs/nfs.log", limit)
135             self.addCopySpecLimit("/var/log/glusterfs/quota-crawl.log", limit)
136             self.addCopySpecLimit("/var/log/glusterfs/quotad.log", limit)
137             self.addCopySpecLimit("/var/log/glusterfs/quotad-mount-*.log", limit)
138             self.addCopySpecLimit("/var/log/glusterfs/status.log", limit)
139             self.addCopySpecLimit("/var/log/glusterfs/bricks/*.log", limit)
140         else:
141             self.addCopySpec("/var/log/glusterfs")

So either pass 'all_logs' or set 'logsize' to zero.

We can easily add these as separate files but that should be reviewed by the Gluster team.
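If the Gluster team agrees, a minimal sketch of that addition could look like the lines below. It assumes the new calls sit alongside the existing addCopySpecLimit() calls inside the limit branch of the plugin's setup(), where 'limit' is already defined; the wildcard patterns for the geo-replication log layout are an assumption and would need to be confirmed against a real geo-replication deployment.

            # Sketch only: collect geo-replication master and slave logs under
            # the same size limit as the other glusterfs logs. The glob
            # patterns below are assumptions about how log files are laid out
            # under the geo-replication directories and need Gluster review.
            self.addCopySpecLimit("/var/log/glusterfs/geo-replication/*.log", limit)
            self.addCopySpecLimit("/var/log/glusterfs/geo-replication/*/*.log", limit)
            self.addCopySpecLimit("/var/log/glusterfs/geo-replication-slaves/*.log", limit)
            self.addCopySpecLimit("/var/log/glusterfs/geo-replication-slaves/*/*.log", limit)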
(In reply to Bryn M. Reeves from comment #1)
> It grabs the whole of '/var/log/glusterfs' unless log size limits are in
> effect:
>
> 128         if limit:
> 129             # collect logs last as some of the other actions create log entries
> 130             self.addCopySpecLimit("/var/log/glusterfs/cli.log", limit)
> 131             self.addCopySpecLimit("/var/log/glusterfs/*.vol.log", limit)
> 132             self.addCopySpecLimit("/var/log/glusterfs/gluster-lock.log", limit)
> 133             self.addCopySpecLimit("/var/log/glusterfs/glustershd.log", limit)
> 134             self.addCopySpecLimit("/var/log/glusterfs/nfs.log", limit)
> 135             self.addCopySpecLimit("/var/log/glusterfs/quota-crawl.log", limit)
> 136             self.addCopySpecLimit("/var/log/glusterfs/quotad.log", limit)
> 137             self.addCopySpecLimit("/var/log/glusterfs/quotad-mount-*.log", limit)
> 138             self.addCopySpecLimit("/var/log/glusterfs/status.log", limit)
> 139             self.addCopySpecLimit("/var/log/glusterfs/bricks/*.log", limit)
> 140         else:
> 141             self.addCopySpec("/var/log/glusterfs")
>
> So either pass 'all_logs' or set 'logsize' to zero.
>
> We can easily add these as separate files but that should be reviewed by the
> Gluster team.

Setting 'all_logs' or 'logsize=0' puts us back to the same problem fixed in BZ 1002619, where the collected sosreport can be excessively large. So yes, it's a workaround, but not a desirable one. For now it is better to simply acknowledge that the files are not collected and request them separately.

I still think it is buggy behavior not to collect these files in a default sosreport run. I'll move this over to RHS.
The patch was in the bug for 4 months before release, along with a description of what it did (which is accurate), so I can't really view this as a regression (or view using limit/all_logs as a workaround). This is a new feature request: to collect the replication logs explicitly. If this had been mentioned 8 months ago when the changes were proposed, they could have been included with bug 1002619.
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you asked us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.