Description of problem:
=======================
For USS there is 1 snapd log per volume and one snap log per snapshot of that
volume. For example, if there are 4 volumes with 256 snaps each and USS is
enabled, the total number of USS-related logs under /var/log/glusterfs is 1028:

Total logs = 4 (one snapd log per volume) + 4 (volumes) * 256 (snaps per volume) = 1028

Hence, it makes sense to move them into a sub-folder structure like
/var/log/glusterfs/snaps/<vol-name>/<snapd + snaps logs>

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.6.0.30-1.el6rhs.x86_64

How reproducible:
=================
always

Steps to Reproduce:
===================
1. Create a 4-node cluster
2. Create 4 volumes
3. Enable USS
4. Create 256 snaps of each volume
(a hedged CLI sketch of steps 2-4 follows this comment)

Actual results:
===============
In total, 1028 log files are created under /var/log/glusterfs

Expected results:
=================
These logs should be separated out and placed under a subdirectory like
/var/log/glusterfs/snaps/<vol-name>/<snapd + snaps logs>
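The reproduction steps above map onto plain gluster CLI operations. The following is a minimal sketch for a single volume; the volume name, server names, brick paths, and snapshot names (vol0, server1/server2, /bricks/vol0, snapN) are hypothetical placeholders, and it assumes the peers are already probed and the bricks sit on thin-provisioned LVM, as gluster snapshots require.

# Step 2 (one volume shown; repeat for the remaining volumes)
gluster volume create vol0 replica 2 \
    server1:/bricks/vol0 server2:/bricks/vol0 force
gluster volume start vol0

# Step 3: enable User Serviceable Snapshots (USS) on the volume
gluster volume set vol0 features.uss enable

# Step 4: create 256 snapshots of the volume (the default
# snap-max-hard-limit is 256, so no limit change should be needed)
for i in $(seq 1 256); do
    gluster snapshot create snap${i} vol0
done

With all 4 volumes handled this way, each volume's snapd and each snapshot get their own log file, which is what yields the 4 + 4*256 = 1028 files counted in the description.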
Patch submitted: https://code.engineering.redhat.com/gerrit/36474
Verified with: glusterfs-3.6.0.33-1.el6rhs.x86_64

[root@inception snaps]# pwd
/var/log/glusterfs/snaps
[root@inception snaps]# ls
vol0  vol1
[root@inception snaps]# ls *
vol0:
snap1-5330f9de-444c-4cef-b35d-3b6a015a7e09.log  snapd.log
snap2-646c4e30-3411-41c0-849f-340b9dcf02af.log

vol1:
rs1-a2cc607d-a538-48f6-a916-e29996b32ed2.log
rs2-9a95c9b8-4a8b-45a6-811c-a55a922a386f.log
rs3-50fb9ee6-f44e-4ae8-9f33-0e97367ac984.log
snapd.log
[root@inception snaps]#

Moving the bug to the verified state.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-0038.html