Bug 1302751

Summary: USS: Huge logging in samba client logs while accessing .snaps folder from cifs client
Product: Red Hat Gluster Storage
Reporter: surabhi <sbhaloth>
Component: samba
Assignee: rhs-smb <rhs-smb>
Status: CLOSED INSUFFICIENT_DATA
QA Contact: storage-qa-internal <storage-qa-internal>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: rhgs-3.1
CC: annair, nlevinki, pgurusid, sankarshan
Target Milestone: ---
Keywords: ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-11-20 04:52:17 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:

Description surabhi 2016-01-28 14:17:48 UTC
Description of problem:

While running automated test cases for USS (commands sketched below):
1. Enable USS on the volume and list snapshots under the .snaps directory
2. Start and stop snapd
3. Start and stop snapd while USS is enabled/disabled
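
For reference, a minimal sketch of these operations with the gluster CLI. The volume name testvol is a placeholder, and the sketch assumes snapd is managed indirectly by toggling the features.uss volume option rather than by a dedicated start/stop command:

  # Enable USS on the volume; this also starts the snapd daemon
  gluster volume set testvol features.uss enable

  # List the snapshots that will appear under .snaps
  gluster snapshot list testvol

  # Check the snapd status for the volume
  gluster volume status testvol snapd

  # Disable USS; this stops snapd again
  gluster volume set testvol features.uss disable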

Version-Release number of selected component (if applicable):
glusterfs-libs-3.7.5-17.el7rhgs.x86_64
glusterfs-rdma-3.7.5-17.el7rhgs.x86_64
glusterfs-cli-3.7.5-17.el7rhgs.x86_64
samba-vfs-glusterfs-4.2.4-12.el7rhgs.x86_64
glusterfs-client-xlators-3.7.5-17.el7rhgs.x86_64
glusterfs-server-3.7.5-17.el7rhgs.x86_64
glusterfs-api-3.7.5-17.el7rhgs.x86_64
glusterfs-geo-replication-3.7.5-17.el7rhgs.x86_64
glusterfs-fuse-3.7.5-17.el7rhgs.x86_64
glusterfs-3.7.5-17.el7rhgs.x86_64


How reproducible:
Always

Steps to Reproduce:
1. Do the setup required for USS
2. Mount the volume on a CIFS client, run some I/O, and take snapshots
3. Enable USS
4. Activate a snapshot
5. cd into .snaps on the CIFS mount (see the command sketch below)
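
A minimal end-to-end sketch of these steps, assuming the volume is exported through Samba with the common RHGS share name gluster-<volname>; the server name, mount point, credentials, volume name, and snapshot name are placeholders:

  # On the CIFS client: mount the Samba share backed by the volume
  mount -t cifs //server/gluster-testvol /mnt/cifs -o user=smbuser

  # Generate some I/O on the mount
  dd if=/dev/zero of=/mnt/cifs/file1 bs=1M count=10

  # On a gluster node: take a snapshot, enable USS, activate the snapshot
  gluster snapshot create snap1 testvol
  gluster volume set testvol features.uss enable
  gluster snapshot activate snap1   # use the exact name shown by 'gluster snapshot list'

  # Back on the client: browse the snapshot directory
  cd /mnt/cifs/.snaps
  ls

  # Watch the per-client Samba log for the message flood (exact file name varies)
  tail -f /var/log/samba/log.*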

Actual results:
A huge volume of log messages is written to the Samba client logs.

Expected results:
Such excessive logging should not occur while accessing .snaps.

Additional info:

Comment 2 Poornima G 2018-11-20 04:52:17 UTC
Log messages were not provided. Also, the bug is two years old, and a lot of code changes have gone into this area of the code. Hence closing the bug for now; please reopen if it still occurs.