Bug 1156624

Summary: excessive logging per-file-create
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Ben England <bengland>
Component: distribute
Assignee: Bug Updates Notification Mailing List <rhs-bugs>
Status: CLOSED DUPLICATE
QA Contact: storage-qa-internal <storage-qa-internal>
Severity: medium
Docs Contact:
Priority: unspecified
Version: rhgs-3.0
CC: nbalacha
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-11-06 07:43:54 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Ben England 2014-10-24 20:05:00 UTC
Description of problem:

Running a small-file create workload generates massive numbers of informational log messages in the glusterfs FUSE mountpoint logfiles, enough to fill a disk partition. Will try to isolate further.

Version-Release number of selected component (if applicable):

RHEL 6.6 GA
RHS 3.0 GA RPMs - glusterfs-3.6.0.28-1*.el6rhs.x86_64

How reproducible:

Every time on this configuration.

Steps to Reproduce:
1. create a 2-server Gluster volume with multiple bricks per server
2. mount the volume from 4 clients
3. run the smallfile benchmark to create 320K files using 32 threads total
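The steps above can be sketched roughly as follows. The server names, brick paths, and replica layout here are illustrative assumptions, not the exact test configuration; the smallfile invocation is abbreviated from the full command given under "Additional info".

```shell
# Sketch of the reproduction steps; server1/server2 and the brick
# paths are hypothetical stand-ins for the actual test hosts.

# 1. Create a replicated-distributed volume across 2 servers,
#    with multiple bricks per server.
gluster volume create thinvol replica 2 \
    server1:/bricks/b1 server2:/bricks/b1 \
    server1:/bricks/b2 server2:/bricks/b2 \
    server1:/bricks/b3 server2:/bricks/b3
gluster volume start thinvol

# 2. On each of the 4 clients, mount the volume over FUSE.
mount -t glusterfs server1:/thinvol /mnt/glustervol

# 3. Run the smallfile benchmark (abbreviated; see the full
#    invocation under "Additional info" below).
python smallfile_cli.py --top /mnt/glustervol/smf-fsync \
    --files 8000 --file-size 256 --threads 32 --operation create
```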

Actual results:

The application workload generator completes without reporting any errors, but the FUSE client logs fill with many messages like these two:

[2014-10-24 14:54:01.308103] I [dht-common.c:1828:dht_lookup_cbk] 0-thinvol-dht: Entry /smf-fsync/file_srcdir/gprfc070/thrd_29/d_007/d_009/_29_7989_ missing on subvol thinvol-replicate-3
[2014-10-24 14:54:01.308989] I [dht-common.c:1085:dht_lookup_everywhere_done] 0-thinvol-dht: STATUS: hashed_subvol thinvol-replicate-3 cached_subvol null

Expected results:

Since the application reported no error, I don't see why these messages need to be logged. Furthermore, I'm not using 3-way replication, so why does the message say "thinvol-replicate-3"?
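If the INFO-level messages are filling the log partition, one workaround (a sketch only, and not a fix for the underlying chattiness; the volume name `thinvol` and server name are assumptions) is to raise the client-side log level so messages below WARNING are suppressed:

```shell
# Workaround sketch: suppress INFO-level client messages such as the
# dht_lookup ones above; WARNING and above are still logged.
gluster volume set thinvol diagnostics.client-log-level WARNING

# Equivalently, per mountpoint, via the FUSE mount option:
mount -t glusterfs -o log-level=WARNING server1:/thinvol /mnt/glustervol
```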

Additional info:

workload:

/root/smallfile-v22/smallfile_cli.py --top /mnt/glustervol/smf-fsync,/mnt/glustervol2/smf-fsync,/mnt/glustervol3/smf-fsync,/mnt/glustervol4/smf-fsync --host-set --files 8000 --pause 5000 --file-size 256 --threads 32 --operation create

configuration:

2 HP 60-drive SL4540 servers, each with 64 GB RAM, 2-socket Sandy Bridge (or possibly Haswell) CPUs, and a Mellanox ConnectX-3 10-GbE port with jumbo frames (MTU=9000); 6 RAID6 LUNs per server with a 256-KB stripe element size and a stripe width of 8, 5 LUNs of 22 TB and 1 LUN of 16 TB.