Bug 1156624 - excessive logging per-file-create
Summary: excessive logging per-file-create
Keywords:
Status: CLOSED DUPLICATE of bug 1138288
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: distribute
Version: rhgs-3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-10-24 20:05 UTC by Ben England
Modified: 2014-11-06 07:46 UTC
CC List: 1 user

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-11-06 07:43:54 UTC
Embargoed:



Description Ben England 2014-10-24 20:05:00 UTC
Description of problem:

Running a small-file create workload generates a massive volume of informational log messages in the glusterfs FUSE mount-point log files; this can fill a disk partition. Will try to isolate this further.
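
A quick way to confirm the growth rate while the workload runs (a hedged sketch: /var/log/glusterfs/ is the default client log directory, and the exact log file name depends on the mount path):

# watch the FUSE client log files grow during the create workload
watch -n 5 'du -sh /var/log/glusterfs/*.log'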

Version-Release number of selected component (if applicable):

RHEL 6.6 GA
RHS 3.0 GA RPMs - glusterfs-3.6.0.28-1*.el6rhs.x86_64

How reproducible:

Every time on this configuration.

Steps to Reproduce:
1. create Gluster 2-server volume with multiple bricks/volume (rough command sketch after these steps)
2. mount volume from 4 clients
3. run smallfile benchmark to create 320K files using 32 threads total
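
In rough commands, steps 1-2 look like this (a hedged sketch: the volume name thinvol is taken from the log messages below, but the hostnames, brick paths, and number of brick pairs are placeholders, not the exact ones used):

# on one server: create and start a 2-server replicated volume
gluster volume create thinvol replica 2 \
    server1:/bricks/b1/brick server2:/bricks/b1/brick \
    server1:/bricks/b2/brick server2:/bricks/b2/brick
gluster volume start thinvol

# on each of the 4 clients: mount over FUSE
mount -t glusterfs server1:/thinvol /mnt/glustervol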

Actual results:

The application workload generator completes without reporting any errors, but the client logs fill with many messages like these two:

[2014-10-24 14:54:01.308103] I [dht-common.c:1828:dht_lookup_cbk] 0-thinvol-dht: Entry /smf-fsync/file_srcdir/gprfc070/thrd_29/d_007/d_009/_29_7989_ missing on subvol thinvol-replicate-3
[2014-10-24 14:54:01.308989] I [dht-common.c:1085:dht_lookup_everywhere_done] 0-thinvol-dht: STATUS: hashed_subvol thinvol-replicate-3 cached_subvol null
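
To gauge how many of these accumulate in the client log (a hedged sketch; glusterfs names the client log after the mount path, e.g. /mnt/glustervol -> mnt-glustervol.log):

# count the dht lookup messages logged so far
grep -c 'dht_lookup' /var/log/glusterfs/mnt-glustervol.log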

Expected results:

Since there was no error reported by the application, I don't see why these messages need to be logged.  Furthermore, I'm not using 3-way replication, so why does it say "thinvol-replicate-3"?
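
As a stopgap (a hedged workaround sketch, not a fix: diagnostics.client-log-level is a standard gluster volume option, but raising it to WARNING suppresses all informational client messages, not just these):

gluster volume set thinvol diagnostics.client-log-level WARNING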

Additional info:

workload:

/root/smallfile-v22/smallfile_cli.py --top /mnt/glustervol/smf-fsync,/mnt/glustervol2/smf-fsync,/mnt/glustervol3/smf-fsync,/mnt/glustervol4/smf-fsync --host-set --files 8000 --pause 5000 --file-size 256 --threads 32 --operation create

configuration:

2 HP 60-drive SL4540 servers, each with 64 GB RAM, 2-socket Sandy Bridge (or possibly Haswell) CPUs, a Mellanox ConnectX-3 10-GbE port with jumbo frames (MTU=9000), and 6 RAID6 LUNs with a 256-KB stripe element size and a stripe width of 8; 5 LUNs are 22 TB, 1 LUN is 16 TB.

