Bug 731612 - [xfs/xfstests 170] Multi-file data streams should always write into seperate AGs
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: kernel
Version: 6.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: rc
Assignee: Red Hat Kernel Manager
QA Contact: Red Hat Kernel QE team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-08-18 03:22 UTC by Eryu Guan
Modified: 2011-08-23 03:07 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-08-23 03:07:18 UTC



Description Eryu Guan 2011-08-18 03:22:34 UTC
Description of problem:
xfstests 170 failed on xfs
I ran 170 manually in a loop on the -188 kernel 3000+ times but failed to reproduce the failure. It cannot be reproduced on the RHEL 6.1 kernel either, so it is hard to say whether this is a regression.

Running test 170
#! /bin/bash
#
# FSQA Test No. 170
#
# Check the filestreams allocator is doing its job.
# Multi-file data streams should always write into seperate AGs.
#
#-----------------------------------------------------------------------
# Copyright (c) 2007 Silicon Graphics, Inc.  All Rights Reserved.
#
FSTYP         -- xfs (non-debug)
PLATFORM      -- Linux/x86_64 intel-s3ea2-03 2.6.32-188.el6.x86_64
MKFS_OPTIONS  -- -f -bsize=4096 /dev/sda6
MOUNT_OPTIONS -- -o context=system_u:object_r:nfs_t:s0 /dev/sda6 /mnt/testarea/scratch

170	 [failed, exit status 1] - output mismatch (see 170.out.bad)
--- 170.out	2011-08-17 02:06:51.000000000 -0400
+++ 170.out.bad	2011-08-17 04:54:39.000000000 -0400
@@ -13,9 +13,5 @@
 # streaming
 # sync AGs...
 # checking stream AGs...
-+ passed, streams are in seperate AGs
-# testing 8 16 4 8 3 1 1 ....
-# streaming
-# sync AGs...
-# checking stream AGs...
-+ passed, streams are in seperate AGs
+- failed, 1 streams with matching AGs
+(see 170.full for details)
Ran: 170
Failures: 170
Failed 1 of 1 tests
=== 170.full ===
stream 1 AGs: 0 0 0 0 0 7 7 7
stream 2 AGs: 1 1 1 1 1 5 5 5
stream 3 AGs: 2 2 2 2 2 4 4 4
stream 4 AGs: 3 3 3 3 3 6 6 6
stream 1 AGs: 0 0 0 0 0 7 7 7
stream 2 AGs: 1 1 1 1 1 5 5 5
stream 3 AGs: 2 2 2 2 2 4 4 4
stream 4 AGs: 3 3 3 3 3 6 6 6
stream 1 AGs: 0 0 0 0 0 4 4 4
stream 2 AGs: 1 1 1 1 1 5 5 5
stream 3 AGs: 2 2 2 2 2 6 6 4 6
duplicate AG 4 found
stream 4 AGs: 3 3 3 3 3 7 7 7
- failed, 1 streams with matching AGs
=== 170.out.bad ===
QA output created by 170
# testing 8 16 4 8 3 0 0 ....
# streaming
# sync AGs...
# checking stream AGs...
+ passed, streams are in seperate AGs
# testing 8 16 4 8 3 1 0 ....
# streaming
# sync AGs...
# checking stream AGs...
+ passed, streams are in seperate AGs
# testing 8 16 4 8 3 0 1 ....
# streaming
# sync AGs...
# checking stream AGs...
- failed, 1 streams with matching AGs
(see 170.full for details)
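The per-stream check behind the "duplicate AG 4 found" message in 170.full above can be sketched in Python. This is a simplified re-implementation for illustration, not the actual xfstests shell code; the function name and the order-based "earlier stream claims the AG" logic are assumptions, using the AG lists recorded in the failing run:

```python
def check_stream_ags(streams):
    """Flag streams whose extents landed in an AG already claimed by an
    earlier stream (simplified sketch of the xfstests 170 check).
    Returns (failed_stream_names, duplicate_ag_numbers)."""
    claimed = {}            # AG number -> name of the stream that first used it
    failed, dups = [], []
    for name, ags in streams:
        mine = set(ags)
        overlap = sorted(ag for ag in mine if ag in claimed)
        if overlap:
            failed.append(name)
            dups.extend(overlap)
        for ag in mine:
            claimed.setdefault(ag, name)
    return failed, dups

# AG lists from the failing run recorded in 170.full:
streams = [
    ("stream 1", [0, 0, 0, 0, 0, 4, 4, 4]),
    ("stream 2", [1, 1, 1, 1, 1, 5, 5, 5]),
    ("stream 3", [2, 2, 2, 2, 2, 6, 6, 4, 6]),
    ("stream 4", [3, 3, 3, 3, 3, 7, 7, 7]),
]
failed, dups = check_stream_ags(streams)
print(f"- failed, {len(failed)} streams with matching AGs, duplicate AGs: {dups}")
# -> stream 3 reuses AG 4, already claimed by stream 1, matching the log.
```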


Version-Release number of selected component (if applicable):
kernel-2.6.32-188.el6

How reproducible:
Very hard to reproduce

Steps to Reproduce:
1. install xfstests
2. run test 170 (./check 170)
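The steps above amount to roughly the following. This is a sketch only: the device names and mount points are placeholders modelled on the log's MKFS/MOUNT options, and both devices are reformatted by the harness, so disposable partitions and a real XFS setup are required:

```shell
# Point the xfstests harness at a test and a scratch device
# (placeholders; the log used /dev/sda6 as the scratch device).
export FSTYP=xfs
export TEST_DEV=/dev/sda5    TEST_DIR=/mnt/testarea/test
export SCRATCH_DEV=/dev/sda6 SCRATCH_MNT=/mnt/testarea/scratch

cd xfstests
# The failure is intermittent, so loop the test to try to catch it.
for i in $(seq 1 3000); do
    ./check 170 || break
done
```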
  
Actual results:
test 170 fails with an output mismatch

Expected results:
test 170 passes

Additional info:
Beaker job:
https://beaker.engineering.redhat.com/jobs/120870
Beaker log:
http://beaker-archive.app.eng.bos.redhat.com/beaker-logs/2011/08/1208/120870/248726/2693709/13668653//test_log--kernel-filesystems-xfs-xfstests-170.log

Comment 2 Eric Sandeen 2011-08-18 16:06:12 UTC
IIRC, the filestreams tests have always been a little shaky.

I don't know if any customers use this option... I don't think I'd rank this bug as being very critical.  Dave, what do you think?

Comment 3 Dave Chinner 2011-08-23 03:07:18 UTC
(In reply to comment #2)
> IIRC, the filestreams tests have always been a little shaky.
> 
> I don't know if any customers use this option... I don't think I'd rank this
> bug as being very critical.  Dave, what do you think?

Filestreams is always best effort for separating the streams - there is no guarantee that they will or can be separated.

Given that this didn't reproduce on 6.1 or in a loop of 3000 iterations, I'd say "close - NOTABUG".
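For anyone wanting to observe the best-effort behaviour described above, a minimal sketch (the paths, file sizes, and directory layout are arbitrary placeholders; the `filestreams` mount option enables the allocator filesystem-wide, and the AG column of `xfs_bmap -v` reports where each extent landed):

```shell
# Hypothetical demonstration: enable filestreams mode via the mount
# option, write two concurrent streams into separate directories, then
# inspect which AGs their extents were allocated from.
mkfs.xfs -f /dev/sda6
mount -o filestreams /dev/sda6 /mnt/testarea/scratch

mkdir -p /mnt/testarea/scratch/s1 /mnt/testarea/scratch/s2
dd if=/dev/zero of=/mnt/testarea/scratch/s1/file bs=1M count=64 &
dd if=/dev/zero of=/mnt/testarea/scratch/s2/file bs=1M count=64 &
wait
sync

# With filestreams working, the two files should mostly occupy
# different AGs -- but, as noted above, this is best effort only.
xfs_bmap -v /mnt/testarea/scratch/s1/file
xfs_bmap -v /mnt/testarea/scratch/s2/file
```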

