Bug 169673 - IOSTAT Reports gig writes when using fibre channel
Status: CLOSED INSUFFICIENT_DATA
Product: Red Hat Enterprise Linux 4
Classification: Red Hat
Component: kernel
Version: 4.0
Hardware: i386 Linux
Priority: medium  Severity: medium
Assigned To: Tom Coughlan
QA Contact: Brian Brock
Depends On:
Blocks:
Reported: 2005-09-30 17:16 EDT by Frank Ruiz
Modified: 2008-11-10 17:36 EST
CC: 3 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2008-11-10 17:36:12 EST
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---


Attachments: None
Description Frank Ruiz 2005-09-30 17:16:57 EDT
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.8) Gecko/20050511 Firefox/1.0.4

Description of problem:
I am running the following command:

iostat -k 1 2

I am measuring the performance of different storage arrays. I first ran a baseline test on an hp dl380 g4 and did not see any abnormal results. This was using Ultra 320 SCSI.

The test consists of running multiple dds in parallel.

I run into an issue when I start doing measurements on fibre channel attached storage.

My test bed consists of an hp dl380 g4 with qla2342 HBAs. I run 80 dd streams in parallel. Command: dd if=/dev/zero of=/fileX bs=1M count=1000 (where X is 1-80)

During my measurements, I am running iostat in the background. Every so often I get measurements of 600 MB per second, all the way up to a little over 1 GB. This is impossible, since the HBA will only allow a max of 212 MB per second. I am running the HBAs in an active/passive configuration.

I have only encountered the problem on fibre attached storage. I did not encounter this problem with local U320 disk.

I am expecting to only get results of 200 MB per second and below.

Version-Release number of selected component (if applicable):
sysstat-5.0.5-1

How reproducible:
Always

Steps to Reproduce:
1. Attach fibre channel storage to a server.
2. Run 120 parallel dd writes. I use Perl Parallel::ForkControl to achieve this.
3. Run iostat in the background, and you will see many good values, and the occasional values of 600 MB per second and above.
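The steps above can be sketched in plain shell. This is a scaled-down sketch, not the reporter's actual script: the report used 80-120 parallel streams of `dd if=/dev/zero of=/fileX bs=1M count=1000`, while the sizes here are tiny so it is safe to run anywhere, and shell job control stands in for the Perl Parallel::ForkControl module.

```shell
# Scaled-down sketch of the reproduction (tiny writes instead of 1 GB files).
outdir=$(mktemp -d)
streams=4

# Step 3: sample iostat in the background, if it is installed.
command -v iostat >/dev/null && iostat -k 1 3 > "$outdir/iostat.log" &

# Step 2: launch the parallel dd writers.
for i in $(seq 1 "$streams"); do
  dd if=/dev/zero of="$outdir/file$i" bs=1k count=8 2>/dev/null &
done
wait   # let every writer (and iostat, if started) finish

ls "$outdir"/file* | wc -l   # one file per stream
rm -rf "$outdir"
```

With the real sizes and fibre-channel storage, the abnormal kB_wrtn/s spikes would appear in the captured iostat log.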
  

Actual Results:  I get about 1000 normal values, and about 4% of those are abnormal, where the values range from 600 MB per second all the way up to 1 GB per second.

Expected Results:  I should only see values under 200 MB per second.

Additional info:

I have had this only happen on fibre channel storage. U320 was fine.
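One way to narrow down where the bad numbers come from is to sample the raw kernel counter that iostat consumes. This is a hypothetical spot-check, not something from the report: it assumes a 2.6-series kernel, where field 10 of /proc/diskstats is the cumulative sectors-written count for the device.

```shell
# Hypothetical check (assumes 2.6-series /proc/diskstats layout):
# print the kernel's sectors-written counter for a device. A sudden jump
# between two samples here would show the spike originates in the kernel
# counter itself, not in sysstat's arithmetic.
dev=sdb
awk -v d="$dev" '$3 == d { print $10 }' /proc/diskstats
```

Sampling this once per second alongside iostat would make it easy to compare the raw counter deltas against the kB_wrtn/s values iostat reports.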
Comment 1 Ivana Varekova 2005-10-03 07:14:58 EDT
Hello, could you please try to reproduce this problem with the test version
sysstat-5.0.5-5.fc.test
(http://people.redhat.com/varekova/sysstat-5.0.5-5.fc.test.i386.rpm or
http://people.redhat.com/varekova/sysstat-5.0.5-5.fc.test.src.rpm) and attach
the iostat output (or the part of it) that shows this bug? The test version
only adds auxiliary debug output, which should help find the problem.
Thanks.
Comment 2 Frank Ruiz 2005-10-06 16:35:25 EDT
Here is a sample of the output. You will see one abnormal entry listed at
1318568.00 kB.

avg-cpu:  %user   %nice    %sys %iowait   %idle
          25.68    0.00    1.82    2.08   70.43

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sdb              17.16         4.59     15054.80       5653   18538476
     37076952,             0, 37076952, 123140, 2,
(wr_sectors,wr_sectors,S_VALUE,itv,fctr)

avg-cpu:  %user   %nice    %sys %iowait   %idle
           0.00    0.00   24.00   71.25    4.75

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sdb             181.19        11.88    161366.34         12     162980
     37402912,      37076952, 325960, 101, 2,
(wr_sectors,wr_sectors,S_VALUE,itv,fctr)

avg-cpu:  %user   %nice    %sys %iowait   %idle
           0.00    0.00   23.81   58.40   17.79

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sdb             183.00        16.00    120488.00         16     120488
     37643888,      37402912, 240976, 100, 2,
(wr_sectors,wr_sectors,S_VALUE,itv,fctr)

avg-cpu:  %user   %nice    %sys %iowait   %idle
           0.00    0.00   23.25   51.50   25.25

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sdb             185.86        12.12    226670.71         12     224404
     38092696,      37643888, 448808, 99, 2,
(wr_sectors,wr_sectors,S_VALUE,itv,fctr)

avg-cpu:  %user   %nice    %sys %iowait   %idle
           0.25    0.00   22.25   51.25   26.25

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sdb             152.00         4.00   1318568.00          4    1318568
     40729832,      38092696, 2637136, 100, 2,
(wr_sectors,wr_sectors,S_VALUE,itv,fctr)

avg-cpu:  %user   %nice    %sys %iowait   %idle
           0.00    0.00    0.75   70.07   29.18

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sdb             104.00         4.00         0.00          4          0
     40729832,      40729832, 0, 100, 2, (wr_sectors,wr_sectors,S_VALUE,itv,fctr)

avg-cpu:  %user   %nice    %sys %iowait   %idle
           0.00    0.00    0.50   49.50   50.00

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sdb              94.00         0.00         0.00          0          0
     40729832,      40729832, 0, 100, 2, (wr_sectors,wr_sectors,S_VALUE,itv,fctr)

avg-cpu:  %user   %nice    %sys %iowait   %idle
           0.00    0.00    0.25   49.75   50.00

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sdb              84.00         0.00         0.00          0          0
     40729832,      40729832, 0, 100, 2, (wr_sectors,wr_sectors,S_VALUE,itv,fctr)

avg-cpu:  %user   %nice    %sys %iowait   %idle
           0.00    0.00    0.50   49.63   49.88

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sdb              79.00         0.00         0.00          0          0
     40729832,      40729832, 0, 100, 2, (wr_sectors,wr_sectors,S_VALUE,itv,fctr)


Comment 3 Frank Ruiz 2005-10-06 16:36:37 EDT
The command I ran in the background to generate I/O was dd if=/dev/zero
of=/opt/file1 bs=1M count=5000
Comment 4 Ivana Varekova 2006-04-18 09:27:14 EDT
This seems to be a kernel problem.
Sysstat computes the correct value from its input data.
In the debug output, the current wr_sectors value was 40,729,832 and the
previous one was 38,092,696;
itv (100) is the interval, in USER_HZ time units;
fctr (2) means the output is in kB (if the output were in blocks, fctr would
be 1; a block is 1/2 kB). So the resulting kB_wrtn/s value should indeed be
1318568.00.
The problem appears to be with the input values.
Reassigning.
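The arithmetic on the debug fields above can be sketched directly (all four input values are taken from the abnormal interval in comment 2):

```shell
# Sketch of the computation sysstat performs on the debug fields:
# kB_wrtn/s = (current - previous wr_sectors) / fctr, scaled by HZ / itv.
wr_cur=40729832   # wr_sectors in the abnormal interval
wr_prev=38092696  # wr_sectors from the previous interval
itv=100           # interval in USER_HZ ticks (USER_HZ = 100, so 1 second)
fctr=2            # 512-byte sectors per kB
echo $(( (wr_cur - wr_prev) * 100 / itv / fctr ))   # 1318568
```

The result matches the abnormal 1318568.00 kB_wrtn/s line exactly, which is why the fault lies with the kernel-supplied wr_sectors counter rather than with sysstat's calculation.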
Comment 5 Tom Coughlan 2006-04-24 12:18:06 EDT
What version of the kernel is this? Anything non-standard? 

Please post the output of sysreport, or dmesg or /var/log/messages showing the
boot messages. 
Comment 8 Tom Coughlan 2008-11-10 17:36:12 EST
No reply for two years. Closing.
