Bug 736666

Summary: v7 storage test has an improper write/read sequence
Product: [Retired] Red Hat Hardware Certification Program
Reporter: WANG Chao <chaowang>
Component: Test Suite (tests)
Assignee: Greg Nichols <gnichols>
Status: CLOSED ERRATA
QA Contact: Guangze Bai <gbai>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 6.1
CC: chaowang, czhang, fge, gbai, rlandry, ruyang, ykun, yshao
Target Milestone: ---   
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
In v7 1.3, the storage test did not perform read/write testing in the correct sequence: it ran write with 1024 bytes; write with 2048 bytes; ... write with 65536 bytes; then read with 1024 bytes; read with 2048 bytes; ... read with 65536 bytes. The write testing would never verify the data, because the old data was always overwritten by the subsequent write operation. In v7 1.4, this issue has been fixed; the read/write sequence is now: write with 1024 bytes, read with 1024 bytes and verify; write with 2048 bytes, read with 2048 bytes and verify; ... write with 65536 bytes, read with 65536 bytes and verify.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2011-11-08 15:43:01 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
    patch fix (flags: none)

Description WANG Chao 2011-09-08 11:43:01 UTC
Description of problem:
After reviewing the code in /usr/share/v7/tests/storage/storage.py, I found that storage.py runs dt in an improper sequence.
Currently, the v7 storage test proceeds as follows:
        write with 1024 bytes ... done
        write with 2048 bytes ... done
            ...
        write with 65536 bytes ... done
        read with 1024 bytes ... done
        read with 2048 bytes ... done
            ...
        read with 65536 bytes ... done
Please note that these write ops disable data verification, and each read op reports whether the data matches the dt option we passed ('pattern=XXX'). If a write op puts bad data on the disk, that bad block can easily be overwritten by the next write op, so the read op will not catch the error.
    I think the proper process would be:
        write with 1024, read with 1024, then we get a verification result
        ...
        write with 65536, read with 65536, then we get a verification result
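
A minimal sketch of this interleaved sequence in Python (this is not the actual storage.py code; the run_dt() helper, the block sizes, and the dt options shown, namely of=, if=, bs=, pattern= and limit=, are illustrative assumptions):

    import subprocess

    def run_dt(args):
        """Run dt with the given options and return its exit status."""
        return subprocess.call(["dt"] + args)

    def storage_test(device, pattern="0xDEADBEEF"):
        # Interleave each write with an immediate read-back so every block
        # size is verified before the next write can overwrite the data.
        for bs in (1024, 2048, 4096, 8192, 16384, 32768, 65536):
            write_rc = run_dt(["of=%s" % device, "bs=%d" % bs,
                               "pattern=%s" % pattern, "limit=10m"])
            # Read back the same block size right away; dt compares what it
            # reads against the given pattern and reports any mismatch.
            read_rc = run_dt(["if=%s" % device, "bs=%d" % bs,
                              "pattern=%s" % pattern, "limit=10m"])
            if write_rc != 0 or read_rc != 0:
                return False
        return True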

Version-Release number of selected component (if applicable):
14

How reproducible:
100%

Steps to Reproduce:
1. v7 run -t storage
2. Check the dt output
  
Actual results:
All writes are performed before the reads that do the verification.

Expected results:
Data should be verified with a read op immediately after each write op.

Additional info:

Comment 1 WANG Chao 2011-09-08 12:30:51 UTC
Created attachment 522105 [details]
patch fix

This patch also fixes:
bug 736638, bug 736675, and bug 736679

Comment 4 Caspar Zhang 2011-10-21 14:31:09 UTC
    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
In v7 1.3, the storage test did not perform read/write testing in the correct sequence. The sequence was:

        write with 1024 bytes;
        write with 2048 bytes;
            ...
        write with 65536 bytes.
        read with 1024 bytes;
        read with 2048 bytes;
            ...
        read with 65536 bytes.

The write testing would never verify the data, because the old data was always overwritten by the subsequent write operation.

In v7 1.4, this issue has been fixed; the read/write sequence is now:

        write with 1024 bytes; read with 1024 bytes and verify;
        write with 2048 bytes; read with 2048 bytes and verify;
        ...
        write with 65536 bytes; read with 65536 bytes and verify.

Comment 6 errata-xmlrpc 2011-11-08 15:43:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2011-1436.html
