Bug 1449314 - [whql][virtio-block+glusterfs]"Disk Stress" and "Disk Verification" job always failed on win7-32/win2012/win2k8R2 guest
Summary: [whql][virtio-block+glusterfs]"Disk Stress" and "Disk Verification" job always failed on win7-32/win2012/win2k8R2 guest
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: read-ahead
Version: 3.8
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Raghavendra G
QA Contact:
URL:
Whiteboard:
Depends On: 1414242
Blocks:
 
Reported: 2017-05-09 15:08 UTC by Raghavendra G
Modified: 2017-05-29 04:59 UTC
CC List: 20 users

Fixed In Version: glusterfs-3.8.12
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1414242
Environment:
Last Closed: 2017-05-29 04:59:32 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Worker Ant 2017-05-09 15:10:23 UTC
REVIEW: https://review.gluster.org/17223 (performance/read-ahead: prevent stale data being returned to application.) posted (#1) for review on release-3.8 by Raghavendra G (rgowdapp)

Comment 2 Worker Ant 2017-05-11 09:49:55 UTC
COMMIT: https://review.gluster.org/17223 committed in release-3.8 by Niels de Vos (ndevos) 
------
commit ae2f6a650fab30a91357c3f3327a432a0edb5fdf
Author: Raghavendra G <rgowdapp>
Date:   Fri Apr 11 15:58:47 2014 +0530

    performance/read-ahead: prevent stale data being returned to application.
    
    Assume that an fd is shared by two application threads/processes.
    
    T0 a read is triggered from app-thread t1 and the read call passes
       through write-behind.
    T1 app-thread t2 issues a write. The page on which the read from t1 is
       waiting is marked stale.
    T2 write-behind caches the write and indicates to the application that
       the write is complete.
    T3 app-thread t2 issues a read to the same region. Since there is
       already a page for that region (created as part of the read at T0),
       this read request waits on that page to be filled (though it is
       stale, which is a bug).
    T4 the read (triggered at T0) completes from the brick (with the write
       still pending). Now both read requests from t1 and t2 are served
       this data (though the data is stale from app-thread t2's
       perspective, which is a bug).
    T5 the write is flushed to the brick by write-behind.
    
    The fix is to not serve data from a stale page, but instead initiate a
    fresh read to the back-end.
    
    >Change-Id: Id6af733464fa41bb4e81fd29c7451c73d06453fb
    >BUG: 1414242
    >Signed-off-by: Raghavendra G <rgowdapp>
    >Reviewed-on: https://review.gluster.org/7447
    >Smoke: Gluster Build System <jenkins.org>
    >CentOS-regression: Gluster Build System <jenkins.org>
    >Reviewed-by: Csaba Henk <csaba>
    >NetBSD-regression: NetBSD Build System <jenkins.org>
    >Reviewed-by: Zhou Zhengping <johnzzpcrystal>
    >Reviewed-by: Amar Tumballi <amarts>
    
    (cherry picked from commit 2ff39c5cbea6fbda0d7a442f55e6dc2a72efb171)
    Change-Id: Id6af733464fa41bb4e81fd29c7451c73d06453fb
    BUG: 1449314
    Signed-off-by: Raghavendra G <rgowdapp>
    Reviewed-on: https://review.gluster.org/17223
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Niels de Vos <ndevos>
    Smoke: Gluster Build System <jenkins.org>

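The race described in the commit message above only manifests when an fd is shared across threads and read-ahead is holding a page that write-behind has already invalidated. The C sketch below mimics the T0-T5 sequence at the application level, assuming a hypothetical file (/mnt/gluster/testfile) on a glusterfs FUSE mount with read-ahead enabled; it only illustrates the read-your-writes expectation that the bug violated, it is not the WHQL "Disk Stress" job, and a single run will not reliably hit the timing window.

/*
 * Illustrative sketch only: two threads share one fd, as in the T0-T5
 * timeline. The mount path and region size are assumptions.
 * Build with: gcc -pthread -o ra-race ra-race.c
 */
#define _XOPEN_SOURCE 500
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define REGION_SIZE 4096
static int shared_fd;                       /* fd shared by t1 and t2 */

/* t1: issues the read that creates the read-ahead page (T0). */
static void *reader_t1(void *arg)
{
    char buf[REGION_SIZE];
    (void)arg;
    pread(shared_fd, buf, sizeof(buf), 0);  /* T0: read passes through write-behind */
    return NULL;
}

/* t2: writes the region (T1/T2), then reads it back (T3). */
static void *writer_then_reader_t2(void *arg)
{
    char newdata[REGION_SIZE];
    char buf[REGION_SIZE];
    (void)arg;
    memset(newdata, 'N', sizeof(newdata));
    pwrite(shared_fd, newdata, sizeof(newdata), 0); /* T1: page from T0 becomes stale */
                                                    /* T2: write-behind acks the write */
    pread(shared_fd, buf, sizeof(buf), 0);          /* T3: must return 'N', not stale data */
    if (memcmp(buf, newdata, sizeof(buf)) != 0)
        fprintf(stderr, "read-your-writes violated: stale data returned\n");
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    /* Hypothetical path on a glusterfs FUSE mount with read-ahead enabled. */
    shared_fd = open("/mnt/gluster/testfile", O_RDWR);
    if (shared_fd < 0) {
        perror("open");
        return 1;
    }
    pthread_create(&t1, NULL, reader_t1, NULL);
    pthread_create(&t2, NULL, writer_then_reader_t2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    close(shared_fd);
    return 0;
}

With the fix from https://review.gluster.org/17223, a page marked stale by the write at T1 is no longer used to satisfy the read at T3; read-ahead issues a fresh read to the brick instead, so t2 always observes its own write.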
Comment 3 Niels de Vos 2017-05-29 04:59:32 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.12, please open a new bug report.

glusterfs-3.8.12 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2017-May/000072.html
[2] https://www.gluster.org/pipermail/gluster-users/

