Bug 1449313
| Summary: | [whql][virtio-block+glusterfs]"Disk Stress" and "Disk Verification" job always failed on win7-32/win2012/win2k8R2 guest | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Raghavendra G <rgowdapp> |
| Component: | read-ahead | Assignee: | bugs <bugs> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 3.10 | CC: | bugs, csaba, ddumas, hhuang, jcody, juzhang, lijin, mdeng, michen, nigelb, nkinder, rbalakri, rgowdapp, rhs-bugs, rtalur, rwheeler, ssaha, stefanha, storage-qa-internal, virt-maint, vrozenfe |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1414242 | Environment: | |
| Last Closed: | 2017-09-08 10:27:56 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1414242 | | |
| Bug Blocks: | | | |
Comment 1
Worker Ant
2017-05-09 15:07:50 UTC
COMMIT: https://review.gluster.org/17222 committed in release-3.10 by Raghavendra Talur (rtalur)

------

commit 409806eb813a64319d301464e08c87df71382444
Author: Raghavendra G <rgowdapp>
Date: Fri Apr 11 15:58:47 2014 +0530

performance/read-ahead: prevent stale data being returned to application.

Assume that an fd is shared by two application threads/processes.

- T0: a read is triggered from app-thread t1 and the read call passes through write-behind.
- T1: app-thread t2 issues a write. The page on which the read from t1 is waiting is marked stale.
- T2: write-behind caches the write and reports write completion to the application.
- T3: app-thread t2 issues a read to the same region. Since there is already a page for that region (created as part of the read at T0), this read request waits on that page to be filled, even though it is stale - which is a bug.
- T4: the read triggered at T0 completes from the brick (with the write still pending). Both read requests, from t1 and t2, are now served this data, although it is stale from app-thread t2's perspective - which is a bug.
- T5: the write is flushed to the brick by write-behind.

The fix is not to serve data from a stale page, but instead to initiate a fresh read to the back-end (see the sketch after this comment).

>Change-Id: Id6af733464fa41bb4e81fd29c7451c73d06453fb
>BUG: 1414242
>Signed-off-by: Raghavendra G <rgowdapp>
>Reviewed-on: https://review.gluster.org/7447
>Smoke: Gluster Build System <jenkins.org>
>CentOS-regression: Gluster Build System <jenkins.org>
>Reviewed-by: Csaba Henk <csaba>
>NetBSD-regression: NetBSD Build System <jenkins.org>
>Reviewed-by: Zhou Zhengping <johnzzpcrystal>
>Reviewed-by: Amar Tumballi <amarts>

(cherry picked from commit 2ff39c5cbea6fbda0d7a442f55e6dc2a72efb171)

Change-Id: Id6af733464fa41bb4e81fd29c7451c73d06453fb
BUG: 1449313
Signed-off-by: Raghavendra G <rgowdapp>
Reviewed-on: https://review.gluster.org/17222
NetBSD-regression: NetBSD Build System <jenkins.org>
CentOS-regression: Gluster Build System <jenkins.org>
Smoke: Gluster Build System <jenkins.org>

This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.12.0, please open a new bug report.

glusterfs-3.12.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-September/000082.html
[2] https://www.gluster.org/pipermail/gluster-users/
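The race above involves two threads, but the essence of the fix is a single check: a read must never be satisfied from a cached page that an intervening write has marked stale; such a page has to be refetched from the back-end. The following is a minimal, single-threaded C sketch of that idea only. It is not the GlusterFS read-ahead translator code, and all names (`ra_page`, `backend_read`, `app_read`, `app_write`) are hypothetical, chosen purely for illustration.

```c
/*
 * Minimal sketch (NOT the actual GlusterFS read-ahead code) of the fix
 * described above: a cached page marked stale by an intervening write
 * must not be served to a reader; the region is refetched instead.
 * All names are hypothetical; the model is simplified and single-threaded.
 */
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 16

struct ra_page {
    char data[PAGE_SIZE];
    int  ready;   /* back-end read completed and filled data[]          */
    int  stale;   /* a write overlapping this page happened since then  */
};

/* Pretend back-end store (the "brick" in gluster terms). */
static char backend[PAGE_SIZE] = "old-old-old-old";

/* Simulate a read from the back-end into the cached page. */
static void backend_read(struct ra_page *page)
{
    memcpy(page->data, backend, PAGE_SIZE);
    page->ready = 1;
    page->stale = 0;
}

/* A write updates the back-end (simplified as immediate instead of
 * write-behind) and marks any cached page covering the region stale. */
static void app_write(struct ra_page *page, const char *buf)
{
    memcpy(backend, buf, strlen(buf) + 1);
    page->stale = 1;
}

/* Serve a read: the fix is the stale check - a stale page is refetched
 * from the back-end instead of handing out whatever it currently holds. */
static void app_read(struct ra_page *page, char *buf)
{
    if (!page->ready || page->stale)
        backend_read(page);          /* fresh read, never stale cache    */
    memcpy(buf, page->data, PAGE_SIZE);
}

int main(void)
{
    struct ra_page page = { .ready = 0, .stale = 0 };
    char buf[PAGE_SIZE];

    app_read(&page, buf);                 /* T0: populates the page      */
    printf("read 1: %s\n", buf);          /* -> old-old-old-old          */

    app_write(&page, "new-new-new-new");  /* T1/T2: write marks it stale */

    app_read(&page, buf);                 /* T3: stale page is refetched */
    printf("read 2: %s\n", buf);          /* -> new-new-new-new          */
    return 0;
}
```

In the real translator, per the commit message, the stale flag is set by the write path while a read on that page is still in flight, which is exactly why serving the page once it fills (T4 above) would hand stale data to thread t2; the refetch-on-stale check avoids that.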