Bug 1479692
Summary: | Running sysbench on vm disk from plain distribute gluster volume causes disk corruption | |
---|---|---|---
Product: | [Community] GlusterFS | Reporter: | Krutika Dhananjay <kdhananj>
Component: | posix | Assignee: | Krutika Dhananjay <kdhananj>
Status: | CLOSED CURRENTRELEASE | QA Contact: |
Severity: | unspecified | Docs Contact: |
Priority: | unspecified | |
Version: | 3.11 | CC: | bugs, johan, kdhananj, pkarampu, rhinduja, rhs-bugs, sabose, storage-qa-internal
Target Milestone: | --- | Keywords: | Triaged
Target Release: | --- | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | glusterfs-3.11.3 | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | 1472758 | Environment: |
Last Closed: | 2017-08-24 14:46:21 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | 1472757, 1472758, 1479717, 1480193, 1583464 | |
Bug Blocks: | 1458846 | |
Description
Krutika Dhananjay
2017-08-09 08:12:07 UTC
Comment 1 (Johan Bernhardsson):

Could this affect writes as well? I see a similar problem when running:

dd if=/dev/zero of=/pathtoglusterfsmount/testfile bs=1M count=1000 oflag=direct

It fails horribly on write. If we add write-behind and set it to a high value, it works. Also, it only happens when shard is enabled.

REVIEW: https://review.gluster.org/18009 (storage/posix: Use the ret value of posix_gfid_heal()) posted (#1) for review on release-3.11 by Krutika Dhananjay (kdhananj)

(In reply to Johan Bernhardsson from comment #1)
> Could this affect writes as well? I see a similar problem when running:
> dd if=/dev/zero of=/pathtoglusterfsmount/testfile bs=1M count=1000 oflag=direct
>
> It fails horribly on write. If we add write-behind and set it to a high
> value, it works.
>
> Also, it only happens when shard is enabled.

Yes, it can fail WRITEs too. It is more prominent with shard, but the bug itself is in the posix translator, not in shard.

COMMIT: https://review.gluster.org/18009 committed in release-3.11 by Shyamsundar Ranganathan (srangana)

------

commit 884a1be3aaebbe6dfaaab343452f937bfa92cb99
Author: Krutika Dhananjay <kdhananj>
Date:   Wed Jul 19 16:14:59 2017 +0530

    storage/posix: Use the ret value of posix_gfid_heal()

    ... to make the change in commit acf8cfdf truly useful.

    Without this, a race between entry creation fops and lookup at the
    posix layer can cause lookups to fail with ENODATA, as opposed to
    ENOENT.
    Backport of:
    > Change-Id: I44a226872283a25f1f4812f03f68921c5eb335bb
    > Reviewed-on: https://review.gluster.org/17821
    > BUG: 1472758
    > cherry-picked from 669868d23eaeba42809fca7be134137c607d64ed

    Change-Id: I44a226872283a25f1f4812f03f68921c5eb335bb
    BUG: 1479692
    Signed-off-by: Krutika Dhananjay <kdhananj>
    Reviewed-on: https://review.gluster.org/18009
    Smoke: Gluster Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Raghavendra Bhat <raghavendra>

This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.11.3, please open a new bug report.

glusterfs-3.11.3 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-August/000081.html
[2] https://www.gluster.org/pipermail/gluster-users/
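Comment #1 notes that the failure disappears when write-behind is enabled with a large window. For readers hitting this before upgrading, a sketch of the relevant volume tuning, assuming a hypothetical volume named testvol (the option names are standard gluster tunables, but the window value is illustrative, and this only masks the race rather than fixing it):

```shell
# Hypothetical volume name "testvol"; performance.write-behind* are
# standard gluster volume options. Raise the write-behind window above
# its default so O_DIRECT-style write bursts are absorbed.
gluster volume set testvol performance.write-behind on
gluster volume set testvol performance.write-behind-window-size 4MB

# Re-run the O_DIRECT write reproducer from comment #1:
dd if=/dev/zero of=/pathtoglusterfsmount/testfile bs=1M count=1000 oflag=direct
```

The real fix is in glusterfs-3.11.3; this tuning merely makes the lookup/write race less likely to be observed.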