Bug 1448299
| Summary: | Mismatch in checksum of the image file after copying to a new image file | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Krutika Dhananjay <kdhananj> |
| Component: | sharding | Assignee: | Krutika Dhananjay <kdhananj> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | bugs <bugs> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | mainline | CC: | bugs, rhs-bugs, sasundar, storage-qa-internal |
| Target Milestone: | --- | Keywords: | Triaged |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.12.0 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1447959 | Environment: | |
| Last Closed: | 2017-09-05 17:29:02 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1447959 | | |
Description
Krutika Dhananjay, 2017-05-05 06:36:40 UTC
The issue is a size mismatch between the source and destination files after `cp`, where the destination file is on a gluster mount and sharded, which in turn leads to a checksum mismatch. The bug is in shard's aggregated-size accounting and is exposed when parallel writes and an extending truncate hit the file at the same time. `cp` truncates the destination file before writing to it, and the parallelism arises when write-behind flushes cached writes while the extending truncate is in flight.

Note that the data integrity of the VM image is *not* affected by this bug; only the recorded size of the file is. To confirm this, I truncated the extra bytes off the destination file to make its size equal to that of the source file and computed the checksum again; in this case the checksums did match. I asked Satheesaran to verify the same, and he confirmed it works. Tools like md5sum and sha256sum fetch the file size and read up to the end of that size, so on the destination file the excess region reads back as zeroes and the checksum is computed over that region too.

FWIW, the same checksum test exists in the upstream master regression test suite: https://github.com/gluster/glusterfs/blob/master/tests/bugs/shard/bug-1272986.t. It passes there consistently because the script performs the copy through `dd` as opposed to `cp`.
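The size-only nature of the mismatch can be illustrated outside gluster with a small sketch (file contents and the 4096-byte inflation are hypothetical; this only mimics the symptom, not the shard translator): a file whose length has been inflated reads back zeroes in its tail, so its checksum differs from the source, and truncating it back to the source's size restores a matching checksum.

```python
import hashlib
import os
import tempfile

def md5(path):
    """Checksum a file the way md5sum does: read to end of file size."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# src: the original data (contents are arbitrary for this sketch)
src = tempfile.NamedTemporaryFile(delete=False)
src.write(b"hello gluster" * 1024)
src.close()

# dst: a byte-for-byte copy whose recorded size we then inflate,
# mimicking the shard size-accounting bug (extra length reads as zeroes)
dst = tempfile.NamedTemporaryFile(delete=False)
with open(src.name, "rb") as f:
    dst.write(f.read())
dst.close()
os.truncate(dst.name, os.path.getsize(src.name) + 4096)

# Checksums mismatch because the zero-filled tail is read too
assert md5(src.name) != md5(dst.name)

# Truncating dst back to src's size makes checksums match again,
# showing the data itself was intact; only the size was wrong
os.truncate(dst.name, os.path.getsize(src.name))
assert md5(src.name) == md5(dst.name)
```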
REVIEW: https://review.gluster.org/17184 (features/shard: Set size in inode ctx before size update for truncate too) posted for review on master by Krutika Dhananjay (kdhananj) as patch sets #2 through #5.

COMMIT: https://review.gluster.org/17184 committed in master by Pranith Kumar Karampuri (pkarampu)

------

    commit 9df83b504a01e86d3b73af6c40df0c94cd2cd97a
    Author: Krutika Dhananjay <kdhananj>
    Date:   Fri May 5 14:30:49 2017 +0530

        features/shard: Set size in inode ctx before size update for truncate too

        Change-Id: I7e984bb0f50c7d42764c0648e697d94d6c768dc7
        BUG: 1448299
        Signed-off-by: Krutika Dhananjay <kdhananj>
        Reviewed-on: https://review.gluster.org/17184
        CentOS-regression: Gluster Build System <jenkins.org>
        NetBSD-regression: NetBSD Build System <jenkins.org>
        Reviewed-by: Pranith Kumar Karampuri <pkarampu>
        Smoke: Gluster Build System <jenkins.org>

This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.12.0, please open a new bug report.

glusterfs-3.12.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-September/000082.html
[2] https://www.gluster.org/pipermail/gluster-users/