Bug 1234848
Summary: | Disperse volume : heal fails during file truncates | ||
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Bhaskarakiran <byarlaga> |
Component: | disperse | Assignee: | Sunil Kumar Acharya <sheggodu> |
Status: | CLOSED WORKSFORME | QA Contact: | Matt Zywusko <mzywusko> |
Severity: | unspecified | Docs Contact: | |
Priority: | unspecified | ||
Version: | rhgs-3.1 | CC: | mzywusko, pkarampu, rhs-bugs |
Target Milestone: | --- | Keywords: | Reopened, ZStream |
Target Release: | --- | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2017-04-06 06:00:52 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 1223636 |
Description
Bhaskarakiran
2015-06-23 11:38:03 UTC
We tried recreating the issue.

1. Created a disperse volume.

2. Killed one of the bricks.

3. Created a 1 GB file.

Mount point:

[root@varada mount-1]# ls -l
total 1000000
-rw-r--r--. 1 root root 1024000000 Feb 3 16:02 testfile
[root@varada mount-1]#

Bricks:

[root@varada mount-1]# ls -l /LAB/store/ec-*
/LAB/store/ec-1:
total 250008
-rw-r--r--. 2 root root 256000000 Feb 3 16:02 testfile

/LAB/store/ec-2:
total 250008
-rw-r--r--. 2 root root 256000000 Feb 3 16:02 testfile

/LAB/store/ec-3:
total 250008
-rw-r--r--. 2 root root 256000000 Feb 3 16:02 testfile

/LAB/store/ec-4:
total 0

/LAB/store/ec-5:
total 250008
-rw-r--r--. 2 root root 256000000 Feb 3 16:02 testfile

/LAB/store/ec-6:
total 250008
-rw-r--r--. 2 root root 256000000 Feb 3 16:02 testfile
[root@varada mount-1]#

4. Brick was brought online.

5. File was listed for healing. Initiated the healing.

[root@varada mount-1]# gluster volume heal ec-1 info
Brick varada:/LAB/store/ec-1
/testfile
Status: Connected
Number of entries: 1

Brick varada:/LAB/store/ec-2
/testfile
Status: Connected
Number of entries: 1

Brick varada:/LAB/store/ec-3
/testfile
Status: Connected
Number of entries: 1

Brick varada:/LAB/store/ec-4
Status: Connected
Number of entries: 0

Brick varada:/LAB/store/ec-5
/testfile
Status: Connected
Number of entries: 1

Brick varada:/LAB/store/ec-6
/testfile
Status: Connected
Number of entries: 1

[root@varada mount-1]#
[root@varada mount-1]# gluster volume heal ec-1
Launching heal operation to perform index self heal on volume ec-1 has been successful
Use heal info commands to check status
[root@varada mount-1]#
[root@varada mount-1]# ls -l /LAB/store/ec-*
/LAB/store/ec-1:
total 250008
-rw-r--r--. 2 root root 256000000 Feb 3 16:02 testfile

/LAB/store/ec-2:
total 250008
-rw-r--r--. 2 root root 256000000 Feb 3 16:02 testfile

/LAB/store/ec-3:
total 250008
-rw-r--r--. 2 root root 256000000 Feb 3 16:02 testfile

/LAB/store/ec-4:
total 145412
-rw-r--r--. 2 root root 148897792 Feb 3 16:03 testfile

/LAB/store/ec-5:
total 250008
-rw-r--r--. 2 root root 256000000 Feb 3 16:02 testfile

/LAB/store/ec-6:
total 250008
-rw-r--r--. 2 root root 256000000 Feb 3 16:02 testfile
[root@varada mount-1]#

6. Truncated the file.

[root@varada mount-1]# truncate -s 700MB testfile
[root@varada mount-1]# ls -l
total 683594
-rw-r--r--. 1 root root 700000000 Feb 3 16:04 testfile
[root@varada mount-1]#

7. After some time, checked the file size on the bricks.

[root@varada mount-1]# ls -l /LAB/store/ec-*
/LAB/store/ec-1:
total 170908
-rw-r--r--. 2 root root 175000064 Feb 3 16:04 testfile

/LAB/store/ec-2:
total 170908
-rw-r--r--. 2 root root 175000064 Feb 3 16:04 testfile

/LAB/store/ec-3:
total 170908
-rw-r--r--. 2 root root 175000064 Feb 3 16:04 testfile

/LAB/store/ec-4:
total 170904
-rw-r--r--. 2 root root 175000064 Feb 3 16:04 testfile

/LAB/store/ec-5:
total 170908
-rw-r--r--. 2 root root 175000064 Feb 3 16:04 testfile

/LAB/store/ec-6:
total 170908
-rw-r--r--. 2 root root 175000064 Feb 3 16:04 testfile
[root@varada mount-1]#

All the files were of the same size on the bricks. It can be observed that the file size reported on the mount point is not the same as the file size on the bricks.

The above tests were performed several times, both on upstream and on downstream v3.7.1. The behaviour described above was observed consistently and is working as expected.

As detailed in my previous update, I am not able to recreate the issue. I have also discussed it with QA (Nag) and he is fine with the observation. Any suggestions?

Sure, go ahead and close it as works for me.
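For clarity on why the per-brick size (175000064) is not exactly one quarter of the size reported on the mount point (700000000) after the truncate, here is a quick sanity check. It assumes the default EC fragment size of 512 bytes per brick and a 4+2 (four data, two redundancy) layout over the six bricks shown above; neither is stated in the comment, so treat this as a sketch rather than a confirmed explanation:

# assumed: 4 data bricks x 512-byte fragments => 2048-byte stripes on the mount
# expected per-brick size = ceil(file_size / 2048) * 512
echo $(( (1024000000 + 2047) / 2048 * 512 ))   # 256000000, matches the brick sizes in step 3
echo $(( ( 700000000 + 2047) / 2048 * 512 ))   # 175000064, matches the brick sizes in step 7

If that assumption holds, the bricks store whole fragments (rounded up to the stripe boundary) while the mount point reports the logical file size, so a small mismatch after truncating to a size that is not stripe-aligned is expected.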
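For reference, a condensed sketch of the reproduction steps above. The volume-create options, mount path, and the way the brick was killed are assumptions (only the brick paths, volume name ec-1, and file names appear in the comment):

# assumed 4+2 disperse volume over the six bricks listed above
gluster volume create ec-1 disperse 6 redundancy 2 varada:/LAB/store/ec-{1..6} force
gluster volume start ec-1
mount -t glusterfs varada:/ec-1 /mnt/mount-1 && cd /mnt/mount-1

# step 2: kill one brick process (PID taken from 'gluster volume status ec-1')
kill <brick-pid>

# step 3: create a ~1 GB file on the mount
dd if=/dev/zero of=testfile bs=1024 count=1000000

# step 4: bring the killed brick back online
gluster volume start ec-1 force

# step 5: check pending heals and trigger the heal
gluster volume heal ec-1 info
gluster volume heal ec-1

# step 6: truncate the file, then re-check the sizes on the bricks (step 7)
truncate -s 700MB testfile
ls -l /LAB/store/ec-*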