Bug 1140506
Summary: [DHT-REBALANCE]-DataLoss: The data appended to a file during its migration will be lost once the migration is done

Product: [Red Hat Storage] Red Hat Gluster Storage | Reporter: shylesh <shmohan>
Component: distribute | Assignee: Raghavendra G <rgowdapp>
Status: CLOSED ERRATA | QA Contact: Triveni Rao <trao>
Severity: urgent | Docs Contact:
Priority: urgent
Version: rhgs-3.0 | CC: achauras, annair, asrivast, mzywusko, nbalacha, nsathyan, sashinde, srangana, vagarwal
Target Milestone: --- | Keywords: ZStream
Target Release: RHGS 3.1.0
Hardware: x86_64
OS: Linux
Whiteboard: dht-data-loss
Fixed In Version: glusterfs-3.7.1-2 | Doc Type: Bug Fix
Doc Text: | Story Points: ---
Clone Of:
: 1142423 (view as bug list) | Environment:
Last Closed: 2015-07-29 04:35:53 UTC | Type: Bug
Regression: --- | Mount Type: ---
Documentation: --- | CRM:
Verified Versions: | Category: ---
oVirt Team: --- | RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- | Target Upstream Version:
Embargoed:
Bug Depends On: 1142423, 1225809
Bug Blocks: 1202842
Attachments: mount logs (attachment 936408), test program (attachment 936411)
Description by shylesh, 2014-09-11 07:25:50 UTC
Created attachment 936408 [details]: mount logs
Created attachment 936411 [details]: test program
Just an update on the bug. This bug is not reproducible on a fresh mount: if rebalance is running for the first time after the mount and data is appended during it, everything works fine. If the same mount persists, a subsequent rebalance with a data append leads to this bug.

2.1u2 had a different issue for the same test case, which is captured in the following bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1059687
https://bugzilla.redhat.com/show_bug.cgi?id=1058569
https://bugzilla.redhat.com/show_bug.cgi?id=1054782

This bug has 2 code-related issues. This is split as "Issue 1: Invalid stashed value in inode ctx 1" and "Issue 2: Incorrect Phase 2 cached/hashed determination on an open fd". I am detailing Issue 1 and Du will detail the second one.

Issue 1: Invalid stashed value in inode ctx 1

Test case to reproduce this (a shell sketch of these steps follows at the end of this comment):
- Create an nx2 or even nx1 volume
- Mount it over FUSE
- Create a 2 GB file (say FINAL)
1. Rename FINAL to ABCDE
2. Ensure that ABCDE hashes to a different subvolume (for the next rebalance step to work)
3. Run a rebalance force
4. When rebalance has started on ABCDE, start an appending write to ABCDE
5. Check the file's size on the bricks
- Repeat steps 1..5 without restarting the mount or remounting

The second time the test is run, the appending write can demonstrate a couple of behaviors:
- dht_write and dht_write2 write to the same subvolume, which is the old cached subvolume (so the new location does not receive the bytes thus written)
- dht_write2 is called with a cached subvolume for which the fd we send is invalid (this is not caught by the application due to write-behind)

Finally, either the data is written to the older location only and no errors are popped up to the application, or the data is not written anywhere, or the first write appends the data to the older location and not to the newer location (i.e. the file's hashed subvolume in this case). (Older is the cached location and newer is the hashed location, so any appending writes not replayed to the newer location result in a loss once rebalance is done with the file.)

Code problem: dht_migration_complete_check_task (i.e. migration Phase 2 detection) never gets called, as we finish the appending writes before the file is completely migrated (hence the large file size). Due to this, inode_ctx_reset1 is never called, so we have stashed a subvolume here that we think we should send future writes to in case we detect a rebalance in progress during a FOP (say write, though it could be other FOPs as well that check via dht_inode_ctx_get1), and we blindly send the FOP without opening the fd on the returned subvolume. So the issue is that the stashed data in ctx 1 should be invalidated (post a rebalance?) somehow; otherwise we end up in troubled waters with a data loss. The Phase 1 migration function dht_rebalance_in_progress_check is not called because there is already data in inode ctx 1, for optimization reasons (i.e. each write or FOP that needs this does not determine it again).

Solution proposed: Stash this ctx 1 information on the fd instead, so that its lifetime is the lifetime of the fd. If the fd outlives the migration (i.e. it remains open even after rebalance is complete), this would still work, because the brick retains the open fd, and we will detect Phase 2 of the migration being in progress/complete when we reuse this fd (unlink will not delete the file until the last fd is closed). Other solutions are welcome; otherwise we will go ahead with this one for Issue 1 presented here.
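For convenience, a minimal shell sketch of the reproduction steps above. The volume name, server names and brick paths are placeholders (not taken from this report); any nx2 or nx1 layout works:

    # Minimal reproduction sketch (placeholders: volume "testvol", servers
    # "server1"/"server2", brick paths under /bricks).
    VOL=testvol
    MNT=/mnt/glusterfs

    gluster volume create $VOL server1:/bricks/brick0/$VOL server2:/bricks/brick0/$VOL
    gluster volume start $VOL
    mount -t glusterfs server1:/$VOL $MNT

    # A large file so the migration takes long enough to race against
    dd if=/dev/zero of=$MNT/FINAL bs=1M count=2048

    # Steps 1-3: rename to a name that hashes to the other subvolume, then
    # force a rebalance so the file gets migrated
    mv $MNT/FINAL $MNT/ABCDE
    gluster volume rebalance $VOL start force

    # Step 4: while ABCDE is still being migrated, append to it through the mount
    dd oflag=append conv=notrunc if=/dev/zero of=$MNT/ABCDE bs=1024 count=100000

    # Step 5: on the servers, compare the file size on the old (cached) and
    # new (hashed) bricks
    gluster volume rebalance $VOL status
    ls -l /bricks/brick*/$VOL/ABCDE

    # Repeat steps 1-5 (rename again to a name hashing elsewhere, rebalance,
    # append) WITHOUT unmounting; the second pass on the same mount is the one
    # that loses the appended data.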
Verified the bug by creating a file, renaming it so that the hashed and cached copies lie on different subvolumes, rebalancing, and then appending. The file size is appended properly and the rebalance is successful.

[root@dht-rhs-23 test]# mv FILE-1-rename-2 FILE-1-rename-3
[root@dht-rhs-23 test]# ls -ltrh
total 2.2G
-rw-r--r--. 1 root root 2.2G Jun 16 00:18 FILE-1-rename-3

[root@dht-rhs-23 test]# gluster v rebalance testvol start force
volume rebalance: testvol: success: Rebalance on testvol has been started successfully. Use rebalance status command to check status of the rebalance process. ID: 2d2d0473-b374-4116-82a4-2dff55a09bbd

Rebalance in progress:

[root@dht-rhs-23 test]# gluster v rebalance testvol status
Node            Rebalanced-files    size        scanned    failures    skipped    status         run time in secs
---------       ----------------    ---------   -------    --------    -------    -----------    ----------------
localhost                      0    0Bytes            2           0          0    in progress               26.00
10.70.47.174                   0    0Bytes            0           0          0    completed                  0.00
volume rebalance: testvol: success:

Appending to the file:

[root@dht-rhs-23 test]# dd oflag=append conv=notrunc if=/dev/zero of=FILE-1-rename-3 bs=1024 count=100000
100000+0 records in
100000+0 records out
102400000 bytes (102 MB) copied, 105.431 s, 971 kB/s

[root@dht-rhs-23 test]# gluster v rebalance testvol status
Node            Rebalanced-files    size        scanned    failures    skipped    status         run time in secs
---------       ----------------    ---------   -------    --------    -------    -----------    ----------------
localhost                      1    2.1GB             2           0          0    completed                257.00
10.70.47.174                   0    0Bytes            0           0          0    completed                  0.00
volume rebalance: testvol: success:

Rebalance completed.

=== File lies on a different subvolume:

[root@dht-rhs-24 ~]# for i in {0..9}; do ls -ltrh /bricks/brick$i/testvol/test; done
total 2.2G
-rw-r-Sr-T. 2 root root 2.2G Jun 16 00:18 FILE-1-rename-3
total 679M
---------T. 2 root root 2.2G Jun 16 00:20 FILE-1-rename-3

[root@dht-rhs-24 ~]# ls -l /mnt/glusterfs/test/
total 2245666
-rw-r-Sr-T. 1 root root 2299561984 Jun 16 00:18 FILE-1-rename-3
[root@dht-rhs-24 ~]# ls -l /mnt/glusterfs/test/
total 2252067
-rw-r-Sr-T. 1 root root 2306116608 Jun 16 00:20 FILE-1-rename-3
[root@dht-rhs-24 ~]# ls -l /mnt/glusterfs/test/
total 2299427
-rw-r-Sr-T. 1 root root 2354613248 Jun 16 00:21 FILE-1-rename-3
[root@dht-rhs-24 ~]# ls -l /mnt/glusterfs/test/
total 2345666
-rw-r--r--. 1 root root 2401961984 Jun 16 00:22 FILE-1-rename-3

The file size increases appropriately. Tried this multiple times and each time the file was appended appropriately. Marking the bug verified.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html
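As a cross-check of the verification listing above: the final size on the mount should be the pre-append size plus the bytes written by dd (2299561984 + 102400000 = 2401961984, which matches the last listing). A minimal sketch of that check, assuming the same mount path and file name as in the output above:

    # Pre-append size (from the first mount listing) plus the bytes appended by dd
    expected=$((2299561984 + 102400000))
    actual=$(stat -c %s /mnt/glusterfs/test/FILE-1-rename-3)
    if [ "$actual" -eq "$expected" ]; then
        echo "no bytes lost across the migration ($actual bytes)"
    else
        echo "size mismatch: $actual != $expected"
    fi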