Bug 1225809 - [DHT-REBALANCE]-DataLoss: The data appended to a file during its migration will be lost once the migration is done
Summary: [DHT-REBALANCE]-DataLoss: The data appended to a file during its migration will be lost once the migration is done
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: distribute
Version: 3.7.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: Raghavendra G
QA Contact:
URL:
Whiteboard:
Depends On: 1142423
Blocks: 1140506 glusterfs-3.7.2
 
Reported: 2015-05-28 09:30 UTC by Raghavendra G
Modified: 2015-06-20 09:48 UTC
CC List: 11 users

Fixed In Version: glusterfs-3.7.2
Clone Of: 1142423
Environment:
Last Closed: 2015-06-20 09:48:47 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Raghavendra G 2015-05-28 09:30:55 UTC
+++ This bug was initially created as a clone of Bug #1142423 +++

+++ This bug was initially created as a clone of Bug #1140506 +++

Description of problem:
If any data is appended to a file while its migration is in progress, that data is lost once the migration is over.

How reproducible:
Always

Steps to Reproduce:
1. Create a 54-brick distributed-replicate volume.
2. Create a big 3 GB file from urandom:
   dd if=/dev/urandom of=FILE bs=512M count=6
3. Rename the file so that the subsequent rebalance will migrate it:
   mv FILE abc
4. Check the file size before migration.
5. Start a forced rebalance:
   gluster volume rebalance <vol> start force
6. While migration is in progress, append some data to the file using the program attached to this bug (a minimal stand-in is sketched below).
7. Check the file size during migration.
8. Check the file size after migration.
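
The appender attached to the bug (likely the "slow" binary visible in the directory listings below) is not reproduced here; a hypothetical minimal stand-in that appends a short record once per second, matching the one-WRITE-per-second cadence in the logs, could look like this:

/* slow_append.c - hypothetical stand-in for the attached appender; not
 * the actual program from the bug. Appends a short record once per
 * second so the writes overlap the migration window.
 * Usage: ./slow_append <path> <seconds> */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <path> <seconds>\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_WRONLY | O_APPEND);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    for (int i = 0; i < atoi(argv[2]); i++) {
        char buf[32];
        int len = snprintf(buf, sizeof(buf), "append %d\n", i);
        /* a failing write here is what shows up as WRITE => -1
         * (Invalid argument) in the fuse log below */
        if (write(fd, buf, len) != len)
            perror("write");
        sleep(1);
    }
    if (close(fd) < 0)
        perror("close");   /* the FLUSH() ERR in the log surfaces here */
    return 0;
}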

Actual results:



Before migration
=======
[root@localhost mnt]# ll
total 3145737
drwxr-xr-x 2 root root        162 Sep 10 15:09 2
-rw-r--r-- 1 root root         24 Sep 10 15:12 f1
-rw-r-Sr-T 1 root root 3221225523 Sep 11 03:06 FILE1
-rwxr-xr-x 1 root root       8139 Sep 10 15:11 slow

after migration
=========
[root@localhost mnt]# ll
total 3145737
drwxr-xr-x 2 root root        162 Sep 10 15:09 2
-rw-r--r-- 1 root root         24 Sep 10 15:12 f1
-rw-r--r-- 1 root root 3221225522 Sep 11 03:06 FILE1
-rwxr-xr-x 1 root root       8139 Sep 10 15:11 slow

 

Additional info:

[2014-09-11 07:06:26.508878] W [fuse-bridge.c:2238:fuse_writev_cbk] 0-glusterfs-fuse: 33829: WRITE => -1 (Invalid argument)
[2014-09-11 07:06:27.510004] W [fuse-bridge.c:2238:fuse_writev_cbk] 0-glusterfs-fuse: 33831: WRITE => -1 (Invalid argument)
[2014-09-11 07:06:28.510720] W [fuse-bridge.c:2238:fuse_writev_cbk] 0-glusterfs-fuse: 33833: WRITE => -1 (Invalid argument)
[2014-09-11 07:06:29.511543] W [fuse-bridge.c:2238:fuse_writev_cbk] 0-glusterfs-fuse: 33835: WRITE => -1 (Invalid argument)
[2014-09-11 07:06:30.512182] W [fuse-bridge.c:2238:fuse_writev_cbk] 0-glusterfs-fuse: 33837: WRITE => -1 (Invalid argument)
[2014-09-11 07:06:31.517089] W [fuse-bridge.c:2238:fuse_writev_cbk] 0-glusterfs-fuse: 33839: WRITE => -1 (Invalid argument)
[2014-09-11 07:06:32.517822] W [fuse-bridge.c:2238:fuse_writev_cbk] 0-glusterfs-fuse: 33841: WRITE => -1 (Invalid argument)
[2014-09-11 07:06:33.518535] W [fuse-bridge.c:2238:fuse_writev_cbk] 0-glusterfs-fuse: 33843: WRITE => -1 (Invalid argument)
[2014-09-11 07:06:34.519160] W [fuse-bridge.c:2238:fuse_writev_cbk] 0-glusterfs-fuse: 33845: WRITE => -1 (Invalid argument)
[2014-09-11 07:06:35.519623] W [fuse-bridge.c:2238:fuse_writev_cbk] 0-glusterfs-fuse: 33847: WRITE => -1 (Invalid argument)
[2014-09-11 07:06:36.520035] W [fuse-bridge.c:2238:fuse_writev_cbk] 0-glusterfs-fuse: 33849: WRITE => -1 (Invalid argument)
[2014-09-11 07:06:37.520552] W [fuse-bridge.c:2238:fuse_writev_cbk] 0-glusterfs-fuse: 33851: WRITE => -1 (Invalid argument)
[2014-09-11 07:06:37.521722] W [fuse-bridge.c:1237:fuse_err_cbk] 0-glusterfs-fuse: 33855: FLUSH() ERR => -1 (Invalid argument)
[2014-09-11 07:07:10.539605] W [client-rpc-fops.c:2761:client3_3_lookup_cbk] 6-dongra-client-4: remote operation failed: No such file or directory. Path: /FILE1 (fcbe026d-b646-4391-9f03-849a381e8a84)
[2014-09-11 07:07:10.539656] W [client-rpc-fops.c:2761:client3_3_lookup_cbk] 6-dongra-client-5: remote operation failed: No such file or directory. Path: /FILE1 (fcbe026d-b646-4391-9f03-849a381e8a84)

attaching the logs

--- Additional comment from shylesh on 2014-09-12 05:46:52 EDT ---

Just an update on the bug.

This bug is not reproducible on a fresh mount: if rebalance runs for the first time after the mount and data is appended during it, everything works fine.

If the same mount persists, a subsequent rebalance with a concurrent data append leads to this bug.

2.1u2 had a different issue for the same test case, which is captured in the following bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1059687
https://bugzilla.redhat.com/show_bug.cgi?id=1058569
https://bugzilla.redhat.com/show_bug.cgi?id=1054782

--- Additional comment from Shyamsundar on 2014-09-12 16:18:40 EDT ---

This bug has two code-related issues, split as "Issue 1: Invalid stashed value in inode ctx 1" and "Issue 2: Incorrect Phase 2 cached/hashed determination on open fd". I am detailing Issue 1; Du will detail the second one.

Issue 1: Invalid stashed value in inode ctx1

Test case to reproduce this:
- Create a nx2 or even nx1 volume
- Mount on FUSE
- Create a 2 GB file (say FINAL)
1- Rename FINAL to ABCDE
2- Ensure that ABCDE hashes to a different subvolume (for the next rebalance step to work)
3- Run a rebalance force
4- When rebalance has started on ABCDE, start an appending write for ABCDE
5- Check sizes on brick for file
- Repeat steps 1..5 without restarting the mount or remounting

The second time the test is run, the appending write can demonstrate a couple of behaviors:
   - dht_write and dht_write2 both write to the same subvol, the old cached subvol (so the new location never receives the bytes written)
   - dht_write2 is called with a cached subvol on which the fd we send is invalid (this is not caught by the application due to write-behind)

Finally, the data is either written only to the older location with no error surfaced to the application, or not written anywhere, or the first write appends the data to the older location and not to the newer one (i.e. the file's hashed subvol in this case).

(Older is the cached location and newer is the hashed location, so any appending writes not replayed to the newer location are lost once rebalance is done with the file.)

Code problem:
dht_migration_complete_check_task (i.e. migration phase 2 detection) never gets called, because the appending writes finish before the file is completely migrated (hence the large file size). As a result, inode_ctx_reset1 is never called, so we keep a stashed subvol that we think future writes should go to whenever a FOP detects a rebalance in progress (write, or any other FOP that checks via dht_inode_ctx_get1), and we blindly send the FOP there without opening the fd on the returned subvol.

So the stashed data in ctx1 needs to be invalidated somehow (post rebalance?); otherwise we end up in troubled waters with data loss.

The phase 1 migration check, dht_rebalance_in_progress_check, is not called when inode ctx 1 already holds data; this is an optimization so that each write (or other FOP that needs this) does not re-determine it. A toy model of the resulting misrouting is sketched below.
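
A self-contained toy model of that misrouting (none of these types or helpers are GlusterFS code; the subvol numbers are arbitrary):

#include <stdio.h>

/* ctx1: the stashed migration destination, or -1 if unset. */
typedef struct { int dst_subvol; } inode_ctx;

/* FOP fast path: if ctx1 already holds a destination, skip the
 * rebalance-in-progress check and wind the write there. */
static int route_write(inode_ctx *ctx, int cached_subvol)
{
    if (ctx->dst_subvol != -1)
        return ctx->dst_subvol;   /* trusted blindly; may be stale */
    return cached_subvol;
}

int main(void)
{
    inode_ctx ctx = { .dst_subvol = -1 };

    /* Rebalance 1: file migrates subvol 0 -> 1. A write during phase 1
     * stashes dst = 1 in ctx1. Phase 2 is never detected (the appends
     * finish before migration does), so ctx1 is never reset. */
    ctx.dst_subvol = 1;

    /* Rebalance 2, same mount: the file now migrates 1 -> 3, but the
     * FOP path still sees dst = 1, so appends land only on the old
     * location and are lost when migration completes. */
    printf("write wound to subvol %d; data file ends up on subvol 3\n",
           route_write(&ctx, 1));
    return 0;
}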

Solution proposed:
Stash this ctx1 information on the fd instead, so that its life is the life of the fd. If the fd overruns the rebalance (i.e. it remains open even after rebalance is complete), this still works: the brick retains the open fd, and we will detect phase 2 of migration in progress/complete when we reuse the fd (unlink will not delete the file until the last fd is closed).
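
A rough illustrative sketch of that lifetime argument, with hypothetical types standing in for the real fd-ctx API:

#include <stdlib.h>

/* Per-fd migration state; dies with the fd. */
typedef struct {
    int dst_subvol;   /* migration destination, or -1 if unset */
} dht_fd_ctx;

/* Allocated at open time: every new fd starts with no stashed
 * destination, so a later rebalance cannot be misrouted by state
 * left over from a previous one. */
static dht_fd_ctx *dht_fd_ctx_new(void)
{
    dht_fd_ctx *ctx = calloc(1, sizeof(*ctx));
    if (ctx)
        ctx->dst_subvol = -1;
    return ctx;
}

/* Set when a FOP on this fd detects phase 1 of a migration. */
static void dht_fd_ctx_set_dst(dht_fd_ctx *ctx, int dst)
{
    ctx->dst_subvol = dst;
}

/* Released on the last close. If the fd outlives the rebalance, the
 * brick keeps the open fd (unlink waits for the last close), and
 * reusing the fd re-runs phase-2 detection. */
static void dht_fd_ctx_destroy(dht_fd_ctx *ctx)
{
    free(ctx);
}

The point is purely the lifetime: the stash cannot outlive the fd that created it, so it can never leak across rebalances the way inode ctx1 does.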

Other solutions are welcome; otherwise we will go ahead with this one for Issue 1 as presented here.

--- Additional comment from Anand Avati on 2014-10-07 16:20:47 EDT ---

REVIEW: http://review.gluster.org/8912 (cluster/dht: Fix stale subvol cache for files under migration) posted (#1) for review on master by Shyamsundar Ranganathan (srangana)

--- Additional comment from Anand Avati on 2014-10-09 16:35:05 EDT ---

REVIEW: http://review.gluster.org/8912 (cluster/dht: Fix stale subvol cache for files under migration) posted (#2) for review on master by Shyamsundar Ranganathan (srangana)

--- Additional comment from Anand Avati on 2014-10-13 17:21:23 EDT ---

REVIEW: http://review.gluster.org/8912 (cluster/dht: Fix stale subvol cache for files under migration) posted (#3) for review on master by Shyamsundar Ranganathan (srangana)

--- Additional comment from Anand Avati on 2014-10-14 14:03:24 EDT ---

REVIEW: http://review.gluster.org/8912 (cluster/dht: Fix stale subvol cache for files under migration) posted (#4) for review on master by Shyamsundar Ranganathan (srangana)

--- Additional comment from Anand Avati on 2014-10-15 14:48:46 EDT ---

REVIEW: http://review.gluster.org/8912 (cluster/dht: Fix stale subvol cache for files under migration) posted (#5) for review on master by Shyamsundar Ranganathan (srangana)

--- Additional comment from Anand Avati on 2015-05-19 14:06:10 EDT ---

REVIEW: http://review.gluster.org/10834 (cluster/dht: fix incorrect dst subvol info in inode_ctx) posted (#1) for review on master by N Balachandran (nbalacha)

--- Additional comment from Anand Avati on 2015-05-21 05:06:24 EDT ---

REVIEW: http://review.gluster.org/10834 (cluster/dht: fix incorrect dst subvol info in inode_ctx) posted (#2) for review on master by Raghavendra G (rgowdapp)

--- Additional comment from Nagaprasad Sathyanarayana on 2015-05-22 03:10:09 EDT ---

http://review.gluster.org/10805

--- Additional comment from Anand Avati on 2015-05-25 09:25:15 EDT ---

REVIEW: http://review.gluster.org/10805 (cluster/dht: Don't rely on linkto xattr to find destination subvol during phase 2 of migration.) posted (#2) for review on master by Raghavendra G (rgowdapp)

--- Additional comment from Anand Avati on 2015-05-27 08:31:14 EDT ---

REVIEW: http://review.gluster.org/10943 (cluster/dht: pass a destination subvol to fop2 variants to avoid races.) posted (#1) for review on master by Raghavendra G (rgowdapp)

--- Additional comment from Anand Avati on 2015-05-28 00:20:34 EDT ---

REVIEW: http://review.gluster.org/10834 (cluster/dht: fix incorrect dst subvol info in inode_ctx) posted (#3) for review on master by Raghavendra G (rgowdapp)

--- Additional comment from Anand Avati on 2015-05-28 00:23:40 EDT ---

REVIEW: http://review.gluster.org/10805 (cluster/dht: Don't rely on linkto xattr to find destination subvol during phase 2 of migration.) posted (#3) for review on master by Raghavendra G (rgowdapp)

--- Additional comment from Anand Avati on 2015-05-28 00:23:52 EDT ---

REVIEW: http://review.gluster.org/10943 (cluster/dht: pass a destination subvol to fop2 variants to avoid races.) posted (#2) for review on master by Raghavendra G (rgowdapp)

--- Additional comment from Anand Avati on 2015-05-28 00:50:02 EDT ---

REVIEW: http://review.gluster.org/10943 (cluster/dht: pass a destination subvol to fop2 variants to avoid races.) posted (#3) for review on master by Raghavendra G (rgowdapp)

--- Additional comment from Anand Avati on 2015-05-28 05:23:30 EDT ---

COMMIT: http://review.gluster.org/10805 committed in master by Raghavendra G (rgowdapp) 
------
commit 4df3ea9ab4d8a1aff98784460983b5f0cb4a9ee9
Author: Raghavendra G <rgowdapp>
Date:   Wed May 13 19:56:47 2015 +0530

    cluster/dht: Don't rely on linkto xattr to find destination subvol during phase 2 of migration.
    
    linkto xattr on source file cannot be relied to find where the data
    file currently resides. This can happen if there are multiple
    migrations before phase 2 detection by a client. For eg.,
    
    * migration (M1, node1, node2) starts.
    * application writes some data. DHT correctly stores the state in
      inode context that phase-1 of migration is in progress
    * migration M1 completes
    * migration (M2, node2, node3) is triggered and completed
    * application resumes writes to the file. DHT identifies it as phase-2
      of migration. However, linkto xattr on node1 points to node2, but
      the file is on node3. A lookup correctly identifies node3 as cached
      subvol
    
    TBD:
       When we identify phase-2 of a previous migration (say M1), there
       might be a migration in progress - say (M3, node3, node4). In this
       case we need to send writes to both (node3, node4) not just
       node3. Also, the inode state needs to correctly indicate that its in
       phase-1 of migration. I'll send this as a different patch.
    
    Change-Id: I1a861f766258170af2f6c0935468edb6be687b95
    BUG: 1142423
    Signed-off-by: Raghavendra G <rgowdapp>
    Reviewed-on: http://review.gluster.org/10805
    Tested-by: NetBSD Build System
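
The approach this commit describes, modeled as a small self-contained program (illustrative only; these structs and helpers are not GlusterFS code):

#include <stdio.h>

#define NSUBVOLS 4

/* Per-subvol view of one file: a linkto entry is a stub whose xattr
 * points at linkto_dst; the data file has is_linkto == 0. */
struct subvol_entry { int present; int is_linkto; int linkto_dst; };

/* Old approach: follow the source's linkto xattr one hop. */
static int dst_from_linkto(const struct subvol_entry *v, int src)
{
    return v[src].linkto_dst;   /* wrong after M2: still points at node2 */
}

/* Patched approach: a fresh lookup picks whichever subvol actually
 * holds the data (non-linkto) file. */
static int dst_from_lookup(const struct subvol_entry *v)
{
    for (int i = 0; i < NSUBVOLS; i++)
        if (v[i].present && !v[i].is_linkto)
            return i;
    return -1;
}

int main(void)
{
    /* State after M1 (node1 -> node2) and M2 (node2 -> node3): node1
     * still carries a linkto pointing at node2, data lives on node3. */
    struct subvol_entry v[NSUBVOLS] = {
        [1] = { .present = 1, .is_linkto = 1, .linkto_dst = 2 },
        [3] = { .present = 1, .is_linkto = 0, .linkto_dst = -1 },
    };
    printf("linkto xattr says subvol %d; lookup finds subvol %d\n",
           dst_from_linkto(v, 1), dst_from_lookup(v));
    return 0;
}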

Comment 1 Anand Avati 2015-05-28 10:22:25 UTC
REVIEW: http://review.gluster.org/10966 (cluster/dht: pass a destination subvol to fop2 variants to avoid races.) posted (#1) for review on release-3.7 by Raghavendra G (rgowdapp)

Comment 2 Anand Avati 2015-05-28 10:22:37 UTC
REVIEW: http://review.gluster.org/10967 (cluster/dht: fix incorrect dst subvol info in inode_ctx) posted (#1) for review on release-3.7 by Raghavendra G (rgowdapp)

Comment 3 Anand Avati 2015-05-29 07:14:56 UTC
REVIEW: http://review.gluster.org/10966 (cluster/dht: pass a destination subvol to fop2 variants to avoid races.) posted (#2) for review on release-3.7 by Raghavendra G (rgowdapp)

Comment 4 Anand Avati 2015-05-29 07:15:42 UTC
REVIEW: http://review.gluster.org/10965 (cluster/dht: Don't rely on linkto xattr to find destination subvol during phase 2 of migration.) posted (#2) for review on release-3.7 by Raghavendra G (rgowdapp)

Comment 5 Anand Avati 2015-05-29 11:09:30 UTC
REVIEW: http://review.gluster.org/10966 (cluster/dht: pass a destination subvol to fop2 variants to avoid races.) posted (#3) for review on release-3.7 by Raghavendra G (rgowdapp)

Comment 6 Anand Avati 2015-05-29 11:09:33 UTC
REVIEW: http://review.gluster.org/10967 (cluster/dht: fix incorrect dst subvol info in inode_ctx) posted (#3) for review on release-3.7 by Raghavendra G (rgowdapp)

Comment 7 Anand Avati 2015-05-29 11:09:36 UTC
REVIEW: http://review.gluster.org/10965 (cluster/dht: Don't rely on linkto xattr to find destination subvol during phase 2 of migration.) posted (#3) for review on release-3.7 by Raghavendra G (rgowdapp)

Comment 8 Anand Avati 2015-06-01 04:27:40 UTC
REVIEW: http://review.gluster.org/10966 (cluster/dht: pass a destination subvol to fop2 variants to avoid races.) posted (#4) for review on release-3.7 by Raghavendra G (rgowdapp)

Comment 9 Anand Avati 2015-06-01 04:28:14 UTC
REVIEW: http://review.gluster.org/10965 (cluster/dht: Don't rely on linkto xattr to find destination subvol during phase 2 of migration.) posted (#4) for review on release-3.7 by Raghavendra G (rgowdapp)

Comment 10 Anand Avati 2015-06-04 05:19:14 UTC
COMMIT: http://review.gluster.org/10965 committed in release-3.7 by Raghavendra G (rgowdapp) 
------
commit 271ee96e5c8636ee512a06e0478982874c160a1a
Author: Raghavendra G <rgowdapp>
Date:   Wed May 13 19:56:47 2015 +0530

    cluster/dht: Don't rely on linkto xattr to find destination subvol
    during phase 2 of migration.
    
    linkto xattr on source file cannot be relied to find where the data
    file currently resides. This can happen if there are multiple
    migrations before phase 2 detection by a client. For eg.,
    
    * migration (M1, node1, node2) starts.
    * application writes some data. DHT correctly stores the state in
      inode context that phase-1 of migration is in progress
    * migration M1 completes
    * migration (M2, node2, node3) is triggered and completed
    * application resumes writes to the file. DHT identifies it as phase-2
      of migration. However, linkto xattr on node1 points to node2, but
      the file is on node3. A lookup correctly identifies node3 as cached
      subvol
    
    TBD:
       When we identify phase-2 of a previous migration (say M1), there
       might be a migration in progress - say (M3, node3, node4). In this
       case we need to send writes to both (node3, node4) not just
       node3. Also, the inode state needs to correctly indicate that its in
       phase-1 of migration. I'll send this as a different patch.
    
    Change-Id: I1a861f766258170af2f6c0935468edb6be687b95
    BUG: 1225809
    Signed-off-by: Raghavendra G <rgowdapp>
    Reviewed-on: http://review.gluster.org/10805
    Reviewed-on: http://review.gluster.org/10965
    Tested-by: Gluster Build System <jenkins.com>

Comment 11 Anand Avati 2015-06-04 16:44:27 UTC
COMMIT: http://review.gluster.org/10967 committed in release-3.7 by Shyamsundar Ranganathan (srangana) 
------
commit 7aa52673cc70011750ff019cefad8e0229f892ad
Author: Nithya Balachandran <nbalacha>
Date:   Tue May 19 23:27:35 2015 +0530

    cluster/dht: fix incorrect dst subvol info in inode_ctx
    
    Stashing additional information in the inode_ctx to help
    decide whether the migration information is stale, which could
    happen if a file was migrated several times but FOPs only detected
    the P1 migration phase. If no FOP detects the P2 phase, the inode
    ctx1 is never reset.
    We now save the src subvol as well as the dst subvol in the
    inode ctx. The src subvol is the subvol on which the FOP was sent
    when the mig info was set in the inode ctx. This information is
    considered stale if:
    1. The subvol on which the current FOP is sent is the same as
    the dst subvol in the ctx
    2. The subvol on which the current FOP is sent is not the same
    as the src subvol in the ctx
    
    This does not handle the case where the same file might have been
    renamed such that the src subvol is the same but the dst subvol
    is different. However, that is unlikely to happen very often.
    
    Change-Id: I05a2e9b107ee64750c7ca629aee03b03a02ef75f
    BUG: 1225809
    Signed-off-by: Nithya Balachandran <nbalacha>
    Reviewed-on: http://review.gluster.org/10967
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Raghavendra G <rgowdapp>
    Tested-by: Raghavendra G <rgowdapp>
    Tested-by: NetBSD Build System <jenkins.org>
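
The staleness test this commit describes, as a hedged self-contained model (types and values are illustrative, not the real dht_migrate structures):

#include <stdbool.h>
#include <stdio.h>

/* The stash now records both subvols: src is where the FOP was sent
 * when the mig info was set; dst is the migration destination. */
struct dht_mig_info { int src_subvol; int dst_subvol; };

/* Stale if the subvol the current FOP is being sent on is already the
 * recorded dst (condition 1) or no longer the recorded src (condition 2). */
static bool mig_info_is_stale(const struct dht_mig_info *info, int fop_subvol)
{
    return fop_subvol == info->dst_subvol ||   /* condition 1 */
           fop_subvol != info->src_subvol;     /* condition 2 */
}

int main(void)
{
    /* Info set during a migration node1 -> node2; a later FOP already
     * being sent on node2 means the info was overtaken by events. */
    struct dht_mig_info info = { .src_subvol = 1, .dst_subvol = 2 };
    printf("FOP on subvol 2: stale=%d\n", mig_info_is_stale(&info, 2));
    printf("FOP on subvol 1: stale=%d\n", mig_info_is_stale(&info, 1));
    return 0;
}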

Comment 12 Niels de Vos 2015-06-20 09:48:47 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.2, please reopen this bug report.

glusterfs-3.7.2 has been announced on the Gluster Packaging mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/packaging/2015-June/000006.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

