Bug 1468186 - [Geo-rep]: entry failed to sync to slave with ENOENT error
Summary: [Geo-rep]: entry failed to sync to slave with ENOENT error
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: geo-replication
Version: rhgs-3.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.3.0
Assignee: Kotresh HR
QA Contact: Rahul Hinduja
URL:
Whiteboard:
Depends On: 1467718 1468198 1468200
Blocks: 1417151
 
Reported: 2017-07-06 08:49 UTC by Rahul Hinduja
Modified: 2021-09-09 13:04 UTC
CC List: 9 users

Fixed In Version: glusterfs-3.8.4-33
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1467718
Environment:
Last Closed: 2017-09-21 05:02:13 UTC
Embargoed:




Links
System: Red Hat Product Errata
ID: RHBA-2017:2774
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: glusterfs bug fix and enhancement update
Last Updated: 2017-09-21 08:16:29 UTC

Description Rahul Hinduja 2017-07-06 08:49:08 UTC
+++ This bug was initially created as a clone of Bug #1467718 +++

Description of problem:
When running iozone, bonnie, and smallfile workloads on the master, entries failed to sync to the slave with ENOENT on the slave (the parent directory does not exist on the slave).

The error looks like the one below:

[2017-06-16 14:54:26.1849] E [master(/gluster/brick1/brick):785:log_failures] _GMaster: ENTRY FAILED: ({'uid': 0, 'gfid': '4d16fd49-591d-4088-8f87-e75c081ca2f9', 'gid': 0, 'mode': 33152, 'entry': '.gfid/abe8c2f6-210b-4ac3-8c05-a84d44c3b5b1/dovecot.index', 'op': 'MKNOD'}, 2) 
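
For reference, that failure tuple can be decoded with just the Python standard library. The snippet below is an illustrative sketch (not geo-rep code) showing that the trailing 2 is ENOENT and that 'mode' 33152 is an ordinary regular file with 0600 permissions:

    # Illustrative only: decode the ENTRY FAILED tuple logged above.
    import errno
    import stat

    failure = ({'uid': 0, 'gfid': '4d16fd49-591d-4088-8f87-e75c081ca2f9',
                'gid': 0, 'mode': 33152,
                'entry': '.gfid/abe8c2f6-210b-4ac3-8c05-a84d44c3b5b1/dovecot.index',
                'op': 'MKNOD'}, 2)

    entry, err = failure
    print(errno.errorcode[err])              # ENOENT
    print(stat.S_ISREG(entry['mode']))       # True: a regular file
    print(oct(stat.S_IMODE(entry['mode'])))  # 0o600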

Version-Release number of selected component (if applicable):
mainline

How reproducible:
Saw only once

Steps to Reproduce:
1. Setup geo-rep and run iozone, bonnie, smallfile workload on master

Actual results:
Entry failure error with ENOENT

Expected results:
Entry failures should not happen

Additional info:

--- Additional comment from Kotresh HR on 2017-07-04 15:39:32 EDT ---

Analysis:
It was seen that an RMDIR followed by a MKDIR is recorded in the changelog of
a particular subvolume with the same gfid and pargfid/bname, but not on all subvolumes, as below.
    
    E 61c67a2e-07f2-45a9-95cf-d8f16a5e9c36 RMDIR \
    9cc51be8-91c3-4ef4-8ae3-17596fcfed40%2Ffedora2
    E 61c67a2e-07f2-45a9-95cf-d8f16a5e9c36 MKDIR 16877 0 0 \
    9cc51be8-91c3-4ef4-8ae3-17596fcfed40%2Ffedora2
    
While processing this changelog, geo-rep assumes the RMDIR succeeded and does a recursive rmdir on the slave, but on the master the directory still exists. Subsequent entry creations under this directory that hashed to that particular subvolume then fail with ENOENT because the parent is missing on the slave.
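
As a minimal illustration of the pattern above, the hypothetical Python sketch below scans simplified, already-parsed E-type changelog records for an RMDIR followed by a MKDIR of the same gfid at the same pargfid/bname; the record layout and function name are assumptions, not the actual changelog parser.

    # Hypothetical sketch: detect RMDIR followed by MKDIR for the same gfid
    # in a stream of simplified, already-parsed E-type changelog records.
    def find_rmdir_mkdir_pairs(records):
        last_rmdir = {}                      # gfid -> pargfid/bname of last RMDIR
        pairs = []
        for rec in records:
            if rec['type'] != 'E':
                continue
            if rec['op'] == 'RMDIR':
                last_rmdir[rec['gfid']] = rec['pargfid_bname']
            elif (rec['op'] == 'MKDIR'
                  and last_rmdir.get(rec['gfid']) == rec['pargfid_bname']):
                pairs.append(rec['gfid'])    # same gfid re-created at the same name
        return pairs

    records = [
        {'type': 'E', 'gfid': '61c67a2e-07f2-45a9-95cf-d8f16a5e9c36', 'op': 'RMDIR',
         'pargfid_bname': '9cc51be8-91c3-4ef4-8ae3-17596fcfed40%2Ffedora2'},
        {'type': 'E', 'gfid': '61c67a2e-07f2-45a9-95cf-d8f16a5e9c36', 'op': 'MKDIR',
         'pargfid_bname': '9cc51be8-91c3-4ef4-8ae3-17596fcfed40%2Ffedora2'},
    ]
    print(find_rmdir_mkdir_pairs(records))   # ['61c67a2e-07f2-45a9-95cf-d8f16a5e9c36']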

--- Additional comment from Worker Ant on 2017-07-04 15:43:30 EDT ---

REVIEW: https://review.gluster.org/17695 (geo-rep: Fix entry failure because parent dir doesn't exist) posted (#1) for review on master by Kotresh HR (khiremat)

--- Additional comment from Kotresh HR on 2017-07-04 15:45:16 EDT ---

Cause:
    An RMDIR-MKDIR pair gets recorded in the changelog this way when the
    directory removal succeeds on the cached subvolume but fails on one of
    the hashed subvolumes for some reason (it may be down). In this case,
    the directory is re-created on the cached subvolume, which gets
    recorded as a MKDIR again in the changelog.

Solution:
    While processing the RMDIR, geo-replication should stat the entry on
    the master by gfid and should not delete it on the slave if it is
    still present.
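
A minimal sketch of that guarded RMDIR handling follows, assuming a hypothetical aux-gfid mount path and stand-in helper names; this is not the actual master.py code from the patch.

    # Hypothetical sketch of the guard described above; not the patch itself.
    import errno
    import os

    MASTER_AUX_GFID_MOUNT = "/mnt/master-aux-gfid"   # placeholder path (assumption)

    def master_gfid_exists(gfid):
        # Stat the entry on the master by gfid via the aux-gfid mount.
        try:
            os.stat(os.path.join(MASTER_AUX_GFID_MOUNT, ".gfid", gfid))
            return True
        except OSError as e:
            if e.errno == errno.ENOENT:
                return False
            raise

    def recursive_rmdir_on_slave(entry):
        # Stand-in for the slave-side recursive delete.
        print("would recursively remove", entry['entry'], "on the slave")

    def process_rmdir(entry):
        # If the directory still exists on the master (the RMDIR failed on a
        # hashed subvolume and it was re-created there), skip the delete on
        # the slave instead of losing data.
        if master_gfid_exists(entry['gfid']):
            return
        recursive_rmdir_on_slave(entry)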

--- Additional comment from Worker Ant on 2017-07-05 11:44:27 EDT ---

COMMIT: https://review.gluster.org/17695 committed in master by Aravinda VK (avishwan) 
------
commit b25bf64f3a3520a96ad557daa4903c0ceba96d72
Author: Kotresh HR <khiremat>
Date:   Tue Jul 4 08:46:06 2017 -0400

    geo-rep: Fix entry failure because parent dir doesn't exist
    
    In a distributed volume on master, it can so happen that
    the RMDIR followed by MKDIR is recorded in changelog on
    a particular subvolume with same gfid and pargfid/bname
    but not on all subvolumes as below.
    
    E 61c67a2e-07f2-45a9-95cf-d8f16a5e9c36 RMDIR \
    9cc51be8-91c3-4ef4-8ae3-17596fcfed40%2Ffedora2
    E 61c67a2e-07f2-45a9-95cf-d8f16a5e9c36 MKDIR 16877 0 0 \
    9cc51be8-91c3-4ef4-8ae3-17596fcfed40%2Ffedora2
    
    While processing this changelog, geo-rep thinks RMDIR is
    successful and does recursive rmdir on slave. But in the
    master the directory still exists. This could lead to
    data discrepancy between master and slave.
    
    Cause:
    RMDIR-MKDIR pair gets recorded so in changelog when the
    directory removal is successful on cached subvolume and
    failed in one of hashed subvol for some reason
    (may be down). In this case, the directory is re-created
    on cached subvol which gets recorded as MKDIR again in
    changelog.
    
    Solution:
    So while processing RMDIR geo-replication should stat on
    master with gfid and should not delete it if it's present.
    
    Change-Id: If5da1d6462eb4d9ebe2e88b3a70cc454411a133e
    BUG: 1467718
    Signed-off-by: Kotresh HR <khiremat>
    Reviewed-on: https://review.gluster.org/17695
    Smoke: Gluster Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Aravinda VK <avishwan>

Comment 2 Rahul Hinduja 2017-07-06 08:50:07 UTC
It's a very corner case, a race between how two changelogs are processed during rmdir and mkdir. The following is one such case:

1. Two subvolumes have dir d1; d1 has files f1,f2 on the first subvolume and f3,f4 on the second subvolume.
2. rmdir of d1 is issued, and then mkdir with the same name (d1) is issued. New files are created with names f5,f6,f7,f8.

If the rmdir failed on one subvolume (A) for any reason, the recursive rmdir is retried. At the same time, some of the new files are hashed to a different subvolume (B). Once the rmdir is reprocessed at A, it deletes the newly created files at B, leaving only the files created after the changelog processed the mkdir on A.

Comment 3 Rahul Hinduja 2017-07-06 08:50:54 UTC
Proposing as a blocker because it can cause data loss (or a data mismatch) at the slave in the specific scenario mentioned in comment 5.

Comment 4 Atin Mukherjee 2017-07-06 09:10:31 UTC
upstream patch : https://review.gluster.org/#/c/17695/

Comment 6 Kotresh HR 2017-07-07 07:39:32 UTC
Downstream Patch:
https://code.engineering.redhat.com/gerrit/#/c/111301/

Comment 8 Rahul Hinduja 2017-08-05 11:02:15 UTC
There were multiple consequences for this bug:

1. ENTRY errors in the logs
2. Data loss at the slave (either the whole directory or a few files from it were missing)

I was able to reproduce this issue on the 3.2.0 (3.8.4-18) build using the following steps:

1. touch dir1 => This is to find which subvolume the file hashes to
2. rm dir1
3. mkdir dir1 and create some files inside it (touch {1..99})
4. Let it sync to slave
5. Stop the geo-replication
6. Attach gdb to the mount pid and set a breakpoint at dht_rmdir_lock_cbk
7. continue
8. rm -rf dir1/ 
9. Kill the complete Hashed subvolume (captured from step 1)
10. continue
11. Start volume with force (bring back bricks)
12. ls /mnt/dir1
13. Wait for dht heal
14. Write some more files into dir1/ (touch file{1..99})
15. Start the geo-replication

On 3.2.0_async builds, I tried the above use case twice and the results were as follows:

1. In the first instance, some of the files were missing from the slave
2. In the second instance, the directory dir1 was missing from the slave and entry failures were reported

Tried the same case with build: glusterfs-geo-replication-3.8.4-37.el7rhgs.x86_64

In both iterations, the files were properly synced to the slave without any entry errors in the logs. Moving this bug to the verified state.

Comment 12 errata-xmlrpc 2017-09-21 05:02:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2774

