Bug 1110694 - [DHT:REBALANCE]: Rebalance failures are seen with error message "remote operation failed: File exists"
Summary: [DHT:REBALANCE]: Rebalance failures are seen with error message "remote ope...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: distribute
Version: rhgs-3.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 3.0.0
Assignee: vsomyaju
QA Contact: shylesh
URL:
Whiteboard:
Depends On:
Blocks: 1115937 1116150 1117661 1138385 1139995
 
Reported: 2014-06-18 09:25 UTC by shylesh
Modified: 2015-05-15 17:53 UTC
CC List: 9 users

Fixed In Version: glusterfs-3.6.0.28-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1116150 (view as bug list)
Environment:
Last Closed: 2014-09-22 19:42:03 UTC
Embargoed:


Attachments
Rebalane-Race (1.54 KB, text/plain)
2014-06-25 06:51 UTC, vsomyaju


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2014:1278 0 normal SHIPPED_LIVE Red Hat Storage Server 3.0 bug fix and enhancement update 2014-09-22 23:26:55 UTC

Description shylesh 2014-06-18 09:25:12 UTC
Description of problem:
Adding a brick and running rebalance leads to migration failures; the log message says "remote operation failed: File exists".

Version-Release number of selected component (if applicable):
glusterfs-geo-replication-3.6.0.18-1.el6rhs.x86_64
gluster-swift-object-1.10.0-2.el6rhs.noarch
glusterfs-fuse-3.6.0.18-1.el6rhs.x86_64
gluster-swift-1.10.0-2.el6rhs.noarch
gluster-nagios-common-0.1.0-26.git2b35b66.el6rhs.x86_64
glusterfs-server-3.6.0.18-1.el6rhs.x86_64
glusterfs-rdma-3.6.0.18-1.el6rhs.x86_64
gluster-swift-proxy-1.10.0-2.el6rhs.noarch
gluster-swift-account-1.10.0-2.el6rhs.noarch
glusterfs-libs-3.6.0.18-1.el6rhs.x86_64
glusterfs-3.6.0.18-1.el6rhs.x86_64
glusterfs-cli-3.6.0.18-1.el6rhs.x86_64
glusterfs-debuginfo-3.6.0.18-1.el6rhs.x86_64
gluster-swift-container-1.10.0-2.el6rhs.noarch
gluster-swift-plugin-1.10.0-5.el6rhs.noarch
gluster-nagios-addons-0.1.0-57.git9d252a3.el6rhs.x86_64
samba-glusterfs-3.6.9-168.1.el6rhs.x86_64
vdsm-gluster-4.14.5-21.git7a3d0f0.el6rhs.noarch
glusterfs-api-3.6.0.18-1.el6rhs.x86_64


How reproducible:
Tried once

Steps to Reproduce:
1. Created a 2-brick distribute volume.
2. Created some data with:
   for i in {1..10}; do mkdir $i; cd $i; cp -R /etc/* .; done
3. Added one more brick and ran rebalance (see the consolidated command sketch below).
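
For reference, a consolidated reproduction sketch; the host names, brick paths, and mount point below are illustrative assumptions, not taken verbatim from this report:

    # Hypothetical hosts and paths; adjust to the environment.
    gluster volume create new server1:/bricks/n0 server2:/bricks/n1
    gluster volume start new
    mount -t glusterfs server1:/new /mnt/new
    cd /mnt/new
    # Note the ';' before 'do'; the loop nests one directory level per iteration.
    for i in {1..10}; do mkdir $i; cd $i; cp -R /etc/* .; done
    gluster volume add-brick new server1:/bricks/n2
    gluster volume rebalance new start
    gluster volume rebalance new status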

Actual results:
Rebalance failures are seen




Additional info:
---------------
Rebalance logs 
================

[2014-06-18 06:44:50.298216] W [client-rpc-fops.c:306:client3_3_mkdir_cbk] 0-new-client-2: remote operation failed: File exists. Path: /1/2/3/4/5/6/7/8/9/10/xdg
[2014-06-18 06:44:50.298252] D [dht-selfheal.c:419:dht_selfheal_dir_mkdir_cbk] 0-new-dht: selfhealing directory /1/2/3/4/5/6/7/8/9/10/xdg failed: File exists




Volume Name: new
Type: Distribute
Volume ID: 5d5b5cdf-6c77-4200-a7d0-9f4ac5828a0c
Status: Started
Snap Volume: no
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: rhs-client4.lab.eng.blr.redhat.com:/home/n0
Brick2: rhs-client39.lab.eng.blr.redhat.com:/home/n1
Brick3: rhs-client4.lab.eng.blr.redhat.com:/home/n2
Options Reconfigured:
diagnostics.client-log-level: DEBUG
diagnostics.client-log-buf-size: 20
diagnostics.brick-log-buf-size: 10
diagnostics.brick-log-flush-timeout: 200
diagnostics.client-log-flush-timeout: 300
diagnostics.client-log-format: with-msg-id


[root@rhs-client4 mnt]# gluster v rebalance new status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost             1306       313.8KB         20503            17          2395            completed              93.00
     rhs-client39.lab.eng.blr.redhat.com             1001        18.6KB         20912            15          2106            completed              93.00
volume rebalance: new: success: 



cluster info
==============
rhs-client4.lab.eng.blr.redhat.com
rhs-client39.lab.eng.blr.redhat.com


Attached the sosreports.

Comment 3 vsomyaju 2014-06-25 06:51:01 UTC
Created attachment 911931 [details]
Rebalane-Race

Comment 4 vsomyaju 2014-06-25 06:54:06 UTC
Added an attachment which describes the race condition.

From the logs, it seems to be a race condition between two rebalance processes.


STATE 1: Only one brick (BRICK-1) in the system; it holds the cached file.

STATE 2: BRICK-2 is added.

STATE 3 (rebalance on node-2): Lookup of the file on BRICK-2 by this node's rebalance will fail because the hashed file is not created yet, so dht_lookup_everywhere is about to get called.

STATE 4 (rebalance on node-1): As part of lookup, a link file is created at BRICK-2.

STATE 5 (rebalance on node-1): A getxattr is done to check that the cached file belongs to this node.

STATE 6 (rebalance on node-2): dht_lookup_everywhere_cbk detects the link file created by rebalance-1 and unlinks it.

STATE 7 (rebalance on node-1): getxattr on the link file with the "pathinfo" key is called and fails, as the link file was deleted by the rebalance on node-2.
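
A side note on spotting the STATE 4 link file on disk, if the race window is caught: DHT link files are zero-byte files with only the sticky bit set (mode ---------T) and carry the trusted.glusterfs.dht.linkto xattr naming the cached subvolume. A minimal inspection sketch; the brick path and file name below are hypothetical:

    # Run on the brick host; path and file name are placeholders.
    ls -l /home/n2/1/2/somefile
    getfattr -n trusted.glusterfs.dht.linkto -e text /home/n2/1/2/somefile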

Comment 5 Sachidananda Urs 2014-06-26 11:57:06 UTC
With the release:

I see many more failures reported during remove-brick:


                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost                0        0Bytes        458684             0             0          in progress            5790.00
                             172.17.69.1           114615         3.4MB        293904         30561             0          in progress            5790.00

30561 and counting...

However, I do not see any errors reported in the logs.
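
For anyone retracing this, a sketch of how such remove-brick failures are typically inspected; the volume and brick names below are placeholders, not taken from this comment:

    gluster volume remove-brick new 172.17.69.1:/home/n1 status
    grep -i "remote operation failed" /var/log/glusterfs/new-rebalance.log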

Comment 6 vsomyaju 2014-07-23 11:06:59 UTC
Sent on downstream branch: https://code.engineering.redhat.com/gerrit/#/c/29357/

Comment 8 Nithya Balachandran 2014-08-12 06:14:13 UTC
Additional patches need to be merged to complete this fix. They are currently being reviewed. Moving this back to POST.

Comment 10 Atin Mukherjee 2014-09-19 11:46:18 UTC
Gluster-server version
======================

[root@rhssvm-swift2 ~]# gluster --version
glusterfs 3.6.0.28 built on Sep  3 2014 10:13:12
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.


glusterfs-client version
========================

[root@rhs-client10 10]# glusterfs --version
glusterfs 3.6.0.28 built on Sep  3 2014 10:13:11
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.


Tested the following steps:

1. Created a distributed volume with 2 bricks.
2. From the mount point, ran the following to create some data:
   for i in {1..10}; do mkdir $i; cd $i; cp -R /etc/* .; done
3. Added a brick, executed rebalance, and waited till the status became "completed".
4. Checked the rebalance log for the error (a compact re-check sketch follows below):
   grep "remote operation failed: File exists" /var/log/glusterfs/vol1-rebalance.log | wc -l
   0
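
A compact way to re-run this check; the volume name "vol1" is carried over from the log path in step 4:

    gluster volume rebalance vol1 status   # wait for "completed"
    grep -c "remote operation failed: File exists" /var/log/glusterfs/vol1-rebalance.log
    # expected output: 0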

Hence this bug is verified.

Comment 12 errata-xmlrpc 2014-09-22 19:42:03 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-1278.html

