Bug 1328000 - [dht-rebalance]: Failed to set xattr errors seen for files undergoing rename with rebalance operation
Summary: [dht-rebalance]: Failed to set xattr errors seen for files undergoing rename ...
Keywords:
Status: CLOSED DUPLICATE of bug 1282318
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: distribute
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-04-18 08:29 UTC by krishnaram Karthick
Modified: 2016-05-03 09:05 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-05-03 09:05:24 UTC
Embargoed:



Description krishnaram Karthick 2016-04-18 08:29:31 UTC
Description of problem:
Error messages of the form 'setting xattrs ... failed' are seen for most of the files being renamed while a rebalance operation is in progress and a single brick is brought down on a dist-rep (distributed-replicate) volume.

[2016-04-17 15:13:54.234808] E [MSGID: 113001] [posix-helpers.c:1177:posix_handle_pair] 0-dht-vol-posix: /bricks/brick2/v1/1319592/file-16503: key:trusted.glusterfs.dht.linkto flags: 1 length:20 [File exists]
[2016-04-17 15:13:54.234899] E [MSGID: 113001] [posix.c:1281:posix_mknod] 0-dht-vol-posix: setting xattrs on /bricks/brick2/v1/1319592/file-16503 failed
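
For reference, a minimal sketch of how the affected file can be inspected directly on the brick (the path is taken from the log lines above; the stat/getfattr invocations are generic examples, not commands captured in this bug):

# Run on the brick node; path taken from the error message above.
stat /bricks/brick2/v1/1319592/file-16503

# Dump the trusted.* xattrs on the file; the "flags: 1 ... [File exists]"
# error above suggests a create-only set of trusted.glusterfs.dht.linkto
# failed because the key (or file) was already present.
getfattr -d -m 'trusted.*' -e hex /bricks/brick2/v1/1319592/file-16503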

[root@dhcp47-90 ~]# gluster v status
Status of volume: dht-vol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.47.90:/bricks/brick0/v1         N/A       N/A        N       N/A  
Brick 10.70.47.105:/bricks/brick0/v1        49155     0          Y       30652
Brick 10.70.47.9:/bricks/brick0/v1          49155     0          Y       30255
Brick 10.70.46.94:/bricks/brick0/v1         49155     0          Y       30644
Brick 10.70.47.90:/bricks/brick2/v1         49156     0          Y       5794 
Brick 10.70.47.105:/bricks/brick2/v1        49156     0          Y       30037
Brick 10.70.47.9:/bricks/brick2/v1          49156     0          Y       30042
Brick 10.70.46.94:/bricks/brick2/v1         49156     0          Y       30110
NFS Server on localhost                     2049      0          Y       6450 
Self-heal Daemon on localhost               N/A       N/A        Y       6458 
NFS Server on 10.70.46.94                   2049      0          Y       30664
Self-heal Daemon on 10.70.46.94             N/A       N/A        Y       30673
NFS Server on 10.70.47.105                  2049      0          Y       30672
Self-heal Daemon on 10.70.47.105            N/A       N/A        Y       30680
NFS Server on 10.70.47.9                    2049      0          Y       30618
Self-heal Daemon on 10.70.47.9              N/A       N/A        Y       30626
 
Task Status of Volume dht-vol
------------------------------------------------------------------------------
Task                 : Rebalance           
ID                   : 361dba9a-c948-4211-913c-74bebea85179
Status               : completed           


Version-Release number of selected component (if applicable):
glusterfs-3.7.9-1.el7rhgs.x86_64

How reproducible:
Yet to be determined

Steps to Reproduce:
1. create a 4x2 dist-rep volume
2. create 10k files under a directory
3. while step 2 is in progress, add 4 more bricks
4. start rebalance process
5. start renaming of files
6. kill one of the brick processes (see the shell sketch below)
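
A hedged shell sketch of these steps (host names, brick paths, mount point and file counts below are placeholders, not values taken from this bug):

# 1. Create a 4x2 distributed-replicate volume (hosts/paths are placeholders).
gluster volume create dht-vol replica 2 \
    host{1..4}:/bricks/brick0/v1 host{1..4}:/bricks/brick1/v1
gluster volume start dht-vol

# 2. Create ~10k files under a directory on a client mount.
mkdir -p /mnt/dht-vol/dir1
for i in $(seq 1 10000); do touch /mnt/dht-vol/dir1/file-$i; done &

# 3. While file creation is still running, add 4 more bricks (2 replica pairs).
gluster volume add-brick dht-vol host{1..4}:/bricks/brick2/v1

# 4. Start the rebalance.
gluster volume rebalance dht-vol start

# 5. Rename the files while rebalance is in progress.
for i in $(seq 1 10000); do mv /mnt/dht-vol/dir1/file-$i /mnt/dht-vol/dir1/renamed-$i; done &

# 6. Kill one brick process (PID taken from 'gluster volume status').
kill -9 <brick-pid>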

Actual results:
In the brick logs of the newly added bricks, a large number of error messages are seen.
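
A quick way to locate these messages (the log path below assumes the default /var/log/glusterfs/bricks/<brick-path>.log naming and is an assumption, not taken from the sosreports):

# Count the 'setting xattrs ... failed' errors on a newly added brick.
grep -c "setting xattrs" /var/log/glusterfs/bricks/bricks-brick2-v1.log

# Show the matching lines with the preceding posix_handle_pair error for context.
grep -B1 "setting xattrs" /var/log/glusterfs/bricks/bricks-brick2-v1.log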

Expected results:
No errors should be seen

Additional info:
sosreports will be attached.

Comment 2 krishnaram Karthick 2016-04-27 09:11:33 UTC
sosreports are available here --> http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1328000/

Comment 3 Nithya Balachandran 2016-05-03 09:05:24 UTC

*** This bug has been marked as a duplicate of bug 1282318 ***

