Bug 1244759

Summary: Symlinks are ending up with unexpected xattrs
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Bhaskarakiran <byarlaga>
Component: disperse
Assignee: Pranith Kumar K <pkarampu>
Status: CLOSED WORKSFORME
QA Contact: Bhaskarakiran <byarlaga>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: rhgs-3.1
CC: asriram, byarlaga, mzywusko, pkarampu, rhs-bugs, sankarshan, storage-qa-internal
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Known Issue
Doc Text:
Sometimes "gluster volume heal <volname> info" shows some symlinks which need to be healed for hours. To confirm this issue, the files must have the following extended attributes: # getfattr -d -m. -e hex -h /path/to/file/on/brick | grep trusted.ec Example output: trusted.ec.dirty=0x3000 trusted.ec.size=0x3000 trusted.ec.version=0x30000000000000000000000000000001 The first 4 digits must be '3000' and the file must be a symlink/softlink. Workaround: Execute the following commands on the files in each brick and ensure to stop all operations on them. 1) trusted.ec.size must be deleted. # setfattr -x trusted.ec.size /path/to/file/on/brick 2) First 16 digits must have '0' in both trusted.ec.dirty and trusted.ec.version attributes and the rest of the 16 digits should remain as is. If the number of digits is less than 32, then use '0' s as padding. Example: # setfattr -n trusted.ec.dirty -v 0x00000000000000000000000000000000 /path/to/file/on/brick # setfattr -n trusted.ec.version -v 0x00000000000000000000000000000001 /path/to/file/on/brick
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-05-09 05:31:16 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
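
To find other files showing this symptom, the confirmation check from the Doc Text can be scripted. The following is a minimal detection sketch, assuming bash and the attr tools (getfattr) on the brick host; the brick path is a placeholder, and the loop only inspects symlinks, skipping Gluster's internal .glusterfs directory:

BRICK=/rhs/brick3/b23    # placeholder brick path; adjust to your layout

find "$BRICK" -type l ! -path "*/.glusterfs/*" | while read -r f; do
    # Read trusted.ec.dirty in hex without following the symlink.
    dirty=$(getfattr -h --absolute-names -e hex -n trusted.ec.dirty "$f" 2>/dev/null \
            | awk -F= '/^trusted.ec.dirty=/ {print $2}')
    # A value whose first 4 digits are '3000' matches the symptom described above.
    case "$dirty" in
        0x3000*) printf 'affected: %s (trusted.ec.dirty=%s)\n' "$f" "$dirty" ;;
    esac
done

Any file reported by this loop should then be fixed with the setfattr commands from the Doc Text, after stopping all operations on it.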

Description Bhaskarakiran 2015-07-20 12:46:27 UTC
Description of problem:
======================

ec.dirty and ec.size are getting set to 0x3000, which should not happen. Below are the operations performed (a command-level sketch follows the list):

1. create 8+4 disperse volume
2. fuse mounted on the client
3. Started IO - linux untars (overwrites), directory and file creation
4. Added bricks to make 2x(8+4)
5. Continued the IO and started the rebalance.
6. While the rebalance was in progress, stopped the IO.
7. After rebalance completed, brought down some of the bricks and did overwrites.
8. Brought up the bricks and did a heal full
9. Saw heal pending on some of the symlink files
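
For reference, the steps above map onto the standard gluster CLI roughly as follows. This is a hedged sketch only: the volume name "ecvol", the server names, the brick paths, and the use of 12 bricks per (8+4) subvolume are placeholders inferred from the description, not the exact commands that were run.

# gluster volume create ecvol disperse 12 redundancy 4 server{1..12}:/bricks/b1 force
# gluster volume start ecvol
# mount -t glusterfs server1:/ecvol /mnt/ecvol
  ... run IO on /mnt/ecvol: linux untars (overwrites), directory and file creation ...
# gluster volume add-brick ecvol server{1..12}:/bricks/b2 force
# gluster volume rebalance ecvol start
  ... stop the IO while rebalance is in progress; after it completes, bring down some
      bricks and overwrite files from the mount ...
# gluster volume start ecvol force
# gluster volume heal ecvol full
# gluster volume heal ecvol info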

[root@vertigo ~]# getfattr -d -m. -e hex -h /rhs/brick3/b23//tarball/linux-4.1.1/tools/testing/selftests/powerpc/vphn/vphn.c
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick3/b23//tarball/linux-4.1.1/tools/testing/selftests/powerpc/vphn/vphn.c
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000
trusted.ec.dirty=0x3000
trusted.ec.size=0x3000
trusted.ec.version=0x30000000000000000000000000000001
trusted.gfid=0xe87fe152ceae45a5a8d2ade45da2c7a7
trusted.glusterfs.quota.80f7c207-3b96-4c39-9339-f64b2359b89a.contri=0x00000000000000000000000000000001
trusted.pgfid.80f7c207-3b96-4c39-9339-f64b2359b89a=0x00000001

Version-Release number of selected component (if applicable):
=============================================================
[root@vertigo ~]# gluster --version
glusterfs 3.7.1 built on Jul 19 2015 02:16:40
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@vertigo ~]# 

How reproducible:
=================
Seen once 

Steps to Reproduce:
===================
As in description.

Actual results:


Expected results:


Additional info:
================
sosreports will be attached.

Comment 3 Anjana Suparna Sriram 2015-07-27 10:28:37 UTC
Please review and sign off.

Comment 4 Pranith Kumar K 2015-07-27 10:30:00 UTC
Looks good to me, Anjana

Comment 7 Bhaskarakiran 2016-05-08 12:20:15 UTC
This can be closed, I guess, since it was hit only once. Can re-open if hit again.