Bug 1005462 - "trusted.glusterfs.lockinfo" extended attribute is persistent on disk
Summary: "trusted.glusterfs.lockinfo" extended attribute is persistent on disk
Keywords:
Status: CLOSED EOL
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: spandura
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-09-07 08:30 UTC by spandura
Modified: 2015-12-03 17:12 UTC
2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-12-03 17:12:28 UTC
Target Upstream Version:


Attachments

Description spandura 2013-09-07 08:30:29 UTC
Description of problem:
=========================
The "trusted.glusterfs.lockinfo" extended attribute, which is used to migrate POSIX locks from the old graph to the new graph whenever a graph change occurs, is being written to disk.

Expected results:
=====================
This extended attribute should not be written to disk. 

Version-Release number of selected component (if applicable):
==============================================================
glusterfs 3.4.0.32rhs built on Sep  6 2013 10:27:55

How reproducible:
=================
Often

Steps to Reproduce:
=====================
1. Create a replicate volume (1 X 2). Start the volume. 

2. Create a fuse mount. From the fuse mount, execute the following script: "test_script.sh file1"

test_script.sh
================
#!/bin/bash
# Hold an exclusive flock on fd 200 and append to the target file once
# per second for 100 seconds, keeping the lock held the whole time.

pwd=`pwd`
filename="${pwd}/$1"
(
	echo "Time before flock : `date`"
	flock -x 200    # block until the exclusive lock on fd 200 is granted
	echo "Time after flock : `date`"
	echo -e "\nWriting to file : $filename"
	for i in `seq 1 100`; do echo "Hello $i" >&200 ; sleep 1; done
	echo "Time after the writes are successful : `date`"
) 200>>$filename    # open fd 200 on the target file for the whole subshell


3. While the script is in progress, set "performance.write-behind" to "off" (this triggers a client graph switch)

4. Check the extended attributes of the file on both the bricks
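Step 3 above can be performed with the gluster CLI; the volume name here is taken from the "Additional info" section below (toggling any performance xlator option regenerates the client volfile and forces a graph switch):

```shell
# Turn off write-behind on the volume; connected clients switch graphs.
gluster volume set vol_dis_1_rep_2 performance.write-behind off
```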

Actual results:
===============
root@fan [Sep-07-2013- 8:27:51] >getfattr -d -e hex -m . /rhs/bricks/vol_dis_1_rep_2_b0/testdir_gluster/file4
getfattr: Removing leading '/' from absolute path names
# file: rhs/bricks/vol_dis_1_rep_2_b0/testdir_gluster/file4
trusted.afr.vol_dis_1_rep_2-client-0=0x000000020000000000000000
trusted.afr.vol_dis_1_rep_2-client-1=0x000000020000000000000000
trusted.gfid=0x020b280875774d33969a117d13d5e8f3
trusted.glusterfs.lockinfo=0x0000000200000041000000093c504f534958282f7268732f627269636b732f766f6c5f6469735f315f7265705f325f6231293a6d69612e6c61622e656e672e626c722e7265646861742e636f6d0032323138343532300000000041000000093c504f534958282f7268732f627269636b732f766f6c5f6469735f315f7265705f325f6230293a66616e2e6c61622e656e672e626c722e7265646861742e636f6d00323030343135323800

root@mia [Sep-07-2013- 8:27:51] >getfattr -d -e hex -m . /rhs/bricks/vol_dis_1_rep_2_b1/testdir_gluster/file4
getfattr: Removing leading '/' from absolute path names
# file: rhs/bricks/vol_dis_1_rep_2_b1/testdir_gluster/file4
trusted.afr.vol_dis_1_rep_2-client-0=0x000000020000000000000000
trusted.afr.vol_dis_1_rep_2-client-1=0x000000020000000000000000
trusted.gfid=0x020b280875774d33969a117d13d5e8f3
trusted.glusterfs.lockinfo=0x0000000200000041000000093c504f534958282f7268732f627269636b732f766f6c5f6469735f315f7265705f325f6231293a6d69612e6c61622e656e672e626c722e7265646861742e636f6d0032323138343532300000000041000000093c504f534958282f7268732f627269636b732f766f6c5f6469735f315f7265705f325f6230293a66616e2e6c61622e656e672e626c722e7265646861742e636f6d00323030343135323800
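The lockinfo value is stored as an opaque binary blob, but much of it is plain ASCII (brick paths and client hostnames are visible in the hex). A minimal sketch to pull out the printable runs, assuming only the hex value shown above:

```python
import re

# Hex value of trusted.glusterfs.lockinfo as reported by getfattr
# (the leading "0x" stripped).
hex_value = (
    "0000000200000041000000093c504f534958282f7268732f627269636b732f"
    "766f6c5f6469735f315f7265705f325f6231293a6d69612e6c61622e656e67"
    "2e626c722e7265646861742e636f6d00323231383435323000000000410000"
    "00093c504f534958282f7268732f627269636b732f766f6c5f6469735f315f"
    "7265705f325f6230293a66616e2e6c61622e656e672e626c722e7265646861"
    "742e636f6d00323030343135323800"
)

raw = bytes.fromhex(hex_value)
# Extract printable ASCII runs of 4+ characters embedded in the blob.
strings = re.findall(rb"[\x20-\x7e]{4,}", raw)
for s in strings:
    print(s.decode("ascii"))
```

Running this shows one entry per brick, each naming the brick path and the client host that holds the lock, which confirms the attribute carries lock-migration state rather than regular file metadata.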

Additional info:
====================

root@fan [Sep-07-2013- 8:27:47] >gluster v info
 
Volume Name: vol_dis_1_rep_2
Type: Replicate
Volume ID: f5c43519-b5eb-4138-8219-723c064af71c
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: fan.lab.eng.blr.redhat.com:/rhs/bricks/vol_dis_1_rep_2_b0
Brick2: mia.lab.eng.blr.redhat.com:/rhs/bricks/vol_dis_1_rep_2_b1
Options Reconfigured:
server.allow-insecure: on
performance.stat-prefetch: off
performance.write-behind: on
cluster.self-heal-daemon: on

Comment 2 spandura 2013-09-07 09:31:04 UTC
This extended attribute is also visible from the mount point.

Output of getfattr from mount point:
======================================
root@darrel [Sep-07-2013- 9:30:16] >mount | grep glusterfs
mia:/vol_dis_1_rep_2 on /mnt/gm1 type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
root@darrel [Sep-07-2013- 9:30:22] >
root@darrel [Sep-07-2013- 9:30:25] >pwd
/mnt/gm1/testdir_gluster
root@darrel [Sep-07-2013- 9:30:26] >
root@darrel [Sep-07-2013- 9:30:27] >getfattr -d -e hex -m . *
# file: file1
trusted.glusterfs.lockinfo=0x0000000200000041000000093c504f534958282f7268732f627269636b732f766f6c5f6469735f315f7265705f325f6230293a66616e2e6c61622e656e672e626c722e7265646861742e636f6d0032303034313532380000000041000000093c504f534958282f7268732f627269636b732f766f6c5f6469735f315f7265705f325f6231293a6d69612e6c61622e656e672e626c722e7265646861742e636f6d00313635303235383400

# file: file2
trusted.glusterfs.lockinfo=0x0000000200000041000000093c504f534958282f7268732f627269636b732f766f6c5f6469735f315f7265705f325f6231293a6d69612e6c61622e656e672e626c722e7265646861742e636f6d0031363530323830300000000041000000093c504f534958282f7268732f627269636b732f766f6c5f6469735f315f7265705f325f6230293a66616e2e6c61622e656e672e626c722e7265646861742e636f6d00323030343135323800

# file: file3
trusted.glusterfs.lockinfo=0x0000000200000041000000093c504f534958282f7268732f627269636b732f766f6c5f6469735f315f7265705f325f6231293a6d69612e6c61622e656e672e626c722e7265646861742e636f6d0032323138343431320000000041000000093c504f534958282f7268732f627269636b732f766f6c5f6469735f315f7265705f325f6230293a66616e2e6c61622e656e672e626c722e7265646861742e636f6d00323030343135323800

# file: file4
trusted.glusterfs.lockinfo=0x0000000200000041000000093c504f534958282f7268732f627269636b732f766f6c5f6469735f315f7265705f325f6231293a6d69612e6c61622e656e672e626c722e7265646861742e636f6d0032323138343532300000000041000000093c504f534958282f7268732f627269636b732f766f6c5f6469735f315f7265705f325f6230293a66616e2e6c61622e656e672e626c722e7265646861742e636f6d00323030343135323800

root@darrel [Sep-07-2013- 9:30:30] >

Comment 3 Vivek Agarwal 2015-12-03 17:12:28 UTC
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you asked us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.

