Bug 1005462 - "trusted.glusterfs.lockinfo" extended attribute is persistent on disk
"trusted.glusterfs.lockinfo" extended attribute is persistent on disk
Status: CLOSED EOL
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Assigned To: Bug Updates Notification Mailing List
spandura
Depends On:
Blocks:
 
Reported: 2013-09-07 04:30 EDT by spandura
Modified: 2015-12-03 12:12 EST
2 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-12-03 12:12:28 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description spandura 2013-09-07 04:30:29 EDT
Description of problem:
=========================
"trusted.glusterfs.lockinfo" extended attribute used while migrating locks from old graph to new graph whenever there is  a graph change is getting written to disk. 

Expected results:
=====================
This extended attribute should not be written to disk. 
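
A quick way to check the expected behaviour directly on a brick (a sketch only; the brick path and file name come from the reproduction steps below, and getfattr is assumed to be installed on the brick host):

getfattr -n trusted.glusterfs.lockinfo /rhs/bricks/vol_dis_1_rep_2_b0/testdir_gluster/file1
# Expected once the locks are released: getfattr reports something like
# "trusted.glusterfs.lockinfo: No such attribute" instead of returning a value.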

Version-Release number of selected component (if applicable):
==============================================================
glusterfs 3.4.0.32rhs built on Sep  6 2013 10:27:55

How reproducible:
=================
Often

Steps to Reproduce:
=====================
1. Create a replicate volume (1 X 2). Start the volume. 

2. Create a fuse mount. From the fuse mount, execute the following script: "test_script.sh file1"

test_script.sh
================
#!/bin/bash
# Hold an exclusive flock on the target file and keep writing to it for ~100
# seconds, leaving a window in which the graph switch can be triggered.

pwd=`pwd`
filename="${pwd}/$1"
(
	echo "Time before flock : `date`"
	flock -x 200        # take an exclusive lock on fd 200
	echo "Time after flock : `date`"
	echo -e "\nWriting to file : $filename"
	for i in `seq 1 100`; do echo "Hello $i" >&200 ; sleep 1; done
	echo "Time after the writes are successful : `date`"
) 200>>$filename        # fd 200 is opened for append on the target file


3. While the script is in progress, set "write-behind" to "off" on the volume to trigger a client graph switch (see the command sketch after this list).

4. Check the extended attributes of the file on both the bricks
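
A minimal sketch of steps 3 and 4, assuming the volume name and brick paths from the setup shown under "Additional info" below (vol_dis_1_rep_2):

# Step 3: toggling a performance xlator option triggers a client-side graph switch
gluster volume set vol_dis_1_rep_2 performance.write-behind off

# Step 4: inspect the extended attributes of the file directly on each brick
getfattr -d -e hex -m . /rhs/bricks/vol_dis_1_rep_2_b0/testdir_gluster/file1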

Actual results:
===============
root@fan [Sep-07-2013- 8:27:51] >getfattr -d -e hex -m . /rhs/bricks/vol_dis_1_rep_2_b0/testdir_gluster/file4
getfattr: Removing leading '/' from absolute path names
# file: rhs/bricks/vol_dis_1_rep_2_b0/testdir_gluster/file4
trusted.afr.vol_dis_1_rep_2-client-0=0x000000020000000000000000
trusted.afr.vol_dis_1_rep_2-client-1=0x000000020000000000000000
trusted.gfid=0x020b280875774d33969a117d13d5e8f3
trusted.glusterfs.lockinfo=0x0000000200000041000000093c504f534958282f7268732f627269636b732f766f6c5f6469735f315f7265705f325f6231293a6d69612e6c61622e656e672e626c722e7265646861742e636f6d0032323138343532300000000041000000093c504f534958282f7268732f627269636b732f766f6c5f6469735f315f7265705f325f6230293a66616e2e6c61622e656e672e626c722e7265646861742e636f6d00323030343135323800

root@mia [Sep-07-2013- 8:27:51] >getfattr -d -e hex -m . /rhs/bricks/vol_dis_1_rep_2_b1/testdir_gluster/file4
getfattr: Removing leading '/' from absolute path names
# file: rhs/bricks/vol_dis_1_rep_2_b1/testdir_gluster/file4
trusted.afr.vol_dis_1_rep_2-client-0=0x000000020000000000000000
trusted.afr.vol_dis_1_rep_2-client-1=0x000000020000000000000000
trusted.gfid=0x020b280875774d33969a117d13d5e8f3
trusted.glusterfs.lockinfo=0x0000000200000041000000093c504f534958282f7268732f627269636b732f766f6c5f6469735f315f7265705f325f6231293a6d69612e6c61622e656e672e626c722e7265646861742e636f6d0032323138343532300000000041000000093c504f534958282f7268732f627269636b732f766f6c5f6469735f315f7265705f325f6230293a66616e2e6c61622e656e672e626c722e7265646861742e636f6d00323030343135323800
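
The hex value decodes to readable lock-owner entries (brick path and hostname of each holder). A decoding sketch, assuming getfattr and strings are available on the brick host:

# Dump the raw attribute value from the brick and keep only the printable strings.
getfattr --only-values -n trusted.glusterfs.lockinfo \
    /rhs/bricks/vol_dis_1_rep_2_b0/testdir_gluster/file4 | strings
# Prints entries such as:
#   <POSIX(/rhs/bricks/vol_dis_1_rep_2_b1):mia.lab.eng.blr.redhat.com
#   <POSIX(/rhs/bricks/vol_dis_1_rep_2_b0):fan.lab.eng.blr.redhat.com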

Additional info:
====================

root@fan [Sep-07-2013- 8:27:47] >gluster v info
 
Volume Name: vol_dis_1_rep_2
Type: Replicate
Volume ID: f5c43519-b5eb-4138-8219-723c064af71c
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: fan.lab.eng.blr.redhat.com:/rhs/bricks/vol_dis_1_rep_2_b0
Brick2: mia.lab.eng.blr.redhat.com:/rhs/bricks/vol_dis_1_rep_2_b1
Options Reconfigured:
server.allow-insecure: on
performance.stat-prefetch: off
performance.write-behind: on
cluster.self-heal-daemon: on
Comment 2 spandura 2013-09-07 05:31:04 EDT
This extended attribute is also visible from the mount point.

Output of getfattr from mount point:
======================================
root@darrel [Sep-07-2013- 9:30:16] >mount | grep glusterfs
mia:/vol_dis_1_rep_2 on /mnt/gm1 type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
root@darrel [Sep-07-2013- 9:30:22] >
root@darrel [Sep-07-2013- 9:30:25] >pwd
/mnt/gm1/testdir_gluster
root@darrel [Sep-07-2013- 9:30:26] >
root@darrel [Sep-07-2013- 9:30:27] >getfattr -d -e hex -m . *
# file: file1
trusted.glusterfs.lockinfo=0x0000000200000041000000093c504f534958282f7268732f627269636b732f766f6c5f6469735f315f7265705f325f6230293a66616e2e6c61622e656e672e626c722e7265646861742e636f6d0032303034313532380000000041000000093c504f534958282f7268732f627269636b732f766f6c5f6469735f315f7265705f325f6231293a6d69612e6c61622e656e672e626c722e7265646861742e636f6d00313635303235383400

# file: file2
trusted.glusterfs.lockinfo=0x0000000200000041000000093c504f534958282f7268732f627269636b732f766f6c5f6469735f315f7265705f325f6231293a6d69612e6c61622e656e672e626c722e7265646861742e636f6d0031363530323830300000000041000000093c504f534958282f7268732f627269636b732f766f6c5f6469735f315f7265705f325f6230293a66616e2e6c61622e656e672e626c722e7265646861742e636f6d00323030343135323800

# file: file3
trusted.glusterfs.lockinfo=0x0000000200000041000000093c504f534958282f7268732f627269636b732f766f6c5f6469735f315f7265705f325f6231293a6d69612e6c61622e656e672e626c722e7265646861742e636f6d0032323138343431320000000041000000093c504f534958282f7268732f627269636b732f766f6c5f6469735f315f7265705f325f6230293a66616e2e6c61622e656e672e626c722e7265646861742e636f6d00323030343135323800

# file: file4
trusted.glusterfs.lockinfo=0x0000000200000041000000093c504f534958282f7268732f627269636b732f766f6c5f6469735f315f7265705f325f6231293a6d69612e6c61622e656e672e626c722e7265646861742e636f6d0032323138343532300000000041000000093c504f534958282f7268732f627269636b732f766f6c5f6469735f315f7265705f325f6230293a66616e2e6c61622e656e672e626c722e7265646861742e636f6d00323030343135323800

root@darrel [Sep-07-2013- 9:30:30] >
Comment 3 Vivek Agarwal 2015-12-03 12:12:28 EST
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release that you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.
