Bug 1228495 - [Backup]: Glusterfind pre fails with htime xattr updation error resulting in historical changelogs not available
Summary: [Backup]: Glusterfind pre fails with htime xattr updation error resulting in historical changelogs not available
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfind
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.1.0
Assignee: Kotresh HR
QA Contact: Sweta Anandpara
URL:
Whiteboard:
Depends On:
Blocks: 1202842 1223636 1230015 1230694
 
Reported: 2015-06-05 05:17 UTC by Sweta Anandpara
Modified: 2016-09-17 15:20 UTC (History)
6 users

Fixed In Version: glusterfs-3.7.1-3
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1230015 (view as bug list)
Environment:
Last Closed: 2015-07-29 04:57:25 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:1495 0 normal SHIPPED_LIVE Important: Red Hat Gluster Storage 3.1 update 2015-07-29 08:26:26 UTC

Description Sweta Anandpara 2015-06-05 05:17:20 UTC
Description of problem:

Had a 2-node cluster with two volumes, a 2*2 dist-rep volume 'ozone' and a 4*1 distribute volume 'nash', with glusterfind sessions for both of them. A glusterfind command on ozone resulted in a crash (bug 1228017), after which I carried on my testing with the nash volume.

Did creates, renames, file moves, metadata changes and operations of that kind at the fuse mountpoint of nash. Ran glusterfind pre and post commands every time there was a change. Left the setup untouched for close to 3 hours.

Ran the glusterfind pre command again, without making any changes on the mountpoint, and it resulted in an error on node2 stating 'Historical changelogs not available'. It managed to get the changelogs on node1. glusterfind list displayed healthy session entries on node1 but 'session corrupted' entries on node2.
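
For reference, a minimal sketch of the glusterfind sequence described above (the session name 'snash' and the output file path are assumptions for illustration, not the exact commands from this run):

# glusterfind create snash nash
# glusterfind pre snash nash /tmp/out_nash
# glusterfind post snash nash
# glusterfind pre snash nash /tmp/out_nash
# glusterfind list

The pre step reads the brick changelogs recorded since the last post; it is this step that fails on node2 with 'Historical changelogs not available' once the htime xattr updates start failing on that brick.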

Brick logs displayed the following error on node2:

[2015-06-04 12:37:06.887081] W [socket.c:642:__socket_rwv] 0-nash-changelog: readv on /var/run/gluster/.7fa2c65ef2768e33fb16e3a06d07afa73795.sock failed (No data available)
[2015-06-04 12:45:22.519142] W [socket.c:642:__socket_rwv] 0-nash-changelog: readv on /var/run/gluster/.7fa2c65ef2768e33fb16e3a06d07afa73942.sock failed (No data available)
[2015-06-04 16:29:18.732545] E [changelog-helpers.c:333:htime_update] 0-nash-changelog: Htime xattr updation failed, reason (No data available)
[2015-06-04 16:29:33.749876] E [changelog-helpers.c:333:htime_update] 0-nash-changelog: Htime xattr updation failed, reason (No data available)
[2015-06-04 16:29:48.767266] E [changelog-helpers.c:333:htime_update] 0-nash-changelog: Htime xattr updation failed, reason (No data available)
[2015-06-04 16:30:03.784468] E [changelog-helpers.c:333:htime_update] 0-nash-changelog: Htime xattr updation failed, reason (No data available)



Version-Release number of selected component (if applicable):
glusterfs-3.7.0-3.el6rhs.x86_64

How reproducible: 1:1

Comment 2 Kotresh HR 2015-06-10 08:01:25 UTC
Patch posted Upstream:
http://review.gluster.org/#/c/11150/

Comment 3 Kotresh HR 2015-06-11 11:45:54 UTC
Downstream Patch:
https://code.engineering.redhat.com/gerrit/#/c/50530/

Comment 5 Kotresh HR 2015-06-12 10:36:03 UTC
Upstream 3.7 Patch:
http://review.gluster.org/#/c/11181/

Comment 8 Sweta Anandpara 2015-06-22 10:31:43 UTC
Inputs on the probable scenarios that could hit this issue, and on the regression testing to be run in and around the fix that has gone in for this bug, would help. Thanks!

Comment 9 Kotresh HR 2015-07-05 14:13:47 UTC
The issue with this bug is that the extended attribute "trusted.glusterfs.htime" on the HTIME.TSTAMP file vanished for some unknown reason.

So to verify this bug, manually delete the extended attribute on the HTIME.TSTAMP file as follows.

# getfattr -d -m . HTIME.1435905425 
# file: HTIME.1435905425
security.selinux="unconfined_u:object_r:file_t:s0"
trusted.glusterfs.htime="1436105426:1403"

# setfattr -x trusted.glusterfs.htime HTIME.1435905425
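
With the fixed build, the next changelog rollover (the brick log errors above repeat roughly every 15 seconds, which matches the rollover interval) should recreate the xattr instead of repeatedly logging 'Htime xattr updation failed'. A hedged way to confirm, reusing the file name from the example above:

# sleep 20
# getfattr -d -m . HTIME.1435905425

trusted.glusterfs.htime should be listed again in the output.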

Comment 10 Sweta Anandpara 2015-07-16 04:57:41 UTC
Tested and verified this on the build glusterfs-3.7.1-8.el7rhgs.x86_64

Installed a previous version (3.7.0-3) to reproduce the issue, then redid the steps mentioned in comment 9 on the newer build.

I do not see the issue mentioned above. Since the root cause of this bug was never found, I cannot say so for certain, but I can positively confirm that it is not hit again, with the agreement of dev.

Moving this to fixed in 3.1 Everglades.

Pasted below are the detailed logs:

[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# rpm -qa | grep gluster
glusterfs-libs-3.7.1-8.el7rhgs.x86_64
glusterfs-3.7.1-8.el7rhgs.x86_64
glusterfs-fuse-3.7.1-8.el7rhgs.x86_64
glusterfs-server-3.7.1-8.el7rhgs.x86_64
glusterfs-client-xlators-3.7.1-8.el7rhgs.x86_64
glusterfs-api-3.7.1-8.el7rhgs.x86_64
glusterfs-cli-3.7.1-8.el7rhgs.x86_64
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind list
No sessions found
[root@dhcp42-37 ~]# glustv list
-bash: glustv: command not found
[root@dhcp42-37 ~]# gluster v list
No volumes present in cluster
[root@dhcp42-37 ~]# gluster v create
Usage: volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT> [arbiter <COUNT>]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK>?<vg_name>... [force]
[root@dhcp42-37 ~]# gluster v create ozone replica 3 10.70.42.37:/bricks/^C
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# gluster v create gluster_shared_storage replica 3 10.70.43.94:/bricks/brick2/gss 10.70.43.124:/bricks/brick2/gss 10.70.42.15:/bricks/brick2/gss
volume create: gluster_shared_storage: success: please start the volume to access data
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# gluster v start gluster_shared_storage
volume start: gluster_shared_storage: success
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# gluster v info 
 
Volume Name: gluster_shared_storage
Type: Replicate
Volume ID: 112687f3-191e-42bf-8c72-dbf2386e7430
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.43.94:/bricks/brick2/gss
Brick2: 10.70.43.124:/bricks/brick2/gss
Brick3: 10.70.42.15:/bricks/brick2/gss
Options Reconfigured:
performance.readdir-ahead: on
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# gluster v create ozone replica 3 10.70.42.37:/bricks/brick1/oz 10.70.43.94:/bricks/brick1/oz 10.70.43.124:/bricks/brick1/oz 10.70.42.37:/bricks/brick2/oz 10.70.43.94:/bricks/brick2/oz 10.70.43.94:/bricks/brick2/oz
Found duplicate exports 10.70.43.94:/bricks/brick2/oz
Usage: volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT> [arbiter <COUNT>]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK>?<vg_name>... [force]
[root@dhcp42-37 ~]# gluster v create ozone replica 3 10.70.42.37:/bricks/brick1/oz 10.70.43.94:/bricks/brick1/oz 10.70.43.124:/bricks/brick1/oz 10.70.42.37:/bricks/brick2/oz 10.70.43.94:/bricks/brick2/oz 10.70.43.94:/bricks/brick2/oz
Found duplicate exports 10.70.43.94:/bricks/brick2/oz
Usage: volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT> [arbiter <COUNT>]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK>?<vg_name>... [force]
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# gluster v create ozone replica 3 10.70.42.37:/bricks/brick1/oz 10.70.43.94:/bricks/brick1/oz 10.70.43.124:/bricks/brick1/oz 10.70.42.37:/bricks/brick2/oz 10.70.43.94:/bricks/brick2/oz 10.70.43.124:/bricks/brick2/oz
volume create: ozone: success: please start the volume to access data
[root@dhcp42-37 ~]# gluster v list
gluster_shared_storage
ozone
[root@dhcp42-37 ~]# gluster v info ozone
 
Volume Name: ozone
Type: Distributed-Replicate
Volume ID: a3812e21-cd1d-49ae-b41c-09b51beed1d2
Status: Created
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.42.37:/bricks/brick1/oz
Brick2: 10.70.43.94:/bricks/brick1/oz
Brick3: 10.70.43.124:/bricks/brick1/oz
Brick4: 10.70.42.37:/bricks/brick2/oz
Brick5: 10.70.43.94:/bricks/brick2/oz
Brick6: 10.70.43.124:/bricks/brick2/oz
Options Reconfigured:
performance.readdir-ahead: on
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# gluster v start ozone
volume start: ozone: success
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# gluster v info ozone
 
Volume Name: ozone
Type: Distributed-Replicate
Volume ID: a3812e21-cd1d-49ae-b41c-09b51beed1d2
Status: Started
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.42.37:/bricks/brick1/oz
Brick2: 10.70.43.94:/bricks/brick1/oz
Brick3: 10.70.43.124:/bricks/brick1/oz
Brick4: 10.70.42.37:/bricks/brick2/oz
Brick5: 10.70.43.94:/bricks/brick2/oz
Brick6: 10.70.43.124:/bricks/brick2/oz
Options Reconfigured:
performance.readdir-ahead: on
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind create so1 ozone
Session so1 created with volume ozone
[root@dhcp42-37 ~]# glusterfind create so2 ozone
Session so2 created with volume ozone
[root@dhcp42-37 ~]# glusterfind create so3 ozone
Session so3 created with volume ozone
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
so1                       ozone                     2015-07-15 16:50:39      
so2                       ozone                     2015-07-15 16:50:47      
so3                       ozone                     2015-07-15 16:50:56      
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind pre so1 ozone /tmp/out1
Generated output file /tmp/out1
[root@dhcp42-37 ~]# cat /tmp/out1
MODIFY .trashcan%2F 
NEW test1 
NEW test2 
NEW dir1 
NEW dir1%2F%2Fdir2 
NEW dir3 
NEW dir3%2F%2Fdir4 
NEW dir3%2Fdir4%2F%2Fdir5 
NEW dir3%2Fdir4%2F%2Ftest3 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# gluster
gluster> exit
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind post so1 ozone
Session so1 with volume ozone updated
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind pre so1 ozone /tmp/out1
Generated output file /tmp/out1
[root@dhcp42-37 ~]# cat /tmp/out1
NEW file1 
NEW file2 
MODIFY dir3%2Fdir4%2Ftest3 
RENAME test1 dir1%2F%2Ftest1new
DELETE test2 
[root@dhcp42-37 ~]#
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind post so1 ozone
Session so1 with volume ozone updated
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind pre so1 ozone /tmp/out1
Generated output file /tmp/out1
[root@dhcp42-37 ~]# cat /tmp/out1
NEW dir1%2F%2Ffile1 
[root@dhcp42-37 ~]# glusterfind pre so2 ozone /tmp/out2
Generated output file /tmp/out2
[root@dhcp42-37 ~]# cat /tmp/out2
MODIFY .trashcan%2F 
NEW dir1%2F%2Ftest1new 
NEW dir1 
NEW dir1%2F%2Fdir2 
NEW dir3 
NEW dir3%2F%2Fdir4 
NEW dir3%2Fdir4%2F%2Fdir5 
NEW dir3%2Fdir4%2F%2Ftest3 
NEW file1 
NEW file2 
NEW dir1%2F%2Ffile1 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# cd /var/log/glusterfs/
bricks/                         cmd_history.log                 glusterfind/                    nfs.log
cli.log                         etc-glusterfs-glusterd.vol.log  glustershd.log                  snaps/
[root@dhcp42-37 ~]# cd /var/log/glusterfs/bricks/bricks-brick
bricks-brick1-nash.log  bricks-brick1-oz.log    bricks-brick2-nash.log  bricks-brick2-oz.log    
[root@dhcp42-37 ~]# cd /var/log/glusterfs/bricks/bricks-brick
bricks-brick1-nash.log  bricks-brick1-oz.log    bricks-brick2-nash.log  bricks-brick2-oz.log    
[root@dhcp42-37 ~]# cd /var/log/glusterfs/glusterfind/
cli.log  sn1/     so1/     so2/     so3/     
[root@dhcp42-37 ~]# cd /var/log/glusterfs/glusterfind/so1/ozone/c
changelog.7f86c700e6351d662b281ef1d3fb76c6066d169e.log  changelog.log
changelog.d1e856b5798e52a31b94ba2c0c8e1d261f39e70f.log  cli.log
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# cd /etc/
Display all 180 possibilities? (y or n)
[root@dhcp42-37 ~]# cd /var/lib/glusterd/
bitd/            glusterfind/     hooks/           peers/           snaps/           
geo-replication/ glustershd/      nfs/             quotad/          ss_brick/        
glusterd.info    groups/          options          scrub/           vols/            
[root@dhcp42-37 ~]# cd /var/lib/glusterd/glusterfind/
.keys/ so1/   so2/   so3/   
[root@dhcp42-37 ~]# cd /var/lib/glusterd/glusterfind/so1/ozone/
%2Fbricks%2Fbrick1%2Foz.status      %2Fbricks%2Fbrick2%2Foz.status.pre  status
%2Fbricks%2Fbrick1%2Foz.status.pre  so1_ozone_secret.pem                status.pre
%2Fbricks%2Fbrick2%2Foz.status      so1_ozone_secret.pem.pub            
[root@dhcp42-37 ~]# cd /var/lib/glusterd/glusterfind/so1/ozone/^C
[root@dhcp42-37 ~]# cd /var/lib/glusterd/glustershd/
glustershd-server.vol  run/                   
[root@dhcp42-37 ~]# ls /var/lib/glusterd/glustershd/
glustershd-server.vol  run
[root@dhcp42-37 ~]# ls /var/lib/glusterd/glustershd/run/glustershd.pid^C
[root@dhcp42-37 ~]# ls /var/lib/glusterd/
bitd/            glusterfind/     hooks/           peers/           snaps/           
geo-replication/ glustershd/      nfs/             quotad/          ss_brick/        
glusterd.info    groups/          options          scrub/           vols/            
[root@dhcp42-37 ~]# vi /var/lib/glusterd/glusterd.info 
[root@dhcp42-37 ~]# vi /var/lib/glusterd/glustershd/
glustershd-server.vol  run/                   
[root@dhcp42-37 ~]# vi /var/lib/glusterd/glustershd/glustershd-server.vol 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# vi /var/lib/glusterd/
bitd/            glusterfind/     hooks/           peers/           snaps/           
geo-replication/ glustershd/      nfs/             quotad/          ss_brick/        
glusterd.info    groups/          options          scrub/           vols/            
[root@dhcp42-37 ~]# vi /var/lib/glusterd/^C
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# cd /var/lib/glusterd/^C
[root@dhcp42-37 ~]# cd /bricks/brick1/oz/
dir1/       dir3/       file1       file2       .glusterfs/ .trashcan/  
[root@dhcp42-37 ~]# cd /bricks/brick1/oz/
dir1/       dir3/       file1       file2       .glusterfs/ .trashcan/  
[root@dhcp42-37 ~]# cd /bricks/brick1/oz/.glusterfs/
00/           6e/           a7/           cd/           ef/           landfill/     oz.db-wal     
39/           76/           b6/           changelogs/   health_check  oz.db         
53/           93/           c8/           e0/           indices/      oz.db-shm     
[root@dhcp42-37 ~]# cd /bricks/brick1/oz/.glusterfs/changelogs/
CHANGELOG             CHANGELOG.1436959584  CHANGELOG.1436959794  CHANGELOG.1436959989  CHANGELOG.1436960680  
CHANGELOG.1436959524  CHANGELOG.1436959689  CHANGELOG.1436959884  CHANGELOG.1436960004  csnap/                
CHANGELOG.1436959539  CHANGELOG.1436959779  CHANGELOG.1436959974  CHANGELOG.1436960034  htime/                
[root@dhcp42-37 ~]# catcd /bricks/brick1/oz/.glusterfs/changelogs/htime/HTIME.1436959222 
-bash: catcd: command not found
[root@dhcp42-37 ~]# cat /bricks/brick1/oz/.glusterfs/changelogs/htime/HTIME.1436959222 
/bricks/brick1/oz/.glusterfs/changelogs/changelog.1436959
...
....
...
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# getfattr -d -m . 
anaconda-ks.cfg  .bash_logout     .bashrc          .lesshst         .ssh/            
.bash_history    .bash_profile    .cshrc           .pki/            .tcshrc          
[root@dhcp42-37 ~]# pwd
/root
[root@dhcp42-37 ~]# cd /bricks/brick1/oz/.glusterfs/changelogs/
[root@dhcp42-37 changelogs]# 
[root@dhcp42-37 changelogs]# 
[root@dhcp42-37 changelogs]# getfattr -d -m . htime/HTIME.1436959222 
# file: htime/HTIME.1436959222
security.selinux="system_u:object_r:unlabeled_t:s0"
trusted.glusterfs.htime="1436962152:196"

[root@dhcp42-37 changelogs]# 
[root@dhcp42-37 changelogs]# setfattr -x trusted.glusterfs.htime htime/HTIME.1436959222 
[root@dhcp42-37 changelogs]# 
[root@dhcp42-37 changelogs]# getfattr -d -m . htime/HTIME.1436959222 
# file: htime/HTIME.1436959222
security.selinux="system_u:object_r:unlabeled_t:s0"

[root@dhcp42-37 changelogs]# 
[root@dhcp42-37 changelogs]# 
[root@dhcp42-37 changelogs]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
so1                       ozone                     2015-07-15 17:04:01      
so2                       ozone                     2015-07-15 16:50:47      
so3                       ozone                     2015-07-15 16:50:56      
[root@dhcp42-37 changelogs]# 
[root@dhcp42-37 changelogs]# 
[root@dhcp42-37 changelogs]# glusterfind pre so1 ozone /tmp/out1
Post command is not run after last pre, use --regenerate-outfile
[root@dhcp42-37 changelogs]# glusterfind pre so3 ozone /tmp/out3
Generated output file /tmp/out3
[root@dhcp42-37 changelogs]# 
[root@dhcp42-37 changelogs]# 
[root@dhcp42-37 changelogs]# cat /tmp/out3
MODIFY .trashcan%2F 
NEW dir1%2F%2Ftest1new 
NEW dir1 
NEW dir1%2F%2Fdir2 
NEW dir3 
NEW dir3%2F%2Fdir4 
NEW dir3%2Fdir4%2F%2Fdir5 
NEW dir3%2Fdir4%2F%2Ftest3 
NEW file1 
NEW file2 
NEW dir1%2F%2Ffile1 
[root@dhcp42-37 changelogs]# 
[root@dhcp42-37 changelogs]# glusterfind pre so2 ozone t/mout2
Post command is not run after last pre, use --regenerate-outfile
[root@dhcp42-37 changelogs]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# cd /bricks/brick2/oz/.glusterfs/
00/           53/           b6/           changelogs/   indices/      oz.db         oz.db-wal     
39/           93/           cd/           health_check  landfill/     oz.db-shm     
[root@dhcp42-37 ~]# cd /bricks/brick2/oz/.glusterfs/changelogs/htime/
[root@dhcp42-37 htime]# 
[root@dhcp42-37 htime]# getfattr -d m . HTIME.1436959222 
getfattr: m: No such file or directory
[root@dhcp42-37 htime]# getfattr -d -m . HTIME.1436959222 
# file: HTIME.1436959222
security.selinux="system_u:object_r:unlabeled_t:s0"
trusted.glusterfs.htime="1436962392:212"

[root@dhcp42-37 htime]# 
[root@dhcp42-37 htime]# setfattr -x trusted.glusterfs.htime HTIME.1436959222 
[root@dhcp42-37 htime]# getfattr -d -m . HTIME.1436959222 
# file: HTIME.1436959222
security.selinux="system_u:object_r:unlabeled_t:s0"

[root@dhcp42-37 htime]# getfattr -d -m . /bricks/brick1/oz/.glusterfs/changelogs/htime/HTIME.1436959222 
getfattr: Removing leading '/' from absolute path names
# file: bricks/brick1/oz/.glusterfs/changelogs/htime/HTIME.1436959222
security.selinux="system_u:object_r:unlabeled_t:s0"
trusted.glusterfs.htime="1436962467:217"

[root@dhcp42-37 htime]# getfattr -d -m . /bricks/brick1/oz/.glusterfs/changelogs/htime/HTIME.1436959222 ^C
[root@dhcp42-37 htime]# setfattr -x trusted.glusterfs.htime /bricks/brick1/oz/.glusterfs/changelogs/htime/HTIME.1436959222
[root@dhcp42-37 htime]# getfattr -d -m . /bricks/brick1/oz/.glusterfs/changelogs/htime/HTIME.1436959222 
getfattr: Removing leading '/' from absolute path names
# file: bricks/brick1/oz/.glusterfs/changelogs/htime/HTIME.1436959222
security.selinux="system_u:object_r:unlabeled_t:s0"

[root@dhcp42-37 htime]# getfattr -d -m . HTIME.1436959222 
# file: HTIME.1436959222
security.selinux="system_u:object_r:unlabeled_t:s0"
trusted.glusterfs.htime="1436962512:220"

[root@dhcp42-37 htime]# setfattr -x trusted.glusterfs.htime HTIME.1436959222 
[root@dhcp42-37 htime]# getfattr -d -m . HTIME.1436959222 
# file: HTIME.1436959222
security.selinux="system_u:object_r:unlabeled_t:s0"

[root@dhcp42-37 htime]# getfattr -d -m . /bricks/brick1/oz/.glusterfs/changelogs/htime/HTIME.1436959222 
getfattr: Removing leading '/' from absolute path names
# file: bricks/brick1/oz/.glusterfs/changelogs/htime/HTIME.1436959222
security.selinux="system_u:object_r:unlabeled_t:s0"
trusted.glusterfs.htime="1436962542:222"

[root@dhcp42-37 htime]# setfattr -x trusted.glusterfs.htime HTIME.1436959222 
[root@dhcp42-37 htime]# getfattr -d -m . HTIME.1436959222 
# file: HTIME.1436959222
security.selinux="system_u:object_r:unlabeled_t:s0"
trusted.glusterfs.htime="1436962632:228"

[root@dhcp42-37 htime]# 


CLIENT
==============================


bash-4.3$ ssh root.43.59
Last login: Tue Jul  7 00:16:19 2015 from dhcp43-140.lab.eng.blr.redhat.com
[root@dhcp43-59 ~]# 
[root@dhcp43-59 ~]# 
[root@dhcp43-59 ~]# mount | grep ozone
10.70.43.191:/ozone on /mnt/ozone type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
10.70.43.93:/ozone on /mnt/oz type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
[root@dhcp43-59 ~]# umount /mnt/oz
[root@dhcp43-59 ~]# umount /mnt/ozone
[root@dhcp43-59 ~]# rm -rf /mnt/oz
[root@dhcp43-59 ~]# rm -rf /mnt/ozone
[root@dhcp43-59 ~]# 
[root@dhcp43-59 ~]# mkdir /mnt/oz
[root@dhcp43-59 ~]# mount -t nfs 10.70.42.15:/ozone /mnt/oz
^C
[root@dhcp43-59 ~]# mount -t glusterfs 10.70.42.15:/ozone /mnt/oz
[root@dhcp43-59 ~]# cd /mnt/oz
[root@dhcp43-59 oz]# ls -a
.  ..  .trashcan
[root@dhcp43-59 oz]# echo "what a beautiful day" > test1
[root@dhcp43-59 oz]# echo hello world" > test2
> ^C
[root@dhcp43-59 oz]# ls
test1
[root@dhcp43-59 oz]# echo "hello wolrd" > test2
[root@dhcp43-59 oz]# ls -a
.  ..  test1  test2  .trashcan
[root@dhcp43-59 oz]# mkdir -p dir1/dir2
[root@dhcp43-59 oz]# mkdir -p dir3/dir4/dir5
[root@dhcp43-59 oz]# echo "whatever" dir3/dir4/test3
whatever dir3/dir4/test3
[root@dhcp43-59 oz]# ls
dir1  dir3  test1  test2
[root@dhcp43-59 oz]# echo "whatever" > dir3/dir4/test3
[root@dhcp43-59 oz]# ls dir3/dir4/
dir5  test3
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# echo "fjdksl" > file1
[root@dhcp43-59 oz]# echo "fjkdsl" > file2
[root@dhcp43-59 oz]# touch dir3/dir4/test3 
[root@dhcp43-59 oz]# mv test1 dir1/test1new
[root@dhcp43-59 oz]# rm test2
rm: remove regular file `test2'? y
[root@dhcp43-59 oz]#

Comment 11 errata-xmlrpc 2015-07-29 04:57:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html

