Bug 1228598 - [Backup]: Glusterfind session(s) created before starting the volume results in 'changelog not available' error, eventually
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfind
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.0
Assignee: Kotresh HR
QA Contact: Sweta Anandpara
URL:
Whiteboard:
Depends On:
Blocks: 1202842 1223636 1232729 1233518
 
Reported: 2015-06-05 10:02 UTC by Sweta Anandpara
Modified: 2016-09-17 15:20 UTC (History)
7 users

Fixed In Version: glusterfs-3.7.1-6
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1232729 (view as bug list)
Environment:
Last Closed: 2015-07-29 04:58:07 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:1495 0 normal SHIPPED_LIVE Important: Red Hat Gluster Storage 3.1 update 2015-07-29 08:26:26 UTC

Description Sweta Anandpara 2015-06-05 10:02:23 UTC
Description of problem:

Create a 4+2 disperse volume 'dispersevol' and create glusterfind sessions for it before the volume is started. After starting the volume, mount it over NFS/FUSE and create a couple of files/dirs. The glusterfind pre command then fails with 'Historical Changelogs not available' for the bricks on both nodes of the cluster.


[root@dhcp43-140 ~]# 
[root@dhcp43-140 ~]# gluster v status dispersevol
Status of volume: dispersevol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.43.140:/rhs/thinbrick1/disperse 49188     0          Y       21579
Brick 10.70.43.140:/rhs/thinbrick2/disperse 49189     0          Y       21599
Brick 10.70.42.75:/rhs/thinbrick2/disperse  49184     0          Y       17892
Brick 10.70.42.75:/rhs/thinbrick1/disperse  49185     0          Y       17912
Brick 10.70.43.140:/rhs/thinbrick3/disperse 49190     0          Y       21619
Brick 10.70.42.75:/rhs/thinbrick3/disperse  49186     0          Y       17932
NFS Server on localhost                     2049      0          Y       21640
NFS Server on 10.70.42.75                   2049      0          Y       17953
 
Task Status of Volume dispersevol
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@dhcp43-140 ~]# mkdir /mnt/disperse
[root@dhcp43-140 ~]# mount -t glusterfs 10.70.43.140:/disperse /mnt/disperse
Mount failed. Please check the log file for more details.
[root@dhcp43-140 ~]# mount -t glusterfs 10.70.43.140:/dispersevol /mnt/disperse
[root@dhcp43-140 ~]# cd /mnt/disperse
[root@dhcp43-140 disperse]# ls -a
.  ..  .trashcan
[root@dhcp43-140 disperse]# 
[root@dhcp43-140 disperse]# 
[root@dhcp43-140 disperse]# 
[root@dhcp43-140 disperse]# echo "what a day.. hmph" > test1
[root@dhcp43-140 disperse]# mkdir dir2
[root@dhcp43-140 disperse]# ln test1 test1_ln
[root@dhcp43-140 disperse]# echo "what a day.. hmph" > test2
[root@dhcp43-140 disperse]# ln -s test2 test2_sln
[root@dhcp43-140 disperse]# cd
[root@dhcp43-140 ~]# 
[root@dhcp43-140 ~]# 
[root@dhcp43-140 ~]# 
[root@dhcp43-140 ~]# gluster v info dispersevol
 
Volume Name: dispersevol
Type: Disperse
Volume ID: e70dccdf-9f39-494a-a166-aa142049de07
Status: Created
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.43.140:/rhs/thinbrick1/disperse
Brick2: 10.70.43.140:/rhs/thinbrick2/disperse
Brick3: 10.70.42.75:/rhs/thinbrick2/disperse
Brick4: 10.70.42.75:/rhs/thinbrick1/disperse
Brick5: 10.70.43.140:/rhs/thinbrick3/disperse
Brick6: 10.70.42.75:/rhs/thinbrick3/disperse
Options Reconfigured:
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on
[root@dhcp43-140 ~]# gluster v start dispersevol
volume start: dispersevol: success
[root@dhcp43-140 ~]# 
[root@dhcp43-140 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
testvol_sess              testvol                   2015-06-04 23:38:02      
cross32                   cross3vol                 2015-06-04 22:46:01      
sessd2                    dispersevol               2015-06-05 19:44:31      
sessd3                    dispersevol               2015-06-05 19:44:39      
sessd1                    dispersevol               2015-06-05 19:44:23      
cross31                   cross3vol                 2015-06-05 00:00:28      
sessc                     cross3vol                 2015-06-05 15:44:47      
[root@dhcp43-140 ~]# 
[root@dhcp43-140 ~]# glusterfind pre sessd1 dispersevol outd1.txt --regenerate-outfile
10.70.43.140 - pre failed: /rhs/thinbrick2/disperse Historical Changelogs not available: [Errno 2] No such file or directory

10.70.43.140 - pre failed: /rhs/thinbrick1/disperse Historical Changelogs not available: [Errno 2] No such file or directory

10.70.43.140 - pre failed: /rhs/thinbrick3/disperse Historical Changelogs not available: [Errno 2] No such file or directory

10.70.42.75 - pre failed: /rhs/thinbrick2/disperse Historical Changelogs not available: [Errno 2] No such file or directory

10.70.42.75 - pre failed: /rhs/thinbrick1/disperse Historical Changelogs not available: [Errno 2] No such file or directory

10.70.42.75 - pre failed: /rhs/thinbrick3/disperse Historical Changelogs not available: [Errno 2] No such file or directory

Generated output file /root/outd1.txt
[root@dhcp43-140 ~]#

*********************************************************
This is what is seen in the backend brick logs:


[2015-06-05 14:32:15.422293] I [rpcsvc.c:2213:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured rpc.outstanding-rpc-limit with value 64
[2015-06-05 14:32:15.422417] W [options.c:936:xl_opt_validate] 0-dispersevol-server: option 'listen-port' is deprecated, preferred is 'transport.socket.listen-port', continuing with correction
[2015-06-05 14:32:15.425732] E [changelog-helpers.c:634:htime_open] 0-dispersevol-changelog: Error extracting HTIME_CURRENT: No data available.
[2015-06-05 14:32:15.427484] E [ctr-helper.c:250:extract_ctr_options] 0-gfdbdatastore: CTR Xlator is disabled.
...
...
...
[2015-06-05 14:42:29.646309] I [server-handshake.c:585:server_setvolume] 0-dispersevol-server: accepted client from dhcp43-140.lab.eng.blr.redhat.com-21847-2015/06/05-14:42:29:558126-dispersevol-client-0-0-0 (version: 3.7.0)
[2015-06-05 14:42:53.867949] E [posix-helpers.c:1088:posix_handle_pair] 0-dispersevol-posix: /rhs/thinbrick1/disperse/dir2: key:glusterfs.inodelk-dom-count flags: 1 length:23 error:Operation not supported
[2015-06-05 14:42:53.868063] E [posix.c:1391:posix_mkdir] 0-dispersevol-posix: setting xattrs on /rhs/thinbrick1/disperse/dir2 failed (Operation not supported)
[2015-06-05 14:43:19.746591] E [posix-helpers.c:1088:posix_handle_pair] 0-dispersevol-posix: /rhs/thinbrick1/disperse/test2_sln: key:glusterfs.inodelk-dom-count flags: 1 length:23 error:Operation not supported
[2015-06-05 14:43:19.746643] E [posix.c:1897:posix_symlink] 0-dispersevol-posix: setting xattrs on /rhs/thinbrick1/disperse/test2_sln failed (Operation not supported)
[2015-06-05 14:44:27.716197] I [server-handshake.c:585:server_setvolume] 0-dispersevol-server: accepted client from dhcp42-75.lab.eng.blr.redhat.com-18098-2015/06/05-14:44:27:456748-dispersevol-client-0-0-0 (version: 3.7.0)
[2015-06-05 14:44:56.663817] W [socket.c:642:__socket_rwv] 0-dispersevol-changelog: readv on /var/run/gluster/.0ea84248fa0f63dc12f2a29bd7f49b5921945.sock failed (No data available)

*********************************************************



Version-Release number of selected component (if applicable):
glusterfs-3.7.0-3.el6rhs.x86_64

How reproducible: 1:1
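
For quick reference, a condensed reproduction outline distilled from the steps above and the RCA below; the volume, brick, mount-point and session names here are illustrative placeholders, not the exact ones from the logs:

# Illustrative reproduction sketch (names are placeholders)
gluster volume create demovol 10.70.43.140:/rhs/thinbrick1/demovol \
                              10.70.42.75:/rhs/thinbrick1/demovol    # volume is in 'Created' state
glusterfind create sess1 demovol             # pre-fix builds allow this before the volume is started
gluster volume start demovol                 # changelog and HTIME.TSTAMP files appear only now
mkdir -p /mnt/demovol
mount -t glusterfs 10.70.43.140:/demovol /mnt/demovol
touch /mnt/demovol/file{1..5}                # generate some changes
glusterfind pre sess1 demovol /root/out.txt  # fails: Historical Changelogs not available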

Comment 2 Kotresh HR 2015-06-12 09:17:04 UTC
RCA:
---

This has nothing to do with the disperse volume type. It can happen with any
volume if the following sequence is used:

1. The gluster volume is created first.
2. A glusterfind session is created immediately. This records the current
   time, say 't1', as the start time from which changelogs are expected to
   be available. Even though changelog is enabled (marked 'on' in the
   volfile) at this point, the backend .glusterfs directory and the actual
   changelog files are only created during volume start.
3. The gluster volume is started. The changelog and HTIME.TSTAMP files get
   created during volume start, so the TSTAMP is the current time, say
   't1+n'.

In this case, glusterfind pre requests the History API with start time 't1',
whereas changelogs are actually available only from 't1+n', so the request
will always fail.

Solution:
--------
With the following patches, which are already merged, glusterfind session creation fails unless the volume is online. That fixes the above issue.

Upstream Master:
http://review.gluster.org/#/c/10955/

Upstream 3.7:
http://review.gluster.org/#/c/11187/

The fix for BZ 1224236 also resolves this bug.
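
For illustration, a rough shell approximation of the kind of guard these patches introduce (the real check lives inside glusterfind itself; this sketch only assumes the 'Status:' line of 'gluster volume info' output shown elsewhere in this bug):

# Sketch only, not the actual glusterfind code
VOL=vol1
STATUS=$(gluster volume info "$VOL" | awk -F': ' '/^Status:/ {print $2}')
if [ "$STATUS" != "Started" ]; then
    # covers volumes in both 'Created' and 'Stopped' states
    echo "Volume $VOL is not online" >&2
    exit 1
fi
# ...only then proceed with glusterfind session creation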

Comment 3 Sweta Anandpara 2015-06-17 07:04:39 UTC
As mentioned in the RCA, changing the title of this bug.

Also, the patch mentioned above fixes the issue only when the volume is in the 'stopped' state. The issue still persists when the volume status is 'created'.

Comment 4 Kotresh HR 2015-06-23 04:58:45 UTC
Upstream Patch (Master):
http://review.gluster.org/#/c/11278/

Upstream Patch (3.7):
http://review.gluster.org/#/c/11322/

Comment 7 Kotresh HR 2015-06-24 09:00:39 UTC
Downstream Patch:

https://code.engineering.redhat.com/gerrit/#/c/51457/

Comment 8 Sweta Anandpara 2015-07-04 05:00:23 UTC
Tested and verified this on the build glusterfs-server-3.7.1-6.el6rhs.x86_64

When the volume is not in the 'started' state (i.e., it is either 'created' or 'stopped'), glusterfind does not allow sessions to be created, which removes the possibility of hitting the error mentioned in the title.

Moving this to fixed in 3.1 Everglades.

Detailed logs are pasted below:

[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster v status
Status of volume: gv1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.43.93:/rhs/b1                   49154     0          Y       13880
NFS Server on localhost                     2049      0          Y       13881
NFS Server on 10.70.43.155                  2049      0          Y       23445
 
Task Status of Volume gv1
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: slave
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.43.93:/rhs/thinbrick1/slave     49152     0          Y       13892
Brick 10.70.43.155:/rhs/thinbrick1/slave    49152     0          Y       23444
Brick 10.70.43.93:/rhs/thinbrick2/slave     49153     0          Y       13901
Brick 10.70.43.155:/rhs/thinbrick2/slave    49153     0          Y       23455
NFS Server on localhost                     2049      0          Y       13881
Self-heal Daemon on localhost               N/A       N/A        N       N/A  
NFS Server on 10.70.43.155                  2049      0          Y       23445
Self-heal Daemon on 10.70.43.155            N/A       N/A        N       N/A  
 
Task Status of Volume slave
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@dhcp43-93 ~]# gluster peer status
Number of Peers: 1

Hostname: 10.70.43.155
Uuid: 97f53dc5-1ba1-45dc-acdd-ddf38229035b
State: Peer in Cluster (Connected)
[root@dhcp43-93 ~]# 
[root@dhcp43-93 thinbrick2]# gluster  v create vol1 10.70.43.93:/rhs/thinbrick1/vol1 10.70.43.155:/rhs/thinbrick1/vol1 10.70.43.93:/rhs/thinbrick2/vol1 10.70.43.155:/rhs/thinbrick2/vol1
volume create: vol1: success: please start the volume to access data
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# gluster v status
Status of volume: gv1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.43.93:/rhs/b1                   49154     0          Y       13880
NFS Server on localhost                     2049      0          Y       13881
NFS Server on 10.70.43.155                  2049      0          Y       23445
 
Task Status of Volume gv1
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: slave
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.43.93:/rhs/thinbrick1/slave     49152     0          Y       13892
Brick 10.70.43.155:/rhs/thinbrick1/slave    49152     0          Y       23444
Brick 10.70.43.93:/rhs/thinbrick2/slave     49153     0          Y       13901
Brick 10.70.43.155:/rhs/thinbrick2/slave    49153     0          Y       23455
NFS Server on localhost                     2049      0          Y       13881
Self-heal Daemon on localhost               N/A       N/A        N       N/A  
NFS Server on 10.70.43.155                  2049      0          Y       23445
Self-heal Daemon on 10.70.43.155            N/A       N/A        N       N/A  
 
Task Status of Volume slave
------------------------------------------------------------------------------
There are no active volume tasks
 
Volume vol1 is not started
 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# glutser v info vol1
-bash: glutser: command not found
[root@dhcp43-93 thinbrick2]# gluster v info vol1
 
Volume Name: vol1
Type: Distribute
Volume ID: 8918e433-d903-4bb8-80c2-42a1b5a0244e
Status: Created
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.93:/rhs/thinbrick1/vol1
Brick2: 10.70.43.155:/rhs/thinbrick1/vol1
Brick3: 10.70.43.93:/rhs/thinbrick2/vol1
Brick4: 10.70.43.155:/rhs/thinbrick2/vol1
Options Reconfigured:
performance.readdir-ahead: on
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# glusterfind create sv1 vol1 
Volume vol1 is not online
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# glusterfind create fdsfds vol1
Volume vol1 is not online
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# ls /var/lib/glusterd/glusterfind/
ss1  ss2
[root@dhcp43-93 thinbrick2]# ls
slave  vol1
[root@dhcp43-93 thinbrick2]# cd
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# ls /var/log/glusterfs/glusterfind/
cli.log  fdsfds  nash  plutos1  ps1  ps2  ps3  sess21  sessn1  sessn2  sessn3  sessn4  sesso1  sesso2  sesso3  sessp1  sessp2  sessv1  sgv1  ss1  ss2  sumne  sv1  vol1s1  vol1s2  vol1s3
[root@dhcp43-93 ~]# cat /var/log/glusterfs/glusterfind/sv1/vol1/cli.log 
[2015-07-04 15:48:32,839] ERROR [utils - 152:fail] - Volume vol1 is not online
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster v start vol1
volume start: vol1: success
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind create sv1 vol1
Session sv1 created with volume vol1
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sv1                       vol1                      2015-07-04 15:50:02      
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/
ss1  ss2  sv1
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1
vol1
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1/vol1
%2Frhs%2Fthinbrick1%2Fvol1.status  %2Frhs%2Fthinbrick2%2Fvol1.status  status  sv1_vol1_secret.pem  sv1_vol1_secret.pem.pub
[root@dhcp43-93 ~]# gluster v stop vol1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: vol1: success
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster v status vol1
Volume vol1 is not started
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind create sv2 vol1
Volume vol1 is not online
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sv1                       vol1                      2015-07-04 15:50:02      
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind pre sv1 vol1 /tmp/out.txt
Volume vol1 is not online
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/
.keys/ ss1/   ss2/   sv1/   
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1/vol1/
%2Frhs%2Fthinbrick1%2Fvol1.status  %2Frhs%2Fthinbrick2%2Fvol1.status  status                             sv1_vol1_secret.pem                sv1_vol1_secret.pem.pub
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1/vol1/
%2Frhs%2Fthinbrick1%2Fvol1.status  %2Frhs%2Fthinbrick2%2Fvol1.status  status  sv1_vol1_secret.pem  sv1_vol1_secret.pem.pub
[root@dhcp43-93 ~]# glusterfind pre sv2 vol1 /tmp/out.t
Invalid session sv2
[root@dhcp43-93 ~]# glusterfind post sv1 vol1
Pre script is not run
[root@dhcp43-93 ~]# glusterfind delete sv1 vol1^C
[root@dhcp43-93 ~]# glusterfind delete sv2 vol1
Invalid session sv2
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster v start vol1
volume start: vol1: success
[root@dhcp43-93 ~]# glusterfind pre sv2 vol1 /tmp/out.txt
Invalid session sv2
[root@dhcp43-93 ~]# glusterfind pre sv1 vol1 /tmp/out.txt
Generated output file /tmp/out.txt
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sv1                       vol1                      2015-07-04 15:50:02      
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/
.keys/ ss1/   ss2/   sv1/   
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1/vol1/
%2Frhs%2Fthinbrick1%2Fvol1.status      %2Frhs%2Fthinbrick2%2Fvol1.status      status                                 sv1_vol1_secret.pem
%2Frhs%2Fthinbrick1%2Fvol1.status.pre  %2Frhs%2Fthinbrick2%2Fvol1.status.pre  status.pre                             sv1_vol1_secret.pem.pub
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1/vol1/
%2Frhs%2Fthinbrick1%2Fvol1.status      %2Frhs%2Fthinbrick2%2Fvol1.status      status      sv1_vol1_secret.pem
%2Frhs%2Fthinbrick1%2Fvol1.status.pre  %2Frhs%2Fthinbrick2%2Fvol1.status.pre  status.pre  sv1_vol1_secret.pem.pub
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind pre sv2 vol1 /tmp/out.txt
Invalid session sv2
[root@dhcp43-93 ~]# glusterfind delete sv1 vol1
root.43.155's password: root.43.155's password: 


root.43.155's password: 
root.43.155's password: 
10.70.43.155 - delete failed: Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

Command delete failed in 10.70.43.155:/rhs/thinbrick1/vol1
[root@dhcp43-93 ~]# glusterfind lsit
usage: glusterfind [-h] {pre,create,list,post,delete} ...
glusterfind: error: argument mode: invalid choice: 'lsit' (choose from 'pre', 'create', 'list', 'post', 'delete')
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# glusterfind lsit
usage: glusterfind [-h] {pre,create,list,post,delete} ...
glusterfind: error: argument mode: invalid choice: 'lsit' (choose from 'pre', 'create', 'list', 'post', 'delete')
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/
ss1  ss2
[root@dhcp43-93 ~]# ls /var/log/glusterfs/glusterfind/
cli.log  nash/    ps1/     ps3/     sessn1/  sessn3/  sesso1/  sesso3/  sessp2/  sgv1/    ss2/     sv1/     vol1s1/  vol1s3/  
fdsfds/  plutos1/ ps2/     sess21/  sessn2/  sessn4/  sesso2/  sessp1/  sessv1/  ss1/     sumne/   sv2/     vol1s2/  
[root@dhcp43-93 ~]# ls /var/log/glusterfs/glusterfind/sv1/vol1/c
changelog.1c27a488a584181d698698190ce633eae6ab4a90.log  changelog.log                                           
changelog.b85984854053ba4529aeaba8bd2c93408cb68773.log  cli.log                                                 
[root@dhcp43-93 ~]# ls /var/log/glusterfs/glusterfind/sv1/vol1/^C
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind create sv1 vol1
glutSession sv1 created with volume vol1
[root@dhcp43-93 ~]# gluster v info vol1
 
Volume Name: vol1
Type: Distribute
Volume ID: 8918e433-d903-4bb8-80c2-42a1b5a0244e
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.93:/rhs/thinbrick1/vol1
Brick2: 10.70.43.155:/rhs/thinbrick1/vol1
Brick3: 10.70.43.93:/rhs/thinbrick2/vol1
Brick4: 10.70.43.155:/rhs/thinbrick2/vol1
Options Reconfigured:
changelog.capture-del-path: on
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster v stop vol1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: vol1: success
[root@dhcp43-93 ~]# glusterfind pre sv1 vol1 /tmp/out.txt
Volume vol1 is not online
[root@dhcp43-93 ~]# glusterfind post sv1 vol1
Pre script is not run
[root@dhcp43-93 ~]# gluster v start vol1
volume start: vol1: success
[root@dhcp43-93 ~]# glusterfind pre sv1 vol1 /tmp/out.txt
Generated output file /tmp/out.txt
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster v stop vol1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: vol1: success
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster v  info vol1
 
Volume Name: vol1
Type: Distribute
Volume ID: 8918e433-d903-4bb8-80c2-42a1b5a0244e
Status: Stopped
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.93:/rhs/thinbrick1/vol1
Brick2: 10.70.43.155:/rhs/thinbrick1/vol1
Brick3: 10.70.43.93:/rhs/thinbrick2/vol1
Brick4: 10.70.43.155:/rhs/thinbrick2/vol1
Options Reconfigured:
changelog.capture-del-path: on
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind post sv1 vol1
Session sv1 with volume vol1 updated
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sv1                       vol1                      2015-07-04 15:58:10      
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1/vol1/
%2Frhs%2Fthinbrick1%2Fvol1.status  %2Frhs%2Fthinbrick2%2Fvol1.status  status                             sv1_vol1_secret.pem                sv1_vol1_secret.pem.pub
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1/vol1/^C
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# rpm -qa | grep gluster
glusterfs-client-xlators-3.7.1-6.el6rhs.x86_64
glusterfs-server-3.7.1-6.el6rhs.x86_64
glusterfs-3.7.1-6.el6rhs.x86_64
glusterfs-api-3.7.1-6.el6rhs.x86_64
glusterfs-cli-3.7.1-6.el6rhs.x86_64
glusterfs-geo-replication-3.7.1-6.el6rhs.x86_64
glusterfs-libs-3.7.1-6.el6rhs.x86_64
glusterfs-fuse-3.7.1-6.el6rhs.x86_64
[root@dhcp43-93 ~]#

Comment 9 errata-xmlrpc 2015-07-29 04:58:07 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html

