Bug 1224236 - [Backup]: RFE - Glusterfind CLI commands need to respond based on volume's start/stop state
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfind
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.1.0
Assignee: Milind Changire
QA Contact: Sweta Anandpara
URL:
Whiteboard:
Depends On:
Blocks: 1202842 1223636
 
Reported: 2015-05-22 11:13 UTC by Sweta Anandpara
Modified: 2016-09-17 15:21 UTC
CC: 7 users

Fixed In Version: glusterfs-3.7.1-7
Doc Type: Enhancement
Doc Text:
Clone Of:
: 1225564
Environment:
Last Closed: 2015-07-29 04:51:32 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:1495 0 normal SHIPPED_LIVE Important: Red Hat Gluster Storage 3.1 update 2015-07-29 08:26:26 UTC

Description Sweta Anandpara 2015-05-22 11:13:55 UTC
Description of problem:
If a volume is in a stopped state, glusterfind create succeeds (!), while pre fails with the error 'Changelog register failed: Connection refused'. Post and delete succeed. The glusterfind CLI commands need to behave uniformly, depending on the start/stop state of the volume.

This could be further enhanced to gracefully handle the scenario where the volume is stopped in the middle of a running pre command.
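
For illustration only (this is not glusterfind's actual code), a minimal sketch of the kind of pre-flight check that would give all subcommands uniform behaviour. It assumes the volume state can be read by parsing the output of 'gluster volume info <VOLNAME> --xml':

# Hypothetical sketch: check the volume state once, up front, so that
# create/pre/post/delete would all fail the same way on a stopped volume.
import subprocess
import sys
import xml.etree.ElementTree as ET

def volume_status(volname):
    """Return the volume status string, e.g. 'Started', 'Stopped' or 'Created'."""
    out = subprocess.check_output(["gluster", "volume", "info", volname, "--xml"])
    status = ET.fromstring(out).find(".//statusStr")
    if status is None:
        raise RuntimeError("Unable to get volume details")
    return status.text

def require_started(volname):
    """Fail uniformly, before doing any work, if the volume is not started."""
    if volume_status(volname) != "Started":
        sys.stderr.write("Volume %s is not online\n" % volname)
        sys.exit(1)

# e.g. require_started("nash") would exit with an error while nash is stopped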

Version-Release number of selected component (if applicable):
glusterfs-3.7.0-2.el6rhs.x86_64

How reproducible: Always


Steps to Reproduce:
1. Create a new volume and leave it in a stopped state.
2. Create a glusterfind session. It succeeds.
3. Execute glusterfind pre. It fails with an error.
4. Delete the glusterfind session. The session gets deleted.


Expected results:

* Step 3 should gracefully fail with a more relevant error.
* All glusterfind CLI commands - create/pre/post/delete/list - should have uniform behaviour based on the state of the volume.

Additional info:

[root@dhcp43-140 ~]# 
[root@dhcp43-140 ~]# gluster v stop nash
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: nash: success
[root@dhcp43-140 ~]# glusterfind pre sess_nash nash /tmp/out.txt --regenerate-outfile
10.70.43.140 - pre failed: /rhs/thinbrick2/nash/dd Changelog register failed: [Errno 111] Connection refused

10.70.43.140 - pre failed: /rhs/thinbrick1/nash/dd Changelog register failed: [Errno 111] Connection refused

10.70.42.75 - pre failed: [2015-05-22 16:26:00.321920] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 3
/rhs/thinbrick1/nash/dd Changelog register failed: [Errno 111] Connection refused

10.70.42.75 - pre failed: /rhs/thinbrick2/nash/dd Changelog register failed: [Errno 111] Connection refused

Generated output file /tmp/out.txt
[root@dhcp43-140 ~]#

Comment 7 Sweta Anandpara 2015-06-25 05:39:14 UTC
Observed the following behaviour in different scenarios:

1. 'Glusterfind create' for a stopped volume - pass/fail (??)
Stop an already created volume, and try to create a new glusterfind session.

Output: The CLI fails with an error mentioning the 'stopped' state of the volume. Glusterfind list does display the session being created, BUT with the 'Session Corrupted' flag.

2. 'Glusterfind pre' for a stopped volume

Output: CLI fails with an error mentioning 'volume is in stopped state'. Pre does not succeed.

3. 'Glusterfind post' for a stopped volume

Output: The CLI succeeds, and so does the functionality of glusterfind post, i.e. updating status.pre to status in $GLUSTERD_WORKDIR (see the sketch after this list).

4. 'Glusterfind delete' for a stopped volume

Output: Prompts for the password multiple times (bug 1234213) and finally states that the command has failed, BUT the session does get deleted.

5. 'Glusterfind list' - displays the session information irrespective of the volume's start/stop state, as expected.

6. 'Glusterfind create' for a created volume

Output: Create succeeds (bug 1228598).

7. 'Glusterfind pre/post/delete' for a created volume

Output: They all succeed, eventually resulting in a 'changelog not available' error - this will be taken care of by bug 1228598, which is expected to prevent the session from getting created in the first place.
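
Regarding scenario 3 above: the following is only a rough sketch of what updating status.pre to status amounts to, based on the session directory layout visible in the logs below (/var/lib/glusterd/glusterfind/<session>/<volume>/); the helper name and loop are illustrative, not glusterfind's actual implementation.

# Illustrative only: post promotes the interim *.status.pre files written
# by pre to their final *.status names (and status.pre to status).
import os

def promote_status(session, volume, workdir="/var/lib/glusterd/glusterfind"):
    session_dir = os.path.join(workdir, session, volume)
    for name in os.listdir(session_dir):
        if name.endswith(".pre"):
            pre_path = os.path.join(session_dir, name)
            os.rename(pre_path, pre_path[:-len(".pre")])  # e.g. status.pre -> status

# e.g. promote_status("sesso2", "ozone") after a pre run leaves only *.status files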


As mentioned while raising the bug, glusterfind create/pre/post/delete should have *uniform* behaviour based on the volume's state. Differing behaviour is currently seen across the CLI commands.
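
To make the uniformity check concrete, a small harness along these lines could run every subcommand against a stopped volume and report which ones fail. The volume, session and output-file names here are placeholders, and the volume is assumed to already be in the Stopped state:

# Hypothetical harness: run each glusterfind subcommand against a stopped volume
# and report pass/fail. All names below are placeholders for this sketch.
import subprocess

VOLUME = "ozone"          # assumed to be in the Stopped state
SESSION = "sess_check"    # placeholder session name
OUTFILE = "/tmp/out_check.txt"

COMMANDS = [
    ("create", ["glusterfind", "create", SESSION, VOLUME]),
    ("pre",    ["glusterfind", "pre", SESSION, VOLUME, OUTFILE]),
    ("post",   ["glusterfind", "post", SESSION, VOLUME]),
    ("delete", ["glusterfind", "delete", SESSION, VOLUME]),
    ("list",   ["glusterfind", "list"]),
]

for name, cmd in COMMANDS:
    result = subprocess.run(cmd, capture_output=True, text=True)
    verdict = "failed" if result.returncode else "succeeded"
    lines = (result.stderr or result.stdout).strip().splitlines()
    print("%-6s %-9s %s" % (name, verdict, lines[0] if lines else ""))

# Expected (per this RFE): create/pre/post/delete all fail with a clear
# "volume is not started/online" message; list may still succeed.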


Moving this bug back to ASSIGNED. Pasted below are the logs:

[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# gluster v list
cross3
gluster_shared_storage
ozone
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# gluster v info ozone
 
Volume Name: ozone
Type: Distributed-Replicate
Volume ID: 9ef1ace8-505d-4d97-aa23-4296aa685f76
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: 10.70.43.191:/rhs/thinbrick1/ozone
Brick2: 10.70.42.202:/rhs/thinbrick1/ozone
Brick3: 10.70.43.191:/rhs/thinbrick2/ozone
Brick4: 10.70.42.202:/rhs/thinbrick2/ozone
Brick5: 10.70.42.30:/rhs/thinbrick1/ozone
Brick6: 10.70.42.147:/rhs/thinbrick1/ozone
Brick7: 10.70.42.30:/rhs/thinbrick2/ozone
Brick8: 10.70.42.147:/rhs/thinbrick2/ozone
Options Reconfigured:
performance.readdir-ahead: on
storage.build-pgfid: on
changelog.changelog: on
changelog.capture-del-path: on
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# rpm -qa | grep glusterfs
glusterfs-libs-3.7.1-4.el6rhs.x86_64
glusterfs-api-3.7.1-4.el6rhs.x86_64
glusterfs-client-xlators-3.7.1-4.el6rhs.x86_64
glusterfs-fuse-3.7.1-4.el6rhs.x86_64
glusterfs-cli-3.7.1-4.el6rhs.x86_64
glusterfs-3.7.1-4.el6rhs.x86_64
glusterfs-server-3.7.1-4.el6rhs.x86_64
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sesso3                    ozone                     2015-06-18 16:27:30      
sesso1                    ozone                     2015-06-23 20:43:34      
cross3s1                  cross3                    2015-06-23 23:30:16      
sesso5                    ozone                     2015-06-20 00:18:03      
cross3s3                  cross3                    2015-06-23 17:55:28      
cross3s2                  cross3                    2015-06-23 18:06:38      
sesso2                    ozone                     2015-06-19 22:44:40      
sesso4                    ozone                     2015-06-18 16:27:38      
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# glusterfind pre sesso4 ozone /tmp/outo4.txt
10.70.42.30 - pre failed: [2015-06-25 10:46:48.746139] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2015-06-25 10:46:48.746527] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 4
[2015-06-25 10:46:48.746950] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2015-06-25 10:46:48.750058] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 3
/rhs/thinbrick2/ozone Historical Changelogs not available: [Errno 2] No such file or directory

10.70.42.147 - pre failed: [2015-06-25 10:46:48.885443] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
/rhs/thinbrick2/ozone Historical Changelogs not available: [Errno 2] No such file or directory

Generated output file /tmp/outo4.txt
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# vi /tmp/outo4.txt 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# gluster v stop ozone
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: ozone: success
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# glusterfind pre sesso2 ozone /tmp/outo2.txt
Post command is not run after last pre, use --regenerate-outfile
[root@dhcp43-191 ~]# glusterfind pre sesso2 ozone /tmp/outo2.txt --regenerate-outfile
Volume ozone is in stopped state
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# glusterfind post sesso2 ozone 
Session sesso2 with volume ozone updated
[root@dhcp43-191 ~]# ls /var/lib/glusterd/glusterfind/
cross3s1/ cross3s2/ cross3s3/ .keys/    sesso1/   sesso2/   sesso3/   sesso4/   sesso5/   
[root@dhcp43-191 ~]# ls /var/lib/glusterd/glusterfind/sesso2/ozone/
%2Frhs%2Fthinbrick1%2Fozone.status  %2Frhs%2Fthinbrick2%2Fozone.status  sesso2_ozone_secret.pem             sesso2_ozone_secret.pem.pub         status
[root@dhcp43-191 ~]# ls /var/lib/glusterd/glusterfind/sesso2/ozone/^C
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# glusterfind create sesso5 ozone
Session sesso5 already created
[root@dhcp43-191 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sesso3                    ozone                     2015-06-18 16:27:30      
sesso1                    ozone                     2015-06-23 20:43:34      
cross3s1                  cross3                    2015-06-23 23:30:16      
sesso5                    ozone                     2015-06-20 00:18:03      
cross3s3                  cross3                    2015-06-23 17:55:28      
cross3s2                  cross3                    2015-06-23 18:06:38      
sesso2                    ozone                     2015-06-20 00:14:05      
sesso4                    ozone                     2015-06-18 16:27:38      
[root@dhcp43-191 ~]# glusterfind create sesso6 ozone
Volume ozone is in stopped state
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sesso6                    ozone                     Session Corrupted        
sesso3                    ozone                     2015-06-18 16:27:30      
sesso1                    ozone                     2015-06-23 20:43:34      
cross3s1                  cross3                    2015-06-23 23:30:16      
sesso5                    ozone                     2015-06-20 00:18:03      
cross3s3                  cross3                    2015-06-23 17:55:28      
cross3s2                  cross3                    2015-06-23 18:06:38      
sesso2                    ozone                     2015-06-20 00:14:05      
sesso4                    ozone                     2015-06-18 16:27:38      
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# gluster v info ozone
 
Volume Name: ozone
Type: Distributed-Replicate
Volume ID: 9ef1ace8-505d-4d97-aa23-4296aa685f76
Status: Stopped
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: 10.70.43.191:/rhs/thinbrick1/ozone
Brick2: 10.70.42.202:/rhs/thinbrick1/ozone
Brick3: 10.70.43.191:/rhs/thinbrick2/ozone
Brick4: 10.70.42.202:/rhs/thinbrick2/ozone
Brick5: 10.70.42.30:/rhs/thinbrick1/ozone
Brick6: 10.70.42.147:/rhs/thinbrick1/ozone
Brick7: 10.70.42.30:/rhs/thinbrick2/ozone
Brick8: 10.70.42.147:/rhs/thinbrick2/ozone
Options Reconfigured:
performance.readdir-ahead: on
storage.build-pgfid: on
changelog.changelog: on
changelog.capture-del-path: on
[root@dhcp43-191 ~]# vi /var/log/glusterfs/glusterfind/sesso6/ozone/cli.log 
[root@dhcp43-191 ~]# less /var/log/glusterfs/glusterfind/sesso6/ozone/cli.log 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# cat /var/log/glusterfs/glusterfind/sesso6/ozone/cli.log 
[2015-06-25 16:18:49,457] INFO [main - 285:ssh_setup] - Ssh key generated /var/lib/glusterd/glusterfind/sesso6/ozone/sesso6_ozone_secret.pem
[2015-06-25 16:18:49,515] INFO [main - 307:ssh_setup] - Distributed ssh key to all nodes of Volume
[2015-06-25 16:18:49,645] INFO [main - 320:ssh_setup] - Ssh key added to authorized_keys of Volume nodes
[2015-06-25 16:18:50,754] INFO [main - 346:mode_create] - Volume option set ozone, build-pgfid on
[2015-06-25 16:18:52,304] INFO [main - 353:mode_create] - Volume option set ozone, changelog.changelog on
[2015-06-25 16:18:53,512] INFO [main - 360:mode_create] - Volume option set ozone, changelog.capture-del-path on
[2015-06-25 16:18:53,575] ERROR [utils - 152:fail] - Volume ozone is in stopped state
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# glusterfind delete sesso6 ozone6
Session sesso6 not created with volume ozone6
[root@dhcp43-191 ~]# glusterfind delete sesso6 ozon
Session sesso6 not created with volume ozon
[root@dhcp43-191 ~]# glusterfind delete sesso6 ozone
root.42.202's password: root.42.30's password: root.42.30's password: root.42.147's password: root.42.202's password: root.42.147's password: 


root.42.30's password: 

root.42.147's password: 

root.42.147's password: 


root.42.147's password: 

10.70.42.147 - delete failed: Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

Command delete failed in 10.70.42.147:/rhs/thinbrick1/ozone
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sesso3                    ozone                     2015-06-18 16:27:30      
sesso1                    ozone                     2015-06-23 20:43:34      
cross3s1                  cross3                    2015-06-23 23:30:16      
sesso5                    ozone                     2015-06-20 00:18:03      
cross3s3                  cross3                    2015-06-23 17:55:28      
cross3s2                  cross3                    2015-06-23 18:06:38      
sesso2                    ozone                     2015-06-20 00:14:05      
sesso4                    ozone                     2015-06-18 16:27:38      
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# glusterfind delete sesso5 ozone
root.42.202's password: root.42.30's password: root.42.30's password: root.42.202's password: root.42.147's password: root.42.147's password: 


root.42.30's password: 

root.42.202's password: 

root.42.147's password: 


root.42.202's password: 


10.70.42.202 - delete failed: Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

Command delete failed in 10.70.42.202:/rhs/thinbrick2/ozone
root.42.147's password: 
10.70.42.147 - delete failed: Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sesso3                    ozone                     2015-06-18 16:27:30      
sesso1                    ozone                     2015-06-23 20:43:34      
cross3s1                  cross3                    2015-06-23 23:30:16      
cross3s3                  cross3                    2015-06-23 17:55:28      
cross3s2                  cross3                    2015-06-23 18:06:38      
sesso2                    ozone                     2015-06-20 00:14:05      
sesso4                    ozone                     2015-06-18 16:27:38      
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# gluster v create nash 10.70.43.191:/rhs/thinbrick1/nash 10.70.42.202:/rhs/thinbrick1/nash 10.70.42.30:/rhs/thinbrick1/nash 10.70.42.147:/rhs/thinbrick1/nash
volume create: nash: success: please start the volume to access data
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# gluster v info nash
 
Volume Name: nash
Type: Distribute
Volume ID: 696305c8-a26c-435f-9ec7-ea7bc073f056
Status: Created
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.191:/rhs/thinbrick1/nash
Brick2: 10.70.42.202:/rhs/thinbrick1/nash
Brick3: 10.70.42.30:/rhs/thinbrick1/nash
Brick4: 10.70.42.147:/rhs/thinbrick1/nash
Options Reconfigured:
performance.readdir-ahead: on
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# glusterfind create sessn1 nash
Session sessn1 created with volume nash
[root@dhcp43-191 ~]# glusterfind pre sessn1 nash /tmp/outn1.txt
10.70.43.191 - pre failed: [2015-06-25 11:06:19.069166] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
/rhs/thinbrick1/nash Changelog register failed: [Errno 2] No such file or directory

10.70.42.30 - pre failed: [2015-06-25 11:06:19.924185] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
/rhs/thinbrick1/nash Changelog register failed: [Errno 2] No such file or directory

10.70.42.202 - pre failed: /rhs/thinbrick1/nash Changelog register failed: [Errno 2] No such file or directory

10.70.42.147 - pre failed: [2015-06-25 11:06:20.106686] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2015-06-25 11:06:20.106713] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
/rhs/thinbrick1/nash Changelog register failed: [Errno 2] No such file or directory

Generated output file /tmp/outn1.txt
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# cat /tmp/outn1.txt 
[root@dhcp43-191 ~]# glusterfind post sessn1 nash
Session sessn1 with volume nash updated
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# ls /var/lib/glusterd/glusterfind/sessn1/nash/
%2Frhs%2Fthinbrick1%2Fnash.status  sessn1_nash_secret.pem  sessn1_nash_secret.pem.pub  status
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# glusterfind delete sessn1 nash
Session sessn1 with volume nash deleted
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sesso3                    ozone                     2015-06-18 16:27:30      
sesso1                    ozone                     2015-06-23 20:43:34      
cross3s1                  cross3                    2015-06-23 23:30:16      
cross3s3                  cross3                    2015-06-23 17:55:28      
cross3s2                  cross3                    2015-06-23 18:06:38      
sesso2                    ozone                     2015-06-20 00:14:05      
sesso4                    ozone                     2015-06-18 16:27:38      
[root@dhcp43-191 ~]#

Comment 13 Sweta Anandpara 2015-07-04 05:56:15 UTC
Tested and verified this on build 3.7.1-6.

Glusterfind create fails when the volume is in a stopped/created state (i.e., not started), and so does glusterfind pre.

Glusterfind post and delete do succeed, but I don't see that as an issue that would corrupt the functionality, nor does it affect the usability of the feature.

Glusterfind list succeeds all the time (irrespective of the volume's state), which is as expected.

Moving this to fixed in 3.1 Everglades. Detailed logs are pasted below. The other issues seen in these logs are tracked by separate bugs (1224880, 1224064).

[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster  v list
gv1
slave
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster  v create vol1 10.70.43.93:/rhs/thinbrick1/vol1 10.70.42.75:/rhs/thinbrick1/vol1 10.70.43.93:/rhs/thinbrick2/vol1 10.70.42.75:/rhs/thinbrick2/vol1
volume create: vol1: failed: /rhs/thinbrick1/vol1 is already part of a volume
[root@dhcp43-93 ~]# cd /rhs/thinbrick1/
[root@dhcp43-93 thinbrick1]# ls
nash  ozone  pluto  slave  vol1
[root@dhcp43-93 thinbrick1]# rm -rf vol1/
[root@dhcp43-93 thinbrick1]# rm -rf pluto/
[root@dhcp43-93 thinbrick1]# rm -rf ozone
[root@dhcp43-93 thinbrick1]# rm -rf nash
[root@dhcp43-93 thinbrick1]# ls
slave
[root@dhcp43-93 thinbrick1]# cd
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# rpm -qa | grep glusterfs
glusterfs-client-xlators-3.7.1-6.el6rhs.x86_64
glusterfs-server-3.7.1-6.el6rhs.x86_64
glusterfs-3.7.1-6.el6rhs.x86_64
glusterfs-api-3.7.1-6.el6rhs.x86_64
glusterfs-cli-3.7.1-6.el6rhs.x86_64
glusterfs-geo-replication-3.7.1-6.el6rhs.x86_64
glusterfs-libs-3.7.1-6.el6rhs.x86_64
glusterfs-fuse-3.7.1-6.el6rhs.x86_64
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# cd /rhs/thinbrick2
[root@dhcp43-93 thinbrick2]# ls
nash  ozone  pluto  slave  vol1
[root@dhcp43-93 thinbrick2]# rm -rf vol1
[root@dhcp43-93 thinbrick2]# rm -rf pluto
[root@dhcp43-93 thinbrick2]# rm -rf nash
[root@dhcp43-93 thinbrick2]# ls
ozone  slave
[root@dhcp43-93 thinbrick2]# gluster v list
gv1
slave
[root@dhcp43-93 thinbrick2]# rm -rf ozone
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# ls -a
.  ..  slave
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# cd
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 thinbrick2]# ls
slave
[root@dhcp43-93 thinbrick2]# gluster  v create vol1 10.70.43.93:/rhs/thinbrick1/vol1 10.70.43.155:/rhs/thinbrick1/vol1 10.70.43.93:/rhs/thinbrick2/vol1 10.70.43.155:/rhs/thinbrick2/vol1
volume create: vol1: success: please start the volume to access data
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# gluster v status
Status of volume: gv1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.43.93:/rhs/b1                   49154     0          Y       13880
NFS Server on localhost                     2049      0          Y       13881
NFS Server on 10.70.43.155                  2049      0          Y       23445
 
Task Status of Volume gv1
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: slave
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.43.93:/rhs/thinbrick1/slave     49152     0          Y       13892
Brick 10.70.43.155:/rhs/thinbrick1/slave    49152     0          Y       23444
Brick 10.70.43.93:/rhs/thinbrick2/slave     49153     0          Y       13901
Brick 10.70.43.155:/rhs/thinbrick2/slave    49153     0          Y       23455
NFS Server on localhost                     2049      0          Y       13881
Self-heal Daemon on localhost               N/A       N/A        N       N/A  
NFS Server on 10.70.43.155                  2049      0          Y       23445
Self-heal Daemon on 10.70.43.155            N/A       N/A        N       N/A  
 
Task Status of Volume slave
------------------------------------------------------------------------------
There are no active volume tasks
 
Volume vol1 is not started
 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# glutser v info vol1
-bash: glutser: command not found
[root@dhcp43-93 thinbrick2]# gluster v info vol1
 
Volume Name: vol1
Type: Distribute
Volume ID: 8918e433-d903-4bb8-80c2-42a1b5a0244e
Status: Created
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.93:/rhs/thinbrick1/vol1
Brick2: 10.70.43.155:/rhs/thinbrick1/vol1
Brick3: 10.70.43.93:/rhs/thinbrick2/vol1
Brick4: 10.70.43.155:/rhs/thinbrick2/vol1
Options Reconfigured:
performance.readdir-ahead: on
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# glusterfind create sv1 vol1 
Volume vol1 is not online
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# glusterfind create fdsfds vol1
Volume vol1 is not online
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# ls /var/lib/glusterd/glusterfind/
ss1  ss2
[root@dhcp43-93 thinbrick2]# ls
slave  vol1
[root@dhcp43-93 thinbrick2]# cd
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# ls /var/log/glusterfs/glusterfind/
cli.log  fdsfds  nash  plutos1  ps1  ps2  ps3  sess21  sessn1  sessn2  sessn3  sessn4  sesso1  sesso2  sesso3  sessp1  sessp2  sessv1  sgv1  ss1  ss2  sumne  sv1  vol1s1  vol1s2  vol1s3
[root@dhcp43-93 ~]# cat /var/log/glusterfs/glusterfind/sv1/vol1/cli.log 
[2015-07-04 15:48:32,839] ERROR [utils - 152:fail] - Volume vol1 is not online
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster v start vol1
volume start: vol1: success
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind create sv1 vol1
Session sv1 created with volume vol1
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sv1                       vol1                      2015-07-04 15:50:02      
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/
ss1  ss2  sv1
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1
vol1
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1/vol1
%2Frhs%2Fthinbrick1%2Fvol1.status  %2Frhs%2Fthinbrick2%2Fvol1.status  status  sv1_vol1_secret.pem  sv1_vol1_secret.pem.pub
[root@dhcp43-93 ~]# gluster v stop vol1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: vol1: success
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster v status vol1
Volume vol1 is not started
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind create sv2 vol1
Volume vol1 is not online
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sv1                       vol1                      2015-07-04 15:50:02      
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind pre sv1 vol1 /tmp/out.txt
Volume vol1 is not online
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/
.keys/ ss1/   ss2/   sv1/   
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1/vol1/
%2Frhs%2Fthinbrick1%2Fvol1.status  %2Frhs%2Fthinbrick2%2Fvol1.status  status                             sv1_vol1_secret.pem                sv1_vol1_secret.pem.pub
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1/vol1/
%2Frhs%2Fthinbrick1%2Fvol1.status  %2Frhs%2Fthinbrick2%2Fvol1.status  status  sv1_vol1_secret.pem  sv1_vol1_secret.pem.pub
[root@dhcp43-93 ~]# glusterfind pre sv2 vol1 /tmp/out.t
Invalid session sv2
[root@dhcp43-93 ~]# glusterfind post sv1 vol1
Pre script is not run
[root@dhcp43-93 ~]# glusterfind delete sv1 vol1^C
[root@dhcp43-93 ~]# glusterfind delete sv2 vol1
Invalid session sv2
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster v start vol1
volume start: vol1: success
[root@dhcp43-93 ~]# glusterfind pre sv2 vol1 /tmp/out.txt
Invalid session sv2
[root@dhcp43-93 ~]# glusterfind pre sv1 vol1 /tmp/out.txt
Generated output file /tmp/out.txt
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sv1                       vol1                      2015-07-04 15:50:02      
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/
.keys/ ss1/   ss2/   sv1/   
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1/vol1/
%2Frhs%2Fthinbrick1%2Fvol1.status      %2Frhs%2Fthinbrick2%2Fvol1.status      status                                 sv1_vol1_secret.pem
%2Frhs%2Fthinbrick1%2Fvol1.status.pre  %2Frhs%2Fthinbrick2%2Fvol1.status.pre  status.pre                             sv1_vol1_secret.pem.pub
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1/vol1/
%2Frhs%2Fthinbrick1%2Fvol1.status      %2Frhs%2Fthinbrick2%2Fvol1.status      status      sv1_vol1_secret.pem
%2Frhs%2Fthinbrick1%2Fvol1.status.pre  %2Frhs%2Fthinbrick2%2Fvol1.status.pre  status.pre  sv1_vol1_secret.pem.pub
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind pre sv2 vol1 /tmp/out.txt
Invalid session sv2
[root@dhcp43-93 ~]# glusterfind delete sv1 vol1
root.43.155's password: root.43.155's password: 


root.43.155's password: 
root.43.155's password: 
10.70.43.155 - delete failed: Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

Command delete failed in 10.70.43.155:/rhs/thinbrick1/vol1
[root@dhcp43-93 ~]# glusterfind lsit
usage: glusterfind [-h] {pre,create,list,post,delete} ...
glusterfind: error: argument mode: invalid choice: 'lsit' (choose from 'pre', 'create', 'list', 'post', 'delete')
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# glusterfind lsit
usage: glusterfind [-h] {pre,create,list,post,delete} ...
glusterfind: error: argument mode: invalid choice: 'lsit' (choose from 'pre', 'create', 'list', 'post', 'delete')
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/
ss1  ss2
[root@dhcp43-93 ~]# ls /var/log/glusterfs/glusterfind/
cli.log  nash/    ps1/     ps3/     sessn1/  sessn3/  sesso1/  sesso3/  sessp2/  sgv1/    ss2/     sv1/     vol1s1/  vol1s3/  
fdsfds/  plutos1/ ps2/     sess21/  sessn2/  sessn4/  sesso2/  sessp1/  sessv1/  ss1/     sumne/   sv2/     vol1s2/  
[root@dhcp43-93 ~]# ls /var/log/glusterfs/glusterfind/sv1/vol1/c
changelog.1c27a488a584181d698698190ce633eae6ab4a90.log  changelog.log                                           
changelog.b85984854053ba4529aeaba8bd2c93408cb68773.log  cli.log                                                 
[root@dhcp43-93 ~]# ls /var/log/glusterfs/glusterfind/sv1/vol1/^C
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind create sv1 vol1
glutSession sv1 created with volume vol1
[root@dhcp43-93 ~]# gluster v info vol1
 
Volume Name: vol1
Type: Distribute
Volume ID: 8918e433-d903-4bb8-80c2-42a1b5a0244e
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.93:/rhs/thinbrick1/vol1
Brick2: 10.70.43.155:/rhs/thinbrick1/vol1
Brick3: 10.70.43.93:/rhs/thinbrick2/vol1
Brick4: 10.70.43.155:/rhs/thinbrick2/vol1
Options Reconfigured:
changelog.capture-del-path: on
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster v stop vol1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: vol1: success
[root@dhcp43-93 ~]# glusterfind pre sv1 vol1 /tmp/out.txt
Volume vol1 is not online
[root@dhcp43-93 ~]# glusterfind post sv1 vol1
Pre script is not run
[root@dhcp43-93 ~]# gluster v start vol1
volume start: vol1: success
[root@dhcp43-93 ~]# glusterfind pre sv1 vol1 /tmp/out.txt
Generated output file /tmp/out.txt
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster v stop vol1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: vol1: success
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster v  info vol1
 
Volume Name: vol1
Type: Distribute
Volume ID: 8918e433-d903-4bb8-80c2-42a1b5a0244e
Status: Stopped
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.93:/rhs/thinbrick1/vol1
Brick2: 10.70.43.155:/rhs/thinbrick1/vol1
Brick3: 10.70.43.93:/rhs/thinbrick2/vol1
Brick4: 10.70.43.155:/rhs/thinbrick2/vol1
Options Reconfigured:
changelog.capture-del-path: on
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind post sv1 vol1
Session sv1 with volume vol1 updated
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sv1                       vol1                      2015-07-04 15:58:10      
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1/vol1/
%2Frhs%2Fthinbrick1%2Fvol1.status  %2Frhs%2Fthinbrick2%2Fvol1.status  status                             sv1_vol1_secret.pem                sv1_vol1_secret.pem.pub
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1/vol1/^C
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# rpm -qa | grep gluster
glusterfs-client-xlators-3.7.1-6.el6rhs.x86_64
glusterfs-server-3.7.1-6.el6rhs.x86_64
glusterfs-3.7.1-6.el6rhs.x86_64
glusterfs-api-3.7.1-6.el6rhs.x86_64
glusterfs-cli-3.7.1-6.el6rhs.x86_64
glusterfs-geo-replication-3.7.1-6.el6rhs.x86_64
glusterfs-libs-3.7.1-6.el6rhs.x86_64
glusterfs-fuse-3.7.1-6.el6rhs.x86_64
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sv1                       vol1                      2015-07-04 15:58:10      
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# volume stop vol1
-bash: volume: command not found
[root@dhcp43-93 ~]# gluster v stop vol1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: vol1: failed: Volume vol1 is not in the started state
[root@dhcp43-93 ~]# gluster v info vol1
 
Volume Name: vol1
Type: Distribute
Volume ID: 8918e433-d903-4bb8-80c2-42a1b5a0244e
Status: Stopped
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.93:/rhs/thinbrick1/vol1
Brick2: 10.70.43.155:/rhs/thinbrick1/vol1
Brick3: 10.70.43.93:/rhs/thinbrick2/vol1
Brick4: 10.70.43.155:/rhs/thinbrick2/vol1
Options Reconfigured:
changelog.capture-del-path: on
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on
[root@dhcp43-93 ~]# gluster v delete vol1
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: vol1: success
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sv1                       vol1                      2015-07-04 15:58:10      
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sv1                       vol1                      2015-07-04 15:58:10      
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/
ss1  ss2  sv1
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1/vol1/
%2Frhs%2Fthinbrick1%2Fvol1.status  %2Frhs%2Fthinbrick2%2Fvol1.status  status  sv1_vol1_secret.pem  sv1_vol1_secret.pem.pub
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sv1                       vol1                      2015-07-04 15:58:10      
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster v status 
Status of volume: gv1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.43.93:/rhs/b1                   49154     0          Y       13880
NFS Server on localhost                     2049      0          Y       1162 
NFS Server on 10.70.43.155                  2049      0          Y       13427
 
Task Status of Volume gv1
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: slave
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.43.93:/rhs/thinbrick1/slave     49152     0          Y       13892
Brick 10.70.43.155:/rhs/thinbrick1/slave    49152     0          Y       23444
Brick 10.70.43.93:/rhs/thinbrick2/slave     49153     0          Y       13901
Brick 10.70.43.155:/rhs/thinbrick2/slave    49153     0          Y       23455
NFS Server on localhost                     2049      0          Y       1162 
Self-heal Daemon on localhost               N/A       N/A        N       N/A  
NFS Server on 10.70.43.155                  2049      0          Y       13427
Self-heal Daemon on 10.70.43.155            N/A       N/A        N       N/A  
 
Task Status of Volume slave
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster v status vol1
Volume vol1 does not exist
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind delete sv2 vol1
Invalid session sv2
[root@dhcp43-93 ~]# glusterfind delete sv1 vol1
Unable to get volume details
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sv1                       vol1                      2015-07-04 15:58:10      
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]#

Comment 14 errata-xmlrpc 2015-07-29 04:51:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html

