Bug 1229674 - [Backup]: 'Glusterfind list' should display an appropriate output when there are no active sessions
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfind
Version: 3.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.1.0
Assigned To: Milind Changire
QA Contact: Sweta Anandpara
Depends On:
Blocks: 1202842 1223636 1230017 1230791
 
Reported: 2015-06-09 08:05 EDT by Sweta Anandpara
Modified: 2016-09-17 11:20 EDT
CC List: 7 users

See Also:
Fixed In Version: glusterfs-3.7.1-3
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 1230017 1230791
Environment:
Last Closed: 2015-07-29 01:00:24 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments:


External Trackers
Tracker ID:   Red Hat Product Errata RHSA-2015:1495
Priority:     normal
Status:       SHIPPED_LIVE
Summary:      Important: Red Hat Gluster Storage 3.1 update
Last Updated: 2015-07-29 04:26:26 EDT

Description Sweta Anandpara 2015-06-09 08:05:57 EDT
Description of problem:
When no glusterfind sessions have been created, 'glusterfind list' simply returns to the prompt with no output, which can be misleading to the user. A brief message such as 'No glusterfind sessions active' would improve usability.
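
A minimal Python sketch of the requested behaviour, for illustration only (the function name and session layout are hypothetical, not the actual glusterfind source):

# Hypothetical sketch: print an explicit message when no sessions exist,
# otherwise print the usual table (format matches the logs further below).
def print_session_list(sessions):
    if not sessions:
        # Requested behaviour: report the empty state instead of
        # silently returning to the shell prompt.
        print("No sessions found")
        return
    print("%-25s %-25s %-25s" % ("SESSION", "VOLUME", "SESSION TIME"))
    print("-" * 75)
    for name, volume, session_time in sessions:
        print("%-25s %-25s %-25s" % (name, volume, session_time))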


Version-Release number of selected component (if applicable):
glusterfs-3.7.1-1.el6rhs.x86_64

How reproducible: Always
Comment 5 Sweta Anandpara 2015-06-23 05:54:47 EDT
Tested and verified this on the build glusterfs-3.7.1-4.el6rhs.x86_64

Deleted the existing glusterfind sessions in my two-node cluster setup and executed 'glusterfind list'. It displayed the expected message, 'No sessions found'.

Moving this bug to verified for 3.1 Everglades. The complete logs are pasted below:


#########      NODE1       ##############


[root@dhcp42-236 ~]# gluster v list
testvol
[root@dhcp42-236 ~]# gluster v info
 
Volume Name: testvol
Type: Distribute
Volume ID: 7630e680-866b-47d8-ac32-761ea36a2c4f
Status: Stopped
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.42.236:/rhs/thinbrick1/testvol
Brick2: 10.70.43.163:/rhs/thinbrick1/testvol
Brick3: 10.70.42.236:/rhs/thinbrick2/testvol
Brick4: 10.70.43.163:/rhs/thinbrick2/testvol
Options Reconfigured:
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# gluster v stop testvol
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: testvol: failed: Volume testvol is not in the started state
[root@dhcp42-236 ~]# gluster v delete testvol
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) n
[root@dhcp42-236 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sesst1                    testvol                   2015-06-10 15:47:47      
[root@dhcp42-236 ~]# glusterfind delete sesst1 testvol
root@10.70.43.163's password: root@10.70.43.163's password: 


root@10.70.43.163's password: 
Session sesst1 with volume testvol deleted
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# glusterfind list
No sessions found
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# glusterfind create
usage: glusterfind create [-h] [--debug] [--force] [--reset-session-time]
                          session volume
glusterfind create: error: too few arguments
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# gluster peer status
Number of Peers: 1

Hostname: 10.70.43.163
Uuid: a2cba06e-fb1a-4e10-994b-25d797f42e44
State: Peer in Cluster (Connected)
[root@dhcp42-236 ~]# cd /var/lib/glusterd/glusterfind/
[root@dhcp42-236 glusterfind]# ls
[root@dhcp42-236 glusterfind]# ls -a
.  ..  .keys
[root@dhcp42-236 glusterfind]# 
[root@dhcp42-236 glusterfind]# 
[root@dhcp42-236 glusterfind]# rpm -qa | grep glusterfs
glusterfs-client-xlators-3.7.1-4.el6rhs.x86_64
glusterfs-cli-3.7.1-4.el6rhs.x86_64
glusterfs-ganesha-3.7.1-4.el6rhs.x86_64
glusterfs-libs-3.7.1-4.el6rhs.x86_64
glusterfs-3.7.1-4.el6rhs.x86_64
glusterfs-api-3.7.1-4.el6rhs.x86_64
glusterfs-server-3.7.1-4.el6rhs.x86_64
glusterfs-fuse-3.7.1-4.el6rhs.x86_64
[root@dhcp42-236 glusterfind]# 


###########   NODE2   ##############

bash-4.3$ ssh root@10.70.43.163
root@10.70.43.163's password: 
Last login: Fri Jun 19 17:15:43 2015 from 10.70.1.209
[root@dhcp43-163 ~]# gluster peer status
Number of Peers: 1

Hostname: 10.70.42.236
Uuid: ba207b47-f730-47a3-8fa0-fe7dd5080fd5
State: Peer in Cluster (Connected)
[root@dhcp43-163 ~]# 
[root@dhcp43-163 ~]# 
[root@dhcp43-163 ~]# 
[root@dhcp43-163 ~]# gluster v list
testvol
[root@dhcp43-163 ~]# 
[root@dhcp43-163 ~]# 
[root@dhcp43-163 ~]# gluster v info
 
Volume Name: testvol
Type: Distribute
Volume ID: 7630e680-866b-47d8-ac32-761ea36a2c4f
Status: Stopped
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.42.236:/rhs/thinbrick1/testvol
Brick2: 10.70.43.163:/rhs/thinbrick1/testvol
Brick3: 10.70.42.236:/rhs/thinbrick2/testvol
Brick4: 10.70.43.163:/rhs/thinbrick2/testvol
Options Reconfigured:
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on
[root@dhcp43-163 ~]# glusterfind list
No sessions found
[root@dhcp43-163 ~]# 
[root@dhcp43-163 ~]# ls -l /var/lib/glusterd/glusterfind/
total 0
[root@dhcp43-163 ~]# ls -l /var/lib/glusterd/glusterfind/^C
[root@dhcp43-163 ~]# 
[root@dhcp43-163 ~]#
Comment 6 Sweta Anandpara 2015-06-23 05:56:51 EDT
The issue of being prompted for the peer node's password during 'glusterfind delete' is tracked in bug 1234213.
Comment 7 errata-xmlrpc 2015-07-29 01:00:24 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html
