Bug 1229674
| Summary: | [Backup]: 'Glusterfind list' should display an appropriate output when there are no active sessions | |||
|---|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Sweta Anandpara <sanandpa> | |
| Component: | glusterfind | Assignee: | Milind Changire <mchangir> | |
| Status: | CLOSED ERRATA | QA Contact: | Sweta Anandpara <sanandpa> | |
| Severity: | medium | Docs Contact: | ||
| Priority: | medium | |||
| Version: | rhgs-3.1 | CC: | asrivast, avishwan, khiremat, mchangir, rhs-bugs, storage-qa-internal, vagarwal | |
| Target Milestone: | --- | |||
| Target Release: | RHGS 3.1.0 | |||
| Hardware: | Unspecified | |||
| OS: | Unspecified | |||
| Whiteboard: | ||||
| Fixed In Version: | glusterfs-3.7.1-3 | Doc Type: | Bug Fix | |
| Doc Text: | Story Points: | --- | ||
| Clone Of: | ||||
| : | 1230017 1230791 (view as bug list) | Environment: | ||
| Last Closed: | 2015-07-29 05:00:24 UTC | Type: | Bug | |
| Regression: | --- | Mount Type: | --- | |
| Documentation: | --- | CRM: | ||
| Verified Versions: | Category: | --- | ||
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | ||
| Cloudforms Team: | --- | Target Upstream Version: | ||
| Embargoed: | ||||
| Bug Depends On: | ||||
| Bug Blocks: | 1202842, 1223636, 1230017, 1230791 | |||
Description
Sweta Anandpara
2015-06-09 12:05:57 UTC
Tested and verified this on the build glusterfs-3.7.1-4.el6rhs.x86_64
Deleted the existing glusterfind sessions in my two-node cluster setup and executed 'glusterfind list'. It displayed the expected message, 'No sessions found'.
Moving this bug to verified in 3.1 Everglades. Pasted below are the complete logs:
######### NODE1 ##############
[root@dhcp42-236 ~]# gluster v list
testvol
[root@dhcp42-236 ~]# gluster v info
Volume Name: testvol
Type: Distribute
Volume ID: 7630e680-866b-47d8-ac32-761ea36a2c4f
Status: Stopped
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.42.236:/rhs/thinbrick1/testvol
Brick2: 10.70.43.163:/rhs/thinbrick1/testvol
Brick3: 10.70.42.236:/rhs/thinbrick2/testvol
Brick4: 10.70.43.163:/rhs/thinbrick2/testvol
Options Reconfigured:
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on
[root@dhcp42-236 ~]#
[root@dhcp42-236 ~]#
[root@dhcp42-236 ~]# gluster v stop testvol
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: testvol: failed: Volume testvol is not in the started state
[root@dhcp42-236 ~]# gluster v delete testvol
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) n
[root@dhcp42-236 ~]# glusterfind list
SESSION VOLUME SESSION TIME
---------------------------------------------------------------------------
sesst1 testvol 2015-06-10 15:47:47
[root@dhcp42-236 ~]# glusterfind delete sesst1 testvol
root@10.70.43.163's password:
root@10.70.43.163's password:
root@10.70.43.163's password:
Session sesst1 with volume testvol deleted
[root@dhcp42-236 ~]#
[root@dhcp42-236 ~]#
[root@dhcp42-236 ~]# glusterfind list
No sessions found
[root@dhcp42-236 ~]#
[root@dhcp42-236 ~]#
[root@dhcp42-236 ~]#
[root@dhcp42-236 ~]# glusterfind create
usage: glusterfind create [-h] [--debug] [--force] [--reset-session-time]
session volume
glusterfind create: error: too few arguments
[root@dhcp42-236 ~]#
[root@dhcp42-236 ~]# gluster peer status
Number of Peers: 1
Hostname: 10.70.43.163
Uuid: a2cba06e-fb1a-4e10-994b-25d797f42e44
State: Peer in Cluster (Connected)
[root@dhcp42-236 ~]# cd /var/lib/glusterd/glusterfind/
[root@dhcp42-236 glusterfind]# ls
[root@dhcp42-236 glusterfind]# ls -a
. .. .keys
[root@dhcp42-236 glusterfind]#
[root@dhcp42-236 glusterfind]#
[root@dhcp42-236 glusterfind]# rpm -qa | grep glusterfs
glusterfs-client-xlators-3.7.1-4.el6rhs.x86_64
glusterfs-cli-3.7.1-4.el6rhs.x86_64
glusterfs-ganesha-3.7.1-4.el6rhs.x86_64
glusterfs-libs-3.7.1-4.el6rhs.x86_64
glusterfs-3.7.1-4.el6rhs.x86_64
glusterfs-api-3.7.1-4.el6rhs.x86_64
glusterfs-server-3.7.1-4.el6rhs.x86_64
glusterfs-fuse-3.7.1-4.el6rhs.x86_64
[root@dhcp42-236 glusterfind]#
########### NODE2 ##############
bash-4.3$ ssh root@10.70.43.163
root@10.70.43.163's password:
Last login: Fri Jun 19 17:15:43 2015 from 10.70.1.209
[root@dhcp43-163 ~]# gluster peer status
Number of Peers: 1
Hostname: 10.70.42.236
Uuid: ba207b47-f730-47a3-8fa0-fe7dd5080fd5
State: Peer in Cluster (Connected)
[root@dhcp43-163 ~]#
[root@dhcp43-163 ~]#
[root@dhcp43-163 ~]#
[root@dhcp43-163 ~]# gluster v list
testvol
[root@dhcp43-163 ~]#
[root@dhcp43-163 ~]#
[root@dhcp43-163 ~]# gluster v info
Volume Name: testvol
Type: Distribute
Volume ID: 7630e680-866b-47d8-ac32-761ea36a2c4f
Status: Stopped
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.42.236:/rhs/thinbrick1/testvol
Brick2: 10.70.43.163:/rhs/thinbrick1/testvol
Brick3: 10.70.42.236:/rhs/thinbrick2/testvol
Brick4: 10.70.43.163:/rhs/thinbrick2/testvol
Options Reconfigured:
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on
[root@dhcp43-163 ~]# glusterfind list
No sessions found
[root@dhcp43-163 ~]#
[root@dhcp43-163 ~]# ls -l /var/lib/glusterd/glusterfind/
total 0
[root@dhcp43-163 ~]# ls -l /var/lib/glusterd/glusterfind/^C
[root@dhcp43-163 ~]#
[root@dhcp43-163 ~]#
The issue of the peer node's password being prompted on 'glusterfind delete' is tracked separately in bug 1234213.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html