Bug 1224043 - [Backup]: Incorrect error message displayed when glusterfind post is run with invalid volume name
Summary: [Backup]: Incorrect error message displayed when glusterfind post is run with invalid volume name
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfind
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.1.0
Assignee: Milind Changire
QA Contact: Sweta Anandpara
URL:
Whiteboard:
Depends On: 1224046
Blocks: 1202842 1223636
 
Reported: 2015-05-22 05:41 UTC by Sweta Anandpara
Modified: 2016-09-17 15:19 UTC (History)

Fixed In Version: glusterfs-3.7.1-3
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-07-29 04:45:14 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:1495 0 normal SHIPPED_LIVE Important: Red Hat Gluster Storage 3.1 update 2015-07-29 08:26:26 UTC

Description Sweta Anandpara 2015-05-22 05:41:06 UTC
Description of problem:
When the glusterfind post command is run with an incorrect session name, it correctly reports that the session name is invalid. Similarly, when any command is given an incorrect or non-existent volume name, it should report that the volume name is invalid, or something like 'session not found with <sessionName> and <volumename>'. Instead, it currently prints 'Pre script is not run', which is misleading.

Version-Release number of selected component (if applicable):
glusterfs-3.7.0-2.el6rhs.x86_64

How reproducible: Always


[root@dhcp43-140 ~]# 
[root@dhcp43-140 ~]# glusterfind post 
usage: glusterfind post [-h] [--debug] session volume
glusterfind post: error: too few arguments
[root@dhcp43-140 ~]# glusterfind post sessi vol1
Invalid session sessi
[root@dhcp43-140 ~]# glusterfind post sess vol11
Pre script is not run
[root@dhcp43-140 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sess                      vol1                      2015-05-19 15:23:06      
[root@dhcp43-140 ~]# 

Expected: 'Invalid volume name vol11', OR
'No entry found with session sess and volume vol11'
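The fixed behavior (shown in comment 5) distinguishes three cases in order: unknown session, session not created with the given volume, and pre not yet run. A minimal sketch of that validation order, assuming sessions are stored as per-session directories containing one subdirectory per volume with a status file written by `glusterfind pre` (the layout and paths here are hypothetical, for illustration only, not glusterfind's actual on-disk format):

```python
import os

def validate_post(session, volume, session_dir="/var/lib/glusterd/glusterfind"):
    """Return the error message `glusterfind post` should print, or None if OK.

    Hypothetical layout assumed by this sketch:
        <session_dir>/<session>/<volume>/status   -- written by `glusterfind pre`
    """
    session_path = os.path.join(session_dir, session)
    if not os.path.isdir(session_path):
        # Session directory missing: the session was never created at all.
        return "Invalid session %s" % session
    volume_path = os.path.join(session_path, volume)
    if not os.path.isdir(volume_path):
        # Session exists, but it was created against a different volume.
        return "Session %s not created with volume %s" % (session, volume)
    if not os.path.exists(os.path.join(volume_path, "status")):
        # Session and volume match, but pre has not produced its status file.
        return "Pre script is not run"
    return None
```

The point of the fix is simply checking the volume-level path before the pre-status check, so a bad volume name no longer falls through to the generic 'Pre script is not run' message.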

Comment 5 Sweta Anandpara 2015-06-25 09:01:52 UTC
Tested and verified this on the build glusterfs-3.7.1-4.el6rhs.x86_64

Pasted below are the logs. Regression run in and around this fix is updated here: https://polarion.engineering.redhat.com/polarion/testrun-attachment/RHG3/glusterfs-3_7_1_3_RHEL6_7_FUSE/RHG3-5400_Logs_6.7_3.7.1-3_output_file_validation.odt

Moving this to VERIFIED in 3.1 Everglades.

[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sesso3                    ozone                     2015-06-18 16:27:30      
sesso1                    ozone                     2015-06-25 18:16:52      
cross3s1                  cross3                    2015-06-25 18:31:01      
cross3s3                  cross3                    2015-06-23 17:55:28      
cross3s2                  cross3                    2015-06-25 18:30:24      
sesso2                    ozone                     2015-06-20 00:14:05      
sesso4                    ozone                     2015-06-18 16:27:38      
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# glusterfind post cross3s4 cross3
Invalid session cross3s4
[root@dhcp43-191 ~]# glusterfind post cross3s cross34
Invalid session cross3s
[root@dhcp43-191 ~]# glusterfind post cross3s3 cross34
Session cross3s3 not created with volume cross34
[root@dhcp43-191 ~]# glusterfind post cross3s3 cross3
Pre script is not run
[root@dhcp43-191 ~]# glusterfind post cross3s2 cross3
Pre script is not run
[root@dhcp43-191 ~]# glusterfind post cross3s1 cross3
Session cross3s1 with volume cross3 updated
[root@dhcp43-191 ~]# glusterfind post @!%^ cross3
glusterfind post @^ cross3
Invalid session @^
[root@dhcp43-191 ~]# glusterfind post @*$%^ cross3
Invalid session @*$%^
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# glusterfind post 123 cross3
Invalid session 123
[root@dhcp43-191 ~]# glusterfind post RS cross3
Invalid session RS
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# rpm -qa | grep glusterfs
glusterfs-libs-3.7.1-4.el6rhs.x86_64
glusterfs-api-3.7.1-4.el6rhs.x86_64
glusterfs-client-xlators-3.7.1-4.el6rhs.x86_64
glusterfs-fuse-3.7.1-4.el6rhs.x86_64
glusterfs-cli-3.7.1-4.el6rhs.x86_64
glusterfs-3.7.1-4.el6rhs.x86_64
glusterfs-server-3.7.1-4.el6rhs.x86_64
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# gluster v info
 
Volume Name: cross3
Type: Distributed-Replicate
Volume ID: 81de5a1c-24ac-44fa-a9ce-7691d9b308a0
Status: Started
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.43.191:/rhs/thinbrick1/cross3
Brick2: 10.70.42.202:/rhs/thinbrick1/cross3
Brick3: 10.70.42.30:/rhs/thinbrick1/cross3
Brick4: 10.70.43.191:/rhs/thinbrick2/cross3
Brick5: 10.70.42.202:/rhs/thinbrick2/cross3
Brick6: 10.70.42.30:/rhs/thinbrick2/cross3
Options Reconfigured:
changelog.capture-del-path: on
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on
 
Volume Name: gluster_shared_storage
Type: Replicate
Volume ID: ced3ec30-654b-4bf5-956b-9e99bc51d445
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.43.191:/rhs/brick1/gss
Brick2: 10.70.42.202:/rhs/brick1/gss
Brick3: 10.70.42.30:/rhs/brick1/gss
Options Reconfigured:
performance.readdir-ahead: on
 
Volume Name: nash
Type: Distribute
Volume ID: 696305c8-a26c-435f-9ec7-ea7bc073f056
Status: Created
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.191:/rhs/thinbrick1/nash
Brick2: 10.70.42.202:/rhs/thinbrick1/nash
Brick3: 10.70.42.30:/rhs/thinbrick1/nash
Brick4: 10.70.42.147:/rhs/thinbrick1/nash
Options Reconfigured:
changelog.capture-del-path: on
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on
 
Volume Name: ozone
Type: Distributed-Replicate
Volume ID: 9ef1ace8-505d-4d97-aa23-4296aa685f76
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: 10.70.43.191:/rhs/thinbrick1/ozone
Brick2: 10.70.42.202:/rhs/thinbrick1/ozone
Brick3: 10.70.43.191:/rhs/thinbrick2/ozone
Brick4: 10.70.42.202:/rhs/thinbrick2/ozone
Brick5: 10.70.42.30:/rhs/thinbrick1/ozone
Brick6: 10.70.42.147:/rhs/thinbrick1/ozone
Brick7: 10.70.42.30:/rhs/thinbrick2/ozone
Brick8: 10.70.42.147:/rhs/thinbrick2/ozone
Options Reconfigured:
performance.readdir-ahead: on
storage.build-pgfid: on
changelog.changelog: on
changelog.capture-del-path: on
[root@dhcp43-191 ~]#

Comment 6 errata-xmlrpc 2015-07-29 04:45:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html

