Bug 1121593

Summary: [gluster-cli] Better key matching logic for same keys across different domains
Product: [Community] GlusterFS
Reporter: SATHEESARAN <sasundar>
Component: glusterd
Assignee: Atin Mukherjee <amukherj>
Status: CLOSED WONTFIX
QA Contact:
Severity: medium
Docs Contact:
Priority: medium
Version: mainline
CC: bugs, kaushal, smohan
Target Milestone: ---
Keywords: Triaged
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment: gluster-cli
Last Closed: 2018-10-05 03:52:47 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Description SATHEESARAN 2014-07-21 10:10:48 UTC
Description of problem:
-----------------------
'gluster volume set' has a built-in feature of suggesting a valid 'key' close to the one the user typed incorrectly. This can be thought of as a suggestion that helps the user correct the 'key' when it was mistyped.

These keys are of the format 'domain.key' - for example, 'performance.read-ahead'.
This format allows the same key to exist in different domains.
When that happens, the auto-suggestion from the gluster CLI doesn't help the user; instead it confuses them.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
mainline

How reproducible:
-----------------
always

Steps to Reproduce:
-------------------
1. Execute 'gluster volume set <vol-name> outstanding-rpc-limit 32'

Actual results:
----------------
'outstanding-rpc-limit' is available in two different domains, 'nfs' and 'server'.
The CLI gives a meaningless auto-suggestion, suggesting the same key back again, rather than telling the user that two domains contain the same key.

Expected results:
-----------------
There should be a better and more meaningful suggestion, e.g. one that lists the fully qualified 'domain.key' candidates from each matching domain.
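For illustration, the disambiguation asked for above could be sketched as follows. This is a minimal Python sketch, not gluster's actual (C) suggestion code; the function name and option table are hypothetical.

```python
# Hypothetical sketch of improved suggestion logic for ambiguous keys.
# suggest() and KNOWN are illustrative, not part of gluster's code base.

def suggest(user_key, known_keys):
    """Build a helpful message when user_key matches keys in several domains."""
    # Full keys whose part after 'domain.' equals the bare key the user typed.
    matches = [k for k in known_keys if k.split(".", 1)[-1] == user_key]
    if len(matches) > 1:
        return ("option '%s' exists in multiple domains; "
                "did you mean one of: %s?"
                % (user_key, ", ".join(sorted(matches))))
    if len(matches) == 1:
        return "did you mean '%s'?" % matches[0]
    return "option '%s' does not exist" % user_key

KNOWN = ["nfs.outstanding-rpc-limit", "server.outstanding-rpc-limit",
         "performance.read-ahead"]
print(suggest("outstanding-rpc-limit", KNOWN))
# -> option 'outstanding-rpc-limit' exists in multiple domains; did you mean
#    one of: nfs.outstanding-rpc-limit, server.outstanding-rpc-limit?
```

With such a message, the user sees both 'nfs.outstanding-rpc-limit' and 'server.outstanding-rpc-limit' instead of having the same bare key echoed back.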

Additional info:
----------------
[Fri Jul 18 11:28:26 UTC 2014 root@10.70.37.136:~ ] # gluster volume set dv outstanding-rpc-limit 3
volume set: failed: option : outstanding-rpc-limit does not exist
Did you mean outstanding-rpc-limit?

[Fri Jul 18 11:28:33 UTC 2014 root@10.70.37.136:~ ] # gluster volume set dv server.outstanding-rpc-limit 3
volume set: success

[Fri Jul 18 11:28:47 UTC 2014 root@10.70.37.136:~ ] # gluster volume set dv nfs.outstanding-rpc-limit 3
volume set: success

[Fri Jul 18 11:28:53 UTC 2014 root@10.70.37.136:~ ] # gluster v i

Volume Name: dv
Type: Distribute
Volume ID: 6b811482-bc48-4142-b988-2b8efe7cae46
Status: Started
Snap Volume: no
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 10.70.37.136:/rhs/brick1/b1
Options Reconfigured:
nfs.outstanding-rpc-limit: 3
server.outstanding-rpc-limit: 3
performance.readdir-ahead: on
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256

[Fri Jul 18 11:28:58 UTC 2014 root@10.70.37.136:~ ] # gluster volume set dv outstanding-rpc-limit 3
volume set: failed: option : outstanding-rpc-limit does not exist
Did you mean outstanding-rpc-limit?

Comment 1 SATHEESARAN 2014-07-21 10:12:20 UTC
Thanks, Atin, for uncovering this issue.

Comment 2 Atin Mukherjee 2018-10-05 03:52:47 UTC
Not a priority for us to get this fixed in GD1. We haven't seen any complaints from users around this. Closing this bug. If you have any valid justification on why this is critical to be fixed, please feel free to reopen.