Bug 1312623 - "gluster vol get volname user.metadata-text" command fails with "volume get option: failed: Did you mean cluster.metadata-self-heal?"
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.7.8
Hardware: All
OS: All
Priority: medium
Severity: medium
Assigned To: Atin Mukherjee
Keywords: Triaged
Depends On: 1297442 1297638
Blocks:
Reported: 2016-02-27 23:40 EST by Atin Mukherjee
Modified: 2016-04-19 03:22 EDT (History)
3 users

See Also:
Fixed In Version: glusterfs-3.7.9
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1297638
Environment:
Last Closed: 2016-04-19 03:22:46 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Atin Mukherjee 2016-02-27 23:40:08 EST
+++ This bug was initially created as a clone of Bug #1297638 +++

+++ This bug was initially created as a clone of Bug #1297442 +++

Description of problem:
The "gluster vol get volname user.metadata-text" command is not working. It fails with the error:

"volume get option: failed: Did you mean cluster.metadata-self-heal?"

Version-Release number of selected component (if applicable):

mainline

How reproducible:

Always


Steps to Reproduce:

1. Create a volume (any topology).

2. Set a volume description using:

# gluster vol set <volname> user.metadata-text 'VOLUME DESCRIPTION'

3. Try to read the description using:

# gluster vol get <volname> user.metadata-text

or

# gluster vol get <volname> all | grep user

The volume description is shown in the "gluster volume info" output.

Actual results:

The command fails with the following error:

"volume get option: failed: Did you mean cluster.metadata-self-heal?"

Expected results:

The command should return the value of user.metadata-text.

--- Additional comment from Atin Mukherjee on 2016-01-11 23:16:37 EST ---

The user.* bucket is a hooks-friendly option namespace, and the volume get framework only looks up options in the volume map entry (VME) table. This is why volume get never captures it. I'd need to understand the reason for having user.* under hooks and the motivation/intention behind it; based on that we'd update the status accordingly.
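The failure mode described above can be sketched as follows. This is a minimal, hypothetical Python model (the real implementation is C code in glusterd; all names here are invented for illustration): lookups are resolved against the VME table only, so a user.* key stored with the volume misses, and the closest VME entry is offered as a "did you mean" suggestion.

```python
import difflib

# Hypothetical, simplified model of glusterd's option lookup (not real code).

# Volume map entry (VME) table: the only place 'volume get' looks.
VME_TABLE = {
    "cluster.metadata-self-heal": "on",
    "performance.write-behind": "on",
}

# user.* options live with the volume in the hooks-friendly bucket,
# outside the VME table, so 'volume get' never sees them.
volume_user_opts = {"user.metadata-text": "VOLUME DESCRIPTION"}

def volume_get(key):
    if key in VME_TABLE:
        return VME_TABLE[key]
    # Key absent from the VME table: fail with a "did you mean" suggestion
    # drawn from the closest-matching VME key.
    close = difflib.get_close_matches(key, VME_TABLE, n=1, cutoff=0.3)
    hint = f" Did you mean {close[0]}?" if close else ""
    raise LookupError(f"volume get option: failed:{hint}")

try:
    volume_get("user.metadata-text")
except LookupError as e:
    print(e)  # fails even though the option is set on the volume
```

The option value sits in `volume_user_opts` the whole time; only the lookup path is wrong, which is exactly what the patch below corrects.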

--- Additional comment from Vijay Bellur on 2016-01-12 00:16:01 EST ---

REVIEW: http://review.gluster.org/13222 (glusterd: display usr.* options in volume get) posted (#1) for review on master by Atin Mukherjee (amukherj@redhat.com)

--- Additional comment from Vijay Bellur on 2016-01-12 06:29:24 EST ---

REVIEW: http://review.gluster.org/13222 (glusterd: display usr.* options in volume get) posted (#2) for review on master by Atin Mukherjee (amukherj@redhat.com)

--- Additional comment from Vijay Bellur on 2016-01-13 05:49:21 EST ---

REVIEW: http://review.gluster.org/13222 (glusterd: display usr.* options in volume get) posted (#3) for review on master by Atin Mukherjee (amukherj@redhat.com)

--- Additional comment from Vijay Bellur on 2016-01-26 10:02:45 EST ---

REVIEW: http://review.gluster.org/13222 (glusterd: display usr.* options in volume get) posted (#4) for review on master by Atin Mukherjee (amukherj@redhat.com)

--- Additional comment from Vijay Bellur on 2016-02-25 04:03:02 EST ---

REVIEW: http://review.gluster.org/13222 (glusterd: display user.* options in volume get) posted (#5) for review on master by Atin Mukherjee (amukherj@redhat.com)

--- Additional comment from Vijay Bellur on 2016-02-27 23:39:15 EST ---

COMMIT: http://review.gluster.org/13222 committed in master by Atin Mukherjee (amukherj@redhat.com) 
------
commit 18612a7b2e005d76529f8c2a6149a6506f6daae6
Author: Atin Mukherjee <amukherj@redhat.com>
Date:   Tue Jan 12 10:41:46 2016 +0530

    glusterd: display user.* options in volume get
    
    As of now volume get framework doesn't consider user.* xattrs to be displayed.
    This patch is to include them in volume get output. Please note these options will be
    only shown for a given volume name, 'all' as a volume name wouldn't consider them displaying.
    
    Change-Id: Ifc19e89c612e9254d760deaaef50bc1b4bfe02ce
    BUG: 1297638
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-on: http://review.gluster.org/13222
    CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
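The behavior of the fix in the commit above can be sketched like this. Again a hypothetical, simplified Python model (invented names, not the glusterd C code): when a specific volume name is given, that volume's user.* options are appended to the output; with 'all' as the volume name they are still omitted, matching the commit message.

```python
# Hypothetical sketch of the fixed 'volume get' output assembly.

VME_TABLE = {"cluster.metadata-self-heal": "on"}  # options from the VME table

# Per-volume user.* options (hooks-friendly bucket), keyed by volume name.
volumes = {
    "testvol": {"user.metadata-text": "VOLUME DESCRIPTION"},
}

def volume_get_all(volname):
    opts = dict(VME_TABLE)      # VME options are listed as before
    if volname != "all":        # user.* shown only for a named volume
        opts.update(volumes.get(volname, {}))
    return opts

print(volume_get_all("testvol"))  # now includes user.metadata-text
print(volume_get_all("all"))      # VME options only
```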
Comment 1 Vijay Bellur 2016-02-27 23:41:03 EST
REVIEW: http://review.gluster.org/13534 (glusterd: display user.* options in volume get) posted (#1) for review on release-3.7 by Atin Mukherjee (amukherj@redhat.com)
Comment 2 Vijay Bellur 2016-02-29 02:17:43 EST
COMMIT: http://review.gluster.org/13534 committed in release-3.7 by Atin Mukherjee (amukherj@redhat.com) 
------
commit bc0bade84aefecfe1671c1360fd4632d26192988
Author: Atin Mukherjee <amukherj@redhat.com>
Date:   Tue Jan 12 10:41:46 2016 +0530

    glusterd: display user.* options in volume get
    
    Backport of http://review.gluster.org/13222
    
    As of now volume get framework doesn't consider user.* xattrs to be displayed.
    This patch is to include them in volume get output. Please note these options will be
    only shown for a given volume name, 'all' as a volume name wouldn't consider them displaying.
    
    Change-Id: Ifc19e89c612e9254d760deaaef50bc1b4bfe02ce
    BUG: 1312623
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-on: http://review.gluster.org/13222
    CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
    Reviewed-on: http://review.gluster.org/13534
Comment 3 Kaushal 2016-04-19 03:22:46 EDT
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.7.9, please open a new bug report.

glusterfs-3.7.9 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-users/2016-March/025922.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
