Bug 1480099 - More useful error - replace 'not optimal'
Summary: More useful error - replace 'not optimal'
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Ashish Pandey
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1480042 1480448
 
Reported: 2017-08-10 07:23 UTC by Ashish Pandey
Modified: 2017-12-08 17:37 UTC
CC List: 10 users

Fixed In Version: glusterfs-3.13.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1480042
: 1480448
Environment:
Last Closed: 2017-12-08 17:37:20 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Ashish Pandey 2017-08-10 07:24:25 UTC
When creating a 4+2 dispersed volume as shown below without appending 'force' to the command, the command returns an error saying "This setup is not optimal".

Presumably this is because there are multiple bricks in a volume stored on the same server, but 'optimal' has no distinct meaning. It would be more useful to say WHY this setup isn't generally considered a good idea, and give the user some idea of what would need to be corrected to be 'optimal' rather than just saying that it isn't optimal.


Example (from documentation) below:

# gluster volume create testvol disperse-data 4 redundancy 2 transport tcp \
rhgs3-1:/mnt/bricks/brick1/data \
rhgs3-1:/mnt/bricks/brick2/data \
rhgs3-2:/mnt/bricks/brick1/data \
rhgs3-2:/mnt/bricks/brick2/data \
rhgs3-3:/mnt/bricks/brick1/data \
rhgs3-3:/mnt/bricks/brick2/data

volume create: testvol: failed: Multiple bricks of a disperse volume are present on the same server. This setup is not optimal. Use 'force' at the end of the command if you want to override this behavior.
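
For comparison, a layout that avoids the warning places each of the six bricks on a separate server, so that no single server failure takes out more than one brick of the 4+2 set. A sketch of such a command, assuming three additional (hypothetical) hosts rhgs3-4 through rhgs3-6 are part of the trusted pool:

# gluster volume create testvol disperse-data 4 redundancy 2 transport tcp \
rhgs3-1:/mnt/bricks/brick1/data \
rhgs3-2:/mnt/bricks/brick1/data \
rhgs3-3:/mnt/bricks/brick1/data \
rhgs3-4:/mnt/bricks/brick1/data \
rhgs3-5:/mnt/bricks/brick1/data \
rhgs3-6:/mnt/bricks/brick1/data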

Comment 2 Ashish Pandey 2017-08-10 07:24:58 UTC
After having a discussion with Laura, I am going to make the following change to the command error message:

[root@apandey glusterfs]# gluster v create test disperse 6 redundancy 2 apandey:/home/apandey/bricks/gluster/test-nn{1..6}
volume create: test: failed: Multiple bricks of a disperse volume are present on the same server. This setup is not optimal. Bricks should be on different nodes to have best fault tolerant configuration. Use 'force' at the end of the command if you want to override this behavior.

Let me know your thoughts on this. It will also go through the review process and may differ a little bit.

Comment 3 Worker Ant 2017-08-10 07:54:51 UTC
REVIEW: https://review.gluster.org/18014 (mgmt/glusterd: Provide more information in command message) posted (#1) for review on master by Ashish Pandey (aspandey)

Comment 4 Worker Ant 2017-08-10 14:41:46 UTC
COMMIT: https://review.gluster.org/18014 committed in master by Atin Mukherjee (amukherj) 
------
commit cfdcdd1b1fea3f30d9131dd36afab6efeef2bee0
Author: Ashish Pandey <aspandey>
Date:   Thu Aug 10 12:56:32 2017 +0530

    mgmt/glusterd: Provide more information in command message
    
    Problem:
    When more than one bricks are present on the same node,
    while creating a volume, we get a warning message that
    the setup is not optimal. We need to add more information
    in this error/warning.
    
    Solution:
    Add following line in current message.
    Bricks should be on different nodes to have best fault
    tolerant configuration.
    
    Change-Id: Ica72bd6e68dff7e41c37617f3b775a981fa40c69
    BUG: 1480099
    Signed-off-by: Ashish Pandey <aspandey>
    Reviewed-on: https://review.gluster.org/18014
    CentOS-regression: Gluster Build System <jenkins.org>
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Atin Mukherjee <amukherj>
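
The patch only extends the message text; the condition it reports is the same host appearing more than once in the brick list. As an illustrative standalone check (a shell pipeline, not part of glusterd), you can spot such duplicates before running the create command:

# printf '%s\n' \
rhgs3-1:/mnt/bricks/brick1/data \
rhgs3-1:/mnt/bricks/brick2/data \
rhgs3-2:/mnt/bricks/brick1/data | cut -d: -f1 | sort | uniq -d
rhgs3-1

Any host printed by 'uniq -d' holds more than one brick and would trigger the warning.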

Comment 5 Shyamsundar 2017-12-08 17:37:20 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.13.0, please open a new bug report.

glusterfs-3.13.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-December/000087.html
[2] https://www.gluster.org/pipermail/gluster-users/

