Bug 1480042 - More useful error - replace 'not optimal'
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: 3.2
Hardware: x86_64 Linux
Priority: low    Severity: medium
Target Milestone: ---
Target Release: RHGS 3.4.0
Assigned To: Ashish Pandey
QA Contact: nchilaka
Keywords: rebase
Depends On: 1480099 1480448
Blocks: 1503135

Reported: 2017-08-09 21:22 EDT by Laura Bailey
Modified: 2018-09-04 02:35 EDT
CC List: 11 users

See Also:
Fixed In Version: glusterfs-3.12.2-1
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1478964
Clones: 1480099
Environment:
Last Closed: 2018-09-04 02:34:23 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers:
Red Hat Product Errata RHSA-2018:2607 (Last Updated: 2018-09-04 02:35 EDT)

Comment 8 Atin Mukherjee 2017-08-11 03:53:13 EDT
Upstream patch: https://review.gluster.org/18014
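
A minimal sketch of the behaviour this patch targets, assuming a pool of at least three peers; host1/host2/host3 and the /bricks/bN paths are placeholders for illustration only, not taken from any setup in this bug:

# Each host holds two bricks of the same 4+2 disperse subvolume, so the
# improved validation message is expected instead of the bare "not optimal":
gluster volume create demo disperse-data 4 redundancy 2 \
    host1:/bricks/b1 host2:/bricks/b2 host3:/bricks/b3 \
    host1:/bricks/b4 host2:/bricks/b5 host3:/bricks/b6
# Expected: volume create: demo: failed: Multiple bricks of a disperse volume
# are present on the same server. This setup is not optimal. Bricks should be
# on different nodes to have best fault tolerant configuration. Use 'force'
# at the end of the command if you want to override this behavior.

# Appending 'force' overrides the check, as the verification log below shows:
gluster volume create demo disperse-data 4 redundancy 2 \
    host1:/bricks/b1 host2:/bricks/b2 host3:/bricks/b3 \
    host1:/bricks/b4 host2:/bricks/b5 host3:/bricks/b6 force
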
Comment 11 nchilaka 2018-02-16 03:47:01 EST
on_qa validation:
The implemented changes are visible and in line with expectations, hence moving to Verified.

I see the new warning message when multiple bricks of the same disperse subvolume are created on the same node, as shown below:
[root@dhcp42-216 ~]# gluster v create volum disperse-data 4 redundancy 2 10.70.42.216:/bricks/brick2/volum-b1 10.70.41.246:/bricks/brick2/volum-b2 10.70.43.57:/bricks/brick2/volum-b3 10.70.42.231:/bricks/brick2/volum-b4 10.70.43.168:/bricks/brick2/volum-b5 10.70.43.205:/bricks/brick2/volum-b6 10.70.42.216:/bricks/brick3/volum-b7 10.70.41.246:/bricks/brick3/volum-b8 10.70.43.57:/bricks/brick3/volum 10.70.42.231:/bricks/brick3/volum-b10 10.70.43.168:/bricks/brick3/volum-b11 10.70.42.216:/bricks/brick3/volum-b12
volume create: volum: failed: Multiple bricks of a disperse volume are present on the same server. This setup is not optimal. Bricks should be on different nodes to have best fault tolerant configuration. Use 'force' at the end of the command if you want to override this behavior. 




[root@dhcp42-216 ~]# gluster v create testnag disperse-data 4 redundancy 2 10.70.42.216:/bricks/brick2/testnag^C1 10.70.41.246:/bricks/brick2/disperse-b2 10.70.43.57:/bricks/brick2/disperse-b3 10.70.42.231:/bricks/brick2/disperse-b4 10.70.43.168:/bricks/brick2/disperse-b5 10.70.43.205:/bricks/brick2/disperse-b6 10.70.42.216:/bricks/brick3/disperse-b7 10.70.41.246:/bricks/brick3/disperse-b8 10.70.43.57:/bricks/brick3/disperse 10.70.42.231:/bricks/brick3/disperse-b10 10.70.43.168:/bricks/brick3/disperse-b11 10.70.43.205:/bricks/brick3/disperse-b12
[root@dhcp42-216 ~]# gluster v get all all
Option                                  Value                                   
------                                  -----                                   
cluster.server-quorum-ratio             51                                      
cluster.enable-shared-storage           disable                                 
cluster.op-version                      31301                                   
cluster.max-op-version                  31301                                   
cluster.brick-multiplex                 enable                                  
cluster.max-bricks-per-process          0                                       
cluster.localtime-logging               disable                                 
[root@dhcp42-216 ~]# gluster v create testnag disperse-data 4 redundancy 2 10.70.42.216:/bricks/brick2/testnag-b1 10.70.41.246:/bricks/brick2/testnag-b2 10.70.43.57:/bricks/brick2/testnag-b3 10.70.42.231:/bricks/brick2/testnag-b4 10.70.43.168:/bricks/brick2/testnag-b5 10.70.43.205:/bricks/brick2/testnag-b6 10.70.42.216:/bricks/brick3/testnag-b7 10.70.41.246:/bricks/brick3/testnag-b8 10.70.43.57:/bricks/brick3/testnag 10.70.42.231:/bricks/brick3/testnag-b10 10.70.43.168:/bricks/brick3/testnag-b11 10.70.43.205:/bricks/brick3/testnag-b12
volume create: testnag: success: please start the volume to access data
[root@dhcp42-216 ~]# gluster v start testnag
volume start: testnag: success

[root@dhcp42-216 ~]# gluster v create volum disperse-data 4 redundancy 2 10.70.42.216:/bricks/brick2/volum-b1 10.70.41.246:/bricks/brick2/volum-b2 10.70.43.57:/bricks/brick2/volum-b3 10.70.42.231:/bricks/brick2/volum-b4 10.70.43.168:/bricks/brick2/volum-b5 10.70.43.205:/bricks/brick2/volum-b6 10.70.42.216:/bricks/brick3/volum-b7 10.70.41.246:/bricks/brick3/volum-b8 10.70.43.57:/bricks/brick3/volum 10.70.42.231:/bricks/brick3/volum-b10 10.70.43.168:/bricks/brick3/volum-b11 10.70.42.216:/bricks/brick3/volum-b12
volume create: volum: failed: Multiple bricks of a disperse volume are present on the same server. This setup is not optimal. Bricks should be on different nodes to have best fault tolerant configuration. Use 'force' at the end of the command if you want to override this behavior. 
[root@dhcp42-216 ~]# gluster v create xylum disperse-data 4 redundancy 2 10.70.42.216:/bricks/brick2/xylum-b1 10.70.41.246:/bricks/brick2/xylum-b2 10.70.43.57:/bricks/brick2/xylum-b3 10.70.42.231:/bricks/brick2/xylum-b4 10.70.43.168:/bricks/brick2/xylum-b5 10.70.43.205:/bricks/brick2/xylum-b6 10.70.42.216:/bricks/brick3/xylum-b7 10.70.41.246:/bricks/brick3/xylum-b8 10.70.43.57:/bricks/brick3/xylum 10.70.42.231:/bricks/brick3/xylum-b10 10.70.43.168:/bricks/brick3/xylum-b11 10.70.43.205:/bricks/brick3/xylum-b12 10.70.42.216:/bricks/brick2/xantium-b1 10.70.41.246:/bricks/brick2/xantium-b2 10.70.43.57:/bricks/brick2/xantium-b3 10.70.42.231:/bricks/brick2/xantium-b4 10.70.43.168:/bricks/brick2/xantium-b5 10.70.43.205:/bricks/brick2/xantium-b6 10.70.42.216:/bricks/brick3/xantium-b7 10.70.41.246:/bricks/brick3/xantium-b8 10.70.43.57:/bricks/brick3/xantium 10.70.42.231:/bricks/brick3/xantium-b10 10.70.43.168:/bricks/brick3/xantium-b11 10.70.43.205:/bricks/brick3/xantium-b12
volume create: xylum: success: please start the volume to access data
[root@dhcp42-216 ~]# 
[root@dhcp42-216 ~]# gluster v create ertiga disperse-data 4 redundancy 2 10.70.42.216:/bricks/brick2/ertiga-b1 10.70.41.246:/bricks/brick2/ertiga-b2 10.70.43.57:/bricks/brick2/ertiga-b3 10.70.42.231:/bricks/brick2/ertiga-b4 10.70.43.168:/bricks/brick2/ertiga-b5 10.70.43.205:/bricks/brick2/ertiga-b6 10.70.42.216:/bricks/brick3/ertiga-b7 10.70.41.246:/bricks/brick3/ertiga-b8 10.70.43.57:/bricks/brick3/ertiga 10.70.42.231:/bricks/brick3/ertiga-b10 10.70.43.168:/bricks/brick3/ertiga-b11 10.70.43.205:/bricks/brick3/ertiga-b12 10.70.42.216:/bricks/brick2/swift-b1 10.70.41.246:/bricks/brick2/swift-b2 10.70.43.57:/bricks/brick2/swift-b3 10.70.42.231:/bricks/brick2/swift-b4 10.70.43.168:/bricks/brick2/swift-b5 10.70.43.205:/bricks/brick2/swift-b6 10.70.42.216:/bricks/brick3/swift-b7 10.70.41.246:/bricks/brick3/swift-b8 10.70.43.57:/bricks/brick3/swift 10.70.42.231:/bricks/brick3/swift-b10 10.70.43.168:/bricks/brick3/swift-b11 10.70.42.216:/bricks/brick3/swift-b12
volume create: ertiga: failed: Multiple bricks of a disperse volume are present on the same server. This setup is not optimal. Bricks should be on different nodes to have best fault tolerant configuration. Use 'force' at the end of the command if you want to override this behavior. 
[root@dhcp42-216 ~]# gluster v list
disperse
distrep
testnag
xylum

[root@dhcp42-216 ~]# gluster v create ertiga disperse-data 4 redundancy 2 10.70.42.216:/bricks/brick2/ertiga-b1 10.70.41.246:/bricks/brick2/ertiga-b2 10.70.43.57:/bricks/brick2/ertiga-b3 10.70.42.231:/bricks/brick2/ertiga-b4 10.70.43.168:/bricks/brick2/ertiga-b5 10.70.43.205:/bricks/brick2/ertiga-b6 10.70.42.216:/bricks/brick3/ertiga-b7 10.70.41.246:/bricks/brick3/ertiga-b8 10.70.43.57:/bricks/brick3/ertiga 10.70.42.231:/bricks/brick3/ertiga-b10 10.70.43.168:/bricks/brick3/ertiga-b11 10.70.43.205:/bricks/brick3/ertiga-b12 10.70.42.216:/bricks/brick2/swift-b1 10.70.41.246:/bricks/brick2/swift-b2 10.70.43.57:/bricks/brick2/swift-b3 10.70.42.231:/bricks/brick2/swift-b4 10.70.43.168:/bricks/brick2/swift-b5 10.70.43.205:/bricks/brick2/swift-b6 10.70.42.216:/bricks/brick3/swift-b7 10.70.41.246:/bricks/brick3/swift-b8 10.70.43.57:/bricks/brick3/swift 10.70.42.231:/bricks/brick3/swift-b10 10.70.43.168:/bricks/brick3/swift-b11 10.70.42.216:/bricks/brick3/swift-b12 force
volume create: ertiga: success: please start the volume to access data
[root@dhcp42-216 ~]# gluster v list
disperse
distrep
ertiga
testnag
xylum
[root@dhcp42-216 ~]# gluster v create opel disperse-data 4 redundancy 2 10.70.42.216:/bricks/brick2/opel-b1 10.70.41.246:/bricks/brick2/opel-b2 10.70.43.57:/bricks/brick2/opel-b3 10.70.42.231:/bricks/brick2/opel-b4 10.70.43.168:/bricks/brick2/opel-b5 10.70.43.205:/bricks/brick2/opel-b6 10.70.42.216:/bricks/brick3/opel-b7 10.70.41.246:/bricks/brick3/opel-b8 10.70.43.57:/bricks/brick3/opel 10.70.42.231:/bricks/brick3/opel-b10 10.70.43.168:/bricks/brick3/opel-b11 10.70.43.205:/bricks/brick3/opel-b12 10.70.42.216:/bricks/brick2/astra-b1 10.70.41.246:/bricks/brick2/astra-b2 10.70.43.57:/bricks/brick2/astra-b3 10.70.42.231:/bricks/brick2/astra-b4 10.70.43.168:/bricks/brick2/astra-b5 10.70.43.205:/bricks/brick2/astra-b6 10.70.42.216:/bricks/brick3/astra-b7 10.70.41.246:/bricks/brick3/astra-b8 10.70.43.57:/bricks/brick3/astra 10.70.42.231:/bricks/brick3/astra-b10 10.70.43.168:/bricks/brick3/astra-b11 10.70.42.216:/bricks/brick3/astra-b12 force
volume create: opel: success: please start the volume to access data
[root@dhcp42-216 ~]# for  i in $(gluster v list);do gluster v start $i;done
volume start: disperse: failed: Volume disperse already started
volume start: distrep: failed: Volume distrep already started
volume start: ertiga: success
volume start: opel: success
volume start: testnag: failed: Volume testnag already started
volume start: xylum: success
[root@dhcp42-216 ~]# 



[root@dhcp42-216 ~]# rpm -qa|grep gluster
glusterfs-server-3.12.2-4.el7rhgs.x86_64
glusterfs-rdma-3.12.2-4.el7rhgs.x86_64
gluster-nagios-addons-0.2.9-1.el7rhgs.x86_64
python2-gluster-3.12.2-4.el7rhgs.x86_64
glusterfs-fuse-3.12.2-4.el7rhgs.x86_64
vdsm-gluster-4.17.33-1.2.el7rhgs.noarch
glusterfs-geo-replication-3.12.2-4.el7rhgs.x86_64
glusterfs-libs-3.12.2-4.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
libvirt-daemon-driver-storage-gluster-3.9.0-12.el7.x86_64
glusterfs-cli-3.12.2-4.el7rhgs.x86_64
glusterfs-3.12.2-4.el7rhgs.x86_64
glusterfs-api-3.12.2-4.el7rhgs.x86_64
glusterfs-client-xlators-3.12.2-4.el7rhgs.x86_64
[root@dhcp42-216 ~]# cat /etc/red*
cat: /etc/redhat-access-insights: Is a directory
Red Hat Enterprise Linux Server release 7.5 Beta (Maipo)



#########################################################################
Before the fix:
[root@dhcp41-189 ~]# gluster  v create test  disperse-data 4 redundancy 2 10.70.41.189:/bricks/brick0/test 10.70.41.212:/bricks/brick0 10.70.41.220:/bricks/brick0/test 10.70.43.96:/bricks/brick0/test 10.70.43.230:/bricks/brick0/test 10.70.41.189:/bricks/brick1/test
volume create: test: failed: Multiple bricks of a disperse volume are present on the same server. This setup is not optimal. Use 'force' at the end of the command if you want to override this behavior. 

[root@dhcp41-189 ~]# rpm -qa|grep gluster
glusterfs-server-3.8.4-54.el7rhgs.x86_64
Comment 13 errata-xmlrpc 2018-09-04 02:34:23 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607
