Bug 1182369 - [RFE][HC] - glusterfs volume create/extend should fail when bricks from the same server
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: RFEs
Version: ---
Hardware: Unspecified
OS: Unspecified
Priority: medium
Target Milestone: ovirt-4.1.0-beta
Assignee: Sahina Bose
Depends On:
Blocks: Generic_Hyper_Converged_Host Gluster-HC-2
Reported: 2015-01-15 00:26 UTC by Paul Cuzner
Modified: 2017-02-15 15:07 UTC
CC List: 13 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Previously, in a hyper-converged cluster environment containing gluster and virt nodes, it was possible to create a replica set containing bricks from the same server. A warning appeared, but the action was still allowed even though it risked data or service loss. In this release, it is no longer possible to create a replica set containing multiple bricks from the same server in a hyper-converged environment.
Clone Of:
Last Closed: 2017-02-15 15:07:05 UTC
oVirt Team: Gluster
rule-engine: ovirt-4.1+
bmcclain: planning_ack+
sabose: devel_ack+
sasundar: testing_ack+


System ID Private Priority Status Summary Last Updated
oVirt gerrit 59498 0 master MERGED engine: enforce bricks are not from same server 2016-08-23 06:43:05 UTC

Description Paul Cuzner 2015-01-15 00:26:39 UTC
Description of problem:
Although the UI warns when creating a replica set containing bricks from the same server, it should actually fail the request when the nodes are both gluster and virt, i.e. hyper-converged.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. create a replicated volume by providing all bricks from the same node
2. UI provides a warning (Replicate Confirmation dialog) which the user can ignore

Actual results:
Volume can be created

Expected results:
A volume whose replica set contains multiple bricks from the same server represents a data-loss or service-loss risk in a converged use case. The UI should enforce sensible decisions to safeguard service availability.

Additional info:
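The enforcement requested here can be sketched as a small validation routine. This is an illustrative sketch only, not the actual ovirt-engine code: it assumes gluster's rule that consecutive bricks in the ordered brick list form one replica set, and that in a hyper-converged cluster no replica set may contain two bricks from the same server. All names are hypothetical.

```python
def validate_replica_sets(bricks, replica_count):
    """bricks: ordered list of (server, path) tuples, in the order they
    would be passed to 'gluster volume create'. Returns the index of the
    first replica set that reuses a server, or None if every replica set
    spans distinct servers."""
    if len(bricks) % replica_count != 0:
        raise ValueError("brick count must be a multiple of replica count")
    for i in range(0, len(bricks), replica_count):
        replica_set = bricks[i:i + replica_count]
        servers = [server for server, _path in replica_set]
        if len(set(servers)) < replica_count:
            # Two or more bricks of this set live on the same server:
            # losing that server loses the whole replica set.
            return i // replica_count
    return None

# All three bricks on one node: the case this bug wants rejected.
bad = [("node1", "/b1"), ("node1", "/b2"), ("node1", "/b3")]
# One brick per node: should be accepted.
good = [("node1", "/b1"), ("node2", "/b1"), ("node3", "/b1")]
```

With this check, `bad` is flagged at replica set 0 while `good` passes.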

Comment 1 Red Hat Bugzilla Rules Engine 2015-10-19 11:01:50 UTC
Target release should be set once a package build is known to fix an issue. Since this bug is not in MODIFIED state, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.

Comment 2 Sahina Bose 2016-03-31 14:38:40 UTC
Bricks from same server in a replica set should be blocked

Comment 3 Sandro Bonazzola 2016-05-02 10:10:26 UTC
Moving from 4.0 alpha to 4.0 beta since 4.0 alpha has been already released and bug is not ON_QA.

Comment 4 Yaniv Lavi 2016-05-23 13:26:24 UTC
oVirt 4.0 beta has been released, moving to RC milestone.

Comment 6 Emma Heftman 2017-01-22 09:55:43 UTC
Hi Sahina
Can you please confirm whether this chapter of the Administration Guide will need to be updated to support this new behaviour:  https://access.redhat.com/documentation/en/red-hat-virtualization/4.0/single/administration-guide/#sect-Cluster_Utilization

If not, should it be documented anywhere else?

Comment 7 Sahina Bose 2017-01-23 06:59:22 UTC
Doc-text looks good to me. 
The "Global Inventory" needs to be updated, but as part of Bug 1364999

Comment 8 SATHEESARAN 2017-02-08 17:42:37 UTC
Tested with RHV 4.1 Beta ( Red Hat Virtualization Manager Version: )

When trying to create a gluster replica 3 volume with bricks on the same server, the operation failed with following error message:

"Error while executing action: Cannot create Gluster Volume. Replica set contains bricks from same server. This is not supported for hyper-converged cluster."

Comment 9 George Joseph 2017-02-09 01:35:53 UTC
So in a cluster with 4 nodes, the only way to create a replica 3/arbiter 1 volume is with 12 bricks, 3 on each node.  Is this now going to be impossible?
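Regarding the 4-node scenario: gluster groups consecutive bricks in the given order into replica sets, so a 12-brick volume on 4 nodes remains creatable if the bricks are ordered round-robin across the nodes. A rough sketch under that grouping assumption (names are illustrative):

```python
# 4 nodes, 3 bricks each, ordered round-robin: n1, n2, n3, n4, n1, ...
nodes = ["n1", "n2", "n3", "n4"]
bricks = [(nodes[i % 4], f"/gluster/brick{i // 4}") for i in range(12)]

replica = 3
# Consecutive groups of `replica` bricks form the replica sets.
sets = [bricks[i:i + replica] for i in range(0, len(bricks), replica)]
# Every set of 3 lands on 3 distinct nodes, so the new check still passes.
ok = all(len({srv for srv, _ in s}) == replica for s in sets)
```

With this ordering the sets come out as (n1,n2,n3), (n4,n1,n2), (n3,n4,n1), (n2,n3,n4), each spanning distinct servers, so the volume is not blocked.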
