Bug 1644159 - Set Preallocated disk to default option in HC environments
Summary: Set Preallocated disk to default option in HC environments
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 4.2.6
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium (1 vote)
Target Milestone: ovirt-4.3.3
Target Release: ---
Assignee: Kaustav Majumder
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks: 1641592
 
Reported: 2018-10-30 06:21 UTC by Sahina Bose
Modified: 2019-10-24 03:08 UTC
CC List: 12 users

Fixed In Version: ovirt-engine-4.3.3.1
Clone Of: 1641592
Environment:
Last Closed: 2019-04-16 13:58:21 UTC
oVirt Team: Gluster
Embargoed:
rule-engine: ovirt-4.3+
pm-rhel: blocker+
godas: devel_ack+




Links
System        ID      Branch            Status  Summary                                                                     Last Updated
oVirt gerrit  96282   master            MERGED  webadmin: Setting Preallocated disk to default option in HC environments   2019-03-22 09:09:55 UTC
oVirt gerrit  98750   ovirt-engine-4.3  MERGED  webadmin: Setting Preallocated disk to default option in HC environments.  2019-03-24 10:54:26 UTC
oVirt gerrit  100063  master            MERGED  webadmin: set Gluster disk default volume type to preallocated             2019-05-20 08:11:52 UTC
oVirt gerrit  100180  ovirt-engine-4.3  MERGED  webadmin: set Gluster disk default volume type to preallocated             2019-05-21 06:56:54 UTC

Description Sahina Bose 2018-10-30 06:21:56 UTC
+++ This bug was initially created as a clone of Bug #1641592 +++

Description of problem:

In recent performance runs executed on RHHI using fio, a considerable gap has been observed between the sequential write throughput obtained on a sparse raw vDisk attached to a VM and that obtained on a preallocated vDisk.

This has been tested in both VDO and non-VDO scenarios:

1. Difference in performance of sequential write on Non-VDO RHHI:
 
Sparse: 278 MB/s
Preallocated: 573 MB/s
Diff: 106% (relative to sparse)

2. Difference in performance of sequential write on VDO RHHI:
 
Sparse: 189 MB/s
Preallocated: 286 MB/s
Diff: 51%


On the basis of these statistics, the preallocated vDisk should be set as the default vDisk option for Virtual Machines provisioned on RHHI.

Comment 1 Tal Nisan 2018-11-01 15:08:35 UTC
Sahina, who should take it? Someone from my team or yours?

Comment 2 Sahina Bose 2018-11-02 08:26:29 UTC
(In reply to Tal Nisan from comment #1)
> Sahina, who should take it? Someone from my team or yours?

We can take it - wanted to check if you have any inputs on changing the default to pre-allocated?

Comment 3 Tal Nisan 2018-11-04 15:08:32 UTC
(In reply to Sahina Bose from comment #2)
> (In reply to Tal Nisan from comment #1)
> > Sahina, who should take it? Someone from my team or yours?
> 
> We can take it - wanted to check if you have any inputs on changing the
> default to pre-allocated?

No, sounds fine

Comment 4 Sahina Bose 2018-11-29 07:13:15 UTC
Kaustav, can you set preallocated as the default when creating disks on a gluster storage domain?

Comment 5 Kaustav Majumder 2018-11-29 07:27:26 UTC
Yes, seems straightforward.
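
For illustration of what the gerrit patches linked above do: the webadmin change defaults the disk allocation policy to preallocated when the target storage domain is a GlusterFS domain. Below is a minimal sketch of that idea in Java; the class, enum, and method names are hypothetical and do not reflect the actual ovirt-engine webadmin code.

// Minimal illustrative sketch only. All names here (DiskAllocationDefaults,
// StorageType, AllocationPolicy, defaultFor) are hypothetical and are not the
// real ovirt-engine classes.
public final class DiskAllocationDefaults {

    // Storage domain types relevant to this example.
    public enum StorageType { NFS, ISCSI, FCP, POSIXFS, GLUSTERFS }

    // Allocation policies offered when creating a new virtual disk.
    public enum AllocationPolicy { THIN_PROVISION, PREALLOCATED }

    // Pick the default allocation policy for a new disk. On GlusterFS
    // (hyperconverged) domains, default to PREALLOCATED, since sequential-write
    // throughput on sparse raw disks was measured to be considerably lower
    // (see the fio numbers in the description).
    public static AllocationPolicy defaultFor(StorageType domainType) {
        return domainType == StorageType.GLUSTERFS
                ? AllocationPolicy.PREALLOCATED
                : AllocationPolicy.THIN_PROVISION;
    }

    public static void main(String[] args) {
        System.out.println(defaultFor(StorageType.GLUSTERFS)); // PREALLOCATED
        System.out.println(defaultFor(StorageType.NFS));       // THIN_PROVISION
    }
}

The intent is only to change the initial selection for gluster-backed domains; users can still pick thin provisioning explicitly when creating the disk.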

Comment 6 Sandro Bonazzola 2019-01-28 09:34:27 UTC
This bug has not been marked as blocker for oVirt 4.3.0.
Since we are releasing it tomorrow, January 29th, this bug has been re-targeted to 4.3.1.

Comment 7 Gobinda Das 2019-02-27 11:06:38 UTC
oVirt 4.3.1 has already been released, so moving to ovirt-4.3.2.

Comment 9 SATHEESARAN 2019-03-29 19:07:07 UTC
Verified with ovirt-engine-4.3.3:

1. A GlusterFS storage domain is created.
2. VMs are created, and when creating their disks, the default allocation policy is 'preallocated'.

Comment 10 Sandro Bonazzola 2019-04-16 13:58:21 UTC
This bug is included in the oVirt 4.3.3 release, published on April 16th 2019.

Since the problem described in this bug report should be resolved in the oVirt 4.3.3 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

Comment 11 Strahil Nikolov 2019-06-02 08:58:49 UTC
In relation to bug 1704782, can we reevaluate the situation with the performance?
In bug 1704782 it was mentioned that the performance issues observed in this bug could not be reproduced.

The idea behind the reevaluation is:
1) Reevaluate the gluster options in the gluster virt group.
2) If performance with thin-provisioned disks is acceptable, then the default gluster policy can be switched to 'thin provision' instead of 'preallocated'.

Comment 12 Sahina Bose 2019-07-09 09:16:04 UTC
(In reply to Strahil Nikolov from comment #11)
> In relation to bug 1704782, can we reevaluate the situation with the
> performance?
> In bug 1704782 it was mentioned that the performance issues observed in this
> bug could not be reproduced.
> 
> The idea behind the reevaluation is:
> 1) Reevaluate the gluster options in the gluster virt group.
> 2) If performance with thin-provisioned disks is acceptable, then the
> default gluster policy can be switched to 'thin provision' instead of
> 'preallocated'.

Can you open a new bug to evaluate? This one has been verified and closed.

Comment 13 Sahina Bose 2019-07-09 09:19:30 UTC
(In reply to Sahina Bose from comment #12)
> [...]
> Can you open a new bug to evaluate? This one has been verified and closed.

Also, please look at https://gluster.github.io/devblog/gluster-3-12-vs-6-a-performance-oriented-overview.
With gluster sharding and preallocated disks, performance is expected to improve, since preallocation avoids the extra network call that would otherwise be needed to create the shards.

Comment 14 Strahil Nikolov 2019-10-24 03:08:23 UTC
Ok. That makes sense. Let's leave it as is.
We can still dedupe via VDO.

