+++ This bug was initially created as a clone of Bug #1641592 +++

Description of problem:
In recent performance runs executed on RHHI using fio, a considerable gap was observed between the sequential write throughput of a sparse raw vDisk attached to a VM and that of a preallocated vDisk attached to a VM. This was tested in both VDO and non-VDO scenarios:

1. Sequential write performance on non-VDO RHHI:
   Sparse: 278 MB/s
   Preallocated: 573 MB/s
   Diff: 106%

2. Sequential write performance on VDO RHHI:
   Sparse: 189 MB/s
   Preallocated: 286 MB/s
   Diff: 51%

Based on these statistics, preallocated should be set as the default vDisk option for virtual machines provisioned on RHHI.
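For reference, a sequential write run of this kind can be reproduced with fio roughly as sketched below. The exact fio job used in the original RHHI runs is not given in this report, so the block size, file size, I/O engine, and target path are illustrative assumptions only:

import subprocess

# Minimal sketch of a sequential write fio job driven from Python.
# All job parameters below are assumptions; the original report does
# not state the fio options that were used.
fio_cmd = [
    "fio",
    "--name=seqwrite",
    "--rw=write",                         # sequential write workload
    "--bs=1M",                            # assumed block size
    "--size=4G",                          # assumed file size
    "--ioengine=libaio",
    "--direct=1",                         # bypass the page cache
    "--numjobs=1",
    "--filename=/mnt/vmstore/testfile",   # placeholder path on the vDisk under test
]

# Run the same job once against a sparse and once against a preallocated
# vDisk, then compare the write bandwidth fio reports.
subprocess.run(fio_cmd, check=True)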
Sahina, who should take it? Someone from my team or yours?
(In reply to Tal Nisan from comment #1)
> Sahina, who should take it? Someone from my team or yours?

We can take it - wanted to check if you have any inputs on changing the default to pre-allocated?
(In reply to Sahina Bose from comment #2)
> (In reply to Tal Nisan from comment #1)
> > Sahina, who should take it? Someone from my team or yours?
>
> We can take it - wanted to check if you have any inputs on changing the
> default to pre-allocated?

No, sounds fine
Kaustav, can you set preallocated as the default when creating disks on a gluster storage domain?
Yes, seems straightforward.
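For illustration only: on a gluster (file-based) storage domain, 'preallocated' corresponds to a raw, non-sparse disk. A minimal sketch of creating such a disk through the oVirt Python SDK (ovirtsdk4) is shown below; the engine URL, credentials, disk name, size, and storage domain name are placeholders, and this shows the resulting disk attributes rather than the engine-side default change itself:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details -- not taken from this bug report.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

disks_service = connection.system_service().disks_service()

# A preallocated disk on a file-based (gluster) storage domain is raw
# and non-sparse; sparse=True would be the thin-provisioned alternative.
disk = disks_service.add(
    disk=types.Disk(
        name='vm_disk_prealloc',                 # placeholder name
        format=types.DiskFormat.RAW,
        sparse=False,                            # preallocated
        provisioned_size=10 * 2**30,             # placeholder size: 10 GiB
        storage_domains=[types.StorageDomain(name='gluster_sd')],
    ),
)

connection.close()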
This bug has not been marked as blocker for oVirt 4.3.0. Since we are releasing it tomorrow, January 29th, this bug has been re-targeted to 4.3.1.
oVirt 4.3.1 has already been released, so moving to oVirt 4.3.2.
Verified with ovirt-engine-4.3.3:
1. A Glusterfs storage domain is created
2. VMs are created, and when creating the disks, the allocation policy is 'preallocated'
This bugzilla is included in the oVirt 4.3.3 release, published on April 16th 2019. Since the problem described in this bug report should be resolved in the oVirt 4.3.3 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.
In relation to bug 1704782, can we reevaluate the performance situation? In bug 1704782 it was mentioned that the performance issues observed in this bug cannot be reproduced.

The idea behind the reevaluation is:
1) Reevaluate the gluster options in the gluster virt group
2) If performance with thin-provisioned disks is acceptable, the default policy for gluster can be switched to 'thin provision' instead of 'preallocated'
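For reference, a minimal sketch of applying and inspecting the gluster virt option group mentioned in (1), driving the gluster CLI from Python; the volume name 'vmstore' is a placeholder, not taken from this report:

import subprocess

VOLUME = "vmstore"  # placeholder volume name, not from this report

def gluster(*args):
    """Run a gluster CLI command and return its stdout."""
    return subprocess.run(
        ["gluster", *args], check=True, capture_output=True, text=True
    ).stdout

# Apply the virt option group (shipped as /var/lib/glusterd/groups/virt)
# to the volume backing the storage domain.
print(gluster("volume", "set", VOLUME, "group", "virt"))

# List the options currently in effect, e.g. to confirm sharding settings,
# before comparing thin-provisioned vs preallocated performance.
print(gluster("volume", "get", VOLUME, "all"))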
(In reply to Strahil Nikolov from comment #11)
> In relation to bug 1704782, can we reevaluate the performance situation?
> In bug 1704782 it was mentioned that the performance issues observed in
> this bug cannot be reproduced.
>
> The idea behind the reevaluation is:
> 1) Reevaluate the gluster options in the gluster virt group
> 2) If performance with thin-provisioned disks is acceptable, the default
> policy for gluster can be switched to 'thin provision' instead of
> 'preallocated'

Can you open a new bug to evaluate? This one has been verified and closed.
(In reply to Sahina Bose from comment #12)
> (In reply to Strahil Nikolov from comment #11)
> > In relation to bug 1704782, can we reevaluate the performance situation?
> > In bug 1704782 it was mentioned that the performance issues observed in
> > this bug cannot be reproduced.
> >
> > The idea behind the reevaluation is:
> > 1) Reevaluate the gluster options in the gluster virt group
> > 2) If performance with thin-provisioned disks is acceptable, the default
> > policy for gluster can be switched to 'thin provision' instead of
> > 'preallocated'
>
> Can you open a new bug to evaluate? This one has been verified and closed.

Also, please look at https://gluster.github.io/devblog/gluster-3-12-vs-6-a-performance-oriented-overview. With gluster sharding and preallocated disks, performance is expected to improve, as we avoid the network call that would otherwise be needed to create the shards.
Ok. That makes sense. Let's leave it as is. We can still dedupe via VDO.