Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 922491

Summary: [Scalability] VM creation time degrades during bulk creation of VMs
Product: Red Hat Enterprise Virtualization Manager
Reporter: vvyazmin <vvyazmin>
Component: vdsm
Assignee: Nobody's working on this, feel free to take it <nobody>
Status: CLOSED WONTFIX
QA Contact: vvyazmin <vvyazmin>
Severity: high
Docs Contact:
Priority: unspecified
Version: 3.2.0
CC: abaron, acanan, amureini, bazulay, hateya, iheim, jkt, lpeer
Target Milestone: ---
Keywords: Triaged
Target Release: 3.4.0
Hardware: x86_64
OS: Linux
Whiteboard: storage
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-09-09 04:59:19 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
## Time implementation (flags: none)

Description vvyazmin@redhat.com 2013-03-17 13:08:12 UTC
Created attachment 711376 [details]
## Time implementation

Description of problem:
VM creation time degrades during bulk creation of VMs.

Version-Release number of selected component (if applicable):

Test done on 3.2 - SF10 environment:
RHEVM: rhevm-3.2.0-10.14.beta1.el6ev.noarch    
VDSM: vdsm-4.10.2-11.0.el6ev.x86_64
LIBVIRT: libvirt-0.10.2-18.el6.x86_64
QEMU & KVM: qemu-kvm-rhev-0.12.1.2-2.355.el6_4.2.x86_64
SANLOCK: sanlock-2.6-2.el6.x86_64

and on the RHEVM 3.1.3 - SI27.3 environment:
RHEVM: rhevm-3.1.0-50.el6ev.noarch
VDSM: vdsm-4.10.2-1.6.el6.x86_64
LIBVIRT: libvirt-0.10.2-18.el6.x86_64
QEMU & KVM: qemu-kvm-rhev-0.12.1.2-2.355.el6_4.1.x86_64
SANLOCK: sanlock-2.6-2.el6.x86_64

RHEVM Environment:
RHEVM (Build 3.2 – SF10): installed on a physical machine (4 physical hosts)
PostgreSQL: remote DB, installed on another physical machine
Storage: EMC and XIO, both connected via an FC switch (8 Gbit)

Run scenarios:
1. Create a pool of 1000 VMs (automatic pool) with OS installed (RHEL 3.2) and a 20 GB thin-provisioned disk. Test run via the UI.
2. Create 1000 VMs from 10 different templates in parallel (MAX_PARALLEL=10) with OS installed (RHEL 3.2) and a 1 GB preallocated disk. Test run via the Python SDK.
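The second scenario can be sketched as a small timing harness. This is a minimal sketch, not the actual test script: `create_vm` is a hypothetical stand-in for the real SDK call that creates one VM from a template, and MAX_PARALLEL mirrors the value from the scenario above.

```python
import time
from concurrent.futures import ThreadPoolExecutor

MAX_PARALLEL = 10   # matches MAX_PARALLEL=10 from the scenario above
TOTAL_VMS = 1000

def create_vm(index):
    """Hypothetical stand-in for the real SDK call that creates one VM
    from a template; replace the body with the actual API request."""
    time.sleep(0.001)  # simulate the creation request
    return index

def timed_create(index):
    # Record how long each individual creation takes, so that per-object
    # degradation (e.g. ~2 s early vs. ~59 s near n == 1000) is visible.
    start = time.monotonic()
    create_vm(index)
    return index, time.monotonic() - start

with ThreadPoolExecutor(max_workers=MAX_PARALLEL) as pool:
    durations = dict(pool.map(timed_create, range(TOTAL_VMS)))

# Compare the first and last batches to quantify any slowdown.
first = sum(durations[i] for i in range(10)) / 10
last = sum(durations[i] for i in range(TOTAL_VMS - 10, TOTAL_VMS)) / 10
print(f"avg first 10: {first:.3f}s, avg last 10: {last:.3f}s")
```

With a real `create_vm`, a growing gap between `first` and `last` reproduces the degradation described below.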

How reproducible:
100%

Steps to Reproduce:
1. Create 1000 VMs using each of the two ways described in "Run scenarios" above.
  
Actual results:
The object being created is a disk, i.e. an LV.

During creation of the first objects, deployment time is ~2 sec per object. As n approaches 1000, deployment time grows to ~59 sec per object. The test was repeated 6 times (with different RHEVM versions, storage vendors, and run scenarios) with the same results. A file with a graph is attached. Total deployment time: ~4 hours.
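Assuming per-object creation times have been collected (for example, parsed from engine.log timestamps), the degradation reported above can be summarized with a simple check. The durations below are illustrative, generated to match the reported ~2 s to ~59 s trend, not measured data:

```python
# Illustrative per-object creation times (seconds): grows linearly from
# ~2 s for the first objects toward ~59 s near n == 1000, as reported.
durations = [2 + (59 - 2) * i / 999 for i in range(1000)]

def slowdown(durations, window=50):
    """Ratio of the average creation time of the last `window` objects
    to that of the first `window` objects (1.0 means no degradation)."""
    head = sum(durations[:window]) / window
    tail = sum(durations[-window:]) / window
    return tail / head

print(f"slowdown factor: {slowdown(durations):.1f}x")
```

Run against real measurements, a slowdown factor well above 1.0 confirms that per-object creation time degrades with the number of existing objects.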

Expected results:
1. Improve creation-time performance as N approaches 1000 objects.
2. Prevent degradation of object-creation time in the system.
3. Identify and remove the bottleneck in the VDSM server.

Additional info:
In my scenarios, no influence of the DB on the performance of creating objects on storage was found.

/var/log/ovirt-engine/engine.log

/var/log/vdsm/vdsm.log