Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1885286

Summary: serial execution of playbook on a single host
Product: [oVirt] ovirt-engine
Component: BLL.Virt
Version: 4.4.2
Hardware: All
OS: All
Status: CLOSED DUPLICATE
Severity: medium
Priority: unspecified
Target Milestone: ovirt-4.4.4
Target Release: ---
Reporter: Tommaso <tommaso>
Assignee: Liran Rotenberg <lrotenbe>
QA Contact: meital avital <mavital>
Docs Contact:
CC: ahadas, bugs
Flags: pm-rhel: ovirt-4.4+
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-10-20 14:50:43 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Virt
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Tommaso 2020-10-05 14:20:02 UTC
Description of problem:

Serial execution of the ovirt-ova-export playbook, also when running on different datacenters/hosts.



Steps to Reproduce:
Execute multiple "export to OVA" operations at the same time.

Actual results:

The exports are executed serially.

Expected results:

The exports are executed in parallel.

Additional info:
as in https://bugzilla.redhat.com/show_bug.cgi?id=1855782:

If you run export_vm_as_ova.py for vm1 and then for vm2 after one minute, they both start initially as jobs, but the "qemu-img convert" process for the second VM doesn't actually start until the first one has completed.
E.g. in the engine events you see:
Starting to export Vm c8server as a Virtual Appliance 10/2/20 2:50:23 PM
Starting to export Vm c8client as a Virtual Appliance 10/2/20 2:51:41 PM
but only when the first one completes do you see on the host that the qemu-img process for the second one starts.

I see that the derived Ansible job on the engine, named "/usr/share/ovirt-engine/playbooks/ovirt-ova-export.yml" and executed as this command:

ovirt     9534  1642  6 14:50 ?        00:00:42 /usr/bin/python2 /usr/bin/ansible-playbook --ssh-common-args=-F /var/lib/ovirt-engine/.ssh/config -v --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa --inventory=/tmp/ansible-inventory1162818200209914533 --extra-vars=target_directory="/save_ova/base/dump" --extra-vars=entity_type="vm" --extra-vars=ova_name="c8server.ova" --extra-vars=ovirt_ova_pack_ovf=" . . . ovf definition . . ." /usr/share/ovirt-engine/playbooks/ovirt-ova-export.yml

is only present for the first VM, so could it be that the lock is perhaps at the DB level?
E.g. I see:

engine=# select * from job order by start_time desc;
-[ RECORD 1 ]---------+-----------------------------------------------------------------------------------
job_id                | 1f304799-5f86-422d-af60-46fd52047858
action_type           | ExportVmToOva
description           | Exporting VM c8client as an OVA to /save_ova/base/dump/c8client.ova on Host ov301
status                | STARTED
owner_id              | d1429f1f-2bea-4f60-bd2e-5bed997716ed
visible               | t
start_time            | 2020-10-02 14:51:37.423+02
end_time              |
last_update_time      | 2020-10-02 14:51:45.031+02
correlation_id        | e4276504-c2ca-4454-996e-86a61cc265db
is_external           | f
is_auto_cleared       | t
engine_session_seq_id | 1247
-[ RECORD 2 ]---------+-----------------------------------------------------------------------------------
job_id                | ea76fe36-7274-4603-870e-3f142e6e268b
action_type           | ExportVmToOva
description           | Exporting VM c8server as an OVA to /save_ova/base/dump/c8server.ova on Host ov301
status                | STARTED
owner_id              | d1429f1f-2bea-4f60-bd2e-5bed997716ed
visible               | t
start_time            | 2020-10-02 14:50:19.911+02
end_time              |
last_update_time      | 2020-10-02 14:50:43.191+02
correlation_id        | 333df0a9-ccfe-4034-9f67-e1cec49a4468
is_external           | f
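To make the suspected behavior concrete, here is a minimal Python sketch (purely illustrative, not oVirt engine code): if every OVA export had to take one engine-wide lock before running "qemu-img convert", two exports would serialize even when targeting different hosts, which matches the event timestamps and the job table above.

```python
import threading
import time

# Hypothetical single engine-wide lock guarding all OVA exports.
global_export_lock = threading.Lock()
completion_order = []

def export_ova(vm_name, duration):
    # Each export waits for the shared lock before its conversion
    # step (the stand-in for "qemu-img convert") can begin.
    with global_export_lock:
        time.sleep(duration)
        completion_order.append(vm_name)

start = time.monotonic()
jobs = [threading.Thread(target=export_ova, args=("c8server", 0.2)),
        threading.Thread(target=export_ova, args=("c8client", 0.2))]
for j in jobs:
    j.start()
for j in jobs:
    j.join()
elapsed = time.monotonic() - start

# Serial execution: total time is roughly the sum of both exports,
# even though the jobs were submitted concurrently.
print(completion_order, round(elapsed, 1))
```

With a per-host (or per-VM) lock instead of a global one, the two exports would overlap and the total time would be close to that of a single export.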

Comment 1 RHEL Program Management 2020-10-05 15:34:19 UTC
The documentation text flag should only be set after 'doc text' field is provided. Please provide the documentation text and set the flag to '?' again.

Comment 2 Liran Rotenberg 2020-10-20 14:50:43 UTC
Detailed comment in the original bug.

*** This bug has been marked as a duplicate of bug 1855782 ***