Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1528987

Summary: [RFE] Package as a qcow2 image
Product: [oVirt] ovirt-appliance
Component: Packaging.rpm
Version: 4.1
Status: CLOSED WONTFIX
Severity: medium
Priority: high
Keywords: FutureFeature
Type: Bug
Doc Type: Enhancement
Reporter: Yedidyah Bar David <didi>
Assignee: Yuval Turgeman <yturgema>
QA Contact: Nikolai Sednev <nsednev>
CC: bugs, didi, fdeutsch, lsurette, lsvaty, lveyde, mavital, rbarry, sbonazzo, stirabos, ykaul, ylavi
oVirt Team: Integration
Flags: rule-engine: planning_ack?, rule-engine: devel_ack?, rule-engine: testing_ack?
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: ---
Clone Of: 1274065
Bug Blocks: 1274065
Last Closed: 2018-06-06 12:35:41 UTC

Description Yedidyah Bar David 2017-12-25 14:35:38 UTC
+++ This bug was initially created as a clone of Bug #1274065 +++

Description of problem:
When using the appliance flow in hosted-engine setup, extracting the appliance takes a large portion of the setup time.

Version-Release number of selected component (if applicable):
3.6

How reproducible:
always

Steps to Reproduce:
1. Use the appliance flow in hosted-engine setup
2.
3.

Actual results:
Extraction process takes long

Expected results:
Extraction process is quick

Additional info:

--- Additional comment from Fabian Deutsch on 2015-10-22 14:35:22 IDT ---

Raising the priority, because the flow could be nice, but is really slowed down due to this bug.

Ryan, could you quantify the slow down compared to the whole installation duration?

--- Additional comment from Simone Tiraboschi on 2015-10-22 14:42:03 IDT ---

(In reply to Fabian Deutsch from comment #1)
> Raising the priority, because the flow could be nice, but is really slowed
> down due to this bug.
> 
> Ryan, could you quantify the slow down compared to the whole installation
> duration?

It's not really a bug, it's an RFE: it could be faster but it's working.
The issue is that Python does not handle sparse files efficiently, so the real gain depends only on how sparse the image is.

--- Additional comment from Fabian Deutsch on 2015-10-22 14:47:25 IDT ---

Agreed, it's an RFE.

--- Additional comment from Yaniv Kaul on 2015-10-22 22:10:01 IDT ---

Are we running virt-sparsify and virt-sysprep on the image before packing it?
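
For reference, running those tools before packaging could look roughly like this sketch (the image filename is a placeholder, not necessarily the actual build artifact):

 # Scrub logs, SSH host keys, machine-id, etc. from the appliance disk
 virt-sysprep -a ovirt-engine-appliance.qcow2
 # Punch holes for zeroed and deleted blocks so the image stays as sparse as possible
 virt-sparsify --in-place ovirt-engine-appliance.qcow2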

--- Additional comment from Fabian Deutsch on 2017-01-27 11:51:28 IST ---

Yes, IIRC.

The files should be sparse, but as Simone says: Python does not handle those efficiently when extracting tars.
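
A toy illustration of the effect described here (not the actual appliance build): GNU tar can record the holes when archiving with --sparse and restore them on extraction, while an extractor that writes every byte ends up fully allocating the image instead.

 # Create a mostly-empty 1G sparse file and archive it with hole detection
 truncate -s 1G disk.img
 tar -cSf disk.tar disk.img
 # Extract and compare allocated size vs. apparent size
 mkdir out && tar -xf disk.tar -C out
 du -h out/disk.img                   # allocated blocks: close to zero
 du -h --apparent-size out/disk.img   # logical size: 1.0G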

--- Additional comment from Yaniv Kaul on 2017-11-16 15:42:09 IST ---

Simone, does the new installation flow in 4.2 make a difference here?

--- Additional comment from Simone Tiraboschi on 2017-11-16 16:14:21 IST ---

(In reply to Yaniv Kaul from comment #6)
> Simone, does the new installation flow in 4.2 make a difference here?

We are using the system tar via Ansible with the --sparse option:
https://github.com/oVirt/ovirt-hosted-engine-setup/blob/master/src/ansible/bootstrap_local_vm.yml#L36

On my test system with a 7200 rpm disk it takes about 50 seconds to extract a 2.4G sparse qcow2 image from an 800M ova file:

 2017-11-16 12:14:51,291+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.v2_playbook_on_task_start:164 TASK [Extract appliance to local vm dir]
 2017-11-16 12:15:41,756+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.v2_runner_on_ok:120 changed: [localhost]
 
 [root@c74he20171031h1 ~]# ls -lh /usr/share/ovirt-engine-appliance/ovirt-engine-appliance-4.2-20171114.1.el7.centos.ova
 -rw-r--r--. 1 root root 808M 14 nov 14.59 /usr/share/ovirt-engine-appliance/ovirt-engine-appliance-4.2-20171114.1.el7.centos.ova
 [root@c74he20171031h1 ~]# du -h /var/tmp/localvm/images/d1746233-6cf3-42b8-9efb-b481954a8d3f/d7cdd432-9bc6-47f6-a6d8-73340e59647b
 2,4G	/var/tmp/localvm/images/d1746233-6cf3-42b8-9efb-b481954a8d3f/d7cdd432-9bc6-47f6-a6d8-73340e59647b
 [root@c74he20171031h1 ~]# file /var/tmp/localvm/images/d1746233-6cf3-42b8-9efb-b481954a8d3f/d7cdd432-9bc6-47f6-a6d8-73340e59647b
 /var/tmp/localvm/images/d1746233-6cf3-42b8-9efb-b481954a8d3f/d7cdd432-9bc6-47f6-a6d8-73340e59647b: QEMU QCOW Image (v3), 53687091200 bytes

If we want to improve this, I think we should evaluate dropping the RPM(OVA(QCOW2)) packaging and just shipping the qcow2 disk in an rpm, using it in place with a snapshot to revert to on issues.
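
As a rough sketch of the "use it in place with a snapshot to revert" idea, qemu-img offers two common options (filenames here are illustrative only):

 # Option 1: create an internal snapshot in the shipped image, roll back on failure
 qemu-img snapshot -c clean ovirt-engine-appliance.qcow2
 qemu-img snapshot -a clean ovirt-engine-appliance.qcow2   # revert to "clean"
 # Option 2: leave the shipped image untouched and run the VM on a thin overlay
 qemu-img create -f qcow2 -b ovirt-engine-appliance.qcow2 -F qcow2 engine-overlay.qcow2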

--- Additional comment from Yaniv Kaul on 2017-11-16 20:38:54 IST ---

Alternatively, we should use the RHEL cloud image and virt-customize it on the spot with the latest Engine, etc.
It would take a lot more time, but it would reduce the initial download size and ensure we use the latest and greatest.
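
A rough sketch of that alternative flow with virt-customize (the cloud image name, repo URL and package are illustrative, not actual build inputs):

 # Turn a stock cloud image into an engine appliance on the installing host
 virt-customize -a CentOS-7-x86_64-GenericCloud.qcow2 \
     --run-command 'yum -y install http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm' \
     --install ovirt-engine \
     --selinux-relabel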

Comment 1 Yedidyah Bar David 2017-12-28 07:22:55 UTC
Moving to POST, because there is a (mostly working) patch, but I am not sure this is worth it; see bug 1274065 comment 11.

Comment 2 Sandro Bonazzola 2018-04-06 07:55:50 UTC
With the qcow packaging:

 ovirt-engine-appliance.qcow2                         2.48 GB
 oVirt-Engine-Appliance-CentOS-x86_64-7-20180319.ova  828.65 MB
 
 ovirt-engine-appliance-4.3-20180319.1.el7.centos.noarch.rpm        808.87 MB
 ovirt-engine-appliance-4.3-20180319.1.el7.centos.src.rpm           1.60 GB
 ovirt-engine-appliance-qcow2-4.3-20180319.1.el7.centos.noarch.rpm  689.62 MB

So at least in terms of download time, xz-compressing the uncompressed qcow2 image in the RPM wins over wrapping the tar.gz-compressed ova file in an RPM.
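
One way to sanity-check that comparison locally, as a rough sketch (the qcow2 name is taken from the listing above, the OVF name is a placeholder):

 # Path 1: xz on the raw sparse qcow2, roughly what the rpm payload compression does
 xz -k ovirt-engine-appliance.qcow2
 # Path 2: gzip'd tar of OVF + qcow2, i.e. the ova
 tar -czSf appliance.ova appliance.ovf ovirt-engine-appliance.qcow2
 ls -lh ovirt-engine-appliance.qcow2.xz appliance.ova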

Simone, if I remember correctly we also need the ova xml for the deployment right?

Comment 3 Simone Tiraboschi 2018-04-06 08:25:31 UTC
(In reply to Sandro Bonazzola from comment #2)
> Simone, if I remember correctly we also need the ova xml for the deployment
> right?

Yes, we are reading the default CPU and memory values from there, but we can simply ship those values out of band if we want.

Comment 4 Yedidyah Bar David 2018-04-09 07:25:05 UTC
(In reply to Sandro Bonazzola from comment #2)
> With the qcow packaging:
> 
>  ovirt-engine-appliance.qcow2                         2.48 GB
>  oVirt-Engine-Appliance-CentOS-x86_64-7-20180319.ova  828.65 MB
>  
>  ovirt-engine-appliance-4.3-20180319.1.el7.centos.noarch.rpm        808.87 MB
>  ovirt-engine-appliance-4.3-20180319.1.el7.centos.src.rpm           1.60 GB
>  ovirt-engine-appliance-qcow2-4.3-20180319.1.el7.centos.noarch.rpm  689.62 MB
> 
> So at least in terms of download time, xz-compressing the uncompressed
> qcow2 image in the RPM wins over wrapping the tar.gz-compressed ova file
> in an RPM.

Did you also compare times?

When I did, 'yum install' of the qcow rpm took longer than 'yum install' of the ova rpm plus unpacking it, combined. Not sure why.

I talked about this with Ido, and he said he still prefers it this way: it matters less if the initial 'yum install' takes somewhat longer, because people already expect it to take long, and we should instead optimize the time after the deployment starts. I agree this makes sense.

> 
> Simone, if I remember correctly we also need the ova xml for the deployment
> right?

I already talked with Yuval about this. I am not sure we reached any final conclusion, but it should not be too hard to ship this in a separate file.

Comment 5 Yaniv Lavi 2018-06-06 12:35:41 UTC
This doesn't seem to have the desired effect on performance. Closing.