Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 915520

Summary: Oz: Package update - Add qcow2 support
Product: Red Hat OpenStack
Reporter: Ian McLeod <imcleod>
Component: oz
Assignee: Ian McLeod <imcleod>
Status: CLOSED ERRATA
QA Contact: Kashyap Chamarthy <kchamart>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: unspecified
CC: apevec, clalancette, jhenner, kchamart
Target Milestone: snapshot5
Keywords: Triaged
Target Release: 2.1
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: oz-0.9.0-4.el6
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Cloned to: 915521 (view as bug list)
Environment:
Last Closed: 2013-04-04 18:00:12 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 915521
Bug Blocks:
Attachments (flags: none):
  Backport/cherry pick of upstream patch
  SPEC update
  oz-install stdout which indicates the disk image being created is qcow2

Description Ian McLeod 2013-02-25 23:40:18 UTC
Description of problem:


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.
  
Actual results:


Expected results:


Additional info:

Comment 1 Ian McLeod 2013-02-25 23:40:56 UTC
Created attachment 702554 [details]
Backport/cherry pick of upstream patch

Comment 2 Ian McLeod 2013-02-25 23:41:50 UTC
Created attachment 702555 [details]
SPEC update

Note that this change also removes the dep for the python parted module/binding.

Comment 4 Jaroslav Henner 2013-02-26 11:34:21 UTC
I am not familiar with oz. I guess we should just test whether we can create a qcow2 image with oz and whether we can still create the other format (raw).

Comment 7 Kashyap Chamarthy 2013-03-11 11:13:08 UTC
Here's a simple script to create a guest with oz -- https://bugzilla.redhat.com/attachment.cgi?id=697671

Related bz - https://bugzilla.redhat.com/show_bug.cgi?id=896108

Comment 8 Kashyap Chamarthy 2013-03-11 11:17:48 UTC
This can be easily tested by editing the oz config, specifically the image_type config directive:
#-----------------------------------#
$ cat /etc/oz/oz.cfg 
[paths]
output_dir = /var/lib/libvirt/images
data_dir = /var/lib/oz
screenshot_dir = /var/lib/oz/screenshots

[libvirt]
uri = qemu:///system

# this can be 'raw' or 'qcow2'
image_type = qcow2
# type = kvm
# bridge_name = virbr0
# cpus = 1
# memory = 1024

[cache]
original_media = yes
modified_media = no
jeos = no
#-----------------------------------#
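Not part of the bug report: a minimal sketch of how a tool could read the image_type directive from this config with Python's configparser (Oz's actual parsing code may differ; the fallback to 'raw' is an assumption for illustration).

```python
# Sketch: read image_type from an oz.cfg-style file. Hypothetical, not Oz's code.
import configparser

OZ_CFG = """
[paths]
output_dir = /var/lib/libvirt/images

[libvirt]
uri = qemu:///system
image_type = qcow2
"""

cfg = configparser.ConfigParser()
cfg.read_string(OZ_CFG)

# Assume 'raw' as the default when the directive is absent (older configs omit it).
image_type = cfg.get("libvirt", "image_type", fallback="raw")
print(image_type)  # qcow2
```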

Comment 9 Kashyap Chamarthy 2013-03-11 13:55:55 UTC
NOTE: There appears to be a bug in this newer version of Oz: if the machine already has a 'default' libvirt storage pool running, Oz fails to create a guest, because it tries to create a new pool with the same name ('default') and a conflict arises.

# rpm -q oz
oz-0.9.0-3.el6.noarch

For instance, I have this:
---------
[root@interceptor oz-test-qcow2]# virsh pool-list
Name                 State      Autostart 
-----------------------------------------
default              active     yes       

[root@interceptor oz-test-qcow2]#
---------
[root@interceptor oz-test-qcow2]# virsh pool-info default
Name:           default
UUID:           a7757e97-86f8-1e06-6f14-5d8406fa32e5
State:          running
Persistent:     yes
Autostart:      yes
Capacity:       19.22 GiB
Allocation:     15.05 GiB
Available:      4.17 GiB
---------


Attempting to create an Oz guest with the script mentioned in comment #7 (which just runs oz-install) fails:
---------
.
.
INFO:oz.Guest.RHEL6Guest:Cleaning up after install
Traceback (most recent call last):
  File "/usr/bin/oz-install", line 145, in <module>
    guest.generate_diskimage(size=guest.disksize, force=force_download)
  File "/usr/lib/python2.6/site-packages/oz/Guest.py", line 526, in generate_diskimage
    return self._internal_generate_diskimage(size, force, False)
  File "/usr/lib/python2.6/site-packages/oz/Guest.py", line 501, in _internal_generate_diskimage
    pool = self.libvirt_conn.storagePoolCreateXML(pool_xml, 0)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 3270, in storagePoolCreateXML
    if ret is None:raise libvirtError('virStoragePoolCreateXML() failed', conn=self)
libvirt.libvirtError: operation failed: Storage source conflict with pool: 'default'
---------


Additional info:

from /usr/lib/python2.6/site-packages/oz/Guest.py
#---------#
.
.
.
    500 
    501         pool = self.libvirt_conn.storagePoolCreateXML(pool_xml, 0)
    502         try:
    503             pool.createXML(vol_xml, 0)
    504         finally:
    505             pool.destroy()
    506 
.
.
.
#---------#
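The failure comes from line 501 unconditionally calling storagePoolCreateXML, which libvirt rejects when a pool with the same source already exists. A hypothetical sketch of a lookup-or-create guard (the actual upstream fix referenced later in this bug may differ; FakeConn is a stub standing in for the libvirt connection so the logic runs without libvirt installed):

```python
# Hypothetical illustration, not Oz's actual fix: reuse an existing pool
# instead of unconditionally creating one with a conflicting name/source.

class FakeConn:
    """Stub mimicking the two libvirt connection calls the sketch needs."""
    def __init__(self, existing):
        self.existing = dict(existing)

    def storagePoolLookupByName(self, name):
        if name not in self.existing:
            raise RuntimeError("pool not found")  # libvirt raises libvirtError here
        return self.existing[name]

    def storagePoolCreateXML(self, pool_xml, flags):
        self.existing["_created"] = object()
        return self.existing["_created"]

def ensure_pool(conn, name, pool_xml):
    """Return the pool named `name` if it exists; otherwise create a transient one."""
    try:
        return conn.storagePoolLookupByName(name)
    except RuntimeError:
        return conn.storagePoolCreateXML(pool_xml, 0)

default_pool = object()
conn = FakeConn({"default": default_pool})
# Existing 'default' pool is reused, so no name/source conflict is triggered:
print(ensure_pool(conn, "default", "<pool/>") is default_pool)  # True
```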

Comment 10 Kashyap Chamarthy 2013-03-12 06:31:33 UTC
NOTE: Oz runs successfully if there is no pre-existing default libvirt storage pool.

VERIFIED.

1] Version Info
-----------
# rpm -q oz ; cat /etc/redhat-release ; arch
oz-0.9.0-3.el6.noarch
Red Hat Enterprise Linux Server release 6.4 (Santiago)
x86_64
-----------

2] Test

  2.1] Use the /etc/oz/oz.cfg specified in comment #8. Of specific note: image_type = qcow2 under the [libvirt] section.

  2.2] Run the oz script from here (which is just a wrapper around oz-install): https://bugzilla.redhat.com/attachment.cgi?id=697671


Once the install finishes, define the guest and ensure it's running:
===========
[root@interceptor oz-test-qcow2]# ls
oz-jeos.bash  rhel63-qcow2test1Mar_12_2013-10:32:08  rhel63-qcow2test1.out  rhel63_x86_64.tdl
[root@interceptor oz-test-qcow2]# 
===========
[root@interceptor oz-test-qcow2]# virsh define rhel63-qcow2test1Mar_12_2013-10\:32\:08 
Domain rhel63-qcow2test1 defined from rhel63-qcow2test1Mar_12_2013-10:32:08
===========
[root@interceptor oz-test-qcow2]# virsh list | grep rhel63-qcow2test1
 64    rhel63-qcow2test1              running
[root@interceptor oz-test-qcow2]# 
===========


Optionally, add a PTY console so that you can access the guest via 'virsh console foo':
===========
1/ Shut down the guest, then edit it (virsh edit rhel63-qcow2test1),

replacing:
-----------------
  <serial type='tcp'>
      <source mode='bind' host='127.0.0.1' service='19496'/>
      <protocol type='raw'/>
      <target port='1'/>
    </serial>
    <console type='tcp'>
      <source mode='bind' host='127.0.0.1' service='19496'/>
      <protocol type='raw'/>
      <target type='serial' port='1'/>
    </console>
-----------------

with:
-----------------
    <serial type="pty">
      <target port="0"/>
    </serial>
-----------------


2/ Define the guest and start it with a serial console attached
-----------------
# virsh define /etc/libvirt/qemu/rhel63-qcow2test1.xml
Domain rhel63-qcow2test1 defined from /etc/libvirt/qemu/rhel63-qcow2test1.xml

# virsh start rhel63-qcow2test1 --console
-----------------

Comment 11 Kashyap Chamarthy 2013-03-12 06:34:12 UTC
Created attachment 708780 [details]
oz-install stdout which indicates the disk image being created is qcow2

qcow2 disk image -- created by Oz -- info:
===========
# qemu-img info /var/lib/libvirt/images/rhel63-qcow2test1.qcow2
image: /var/lib/libvirt/images/rhel63-qcow2test1.qcow2
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 1.1G
cluster_size: 65536
===========
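The format can also be confirmed without qemu-img by inspecting the image header directly: qcow2 files begin with the magic bytes "QFI" 0xfb followed by a big-endian version field. A sketch (not from the bug report; the stand-in file below is only a header, not a valid image -- a real image would come from Oz):

```python
# Sketch: identify a qcow2 file by its header magic. Illustration only.
import struct
import tempfile

QCOW2_MAGIC = 0x514649FB  # big-endian "QFI" + 0xfb

def qcow2_version(path):
    """Return the qcow2 version number, or None if the file is not qcow2."""
    with open(path, "rb") as f:
        header = f.read(8)
    if len(header) < 8:
        return None
    magic, version = struct.unpack(">II", header)
    return version if magic == QCOW2_MAGIC else None

# Build a stand-in header for demonstration (magic + version 2):
with tempfile.NamedTemporaryFile(suffix=".qcow2", delete=False) as f:
    f.write(struct.pack(">II", QCOW2_MAGIC, 2))
    path = f.name

print(qcow2_version(path))  # 2
```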

Comment 12 Kashyap Chamarthy 2013-03-12 06:37:11 UTC
Note - However, Oz should be more tolerant: if a default storage pool exists, it should create one with a different name, or warn the user to undefine the existing default storage pool, instead of emitting an ugly stack trace.

Comment 13 Kashyap Chamarthy 2013-03-14 03:22:43 UTC
An upstream bug related to the bug mentioned in comment #9 --
https://github.com/clalancette/oz/issues/72

Comment 14 Chris Lalancette 2013-03-14 12:26:43 UTC
Just as an FYI: the code in the upstream Oz master branch has diverged, and does handle this situation (I ran into the problem myself :).  So you might want to re-cherry-pick the upstream patch for RHEL.

Chris

Comment 15 Kashyap Chamarthy 2013-03-15 05:33:04 UTC
Moving to ON_DEV per above comment #14 from Chris.

Thanks Chris.

Comment 18 Ian McLeod 2013-03-18 02:28:51 UTC
I've re-merged and rebuilt with the latest upstream qcow2 approach.  This is brewed as 0.9.0-4.

Comment 19 Kashyap Chamarthy 2013-03-25 02:15:05 UTC
VERIFIED.

$ rpm -q oz
oz-0.9.0-4.el6.noarch


Verification info:


$ cat /etc/libvirt/storage/default.xml 
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE 
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh pool-edit default
or other application using the libvirt API.
-->

<pool type='dir'>
  <name>default</name>
  <uuid>a7757e97-86f8-1e06-6f14-5d8406fa32e5</uuid>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
  </source>
  <target>
    <path>/var/lib/libvirt/images</path>
    <permissions>
      <mode>0755</mode>
      <owner>-1</owner>
      <group>-1</group>
    </permissions>
  </target>
</pool>


$ virsh pool-define default.xml 
Pool default defined from default.xml


$ virsh pool-list --all
Name                 State      Autostart 
-----------------------------------------
default              inactive   no        


$ virsh pool-start default
Pool default started


$ virsh pool-list
Name                 State      Autostart 
-----------------------------------------
default              active     no        


$ virsh pool-info default
Name:           default
UUID:           a7757e97-86f8-1e06-6f14-5d8406fa32e5
State:          running
Persistent:     yes
Autostart:      no
Capacity:       19.22 GiB
Allocation:     13.62 GiB
Available:      5.60 GiB


$ ./oz-jeos.bash qcow2-el63-t1


$ virsh define qcow2-el63-t1Mar_25_2013-07:36:32
Domain qcow2-el63-t1 defined from qcow2-el63-t1Mar_25_2013-07:36:32


From the above, Oz does create a guest even when a default storage pool exists.

Moving to VERIFIED.

Comment 20 errata-xmlrpc 2013-04-04 18:00:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0706.html