
Bug 1151731

Summary: fails to create vmdk/qcow2 format disks over the glusterfs protocol with qemu-kvm-rhev-2.1.x
Product: Red Hat Enterprise Linux 7
Reporter: Sibiao Luo <sluo>
Component: qemu-kvm-rhev
Assignee: Jeff Cody <jcody>
Status: CLOSED DUPLICATE
QA Contact: Virtualization Bugs <virt-bugs>
Severity: medium
Docs Contact:
Priority: medium
Version: 7.1
CC: chayang, famz, hhuang, jcody, juzhang, kwolf, mazhang, michen, pbonzini, qzhang, sharpwiner, virt-maint, xfu
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-12-10 16:58:05 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Sibiao Luo 2014-10-11 08:06:00 UTC
Description of problem:
While verifying bug 1098086, I found that creating qcow2/vmdk format disks over the glusterfs protocol fails with qemu-kvm-rhev-2.1.x. The raw/vhdx formats work now, but the vdi/vpc failures are a separate issue (bug 1136381).

Version-Release number of selected component (if applicable):
host1 info:
# uname -r && rpm -q qemu-kvm-rhev
3.10.0-183.el7.x86_64
qemu-kvm-rhev-2.1.2-1.el7.x86_64
glusterfs client:
# rpm -qa | grep glusterfs
glusterfs-3.6.0.29-2.el7.x86_64
glusterfs-cli-3.6.0.29-2.el7.x86_64
glusterfs-libs-3.6.0.29-2.el7.x86_64
glusterfs-api-devel-3.6.0.29-2.el7.x86_64
glusterfs-debuginfo-3.6.0.29-2.el7.x86_64
glusterfs-api-3.6.0.29-2.el7.x86_64
glusterfs-fuse-3.6.0.29-2.el7.x86_64
glusterfs-rdma-3.6.0.29-2.el7.x86_64
glusterfs-devel-3.6.0.29-2.el7.x86_64

glusterfs server:
rhel6, kernel-2.6.32-497.el6.x86_64
# rpm -qa | grep glusterfs
glusterfs-fuse-3.4.0.69rhs-1.el6rhs.x86_64
glusterfs-rdma-3.4.0.69rhs-1.el6rhs.x86_64
glusterfs-libs-3.4.0.69rhs-1.el6rhs.x86_64
glusterfs-api-3.4.0.69rhs-1.el6rhs.x86_64
glusterfs-geo-replication-3.4.0.69rhs-1.el6rhs.x86_64
glusterfs-api-devel-3.4.0.69rhs-1.el6rhs.x86_64
glusterfs-devel-3.4.0.69rhs-1.el6rhs.x86_64
glusterfs-3.4.0.69rhs-1.el6rhs.x86_64
glusterfs-server-3.4.0.69rhs-1.el6rhs.x86_64
glusterfs-debuginfo-3.4.0.69rhs-1.el6rhs.x86_64
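Note the skew between the client (3.6.0.29) and server (3.4.0.69rhs) packages above. A quick way to confirm which side is older, using the version strings from this report (a sketch; `sort -V` orders RPM-style version strings naturally):

```shell
# Version strings taken from the package lists in this report.
client="3.6.0.29-2.el7"
server="3.4.0.69rhs-1.el6rhs"

# sort -V sorts versions in ascending order, so the older one prints first.
older=$(printf '%s\n%s\n' "$client" "$server" | sort -V | head -n1)
echo "older side: $older"
```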

How reproducible:
100%

Steps to Reproduce:
# qemu-img create -f vmdk gluster://10.66.106.35/volume_sluo/test1.vmdk 1G
Formatting 'gluster://10.66.106.35/volume_sluo/test1.vmdk', fmt=vmdk size=1073741824 compat6=off 
[2014-10-11 07:41:51.286577] I [client.c:2215:client_rpc_notify] 0-volume_sluo-client-0: disconnected from volume_sluo-client-0. Client process will keep trying to connect to glusterd until brick's port is available
# qemu-img create -f qcow2 gluster://10.66.106.35/volume_sluo/test4.qcow2 1G
Formatting 'gluster://10.66.106.35/volume_sluo/test4.qcow2', fmt=qcow2 size=1073741824 encryption=off cluster_size=65536 lazy_refcounts=off 
[2014-10-11 07:46:28.240024] I [client.c:2215:client_rpc_notify] 0-volume_sluo-client-0: disconnected from volume_sluo-client-0. Client process will keep trying to connect to glusterd until brick's port is available
[2014-10-11 07:46:28.602774] I [client.c:2215:client_rpc_notify] 0-volume_sluo-client-0: disconnected from volume_sluo-client-0. Client process will keep trying to connect to glusterd until brick's port is available
 
# qemu-img create -f vpc gluster://10.66.106.35/volume_sluo/test2.vpc 1G
Formatting 'gluster://10.66.106.35/volume_sluo/test2.vpc', fmt=vpc size=1073741824 
qemu-img: gluster://10.66.106.35/volume_sluo/test2.vpc: Could not create image: Input/output error
# qemu-img create -f vdi gluster://10.66.106.35/volume_sluo/test3.vdi 1G
Formatting 'gluster://10.66.106.35/volume_sluo/test3.vdi', fmt=vdi size=1073741824 static=off 
qemu-img: gluster://10.66.106.35/volume_sluo/test3.vdi: Could not create image: No such file or directory

# qemu-img create -f raw gluster://10.66.106.35/volume_sluo/test5.raw 1G
Formatting 'gluster://10.66.106.35/volume_sluo/test5.raw', fmt=raw size=1073741824 
# qemu-img create -f vhdx gluster://10.66.106.35/volume_sluo/test6.vhdx 1G
Formatting 'gluster://10.66.106.35/volume_sluo/test6.vhdx', fmt=vhdx size=1073741824 log_size=1048576 block_size=0 
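For reference, the size=1073741824 echoed in each Formatting line above is simply the 1G argument expanded to bytes (1G = 2^30):

```shell
# qemu-img's "1G" suffix means 2^30 bytes, matching the
# size=1073741824 shown in the Formatting output above.
echo $((1024 * 1024 * 1024))
```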

Actual results:


Expected results:
Creating qcow2/vmdk format disks over the glusterfs protocol should succeed.

Additional info:

Comment 1 mazhang 2014-10-23 08:03:33 UTC
The cause could be that the gluster server package is much older than the gluster client.

Retested this bug with glusterfs-server-3.6.0.29-3.el6rhs.x86_64; it works well.

Gluster server:
glusterfs-3.6.0.29-3.el6rhs.x86_64
glusterfs-api-3.6.0.29-3.el6rhs.x86_64
glusterfs-fuse-3.6.0.29-3.el6rhs.x86_64
glusterfs-server-3.6.0.29-3.el6rhs.x86_64
glusterfs-libs-3.6.0.29-3.el6rhs.x86_64
glusterfs-cli-3.6.0.29-3.el6rhs.x86_64

Gluster client:
glusterfs-api-3.6.0.29-2.el7.x86_64
glusterfs-libs-3.6.0.29-2.el7.x86_64
glusterfs-3.6.0.29-2.el7.x86_64

Qemu-kvm:
qemu-img-rhev-2.1.2-4.el7.x86_64
qemu-kvm-common-rhev-2.1.2-4.el7.x86_64
qemu-kvm-tools-rhev-2.1.2-4.el7.x86_64
qemu-kvm-rhev-debuginfo-2.1.2-4.el7.x86_64
qemu-kvm-rhev-2.1.2-4.el7.x86_64


[root@dhcp-11-16 ~]# qemu-img create -f vmdk gluster://10.66.106.25/gv0/test.vmdk 1G
Formatting 'gluster://10.66.106.25/gv0/test.vmdk', fmt=vmdk size=1073741824 compat6=off 
[root@dhcp-11-16 ~]# qemu-img create -f vmdk gluster://10.66.106.25/gv0/test.qcow2 1G
Formatting 'gluster://10.66.106.25/gv0/test.qcow2', fmt=vmdk size=1073741824 compat6=off 
[2014-10-23 07:59:29.137097] I [client.c:2215:client_rpc_notify] 0-gv0-client-0: disconnected from gv0-client-0. Client process will keep trying to connect to glusterd until brick's port is available
[root@dhcp-11-16 ~]# qemu-img info gluster://10.66.106.25/gv0/test.vmdk
image: gluster://10.66.106.25/gv0/test.vmdk
file format: vmdk
virtual size: 1.0G (1073741824 bytes)
disk size: 16K
cluster_size: 65536
Format specific information:
    cid: 1414051117
    parent cid: 4294967295
    create type: monolithicSparse
    extents:
        [0]:
            virtual size: 1073741824
            filename: gluster://10.66.106.25/gv0/test.vmdk
            cluster size: 65536
            format: 
[root@dhcp-11-16 ~]# qemu-img info gluster://10.66.106.25/gv0/test.qcow2
image: gluster://10.66.106.25/gv0/test.qcow2
file format: vmdk
virtual size: 1.0G (1073741824 bytes)
disk size: 16K
cluster_size: 65536
Format specific information:
    cid: 1414051169
    parent cid: 4294967295
    create type: monolithicSparse
    extents:
        [0]:
            virtual size: 1073741824
            filename: gluster://10.66.106.25/gv0/test.qcow2
            cluster size: 65536
            format:
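Incidentally, the "parent cid: 4294967295" shown in both info dumps appears to be 0xFFFFFFFF, the all-ones 32-bit value a vmdk image carries when it has no parent:

```shell
# 0xFFFFFFFF printed in decimal matches the "parent cid" field above,
# i.e. the sentinel for "no parent image".
printf '%d\n' 0xFFFFFFFF
```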

Comment 2 Jeff Cody 2014-12-10 16:58:05 UTC
Marking this as a duplicate of BZ 1151728.  

The vmdk/qcow2 portion appears to be a gfapi issue, and the vpc/vdi issue is documented in another bz, as the description indicates.

*** This bug has been marked as a duplicate of bug 1151728 ***