Bug 1151728 - Failure to create a qcow2 format disk over the glusterfs protocol with qemu-kvm-1.5.3-x
Summary: Failure to create a qcow2 format disk over the glusterfs protocol with qemu-kvm-1.5.3-x
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: glusterfs
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Prasanna Kumar Kalever
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Duplicates: 1151731 (view as bug list)
Depends On:
Blocks:
 
Reported: 2014-10-11 07:58 UTC by Sibiao Luo
Modified: 2019-10-15 08:41 UTC
CC List: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-10-15 08:41:01 UTC
Target Upstream Version:
Embargoed:



Description Sibiao Luo 2014-10-11 07:58:17 UTC
Description of problem:
While verifying bug 1098086, I found that creating a qcow2 format disk over the glusterfs protocol fails with qemu-kvm-1.5.3-x; the raw, vhdx, vmdk, vpc, and vdi formats work fine now.

Version-Release number of selected component (if applicable):
host1 info:
# uname -r && rpm -q qemu-kvm
3.10.0-123.9.2.el7.x86_64
qemu-kvm-1.5.3-75.el7.x86_64
glusterfs client:
# rpm -qa | grep glusterfs
glusterfs-3.6.0.29-2.el7.x86_64
glusterfs-cli-3.6.0.29-2.el7.x86_64
glusterfs-api-devel-3.6.0.29-2.el7.x86_64
glusterfs-api-3.6.0.29-2.el7.x86_64
glusterfs-fuse-3.6.0.29-2.el7.x86_64
glusterfs-rdma-3.6.0.29-2.el7.x86_64
glusterfs-devel-3.6.0.29-2.el7.x86_64
glusterfs-libs-3.6.0.29-2.el7.x86_64
glusterfs-debuginfo-3.6.0.29-2.el7.x86_64

host2 info:
rhel6, kernel-2.6.32-497.el6.x86_64
glusterfs server:
# rpm -qa | grep glusterfs
glusterfs-fuse-3.4.0.69rhs-1.el6rhs.x86_64
glusterfs-rdma-3.4.0.69rhs-1.el6rhs.x86_64
glusterfs-libs-3.4.0.69rhs-1.el6rhs.x86_64
glusterfs-api-3.4.0.69rhs-1.el6rhs.x86_64
glusterfs-geo-replication-3.4.0.69rhs-1.el6rhs.x86_64
glusterfs-api-devel-3.4.0.69rhs-1.el6rhs.x86_64
glusterfs-devel-3.4.0.69rhs-1.el6rhs.x86_64
glusterfs-3.4.0.69rhs-1.el6rhs.x86_64
glusterfs-server-3.4.0.69rhs-1.el6rhs.x86_64
glusterfs-debuginfo-3.4.0.69rhs-1.el6rhs.x86_64

How reproducible:
100%

Steps to Reproduce:
# qemu-img create -f qcow2 gluster://10.66.106.35/volume_sluo/test.qcow2 1G
Formatting 'gluster://10.66.106.35/volume_sluo/test.qcow2', fmt=qcow2 size=1073741824 encryption=off cluster_size=65536 lazy_refcounts=off 
[2014-10-11 07:52:33.246178] I [client.c:2215:client_rpc_notify] 0-volume_sluo-client-0: disconnected from volume_sluo-client-0. Client process will keep trying to connect to glusterd until brick's port is available

# qemu-img create -f vmdk gluster://10.66.106.35/volume_sluo/test.vmdk 1G
Formatting 'gluster://10.66.106.35/volume_sluo/test.vmdk', fmt=vmdk size=1073741824 compat6=off zeroed_grain=off 
# qemu-img create -f vpc gluster://10.66.106.35/volume_sluo/test.vpc 1G
Formatting 'gluster://10.66.106.35/volume_sluo/test.vpc', fmt=vpc size=1073741824 
# qemu-img create -f vdi gluster://10.66.106.35/volume_sluo/test.vdi 1G
Formatting 'gluster://10.66.106.35/volume_sluo/test.vdi', fmt=vdi size=1073741824 static=off 
# qemu-img create -f vhdx gluster://10.66.106.35/volume_sluo/test.vhdx 1G
Formatting 'gluster://10.66.106.35/volume_sluo/test.vhdx', fmt=vhdx size=1073741824 log_size=1048576 block_size=0 block_state_zero=off
# qemu-img create -f raw gluster://10.66.106.35/volume_sluo/test.raw 1G
Formatting 'gluster://10.66.106.35/volume_sluo/test.raw', fmt=raw size=1073741824
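
For reference, a quick way to tell whether the qcow2 create actually failed or merely logged a warning (a sketch; it assumes the same host and volume as above, an illustrative file name, and qemu-img built with gluster support):

# Sketch: distinguish a real failure from a stray gfapi log line.
qemu-img create -f qcow2 gluster://10.66.106.35/volume_sluo/test2.qcow2 1G
echo "create exit status: $?"    # non-zero would mean qemu-img itself failed
# A readable qcow2 header implies the image was in fact created:
qemu-img info gluster://10.66.106.35/volume_sluo/test2.qcow2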

Actual results:
Creating a qcow2 image prints the gfapi "disconnected from volume_sluo-client-0" message shown above; the other formats complete without it.

Expected results:
Creating a qcow2 format disk via the glusterfs protocol should succeed, just as the other formats do.

Additional info:

Comment 1 Jeff Cody 2014-12-10 16:50:48 UTC
This looks to be more of an issue with the gluster server refusing the connection (most likely independent of the format type).  

Did the qcow2 image actually get created?  That looks to be a warning/error printed out by gfapi.
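
If the server side is briefly refusing the brick connection, a server-side check could help (a sketch, assuming CLI and log access on the gluster server; volume name as in the report):

# Sketch: confirm the brick processes and their ports are online
gluster volume status volume_sluo
# and look for recent disconnects in the brick logs
grep -i disconnect /var/log/glusterfs/bricks/*.log | tail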

I am only able to replicate this in this manner:

# Create numbered qcow2 images on the gluster volume in an endless loop,
# printing qemu-img's exit status after each attempt.
i=0
while true
do
    i=$((i + 1))
    echo "creating test${i}.qcow2"
    ./qemu-img create -f qcow2 "gluster://192.168.15.2/gv0/test${i}.qcow2" 10M
    echo $?
done

In the above scenario, I would occasionally get the same error message. However, despite the error/warning message, in each instance the numbered test image was actually created successfully on the server.
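
One way to confirm that after the loop is to read each numbered image back (a sketch; same volume and qemu-img binary as in the loop above):

# Sketch: verify each numbered image is readable despite the message.
for n in $(seq 1 "$i")
do
    if qemu-img info "gluster://192.168.15.2/gv0/test${n}.qcow2" >/dev/null 2>&1
    then
        echo "test${n}.qcow2 OK"
    else
        echo "test${n}.qcow2 missing or unreadable"
    fi
done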

Re-assigning to gluster.

Comment 2 Jeff Cody 2014-12-10 16:58:05 UTC
*** Bug 1151731 has been marked as a duplicate of this bug. ***

Comment 6 Prasanna Kumar Kalever 2016-09-18 18:55:47 UTC
This very old issue just came to my attention.

Please try with the latest gluster releases and see whether this is still reproducible. IMO this bug no longer exists.

I will wait a couple of weeks before closing this bug, though, in case anyone has objections.

Comment 7 Prasanna Kumar Kalever 2019-10-15 08:41:01 UTC
Closing based on https://bugzilla.redhat.com/show_bug.cgi?id=1151728#c6

Feel free to reopen if the issue is still seen.

