Bug 770634

Summary: blockpull output is different for different sizes of qed domain img without backing file
Product: Red Hat Enterprise Linux 6 Reporter: xhu
Component: libvirt Assignee: Martin Kletzander <mkletzan>
Status: CLOSED CURRENTRELEASE QA Contact: Virtualization Bugs <virt-bugs>
Severity: medium Docs Contact:
Priority: low    
Version: 6.3 CC: acathrow, ajia, dallan, mshao, mzhan, rwu, weizhan, whuang, yupzhang, zpeng
Target Milestone: rc   
Target Release: ---   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2012-05-29 09:36:06 UTC Type: ---
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Attachments:
Description Flags
guest xml file none

Description xhu 2011-12-28 07:00:44 UTC
Description of problem:
blockpull output differs for different sizes of a qed domain image without a backing file.
For a 50M qed domain image without a backing file, the following error is reported:
# virsh blockpull vr-rhel6-x86_64-kvm /var/lib/libvirt/images/qed 1
error: Requested operation is not valid: No active operation on device: drive-ide0-0-0
For a 1G qed domain image without a backing file, blockpull executes successfully:
# virsh blockpull vr-rhel6-x86_64-kvm /var/lib/libvirt/images/qed 1

Version-Release number of selected component (if applicable):
libvirt-0.9.8-1.el6.x86_64
qemu-kvm-0.12.1.2-2.213.el6.x86_64
kernel-2.6.32-220.el6.x86_64

How reproducible:
Every time

Steps to Reproduce:
For a 50M qed domain image without a backing file:
1. Create a 50M qed image:
# qemu-img create -f qed /var/lib/libvirt/images/qed  50M
Formatting '/var/lib/libvirt/images/qed', fmt=qed size=52428800 cluster_size=0 table_size=0
2. Start the guest using the qed image:
...
<disk type='file' device='disk'>
  <driver name='qemu' type='qed'/>
  <source file='/var/lib/libvirt/images/qed'/>
  <target dev='hda' bus='ide'/>
</disk>
...
# virsh start vr-rhel6-x86_64-kvm
Domain vr-rhel6-x86_64-kvm started
3. Start the block pull:
# virsh blockpull vr-rhel6-x86_64-kvm /var/lib/libvirt/images/qed 1
error: Requested operation is not valid: No active operation on device: drive-ide0-0-0

For a 1G qed domain image without a backing file:
1. Create a 1G qed image:
# qemu-img create -f qed /var/lib/libvirt/images/qed  1G
Formatting '/var/lib/libvirt/images/qed', fmt=qed size=1073741824 cluster_size=0 table_size=0
2. Start the guest using the qed image:
...
<disk type='file' device='disk'>
  <driver name='qemu' type='qed'/>
  <source file='/var/lib/libvirt/images/qed'/>
  <target dev='hda' bus='ide'/>
</disk>
...
# virsh start rhel6
Domain rhel6 started
3. Start the block pull:
# virsh blockpull rhel6 /var/lib/libvirt/images/qed 1
  
# virsh blockjob rhel6 /var/lib/libvirt/images/qed --info
Block Pull: [ 30 %]    Bandwidth limit: 1 MB/s

Actual results:
blockpull output differs for different sizes of a qed domain image without a backing file.

Expected results:
If blockpull does not support a qed domain image without a backing file, the error should be raised regardless of the image size.

Additional info:

Comment 4 Martin Kletzander 2012-05-25 13:18:45 UTC
I am unable to reproduce this with 0.9.11; could you please try it with the latest libvirt and qemu? There has been some heavy work done on this part since the bug was filed. Thanks.

Comment 8 zhe peng 2012-05-28 10:25:08 UTC
Created attachment 587205 [details]
guest xml file

Comment 9 Martin Kletzander 2012-05-28 11:42:54 UTC
I tried a few possibilities and this looks perfectly fine. No output means that everything finished (there is no job left). The bigger the image, the longer it takes to check what should and what shouldn't be pulled. On a 50M image it took me only a few seconds on a slow disk, so it's really fast.
I'd close this bug as it works the way it's supposed to. This was probably fixed in 0.9.11; I'll have a look at it.
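The explanation in Comment 9 (empty blockjob output means the pull already finished) can be checked by polling the job status after starting the pull. Below is a minimal sketch; `wait_for_blockjob` is a hypothetical helper, not part of virsh, and the exact "no job" message printed by `virsh blockjob --info` may vary between libvirt versions.

```shell
# Hypothetical helper: poll 'virsh blockjob' until the pull completes.
# Domain name and disk path are taken from the reproduction steps above.
wait_for_blockjob() {
    dom=$1; disk=$2
    while :; do
        out=$(virsh blockjob "$dom" "$disk" --info 2>/dev/null)
        # Empty output (or a "No current block job" message) means the job
        # is gone, i.e. the pull has finished.
        case "$out" in
            ''|*'No current block job'*) return 0 ;;
        esac
        sleep 1
    done
}

# Usage (matching the 1G scenario above):
#   virsh blockpull rhel6 /var/lib/libvirt/images/qed 1
#   wait_for_blockjob rhel6 /var/lib/libvirt/images/qed
```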

Comment 10 Martin Kletzander 2012-05-29 09:36:06 UTC
Closing, as this was already solved and works correctly in the current version.