Bug 1199036
Summary:           Libvirtd was restarted when doing an active blockcommit while a blockpull job was running
Product:           Red Hat Enterprise Linux 7
Component:         libvirt
Version:           7.1
Status:            CLOSED ERRATA
Severity:          high
Priority:          high
Reporter:          Shanzhi Yu <shyu>
Assignee:          Peter Krempa <pkrempa>
QA Contact:        Virtualization Bugs <virt-bugs>
CC:                dyuan, eblake, jdenemar, jkurik, mst, mzhan, pkrempa, rbalakri, shyu, xuzhang, yanyang
Target Milestone:  rc
Keywords:          ZStream
Hardware:          Unspecified
OS:                Unspecified
Fixed In Version:  libvirt-1.2.14-1.el7
Doc Type:          Bug Fix
Clones:            1202719 (view as bug list)
Bug Blocks:        1199182, 1202719
Type:              Bug
Last Closed:       2015-11-19 06:18:43 UTC
Attachments:       1002350 - script to reproduce the new bug
Description (Shanzhi Yu, 2015-03-05 11:14:18 UTC)
More info that may be useful:

(gdb) c
Continuing.
Detaching after fork from child process 19544.
Detaching after fork from child process 19554.

Program received signal SIGABRT, Aborted.
[Switching to Thread 0x7fd64e66b700 (LWP 19317)]
0x00007fd6593828d7 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:55
55        return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);

(gdb) t a a bt

Thread 11 (Thread 0x7fd64f66d700 (LWP 19315)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1  0x00007fd65ae8e446 in virCondWait (c=c@entry=0x7fd65c9f3990, m=m@entry=0x7fd65c9f3968) at util/virthread.c:153
#2  0x00007fd65ae8e8fb in virThreadPoolWorker (opaque=opaque@entry=0x7fd65c9ff730) at util/virthreadpool.c:104
#3  0x00007fd65ae8e1fe in virThreadHelper (data=<optimized out>) at util/virthread.c:197
#4  0x00007fd65971252a in start_thread (arg=0x7fd64f66d700) at pthread_create.c:310
#5  0x00007fd65944e22d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109

Thread 10 (Thread 0x7fd64ee6c700 (LWP 19316)):
#0  pthread_cond_timedwait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238
#1  0x00007fd65ae8e4b5 in virCondWaitUntil (c=c@entry=0x7fd62400b150, m=m@entry=0x7fd624000b80, whenms=whenms@entry=1425554002914) at util/virthread.c:168
#2  0x00007fd64881bfb0 in qemuDomainObjBeginJobInternal (driver=driver@entry=0x7fd64008ff60, obj=0x7fd624000b70, job=job@entry=QEMU_JOB_QUERY, asyncJob=asyncJob@entry=QEMU_ASYNC_JOB_NONE) at qemu/qemu_domain.c:1347
#3  0x00007fd64881da4b in qemuDomainObjBeginJob (driver=driver@entry=0x7fd64008ff60, obj=<optimized out>, job=job@entry=QEMU_JOB_QUERY) at qemu/qemu_domain.c:1427
#4  0x00007fd6488878ef in qemuDomainGetBlockJobInfo (dom=<optimized out>, path=0x7fd63c001070 "vda", info=0x7fd64ee6bbb0, flags=0) at qemu/qemu_driver.c:15939
#5  0x00007fd65af2d4b4 in virDomainGetBlockJobInfo (dom=dom@entry=0x7fd63c001590, disk=0x7fd63c001070 "vda", info=info@entry=0x7fd64ee6bbb0, flags=0) at libvirt-domain.c:9523
#6  0x00007fd65b951abc in remoteDispatchDomainGetBlockJobInfo (server=<optimized out>, msg=<optimized out>, ret=0x7fd63c0016a0, args=0x7fd63c001600, rerr=0x7fd64ee6bcb0, client=<optimized out>) at remote.c:2730
#7  remoteDispatchDomainGetBlockJobInfoHelper (server=<optimized out>, client=<optimized out>, msg=<optimized out>, rerr=0x7fd64ee6bcb0, args=0x7fd63c001600, ret=0x7fd63c0016a0) at remote_dispatch.h:3982
#8  0x00007fd65af82489 in virNetServerProgramDispatchCall (msg=0x7fd65c9f54a0, client=0x7fd65c9fcb90, server=0x7fd65c9f3820, prog=0x7fd65c9fbaf0) at rpc/virnetserverprogram.c:437
#9  virNetServerProgramDispatch (prog=0x7fd65c9fbaf0, server=server@entry=0x7fd65c9f3820, client=0x7fd65c9fcb90, msg=0x7fd65c9f54a0) at rpc/virnetserverprogram.c:307
#10 0x00007fd65b971288 in virNetServerProcessMsg (msg=<optimized out>, prog=<optimized out>, client=<optimized out>, srv=0x7fd65c9f3820) at rpc/virnetserver.c:172
#11 virNetServerHandleJob (jobOpaque=<optimized out>, opaque=0x7fd65c9f3820) at rpc/virnetserver.c:193
#12 0x00007fd65ae8e85e in virThreadPoolWorker (opaque=opaque@entry=0x7fd65c9ff3b0) at util/virthreadpool.c:144
#13 0x00007fd65ae8e1fe in virThreadHelper (data=<optimized out>) at util/virthread.c:197
#14 0x00007fd65971252a in start_thread (arg=0x7fd64ee6c700) at pthread_create.c:310
#15 0x00007fd65944e22d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109

Thread 9 (Thread 0x7fd64e66b700 (LWP 19317)):
#0  0x00007fd6593828d7 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:55
#1  0x00007fd65938453a in __GI_abort () at abort.c:89
#2  0x00007fd6593c5da3 in __libc_message (do_abort=do_abort@entry=2, fmt=fmt@entry=0x7fd6594d52f0 "*** Error in `%s': %s: 0x%s ***\n") at ../sysdeps/posix/libc_fatal.c:175
#3  0x00007fd6593d19f5 in malloc_printerr (ptr=<optimized out>, str=0x7fd6594d2f15 "free(): invalid pointer", action=<optimized out>) at malloc.c:4974
#4  _int_free (have_lock=0, p=<optimized out>, av=<optimized out>) at malloc.c:3841
#5  __GI___libc_free (mem=<optimized out>) at malloc.c:2951
#6  0x00007fd65ae34c2b in virFree (ptrptr=ptrptr@entry=0x7fd62400f7b8) at util/viralloc.c:582
#7  0x00007fd65ae87e3a in virStorageSourceClear (def=0x7fd62400f7b0) at util/virstoragefile.c:2024
#8  0x00007fd65ae87313 in virStorageSourceFree (def=0x7fd62400f7b0) at util/virstoragefile.c:2048
#9  0x00007fd648885021 in qemuDomainBlockCommit (dom=<optimized out>, path=<optimized out>, base=<optimized out>, top=<optimized out>, bandwidth=<optimized out>, flags=<optimized out>) at qemu/qemu_driver.c:16601
#10 0x00007fd65af2e503 in virDomainBlockCommit (dom=dom@entry=0x7fd624000c80, disk=0x7fd62400d230 "vda", base=0x0, top=0x0, bandwidth=0, flags=4) at libvirt-domain.c:10067
#11 0x00007fd65b94c0ce in remoteDispatchDomainBlockCommit (server=<optimized out>, msg=<optimized out>, args=0x7fd624000a70, rerr=0x7fd64e66acb0, client=<optimized out>) at remote_dispatch.h:2594
#12 remoteDispatchDomainBlockCommitHelper (server=<optimized out>, client=<optimized out>, msg=<optimized out>, rerr=0x7fd64e66acb0, args=0x7fd624000a70, ret=<optimized out>) at remote_dispatch.h:2564
#13 0x00007fd65af82489 in virNetServerProgramDispatchCall (msg=0x7fd65c9f5510, client=0x7fd65c9fd660, server=0x7fd65c9f3820, prog=0x7fd65c9fbaf0) at rpc/virnetserverprogram.c:437
#14 virNetServerProgramDispatch (prog=0x7fd65c9fbaf0, server=server@entry=0x7fd65c9f3820, client=0x7fd65c9fd660, msg=0x7fd65c9f5510) at rpc/virnetserverprogram.c:307
#15 0x00007fd65b971288 in virNetServerProcessMsg (msg=<optimized out>, prog=<optimized out>, client=<optimized out>, srv=0x7fd65c9f3820) at rpc/virnetserver.c:172
#16 virNetServerHandleJob (jobOpaque=<optimized out>, opaque=0x7fd65c9f3820) at rpc/virnetserver.c:193
#17 0x00007fd65ae8e85e in virThreadPoolWorker (opaque=opaque@entry=0x7fd65c9ff230) at util/virthreadpool.c:144
#18 0x00007fd65ae8e1fe in virThreadHelper (data=<optimized out>) at util/virthread.c:197
#19 0x00007fd65971252a in start_thread (arg=0x7fd64e66b700) at pthread_create.c:310
#20 0x00007fd65944e22d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109

This is from Fedora 21 with the latest libvirt.

Fixed upstream:

commit 51f9f03a4ca50b070c0fbfb29748d49f583e15e1
Author: Peter Krempa <pkrempa>
Date:   Fri Mar 13 17:22:04 2015 +0100

    qemu: Disallow concurrent block jobs on a single disk

    While qemu may be prepared to do this libvirt is not. Forbid the block
    ops until we fix our code.

commit 1a92c719101e5bfa6fe2b78006ad04c7f075ea28
Author: Peter Krempa <pkrempa>
Date:   Fri Mar 13 17:00:03 2015 +0100

    qemu: event: Don't fiddle with disk backing trees without a job

    Surprisingly we did not grab a VM job when a block job finished and
    we'd happily rewrite the backing chain data. This made it possible to
    crash libvirt when queueing two backing chains tightly and other badness.

    To fix it, add yet another handler to the helper thread that handles
    monitor events that require a job.
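For orientation, below is a minimal, self-contained sketch of the per-disk interlock idea introduced by the first commit above (51f9f03a). It is not the actual libvirt source; the structure, field and function names are simplified stand-ins. The behaviour it models matches the "already in active block job" errors seen in the comments below: each disk carries a flag while a block job owns it, and any attempt to start a second pull/commit/copy job on the same disk is rejected up front.

    #include <stdbool.h>
    #include <stdio.h>

    /* Simplified stand-in for the disk definition; the real libvirt
     * structure carries much more state. */
    struct disk {
        const char *dst;   /* target name, e.g. "vda" */
        bool blockjob;     /* set while a pull/commit/copy job owns the disk */
    };

    /* Reject a new block job while another one owns the disk. */
    static bool
    disk_block_job_is_active(const struct disk *disk)
    {
        if (disk->blockjob) {
            fprintf(stderr, "disk '%s' already in active block job\n", disk->dst);
            return true;
        }
        return false;
    }

    /* Every block-job entry point would perform this check before
     * talking to the qemu monitor, and mark the disk as busy once the
     * job has started. */
    static int
    start_block_job(struct disk *disk)
    {
        if (disk_block_job_is_active(disk))
            return -1;
        disk->blockjob = true;
        return 0;
    }

    int main(void)
    {
        struct disk vda = { "vda", false };
        start_block_job(&vda);   /* first job: accepted */
        start_block_job(&vda);   /* second job: rejected with the error above */
        return 0;
    }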
Comment 3 (Shanzhi Yu):

Peter, I'm not sure whether the new error is caused by this patch series, but I got the error below when testing with the same steps as in https://bugzilla.redhat.com/show_bug.cgi?id=1199182#c14:

2015-03-16 14:30:45.282+0000: 25137: error : qemuDomainDiskBlockJobIsActive:2774 : Operation not supported: disk 'vda' already in active block job

The error comes from your new patch.

# git describe
v1.2.13-181-g4bca619

# virsh blockjob testvm3 vda
No current block job for vda

# virsh blockcommit --active --verbose testvm3 vda --shallow --pivot
error: Operation not supported: disk 'vda' already in active block job

# virsh dumpxml testvm3 |grep disk -A 15
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/tmp/images/c/../b/b'/>
      <backingStore type='file' index='1'>
        <format type='qcow2'/>
        <source file='/tmp/images/c/../b/../a/a'/>
        <backingStore type='network' index='2'>
          <format type='qcow2'/>
          <source protocol='gluster' name='gluster-vol1/r7-qcow2.img'>
            <host name='10.66.5.38'/>
          </source>
          <backingStore/>
        </backingStore>
      </backingStore>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </disk>

Peter Krempa:

(In reply to Shanzhi Yu from comment #3)
> I'm not sure whether the new error is caused by this patch series, but I got
> the error below when testing with the same steps as in
> https://bugzilla.redhat.com/show_bug.cgi?id=1199182#c14:
>
> 2015-03-16 14:30:45.282+0000: 25137: error :
> qemuDomainDiskBlockJobIsActive:2774 : Operation not supported: disk 'vda'
> already in active block job
>
> The error comes from your new patch.
>
> # git describe
> v1.2.13-181-g4bca619
>
> # virsh blockjob testvm3 vda
> No current block job for vda
>
> # virsh blockcommit --active --verbose testvm3 vda --shallow --pivot
> error: Operation not supported: disk 'vda' already in active block job

This error is caused by the patch "qemu: Disallow concurrent block jobs on a single disk". What steps did you do before the error happened? One of the previous block jobs probably didn't reset the interlocking flag correctly.

Comment 5 (Shanzhi Yu):

Just run the attached script, which is the one used in bug 1199182.

# sh testvm3-start.sh
Domain testvm3 destroyed

Formatting 'images/a/a', fmt=qcow2 size=10485760000 backing_file='gluster://10.66.5.38/gluster-vol1/r7-qcow2.img' backing_fmt='qcow2' encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
Formatting 'b', fmt=qcow2 size=10485760000 backing_file='../a/a' backing_fmt='qcow2' encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
Formatting 'c', fmt=qcow2 size=10485760000 backing_file='../b/b' backing_fmt='qcow2' encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
Domain testvm3 created from /dev/stdin

Block Commit: [100 %]
Successfully pivoted
Block Commit: [100 %]
Successfully pivoted
Block Commit: [100 %]
Successfully pivoted
Domain snapshot 1426518648 created
Domain snapshot 1426518651 created
Domain snapshot 1426518655 created
Block Commit: [100 %]
Successfully pivoted
error: internal error: unable to execute QEMU command 'block-commit': Top image file /tmp/images/c/../b/b not found
error: Operation not supported: disk 'vda' already in active block job

The first error is caused by bug 1199182; the second error is caused by the new patch.
After running the script, some manual checks:

# virsh blockjob testvm3 vda
No current block job for vda

# virsh blockcommit testvm3 vda --pivot
error: Operation not supported: disk 'vda' already in active block job

# virsh dumpxml testvm3|grep disk -A 14
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/tmp/images/c/../b/b'/>
      <backingStore type='file' index='1'>
        <format type='qcow2'/>
        <source file='/tmp/images/c/../b/../a/a'/>
        <backingStore type='network' index='2'>
          <format type='qcow2'/>
          <source protocol='gluster' name='gluster-vol1/r7-qcow2.img'>
            <host name='10.66.5.38'/>
          </source>
          <backingStore/>
        </backingStore>
      </backingStore>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </disk>

Created attachment 1002350 [details]
script to reproduce the new bug
Comment 7 (Peter Krempa):

(In reply to Shanzhi Yu from comment #5)
> Just run the attached script, which is the one used in bug 1199182.
>
> # sh testvm3-start.sh
> Domain testvm3 destroyed
>
> Formatting 'images/a/a', fmt=qcow2 size=10485760000
> backing_file='gluster://10.66.5.38/gluster-vol1/r7-qcow2.img'
> backing_fmt='qcow2' encryption=off cluster_size=65536 lazy_refcounts=off
> refcount_bits=16
> Formatting 'b', fmt=qcow2 size=10485760000 backing_file='../a/a'
> backing_fmt='qcow2' encryption=off cluster_size=65536 lazy_refcounts=off
> refcount_bits=16
> Formatting 'c', fmt=qcow2 size=10485760000 backing_file='../b/b'
> backing_fmt='qcow2' encryption=off cluster_size=65536 lazy_refcounts=off
> refcount_bits=16
> Domain testvm3 created from /dev/stdin
>
> Block Commit: [100 %]
> Successfully pivoted
> Block Commit: [100 %]
> Successfully pivoted
> Block Commit: [100 %]
> Successfully pivoted
> Domain snapshot 1426518648 created
> Domain snapshot 1426518651 created
> Domain snapshot 1426518655 created
> Block Commit: [100 %]
> Successfully pivoted
> error: internal error: unable to execute QEMU command 'block-commit': Top
> image file /tmp/images/c/../b/b not found

... this was the problem that caused ...

> error: Operation not supported: disk 'vda' already in active block job
>
> The first error is caused by bug 1199182; the second error is caused by the
> new patch.
>
> After running the script, some manual checks:
>
> # virsh blockjob testvm3 vda
> No current block job for vda
>
> # virsh blockcommit testvm3 vda --pivot
> error: Operation not supported: disk 'vda' already in active block job

... this problem later, as the disk was falsely marked as having an active block job.

Fixed upstream:

commit ee744b5b387b5123ee40683c52ab40783ffc3020
Author: Peter Krempa <pkrempa>
Date:   Mon Mar 16 16:52:44 2015 +0100

    qemu: block-commit: Mark disk in block jobs only on successful command

    Patch 51f9f03a4ca50b070c0fbfb29748d49f583e15e1 introduces a regression
    where if a blockCommit operation fails the disk is still marked as being
    part of a block job but can't be unmarked later.

Potential 7.1.z patches:
http://post-office.corp.redhat.com/archives/rhvirt-patches/2015-March/msg00447.html
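To make the regression that commit ee744b5b fixes concrete, here is another small self-contained sketch in the same simplified style as the one in the description. Again, this is not the real libvirt code; issue_commit_command() is a hypothetical stand-in for the qemu monitor call. The point is the ordering: the disk must be marked as being in a block job only after qemu accepts the command, otherwise a failed block-commit (as in comment 5) leaves the flag set with no job ever finishing to clear it.

    #include <stdbool.h>
    #include <stdio.h>

    struct disk {
        const char *dst;
        bool blockjob;
    };

    /* Hypothetical stand-in for the qemu monitor 'block-commit' call;
     * pretend it fails, as in the "Top image file ... not found" case. */
    static int
    issue_commit_command(struct disk *disk)
    {
        (void)disk;
        return -1;
    }

    static int
    block_commit(struct disk *disk)
    {
        if (disk->blockjob) {
            fprintf(stderr, "disk '%s' already in active block job\n", disk->dst);
            return -1;
        }

        /* Pre-ee744b5b (buggy) order: the disk was marked busy before the
         * monitor command, so a failed block-commit left the flag set with
         * no job ever arriving to clear it:
         *
         *     disk->blockjob = true;
         *     if (issue_commit_command(disk) < 0)
         *         return -1;          // flag leaks, disk stays "busy"
         */

        /* Fixed order: mark the disk only once qemu has accepted the job. */
        if (issue_commit_command(disk) < 0)
            return -1;

        disk->blockjob = true;
        return 0;
    }

    int main(void)
    {
        struct disk vda = { "vda", false };
        block_commit(&vda);   /* the command fails ... */
        block_commit(&vda);   /* ... but the disk is not left marked as busy */
        return 0;
    }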
Comment 10 (Shanzhi Yu):

(In reply to Peter Krempa from comment #7)
> (In reply to Shanzhi Yu from comment #5)
> > The first error is caused by bug 1199182; the second error is caused by
> > the new patch.
> >
> > After running the script, some manual checks:
> >
> > # virsh blockjob testvm3 vda
> > No current block job for vda
> >
> > # virsh blockcommit testvm3 vda --pivot
> > error: Operation not supported: disk 'vda' already in active block job
>
> ... this problem later, as the disk was falsely marked as having an active
> block job.
>
> Fixed upstream:
>
> commit ee744b5b387b5123ee40683c52ab40783ffc3020
> Author: Peter Krempa <pkrempa>
> Date:   Mon Mar 16 16:52:44 2015 +0100
>
>     qemu: block-commit: Mark disk in block jobs only on successful command
>
>     Patch 51f9f03a4ca50b070c0fbfb29748d49f583e15e1 introduces a regression
>     where if a blockCommit operation fails the disk is still marked as being
>     part of a block job but can't be unmarked later.

OK, this patch fixes the above problem. But there is still a case where, if aborting a job with --pivot fails, libvirt keeps a "Block Commit" job lingering in the background. Until that job is aborted, other operations on the disk are blocked. See the steps below:

# sh testvm3-start.sh
+ cd /tmp
+ virsh -k0 destroy testvm3
Domain testvm3 destroyed

+ rm -rf images
+ mkdir -p images images/a images/b images/c
+ qemu-img create -f qcow2 images/a/a -b gluster://10.66.5.38/gluster-vol1/r7-qcow2.img -o backing_fmt=qcow2
Formatting 'images/a/a', fmt=qcow2 size=10485760000 backing_file='gluster://10.66.5.38/gluster-vol1/r7-qcow2.img' backing_fmt='qcow2' encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
+ cd images/b
+ qemu-img create -f qcow2 -o backing_file=../a/a,backing_fmt=qcow2 b
Formatting 'b', fmt=qcow2 size=10485760000 backing_file='../a/a' backing_fmt='qcow2' encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
+ cd images/c
+ qemu-img create -f qcow2 -o backing_file=../b/b,backing_fmt=qcow2 c
Formatting 'c', fmt=qcow2 size=10485760000 backing_file='../b/b' backing_fmt='qcow2' encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
+ setenforce 0
+ virsh -k0 create /dev/stdin
Domain testvm3 created from /dev/stdin
+ virsh -k0 blockcommit --active --verbose testvm3 vda --shallow --pivot
Block Commit: [100 %]
Successfully pivoted
+ virsh -k0 blockcommit --active --verbose testvm3 vda --shallow --pivot
Block Commit: [100 %]
Successfully pivoted
+ virsh blockjob testvm3 vda
No current block job for vda
+ virsh -k0 blockcommit --active --verbose testvm3 vda --shallow --pivot
Block Commit: [100 %]error: failed to pivot job for disk vda
error: internal error: unable to execute QEMU command 'block-job-complete': The active block job for device 'drive-virtio-disk0' cannot be completed
+ exit 0

# virsh blockjob testvm3 vda

# virsh blockjob testvm3 vda

# virsh blockjob testvm3 vda
Block Commit: [100 %]

# virsh dumpxml testvm3|grep mirror

# virsh blockjob testvm3 vda --pivot
error: Requested operation is not valid: pivot of disk 'vda' requires an active copy job

# virsh snapshot-create-as testvm3 --no-metadata --disk-only
error: internal error: unable to execute QEMU command 'transaction': Device 'drive-virtio-disk0' is busy: block device is in use by block job: commit

# virsh blockjob testvm3 vda --abort

# virsh snapshot-create-as testvm3 --no-metadata --disk-only
Domain snapshot 1426566021 created

Based on latest libvirt.git:
# git describe
v1.2.13-196-ga7d6b94

qemu-kvm-2.2.0-5.fc21.x86_64

Peter Krempa:

(In reply to Shanzhi Yu from comment #10)
> (In reply to Peter Krempa from comment #7)
> > (In reply to Shanzhi Yu from comment #5)
>
> OK, this patch fixes the above problem. But there is still a case where, if
> aborting a job with --pivot fails, libvirt keeps a "Block Commit" job
> lingering in the background. Until that job is aborted, other operations on
> the disk are blocked.
>
> See the steps below:
...
> + virsh blockjob testvm3 vda
> No current block job for vda
> + virsh -k0 blockcommit --active --verbose testvm3 vda --shallow --pivot
> Block Commit: [100 %]error: failed to pivot job for disk vda
> error: internal error: unable to execute QEMU command 'block-job-complete':
> The active block job for device 'drive-virtio-disk0' cannot be completed

After this command fails the following piece of code is executed:

    if (ret < 0) {
        /* On failure, qemu abandons the mirror, and reverts back to
         * the source disk (RHEL 6.3 has a bug where the revert could
         * cause catastrophic failure in qemu, but we don't need to
         * worry about it here as it is not an upstream qemu problem.
         */
        /* XXX should we be parsing the exact qemu error, or calling
         * 'query-block', to see what state we really got left in
         * before killing the mirroring job?

(The comment above hints that a not-so-ancient qemu may actually not abandon the mirror but rather keep it active, and thus removing the mirror job info here is not what should happen.)

         * XXX We want to revoke security labels and disk lease, as
         * well as audit that revocation, before dropping the original
         * source. But it gets tricky if both source and mirror share
         * common backing files (we want to only revoke the non-shared
         * portion of the chain); so for now, we leak the access to
         * the original. */
        virStorageSourceFree(disk->mirror);
        disk->mirror = NULL;
        disk->mirrorState = VIR_DOMAIN_DISK_MIRROR_STATE_NONE;
        disk->mirrorJob = VIR_DOMAIN_BLOCK_JOB_TYPE_UNKNOWN;
        disk->blockjob = false;
    }

> # virsh blockjob testvm3 vda --pivot
> error: Requested operation is not valid: pivot of disk 'vda' requires an
> active copy job

At that point the operation above fails because a check in libvirt is not satisfied: the mirror job info was already removed by the code above. But ...

> # virsh snapshot-create-as testvm3 --no-metadata --disk-only
> error: internal error: unable to execute QEMU command 'transaction': Device
> 'drive-virtio-disk0' is busy: block device is in use by block job: commit

... this fails on a check in qemu, because the mirror is actually still in place.

> # virsh blockjob testvm3 vda --abort

This command does not have a corresponding check in libvirt, so it completes successfully and kills the mirror.

> # virsh snapshot-create-as testvm3 --no-metadata --disk-only
> Domain snapshot 1426566021 created

And now you are able to execute block operations again.

> Based on latest libvirt.git:
> # git describe
> v1.2.13-196-ga7d6b94
>
> qemu-kvm-2.2.0-5.fc21.x86_64

I think this needs a new bugzilla.

I filed a bug for the issue in comment 11; see bug 1202704.

Verified on libvirt-1.2.15-2.el7.x86_64.

Steps:

1. Do blockcommit while there is a blockpull job already running:

# virsh blockpull simple vda --wait --verbose
Block Pull: [ 39 %]
# virsh blockcommit simple vda --wait --verbose --active
error: Operation not supported: disk 'vda' already in active block job

2. Do blockpull while there is a blockpull job already running:

# virsh blockpull simple vda --wait --verbose
Block Pull: [ 36 %]
# virsh blockpull simple vda --wait --verbose
error: Operation not supported: disk 'vda' already in active block job

3. Do blockcopy while there is a blockpull job already running:

# virsh blockpull simple vda --wait --verbose
Block Pull: [ 20 %]
# virsh blockcopy simple vda --shallow /tmp/copy
error: Operation not supported: disk 'vda' already in active block job

4. Do blockpull while there is a blockcommit job already running:

# virsh blockcommit simple vda --wait --verbose --active --bandwidth 1
Block Commit: [ 2 %]
# virsh blockpull simple vda --wait --verbose
error: block copy still active: disk 'vda' already in active block job

5. Do blockcommit while there is a blockcommit job already running:

# virsh blockcommit simple vda --wait --verbose --active --bandwidth 1
Block Commit: [ 1 %]
# virsh blockcommit simple vda --wait --verbose --active
error: block copy still active: disk 'vda' already in active block job
6. Do blockcopy while there is a blockcommit job already running:

# virsh blockcommit simple vda --wait --verbose --active --bandwidth 1
Block Commit: [ 1 %]
# virsh blockcopy simple vda --shallow /tmp/copy
error: block copy still active: disk 'vda' already in active block job

7. Do blockpull while there is a blockcopy job already running:

# virsh blockcopy simple vda /tmp/copy
Block Copy started
# virsh blockjob simple vda
Block Copy: [ 11 %]
# virsh blockpull simple vda --wait --verbose
error: block copy still active: disk 'vda' already in active block job

8. Do blockcommit while there is a blockcopy job already running:

# virsh blockcopy simple vda /tmp/copy
Block Copy started
# virsh blockjob simple vda
Block Copy: [ 11 %]
# virsh blockcommit simple vda --wait --verbose --active
error: block copy still active: disk 'vda' already in active block job

9. Do blockcopy while there is a blockcopy job already running:

# virsh blockcopy simple vda --shallow /tmp/copy
Block Copy started
# virsh blockcopy simple vda /tmp/copy1
error: block copy still active: disk 'vda' already in active block job

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2202.html