Bug 1219541

Summary: virsh migrate --copy-storage-all fails to preserve sparse disk image
Product: Red Hat Enterprise Linux 7
Reporter: Stefan Hajnoczi <stefanha>
Component: qemu-kvm-rhev
Assignee: John Snow <jsnow>
Status: CLOSED ERRATA
QA Contact: Qianqian Zhu <qizhu>
Severity: medium
Docs Contact:
Priority: medium
Version: 7.1
CC: amit.shah, antidez93, aoeuser, berrange, bryce.pier, cfergeau, charlie, chayang, crobinso, dmsimard, dwmw2, dyuan, extras-qa, famz, fjin, grube, huding, itamar, jcody, jshubin, juzhang, kchamart, kwolf, madko, mkalinin, mprivozn, mrezanin, mzhan, nyargh88, pbonzini, pezhang, purpleidea, rbalakri, redhatbugreports_541234, rjones, scottt.tw, stefanha, support, virt-maint, vjsofstuff, v.tolstov, xfu, yanqzhan, yanyang, zpeng
Target Milestone: rc
Target Release: ---
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version: qemu-kvm-rhev-2.9.0-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 817700
Environment:
Last Closed: 2017-08-01 23:27:12 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 817700
Bug Blocks: 1401400

Description Stefan Hajnoczi 2015-05-07 14:37:52 UTC
+++ This bug was initially created as a clone of Bug #817700 +++

Description of problem:

Running virsh migrate --copy-storage-all will take a 'sparse' image on the source and create a bloated image on the destination. A walkthrough of my steps is below.

How reproducible:
100%


Steps to Reproduce:

Migration from iron3 -> iron4 without using shared storage.

# first see my sparse file
[root@iron3 images]# ls -lAhs
[...snip]
7.6G -rwxr-xr-x. 1 root root 500G Apr 29 18:51 ncr.raw

# apparently you need to "prepare" the destination; if not, the migration fails with:
error: unable to set user and group to '107:107' on '/var/lib/libvirt/images/ncr.raw': No such file or directory
# should I be doing this, or is there a better way? Please note, this new image *is* sparse (and empty).
[root@iron4 images]# time qemu-img create -f raw ncr.raw 500G
Formatting 'ncr.raw', fmt=raw size=536870912000

# okay, go time!
[root@iron3 ~]# time virsh migrate --verbose --live --copy-storage-all ncr qemu+ssh://iron4/system
Migration: [  0 %]
...
[...finishes successfully after 78 minutes!]

# note: this took so long because it didn't copy it in a "sparse" way (such as rsync -S can)
[root@iron4 images]# ls -lAhs
total 501G
501G -rw-r--r-- 1 qemu qemu 500G Apr 29 21:04 ncr.raw

[root@iron4 images]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_00-lv_root
                      621G 1009M  588G   1% /
tmpfs                  24G     0   24G   0% /dev/shm
/dev/sda1             504M   42M  437M   9% /boot
/dev/mapper/vg_00-lv_home
                      504M   17M  462M   4% /home
/dev/mapper/vg_00-lv_var
                     1008G  501G  457G  53% /var

# I can "fix" the image with qemu-img, but ideally it should stay sparse to begin with! What have I done wrong? Maybe this is a bug.
[root@iron4 images]# mv ncr.raw ncr.raw.original
[root@iron4 images]# time qemu-img convert ncr.raw.original ncr.raw
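
(A couple of alternative ways to get the same sparse result, sketched here under the assumption that the guest is not running on this host; fallocate --dig-holes needs util-linux >= 2.25:)

# copy sparsely instead of converting (GNU coreutils)
cp --sparse=always ncr.raw.original ncr.raw

# or punch holes into the bloated file in place, without a second copy
fallocate --dig-holes ncr.raw

# verify: the allocated size (first column) should drop well below the 500G apparent size
ls -lAhs ncr.raw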

  
Actual results:
Destination file is not sparse.

Expected results:
Destination file should also be sparse.
As an interim hack, it could even make sense to pipe the incoming image through the library that qemu-img uses to do its 'convert', so that the output comes out "clean" (aka sparse).

Additional info:

Thank you for looking into this.

--- Additional comment from aoeuser on 2014-08-26 10:31:08 EDT ---

Another work-around is to use rsync rather than --copy-storage-all.  The following example assumes
- a single qcow2 file in the default pool
- ssh transport

#!/bin/bash
VM=$1
TARGET=$2
vm_path=/var/lib/libvirt/images/$VM.img
rsync -S --progress $vm_path root@$TARGET:$vm_path && \
virsh migrate --live --suspend --verbose $VM qemu+ssh://$TARGET/system && \
rsync -S --progress $vm_path root@$TARGET:$vm_path && \
virsh -c qemu+ssh://$TARGET/system resume $VM
# end script

Results in a slightly longer suspended period, but saves disk space.

--- Additional comment from Bryce Pier on 2015-03-24 11:36:39 EDT ---

The same ballooning also occurs when sparse qcow2 images are used.

--- Additional comment from Michal Privoznik on 2015-04-02 09:37:08 EDT ---

This issue is still under investigation.

--- Additional comment from Michal Privoznik on 2015-04-02 12:50:26 EDT ---

Patch proposed upstream:

https://www.redhat.com/archives/libvir-list/2015-April/msg00130.html

--- Additional comment from Michal Privoznik on 2015-04-13 05:57:45 EDT ---

During review it was pointed out that this can hardly be a libvirt issue, since the user pre-creates the storage themselves. I suspect qemu does not preserve sparse files during storage migration. Switching over to qemu then.

--- Additional comment from Stefan Hajnoczi on 2015-05-07 10:34:21 EDT ---

I think the issue is that drive-mirror (on the source) and run-time NBD server (on the destination) don't have anything like the has_zero_init logic that qemu-img convert uses to preserve sparseness.

The legacy block migration (migrate -b) feature preserved sparseness.  It checked for zeroes on the source host and sent a special flag to the destination host instead of the full zero data.
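
As a rough local analogy of that zero-detection idea (just a sketch, not the migration code itself; it assumes GNU coreutils dd and a raw source image at an illustrative path):

# show which ranges of the image are data vs. zero/unallocated
qemu-img map --output=json /var/lib/libvirt/images/ncr.raw

# a sparse-aware copy: dd seeks over blocks that read back as all zeroes
# instead of writing them, which is roughly what migrate -b achieved on the wire
dd if=/var/lib/libvirt/images/ncr.raw of=/tmp/ncr-copy.raw bs=1M conv=sparse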

Comment 2 Stefan Hajnoczi 2015-05-07 16:35:37 UTC
John, we talked about this bug.  Feel free to reassign if you don't have time to tackle it.

Comment 3 Peter Krempa 2015-07-31 11:27:31 UTC
*** Bug 1248996 has been marked as a duplicate of this bug. ***

Comment 5 Evgeny Barsukov 2015-09-22 23:36:41 UTC
I have a better workaround! :) tar -S copies sparse files very quickly if they have a lot of free space inside, and a temporary external snapshot makes the whole job safe.

#!/bin/bash

VM=$1
TARGET=$2
STOR="/home/guest_images/"

cd $STOR

# make temporary external disk snapshot named "mig"
virsh snapshot-create-as $VM mig --disk-only --atomic

# remove snapshot from metadata because virsh migrate does not like existing snapshots
virsh snapshot-delete $VM mig --metadata

# copy base image
tar --totals --checkpoint=.8192  -Scvf - $VM.qcow2 | ssh $TARGET "tar -C $STOR -xf -"

# suspend VM
virsh suspend $VM

# copy snapshot image
tar --totals -Scvf - $VM.mig | ssh $TARGET "tar -C $STOR -xf -"

# live migrate
virsh migrate --live --undefinesource --persistent --verbose $VM qemu+ssh://$TARGET/system

# merge the snapshot into the base image file and pivot back to it
virsh -c qemu+ssh://$TARGET/system blockcommit $VM vda --active --pivot --verbose

# resume VM 
virsh -c qemu+ssh://$TARGET/system resume $VM

#remove orphaned snapshot file
ssh $TARGET "cd $STOR; rm -f $VM.mig"

#remove local disk files if necessary
# rm -f $VM.*

Comment 6 Bryce Pier 2015-09-23 16:18:58 UTC
Unfortunately this work around does not work in RHEL6 due to the QEMU binary not supporting snapshots:

# virsh snapshot-create-as prgtwb02 gitlab --disk-only --atomic
error: Operation not supported: live disk snapshot not supported with this QEMU binary

# yum list installed qemu-kvm
Installed Packages
qemu-kvm.x86_64                                                                   2:0.12.1.2-2.448.el6_6.4

Comment 7 Kashyap Chamarthy 2015-09-23 18:32:15 UTC
(In reply to Bryce Pier from comment #6)
> Unfortunately this work around does not work in RHEL6 due to the QEMU binary
> not supporting snapshots:
> 
> # virsh snapshot-create-as prgtwb02 gitlab --disk-only --atomic
> error: Operation not supported: live disk snapshot not supported with this
> QEMU binary

Hmm, that's expected from base RHEL 'qemu-kvm' package.

> # yum list installed qemu-kvm
> Installed Packages
> qemu-kvm.x86_64                                                             

You'd need 'qemu-kvm-rhev' RPM package for the above command to work.

Comment 8 James (purpleidea) 2015-09-23 19:46:46 UTC
(In reply to Evgeny Barsukov from comment #5)
> I have a better workaround! :) tar -S copies sparsed files very quickly, if
> they have a lot of free space inside. And temporary external snapshot makes
> all job safe.
> 

Unless I've misunderstood something, this isn't the same thing, because --copy-storage-all is a live migration, whereas your solution looks like a suspend+resume.

Secondly, between the snapshot and the suspend operation things could happen, so your disk snapshot isn't necessarily up to date. Wouldn't this perhaps cause data loss, or worse, an ACK sent to a user for data that never actually ends up stored in the image?

I may be way off here, but in my half sleepy eyes, this is what I saw.

Cheers

Comment 10 Cole Robinson 2016-05-02 20:41:41 UTC
*** Bug 817700 has been marked as a duplicate of this bug. ***

Comment 11 Vasiliy G Tolstov 2016-10-25 14:02:16 UTC
any news about this bug?

Comment 12 Ademar Reis 2016-11-16 19:22:47 UTC
(In reply to Vasiliy G Tolstov from comment #11)
> any news about this bug?

Vasiliy, this bug is being processed and considered for an upcoming RHEL release. However, if the issue is critical or in any way time-sensitive to you or your organization, please raise a ticket through your regular Red Hat support channels to make certain it receives the proper attention and prioritization, which will result in a timely resolution.
                                                                                
For information on how to contact the Red Hat production support team, please visit: https://www.redhat.com/support/process/production/#howto

Comment 14 John Snow 2017-04-26 23:56:11 UTC
Picked up in rebase.

Comment 16 Yanqiu Zhang 2017-05-05 02:21:00 UTC
I can still reproduce this in my testing.

Pkg version:
qemu-kvm-rhev-2.9.0-2.el7.x86_64
libvirt-3.2.0-4.el7.x86_64

Steps:
1. Start a guest with local image, which only exists on source.
# virsh dumpxml V|grep 'disk t' -A3
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/libvirt/images/V.qcow2'/>
      <target dev='vda' bus='virtio'/>

Wait for the guest OS to be fully up, then check the image info:
# qemu-img info /var/lib/libvirt/images/V.qcow2
image: /var/lib/libvirt/images/V.qcow2
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 1.3G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

2.Do copy-storage migration
# virsh migrate V --live qemu+ssh://{target_ip}/system --verbose  --copy-storage-all
Migration: [100 %]

3.On target, check image info
[target_host]# qemu-img info /var/lib/libvirt/images/V.qcow2
image: /var/lib/libvirt/images/V.qcow2
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 10G                     <== extends to 10G
cluster_size: 65536
Format specific information:
    compat: 0.10
    refcount bits: 16

Actual result:
After copy-storage migration, on the target the "disk size" grows from 1.3G to 10G, equal to the "virtual size".

Comment 17 Qianqian Zhu 2017-05-05 08:08:55 UTC
Reopen per comment 16

Comment 18 John Snow 2017-05-08 21:57:20 UTC
I think we might be testing slightly different things.

I tested raw file sparseness, which appears to still be working upstream and should be working in qemu-kvm-rhev. Previously, when migrating a sparse .raw file, the sparseness was not preserved cross-migration. Now, it is.

This was due to a bug in the lvalue used to store the return value of bdrv_get_block_status_above() in mirror_iteration(). We were truncating an int64_t to an int, which meant that every 2GB the value would erroneously fluctuate between negative and positive numbers.

This fluctuation prevented accurate determination of whether to invoke mirror_do_read or mirror_do_zero_or_discard. Because of this, we'd copy more data than strictly necessary.

Now, with regards to qcow2 -- let's be careful to differentiate qcow2's sparseness from sparseness at the filesystem level. It's entirely possible for a qcow2 that is "fully allocated", with zeroes written all throughout the file, to be "sparse" at the actual filesystem level; check this with `ls -lahs` to see the discrepancy between virtual-fs size and allocated-fs size.
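
A quick illustrative way to see all of these sizes at once (the path is an example; the first column of ls -s is the filesystem-allocated size, ls -l shows the apparent size, and qemu-img info shows the guest-visible virtual size):

# filesystem view: allocated blocks (first column) vs. apparent file size
ls -lahs /var/lib/libvirt/images/V.qcow2
du -h /var/lib/libvirt/images/V.qcow2
du -h --apparent-size /var/lib/libvirt/images/V.qcow2

# qemu's view: virtual size vs. "disk size" (allocation as qemu sees it)
qemu-img info /var/lib/libvirt/images/V.qcow2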

This BZ as I understood it was to preserve *filesystem* sparseness. This seems like a slightly different problem -- it looks like your qcow2 image has gone from "compat 1.1" to "compat 0.10" during the migration process; I monitored a similar migration from my setup and observed that right up until the very end, the qcow2 was not fully allocated and -- right at the last moment -- became fully allocated.

I think this is a different bug, or at least the root cause is different. Recommend splitting the BZ so that we can verify the change in behavior for RAW files, and then I will investigate the qcow2 behavior for a future release.

Comment 19 Fangge Jin 2017-05-09 02:50:45 UTC
Hi John

I reopened Bug 1248996 for the qcow2 images size problem. 
As for the image version changing from "compat 1.1" to "compat 0.10", there was a bug (Bug 1371749) that was closed as NOTABUG; I don't know if the two problems are related.

Comment 20 Yanqiu Zhang 2017-05-09 03:42:44 UTC
I retested by pre-creating an image on target:

Source image info:
# qemu-img info /var/lib/libvirt/images/V.qcow2
image: /var/lib/libvirt/images/V.qcow2
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 1.3G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

Scenario 1. compat=1.1
Steps
1.pre-create an image on target
[target]# qemu-img create -f qcow2 V.qcow2 10G
Formatting 'V.qcow2', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
[target]# qemu-img info V.qcow2
image: V.qcow2
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 196K
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

2.migrate from source to target
[source]# virsh start V
Domain V started
[source]# virsh migrate V --live qemu+ssh://{target_ip}/system --verbose  --copy-storage-all
Migration: [100 %]

3.After migration, target image disk size equals to the value on source
# qemu-img info V.qcow2
image: V.qcow2
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 1.3G                <==1.3G, equals to source disk size
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

Scenario 2. compat=0.10
1.pre-create an image on target
# qemu-img create -f qcow2  -o compat=0.10  V.qcow2 10G
Formatting 'V.qcow2', fmt=qcow2 size=10737418240 compat=0.10 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
# qemu-img info V.qcow2
image: V.qcow2
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 196K
cluster_size: 65536
Format specific information:
    compat: 0.10
    refcount bits: 16

2.migrate from source to target

3.After migration, target image disk size extends to the value of virtual size
# qemu-img info V.qcow2
image: V.qcow2
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 10G                        <== extends to 10G
cluster_size: 65536
Format specific information:
    compat: 0.10
    refcount bits: 16

Combining this with comment 16: whether or not the image pre-exists on the target, if the image version is compat=0.10 the disk size extends to the value of the virtual size.
So the image version compat=0.10 is what matters.

Comment 21 Nikolay Zadoya 2017-05-10 10:54:24 UTC
This patch does not help us. When migrating, the same drives become thick ...

Comment 22 John Snow 2017-05-10 15:02:28 UTC
(In reply to Nikolay Zadoya from comment #21)
> This patch does not help us. When migrating, the same drives become thick ...

Who are you? Who is "us"? What patch are you talking about? What is your use case? If you're talking about qcow2 files ballooning on migrate, you're in the wrong BZ; you want either 1371749 or 1248996.

Advise you to go through your support channels to receive attention expressly for your usage case, or otherwise to file an appropriate BZ for the exact behavior that you're seeing.

Comment 23 John Snow 2017-05-10 23:48:47 UTC
(In reply to yanqzhan from comment #20)
> I retested by pre-creating an image on target:
> 
...
> 
> Combining with comment 16, whether image pre-exists on target, if the image
> version is compat=0.10, the disk size both will extends to the value of
> virtual size.
> So the image version compat=0.10 affects.

Thank you for testing this. I will investigate if there is a way to preserve sparseness to 0.10 images, but that will not happen for 7.4.

(In reply to JinFangge from comment #19)
> Hi John
> 
> I reopened Bug 1248996 for the qcow2 images size problem. 
> As for the image version changing from "compat 1.1" to "compat 0.10", there
> was bug(Bug 1371749) that's closed as NOTABUG, I don't know if the two
> problems have any relations.

Alright, so let me recap what I know about this situation.

-The original bug was filed against sparse raw images.

-The bug preventing sparse images from being transferred as sparse was corrected by upstream commit 39c11580f3af8a96a7fba9b8b80a047a0b88b0ec

-This commit was sufficient to prevent sparse raw files from ballooning, which fixes the original use case reported by James Shubin / Stefan Hajnoczi. The above bug ALSO prevented qcow2 files from being transferred as sparse, even with workarounds such as pre-creating 1.1-compatible images on the destination.

-libvirt creates 0.10 compat qcow2 images by default, which do not support explicit zero-clusters. This means that even though images are now being SENT as sparse, they are filling the HDD with explicit zeroes on the destination end (presumably only if detect_zeroes is not set, which is the default.)

-The qcow2 behavior can be avoided by pre-creating a qcow2v3 image prior to the migration process. I believe this is understood as best practice by libvirt regardless.
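
For anyone who needs that workaround today, a minimal sketch of pre-creating a v3 (compat=1.1) image on the destination before starting the migration (path and size are illustrative and must match the source disk):

# on the destination host, before virsh migrate --copy-storage-all
qemu-img create -f qcow2 -o compat=1.1 /var/lib/libvirt/images/V.qcow2 10G

# confirm the compat level; v3 (compat=1.1) images support explicit zero clusters
qemu-img info /var/lib/libvirt/images/V.qcow2 | grep compat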

Now, let's sink our teeth into this ludicrous bug. Why does zero-copy with raw work (using BLKDISCARD, no less) but fail with qcow2 0.10? (It fails both to simply not allocate clusters, and to allocate its zero clusters efficiently by using the raw file's efficient zero-writing mechanisms.)

Firstly, mirror currently uses mirror_do_zero_or_discard in the mirror iteration process. This is almost always going to do "zero" instead of "discard" because, as far as I understand it, bdrv_get_block_status_above is almost always going to return either DATA or ZERO. (both would have to be false for mirror to choose DISCARD.)

Then, we invoke this sequence of write operations:

blk_aio_pwrite_zeroes
blk_aio_prwv
blk_aio_write_entry
blk_co_pwritev
bdrv_co_pwritev
bdrv_co_do_zero_pwritev
bdrv_aligned_pwritev
bdrv_co_do_pwrite_zeroes

Here's where things start getting juicy.

We will attempt to call drv->bdrv_co_pwrite_zeroes, in this case qcow2_co_pwrite_zeroes.
Then we'll call qcow2_zero_clusters, which... does not really like the fact that we're trying to do zero writes on a QCOW2 0.10 image.

We return -ENOTSUP, back up to bdrv_co_do_zero_pwritev, which will then fill a bounce buffer with literal zeroes and continue its journey with bdrv_driver_pwritev -- losing the semantic information that this is a zero write. Inevitably, eventually, the qcow2 driver will pass the data along to its backing driver (file-posix, most likely) and instead of detecting the efficient write, will write out the dumb, big buffer of zeroes.

There are a few ways to optimize this in various ways:

(1) If we have no backing file, qcow2's write zeroes could literally just ignore the write if the clusters are already unmapped. It's the same net effect.

(2) If we cannot guarantee the safety of the above, we can allocate L2 entries as per usual, but forward the write_zeroes request down the stack. This way, the raw driver can decide if it is able to punch holes in the file to still accomplish sparse zero allocation with 0.10 images.

(3) Mirror could be improved to understand when it is able to discard target clusters instead of even attempting zero writes which may or may not get optimized to discards, provided that mirror was given unmap=true. (If the target has no backing file and has the zero_init property, simply unmapping should be sufficient here.)


As for what we should do with _this_ BZ:

The original BZ was submitted against raw images. I am proposing we verify this BZ with regard to sparse raw images (for 7.4) and use BZ #1248996 to discuss the qcow2-flavored variant of this problem.

yangzhan: can you re-test this using only raw images for now, and I will continue working on the qcow2 version of the problem in the other BZ? Thank you very much, and sorry for the confusion and hassle!

Comment 24 Yanqiu Zhang 2017-05-11 09:00:20 UTC
Hi John, please refer to my testing results below:

Pkg version:
qemu-kvm-rhev-2.9.0-3.el7.x86_64
libvirt-3.2.0-4.el7.x86_64

Steps:
Scenario1: image only exists on source

1. Start a guest with local image in raw format:
#  qemu-img info /var/lib/libvirt/images/V-raw.img 
image: /var/lib/libvirt/images/V-raw.img
file format: raw
virtual size: 5.0G (5368709120 bytes)
disk size: 3.6G             <==original 3.6G

2.Do copy-storage migration
# virsh migrate V-raw --live qemu+ssh://{target_ip}/system --verbose  --copy-storage-all
Migration: [100 %]

3.On target, check image info
# qemu-img info /var/lib/libvirt/images/V-raw.img 
image: /var/lib/libvirt/images/V-raw.img
file format: raw
virtual size: 5.0G (5368709120 bytes)
disk size: 1.3G            <==change to 1.3G

Guest os works well.

Scenario2: pre-create image on target

1. On target, pre-create a raw image
# qemu-img create V-raw.img 5g
Formatting 'V-raw.img', fmt=raw size=5368709120

# qemu-img info /var/lib/libvirt/images/V-raw.img 
image: /var/lib/libvirt/images/V-raw.img
file format: raw
virtual size: 5.0G (5368709120 bytes)
disk size: 0                <==original 0G for newly created 

2.Do copy-storage migration
[source]# virsh migrate V-raw --live qemu+ssh://{target_ip}/system --verbose  --copy-storage-all
Migration: [100 %]

3.After migration, check image info:
# qemu-img info /var/lib/libvirt/images/V-raw.img 
image: /var/lib/libvirt/images/V-raw.img
file format: raw
virtual size: 5.0G (5368709120 bytes)
disk size: 1.3G            <==1.3G, not 5G(virtual size)

Guest os works well.

Comment 25 John Snow 2017-05-11 18:36:19 UTC
yangzhan: thanks.

Per Comment #23 and #24 I am moving this back to ON_QA with the understanding that the qcow2 failure will be addressed in #1248996, and with the expectation that yangzhan will be able to simply move this to VERIFIED as per Comment #24.

Comment 26 Qianqian Zhu 2017-05-12 03:17:51 UTC
yanqzhan,
Do you think this bz can be moved to VERIFIED per Comment25? Thanks.

Comment 27 Yanqiu Zhang 2017-05-12 04:02:26 UTC
Hi qianqian,
     We on the libvirt side can only provide assisting testing for this scenario, and are not very sure about new issues or regressions introduced by this patch.
     I suggest you do some related testing with qemu to make sure there are no new issues or regressions, then verify it yourself.
 
    Thanks.

Comment 28 Nikolay Zadoya 2017-05-24 09:10:50 UTC
Hello. Is there any news?

Comment 29 Cole Robinson 2017-05-24 11:34:45 UTC
(In reply to Nikolay Zadoya from comment #28)
> Hello. Is there any news?

See comment #22; you never responded to John's questions.

Comment 31 Qianqian Zhu 2017-06-01 09:17:22 UTC
Moving to verified per comment 24 and comment 30.

Comment 32 Nikolay Zadoya 2017-06-29 12:30:10 UTC
When will this bug be fixed?

Comment 33 John Snow 2017-06-29 19:26:32 UTC
(In reply to Nikolay Zadoya from comment #32)
> When will this bug be fixed?

comment #29
comment #22

Comment 34 Nikolay Zadoya 2017-06-29 20:20:52 UTC
(In reply to John Snow from comment #33)
> (In reply to Nikolay Zadoya from comment #32)
> > When will this bug be fixed?
> 
> comment #29
> comment #22

I have the same problem as here.

With live migration between nodes, the thin disk becomes thick.

I do not have support to write a ticket.

Comment 36 errata-xmlrpc 2017-08-01 23:27:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:2392

Comment 42 Alexandr 2017-09-25 04:46:51 UTC
Hello, are you planning to fix this error for qemu 2.9 centos
https://cbs.centos.org/koji/packageinfo?packageID=539

Comment 43 Alexandr 2017-09-25 04:48:30 UTC
Hello, are you planning to fix this error for qemu 2.9 centos
https://cbs.centos.org/koji/packageinfo?packageID=539

Comment 44 Nikolay Zadoya 2017-10-04 17:59:51 UTC
Hello, are you planning to fix this error for qemu 2.9 centos
https://cbs.centos.org/koji/packageinfo?packageID=539

Comment 45 Richard W.M. Jones 2017-10-04 19:42:06 UTC
Suggest asking the CentOS team.  This is a RHEL bug.