Bug 1207657 - RFE: QEMU Incremental live backup - push and pull modes
Summary: RFE: QEMU Incremental live backup - push and pull modes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.2
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: 7.4
Assignee: John Snow
QA Contact: Gu Nini
URL:
Whiteboard:
Duplicates: 1612020
Depends On:
Blocks: 1207659 Engine_Change_Block_Tracking 1288337 1518988 1518989 1558125 1658343 1734975 1734976
 
Reported: 2015-03-31 12:45 UTC by Ademar Reis
Modified: 2019-07-31 17:48 UTC (History)
CC List: 23 users

Fixed In Version: qemu-kvm-rhev-2.12.0-8.el7
Doc Type: Enhancement
Doc Text:
Clone Of:
Clones: 1207659 1518989 1593440
Environment:
Last Closed: 2018-11-01 11:01:10 UTC
Target Upstream Version:
Embargoed:


Attachments
Failed_to_start_guest_with_back2_right_screen_capture (66.54 KB, image/png), 2018-12-06 09:14 UTC, aihua liang

Description Ademar Reis 2015-03-31 12:45:50 UTC
QEMU is getting incremental live backup, where a dirty block bitmap is available to third-party applications so they can read only blocks that changed since the previous backup.

Upstream references:
http://lists.gnu.org/archive/html/qemu-devel/2015-03/msg04501.html
http://lists.gnu.org/archive/html/qemu-devel/2015-03/msg05796.html
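
For orientation, the push-mode half of this RFE boils down to two QMP commands: create a dirty bitmap on a drive, then later run a backup job that copies only the clusters recorded in that bitmap. A rough sketch (the names "drive0" and "bitmap0" and the target path are placeholders, and the exact arguments depend on the QEMU version):

{"execute": "block-dirty-bitmap-add",
 "arguments": {"node": "drive0", "name": "bitmap0"}}

... guest runs and dirties some blocks ...

{"execute": "drive-backup",
 "arguments": {"device": "drive0", "sync": "incremental",
  "bitmap": "bitmap0", "target": "inc0.qcow2", "format": "qcow2"}}

The pull mode instead leaves the data in place and lets a third-party client read the changed blocks itself (e.g. over NBD); comment 27 below walks through that in detail.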

Comment 3 John Snow 2015-12-23 21:48:43 UTC
Status:

QEMU 2.5 has the core mechanisms and the full transactional QMP API. The feature is currently fully usable for transient bitmaps (i.e., bitmaps that do not migrate or persist).

Migration: Patches have existed since 2.4, but have not been merged pending consensus on the persistence feature to make sure these two features can coexist.

Persistence: Specification for QCOW2 extension is nearing consensus upstream. Patches to store bitmaps in qcow2 files have existed since the 2.4 window, but need to be reworked to fit the new specification. 2.6 seems likely.
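
(A sketch of what persistent bitmap creation looks like under that design, with "drive0" and "bitmap0" as placeholder names:

{"execute": "block-dirty-bitmap-add",
 "arguments": {"node": "drive0", "name": "bitmap0", "persistent": true}}

With "persistent": true on a qcow2 node, the bitmap is written into the image file when it is closed and reloaded the next time the image is opened.)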

A format for storing bitmaps for arbitrary image formats that do not support native inlining of persistence data is being developed by Fam Zheng; we will be trying our hardest to make sure this goes in for 2.6.

Current estimate for full usability in QEMU: 2.6, or 2.7 at the latest.

No ETA for libvirt or virt-manager support; see https://bugzilla.redhat.com/show_bug.cgi?id=1217820

Comment 6 John Snow 2017-04-05 18:13:13 UTC
This is a brief progress report to update this BZ; the white paper that was authored is still a current and good source of information. You already have the most up-to-date information. There are no substantive changes to anything we've discussed for RHV/oVirt, including the push or pull model data access methods.


That said, here are some upstream references for you:

Here's [Nbd] [PATCH v5] doc: Add NBD_CMD_BLOCK_STATUS extension:
https://lists.nongnu.org/archive/html/qemu-devel/2016-12/msg01597.html

"Further tidy-up on block status":
https://sourceforge.net/p/nbd/mailman/nbd-general/thread/20161214150840.10899-1-alex%40alex.org.uk/#msg35551212
Or the QEMU mirror of that discussion: https://lists.nongnu.org/archive/html/qemu-devel/2016-12/msg01969.html

Latest version of the persistence series:
https://lists.nongnu.org/archive/html/qemu-devel/2017-02/msg06359.html

Latest version of the migration series:
https://lists.nongnu.org/archive/html/qemu-devel/2017-02/msg02508.html

Comment 7 John Snow 2017-11-16 22:35:39 UTC
Persistence has gone upstream.
Migration is nearly reviewed but will not be present in QEMU 2.11.

The minimum functionality for this feature should be present, Dennis Ritchie Willing, in QEMU 2.12.

Comment 9 Ademar Reis 2018-04-02 17:17:55 UTC
Push model for qcow2 is ready in upstream qemu-2.12 and will be made available with our rebase.

For the pull model we have a cond-nak (upstream), as the work is still in progress, with collaboration from other vendors and discussions with libvirt and NBD developers.

For raw support, we also have a cond-nak (upstream), as the solution for that use case is still being discussed.

Comment 13 John Snow 2018-06-12 23:16:32 UTC
"push mode done in QEMU," we are awaiting libvirt pieces which are in development now. Additional QEMU pieces are being developed to facilitate libvirt's API which should be upstream soon.

Comment 18 Miroslav Rezanina 2018-07-24 14:04:04 UTC
Fix included in qemu-kvm-rhev-2.12.0-8.el7

Comment 24 Kevin Wolf 2018-08-14 14:18:20 UTC
*** Bug 1612020 has been marked as a duplicate of this bug. ***

Comment 27 Eric Blake 2018-08-17 19:57:20 UTC
tl;dr: I did the following steps on a Fedora 28 host with a libvirt domain, using self-built qemu 3.0 (although the point of this bug is that those same features should have been backported). It should be fairly straightforward to repeat the test using JUST qemu, as long as you have a QMP monitor open. I rely on bash as my shell.

With that intro out of the way, here are the steps I used to prove that incremental pull mode backup works; lines prefixed with ; are comments, # are command lines on the host, and guest# are command lines run in the guest. Several steps require running as root; I just did them all as root (hence my use of the # prefix) rather than messing with sudo (so be careful that you don't hose your own system).

; Part 0: setup
; start by creating a viable guest with a disk you want to experiment with
; I picked an existing libvirt domain, then prepare to add a NEW disk:
# dom=mydom
# orig=/path/to/orig.img
; but you can set those variables to whatever you want. The reason for
; a separate data disk is so that the guest OS only modifies the disk when
; I want, rather than having to sort out what else is going on; it also
; makes it very easy to use libguestfs to inspect the backup copies later on
# qemu-img create -f qcow2 $orig 100M
; 100M is plenty of size to see whether backups are full or incremental,
; without taking way too long copying lots of data
; at this point, I modified my libvirt domain; the corresponding qemu-only
; setup would be modifying the qemu command line to plug in the extra
; disk. I chose to expose the new disk as scsi, rather than virtio:
# diff -u $dom.good <(virsh dumpxml $dom)
...
-    <emulator>/usr/bin/qemu-kvm</emulator>
+    <emulator>/home/eblake/qemu/x86_64-softmmu/qemu-system-x86_64</emulator>
...
+    <disk type='file' device='disk'>
+      <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='native'/>
+      <source file='/path/to/$orig'/>
+      <backingStore/>
+      <target dev='sdc' bus='scsi'/>
+      <alias name='scsi0-0-1'/>
+      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
+    </disk>
...
+  <seclabel type='none' model='selinux'/>
#
; that last <seclabel> option was important for later hot-plugging of the
; scratch image needed during pull (once libvirt eventually manages this
; all through its own APIs, things will work with selinux labels in place,
; but for now, we need it out of the way. My testing succeeded with
; selinux enforcing
# virsh start $dom
; or the raw qemu command line, as appropriate
# virsh qemu-monitor-command $dom '{"execute":"nbd-server-start",
 "arguments":{"addr":{"type":"inet",
  "data":{"host":"localhost", "port":"10809"}}}}'
; the qemu instance needs to have an NBD server. It can be started once
; up front, rather than having to start/teardown/re-enable it through the
; later steps. I purposefully put the server on the NBD default port on
; the local machine; later steps need tweaking if you point it somewhere else
guest# cat /proc/partitions
...
8 0 102400 sda
; this was verifying that the guest sees the 100M data disk alongside
; everything else. I did this mostly to make sure I didn't wipe the guest's
; OS partition in the next steps :)
guest# mke2fs /dev/sda
; the command asks if I really meant to write the filesystem onto the
; disk as a whole, rather than partitioning it. I answered 'y'
guest# mount /dev/sda /mnt/sysimage
guest# touch /mnt/sysimage/a
guest# dd if=/dev/zero of=/mnt/sysimage/b bs=64k count=10
guest# dd if=/dev/urandom of=/mnt/sysimage/c bs=64k count=10
guest# md5sum /mnt/sysimage/? |tee /mnt/sysimage/sum1
guest# ls /mnt/sysimage
a  b  c  lost+found  sum1
; there - we loaded some data onto the disk. Between formatting it, an
; empty file a, an all-0 file b, a random file c, and a record of
; checksums (particularly useful for checking the state of random files),
; we've written over 20 clusters (>1.28M) of data + filesystem overhead

; Part 1 - time to kick off our first backup
# qemu-img create -f qcow2 scratch.img 100M
# qemu-img rebase -u  -f qcow2 -b $orig -F qcow2 scratch.img
; pull mode backups require scratch space backed by the live image.
; It would be nice if you could do this in one step, with:
; # qemu-img create -f qcow2 -b $orig -F qcow2 scratch.img
; but some versions of qemu-img complain that $orig is in use,
; hence the two-step creation followed by writing in the backing file
# chown qemu:qemu scratch.img
; needed when libvirt runs qemu as a different user than root (might not
; be needed if you are running qemu directly)
# virsh qemu-monitor-command $dom --pretty '{"execute":"query-block"}' |
  grep '\(node-name\|orig\)'
; libvirt doesn't yet name all its nodes, so we need to figure out what
; node name qemu auto-assigned for use in commands below. In the grep
; output, the next-to-last line (the node-name right before the "file"
; line that points to $orig) is what to use. Or, if you are doing your
; own qemu command line, you could be sure to name the node up front
# node=#block522
; obviously, that line will be whatever qemu actually told you
# virsh qemu-monitor-command $dom '{"execute":"blockdev-add",
 "arguments":{"driver":"qcow2", "node-name":"tmp", "file":{"driver":"file",
  "filename":"'$PWD'/scratch.img"},
  "backing":"'$node'"}}'
; before we can do the pull, we need scratch storage exposed to qemu
# virsh qemu-monitor-command $dom '{"execute":"transaction",
 "arguments":{"actions":[
  {"type":"blockdev-backup", "data":{
   "device":"'$node'", "target":"tmp", "sync":"none" }},
  {"type":"block-dirty-bitmap-add", "data":{
   "node":"'$node'", "name":"bitmap0"}}
 ]}}'
; this kicks off the block job that must be running for as long as the
; pull mode will be active, and simultaneously creates a new bitmap named
; bitmap0 for tracking all changes after this point in time
; Had we wanted to do a push mode backup instead, we would have used
; "sync":"full", and the scratch image we attached would be the backup
; when the job completes, and would not need the next command
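; for comparison, a sketch of that push-mode variant: the same transaction,
; still creating bitmap0 at the same point in time, but asking for a full
; copy into the attached target instead of copy-before-write:
; # virsh qemu-monitor-command $dom '{"execute":"transaction",
;  "arguments":{"actions":[
;   {"type":"blockdev-backup", "data":{
;    "device":"'$node'", "target":"tmp", "sync":"full" }},
;   {"type":"block-dirty-bitmap-add", "data":{
;    "node":"'$node'", "name":"bitmap0"}}
;  ]}}'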
# virsh qemu-monitor-command $dom '{"execute":"nbd-server-add",
 "arguments":{"device":"tmp"}}'
; our NBD server finally has something to export
guest# dd if=/dev/urandom of=/mnt/sysimage/a bs=64k count=10
guest# md5sum /mnt/sysimage/? | tee /mnt/sysimage/sum2
; now that the point in time has been established, make some modifications
; in the guest file system, to prove that we are capturing the data at
; the point in time and not what changed after. This changed 'a' from an
; empty file to now being 640k of random contents
; Note that I envision ALL of the commands in step 1 to be performed by
; the single libvirt command virDomainBackupBegin()

; Part 2 - grab the first backup image
# qemu-img create -f qcow2 back1.img 100M
; we want to read data off the NBD export and into our first backup file,
; which I just created by the name back1.img. I chose qcow2 as my format
; for the backup file because it is easy to chain two qcow2 backups back
; into a coherent image, later in the demo. But it is also acceptable to
; create whatever format you want - after all, this part is simulating
; what the third-party client does with the NBD data
# modprobe nbd
; Here, I'm relying on the kernel NBD module to make it easier to access
; subsets of an NBD export by using native file system tools
; Next, I'm defining a helper function for copying JUST the subsets of
; an NBD file that are interesting. This relies on both NBD server and
; client supporting NBD_CMD_BLOCK_STATUS with useful information. More
; comments about the function body below
# copyif() {
 if test $# -lt 3; then
   echo 'usage: copyif [true|false] src dst'
   return 1
 fi
 if $1; then
   map_from="-f raw $2"
 else
   map_from="--image-opts driver=nbd,export=tmp,server.type=inet"
   map_from+=",server.host=localhost,server.port=10809"
   map_from+=",x-dirty-bitmap=qemu:dirty-bitmap:bitmap0"
 fi
 qemu-img info -f raw $2 || return
 qemu-img info -f qcow2 $3 || return
 qemu-nbd -r -f raw -c /dev/nbd0 $2 || return
 qemu-nbd -f qcow2 -c /dev/nbd1 $3 ||
   { ret=$?; qemu-nbd -d /dev/nbd0; return $ret; }
 ret=0
 while read line; do
   [[ $line =~ .*start.:.([0-9]*).*length.:.([0-9]*).*data.:.$1.* ]] || continue
   start=${BASH_REMATCH[1]} len=${BASH_REMATCH[2]}
   echo " $start $len:" dd if=/dev/nbd0 of=/dev/nbd1 bs=64k skip=$((start/64/1024)) seek=$((start/64/1024)) count=$((len/64/1024))
   dd if=/dev/nbd0 of=/dev/nbd1 bs=64k skip=$((start/64/1024)) seek=$((start/64/1024)) count=$((len/64/1024)) conv=fdatasync || { ret=2; break; }
 done < <(qemu-img map --output=json $map_from)
 qemu-nbd -d /dev/nbd0
 qemu-nbd -d /dev/nbd1
 if test $ret = 0; then echo 'Success!'; fi
 return $ret
}
; the function started with argument validation, then sets $map_from
; to the source file that qemu-img will use to determine which portions
; of the source image to read. Our first use of this function is reading
; anything that contains data (we could read the entire 100M guest contents,
; but since we know it defaulted to all-0s, it's easier to just read the
; portions that the guest wrote prior to the point in time where we started
; the transaction). Later, we will use this function to read only the dirty
; portions of the file (which are advertised over NBD via the x-dirty-bitmap
; hack, which has to be explicitly spelled out via --image-opts. Adjust that
; line to match the qemu NBD server you set up in step 0)
; Next, the function starts two separate kernel NBD mappings, one of the
; source (the NBD export we turned on in step 1), the other of the
; destination (the file we just created). (I would have liked to use
; 'qemu-img dd' to bypass having to use the kernel NBD module, but that
; isn't powerful enough in qemu 3.0).
; Then the function loops over the map produced by <(qemu-img ...) to
; determine which portions of the file are interesting, and uses 'dd'
; to copy just those portions into the backup file.
; Finally, it tears down the kernel NBD devices
# copyif true nbd://localhost:10809/tmp back1.img
; actually make the copy. Adjust the second argument to match Step 0
# ls -l back1.img
; I see an image size of 6684672 - that's more than the 1.28M of file
; data that I wrote in the guest, but the rest can be explained as the
; filesystem overhead itself of having formatted the entire disk as ext2,
; coupled with any rounding where qemu writes an entire 64k cluster even
; if only 512 bytes within the cluster were touched
# guestfish -r -a back1.img << \EOF
run
fsck ext2 /dev/sda
fsck ext2 /dev/sda
mount /dev/sda /
ls /
checksum md5 /a
EOF
; time to inspect the image, without modifying it
; the first fsck returns a status of 0x1 - that's to be expected, because
; we grabbed the backup copy WHILE the guest was mounted (that is, because
; we didn't freeze guest I/O, the partition was mounted, and the backup acts
; as if it was the result of a hard power loss right at that point in time);
; but note that the second fsck has status 0 because the image was complete.
; The checksum prints d41d8cd98f00b204e9800998ecf8427e because file 'a'
; was still empty at the point in time where we started our backup, even
; though the guest has since written into 'a'
# virsh qemu-monitor-command $dom '{"execute":"nbd-server-remove",
 "arguments":{"name":"tmp"}}'
# virsh qemu-monitor-command $dom '{"execute":"block-job-cancel",
 "arguments":{"device":"drive-scsi0-0-1"}}'
# virsh qemu-monitor-command $dom '{"execute":"blockdev-del",
 "arguments":{"node-name":"tmp"}}'
; Now that the third-party pull backup is complete, we can clean up after
; ourselves; these three commands would all be done by the single libvirt
; API virDomainBackupEnd(). The scratch image is now useless (it contains
; only the clusters which were overwritten by the live guest while the
; backup job was live, but is NOT a coherent file system on its own)

; Part 3 - start an incremental backup
# qemu-img create -f qcow2 scratch.img 100M
# qemu-img rebase -u  -f qcow2 -b $orig -F qcow2 scratch.img
; since scratch is useless, it's easiest just to wipe it by recreating it
; afresh. We don't want any of the data that it previously held
# virsh qemu-monitor-command $dom '{"execute":"blockdev-add",
 "arguments":{"driver":"qcow2", "node-name":"tmp", "file":{"driver":"file",
  "filename":"'$PWD'/scratch.img"},
  "backing":"'$node'"}}'
# virsh qemu-monitor-command $dom '{"execute":"transaction",
 "arguments":{"actions":[
  {"type":"blockdev-backup", "data":{
   "device":"'$node'", "target":"tmp", "sync":"none" }},
  {"type":"block-dirty-bitmap-add", "data":{
   "node":"'$node'", "name":"bitmap1"}},
  {"type":"x-block-dirty-bitmap-disable", "data":{
   "node":"'$node'", "name":"bitmap0"}}
 ]}}'
; again, give qemu access to scratch storage, and kick off a new backup
; job. Note that this time, we simultaneously ended 'bitmap0' and started
; a new 'bitmap1'. If we had wanted an incremental push, we would have
; used the "drive-backup" command instead, with "sync":"incremental" and
; a "backup":"..." parameter
# virsh qemu-monitor-command $dom '{"execute":"nbd-server-add",
 "arguments":{"device":"tmp"}}'
# virsh qemu-monitor-command $dom '{"execute":"x-nbd-server-add-bitmap",
 "arguments":{"name":"tmp", "bitmap":"bitmap0"}}'
; this time, expose both the scratch disk, and the dirty bitmap, over NBD
guest# dd if=/dev/urandom of=/mnt/sysimage/a bs=64k count=10
guest# md5sum /mnt/sysimage/? | tee /mnt/sysimage/sum3
; and again, it's nice to modify the guest file system after the point in
; time, to prove that our backup contents correspond to the right time

; Part 4: copy just the incremental changes
# qemu-img create -f qcow2 back2.img 100M
; note that this image has no backing file (yet)
# copyif false nbd://localhost:10809/tmp back2.img
; there's our handy shell function from part 2, this time set to copy
; just the dirty clusters
# ls -l back2.img
; I see a size of 1310720. Reasonable (we wrote 640k contents to 'a',
; but in doing so also touched other parts of the file system, and there
; is some qemu overhead), and smaller than the 6M of back1.img.
# virsh qemu-monitor-command $dom '{"execute":"nbd-server-remove",
 "arguments":{"name":"tmp"}}'
# virsh qemu-monitor-command $dom '{"execute":"block-job-cancel",
 "arguments":{"device":"drive-scsi0-0-1"}}'
# virsh qemu-monitor-command $dom '{"execute":"blockdev-del",
 "arguments":{"node-name":"tmp"}}'
# rm scratch.img
; again, shut down the backup job

; Part 5: inspect things to see if it really was incremental
; remember, at this point, back2.img has no backing file
# guestfish -r -a back2.img << \EOF
run
fsck ext2 /dev/sda
fsck ext2 /dev/sda
mount /dev/sda /
checksum md5 /c
cat /sum2
EOF
; the fsck reports status 4 rather than 1, because there are unrecoverable
; errors (namely, any portion of the filesystem that was not modified is
; not present), but in spite of that, the image can still be "mounted"
; the checksum of file /c (which was supposed to be random contents from
; our first backup) is 157e39521e47ad1c923a94edd69ad59c - but that's
; the same as the checksum for file /b (which is all zeros). That makes
; sense - since file c did not change, none of its contents were included
; in the incremental backup, so while the inode was still legible, the
; file system ends up reading uninitialized clusters as if they were the
; file contents.
; Okay, reading an incomplete file system is not fun, let's do the real
; magic of pasting our two backup images back into one chain (and now
; you know why I stored both images as qcow2 rather than raw):
# qemu-img rebase -u -f qcow2 -F qcow2 -b back1.img back2.img
# guestfish -r -a back2.img << \EOF
run
fsck ext2 /dev/sda
fsck ext2 /dev/sda
mount /dev/sda /
ls /
checksum md5 /a
checksum md5 /b
checksum md5 /c
cat /sum2
EOF
; Amazing! fsck reports 0x1 then 0 as before when we first checked back1.img,
; and now the checksums all match what the guest claimed they should be.
; We have successfully reconstructed the state of the file system at the
; point of the second incremental backup.
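; one optional extra step: if you want that reconstructed point in time as a
; standalone image (say, to restore onto a fresh disk), flatten the chain with
; qemu-img convert, which reads through the backing files ('restore.img' is
; just a placeholder name):
; # qemu-img convert -O qcow2 back2.img restore.img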

Comment 31 Eric Blake 2018-09-21 22:09:01 UTC
The following tweaked version of my copyif() shell function got a bit-for-bit identical image, but without requiring the use of the kernel NBD module. (Note: I changed the argument order, so it's not quite a drop-in replacement for the shell function used in comment 27.)

qemu_img=/path/to/qemu-img
copyif2() {
if test $# -lt 2 || test $# -gt 3; then
  echo 'usage: copyif src dst [bitmap]'
  return 1
fi
if test "$1" != nbd://localhost:10809/tmp; then
  echo Please fix hard-coded references to one specific source
  return 1
fi
if test -z "$3"; then
  map_from="-f raw $1"
  state=true
else
  map_from="--image-opts driver=nbd,export=tmp,server.type=inet"
  map_from+=",server.host=localhost,server.port=10809"
  map_from+=",x-dirty-bitmap=qemu:dirty-bitmap:$3"
  state=false
fi
$qemu_img info -f raw $1 || return
$qemu_img info -f qcow2 $2 || return
ret=0
while read line; do
  [[ $line =~ .*start.:.([0-9]*).*length.:.([0-9]*).*data.:.$state.* ]] || continue
  start=${BASH_REMATCH[1]} len=${BASH_REMATCH[2]}
  echo
  echo " $start $len:"
  qemu-io -c "w -P 0 $start $len" -f qcow2 $2
  $qemu_img convert -C -O qcow2 \
    "json:{'driver':'null-co', 'size':$start}" \
    "json:{'driver':'raw', 'offset':$start, 'size':$len, \
      'file':{'driver':'nbd', 'server':{'type':'inet', \
        'host':'localhost', 'port':'10809'}, 'export':'tmp'}}" \
    $2.tmp || { ret=$?; break; }
  $qemu_img rebase -u -b $2 -F qcow2 -f qcow2 $2.tmp || { ret=$?; break; }
  $qemu_img commit -f qcow2 $2.tmp || { ret=$?; break; }
  \rm $2.tmp || { ret=$?; break; }
done < <($qemu_img map --output=json $map_from)
if test $ret = 0; then echo 'Success!'; fi
return $ret
}

The main idea was Max's suggestion of using qemu-img convert to concatenate null-co plus a subset of the source image into a temporary file, then committing that temporary file into the destination file. It's a bit more I/O being thrown around (data is copied twice - once from the NBD source to the temporary file, and again from the temporary file to the backup file, rather than straight from the NBD source to the backup file), but getting rid of the dependency on 'qemu-nbd -c' was worth it. I also had to pre-zero any section of the destination file being copied, as otherwise 'qemu-img convert' does zero-detection and ends up with a result that is sparser than the 'dd' method (qemu-img compare still claimed the images identical in that case, but bit-for-bit identical is nicer than worrying about whether the sparse areas are handled correctly). I could not use 'qemu-img convert -S 0' to avoid the zero-detection, as otherwise the portion of the image copied from null-co gets expanded into allocations.
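
For reference, invoking it the same way as parts 2 and 4 of comment 27 would look like this (the export name 'tmp' and the back1.img/back2.img names are the ones from that walkthrough):

# copyif2 nbd://localhost:10809/tmp back1.img
# copyif2 nbd://localhost:10809/tmp back2.img bitmap0

The first call copies every extent the map reports as data; the second copies only the extents flagged by bitmap0.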

Comment 32 Gu Nini 2018-09-25 01:59:22 UTC
(In reply to Eric Blake from comment #31)

Thanks Eric for the reply.

For the nbd module, I loaded it after recompiling the kernel, so I was able to finish the test in comment #27. Later I will try the approach in comment #31 and update the Polarion case if necessary.

Comment 33 Eric Blake 2018-09-26 20:37:03 UTC
Even more efficient, using an idea from Alberto: qemu-io's copy-on-read feature:

copyif2() {
if test $# -lt 2 || test $# -gt 3; then
  echo 'usage: copyif src dst [bitmap]'
  return 1
fi
if test "$1" != nbd://localhost:10809/tmp; then
  echo Please fix hard-coded references to one specific source
  return 1
fi
if test -z "$3"; then
  map_from="-f raw $1"
  state=true
else
  map_from="--image-opts driver=nbd,export=tmp,server.type=inet"
  map_from+=",server.host=localhost,server.port=10809"
  map_from+=",x-dirty-bitmap=qemu:dirty-bitmap:$3"
  state=false
fi
$qemu_img info -f raw $1 || return
$qemu_img info -f qcow2 $2 || return
ret=0
$qemu_img rebase -u -f qcow2 -F raw -b $1 $2
while read line; do
  [[ $line =~ .*start.:.([0-9]*).*length.:.([0-9]*).*data.:.$state.* ]] || continue
  start=${BASH_REMATCH[1]} len=${BASH_REMATCH[2]}
  echo
  echo " $start $len:"
  qemu-io -C -c "r $start $len" -f qcow2 $2
done < <($qemu_img map --output=json $map_from)
$qemu_img rebase -u -f qcow2 -b '' $2
if test $ret = 0; then echo 'Success!'; fi
return $ret
}

Comment 35 errata-xmlrpc 2018-11-01 11:01:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3443

Comment 40 aihua liang 2018-12-06 09:14:51 UTC
Created attachment 1512012 [details]
Failed_to_start_guest_with_back2_right_screen_capture

