Bug 1041569 - [NFR] libvirt: Returning the allocation watermark for all the images opened for writing during block-commit
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Eric Blake
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 819485 822165 1041564
Blocks: 1035038 1082754 1083310 1109920 1158094 1168327 1175276 1175314
 
Reported: 2013-12-12 17:37 UTC by Ademar Reis
Modified: 2018-07-16 12:34 UTC
CC List: 26 users

Fixed In Version: libvirt-1.2.8-11.el7
Doc Type: Enhancement
Doc Text:
Clone Of: 819485
Cloned to: 1158094 1175276 1175314
Environment:
Last Closed: 2015-03-05 07:28:44 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1154814 0 unspecified CLOSED Live Merge: Specify VIR_DOMAIN_XML_BLOCK_INFO when checking internal volume watermarks 2021-02-22 00:41:40 UTC
Red Hat Product Errata RHSA-2015:0323 0 normal SHIPPED_LIVE Low: libvirt security, bug fix, and enhancement update 2015-03-05 12:10:54 UTC

Internal Links: 1154814

Comment 5 Eric Blake 2014-06-20 21:49:10 UTC
Uggh.  I'm finally looking at this today, but the information that upstream qemu gives leaves a lot to be desired:

$ cd /tmp
$ rm -f base.img snap1.img snap2.img
$ # base.img <- snap1.img <- snap2.img
$ qemu-img create -f raw base.img 1G
$ qemu-img create -f qcow2 -b base.img -o backing_fmt=raw snap1.img
$ qemu-img create -f qcow2 -b snap1.img -o backing_fmt=qcow2 snap2.img
$ virsh create /dev/stdin <<EOF
<domain type='kvm'>
 <name>testvm1</name>
 <memory unit='MiB'>256</memory>
 <vcpu>1</vcpu>
 <os>
   <type arch='x86_64'>hvm</type>
 </os>
 <devices>
   <disk type='file' device='disk'>
     <driver name='qemu' type='qcow2'/>
     <source file='$PWD/snap2.img'/>
     <target dev='vda' bus='virtio'/>
   </disk>
   <graphics type='vnc'/>
 </devices>
</domain>
EOF
$ virsh blockcopy testvm1 vda /tmp/copy.img --shallow --verbose --wait
Block Copy: [100 %]
Now in mirroring phase
$ virsh qemu-monitor-command --pretty testvm1 '{"execute":"query-blockstats"}'
{
    "return": [
        {
            "device": "drive-virtio-disk0",
            "parent": {
                "stats": {
                    "flush_total_time_ns": 0,
                    "wr_highest_offset": 0,
                    "wr_total_time_ns": 0,
                    "wr_bytes": 0,
                    "rd_total_time_ns": 0,
                    "flush_operations": 0,
                    "wr_operations": 0,
                    "rd_bytes": 0,
                    "rd_operations": 0
                }
            },
            "stats": {
                "flush_total_time_ns": 0,
                "wr_highest_offset": 0,
                "wr_total_time_ns": 0,
                "wr_bytes": 0,
                "rd_total_time_ns": 73943,
                "flush_operations": 0,
                "wr_operations": 0,
                "rd_bytes": 512,
                "rd_operations": 1
            },
            "backing": {
                "parent": {
                    "stats": {
                        "flush_total_time_ns": 0,
                        "wr_highest_offset": 0,
                        "wr_total_time_ns": 0,
                        "wr_bytes": 0,
                        "rd_total_time_ns": 0,
                        "flush_operations": 0,
                        "wr_operations": 0,
                        "rd_bytes": 0,
                        "rd_operations": 0
                    }
                },
                "stats": {
                    "flush_total_time_ns": 0,
                    "wr_highest_offset": 0,
                    "wr_total_time_ns": 0,
                    "wr_bytes": 0,
                    "rd_total_time_ns": 0,
                    "flush_operations": 0,
                    "wr_operations": 0,
                    "rd_bytes": 0,
                    "rd_operations": 0
                },
                "backing": {
                    "parent": {
                        "stats": {
                            "flush_total_time_ns": 0,
                            "wr_highest_offset": 0,
                            "wr_total_time_ns": 0,
                            "wr_bytes": 0,
                            "rd_total_time_ns": 0,
                            "flush_operations": 0,
                            "wr_operations": 0,
                            "rd_bytes": 0,
                            "rd_operations": 0
                        }
                    },
                    "stats": {
                        "flush_total_time_ns": 0,
                        "wr_highest_offset": 0,
                        "wr_total_time_ns": 0,
                        "wr_bytes": 0,
                        "rd_total_time_ns": 0,
                        "flush_operations": 0,
                        "wr_operations": 0,
                        "rd_bytes": 0,
                        "rd_operations": 0
                    }
                }
            }
        }
    ],
    "id": "libvirt-11"
}

Quick - given that output, what's the watermark for copy.img?  If I can't even figure that out, there's no way I can amend libvirt to report new information.
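
(For a node that is visible in that output, the watermark is the "wr_highest_offset" counter; a crude extraction sketch, assuming the --pretty spacing shown above:

$ virsh qemu-monitor-command --pretty testvm1 '{"execute":"query-blockstats"}' | grep -o '"wr_highest_offset": [0-9]*'

But copy.img never shows up in that output at all, which is exactly the problem.)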

Comment 6 Eric Blake 2014-06-20 21:51:51 UTC
For that same example:

$ virsh qemu-monitor-command --pretty testvm1 '{"execute":"query-block"}'
{
    "return": [
        {
            "io-status": "ok",
            "device": "drive-virtio-disk0",
            "locked": false,
            "removable": false,
            "inserted": {
                "iops_rd": 0,
                "detect_zeroes": "off",
                "image": {
                    "backing-image": {
                        "backing-image": {
                            "virtual-size": 1073741824,
                            "filename": "/tmp/base.img",
                            "format": "raw",
                            "actual-size": 0,
                            "dirty-flag": false
                        },
                        "backing-filename-format": "raw",
                        "virtual-size": 1073741824,
                        "filename": "/tmp/snap1.img",
                        "cluster-size": 65536,
                        "format": "qcow2",
                        "actual-size": 200704,
                        "format-specific": {
                            "type": "qcow2",
                            "data": {
                                "compat": "1.1",
                                "lazy-refcounts": false
                            }
                        },
                        "backing-filename": "/tmp/base.img",
                        "dirty-flag": false
                    },
                    "backing-filename-format": "qcow2",
                    "virtual-size": 1073741824,
                    "filename": "/tmp/snap2.img",
                    "cluster-size": 65536,
                    "format": "qcow2",
                    "actual-size": 200704,
                    "format-specific": {
                        "type": "qcow2",
                        "data": {
                            "compat": "1.1",
                            "lazy-refcounts": false
                        }
                    },
                    "backing-filename": "/tmp/snap1.img",
                    "dirty-flag": false
                },
                "iops_wr": 0,
                "ro": false,
                "node-name": "__qemu##00000001TRFBCQLO",
                "backing_file_depth": 2,
                "drv": "qcow2",
                "iops": 0,
                "bps_wr": 0,
                "backing_file": "/tmp/snap1.img",
                "encrypted": false,
                "bps": 0,
                "bps_rd": 0,
                "file": "/tmp/snap2.img",
                "encryption_key_missing": false
            },
            "dirty-bitmaps": [
                {
                    "granularity": 65536,
                    "count": 0
                }
            ],
            "type": "unknown"
        }
    ],
    "id": "libvirt-12"
}

Comment 8 Eric Blake 2014-06-20 22:23:23 UTC
(In reply to Eric Blake from comment #5)
> Uggh.  I'm finally looking at this today, but the information that upstream
> qemu gives leaves a lot to be desired:
> 

Annotating what is here:

> $ virsh qemu-monitor-command --pretty testvm1
> '{"execute":"query-blockstats"}'
> {
>     "return": [
>         {
>             "device": "drive-virtio-disk0",
>             "parent": {
>                 "stats": {

"parent" of drive-virtio-disk0 is the raw file behind the qcow2 format, or /tmp/snap2.img

>             "stats": {

while "stats" contains the stats of access through the active image's qcow2 driver (so also /tmp/snap2.img; the two differ in whether they count the raw reads from disk or the qcow2 deciphering of the data)

>             "backing": {

"backing" is the backing file of the active layer, or /tmp/snap1.img

>                 "parent": {
>                     "stats": {

Just as snap2 had both raw and qcow2 stats, the backing file has "parent" (raw) stats...

>                 "stats": {

and actual (qcow2) stats

>                 "backing": {

Recurse another layer to the grandparent backing file, /tmp/base.img

>                     "parent": {
>                         "stats": {

Huh? /tmp/base.img is raw, so why does it have a "parent"?

>                     "stats": {

and how would the parent (raw) stats differ from the direct (also raw) stats?

(In reply to Eric Blake from comment #6)
> For that same example:
> 
> $ virsh qemu-monitor-command --pretty testvm1 '{"execute":"query-block"}'
> {

Look through this, and you see no mention of copy.img, even though the drive mirror is still active.  Of the four files in use by qemu, base.img and snap1.img are still read-only, while snap2.img and copy.img are read-write, but nothing exists to tell me about copy.img.
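
(A rough host-side cross-check of that read-only/read-write split, outside any qemu API; this sketch assumes a single qemu process whose command line contains the domain name:

pid=$(pgrep -f 'qemu.*testvm1' | head -n1)
for fd in /proc/$pid/fd/*; do
    img=$(readlink "$fd")
    case $img in
        /tmp/*.img)
            # last octal digit of flags: 0 = O_RDONLY, 2 = O_RDWR
            echo "$img $(awk '/^flags/ {print $2}' /proc/$pid/fdinfo/${fd##*/})"
            ;;
    esac
done

It confirms which of the four images are open read-write, but a watermark API still has to come from qemu itself.)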

Comment 9 Ademar Reis 2014-06-20 22:53:16 UTC
Setting NEEDINFO(Fam), who implemented it in QEMU (Bug 1041564)

See also https://bugzilla.redhat.com/show_bug.cgi?id=1041564#c8

Comment 10 Eric Blake 2014-06-20 23:13:40 UTC
Upstream qemu plus Jeff's patches for always naming nodes is a little better, in that I can now see the nodes; but this doesn't list stats on those nodes:

$ virsh qemu-monitor-command --pretty testvm1 '{"execute":"query-named-block-nodes"}'
{
    "return": [
        {
            "iops_rd": 0,
            "detect_zeroes": "off",
            "image": {

            },
            "iops_wr": 0,
            "ro": false,
            "node-name": "__qemu##0000000cCNPLEXSR",
            "backing_file_depth": 0,
            "drv": "qcow2",
            "iops": 0,
            "bps_wr": 0,
            "backing_file": "/tmp/snap1.img",
            "encrypted": false,
            "bps": 0,
            "bps_rd": 0,
            "file": "/tmp/copy.img",
            "encryption_key_missing": false
        },
        {
            "iops_rd": 0,
            "detect_zeroes": "off",
            "image": {

            },
            "iops_wr": 0,
            "ro": false,
            "node-name": "__qemu##0000000bWRSWQVOV",
            "backing_file_depth": 0,
            "drv": "file",
            "iops": 0,
            "bps_wr": 0,
            "encrypted": false,
            "bps": 0,
            "bps_rd": 0,
            "file": "/tmp/copy.img",
            "encryption_key_missing": false
        },
        {
            "iops_rd": 0,
            "detect_zeroes": "off",
            "image": {

            },
            "iops_wr": 0,
            "ro": true,
            "node-name": "__qemu##00000005PCGSOMKG",
            "backing_file_depth": 0,
            "drv": "raw",
            "iops": 0,
            "bps_wr": 0,
            "encrypted": false,
            "bps": 0,
            "bps_rd": 0,
            "file": "/tmp/base.img",
            "encryption_key_missing": false
        },
        {
            "iops_rd": 0,
            "detect_zeroes": "off",
            "image": {

            },
            "iops_wr": 0,
            "ro": true,
            "node-name": "__qemu##00000004APGOBHPC",
            "backing_file_depth": 0,
            "drv": "file",
            "iops": 0,
            "bps_wr": 0,
            "encrypted": false,
            "bps": 0,
            "bps_rd": 0,
            "file": "/tmp/base.img",
            "encryption_key_missing": false
        },
        {
            "iops_rd": 0,
            "detect_zeroes": "off",
            "image": {

            },
            "iops_wr": 0,
            "ro": true,
            "node-name": "__qemu##00000003MJDWQKOY",
            "backing_file_depth": 1,
            "drv": "qcow2",
            "iops": 0,
            "bps_wr": 0,
            "backing_file": "/tmp/base.img",
            "encrypted": false,
            "bps": 0,
            "bps_rd": 0,
            "file": "/tmp/snap1.img",
            "encryption_key_missing": false
        },
        {
            "iops_rd": 0,
            "detect_zeroes": "off",
            "image": {

            },
            "iops_wr": 0,
            "ro": true,
            "node-name": "__qemu##00000002IUHBJMJK",
            "backing_file_depth": 0,
            "drv": "file",
            "iops": 0,
            "bps_wr": 0,
            "encrypted": false,
            "bps": 0,
            "bps_rd": 0,
            "file": "/tmp/snap1.img",
            "encryption_key_missing": false
        },
        {
            "iops_rd": 0,
            "detect_zeroes": "off",
            "image": {

            },
            "iops_wr": 0,
            "ro": false,
            "node-name": "__qemu##00000001KKHDMHVB",
            "backing_file_depth": 2,
            "drv": "qcow2",
            "iops": 0,
            "bps_wr": 0,
            "backing_file": "/tmp/snap1.img",
            "encrypted": false,
            "bps": 0,
            "bps_rd": 0,
            "file": "/tmp/snap2.img",
            "encryption_key_missing": false
        },
        {
            "iops_rd": 0,
            "detect_zeroes": "off",
            "image": {

            },
            "iops_wr": 0,
            "ro": false,
            "node-name": "__qemu##00000000PJBVAHCU",
            "backing_file_depth": 0,
            "drv": "file",
            "iops": 0,
            "bps_wr": 0,
            "encrypted": false,
            "bps": 0,
            "bps_rd": 0,
            "file": "/tmp/snap2.img",
            "encryption_key_missing": false
        }
    ],
    "id": "libvirt-15"
}

Comment 11 Fam Zheng 2014-06-21 06:14:43 UTC
Indeed, the problem is that the mirror target blockdev is purely internal and not visible with query-blockstats. To solve this I have some options, all needing some more upstream work:

1) Add stat info to "query-named-block-nodes verbose=True".

2) Add stat info to "query-block-jobs stat=True".

3) Change "query-blockstats" to display the mirror target, which means giving it a name so it's included in the bdrv_states list.

Fam

Comment 14 Eric Blake 2014-06-24 14:49:44 UTC
(In reply to Fam Zheng from comment #11)
> Indeed, the problem is mirror target blockdev is purely internal and not
> visible with block-querystats. To solve this I have some options, all need
> some more upstream work:
> 
> 1) Add stat info in "query-named-block-nodes verbose=True".

Might be worth doing.  As long as the destination has a named node (whether for drive-mirror or active commit), the stats could be grabbed for the destination.  It also means a single query command would get ALL information at once.  But right now query-named-block-nodes does NOT display relationships between nodes, and it also depends on Jeff's patch to name all nodes (it is still up in the air whether that will make qemu 2.1).

> 
> 2) Add stat info in 'query-block-jobs stat=True"

This may be the easiest - the only time we care about the watermark for a BDS that is not the active layer is when that BDS is the destination of a block job (whether drive-mirror or block-commit), so getting the stats on the destination image as part of the job information would be the easiest way to find the information.
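
(For reference, the current query-block-jobs reply only carries job progress, fields like device, len, offset, and speed, with no per-image stats; option 2 would bolt the destination's stats onto that reply:

$ virsh qemu-monitor-command --pretty testvm1 '{"execute":"query-block-jobs"}'
)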

> 
> 3) Change "query-blockstats" to display mirror target, which means giving a
> name to it so it's included in the bdrv_states list.

As it is, query-blockstats should be enhanced to show names anyway, to make it obvious _which_ BDS is being referenced at each layer of the recursion.  So adding yet another optional member "target" alongside the existing "backing" member would be all that is needed to get the stats for the target destination of an active job.

My personal vote is for 3), but I could also live with 2) or 1) (in order of preference) if those are easier for qemu to add.
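
(Purely for illustration, not actual qemu output: with option 3 the recursion could grow an optional "target" sibling of "backing", so the copy destination's watermark becomes visible, e.g.

{
    "return": [
        {
            "device": "drive-virtio-disk0",
            "stats": { "wr_highest_offset": 0 },
            "backing": { "stats": { "wr_highest_offset": 0 } },
            "target": { "stats": { "wr_highest_offset": 0 } }
        }
    ]
}

where "target" would be present only while a block job is active.)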

Comment 15 Eric Blake 2014-09-13 03:26:28 UTC
Some upstream traffic on API thoughts (using virDomainGetXMLDesc to display allocation for all disks in one call): https://www.redhat.com/archives/libvir-list/2014-August/msg00207.html

Plus, I just realized that virDomainGetBlockInfo(dom, "vda[1]", ...) might work to get the info for the first backing element of the "vda" chain (similar to how we are able to use "vda[1]" as a blockcommit destination, even for gluster volumes).  However, the "vda[1]" notation for blockcommit depends on the recent storage source refactoring, so while it will be easy to support for RHEL 7.1, backporting it to anything earlier is probably a non-starter.
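
(Assuming a build that accepts the indexed notation, that call from virsh would look something like:

$ virsh domblkinfo testvm1 'vda[1]'

which would report the Capacity/Allocation/Physical triple for the first backing element instead of the active layer.)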

Comment 17 Ademar Reis 2014-10-28 14:21:18 UTC
(In reply to Eric Blake from comment #15)
> Some upstream traffic on API thoughts (using virDomainGetXMLDesc to display
> allocation for all disks in one call):
> https://www.redhat.com/archives/libvir-list/2014-August/msg00207.html
> 
> Plus, I just realized that virDomainGetBlockInfo(dom, "vda[1]", ...) might
> work to get the info for the first backing element of the "vda" chain
> (similar to how we are able to use "vda[1]" as a blockcommit destination,
> even for gluster volumes).  However, the "vda[1]" notation for blockcommit
> depends on the recent storage source refactoring, so while it will be easy
> to support for RHEL 7.1, backporting it to anything earlier is probably a
> non-starter.

I'm a bit confused about the QEMU side: is there anything pending (see comment #14)? If so, we need a QEMU BZ.

Comment 18 Eric Blake 2014-10-28 14:46:13 UTC
Current qemu provides enough information for libvirt to track allocation during block-commit, but NOT for block-copy.  So I'll go ahead and clone this.

Comment 20 Eric Blake 2014-12-06 08:16:46 UTC
Upstream progress: this series gets us closer to the agreed-upon interface for block-commit:
https://www.redhat.com/archives/libvir-list/2014-December/msg00370.html

Comment 26 Shanzhi Yu 2014-12-18 09:17:41 UTC
With libvirt-1.2.8-11.el7.x86_64, I ran the tests below.

Scenario I
Domain with a local file as the source file

1. Use a raw format file as the source file

1.1. Prepare a running domain with a raw format source file
# virsh domblklist r7 
Target     Source
------------------------------------------------
vda        /var/lib/libvirt/images/rhel7-raw.img

# qemu-img info /var/lib/libvirt/images/rhel7-raw.img
image: /var/lib/libvirt/images/rhel7-raw.img
file format: raw
virtual size: 9.8G (10485760000 bytes)
disk size: 6.2G


Open two terminals; in one, keep polling the domain block device statistics:
# for i in {1..100000}; do echo $i; virsh domstats r7 --block --backing; sleep 1; done

In the other terminal, perform the snapshot-create/blockcommit operations.

1.2. Create an external snapshot for the domain
# virsh snapshot-create-as r7 s1 --disk-only 
Domain snapshot s1 created

# virsh domblklist r7 
Target     Source
------------------------------------------------
vda        /var/lib/libvirt/images/rhel7-raw.s1

1.3. Log in to the guest and create a file

#dd if=/dev/zero of=/mnt/s1 bs=512 count=100000000

1.4. Do active blockcommit
# virsh blockcommit r7 vda --active  --verbose --wait --bandwidth 10 --pivot 
Block Commit: [ 54 %]

Check the results in the other terminal.

While the domain is shut off:

1
Domain: 'r7'
  block.count=1
  block.0.name=vda
  block.0.path=/var/lib/libvirt/images/rhel7-raw.img

After booting up the domain:

..
148
Domain: 'r7'
  block.count=1
  block.0.name=vda
  block.0.path=/var/lib/libvirt/images/rhel7-raw.img
  block.0.rd.reqs=0
  block.0.rd.bytes=0
  block.0.rd.times=0
  block.0.wr.reqs=0
  block.0.wr.bytes=0
  block.0.wr.times=0
  block.0.fl.reqs=0
  block.0.fl.times=0
  block.0.allocation=0
  block.0.capacity=10485760000
  block.0.physical=6615142400
..
151
Domain: 'r7'
  block.count=1
  block.0.name=vda
  block.0.path=/var/lib/libvirt/images/rhel7-raw.img
  block.0.rd.reqs=7710
  block.0.rd.bytes=254152704
  block.0.rd.times=2034562632
  block.0.wr.reqs=2321
  block.0.wr.bytes=6199808
  block.0.wr.times=261748234
  block.0.fl.reqs=90
  block.0.fl.times=1678090878
  block.0.allocation=9261780480
  block.0.capacity=10485760000
  block.0.physical=6615142400
..

NB:
the following fields change while the domain boots up (see the one-liner below):

block.0.rd.reqs  block.0.rd.bytes  block.0.rd.times  block.0.wr.reqs  block.0.wr.bytes  block.0.wr.times  block.0.fl.reqs  block.0.fl.times  block.0.allocation
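
(To watch just those fields instead of the full dump, one convenient variant, assuming watch(1) is available:

# watch -n1 "virsh domstats r7 --block --backing | grep -E 'allocation|physical|\.wr\.|\.rd\.|\.fl\.'"
)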

Create external snapshot

357
Domain: 'r7'
  block.count=2
  block.0.name=vda
  block.0.path=/var/lib/libvirt/images/rhel7-raw.ss
  block.0.rd.reqs=0
  block.0.rd.bytes=0
  block.0.rd.times=0
  block.0.wr.reqs=0
  block.0.wr.bytes=0
  block.0.wr.times=0
  block.0.fl.reqs=0
  block.0.fl.times=0
  block.0.allocation=0
  block.0.capacity=10485760000
  block.0.physical=200704
  block.1.name=vda
  block.1.path=/var/lib/libvirt/images/rhel7-raw.img
  block.1.backingIndex=1
  block.1.rd.reqs=8108
  block.1.rd.bytes=291110912
  block.1.rd.times=2153835100
  block.1.wr.reqs=2363
  block.1.wr.bytes=6429184
  block.1.wr.times=281294797
  block.1.fl.reqs=106
  block.1.fl.times=1879752863
  block.1.allocation=9261780480
  block.1.capacity=10485760000
  block.1.physical=6615142400

..

399
Domain: 'r7'
  block.count=2
  block.0.name=vda
  block.0.path=/var/lib/libvirt/images/rhel7-raw.ss
  block.0.rd.reqs=0
  block.0.rd.bytes=0
  block.0.rd.times=0
  block.0.wr.reqs=8
  block.0.wr.bytes=38912
  block.0.wr.times=1472792115
  block.0.fl.reqs=2
  block.0.fl.times=102201131
  block.0.allocation=982528
  block.0.capacity=10485760000
  block.0.physical=921600
  block.1.name=vda
  block.1.path=/var/lib/libvirt/images/rhel7-raw.img
  block.1.backingIndex=1
  block.1.rd.reqs=8108
  block.1.rd.bytes=291110912
  block.1.rd.times=2153835100
  block.1.wr.reqs=2363
  block.1.wr.bytes=6429184
  block.1.wr.times=281294797
  block.1.fl.reqs=106
  block.1.fl.times=1879752863
  block.1.allocation=9261780480
  block.1.capacity=10485760000
  block.1.physical=6615142400

NB:
after creating the external snapshot, the following fields keep changing:

block.0.rd.reqs block.0.rd.bytes block.0.rd.times block.0.wr.reqs  block.0.wr.bytes  block.0.wr.times  block.0.fl.reqs  block.0.fl.times  block.0.allocation block.0.physical

Create a file in the guest

NB:
while creating the file in the guest, the following fields keep changing:


block.0.rd.reqs block.0.rd.bytes block.0.rd.times block.0.wr.reqs  block.0.wr.bytes block.0.wr.times block.0.fl.reqs block.0.fl.times block.0.allocation block.0.physical 


Before commit
Domain: 'r7'
  block.count=2
  block.0.name=vda
  block.0.path=/var/lib/libvirt/images/rhel7-raw.ss
  block.0.rd.reqs=1137
  block.0.rd.bytes=83234816
  block.0.rd.times=27109000103
  block.0.wr.reqs=11351
  block.0.wr.bytes=5396881408
  block.0.wr.times=7365454900930
  block.0.fl.reqs=45
  block.0.fl.times=15778950082
  block.0.allocation=4907924992
  block.0.capacity=10485760000
  block.0.physical=4907868160
  block.1.name=vda
  block.1.path=/var/lib/libvirt/images/rhel7-raw.img
  block.1.backingIndex=1
  block.1.rd.reqs=8108
  block.1.rd.bytes=291110912
  block.1.rd.times=2153835100
  block.1.wr.reqs=2363
  block.1.wr.bytes=6429184
  block.1.wr.times=281294797
  block.1.fl.reqs=106
  block.1.fl.times=1879752863
  block.1.allocation=9261780480
  block.1.capacity=10485760000
  block.1.physical=6615142400

NB:
during the active blockcommit, only block.1.physical keeps changing

After the blockcommit finishes:

Domain: 'r7'
  block.count=1
  block.0.name=vda
  block.0.path=/var/lib/libvirt/images/rhel7-raw.img
  block.0.rd.reqs=8108
  block.0.rd.bytes=291110912
  block.0.rd.times=2153835100
  block.0.wr.reqs=2363
  block.0.wr.bytes=6429184
  block.0.wr.times=281294797
  block.0.fl.reqs=106
  block.0.fl.times=1879752863
  block.0.allocation=10485759488
  block.0.capacity=10485760000
  block.0.physical=9458880512

Comment 27 Shanzhi Yu 2014-12-18 09:21:13 UTC
Hi Eric,
Can you help confirm whether the test steps in comment 26 are right to verify this bug, i.e., whether I can test this new feature that way? If so, I will try to add more scenarios following the above steps. If not, can you give some hints on how to test it?

Thanks

Comment 28 Eric Blake 2014-12-19 22:45:19 UTC
(In reply to Shanzhi Yu from comment #27)
> Hi Eric,
> Can you help confirm whether the test steps in comment 26 are right to
> verify this bug, i.e., whether I can test this new feature that way? If so,
> I will try to add more scenarios following the above steps. If not, can you
> give some hints on how to test it?

The scenario that VDSM wants tested is a block device (raw partition or LVM partition) containing a qcow2 format, taking snapshots of it, and running block-commit.  Your tests used a file rather than a block device as the base of the chain and prove that the interface works, but one additional test showing that the allocation numbers for a qcow2-formatted block device change during a block-commit would be useful for the integration scenario this code was designed for.

Comment 29 Shanzhi Yu 2014-12-22 10:31:20 UTC
Scenario I: iSCSI backend

1. Guest source file is qcow2 format

1.1. Prepare logical volumes based on iSCSI storage

# iscsiadm --mode node --targetname iqn.2014-12.com.redhat:libvirt.shyu-3  --portal 10.66.5.38:3260  --login
Logging in to [iface: default, target: iqn.2014-12.com.redhat:libvirt.shyu-3, portal: 10.66.5.38,3260] (multiple)
Login to [iface: default, target: iqn.2014-12.com.redhat:libvirt.shyu-3, portal: 10.66.5.38,3260] successful.

# ll /dev/disk/by-path/
total 0
lrwxrwxrwx. 1 root root 9 Dec 22 14:35 ip-10.66.5.38:3260-iscsi-iqn.2014-12.com.redhat:libvirt.shyu-3-lun-1 -> ../../sdb

# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created

# vgcreate vg01 /dev/sdb
  Volume group "vg01" successfully created

# for i in 1 2 3 4;do lvcreate -L 10G -n lv0$i vg01 ;done 
  Logical volume "lv01" created.
  Logical volume "lv02" created.
  Logical volume "lv03" created.
  Logical volume "lv04" created.

1.2. Format the logical volumes as qcow2

# for i in 1 2 3 4;do qemu-img create -f qcow2 /dev/vg01/lv0$i 10G;done

Formatting '/dev/vg01/lv01', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off 
Formatting '/dev/vg01/lv02', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off 
Formatting '/dev/vg01/lv03', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off 
Formatting '/dev/vg01/lv04', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off 
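
(A quick sanity check, illustrative only, that each LV now carries a qcow2 header:

# qemu-img info /dev/vg01/lv01    # should report "file format: qcow2"
)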

1.3. Install a rhel7 domain with logical volume lv01 (qcow2 format)

# virsh dumpxml r7|grep disk -A5
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/dev/vg01/lv01'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>

1.4. Open another terminal and keep checking the domain block device statistics with the --backing option while doing snapshot-create/blockcommit operations

# for i in {1..10000};do echo $i;virsh domstats r7 --backing --block ;sleep 1; done

1.5. Create external snapshots using lv02, lv03, and lv04 as snapshot files, and write a file in the domain after each snapshot

# virsh snapshot-create-as r7 s1 --disk-only --diskspec vda,snapshot=external,file=/dev/vg01/lv02 
Domain snapshot s1 created

[guest]# dd if=/dev/zero of=/mnt/s1 bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 13.2612 s, 79.1 MB/s

# virsh snapshot-create-as r7 s2 --disk-only --diskspec vda,snapshot=external,file=/dev/vg01/lv03
Domain snapshot s2 created

[guest]# dd if=/dev/zero of=/mnt/s2 bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 13.2612 s, 79.1 MB/s

# virsh snapshot-create-as r7 s3 --disk-only --diskspec vda,snapshot=external,file=/dev/vg01/lv04
Domain snapshot s3 created

[guest]# dd if=/dev/zero of=/mnt/s3 bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 13.2612 s, 79.1 MB/s

# virsh dumpxml r7|grep disk -A16
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/dev/vg01/lv04'/>
      <backingStore type='block' index='1'>
        <format type='qcow2'/>
        <source dev='/dev/vg01/lv03'/>
        <backingStore type='block' index='2'>
          <format type='qcow2'/>
          <source dev='/dev/vg01/lv02'/>
          <backingStore type='block' index='3'>
            <format type='qcow2'/>
            <source dev='/dev/vg01/lv01'/>
            <backingStore/>
          </backingStore>
        </backingStore>
      </backingStore>
      <target dev='vda' bus='virtio'/>
..

NB:
block.x.allocation of lv02, lv03, and lv04 keeps changing while creating files in the guest


1.6. Keep checking the domain block device statistics with the --backing option during the blockcommit

Before the blockcommit:

# virsh domstats r7 --block --backing
Domain: 'r7'
  block.count=4
  block.0.name=vda
  block.0.path=/dev/vg01/lv04
  block.0.rd.reqs=17
  block.0.rd.bytes=1019904
  block.0.rd.times=1625703563
  block.0.wr.reqs=2101
  block.0.wr.bytes=1073664512
  block.0.wr.times=1989104289735
  block.0.fl.reqs=5
  block.0.fl.times=2863230683
  block.0.allocation=1075183104
  block.0.capacity=10737418240
  block.1.name=vda
  block.1.path=/dev/vg01/lv03
  block.1.backingIndex=1
  block.1.rd.reqs=292
  block.1.rd.bytes=7327744
  block.1.rd.times=2998284742
  block.1.wr.reqs=2016
  block.1.wr.bytes=1023684096
  block.1.wr.times=1889684115670
  block.1.fl.reqs=8
  block.1.fl.times=2092315591
  block.1.allocation=1025441280
  block.1.capacity=10737418240
  block.2.name=vda
  block.2.path=/dev/vg01/lv02
  block.2.backingIndex=2
  block.2.rd.reqs=7
  block.2.rd.bytes=40960
  block.2.rd.times=555838411
  block.2.wr.reqs=2047
  block.2.wr.bytes=1048587776
  block.2.wr.times=1813074411090
  block.2.fl.reqs=4
  block.2.fl.times=839518770
  block.2.allocation=1049492992
  block.2.capacity=10737418240
  block.3.name=vda
  block.3.path=/dev/vg01/lv01
  block.3.backingIndex=3
  block.3.rd.reqs=7601
  block.3.rd.bytes=125323264
  block.3.rd.times=6823281669
  block.3.wr.reqs=4254
  block.3.wr.bytes=1053747200
  block.3.wr.times=2001957787858
  block.3.fl.reqs=20
  block.3.fl.times=3580097187
  block.3.allocation=2243624448
  block.3.capacity=10737418240


1.6.1. Do a blockcommit on an inactive layer (from lv03 to lv02)


# virsh blockcommit r7 vda --top vda[1] --base vda[2] --verbose --wait --bandwidth 10

Block Commit: [ 45 %]
..

NB:

The value of block.2.allocation (/dev/vg01/lv02) keeps changing during the blockcommit.


After the blockcommit:

# virsh domstats r7 --block --backing
Domain: 'r7'
  block.count=3
  block.0.name=vda
  block.0.path=/dev/vg01/lv04
  block.0.rd.reqs=2697
  block.0.rd.bytes=25407488
  block.0.rd.times=11990867314
  block.0.wr.reqs=2372
  block.0.wr.bytes=1075705856
  block.0.wr.times=1994824064770
  block.0.fl.reqs=189
  block.0.fl.times=21458832205
  block.0.allocation=1077345792
  block.0.capacity=10737418240
  block.1.name=vda
  block.1.path=/dev/vg01/lv02
  block.1.backingIndex=1
  block.1.rd.reqs=7
  block.1.rd.bytes=40960
  block.1.rd.times=555838411
  block.1.wr.reqs=2047
  block.1.wr.bytes=1048587776
  block.1.wr.times=1813074411090
  block.1.fl.reqs=4
  block.1.fl.times=839518770
  block.1.allocation=2074082816
  block.1.capacity=10737418240
  block.2.name=vda
  block.2.path=/dev/vg01/lv01
  block.2.backingIndex=2
  block.2.rd.reqs=7601
  block.2.rd.bytes=125323264
  block.2.rd.times=6823281669
  block.2.wr.reqs=4254
  block.2.wr.bytes=1053747200
  block.2.wr.times=2001957787858
  block.2.fl.reqs=20
  block.2.fl.times=3580097187
  block.2.allocation=2243624448
  block.2.capacity=10737418240

1.6.2. Do a blockcommit on the active layer (from lv04 to lv01)

Before blockcommit

# virsh domstats r7 --block --backing
Domain: 'r7'
  block.count=3
  block.0.name=vda
  block.0.path=/dev/vg01/lv04
  block.0.rd.reqs=3265
  block.0.rd.bytes=39649280
  block.0.rd.times=13928609546
  block.0.wr.reqs=2716
  block.0.wr.bytes=1078469120
  block.0.wr.times=2150073507143
  block.0.fl.reqs=330
  block.0.fl.times=27431024850
  block.0.allocation=1087766016
  block.0.capacity=10737418240
  block.1.name=vda
  block.1.path=/dev/vg01/lv02
  block.1.backingIndex=1
  block.1.rd.reqs=7
  block.1.rd.bytes=40960
  block.1.rd.times=555838411
  block.1.wr.reqs=2047
  block.1.wr.bytes=1048587776
  block.1.wr.times=1813074411090
  block.1.fl.reqs=4
  block.1.fl.times=839518770
  block.1.allocation=2074082816
  block.1.capacity=10737418240
  block.2.name=vda
  block.2.path=/dev/vg01/lv01
  block.2.backingIndex=2
  block.2.rd.reqs=7601
  block.2.rd.bytes=125323264
  block.2.rd.times=6823281669
  block.2.wr.reqs=4254
  block.2.wr.bytes=1053747200
  block.2.wr.times=2001957787858
  block.2.fl.reqs=20
  block.2.fl.times=3580097187
  block.2.allocation=2243624448
  block.2.capacity=10737418240

# virsh blockcommit r7 vda --pivot  --verbose --wait --bandwidth 10
Block Commit: [ 72 %]


NB:

The value of block.2.allocation (/dev/vg01/lv01) keeps changing during the blockcommit.


After blockcommit

# virsh domstats r7 --block --backing
Domain: 'r7'
  block.count=1
  block.0.name=vda
  block.0.path=/dev/vg01/lv01
  block.0.rd.reqs=7604
  block.0.rd.bytes=125364224
  block.0.rd.times=6851145491
  block.0.wr.reqs=4254
  block.0.wr.bytes=1053747200
  block.0.wr.times=2001957787858
  block.0.fl.reqs=20
  block.0.fl.times=3580097187
  block.0.allocation=5378145792
  block.0.capacity=10737418240

2. Guest source file is raw format

2.1. Clean the environment and repeat step 1.1 to prepare it

2.2. Format logical volumes lv02, lv03, and lv04 as qcow2

# for i in 2 3 4;do qemu-img create -f qcow2 /dev/vg01/lv0$i 10G;done
Formatting '/dev/vg01/lv02', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off 
Formatting '/dev/vg01/lv03', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off 
Formatting '/dev/vg01/lv04', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off 

2.3. Install a rhel7 domain with logical volume lv01 (raw format)

#  virsh dumpxml r7|grep disk -A5
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/dev/vg01/lv01'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
..


2.4. Open another terminal and keep checking the domain block device statistics with the --backing option while doing snapshot-create/blockcommit operations

# for i in {1..10000};do echo $i;virsh domstats r7 --backing --block ;sleep 1; done

2.5. Create external snapshots using lv02, lv03, and lv04 as snapshot files, and write a file in the domain after each snapshot

# virsh snapshot-create-as r7 s1 --disk-only --diskspec vda,snapshot=external,file=/dev/vg01/lv02 
Domain snapshot s1 created

[guest]# dd if=/dev/zero of=/mnt/s1 bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 13.2612 s, 79.1 MB/s

# virsh snapshot-create-as r7 s2 --disk-only --diskspec vda,snapshot=external,file=/dev/vg01/lv03
Domain snapshot s2 created

[guest]# dd if=/dev/zero of=/mnt/s2 bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 13.2612 s, 79.1 MB/s

# virsh snapshot-create-as r7 s3 --disk-only --diskspec vda,snapshot=external,file=/dev/vg01/lv04
Domain snapshot s3 created

[guest]# dd if=/dev/zero of=/mnt/s3 bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 13.2612 s, 79.1 MB/s

2.6. Keep checking the domain block device statistics with the --backing option during the blockcommit

Before the blockcommit:

# virsh domstats r7 --block --backing
Domain: 'r7'
  block.count=4
  block.0.name=vda
  block.0.path=/dev/vg01/lv04
  block.0.rd.reqs=17
  block.0.rd.bytes=1019904
  block.0.rd.times=332538917
  block.0.wr.reqs=2195
  block.0.wr.bytes=1119808000
  block.0.wr.times=2026369882817
  block.0.fl.reqs=4
  block.0.fl.times=1802768581
  block.0.allocation=1121123840
  block.0.capacity=10737418240
  block.1.name=vda
  block.1.path=/dev/vg01/lv03
  block.1.backingIndex=1
  block.1.rd.reqs=16
  block.1.rd.bytes=1011712
  block.1.rd.times=35527504
  block.1.wr.reqs=1916
  block.1.wr.bytes=977416192
  block.1.wr.times=1988492312614
  block.1.fl.reqs=3
  block.1.fl.times=1737785915
  block.1.allocation=978320896
  block.1.capacity=10737418240
  block.2.name=vda
  block.2.path=/dev/vg01/lv02
  block.2.backingIndex=2
  block.2.rd.reqs=592
  block.2.rd.bytes=17367040
  block.2.rd.times=2284116239
  block.2.wr.reqs=2058
  block.2.wr.bytes=1048587264
  block.2.wr.times=1934014017135
  block.2.fl.reqs=2
  block.2.fl.times=612633532
  block.2.allocation=1049558528
  block.2.capacity=10737418240
  block.3.name=vda
  block.3.path=/dev/vg01/lv01
  block.3.backingIndex=3
  block.3.rd.reqs=13126
  block.3.rd.bytes=254214144
  block.3.rd.times=57351019838
  block.3.wr.reqs=6487
  block.3.wr.bytes=2107651072
  block.3.wr.times=2206967432432
  block.3.fl.reqs=50
  block.3.fl.times=8036785025
  block.3.allocation=8542551552
  block.3.capacity=10737418240



2.6.1. Do a blockcommit on an inactive layer (from lv03 to lv02)


# virsh blockcommit r7 vda --top vda[1] --base vda[2] --verbose --wait --bandwidth 10

Block Commit: [ 45 %]
..

NB:

The value of block.2.allocation (/dev/vg01/lv02) keeps changing during the blockcommit.


After blockcommit

# virsh domstats r7 --block --backing
Domain: 'r7'
  block.count=3
  block.0.name=vda
  block.0.path=/dev/vg01/lv04
  block.0.rd.reqs=19
  block.0.rd.bytes=1036288
  block.0.rd.times=337823030
  block.0.wr.reqs=2203
  block.0.wr.bytes=1119844864
  block.0.wr.times=2026600025938
  block.0.fl.reqs=8
  block.0.fl.times=3641398923
  block.0.allocation=1121254912
  block.0.capacity=10737418240
  block.1.name=vda
  block.1.path=/dev/vg01/lv02
  block.1.backingIndex=1
  block.1.rd.reqs=592
  block.1.rd.bytes=17367040
  block.1.rd.times=2284116239
  block.1.wr.reqs=2058
  block.1.wr.bytes=1048587264
  block.1.wr.times=1934014017135
  block.1.fl.reqs=2
  block.1.fl.times=612633532
  block.1.allocation=2027159040
  block.1.capacity=10737418240
  block.2.name=vda
  block.2.path=/dev/vg01/lv01
  block.2.backingIndex=2
  block.2.rd.reqs=13126
  block.2.rd.bytes=254214144
  block.2.rd.times=57351019838
  block.2.wr.reqs=6487
  block.2.wr.bytes=2107651072
  block.2.wr.times=2206967432432
  block.2.fl.reqs=50
  block.2.fl.times=8036785025
  block.2.allocation=8542551552
  block.2.capacity=10737418240


2.6.2. Do a blockcommit on the active layer (from lv04 to lv01)

Before blockcommit

# virsh domstats r7 --block --backing


# virsh blockcommit r7 vda --pivot  --verbose --wait --bandwidth 10
Block Commit: [ 72 %]


NB:

The value of block.2.allocation (/dev/vg01/lv01) does NOT change during the blockcommit.


After blockcommit

# virsh domstats r7 --block --backing

Domain: 'r7'
  block.count=1
  block.0.name=vda
  block.0.path=/dev/vg01/lv01
  block.0.rd.reqs=13126
  block.0.rd.bytes=254214144
  block.0.rd.times=57351019838
  block.0.wr.reqs=6487
  block.0.wr.bytes=2107651072
  block.0.wr.times=2206967432432
  block.0.fl.reqs=50
  block.0.fl.times=8036785025
  block.0.allocation=8542551552
  block.0.capacity=10737418240

Comment 30 Shanzhi Yu 2014-12-23 11:15:58 UTC
Scenario II: iSCSI backend

1. Guest source file is qcow2 format

1.1. Prepare an NFS server and mount it on the test host

# mount 10.66.5.38:/nfs/nfs-1 /mnt/ -o soft,vers=3,nosharecache,retrans=6

# mount|grep 5.38
10.66.5.38:/nfs/nfs-1 on /mnt type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=10.66.5.38,mountvers=3,mountport=56831,mountproto=udp,local_lock=none,addr=10.66.5.38)

(The mount options are what I saw on a host registered to RHEVM 3.5 when I added new NFS storage to the RHEVM server.)

1.2. Prepare a qcow2 format file on the NFS server and install a rhel7 domain

# qemu-img create -f qcow2 /mnt/r7-qcow2.img 10G
Formatting '/mnt/r7-qcow2.img', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off 

# virsh dumpxml r7|grep disk -A 6 
    <disk type='block' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source dev='/mnt/r7-qcow2.img'>
        <seclabel model='selinux' labelskip='yes'/>
      </source>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>


1.3. Open another terminal and keep checking the domain block device statistics with the --backing option while doing snapshot-create/blockcommit operations

# for i in {1..10000};do echo $i;virsh domstats r7 --backing --block ;sleep 1; done

1.4. Create external disk snapshots s1, s2, and s3, and write a file in the domain after each snapshot

# virsh snapshot-create-as r7 s1 --disk-only --diskspec vda,snapshot=external,file=/mnt/s1 
Domain snapshot s1 created

[guest]# dd if=/dev/zero of=/mnt/s1 bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 24.6795 s, 42.5 MB/s

# virsh snapshot-create-as r7 s2 --disk-only --diskspec vda,snapshot=external,file=/mnt/s2 
Domain snapshot s2 created

[guest]# dd if=/dev/zero of=/mnt/s2 bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 24.6795 s, 42.5 MB/s

# virsh snapshot-create-as r7 s3 --disk-only --diskspec vda,snapshot=external,file=/mnt/s3 
Domain snapshot s3 created

[guest]# dd if=/dev/zero of=/mnt/s3 bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 24.6795 s, 42.5 MB/s

NB:
block.0.allocation and block.0.physical of s1, s2, and s3 keep changing while creating files in the guest


# virsh dumpxml r7|grep disk -A16
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/mnt/s3'>
        <seclabel model='selinux' labelskip='yes'/>
      </source>
      <backingStore type='file' index='1'>
        <format type='qcow2'/>
        <source file='/mnt/s2'/>
        <backingStore type='file' index='2'>
          <format type='qcow2'/>
          <source file='/mnt/s1'/>
          <backingStore type='file' index='3'>
            <format type='qcow2'/>
            <source file='/mnt/r7-qcow2.img'/>
            <backingStore/>
          </backingStore>
        </backingStore>

1.5. Keep checking the domain block device statistics with the --backing option during the blockcommit

Before the blockcommit:

Domain: 'r7'
  block.count=4
  block.0.name=vda
  block.0.path=/mnt/s3
  block.0.rd.reqs=18
  block.0.rd.bytes=1024000
  block.0.rd.times=686892469
  block.0.wr.reqs=2067
  block.0.wr.bytes=1048630784
  block.0.wr.times=2952713174943
  block.0.fl.reqs=9
  block.0.fl.times=1576496718
  block.0.allocation=1049689600
  block.0.capacity=10737418240
  block.0.physical=1049628672
  block.1.name=vda
  block.1.path=/mnt/s2
  block.1.backingIndex=1
  block.1.rd.reqs=8
  block.1.rd.bytes=53248
  block.1.rd.times=604461292
  block.1.wr.reqs=2083
  block.1.wr.bytes=1048722944
  block.1.wr.times=2975646698318
  block.1.fl.reqs=8
  block.1.fl.times=2521754640
  block.1.allocation=1050541568
  block.1.capacity=10737418240
  block.1.physical=1050480640
  block.2.name=vda
  block.2.path=/mnt/s1
  block.2.backingIndex=2
  block.2.rd.reqs=259
  block.2.rd.bytes=6979584
  block.2.rd.times=7124006588
  block.2.wr.reqs=8
  block.2.wr.bytes=26112
  block.2.wr.times=3168304381
  block.2.fl.reqs=6
  block.2.fl.times=250509579
  block.2.allocation=916992
  block.2.capacity=10737418240
  block.2.physical=856064
  block.3.name=vda
  block.3.path=/mnt/r7-qcow2.img
  block.3.backingIndex=3
  block.3.rd.reqs=7278
  block.3.rd.bytes=109696000
  block.3.rd.times=50383033393
  block.3.wr.reqs=4243
  block.3.wr.bytes=1053656064
  block.3.wr.times=3636551840787
  block.3.fl.reqs=26
  block.3.fl.times=759679207
  block.3.allocation=2228616704
  block.3.capacity=10737418240
  block.3.physical=2228555776

1.5.1. Do a blockcommit on an inactive layer (from s2 to s1)


# virsh blockcommit r7 vda --top vda[1] --base vda[2] --verbose --wait --bandwidth 10

Block Commit: [ 45 %]
..

NB:

The values of block.2.allocation and block.2.physical (/mnt/s1) keep changing during the blockcommit.

After blockcommit

# virsh domstats r7 --block --backing
Domain: 'r7'
  block.count=3
  block.0.name=vda
  block.0.path=/mnt/s3
  block.0.rd.reqs=102
  block.0.rd.bytes=3371008
  block.0.rd.times=4305716964
  block.0.wr.reqs=2111
  block.0.wr.bytes=1048848384
  block.0.wr.times=2976163932320
  block.0.fl.reqs=25
  block.0.fl.times=2913110338
  block.0.allocation=1051065856
  block.0.capacity=10737418240
  block.0.physical=1051004928
  block.1.name=vda
  block.1.path=/mnt/s1
  block.1.backingIndex=1
  block.1.rd.reqs=259
  block.1.rd.bytes=6979584
  block.1.rd.times=7124006588
  block.1.wr.reqs=8
  block.1.wr.bytes=26112
  block.1.wr.times=3168304381
  block.1.fl.reqs=6
  block.1.fl.times=250509579
  block.1.allocation=1050738176
  block.1.capacity=10737418240
  block.1.physical=1050677248
  block.2.name=vda
  block.2.path=/mnt/r7-qcow2.img
  block.2.backingIndex=2
  block.2.rd.reqs=7278
  block.2.rd.bytes=109696000
  block.2.rd.times=50383033393
  block.2.wr.reqs=4243
  block.2.wr.bytes=1053656064
  block.2.wr.times=3636551840787
  block.2.fl.reqs=26
  block.2.fl.times=759679207
  block.2.allocation=2228616704
  block.2.capacity=10737418240
  block.2.physical=2228555776

1.5.2. Do a blockcommit on the active layer (from s3 to r7-qcow2.img)

Before commit

Domain: 'r7'
  block.count=3
  block.0.name=vda
  block.0.path=/mnt/s3
  block.0.rd.reqs=102
  block.0.rd.bytes=3371008
  block.0.rd.times=4305716964
  block.0.wr.reqs=2111
  block.0.wr.bytes=1048848384
  block.0.wr.times=2976163932320
  block.0.fl.reqs=25
  block.0.fl.times=2913110338
  block.0.allocation=1051065856
  block.0.capacity=10737418240
  block.0.physical=1051004928
  block.1.name=vda
  block.1.path=/mnt/s1
  block.1.backingIndex=1
  block.1.rd.reqs=259
  block.1.rd.bytes=6979584
  block.1.rd.times=7124006588
  block.1.wr.reqs=8
  block.1.wr.bytes=26112
  block.1.wr.times=3168304381
  block.1.fl.reqs=6
  block.1.fl.times=250509579
  block.1.allocation=1050738176
  block.1.capacity=10737418240
  block.1.physical=1050677248
  block.2.name=vda
  block.2.path=/mnt/r7-qcow2.img
  block.2.backingIndex=2
  block.2.rd.reqs=7278
  block.2.rd.bytes=109696000
  block.2.rd.times=50383033393
  block.2.wr.reqs=4243
  block.2.wr.bytes=1053656064
  block.2.wr.times=3636551840787
  block.2.fl.reqs=26
  block.2.fl.times=759679207
  block.2.allocation=2228616704
  block.2.capacity=10737418240
  block.2.physical=2228555776

# virsh blockcommit r7 vda --pivot  --verbose --wait --bandwidth 10
Block Commit: [ 81 %]

NB:

The values of block.2.allocation and block.2.physical (/mnt/r7-qcow2.img) keep changing during the blockcommit.


After blockcommit

# virsh domstats r7 --block --backing
Domain: 'r7'
  block.count=1
  block.0.name=vda
  block.0.path=/mnt/r7-qcow2.img
  block.0.rd.reqs=7278
  block.0.rd.bytes=109696000
  block.0.rd.times=50383033393
  block.0.wr.reqs=4243
  block.0.wr.bytes=1053656064
  block.0.wr.times=3636551840787
  block.0.fl.reqs=26
  block.0.fl.times=759679207
  block.0.allocation=4431019520
  block.0.capacity=10737418240
  block.0.physical=4430893056

2. Guest source file is raw format

2.1. Clean the environment and repeat step 1.1

2.2. Prepare a raw format file on the NFS server and install a rhel7 domain

# virsh dumpxml r7|grep disk -A6
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/mnt/r7-raw.img'>
        <seclabel model='selinux' labelskip='yes'/>
      </source>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>

2.3. Open another terminal and keep checking the domain block device statistics with the --backing option while doing snapshot-create/blockcommit operations

# for i in {1..10000};do echo $i;virsh domstats r7 --backing --block ;sleep 1; done

2.4. Create external disk snapshots s1, s2, and s3, and write a file in the domain after each snapshot

# virsh snapshot-create-as r7 s1 --disk-only --diskspec vda,snapshot=external,file=/mnt/s1 
Domain snapshot s1 created

[guest]# dd if=/dev/zero of=/mnt/s1 bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 24.6795 s, 42.5 MB/s


# virsh snapshot-create-as r7 s2 --disk-only --diskspec vda,snapshot=external,file=/mnt/s2 
Domain snapshot s2 created

[guest]# dd if=/dev/zero of=/mnt/s2 bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 24.6795 s, 42.5 MB/s

# virsh snapshot-create-as r7 s3 --disk-only --diskspec vda,snapshot=external,file=/mnt/s3 
Domain snapshot s3 created

[guest]# dd if=/dev/zero of=/mnt/s3 bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 24.6795 s, 42.5 MB/s

NB:
block.0.allocation and block.0.physical of s1, s2, and s3 keep changing while creating files in the guest

2.5. Keep checking the domain block device statistics with the --backing option during the blockcommit

Before the blockcommit:

Domain: 'r7'
  block.count=4
  block.0.name=vda
  block.0.path=/mnt/s3
  block.0.rd.reqs=36
  block.0.rd.bytes=2056192
  block.0.rd.times=1382790181
  block.0.wr.reqs=2069
  block.0.wr.bytes=1048638976
  block.0.wr.times=3343809925579
  block.0.fl.reqs=8
  block.0.fl.times=2068328114
  block.0.allocation=1049886208
  block.0.capacity=10737418240
  block.0.physical=1049825280
  block.1.name=vda
  block.1.path=/mnt/s2
  block.1.backingIndex=1
  block.1.rd.reqs=16
  block.1.rd.bytes=1011712
  block.1.rd.times=179207528
  block.1.wr.reqs=2060
  block.1.wr.bytes=1048611840
  block.1.wr.times=3001442171392
  block.1.fl.reqs=6
  block.1.fl.times=976087507
  block.1.allocation=1049624064
  block.1.capacity=10737418240
  block.1.physical=1049563136
  block.2.name=vda
  block.2.path=/mnt/s1
  block.2.backingIndex=2
  block.2.rd.reqs=4
  block.2.rd.bytes=36864
  block.2.rd.times=173040826
  block.2.wr.reqs=2055
  block.2.wr.bytes=1048618496
  block.2.wr.times=2884652763909
  block.2.fl.reqs=4
  block.2.fl.times=1152255461
  block.2.allocation=1049689600
  block.2.capacity=10737418240
  block.2.physical=1049628672
  block.3.name=vda
  block.3.path=/mnt/r7-raw.img
  block.3.backingIndex=3
  block.3.rd.reqs=6451
  block.3.rd.bytes=111515648
  block.3.rd.times=66803855619
  block.3.wr.reqs=3291
  block.3.wr.bytes=1054007808
  block.3.wr.times=2013432850053
  block.3.fl.reqs=53
  block.3.fl.times=5931379
  block.3.allocation=8551509504
  block.3.capacity=10737418240
  block.3.physical=2277695488

2.5.1. Do a blockcommit on an inactive layer (from s2 to s1)

# virsh blockcommit r7 vda --top vda[1] --base vda[2] --verbose --wait --bandwidth 10

Block Commit: [ 45 %]
..


NB:

block.2.allocation and block.2.physical (/mnt/s1) keep changing during the blockcommit

After commit

# virsh domstats r7 --block --backing
Domain: 'r7'
  block.count=3
  block.0.name=vda
  block.0.path=/mnt/s3
  block.0.rd.reqs=284
  block.0.rd.bytes=7168000
  block.0.rd.times=13094957689
  block.0.wr.reqs=2084
  block.0.wr.bytes=1048727040
  block.0.wr.times=3350424884721
  block.0.fl.reqs=16
  block.0.fl.times=2799042670
  block.0.allocation=1050476032
  block.0.capacity=10737418240
  block.0.physical=1050415104
  block.1.name=vda
  block.1.path=/mnt/s1
  block.1.backingIndex=1
  block.1.rd.reqs=4
  block.1.rd.bytes=36864
  block.1.rd.times=173040826
  block.1.wr.reqs=2055
  block.1.wr.bytes=1048618496
  block.1.wr.times=2884652763909
  block.1.fl.reqs=4
  block.1.fl.times=1152255461
  block.1.allocation=2098396672
  block.1.capacity=10737418240
  block.1.physical=2098335744
  block.2.name=vda
  block.2.path=/mnt/r7-raw.img
  block.2.backingIndex=2
  block.2.rd.reqs=6451
  block.2.rd.bytes=111515648
  block.2.rd.times=66803855619
  block.2.wr.reqs=3291
  block.2.wr.bytes=1054007808
  block.2.wr.times=2013432850053
  block.2.fl.reqs=53
  block.2.fl.times=5931379
  block.2.allocation=8551509504
  block.2.capacity=10737418240
  block.2.physical=2277695488

2.5.2. Do a blockcommit on the active layer (from s3 to r7-raw.img)

Before commit 

# virsh domstats r7 --block --backing
Domain: 'r7'
  block.count=3
  block.0.name=vda
  block.0.path=/mnt/s3
  block.0.rd.reqs=284
  block.0.rd.bytes=7168000
  block.0.rd.times=13094957689
  block.0.wr.reqs=2088
  block.0.wr.bytes=1048730112
  block.0.wr.times=3350694833827
  block.0.fl.reqs=20
  block.0.fl.times=2883291308
  block.0.allocation=1050476032
  block.0.capacity=10737418240
  block.0.physical=1050415104
  block.1.name=vda
  block.1.path=/mnt/s1
  block.1.backingIndex=1
  block.1.rd.reqs=4
  block.1.rd.bytes=36864
  block.1.rd.times=173040826
  block.1.wr.reqs=2055
  block.1.wr.bytes=1048618496
  block.1.wr.times=2884652763909
  block.1.fl.reqs=4
  block.1.fl.times=1152255461
  block.1.allocation=2098396672
  block.1.capacity=10737418240
  block.1.physical=2098335744
  block.2.name=vda
  block.2.path=/mnt/r7-raw.img
  block.2.backingIndex=2
  block.2.rd.reqs=6451
  block.2.rd.bytes=111515648
  block.2.rd.times=66803855619
  block.2.wr.reqs=3291
  block.2.wr.bytes=1054007808
  block.2.wr.times=2013432850053
  block.2.fl.reqs=53
  block.2.fl.times=5931379
  block.2.allocation=8551509504
  block.2.capacity=10737418240
  block.2.physical=2277695488

# virsh blockcommit r7 vda --pivot  --verbose --wait --bandwidth 10
Block Commit: [ 72 %]


NB:

Only block.2.physical (/mnt/r7-raw.img) keeps changing during the blockcommit; block.2.allocation does NOT change.

After blockcommit

# virsh domstats r7 --block --backing
Domain: 'r7'
  block.count=1
  block.0.name=vda
  block.0.path=/mnt/r7-raw.img
  block.0.rd.reqs=6451
  block.0.rd.bytes=111515648
  block.0.rd.times=66803855619
  block.0.wr.reqs=3291
  block.0.wr.bytes=1054007808
  block.0.wr.times=2013432850053
  block.0.fl.reqs=53
  block.0.fl.times=5931379
  block.0.allocation=8551529984
  block.0.capacity=10737418240
  block.0.physical=5409050624

Comment 31 Shanzhi Yu 2014-12-24 07:54:32 UTC
Correction to comment 30: "Scenario II: iSCSI backend" should read "Scenario II: NFS backend".


Hi Ademar

Would you please help check the test results in comment 29 and comment 30 to see if they are good enough for RHEVM? (They only cover NFS- and iSCSI-based storage; FC and GlusterFS were not tested.)

AFAIK, a VM uses a raw format file as its source file when created in RHEVM on both NFS and iSCSI storage. (I did not see anywhere I could configure it with qcow2 format.)
When the VM is in an iSCSI datacenter, block.0.allocation does not change when doing a blockcommit on the active layer (deleting a VM snapshot in the RHEVM environment means doing a blockcommit); in an NFS datacenter, block.0.physical grows larger.
While doing a blockcommit on an inactive layer, both block.x.allocation and block.x.physical of the backing file change (grow larger) on both NFS and iSCSI.

Comment 32 Ademar Reis 2014-12-24 17:40:10 UTC
(In reply to Shanzhi Yu from comment #31)
> Correction to comment 30: "Scenario II: iSCSI backend" should read
> "Scenario II: NFS backend".
> 
> 
> Hi Ademar
> 
> Would you please help check the test results in comment 29 and comment 30
> to see if they are good enough for RHEVM? (They only cover NFS- and
> iSCSI-based storage; FC and GlusterFS were not tested.)
> 
> AFAIK, a VM uses a raw format file as its source file when created in
> RHEVM on both NFS and iSCSI storage. (I did not see anywhere I could
> configure it with qcow2 format.)
> When the VM is in an iSCSI datacenter, block.0.allocation does not change
> when doing a blockcommit on the active layer (deleting a VM snapshot in
> the RHEVM environment means doing a blockcommit); in an NFS datacenter,
> block.0.physical grows larger.
> While doing a blockcommit on an inactive layer, both block.x.allocation
> and block.x.physical of the backing file change (grow larger) on both NFS
> and iSCSI.

Federico, would you be able to answer the question above?

Comment 33 Shanzhi Yu 2015-01-21 15:32:29 UTC
Eric,

I updated the test scenarios in comment 29 and comment 30. Would you please take a look and confirm whether they are good enough to verify this bug?

Thanks

Comment 34 Eric Blake 2015-01-21 23:10:50 UTC
I don't know if RHEVM has any additional requirements, but the additional tests in 29 and 30 look good to me, and from my libvirt point of view you have proved that the new output is correctly exposing what qemu is providing.  I'm fine with calling this bug verified, if no one else complains.

Comment 35 Adam Litke 2015-01-22 14:35:32 UTC
I've been testing this in a RHEV environment and everything seems to be okay there as well.

Comment 36 Shanzhi Yu 2015-01-23 10:33:03 UTC
According to comment 34 and comment 35, changing this bug to VERIFIED status.

Thanks to you both

Comment 38 errata-xmlrpc 2015-03-05 07:28:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-0323.html

