Bug 1181659 - [RFE] Add support for QEMU notification of disk usage threshold exceeded
Summary: [RFE] Add support for QEMU notification of disk usage threshold exceeded
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: 7.4
Assignee: Peter Krempa
QA Contact: yisun
URL:
Whiteboard:
Depends On: 1181648
Blocks: 1154205 1172230 1181665
 
Reported: 2015-01-13 14:42 UTC by Francesco Romani
Modified: 2017-08-02 01:25 UTC (History)
11 users

Fixed In Version: libvirt-3.2.0-1.el7
Doc Type: Enhancement
Doc Text:
Clone Of: 1181653
Environment:
Last Closed: 2017-08-01 17:06:41 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2017:1846 0 normal SHIPPED_LIVE libvirt bug fix and enhancement update 2017-08-01 18:02:50 UTC

Description Francesco Romani 2015-01-13 14:42:33 UTC
Clone of the bug. RHEV would greatly benefit from this feature to get rid of one of the main (if not the biggest) sources of load on libvirt.

+++ This bug was initially created as a clone of Bug #1181653 +++

Description of problem:
Add an event to report if a block device usage exceeds a threshold. The threshold should be configurable, and the event should report the affected block device.

Rationale for the RFE
Management applications like oVirt (http://www.ovirt.org) make extensive use of thin-provisioned disk images.
To let the guest run flawlessly without being unnecessarily paused, oVirt sets a watermark and automatically resizes the image once the watermark is reached or exceeded.

To detect the watermark crossing, the management application has no choice but to aggressively poll the disk's highest written sector, using virDomainGetBlockInfo or the recently added bulk stats equivalent.

However, oVirt needs to poll very frequently. This leads to unnecessary system load, which is made even worse at scale: scenarios with hundreds of VMs are no longer unusual.

A patch implementing a disk usage threshold in QEMU was posted on qemu-devel, reviewed, and acked.
Once it is accepted, libvirt should expose this event.

This BZ entry is to track libvirt support.

Additional info:
QEMU upstream bug: https://bugs.launchpad.net/qemu/+bug/1338957?comments=all
Includes link to the QEMU API.
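The polling-versus-event trade-off described above can be sketched with a toy model. This is pure illustration with invented names, not the libvirt API: the real mechanism is the block-threshold event and registration API this bug tracks.

```python
# Hypothetical stand-in for a disk that fires a one-shot callback when a
# write crosses a registered threshold (names invented for this sketch).

class ThresholdDisk:
    def __init__(self, name):
        self.name = name
        self.highest_written = 0   # highest byte offset written so far
        self.threshold = None      # byte offset; None = unset
        self.callback = None

    def set_threshold(self, offset, callback):
        self.threshold = offset
        self.callback = callback

    def write(self, offset, length):
        end = offset + length
        self.highest_written = max(self.highest_written, end)
        # One-shot semantics: fire once, then clear the threshold
        # (mirroring the behavior observed in this bug's verification).
        if self.threshold is not None and end > self.threshold:
            cb, self.callback = self.callback, None
            threshold, self.threshold = self.threshold, None
            cb(self.name, threshold, end - threshold)

events = []
disk = ThresholdDisk("vdb")
disk.set_threshold(100 * 2**20, lambda dev, thr, exc: events.append((dev, thr, exc)))
disk.write(0, 50 * 2**20)            # below the mark: nothing to do, no polling
disk.write(99 * 2**20, 2 * 2**20)    # crosses 100 MiB: event fires once
disk.write(200 * 2**20, 2**20)       # threshold already cleared: silent
print(events)                        # [('vdb', 104857600, 1048576)]
```

The point of the event model is that the manager does no work at all until the hypervisor reports the crossing, instead of polling every disk of every VM on a timer.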

Comment 2 Eric Blake 2015-05-19 12:48:57 UTC
Current libvirt proposal:
https://www.redhat.com/archives/libvir-list/2015-May/msg00580.html

Comment 3 Eric Blake 2015-06-07 02:23:31 UTC
I just learned that qemu 2.3 does NOT support thresholds without a node name registration; but libvirt is not yet supplying node names.  This proposed patch to qemu makes it possible for libvirt to register a threshold using just a device name:
https://lists.gnu.org/archive/html/qemu-devel/2015-06/msg02023.html

Registering thresholds by node names gets interesting any time block copy, active block commit, or snapshots change the active image, because the new active layer must have a different node name, even though it preserves the same device name.

Comment 4 Eric Blake 2015-07-28 15:55:06 UTC
Relies either on qemu doing auto node naming (which missed qemu 2.4) or on libvirt tracking all node names (still not upstream), so this missed the rebase deadline for RHEL 7.2.

Comment 12 Peter Krempa 2017-03-27 08:39:54 UTC
Upstream added this functionality to the upcoming 3.2.0 release:

 commit 91c3d430c96ca365ae40bf922df3e4f83295e331
Author: Peter Krempa <pkrempa>
Date:   Thu Mar 16 14:37:56 2017 +0100

    qemu: stats: Display the block threshold size in bulk stats
    
    Management tools may want to check whether the threshold is still set if
    they missed an event. Add the data to the bulk stats API where they can
    also query the current backing size at the same time.

commit 51c4b744d8361cd324c85e3d131a63feef316afd
Author: Peter Krempa <pkrempa>
Date:   Thu Mar 16 12:30:16 2017 +0100

    qemu: block: Add code to fetch block node data by node name
    
    To allow updating stats based on the node name, add a helper function
    that will fetch the required data from 'query-named-block-nodes' and
    return it in hash table for easy lookup.

commit 86e51d68f91af66ec6ee1d55358b2e0601161ecb
Author: Peter Krempa <pkrempa>
Date:   Thu Mar 16 10:19:32 2017 +0100

    util: json: Make function to free JSON values in virHash universal
    
    Move the helper that frees JSON entries put into hash tables into the
    JSON module so that it does not have to be reimplemented.

commit 0feebab2c4bd2f07ccbe1fa2a9803247922f3ba8
Author: Peter Krempa <pkrempa>
Date:   Wed Mar 15 13:03:21 2017 +0100

    qemu: block: Add code to detect node names when necessary
    
    Detect the node names when setting block threshold and when reconnecting
    or when they are cleared when a block job finishes. This operation will
    become a no-op once we fully support node names.

commit 2780bcd9f88b3859158d4502b583574eb000045c
Author: Peter Krempa <pkrempa>
Date:   Thu Feb 23 19:36:52 2017 +0100

    qemu: monitor: Extract the top level format node when querying disks
    
    To allow matching the node names gathered via 'query-named-block-nodes'
    we need to query and then use the top level nodes from 'query-block'.
    Add the data to the structure returned by qemuMonitorGetBlockInfo.

commit b0aa088fad999decce4b15d7a3eef624228be30e
Author: Peter Krempa <pkrempa>
Date:   Tue Mar 14 18:07:29 2017 +0100

    tests: qemumonitorjson: Test node name detection on networked storage

commit 2a50c18fc04df45085317299e684b98a91b9b765
Author: Peter Krempa <pkrempa>
Date:   Tue Mar 14 16:20:47 2017 +0100

    tests: qemumonitorjson: Add relative image names for node name detection
    
    oVirt uses relative names with directories in them. Test such
    configuration. Also tests a snapshot done with _REUSE_EXTERNAL and a
    relative backing file pre-specified in the qcow2 metadata.

commit b6c5a3f09bedd6a2058d6cdc095ef39adc2f074a
Author: Peter Krempa <pkrempa>
Date:   Tue Mar 14 15:02:11 2017 +0100

    tests: qemumonitorjson: Add case for two disks sharing a backing image
    
    Since we have to match the images by filename a common backing image
    will break the detection process. Add a test case to see that the code
    correctly did not continue the detection process.

commit aece275043377632a7f4a6374fcc253e79165e12
Author: Peter Krempa <pkrempa>
Date:   Mon Mar 13 12:48:32 2017 +0100

    tests: qemumonitorjson: Add long backing chain test case for node name detection

commit 217484bdbd9235a38c7ac99c13a875af53314312
Author: Peter Krempa <pkrempa>
Date:   Mon Mar 13 12:47:46 2017 +0100

    tests: qemumonitorjson: Add test case for node name detection code
    
    The code is rather magic so a test case will help making sure that
    everything works well. The first case is a simple backing chain.

commit dbad8f8aee564809c16ebe68e713c4dbccf692e7
Author: Peter Krempa <pkrempa>
Date:   Mon Mar 13 12:46:18 2017 +0100

    qemu: block: Add code to allow detection of auto-allocated node names
    
    qemu for some time already sets node names automatically for the block
    nodes. This patch adds code that attempts a best-effort detection of the
    node names for the backing chain from the output of
    'query-named-block-nodes'. The only drawback is that the data provided
    by qemu needs to be matched by the filename as seen by qemu and thus
    if two disks share a single backing store file the detection won't work.
    
    This will allow us to use qemu commands such as
    'block-set-write-threshold' which only accepts node names.
    
    In this patch only the detection code is added, it will be used later.

commit d92d7f6b52dc5126d19239e4ebf6f84d6b8964f1
Author: Peter Krempa <pkrempa>
Date:   Fri Feb 24 14:59:40 2017 +0100

    qemu: monitor: Add monitor infrastructure for query-named-block-nodes
    
    Add monitor tooling for calling query-named-block-nodes. The monitor
    returns the data as the raw JSON array that is returned from the
    monitor.
    
    Unfortunately the logic to extract the node names for a complete backing
    chain will be so complex that I won't be able to extract any meaningful
    subset of the data in the monitor code.

commit e2b05c9a8deb978879c9d7999d8532b80f795e0a
Author: Peter Krempa <pkrempa>
Date:   Wed Mar 15 17:21:48 2017 +0100

    qemu: capabilities: add capability for query-named-block-nodes qmp cmd

commit c6f4acc4cbbc41e9097d5266cc5310ca891bb234
Author: Peter Krempa <pkrempa>
Date:   Thu Feb 23 13:50:24 2017 +0100

    qemu: implement qemuDomainSetBlockThreshold
    
    Add code to call the appropriate monitor command and code to lookup the
    given disk backing chain member.

commit 9b93c4c26483308371aae3ae30bf5536c88b7f4b
Author: Peter Krempa <pkrempa>
Date:   Thu Feb 23 19:14:47 2017 +0100

    qemu: domain: Add helper to look up disk soruce by the backing store string

commit 97148962b5bc51d780997d66dfc024a3aad13892
Author: Peter Krempa <pkrempa>
Date:   Thu Feb 23 13:27:18 2017 +0100

    virsh: Implement 'domblkthreshold' command to call virDomainSetBlockThreshold
    
    Add a simple wrapper which will allow to set the threshold for
    delivering the event.

commit bb09798fbeb5ffcdb6145a25d48ded794500ad77
Author: Peter Krempa <pkrempa>
Date:   Thu Feb 23 13:09:12 2017 +0100

    lib: Add API for setting the threshold size for VIR_DOMAIN_EVENT_ID_BLOCK_THRESHOLD
    
    The new API can be used to configure the threshold when
    VIR_DOMAIN_EVENT_ID_BLOCK_THRESHOLD should be fired.

commit e96130dcc838035222cd21b777b5c69cd8a37346
Author: Peter Krempa <pkrempa>
Date:   Wed Feb 22 17:51:26 2017 +0100

    qemu: process: Wire up firing of the VIR_DOMAIN_EVENT_ID_BLOCK_THRESHOLD event
    
    Bind it to qemu's BLOCK_WRITE_THRESHOLD event. Look up the disk by
    nodename and construct the string to return.

commit 4e1618ce72ef47d2e145ce853c917fc41df99afa
Author: Peter Krempa <pkrempa>
Date:   Thu Feb 23 18:13:02 2017 +0100

    qemu: domain: Add helper to generate indexed backing store names
    
    The code is currently simple, but if we later add node names, it will be
    necessary to generate the names based on the node name. Add a helper so
    that there's a central point to fix once we add self-generated node
    names.

commit 1a5e2a80981605936aa93cd078f2f3df194f626a
Author: Peter Krempa <pkrempa>
Date:   Wed Feb 22 17:51:05 2017 +0100

    qemu: domain: Add helper to lookup disk by node name
    
    Looks up a disk and its corresponding backing chain element by node
    name.

commit 73d4b3242779bd2d726a6212ce47ec8f59697773
Author: Peter Krempa <pkrempa>
Date:   Wed Feb 22 16:52:22 2017 +0100

    qemu: monitor: Add support for BLOCK_WRITE_THRESHOLD event
    
    The event is fired when a given block backend node (identified by the
    node name) experiences a write beyond the bound set via
    block-set-write-threshold QMP command. This wires up the monitor code to
    extract the data and allow us receiving the events and the capability.

commit 085e794a862972508498356f9943d7540c52ce24
Author: Peter Krempa <pkrempa>
Date:   Tue Feb 21 15:03:07 2017 +0100

    lib: Introduce event for tracking disk backing file write threshold
    
    When using thin provisioning, management tools need to resize the disk
    in certain cases. To avoid having them to poll disk usage introduce an
    event which will be fired when a given offset of the storage is written
    by the hypervisor. Together with the API which will be added later, it
    will allow registering thresholds for given storage backing volumes and
    this event will then notify management if the threshold is exceeded.

Comment 14 yisun 2017-03-29 08:56:05 UTC
Hi Peter,
After reading the above patches, I think the following test scenario may cover this RFE with virsh commands. Please help check whether I missed anything, since there are quite a few commits here. Thanks.

Test matrix:
1.    image format: raw, qcow2, luks(supported?)
2.    w/wo snapshots
3.    image from: local file, gluster, iscsi

Scenario:
1. set the threshold with:
    virsh domblkthreshold <domain> <dev> <threshold>
2. check the threshold with:
    virsh domstats <domain> --block
3. test the threshold taking effect:
    @terminal 1: virsh event --event block-threshold
    @terminal 2: write data to corresponding disk of vm with dd
    @terminal 1: check the event is captured
4. some negative tests will be carried out then
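For reference, `virsh domblkthreshold` accepts scaled values; a minimal sketch of the binary-suffix conversion, matching the 100M -> 104857600 and 1G -> 1073741824 values seen later in this bug (simplified: real virsh also understands decimal suffixes such as "MB"):

```python
# Simplified scaled-integer parsing in the style virsh applies to
# threshold arguments; binary multiples only, decimal suffixes omitted.

def parse_scaled(value: str) -> int:
    suffixes = {"k": 2**10, "K": 2**10, "M": 2**20, "G": 2**30, "T": 2**40}
    if value and value[-1] in suffixes:
        return int(value[:-1]) * suffixes[value[-1]]
    return int(value)

print(parse_scaled("100M"))   # 104857600
print(parse_scaled("1G"))     # 1073741824
print(parse_scaled("10086"))  # 10086 (no suffix: plain bytes)
```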

Comment 15 Eric Blake 2017-03-30 01:00:18 UTC
(In reply to yisun from comment #14)
> Hi Peter,
> After reading the above patches, I think following test scenario may cover
> this RFE with virsh cmds, pls help to check if I miss something since there
> are too many commits here... thx
> 
> Test matrix:
> 1.    image format: raw, qcow2, luks(supported?)
> 2.    w/wo snapshots
> 3.    image from: local file, gluster, iscsi
> 
> Scenario:
> 1. set the threshold with:
>     virsh domblkthreshold <domain> <dev> <threshold>
> 2. check the threshold with:
>     virsh domstats <domain> --block
> 3. test the threshold taking effect:
>     @terminal 1: virsh event --event block-threshold
>     @terminal 2: write data to corresponding disk of vm with dd
>     @terminal 1: check the event is captured
> 4. some negative tests will be carried out then

See also my testing posted upstream, for inspiration on steps to use.
https://www.redhat.com/archives/libvir-list/2017-March/msg01225.html

Comment 16 Peter Krempa 2017-03-31 11:58:22 UTC
(In reply to yisun from comment #14)
> Hi Peter,
> After reading the above patches, I think following test scenario may cover
> this RFE with virsh cmds, pls help to check if I miss something since there
> are too many commits here... thx
> 
> Test matrix:
> 1.    image format: raw, qcow2, luks(supported?)

luks should be supported

> 2.    w/wo snapshots

Yes, please test backing chains.

> 3.    image from: local file, gluster, iscsi
> 
> Scenario:
> 1. set the threshold with:
>     virsh domblkthreshold <domain> <dev> <threshold>
> 2. check the threshold with:
>     virsh domstats <domain> --block
> 3. test the threshold taking effect:
>     @terminal 1: virsh event --event block-threshold
>     @terminal 2: write data to corresponding disk of vm with dd
>     @terminal 1: check the event is captured

Additionally, it should be possible to get the event while doing a block commit operation in the backing chain. The threshold can be set for backing chain members using the vda[3] syntax (see the 'index' field of the backing chain in the XML). I'd suggest setting the threshold to 1 or so, so that you are guaranteed to hit it.
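The vda[3] notation addresses a backing-chain member by its 'index' attribute from the domain XML. A tiny, hypothetical helper (not part of libvirt or virsh) that a test script could use to split such a spec:

```python
import re

def parse_disk_spec(spec: str):
    """Split 'vda[3]' into ('vda', 3); a bare target yields index None,
    meaning the active (top) layer of the chain."""
    m = re.fullmatch(r"([a-z]+)(?:\[(\d+)\])?", spec)
    if not m:
        raise ValueError(f"not a disk spec: {spec!r}")
    target, idx = m.group(1), m.group(2)
    return target, (int(idx) if idx is not None else None)

print(parse_disk_spec("vda[3]"))  # ('vda', 3)
print(parse_disk_spec("vdb"))     # ('vdb', None)
```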

Comment 17 yisun 2017-04-01 00:23:05 UTC
(In reply to Eric Blake from comment #15)
> (In reply to yisun from comment #14)
> > ...
> See also my testing posted upstream, for inspiration on steps to use.
> https://www.redhat.com/archives/libvir-list/2017-March/msg01225.html

Thx Eric and Peter,
will do the test accordingly.

Comment 19 yisun 2017-04-07 09:28:20 UTC
Hi Peter, 
During my test I found that for a raw image, a mere mkfs.ext4 /dev/vdx in the guest will trigger the event, as follows. Please help confirm. Thanks.

steps:
## qemu-img create -f raw /var/lib/libvirt/images/test.raw 2G
Formatting '/var/lib/libvirt/images/test.raw', fmt=raw size=2147483648


## virsh dumpxml vm1 | grep disk -a10
...
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/test.raw'/>
      <backingStore/>
      <target dev='vdb' bus='virtio'/>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </disk>


## virsh domblkthreshold vm1 vdb 1G

## virsh event --event block-threshold --loop 


Now, when I run mkfs.ext4 /dev/vdb in the guest, an event is captured as follows:

## virsh event --event block-threshold --loop
event 'block-threshold' for domain vm1: dev: vdb(/var/lib/libvirt/images/test.raw) 1073741824 3682304

Comment 20 Peter Krempa 2017-04-07 16:14:17 UTC
Since mkfs.ext4 writes to the device it is expected that it will trigger for some of the writes.

Filesystem metadata is spread across the device so this is okay.

If you do the same test with qcow2 this should not happen though, since qcow2 should allocate only the blocks which are written and they should be organized at the beginning of the image.
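As a cross-check of why mkfs.ext4 reaches far into a raw image: with the sparse_super feature, ext4 places backup superblocks in block groups 1 and powers of 3, 5 and 7, spread across the whole device. A sketch assuming default mkfs geometry (4 KiB blocks, 32768 blocks per group; the exact layout on a given system may differ):

```python
# Where ext4 puts backup superblocks under sparse_super: block groups
# 1 and powers of 3, 5, 7. Assumes default 4 KiB blocks, 32768
# blocks per group (128 MiB per group).

GROUP_BYTES = 32768 * 4096  # 128 MiB per block group

def backup_groups(total_groups):
    groups = {1}
    for base in (3, 5, 7):
        g = base
        while g < total_groups:
            groups.add(g)
            g *= base
    return sorted(groups)

disk = 2 * 2**30       # 2 GiB raw image, as in the test above
threshold = 1 * 2**30  # the 1G threshold set via virsh domblkthreshold
offsets = [g * GROUP_BYTES for g in backup_groups(disk // GROUP_BYTES)]
print(backup_groups(disk // GROUP_BYTES))        # [1, 3, 5, 7, 9]
print([o for o in offsets if o > threshold])     # [1207959552] -> group 9 at 1152 MiB
```

So even a bare mkfs writes metadata past the 1 GiB mark of a 2 GiB raw disk, which is consistent with the event firing during mkfs in the log above.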

Comment 21 yisun 2017-04-11 06:47:47 UTC
(In reply to Peter Krempa from comment #20)
> Since mkfs.ext4 writes to the device it is expected that it will trigger for
> some of the writes.
> 
> Filesystem metadata is spread across the device so this is okay.
> 
> If you do the same test with qcow2 this should not happen though, since
> qcow2 should allocate only the blocks which are written and they should be
> organized at the beginning of the image.

Hmm, in https://bugzilla.redhat.com/show_bug.cgi?id=1181648#c8 qemu tested this with mkfs. But according to comment 0, this event is mostly used by upper-layer products to enlarge the volume once the event is captured. So if mkfs triggers this event, a vol-resize may then be carried out to enlarge the volume. Is this acceptable?
- Or do we just let the upper-layer product apply more judgment, such as checking the image format?
- Or do we document somewhere that this mechanism is only logically suitable for qcow2 files?
Thanks

Comment 22 Peter Krempa 2017-04-11 12:51:30 UTC
The event is triggered if the storage image is written beyond the configured sector. For raw files this maps 1:1 to guest sectors. For qcow2 it does not, since qcow2 may allocate guest sectors at different places in the image file.

The intended use case is to notify when a qcow2 grows beyond certain size.

Libvirt does not really want to limit the usage here: the guest itself may employ a storage format which, similarly to qcow2, fills the disk continuously from the beginning, and management may then decide to resize the disk in the host. So this feature is perfectly valid for raw disks as well.

With raw disks it's even easier to test, since a 'dd seek=NUM' writes the given place right away, whereas with qcow2 you need to fill the image until that point.
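The 'dd seek=NUM' shortcut can be illustrated with a sparse file standing in for the raw image (a self-contained sketch, not an actual virsh test; the threshold here is only simulated):

```python
import os
import tempfile

# A sparse 2 GiB file plays the role of the raw image; a single
# seek+write lands at that guest-visible offset immediately, just like
# `dd seek=NUM conv=notrunc` inside the guest. With qcow2 you would
# instead have to fill the image up to that allocation.

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.truncate(path, 2 * 2**30)     # sparse 2 GiB "raw image"

threshold = 1 * 2**30            # pretend domblkthreshold was set to 1G
offset = 1536 * 2**20            # write 1 MiB at the 1.5 GiB mark
with open(path, "r+b") as f:
    f.seek(offset)
    f.write(b"\0" * 2**20)

end = offset + 2**20
print(end > threshold, end - threshold)  # True 537919488 -> would fire the event
os.remove(path)
```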

Comment 23 yisun 2017-04-12 02:52:24 UTC
(In reply to Peter Krempa from comment #22)
> The event is triggered if the storage image is written beyond the configured
> sector. For raw files this maps 1:1 to guest sectors. For qcow2 it does not,
> since qcow2 may allocate the sectors in different places in the guest image.
> 
> The intended use case is to notify when a qcow2 grows beyond certain size.
> 
> Libvirt does not really want to limit the usage here, since the guest itself
> may want to employ a storage format which similarly to qcow2 fills the disk
> continuously from the beginning and the management may then decide to resize
> the disk in the host, so this feature is perfectly valid also for raw disks.
> 
> With raw disks it's even easier to test, since a 'dd seek=NUM' writes the
> given place right away, whereas with qcow2 you need to fill the image until
> that point.

OK, I'll use dd for the raw image test; thanks for the info. By the way, I ran into the following issues, please help confirm.

1. Threshold info for a snapshot's backing image does not show up with "domstats"
## virsh domblkthreshold vm1 vdc[1] 10086
## virsh domstats vm1 --block | grep 10086
<=== nothing here

Do we have any virsh cmd can get that info beside qmp command as follow?
## virsh qemu-monitor-command vm1 '{"execute":"query-named-block-nodes"}' | grep 10086
{"return":[{"iops_rd":0,"detect_zeroes":"off",..."image":{"virtual-size":197120,"filename":"/var/lib/libvirt/images/test.qcow2","format":"file","actual-size":200704,"dirty-flag":false},"iops_wr":0,"ro":true,"node-name":"#block452","backing_file_depth":0,"drv":"file","iops":0,"bps_wr":0,"write_threshold":10086,...}

2. For a LUKS-encrypted virtual disk, I cannot set its threshold; I always get an error:
## qemu-img info /var/lib/libvirt/images/luks_1.img
image: /var/lib/libvirt/images/luks_1.img
file format: luks
virtual size: 1.0G (1073741824 bytes)
disk size: 33M
encrypted: yes
Format specific information:
    ivgen alg: plain64
    hash alg: sha256
    cipher alg: aes-256
    uuid: 37b40f65-3a2e-45ed-bdae-004b06ea601c
    cipher mode: xts

## virsh dumpxml vm1 | grep vdb -a10
...
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/luks_1.img'/>
      <backingStore/>
      <target dev='vdb' bus='virtio'/>
      <encryption format='luks'>
        <secret type='passphrase' uuid='f981dd17-143f-45bc-88e6-ed1fe20ce9da'/>
      </encryption>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </disk>

## virsh start vm1
Domain vm1 started

## virsh domblklist vm1
Target     Source
------------------------------------------------
vda        /var/lib/libvirt/images/rhel7.3.qcow2
vdb        /var/lib/libvirt/images/luks_1.img


## virsh domblkthreshold vm1 vdb 100M
error: Operation not supported: threshold currently can't be set for block device 'vdb'

Comment 24 Peter Krempa 2017-04-12 06:52:30 UTC
(In reply to yisun from comment #23)
> (In reply to Peter Krempa from comment #22)

[...]

> ok, i'll use dd for raw img test. thx for the info. And btw, I met following
> issues, pls help to confirm.
> 
> 1. Threshold info for a snapshot's backing image does not show up with "domstats"
> ## virsh domblkthreshold vm1 vdc[1] 10086
> ## virsh domstats vm1 --block | grep 10086
> <=== nothing here

You didn't enable the stats for the backing chain using the '--backing' option.

> 2. For LUKS encrypted virtual disk, I cannot set its threshold, always get
> an error

[...]

> ## virsh domblkthreshold vm1 vdb 100M
> error: Operation not supported: threshold currently can't be set for block
> device 'vdb'

Looks like the LUKS disk does not get properly detected in the backing chain detection code. I'll have a look.

Comment 25 yisun 2017-04-19 06:03:51 UTC
(In reply to Peter Krempa from comment #24)
> (In reply to yisun from comment #23)
> > (In reply to Peter Krempa from comment #22)
> 
> [...]

> Looks like the LUKS disk does not get properly detected in the backing chain
> detection code. I'll have a look.

Besides the LUKS-related issue, we found that an iSCSI-backed image seemingly cannot trigger such an event, as follows:

--------------
iscsi
--------------
@Terminal 1:
## iscsiadm -m discovery -t sendtargets -p 10.73.196.113
10.73.196.113:3260,1 iqn.2016-03.com.virttest:logical-pool.target

## virsh dumpxml vm1
...
    <disk type='network' device='lun'>
      <driver name='qemu' type='raw'/>
      <source protocol='iscsi' name='iqn.2016-03.com.virttest:logical-pool.target/0'>
        <host name='10.73.196.113' port='3260'/>
      </source>
      <backingStore/>
      <target dev='sdb' bus='scsi'/>
      <alias name='scsi0-0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
...


## virsh domblkthreshold vm1 sdb 200M
## virsh domstats vm1 --block
Domain: 'vm1'
  ...
  block.1.threshold=209715200

## virsh event --event block-threshold --loop

@Terminal 2:
## virsh console vm1
[root@localhost ~]# dd if=/dev/urandom of=/dev/sdb bs=1M count=300
300+0 records in
300+0 records out
314572800 bytes (315 MB) copied, 17.5543 s, 17.9 MB/s

[root@localhost ~]# mkfs.ext4 /dev/sdb
mke2fs 1.42.9 (28-Dec-2013)
...
Writing superblocks and filesystem accounting information: done

[root@localhost ~]# mount /dev/sdb /mnt

[root@localhost ~]# dd if=/dev/urandom of=/mnt/1 bs=1M count=300
300+0 records in
300+0 records out
314572800 bytes (315 MB) copied, 18.8775 s, 16.7 MB/s

[root@localhost ~]# sync

@Terminal 1:
## virsh event --event block-threshold --loop
<==== nothing happened here

=============================
Since this RFE involves many changes and most of the functionality works well, do you think we should close this RFE and use new bugs to track such minor issues?

Comment 28 Peter Krempa 2017-04-25 13:48:25 UTC
(In reply to yisun from comment #25)
> (In reply to Peter Krempa from comment #24)
> > (In reply to yisun from comment #23)
> > > (In reply to Peter Krempa from comment #22)

[...]

> [root@localhost ~]# dd if=/dev/urandom of=/mnt/1 bs=1M count=300
> 300+0 records in
> 300+0 records out
> 314572800 bytes (315 MB) copied, 18.8775 s, 16.7 MB/s
> 
> [root@localhost ~]# sync
> 
> @Terminal 1:
> ## virsh event --event block-threshold --loop
> <==== nothing happened here
> 
> =============================
> since this RFE has too many changes and most of the function work well. Do
> you think we should close this RFE and use new bugs to track such minor
> issues?

This looks like qemu didn't implement the threshold notifier for the iSCSI backend. Please file a new bug (initially against libvirt) to track this, so that it can be moved to qemu if I confirm that's the case.

Comment 30 yisun 2017-04-26 06:22:28 UTC
Verified with:
libvirt-3.2.0-2.el7.x86_64
qemu-kvm-rhev-2.9.0-1.el7.x86_64

===========================
1. basic test
===========================
----------------
qcow2 file
----------------
@Terminal 1:
## virsh domblkthreshold vm1 vdb 100M
## virsh domstats vm1 --block
Domain: 'vm1'
  ...
  block.1.physical=1675694080
  *** block.1.threshold=104857600 *** <=== this is 100MB

## virsh event --event block-threshold --loop

@Terminal 2:
## qemu-img create -f qcow2 /var/lib/libvirt/images/test.qcow2 1G

## virsh dumpxml vm1
...
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/test.qcow2'/>
      <backingStore/>
      <target dev='vdb' bus='virtio'/>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </disk>
...

## virsh console vm1

@Guest:
# mkfs.ext4 /dev/vdb
# mount /dev/vdb /mnt
# dd if=/dev/urandom of=/mnt/bigfile bs=1M count=101
101+0 records in
101+0 records out
105906176 bytes (106 MB) copied, 5.66389 s, 18.7 MB/s

@Terminal 1:
## virsh event --event block-threshold --loop
event 'block-threshold' for domain vm1: dev: vdb(/var/lib/libvirt/images/test.qcow2) 104857600 2031616 <=== event captured

@Terminal 1:
## virsh domstats vm1 --block
Domain: 'vm1'
 ...
  block.1.physical=3823177728
<=== Threshold removed after event captured
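The event lines printed by `virsh event --event block-threshold` carry the device (with its path), the configured threshold, and the excess written beyond it. A small, hypothetical parser for this output format (based on the lines captured in this bug, not a virsh-provided tool):

```python
import re

# Parses lines like:
#   event 'block-threshold' for domain vm1: dev: vdb(/path/to/img) 104857600 2031616
# The two trailing numbers are the threshold and the excess above it.

EVENT_RE = re.compile(
    r"event 'block-threshold' for domain (?P<dom>\S+): "
    r"dev: (?P<dev>[^(]+)\((?P<path>[^)]*)\) (?P<threshold>\d+) (?P<excess>\d+)"
)

def parse_event(line):
    m = EVENT_RE.match(line)
    if not m:
        return None
    d = m.groupdict()
    d["threshold"] = int(d["threshold"])
    d["excess"] = int(d["excess"])
    return d

line = ("event 'block-threshold' for domain vm1: dev: "
        "vdb(/var/lib/libvirt/images/test.qcow2) 104857600 2031616")
print(parse_event(line))
```

The dev field may itself carry an index (e.g. vdc[1]) or a <null> path for network disks, as later sections of this log show; the regex above tolerates both.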
----------------
raw file
----------------
@Terminal 1:
## virsh domblkthreshold vm1 vdb 1G
## virsh domstats vm1 --block
Domain: 'vm1'
  ...
  block.1.threshold=1073741824

## virsh event --event block-threshold --loop

@Terminal 2:
## qemu-img create -f raw /var/lib/libvirt/images/test.raw 2G
Formatting '/var/lib/libvirt/images/test.raw', fmt=raw size=2147483648

## virsh dumpxml vm1 | grep disk -a5
...
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/test.raw'/>
      <target dev='vdb' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </disk>
...

## virsh console vm1

@guest:
# mkfs /dev/vdb

@Terminal 1:
Now the event triggered
## virsh event --event block-threshold --loop
event 'block-threshold' for domain vm1: dev: vdb(/var/lib/libvirt/images/test.raw) 1073741824 2105344

----------------
luks file
----------------
Prepare a luks encrypted img:
## qemu-img info /var/lib/libvirt/images/luks_1.img
image: /var/lib/libvirt/images/luks_1.img
file format: luks
virtual size: 1.0G (1073741824 bytes)
disk size: 33M
encrypted: yes
Format specific information:
    ivgen alg: plain64
    hash alg: sha256
    cipher alg: aes-256
    uuid: 37b40f65-3a2e-45ed-bdae-004b06ea601c
    cipher mode: xts

@Terminal 1:
## virsh dumpxml vm1 | grep vdb -a10
...
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/luks_1.img'/>
      <backingStore/>
      <target dev='vdb' bus='virtio'/>
      <encryption format='luks'>
        <secret type='passphrase' uuid='f981dd17-143f-45bc-88e6-ed1fe20ce9da'/>
      </encryption>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </disk>

## virsh start vm1
Domain vm1 started

## virsh domblklist vm1
Target     Source
------------------------------------------------
vda        /var/lib/libvirt/images/rhel7.3.qcow2
vdb        /var/lib/libvirt/images/luks_1.img


## virsh domblkthreshold vm1 vdb 100M
error: Operation not supported: threshold currently can't be set for block device 'vdb'
<=== bz 1445598

===========================
2. test with snapshot
===========================
@Terminal 1:
## virsh dumpxml vm1 | grep vdc -a10
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/test.qcow2'/>
      <backingStore/>
      <target dev='vdc' bus='virtio'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </disk>

## virsh snapshot-create-as vm1 snap1 --disk-only --diskspec vdc,file=/var/lib/libvirt/images/vdc.s1
Domain snapshot snap1 created


## virsh dumpxml vm1 | grep vdc -a10
...
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/vdc.s1'/>
      <backingStore type='file' index='1'>
        <format type='qcow2'/>
        <source file='/var/lib/libvirt/images/test.qcow2'/>
        <backingStore/>
      </backingStore>
      <target dev='vdc' bus='virtio'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </disk>

## virsh domblkthreshold vm1 vdc[1] 10086
## virsh domstats vm1 --block --backing | grep 10086
  block.5.threshold=10086

## virsh event --event block-threshold --loop

@Terminal 2:
## virsh console vm1
Connected to domain vm1
[root@localhost ~]# mkfs.ext4 /dev/vdc
[root@localhost ~]# mount /dev/vdc /mnt
[root@localhost ~]# dd if=/dev/urandom of=/mnt/test bs=1 count=10087
10087+0 records in
10087+0 records out
10087 bytes (10 kB) copied, 0.0204009 s, 494 kB/s

## virsh blockcommit vm1 vdc --pivot
Successfully pivoted

@Terminal 1:
## virsh event --event block-threshold --loop
event 'block-threshold' for domain vm1: dev: vdc[1](/var/lib/libvirt/images/test.qcow2) 10086 186522
## virsh qemu-monitor-command vm1 '{"execute":"query-named-block-nodes"}' | grep 10086
<=== no output: the threshold setting is gone after "blockcommit --pivot"
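For scripted consumers, the one-line format printed by `virsh event --event block-threshold` above can be split into fields. A minimal sketch (the regex and field names are my own illustration, not a libvirt API):

```python
import re

# Matches lines like:
#   event 'block-threshold' for domain vm1: dev: vdc[1](/path/img.qcow2) 10086 186522
#   event 'block-threshold' for domain vm1: dev: vdb(<null>) 104857600 1048576
EVENT_RE = re.compile(
    r"event 'block-threshold' for domain (?P<domain>\S+): "
    r"dev: (?P<dev>\w+)(?:\[(?P<index>\d+)\])?"
    r"\((?P<path>[^)]*)\) (?P<threshold>\d+) (?P<excess>\d+)"
)

def parse_block_threshold(line):
    """Return a dict of event fields, or None if the line does not match."""
    m = EVENT_RE.match(line.strip())
    if not m:
        return None
    d = m.groupdict()
    d["index"] = int(d["index"]) if d["index"] else None      # backing-chain index, if any
    d["path"] = None if d["path"] == "<null>" else d["path"]  # network disks report <null>
    d["threshold"] = int(d["threshold"])
    d["excess"] = int(d["excess"])
    return d
```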

===========================
3. different backend test
===========================
--------------
gluster
--------------
@Terminal 1:
## qemu-img create -f raw gluster://10.66.5.88/gluster-vol1/test.raw 1G
## virsh dumpxml vm1
...
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source protocol='gluster' name='gluster-vol1/test.raw'>
        <host name='10.66.5.88'/>
      </source>
      <backingStore/>
      <target dev='vdb' bus='virtio'/>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </disk>
...
## virsh start vm1
Domain vm1 started

## virsh domblkthreshold vm1 vdb 100M
## virsh domstats vm1 --block
Domain: 'vm1'
...
  block.1.threshold=104857600


## virsh event --event block-threshold --loop

@Terminal 2:
## virsh console vm1
[root@localhost ~]# dd if=/dev/urandom of=/dev/vdb bs=1M count=101
101+0 records in
101+0 records out
105906176 bytes (106 MB) copied, 6.48764 s, 16.3 MB/s

@Terminal 1:
## virsh event --event block-threshold --loop
event 'block-threshold' for domain vm1: dev: vdb(<null>) 104857600 1048576


--------------
iscsi
--------------
@Terminal 1:
## iscsiadm -m discovery -t sendtargets -p 10.73.196.113
10.73.196.113:3260,1 iqn.2016-03.com.virttest:logical-pool.target

## virsh dumpxml vm1
...
    <disk type='network' device='lun'>
      <driver name='qemu' type='raw'/>
      <source protocol='iscsi' name='iqn.2016-03.com.virttest:logical-pool.target/0'>
        <host name='10.73.196.113' port='3260'/>
      </source>
      <backingStore/>
      <target dev='sdb' bus='scsi'/>
      <alias name='scsi0-0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
...


## virsh domblkthreshold vm1 sdb 200M
## virsh domstats vm1 --block
Domain: 'vm1'
  ...
  block.1.threshold=209715200

## virsh event --event block-threshold --loop

@Terminal 2:
## virsh console vm1
[root@localhost ~]# dd if=/dev/urandom of=/dev/sdb bs=1M count=300
300+0 records in
300+0 records out
314572800 bytes (315 MB) copied, 17.5543 s, 17.9 MB/s

[root@localhost ~]# mkfs.ext4 /dev/sdb
mke2fs 1.42.9 (28-Dec-2013)
...
Writing superblocks and filesystem accounting information: done

[root@localhost ~]# mount /dev/sdb /mnt

[root@localhost ~]# dd if=/dev/urandom of=/mnt/1 bs=1M count=300
300+0 records in
300+0 records out
314572800 bytes (315 MB) copied, 18.8775 s, 16.7 MB/s

[root@localhost ~]# sync

@Terminal 1:
## virsh event --event block-threshold --loop
<== no event received; bz 1445596 (iscsi backend img cannot trigger block-threshold event)

--------------
ceph
--------------
@Terminal 1:
## qemu-img create -f raw rbd:libvirt-pool/rbd1.img:mon_host=10.73.75.52 1G
Formatting 'rbd:libvirt-pool/rbd1.img:mon_host=10.73.75.52', fmt=raw size=1073741824

## virsh dumpxml vm1
...
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source protocol='rbd' name='libvirt-pool/rbd1.img'>
        <host name='10.73.75.52' port='6789'/>
      </source>
      <backingStore/>
      <target dev='vdc' bus='virtio'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </disk>
...

## virsh domblkthreshold vm1 vdc 10086
## virsh domstats vm1 --block --backing | grep 10086
  block.2.threshold=10086

## virsh event --event block-threshold --loop

@Terminal 2:
## virsh console vm1
[root@localhost ~]# dd if=/dev/urandom of=/dev/vdc bs=1M count=102
102+0 records in
102+0 records out
106954752 bytes (107 MB) copied, 13.8193 s, 7.7 MB/s

@Terminal 1:
## virsh event --event block-threshold --loop
event 'block-threshold' for domain vm1: dev: vdc(<null>) 10086 2021530

--------------
nbd
--------------
@Terminal 1:
## qemu-img create -f raw ~/nbd.img 1G
## qemu-nbd -f raw  ~/nbd.img -p 10808 &
## virsh edit vm1
...
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='nbd'>
        <host name='10.66.144.26' port='10808'/>
      </source>
      <backingStore/>
      <target dev='vdb' bus='virtio'/>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </disk>
...

## virsh start vm1
Domain vm1 started

## virsh domblkthreshold vm1 vdb 100M

## virsh domstats --block vm1
Domain: 'vm1'
...
  block.1.threshold=104857600
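The `domstats --block` output above is a flat list of `block.N.key=value` lines. For monitoring scripts it can be folded into one dict per block index; a sketch (helper name and structure are my own, not a libvirt interface):

```python
from collections import defaultdict

def parse_domstats_block(lines):
    """Group 'block.N.key=value' lines from 'virsh domstats --block'
    into one dict per block index, e.g. {1: {'threshold': 104857600}}."""
    disks = defaultdict(dict)
    for line in lines:
        line = line.strip()
        if not line.startswith("block.") or "=" not in line:
            continue  # skip 'Domain:' header, '...', and non-block keys
        key, _, value = line.partition("=")
        parts = key.split(".", 2)  # ['block', 'N', 'rest.of.key']
        if len(parts) < 3 or not parts[1].isdigit():
            continue  # e.g. 'block.count=2' has no per-disk index
        disks[int(parts[1])][parts[2]] = int(value) if value.isdigit() else value
    return dict(disks)
```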

## virsh event --event block-threshold --loop

@Terminal 2:
## virsh console vm1
[root@localhost ~]# dd if=/dev/urandom of=/dev/vdb bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 12.8287 s, 16.3 MB/s

@Terminal 1:
## virsh event --event block-threshold --loop
event 'block-threshold' for domain vm1: dev: vdb(<null>) 104857600 4063232

Comment 31 yisun 2017-04-26 06:25:04 UTC
Two extra bugs were filed to track the remaining issues:
https://bugzilla.redhat.com/show_bug.cgi?id=1445596
-- iscsi backend img cannot trigger block-threshold event

https://bugzilla.redhat.com/show_bug.cgi?id=1445598
-- LUKS encrypted img not supported by "domblkthreshold"

Comment 32 errata-xmlrpc 2017-08-01 17:06:41 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:1846

