Bug 590998 - qcow2 high watermark
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.0
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: rc
Target Release: ---
Assigned To: Kevin Wolf
QA Contact: Virtualization Bugs
Duplicates: 547628
Depends On:
Blocks: 526289 580953
Reported: 2010-05-11 03:30 EDT by Kevin Wolf
Modified: 2013-01-09 17:34 EST
CC List: 6 users

Fixed In Version: qemu-kvm-0.12.1.2-2.56.el6
Doc Type: Bug Fix
Last Closed: 2010-06-09 02:45:15 EDT


Attachments: None
Description Kevin Wolf 2010-05-11 03:30:48 EDT
RHEL 5 provides functionality to track the highest offset in a qcow2 file that has been written to. VDSM requires this to grow LVs before qemu runs out of space. This functionality is not yet available in RHEL 6, but it should be.
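
For illustration, a rough Python sketch of how a management layer such as VDSM could act on this value. The helper name query_wr_highest_offset(), the 512 MiB margin and the 1G extend step are hypothetical; this is not VDSM's actual code:

import subprocess
import time

MARGIN = 512 * 1024 * 1024    # hypothetical: grow once less than 512 MiB of headroom remains
EXTEND_STEP = "1G"            # hypothetical growth increment

def query_wr_highest_offset(device):
    # Hypothetical helper: issue query-blockstats over QMP and return the
    # parent's wr_highest_offset for the given block device name.
    raise NotImplementedError

def lv_size_bytes(lv_path):
    # Current size in bytes of the logical volume backing the image
    out = subprocess.check_output(["lvs", "--noheadings", "--units", "b",
                                   "--nosuffix", "-o", "lv_size", lv_path])
    return int(out.decode().strip())

def watch(device, lv_path):
    # Poll the watermark and extend the LV before qemu runs out of space
    while True:
        watermark = query_wr_highest_offset(device)
        if lv_size_bytes(lv_path) - watermark < MARGIN:
            subprocess.check_call(["lvextend", "-L", "+" + EXTEND_STEP, lv_path])
        time.sleep(2)

The watermark itself would come from the query-blockstats QMP command, as described in comment 5 below.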
Comment 2 RHEL Product and Program Management 2010-05-11 05:24:17 EDT
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux major release.  Product Management has requested further
review of this request by Red Hat Engineering, for potential inclusion in a Red
Hat Enterprise Linux Major release.  This request is not yet committed for
inclusion.
Comment 4 Martin Jenner 2010-05-14 13:03:15 EDT
Please provide a testing procedure/notes so QE can verify this functionality is implemented correctly when the patches are applied.

thanks,
Martin
Comment 5 Kevin Wolf 2010-05-14 13:25:54 EDT
The best way to verify is probably to take a VM with two disks (one system disk and one empty test disk, the test disk in raw format) and with a QMP server enabled. Connect to the QMP server (e.g. using netcat) and issue a query-blockstats command. Including the connection setup, this might look like this:

{"QMP": {"version": {"qemu": "0.12.1", "package": " (qemu-kvm-devel)"}, "capabilities": []}}
{ "execute": "qmp_capabilities" }
{"return": {}}
{ "execute": "query-blockstats" }
{"return": [{"device": "ide0-hd0", "parent": {"stats": {"wr_highest_offset": 0, "wr_bytes": 0, "wr_operations": 0, "rd_bytes": 45067264, "rd_operations": 14679}}, "stats": {"wr_highest_offset": 0, "wr_bytes": 0, "wr_operations": 0, "rd_bytes": 45647360, "rd_operations": 15452}}, {"device": "ide0-hd1", "parent": {"stats": {"wr_highest_offset": 0, "wr_bytes": 0, "wr_operations": 0, "rd_bytes": 430080, "rd_operations": 73}}, "stats": {"wr_highest_offset": 0, "wr_bytes": 0, "wr_operations": 0, "rd_bytes": 430080, "rd_operations": 73}}, {"device": "ide1-cd0", "stats": {"wr_highest_offset": 0, "wr_bytes": 0, "wr_operations": 0, "rd_bytes": 0, "rd_operations": 0}}, {"device": "floppy0", "stats": {"wr_highest_offset": 0, "wr_bytes": 0, "wr_operations": 0, "rd_bytes": 0, "rd_operations": 0}}, {"device": "sd0", "stats": {"wr_highest_offset": 0, "wr_bytes": 0, "wr_operations": 0, "rd_bytes": 0, "rd_operations": 0}}]}

Make sure that both disks are present and that their parent has a wr_highest_offset value (this is the watermark). Now write to different offsets in the test disk (e.g. using dd) and repeatedly issue more query-blockstats commands to check that the watermark is updated to reflect the start of the highest sector you have written to.
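
As an alternative to typing the commands into netcat, here is a minimal Python sketch of the same check over a QMP unix socket. It assumes qemu was started with something like -qmp unix:/tmp/qmp.sock,server,nowait; the socket path is an assumption, adjust to your setup:

import json
import socket

SOCK_PATH = "/tmp/qmp.sock"   # assumption: qemu started with -qmp unix:/tmp/qmp.sock,server,nowait

def qmp_command(sock, rfile, name):
    sock.sendall((json.dumps({"execute": name}) + "\n").encode())
    while True:
        msg = json.loads(rfile.readline())
        if "return" in msg or "error" in msg:
            return msg
        # anything else is an asynchronous event; ignore it here

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect(SOCK_PATH)
rfile = sock.makefile("r")

json.loads(rfile.readline())                  # greeting banner
qmp_command(sock, rfile, "qmp_capabilities")  # leave capability negotiation mode
stats = qmp_command(sock, rfile, "query-blockstats")

for dev in stats["return"]:
    parent = dev.get("parent")
    if parent is not None:
        print(dev["device"], parent["stats"]["wr_highest_offset"])

Run dd against the test disk in the guest between invocations; the printed offsets should move just as described above.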
Comment 9 Qunfang Zhang 2010-06-03 23:15:43 EDT
Tested this issue according to comment 5, using qemu-kvm-0.12.1.2-2.68.el6.

Pasting my results here (only the test disk info is shown):
1. Boot a guest with two disks: one system disk and one empty 5 GB raw test disk.

"device": "ide0-hd1", "parent": {"stats": {"wr_highest_offset": 0, "wr_bytes": 0, "wr_operations": 0, "rd_bytes": 655872, "rd_operations": 65}}, "stats": {"wr_highest_offset": 0, "wr_bytes": 0, "wr_operations": 0, "rd_bytes": 655872, "rd_operations": 65}},

2. Write some data to the test disk.
dd if=/dev/zero of=/dev/hdb bs=1M count=100

"device": "ide0-hd1", "parent": {"stats": {"wr_highest_offset": 104857088, "wr_bytes": 104857600, "wr_operations": 201, "rd_bytes": 725504, "rd_operations": 69}}, "stats": {"wr_highest_offset": 104857088, "wr_bytes": 104857600, "wr_operations": 201, "rd_bytes": 725504, "rd_operations": 69}},

3. dd if=/dev/zero of=/dev/hdb seek=100 bs=1M count=100

"device": "ide0-hd1", "parent": {"stats": {"wr_highest_offset": 209714688, "wr_bytes": 209715200, "wr_operations": 402, "rd_bytes": 725504, "rd_operations": 69}}, "stats": {"wr_highest_offset": 209714688, "wr_bytes": 209715200, "wr_operations": 402, "rd_bytes": 725504, "rd_operations": 69}},

4. dd if=/dev/zero of=/dev/hdb seek=1024 bs=1M count=100

"device": "ide0-hd1", "parent": {"stats": {"wr_highest_offset": 1178598912, "wr_bytes": 419430400, "wr_operations": 804, "rd_bytes": 725504, "rd_operations": 69}}, "stats": {"wr_highest_offset": 1178598912, "wr_bytes": 419430400, "wr_operations": 804, "rd_bytes": 725504, "rd_operations": 69}},

5. dd if=/dev/zero of=/dev/hdb seek=700 bs=1M count=100

"device": "ide0-hd1", "parent": {"stats": {"wr_highest_offset": 1178598912, "wr_bytes": 524288000, "wr_operations": 1005, "rd_bytes": 725504, "rd_operations": 69}}, "stats": {"wr_highest_offset": 1178598912, "wr_bytes": 524288000, "wr_operations": 1005, "rd_bytes": 725504, "rd_operations": 69}},

6. Write data at an offset beyond the end of the disk.
dd if=/dev/zero of=/dev/hdb seek=6000 bs=1M count=100
dd: writing to '/dev/hdb': No space left on device

"device": "ide0-hd1", "parent": {"stats": {"wr_highest_offset": 1178598912, "wr_bytes": 524288000, "wr_operations": 1005, "rd_bytes": 5369434624, "rd_operations": 20630}}, "stats": {"wr_highest_offset": 1178598912, "wr_bytes": 524288000, "wr_operations": 1005, "rd_bytes": 5369434624, "rd_operations": 20630}},


qzhang -> Kevin
Could I mark this as verified (pass)? BTW, what does "parent" mean? Thanks!
Comment 10 Kevin Wolf 2010-06-04 07:46:21 EDT
Yes, the results look good to me and cover everything I suggested.

"parent" means the underlying protocol that is used to access the qcow2 data, e.g. file or host_device (or even nbd or http, but I don't think we support these with RHEL).
Comment 11 Qunfang Zhang 2010-06-09 02:45:02 EDT
(In reply to comment #10)
> Yes, the results look good to me and cover everything I suggested.
> 
> "parent" means the underlying protocol that is used to access the qcow2 data,
> e.g. file or host_device (or even nbd or http, but I don't think we support
> these with RHEL).    

Thank you, Kevin. I will close this issue then. :)
Comment 12 Kevin Wolf 2010-11-22 03:24:33 EST
*** Bug 547628 has been marked as a duplicate of this bug. ***
