Bug 590998

Summary: qcow2 high watermark
Product: Red Hat Enterprise Linux 6
Reporter: Kevin Wolf <kwolf>
Component: qemu-kvm
Assignee: Kevin Wolf <kwolf>
Status: CLOSED CURRENTRELEASE
QA Contact: Virtualization Bugs <virt-bugs>
Severity: medium
Docs Contact:
Priority: low
Version: 6.0
CC: danken, juzhang, mjenner, qzhang, tburke, virt-maint
Target Milestone: rc
Target Release: ---
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version: qemu-kvm-0.12.1.2-2.56.el6
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2010-06-09 06:45:15 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 526289, 580953

Description Kevin Wolf 2010-05-11 07:30:48 UTC
RHEL 5 provides functionality to track the highest offset in a qcow2 file that has been written to. VDSM requires this in order to grow LVs before qemu runs out of space. This functionality is not yet available in RHEL 6, but it should be.
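
As a rough illustration of how such a watermark can be consumed (a hypothetical sketch only, not VDSM's actual logic; the LV path and the 512 MiB safety threshold below are made up for illustration), a management script could compare wr_highest_offset against the current LV size and extend the LV before qemu hits the end of the device:

# Hypothetical sketch: grow the backing LV when the qcow2 watermark approaches its end.
# The watermark is assumed to come from a query-blockstats call (see comment 5).
LV=/dev/vg_guests/lv_disk1                                 # illustrative LV path
WATERMARK=$1                                               # wr_highest_offset, passed in for this sketch
LV_SIZE=$(lvs --noheadings --units b -o lv_size "$LV" | tr -d ' B')
if [ $(( LV_SIZE - WATERMARK )) -lt $(( 512 * 1024 * 1024 )) ]; then
    lvextend -L +1G "$LV"                                  # extend before qemu runs out of space
fi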

Comment 2 RHEL Program Management 2010-05-11 09:24:17 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux major release.  Product Management has requested further
review of this request by Red Hat Engineering, for potential inclusion in a Red
Hat Enterprise Linux Major release.  This request is not yet committed for
inclusion.

Comment 4 Martin Jenner 2010-05-14 17:03:15 UTC
Please provide a testing procedure/notes so QE can verify this functionality is implemented correctly when the patches are applied.

thanks,
Martin

Comment 5 Kevin Wolf 2010-05-14 17:25:54 UTC
The best way to verify is probably to take a VM with two disks (one system disk and one empty test disk; the test disk being in raw format) and with a QMP server enabled. Connect to the QMP server (e.g. using netcat) and issue a query-blockstats command. Including the connection setup, this might look like this:

{"QMP": {"version": {"qemu": "0.12.1", "package": " (qemu-kvm-devel)"}, "capabilities": []}}
{ "execute": "qmp_capabilities" }
{"return": {}}
{ "execute": "query-blockstats" }
{"return": [{"device": "ide0-hd0", "parent": {"stats": {"wr_highest_offset": 0, "wr_bytes": 0, "wr_operations": 0, "rd_bytes": 45067264, "rd_operations": 14679}}, "stats": {"wr_highest_offset": 0, "wr_bytes": 0, "wr_operations": 0, "rd_bytes": 45647360, "rd_operations": 15452}}, {"device": "ide0-hd1", "parent": {"stats": {"wr_highest_offset": 0, "wr_bytes": 0, "wr_operations": 0, "rd_bytes": 430080, "rd_operations": 73}}, "stats": {"wr_highest_offset": 0, "wr_bytes": 0, "wr_operations": 0, "rd_bytes": 430080, "rd_operations": 73}}, {"device": "ide1-cd0", "stats": {"wr_highest_offset": 0, "wr_bytes": 0, "wr_operations": 0, "rd_bytes": 0, "rd_operations": 0}}, {"device": "floppy0", "stats": {"wr_highest_offset": 0, "wr_bytes": 0, "wr_operations": 0, "rd_bytes": 0, "rd_operations": 0}}, {"device": "sd0", "stats": {"wr_highest_offset": 0, "wr_bytes": 0, "wr_operations": 0, "rd_bytes": 0, "rd_operations": 0}}]}

Make sure that both disks are present and that their parent has a wr_highest_offset value (this is the watermark). Now try writing to different offsets in the test disk (e.g. using dd) and repeatedly issue further query-blockstats commands to check that the watermark is updated to reflect the start of the highest sector you have written to. A scripted version of this check is sketched below.
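
For repeated checks, the same query can be scripted; here is a minimal sketch, assuming the QMP server was started on TCP port 4444 on localhost (the address, port, and sleep delays are assumptions for illustration; adjust them to match your -qmp option):

# Negotiate QMP capabilities, request block statistics, and pull out the watermark fields.
( echo '{ "execute": "qmp_capabilities" }'; sleep 1; \
  echo '{ "execute": "query-blockstats" }'; sleep 1 ) \
  | nc localhost 4444 \
  | grep -o '"wr_highest_offset": [0-9]*'

Run this after each dd write and watch whether the reported offsets grow as expected.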

Comment 9 Qunfang Zhang 2010-06-04 03:15:43 UTC
Tested this issue according to comment 5, using qemu-kvm-0.12.1.2-2.68.el6.

Pasting my results here (only the test disk info is shown):
1. Boot a guest with two disks: one system disk and one empty raw disk of 5 GB.

"device": "ide0-hd1", "parent": {"stats": {"wr_highest_offset": 0, "wr_bytes": 0, "wr_operations": 0, "rd_bytes": 655872, "rd_operations": 65}}, "stats": {"wr_highest_offset": 0, "wr_bytes": 0, "wr_operations": 0, "rd_bytes": 655872, "rd_operations": 65}},

2. Write some data to the test disk:
dd if=/dev/zero of=/dev/hdb bs=1M count=100

"device": "ide0-hd1", "parent": {"stats": {"wr_highest_offset": 104857088, "wr_bytes": 104857600, "wr_operations": 201, "rd_bytes": 725504, "rd_operations": 69}}, "stats": {"wr_highest_offset": 104857088, "wr_bytes": 104857600, "wr_operations": 201, "rd_bytes": 725504, "rd_operations": 69}},

3. dd if=/dev/zero of=/dev/hdb seek=100 bs=1M count=100

"device": "ide0-hd1", "parent": {"stats": {"wr_highest_offset": 209714688, "wr_bytes": 209715200, "wr_operations": 402, "rd_bytes": 725504, "rd_operations": 69}}, "stats": {"wr_highest_offset": 209714688, "wr_bytes": 209715200, "wr_operations": 402, "rd_bytes": 725504, "rd_operations": 69}},

4. dd if=/dev/zero of=/dev/hdb seek=1024 bs=1M count=100

"device": "ide0-hd1", "parent": {"stats": {"wr_highest_offset": 1178598912, "wr_bytes": 419430400, "wr_operations": 804, "rd_bytes": 725504, "rd_operations": 69}}, "stats": {"wr_highest_offset": 1178598912, "wr_bytes": 419430400, "wr_operations": 804, "rd_bytes": 725504, "rd_operations": 69}},

5. dd if=/dev/zero of=/dev/hdb seek=700 bs=1M count=100

"device": "ide0-hd1", "parent": {"stats": {"wr_highest_offset": 1178598912, "wr_bytes": 524288000, "wr_operations": 1005, "rd_bytes": 725504, "rd_operations": 69}}, "stats": {"wr_highest_offset": 1178598912, "wr_bytes": 524288000, "wr_operations": 1005, "rd_bytes": 725504, "rd_operations": 69}},

6. Write data to an offset beyond the end of the disk:
dd if=/dev/zero of=/dev/hdb seek=6000 bs=1M count=100
dd: writing to '/dev/hdb': No space left on device

"device": "ide0-hd1", "parent": {"stats": {"wr_highest_offset": 1178598912, "wr_bytes": 524288000, "wr_operations": 1005, "rd_bytes": 5369434624, "rd_operations": 20630}}, "stats": {"wr_highest_offset": 1178598912, "wr_bytes": 524288000, "wr_operations": 1005, "rd_bytes": 5369434624, "rd_operations": 20630}},


qzhang -> Kevin
Can I mark this as verified (pass)? BTW, what does "parent" mean? Thanks!

Comment 10 Kevin Wolf 2010-06-04 11:46:21 UTC
Yes, the results look good to me and cover everything I suggested.

"parent" means the underlying protocol that is used to access the qcow2 data, e.g. file or host_device (or even nbd or http, but I don't think we support these with RHEL).

Comment 11 Qunfang Zhang 2010-06-09 06:45:02 UTC
(In reply to comment #10)
> Yes, the results look good to me and cover everything I suggested.
> 
> "parent" means the underlying protocol that is used to access the qcow2 data,
> e.g. file or host_device (or even nbd or http, but I don't think we support
> these with RHEL).    

Thank you Kevin, then I will close this issue. :)

Comment 12 Kevin Wolf 2010-11-22 08:24:33 UTC
*** Bug 547628 has been marked as a duplicate of this bug. ***