Bug 1876625 - with F33 on btrfs, wouldn't VM disk format of "raw" be preferred to "qcow2"
Summary: with F33 on btrfs, wouldn't VM disk format of "raw" be preferred to "qcow2"
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Fedora
Classification: Fedora
Component: cockpit
Version: 33
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Assignee: Martin Pitt
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-09-07 18:15 UTC by Heðin
Modified: 2020-09-08 11:49 UTC
CC: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-09-08 11:49:02 UTC
Type: Bug
Embargoed:



Description Heðin 2020-09-07 18:15:26 UTC
Description of problem:
CoW on top of CoW, isn't that bad?

Version-Release number of selected component (if applicable):
* F33
** cockpit-machines-227-1.fc33
** virt-manager-2.2.1-3.fc32

How reproducible:
always

Steps to Reproduce:
1. Install F33 with btrfs
2. Install cockpit, cockpit-machines and virt-manager
3. Log in to Cockpit and try to deploy a VM; it will get a qcow2-based disk
4. The same happens with virt-manager

Actual results:
qcow2-based VM disk

Expected results:
A raw image rather than a CoW disk format (qcow2) on top of a CoW filesystem (btrfs).

Additional info:
Multiple sources on the web claim that a decent performance gain was obtained by converting qcow2 to raw on top of btrfs.
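
For reference, the conversion those sources describe is typically done with qemu-img; something like the following, with illustrative file names:

  qemu-img convert -f qcow2 -O raw disk.qcow2 disk.raw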

Comment 1 Chris Murphy 2020-09-07 21:05:15 UTC
cockpit, virt-manager, GNOME Boxes, virt-install: they all set nodatacow (chattr +C) on their enclosing directories for images, so the behavior is basically the same as on ext4/XFS.
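
You can check this yourself on the images directory (the path shown is the libvirt default and may differ on your system); note that +C only applies to files created after the flag is set:

  lsattr -d /var/lib/libvirt/images   # look for the 'C' flag
  chattr +C /var/lib/libvirt/images   # set nodatacow on the directory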

I'm not sure about Cockpit, but I'm pretty sure virt-manager effectively uses 'qemu-img -o preallocation=falloc'. Both metadata and data areas are fallocated, which gives the best long-term performance and aging characteristics, but it's not as flexible in space usage as sparse, which is the qemu-img default. The equivalents for raw files are the 'fallocate' and 'truncate' commands.
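
Roughly, those variants look like this (sizes and file names are illustrative):

  qemu-img create -f qcow2 -o preallocation=falloc disk.qcow2 40G   # fully allocated qcow2
  fallocate -l 40G disk.raw   # preallocated raw image
  truncate -s 40G disk.raw    # sparse raw image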

The problem isn't directly COW per se; even SSDs are doing COW behind the scenes. The problem is the ensuing fragmentation (many data extents), which increases the CPU and memory cost of tracking the extents, and the IO latency of retrieving them.

I'm pretty sure qcow2 is comparable to raw when both are fallocated, but some of these things just need testing and aging to understand better. The guest file system's write pattern has an effect on this, as does the qemu block device cache mode. I pretty much use guest (btrfs + compression) + qemu (virtio-blk, cache=unsafe, discard=unmap) + host (sparse raw backing file, nodatacow, on btrfs). Change the guest file system and all bets are off; this is a terrible combination for e.g. Windows NTFS.
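
Expressed as a plain QEMU invocation (the path is hypothetical; in practice this is configured through libvirt), that disk setup would look something like:

  qemu-system-x86_64 ... \
    -drive file=/var/lib/libvirt/images/guest.raw,format=raw,if=virtio,cache=unsafe,discard=unmap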

Anyway, no bug here; I'll let the Cockpit folks close this.

