Description of problem:
CoW on top of CoW, isn't that bad?

Version-Release number of selected component (if applicable):
* F33
** cockpit-machines-227-1.fc33
** virt-manager-2.2.1-3.fc32

How reproducible: always

Steps to Reproduce:
1. Install F33 with btrfs
2. Install cockpit, cockpit-machines and virt-manager
3. Log in to cockpit and deploy a VM; it gets a qcow2-based disk
4. Same for virt-manager

Actual results: qcow2-based VM disk

Expected results: not a CoW format on top of a CoW filesystem.

Additional info: Multiple sources on the web claim that a decent performance gain was obtained by converting qcow2 to raw on top of btrfs.
cockpit, virt-manager, GNOME Boxes, virt-install: all set nodatacow (chattr +C) on their enclosing directories for images, so it's basically the same as ext4/xfs. I'm not sure about Cockpit, but I'm pretty sure virt-manager effectively uses 'qemu-img -o preallocation=falloc'. Both metadata and data areas are fallocated: best long-term performance and aging characteristics, but not as flexible in space usage as sparse, which is the qemu-img default. The equivalents for raw files are the "fallocate" and "truncate" commands.

The problem isn't directly COW per se; even SSDs are doing COW behind the scenes. The problem is the ensuing fragmentation (many data extents), which increases the CPU and memory cost of tracking the extents, and the IO latency in retrieving them. I'm pretty sure qcow2 is comparable to raw when both are fallocated, but some of these things just need testing and aging to better understand. The guest file system write pattern has an effect on this, as does the qemu block device cache mode. I pretty much use guest (btrfs + compression) + qemu (virtio-blk, cache=unsafe, discard=unmap) + host (sparse raw backing file, nodatacow, on btrfs). Change the guest file system and all bets are off; this is a terrible combination for e.g. Windows NTFS.

Anyway, no bug here. I'll let the cockpit folks close this.
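For reference, a minimal sketch of the sparse-vs-fallocated distinction for raw files discussed above. File names are illustrative; the commented chattr and qemu-img lines assume a btrfs mount and an installed qemu-img, so they are shown but not run:

```shell
# Sparse raw image: logical size is reserved, but no blocks are allocated yet.
truncate -s 64M sparse.img

# Fallocated raw image: blocks are reserved up front (the "falloc" behavior).
fallocate -l 64M falloc.img

# %s = logical size in bytes, %b = allocated 512-byte blocks.
# Both report the same size, but the sparse file has (near) zero blocks.
stat -c '%n size=%s blocks=%b' sparse.img falloc.img

# On btrfs, the tools above set nodatacow on the images directory (btrfs only):
#   chattr +C /var/lib/libvirt/images
# The qcow2 analog of fallocate, per the comment above:
#   qemu-img create -f qcow2 -o preallocation=falloc disk.qcow2 10G
```

Comparing the `blocks=` values is a quick way to confirm which allocation strategy a given raw image actually uses.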