How reproducible: 100%

Steps to Reproduce:
1. Create a 1GB volume on the NetApp from the cirros image
2. Check that the volume size on the disk is around 20MB
3. Migrate that volume to another NetApp share
4. Check size again

Expected result: The volume should remain the same size

Actual results: The size is changed to 1GB, which means the volume lost its sparseness.

Additional info: We should consider using dd with "conv=sparse" as introduced in: https://review.openstack.org/#/c/182473/
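For reference, a minimal illustration of what "conv=sparse" buys us, using throwaway files (paths and sizes here are just examples, not the actual Cinder volume paths):

```shell
# Create a sparse 100 MiB source file with a single non-zero byte at the
# end, mimicking a mostly-empty cirros volume.
truncate -s 100M /tmp/src.img
printf 'X' | dd of=/tmp/src.img bs=1 seek=$((100*1024*1024 - 1)) conv=notrunc

# A plain dd writes every zero block, allocating the full 100 MiB on the
# destination -- this is the behavior we see after migration:
dd if=/tmp/src.img of=/tmp/full.img bs=1M

# conv=sparse seeks over NUL input blocks instead of writing them, so the
# copy stays thin:
dd if=/tmp/src.img of=/tmp/thin.img bs=1M conv=sparse

# Compare the allocated sizes: full.img ~100M, thin.img close to zero.
du -h /tmp/full.img /tmp/thin.img
```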
Can I assume that because it's not mentioned, the volume is not attached during the migration?

Assuming this is an unattached migration, the NetApp driver should be doing something much more efficient here than a dd. I am not familiar with NetApp's SDK, but I would be surprised to discover that they don't have an efficient API for copying files between filers whilst preserving sparseness. In fact, are we sure that the driver doesn't implement this, but we're just being pushed off the efficient path somehow?

First things first: we need to know exactly which NetApp driver they're using in Cinder. It turns out there are lots.

Secondly, as a workaround, what happens if they run fstrim in the guest after the migration? Do they get the sparseness back?
Note that a related patch recently landed for the LVM driver: https://review.openstack.org/182473 can we do something similar for NetApp?
(In reply to Matthew Booth from comment #4)
> Can I assume that because it's not mentioned, the volume is not attached
> during the migration?

Yes, that's why the bug is assigned to Cinder! Attached volume migration is performed by Nova, and it doesn't work if the VM is shut off (we have another bug for this) or loses its sparseness if the VM is up and running (libvirt's blockRebase).

> Assuming this is an unattached migration, the netapp driver should be doing
> something much more efficient here than a dd. I am not familiar with
> Netapp's sdk, but I would be surprised to discover that they don't have an
> efficient api for copying files between filers whilst preserving sparseness.
> In fact, are we sure that the driver doesn't implement this, but we're just
> being pushed off the efficient path somehow?

Interesting point, but it seems that NetappNfsDriver doesn't implement anything special in this case, so the implementation falls back to the default dd.

> First things first: we need to know exactly which netapp driver they're
> using in cinder. It turns out there are lots.

Right, but we already know that info: GSS reported that they use the NetappNfsDriver.

> Secondly, as a workaround, what happens if they run fstrim in the guest
> after the migration? Do they get the sparseness back?

From within the guest the volume looks like a block device, so I don't think fstrim covers that case. According to its man page: "fstrim - discard unused blocks on a mounted filesystem". It's worth checking, but I don't think it's going to work.

As Eoghan mentioned in comment #5, the upstream community fixed that problem for the LVM iSCSI driver; we have to check whether we can do something similar with the NFS/NetappNfs driver, or, even better, fix the Cinder default behavior.
General thought I'd like to put here for reference: The work to add qcow2 snapshots as an NFS driver feature in Cinder implicitly also adds general qcow2 support for the NFS backend. This would likely avoid/resolve this issue, because when using qcow2 files, they will remain sparse through libvirt operations. (I think.)
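A quick way to see this behavior (assuming qemu-img is available; the path and size below are just examples): an empty qcow2 only allocates metadata clusters, and clusters are allocated lazily as data is written.

```shell
# Create a 1G qcow2 volume file; only qcow2 metadata is allocated, so the
# file on the NFS share stays tiny until the guest writes data.
# (Path is an example; requires qemu-img.)
qemu-img create -f qcow2 /tmp/vol.qcow2 1G

# 'virtual size' shows 1G while 'disk size' stays small.
qemu-img info /tmp/vol.qcow2
du -h /tmp/vol.qcow2
```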
I talked to the libvirt guys; the problem here is that maintaining sparseness for live migration requires changes to both libvirt and qemu, not just Nova. This work is on the radar in qemu upstream, but I'm not sure there's a definite target for implementing it yet.
I know that upstream qemu is aware of the fact that drive-mirror (the underlying QMP command that libvirt uses for blockRebase) should have a mode that preserves sparseness, but I'm not yet sure if all of that is already available or if it is still pending additional patches to land upstream and be backported. At this point, we may be better off asking the qemu folks about the current state of the art.

For example, this is Fam's work on adding an 'unmap' flag: https://lists.gnu.org/archive/html/qemu-devel/2015-05/msg05673.html and his explanation of what it will do: https://lists.gnu.org/archive/html/qemu-devel/2015-06/msg00831.html

But since it defaults to 'true' (which is the case that preserves the most sparseness), libvirt should already be using it (if it defaulted to false but needs to be enabled, then that would be a libvirt change; likewise, if libvirt needed to make it easier to flip the bit to false, that would be a libvirt change, but I don't think we want it false). So if there are situations where the destination is being allocated where the source was sparse, the new 'unmap' flag may not impact that, and we may need further investigation from the qemu side.
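For reference, my understanding is that with Fam's series applied, the drive-mirror QMP command underneath blockRebase would look roughly like this (the device name and target path below are made up for illustration; 'unmap' is the flag from the threads above and is said to default to true):

```json
{ "execute": "drive-mirror",
  "arguments": {
    "device": "drive-virtio-disk0",
    "target": "/mnt/netapp2/volume-1234",
    "sync": "full",
    "mode": "existing",
    "unmap": true
  } }
```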
What does 'qemu-img map --output=json $source' say about the source file? That tells you how much qemu was able to deduce about the sparseness of the source. If qemu doesn't know that the source is sparse, then it is harder to make the destination be equally sparse.
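To illustrate what that looks like on a fully sparse raw file (the path is an example, and this assumes qemu-img is installed):

```shell
# Create a fully sparse 100M raw file: every extent should come back as a
# hole that qemu detected through the filesystem.
truncate -s 100M /tmp/sparse-test.img
qemu-img map --output=json /tmp/sparse-test.img
# Extents with "data": false are holes/zeroes qemu knows about; if the
# whole file instead reports "data": true, qemu sees it as fully
# allocated, and the destination will likely end up fully allocated too.
```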
There are three parts here:

- Maintaining sparseness in non-live migration: this is "conv=sparse". The solution used for LVM can be applied to any other dd user.
- Maintaining sparseness in live migration of NFS-based images: QEMU can do this, but NFS does not have a mechanism to pass information about sparseness to QEMU. We can use the existing zero detection in QEMU, though. We can implement it in QEMU 2.4 or 2.5, but we can probably backport to RHEL 7.2 if it is urgent enough.
- Maintaining sparseness in live migration where the guest is doing discards (aka trim) during the live migration: this currently crashes QEMU, and this is what Fam is working on. It will be fixed in QEMU 2.4 and probably backported to RHEL 7.2.

Fam, please correct me here.
Paolo's summarizing is correct. I'll send a patch for part 2, the answer to the question in comment will be helpful to confirm this is the right part to put a fix.
(In reply to Fam Zheng from comment #19) > Paolo's summarizing is correct. I'll send a patch for part 2, the answer to > the question in comment will be helpful to confirm this is the right part to > put a fix. s/comment/comment 17/
(In reply to Fam Zheng from comment #19) > I'll send a patch for part 2 https://patchwork.ozlabs.org/patch/481798/
Oops, and https://patchwork.ozlabs.org/patch/481797/ (sorry for flooding this)
@pbonzini: Can you clarify whether you're referring to the active or inactive VM case in comment #18, specifically: "Maintaining sparseness in *live* migration ..."

The reason I ask is that IIUC this bug was originally intended to capture the inactive VM case: "NetApp volume loses its sparseness after *offline* volume migration between different NetApp shares" whereas we should cleave off the active VM scenario to a separate bug, in which case the actual volume migration is achieved via a call from Cinder into the Nova swap_volume operation (which eventually boils down to blockRebase in the libvirt case).
Cleaved off BZ 1229843 to cover the active VM case (leaving this bug to solely represent the case where the VM is inactive).
@pbonzini, I mistakenly removed a NEEDINFO on you. Please, see the comment #23. Thanks
(In reply to Sergey Gotliv from comment #29) > @pbonzini, > > I mistakenly removed a NEEDINFO on you. Please, see the comment #23. And I just did the same, apologies.