Bug 1772439 - Restoring a VM from a block device fails with "unable to get SELinux context of /dev/<disk>: No such file or directory"
Keywords:
Status: CLOSED DUPLICATE of bug 1772838
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: libvirt
Version: 8.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: rc
Assignee: Michal Privoznik
QA Contact: Yanqiu Zhang
URL:
Whiteboard:
Depends On:
Blocks: 1772838
 
Reported: 2019-11-14 10:53 UTC by Erik Skultety
Modified: 2020-04-03 16:01 UTC
7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-04-03 16:01:07 UTC
Type: Bug
Target Upstream Version:
Embargoed:



Description Erik Skultety 2019-11-14 10:53:10 UTC
Description of problem:
When libvirt tries to restore a VM from an image that was previously saved to a block device rather than a file it fails with:
"error: internal error: child reported (status=125): unable to get SELinux context of /dev/<disk>: No such file or directory"

Saving the domain to the block device was successful and the header confirms that:

$ strings /dev/vdb | head
LibvirtQemudSave
<domain type='kvm'>
  <name>alpine</name>
  <uuid>291fbd79-092b-4719-89f0-876d58d7a9fd</uuid>
  <metadata>
    <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
      <libosinfo:os id="http://alpinelinux.org/alpinelinux/3.8"/>
    </libosinfo:libosinfo>
  </metadata>
  <memory unit='KiB'>524288</memory>
...

Version-Release number of selected component (if applicable):
libvirt.x86_64 5.6.0-5.module+el8.1.0+4229+2e4e348c
qemu-kvm.x86_64 15:4.1.0-10.module+el8.1.0+4234+33aa4f57


How reproducible:
100%

Steps to Reproduce:
1. start a VM
2. prepare a block device to which the domain state should be saved

$ ls -lZ /dev/<disk>
brw-rw----. 1 root root system_u:object_r:fixed_disk_device_t:s0 253

3. save the domain state

$ virsh save <domain> /dev/<disk>
Domain alpine saved to /dev/<disk>

$ ls -lZ /dev/<disk>
brw-rw----. 1 root root system_u:object_r:svirt_image_t:s0:c416,c861 253

$ strings /dev/<disk> | head
# output provided above

4. try to restore the domain from the block device

$ virsh restore /dev/<disk>
error: Failed to restore domain from /dev/<disk>                                         
error: internal error: child reported (status=125): unable to get SELinux context of /dev/<disk>: No such file or directory

5. disable namespaces in qemu.conf
$ grep namespaces /etc/libvirt/qemu.conf
namespaces=[]

6. try to restore the domain again
$ virsh restore /dev/<disk>
Domain restored from /dev/vdb

7. check that the domain is in the default state prior to saving it
$ virsh list --all
 Id   Name     State
------------------------
3    alpine   running

Actual results:
libvirt fails to restore a domain from a block device containing the state

Expected results:
The operation should succeed.

Additional info:
Apparently libvirt is able to expose the device in the mount namespace when doing a save, but fails to do so when doing a restore; disabling namespaces before restoring the domain leads to a successful restore.

Comment 1 Peter Krempa 2019-11-14 10:58:24 UTC
Storing a VM to a block device was never a good idea in the first place. I'd suggest you don't do it.

Comment 2 Peter Krempa 2019-11-14 11:13:06 UTC
To be more specific on why this is a bad idea:

1) It's hard to judge the final size of the image, and a block device or logical volume will not grow automatically; if the device is too small, the save operation will fail.

2) This is coupled with a second problem: the block device must be oversized. While this probably isn't a problem for uncompressed images, as qemu stores the sizes of the sections internally, it might become a problem for the decompression algorithm, which will not be able to determine where the image ends and thus will decompress garbage.

3) Additionally, there is no benefit to storing the memory image on a block device, so given the complications above it doesn't make sense to use it this way.
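Whether the trailing garbage in point 2 is actually a problem depends on the compression format: a format with an explicit end-of-stream marker lets the reader stop before the stale bytes that follow the image on an oversized device. A small Python sketch using zlib as a stand-in (this is not libvirt's actual code path or QEMU's save format, just an illustration of the framing issue):

```python
import zlib

# Stand-in for a saved state image (the real format is QEMU's, not zlib;
# this only illustrates the end-of-stream framing issue).
state = b"LibvirtQemudSave" + b"\x00" * 4096
image = zlib.compress(state)

# Simulate an oversized block device: the compressed image is followed by
# stale bytes left over from whatever the device contained before.
device = image + b"\xde\xad\xbe\xef" * 16

d = zlib.decompressobj()
restored = d.decompress(device)

assert restored == state
assert d.eof                      # zlib streams carry an explicit end marker
assert d.unused_data.startswith(b"\xde\xad\xbe\xef")
```

A decompressor that reads until EOF (e.g. a plain pipeline) has no such marker to rely on, which is the scenario the comment above is worried about.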

Comment 3 Erik Skultety 2019-11-14 12:01:15 UTC
(In reply to Peter Krempa from comment #1)
> Storing a VM to a block device is not a good idea from the beginning. I'd
> suggest you don't do it.

The point of this bug was not the merit of save/restore operations on block devices (the use cases are questionable), but the behaviour of namespaces, which is clearly inconsistent: when you save the image, libvirt exposes the device node in the namespace, but it doesn't do so for the restore. So, regardless of whether it is or is not a good idea to perform saves on block devices, this inconsistency should be addressed.

(In reply to Peter Krempa from comment #2)
> To be more specific on why this is a bad idea:
> 
> 1) It's hard to judge the final size of the image and since a block-device
> or logical volume will not grow automatically. For a too small image the
> save operation will fail
> 
> 2) This couples with a second problem because the block device must be
> oversized. While this probably isn't a problem for uncompressed images as
> qemu stores the sizes of the sections internally, it might become a problem
> for the decompression algorithm which will not be able to determine when the
> image ended and thus will decompress garbage.

This is all true, and yet none of it applies to my case. It's obvious from the reproducer that the feature only half works. If you're not willing to fix this, the practice should be clearly discouraged in the documentation.

Comment 4 Daniel Berrangé 2019-11-14 12:43:46 UTC
(In reply to Peter Krempa from comment #2)
> To be more specific on why this is a bad idea:
> 
> 1) It's hard to judge the final size of the image and since a block-device
> or logical volume will not grow automatically. For a too small image the
> save operation will fail

If you simply take that block device and run mkfs on it, you'll get a filesystem of approximately the same size, which we can then save to, but the file we're saving to can only grow up to the size of the block device the FS is on. IOW, I don't think this is a strong argument against using a block device. Whether using a block device or a file, you need to make sure there is sufficient space.

Judging the size needed isn't that difficult either, it is basically just the guest RAM size plus some handful of MB of overhead for the device state which is finite & easily measured, especially if your VM configs are standardized.
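As a rough worked example of that sizing heuristic, for the 512 MiB guest from the reproducer (the 64 MiB overhead figure below is an assumed illustration, not a measured value):

```python
# Rough save-image sizing per the heuristic above: guest RAM plus a fixed
# allowance for device state.
guest_ram_kib = 524288          # <memory unit='KiB'> from the saved domain XML
overhead_mib = 64               # assumed headroom for device state (hypothetical)

needed_bytes = guest_ram_kib * 1024 + overhead_mib * 1024 * 1024
needed_mib = needed_bytes // (1024 * 1024)
print(needed_mib)  # 576
```

I.e. a ~576 MiB block device or logical volume would suffice under that assumption; the real overhead depends on the device model configuration.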

> 2) This couples with a second problem because the block device must be
> oversized. While this probably isn't a problem for uncompressed images as
> qemu stores the sizes of the sections internally, it might become a problem
> for the decompression algorithm which will not be able to determine when the
> image ended and thus will decompress garbage.

Decompressing garbage isn't the end of the world. AFAIK, QEMU will simply stop reading from the stream once it has loaded everything it needs, even if the stream has more trailing garbage that should just be discarded. So as long as we explicitly stop the decompression program once QEMU has reported the image is loaded, we should be fine.
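The "explicitly stop the decompression program" idea can be sketched as a generic subprocess pattern (this is not libvirt's code; `cat /dev/zero` stands in for a decompressor reading an oversized device, i.e. a stream that never hits EOF on its own):

```python
import subprocess

# Stand-in for a decompression helper reading from an oversized block
# device: the stream never ends on its own.
proc = subprocess.Popen(["cat", "/dev/zero"], stdout=subprocess.PIPE)

image_size = 1024 * 1024   # pretend QEMU reported the state fully loaded here
data = proc.stdout.read(image_size)

# Once the consumer has everything it needs, stop the helper rather than
# waiting for an EOF that will never come.
proc.terminate()
proc.wait()

assert len(data) == image_size
```

The point is only that the parent, not the stream, decides when the pipeline is done.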

> 3) There are additionally no benefits in storing the memory image in a block
> device so it doesn't make sense to use it this way due to the complications
> above.

I don't think that's the case - it's a similar tradeoff to storing disk images in files vs block devices. Block devices generally have lower I/O overhead due to eliminating the FS layer. Assuming any bugs are addressed, what remains is mostly just a docs problem IMHO.

Comment 5 Erik Skultety 2019-11-15 09:52:16 UTC
Re-opening per comment 4 and BZ https://bugzilla.redhat.com/show_bug.cgi?id=1772838.

Comment 6 Michal Privoznik 2020-04-03 16:01:07 UTC
Thing is, libvirt doesn't need to create the device it's restoring the domain from in the namespace. The explanation is here:

https://www.redhat.com/archives/libvir-list/2020-April/msg00187.html

I've sent patches to fix the related bug:

https://www.redhat.com/archives/libvir-list/2020-April/msg00185.html

and therefore I think this one can be closed. It's the same problem. If you disagree, please feel free to reopen.

*** This bug has been marked as a duplicate of bug 1772838 ***

