Bug 1751120 - Unable to read from '/sys/fs/cgroup/machine/cgroup.controllers': No such file or directory
Summary: Unable to read from '/sys/fs/cgroup/machine/cgroup.controllers': No such file or directory
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Fedora
Classification: Fedora
Component: libvirt
Version: 31
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Libvirt Maintainers
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Duplicates: 1754161 1755198
Depends On:
Blocks:
 
Reported: 2019-09-11 08:44 UTC by Yanko Kaneti
Modified: 2019-10-10 08:52 UTC
CC List: 17 users

Fixed In Version: libvirt-5.6.0-4.fc31
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-10-04 20:05:16 UTC
Type: Bug
Embargoed:


Description Yanko Kaneti 2019-09-11 08:44:10 UTC
Description of problem:
Trying to create a VM using virt-manager. Everything seems OK until libvirt goes to start the VM:

Error starting domain: Unable to read from '/sys/fs/cgroup/machine/cgroup.controllers': No such file or directory

Version-Release number of selected component (if applicable):
kernel  -  5.3.0-0.rc7.git1.1.fc32.x86_64
libvirt-daemon-5.7.0-1.fc32.x86_64

SELinux is in permissive mode.

# ls -al  /sys/fs/cgroup/ 
total 0
dr-xr-xr-x.  6 root root 0 Sep 10 09:07 .
drwxr-xr-x. 10 root root 0 Sep 10 09:07 ..
-r--r--r--.  1 root root 0 Sep 10 09:07 cgroup.controllers
-rw-r--r--.  1 root root 0 Sep 11 11:14 cgroup.max.depth
-rw-r--r--.  1 root root 0 Sep 11 11:14 cgroup.max.descendants
-rw-r--r--.  1 root root 0 Sep 10 09:07 cgroup.procs
-r--r--r--.  1 root root 0 Sep 11 11:14 cgroup.stat
-rw-r--r--.  1 root root 0 Sep 11 11:41 cgroup.subtree_control
-rw-r--r--.  1 root root 0 Sep 11 11:14 cgroup.threads
-rw-r--r--.  1 root root 0 Sep 11 11:14 cpu.pressure
-r--r--r--.  1 root root 0 Sep 11 11:14 cpuset.cpus.effective
-r--r--r--.  1 root root 0 Sep 11 11:14 cpuset.mems.effective
drwxr-xr-x.  2 root root 0 Sep 10 09:07 init.scope
-rw-r--r--.  1 root root 0 Sep 11 11:14 io.pressure
drwxr-xr-x.  2 root root 0 Sep 11 11:41 machine.slice
-rw-r--r--.  1 root root 0 Sep 11 11:14 memory.pressure
drwxr-xr-x. 98 root root 0 Sep 10 09:07 system.slice
drwxr-xr-x.  4 root root 0 Sep 11 09:00 user.slice

Comment 1 Yanko Kaneti 2019-09-11 08:45:09 UTC
# mount | grep cgroup
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,seclabel,nsdelegate)

Comment 2 Cole Robinson 2019-09-11 14:10:40 UTC
Moving back to libvirt, it's unlikely this is specific to libvirt-python.

FWIW, Fedora 31 with libvirt 5.6.0-1 and kernel 5.3.0-0.rc6.git0.1.fc31.x86_64 has similar mount and /sys/fs/cgroup/ ls output, but VM startup works fine.

Comment 3 Pavel Hrdina 2019-09-12 09:07:09 UTC
Hi, can you please provide debug logs from libvirt [1]?

Thanks, Pavel.

[1] <https://wiki.libvirt.org/page/DebugLogs>
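
In short, that page boils down to something along these lines in /etc/libvirt/libvirtd.conf (just a sketch; adjust the filter list to taste) and then restarting the daemon before reproducing:

    log_filters="3:remote 4:event 3:util.json 3:rpc 1:*"
    log_outputs="1:file:/var/log/libvirt/libvirtd.log"

    # systemctl restart libvirtd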

Comment 4 Yanko Kaneti 2019-09-12 14:20:33 UTC
Now, after a few reboots for other reasons and some more Rawhide updates, I don't seem to be able to reproduce this anymore.
It was probably a transient Rawhide fluke.

Sorry for the noise

Comment 5 Cole Robinson 2019-09-18 14:55:12 UTC
I saw this on f31 too. Machine was running for a few days across suspend/resumes, and VMs wouldn't start today. Restarting libvirtd didn't help. Rebooting fixed it. If I reproduce again I'll reopen.

Comment 6 Cole Robinson 2019-09-18 22:07:10 UTC
This popped up again. I started a VM, 15 minutes later tried to start another one, and this message was happening consistently. Couldn't figure out any way to make it go away besides a reboot.

Comment 7 Chris Marusich 2019-09-23 04:11:22 UTC
The Guix community has encountered a similar problem with libvirt recently:

https://debbugs.gnu.org/cgi/bugreport.cgi?bug=36634

Based on Christopher Baines' correspondence in that bug report, it seems the problem may have been introduced going from libvirt 5.4.0 to 5.5.0.  He was able to work around the issue by using libvirt 5.4.0.

In Guix, the symptoms are slightly different from those reported here.  The Guix bug report mentions "/sys/fs/cgroup/unified/machine/cgroup.controllers", but this bug report mentions "/sys/fs/cgroup/machine/cgroup.controllers".  The Guix bug report says the problem occurs when creating a domain, but this bug report says it occurs when starting a VM.  The symptoms are similar enough that I feel it is worth mentioning here, though.  The problem is consistently reproducible in Guix by following these steps:

* Install Guix (perhaps in a VM) - follow the official installation guide's instructions: https://guix.gnu.org/download/
* After installation, run "guix pull" to update Guix: https://guix.gnu.org/manual/en/html_node/Invoking-guix-pull.html#Invoking-guix-pull
* Install virt-manager with "guix package -i virt-manager": https://guix.gnu.org/manual/en/html_node/Invoking-guix-package.html#index-installing-packages
* Verify that the freshly installed virt-manager is in fact using libvirt version 5.6.0, and not some other version - it should show up in the output of the following command: guix gc --requisites $(realpath  $(which virt-manager)) | grep libvirt-5.6.0
* Run virt-manager and try to create a new domain.  Use any random installer ISO, and tell virt-manager to create a new disk image when creating the domain.

This will always fail with the error message mentioned in the Guix bug report.

Comment 8 Cole Robinson 2019-09-24 16:34:12 UTC
*** Bug 1754161 has been marked as a duplicate of this bug. ***

Comment 9 Cole Robinson 2019-09-24 18:36:38 UTC
Thanks Chris, that is helpful, sounds like the same issue to me.

Between libvirt 5.4.0 and 5.5.0 there were a handful of cgroup patches, most mentioning init of controllers, so it makes sense that an issue popped up there. What's not clear to me is if this is strictly a libvirt bug or if this is tickling an issue in systemd or the kernel maybe.

Pavel can you provide some info:

Is /sys/fs/cgroup/machine/cgroup.controllers only meant to be visible to the particular 'machine' process?
Is that a cgroup v1 path or cgroup v2? How do I determine which one the VM is using? (both are mounted on my f31 host apparently)
Any other debugging suggestions? Something to try or a workaround to get the host back into a working state?

Also if anyone can come up with a reliable reproducer that would be super helpful

Comment 10 Pavel Hrdina 2019-09-25 08:15:23 UTC
(In reply to Cole Robinson from comment #9)
> Thanks Chris, that is helpful, sounds like the same issue to me.
> 
> Between libvirt 5.4.0 and 5.5.0 there were a handful of cgroup patches, most
> mentioning init of controllers, so it makes sense that an issue popped up
> there. What's not clear to me is if this is strictly a libvirt bug or if
> this is tickling an issue in systemd or the kernel maybe.
> 
> Pavel can you provide some info:
> 
> Is /sys/fs/cgroup/machine/cgroup.controllers only meant to be visible to the
> particular 'machine' process?

This path should never exist on systemd distributions; it should be

    '/sys/fs/cgroup/machine.slice/cgroup.controllers'

> Is that a cgroup v1 path or cgroup v2? How do I determine which one the VM
> is using? (both are mounted on my f31 host apparently)

That path was created by libvirt because for some reason it thinks that it's running
on a host without systemd, where the creation of cgroup directories is done by libvirt itself.

To figure out whether you have cgroups v1 or cgroups v2, just run

    mount | grep cgroup

and if there is only a cgroup2 mount point, you have cgroups v2 enabled.
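
For example, the reporter's host (comment 1) shows only the unified mount, i.e. cgroups v2:

    # mount | grep cgroup
    cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,seclabel,nsdelegate)

whereas a v1 or hybrid host lists a tmpfs plus several per-controller 'cgroup' mounts. Another quick check (assuming GNU coreutils stat) is

    # stat -fc %T /sys/fs/cgroup/

which should print 'cgroup2fs' on a pure v2 host and 'tmpfs' on a v1/hybrid one.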

> Any other debugging suggestions? Something to try or a workaround to get the
> host back into a working state?

Restarting libvirtd should do the trick, but I'm not sure as I was not able to reproduce
it yet.

> Also if anyone can come up with a reliable reproducer that would be super
> helpful

Another question is whether this happens with the newer libvirt 5.6.0, as there are some patches
to fix the controller detection.

If this happens to someone occasionally, can that person please enable debug logging as
described here [1] so we can figure out how it happens?

[1] <https://wiki.libvirt.org/page/DebugLogs>

Comment 11 Adam Williamson 2019-09-25 15:10:08 UTC
*** Bug 1755198 has been marked as a duplicate of this bug. ***

Comment 12 Adam Williamson 2019-09-25 15:11:49 UTC
I saw this yesterday on my F31 system with libvirt-5.6.0-3.fc31.x86_64 , and filed 1755198. If I can catch it again I'll try and get logs.

Comment 13 Cole Robinson 2019-09-26 12:43:42 UTC
I dug into the libvirt code to try and get a better idea of what might be happening, but I'm still not sure. Some change on the host is either tickling a libvirt bug or is responsible for the bad behavior. I haven't encountered the issue for a while, though.

Next time someone can reproduce this, besides the libvirtd logs Pavel mentioned, please also provide the following (a one-liner to collect it all is sketched after the list):

* /proc/mounts
* output of 'machinectl'
* systemctl status machine.slice
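
Something like this (just a convenience wrapper around the commands above) dumps it all into a single file to attach:

    $ { cat /proc/mounts; echo; machinectl; echo; systemctl status machine.slice; } > cgroup-state.txt 2>&1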

Comment 14 Jonathan Billings 2019-09-26 15:14:24 UTC
I turned on debug logging (per the DebugLogs wiki page) on my f31 system that is showing these symptoms, and the debug log says:

2019-09-26 15:12:42.135+0000: 22936: error : virFileReadAll:1431 : Failed to open file '/sys/fs/cgroup/machine/cgroup.controllers': No such file or directory
2019-09-26 15:12:42.135+0000: 22936: error : virCgroupV2ParseControllersFile:268 : Unable to read from '/sys/fs/cgroup/machine/cgroup.controllers': No such file or directory

Comment 15 Jonathan Billings 2019-09-26 15:17:21 UTC
Here's more information from the log:
2019-09-26 15:12:42.135+0000: 22936: info : virDBusCall:1588 : DBUS_METHOD_REPLY: 'org.freedesktop.machine1.Manager.CreateMachineWithNetwork' on '/org/freedesktop/machine1' at 'org.freedesktop.machine1'
2019-09-26 15:12:42.135+0000: 22936: debug : virCgroupNewMachineSystemd:1145 : Detecting systemd placement
2019-09-26 15:12:42.135+0000: 22936: debug : virCgroupNew:677 : pid=23022 path= parent=(nil) controllers=-1 group=0x7f535bffe2b8
2019-09-26 15:12:42.135+0000: 22936: debug : virCgroupDetect:353 : group=0x7f5344025bc0 controllers=-1 path= parent=(nil)
2019-09-26 15:12:42.135+0000: 22936: debug : virFileClose:114 : Closed fd 39
2019-09-26 15:12:42.135+0000: 22936: debug : virCgroupDetectPlacement:289 : Detecting placement for pid 23022 path
2019-09-26 15:12:42.135+0000: 22936: debug : virFileClose:114 : Closed fd 37
2019-09-26 15:12:42.135+0000: 22936: debug : virCgroupV2DetectControllers:313 : Controller 'cpu' present=yes
2019-09-26 15:12:42.135+0000: 22936: debug : virCgroupV2DetectControllers:313 : Controller 'cpuacct' present=yes
2019-09-26 15:12:42.135+0000: 22936: debug : virCgroupV2DetectControllers:313 : Controller 'cpuset' present=yes
2019-09-26 15:12:42.135+0000: 22936: debug : virCgroupV2DetectControllers:313 : Controller 'memory' present=yes
2019-09-26 15:12:42.135+0000: 22936: debug : virCgroupV2DetectControllers:313 : Controller 'devices' present=no
2019-09-26 15:12:42.135+0000: 22936: debug : virCgroupV2DetectControllers:313 : Controller 'freezer' present=no
2019-09-26 15:12:42.135+0000: 22936: debug : virCgroupV2DetectControllers:313 : Controller 'io' present=yes
2019-09-26 15:12:42.135+0000: 22936: debug : virCgroupV2DetectControllers:313 : Controller 'net_cls' present=no
2019-09-26 15:12:42.135+0000: 22936: debug : virCgroupV2DetectControllers:313 : Controller 'perf_event' present=no
2019-09-26 15:12:42.135+0000: 22936: debug : virCgroupV2DetectControllers:313 : Controller 'name=systemd' present=no
2019-09-26 15:12:42.135+0000: 22936: debug : virCgroupNewMachineSystemd:1160 : Systemd didn't setup its controller
2019-09-26 15:12:42.135+0000: 22936: debug : virCgroupNewMachineManual:1203 : Fallback to non-systemd setup
2019-09-26 15:12:42.135+0000: 22936: debug : virCgroupNewPartition:848 : path=/machine create=1 controllers=ffffffff
2019-09-26 15:12:42.135+0000: 22936: debug : virCgroupNew:677 : pid=-1 path=/machine parent=(nil) controllers=-1 group=0x7f535bffe2d0
2019-09-26 15:12:42.135+0000: 22936: debug : virCgroupDetect:353 : group=0x7f5344025db0 controllers=-1 path=/machine parent=(nil)
2019-09-26 15:12:42.135+0000: 22936: debug : virFileClose:114 : Closed fd 39
2019-09-26 15:12:42.135+0000: 22936: debug : virCgroupDetectPlacement:289 : Detecting placement for pid -1 path /machine
2019-09-26 15:12:42.135+0000: 22936: error : virFileReadAll:1431 : Failed to open file '/sys/fs/cgroup/machine/cgroup.controllers': No such file or directory
2019-09-26 15:12:42.135+0000: 22936: error : virCgroupV2ParseControllersFile:268 : Unable to read from '/sys/fs/cgroup/machine/cgroup.controllers': No such file or directory

Comment 16 Jonathan Billings 2019-09-26 15:31:14 UTC
# systemctl status machine.slice
● machine.slice - Virtual Machine and Container Slice
   Loaded: loaded (/usr/lib/systemd/system/machine.slice; static; vendor preset: disabled)
   Active: active since Thu 2019-09-26 08:03:20 EDT; 3h 26min ago
     Docs: man:systemd.special(7)
    Tasks: 0
   Memory: 1.6M
      CPU: 19.039s
   CGroup: /machine.slice

# machinectl 
No machines.

/proc/self/mounts:

sysfs /sys sysfs rw,seclabel,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
devtmpfs /dev devtmpfs rw,seclabel,nosuid,size=16347476k,nr_inodes=4086869,mode=755 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,seclabel,nosuid,nodev 0 0
devpts /dev/pts devpts rw,seclabel,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,seclabel,nosuid,nodev,mode=755 0 0
cgroup2 /sys/fs/cgroup cgroup2 rw,seclabel,nosuid,nodev,noexec,relatime,nsdelegate 0 0
pstore /sys/fs/pstore pstore rw,seclabel,nosuid,nodev,noexec,relatime 0 0
efivarfs /sys/firmware/efi/efivars efivarfs rw,nosuid,nodev,noexec,relatime 0 0
bpf /sys/fs/bpf bpf rw,nosuid,nodev,noexec,relatime,mode=700 0 0
configfs /sys/kernel/config configfs rw,nosuid,nodev,noexec,relatime 0 0
/dev/mapper/fedora_localhost--live-root / ext4 rw,seclabel,relatime 0 0
selinuxfs /sys/fs/selinux selinuxfs rw,relatime 0 0
mqueue /dev/mqueue mqueue rw,seclabel,nosuid,nodev,noexec,relatime 0 0
debugfs /sys/kernel/debug debugfs rw,seclabel,nosuid,nodev,noexec,relatime 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,seclabel,relatime,pagesize=2M 0 0
tmpfs /tmp tmpfs rw,seclabel,nosuid,nodev 0 0
/dev/nvme0n1p2 /boot ext4 rw,seclabel,relatime 0 0
/dev/mapper/fedora_localhost--live-home /home ext4 rw,seclabel,relatime 0 0
/dev/mapper/fedora_localhost--live-home /var/lib/mock ext4 rw,seclabel,relatime 0 0
/dev/nvme0n1p1 /boot/efi vfat rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=winnt,errors=remount-ro 0 0

Comment 17 Cole Robinson 2019-09-26 15:56:44 UTC
Thanks Jonathan, this helps.

2019-09-26 15:12:42.135+0000: 22936: debug : virCgroupNewMachineSystemd:1160 : Systemd didn't setup its controller
2019-09-26 15:12:42.135+0000: 22936: debug : virCgroupNewMachineManual:1203 : Fallback to non-systemd setup

This drops us into virCgroupNewMachineManual, which has:
    if (virCgroupNewPartition(partition,                                         
                              STREQ(partition, "/machine"),                      
                              controllers,                                       
                              &parent) < 0) {

partition == /machine (which I think is valid), but it eventually gets used as 'path' here, which obviously isn't correct in this context.
The same failure can be manually triggered by forcing the 'Systemd didn't setup its controller' code path to be hit.
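
The mismatch is easy to see on a systemd host with cgroups v2; using the paths from the ls output in the description, the expected picture is roughly (illustrative output):

    # ls /sys/fs/cgroup/machine/cgroup.controllers
    ls: cannot access '/sys/fs/cgroup/machine/cgroup.controllers': No such file or directory
    # ls /sys/fs/cgroup/machine.slice/cgroup.controllers
    /sys/fs/cgroup/machine.slice/cgroup.controllers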

So the questions are:
* what leads to that condition being hit?
* regardless, can we tweak the virCgroupNewMachineManual call stack to not fall over?

I'll keep looking.

Comment 18 Cole Robinson 2019-09-26 18:59:18 UTC
Okay, so I found one reproducer: running a mock build, probably due to its systemd-nspawn usage.

Before:
$ cat /proc/self/cgroup 
0::/user.slice/user-1000.slice/user/gnome-terminal-server.service

After:
$ cat /proc/self/cgroup 
1:name=systemd:/
0::/user.slice/user-1000.slice/user/gnome-terminal-server.service

libvirt acts on that first line, which eventually leads to the reported error message. I have a fix that I'll build shortly.
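
In other words, the tell-tale sign of an affected host is that stale '1:name=systemd:/' v1 line in /proc/self/cgroup on an otherwise v2-only system. A quick check along those lines (just a sketch):

    $ grep 'name=systemd' /proc/self/cgroup && echo "host is in the bug-tickling state"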

Comment 19 Adam Williamson 2019-09-26 20:08:40 UTC
ah, yeah, I very likely had run a mock build on the same boot as I tried to run the VM.

Comment 20 Fedora Update System 2019-09-26 20:46:38 UTC
FEDORA-2019-43c162e51b has been submitted as an update to Fedora 31. https://bodhi.fedoraproject.org/updates/FEDORA-2019-43c162e51b

Comment 21 Cole Robinson 2019-09-26 20:48:34 UTC
(In reply to Adam Williamson from comment #19)
> ah, yeah, I very likely had run a mock build on the same boot as I tried to
> run the VM.

FWIW, all it takes is running mock once, and then the host seems to be stuck in the bug-tickling state. This affects podman container startup too.

I filed a systemd bug in case this is unintentional behavior: https://bugzilla.redhat.com/show_bug.cgi?id=1756143

Comment 22 Cole Robinson 2019-09-26 21:19:08 UTC
Patches sent upstream: https://www.redhat.com/archives/libvir-list/2019-September/msg01254.html

Comment 23 Fedora Update System 2019-09-27 02:29:23 UTC
libvirt-5.6.0-4.fc31 has been pushed to the Fedora 31 testing repository. If problems still persist, please make note of it in this bug report.
See https://fedoraproject.org/wiki/QA:Updates_Testing for
instructions on how to install test updates.
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2019-43c162e51b

Comment 24 Fedora Update System 2019-10-04 20:05:16 UTC
libvirt-5.6.0-4.fc31 has been pushed to the Fedora 31 stable repository. If problems still persist, please make note of it in this bug report.

Comment 25 Chris Marusich 2019-10-05 05:37:36 UTC
Hi,

Unfortunately, the patch does not seem to fix the problem described in the Guix bug report.  I've tested the patch from here:

https://www.redhat.com/archives/libvir-list/2019-September/msg01255.html

I built a new version of libvirt and virt-manager using that patch.  I was still able to reproduce the problem in Guix.

Should I open a new bug report, or should I continue to update this one?  For now, I will provide the information I have here.  The following information comes from a Guix system that did not have the above patch applied.

The error (as described in the Guix bug report and reproduced by me) is that, when trying to create a new domain using virt-manager, I get the following error:

Unable to complete install: 'Unable to read from '/sys/fs/cgroup/unified/machine/cgroup.controllers': No such file or directory'

Traceback (most recent call last):
  File "/gnu/store/w9x0hy191bn19mm4lvz3rwsck5283ryv-virt-manager-2.1.0/share/virt-manager/virtManager/asyncjob.py", line 75, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/gnu/store/w9x0hy191bn19mm4lvz3rwsck5283ryv-virt-manager-2.1.0/share/virt-manager/virtManager/create.py", line 2122, in _do_async_install
    guest.installer_instance.start_install(guest, meter=meter)
  File "/gnu/store/w9x0hy191bn19mm4lvz3rwsck5283ryv-virt-manager-2.1.0/share/virt-manager/virtinst/installer.py", line 415, in start_install
    doboot, transient)
  File "/gnu/store/w9x0hy191bn19mm4lvz3rwsck5283ryv-virt-manager-2.1.0/share/virt-manager/virtinst/installer.py", line 358, in _create_guest
    domain = self.conn.createXML(install_xml or final_xml, 0)
  File "/gnu/store/fri02cvgj6z6hjfjmwdy4asj898xmvgy-python-libvirt-5.6.0/lib/python3.7/site-packages/libvirt.py", line 3915, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirt.libvirtError: Unable to read from '/sys/fs/cgroup/unified/machine/cgroup.controllers': No such file or directory

I see that neither /sys/fs/cgroup/machine/cgroup.controllers nor /sys/fs/cgroup/machine.slice/cgroup.controllers exists on my system:

[0] marusich:~
$ ls /sys/fs/cgroup/machine/cgroup.controllers
ls: cannot access '/sys/fs/cgroup/machine/cgroup.controllers': No such file or directory
[2] marusich:~
$ ls /sys/fs/cgroup/machine.slice/cgroup.controllers
ls: cannot access '/sys/fs/cgroup/machine.slice/cgroup.controllers': No such file or directory
[2] marusich:~
$ 

Here is the output of mount and the contents of /proc/mounts:

[0] marusich:~
$ mount | grep cgroup
cgroup on /sys/fs/cgroup type tmpfs (rw,relatime)
cgroup on /sys/fs/cgroup/elogind type cgroup (rw,relatime,name=elogind)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu type cgroup (rw,relatime,cpu)
cgroup on /sys/fs/cgroup/cpuacct type cgroup (rw,relatime,cpuacct)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,relatime,freezer)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,relatime,perf_event)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
[0] marusich:~
$ cat /proc/mounts
none /proc proc rw,relatime 0 0
none /dev devtmpfs rw,relatime,size=3944584k,nr_inodes=986146,mode=755 0 0
none /sys sysfs rw,relatime 0 0
/dev/mapper/root / ext4 rw,relatime 0 0
none /dev/pts devpts rw,relatime,gid=996,mode=620,ptmxmode=000 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev,relatime 0 0
/dev/mapper/root /gnu/store ext4 ro,relatime 0 0
none /run/systemd tmpfs rw,nosuid,nodev,noexec,relatime,mode=755 0 0
none /run/user tmpfs rw,nosuid,nodev,noexec,relatime,mode=755 0 0
cgroup /sys/fs/cgroup tmpfs rw,relatime 0 0
cgroup /sys/fs/cgroup/elogind cgroup rw,relatime,name=elogind 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/cpu cgroup rw,relatime,cpu 0 0
cgroup /sys/fs/cgroup/cpuacct cgroup rw,relatime,cpuacct 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,relatime,memory 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,relatime,devices 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,relatime,freezer 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,relatime,blkio 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,relatime,perf_event 0 0
cgroup2 /sys/fs/cgroup/unified cgroup2 rw,nosuid,nodev,noexec,relatime,nsdelegate 0 0
tmpfs /run/user/983 tmpfs rw,nosuid,nodev,relatime,size=790640k,mode=700,uid=983,gid=978 0 0
tmpfs /run/user/1000 tmpfs rw,nosuid,nodev,relatime,size=790640k,mode=700,uid=1000,gid=998 0 0
gvfsd-fuse /run/user/1000/gvfs fuse.gvfsd-fuse rw,nosuid,nodev,relatime,user_id=1000,group_id=998 0 0

These are the directories and files under /sys/fs/cgroup:

/sys/fs/cgroup
/sys/fs/cgroup/unified
/sys/fs/cgroup/unified/io.pressure
/sys/fs/cgroup/unified/cgroup.procs
/sys/fs/cgroup/unified/cgroup.max.descendants
/sys/fs/cgroup/unified/memory.pressure
/sys/fs/cgroup/unified/cpu.pressure
/sys/fs/cgroup/unified/cgroup.stat
/sys/fs/cgroup/unified/cgroup.threads
/sys/fs/cgroup/unified/cgroup.controllers
/sys/fs/cgroup/unified/cgroup.subtree_control
/sys/fs/cgroup/unified/cgroup.max.depth
/sys/fs/cgroup/perf_event
/sys/fs/cgroup/perf_event/cgroup.procs
/sys/fs/cgroup/perf_event/cgroup.sane_behavior
/sys/fs/cgroup/perf_event/tasks
/sys/fs/cgroup/perf_event/notify_on_release
/sys/fs/cgroup/perf_event/release_agent
/sys/fs/cgroup/perf_event/cgroup.clone_children
/sys/fs/cgroup/blkio
/sys/fs/cgroup/blkio/cgroup.procs
/sys/fs/cgroup/blkio/blkio.throttle.read_iops_device
/sys/fs/cgroup/blkio/blkio.throttle.io_service_bytes
/sys/fs/cgroup/blkio/cgroup.sane_behavior
/sys/fs/cgroup/blkio/blkio.throttle.write_iops_device
/sys/fs/cgroup/blkio/blkio.reset_stats
/sys/fs/cgroup/blkio/blkio.throttle.read_bps_device
/sys/fs/cgroup/blkio/blkio.throttle.write_bps_device
/sys/fs/cgroup/blkio/tasks
/sys/fs/cgroup/blkio/notify_on_release
/sys/fs/cgroup/blkio/release_agent
/sys/fs/cgroup/blkio/cgroup.clone_children
/sys/fs/cgroup/blkio/blkio.throttle.io_serviced
/sys/fs/cgroup/blkio/blkio.throttle.io_service_bytes_recursive
/sys/fs/cgroup/blkio/blkio.throttle.io_serviced_recursive
/sys/fs/cgroup/freezer
/sys/fs/cgroup/freezer/cgroup.procs
/sys/fs/cgroup/freezer/cgroup.sane_behavior
/sys/fs/cgroup/freezer/tasks
/sys/fs/cgroup/freezer/notify_on_release
/sys/fs/cgroup/freezer/release_agent
/sys/fs/cgroup/freezer/cgroup.clone_children
/sys/fs/cgroup/devices
/sys/fs/cgroup/devices/cgroup.procs
/sys/fs/cgroup/devices/devices.deny
/sys/fs/cgroup/devices/cgroup.sane_behavior
/sys/fs/cgroup/devices/devices.list
/sys/fs/cgroup/devices/devices.allow
/sys/fs/cgroup/devices/tasks
/sys/fs/cgroup/devices/notify_on_release
/sys/fs/cgroup/devices/release_agent
/sys/fs/cgroup/devices/cgroup.clone_children
/sys/fs/cgroup/memory
/sys/fs/cgroup/memory/cgroup.procs
/sys/fs/cgroup/memory/memory.use_hierarchy
/sys/fs/cgroup/memory/memory.kmem.tcp.usage_in_bytes
/sys/fs/cgroup/memory/memory.soft_limit_in_bytes
/sys/fs/cgroup/memory/cgroup.sane_behavior
/sys/fs/cgroup/memory/memory.force_empty
/sys/fs/cgroup/memory/memory.pressure_level
/sys/fs/cgroup/memory/memory.move_charge_at_immigrate
/sys/fs/cgroup/memory/memory.kmem.tcp.max_usage_in_bytes
/sys/fs/cgroup/memory/memory.max_usage_in_bytes
/sys/fs/cgroup/memory/memory.oom_control
/sys/fs/cgroup/memory/memory.stat
/sys/fs/cgroup/memory/memory.kmem.slabinfo
/sys/fs/cgroup/memory/memory.limit_in_bytes
/sys/fs/cgroup/memory/memory.swappiness
/sys/fs/cgroup/memory/memory.numa_stat
/sys/fs/cgroup/memory/memory.kmem.failcnt
/sys/fs/cgroup/memory/memory.kmem.max_usage_in_bytes
/sys/fs/cgroup/memory/memory.usage_in_bytes
/sys/fs/cgroup/memory/tasks
/sys/fs/cgroup/memory/memory.failcnt
/sys/fs/cgroup/memory/cgroup.event_control
/sys/fs/cgroup/memory/memory.kmem.tcp.failcnt
/sys/fs/cgroup/memory/memory.kmem.limit_in_bytes
/sys/fs/cgroup/memory/notify_on_release
/sys/fs/cgroup/memory/release_agent
/sys/fs/cgroup/memory/memory.kmem.usage_in_bytes
/sys/fs/cgroup/memory/memory.kmem.tcp.limit_in_bytes
/sys/fs/cgroup/memory/cgroup.clone_children
/sys/fs/cgroup/cpuacct
/sys/fs/cgroup/cpuacct/cgroup.procs
/sys/fs/cgroup/cpuacct/cgroup.sane_behavior
/sys/fs/cgroup/cpuacct/cpuacct.usage_percpu_sys
/sys/fs/cgroup/cpuacct/cpuacct.usage_percpu
/sys/fs/cgroup/cpuacct/cpuacct.stat
/sys/fs/cgroup/cpuacct/cpuacct.usage
/sys/fs/cgroup/cpuacct/tasks
/sys/fs/cgroup/cpuacct/cpuacct.usage_sys
/sys/fs/cgroup/cpuacct/cpuacct.usage_all
/sys/fs/cgroup/cpuacct/cpuacct.usage_percpu_user
/sys/fs/cgroup/cpuacct/notify_on_release
/sys/fs/cgroup/cpuacct/release_agent
/sys/fs/cgroup/cpuacct/cgroup.clone_children
/sys/fs/cgroup/cpuacct/cpuacct.usage_user
/sys/fs/cgroup/cpu
/sys/fs/cgroup/cpu/cgroup.procs
/sys/fs/cgroup/cpu/cpu.cfs_period_us
/sys/fs/cgroup/cpu/cgroup.sane_behavior
/sys/fs/cgroup/cpu/cpu.stat
/sys/fs/cgroup/cpu/cpu.shares
/sys/fs/cgroup/cpu/cpu.cfs_quota_us
/sys/fs/cgroup/cpu/tasks
/sys/fs/cgroup/cpu/notify_on_release
/sys/fs/cgroup/cpu/release_agent
/sys/fs/cgroup/cpu/cgroup.clone_children
/sys/fs/cgroup/cpuset
/sys/fs/cgroup/cpuset/cgroup.procs
/sys/fs/cgroup/cpuset/cgroup.sane_behavior
/sys/fs/cgroup/cpuset/cpuset.memory_pressure
/sys/fs/cgroup/cpuset/cpuset.memory_migrate
/sys/fs/cgroup/cpuset/cpuset.memory_pressure_enabled
/sys/fs/cgroup/cpuset/cpuset.mem_exclusive
/sys/fs/cgroup/cpuset/cpuset.memory_spread_slab
/sys/fs/cgroup/cpuset/cpuset.cpu_exclusive
/sys/fs/cgroup/cpuset/tasks
/sys/fs/cgroup/cpuset/cpuset.effective_mems
/sys/fs/cgroup/cpuset/cpuset.effective_cpus
/sys/fs/cgroup/cpuset/notify_on_release
/sys/fs/cgroup/cpuset/release_agent
/sys/fs/cgroup/cpuset/cpuset.sched_load_balance
/sys/fs/cgroup/cpuset/cpuset.mems
/sys/fs/cgroup/cpuset/cpuset.mem_hardwall
/sys/fs/cgroup/cpuset/cpuset.sched_relax_domain_level
/sys/fs/cgroup/cpuset/cpuset.cpus
/sys/fs/cgroup/cpuset/cgroup.clone_children
/sys/fs/cgroup/cpuset/cpuset.memory_spread_page
/sys/fs/cgroup/elogind
/sys/fs/cgroup/elogind/cgroup.procs
/sys/fs/cgroup/elogind/cgroup.sane_behavior
/sys/fs/cgroup/elogind/c1
/sys/fs/cgroup/elogind/c1/cgroup.procs
/sys/fs/cgroup/elogind/c1/tasks
/sys/fs/cgroup/elogind/c1/notify_on_release
/sys/fs/cgroup/elogind/c1/cgroup.clone_children
/sys/fs/cgroup/elogind/tasks
/sys/fs/cgroup/elogind/c2
/sys/fs/cgroup/elogind/c2/cgroup.procs
/sys/fs/cgroup/elogind/c2/tasks
/sys/fs/cgroup/elogind/c2/notify_on_release
/sys/fs/cgroup/elogind/c2/cgroup.clone_children
/sys/fs/cgroup/elogind/notify_on_release
/sys/fs/cgroup/elogind/release_agent
/sys/fs/cgroup/elogind/cgroup.clone_children

I enabled debug logging as described in:

https://wiki.libvirt.org/page/DebugLogs

And here are the parts of the logs that I believe to be relevant.

Daemon logs:

2019-09-28 23:22:15.323+0000: 420: debug : qemuProcessLaunch:6808 : Building mount namespace
2019-09-28 23:22:15.323+0000: 420: info : virObjectRef:401 : OBJECT_REF: obj=0x7f210c00e480
2019-09-28 23:22:15.323+0000: 420: info : virObjectUnref:349 : OBJECT_UNREF: obj=0x7f210c00e480
2019-09-28 23:22:15.323+0000: 420: debug : qemuProcessLaunch:6814 : Clear emulator capabilities: 1
2019-09-28 23:22:15.323+0000: 420: debug : qemuProcessLaunch:6818 : Setting up raw IO
2019-09-28 23:22:15.323+0000: 420: debug : qemuProcessLaunch:6828 : Setting up security labelling
2019-09-28 23:22:15.323+0000: 420: debug : virSecurityDACSetChildProcessLabel:2165 : Setting child to drop privileges to 65534:984
2019-09-28 23:22:15.323+0000: 420: debug : virCommandRequireHandshake:2869 : Transfer handshake wait=35 notify=36, keep handshake wait=34 notify=37
2019-09-28 23:22:15.323+0000: 420: debug : virCommandRunAsync:2671 : About to run LC_ALL=C PATH=/run/current-system/profile/bin HOME=/var/lib/libvirt/qemu/domain-1-generic XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-1-generic/.local/share XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-1-generic/.cache XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-1-generic/.config QEMU_AUDIO_DRV=spice /run/current-system/profile/bin/qemu-system-x86_64 -name guest=generic,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-generic/master-key.aes -machine pc-i440fx-3.1,accel=kvm,usb=off,vmport=off,dump-guest-core=off -cpu Penryn,vme=on,ss=on,vmx=on,x2apic=on,tsc-deadline=on,hypervisor=on,arat=on,tsc_adjust=on -m 1024 -overcommit mem-lock=off -smp 1,sockets=1,cores=1,threads=1 -uuid cb02b3d6-19b2-4d83-822f-981e763e8b8a -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=32,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot strict=on -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x4.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x4 -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x4.0x1 -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x4.0x2 -drive file=/var/lib/libvirt/images/generic.qcow2,format=qcow2,if=none,id=drive-ide0-0-0 -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=2 -drive file=/var/lib/libvirt/images/debian-10.1.0-amd64-netinst.iso,format=raw,if=none,id=drive-ide0-0-1,readonly=on -device ide-cd,bus=ide.0,unit=1,drive=drive-ide0-0-1,id=ide0-0-1,bootindex=1 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0,bus=usb.0,port=1 -spice port=5900,addr=127.0.0.1,disable-ticketing,image-compression=off,seamless-migration=on -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 -device intel-hda,id=sound0,bus=pci.0,addr=0x3 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -chardev spicevmc,id=charredir0,name=usbredir -device usb-redir,chardev=charredir0,id=redir0,bus=usb.0,port=2 -chardev spicevmc,id=charredir1,name=usbredir -device usb-redir,chardev=charredir1,id=redir1,bus=usb.0,port=3 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on
2019-09-28 23:22:15.326+0000: 420: debug : virFileClose:114 : Closed fd 38
2019-09-28 23:22:15.326+0000: 420: debug : virCommandRunAsync:2674 : Command result 0, with PID 3797
2019-09-28 23:22:15.326+0000: 420: debug : virFileClose:114 : Closed fd 32
2019-09-28 23:22:15.326+0000: 420: debug : virFileClose:114 : Closed fd 35
2019-09-28 23:22:15.326+0000: 420: debug : virFileClose:114 : Closed fd 36
2019-09-28 23:22:15.334+0000: 420: debug : virCommandRun:2509 : Result status 0, stdout: '(null)' stderr: '(null)'
2019-09-28 23:22:15.334+0000: 420: debug : virFileClose:114 : Closed fd 32
2019-09-28 23:22:15.334+0000: 420: debug : qemuProcessLaunch:6854 : QEMU vm=0x7f212c013230 name=generic running with pid=3798
2019-09-28 23:22:15.334+0000: 420: debug : qemuProcessLaunch:6861 : Writing early domain status to disk
2019-09-28 23:22:15.335+0000: 420: debug : virFileMakePathHelper:3001 : path=/var/run/libvirt/qemu mode=0777
2019-09-28 23:22:15.338+0000: 420: debug : virFileClose:114 : Closed fd 32
2019-09-28 23:22:15.338+0000: 420: debug : qemuProcessLaunch:6865 : Waiting for handshake from child
2019-09-28 23:22:15.338+0000: 420: debug : virCommandHandshakeWait:2904 : Wait for handshake on 34
2019-09-28 23:22:15.338+0000: 420: debug : virFileClose:114 : Closed fd 34
2019-09-28 23:22:15.339+0000: 420: debug : qemuProcessLaunch:6873 : Setting up domain cgroup (if required)
2019-09-28 23:22:15.339+0000: 420: info : virObjectRef:401 : OBJECT_REF: obj=0x7f210c00e480
2019-09-28 23:22:15.339+0000: 420: debug : virFileClose:114 : Closed fd 34
2019-09-28 23:22:15.339+0000: 420: debug : virCgroupNewMachineSystemd:1129 : Trying to setup machine 'qemu-1-generic' via systemd
2019-09-28 23:22:15.339+0000: 420: debug : virDBusMessageIterEncode:627 : rootiter=0x7f2149795370 types=(null)
2019-09-28 23:22:15.339+0000: 420: info : virDBusCall:1561 : DBUS_METHOD_CALL: 'org.freedesktop.DBus.ListActivatableNames' on '/org/freedesktop/DBus' at 'org.freedesktop.DBus'
2019-09-28 23:22:15.339+0000: 420: info : virDBusCall:1590 : DBUS_METHOD_REPLY: 'org.freedesktop.DBus.ListActivatableNames' on '/org/freedesktop/DBus' at 'org.freedesktop.DBus'
2019-09-28 23:22:15.339+0000: 420: debug : virDBusIsServiceEnabled:1730 : Service org.freedesktop.machine1 is unavailable
2019-09-28 23:22:15.339+0000: 420: debug : virCgroupNewMachineManual:1203 : Fallback to non-systemd setup
2019-09-28 23:22:15.339+0000: 420: debug : virCgroupNewPartition:849 : path=/machine create=1 controllers=ffffffff
2019-09-28 23:22:15.339+0000: 420: debug : virCgroupNew:678 : pid=-1 path=/machine parent=(nil) controllers=-1 group=0x7f21497957e0
2019-09-28 23:22:15.339+0000: 420: debug : virCgroupDetect:354 : group=0x7f212c02abb0 controllers=-1 path=/machine parent=(nil)
2019-09-28 23:22:15.339+0000: 420: debug : virFileClose:114 : Closed fd 34
2019-09-28 23:22:15.340+0000: 420: debug : virCgroupDetectPlacement:290 : Detecting placement for pid -1 path /machine
2019-09-28 23:22:15.340+0000: 420: debug : virCgroupV1ValidatePlacement:403 : Detected mount/mapping 0:cpu at /sys/fs/cgroup/cpu in /machine for pid -1
2019-09-28 23:22:15.340+0000: 420: debug : virCgroupV1ValidatePlacement:403 : Detected mount/mapping 1:cpuacct at /sys/fs/cgroup/cpuacct in /machine for pid -1
2019-09-28 23:22:15.340+0000: 420: debug : virCgroupV1ValidatePlacement:403 : Detected mount/mapping 2:cpuset at /sys/fs/cgroup/cpuset in /machine for pid -1
2019-09-28 23:22:15.340+0000: 420: debug : virCgroupV1ValidatePlacement:403 : Detected mount/mapping 3:memory at /sys/fs/cgroup/memory in /machine for pid -1
2019-09-28 23:22:15.340+0000: 420: debug : virCgroupV1ValidatePlacement:403 : Detected mount/mapping 4:devices at /sys/fs/cgroup/devices in /machine for pid -1
2019-09-28 23:22:15.340+0000: 420: debug : virCgroupV1ValidatePlacement:403 : Detected mount/mapping 5:freezer at /sys/fs/cgroup/freezer in /machine for pid -1
2019-09-28 23:22:15.340+0000: 420: debug : virCgroupV1ValidatePlacement:403 : Detected mount/mapping 6:blkio at /sys/fs/cgroup/blkio in /machine for pid -1
2019-09-28 23:22:15.340+0000: 420: debug : virCgroupV1ValidatePlacement:403 : Detected mount/mapping 8:perf_event at /sys/fs/cgroup/perf_event in /machine for pid -1
2019-09-28 23:22:15.340+0000: 420: error : virFileReadAll:1431 : Failed to open file '/sys/fs/cgroup/unified/machine/cgroup.controllers': No such file or directory
2019-09-28 23:22:15.340+0000: 420: error : virCgroupV2ParseControllersFile:268 : Unable to read from '/sys/fs/cgroup/unified/machine/cgroup.controllers': No such file or directory
2019-09-28 23:22:15.340+0000: 420: info : virObjectUnref:349 : OBJECT_UNREF: obj=0x7f210c00e480
2019-09-28 23:22:15.340+0000: 420: debug : virFileClose:114 : Closed fd 37
2019-09-28 23:22:15.340+0000: 420: info : virObjectUnref:349 : OBJECT_UNREF: obj=0x7f212c024c30
2019-09-28 23:22:15.340+0000: 420: info : virObjectUnref:351 : OBJECT_DISPOSE: obj=0x7f212c024c30
2019-09-28 23:22:15.340+0000: 420: debug : qemuDomainLogContextDispose:158 : ctxt=0x7f212c024c30
2019-09-28 23:22:15.340+0000: 420: info : virObjectUnref:349 : OBJECT_UNREF: obj=0x7f212c026910
2019-09-28 23:22:15.340+0000: 420: info : virObjectUnref:351 : OBJECT_DISPOSE: obj=0x7f212c026910
2019-09-28 23:22:15.340+0000: 420: debug : virFileClose:114 : Closed fd 28
2019-09-28 23:22:15.340+0000: 420: info : virObjectUnref:349 : OBJECT_UNREF: obj=0x7f212c0268a0
2019-09-28 23:22:15.340+0000: 420: info : virObjectUnref:349 : OBJECT_UNREF: obj=0x7f212c026b60
2019-09-28 23:22:15.340+0000: 420: info : virObjectUnref:351 : OBJECT_DISPOSE: obj=0x7f212c026b60
2019-09-28 23:22:15.340+0000: 420: info : virObjectUnref:349 : OBJECT_UNREF: obj=0x7f212c0268a0
2019-09-28 23:22:15.340+0000: 420: info : virObjectUnref:351 : OBJECT_DISPOSE: obj=0x7f212c0268a0
2019-09-28 23:22:15.340+0000: 420: debug : virFileClose:114 : Closed fd 31
2019-09-28 23:22:15.340+0000: 420: debug : virFileClose:114 : Closed fd 29
2019-09-28 23:22:15.340+0000: 420: debug : virFileClose:114 : Closed fd 33
2019-09-28 23:22:15.340+0000: 420: info : virObjectUnref:349 : OBJECT_UNREF: obj=0x7f210c00e480
2019-09-28 23:22:15.340+0000: 420: info : virObjectUnref:349 : OBJECT_UNREF: obj=0x7f2134017c00
2019-09-28 23:22:15.340+0000: 420: info : virObjectRef:401 : OBJECT_REF: obj=0x7f210c00e480
2019-09-28 23:22:15.340+0000: 420: debug : qemuProcessStop:7379 : Shutting down vm=0x7f212c013230 name=generic id=1 pid=3798, reason=failed, asyncJob=start, flags=0x2
2019-09-28 23:22:15.340+0000: 420: info : virObjectRef:401 : OBJECT_REF: obj=0x7f210c00e480
2019-09-28 23:22:15.340+0000: 420: debug : qemuDomainObjBeginJobInternal:7778 : Starting job: job=async nested agentJob=none asyncJob=none (vm=0x7f212c013230 name=generic, current job=none agentJob=none async=start)
2019-09-28 23:22:15.340+0000: 420: debug : qemuDomainObjBeginJobInternal:7827 : Started job: async nested (async=start vm=0x7f212c013230 name=generic)
2019-09-28 23:22:15.340+0000: 420: info : virObjectUnref:349 : OBJECT_UNREF: obj=0x7f210c00e480
2019-09-28 23:22:15.340+0000: 420: info : virObjectRef:401 : OBJECT_REF: obj=0x7f210c00e480
2019-09-28 23:22:15.341+0000: 420: info : virObjectUnref:349 : OBJECT_UNREF: obj=0x7f210c00e480
2019-09-28 23:22:15.341+0000: 420: debug : virFileClose:114 : Closed fd 30
2019-09-28 23:22:15.341+0000: 420: info : virObjectRef:401 : OBJECT_REF: obj=0x7f210c00e480
2019-09-28 23:22:15.341+0000: 420: debug : qemuDomainLogAppendMessage:9026 : Append log message (vm='generic' message='2019-09-28 23:22:15.341+0000: shutting down, reason=failed
) stdioLogD=1
2019-09-28 23:22:15.341+0000: 420: info : virObjectNew:252 : OBJECT_NEW: obj=0x7f212c028620 classname=virNetSocket
2019-09-28 23:22:15.341+0000: 420: info : virObjectNew:252 : OBJECT_NEW: obj=0x7f212c0289b0 classname=virNetClient
2019-09-28 23:22:15.341+0000: 420: info : virObjectNew:252 : OBJECT_NEW: obj=0x7f212c0283c0 classname=virNetClientProgram
2019-09-28 23:22:15.341+0000: 420: info : virObjectRef:401 : OBJECT_REF: obj=0x7f212c0283c0
2019-09-28 23:22:15.341+0000: 420: info : virObjectUnref:349 : OBJECT_UNREF: obj=0x7f212c028620
2019-09-28 23:22:15.341+0000: 420: info : virObjectUnref:351 : OBJECT_DISPOSE: obj=0x7f212c028620
2019-09-28 23:22:15.341+0000: 420: debug : virFileClose:114 : Closed fd 28
2019-09-28 23:22:15.341+0000: 420: info : virObjectUnref:349 : OBJECT_UNREF: obj=0x7f212c0283c0
2019-09-28 23:22:15.341+0000: 420: info : virObjectUnref:349 : OBJECT_UNREF: obj=0x7f212c0289b0
2019-09-28 23:22:15.341+0000: 420: info : virObjectUnref:351 : OBJECT_DISPOSE: obj=0x7f212c0289b0
2019-09-28 23:22:15.341+0000: 420: info : virObjectUnref:349 : OBJECT_UNREF: obj=0x7f212c0283c0
2019-09-28 23:22:15.341+0000: 420: info : virObjectUnref:351 : OBJECT_DISPOSE: obj=0x7f212c0283c0
2019-09-28 23:22:15.341+0000: 420: debug : virFileClose:114 : Closed fd 30
2019-09-28 23:22:15.341+0000: 420: debug : virFileClose:114 : Closed fd 29
2019-09-28 23:22:15.341+0000: 420: info : virObjectUnref:349 : OBJECT_UNREF: obj=0x7f210c00e480
2019-09-28 23:22:15.341+0000: 420: info : virObjectUnref:349 : OBJECT_UNREF: obj=0x7f212c021e40
2019-09-28 23:22:15.341+0000: 420: info : virObjectUnref:351 : OBJECT_DISPOSE: obj=0x7f212c021e40
2019-09-28 23:22:15.341+0000: 420: info : virObjectUnref:349 : OBJECT_UNREF: obj=0x7f212c021e20
2019-09-28 23:22:15.341+0000: 420: info : virObjectUnref:351 : OBJECT_DISPOSE: obj=0x7f212c021e20
2019-09-28 23:22:15.341+0000: 420: debug : qemuProcessKill:7291 : vm=0x7f212c013230 name=generic pid=3798 flags=0x5
2019-09-28 23:22:15.341+0000: 420: debug : virProcessKillPainfullyDelay:358 : vpid=3798 force=1 extradelay=0
2019-09-28 23:22:15.542+0000: 420: debug : qemuDomainCleanupRun:9791 : driver=0x7f210c00d600, vm=generic
2019-09-28 23:22:15.542+0000: 420: debug : qemuProcessAutoDestroyRemove:7739 : vm=generic

Client logs:

2019-09-28 23:22:06.908+0000: 3771: info : libvirt version: 5.6.0
2019-09-28 23:22:06.908+0000: 3771: info : hostname: garuda.local
2019-09-28 23:22:06.908+0000: 3771: error : virNetClientProgramDispatchError:172 : Domain not found: no domain with matching name 'generic'
2019-09-28 23:22:06.911+0000: 3771: error : virNetClientProgramDispatchError:172 : Domain not found: no domain with matching name 'generic'
2019-09-28 23:22:10.350+0000: 3771: error : virNetClientProgramDispatchError:172 : Storage volume not found: no storage vol with matching name 'generic.qcow2'
2019-09-28 23:22:10.354+0000: 3771: error : virNetClientProgramDispatchError:172 : Storage volume not found: no storage vol with matching path '/var/lib/libvirt/images/generic.qcow2'
2019-09-28 23:22:10.357+0000: 3771: error : virNetClientProgramDispatchError:172 : Storage volume not found: no storage vol with matching path '/var/lib/libvirt/images/generic.qcow2'
2019-09-28 23:22:10.368+0000: 3771: error : virNetClientProgramDispatchError:172 : Storage volume not found: no storage vol with matching name 'generic.qcow2'
2019-09-28 23:22:14.456+0000: 3771: error : virNetClientProgramDispatchError:172 : Domain not found: no domain with matching uuid 'cb02b3d6-19b2-4d83-822f-981e763e8b8a'
2019-09-28 23:22:14.549+0000: 3788: error : virNetClientProgramDispatchError:172 : Domain not found: no domain with matching name 'generic'
2019-09-28 23:22:14.556+0000: 3789: error : virNetClientProgramDispatchError:172 : Storage volume not found: no storage vol with matching name 'generic.qcow2'
2019-09-28 23:22:15.545+0000: 3788: error : virNetClientProgramDispatchError:172 : Unable to read from '/sys/fs/cgroup/unified/machine/cgroup.controllers': No such file or directory

If you would like to try reproducing this error yourself in a Guix system, you can do so by creating a VM as follows:

- Install Guix, or use the vanilla pre-built QEMU image: https://guix.gnu.org/
- Put the following into a file named config.scm:

;; This is an operating system configuration for a VM image.
;; Modify it as you see fit and instantiate the changes by running:
;;
;;   guix system reconfigure /etc/config.scm
;;

(use-modules (gnu) (guix) (srfi srfi-1))
(use-service-modules virtualization desktop networking ssh xorg)
(use-package-modules virtualization bootloaders certs fonts nvi
                     package-management wget xorg)

(define vm-image-motd (plain-file "motd" "
\x1b[1;37mThis is the GNU system.  Welcome!\x1b[0m

This instance of Guix is a template for virtualized environments.
You can reconfigure the whole system by adjusting /etc/config.scm
and running:

  guix system reconfigure /etc/config.scm

Run '\x1b[1;37minfo guix\x1b[0m' to browse documentation.

\x1b[1;33mConsider setting a password for the 'root' and 'guest' \
accounts.\x1b[0m
"))

(define this-file
  (local-file (basename (assoc-ref (current-source-location) 'filename))
              "config.scm"))


(operating-system
 (host-name "gnu")
 (timezone "Etc/UTC")
 (locale "en_US.utf8")
 (keyboard-layout (keyboard-layout "us" "altgr-intl"))

 ;; Label for the GRUB boot menu.
 (label (string-append "GNU Guix " (package-version guix)))

 (firmware '())

 ;; Below we assume /dev/vda is the VM's hard disk.
 ;; Adjust as needed.
 (bootloader (bootloader-configuration
              (bootloader grub-bootloader)
              (target "/dev/vda")
              (terminal-outputs '(console))))
 (file-systems (cons (file-system
                      (mount-point "/")
                      (device "/dev/vda1")
                      (type "ext4"))
                     %base-file-systems))

 (users (cons (user-account
               (name "guest")
               (comment "GNU Guix Live")
               (password "")            ;no password
               (group "users")
               (supplementary-groups '("wheel" "netdev"
                                       "audio" "video"
                                       "libvirt")))
              %base-user-accounts))

 ;; Our /etc/sudoers file.  Since 'guest' initially has an empty password,
 ;; allow for password-less sudo.
 (sudoers-file (plain-file "sudoers" "\
root ALL=(ALL) ALL
%wheel ALL=NOPASSWD: ALL\n"))

 (packages (append (list virt-manager font-bitstream-vera nss-certs nvi
                         wget)
                   %base-packages))

 (services
  (append (list (service xfce-desktop-service-type)

                ;; Copy this file to /etc/config.scm in the OS.
                (simple-service 'config-file etc-service-type
                                `(("config.scm" ,this-file)))

                ;; Choose SLiM, which is lighter than the default GDM.
                (service slim-service-type
                         (slim-configuration
                          (auto-login? #t)
                          (default-user "guest")
                          (xorg-configuration
                           (xorg-configuration
                            (keyboard-layout keyboard-layout)))))

                ;; Uncomment the line below to add an SSH server.
                ;;(service openssh-service-type)

                ;; Use the DHCP client service rather than NetworkManager.
                (service dhcp-client-service-type))

          ;; Remove GDM, ModemManager, NetworkManager, and wpa-supplicant,
          ;; which don't make sense in a VM.
          (append
           (list (service libvirt-service-type
                          (libvirt-configuration
                           (unix-sock-group "libvirt")
                           (log-filters
                            "3:remote 4:event 3:util.json 3:rpc 1:*")
                           (log-outputs
                            "1:file:/var/log/libvirt/libvirtd.log")))
                 (service virtlog-service-type))
           (remove (lambda (service)
                     (let ((type (service-kind service)))
                       (or (memq type
                                 (list gdm-service-type
                                       wpa-supplicant-service-type
                                       cups-pk-helper-service-type
                                       network-manager-service-type
                                       modem-manager-service-type))
                           (eq? 'network-manager-applet
                                (service-type-name type)))))
                   (modify-services %desktop-services
                                    (login-service-type config =>
                                                        (login-configuration
                                                         (inherit config)
                                                         (motd vm-image-motd))))))))

 ;; Allow resolution of '.local' host names with mDNS.
 (name-service-switch %mdns-host-lookup-nss))

- Upgrade Guix to a recent version (this is the one I'm using):
    guix pull --commit=6e377b88930226f3f74ba9fac74d80c36494d9be
- Build a VM image (this will take a long time):
    cp $(guix system vm-image --image-size=10GiB config.scm) qemu-image
- Make the file usable:
    sudo chown $(whoami) qemu-image && chmod 644 qemu-image
- Launch the VM:
    qemu-system-x86_64 \
            -net user -net nic,model=virtio \
            -enable-kvm -m 1024 \
            -device virtio-blk,drive=myhd \
            -drive if=none,file=qemu-image,id=myhd
- Once you're logged into the VM, create the log directory (it seems the daemon will not create the directory for you, and it will not work at all if you do not do this first):
    sudo mkdir /var/log/libvirt
- Restart libvirtd (probably not necessary, but it doesn't hurt):
    sudo herd restart libvirtd
- Start virt-manager, and try to make a new domain using any random installer ISO from the Internet.  It should fail with the messages shown above.

I'm happy to help with any investigation you need, but I'm a total newbie when it comes to libvirt code, so please bear with me!

Thank you very much for your help,
Chris

Comment 26 Cole Robinson 2019-10-08 16:46:14 UTC
Chris, can you file a new bug under Product=Virtualization Tools, Component=libvirt? More info here: https://libvirt.org/bugs.html#general

Please also show, in that bug, what /proc/self/cgroup contains for a working setup (if there is one) and for a non-working setup.

Comment 27 Chris Marusich 2019-10-10 08:52:18 UTC
Sounds good - here's the new bug report: https://bugzilla.redhat.com/show_bug.cgi?id=1760233

