Bug 1175234 - virDomainGetSchedulerParameters() fails with Unable to read from '/sys/fs/cgroup/cpu,cpuacct/machine.slice/machine-qemu\x2dMic2.scope/cpu.shares': No such file or directory
Summary: virDomainGetSchedulerParameters() fails with Unable to read from '/sys/fs/cgr...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Libvirt Maintainers
QA Contact: Virtualization Bugs
URL:
Whiteboard: sla
Depends On: 1139223
Blocks: 1156399
 
Reported: 2014-12-17 10:54 UTC by Martin Sivák
Modified: 2015-03-05 07:48 UTC (History)
20 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 1156399
Environment:
Last Closed: 2015-03-05 07:48:18 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
relevant logs (2.35 MB, application/x-gzip)
2014-12-17 12:00 UTC, Michael Burman


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:0323 0 normal SHIPPED_LIVE Low: libvirt security, bug fix, and enhancement update 2015-03-05 12:10:54 UTC

Description Martin Sivák 2014-12-17 10:54:35 UTC
+++ This bug was initially created as a clone of Bug #1156399 +++

Traceback (most recent call last):
  File "/usr/share/vdsm/virt/sampling.py", line 471, in collect
    statsFunction()
  File "/usr/share/vdsm/virt/sampling.py", line 346, in __call__
    retValue = self._function(*args, **kwargs)
  File "/usr/share/vdsm/virt/vm.py", line 349, in _sampleCpuTune
    infos = self._vm._dom.schedulerParameters()
  File "/usr/share/vdsm/virt/vm.py", line 689, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 111, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2134, in schedulerParameters
    if ret is None: raise libvirtError ('virDomainGetSchedulerParameters() failed', dom=self)
libvirtError: Unable to read from '/sys/fs/cgroup/cpu,cpuacct/machine.slice/machine-qemu\x2dMic2.scope/cpu.shares': No such file or directory

This issue currently blocks RHEV-M 3.5 release.
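The failure mode in the traceback can be reproduced in miniature without libvirt. This is a hedged sketch, not vdsm or libvirt code: libvirt answers `virDomainGetSchedulerParameters()` by reading tunables such as `cpu.shares` from the VM's cgroup scope directory, so if that directory has been removed the read fails with ENOENT, which surfaces as the "No such file or directory" `libvirtError` above. The helper name `read_cpu_shares` is illustrative.

```python
import errno
import os


def read_cpu_shares(scope_dir):
    """Read cpu.shares from a cgroup scope directory, the way libvirt reads
    /sys/fs/cgroup/cpu,cpuacct/machine.slice/<scope>/cpu.shares.

    If systemd has deleted the scope directory, open() fails with ENOENT and
    we surface the same "No such file or directory" message as the traceback.
    """
    path = os.path.join(scope_dir, "cpu.shares")
    try:
        with open(path) as f:
            return int(f.read().strip())
    except IOError as e:  # IOError is an alias of OSError on Python 3
        if e.errno == errno.ENOENT:
            raise RuntimeError(
                "Unable to read from %r: No such file or directory" % path)
        raise
```

The point of the sketch is that libvirt itself is only the messenger: the cgroup directory it created has disappeared out from under it.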

Comment 1 Martin Sivák 2014-12-17 10:56:40 UTC
mpavlik: Can you please add the versions of libvirt and systemd here? I suspect that it is related to those two components somehow.

Comment 3 Daniel Berrangé 2014-12-17 11:05:08 UTC
This is probably just another case of systemd deleting libvirt's cgroups

https://bugzilla.redhat.com/show_bug.cgi?id=1139223

Comment 4 Michael Burman 2014-12-17 11:56:41 UTC
libvirt-daemon-1.1.1-29.el7_0.3.x86_64
systemd-python-208-11.el7_0.5.x86_64

Comment 5 Michael Burman 2014-12-17 12:00:04 UTC
Created attachment 970071 [details]
relevant logs

Comment 6 Martin Pavlik 2014-12-17 12:20:16 UTC
I think comment 3 is right.

If the node was freshly booted, the vNIC could be linked down and up; after a vdsm restart the problem occurred:

Thread-19::ERROR::2014-12-17 12:43:12,404::sampling::475::vm.Vm::(collect) vmId=`32752ba3-ee24-4bd8-931c-d7cf5fe361e7`::Stats function failed: <AdvancedStatsFunction _sampleCpu at 0x2e2a720>
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/sampling.py", line 471, in collect
    statsFunction()
  File "/usr/share/vdsm/virt/sampling.py", line 346, in __call__
    retValue = self._function(*args, **kwargs)
  File "/usr/share/vdsm/virt/vm.py", line 303, in _sampleCpu
    cpuStats = self._vm._dom.getCPUStats(True, 0)
  File "/usr/share/vdsm/virt/vm.py", line 689, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 111, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2016, in getCPUStats
    if ret is None: raise libvirtError ('virDomainGetCPUStats() failed', dom=self)
libvirtError: unable to get cpu account: Operation not permitted


[root@dell-r210ii-06 ~]# rpm -qa | grep libvirt
libvirt-python-1.1.1-29.el7_0.3.x86_64
libvirt-daemon-driver-secret-1.1.1-29.el7_0.3.x86_64
libvirt-daemon-driver-qemu-1.1.1-29.el7_0.3.x86_64
libvirt-client-1.1.1-29.el7_0.3.x86_64
libvirt-daemon-driver-nwfilter-1.1.1-29.el7_0.3.x86_64
libvirt-daemon-driver-interface-1.1.1-29.el7_0.3.x86_64
libvirt-lock-sanlock-1.1.1-29.el7_0.3.x86_64
libvirt-daemon-config-nwfilter-1.1.1-29.el7_0.3.x86_64
libvirt-daemon-driver-network-1.1.1-29.el7_0.3.x86_64
libvirt-daemon-1.1.1-29.el7_0.3.x86_64
libvirt-daemon-driver-storage-1.1.1-29.el7_0.3.x86_64
libvirt-daemon-driver-nodedev-1.1.1-29.el7_0.3.x86_64
libvirt-daemon-kvm-1.1.1-29.el7_0.3.x86_64
[root@dell-r210ii-06 ~]# rpm -qa | grep systemd
systemd-208-11.el7_0.5.x86_64
systemd-libs-208-11.el7_0.5.x86_64
systemd-sysv-208-11.el7_0.5.x86_64

Comment 7 Jiri Denemark 2014-12-18 09:17:03 UTC
There should be no libvirt work required once the systemd bug 1139223 is fixed.

Comment 8 Shanzhi Yu 2015-01-06 06:58:24 UTC
I can reproduce it with systemd-208-12.el7.x86_64

Steps as below:

# rpm -qa|grep systemd
systemd-python-208-12.el7.x86_64
systemd-208-12.el7.x86_64
systemd-libs-208-12.el7.x86_64
systemd-sysv-208-12.el7.x86_64
systemd-devel-208-12.el7.x86_64

# rpm -q libvirt
libvirt-1.2.8-10.el7.x86_64

1. Start a domain
# virsh start rh7 
Domain rh7 started

2. Check cgroup dir used by libvirtd 
# for i in cpuset cpu,cpuacct memory devices freezer net_cls blkio perf_event;do file /sys/fs/cgroup/$i/machine.slice/machine-qemu\\x2drh7.scope;done 
/sys/fs/cgroup/cpuset/machine.slice/machine-qemu\x2drh7.scope: directory
/sys/fs/cgroup/cpu,cpuacct/machine.slice/machine-qemu\x2drh7.scope: directory
/sys/fs/cgroup/memory/machine.slice/machine-qemu\x2drh7.scope: directory
/sys/fs/cgroup/devices/machine.slice/machine-qemu\x2drh7.scope: directory
/sys/fs/cgroup/freezer/machine.slice/machine-qemu\x2drh7.scope: directory
/sys/fs/cgroup/net_cls/machine.slice/machine-qemu\x2drh7.scope: directory
/sys/fs/cgroup/blkio/machine.slice/machine-qemu\x2drh7.scope: directory
/sys/fs/cgroup/perf_event/machine.slice/machine-qemu\x2drh7.scope: directory

3. Reload systemd service and restart libvirtd service
# systemctl daemon-reload ;systemctl restart libvirtd.service

4. Check sub cgroup dir used by libvirtd 

# for i in cpuset cpu,cpuacct memory devices freezer net_cls blkio perf_event;do file /sys/fs/cgroup/$i/machine.slice/machine-qemu\\x2drh7.scope;done 
/sys/fs/cgroup/cpuset/machine.slice/machine-qemu\x2drh7.scope: directory
/sys/fs/cgroup/cpu,cpuacct/machine.slice/machine-qemu\x2drh7.scope: cannot open (No such file or directory)
/sys/fs/cgroup/memory/machine.slice/machine-qemu\x2drh7.scope: cannot open (No such file or directory)
/sys/fs/cgroup/devices/machine.slice/machine-qemu\x2drh7.scope: cannot open (No such file or directory)
/sys/fs/cgroup/freezer/machine.slice/machine-qemu\x2drh7.scope: directory
/sys/fs/cgroup/net_cls/machine.slice/machine-qemu\x2drh7.scope: directory
/sys/fs/cgroup/blkio/machine.slice/machine-qemu\x2drh7.scope: cannot open (No such file or directory)
/sys/fs/cgroup/perf_event/machine.slice/machine-qemu\x2drh7.scope: directory

So the blkio, devices, memory, and cpu,cpuacct sub-cgroups used by libvirtd were deleted when systemd was reloaded.
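The shell loop above can be expressed as a small Python check, useful for spotting the same breakage programmatically. This is an illustrative sketch only; the `missing_scopes` helper and the default scope path (taken from the reproduction steps) are assumptions, not part of libvirt or vdsm.

```python
import os

# Per-controller cgroup v1 hierarchies checked by the reproduction steps.
CONTROLLERS = ["cpuset", "cpu,cpuacct", "memory", "devices",
               "freezer", "net_cls", "blkio", "perf_event"]


def missing_scopes(root, scope="machine.slice/machine-qemu\\x2drh7.scope"):
    """Return the controllers under `root` (normally /sys/fs/cgroup) whose
    per-VM scope directory no longer exists."""
    return [c for c in CONTROLLERS
            if not os.path.isdir(os.path.join(root, c, scope))]
```

On an affected host, running this after `systemctl daemon-reload` would report `cpu,cpuacct`, `memory`, `devices`, and `blkio` as missing, matching the `file` output above.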

Verified this with systemd-208-20.el7.x86_64:

1. # rpm -q systemd
systemd-208-20.el7.x86_64

2. # virsh start rh7 
Domain rh7 started

3. # for i in cpuset cpu,cpuacct memory devices freezer net_cls blkio perf_event;do file /sys/fs/cgroup/$i/machine.slice/machine-qemu\\x2drh7.scope;done 
/sys/fs/cgroup/cpuset/machine.slice/machine-qemu\x2drh7.scope: directory
/sys/fs/cgroup/cpu,cpuacct/machine.slice/machine-qemu\x2drh7.scope: directory
/sys/fs/cgroup/memory/machine.slice/machine-qemu\x2drh7.scope: directory
/sys/fs/cgroup/devices/machine.slice/machine-qemu\x2drh7.scope: directory
/sys/fs/cgroup/freezer/machine.slice/machine-qemu\x2drh7.scope: directory
/sys/fs/cgroup/net_cls/machine.slice/machine-qemu\x2drh7.scope: directory
/sys/fs/cgroup/blkio/machine.slice/machine-qemu\x2drh7.scope: directory
/sys/fs/cgroup/perf_event/machine.slice/machine-qemu\x2drh7.scope: directory

4. # systemctl daemon-reload ;systemctl restart libvirtd.service

# for i in cpuset cpu,cpuacct memory devices freezer net_cls blkio perf_event;do file /sys/fs/cgroup/$i/machine.slice/machine-qemu\\x2drh7.scope;done 
/sys/fs/cgroup/cpuset/machine.slice/machine-qemu\x2drh7.scope: directory
/sys/fs/cgroup/cpu,cpuacct/machine.slice/machine-qemu\x2drh7.scope: directory
/sys/fs/cgroup/memory/machine.slice/machine-qemu\x2drh7.scope: directory
/sys/fs/cgroup/devices/machine.slice/machine-qemu\x2drh7.scope: directory
/sys/fs/cgroup/freezer/machine.slice/machine-qemu\x2drh7.scope: directory
/sys/fs/cgroup/net_cls/machine.slice/machine-qemu\x2drh7.scope: directory
/sys/fs/cgroup/blkio/machine.slice/machine-qemu\x2drh7.scope: directory
/sys/fs/cgroup/perf_event/machine.slice/machine-qemu\x2drh7.scope: directory

Comment 10 errata-xmlrpc 2015-03-05 07:48:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-0323.html

