Bug 1238570

Summary: Libvirt cannot get resource info after restarting libvirtd when a custom partition is used
Product: Red Hat Enterprise Linux 7
Component: libvirt
Version: 7.2
Hardware: x86_64
OS: Unspecified
Status: CLOSED ERRATA
Severity: medium
Priority: medium
Target Milestone: rc
Reporter: Pei Zhang <pzhang>
Assignee: Peter Krempa <pkrempa>
QA Contact: Virtualization Bugs <virt-bugs>
CC: dyuan, mzhan, rbalakri, shyu, xuzhang
Fixed In Version: libvirt-1.2.17-3.el7
Doc Type: Bug Fix
Type: Bug
Last Closed: 2015-11-19 06:47:52 UTC

Description Pei Zhang 2015-07-02 07:48:33 UTC
Description of problem:
Libvirt cannot get resource info after restarting libvirtd when a custom partition is used.

Version-Release number of selected component (if applicable):
libvirt-1.2.16-1.el7.x86_64
qemu-kvm-rhev-2.3.0-2.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create a partition in the cgroup hierarchy:
# pwd
/sys/fs/cgroup/systemd/machine.slice
# mkdir test

2. Edit the guest XML to use the new partition:
......
<resource>
    <partition>/machine/test</partition>
  </resource>
.....
3. Destroy and start the guest, then set a blkiotune value:
# virsh blkiotune testvm 999
# virsh blkiotune testvm 999 --config
# virsh blkiotune testvm
weight         : 999
device_weight  :
......
# cat blkio/machine.slice/machine-test.slice/machine-qemu\\x2dtestvm.scope/blkio.weight
999

# virsh dumpxml testvm | grep blkiotune -A 2
  <blkiotune>
    <weight>999</weight>
  </blkiotune>

4. Restart libvirtd and check blkiotune again:
# systemctl restart libvirtd.service

# virsh blkiotune testvm
error: Unable to get blkio parameters
error: Requested operation is not valid: blkio cgroup isn't mounted

5. Check the blkiotune value in the cgroup; the blkio controller is still mounted:
# systemd-cgls
......
├─blkio
│ ├─1 /usr/lib/systemd/systemd --system --deserialize 22
│ ├─user_test.slice
│ │ └─machine-qemu\x2dr72.scope
│ │   └─16414 /usr/libexec/qemu-kvm -name r72 -S -machine pc-i440fx-rhel7.1.0...
│ ├─system.slice
│ │ ├─  439 /usr/lib/systemd/systemd-journald
│ ├─user.slice
│ └─machine.slice
│   ├─machine-qemu\x2dr7.2.scope
│   │ └─21578 /usr/libexec/qemu-kvm -name r7.2 -S -machine pc-i440fx-rhel7.2....
│   ├─machine-test.slice
│   │ └─machine-qemu\x2dtestvm.scope
│   │   └─15949 /usr/libexec/qemu-kvm -name testvm -S -machine pc-i440fx-rhel...


# cat blkio/machine.slice/machine-test.slice/machine-qemu\\x2dtestvm.scope/blkio.weight
999
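The custom partition from step 2 surfaces as the nested machine-test.slice visible in the systemd-cgls output above. As a rough illustration of that mapping (partition_to_slice is a hypothetical helper, not libvirt code; only the '-' to \x2d escape is modeled, while real systemd name escaping covers more characters):

```python
def partition_to_slice(path):
    """Sketch: map a libvirt <partition> path to a systemd slice name.

    systemd joins the path components with '-' and escapes a literal '-'
    inside a component as \\x2d; only that one escape is handled here.
    """
    parts = path.strip("/").split("/")
    return "-".join(p.replace("-", "\\x2d") for p in parts) + ".slice"

print(partition_to_slice("/machine/test"))  # machine-test.slice
```

The VM's scope is then placed under that slice, which is why blkio.weight lives under blkio/machine.slice/machine-test.slice/machine-qemu\x2dtestvm.scope/.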

Actual results:
As in step 4, the resource value cannot be retrieved after restarting libvirtd.

Expected results:
In step 4, the resource value should still be retrievable.

Additional info:
debug : virCgroupValidateMachineGroup:302 : Name 'machine-qemu\x2dtestvm.scope' for controller 'cpu' does not match 'testvm', 'testvm.libvirt-qemu' or 'machine-test-qemu\x2dtestvm.scope'
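The log line shows where the reconnect fails: libvirt validates the cgroup name it detects on disk against a small set of acceptable names, and with a custom partition the generated scope candidate is wrong, so validation fails and the cgroups are dropped. A minimal sketch of that check, using the exact names from the log (validate_machine_group is a hypothetical stand-in, not the real virCgroupValidateMachineGroup signature):

```python
def validate_machine_group(detected, vm_name, driver, expected_scope):
    """Sketch: accept the detected cgroup name only if it matches one of
    the forms libvirt considers valid (per the debug message above)."""
    candidates = {vm_name,
                  vm_name + ".libvirt-" + driver,
                  expected_scope}
    return detected in candidates

# The scope systemd actually created vs. the (buggy) expected scope name
# that libvirt derived from the /machine/test partition:
ok = validate_machine_group("machine-qemu\\x2dtestvm.scope",
                            "testvm", "qemu",
                            "machine-test-qemu\\x2dtestvm.scope")
print(ok)  # False -> cgroups dropped, hence "blkio cgroup isn't mounted"
```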

Comment 1 Peter Krempa 2015-07-22 05:23:08 UTC
Fixed upstream with:

commit 88f6c007c3fb4324396ec397de57c8a80ba7b31d
Author: Peter Krempa <pkrempa>
Date:   Thu Jul 16 15:35:05 2015 +0200

    cgroup: Drop resource partition from virSystemdMakeScopeName
    
    The scope name, even according to our docs is
    "machine-$DRIVER\x2d$VMNAME.scope" virSystemdMakeScopeName would use the
    resource partition name instead of "machine-" if it was specified thus
    creating invalid scope paths.
    
    This makes libvirt drop cgroups for a VM that uses custom resource
    partition upon reconnecting since the detected scope name would not
    match the expected name generated by virSystemdMakeScopeName.
    
    The error is exposed by the following log entry:
    
    debug : virCgroupValidateMachineGroup:302 : Name 'machine-qemu\x2dtestvm.scope' for controller 'cpu' does not match 'testvm', 'testvm.libvirt-qemu' or 'machine-test-qemu\x2dtestvm.scope'
    
    for a "/machine/test" resource and "testvm" vm.

v1.2.17-144-g88f6c00
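The effect of the commit can be sketched as follows: before the fix the scope-name prefix was derived from the resource partition, after the fix it is always "machine". This is a simplified illustration (make_scope_name is a hypothetical stand-in for virSystemdMakeScopeName, and only the '-' to \x2d escape is modeled):

```python
def esc(s):
    # systemd-style escape for a literal '-' (simplified; the real
    # escaping covers more characters)
    return s.replace("-", "\\x2d")

def make_scope_name(vm_name, driver, partition, fixed):
    # buggy behavior: prefix built from the resource partition;
    # fixed behavior: prefix is always "machine"
    if fixed or not partition:
        prefix = "machine"
    else:
        prefix = "-".join(esc(p) for p in partition.strip("/").split("/"))
    return prefix + "-" + esc(driver + "-" + vm_name) + ".scope"

print(make_scope_name("testvm", "qemu", "/machine/test", fixed=False))
# machine-test-qemu\x2dtestvm.scope  (does not match the real scope)
print(make_scope_name("testvm", "qemu", "/machine/test", fixed=True))
# machine-qemu\x2dtestvm.scope       (matches the scope systemd created)
```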

Comment 4 Pei Zhang 2015-09-10 09:23:08 UTC
Verified with:
libvirt-1.2.17-8.el7.x86_64
qemu-kvm-rhev-2.3.0-22.el7.x86_64

Steps:

1. Define and start a guest using a custom resource partition:
# virsh dumpxml r70820 | grep resource -A 3
  <resource>
    <partition>/machine/mytest</partition>
  </resource>

2. Get and set resource info (using cputune here):

# virsh schedinfo r70820
Scheduler      : posix
cpu_shares     : 1024
vcpu_period    : 100000
vcpu_quota     : -1
emulator_period: 100000
emulator_quota : -1


# virsh schedinfo r70820 --set cpu_shares=2048
Scheduler      : posix
cpu_shares     : 2048
vcpu_period    : 100000
vcpu_quota     : -1
emulator_period: 100000
emulator_quota : -1

Check in the cgroup:

# cat /sys/fs/cgroup/cpu\,cpuacct/machine.slice/machine-mytest.slice/machine-qemu\\x2dr70820.scope/cpu.shares 
2048

3. Restart libvirtd and check the value again:

# service libvirtd restart 
Redirecting to /bin/systemctl restart  libvirtd.service

# virsh dumpxml r70820 | grep cputune -A 3
  <cputune>
    <shares>2048</shares>
  </cputune>
  <resource>
    <partition>/machine/mytest</partition>
  </resource>


# virsh schedinfo r70820 
Scheduler      : posix
cpu_shares     : 2048
vcpu_period    : 100000
vcpu_quota     : -1
emulator_period: 100000
emulator_quota : -1

The resource value can be retrieved after restarting libvirtd.
Moving to VERIFIED.

Comment 6 errata-xmlrpc 2015-11-19 06:47:52 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2202.html