Bug 860907 - An error is reported when checking the schedinfo of an LXC guest
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.4
Hardware: x86_64 Linux
Priority: medium
Severity: medium
Target Milestone: rc
Assigned To: Michal Privoznik
QA Contact: Virtualization Bugs
Reported: 2012-09-27 01:13 EDT by zhenfeng wang
Modified: 2013-02-21 02:25 EST (History)
CC: 8 users

Fixed In Version: libvirt-0.10.2-2.el6
Doc Type: Bug Fix
Last Closed: 2013-02-21 02:25:21 EST
Type: Bug

Attachments: None
Description zhenfeng wang 2012-09-27 01:13:16 EDT
Description of problem:
An error is reported when checking the schedinfo of an LXC guest.

Version-Release number of selected component (if applicable):
libvirt-0.10.2-1.el6.x86_64
qemu-kvm-0.12.1.2-2.313.el6.x86_64
kernel-2.6.32-308.el6.x86_64


How reproducible:
100%

Steps to Reproduce:
1. Prepare the domain XML:
# cat lxc.xml
<domain type='lxc'>
  <name>toy</name>
  <uuid>d1f4798b-bebf-d93c-1d97-fe1c1cb7c780</uuid>
  <memory>500000</memory>
  <currentMemory>500000</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64'>exe</type>
    <init>/bin/sh</init>
  </os>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/libvirt_lxc</emulator>
    <interface type='network'>
      <mac address='52:54:00:25:bf:e9'/>
      <source network='default'/>
    </interface>
    <console type='pty'>
      <target type='lxc' port='0'/>
    </console>
  </devices>
</domain>

2. Define and start the LXC guest:
# virsh -c lxc:/// define lxc.xml
# virsh -c lxc:/// start toy

3. Get the cpu_shares value from the cgroup filesystem:
# cat /cgroup/cpu/libvirt/lxc/toy/cpu.shares
1024

4. Get the scheduler parameters of "toy" with virsh:
# virsh -c lxc:/// schedinfo toy
Scheduler      : Unknown
error: Requested operation is not valid: cgroup CPU controller is not mounted
Actual results:
An error is reported when running "virsh -c lxc:/// schedinfo toy".

Expected results:
The cpu_shares values obtained in step 3 (from the cgroup filesystem) and step 4 (from virsh schedinfo) should match, and no error should be reported.
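The expected-results comparison above can be sketched as a small shell script. This is a sketch only: the guest name ("toy") and the RHEL 6 cgroup path (/cgroup/cpu/libvirt/lxc/&lt;name&gt;/cpu.shares) are taken from this report and will differ on other hosts.

```shell
# Sketch: compare the cpu_shares value reported by `virsh schedinfo`
# with the value stored in the cgroup filesystem. Guest name and
# cgroup path are taken from this report; adjust them for your host.

# Pull the cpu_shares field out of `virsh schedinfo`-style output.
extract_cpu_shares() {
    awk -F: '/^cpu_shares/ { gsub(/ /, "", $2); print $2 }'
}

guest=toy
cgroup_file="/cgroup/cpu/libvirt/lxc/$guest/cpu.shares"

if command -v virsh >/dev/null 2>&1 && [ -r "$cgroup_file" ]; then
    virsh_shares=$(virsh -c lxc:/// schedinfo "$guest" | extract_cpu_shares)
    cgroup_shares=$(cat "$cgroup_file")
    if [ "$virsh_shares" = "$cgroup_shares" ]; then
        echo "OK: cpu_shares match ($cgroup_shares)"
    else
        echo "MISMATCH: virsh='$virsh_shares' cgroup='$cgroup_shares'" >&2
    fi
else
    echo "virsh or $cgroup_file not available on this host" >&2
fi
```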

Additional info:
Comment 2 zhenfeng wang 2012-09-27 01:55:09 EDT
# virsh -c lxc:/// list 
 Id    Name                           State
----------------------------------------------------
 28110 toy                            running

# cat /proc/mounts
rootfs / rootfs rw 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
sysfs /sys sysfs rw,seclabel,nosuid,nodev,noexec,relatime 0 0
devtmpfs /dev devtmpfs rw,seclabel,nosuid,relatime,size=3998736k,nr_inodes=999684,mode=755 0 0
devpts /dev/pts devpts rw,seclabel,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /dev/shm tmpfs rw,seclabel,nosuid,nodev,relatime 0 0
/dev/sda1 / ext4 rw,seclabel,relatime,barrier=1,data=ordered 0 0
none /selinux selinuxfs rw,relatime 0 0
devtmpfs /dev devtmpfs rw,seclabel,nosuid,relatime,size=3998736k,nr_inodes=999684,mode=755 0 0
/proc/bus/usb /proc/bus/usb usbfs rw,relatime 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
gvfs-fuse-daemon /root/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,relatime,user_id=0,group_id=0 0 0
10.66.90.121:/vol/S3/libvirtmanual /mnt nfs rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.66.90.121,mountvers=3,mountport=4046,mountproto=udp,local_lock=none,addr=10.66.90.121 0 0
cgroup /cgroup/cpuset cgroup rw,relatime,cpuset 0 0
cgroup /cgroup/cpu cgroup rw,relatime,cpu 0 0
cgroup /cgroup/cpuacct cgroup rw,relatime,cpuacct 0 0
cgroup /cgroup/memory cgroup rw,relatime,memory 0 0
cgroup /cgroup/devices cgroup rw,relatime,devices 0 0
cgroup /cgroup/freezer cgroup rw,relatime,freezer 0 0
cgroup /cgroup/net_cls cgroup rw,relatime,net_cls 0 0
cgroup /cgroup/blkio cgroup rw,relatime,blkio 0 0
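The mount table above shows the cpu controller mounted at /cgroup/cpu, which contradicts the "cgroup CPU controller is not mounted" error. A minimal sketch (assuming the /proc/mounts format shown above) for checking whether the cpu controller is really mounted:

```shell
# Sketch: scan /proc/mounts for a cgroup entry whose mount options
# contain the standalone "cpu" controller (not "cpuset"/"cpuacct").
if awk '$3 == "cgroup" && $4 ~ /(^|,)cpu(,|$)/ { found = 1 } END { exit !found }' /proc/mounts
then
    echo "cpu cgroup controller is mounted"
else
    echo "cpu cgroup controller is NOT mounted" >&2
fi
```

Note this applies to the cgroup-v1 layout used in this report; on cgroup-v2 hosts controllers live under a single `cgroup2` mount and show up differently in /proc/mounts.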
Comment 3 Michal Privoznik 2012-10-01 09:10:59 EDT
Patch proposed upstream:

https://www.redhat.com/archives/libvir-list/2012-October/msg00009.html
Comment 6 Wayne Sun 2012-10-09 04:10:25 EDT
pkgs:
libvirt-0.10.2-2.el6.x86_64
kernel-2.6.32-306.el6.x86_64

steps:
1. prepare a domain and start it:

# virsh -c lxc:///
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # dumpxml vm1
<domain type='lxc'>
  <name>vm1</name>
  <uuid>386f5b25-43ee-9d62-4ce2-58c3809e47c1</uuid>
  <memory unit='KiB'>500000</memory>
  <currentMemory unit='KiB'>500000</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <os>
    <type arch='x86_64'>exe</type>
    <init>/bin/sh</init>
  </os>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/libvirt_lxc</emulator>
    <interface type='network'>
      <mac address='52:54:00:f2:2c:ac'/>
      <source network='default'/>
      <target dev='veth1'/>
    </interface>
    <console type='pty'>
      <target type='lxc' port='0'/>
    </console>
  </devices>
  <seclabel type='none'/>
</domain>

virsh # start vm1
Domain vm1 started

virsh # list --all
 Id    Name                           State
----------------------------------------------------
 28777 fedora-rawhide                 running
 28918 vm1                            running

2. check schedinfo
virsh # schedinfo vm1
Scheduler      : posix
cpu_shares     : 1024
vcpu_period    : 100000
vcpu_quota     : -1

# cat /cgroup/cpu/libvirt/lxc/vm1/cpu.shares 
1024

This is working now.
Comment 7 errata-xmlrpc 2013-02-21 02:25:21 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2013-0276.html
