Bug 1985908

Summary: Tuned affining containers to housekeeping CPUs
Product: OpenShift Container Platform
Component: Node Tuning Operator
Version: 4.8
Target Release: 4.8.z
Hardware: x86_64
OS: All
Status: CLOSED ERRATA
Severity: high
Priority: medium
Reporter: OpenShift BugZilla Robot <openshift-bugzilla-robot>
Assignee: Jiří Mencák <jmencak>
QA Contact: Simon <skordas>
CC: aos-bugs, dagray, keyoung, sejug
Doc Type: No Doc Update
Bug Depends On: 1979352
Last Closed: 2021-08-16 18:32:12 UTC

Comment 1 Jiří Mencák 2021-07-26 09:01:27 UTC
Upstream PR: https://github.com/openshift/cluster-node-tuning-operator/pull/254

Comment 4 Simon 2021-08-12 17:02:29 UTC
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.8.0-0.nightly-2021-08-09-135211   True        False         44m     Cluster version is 4.8.0-0.nightly-2021-08-09-135211

$ oc get nodes | grep worker
ip-10-0-130-136.us-east-2.compute.internal   Ready    worker   4h16m   v1.21.1+9807387
ip-10-0-185-40.us-east-2.compute.internal    Ready    worker   4h16m   v1.21.1+9807387
ip-10-0-196-55.us-east-2.compute.internal    Ready    worker   4h16m   v1.21.1+9807387

$ node=ip-10-0-130-136.us-east-2.compute.internal

$ oc project openshift-cluster-node-tuning-operator 
Now using project "openshift-cluster-node-tuning-operator" on server "https://api.skordas812a.qe.devcluster.openshift.com:6443".

$ oc get pods | grep $node
(no output: pod names do not include the node name, so -o wide is needed to match on the NODE column)

$ oc get pods -o wide | grep $node
tuned-gn7mh                                     1/1     Running   0          4h16m   10.0.130.136   ip-10-0-130-136.us-east-2.compute.internal   <none>           <none>

$ pod=tuned-gn7mh

$ oc rsh $pod
sh-4.4# grep cgroup_ps_bla /usr/lib/python3.6/site-packages/tuned/plugins/plugin_scheduler.py
                self._cgroup_ps_blacklist_re = ""
                        "cgroup_ps_blacklist": None,
        @command_custom("cgroup_ps_blacklist", per_device = False)
        def _cgroup_ps_blacklist(self, enabling, value, verify, ignore_missing):
                        self._cgroup_ps_blacklist_re = "|".join(["(%s)" % v for v in re.split(r"(?<!\\);", str(value))])
                if self._cgroup_ps_blacklist_re != "":
                        psl = [v for v in psl if re.search(self._cgroup_ps_blacklist_re,
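
The grepped fragments fit together as follows. A minimal, self-contained sketch of the mechanism, assuming the documented intent of cgroup_ps_blacklist (processes in matching cgroups are excluded from re-affining); the comprehension on the last grepped line is truncated, so the exact filter polarity below is an assumption:

import re

# Build one alternation regex from a ";"-separated option value;
# r"(?<!\\);" splits only on semicolons not escaped by a backslash.
def build_blacklist_re(value):
    return "|".join(["(%s)" % v for v in re.split(r"(?<!\\);", str(value))])

# Sketch of the truncated filter: keep only processes whose cgroup does
# NOT match the blacklist, so blacklisted (container) processes are
# never re-affined to the housekeeping CPUs.
def filter_ps(psl, cgroup_of, blacklist_re):
    if blacklist_re == "":
        return psl
    return [p for p in psl if not re.search(blacklist_re, cgroup_of(p))]

print(build_blacklist_re(r"/kubepods\.slice/"))  # -> (/kubepods\.slice/)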

$ oc label node $node tuned-scheduler-node=
node/ip-10-0-130-136.us-east-2.compute.internal labeled

$ cat scheduler-tuned.yaml 
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: ocp-scheduler-profile
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - data: |
      [main]
      summary=Custom OpenShift profile
      include=openshift-node
      [scheduler]
      isolated_cores=1
      cgroup_ps_blacklist=/kubepods\.slice/
    name: ocp-scheduler-profile
  recommend:
  - match:
    - label: tuned-scheduler-node
    priority: 20
    profile: ocp-scheduler-profile
    operand:
      debug: true

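For context, cgroup_ps_blacklist is a regular expression matched against a process's cgroup paths, so the profile above should move host processes such as chronyd (under system.slice) onto the housekeeping CPU while leaving container processes (under kubepods.slice) untouched. A quick sanity check of that expectation; the container cgroup path is a hypothetical example:

import re

blacklist = r"/kubepods\.slice/"  # value from the profile above

host = "/system.slice/chronyd.service"  # from the cgroup listing in this comment
container = "/kubepods.slice/kubepods-burstable.slice/crio-0123abcd.scope"  # hypothetical

print(bool(re.search(blacklist, host)))       # False -> re-affined to CPU 0
print(bool(re.search(blacklist, container)))  # True  -> excluded, affinity kept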

$ oc rsh $pod
sh-4.4# grep ^Cpus_allowed_list /proc/`pgrep openshift-tuned`/status
Cpus_allowed_list:      0-1
sh-4.4# grep Cpus_allowed_list /proc/`pidof chronyd`/status
Cpus_allowed_list:      0-1
sh-4.4# grep . /proc/`pidof chronyd`/cgroup 
12:freezer:/
11:memory:/system.slice/chronyd.service
10:perf_event:/
9:hugetlb:/
8:pids:/system.slice/chronyd.service
7:blkio:/system.slice/chronyd.service
6:devices:/system.slice/chronyd.service
5:rdma:/
4:cpu,cpuacct:/system.slice/chronyd.service
3:cpuset:/
2:net_cls,net_prio:/
1:name=systemd:/system.slice/chronyd.service
sh-4.4# exit
exit

$ oc create -f scheduler-tuned.yaml 
tuned.tuned.openshift.io/ocp-scheduler-profile created

$ oc get profiles
NAME                                         TUNED                     APPLIED   DEGRADED   AGE
ip-10-0-130-136.us-east-2.compute.internal   ocp-scheduler-profile     True      False      4h21m
ip-10-0-147-151.us-east-2.compute.internal   openshift-control-plane   True      False      4h26m
ip-10-0-173-212.us-east-2.compute.internal   openshift-control-plane   True      False      4h26m
ip-10-0-185-40.us-east-2.compute.internal    openshift-node            True      False      4h21m
ip-10-0-196-55.us-east-2.compute.internal    openshift-node            True      False      4h21m
ip-10-0-223-189.us-east-2.compute.internal   openshift-control-plane   True      False      4h26m

$ oc rsh $pod
sh-4.4# grep ^Cpus_allowed_list /proc/`pgrep openshift-tuned`/status
Cpus_allowed_list:      0-1
sh-4.4# grep Cpus_allowed_list /proc/`pidof chronyd`/status
Cpus_allowed_list:      0
sh-4.4# grep . /proc/`pidof chronyd`/cgroup 
12:freezer:/
11:memory:/system.slice/chronyd.service
10:perf_event:/
9:hugetlb:/
8:pids:/system.slice/chronyd.service
7:blkio:/system.slice/chronyd.service
6:devices:/system.slice/chronyd.service
5:rdma:/
4:cpu,cpuacct:/system.slice/chronyd.service
3:cpuset:/
2:net_cls,net_prio:/
1:name=systemd:/system.slice/chronyd.service
sh-4.4# exit
exit
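
This is the expected result: chronyd, a host service in system.slice, dropped from Cpus_allowed_list 0-1 to 0 (moved off isolated core 1 onto the housekeeping CPU), while openshift-tuned, which runs in a container under kubepods.slice, kept 0-1 because of the blacklist. The manual checks above could also be scripted; a small hypothetical helper:

import subprocess

# Read Cpus_allowed_list from /proc/<pid>/status for a named process,
# mirroring the manual `grep ... /proc/$(pidof chronyd)/status` checks.
def cpus_allowed_list(name):
    pid = subprocess.check_output(["pidof", name]).split()[0].decode()
    with open("/proc/%s/status" % pid) as f:
        for line in f:
            if line.startswith("Cpus_allowed_list:"):
                return line.split(":", 1)[1].strip()

print(cpus_allowed_list("chronyd"))  # "0-1" before the profile, "0" after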

Comment 6 errata-xmlrpc 2021-08-16 18:32:12 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.8.5 security update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3121