Bug 2181997 - kubemacpool-cert-manager ignores node placement configuration
Summary: kubemacpool-cert-manager ignores node placement configuration
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Networking
Version: 4.13.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 4.13.1
Assignee: Quique Llorente
QA Contact: Yossi Segev
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-03-27 08:01 UTC by Simone Tiraboschi
Modified: 2025-09-28 15:57 UTC (History)
2 users

Fixed In Version: cluster-network-addons-operator-rhel9 v4.13.1-2
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-06-20 13:41:05 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github kubevirt cluster-network-addons-operator pull 1525 0 None open kmp: Add placement to cert-manager 2023-04-04 07:44:54 UTC
Red Hat Issue Tracker CNV-27450 0 None None None 2023-03-27 08:49:43 UTC
Red Hat Knowledge Base (Solution) 7010990 0 None None None 2023-05-03 21:43:56 UTC
Red Hat Product Errata RHEA-2023:3686 0 None None None 2023-06-20 13:41:26 UTC

Description Simone Tiraboschi 2023-03-27 08:01:59 UTC
Description of problem:
Something like:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  infra:
    nodePlacement:
      nodeSelector:
        node-role.kubernetes.io/infra: ''
  workloads:
    nodePlacement:
      nodeSelector:
        node-role.kubernetes.io/worker: ''

is not properly propagated down to the kubemacpool-cert-manager deployment.
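A quick way to confirm the missing propagation (a sketch, assuming the `oc` CLI against a cluster with CNV installed; the namespace and deployment name are taken from this report):

```shell
# Dump the pod-template nodeSelector of the kubemacpool-cert-manager
# Deployment; on an affected build the infra selector configured in the
# HyperConverged CR is absent here.
oc get deployment kubemacpool-cert-manager -n openshift-cnv \
  -o jsonpath='{.spec.template.spec.nodeSelector}{"\n"}'
```

On a fixed build the output should include `node-role.kubernetes.io/infra`.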

Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. Configure the HCO CR with the suggested node placement configuration
2. Check the placement of the kubemacpool-cert-manager pod

Actual results:
kubemacpool-cert-manager is not scheduled on infra nodes

Expected results:
kubemacpool-cert-manager is scheduled on infra nodes

Additional info:

Comment 1 Quique Llorente 2023-04-04 07:45:23 UTC
Fixed upstream https://github.com/kubevirt/cluster-network-addons-operator/pull/1525

Comment 2 Yossi Segev 2023-05-29 18:29:27 UTC
Verified on:
CNV 4.13.1
cluster-network-addons-operator-rhel9:v4.13.1-2


Verified with the following scenario:
1. Label all worker nodes with the infra role:
$ oc get nodes
NAME                                  STATUS   ROLES                  AGE     VERSION
c01-n-ys-4131o-gfnr2-master-0         Ready    control-plane,master   4h32m   v1.26.3+b404935
c01-n-ys-4131o-gfnr2-master-1         Ready    control-plane,master   4h32m   v1.26.3+b404935
c01-n-ys-4131o-gfnr2-master-2         Ready    control-plane,master   4h32m   v1.26.3+b404935
c01-n-ys-4131o-gfnr2-worker-0-255wc   Ready    worker           4h16m   v1.26.3+b404935
c01-n-ys-4131o-gfnr2-worker-0-725qn   Ready    worker           4h16m   v1.26.3+b404935
c01-n-ys-4131o-gfnr2-worker-0-tjvwz   Ready    worker           4h15m   v1.26.3+b404935
$
$ oc label node c01-n-ys-4131o-gfnr2-worker-0-255wc node-role.kubernetes.io/infra=""
node/c01-n-ys-4131o-gfnr2-worker-0-255wc labeled
$ 
$ oc label node c01-n-ys-4131o-gfnr2-worker-0-725qn node-role.kubernetes.io/infra=""
node/c01-n-ys-4131o-gfnr2-worker-0-725qn labeled
$ oc label node c01-n-ys-4131o-gfnr2-worker-0-tjvwz node-role.kubernetes.io/infra=""
node/c01-n-ys-4131o-gfnr2-worker-0-tjvwz labeled
$
$ oc get nodes
NAME                                  STATUS   ROLES                  AGE     VERSION
c01-n-ys-4131o-gfnr2-master-0         Ready    control-plane,master   4h32m   v1.26.3+b404935
c01-n-ys-4131o-gfnr2-master-1         Ready    control-plane,master   4h32m   v1.26.3+b404935
c01-n-ys-4131o-gfnr2-master-2         Ready    control-plane,master   4h32m   v1.26.3+b404935
c01-n-ys-4131o-gfnr2-worker-0-255wc   Ready    infra,worker           4h16m   v1.26.3+b404935
c01-n-ys-4131o-gfnr2-worker-0-725qn   Ready    infra,worker           4h16m   v1.26.3+b404935
c01-n-ys-4131o-gfnr2-worker-0-tjvwz   Ready    infra,worker           4h15m   v1.26.3+b404935

2. Check which node the kubemacpool-cert-manager pod is currently scheduled on:
$ oc get pod -n openshift-cnv kubemacpool-cert-manager-64c6596598-gct9v -o wide
NAME                                        READY   STATUS    RESTARTS   AGE    IP             NODE                            NOMINATED NODE   READINESS GATES
kubemacpool-cert-manager-64c6596598-gct9v   1/1     Running   0          168m   10.128.0.115   c01-n-ys-4131o-gfnr2-master-0   <none>           <none>

3. Edit the HCO CR and add the infra and workloads nodePlacement settings specified in the bug description:
$ oc edit hco -n openshift-cnv kubevirt-hyperconverged 
...
spec:
...
  infra:
    nodePlacement:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
...
  workloads:
    nodePlacement:
      nodeSelector:
        node-role.kubernetes.io/worker: ""
...

hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged edited
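As a scripted, non-interactive alternative to `oc edit`, the same change can be applied with `oc patch` (a sketch; the JSON merge payload mirrors the spec fragment shown above):

```shell
# Apply the infra/workloads nodePlacement from this report in one command.
oc patch hco kubevirt-hyperconverged -n openshift-cnv --type=merge \
  -p '{"spec":{"infra":{"nodePlacement":{"nodeSelector":{"node-role.kubernetes.io/infra":""}}},"workloads":{"nodePlacement":{"nodeSelector":{"node-role.kubernetes.io/worker":""}}}}}'
```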

4. Check for the nodeSelector in the new kubemacpool-cert-manager ReplicaSet:
$ oc get replicaset -n openshift-cnv kubemacpool-cert-manager-66898f94cd -o yaml
...
      nodeSelector:
        node-role.kubernetes.io/infra: ""
...

5. Verify the kubemacpool-cert-manager pod was scheduled on one of the labeled nodes:
$ oc get pod -n openshift-cnv kubemacpool-cert-manager-66898f94cd-j8lrs -o wide
NAME                                        READY   STATUS    RESTARTS   AGE   IP             NODE                                  NOMINATED NODE   READINESS GATES
kubemacpool-cert-manager-66898f94cd-j8lrs   1/1     Running   0          15m   10.131.0.104   c01-n-ys-4131o-gfnr2-worker-0-725qn   <none>           <none>

Comment 8 errata-xmlrpc 2023-06-20 13:41:05 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Virtualization 4.13.1 Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2023:3686

