Bug 1957703

Summary: Prometheus StatefulSets should have 2 replicas and hard anti-affinity set
Product: OpenShift Container Platform
Reporter: Damien Grisonnet <dgrisonn>
Component: Monitoring
Assignee: Damien Grisonnet <dgrisonn>
Status: CLOSED WONTFIX
QA Contact: Junqi Zhao <juzhao>
Severity: medium
Priority: medium
Version: 4.7
CC: anpicker, aos-bugs, dgrisonn, erooth, jeder, juzhao, lcosic, rgudimet, spasquie, sraje, vjaypurk, wking
Target Release: 4.7.z
Hardware: Unspecified
OS: Unspecified
Clone Of: 1949262
Last Closed: 2021-10-15 12:34:32 UTC
Bug Depends On: 1949262    
Bug Blocks: 1957704    

Comment 1 Junqi Zhao 2021-05-13 06:28:09 UTC
Tested with the not-yet-merged PR: hard pod anti-affinity is added to the Prometheus StatefulSets, and the prometheus-k8s and prometheus-user-workload pods are scheduled to different nodes.
# oc -n openshift-monitoring get sts prometheus-k8s -oyaml | grep podAntiAffinity -A10
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: prometheus
                operator: In
                values:
                - k8s
            namespaces:
            - openshift-monitoring
            topologyKey: kubernetes.io/hostname
# oc -n openshift-user-workload-monitoring get sts prometheus-user-workload -oyaml  | grep podAntiAffinity -A10
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: prometheus
                operator: In
                values:
                - user-workload
            namespaces:
            - openshift-user-workload-monitoring
            topologyKey: kubernetes.io/hostname

# oc -n openshift-monitoring get pod -o wide | grep prometheus-k8s
prometheus-k8s-0                               7/7     Running   1          18m    10.129.2.10   ci-ln-l78mzib-f76d1-x2jrt-worker-b-q6sdk   <none>           <none>
prometheus-k8s-1                               7/7     Running   1          18m    10.128.2.11   ci-ln-l78mzib-f76d1-x2jrt-worker-c-5prgg   <none>           <none>

# oc -n openshift-user-workload-monitoring get po -o wide 
NAME                                   READY   STATUS    RESTARTS   AGE     IP            NODE                                       NOMINATED NODE   READINESS GATES
prometheus-operator-559497d594-fzhps   2/2     Running   0          2m31s   10.130.0.43   ci-ln-l78mzib-f76d1-x2jrt-master-2         <none>           <none>
prometheus-user-workload-0             5/5     Running   1          2m28s   10.131.0.28   ci-ln-l78mzib-f76d1-x2jrt-worker-d-cxn8k   <none>           <none>
prometheus-user-workload-1             5/5     Running   1          2m27s   10.129.2.11   ci-ln-l78mzib-f76d1-x2jrt-worker-b-q6sdk   <none>           <none>
thanos-ruler-user-workload-0           3/3     Running   0          2m24s   10.131.0.29   ci-ln-l78mzib-f76d1-x2jrt-worker-d-cxn8k   <none>           <none>
thanos-ruler-user-workload-1           3/3     Running   0          2m24s   10.128.2.13   ci-ln-l78mzib-f76d1-x2jrt-worker-c-5prgg   <none>           <none>
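
For context, the StatefulSet specs above are rendered by prometheus-operator from the Prometheus custom resources managed by cluster-monitoring-operator. Below is a minimal sketch of the corresponding CR fields; it is illustrative, assembled from the monitoring.coreos.com/v1 API and the output above, not dumped from this cluster:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
  namespace: openshift-monitoring
spec:
  replicas: 2                      # two replicas, per the bug summary
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:  # hard anti-affinity
      - labelSelector:
          matchExpressions:
          - key: prometheus
            operator: In
            values:
            - k8s
        namespaces:
        - openshift-monitoring
        topologyKey: kubernetes.io/hostname  # at most one replica per node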

Comment 2 Damien Grisonnet 2021-05-25 17:26:34 UTC
PR is waiting for patch manager approval.

Comment 4 Junqi Zhao 2021-05-31 02:07:38 UTC
The fix is in 4.7.0-0.nightly-2021-05-28-013206 and later builds; based on the verification in Comment 1, moving to VERIFIED.
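
To re-check on a given cluster, a quick sketch using standard oc commands (the jsonpath query is illustrative, not taken from this bug):

# confirm the cluster is on a build carrying the fix
oc get clusterversion version -o jsonpath='{.status.desired.version}{"\n"}'
# then repeat the node-spread check from Comment 1
oc -n openshift-monitoring get pod -o wide | grep prometheus-k8s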

Comment 6 Siddharth Sharma 2021-06-04 18:39:19 UTC
The fix for this bug will ship as part of the next z-stream release, 4.7.15, on June 14th, since 4.7.14 was dropped due to a regression (https://bugzilla.redhat.com/show_bug.cgi?id=1967614).