Red Hat Bugzilla – Attachment 1475753 Details for Bug 1615732 – prometheus-operator ReplicaSet has timed out progressing
prometheus-operator pod in CrashLoopBackOff status

prometheus-operator_CrashLoopBackOff.txt (text/plain), 9.27 KB, created by Junqi Zhao on 2018-08-14 06:54:09 UTC

Description: prometheus-operator pod in CrashLoopBackOff status
Filename:    prometheus-operator_CrashLoopBackOff.txt
MIME Type:   text/plain
Creator:     Junqi Zhao
Created:     2018-08-14 06:54:09 UTC
Size:        9.27 KB
# kubectl -n openshift-monitoring get pod
NAME                                          READY     STATUS             RESTARTS   AGE
cluster-monitoring-operator-9f7578d96-c2m8p   1/1       Running            0          49m
prometheus-operator-9f6cffdb-vrrtf            0/1       CrashLoopBackOff   13         47m
*****************************************************************************************************************
# kubectl -n openshift-monitoring get deploy prometheus-operator -o yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2018-08-14T05:31:33Z
  generation: 4
  labels:
    k8s-app: prometheus-operator
  name: prometheus-operator
  namespace: openshift-monitoring
  resourceVersion: "22664"
  selfLink: /apis/extensions/v1beta1/namespaces/openshift-monitoring/deployments/prometheus-operator
  uid: 4de1a3c4-9f83-11e8-9782-fa163ef64d7a
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: prometheus-operator
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: prometheus-operator
    spec:
      containers:
      - args:
        - --kubelet-service=kube-system/kubelet
        - -logtostderr=true
        - --config-reloader-image=registry.dev.redhat.io/openshift3/ose-configmap-reloader:v3.11.0
        - --prometheus-config-reloader=registry.dev.redhat.io/openshift3/ose-prometheus-config-reloader:v3.11.0
        - --namespace=openshift-monitoring
        image: registry.dev.redhat.io/openshift3/ose-prometheus-operator:v3.11.0
        imagePullPolicy: IfNotPresent
        name: prometheus-operator
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      nodeSelector:
        beta.kubernetes.io/os: linux
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: prometheus-operator
      serviceAccountName: prometheus-operator
      terminationGracePeriodSeconds: 30
status:
  conditions:
  - lastTransitionTime: 2018-08-14T05:31:33Z
    lastUpdateTime: 2018-08-14T05:31:33Z
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  - lastTransitionTime: 2018-08-14T05:41:34Z
    lastUpdateTime: 2018-08-14T05:41:34Z
    message: ReplicaSet "prometheus-operator-9f6cffdb" has timed out progressing.
    reason: ProgressDeadlineExceeded
    status: "False"
    type: Progressing
  observedGeneration: 4
  replicas: 1
  unavailableReplicas: 1
  updatedReplicas: 1
*****************************************************************************************************************
# kubectl -n openshift-monitoring get replicaset.apps/prometheus-operator-9f6cffdb -oyaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  annotations:
    deployment.kubernetes.io/desired-replicas: "1"
    deployment.kubernetes.io/max-replicas: "2"
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2018-08-14T05:31:33Z
  generation: 1
  labels:
    k8s-app: prometheus-operator
    pod-template-hash: "59279986"
  name: prometheus-operator-9f6cffdb
  namespace: openshift-monitoring
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: Deployment
    name: prometheus-operator
    uid: 4de1a3c4-9f83-11e8-9782-fa163ef64d7a
  resourceVersion: "20744"
  selfLink: /apis/apps/v1/namespaces/openshift-monitoring/replicasets/prometheus-operator-9f6cffdb
  uid: 4de27b16-9f83-11e8-9782-fa163ef64d7a
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: prometheus-operator
      pod-template-hash: "59279986"
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: prometheus-operator
        pod-template-hash: "59279986"
    spec:
      containers:
      - args:
        - --kubelet-service=kube-system/kubelet
        - -logtostderr=true
        - --config-reloader-image=registry.dev.redhat.io/openshift3/ose-configmap-reloader:v3.11.0
        - --prometheus-config-reloader=registry.dev.redhat.io/openshift3/ose-prometheus-config-reloader:v3.11.0
        - --namespace=openshift-monitoring
        image: registry.dev.redhat.io/openshift3/ose-prometheus-operator:v3.11.0
        imagePullPolicy: IfNotPresent
        name: prometheus-operator
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      nodeSelector:
        beta.kubernetes.io/os: linux
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: prometheus-operator
      serviceAccountName: prometheus-operator
      terminationGracePeriodSeconds: 30
status:
  fullyLabeledReplicas: 1
  observedGeneration: 1
  replicas: 1

# kubectl -n openshift-monitoring get pod prometheus-operator-9f6cffdb-vrrtf -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/scc: restricted
  creationTimestamp: 2018-08-14T05:31:33Z
  generateName: prometheus-operator-9f6cffdb-
  labels:
    k8s-app: prometheus-operator
    pod-template-hash: "59279986"
  name: prometheus-operator-9f6cffdb-vrrtf
  namespace: openshift-monitoring
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: prometheus-operator-9f6cffdb
    uid: 4de27b16-9f83-11e8-9782-fa163ef64d7a
  resourceVersion: "27423"
  selfLink: /api/v1/namespaces/openshift-monitoring/pods/prometheus-operator-9f6cffdb-vrrtf
  uid: 4de4b3f7-9f83-11e8-9782-fa163ef64d7a
spec:
  containers:
  - args:
    - --kubelet-service=kube-system/kubelet
    - -logtostderr=true
    - --config-reloader-image=registry.dev.redhat.io/openshift3/ose-configmap-reloader:v3.11.0
    - --prometheus-config-reloader=registry.dev.redhat.io/openshift3/ose-prometheus-config-reloader:v3.11.0
    - --namespace=openshift-monitoring
    image: registry.dev.redhat.io/openshift3/ose-prometheus-operator:v3.11.0
    imagePullPolicy: IfNotPresent
    name: prometheus-operator
    ports:
    - containerPort: 8080
      name: http
      protocol: TCP
    resources: {}
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - KILL
        - MKNOD
        - SETGID
        - SETUID
      readOnlyRootFilesystem: true
      runAsUser: 1000300000
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: prometheus-operator-token-wn9r6
      readOnly: true
  dnsPolicy: ClusterFirst
  imagePullSecrets:
  - name: prometheus-operator-dockercfg-w7l6v
  nodeName: qe-juzhao-311-qeos-nrr-1
  nodeSelector:
    beta.kubernetes.io/os: linux
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1000300000
    seLinuxOptions:
      level: s0:c17,c14
  serviceAccount: prometheus-operator
  serviceAccountName: prometheus-operator
  terminationGracePeriodSeconds: 30
  volumes:
  - name: prometheus-operator-token-wn9r6
    secret:
      defaultMode: 420
      secretName: prometheus-operator-token-wn9r6
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2018-08-14T05:31:33Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2018-08-14T05:31:33Z
    message: 'containers with unready status: [prometheus-operator]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: null
    message: 'containers with unready status: [prometheus-operator]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: 2018-08-14T05:31:33Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: cri-o://cb5c689eaf47819ba26dfc85cba444ee38672129fb589450cb6ce1257b751ff8
    image: registry.dev.redhat.io/openshift3/ose-prometheus-operator:v3.11.0
    imageID: registry.dev.redhat.io/openshift3/ose-prometheus-operator@sha256:96bc3d940c49bf8016856cfd8679e6488d3d6a33bddcc5d707e662141af69eab
    lastState:
      terminated:
        containerID: cri-o://cb5c689eaf47819ba26dfc85cba444ee38672129fb589450cb6ce1257b751ff8
        exitCode: 1
        finishedAt: 2018-08-14T06:24:20Z
        reason: Error
        startedAt: 2018-08-14T06:24:20Z
    name: prometheus-operator
    ready: false
    restartCount: 15
    state:
      waiting:
        message: Back-off 5m0s restarting failed container=prometheus-operator pod=prometheus-operator-9f6cffdb-vrrtf_openshift-monitoring(4de4b3f7-9f83-11e8-9782-fa163ef64d7a)
        reason: CrashLoopBackOff
  phase: Running
  podIP: 10.130.0.12
  qosClass: BestEffort
  startTime: 2018-08-14T05:31:33Z
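The capture above shows the pod stuck in CrashLoopBackOff with restartCount: 15 and exitCode: 1, but not the error that makes the container exit. A sketch of the usual follow-up commands is below; it only prints the commands (namespace and pod name are taken from the capture) so it runs without a cluster.

```shell
#!/bin/sh
# Sketch: follow-up diagnostics for a CrashLoopBackOff pod.
# The script echoes the commands rather than executing them, so it is
# self-contained; on a real cluster you would run them directly.
NS=openshift-monitoring
POD=prometheus-operator-9f6cffdb-vrrtf

# Logs of the previous (crashed) container instance -- this usually
# reveals the error behind "exitCode: 1":
echo "kubectl -n $NS logs $POD --previous"

# Events recorded against the pod (image pulls, back-off notices):
echo "kubectl -n $NS describe pod $POD"
```

With `--previous`, kubectl returns the logs of the last terminated container rather than the one currently backing off, which is what matters when each restart dies immediately.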
Attachments on bug 1615732: 1475753 | 1475755