Bug 1415464 - Stateful Set status.replicas is always 0 after pod running
Summary: Stateful Set status.replicas is always 0 after pod running
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Node
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Importance: medium / low
Target Milestone: ---
Assignee: Derek Carr
QA Contact: Xiaoli Tian
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2017-01-22 09:26 UTC by Xingxing Xia
Modified: 2019-07-03 15:12 UTC (History)
6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-07-03 15:12:09 UTC
Target Upstream Version:



Description Xingxing Xia 2017-01-22 09:26:26 UTC
Description of problem:
StatefulSet status.replicas stays 0 even after its pod is running. By comparison, an RC (ReplicationController) does not have this problem.

Version-Release number of selected component (if applicable):
openshift v1.5.0-alpha.2+6e9d68d-91

How reproducible:
Always

Steps to Reproduce:
1. Create statefulset
$ oc create -f - <<EOF
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  labels:
    app: hello
    name: hello
  name: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  serviceName: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - image: aosqe/hello-openshift
        name: hello
        ports:
        - containerPort: 8080
          name: web
          protocol: TCP
      restartPolicy: Always
EOF

2. After the pod is running, check the statefulset's status.replicas
$ oc get statefulset -o yaml

3. Run `oc describe` to see the problem more clearly:
$ oc describe statefulset hello
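The discrepancy can also be checked directly with jsonpath output, without scanning the full YAML. A minimal sketch, assuming the statefulset is named `hello` as in step 1 (requires a running cluster):

```shell
# Print desired vs. observed replica counts side by side.
# For this bug, "current" stays at 0 even after the pod is Running.
oc get statefulset hello \
  -o jsonpath='desired={.spec.replicas} current={.status.replicas}{"\n"}'

# Cross-check against the pods actually matched by the selector:
oc get pods -l app=hello
```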

Actual results:
2. It shows 0 in status.replicas
...
status:
  replicas: 0

3. It shows "0 current / 1 desired"
Name:            hello
...
Replicas:        0 current / 1 desired
...
Pods Status:        1 Running / 0 Waiting / 0 Succeeded / 0 Failed
...

Expected results:
2. status.replicas should be 1
3. Should be:
...
Replicas:               1 current / 1 desired
...

Additional info:

Comment 1 DeShuai Ma 2017-02-06 05:31:51 UTC
I tried this on OCP 3.5; the statefulset's status.replicas is correct.

Version:
[root@ip-172-18-1-86 ~]# openshift version
openshift v3.5.0.16+a26133a
kubernetes v1.5.2+43a9be4
etcd 3.1.0

Test result:
[root@dhcp-128-7 dma]# oc get statefulset
NAME           DESIRED   CURRENT   AGE
hello-petset   2         2         4m
[root@dhcp-128-7 dma]# oc get pod|grep hello-petset
hello-petset-0              1/1       Running   0          4m
hello-petset-1              1/1       Running   0          4m
[root@dhcp-128-7 dma]# oc get statefulset hello-petset -o json
{
    "apiVersion": "apps/v1beta1",
    "kind": "StatefulSet",
    "metadata": {
        "creationTimestamp": "2017-02-06T05:22:59Z",
        "generation": 1,
        "labels": {
            "app": "hello-pod"
        },
        "name": "hello-petset",
        "namespace": "test1",
        "resourceVersion": "9084",
        "selfLink": "/apis/apps/v1beta1/namespaces/test1/statefulsets/hello-petset",
        "uid": "5310261a-ec2c-11e6-bb1b-0e9cac247b88"
    },
    "spec": {
        "replicas": 2,
        "selector": {
            "matchLabels": {
                "app": "hello-pod"
            }
        },
        "serviceName": "foo",
        "template": {
            "metadata": {
                "annotations": {
                    "pod.alpha.kubernetes.io/initialized": "true"
                },
                "creationTimestamp": null,
                "labels": {
                    "app": "hello-pod"
                }
            },
            "spec": {
                "containers": [
                    {
                        "image": "docker.io/deshuai/hello-pod:latest",
                        "imagePullPolicy": "IfNotPresent",
                        "name": "hello-pod",
                        "ports": [
                            {
                                "containerPort": 8080,
                                "protocol": "TCP"
                            }
                        ],
                        "resources": {},
                        "securityContext": {
                            "capabilities": {},
                            "privileged": false
                        },
                        "terminationMessagePath": "/dev/termination-log",
                        "volumeMounts": [
                            {
                                "mountPath": "/tmp",
                                "name": "tmp"
                            }
                        ]
                    }
                ],
                "dnsPolicy": "ClusterFirst",
                "restartPolicy": "Always",
                "securityContext": {},
                "terminationGracePeriodSeconds": 0,
                "volumes": [
                    {
                        "emptyDir": {},
                        "name": "tmp"
                    }
                ]
            }
        }
    },
    "status": {
        "replicas": 2
    }
}
[root@dhcp-128-7 dma]# oc describe statefulset hello-petset
Name:			hello-petset
Namespace:		test1
Image(s):		docker.io/deshuai/hello-pod:latest
Selector:		app=hello-pod
Labels:			app=hello-pod
Replicas:		2 current / 2 desired
Annotations:		<none>
CreationTimestamp:	Mon, 06 Feb 2017 13:22:59 +0800
Pods Status:		2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Volumes:
  tmp:
    Type:	EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:	
Events:
  FirstSeen	LastSeen	Count	From		SubObjectPath	Type		Reason			Message
  ---------	--------	-----	----		-------------	--------	------			-------
  4m		4m		1	{statefulset }			Normal		SuccessfulCreate	pet: hello-petset-0
  4m		4m		1	{statefulset }			Normal		SuccessfulCreate	pet: hello-petset-1

Comment 3 Greg Blomquist 2019-07-03 15:12:09 UTC
Appears to work in OCP 3.5 based on comment #1.
