Description of problem:

The strategy type appears as "Rolling" instead of "Recreate" when exporting the heketi dc. See below:

########
# oc get dc
NAME                     REVISION   DESIRED   CURRENT   TRIGGERED BY
heketi                   1          1         1         config
storage-project-router   1          1         1         config

# oc export dc heketi -o yaml | grep -i rolling
    rollingParams:
    type: Rolling
########

Version-Release number of selected component (if applicable):

openshift v3.4.0.38
kubernetes v1.4.0+776c994

# heketi-cli -v
heketi-cli 3.1.0

heketi-client-3.1.0-12.el7rhgs.x86_64
cns-deploy-3.1.0-12.el7rhgs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Install the latest heketi build from https://brewweb.engineering.redhat.com/brew/buildinfo?buildID=530142
2. Create a new namespace called "storage-project"
3. Set up a router as described in our official doc
4. Create a topology file
5. Execute # cns-deploy topology.json --deploy-gluster --namespace=storage-project
6. Check the output of # oc export dc heketi -o yaml

Actual results:

The strategy type currently shows as "Rolling" when exporting the heketi dc, even though the heketi template specifies the correct type, "Recreate". If I understand it correctly, this will affect any upgrade from CNS 3.4 to a future version unless a manual workaround is provided in the doc and applied by all admins while deploying CNS 3.4 itself. If this is not ensured, I doubt that future upgrades from CNS 3.4 will go smoothly.
Expected results:

The strategy type should be "Recreate" so that it won't break any future upgrades of CNS 3.4.

Additional info:

See the full output of the export below:

**********************
# oc export dc heketi -o yaml
apiVersion: v1
kind: DeploymentConfig
metadata:
  annotations:
    description: Defines how to deploy Heketi
  creationTimestamp: null
  generation: 1
  labels:
    glusterfs: heketi-dc
    template: heketi
  name: heketi
spec:
  replicas: 1
  selector:
    name: heketi
  strategy:
    resources: {}
    rollingParams:
      intervalSeconds: 1
      maxSurge: 25%
      maxUnavailable: 25%
      timeoutSeconds: 600
      updatePeriodSeconds: 1
    type: Rolling
  template:
    metadata:
      creationTimestamp: null
      labels:
        glusterfs: heketi-pod
        name: heketi
      name: heketi
    spec:
      containers:
      - env:
        - name: HEKETI_USER_KEY
        - name: HEKETI_ADMIN_KEY
        - name: HEKETI_EXECUTOR
          value: kubernetes
        - name: HEKETI_FSTAB
          value: /var/lib/heketi/fstab
        - name: HEKETI_SNAPSHOT_LIMIT
          value: "14"
        - name: HEKETI_KUBE_CERTFILE
        - name: HEKETI_KUBE_INSECURE
          value: "y"
        - name: HEKETI_KUBE_GLUSTER_DAEMONSET
          value: "y"
        - name: HEKETI_KUBE_USE_SECRET
          value: "y"
        - name: HEKETI_KUBE_TOKENFILE
          value: /var/lib/heketi/secret/token
        - name: HEKETI_KUBE_NAMESPACE
          value: storage-project
        - name: HEKETI_KUBE_APIHOST
          value: https://dhcp46-26.lab.eng.blr.redhat.com:8443
        image: rhgs3/rhgs-volmanager-rhel7
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /hello
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 3
        name: heketi
        ports:
        - containerPort: 8080
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /hello
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 3
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 3
        resources: {}
        terminationMessagePath: /dev/termination-log
        volumeMounts:
        - mountPath: /var/lib/heketi
          name: db
        - mountPath: /var/lib/heketi/secret
          name: secret
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - glusterfs:
          endpoints: heketi-storage-endpoints
          path: heketidbstorage
        name: db
      - name: secret
        secret:
          defaultMode: 420
          secretName: heketi-service-account-token-qk9mq
  test: false
  triggers:
  - type: ConfigChange
status: {}
**********************
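Until a fixed cns-deploy build is available, an already-deployed dc could presumably be patched back to the intended strategy by hand. A hedged sketch (this workaround is not from the report itself; it assumes the standard `oc patch` strategic-merge behaviour against a live cluster):

```shell
# Assumption: run as a user with edit rights in the storage-project namespace.
# Patch the live dc so the strategy matches what the heketi template intended.
oc patch dc heketi -n storage-project -p '{"spec":{"strategy":{"type":"Recreate"}}}'

# Verify the change took effect.
oc export dc heketi -n storage-project -o yaml | grep -A1 'strategy:'
```

Note that the leftover `rollingParams` block, if still present after the patch, is harmless: it only applies when the strategy type is Rolling.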
Looks like 'strategy' has been bound to the 'pod' spec instead of the 'dc' spec in the new yaml file.
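If that diagnosis is right, the fix would amount to moving the block up one level in the template. An illustrative sketch of the two nestings (not the actual template text):

```
# Broken: strategy nested under the pod template spec, where OpenShift
# ignores it, so the dc falls back to the default Rolling strategy.
spec:
  template:
    spec:
      strategy:
        type: Recreate

# Intended: strategy directly under the dc spec.
spec:
  strategy:
    type: Recreate
```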
Patch Upstream : https://github.com/gluster/gluster-kubernetes/pull/144
Verified as fixed in cns-deploy-3.1.0-14
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHEA-2017-0148.html