Bug 1268891
| Field | Value |
|---|---|
| Summary | [3.0.2] Pods from the same image in the same service in the same deployment not grouped in another service |
| Product | OpenShift Container Platform |
| Component | Management Console |
| Status | CLOSED CURRENTRELEASE |
| Severity | medium |
| Priority | unspecified |
| Version | 3.0.0 |
| Reporter | Erik M Jacobs <ejacobs> |
| Assignee | Jessica Forrester <jforrest> |
| QA Contact | Yadan Pei <yapei> |
| CC | aos-bugs, jokerman, mmccomas |
| Target Milestone | --- |
| Target Release | --- |
| Hardware | Unspecified |
| OS | Unspecified |
| Doc Type | Bug Fix |
| Type | Bug |
| Last Closed | 2015-11-23 14:24:22 UTC |
Description
Erik M Jacobs
2015-10-05 14:41:53 UTC
Created attachment 1079991 [details]
shows ungrouped pods

The checks in the overview in deploymentByService and deploymentConfigsByService are not correct. They check whether the selector of the service covers the selector of the deployment/DC, but those selectors may actually be disjoint. The check should be whether the selector of the service covers the set of labels in the template of the deployment/DC, i.e. whether the pods created by the deployment would be covered by the service.

Commit pushed to master at https://github.com/openshift/origin
https://github.com/openshift/origin/commit/2944083e6ce5f327eb31c405f743d714a6e76324
Bug 1268891 - pods not always grouped when service selector should cover template of a dc/deployment

(In reply to Erik M Jacobs from comment #1)
> Created attachment 1079991 [details]
> shows ungrouped pods

Could you please upload a new attachment? There is an error when opening the image: 'Error interpreting JPEG image file (Not a JPEG file: starts with 0x89 0x50)'. It would help us verify.

(In reply to Erik M Jacobs from comment #1)
> Created attachment 1079991 [details]
> shows ungrouped pods

I'm still not very clear about this issue; I would really appreciate it if you could provide a new, valid image file.

The image "shows ungrouped pods" works fine for me -- I just downloaded it and was able to look at it.

Created attachment 1084928 [details]
correct grouping

Created attachment 1084929 [details]
incorrect grouping
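The corrected check described above can be sketched as follows. This is a minimal illustration, not the actual console code; the helper name `selectorCoversPodTemplate` is hypothetical:

```javascript
// Sketch of the corrected grouping check (hypothetical helper, not the
// actual openshift/origin console code). A service "covers" a
// deployment/DC when every key/value pair in the service's selector is
// present in the pod template's labels -- i.e. the pods the deployment
// creates would be selected by the service.
function selectorCoversPodTemplate(serviceSelector, templateLabels) {
  return Object.keys(serviceSelector).every(
    (key) => templateLabels[key] === serviceSelector[key]
  );
}

// The buggy check compared the service selector against the
// deployment's *selector*, which can be disjoint from the service's
// selector even when the template labels match:
const serviceSelector = { type: "ab" };
const dcSelector = { deploymentconfig: "blue" }; // disjoint from the service selector
const templateLabels = { app: "blue", deploymentconfig: "blue", type: "ab" };

selectorCoversPodTemplate(serviceSelector, dcSelector);     // false -- old, wrong comparison
selectorCoversPodTemplate(serviceSelector, templateLabels); // true  -- pods are covered
```

This matches Kubernetes equality-based selector semantics: the selector is a subset test against the labels, not a comparison between two selectors.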
Verified on oc v3.0.2.902, kubernetes v1.2.0-alpha.1-1107-g4c8e6f4; the issue is fixed.

Steps to verify:
1. Create a service (frontend) with two pods:
   $ oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/sample-app/application-template-stibuild.json
2. Wait for all pods to be running.
3. Check the overview page on the web console.

Pods with the same image in the same service in the same deployment are grouped together and displayed as a chart.

Can you show a picture for verification? It doesn't sound like you created a second service like in the original comment. A service without an associated deployment that has the same pods/images/etc. from a different deployment was what got ungrouped. Can you show your YAML/JSON?

Jacobs, thanks a lot for the reminder. I verified again with the following YAML file (I replaced the image):

```yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    creationTimestamp: 2015-10-05T14:24:05Z
    labels:
      app: blue
    name: ab
    namespace: deploymentscenarios
    resourceVersion: "75250"
    selfLink: /api/v1/namespaces/deploymentscenarios/services/ab
    uid: bb8810f4-6b6c-11e5-b979-525400b33d1d
  spec:
    clusterIP: 172.30.74.127
    portalIP: 172.30.74.127
    ports:
    - name: default
      nodePort: 0
      port: 8080
      protocol: TCP
      targetPort: 8080
    selector:
      type: ab
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      openshift.io/generatedby: OpenShiftWebConsole
    creationTimestamp: 2015-10-05T14:23:11Z
    labels:
      app: blue
    name: blue
    namespace: deploymentscenarios
    resourceVersion: "75164"
    selfLink: /api/v1/namespaces/deploymentscenarios/services/blue
    uid: 9bbd5545-6b6c-11e5-b979-525400b33d1d
  spec:
    clusterIP: 172.30.21.59
    portalIP: 172.30.21.59
    ports:
    - nodePort: 0
      port: 8080
      protocol: TCP
      targetPort: 8080
    selector:
      deploymentconfig: blue
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      openshift.io/generatedby: OpenShiftWebConsole
    creationTimestamp: 2015-10-05T14:32:18Z
    labels:
      app: green
    name: green
    namespace: deploymentscenarios
    resourceVersion: "75499"
    selfLink: /api/v1/namespaces/deploymentscenarios/services/green
    uid: e1b4f413-6b6d-11e5-b979-525400b33d1d
  spec:
    clusterIP: 172.30.205.105
    portalIP: 172.30.205.105
    ports:
    - nodePort: 0
      port: 8080
      protocol: TCP
      targetPort: 8080
    selector:
      deploymentconfig: green
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    annotations:
      openshift.io/generatedby: OpenShiftWebConsole
    creationTimestamp: 2015-10-05T14:23:11Z
    labels:
      app: blue
    name: blue
    namespace: deploymentscenarios
    resourceVersion: "75295"
    selfLink: /oapi/v1/namespaces/deploymentscenarios/deploymentconfigs/blue
    uid: 9bba9908-6b6c-11e5-b979-525400b33d1d
  spec:
    replicas: 2
    selector:
      deploymentconfig: blue
    strategy:
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        creationTimestamp: null
        labels:
          app: blue
          deploymentconfig: blue
          type: ab
      spec:
        containers:
        - image: webapp
          imagePullPolicy: Always
          name: blue
          ports:
          - containerPort: 8080
            protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
        dnsPolicy: ClusterFirst
        restartPolicy: Always
    triggers:
    - imageChangeParams:
        automatic: true
        containerNames:
        - blue
        from:
          kind: ImageStreamTag
          name: blue:latest
        # lastTriggeredImage: 172.30.147.87:5000/deploymentscenarios/blue@sha256:97981da1108509d45b596ede398dc1d40ba449654ec4a02a656411f7cb02c3ea
      type: ImageChange
    - type: ConfigChange
  status:
    details:
      causes:
      - type: ConfigChange
    latestVersion: 2
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    annotations:
      openshift.io/generatedby: OpenShiftWebConsole
    creationTimestamp: 2015-10-05T14:32:18Z
    labels:
      app: green
    name: green
    namespace: deploymentscenarios
    resourceVersion: "75651"
    selfLink: /oapi/v1/namespaces/deploymentscenarios/deploymentconfigs/green
    uid: e1b15f85-6b6d-11e5-b979-525400b33d1d
  spec:
    replicas: 2
    selector:
      deploymentconfig: green
    strategy:
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        creationTimestamp: null
        labels:
          app: green
          deploymentconfig: green
          type: ab
      spec:
        containers:
        - image: busybox
          imagePullPolicy: Always
          name: green
          ports:
          - containerPort: 8080
            protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
        dnsPolicy: ClusterFirst
        restartPolicy: Always
    triggers:
    - imageChangeParams:
        automatic: true
        containerNames:
        - green
        from:
          kind: ImageStreamTag
          name: green:latest
        # lastTriggeredImage: 172.30.147.87:5000/deploymentscenarios/green@sha256:3ede227d1e037c8f107b238b40c9624c27362ea5b51e5ab919dbed369711cfc5
      type: ImageChange
    - type: ConfigChange
  status:
    details:
      causes:
      - type: ConfigChange
    latestVersion: 2
kind: List
metadata: {}
```

Created attachment 1085702 [details]
Correct Grouping
The two pods matching service ab's selector were grouped into a deployment rather than shown as standalone pods.

I guess that looks right!

This fix is available in OpenShift Enterprise 3.1.
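The verified behavior can be reproduced from the YAML above with a small sketch (hypothetical code, not the console's implementation): service ab's selector `type: ab` is a subset of both pod templates' labels, so both deployment configs group under it, while blue and green each match only their own service.

```javascript
// Selectors and pod-template labels taken from the YAML in the
// verification comment above. `covers` is a hypothetical helper
// implementing the subset test Kubernetes uses for label selectors.
const services = {
  ab:    { type: "ab" },
  blue:  { deploymentconfig: "blue" },
  green: { deploymentconfig: "green" },
};
const templates = {
  blue:  { app: "blue",  deploymentconfig: "blue",  type: "ab" },
  green: { app: "green", deploymentconfig: "green", type: "ab" },
};

function covers(selector, labels) {
  return Object.entries(selector).every(([k, v]) => labels[k] === v);
}

// Group each deployment config under every service whose selector
// covers its pod template labels.
const grouping = {};
for (const [svc, selector] of Object.entries(services)) {
  grouping[svc] = Object.keys(templates).filter((dc) =>
    covers(selector, templates[dc])
  );
}
// grouping.ab    -> ["blue", "green"]  (both templates carry type: ab)
// grouping.blue  -> ["blue"]
// grouping.green -> ["green"]
```

With the old (buggy) check, service ab's selector would have been compared against the DC selectors (`deploymentconfig: blue`/`green`), matched neither, and the pods would have shown up ungrouped.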