Bug 1413748 - InterPodAffinity/AntiAffinity e2es failing on OCP: forbidden due to disallowed TopologyKey
Summary: InterPodAffinity/AntiAffinity e2es failing on OCP: forbidden due to disallowed TopologyKey
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Node
Version: 3.5.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Avesh Agarwal
QA Contact: Mike Fiedler
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-01-16 21:10 UTC by Mike Fiedler
Modified: 2017-07-24 14:11 UTC
CC: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
The admission plugin LimitPodHardAntiAffinityTopology has been disabled by default. Enabling it by default was causing a conflict with one of the end-to-end tests.
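For context, a minimal Go sketch of the check this plugin performs (the type and function names here are illustrative, not the actual plugin source): any pod whose hard (RequiredDuringScheduling) pod anti-affinity names a topology key other than kubernetes.io/hostname is rejected, which is exactly the 403 the e2e tests hit when they label nodes with a zone key.

```go
package main

import "fmt"

// allowedKey is the only topology key LimitPodHardAntiAffinityTopology
// permits in required pod anti-affinity terms.
const allowedKey = "kubernetes.io/hostname"

// affinityTerm is a stand-in for the API's PodAffinityTerm; only the
// TopologyKey field matters for this check.
type affinityTerm struct {
	TopologyKey string
}

// validateHardAntiAffinity mirrors the admission decision: return a
// "forbidden" error if any required anti-affinity term uses a
// disallowed topology key.
func validateHardAntiAffinity(podName string, required []affinityTerm) error {
	for _, term := range required {
		if term.TopologyKey != allowedKey {
			return fmt.Errorf("pods %q is forbidden: affinity.PodAntiAffinity.RequiredDuringScheduling has TopologyKey %s but only key %s is allowed",
				podName, term.TopologyKey, allowedKey)
		}
	}
	return nil
}

func main() {
	// The e2e test labels nodes with a zone key, so its pod is rejected.
	err := validateHardAntiAffinity("with-podantiaffinity-demo",
		[]affinityTerm{{TopologyKey: "e2e.inter-pod-affinity.kubernetes.io/zone"}})
	fmt.Println(err)
}
```

Disabling the plugin by default removes this restriction, so the tests' zone-scoped anti-affinity pods are admitted and scheduling proceeds normally.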
Clone Of:
Environment:
Last Closed: 2017-04-12 19:09:32 UTC
Target Upstream Version:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Origin (Github) 12782 0 None None None 2017-02-03 18:37:50 UTC
Red Hat Product Errata RHBA-2017:0884 0 normal SHIPPED_LIVE Red Hat OpenShift Container Platform 3.5 RPM Release Advisory 2017-04-12 22:50:07 UTC

Description Mike Fiedler 2017-01-16 21:10:03 UTC
Description of problem:

Two InterPodAffinity/InterPodAntiAffinity e2e tests are failing on OCP 3.5:

[It] validates that InterPodAntiAffinity is respected if matching 2
[It] validates that InterPod Affinity and AntiAffinity is respected if matching

Output for the two tests is below. Sample failure:

• Failure [44.074 seconds]
[k8s.io] SchedulerPredicates [Serial]
/builddir/build/BUILD/atomic-openshift-git-0.86a6117/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:826
  validates that InterPodAntiAffinity is respected if matching 2 [It]
  /builddir/build/BUILD/atomic-openshift-git-0.86a6117/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:564
 
  Expected error:
      <*errors.StatusError | 0xc42199c300>: {
          ErrStatus: {
              TypeMeta: {Kind: "", APIVersion: ""},
              ListMeta: {SelfLink: "", ResourceVersion: ""},
              Status: "Failure",
              Message: "pods \"with-podantiaffinity-c4a87965-dc27-11e6-a515-0208cfd56fdf\" is forbidden: affinity.PodAntiAffinity.RequiredDuringScheduling has TopologyKey e2e.inter-pod-affinity.kubernetes.io/zone but only key kubernetes.io/hostname is allowed",
              Reason: "Forbidden",
              Details: {
                  Name: "with-podantiaffinity-c4a87965-dc27-11e6-a515-0208cfd56fdf",
                  Group: "",
                  Kind: "pods",
                  Causes: nil,
                  RetryAfterSeconds: 0,
              },
              Code: 403,
          },
      }
      pods "with-podantiaffinity-c4a87965-dc27-11e6-a515-0208cfd56fdf" is forbidden: affinity.PodAntiAffinity.RequiredDuringScheduling has TopologyKey e2e.inter-pod-affinity.kubernetes.io/zone but only key kubernetes.io/hostname is allowed
  not to have occurred




Version-Release number of selected component (if applicable):  3.5.0.4


How reproducible: Always


Steps to Reproduce:
1.  Install cluster on AWS
2.  Run the failing e2es listed above
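The rejection can also be reproduced outside the e2e suite by submitting any pod whose required anti-affinity uses a non-hostname topology key. The manifest below is an illustrative sketch using the modern spec.affinity field; on the 3.5-era API the same structure was supplied as JSON in the scheduler.alpha.kubernetes.io/affinity annotation, which is what these e2e tests do.

```shell
# Write a minimal pod manifest whose hard anti-affinity uses a
# zone-scoped topology key (names are illustrative).
cat > antiaffinity-repro.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: antiaffinity-repro
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            security: S1
        topologyKey: failure-domain.beta.kubernetes.io/zone
  containers:
  - name: pause
    image: gcr.io/google_containers/pause-amd64:3.0
EOF
# With LimitPodHardAntiAffinityTopology enabled, creating this pod
# fails with a 403 because only kubernetes.io/hostname is allowed:
#   kubectl create -f antiaffinity-repro.yaml
echo "wrote antiaffinity-repro.yaml"
```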


Additional info:

[k8s.io] SchedulerPredicates [Serial]
/builddir/build/BUILD/atomic-openshift-git-0.86a6117/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:826
  validates that required NodeAffinity setting is respected if matching
  /builddir/build/BUILD/atomic-openshift-git-0.86a6117/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:373
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] SchedulerPredicates [Serial]
  validates that InterPodAntiAffinity is respected if matching 2
  /builddir/build/BUILD/atomic-openshift-git-0.86a6117/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:564
[BeforeEach] [Top Level]
  /builddir/build/BUILD/atomic-openshift-git-0.86a6117/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:47
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /builddir/build/BUILD/atomic-openshift-git-0.86a6117/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jan 16 15:09:51.890: INFO: >>> kubeConfig: /etc/origin/master/admin.kubeconfig

STEP: Building a namespace api object
Jan 16 15:09:51.970: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /builddir/build/BUILD/atomic-openshift-git-0.86a6117/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Jan 16 15:09:52.039: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 16 15:09:52.046: INFO: Waiting for terminating namespaces to be deleted...
Jan 16 15:09:52.051: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 16 15:09:52.054: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Jan 16 15:09:52.057: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 16 15:09:52.057: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Jan 16 15:09:52.057: INFO: 
Logging pods the kubelet thinks is on node ip-172-31-34-189.us-west-2.compute.internal before test
Jan 16 15:09:52.062: INFO: 
Logging pods the kubelet thinks is on node ip-172-31-35-171.us-west-2.compute.internal before test
Jan 16 15:09:52.067: INFO: registry-console-1-zt4kz from default started at 2017-01-16 08:15:30 -0500 EST (1 container statuses recorded)
Jan 16 15:09:52.067: INFO: 	Container registry-console ready: true, restart count 0
Jan 16 15:09:52.068: INFO: 
Logging pods the kubelet thinks is on node ip-172-31-40-79.us-west-2.compute.internal before test
Jan 16 15:09:52.073: INFO: docker-registry-5-1y3cm from default started at 2017-01-16 11:22:35 -0500 EST (1 container statuses recorded)
Jan 16 15:09:52.073: INFO: 	Container registry ready: true, restart count 0
Jan 16 15:09:52.073: INFO: router-1-98ocr from default started at 2017-01-16 08:15:25 -0500 EST (1 container statuses recorded)
Jan 16 15:09:52.073: INFO: 	Container router ready: true, restart count 0
[It] validates that InterPodAntiAffinity is respected if matching 2
  /builddir/build/BUILD/atomic-openshift-git-0.86a6117/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:564
STEP: Launching two pods on two distinct nodes to get two node names
STEP: Running RC which reserves host port
STEP: creating replication controller host-port in namespace e2e-tests-sched-pred-xglum
I0116 15:09:52.078941  116021 runners.go:103] Created replication controller with name: host-port, namespace: e2e-tests-sched-pred-xglum, replica count: 2
I0116 15:09:52.078980  116021 reflector.go:196] Starting reflector *api.Pod (0s) from github.com/openshift/origin/vendor/k8s.io/kubernetes/test/utils/pod_store.go:52
I0116 15:09:52.079014  116021 reflector.go:234] Listing and watching *api.Pod from github.com/openshift/origin/vendor/k8s.io/kubernetes/test/utils/pod_store.go:52
I0116 15:10:02.079197  116021 runners.go:103] host-port Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: Applying a random label to both nodes.
STEP: verifying the node has the label e2e.inter-pod-affinity.kubernetes.io/zone china-e2etest
STEP: verifying the node has the label e2e.inter-pod-affinity.kubernetes.io/zone china-e2etest
STEP: Trying to launch another pod on the first node with the service label.
STEP: Trying to launch another pod, now with podAntiAffinity with same Labels.
Jan 16 15:10:04.263: INFO: Unexpected error occurred: pods "with-podantiaffinity-c4a87965-dc27-11e6-a515-0208cfd56fdf" is forbidden: affinity.PodAntiAffinity.RequiredDuringScheduling has TopologyKey e2e.inter-pod-affinity.kubernetes.io/zone but only key kubernetes.io/hostname is allowed
STEP: removing the label e2e.inter-pod-affinity.kubernetes.io/zone off the node ip-172-31-35-171.us-west-2.compute.internal
STEP: verifying the node doesn't have the label e2e.inter-pod-affinity.kubernetes.io/zone
STEP: removing the label e2e.inter-pod-affinity.kubernetes.io/zone off the node ip-172-31-40-79.us-west-2.compute.internal
STEP: verifying the node doesn't have the label e2e.inter-pod-affinity.kubernetes.io/zone
STEP: deleting replication controller host-port in namespace e2e-tests-sched-pred-xglum
I0116 15:10:04.293563  116021 reflector.go:196] Starting reflector *api.Pod (0s) from github.com/openshift/origin/vendor/k8s.io/kubernetes/test/utils/pod_store.go:52
I0116 15:10:04.293617  116021 reflector.go:234] Listing and watching *api.Pod from github.com/openshift/origin/vendor/k8s.io/kubernetes/test/utils/pod_store.go:52
Jan 16 15:10:05.344: INFO: Deleting RC host-port took: 50.656747ms
Jan 16 15:10:05.344: INFO: Terminating RC host-port pods took: 30.556µs
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /builddir/build/BUILD/atomic-openshift-git-0.86a6117/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Collecting events from namespace "e2e-tests-sched-pred-xglum".
STEP: Found 18 events.
Jan 16 15:10:15.348: INFO: At 2017-01-16 15:09:52 -0500 EST - event for host-port: {replication-controller } SuccessfulCreate: Created pod: host-port-xqnkt
Jan 16 15:10:15.348: INFO: At 2017-01-16 15:09:52 -0500 EST - event for host-port: {replication-controller } SuccessfulCreate: Created pod: host-port-zoi01
Jan 16 15:10:15.348: INFO: At 2017-01-16 15:09:52 -0500 EST - event for host-port-xqnkt: {default-scheduler } Scheduled: Successfully assigned host-port-xqnkt to ip-172-31-40-79.us-west-2.compute.internal
Jan 16 15:10:15.348: INFO: At 2017-01-16 15:09:52 -0500 EST - event for host-port-zoi01: {default-scheduler } Scheduled: Successfully assigned host-port-zoi01 to ip-172-31-35-171.us-west-2.compute.internal
Jan 16 15:10:15.348: INFO: At 2017-01-16 15:09:53 -0500 EST - event for host-port-xqnkt: {kubelet ip-172-31-40-79.us-west-2.compute.internal} Pulled: Container image "gcr.io/google_containers/pause-amd64:3.0" already present on machine
Jan 16 15:10:15.348: INFO: At 2017-01-16 15:09:53 -0500 EST - event for host-port-xqnkt: {kubelet ip-172-31-40-79.us-west-2.compute.internal} Created: Created container with docker id 87ced1bc7a6e; Security:[seccomp=unconfined]
Jan 16 15:10:15.348: INFO: At 2017-01-16 15:09:53 -0500 EST - event for host-port-zoi01: {kubelet ip-172-31-35-171.us-west-2.compute.internal} Pulled: Container image "gcr.io/google_containers/pause-amd64:3.0" already present on machine
Jan 16 15:10:15.348: INFO: At 2017-01-16 15:09:53 -0500 EST - event for host-port-zoi01: {kubelet ip-172-31-35-171.us-west-2.compute.internal} Created: Created container with docker id 44927f8b7363; Security:[seccomp=unconfined]
Jan 16 15:10:15.348: INFO: At 2017-01-16 15:09:53 -0500 EST - event for host-port-zoi01: {kubelet ip-172-31-35-171.us-west-2.compute.internal} Started: Started container with docker id 44927f8b7363
Jan 16 15:10:15.348: INFO: At 2017-01-16 15:09:54 -0500 EST - event for host-port-xqnkt: {kubelet ip-172-31-40-79.us-west-2.compute.internal} Started: Started container with docker id 87ced1bc7a6e
Jan 16 15:10:15.348: INFO: At 2017-01-16 15:10:02 -0500 EST - event for with-label-c3601ca6-dc27-11e6-a515-0208cfd56fdf: {default-scheduler } Scheduled: Successfully assigned with-label-c3601ca6-dc27-11e6-a515-0208cfd56fdf to ip-172-31-35-171.us-west-2.compute.internal
Jan 16 15:10:15.348: INFO: At 2017-01-16 15:10:03 -0500 EST - event for with-label-c3601ca6-dc27-11e6-a515-0208cfd56fdf: {kubelet ip-172-31-35-171.us-west-2.compute.internal} Created: Created container with docker id e2bc17a54db9; Security:[seccomp=unconfined]
Jan 16 15:10:15.348: INFO: At 2017-01-16 15:10:03 -0500 EST - event for with-label-c3601ca6-dc27-11e6-a515-0208cfd56fdf: {kubelet ip-172-31-35-171.us-west-2.compute.internal} Pulled: Container image "gcr.io/google_containers/pause-amd64:3.0" already present on machine
Jan 16 15:10:15.348: INFO: At 2017-01-16 15:10:04 -0500 EST - event for with-label-c3601ca6-dc27-11e6-a515-0208cfd56fdf: {kubelet ip-172-31-35-171.us-west-2.compute.internal} Started: Started container with docker id e2bc17a54db9
Jan 16 15:10:15.348: INFO: At 2017-01-16 15:10:05 -0500 EST - event for host-port: {replication-controller } SuccessfulDelete: Deleted pod: host-port-xqnkt
Jan 16 15:10:15.348: INFO: At 2017-01-16 15:10:05 -0500 EST - event for host-port: {replication-controller } SuccessfulDelete: Deleted pod: host-port-zoi01
Jan 16 15:10:15.348: INFO: At 2017-01-16 15:10:05 -0500 EST - event for host-port-xqnkt: {kubelet ip-172-31-40-79.us-west-2.compute.internal} Killing: Killing container with docker id 87ced1bc7a6e: Need to kill pod.
Jan 16 15:10:15.348: INFO: At 2017-01-16 15:10:05 -0500 EST - event for host-port-zoi01: {kubelet ip-172-31-35-171.us-west-2.compute.internal} Killing: Killing container with docker id 44927f8b7363: Need to kill pod.
Jan 16 15:10:15.358: INFO: POD                                              NODE                                         PHASE    GRACE  CONDITIONS
Jan 16 15:10:15.358: INFO: docker-registry-5-1y3cm                          ip-172-31-40-79.us-west-2.compute.internal   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-16 11:22:35 -0500 EST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-01-16 11:22:38 -0500 EST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-16 11:22:35 -0500 EST  }]
Jan 16 15:10:15.358: INFO: registry-console-1-zt4kz                         ip-172-31-35-171.us-west-2.compute.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-16 08:15:30 -0500 EST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-01-16 08:16:10 -0500 EST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-16 08:15:30 -0500 EST  }]
Jan 16 15:10:15.358: INFO: router-1-98ocr                                   ip-172-31-40-79.us-west-2.compute.internal   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-16 08:15:25 -0500 EST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-01-16 08:15:45 -0500 EST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-16 08:15:25 -0500 EST  }]
Jan 16 15:10:15.358: INFO: with-label-c3601ca6-dc27-11e6-a515-0208cfd56fdf  ip-172-31-35-171.us-west-2.compute.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-16 15:10:02 -0500 EST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-01-16 15:10:04 -0500 EST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-16 15:10:02 -0500 EST  }]
Jan 16 15:10:15.358: INFO: 
Jan 16 15:10:15.362: INFO: 
Logging node info for node ip-172-31-34-189.us-west-2.compute.internal
Jan 16 15:10:15.365: INFO: Node Info: &TypeMeta{Kind:,APIVersion:,}
Jan 16 15:10:15.365: INFO: 
Logging kubelet events for node ip-172-31-34-189.us-west-2.compute.internal
Jan 16 15:10:15.367: INFO: 
Logging pods the kubelet thinks is on node ip-172-31-34-189.us-west-2.compute.internal
W0116 15:10:15.375846  116021 metrics_grabber.go:73] Master node is not registered. Grabbing metrics from Scheduler and ControllerManager is disabled.
Jan 16 15:10:15.473: INFO: 
Latency metrics for node ip-172-31-34-189.us-west-2.compute.internal
Jan 16 15:10:15.473: INFO: 
Logging node info for node ip-172-31-35-171.us-west-2.compute.internal
Jan 16 15:10:15.476: INFO: Node Info: &TypeMeta{Kind:,APIVersion:,}
Jan 16 15:10:15.476: INFO: 
Logging kubelet events for node ip-172-31-35-171.us-west-2.compute.internal
Jan 16 15:10:15.478: INFO: 
Logging pods the kubelet thinks is on node ip-172-31-35-171.us-west-2.compute.internal
Jan 16 15:10:15.483: INFO: registry-console-1-zt4kz started at 2017-01-16 08:15:30 -0500 EST (0+1 container statuses recorded)
Jan 16 15:10:15.483: INFO: 	Container registry-console ready: true, restart count 0
Jan 16 15:10:15.483: INFO: with-label-c3601ca6-dc27-11e6-a515-0208cfd56fdf started at 2017-01-16 15:10:02 -0500 EST (0+1 container statuses recorded)
Jan 16 15:10:15.483: INFO: 	Container pfpod ready: true, restart count 0
W0116 15:10:15.487237  116021 metrics_grabber.go:73] Master node is not registered. Grabbing metrics from Scheduler and ControllerManager is disabled.
Jan 16 15:10:15.636: INFO: 
Latency metrics for node ip-172-31-35-171.us-west-2.compute.internal
Jan 16 15:10:15.636: INFO: 
Logging node info for node ip-172-31-40-79.us-west-2.compute.internal
Jan 16 15:10:15.639: INFO: Node Info: &TypeMeta{Kind:,APIVersion:,}
Jan 16 15:10:15.639: INFO: 
Logging kubelet events for node ip-172-31-40-79.us-west-2.compute.internal
Jan 16 15:10:15.641: INFO: 
Logging pods the kubelet thinks is on node ip-172-31-40-79.us-west-2.compute.internal
Jan 16 15:10:15.647: INFO: router-1-98ocr started at 2017-01-16 08:15:25 -0500 EST (0+1 container statuses recorded)
Jan 16 15:10:15.647: INFO: 	Container router ready: true, restart count 0
Jan 16 15:10:15.647: INFO: docker-registry-5-1y3cm started at 2017-01-16 11:22:35 -0500 EST (0+1 container statuses recorded)
Jan 16 15:10:15.647: INFO: 	Container registry ready: true, restart count 0
W0116 15:10:15.650623  116021 metrics_grabber.go:73] Master node is not registered. Grabbing metrics from Scheduler and ControllerManager is disabled.
Jan 16 15:10:15.798: INFO: 
Latency metrics for node ip-172-31-40-79.us-west-2.compute.internal
Jan 16 15:10:15.798: INFO: 
Logging node info for node ip-172-31-58-91.us-west-2.compute.internal
Jan 16 15:10:15.800: INFO: Node Info: &TypeMeta{Kind:,APIVersion:,}
Jan 16 15:10:15.800: INFO: 
Logging kubelet events for node ip-172-31-58-91.us-west-2.compute.internal
Jan 16 15:10:15.802: INFO: 
Logging pods the kubelet thinks is on node ip-172-31-58-91.us-west-2.compute.internal
W0116 15:10:15.812928  116021 metrics_grabber.go:73] Master node is not registered. Grabbing metrics from Scheduler and ControllerManager is disabled.
Jan 16 15:10:15.890: INFO: 
Latency metrics for node ip-172-31-58-91.us-west-2.compute.internal
STEP: Dumping a list of prepulled images on each node
Jan 16 15:10:15.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-xglum" for this suite.
Jan 16 15:10:35.942: INFO: namespace: e2e-tests-sched-pred-xglum, resource: bindings, ignored listing per whitelist
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /builddir/build/BUILD/atomic-openshift-git-0.86a6117/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:67
I0116 15:10:35.964419  116021 request.go:768] Error in request: resource name may not be empty

• Failure [44.074 seconds]
[k8s.io] SchedulerPredicates [Serial]
/builddir/build/BUILD/atomic-openshift-git-0.86a6117/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:826
  validates that InterPodAntiAffinity is respected if matching 2 [It]
  /builddir/build/BUILD/atomic-openshift-git-0.86a6117/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:564

  Expected error:
      <*errors.StatusError | 0xc42199c300>: {
          ErrStatus: {
              TypeMeta: {Kind: "", APIVersion: ""},
              ListMeta: {SelfLink: "", ResourceVersion: ""},
              Status: "Failure",
              Message: "pods \"with-podantiaffinity-c4a87965-dc27-11e6-a515-0208cfd56fdf\" is forbidden: affinity.PodAntiAffinity.RequiredDuringScheduling has TopologyKey e2e.inter-pod-affinity.kubernetes.io/zone but only key kubernetes.io/hostname is allowed",
              Reason: "Forbidden",
              Details: {
                  Name: "with-podantiaffinity-c4a87965-dc27-11e6-a515-0208cfd56fdf",
                  Group: "",
                  Kind: "pods",
                  Causes: nil,
                  RetryAfterSeconds: 0,
              },
              Code: 403,
          },
      }
      pods "with-podantiaffinity-c4a87965-dc27-11e6-a515-0208cfd56fdf" is forbidden: affinity.PodAntiAffinity.RequiredDuringScheduling has TopologyKey e2e.inter-pod-affinity.kubernetes.io/zone but only key kubernetes.io/hostname is allowed
  not to have occurred


====================================================


[k8s.io] SchedulerPredicates [Serial]
/builddir/build/BUILD/atomic-openshift-git-0.86a6117/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:826
  validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work
  /builddir/build/BUILD/atomic-openshift-git-0.86a6117/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:664
------------------------------
SSSSSSS
------------------------------
[k8s.io] SchedulerPredicates [Serial]
  validates that InterPod Affinity and AntiAffinity is respected if matching
  /builddir/build/BUILD/atomic-openshift-git-0.86a6117/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:640
[BeforeEach] [Top Level]
  /builddir/build/BUILD/atomic-openshift-git-0.86a6117/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:47
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /builddir/build/BUILD/atomic-openshift-git-0.86a6117/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jan 16 15:15:26.816: INFO: >>> kubeConfig: /etc/origin/master/admin.kubeconfig

STEP: Building a namespace api object
Jan 16 15:15:26.901: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /builddir/build/BUILD/atomic-openshift-git-0.86a6117/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Jan 16 15:15:26.968: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 16 15:15:26.975: INFO: Waiting for terminating namespaces to be deleted...
Jan 16 15:15:26.980: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 16 15:15:26.982: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Jan 16 15:15:26.986: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 16 15:15:26.986: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Jan 16 15:15:26.986: INFO: 
Logging pods the kubelet thinks is on node ip-172-31-34-189.us-west-2.compute.internal before test
Jan 16 15:15:26.991: INFO: 
Logging pods the kubelet thinks is on node ip-172-31-35-171.us-west-2.compute.internal before test
Jan 16 15:15:26.996: INFO: registry-console-1-zt4kz from default started at 2017-01-16 08:15:30 -0500 EST (1 container statuses recorded)
Jan 16 15:15:26.996: INFO: 	Container registry-console ready: true, restart count 0
Jan 16 15:15:26.996: INFO: 
Logging pods the kubelet thinks is on node ip-172-31-40-79.us-west-2.compute.internal before test
Jan 16 15:15:27.002: INFO: router-1-98ocr from default started at 2017-01-16 08:15:25 -0500 EST (1 container statuses recorded)
Jan 16 15:15:27.002: INFO: 	Container router ready: true, restart count 0
Jan 16 15:15:27.002: INFO: docker-registry-5-1y3cm from default started at 2017-01-16 11:22:35 -0500 EST (1 container statuses recorded)
Jan 16 15:15:27.002: INFO: 	Container registry ready: true, restart count 0
[It] validates that InterPod Affinity and AntiAffinity is respected if matching
  /builddir/build/BUILD/atomic-openshift-git-0.86a6117/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:640
STEP: Trying to launch a pod with a label to get a node which can launch it.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label e2e.inter-pod-affinity.kubernetes.io/zone e2e-testing
STEP: Trying to launch the pod, now with Pod affinity and anti affinity.
Jan 16 15:15:29.965: INFO: Unexpected error occurred: pods "with-podantiaffinity-86ca2603-dc28-11e6-a515-0208cfd56fdf" is forbidden: affinity.PodAntiAffinity.RequiredDuringScheduling has TopologyKey e2e.inter-pod-affinity.kubernetes.io/zone but only key kubernetes.io/hostname is allowed
STEP: removing the label e2e.inter-pod-affinity.kubernetes.io/zone off the node ip-172-31-34-189.us-west-2.compute.internal
STEP: verifying the node doesn't have the label e2e.inter-pod-affinity.kubernetes.io/zone
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /builddir/build/BUILD/atomic-openshift-git-0.86a6117/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Collecting events from namespace "e2e-tests-sched-pred-sj8ls".
STEP: Found 4 events.
Jan 16 15:15:29.981: INFO: At 2017-01-16 15:15:27 -0500 EST - event for with-label-85073525-dc28-11e6-a515-0208cfd56fdf: {default-scheduler } Scheduled: Successfully assigned with-label-85073525-dc28-11e6-a515-0208cfd56fdf to ip-172-31-34-189.us-west-2.compute.internal
Jan 16 15:15:29.981: INFO: At 2017-01-16 15:15:28 -0500 EST - event for with-label-85073525-dc28-11e6-a515-0208cfd56fdf: {kubelet ip-172-31-34-189.us-west-2.compute.internal} Pulled: Container image "gcr.io/google_containers/pause-amd64:3.0" already present on machine
Jan 16 15:15:29.981: INFO: At 2017-01-16 15:15:28 -0500 EST - event for with-label-85073525-dc28-11e6-a515-0208cfd56fdf: {kubelet ip-172-31-34-189.us-west-2.compute.internal} Created: Created container with docker id 1124ca57f561; Security:[seccomp=unconfined]
Jan 16 15:15:29.981: INFO: At 2017-01-16 15:15:29 -0500 EST - event for with-label-85073525-dc28-11e6-a515-0208cfd56fdf: {kubelet ip-172-31-34-189.us-west-2.compute.internal} Started: Started container with docker id 1124ca57f561
Jan 16 15:15:29.991: INFO: POD                                              NODE                                         PHASE    GRACE  CONDITIONS
Jan 16 15:15:29.991: INFO: docker-registry-5-1y3cm                          ip-172-31-40-79.us-west-2.compute.internal   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-16 11:22:35 -0500 EST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-01-16 11:22:38 -0500 EST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-16 11:22:35 -0500 EST  }]
Jan 16 15:15:29.991: INFO: registry-console-1-zt4kz                         ip-172-31-35-171.us-west-2.compute.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-16 08:15:30 -0500 EST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-01-16 08:16:10 -0500 EST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-16 08:15:30 -0500 EST  }]
Jan 16 15:15:29.991: INFO: router-1-98ocr                                   ip-172-31-40-79.us-west-2.compute.internal   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-16 08:15:25 -0500 EST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-01-16 08:15:45 -0500 EST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-16 08:15:25 -0500 EST  }]
Jan 16 15:15:29.991: INFO: with-label-85073525-dc28-11e6-a515-0208cfd56fdf  ip-172-31-34-189.us-west-2.compute.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-16 15:15:27 -0500 EST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-01-16 15:15:29 -0500 EST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-16 15:15:27 -0500 EST  }]
Jan 16 15:15:29.991: INFO: 
Jan 16 15:15:29.994: INFO: 
Logging node info for node ip-172-31-34-189.us-west-2.compute.internal
Jan 16 15:15:29.997: INFO: Node Info: &TypeMeta{Kind:,APIVersion:,}
Jan 16 15:15:29.997: INFO: 
Logging kubelet events for node ip-172-31-34-189.us-west-2.compute.internal
Jan 16 15:15:29.999: INFO: 
Logging pods the kubelet thinks is on node ip-172-31-34-189.us-west-2.compute.internal
Jan 16 15:15:30.005: INFO: with-label-85073525-dc28-11e6-a515-0208cfd56fdf started at 2017-01-16 15:15:27 -0500 EST (0+1 container statuses recorded)
Jan 16 15:15:30.005: INFO: 	Container pfpod ready: true, restart count 0
W0116 15:15:30.008575  116021 metrics_grabber.go:73] Master node is not registered. Grabbing metrics from Scheduler and ControllerManager is disabled.
Jan 16 15:15:30.161: INFO: 
Latency metrics for node ip-172-31-34-189.us-west-2.compute.internal
Jan 16 15:15:30.161: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:11.488931s}
Jan 16 15:15:30.161: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:11.309982s}
Jan 16 15:15:30.161: INFO: 
Logging node info for node ip-172-31-35-171.us-west-2.compute.internal
Jan 16 15:15:30.164: INFO: Node Info: &TypeMeta{Kind:,APIVersion:,}
Jan 16 15:15:30.164: INFO: 
Logging kubelet events for node ip-172-31-35-171.us-west-2.compute.internal
Jan 16 15:15:30.166: INFO: 
Logging pods the kubelet thinks is on node ip-172-31-35-171.us-west-2.compute.internal
Jan 16 15:15:30.172: INFO: registry-console-1-zt4kz started at 2017-01-16 08:15:30 -0500 EST (0+1 container statuses recorded)
Jan 16 15:15:30.172: INFO: 	Container registry-console ready: true, restart count 0
W0116 15:15:30.175457  116021 metrics_grabber.go:73] Master node is not registered. Grabbing metrics from Scheduler and ControllerManager is disabled.
Jan 16 15:15:30.313: INFO: 
Latency metrics for node ip-172-31-35-171.us-west-2.compute.internal
Jan 16 15:15:30.313: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:15.173499s}
Jan 16 15:15:30.313: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:11.105399s}
Jan 16 15:15:30.313: INFO: 
Logging node info for node ip-172-31-40-79.us-west-2.compute.internal
Jan 16 15:15:30.316: INFO: Node Info: &TypeMeta{Kind:,APIVersion:,}
Jan 16 15:15:30.316: INFO: 
Logging kubelet events for node ip-172-31-40-79.us-west-2.compute.internal
Jan 16 15:15:30.318: INFO: 
Logging pods the kubelet thinks is on node ip-172-31-40-79.us-west-2.compute.internal
Jan 16 15:15:30.324: INFO: router-1-98ocr started at 2017-01-16 08:15:25 -0500 EST (0+1 container statuses recorded)
Jan 16 15:15:30.324: INFO: 	Container router ready: true, restart count 0
Jan 16 15:15:30.324: INFO: docker-registry-5-1y3cm started at 2017-01-16 11:22:35 -0500 EST (0+1 container statuses recorded)
Jan 16 15:15:30.324: INFO: 	Container registry ready: true, restart count 0
W0116 15:15:30.328204  116021 metrics_grabber.go:73] Master node is not registered. Grabbing metrics from Scheduler and ControllerManager is disabled.
Jan 16 15:15:30.427: INFO: 
Latency metrics for node ip-172-31-40-79.us-west-2.compute.internal
Jan 16 15:15:30.427: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:10.545574s}
Jan 16 15:15:30.427: INFO: 
Logging node info for node ip-172-31-58-91.us-west-2.compute.internal
Jan 16 15:15:30.430: INFO: Node Info: &TypeMeta{Kind:,APIVersion:,}
Jan 16 15:15:30.430: INFO: 
Logging kubelet events for node ip-172-31-58-91.us-west-2.compute.internal
Jan 16 15:15:30.432: INFO: 
Logging pods the kubelet thinks is on node ip-172-31-58-91.us-west-2.compute.internal
W0116 15:15:30.441733  116021 metrics_grabber.go:73] Master node is not registered. Grabbing metrics from Scheduler and ControllerManager is disabled.
Jan 16 15:15:30.566: INFO: 
Latency metrics for node ip-172-31-58-91.us-west-2.compute.internal
STEP: Dumping a list of prepulled images on each node
Jan 16 15:15:30.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-sj8ls" for this suite.
Jan 16 15:15:50.620: INFO: namespace: e2e-tests-sched-pred-sj8ls, resource: bindings, ignored listing per whitelist
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /builddir/build/BUILD/atomic-openshift-git-0.86a6117/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:67
I0116 15:15:50.642091  116021 request.go:768] Error in request: resource name may not be empty

• Failure [23.825 seconds]
[k8s.io] SchedulerPredicates [Serial]
/builddir/build/BUILD/atomic-openshift-git-0.86a6117/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:826
  validates that InterPod Affinity and AntiAffinity is respected if matching [It]
  /builddir/build/BUILD/atomic-openshift-git-0.86a6117/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:640

  Expected error:
      <*errors.StatusError | 0xc421bd0280>: {
          ErrStatus: {
              TypeMeta: {Kind: "", APIVersion: ""},
              ListMeta: {SelfLink: "", ResourceVersion: ""},
              Status: "Failure",
              Message: "pods \"with-podantiaffinity-86ca2603-dc28-11e6-a515-0208cfd56fdf\" is forbidden: affinity.PodAntiAffinity.RequiredDuringScheduling has TopologyKey e2e.inter-pod-affinity.kubernetes.io/zone but only key kubernetes.io/hostname is allowed",
              Reason: "Forbidden",
              Details: {
                  Name: "with-podantiaffinity-86ca2603-dc28-11e6-a515-0208cfd56fdf",
                  Group: "",
                  Kind: "pods",
                  Causes: nil,
                  RetryAfterSeconds: 0,
              },
              Code: 403,
          },
      }
      pods "with-podantiaffinity-86ca2603-dc28-11e6-a515-0208cfd56fdf" is forbidden: affinity.PodAntiAffinity.RequiredDuringScheduling has TopologyKey e2e.inter-pod-affinity.kubernetes.io/zone but only key kubernetes.io/hostname is allowed
  not to have occurred
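The rejected pod uses a required pod anti-affinity term keyed on a zone label. A minimal sketch of that shape (names and image hypothetical, written in the affinity-field syntax of newer Kubernetes releases; at the time of this report affinity was still expressed via annotation) that the LimitPodHardAntiAffinityTopology admission plugin would forbid, since it only allows kubernetes.io/hostname as the topologyKey for requiredDuringScheduling anti-affinity:

```yaml
# Hypothetical sketch of the pod shape the e2e test creates: a
# requiredDuringScheduling pod anti-affinity term whose topologyKey is a
# zone label rather than kubernetes.io/hostname, which the
# LimitPodHardAntiAffinityTopology admission plugin rejects with a 403.
apiVersion: v1
kind: Pod
metadata:
  name: with-podantiaffinity-example   # hypothetical name
spec:
  containers:
  - name: pause
    image: kubernetes/pause            # placeholder image
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values: ["S1"]
        # Forbidden by the plugin: only kubernetes.io/hostname is allowed here.
        topologyKey: e2e.inter-pod-affinity.kubernetes.io/zone
```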

Comment 1 Avesh Agarwal 2017-01-18 22:38:37 UTC
I tested the latest upstream Kubernetes and these 2 tests pass; I will test on OCP now.

[root@fedora24 kubernetes]# KUBERNETES_PROVIDER=local go run hack/e2e.go -v --test --test_args="--ginkgo.focus="InterPod*Affinity""
2017/01/18 17:33:15 e2e.go:946: Running: ./cluster/kubectl.sh version --match-server-version=false
Client Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.3482+dd2cca470fff7e-dirty", GitCommit:"dd2cca470fff7edc9deaa8de40e29cdcdc9f611c", GitTreeState:"dirty", BuildDate:"2017-01-18T22:00:44Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.3482+dd2cca470fff7e-dirty", GitCommit:"dd2cca470fff7edc9deaa8de40e29cdcdc9f611c", GitTreeState:"dirty", BuildDate:"2017-01-18T22:00:44Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
2017/01/18 17:33:15 e2e.go:948: Step './cluster/kubectl.sh version --match-server-version=false' finished in 70.654786ms
2017/01/18 17:33:15 e2e.go:946: Running: ./hack/e2e-internal/e2e-status.sh
Local doesn't need special preparations for e2e tests
Client Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.3482+dd2cca470fff7e-dirty", GitCommit:"dd2cca470fff7edc9deaa8de40e29cdcdc9f611c", GitTreeState:"dirty", BuildDate:"2017-01-18T22:00:44Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.3482+dd2cca470fff7e-dirty", GitCommit:"dd2cca470fff7edc9deaa8de40e29cdcdc9f611c", GitTreeState:"dirty", BuildDate:"2017-01-18T22:00:44Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
2017/01/18 17:33:15 e2e.go:948: Step './hack/e2e-internal/e2e-status.sh' finished in 67.445437ms
2017/01/18 17:33:15 e2e.go:946: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=InterPod*Affinity
Setting up for KUBERNETES_PROVIDER="local".
Local doesn't need special preparations for e2e tests
Jan 18 17:33:16.113: INFO: Overriding default scale value of zero to 1
Jan 18 17:33:16.113: INFO: Overriding default milliseconds value of zero to 5000
I0118 17:33:16.151390   10555 e2e.go:250] Starting e2e run "1a97a6b8-ddce-11e6-bc46-525400b5933e" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1484778795 - Will randomize all specs
Will run 2 of 494 specs

Jan 18 17:33:16.174: INFO: >>> kubeConfig: /root/.kube/config

Jan 18 17:33:16.177: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Jan 18 17:33:16.178: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 18 17:33:16.180: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Jan 18 17:33:16.182: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 18 17:33:16.182: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Jan 18 17:33:16.183: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Jan 18 17:33:16.183: INFO: Dumping network health container logs from all nodes
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] SchedulerPredicates [Serial] 
  validates that InterPodAffinity is respected if matching
  /root/upstream-code/gocode/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:498
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /root/upstream-code/gocode/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jan 18 17:33:16.186: INFO: >>> kubeConfig: /root/.kube/config

STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /root/upstream-code/gocode/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:102
Jan 18 17:33:16.209: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 18 17:34:16.215: INFO: Waiting for terminating namespaces to be deleted...
Jan 18 17:34:16.217: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 18 17:34:16.219: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Jan 18 17:34:16.220: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 18 17:34:16.220: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Jan 18 17:34:16.220: INFO: 
Logging pods the kubelet thinks is on node 127.0.0.1 before test
[It] validates that InterPodAffinity is respected if matching
  /root/upstream-code/gocode/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:498
STEP: Trying to launch a pod with a label to get a node which can launch it.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label e2e.inter-pod-affinity.kubernetes.io/zone china-e2etest
STEP: Trying to launch the pod, now with podAffinity.
STEP: removing the label e2e.inter-pod-affinity.kubernetes.io/zone off the node 127.0.0.1
STEP: verifying the node doesn't have the label e2e.inter-pod-affinity.kubernetes.io/zone
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /root/upstream-code/gocode/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 18 17:34:28.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-gmz8c" for this suite.
Jan 18 17:34:53.524: INFO: namespace: e2e-tests-sched-pred-gmz8c, resource: bindings, ignored listing per whitelist
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /root/upstream-code/gocode/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:69

• [SLOW TEST:97.351 seconds]
[k8s.io] SchedulerPredicates [Serial]
/root/upstream-code/gocode/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:827
  validates that InterPodAffinity is respected if matching
  /root/upstream-code/gocode/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:498
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] SchedulerPredicates [Serial] 
  validates that InterPodAffinity is respected if matching with multiple Affinities
  /root/upstream-code/gocode/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:615
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /root/upstream-code/gocode/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jan 18 17:34:53.538: INFO: >>> kubeConfig: /root/.kube/config

STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /root/upstream-code/gocode/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:102
Jan 18 17:34:53.571: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 18 17:35:53.576: INFO: Waiting for terminating namespaces to be deleted...
Jan 18 17:35:53.577: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 18 17:35:53.579: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Jan 18 17:35:53.580: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 18 17:35:53.580: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Jan 18 17:35:53.580: INFO: 
Logging pods the kubelet thinks is on node 127.0.0.1 before test
Jan 18 17:35:53.583: INFO: with-label-3e75d219-ddce-11e6-bc46-525400b5933e from e2e-tests-sched-pred-gmz8c started at 2017-01-18 17:34:16 -0500 EST (1 container statuses recorded)
Jan 18 17:35:53.583: INFO: 	Container pfpod ready: false, restart count 0
Jan 18 17:35:53.583: INFO: with-podaffinity-435463d6-ddce-11e6-bc46-525400b5933e from e2e-tests-sched-pred-gmz8c started at 2017-01-18 17:34:24 -0500 EST (1 container statuses recorded)
Jan 18 17:35:53.583: INFO: 	Container pfpod ready: false, restart count 0
[It] validates that InterPodAffinity is respected if matching with multiple Affinities
  /root/upstream-code/gocode/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:615
STEP: Trying to launch a pod with a label to get a node which can launch it.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label e2e.inter-pod-affinity.kubernetes.io/zone kubernetes-e2e
STEP: Trying to launch the pod, now with multiple pod affinities with diff LabelOperators.
STEP: removing the label e2e.inter-pod-affinity.kubernetes.io/zone off the node 127.0.0.1
STEP: verifying the node doesn't have the label e2e.inter-pod-affinity.kubernetes.io/zone
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /root/upstream-code/gocode/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 18 17:35:59.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-v62rs" for this suite.
Jan 18 17:36:24.874: INFO: namespace: e2e-tests-sched-pred-v62rs, resource: bindings, ignored listing per whitelist
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /root/upstream-code/gocode/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:69

• [SLOW TEST:91.349 seconds]
[k8s.io] SchedulerPredicates [Serial]
/root/upstream-code/gocode/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:827
  validates that InterPodAffinity is respected if matching with multiple Affinities
  /root/upstream-code/gocode/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:615
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Ran 2 of 494 Specs in 188.717 seconds
SUCCESS! -- 2 Passed | 0 Failed | 0 Pending | 492 Skipped PASS

Ginkgo ran 1 suite in 3m8.921199721s
Test Suite Passed
2017/01/18 17:36:24 e2e.go:948: Step './hack/ginkgo-e2e.sh --ginkgo.focus=InterPod*Affinity' finished in 3m8.957169789s

Comment 2 Avesh Agarwal 2017-01-26 18:23:17 UTC
I have sent a PR upstream to fix the following e2e: "validates that InterPodAntiAffinity is respected if matching 2". In upstream Kubernetes this e2e test was failing for a different reason than the one reported here.

https://github.com/kubernetes/kubernetes/pull/40534

Comment 5 Avesh Agarwal 2017-01-27 13:27:15 UTC
Mike,

Could you check if the admission plugin LimitPodHardAntiAffinityTopology is enabled when running these tests?

These tests are designed around zones and pod anti-affinity, which puts them in direct conflict with the LimitPodHardAntiAffinityTopology plugin.

I could change the tests, but then they would no longer exercise zones and pod anti-affinity together, which is what they are meant to test.

Comment 7 Mike Fiedler 2017-01-27 15:39:22 UTC
I get the following at startup. The LimitPodHardAntiAffinityTopology plugin is not in the list of plugins that are NOT enabled, and I don't see a corresponding list of the plugins that ARE enabled.

messages:Jan 27 10:25:23 ip-172-31-12-197 atomic-openshift-master: I0127 10:25:23.299335    2516 admission.go:105] Admission plugin ProjectRequestLimit is not enabled.  It will not be started.
messages:Jan 27 10:25:23 ip-172-31-12-197 atomic-openshift-master: I0127 10:25:23.299366    2516 admission.go:105] Admission plugin openshift.io/RestrictSubjectBindings is not enabled.  It will not be started.
messages:Jan 27 10:25:23 ip-172-31-12-197 atomic-openshift-master: I0127 10:25:23.299376    2516 admission.go:105] Admission plugin PodNodeConstraints is not enabled.  It will not be started.
messages:Jan 27 10:25:23 ip-172-31-12-197 atomic-openshift-master: I0127 10:25:23.299417    2516 admission.go:105] Admission plugin RunOnceDuration is not enabled.  It will not be started.
messages:Jan 27 10:25:23 ip-172-31-12-197 atomic-openshift-master: I0127 10:25:23.299427    2516 admission.go:105] Admission plugin PodNodeConstraints is not enabled.  It will not be started.
messages:Jan 27 10:25:23 ip-172-31-12-197 atomic-openshift-master: I0127 10:25:23.299439    2516 admission.go:105] Admission plugin ClusterResourceOverride is not enabled.  It will not be started.
messages:Jan 27 10:25:23 ip-172-31-12-197 atomic-openshift-master: I0127 10:25:23.299474    2516 admission.go:105] Admission plugin openshift.io/ImagePolicy is not enabled.  It will not be started.
messages:Jan 27 10:25:23 ip-172-31-12-197 atomic-openshift-master: I0127 10:25:23.299483    2516 admission.go:105] Admission plugin ImagePolicyWebhook is not enabled.  It will not be started.
messages:Jan 27 10:25:23 ip-172-31-12-197 atomic-openshift-master: I0127 10:25:23.299567    2516 admission.go:105] Admission plugin AlwaysPullImages is not enabled.  It will not be started.

Comment 8 Avesh Agarwal 2017-01-30 14:59:30 UTC
I think running these e2e tests will require disabling the plugin, which is enabled by default:
https://github.com/openshift/origin/blob/master/pkg/cmd/server/start/admission.go#L68

I have created origin issue: https://github.com/openshift/origin/issues/12712
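For clusters where the plugin should stay available but off, it can also be disabled explicitly in the master configuration. A sketch of the relevant master-config.yaml fragment, assuming the standard OpenShift 3.x DefaultAdmissionConfig disable mechanism:

```yaml
# master-config.yaml fragment (sketch): turn off the
# LimitPodHardAntiAffinityTopology admission plugin by supplying a
# DefaultAdmissionConfig stanza with disable: true.
kubernetesMasterConfig:
  admissionConfig:
    pluginConfig:
      LimitPodHardAntiAffinityTopology:
        configuration:
          kind: DefaultAdmissionConfig
          apiVersion: v1
          disable: true
```

A master restart is needed for the change to take effect.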

Comment 9 Avesh Agarwal 2017-02-02 22:27:59 UTC
PR to fix this issue:

https://github.com/openshift/origin/pull/12782

Comment 10 Troy Dawson 2017-02-06 19:25:58 UTC
This has been merged into OCP and is in OCP v3.5.0.17 or newer.

Comment 12 Mike Fiedler 2017-02-07 16:19:21 UTC
Verified on 3.5.0.17

Comment 14 errata-xmlrpc 2017-04-12 19:09:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:0884

