+++ This bug was initially created as a clone of Bug #1268437 +++

Description of problem:
oadm registry fails to deploy when the --selector option is passed.

Version-Release number of selected component (if applicable):
3.0.2

How reproducible:
100%

Steps to Reproduce:
- run oadm registry --config=/etc/openshift/master/admin.kubeconfig --credentials=/etc/openshift/master/openshift-registry.kubeconfig --images='registry.access.redhat.com/openshift3/ose-${component}:${version}' --selector='region=primary' --service-account=registry

Actual results:
Error creating deployer pod for default/docker-registry-1: pods "docker-registry-1-deploy" is forbidden: pod node label selector conflicts with its project node label selector

Expected results:
- Registry is deployed to the primary region

Additional info:
oadm registry --config=/etc/openshift/master/admin.kubeconfig --credentials=/etc/openshift/master/openshift-registry.kubeconfig --images='registry.access.redhat.com/openshift3/ose-${component}:${version}' --selector='region=primary' --service-account=registry

[root@master ~]# oc get project default -o yaml
apiVersion: v1
kind: Project
metadata:
  annotations:
    openshift.io/sa.initialized-roles: "true"
    openshift.io/sa.scc.mcs: s0:c1,c0
    openshift.io/sa.scc.uid-range: 1000000000/10000
  creationTimestamp: 2015-10-01T21:59:21Z
  name: default
  resourceVersion: "174"
  selfLink: /oapi/v1/projects/default
  uid: abb57420-6887-11e5-b0b8-fa163e8561a6
spec:
  finalizers:
  - kubernetes
  - openshift.io/origin
status:
  phase: Active

[root@master ~]# oc get nodes
NAME               LABELS                                                              STATUS                     AGE
master.vault.com   kubernetes.io/hostname=master.vault.com,region=primary,zone=vault   Ready,SchedulingDisabled   21h
node1.vault.com    kubernetes.io/hostname=node1.vault.com,region=primary,zone=vault    Ready                      21h
node2.vault.com    kubernetes.io/hostname=node2.vault.com,region=primary,zone=vault    Ready                      21h
Not a bug.
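For context on that resolution: in OpenShift 3.x, a pod's node selector must not conflict with its project's node selector, which for the default project is typically inherited from the cluster-wide default in /etc/openshift/master/master-config.yaml. A minimal sketch of the relevant fragment, assuming a conflicting cluster-wide default was set (the "region=infra" value here is purely illustrative, not taken from this report):

  # /etc/openshift/master/master-config.yaml (fragment)
  # Hypothetical cluster-wide default node selector; any value that does not
  # match the --selector passed to `oadm registry` causes the deployer pod
  # to be rejected with the "conflicts with its project node label selector"
  # error shown above.
  projectConfig:
    defaultNodeSelector: "region=infra"

Aligning the two selectors, either by changing this default or by overriding it for the default project with the openshift.io/node-selector annotation (e.g. oc annotate namespace default openshift.io/node-selector="region=primary" --overwrite), allows the deployer pod to be admitted, which is consistent with the report being closed as not a bug.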