Bug 1268440

Summary: oadm registry fails to deploy when selector option is passed.
Product: OpenShift Container Platform
Reporter: Ryan Howe <rhowe>
Component: Node
Assignee: Paul Weil <pweil>
Status: CLOSED NOTABUG
QA Contact: Jianwei Hou <jhou>
Severity: high
Priority: unspecified
Version: 3.0.0
CC: aos-bugs, jhou, jokerman, mmccomas, pweil
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Clone Of: 1268437
Last Closed: 2015-10-02 20:18:14 UTC
Type: Bug
Bug Depends On: 1268437

Description Ryan Howe 2015-10-02 20:00:42 UTC
+++ This bug was initially created as a clone of Bug #1268437 +++

Description of problem:

The `oadm registry` command fails to deploy the registry when the --selector option is passed.

Version-Release number of selected component (if applicable):
3.0.2

How reproducible:
100%

Steps to Reproduce:
1. Run:

oadm registry --config=/etc/openshift/master/admin.kubeconfig --credentials=/etc/openshift/master/openshift-registry.kubeconfig --images='registry.access.redhat.com/openshift3/ose-${component}:${version}' --selector='region=primary' --service-account=registry


Actual results:

Error creating deployer pod for default/docker-registry-1: pods "docker-registry-1-deploy" is forbidden: pod node label selector conflicts with its project node label selector
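The error means the deployer pod's node selector (from --selector) conflicts with a node selector already applied at the project level. A likely way to inspect both sources of that project-level selector (a sketch; file paths assume the OSE 3.0 layout used in the commands above):

```shell
# Check whether the default project carries an explicit node selector
# (the openshift.io/node-selector annotation; not present in the
# project YAML shown below).
oc get namespace default -o yaml --config=/etc/openshift/master/admin.kubeconfig

# Check the cluster-wide default applied to every project that lacks
# its own annotation (projectConfig.defaultNodeSelector in the master
# configuration file).
grep -A 2 'projectConfig' /etc/openshift/master/master-config.yaml
```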


Expected results:
- Registry is deployed to the primary region


Additional info:

oadm registry --config=/etc/openshift/master/admin.kubeconfig --credentials=/etc/openshift/master/openshift-registry.kubeconfig --images='registry.access.redhat.com/openshift3/ose-${component}:${version}' --selector='region=primary' --service-account=registry


[root@master ~]# oc get project default -o yaml
apiVersion: v1
kind: Project
metadata:
  annotations:
    openshift.io/sa.initialized-roles: "true"
    openshift.io/sa.scc.mcs: s0:c1,c0
    openshift.io/sa.scc.uid-range: 1000000000/10000
  creationTimestamp: 2015-10-01T21:59:21Z
  name: default
  resourceVersion: "174"
  selfLink: /oapi/v1/projects/default
  uid: abb57420-6887-11e5-b0b8-fa163e8561a6
spec:
  finalizers:
  - kubernetes
  - openshift.io/origin
status:
  phase: Active

[root@master ~]# oc get nodes
NAME               LABELS                                                              STATUS                     AGE
master.vault.com   kubernetes.io/hostname=master.vault.com,region=primary,zone=vault   Ready,SchedulingDisabled   21h
node1.vault.com    kubernetes.io/hostname=node1.vault.com,region=primary,zone=vault    Ready                      21h
node2.vault.com    kubernetes.io/hostname=node2.vault.com,region=primary,zone=vault    Ready                      21h

Comment 2 Ryan Howe 2015-10-02 20:18:14 UTC
Not a bug.
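For reference, the usual way out of this conflict (not stated in this report; a hedged sketch only) is to make the project's node selector compatible with the one passed to `oadm registry`, either by setting it to the same value or clearing it so pod-level selectors are allowed, then retrying the deployment:

```shell
# Align the default project's node selector with the --selector value
# (use openshift.io/node-selector="" instead to clear it entirely).
oc annotate namespace default openshift.io/node-selector="region=primary" --overwrite

# Retry the failed registry deployment.
oc deploy docker-registry --retry
```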