Bug 1268440 - oadm registry fails to deploy when selector option is passed.
Summary: oadm registry fails to deploy when selector option is passed.
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Node
Version: 3.0.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Paul Weil
QA Contact: Jianwei Hou
URL:
Whiteboard:
Depends On: 1268437
Blocks:
 
Reported: 2015-10-02 20:00 UTC by Ryan Howe
Modified: 2015-10-02 20:19 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 1268437
Environment:
Last Closed: 2015-10-02 20:18:14 UTC
Target Upstream Version:
Embargoed:



Description Ryan Howe 2015-10-02 20:00:42 UTC
+++ This bug was initially created as a clone of Bug #1268437 +++

Description of problem:

oadm registry fails to deploy when the --selector option is passed.

Version-Release number of selected component (if applicable):
3.0.2

How reproducible:
100%

Steps to Reproduce:
- Run:

oadm registry --config=/etc/openshift/master/admin.kubeconfig --credentials=/etc/openshift/master/openshift-registry.kubeconfig --images='registry.access.redhat.com/openshift3/ose-${component}:${version}' --selector='region=primary' --service-account=registry


Actual results:

Error creating deployer pod for default/docker-registry-1: pods "docker-registry-1-deploy" is forbidden: pod node label selector conflicts with its project node label selector
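
The failed deployment surfaces as an event on the deployer pod. A hedged way to retrieve it (standard oc commands, not part of the original report):

# list events in the default project, where the registry is deployed
oc get events -n default
# inspect the deployment configuration created by oadm registry
oc describe deploymentconfig docker-registry -n default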


Expected results:
- Registry is deployed to the primary region.


Additional info:

oadm registry --config=/etc/openshift/master/admin.kubeconfig --credentials=/etc/openshift/master/openshift-registry.kubeconfig --images='registry.access.redhat.com/openshift3/ose-${component}:${version}' --selector='region=primary' --service-account=registry


[root@master ~]# oc get project default -o yaml
apiVersion: v1
kind: Project
metadata:
  annotations:
    openshift.io/sa.initialized-roles: "true"
    openshift.io/sa.scc.mcs: s0:c1,c0
    openshift.io/sa.scc.uid-range: 1000000000/10000
  creationTimestamp: 2015-10-01T21:59:21Z
  name: default
  resourceVersion: "174"
  selfLink: /oapi/v1/projects/default
  uid: abb57420-6887-11e5-b0b8-fa163e8561a6
spec:
  finalizers:
  - kubernetes
  - openshift.io/origin
status:
  phase: Active

[root@master ~]# oc get nodes
NAME               LABELS                                                              STATUS                     AGE
master.vault.com   kubernetes.io/hostname=master.vault.com,region=primary,zone=vault   Ready,SchedulingDisabled   21h
node1.vault.com    kubernetes.io/hostname=node1.vault.com,region=primary,zone=vault    Ready                      21h
node2.vault.com    kubernetes.io/hostname=node2.vault.com,region=primary,zone=vault    Ready                      21h
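
Note that the project shown above carries no openshift.io/node-selector annotation, so the node selector it inherits most likely comes from the cluster-wide default in the master configuration. A hedged way to check both (field and command names assume the standard 3.x layout):

# cluster-wide default node selector, applied to projects without their own annotation
grep defaultNodeSelector /etc/openshift/master/master-config.yaml
# effective node selector for the default project
oc describe project default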

Comment 2 Ryan Howe 2015-10-02 20:18:14 UTC
Not a bug.
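
The conflict reported above arises when the --selector passed to oadm registry does not match the node selector already in effect for the default project. A minimal sketch of one way to align them, assuming the standard openshift.io/node-selector project annotation (the value below is illustrative and should match the intended --selector):

oc edit namespace default
# then add under metadata.annotations:
#   openshift.io/node-selector: region=primary

Alternatively, omit --selector and let the project's node selector place the registry.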

