Bug 1268440 - oadm registry fails to deploy when selector option is passed.
Status: CLOSED NOTABUG
Product: OpenShift Container Platform
Classification: Red Hat
Component: Pod
Version: 3.0.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assigned To: Paul Weil
QA Contact: Jianwei Hou
Depends On: 1268437
Blocks:
Reported: 2015-10-02 16:00 EDT by Ryan Howe
Modified: 2015-10-02 16:19 EDT
CC List: 5 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1268437
Environment:
Last Closed: 2015-10-02 16:18:14 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Ryan Howe 2015-10-02 16:00:42 EDT
+++ This bug was initially created as a clone of Bug #1268437 +++

Description of problem:

oadm registry fails to deploy when the --selector option is passed.

Version-Release number of selected component (if applicable):
3.0.2

How reproducible:
100%

Steps to Reproduce:
- Run:

oadm registry --config=/etc/openshift/master/admin.kubeconfig --credentials=/etc/openshift/master/openshift-registry.kubeconfig --images='registry.access.redhat.com/openshift3/ose-${component}:${version}' --selector='region=primary' --service-account=registry


Actual results:

Error creating deployer pod for default/docker-registry-1: pods "docker-registry-1-deploy" is forbidden: pod node label selector conflicts with its project node label selector


Expected results:
- Registry is deployed to the primary region.


Additional info:

oadm registry --config=/etc/openshift/master/admin.kubeconfig --credentials=/etc/openshift/master/openshift-registry.kubeconfig --images='registry.access.redhat.com/openshift3/ose-${component}:${version}' --selector='region=primary' --service-account=registry


[root@master ~]# oc get project default -o yaml
apiVersion: v1
kind: Project
metadata:
  annotations:
    openshift.io/sa.initialized-roles: "true"
    openshift.io/sa.scc.mcs: s0:c1,c0
    openshift.io/sa.scc.uid-range: 1000000000/10000
  creationTimestamp: 2015-10-01T21:59:21Z
  name: default
  resourceVersion: "174"
  selfLink: /oapi/v1/projects/default
  uid: abb57420-6887-11e5-b0b8-fa163e8561a6
spec:
  finalizers:
  - kubernetes
  - openshift.io/origin
status:
  phase: Active

[root@master ~]# oc get nodes
NAME               LABELS                                                              STATUS                     AGE
master.vault.com   kubernetes.io/hostname=master.vault.com,region=primary,zone=vault   Ready,SchedulingDisabled   21h
node1.vault.com    kubernetes.io/hostname=node1.vault.com,region=primary,zone=vault    Ready                      21h
node2.vault.com    kubernetes.io/hostname=node2.vault.com,region=primary,zone=vault    Ready                      21h
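Note that all three nodes above carry region=primary, so the pod's --selector matches the nodes themselves; the rejection comes from the node selector OpenShift enforces at the project level, which is stored as an annotation on the namespace. The project YAML above shows no such annotation on default, so the cluster-wide default applies. For context, a project-level selector looks like the following sketch (illustrative only; the selector value shown is an assumption, not taken from this report):

```yaml
# Illustrative only -- not from this cluster. A project-level node selector
# is stored in the openshift.io/node-selector annotation on the namespace;
# when present, every pod in the project must be schedulable under it, and a
# pod nodeSelector that conflicts with it is rejected at admission.
apiVersion: v1
kind: Namespace
metadata:
  name: default
  annotations:
    openshift.io/node-selector: "region=infra"   # would conflict with --selector='region=primary'
```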
Comment 2 Ryan Howe 2015-10-02 16:18:14 EDT
Not a bug.
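
The closure is consistent with the output above: since the default project has no openshift.io/node-selector annotation of its own, it inherits the cluster-wide default from the master configuration, and a deployer pod whose --selector conflicts with that default is rejected by admission as designed. A hedged sketch of the relevant master-config stanza follows (the value shown is an assumption, not taken from this report):

```yaml
# Illustrative fragment of /etc/openshift/master/master-config.yaml.
# Projects without their own openshift.io/node-selector annotation inherit
# this default; an empty string removes the cluster-wide restriction so a
# pod-level selector such as region=primary can apply on its own.
projectConfig:
  defaultNodeSelector: ""
```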
