Bug 1268437 - oadm registry defaults to region=infra
Summary: oadm registry defaults to region=infra
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Node
Version: 3.0.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Paul Weil
QA Contact: Jianwei Hou
URL:
Whiteboard:
Depends On:
Blocks: 1268440
 
Reported: 2015-10-02 19:56 UTC by Ryan Howe
Modified: 2015-10-02 20:21 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1268440
Environment:
Last Closed: 2015-10-02 20:21:02 UTC
Target Upstream Version:
Embargoed:



Description Ryan Howe 2015-10-02 19:56:33 UTC
Description of problem:

`oadm registry` defaults to region=infra even when that region is unschedulable.

Then, when a region/selector is chosen, the following error is received: Error creating deployer pod for default/docker-registry-1: pods "docker-registry-1-deploy" is forbidden: pod node label selector conflicts with its project node label selector
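
The "project node label selector" in that error comes from the openshift.io/node-selector annotation on the project's namespace. A quick diagnostic sketch (not part of the original report) to see what the default project is constrained to:

# Show the node-selector annotation on the default project, if one is set
oc get namespace default -o yaml | grep node-selector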

Version-Release number of selected component (if applicable):
3.0.2

How reproducible:
100%

Steps to Reproduce:
- Mark your infra region unschedulable, or remove the infra region (one way to do this is sketched after these steps).

- Run:

oadm registry --config=/etc/openshift/master/admin.kubeconfig --credentials=/etc/openshift/master/openshift-registry.kubeconfig --images='registry.access.redhat.com/openshift3/ose-${component}:${version}' 
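
For the first step, a minimal sketch of taking the infra region out of scheduling (assuming nodes carrying the region=infra label exist in the cluster):

# Mark every node labeled region=infra as unschedulable
oadm manage-node --selector='region=infra' --schedulable=false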



Actual results:
1h          1h         1         docker-registry-1          ReplicationController               failedUpdate       {deployer }    Error updating deployment default/docker-registry-1 status to Pending
1h          44m        87        docker-registry-1-deploy   Pod                                 failedScheduling   {scheduler }   Failed for reason MatchNodeSelector and possibly others


Expected results:

The registry should deploy on an available node.
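
A possible workaround (an assumption on my part, not from the report): override the default selector so the registry targets a region that does have schedulable nodes; region=primary is taken from the node labels shown under Additional info:

oadm registry --config=/etc/openshift/master/admin.kubeconfig \
  --credentials=/etc/openshift/master/openshift-registry.kubeconfig \
  --images='registry.access.redhat.com/openshift3/ose-${component}:${version}' \
  --selector='region=primary'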

Additional info:

Output of the test environment:

oadm registry --config=/etc/openshift/master/admin.kubeconfig --credentials=/etc/openshift/master/openshift-registry.kubeconfig --images='registry.access.redhat.com/openshift3/ose-${component}:${version}' 



[root@master ~]# oc get nodes
NAME               LABELS                                                              STATUS                     AGE
master.vault.com   kubernetes.io/hostname=master.vault.com,region=primary,zone=vault   Ready,SchedulingDisabled   21h
node1.vault.com    kubernetes.io/hostname=node1.vault.com,region=primary,zone=vault    Ready                      21h
node2.vault.com    kubernetes.io/hostname=node2.vault.com,region=primary,zone=vault    Ready                      21h


[root@master ~]# oc get ev
FIRSTSEEN   LASTSEEN   COUNT     NAME                       KIND                    SUBOBJECT   REASON             SOURCE         MESSAGE
1h          1h         1         docker-registry-1          ReplicationController               failedUpdate       {deployer }    Error updating deployment default/docker-registry-1 status to Pending
1h          49m        87        docker-registry-1-deploy   Pod                                 failedScheduling   {scheduler }   Failed for reason MatchNodeSelector and possibly others
1h          48m        88        docker-registry-1-deploy   Pod                                 failedScheduling   {scheduler }   Failed for reason Region and possibly others
48m         48m        1         docker-registry-1          ReplicationController               failed             {deployer }    Deployer pod "docker-registry-1-deploy" has gone missing
47m         41m        13        docker-registry-1-deploy   Pod                                 failedScheduling   {scheduler }   Failed for reason Region and possibly others
47m         41m        11        docker-registry-1-deploy   Pod                                 failedScheduling   {scheduler }   Failed for reason MatchNodeSelector and possibly others
39m         39m        1         docker-registry-1          ReplicationController               failedUpdate       {deployer }    Error updating deployment default/docker-registry-1 status to Pending
39m         35m        12        docker-registry-1-deploy   Pod                                 failedScheduling   {scheduler }   Failed for reason Region and possibly others
39m         34m        9         docker-registry-1-deploy   Pod                                 failedScheduling   {scheduler }   Failed for reason MatchNodeSelector and possibly others
34m         34m        1         docker-registry-1          ReplicationController               failed             {deployer }    Deployer pod "docker-registry-1-deploy" has gone missing
24m         24m        1         docker-registry-1          ReplicationController               failedUpdate       {deployer }    Error updating deployment default/docker-registry-1 status to Pending
28m         24m        10        docker-registry-1-deploy   Pod                                 failedScheduling   {scheduler }   Failed for reason MatchNodeSelector and possibly others
28m         24m        10        docker-registry-1-deploy   Pod                                 failedScheduling   {scheduler }   Failed for reason Region and possibly others
23m         23m        1         docker-registry-1          ReplicationController               failedUpdate       {deployer }    Error updating deployment default/docker-registry-1 status to Pending
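
Alternatively (again a sketch, not something tried in the report), an existing node could be relabeled so the default region=infra selector has something to match; node1.vault.com is taken from the node list above:

# Move node1 into the infra region; --overwrite replaces the existing region=primary label
oc label node node1.vault.com region=infra --overwrite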

Comment 2 Ryan Howe 2015-10-02 20:21:02 UTC
Not a bug.

