Bug 1461554 - Cannot create egress router http-proxy pod
Summary: Cannot create egress router http-proxy pod
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 3.6.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Dan Winship
QA Contact: zhaozhanqi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-06-14 18:54 UTC by Weibin Liang
Modified: 2022-08-04 22:20 UTC (History)
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-06-21 12:19:40 UTC
Target Upstream Version:
Embargoed:



Description Weibin Liang 2017-06-14 18:54:03 UTC
Description of problem:
Cannot create an egress router http-proxy pod with the latest OC v3.6.77 image.

Version-Release number of selected component (if applicable):
[root@localhost ~]# oc version
oc v3.6.77
kubernetes v1.6.1+5115d708d7
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://192.168.122.37:8443
openshift v3.6.77
kubernetes v1.6.1+5115d708d7
[root@localhost ~]# oc get nodes
NAME             STATUS                     AGE       VERSION
192.168.122.37   Ready,SchedulingDisabled   2h        v1.6.1+5115d708d7
192.168.122.50   Ready                      2h        v1.6.1+5115d708d7
[root@localhost ~]# 


How reproducible:
Every time

Steps to Reproduce:
## Case one:
Without "securityContext":{  "privileged":true  } configured for spec.containers

[root@localhost ~]# oc create -f https://raw.githubusercontent.com/weliang1/Openshift_Networking/master/egress-http-proxy/egress-http-proxy.json
pod "egress-http-proxy" created
[root@localhost ~]# oc get pods
NAME                READY     STATUS     RESTARTS   AGE
egress-http-proxy   0/1       Init:0/1   0          10s
[root@localhost ~]# oc describe pod egress-http-proxy
Name:			egress-http-proxy
Namespace:		default
Security Policy:	privileged
Node:			192.168.122.50/192.168.122.50
Start Time:		Wed, 14 Jun 2017 14:48:40 -0400
Labels:			name=egress-http-proxy
Annotations:		openshift.io/scc=privileged
			pod.network.openshift.io/assign-macvlan=true
Status:			Pending
IP:			
Controllers:		<none>
Init Containers:
  egress-router-setup:
    Container ID:	
    Image:		openshift3/ose-egress-router
    Image ID:		
    Port:		
    State:		Waiting
      Reason:		PodInitializing
    Ready:		False
    Restart Count:	0
    Environment:
      EGRESS_SOURCE:		192.168.12.99
      EGRESS_GATEWAY:		192.168.12.1
      EGRESS_ROUTER_MODE:	http-proxy
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xsr9d (ro)
Containers:
  egress-router-proxy:
    Container ID:	
    Image:		openshift3/ose-egress-http-proxy
    Image ID:		
    Port:		
    State:		Waiting
      Reason:		PodInitializing
    Ready:		False
    Restart Count:	0
    Environment:
      EGRESS_HTTP_PROXY_DESTINATION:	!*.redhat.com
!98.0.0.0/8
!69.172.200.235/32
!www.cisco.com
*

    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xsr9d (ro)
Conditions:
  Type		Status
  Initialized 	False 
  Ready 	False 
  PodScheduled 	True 
Volumes:
  default-token-xsr9d:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	default-token-xsr9d
    Optional:	false
QoS Class:	BestEffort
Node-Selectors:	<none>
Tolerations:	<none>
Events:
  FirstSeen	LastSeen	Count	From			SubObjectPath	Type		Reason		Message
  ---------	--------	-----	----			-------------	--------	------		-------
  22s		22s		1	default-scheduler			Normal		Scheduled	Successfully assigned egress-http-proxy to 192.168.122.50
  20s		2s		6	kubelet, 192.168.122.50			Warning		FailedSync	Error syncing pod, skipping: failed to "CreatePodSandbox" for "egress-http-proxy_default(151c7526-5132-11e7-b6db-525400d36e4d)" with CreatePodSandboxError: "CreatePodSandbox for pod \"egress-http-proxy_default(151c7526-5132-11e7-b6db-525400d36e4d)\" failed: rpc error: code = 2 desc = NetworkPlugin cni failed to set up pod \"egress-http-proxy_default\" network: CNI request failed with status 400: 'pod has \"pod.network.openshift.io/assign-macvlan\" annotation but is not privileged\n'"

  18s	1s	6	kubelet, 192.168.122.50		Normal	SandboxChanged	Pod sandbox changed, it will be killed and re-created.
[root@localhost ~]# 
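For context, the following is a minimal sketch of the kind of pod definition involved here, with "privileged": true set only on the init container: the CNI error above rejects the "pod.network.openshift.io/assign-macvlan" annotation on a pod with no privileged container, and comment 3 below confirms the HTTP proxy container itself does not need it. The image names, annotation, and environment values are taken from the describe output; the exact layout of the original egress-http-proxy.json is an assumption.

{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "egress-http-proxy",
    "labels": { "name": "egress-http-proxy" },
    "annotations": { "pod.network.openshift.io/assign-macvlan": "true" }
  },
  "spec": {
    "initContainers": [
      {
        "name": "egress-router-setup",
        "image": "openshift3/ose-egress-router",
        "securityContext": { "privileged": true },
        "env": [
          { "name": "EGRESS_SOURCE", "value": "192.168.12.99" },
          { "name": "EGRESS_GATEWAY", "value": "192.168.12.1" },
          { "name": "EGRESS_ROUTER_MODE", "value": "http-proxy" }
        ]
      }
    ],
    "containers": [
      {
        "name": "egress-router-proxy",
        "image": "openshift3/ose-egress-http-proxy",
        "env": [
          {
            "name": "EGRESS_HTTP_PROXY_DESTINATION",
            "value": "!*.redhat.com\n!98.0.0.0/8\n!69.172.200.235/32\n!www.cisco.com\n*\n"
          }
        ]
      }
    ]
  }
}

Saving this as a .json file and passing it to oc create -f exercises the same creation path as the transcript above.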

## Case two:
With "securityContext":{  "privileged":true  } configured for spec.containers
[root@localhost ~]# oc create -f https://raw.githubusercontent.com/weliang1/Openshift_Networking/master/egress-http-proxy/egress-http-proxy-privileged.json
pod "egress-http-proxy" created
[root@localhost ~]# oc get pods
NAME                READY     STATUS             RESTARTS   AGE
egress-http-proxy   0/1       ImagePullBackOff   0          38s
[root@localhost ~]# oc describe pod egress-http-proxy
Name:			egress-http-proxy
Namespace:		default
Security Policy:	privileged
Node:			192.168.122.50/192.168.122.50
Start Time:		Wed, 14 Jun 2017 14:50:04 -0400
Labels:			name=egress-http-proxy
Annotations:		openshift.io/scc=privileged
			pod.network.openshift.io/assign-macvlan=true
Status:			Pending
IP:			10.128.0.250
Controllers:		<none>
Init Containers:
  egress-router-setup:
    Container ID:	docker://af270893a1981333cf86e4fd045df1759154188bff33274e876f87c7f2e5f959
    Image:		openshift3/ose-egress-router
    Image ID:		docker-pullable://registry.ops.openshift.com/openshift3/ose-egress-router@sha256:ba4ae9fb96db9860876374bd4e6e74a75c2e632c67b0fba294bdc784c1928b3e
    Port:		
    State:		Terminated
      Reason:		Completed
      Exit Code:	0
      Started:		Mon, 01 Jan 0001 00:00:00 +0000
      Finished:		Wed, 14 Jun 2017 14:50:11 -0400
    Ready:		True
    Restart Count:	0
    Environment:
      EGRESS_SOURCE:		192.168.12.99
      EGRESS_GATEWAY:		192.168.12.1
      EGRESS_ROUTER_MODE:	http-proxy
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xsr9d (ro)
Containers:
  egress-router-proxy:
    Container ID:	
    Image:		openshift3/ose-egress-http-proxy
    Image ID:		
    Port:		
    State:		Waiting
      Reason:		ErrImagePull
    Ready:		False
    Restart Count:	0
    Environment:
      EGRESS_HTTP_PROXY_DESTINATION:	!*.redhat.com
!98.0.0.0/8
!69.172.200.235/32
!www.cisco.com
*

    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xsr9d (ro)
Conditions:
  Type		Status
  Initialized 	True 
  Ready 	False 
  PodScheduled 	True 
Volumes:
  default-token-xsr9d:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	default-token-xsr9d
    Optional:	false
QoS Class:	BestEffort
Node-Selectors:	<none>
Tolerations:	<none>
Events:
  FirstSeen	LastSeen	Count	From			SubObjectPath					Type		Reason		Message
  ---------	--------	-----	----			-------------					--------	------		-------
  41s		41s		1	default-scheduler							Normal		Scheduled	Successfully assigned egress-http-proxy to 192.168.122.50
  39s		39s		1	kubelet, 192.168.122.50	spec.initContainers{egress-router-setup}	Normal		Pulling		pulling image "openshift3/ose-egress-router"
  37s		37s		1	kubelet, 192.168.122.50	spec.initContainers{egress-router-setup}	Normal		Pulled		Successfully pulled image "openshift3/ose-egress-router"
  35s		35s		1	kubelet, 192.168.122.50	spec.initContainers{egress-router-setup}	Normal		Created		Created container with id af270893a1981333cf86e4fd045df1759154188bff33274e876f87c7f2e5f959
  35s		35s		1	kubelet, 192.168.122.50	spec.initContainers{egress-router-setup}	Normal		Started		Started container with id af270893a1981333cf86e4fd045df1759154188bff33274e876f87c7f2e5f959
  34s		17s		2	kubelet, 192.168.122.50	spec.containers{egress-router-proxy}		Normal		Pulling		pulling image "openshift3/ose-egress-http-proxy"
  31s		14s		2	kubelet, 192.168.122.50	spec.containers{egress-router-proxy}		Warning		Failed		Failed to pull image "openshift3/ose-egress-http-proxy": rpc error: code = 2 desc = unauthorized: authentication required
  31s		14s		2	kubelet, 192.168.122.50							Warning		FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "egress-router-proxy" with ErrImagePull: "rpc error: code = 2 desc = unauthorized: authentication required"

  31s	1s	2	kubelet, 192.168.122.50	spec.containers{egress-router-proxy}	Normal	BackOff		Back-off pulling image "openshift3/ose-egress-http-proxy"
  31s	1s	2	kubelet, 192.168.122.50						Warning	FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "egress-router-proxy" with ImagePullBackOff: "Back-off pulling image \"openshift3/ose-egress-http-proxy\""

[root@localhost ~]# 

Actual results:
Cannot create the egress router http-proxy pod.

Expected results:
The egress router http-proxy pod should be created successfully.

Additional info:

Comment 1 Weibin Liang 2017-06-14 18:59:54 UTC
The egress router setup doc (https://github.com/danwinship/openshift-docs/blob/74a82b5bc35fec4677af72bd5071316cec4397db/admin_guide/managing_networking.adoc) does not require "securityContext": { "privileged": true } to be configured for spec.containers.

Without "securityContext": { "privileged": true } configured for spec.containers, the pod cannot be created because of the privileged requirement.

Comment 2 Weibin Liang 2017-06-16 13:23:47 UTC
Even after building the openshift3/ose-egress-http-proxy container image locally, I still see the same error as in case two (see the case two info in the bug description).

# git clone https://github.com/openshift/origin
# cd origin/images/egress/http-proxy/
# docker build -t openshift3/ose-egress-http-proxy .
# docker images | grep proxy
openshift3/ose-egress-http-proxy                            latest              c954efa3c20c        33 minutes ago      396 MB
registry.ops.openshift.com/openshift3/ose-haproxy-router    v3.6.74             1c0957067bf5        5 weeks ago         938.4 MB
#

Comment 3 Dan Winship 2017-06-20 13:21:24 UTC
(In reply to Weibin Liang from comment #2)
> Even after building the openshift3/ose-egress-http-proxy container image
> locally, I still see the same error as in case two (see the case two info
> in the bug description)

You need to add

  "imagePullPolicy": "IfNotPresent",

to the JSON. Otherwise it will check the registry first even if the image already exists locally.

With that added to your JSON file, it starts up fine for me, without marking the HTTP proxy container privileged.
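
For reference, a sketch of the main container entry with that field added; the names and values come from the output above, while the surrounding layout of the JSON file is an assumption:

  {
    "name": "egress-router-proxy",
    "image": "openshift3/ose-egress-http-proxy",
    "imagePullPolicy": "IfNotPresent",
    "env": [
      {
        "name": "EGRESS_HTTP_PROXY_DESTINATION",
        "value": "!*.redhat.com\n!98.0.0.0/8\n!69.172.200.235/32\n!www.cisco.com\n*\n"
      }
    ]
  }

With "IfNotPresent", the kubelet uses the locally built openshift3/ose-egress-http-proxy image instead of always contacting the registry, which is the step that failed with "authentication required" in case two.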

Comment 4 Weibin Liang 2017-06-21 12:19:40 UTC
With "imagePullPolicy": "IfNotPresent" in JSON file, creation works

