Bug 1529478 - pods were in ContainerCreating status while enabling cri-o behind proxy [NEEDINFO]
Status: CLOSED ERRATA
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.9.0
Hardware/OS: Unspecified / Unspecified
Priority: medium    Severity: high
Target Milestone: ---
Target Release: 3.9.0
Assigned To: Giuseppe Scrivano
QA Contact: Gan Huang
Depends On:
Blocks:
 
Reported: 2017-12-28 04:44 EST by Gan Huang
Modified: 2018-03-28 10:17 EDT
CC: 8 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-03-28 10:17:25 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
gscrivan: needinfo? (mpatel)




External Trackers:
  Red Hat Product Errata RHBA-2018:0489 (Priority: None, Status: None, Summary: None), last updated 2018-03-28 10:17 EDT

Description Gan Huang 2017-12-28 04:44:56 EST
Description of problem:
Triggered an installation with CRI-O enabled behind a proxy; all the pods ended up stuck in ContainerCreating status.

Version-Release number of the following components:
openshift-ansible-3.9.0-0.9.0.git.0.a1344ac.el7.noarch.rpm
# crio --version
crio version 1.0.6

How reproducible:
always

Steps to Reproduce:
1. Trigger an installation with CRI-O enabled behind a proxy (an example invocation is sketched below the inventory)
# cat inventory
<--snip-->
openshift_use_system_containers=true
system_images_registry=registry.reg-aws.openshift.com:443
containerized=true
openshift_use_crio=true
openshift_crio_systemcontainer_image_override=http:brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/cri-o:v3.7
openshift_docker_use_system_container=true
openshift_docker_systemcontainer_image_override=http:brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/container-engine:latest
openshift_http_proxy=http://xxx.redhat.com:3128
openshift_https_proxy=http://xxx.redhat.com:3128

<--snip-->
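For reference, a deployment like this would typically be triggered with a command along the following lines (the playbook path is an assumption; openshift-ansible 3.9 ships playbooks/deploy_cluster.yml):

# ansible-playbook -vvv -i inventory /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml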

Actual results:
# oc get pods
NAME                        READY     STATUS              RESTARTS   AGE
docker-registry-1-deploy    0/1       ContainerCreating   0          55m
registry-console-1-deploy   0/1       ContainerCreating   0          45m
router-1-deploy             0/1       ContainerCreating   0          1h

# oc describe po registry-console-1-deploy

<--snip-->

Events:
  Type     Reason                  Age                From                    Message
  ----     ------                  ----               ----                    -------
  Normal   Scheduled               45m                default-scheduler       Successfully assigned registry-console-1-deploy to 172.16.120.31
  Normal   SuccessfulMountVolume   45m                kubelet, 172.16.120.31  MountVolume.SetUp succeeded for volume "deployer-token-n9rhs"
  Warning  FailedCreatePodSandBox  5m (x33 over 44m)  kubelet, 172.16.120.31  Failed create pod sandbox.
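The underlying sandbox error is usually only visible in the CRI-O logs, not in the pod events. A sketch of how to pull it out (assuming the systemd unit is named cri-o, as it is referred to elsewhere in this report):

# journalctl -u cri-o --since "1 hour ago" | grep -i -e sandbox -e proxy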


Expected results:
All pods should reach Running status.

Additional info:
Please attach logs from ansible-playbook with the -vvv flag
Comment 2 Giuseppe Scrivano 2018-01-04 14:16:56 EST
The Environment= setting in the service file won't affect the system container.

Could you try adding the same information to the /var/lib/containers/atomic/cri-o.0/config.json file (under env) and restarting the cri-o service?  Does that solve the problem?

I've also opened a PR to use /etc/sysconfig/crio-storage and /etc/sysconfig/crio-network from within the system container.

Just to be sure, Mrunal: is passing these env variables enough, or does something more need to be done?
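For reference, a minimal sketch of what that change could look like (assuming the file follows the OCI runtime spec layout, where process.env is a list of "KEY=value" strings; the proxy values are copied from the inventory above):

  "process": {
    "env": [
      ...existing entries kept...,
      "HTTP_PROXY=http://xxx.redhat.com:3128",
      "HTTPS_PROXY=http://xxx.redhat.com:3128",
      "NO_PROXY=.cluster.local,.svc"
    ]
  }

followed by a restart of the cri-o service (e.g. systemctl restart cri-o).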
Comment 3 Giuseppe Scrivano 2018-01-04 14:21:25 EST
link to the PR for CRI-O system containers: https://github.com/kubernetes-incubator/cri-o/pull/1245
Comment 4 Gan Huang 2018-01-04 23:49:03 EST
(In reply to Giuseppe Scrivano from comment #2)
> The Environment= in the service file won't affect the syscontainer.
> 
> Could you try adding the same information to the
> /var/lib/containers/atomic/cri-o.0/config.json file (under env) and
> restarting the cri-o service?  Does that solve the problem?
> 

Thanks Giuseppe! That's helpful.
Comment 5 Giuseppe Scrivano 2018-01-05 04:11:38 EST
Thanks for confirming it.  I've opened a PR for openshift-ansible here:

https://github.com/openshift/openshift-ansible/pull/6615
Comment 7 Gan Huang 2018-01-30 03:42:05 EST
Tested in openshift-ansible-3.9.0-0.31.0.git.0.e0a0ad8.el7.noarch.rpm

# crio --version
crio version 1.9.1

The configurations have been added:
# cat /etc/sysconfig/crio-network 
HTTP_PROXY=http://file.rdu.redhat.com:3128
HTTPS_PROXY=http://file.rdu.redhat.com:3128
NO_PROXY=.cluster.local,.svc,172.16.120.131,172.16.120.60

Pods are still in ContainerCreating status, same error as comment 1.
Comment 8 Giuseppe Scrivano 2018-01-30 04:35:20 EST
@Gan, thanks for the info.  Could you try to modify "/etc/sysconfig/crio-network" and set it to:

export HTTP_PROXY=http://file.rdu.redhat.com:3128
export HTTPS_PROXY=http://file.rdu.redhat.com:3128
export NO_PROXY=.cluster.local,.svc,172.16.120.131,172.16.120.60

then restart cri-o.

Does it make any difference?
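A quick sketch for confirming whether the variables actually reach the crio process after the restart (assuming the crio process is visible from the host):

# pid=$(pidof crio)
# tr '\0' '\n' < /proc/$pid/environ | grep -i proxy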
Comment 9 Gan Huang 2018-01-30 04:41:36 EST
Yes, the pods are running now.
Comment 10 Giuseppe Scrivano 2018-01-30 05:16:46 EST
Thanks, I opened a PR here:

https://github.com/openshift/openshift-ansible/pull/6933
Comment 11 Xiaoli Tian 2018-03-06 22:15:01 EST
The PR has been merged.
Comment 12 Gan Huang 2018-03-08 01:53:02 EST
Verified in openshift-ansible-3.9.3-1.git.0.e166207.el7.noarch.rpm
Comment 15 errata-xmlrpc 2018-03-28 10:17:25 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0489
