Description of problem:
Triggered an installation with CRI-O enabled behind a proxy; all pods ended up stuck in ContainerCreating status.

Version-Release number of the following components:
openshift-ansible-3.9.0-0.9.0.git.0.a1344ac.el7.noarch.rpm
# crio --version
crio version 1.0.6

How reproducible:
always

Steps to Reproduce:
1. Trigger installation with cri-o enabled behind proxy
# cat inventory
<--snip-->
openshift_use_system_containers=true
system_images_registry=registry.reg-aws.openshift.com:443
containerized=true
openshift_use_crio=true
openshift_crio_systemcontainer_image_override=http:brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/cri-o:v3.7
openshift_docker_use_system_container=true
openshift_docker_systemcontainer_image_override=http:brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/container-engine:latest
openshift_http_proxy=http://xxx.redhat.com:3128
openshift_https_proxy=http://xxx.redhat.com:3128
<--snip-->

Actual results:
# oc get pods
NAME                        READY     STATUS              RESTARTS   AGE
docker-registry-1-deploy    0/1       ContainerCreating   0          55m
registry-console-1-deploy   0/1       ContainerCreating   0          45m
router-1-deploy             0/1       ContainerCreating   0          1h

# oc describe po registry-console-1-deploy
<--snip-->
Events:
  Type     Reason                  Age                From                     Message
  ----     ------                  ----               ----                     -------
  Normal   Scheduled               45m                default-scheduler        Successfully assigned registry-console-1-deploy to 172.16.120.31
  Normal   SuccessfulMountVolume   45m                kubelet, 172.16.120.31   MountVolume.SetUp succeeded for volume "deployer-token-n9rhs"
  Warning  FailedCreatePodSandBox  5m (x33 over 44m)  kubelet, 172.16.120.31   Failed create pod sandbox.

Expected results:
All pods reach Running status.

Additional info:
Please attach logs from ansible-playbook with the -vvv flag.
The Environment= entries in the service file won't affect the system container.

Could you try adding the same information to the /var/lib/containers/atomic/cri-o.0/config.json file (under env) and restarting the cri-o service? Does that solve the problem?

I've opened a PR to use /etc/sysconfig/crio-storage and /etc/sysconfig/crio-network from within the system container. Just to be sure, Mrunal: is passing these env variables enough, or is something more needed?
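For illustration, the suggestion above amounts to adding the proxy variables to the `env` array under `process` in the system container's OCI config. A fragment of /var/lib/containers/atomic/cri-o.0/config.json might then look like this (the proxy URLs are placeholders, and all surrounding fields are elided):

```json
{
  "process": {
    "env": [
      "HTTP_PROXY=http://proxy.example.com:3128",
      "HTTPS_PROXY=http://proxy.example.com:3128",
      "NO_PROXY=.cluster.local,.svc"
    ]
  }
}
```

After editing the file, the cri-o service has to be restarted for the new environment to take effect.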
Link to the PR for CRI-O system containers: https://github.com/kubernetes-incubator/cri-o/pull/1245
(In reply to Giuseppe Scrivano from comment #2)
> The Environment= in the service file won't affect the syscontainer.
>
> Could you try adding the same information to the
> /var/lib/containers/atomic/cri-o.0/config.json file (under env) and
> restarting the cri-o service? Does that solve the problem?

Thanks Giuseppe! That's helpful.
Thanks for confirming it. I've opened a PR for openshift-ansible here: https://github.com/openshift/openshift-ansible/pull/6615
Tested with openshift-ansible-3.9.0-0.31.0.git.0.e0a0ad8.el7.noarch.rpm
# crio --version
crio version 1.9.1

The configuration has been added:
# cat /etc/sysconfig/crio-network
HTTP_PROXY=http://file.rdu.redhat.com:3128
HTTPS_PROXY=http://file.rdu.redhat.com:3128
NO_PROXY=.cluster.local,.svc,172.16.120.131,172.16.120.60

Pods are still stuck in ContainerCreating status, with the same error as in comment 1.
@Gan, thanks for the info. Could you try modifying "/etc/sysconfig/crio-network" so that each variable is exported:

export HTTP_PROXY=http://file.rdu.redhat.com:3128
export HTTPS_PROXY=http://file.rdu.redhat.com:3128
export NO_PROXY=.cluster.local,.svc,172.16.120.131,172.16.120.60

and then restarting cri-o? Does it make any difference?
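The likely reason the `export` keyword matters: when a shell sources a file, plain `VAR=value` assignments stay local to that shell, and only exported variables reach child processes such as the cri-o binary. A minimal, self-contained demonstration of that behavior (placeholder proxy URL and a temp file, unrelated to the actual crio-network file):

```shell
# Demonstration only: variables set in a sourced file are visible to
# child processes only when marked with `export`.
unset HTTP_PROXY HTTPS_PROXY
cat > /tmp/proxy-demo <<'EOF'
HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://proxy.example.com:3128
EOF
. /tmp/proxy-demo
# The child shell below sees only the exported variable.
sh -c 'echo "HTTP_PROXY=<$HTTP_PROXY> HTTPS_PROXY=<$HTTPS_PROXY>"'
# prints: HTTP_PROXY=<> HTTPS_PROXY=<http://proxy.example.com:3128>
```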
Yes, the pods are running now.
Thanks, I opened a PR here: https://github.com/openshift/openshift-ansible/pull/6933
The PR has been merged.
Verified with openshift-ansible-3.9.3-1.git.0.e166207.el7.noarch.rpm
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:0489
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days