Description of problem:
A Dockerfile that built successfully on rhscl/s2i-core-rhel7:1-40 fails with the latest version, rhscl/s2i-core-rhel7:1-42, reporting "There are no enabled repos." Comparing the rhscl/s2i-core-rhel7 Dockerfiles, the only change I can see is the base image, which moved from rhel7:7.5-433 to rhel7:7.6-115.

Version-Release number of selected component (if applicable):
Container image rhscl/s2i-core-rhel7:1-42

How reproducible:
Always: create a Dockerfile and build it on top of rhscl/s2i-core-rhel7:1-42.

Steps to Reproduce:
1. Create a Dockerfile. For example:
---
FROM registry.access.redhat.com/rhscl/s2i-core-rhel7:1-42

LABEL name="alber"

RUN yum repolist > /dev/null && \
    INSTALL_PKGS=" bzip2" && \
    yum install -y --enablerepo=rhel-server-rhscl-7-rpms,rhel-7-server-ansible-2.4-rpms --setopt=tsflags=nodocs $INSTALL_PKGS && \
    rpm -V $INSTALL_PKGS && \
    yum clean all -y
---
2. Build the image:
docker build -t alber .

Actual results:
[root@master-0 alber]# docker build -t alber .
Sending build context to Docker daemon 2.048 kB
Step 1/3 : FROM registry.access.redhat.com/rhscl/s2i-core-rhel7:1-42
 ---> 3d278ca191ea
Step 2/3 : LABEL name "alber"
 ---> Running in 72f0f4aa813f
 ---> eb1e19e96429
Removing intermediate container 72f0f4aa813f
Step 3/3 : RUN yum repolist > /dev/null && INSTALL_PKGS=" bzip2" && yum install -y --enablerepo=rhel-server-rhscl-7-rpms,rhel-7-server-ansible-2.4-rpms --setopt=tsflags=nodocs $INSTALL_PKGS && rpm -V $INSTALL_PKGS && yum clean all -y
 ---> Running in 01a01e574053
Loaded plugins: ovl, product-id, search-disabled-repos, subscription-manager
This system is not receiving updates. You can use subscription-manager on the host to register and assign subscriptions.
There are no enabled repos.
 Run "yum repolist all" to see the repos you have.
 To enable Red Hat Subscription Management repositories:
     subscription-manager repos --enable <repo>
 To enable custom repositories:
     yum-config-manager --enable <repo>
The command '/bin/sh -c yum repolist > /dev/null && INSTALL_PKGS=" bzip2" && yum install -y --enablerepo=rhel-server-rhscl-7-rpms,rhel-7-server-ansible-2.4-rpms --setopt=tsflags=nodocs $INSTALL_PKGS && rpm -V $INSTALL_PKGS && yum clean all -y' returned a non-zero code: 1

Expected results:
The build succeeds, as it does with rhscl/s2i-core-rhel7:1-40:
[root@master-0 alber]# docker build -t alber .
Sending build context to Docker daemon 2.048 kB
Step 1/3 : FROM registry.access.redhat.com/rhscl/s2i-core-rhel7:1-40
 ---> ed3d5103f165
Step 2/3 : LABEL name "alber"
 ---> Running in 2a5805e678a4
 ---> 9a1a34ca8482
Removing intermediate container 2a5805e678a4
Step 3/3 : RUN yum repolist > /dev/null && INSTALL_PKGS=" bzip2" && yum install -y --enablerepo=rhel-server-rhscl-7-rpms,rhel-7-server-ansible-2.4-rpms --setopt=tsflags=nodocs $INSTALL_PKGS && rpm -V $INSTALL_PKGS && yum clean all -y
 ---> Running in 1fa5953b5eae
Loaded plugins: ovl, product-id, search-disabled-repos, subscription-manager
Resolving Dependencies
--> Running transaction check
---> Package bzip2.x86_64 0:1.0.6-13.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
... (continues)

Additional info:
Working properly with rhscl/s2i-core-rhel7:1-40, but not with rhscl/s2i-core-rhel7:1-42.
I've just tested in an environment without Red Hat Satellite, and it worked. There, /etc/rhsm/rhsm.conf points at the hosted Red Hat subscription service:
---
[root@master-0 ~]# cat /etc/rhsm/rhsm.conf
# Red Hat Subscription Manager Configuration File:

# Unified Entitlement Platform Configuration
[server]
# Server hostname:
hostname = subscription.rhsm.redhat.com

# Server prefix:
prefix = /subscription

# Server port:
port = 443

# Set to 1 to disable certificate validation:
insecure = 0
---
But in an environment with Red Hat Satellite, it failed:
---
[root@master-0 bug-1654414]# cat /etc/rhsm/rhsm.conf
# Red Hat Subscription Manager Configuration File:

# Unified Entitlement Platform Configuration
[server]
# Server hostname:
hostname = satellite6.***

# Server prefix:
prefix = /rhsm

# Server port:
port = 443

# Set to 1 to disable certificate validation:
insecure = 1
---
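As a quick way to tell which of the two cases above a host falls into, one can read the [server] hostname out of rhsm.conf: the hosted service uses subscription.rhsm.redhat.com, while a Satellite-connected host points at the Satellite server. A minimal sketch, not part of the original report; the rhsm_server helper is hypothetical, and "satellite6.example.com" is a placeholder for the redacted hostname above:

```shell
# Print the hostname value from the [server] section of an rhsm.conf-style file.
rhsm_server() {
  sed -n 's/^hostname *= *//p' "$1" | head -n 1
}

# Demo against a sample config mirroring the failing (Satellite) environment.
# On a real host you would pass /etc/rhsm/rhsm.conf instead.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[server]
hostname = satellite6.example.com
prefix = /rhsm
port = 443
insecure = 1
EOF

host=$(rhsm_server "$conf")
if [ "$host" = "subscription.rhsm.redhat.com" ]; then
  echo "direct RHSM: $host"
else
  echo "Satellite (or other proxy): $host"
fi
rm -f "$conf"
```

Running the sketch on the sample config prints "Satellite (or other proxy): satellite6.example.com".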
Given the connection to Satellite, this is likely due to bug 1645205. It should be fixed by the next s2i-core image release.
*** This bug has been marked as a duplicate of bug 1645205 ***