Description of problem:
Running RHEL 7.2 with docker 1.10.3-46.el7.10. A new yum repo is automatically added to my host by subscription-manager. When building a docker image based on the RHEL7 base image, steps involving yum fail attempting to access repodata for the new yum repo. I don't want/need this repo/subscription on my host or in my docker images, so I disable it on the host via

    yum-config-manager --disable <problem repo>

Also attempted (on the host):

    subscription-manager repos --disable <problem repo>
    yum-config-manager --save --setopt=rhel-7-server-tus-rpms.skip_if_unavailable=true

and restarting docker, rhnsd.service, rhsmcertd.service, and the whole OS.

Yum operations on the host work fine. However, they continue to be broken in all docker containers.

Version-Release number of selected component (if applicable):
1.10.3-46.el7.10

How reproducible:
Happens whenever new entitlements/subscriptions are activated for my host.

Steps to Reproduce:
1. A new RH yum repo is enabled on the host (by rhsm automatically)
2. Attempt to build a docker container based on the RHEL7 image
3. The build steps must include a yum command (update, install, etc.)

Actual results:
Build-time yum operations fail with errors accessing the new RH yum repo:

    https://cdn.redhat.com/content/tus/rhel/server/7/7Server/x86_64/os/repodata/repomd.xml: [Errno 14] HTTPS Error 404 - Not Found

Expected results:
The build continues as usual, yum operations work fine, etc.

Additional info:
As an RH employee I get new entitlements/subscriptions automatically, and I am not aware of any way I can prevent them from being automatically activated on my host.

If I start a container based on RHEL7 with an interactive session (docker run --rm -ti some-image bash), I can reproduce the yum failure, and then rectify it *in the running container* by running "yum-config-manager --disable <problem repo>" twice (yes, twice: once is not sufficient, bizarrely).

I could probably inject similar instructions into my Dockerfiles, but this is not practical because I am working on docker images of Red Hat products and these instructions would not be appropriate to ship in those products.

The particular repository causing me problems right now is rhel-7-server-tus-rpms, but it can be any. The only RH repos I normally have active on my host or expect in my containers are rhel-7-server-extras-rpms, rhel-7-server-rpms and rhel-7-server-optional-rpms.
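For reference, a minimal Dockerfile along these lines reproduces the failure on an affected host. This is an illustrative sketch: the base image reference and the package being installed are assumptions, not taken from the report.

    # Minimal reproducer (illustrative). Any yum operation during the build
    # attempts to reach the newly entitled repo and fails with the 404 on
    # repomd.xml shown above.
    FROM registry.access.redhat.com/rhel7
    RUN yum -y install tar && yum clean all

Building it with `docker build .` on a host where the extra entitlement has been activated is enough to hit the error.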
Not sure if this is a subscription manager problem or a problem with how we are sharing secrets into the container.
This is an artefact of the way entitlements are shared between host and containers. The redhat.repo file of any container (starting from any RHEL base image) will be empty, aside from a comment explaining its emptiness. In order to generate a redhat.repo file that aligns with your present entitlements, some yum command must be run, as the subscription-manager plugin is entirely responsible for this inside containers. A commonly used and often suggested command is `yum repolist`, as it is innocuous. After the redhat.repo file is populated, it may be manipulated as the reporter has described using yum-config-manager. By design, the host redhat.repo file is not presently used by subscription-manager.
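Based on that explanation, a sketch of the in-container workaround might look like the following. The repo name is the one from the report; the exact sequence is an assumption about a reasonable approach, not an officially documented procedure.

    # inside the running container
    yum repolist -q    # any yum command; triggers the subscription-manager
                       # plugin to generate /etc/yum.repos.d/redhat.repo
    yum-config-manager --disable rhel-7-server-tus-rpms

This would also explain why the reporter had to run yum-config-manager --disable twice: presumably the first invocation, being a yum-based command, merely causes the plugin to populate redhat.repo, and only the second invocation finds the repo stanza to disable.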
Hi folks, thanks for your responses. Another colleague pointed me at https://access.redhat.com/solutions/1443553, which also explains the situation. We are working around this in the CE team because we use a Dockerfile generator/pre-processor that inserts extra RUN commands to invoke yum as necessary. (We've been doing this for a while now, long enough that I completely forgot we were, and I hit this when working on/with the upstream version of the pre-processor, which doesn't have this specific behaviour in it.) The situation is probably exacerbated for RH employees since we have so many entitlements; I imagine it's less likely to occur or be a problem for the majority of customers. The solutions document I linked above is very clear. I'm not sure whether it's linked in the RHEL7 docker documentation or release notes etc., but I couldn't find it.
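For illustration only (the report doesn't show the pre-processor's actual output, so this is a hypothetical rendering), the kind of RUN line such a generator injects might resemble:

    # Hypothetical pre-processor output, not the CE team's actual template:
    # run an innocuous yum command to populate redhat.repo, then disable the
    # unwanted repo before any real yum steps in the build.
    RUN yum repolist -q && \
        yum-config-manager --disable rhel-7-server-tus-rpms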
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.