Bug 1613123 - Installer fails due to the unnecessary packages check in package_availability health check
Summary: Installer fails due to the unnecessary packages check in package_availability...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.11.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 3.11.0
Assignee: Vadim Rutkovsky
QA Contact: Johnny Liu
URL:
Whiteboard:
Depends On: 1613112
Blocks:
 
Reported: 2018-08-07 04:31 UTC by sheng.lao
Modified: 2018-10-11 07:24 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1613112
Environment:
Last Closed: 2018-10-11 07:24:00 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Product Errata RHBA-2018:2652 (last updated 2018-10-11 07:24:21 UTC)

Description sheng.lao 2018-08-07 04:31:01 UTC
+++ This bug was initially created as a clone of Bug #1613112 +++

Since OCP 3.10, the SDN and master components run as static pods. Accordingly, I disabled the fast-datapath channel, and then I get the error messages below. We should drop those packages from the package_availability health check accordingly.

-----------------------
Failure summary:

1. Hosts: XXX  
Play: OpenShift Health Checks
Task: Run health checks (install) - EL
Message: One or more checks failed
Details: check "package_availability":
Could not perform a yum update.
Errors from dependency resolution:

atomic-openshift-sdn-ovs-3.11.0-0.11.0.git.0.c5fa1e4.el7.x86_64 requires openvswitch >= 2.6.1

You should resolve these issues before proceeding with an install.
You may need to remove or downgrade packages or enable/disable yum repositories.
-----------------------


I find that the atomic-openshift-sdn-ovs RPM is checked in package_availability.py:
    def node_packages(rpm_prefix):
        """Return a list of RPMs that we expect a node install to have available."""
    ... ...
            "{rpm_prefix}-sdn-ovs".format(rpm_prefix=rpm_prefix),
    ... ...

So the check against atomic-openshift-sdn-ovs seems redundant.
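
A minimal sketch of the kind of change this suggests, assuming the fix is simply to drop the -sdn-ovs entry from node_packages (the other entries shown are illustrative placeholders, not the module's real list):

    def node_packages(rpm_prefix):
        """Return a list of RPMs that we expect a node install to have available."""
        return [
            # Illustrative entries only; the real module lists more packages.
            "{rpm_prefix}".format(rpm_prefix=rpm_prefix),
            "{rpm_prefix}-node".format(rpm_prefix=rpm_prefix),
            # "{rpm_prefix}-sdn-ovs" is intentionally gone: since OCP 3.10
            # the SDN runs as a static pod, so the RPM (and its openvswitch
            # dependency) is no longer needed on the host.
        ]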

Version-Release number of selected component (if applicable):
openshift-ansible-3.11.0-0.11.0.git.0.3c66516None.noarch.rpm

How reproducible:
always

Steps to Reproduce:
1. Deploy OCP 3.11 with the fast-datapath channel disabled (e.g. via subscription-manager, as sketched below)
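
For step 1, assuming a subscription-manager-registered RHEL 7 host, the channel can be disabled like this (the repo ID is the standard fast-datapath one and is an assumption about the reporter's setup):

    # disable the fast-datapath repo before running the installer
    subscription-manager repos --disable=rhel-7-fast-datapath-rpms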

Actual results:
The install fails with the same "package_availability" failure summary shown in the description above.

Expected results:
The install succeeds.

Additional info:



Comment 1 Vadim Rutkovsky 2018-08-09 15:12:52 UTC
Created https://github.com/openshift/openshift-ansible/pull/9506

Comment 2 Vadim Rutkovsky 2018-08-16 09:03:41 UTC
Fix is available in openshift-ansible-3.11.0-0.16.0
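
To verify with the fixed build, one way is to run just this check via the ad-hoc checks playbook shipped with openshift-ansible (the inventory path "hosts" is a placeholder):

    # run only the package_availability health check against the inventory
    ansible-playbook -i hosts playbooks/openshift-checks/adhoc.yml \
        -e openshift_checks=package_availability

With the fast-datapath channel still disabled, the check should now pass.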

Comment 5 errata-xmlrpc 2018-10-11 07:24:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2652

