Bug 1453119 - Bad condition in install_on_redhat.yml leads to installation of ceph-osd on MON node
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat Storage
Component: ceph-ansible
Version: 2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 2
Assignee: Sébastien Han
QA Contact: Daniel Horák
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-05-22 08:24 UTC by Daniel Horák
Modified: 2023-09-14 03:57 UTC
CC List: 12 users

Fixed In Version: ceph-ansible-2.2.7-1.el7scon
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-06-19 13:18:12 UTC
Embargoed:




Links
  * GitHub: ceph/ceph-ansible pull 1557 (last updated 2017-05-22 16:21:15 UTC)
  * Red Hat Product Errata: RHBA-2017:1496, SHIPPED_LIVE - ceph-installer, ceph-ansible, and ceph-iscsi-ansible update (last updated 2017-06-19 17:14:02 UTC)

Description Daniel Horák 2017-05-22 08:24:09 UTC
Description of problem:
  Commit 76ddcbc271[1] breaks the conditions in install_on_redhat.yml[2] for the following tasks (and possibly others):
    * install distro or red hat storage ceph mon
    * install distro or red hat storage ceph osd
    * install distro or red hat storage ceph mds
    * install distro or red hat storage ceph-fuse
    * install distro or red hat storage ceph base
  
  Now the condition for installing the 'ceph-mon' package, for example, looks like this:
    when:
    - mon_group_name in group_names
      or ceph_origin == "distro"
      or ceph_custom
  Which means: the ceph-mon package is installed on any system that is in the mon group *OR* on any system where ceph_origin == "distro" (or ceph_custom is set), regardless of the node's role. The group-membership check should be combined with the other checks by a logical AND instead of OR.
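  The intended logic would keep the group check and the origin check as separate
  items in the 'when:' list, which Ansible combines with a logical AND. Just a
  sketch of the intent, not necessarily the exact wording of the upstream fix:

    # Sketch only; the real task body in ceph-ansible differs, the point is the
    # AND semantics of separate 'when:' list items.
    - name: install distro or red hat storage ceph mon
      yum:
        name: ceph-mon
        state: present
      when:
        - mon_group_name in group_names             # only on nodes in the mon group...
        - ceph_origin == 'distro' or ceph_custom    # ...AND only for distro/custom installs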

Version-Release number of selected component (if applicable):
  ceph-ansible-2.2.6-1.el7scon.noarch
  It was broken by commit 76ddcbc271[1].

How reproducible:
  100%

Steps to Reproduce:
1. Try to create a Ceph cluster via ceph-ansible with ceph_origin set to "distro" (see the sketch below).
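For reference, the reproducer boils down to one variable in the cluster
variables; a minimal sketch (the file path is the usual ceph-ansible location,
everything else in the reproducer environment is omitted here):

  # group_vars/all.yml (sketch; only the variable relevant to this bug)
  ceph_origin: distro   # install Ceph packages from the distribution repositories

Running the site.yml playbook against an inventory with separate monitor and
OSD groups then fails as shown below on the monitor node.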

Actual results:
The Ceph cluster creation fails with the following (and similar) errors when it tries to install ceph-osd on a monitor node:
  fatal: [mon1.example.com]: FAILED! => {
    "changed": false,
    "failed": true,
    "invocation": {
        "module_args": {
            "conf_file": null,
            "disable_gpg_check": false,
            "disablerepo": null,
            "enablerepo": null,
            "exclude": null,
            "install_repoquery": true,
            "list": null,
            "name": [
                "ceph-osd"
            ],
            "state": "present",
            "update_cache": false,
            "validate_certs": true
        }
    },
    "msg": "No package matching 'ceph-osd' found available, installed or updated",
    "rc": 126,
    "results": [
        "No package matching 'ceph-osd' found available, installed or updated"
    ]
  }

Expected results:
  ceph-ansible installs the appropriate Ceph packages on each node according to the group/role assigned to that node.

Additional info:
[1] https://github.com/ceph/ceph-ansible/commit/76ddcbc2719d316b7746f2d4567521b9fbfcc568#diff-22a320c218700276113cd8d961ef249f
[2] https://github.com/ceph/ceph-ansible/blob/master/roles/ceph-common/tasks/installs/install_on_redhat.yml#L71

Comment 2 seb 2017-05-22 12:24:12 UTC
Thanks for reporting this, here is an upstream patch: https://github.com/ceph/ceph-ansible/pull/1557

Do you mind testing it? Thanks

Comment 3 Daniel Horák 2017-05-23 10:57:03 UTC
I can only retest it through our automated workflow, where I initially discovered the issue. That won't cover the whole patch, so I'm not sure whether it is enough.

Comment 4 Ian Colle 2017-05-24 05:37:01 UTC
It looks like the upstream fix is still under review. Seb, please resolve, test, and merge.

Comment 5 seb 2017-05-24 10:15:09 UTC
Yes Daniel, that should be enough, thanks.

Ian, indeed, we are still discussing the right approach; we will probably merge this today and do the backport as well.

Comment 6 John Poelstra 2017-05-24 15:07:55 UTC
Discussed at the program meeting; expected to be in ON_QA today.

Comment 9 Daniel Horák 2017-05-30 12:55:16 UTC
I retested it in our workflow, where I initially discovered this issue, and it works as expected.

# rpm -qa | grep ansible
  ansible-2.2.2.0-1.el7.noarch
  ceph-ansible-2.2.7-1.el7scon.noarch

>> VERIFIED

Comment 11 errata-xmlrpc 2017-06-19 13:18:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1496

Comment 12 Red Hat Bugzilla 2023-09-14 03:57:55 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.

