Bug 1730344 - With ansible-runner there is no partial success as is with ansible-playbook
Summary: With ansible-runner there is no partial success as is with ansible-playbook
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Satellite 6
Classification: Red Hat
Component: Ansible
Version: 6.6.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: 6.6.0
Assignee: Adam Ruzicka
QA Contact: Lukas Pramuk
URL:
Whiteboard:
Depends On:
Blocks: 1698178
 
Reported: 2019-07-16 13:54 UTC by Lukas Pramuk
Modified: 2019-10-22 19:50 UTC (History)

Fixed In Version: tfm-rubygem-foreman_ansible_core-3.0.1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-10-22 19:50:32 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Foreman Issue Tracker 27311 Normal Closed With ansible-runner there is no partial success as is with ansible-playbook 2020-02-25 14:13:05 UTC

Description Lukas Pramuk 2019-07-16 13:54:30 UTC
Description of problem:
With ansible-runner there is no partial success as there is with ansible-playbook.
If an Ansible REX job fails for one host, the other hosts are marked as failed as well, even though the job succeeded on them.

Version-Release number of selected component (if applicable):
@Satellite-6.6.0 Snap9
tfm-rubygem-foreman_ansible-3.0.3-3.el7sat.noarch
tfm-rubygem-foreman_ansible_core-3.0.0-1.el7sat.noarch
rubygem-smart_proxy_ansible-3.0.1-1.el7sat.noarch
ansible-runner-1.3.4-2.el7ar.noarch
ansible-2.8.2-1.el7ae.noarch

How reproducible:
deterministic

Steps to Reproduce:
1. Have Satellite and two hosts set up for REX using ansible-runner

# yum install ansible-runner
# hammer settings set --name ansible_implementation --value ansible-runner

2. On the first host create a directory; on the second host it does not exist

@host # mkdir /tmp/foo

3. Run Ansible Command job against the two hosts with command "cd /tmp/foo"

-------------
   1:
   2:PLAY [all] *********************************************************************
   3:
   4:TASK [Gathering Facts] *********************************************************
   5:ok: [host1.example.com]
   6:
   7:TASK [shell] *******************************************************************
   8:changed: [host1.example.com]
   9:
  10:TASK [debug] *******************************************************************
  11:ok: [host1.example.com] => {
  12:    "out": {
  13:        "changed": true, 
  14:        "cmd": "cd /tmp/foo\n", 
  15:        "delta": "0:00:00.005329", 
  16:        "end": "2019-07-16 08:49:49.585670", 
  17:        "failed": false, 
  18:        "rc": 0, 
  19:        "start": "2019-07-16 08:49:49.580341", 
  20:        "stderr": "", 
  21:        "stderr_lines": [], 
  22:        "stdout": "", 
  23:        "stdout_lines": []
  24:    }
  25:}
  26:PLAY RECAP *********************************************************************
  27:host1.example.com : ok=3    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
  28:Exit status: 2
-------------
   1:
   2:PLAY [all] *********************************************************************
   3:
   4:TASK [Gathering Facts] *********************************************************
   5:ok: [host2.example.com]
   6:
   7:TASK [shell] *******************************************************************
   8:fatal: [host2.example.com]: FAILED! => {"changed": true, "cmd": "cd /tmp/foo\n", "delta": "0:00:00.004742", "end": "2019-07-16 08:49:49.584202", "msg": "non-zero return code", "rc": 1, "start": "2019-07-16 08:49:49.579460", "stderr": "/bin/sh: line 0: cd: /tmp/foo: No such file or directory", "stderr_lines": ["/bin/sh: line 0: cd: /tmp/foo: No such file or directory"], "stdout": "", "stdout_lines": []}
   9:
  10:TASK [debug] *******************************************************************
  11:PLAY RECAP *********************************************************************
  12:host2.example.com : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
  13:Exit status: 2
-------------


Actual results:
100% failure (if one host fails, all hosts are marked as failed)

Expected results:
50% success (the failure of one host does not prevent the other hosts from succeeding)
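The expected behavior amounts to deriving each host's status from the per-host PLAY RECAP counts rather than from the overall exit status (which is 2 whenever any host fails). A minimal sketch of that logic, assuming a stats structure shaped like the per-host counters ansible-runner exposes (`ok`, `failures`, `dark` for unreachable hosts); the host names and counts mirror the reproducer above:

```python
# Per-host recap counters, filled in from the PLAY RECAP lines above
# (shape modeled on ansible-runner's per-host stats; illustrative only).
stats = {
    "ok": {"host1.example.com": 3, "host2.example.com": 1},
    "failures": {"host2.example.com": 1},
    "dark": {},  # unreachable hosts
}

def host_statuses(stats):
    """Mark a host failed only if it itself failed or was unreachable."""
    hosts = set()
    for counts in stats.values():
        hosts.update(counts)
    return {
        host: "failed"
        if stats.get("failures", {}).get(host) or stats.get("dark", {}).get(host)
        else "success"
        for host in sorted(hosts)
    }

print(host_statuses(stats))
# {'host1.example.com': 'success', 'host2.example.com': 'failed'}
```

With per-host evaluation, host1 is reported as a success and only host2 as a failure, i.e. the expected 50% success for this two-host job.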

Comment 3 Marek Hulan 2019-07-16 13:57:44 UTC
Does this ring a bell? Do we share the overall status for all hosts now?

Comment 4 Adam Ruzicka 2019-07-16 14:00:48 UTC
This is a valid bug, I have a local reproducer and I'm working on a fix.

Comment 5 Adam Ruzicka 2019-07-16 14:03:21 UTC
Created redmine issue https://projects.theforeman.org/issues/27311 from this bug

Comment 7 Lukas Pramuk 2019-09-05 12:36:59 UTC
FailedQA.

@Satellite-6.6.0 Snap18
tfm-rubygem-foreman_ansible_core-3.0.0-1.el7sat.noarch

"Fixed In Version" contains wrong package ! tfm-rubygem-foreman_ansible 
The affected code is provided by tfm-rubygem-foreman_ansible_core

As a result? the fix didn't land downstream

Comment 8 Lukas Pramuk 2019-09-17 12:10:26 UTC
VERIFIED.

@Satellite 6.6.0 Snap 20
tfm-rubygem-foreman_ansible_core-3.0.1-1.el7sat.noarch

Verified using the manual reproducer described in comment #0:

the result is correctly 50% success

Comment 9 Bryan Kearney 2019-10-22 19:50:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:3172

