Bug 1281836 - rhui-installer reports success but returns error code
Status: CLOSED ERRATA
Product: Red Hat Update Infrastructure for Cloud Providers
Classification: Red Hat
Component: Tools
Version: 3.0.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Assigned To: Patrick Creech
QA Contact: Irina Gulina
Keywords: EasyFix
Depends On:
Blocks:
Reported: 2015-11-13 09:44 EST by Irina Gulina
Modified: 2017-03-01 17:11 EST (History)
2 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-03-01 17:11:07 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Irina Gulina 2015-11-13 09:44:26 EST
Description of problem:
On a RHEL 6 RHUA machine:

>> rhui-installer --remote-fs-server=cds01.example.com:rhui_content_642 --remote-fs-type=glusterfs
Installing             Done                                               [100%] [....................]
  Success!
    The initial credentials are admin / VnQ63aGpjnctFYHfa4KRFv7KHvf9DraH
    Re-running the installer will not update your password.
  The full log is at /var/log/kafo/configuration.log
>>  echo $?
2
>> rhui-installer --remote-fs-server=cds01.example.com:rhui_content_642 --remote-fs-type=glusterfs
Installing             Done                                               [100%] [....................]
  Success!
    The initial credentials are admin / VnQ63aGpjnctFYHfa4KRFv7KHvf9DraH
    Re-running the installer will not update your password.
  The full log is at /var/log/kafo/configuration.log
>> echo $?
2

the same via ansible:
TASK: [rhua | call rhui installer if gluster] ********************************* 
failed: [XXXX] => {"changed": true, "cmd": ["rhui-installer", "--remote-fs-server=cds01.example.com:rhui_content_642", "--remote-fs-type=glusterfs"], "delta": "0:00:35.418900", "end": "2015-11-13 09:13:09.141970", "rc": 2, "start": "2015-11-13 09:12:33.723070", "warnings": []}
stderr: Debug: importing '/usr/share/rhui-installer/module: 0 00:00:00
Debug: template[/usr/share/rhui-installer/modules/: 0 0.0/s 00:00:01
Debug: hiera(): Looking up apache::sendfile in YAM: 0 0.0/s 00:00:02
Debug: hiera(): Looking for data source common    : 0 0.0/s 00:00:03
Debug: hiera(): Cannot find datafile /var/lib/hier: 0 0.0/s 00:00:04
Debug: template[/usr/share/rhui-installer/modules/: 0 0.0/s 00:00:05
Debug: Adding relationship from Ca[rhui-default-ca: 0 0.0/s 00:00:06
Debug: /File[/etc/pulp/repo_auth.conf]/seluser: Fo: 0 0.0/s 00:00:08
Debug: /File[/var/lib/puppet/concat/15-default-ssl: 0 0.0/s 00:00:09
Debug: /Stage[main]/Apache::Mod::Wsgi/File[wsgi.co: 0/403, 0%, 0.0/s, elapsed: 00:00:10
Info: RESOURCE Concat::Fragment[pulp-https-servera: 11/403, 2%, 1.8/s, elapsed: 00:00:11, ETA: 00:03:33
Info: RESOURCE Cert[rhua.eu-west-1.compute.interna: 192/403, 47%, 1.8/s, elapsed: 00:00:12, ETA: 00:01:55
Notice: /Stage[main]/Rhua::Mounts/Mount[mount-remo: 294/403, 72%, 24.4/s, elapsed: 00:00:13, ETA: 00:00:04
Debug: /File[/var/lib/puppet/concat/05-pulp-https.: 334/403, 82%, 24.4/s, elapsed: 00:00:14, ETA: 00:00:02
Debug: Executing '/sbin/chkconfig qpidd'          : 396/403, 98%, 32.9/s, elapsed: 00:00:15
Notice: /Stage[main]/Pulp::Database/Exec[migrate_p: 398/403, 98%, 32.9/s, elapsed: 00:00:16
Debug: Executing '/sbin/chkconfig pulp_resource_ma: 399/403, 99%, 33.1/s, elapsed: 00:00:17
Notice: /Stage[main]/Pulp::Service/Service[pulp_re: 400/403, 99%, 33.2/s, elapsed: 00:00:20
Notice: /Stage[main]/Pulp::Service/Service[pulp_wo: 401/403, 99%, 33.2/s, elapsed: 00:00:27
Info: RESOURCE File[/etc/puppet/rack/config.ru]   : 402/403, 99%, 33.2/s, elapsed: 00:00:28
Notice: /Stage[main]/Apache::Service/Service[httpd: 402/403, 99%, 31.4/s, elapsed: 00:00:29
Done                                              : 403/403, 100%, 8.9/s, elapsed: 00:00:31
Done                                              : 403/403, 100%, 8.9/s, elapsed: 00:00:31
stdout:   Success!
    The initial credentials are admin / VnQ63aGpjnctFYHfa4KRFv7KHvf9DraH
    Re-running the installer will not update your password.
  The full log is at /var/log/kafo/configuration.log

FATAL: all hosts have already failed -- aborting
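Until the installer's exit status is fixed, a playbook can tolerate the spurious code 2 explicitly. This is a workaround sketch, not from the bug report; the task name and registered variable are illustrative, and it assumes Ansible's standard `register`/`failed_when` mechanism:

```yaml
# Workaround sketch: accept rc 0, or the spurious rc 2, as success.
- name: call rhui installer if gluster
  command: rhui-installer --remote-fs-server=cds01.example.com:rhui_content_642 --remote-fs-type=glusterfs
  register: rhui_result
  failed_when: rhui_result.rc not in [0, 2]
```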


Version-Release number of selected component (if applicable):
>> rpm -qa rhui-installer
rhui-installer-0.0.24-1.el6ui.noarch


How reproducible:
always

Steps to Reproduce:
1. call rhui-installer with glusterfs
2. check exit code


Actual results:
>> echo $?
2

Expected results:
>> echo $?
0
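As a stopgap until the fix, callers can map the spurious exit code 2 back to success in a small wrapper. `check_rhui_rc` below is a hypothetical helper, not part of rhui-installer:

```shell
#!/bin/sh
# check_rhui_rc: hypothetical stopgap helper. Treats the spurious
# exit code 2 from an otherwise successful rhui-installer run as
# success (0); any other non-zero code is passed through unchanged.
check_rhui_rc() {
    rc="$1"
    if [ "$rc" -eq 0 ] || [ "$rc" -eq 2 ]; then
        return 0
    fi
    return "$rc"
}

# Usage sketch (installer arguments as in the report):
#   rhui-installer --remote-fs-server=cds01.example.com:rhui_content_642 \
#                  --remote-fs-type=glusterfs
#   check_rhui_rc $? || exit $?
```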
Comment 5 Irina Gulina 2016-07-20 11:55:38 EDT
Verified on RHEL 6 and RHEL 7 with the 20160719 ISOs:

....
Installing             Done                                               [100%] []
  Success!
    The initial credentials are admin / Qp5tFsdJ59vzShNKSVmMgkDmUhibv2oz
    Re-running the installer will not update your password.
  The full log is at /var/log/kafo/configuration.log
>> echo $?
0

.....

PLAY RECAP *********************************************************************
ec2-176-34-201-162.eu-west-1.compute.amazonaws.com : ok=14   changed=1    unreachable=0    failed=0   
ec2-54-155-168-137.eu-west-1.compute.amazonaws.com : ok=29   changed=7    unreachable=0    failed=0   
ec2-54-216-131-93.eu-west-1.compute.amazonaws.com : ok=35   changed=13   unreachable=0    failed=0   
ec2-54-217-150-153.eu-west-1.compute.amazonaws.com : ok=33   changed=10   unreachable=0    failed=0   
ec2-54-217-175-71.eu-west-1.compute.amazonaws.com : ok=14   changed=1    unreachable=0    failed=0
Comment 6 errata-xmlrpc 2017-03-01 17:11:07 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:0367
