Bug 1390655 - RHV deployment sometimes fails if RH insights is enabled
Summary: RHV deployment sometimes fails if RH insights is enabled
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Quickstart Cloud Installer
Classification: Red Hat
Component: Installation - RHEV
Version: 1.1
Hardware: All
OS: All
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 1.1
Assignee: Fabian von Feilitzsch
QA Contact: Tasos Papaioannou
Docs Contact: Dan Macpherson
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-11-01 15:28 UTC by Tasos Papaioannou
Modified: 2017-02-28 01:40 UTC
CC: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-02-28 01:40:45 UTC
Target Upstream Version:


Attachments: none


Links
Red Hat Product Errata RHEA-2017:0335 (normal, SHIPPED_LIVE): Red Hat Quickstart Installer 1.1, last updated 2017-02-28 06:36:13 UTC

Description Tasos Papaioannou 2016-11-01 15:28:13 UTC
Description of problem:

RHV deployment with RH Insights enabled can fail due to simultaneous yum installs of the redhat-access-insights package (via puppet-agent) and the rhevm packages (via Ansible): the two transactions contend for the yum lock.
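
For illustration, the same contention can be provoked by hand with two overlapping yum transactions (hypothetical commands, not part of the deployment; the backgrounded install stands in for puppet-agent, the second for the Ansible yum task):

# First transaction takes /var/run/yum.pid, standing in for puppet-agent
# installing redhat-access-insights:
yum -y install redhat-access-insights &
# Second transaction, standing in for the Ansible yum task, reports the same
# "Existing lock /var/run/yum.pid" / "Another app is currently holding the
# yum lock" message seen in ansible.log:
yum -y install rhevm glusterfs-fuse python-enum34
wait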

Version-Release number of selected component (if applicable):

QCI-1.1-RHEL-7-20161026.t.0.

How reproducible:

Inconsistently. The issue depends on the timing of puppet-agent and ansible tasks.

Steps to Reproduce:
1.) Deploy RHV w/ insights enabled.
2.) See the failed task in the deployment's ansible.log, e.g.:

2016-10-31 15:58:41,305 p=30967 u=foreman |  failed: [tpapaioa-engine.cfme.lab.eng.rdu2.redhat.com] (item=[u'rhevm', u'glusterfs-fuse', u'python-enum34']) => {"ansible_job_id": "201261156787.11006", "changed": t
rue, "failed": true, "finished": 1, "invocation": {"module_args": {"conf_file": null, "disable_gpg_check": false, "disablerepo": null, "enablerepo": null, "exclude": null, "install_repoquery": true, "list": null
, "name": ["rhevm", "glusterfs-fuse", "python-enum34"], "state": "present", "update_cache": false, "validate_certs": true}, "module_name": "async_status"}, "item": ["rhevm", "glusterfs-fuse", "python-enum34"], "
msg": "Existing lock /var/run/yum.pid: another copy is running as pid 9296.\nAnother app is currently holding the yum lock; waiting for it to exit...\n  The other application is: yum\n    Memory : 100 M RSS (421
 MB VSZ)\n    Started: Mon Oct 31 19:46:02 2016 - 00:30 ago\n    State  : Uninterruptible, pid: 9296\n

[snip]
 Downloading packages:\n"]}
2016-10-31 15:58:41,315 p=30967 u=foreman |  NO MORE HOSTS LEFT *************************************************************
2016-10-31 15:58:41,315 p=30967 u=foreman |  PLAY RECAP *********************************************************************
2016-10-31 15:58:41,315 p=30967 u=foreman |  tpapaioa-engine.cfme.lab.eng.rdu2.redhat.com : ok=6    changed=3    unreachable=0    failed=1   

3.) In the RHV system's /var/log/messages, see that the PID of the other yum process mentioned in the ansible log was installing redhat-access-insights:

Oct 31 19:46:13 tpapaioa-engine yum[9296]: Installed: libcgroup-0.41-11.el7.x86_64
[...]
Oct 31 19:46:26 tpapaioa-engine yum[9296]: Installed: redhat-access-insights-1.0.11-0.el7.noarch

Actual results:

Puppet's yum install of redhat-access-insights can lock out the Ansible playbook's yum install of the RHV-related RPMs.

Expected results:

Successful yum install of all packages.

Additional info:

Comment 2 Fabian von Feilitzsch 2016-11-16 16:27:53 UTC
https://github.com/fusor/ansible-ovirt/pull/10

This makes the playbook wait (up to 5 minutes) for any existing yum tasks to finish before trying to install anything.
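
In plain shell terms the wait amounts to something like the following (a rough sketch of the idea only; the actual change is the Ansible playbook update in the linked pull request):

# Poll the yum lock file for up to 5 minutes (60 x 5s), then proceed with
# the install once no other yum transaction is running.
for i in $(seq 1 60); do
    [ -f /var/run/yum.pid ] || break
    sleep 5
done
yum -y install rhevm glusterfs-fuse python-enum34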

Comment 3 John Matthews 2016-11-22 13:39:19 UTC
Expected in 11/21 ISO

Comment 4 Tasos Papaioannou 2016-12-02 20:00:13 UTC
Verified on QCI-1.1-RHEL-7-20161128.t.0. No further errors seen in deployments w/ access insights enabled.

Comment 7 errata-xmlrpc 2017-02-28 01:40:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:0335

