Bug 1387438

Summary: 3.4 playbook failure to remove repo.

Product: OKD
Reporter: Peter Ruan <pruan>
Component: Installer
Assignee: Jason DeTiberus <jdetiber>
Status: CLOSED NOTABUG
QA Contact: Johnny Liu <jialiu>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 3.x
CC: aos-bugs, mmccomas
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-10-21 02:00:19 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:

Description Peter Ruan 2016-10-20 21:28:54 UTC
Description of problem:
  While trying to bring up a 3.4 cluster using the OpenShift Ansible playbook, it looks like the playbook had a problem removing /etc/yum.repos.d/ose34-install.repo.
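
For reference, the error in the log below is the standard failure message of Ansible's wait_for module polling a file with state: absent. A minimal sketch of what the failing task likely looks like (an assumption reconstructed from the log message; the actual QE playbook source is not attached to this bug):

# Sketch only -- reconstructed from the log message, not the real
# pre_actions playbook. wait_for with state: absent polls until the
# file disappears and fails after `timeout` seconds.
- name: Wait for yum repo file unlock because of other installation job
  wait_for:
    path: /etc/yum.repos.d/ose34-install.repo
    state: absent
    timeout: 300   # wait_for's default; matches "elapsed": 300 in the log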



Version-Release number of selected component (if applicable):


How reproducible:
always

Steps to Reproduce:
1. Run the "Launch Environment Flexy" job to bring up a 3.4 cluster with the OpenShift Ansible playbook.
2. Watch the "pre actions on slaves" play, specifically the "Wait for yum repo file unlock because of other installation job" task.

Actual results:


PLAY [pre actions on slaves] ***************************************************

TASK [setup] *******************************************************************
Thursday 20 October 2016  20:37:19 +0000 (0:00:00.076)       0:00:00.076 ****** 
ok: [localhost]

TASK [set_fact] ****************************************************************
Thursday 20 October 2016  20:37:20 +0000 (0:00:01.254)       0:00:01.331 ****** 
ok: [localhost] => {"ansible_facts": {"change_os_instance_name": false, "openshift_playbook_rpm_repos": [{"baseurl": "http://download.eng.bos.redhat.com/rcm-guest/puddles/RHAOS/AtomicOpenShift/3.4/latest/x86_64/os", "enabled": 1, "gpgcheck": 0, "id": "aos34-devel", "name": "aos34-devel"}], "use_rpm_playbook": true, "vm_list": ""}, "changed": false}

TASK [set_fact] ****************************************************************
Thursday 20 October 2016  20:37:21 +0000 (0:00:00.119)       0:00:01.450 ****** 
skipping: [localhost] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

TASK [Change openstack instance name] ******************************************
Thursday 20 October 2016  20:37:21 +0000 (0:00:00.089)       0:00:01.540 ****** 
skipping: [localhost] => (item=)  => {"changed": false, "item": "", "skip_reason": "Conditional check failed", "skipped": true}

TASK [set_fact] ****************************************************************
Thursday 20 October 2016  20:37:21 +0000 (0:00:00.096)       0:00:01.637 ****** 
ok: [localhost] => {"ansible_facts": {"work_dir": "/home/slave1/workspace/Launch Environment Flexy"}, "changed": false}

TASK [Wait for yum repo file unlock because of other installation job] *********
Thursday 20 October 2016  20:37:21 +0000 (0:00:00.104)       0:00:01.741 ****** 
fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 300, "failed": true, "msg": "Timeout when waiting for /etc/yum.repos.d/ose34-install.repo to be absent."}
	to retry, use: --limit @/home/slave1/workspace/Launch Environment Flexy/private-openshift-misc/v3-launch-templates/functionality-testing/aos-34/extra-ansible/pre_actions.retry

PLAY RECAP *********************************************************************
localhost                  : ok=3    changed=0    unreachable=0    failed=1   


Expected results:
The "pre actions on slaves" play passes the repo-file wait and the 3.4 cluster launch proceeds.

Additional info:
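Since wait_for with state: absent only polls for the file to disappear and never deletes it, a stale /etc/yum.repos.d/ose34-install.repo left behind by an aborted job would block every subsequent run on the same slave until the file is removed. A hedged cleanup sketch (assumed; not part of the original playbook), using Ansible's file module:

# Assumed workaround, not present in the original playbook: force-remove
# a repo file left over from an aborted installation job.
- name: Remove stale ose34-install.repo from a previous job
  file:
    path: /etc/yum.repos.d/ose34-install.repo
    state: absent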

Comment 1 Johnny Liu 2016-10-21 02:00:19 UTC
"pre actions on slaves" playbooks is only a part of QE's cucushift launcher scripts, not related to installer.

Comment 2 Peter Ruan 2016-10-21 06:41:26 UTC
@Jianlin, I don't see that message at all under Cucushift; where is this playbook located?