Bug 1589134 - openshift-ansible 'dict object' crd has no attribute 'metadata'
Summary: openshift-ansible 'dict object' crd has no attribute 'metadata'
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Service Broker
Version: 3.10.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 3.10.z
Assignee: Fabian von Feilitzsch
QA Contact: Zihan Tang
URL:
Whiteboard: aos-scalability-310
Depends On:
Blocks:
 
Reported: 2018-06-08 13:11 UTC by Matt Bruzek
Modified: 2018-07-30 19:18 UTC (History)
11 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-07-30 19:17:50 UTC
Target Upstream Version:


Attachments


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2018:1816 None None None 2018-07-30 19:18:17 UTC

Description Matt Bruzek 2018-06-08 13:11:45 UTC
Description of problem:

Our installer failed with the message: 

The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'metadata'\n\nThe error appears to have been in '/home/cloud-user/openshift-ansible/roles/ansible_service_broker/tasks/install.yml': line 119, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.
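For context on how this message arises: any Jinja2 reference to a key that is absent from a dict-valued variable fails at templating time with exactly this wording. A minimal standalone sketch (the variable name is reused for illustration only, not taken from the role):

```yaml
# Minimal illustration of the failure mode: 'crd' is a dict without a
# 'metadata' key, so templating '{{ crd.metadata }}' raises
# AnsibleUndefinedVariable: 'dict object' has no attribute 'metadata'.
- hosts: localhost
  gather_facts: false
  vars:
    crd:
      kind: CustomResourceDefinition   # note: no 'metadata' key
  tasks:
    - debug:
        msg: "{{ crd.metadata.name }}"
```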

Version-Release number of the following components:
$ git describe
openshift-ansible-3.10.0-0.63.0-17-g37e2c16
$ rpm -q ansible
ansible-2.4.4.0-1.el7ae.noarch
$ ansible --version
ansible 2.4.3.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/home/cloud-user/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, May 30 2018, 12:39:41) [GCC 4.8.5 20150623 (Red Hat 4.8.5-34)]

How reproducible: 1 out of 1 

Steps to Reproduce:
1. Install OpenStack.
2. Run the installer with the .63 content, attempting to install 3.10 on OpenStack:
source /home/cloud-user/keystonerc; ansible-playbook -vvv --user openshift -i inventory -i openshift-ansible/playbooks/openstack/inventory.py openshift-ansible/playbooks/openstack/openshift-cluster/install.yml

Actual results:
TASK [ansible_service_broker : Create custom resource definitions for asb] *****
task path: /home/cloud-user/openshift-ansible/roles/ansible_service_broker/tasks/install.yml:119
Thursday 07 June 2018  18:59:45 -0400 (0:00:00.061)       0:19:00.759 ********* 
fatal: [master-2.scale-ci.example.com]: FAILED! => {
    "msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'metadata'\n\nThe error appears to have been in '/home/cloud-user/openshift-ansible/roles/ansible_service_broker/tasks/install.yml': line 119, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Create custom resource definitions for asb\n  ^ here\n\nexception type: <class 'ansible.errors.AnsibleUndefinedVariable'>\nexception: 'dict object' has no attribute 'metadata'"
}

PLAY RECAP *********************************************************************
app-node-0.scale-ci.example.com : ok=165  changed=63   unreachable=0    failed=0   
app-node-1.scale-ci.example.com : ok=165  changed=63   unreachable=0    failed=0   
cns-0.scale-ci.example.com : ok=167  changed=65   unreachable=0    failed=0   
cns-1.scale-ci.example.com : ok=167  changed=65   unreachable=0    failed=0   
cns-2.scale-ci.example.com : ok=167  changed=65   unreachable=0    failed=0   
infra-node-0.scale-ci.example.com : ok=169  changed=64   unreachable=0    failed=0   
infra-node-1.scale-ci.example.com : ok=165  changed=63   unreachable=0    failed=0   
infra-node-2.scale-ci.example.com : ok=165  changed=63   unreachable=0    failed=0   
lb-0.scale-ci.example.com  : ok=96   changed=19   unreachable=0    failed=0   
localhost                  : ok=30   changed=0    unreachable=0    failed=0   
master-0.scale-ci.example.com : ok=330  changed=137  unreachable=0    failed=0   
master-1.scale-ci.example.com : ok=330  changed=137  unreachable=0    failed=0   
master-2.scale-ci.example.com : ok=1010 changed=430  unreachable=0    failed=1   


INSTALLER STATUS ***************************************************************
Initialization               : Complete (0:00:20)
Health Check                 : Complete (0:00:12)
Node Preparation             : Complete (0:02:44)
etcd Install                 : Complete (0:00:23)
Load Balancer Install        : Complete (0:00:06)
Master Install               : Complete (0:03:12)
Master Additional Install    : Complete (0:00:23)
GlusterFS Install            : Complete (0:02:58)
Hosted Install               : Complete (0:00:50)
Cluster Monitoring Operator  : Complete (0:00:33)
Web Console Install          : Complete (0:00:25)
Logging Install              : Complete (0:03:39)
Service Catalog Install      : In Progress (0:00:32)
	This phase can be restarted by running: playbooks/openshift-service-catalog/config.yml
Thursday 07 June 2018  18:59:45 -0400 (0:00:00.045)       0:19:00.805 ********* 
=============================================================================== 
openshift_node : Check status of node image pre-pull ------------------- 90.93s
/home/cloud-user/openshift-ansible/roles/openshift_node/tasks/config.yml:54 ---
openshift_control_plane : Wait for all control plane pods to become ready -- 90.38s
/home/cloud-user/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:242 
Ensure openshift-ansible installer package deps are installed ---------- 65.25s
/home/cloud-user/openshift-ansible/playbooks/init/base_packages.yml:31 --------
openshift_storage_glusterfs : Wait for GlusterFS pods ------------------ 62.03s
/home/cloud-user/openshift-ansible/roles/openshift_storage_glusterfs/tasks/glusterfs_deploy.yml:108 
openshift_cluster_monitoring_operator : Wait for the ServiceMonitor CRD to be created -- 30.55s
/home/cloud-user/openshift-ansible/roles/openshift_cluster_monitoring_operator/tasks/install.yaml:53 
openshift_storage_glusterfs : Wait for heketi pod ---------------------- 20.88s
/home/cloud-user/openshift-ansible/roles/openshift_storage_glusterfs/tasks/heketi_deploy_part2.yml:105 
openshift_storage_glusterfs : Wait for deploy-heketi pod --------------- 20.87s
/home/cloud-user/openshift-ansible/roles/openshift_storage_glusterfs/tasks/glusterfs_common.yml:221 
openshift_web_console : Verify that the console is running ------------- 20.86s
/home/cloud-user/openshift-ansible/roles/openshift_web_console/tasks/start.yml:2 
openshift_hosted : Ensure OpenShift pod correctly rolls out (best-effort today) -- 16.62s
/home/cloud-user/openshift-ansible/roles/openshift_hosted/tasks/wait_for_pod.yml:4 
openshift_node : install needed rpm(s) --------------------------------- 15.39s
/home/cloud-user/openshift-ansible/roles/openshift_node/tasks/install_rpms.yml:2 
container_runtime : Fixup SELinux permissions for docker --------------- 14.56s
/home/cloud-user/openshift-ansible/roles/container_runtime/tasks/package_docker.yml:159 
openshift_control_plane : Wait for control plane pods to appear -------- 13.33s
/home/cloud-user/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:190 
Run health checks (install) - EL --------------------------------------- 11.67s
/home/cloud-user/openshift-ansible/playbooks/openshift-checks/private/install.yml:24 
openshift_storage_glusterfs : Wait for glusterblock provisioner pod ---- 10.58s
/home/cloud-user/openshift-ansible/roles/openshift_storage_glusterfs/tasks/glusterblock_deploy.yml:53 
openshift_storage_glusterfs : Wait for copy job to finish -------------- 10.58s
/home/cloud-user/openshift-ansible/roles/openshift_storage_glusterfs/tasks/heketi_deploy_part2.yml:14 
openshift_logging_fluentd : Execute the fluentd temporary labeling script --- 8.83s
/home/cloud-user/openshift-ansible/roles/openshift_logging_fluentd/tasks/label_and_wait.yaml:13 
Approve bootstrap nodes ------------------------------------------------- 7.45s
/home/cloud-user/openshift-ansible/playbooks/openshift-node/private/join.yml:26 
openshift_logging_fluentd : Create temporary fluentd labeling script ---- 7.35s
/home/cloud-user/openshift-ansible/roles/openshift_logging_fluentd/tasks/label_and_wait.yaml:7 
openshift_hosted : Create OpenShift router ------------------------------ 6.70s
/home/cloud-user/openshift-ansible/roles/openshift_hosted/tasks/router.yml:85 -
Gather Cluster facts ---------------------------------------------------- 6.54s
/home/cloud-user/openshift-ansible/playbooks/init/cluster_facts.yml:27 --------


Failure summary:


  1. Hosts:    master-2.scale-ci.example.com
     Play:     Service Catalog
     Task:     Create custom resource definitions for asb
     Message:  The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'metadata'
               
               The error appears to have been in '/home/cloud-user/openshift-ansible/roles/ansible_service_broker/tasks/install.yml': line 119, column 3, but may
               be elsewhere in the file depending on the exact syntax problem.
               
               The offending line appears to be:
               
               
               - name: Create custom resource definitions for asb
                 ^ here
               
               exception type: <class 'ansible.errors.AnsibleUndefinedVariable'>
               exception: 'dict object' has no attribute 'metadata'

Expected results:
I expected the installer to complete.

Additional info:

Please attach logs from ansible-playbook with the -vvv flag

Comment 3 Matt Bruzek 2018-06-08 14:15:03 UTC
I wrote a small Ansible playbook to test this code; the 3 files exist and all have metadata keys.

http://pastebin.test.redhat.com/601164

Is it possible that the working directory is not relative to the ansible_service_broker directory? In this case I ran the test program from: /home/cloud-user/openshift-ansible/roles/ansible_service_broker
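A sketch of the kind of check described above (a hypothetical reconstruction, since the pastebin is internal; the file path is illustrative):

```yaml
# Hypothetical reconstruction of the verification playbook: load each
# CRD file used by the role and assert it has a top-level 'metadata' key.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Assert each asb CRD file defines metadata
      assert:
        that:
          - (lookup('file', item) | from_yaml).metadata is defined
      # illustrative path; the role ships three such files
      with_items:
        - roles/ansible_service_broker/files/broker.crd.yaml
```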

Comment 4 Matt Bruzek 2018-06-08 19:45:19 UTC
I redeployed this same thing in CI and it failed with the exact same message:

The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'metadata'

The error appears to have been in '/home/cloud-user/openshift-ansible/roles/ansible_service_broker/tasks/install.yml': line 119, column 3, but may
be elsewhere in the file depending on the exact syntax problem.

Comment 5 Jason Montleon 2018-06-11 13:50:45 UTC
Where did openshift-ansible-3.10.0-0.63.0-17-g37e2c16 come from? Is this an upstream package?

Comment 7 John Matthews 2018-06-14 12:35:23 UTC
Could we have an update on the question asked in comment #5?

We were unable to reproduce this issue in our testing, and we are unsure where the referenced RPM came from; it sounds like this issue is related to using a bad RPM build.

Comment 9 Fabian von Feilitzsch 2018-06-19 20:30:06 UTC
It may be an issue with the symlink in the openstack playbook directory. Scott posted a possible fix here: https://github.com/openshift/openshift-ansible/pull/8853

Comment 10 Scott Dodson 2018-06-20 17:42:53 UTC
Moving to 3.10.z based on limited scope.

Comment 11 Matt Bruzek 2018-06-25 16:17:48 UTC
Our latest install generated this exact problem again. I was able to verify that the change in PR 8853 existed on the system.

$ ls -al playbooks/openstack/openshift-cluster/roles
lrwxrwxrwx. 1 cloud-user cloud-user 14 Jun 25 11:35 playbooks/openstack/openshift-cluster/roles -> ../../../roles

$ git describe
v3.10.0-rc.0-79-g60cbc1c

We are using the 3.10 branch:

$ git status
# On branch release-3.10

Error message: 
TASK [ansible_service_broker : Create custom resource definitions for asb] *****
task path: /home/cloud-user/openshift-ansible/roles/ansible_service_broker/tasks/install.yml:119
Monday 25 June 2018  11:55:34 -0400 (0:00:00.053)       0:17:42.243 *********** 
fatal: [master-0.scale-ci.example.com]: FAILED! => {
    "msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'metadata'\n\nThe error appears to have been in '/home/cloud-user/openshift-ansible/roles/ansible_service_broker/tasks/install.yml': line 119, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Create custom resource definitions for asb\n  ^ here\n\nexception type: <class 'ansible.errors.AnsibleUndefinedVariable'>\nexception: 'dict object' has no attribute 'metadata'"
}

Comment 12 Fabian von Feilitzsch 2018-06-25 19:36:08 UTC
https://github.com/openshift/openshift-ansible/pull/8965

Comment 13 Matt Bruzek 2018-06-25 20:04:46 UTC
Looks like the 'crd' variable in the ansible_service_broker role was colliding with a variable registered by a cluster monitoring operator task that ran earlier.

https://github.com/openshift/openshift-ansible/blob/master/roles/openshift_cluster_monitoring_operator/tasks/install.yaml#L70

The fix involves renaming the variable in the ansible_service_broker role.
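The collision and the fix can be sketched roughly as follows (task and variable names other than 'crd' are hypothetical; PR 8965 has the actual change):

```yaml
# Earlier, openshift_cluster_monitoring_operator registers a command
# result under the name 'crd' (shape: rc, stdout, ... -- no 'metadata'):
- name: Wait for the ServiceMonitor CRD to be created
  command: oc get crd servicemonitors.monitoring.coreos.com
  register: crd

# Registered variables are host-scoped for the rest of the run and take
# precedence over role vars, so when ansible_service_broker later
# evaluates '{{ crd.metadata }}' expecting its own CRD definition, it
# sees the command result instead. The fix renames the broker role's
# variable to a role-prefixed name (illustrative):
- name: Create custom resource definitions for asb
  debug:
    msg: "{{ asb_crd.metadata.name }}"
```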

Comment 14 openshift-github-bot 2018-06-26 12:13:05 UTC
Commits pushed to master at https://github.com/openshift/openshift-ansible

https://github.com/openshift/openshift-ansible/commit/b0fa03265f84cbafcf4f20c03586a7da7d66726b
Bug 1589134- Namespace the CRD variable to prevent collision

https://github.com/openshift/openshift-ansible/commit/ffab936876043ef4cf4abafb2f9bcbd8f605ba78
Merge pull request #8965 from fabianvf/bz1589134

Bug 1589134- Namespace the CRD variable to prevent collision

Comment 15 Matt Bruzek 2018-06-26 14:29:34 UTC
I was able to give the installer another go after these changes landed. I can attest that these PRs fixed the problem I originally reported!

Comment 17 Zihan Tang 2018-07-10 05:52:48 UTC
According to comment 15, this is fixed.
I used openshift-ansible-3.10.15 for regression testing; the install succeeded and the CRDs were created.
So I am marking it as VERIFIED.

Comment 19 errata-xmlrpc 2018-07-30 19:17:50 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1816

