Bug 1413668 - Gluster: CFME installation fails at 22.7% with "'NoneType' object has no attribute 'import_template'"
Summary: Gluster: CFME installation fails at 22.7% with "'NoneType' object has no attribute 'import_template'"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Quickstart Cloud Installer
Classification: Red Hat
Component: Installation - CloudForms
Version: 1.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 1.1
Assignee: cchase
QA Contact: Antonin Pagac
Docs Contact: Dan Macpherson
URL:
Whiteboard:
Depends On:
Blocks: 1414495
 
Reported: 2017-01-16 16:13 UTC by Antonin Pagac
Modified: 2017-02-28 01:44 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1414495
Environment:
Last Closed: 2017-02-28 01:44:13 UTC
Target Upstream Version:
Embargoed:


Attachments
production.log excerpt (23.30 KB, text/plain), 2017-01-16 16:13 UTC, Antonin Pagac


Links
Red Hat Product Errata RHEA-2017:0335 (normal, SHIPPED_LIVE): Red Hat Quickstart Installer 1.1, last updated 2017-02-28 06:36:13 UTC

Description Antonin Pagac 2017-01-16 16:13:08 UTC
Created attachment 1241312 [details]
production.log excerpt

Description of problem:
I have my own Gluster storage set up on a Fedora system. Using it, I was able to deploy RHV without issues, but CFME fails to deploy with:

"ERROR -- : Error running command: /usr/share/fusor_ovirt/bin/ovirt_import_template.py --api_user 'admin@internal' --api_pass [FILTERED] --api_host e.example.com --cluster_name Default --data_center_name Default --export_domain_name my_export --storage_domain_name my_storage --vm_template_name dpl1-cfme-template
ERROR -- : Status code: 1
ERROR -- : Command output: 'NoneType' object has no attribute 'import_template'"

I tried to play a bit with RHV: I created a disk, created a VM, and started the VM. Everything went fine, so the storage seems to work OK.
See the traceback in the attachment.
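
For context on the error itself: the AttributeError indicates that the template lookup on the export domain returned None before the import was attempted. Below is a minimal sketch of that failure mode, assuming the script uses the oVirt Python SDK v3 (ovirtsdk); the names, credentials, and exact calls are illustrative, not the actual ovirt_import_template.py source.

    # Illustrative sketch only (assumes oVirt Python SDK v3); not the real
    # /usr/share/fusor_ovirt/bin/ovirt_import_template.py code.
    from ovirtsdk.api import API
    from ovirtsdk.xml import params

    api = API(url='https://e.example.com/api',
              username='admin@internal',
              password='********',
              insecure=True)

    export_domain = api.storagedomains.get(name='my_export')
    # If the export domain contents cannot be read (e.g. the Gluster volume is
    # not exposed over NFS under the expected name), this lookup returns None:
    template = export_domain.templates.get(name='dpl1-cfme-template')

    # Calling a method on None then raises the reported error:
    # AttributeError: 'NoneType' object has no attribute 'import_template'
    template.import_template(params.Action(
        cluster=api.clusters.get(name='Default'),
        storage_domain=api.storagedomains.get(name='my_storage')))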

Version-Release number of selected component (if applicable):
QCI-1.1-RHEL-7-20170112.t.0

How reproducible:
100%, reproduced twice with different ISOs

Steps to Reproduce:
1. Deploy RHV+CFME using Gluster storage
2. RHV deploys fine, CFME fails at 22.7%

Actual results:
CFME fails to deploy when using Gluster storage

Expected results:
CFME deploys without issues

Additional info:

Comment 4 Jason Montleon 2017-01-18 16:06:18 UTC
You must have the volume exposed via NFS with the same name you're using to mount it in the cluster.

For this to work properly as things stand, the volume must be entered with a leading slash so that, when the engine-image-uploader command runs, it attempts to mount the NFS volume with a leading slash; otherwise the mount fails.

Because of this, we need to fix the validation on the export domain when using Gluster.

As far as I can tell this isn't a problem in RHV itself: 10.0.0.1:export and 10.0.0.1:/export both appear to work, so this shouldn't be an issue on the RHV side.

I'll clone this to a Docs bug to state that, when using Gluster, the volume must be accessible via NFS under the same name when deploying CFME.

Comment 5 cchase 2017-01-24 18:42:59 UTC
https://github.com/fusor/fusor/pull/1355

Gluster now requires leading slashes for share directories.

Tested with RHV + CFME. Will also need to test OCP on Gluster.
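
For readers, the requirement in the fix amounts to a simple check on the share directory the user enters. A minimal sketch of that kind of validation, in Python for illustration only (the actual change is in the fusor pull request linked above):

    # Illustrative sketch of the rule "Gluster share directories must have a
    # leading slash"; not the actual fusor validation code.
    def validate_gluster_share_path(path):
        # A leading slash ensures the NFS mount spec becomes 'server:/share'
        # rather than 'server:share', which fails for engine-image-uploader.
        if not path.startswith('/'):
            raise ValueError(
                "Gluster share directory must start with '/': got %r" % path)

    validate_gluster_share_path('/my_export')   # passes
    # validate_gluster_share_path('my_export')  # would raise ValueError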

Comment 6 Dylan Murray 2017-01-26 20:57:43 UTC
This is fixed in the 1/26 compose.

Comment 7 Antonin Pagac 2017-02-07 16:39:08 UTC
I'm now able to deploy CFME on top of RHV with Gluster using a leading slash in the storage path.

Compose: QCI-1.1-RHEL-7-20170203.t.0

Comment 9 errata-xmlrpc 2017-02-28 01:44:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:0335

