Bug 973814 - Some sorts of provisioning errors not bubbling up to UI
Status: CLOSED WORKSFORME
Product: Red Hat Satellite 6
Classification: Red Hat
Component: Provisioning
Version: 6.0.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: Unspecified
Target Release: --
Assigned To: Dmitri Dolguikh
QA Contact: Katello QA List
URL: http://projects.theforeman.org/issues...
Whiteboard: Triaged
 
Reported: 2013-06-12 16:17 EDT by Corey Welton
Modified: 2016-04-22 12:44 EDT
CC List: 4 users

Doc Type: Bug Fix
Last Closed: 2013-10-21 11:05:41 EDT
Type: Bug


External Trackers:
Foreman Issue Tracker 3179 (last updated 2016-04-22 12:44 EDT)

Description Corey Welton 2013-06-12 16:17:17 EDT
Description of problem:

This is probably multiple bugs tied together -- and perhaps an edge case -- but it was painful to sort through.

In certain (?) circumstances, errors thrown by the backend do not bubble up to the UI when provisioning a system. In this situation, the user is left completely in the lurch as to what has taken place.

Version-Release number of selected component (if applicable):


How reproducible:
I'm not 100% sure of a repro case, although I think in this situation we were trying to provision on a system that mysteriously had virt support turned off. We'll make that assumption here.

Steps to Reproduce:
1.  Attempt to provision a system in a way that causes an error similar to the one in the results below. You may be able to do this by running the product on a host which has virt flags disabled (a quick check for this is sketched after the steps).
2.  Observe results.
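
For step 1, a quick way to confirm that the hypervisor host really lacks virt support is sketched below. This is only a guess at the original setup; it assumes the compute resource is a local libvirt/KVM host.

  $ grep -Ec '(vmx|svm)' /proc/cpuinfo   # 0 means no VT-x/AMD-V is exposed to the OS
  $ lsmod | grep kvm                     # no kvm/kvm_intel/kvm_amd modules loaded
  $ virsh capabilities | grep -i kvm     # no kvm domain type advertised to libvirt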


Actual results:
Several things, actually:

* Errors like the one below do not bubble up to the UI:
Rolling back due to a problem: [Settings up compute instance whynow.example.org  1       failed  [#<Host::Managed id: nil, name: "whynow.example.org", ip: "192.168.100.13", environment: nil, last_compile: nil, last_freshcheck: nil, last_report: nil, updated_at: nil, source_file_id: nil, created_at: nil, mac: nil, root_pass: nil, serial: nil, puppet_status: 0, domain_id: 1, architecture_id: 1, operatingsystem_id: 1, environment_id: 9, subnet_id: 1, ptable_id: 1, medium_id: 5, build: true, comment: "", disk: "", installed_at: nil, model_id: nil, hostgroup_id: 1, owner_id: 1, owner_type: "User", enabled: true, puppet_ca_proxy_id: nil, managed: true, use_image: nil, image_file: nil, uuid: nil, compute_resource_id: 1, puppet_proxy_id: 1, certname: nil, image_id: nil, organization_id: 1, location_id: nil, type: "Host::Managed">, :setCompute]]

* The page is reloaded and the user is blindly taken back to the virtual machine editing page.

* The virt machine is left half-baked: no new host shows up in the Hosts view, yet machines can no longer be created with the name(s) used in the failed attempts. For example, per the log above, the user can no longer create a system called "whynow" -- presumably because a basic storage device for it is still listed in libvirt's storage directory -- but no such name appears in the UI. (A possible cleanup is sketched after this list.)
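
If you hit this, a plausible way to confirm and clear the leftover artifact is shown below. This is only a sketch: the "default" storage pool and the exact volume/domain names are assumptions, not taken from the report.

  $ virsh vol-list default                             # look for a volume left behind by the failed "whynow" attempt
  $ virsh vol-delete <leftover-volume> --pool default  # delete it so the hostname can be reused
  $ virsh list --all                                   # also check for a half-defined domain
  $ virsh undefine whynow.example.org                  # remove it if one exists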

Expected results:
If we fail, we fail and we tell the user.
If we fail, we don't consume the namespace -- hostnames can be reused if host creation fails.

Additional info:
Comment 1 Corey Welton 2013-06-12 16:17:53 EDT
* candlepin-0.8.9-1.el6_4.noarch
* candlepin-scl-1-5.el6_4.noarch
* candlepin-scl-quartz-2.1.5-5.el6_4.noarch
* candlepin-scl-rhino-1.7R3-1.el6_4.noarch
* candlepin-scl-runtime-1-5.el6_4.noarch
* candlepin-selinux-0.8.9-1.el6_4.noarch
* candlepin-tomcat6-0.8.9-1.el6_4.noarch
* elasticsearch-0.19.9-8.el6sat.noarch
* foreman-1.1.10009-1.noarch
* foreman-compute-1.1.10009-1.noarch
* foreman-installer-puppet-concat-0-2.d776701.git.0.21ef926.el6sat.noarch
* foreman-installer-puppet-dhcp-0-5.3a4a13c.el6sat.noarch
* foreman-installer-puppet-dns-0-7.fcae203.el6sat.noarch
* foreman-installer-puppet-foreman-0-6.568c5c4.el6sat.noarch
* foreman-installer-puppet-foreman_proxy-0-8.bd1e35d.el6sat.noarch
* foreman-installer-puppet-puppet-0-3.ab46748.el6sat.noarch
* foreman-installer-puppet-tftp-0-5.ea6c5e5.el6sat.noarch
* foreman-installer-puppet-xinetd-0-50a267b8.git.0.44aca6a.el6sat.noarch
* foreman-libvirt-1.1.10009-1.noarch
* foreman-postgresql-1.1.10009-1.noarch
* foreman-proxy-1.1.10003-1.el6sat.noarch
* foreman-proxy-installer-1.0.1-8.f5ae2cd.el6sat.noarch
* katello-1.4.2-12.el6sat.noarch
* katello-all-1.4.2-12.el6sat.noarch
* katello-candlepin-cert-key-pair-1.0-1.noarch
* katello-certs-tools-1.4.2-2.el6sat.noarch
* katello-cli-1.4.2-7.el6sat.noarch
* katello-cli-common-1.4.2-7.el6sat.noarch
* katello-common-1.4.2-12.el6sat.noarch
* katello-configure-1.4.3-15.el6sat.noarch
* katello-configure-foreman-1.4.3-15.el6sat.noarch
* katello-foreman-all-1.4.2-12.el6sat.noarch
* katello-glue-candlepin-1.4.2-12.el6sat.noarch
* katello-glue-elasticsearch-1.4.2-12.el6sat.noarch
* katello-glue-pulp-1.4.2-12.el6sat.noarch
* katello-qpid-broker-key-pair-1.0-1.noarch
* katello-qpid-client-key-pair-1.0-1.noarch
* katello-selinux-1.4.3-3.el6sat.noarch
* openldap-2.4.23-31.el6.x86_64
* pulp-rpm-plugins-2.1.1-1.el6sat.noarch
* pulp-selinux-2.1.1-1.el6sat.noarch
* pulp-server-2.1.1-1.el6sat.noarch
* python-ldap-2.3.10-1.el6.x86_64
* ruby193-rubygem-ldap_fluff-0.2.2-1.el6sat.noarch
* ruby193-rubygem-net-ldap-0.3.1-2.el6sat.noarch
* signo-0.0.16-1.el6sat.noarch
* signo-katello-0.0.16-1.el6sat.noarch
Comment 3 Dmitri Dolguikh 2013-10-04 09:47:30 EDT
I can't replicate this particular problem in the latest upstream. It appears that, since the issue was reported, error handling was improved in Fog (among other things).
Comment 4 Corey Welton 2013-10-21 11:05:41 EDT
Gonna close this out; I can't repro anymore either. Can reopen if it starts showing up again.
