Bug 1638684 - VMware vCloud Provider's vApp Service Cannot be Fully Retired
Summary: VMware vCloud Provider's vApp Service Cannot be Fully Retired
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat CloudForms Management Engine
Classification: Red Hat
Component: Providers
Version: 5.9.4
Hardware: Unspecified
OS: Unspecified
Severity: high
Priority: high
Target Milestone: GA
Target Release: 5.9.5
Assignee: mplesko
QA Contact: mplesko
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2018-10-12 09:14 UTC by mplesko
Modified: 2018-11-05 14:00 UTC

Fixed In Version: 5.9.5.2
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-11-05 14:00:33 UTC
Category: ---
Cloudforms Team: vCloud




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2018:3466 None None None 2018-11-05 14:00:54 UTC

Description mplesko 2018-10-12 09:14:00 UTC
Description of problem:
When I click `Lifecycle > Retire this service` on the service details page, the vApp on vCloud gets removed, but the MIQ task then errors when probing for status.


Version-Release number of selected component (if applicable):
5.9.4

How reproducible:
Always


Steps to Reproduce:
1. Order a vCloud service (which in turn provisions a vApp) and wait until it is provisioned
2. Go to `Services > My Services` and select your service
3. Select `Lifecycle > Retire this service`

Actual results:
Retirement is triggered and the vApp gets deleted, but the service remains shown as "active".


Expected results:
The service should be shown as "retired".



Additional info:

The only relevant log output I can get is:

# automation.log
[----] E, [2018-10-12T09:00:59.933640 #15016:1dab050] ERROR -- : <AEMethod update_retirement_status> Stack Retirement Error: Server [] Stack [Friday GA vApp] Step [CheckRemovedFromProvider] Status [Error Checking Removal from Provider]


# evm.log
[----] E, [2018-10-12T09:00:59.949961 #15016:c36f18] ERROR -- : MIQ(MiqAeEngine.deliver) Error delivering {:event_type=>"request_orchestration_stack_retire", "OrchestrationStack::orchestration_stack"=>123000000000006, :orchestration_stack_id=>123000000000006, :retirement_initiator=>"user", :userid=>"admin", :type=>"ManageIQ::Providers::Vmware::CloudManager::OrchestrationStack", "MiqEvent::miq_event"=>123000000000428, :miq_event_id=>123000000000428, "EventStream::event_stream"=>123000000000428, :event_stream_id=>123000000000428} for object [ManageIQ::Providers::Vmware::CloudManager::OrchestrationStack.123000000000006] with state [] to Automate:

Comment 7 CFME Bot 2018-10-12 18:59:40 UTC
New commit detected on ManageIQ/manageiq-providers-vmware/gaprindashvili:

https://github.com/ManageIQ/manageiq-providers-vmware/commit/217b7a053372dad4ca4d0f329bde2c0ad745c636
commit 217b7a053372dad4ca4d0f329bde2c0ad745c636
Author:     Miha Pleško <miha.plesko@xlab.si>
AuthorDate: Wed Sep 26 10:21:42 2018 -0400
Commit:     Miha Pleško <miha.plesko@xlab.si>
CommitDate: Wed Sep 26 10:21:42 2018 -0400

    [GA] Don't crash when probing deleted vApp for status

    With this commit we properly capture the Fog exception that is
    raised when GET-ing a vApp by ID after the vApp no longer exists.
    Interestingly enough, fog-vcloud raises one of

    ```ruby
    Fog::Compute::VcloudDirector::Forbidden
    Fog::Compute::VcloudDirector::ServiceError
    ```

    instead of a 404, depending on why exactly the entity
    couldn't be found :)

    With this commit we now capture the two exceptions and convert
    them to the much more meaningful

    ```
    MiqException::MiqOrchestrationStackNotExistError
    ```

    which better reflects what's going on. Automation is also able to
    properly handle this kind of exception.

    Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1638684

    Signed-off-by: Miha Pleško <miha.plesko@xlab.si>

 app/models/manageiq/providers/vmware/cloud_manager/orchestration_stack.rb | 16 +-
 1 file changed, 14 insertions(+), 2 deletions(-)
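The fix described in the commit above boils down to rescuing the two fog-vcloud exceptions during the status probe and re-raising them as the exception Automate already handles. Below is a minimal, self-contained sketch of that pattern; the exception classes are stubbed stand-ins with the real names, and `fetch_vapp` is a hypothetical helper simulating the provider call (the actual code lives in `orchestration_stack.rb` and talks to vCloud Director via Fog):

```ruby
# Stub definitions mirroring the real class names, so the sketch runs standalone.
module Fog
  module Compute
    module VcloudDirector
      class Forbidden < StandardError; end
      class ServiceError < StandardError; end
    end
  end
end

module MiqException
  class MiqOrchestrationStackNotExistError < StandardError; end
end

# Hypothetical provider call: simulates fog-vcloud-director raising Forbidden
# when the vApp has already been deleted on the provider side.
def fetch_vapp(ems_ref)
  raise Fog::Compute::VcloudDirector::Forbidden, "Access is forbidden"
end

# Simplified status probe: capture the two Fog exceptions and convert them
# into the MiqException that the retirement state machine can interpret.
def raw_vapp_status(ems_ref)
  fetch_vapp(ems_ref)
rescue Fog::Compute::VcloudDirector::Forbidden,
       Fog::Compute::VcloudDirector::ServiceError
  raise MiqException::MiqOrchestrationStackNotExistError,
        "vApp with ems_ref=#{ems_ref} does not exist anymore"
end

begin
  raw_vapp_status("vapp-123")
rescue MiqException::MiqOrchestrationStackNotExistError => e
  # Retirement's CheckRemovedFromProvider step treats this as "removed",
  # so the service can transition to "retired" instead of erroring out.
  puts e.message
end
```

Before the fix, the raw Fog exception propagated out of the probe, which is why the `CheckRemovedFromProvider` step logged `Error Checking Removal from Provider` even though the vApp was already gone.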

Comment 9 errata-xmlrpc 2018-11-05 14:00:33 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:3466

