Bug 1491768

Summary: Unable to remove providers
Product: Red Hat CloudForms Management Engine
Reporter: Ievgen Zapolskyi <izapolsk>
Component: Providers
Assignee: Marcel Hild <mhild>
Status: CLOSED NOTABUG
QA Contact: Dave Johnson <dajohnso>
Severity: high
Priority: unspecified
Version: 5.9.0
CC: gblomqui, izapolsk, jfrey, jhardy, mhild, obarenbo, pakotvan, tzumainn
Target Milestone: GA
Flags: izapolsk: automate_bug+
Target Release: cfme-future
Hardware: Unspecified
OS: Unspecified
Whiteboard: upstream:provider:cloud:openstack
Doc Type: If docs needed, set a value
Story Points: ---
Last Closed: 2017-10-13 11:09:35 UTC
Type: Bug
Regression: ---
Category: Bug
Cloudforms Team: Openstack
Attachments: appliance logs

Description Ievgen Zapolskyi 2017-09-14 15:36:50 UTC
Description of problem:
When rhos7-ga is added to an upstream appliance, it cannot be removed, either through REST or the UI.

Version-Release number of selected component (if applicable):
master.20170914082537_bf64a63

How reproducible:
100%

Steps to Reproduce:
1. Add rhos7-ga to appliance
2. Try to remove it, either through REST or the UI
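For the REST path in step 2, the removal request can be sketched in Ruby with the standard library. The appliance URL and credentials below are placeholders, and the DELETE-against-`/api/providers/:id` shape is my reading of the ManageIQ REST API, not something stated in this report; the request is only built here, not sent:

```ruby
require 'net/http'
require 'uri'

# Build (but do not send) a provider-removal request against the ManageIQ
# REST API. All concrete values here are illustrative placeholders.
def build_provider_delete(base_url, provider_id, user:, password:)
  uri = URI("#{base_url}/api/providers/#{provider_id}")
  req = Net::HTTP::Delete.new(uri)       # DELETE /api/providers/:id
  req.basic_auth(user, password)         # appliance credentials
  req
end

req = build_provider_delete('https://appliance.example.com', 5,
                            user: 'admin', password: 'smartvm')
# To actually send it against a real appliance:
#   uri = req.uri
#   Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
```

In this bug, the request is accepted but the provider never actually disappears, as the logs below show.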

Actual results:
The provider cannot be removed.

Expected results:
The provider is removed successfully.


Additional info:

Comment 2 Ievgen Zapolskyi 2017-09-14 15:44:16 UTC
Created attachment 1326119 [details]
appliance logs

Comment 3 Tzu-Mainn Chen 2017-09-19 12:28:51 UTC
For reference, this is what I see in the log:

[----] I, [2017-09-14T11:38:45.747165 #11886:a69140]  INFO -- : MIQ(MiqGenericWorker::Runner#get_message_via_drb) Message id: [8397], MiqWorker id: [23], Zone: [default], Role: [], Server: [], Ident: [generic], Target id: [], Instance id: [5], Task id: [], Command: [ManageIQ::Providers::Openstack::CloudManager.orchestrate_destroy], Timeout: [600], Priority: [100], State: [dequeue], Deliver On: [2017-09-14 15:38:42 UTC], Data: [], Args: [], Dequeued in: [18.142524251] seconds
[----] I, [2017-09-14T11:38:45.747314 #11886:a69140]  INFO -- : MIQ(MiqQueue#deliver) Message id: [8397], Delivering...
[----] I, [2017-09-14T11:38:45.939492 #11886:a69140]  INFO -- : MIQ(ManageIQ::Providers::Openstack::CloudManager#orchestrate_destroy) Cannot destroy ManageIQ::Providers::Openstack::CloudManager with id: 5, workers still in progress. Requeuing destroy...
[----] I, [2017-09-14T11:38:45.972083 #11886:a69140]  INFO -- : MIQ(MiqQueue.put) Message id: [8400],  id: [], Zone: [default], Role: [], Server: [], Ident: [generic], Target id: [], Instance id: [5], Task id: [], Command: [ManageIQ::Providers::Openstack::CloudManager.orchestrate_destroy], Timeout: [600], Priority: [100], State: [ready], Deliver On: [2017-09-14 15:39:00 UTC], Data: [], Args: []
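The log shows `orchestrate_destroy` refusing to destroy the manager while workers are still in progress and putting the destroy message back on the queue with a later deliver-on time. A minimal sketch of that requeue-on-busy pattern (class and method names are illustrative, not the actual ManageIQ implementation):

```ruby
# Simplified model of the behavior in the log above: destroy is attempted,
# and while workers are still running, the destroy command is requeued
# with a later deliver_on time instead of being executed.
class FakeManager
  attr_reader :queue

  def initialize(workers_busy_for:)
    @workers_busy_for = workers_busy_for # attempts during which workers stay busy
    @attempts = 0
    @queue = []
    @destroyed = false
  end

  def workers_in_progress?
    @attempts < @workers_busy_for
  end

  def destroyed?
    @destroyed
  end

  def orchestrate_destroy
    @attempts += 1
    if workers_in_progress?
      # "Requeuing destroy..." — schedule another attempt instead of destroying
      @queue << { command: :orchestrate_destroy, deliver_on: Time.now + 15 }
      :requeued
    else
      @destroyed = true
      :destroyed
    end
  end
end

mgr = FakeManager.new(workers_busy_for: 3)
results = 3.times.map { mgr.orchestrate_destroy }
# results => [:requeued, :requeued, :destroyed]
```

If the workers never finish (or the child managers are never cleaned up), this loop requeues forever, which is exactly what the reporter observed as "provider cannot be removed".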

Comment 5 Ievgen Zapolskyi 2017-10-11 09:59:58 UTC
As far as I can see, this affects all providers.

Comment 7 Ievgen Zapolskyi 2017-10-11 14:02:49 UTC
Dave, I see this issue only in 5.9/upstream builds. It is absent in 5.8/5.7.

Comment 9 Greg Blomquist 2017-10-11 16:32:57 UTC
Ievgen, can you try this with multiple providers?  Can you reliably reproduce this issue with OSP?

Comment 10 Marcel Hild 2017-10-13 11:09:35 UTC
So, 20170914082537_bf64a63 is the build string, but the actual latest commit is bf64a63 from 2017-08-29 09:54:25, so this appliance does not contain https://github.com/ManageIQ/manageiq/pull/15590, which fixed the problem.
This can also be seen in the logs, as there is only

Queuing destroy of ManageIQ::Providers::CloudManager with the following ids: [5]

and no corresponding

Queuing destroy of ManageIQ::Providers::NetworkManager with the following ids: [6]

But it was not all in vain: I've created https://github.com/ManageIQ/manageiq/pull/16196 so that the build string doesn't fool us in the future.
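The diagnosis above can be illustrated with a sketch: the fix was that destroying a cloud manager must also queue destroy for its child managers (e.g. the network manager), producing one "Queuing destroy" log line per manager. The class names below are taken from the log in comment 3; the `queue_destroy` helper and the child-manager wiring are illustrative, not the real ManageIQ code from PR 15590:

```ruby
# Illustrative model: a manager knows its child managers, and queuing a
# destroy emits one log line per manager kind, covering children too.
class Manager
  attr_reader :id, :kind, :child_managers

  def initialize(id:, kind:, child_managers: [])
    @id = id
    @kind = kind
    @child_managers = child_managers
  end
end

# Collect the manager plus its child managers and emit the expected
# "Queuing destroy" line for each kind.
def queue_destroy(manager)
  ([manager] + manager.child_managers).group_by(&:kind).map do |kind, mgrs|
    "Queuing destroy of #{kind} with the following ids: #{mgrs.map(&:id)}"
  end
end

network = Manager.new(id: 6, kind: 'ManageIQ::Providers::Openstack::NetworkManager')
cloud   = Manager.new(id: 5, kind: 'ManageIQ::Providers::Openstack::CloudManager',
                      child_managers: [network])

queue_destroy(cloud).each { |line| puts line }
```

On the broken appliance only the CloudManager line appeared; with the child managers included, the NetworkManager line appears as well, its workers get stopped, and the requeue loop can terminate.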

Comment 11 Marcel Hild 2017-10-13 11:10:34 UTC
Worked through this with Ievgen's help, so removing the needinfo too.

Comment 12 Adam Grare 2017-11-14 16:42:43 UTC
*** Bug 1503467 has been marked as a duplicate of this bug. ***