Bug 1300767 - Deleted container provider leaves orphaned subordinate resources
Status: CLOSED NOTABUG
Product: Red Hat CloudForms Management Engine
Classification: Red Hat
Component: Providers
Version: 5.5.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: GA
Target Release: 5.6.0
Assigned To: Mooli Tayer
QA Contact: Tony
container
Depends On:
Blocks:
Reported: 2016-01-21 11:37 EST by ncatling
Modified: 2016-10-13 10:50 EDT
CC List: 8 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-03-15 10:09:22 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description ncatling 2016-01-21 11:37:35 EST
Description of problem:
Deleted OSE provider leaves orphaned subordinate resources, specifically:
Projects, Routes, Container Services, Replicators, Pods, Containers, Nodes, Images and Registries.


Version-Release number of selected component (if applicable):
CFME 5.5.0


How reproducible:
Removed OSE provider from VMDB. Subordinate resources remain.

Steps to Reproduce:
1. Remove container provider from VMDB
2. Wait for job to complete
3. Review subordinate resources, Projects for example. The deleted provider's Projects remain in the VMDB (a console check is sketched below).
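
A minimal vmdb Rails console sketch of the check in step 3, assuming the standard ManageIQ model names for the resource types listed above (the provider id below is hypothetical):

# Run in the appliance's Rails console. After the provider row is gone from
# ext_management_systems, any subordinate rows still pointing at its id are orphans.
old_ems_id = 1  # hypothetical id of the removed container provider

[ContainerProject, ContainerRoute, ContainerService, ContainerReplicator,
 ContainerGroup, ContainerNode, ContainerImage, ContainerImageRegistry].each do |klass|
  count = klass.where(:ems_id => old_ems_id).count
  puts "#{klass.name}: #{count} rows still reference ems_id #{old_ems_id}"
end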

Actual results:
Container provider's orphaned subordinate resources remain in VMDB, specifically: Projects, Routes, Container Services, Replicators, Pods, Containers, Nodes, Images and Registries.

Expected results:
All subordinate resources should be deleted along with the parent provider.

Additional info:
Comment 1 Federico Simoncelli 2016-01-21 12:29:38 EST
If I recall correctly, removal of entities is deferred and collected at a later time (after the provider is deleted). Greg Blomquist knows for sure how this part is supposed to work.

Mooli can you sync with Greg and check if we're missing this cleanup? Thanks!
Comment 3 Mooli Tayer 2016-01-25 11:36:17 EST
This does not reproduce on master. 

I will test on the reported tag (CFME 5.5.0) tomorrow and see how to proceed.
Comment 6 Mooli Tayer 2016-02-14 11:35:00 EST
I cannot reproduce this in any environment.

Nick: could you post logs, please? Specifically, we should see a "Record delete initiated" message first and "Removed EMS [<name>] id [<id>]" at the end.
If there is some error, we should see a "<name>: Error during destroy:..." message
(grepping for the ems_name should suffice).

The destroy call is enqueued, and right after that we should see the 'Removed' message:

[----] I, [2016-02-14T17:38:49.551490 #24071:7ad988]  INFO -- : MIQ(MiqGenericWorker::Runner#get_message_via_drb) Message id: [54], MiqWorker id: [2], Zone: [default], Role: [], Server: [], Ident: [generic], Target id: [], Instance id: [1], Task id: [], Command: [ManageIQ::Providers::ContainerManager.destroy], Timeout: [3600], Priority: [100], State: [dequeue], Deliver On: [], Data: [], Args: [], Dequeued in: [6.454513205] seconds
[----] I, [2016-02-14T17:38:49.551561 #24071:7ad988]  INFO -- : MIQ(MiqQueue#deliver) Message id: [54], Delivering...
[----] I, [2016-02-14T17:38:50.182974 #24071:7ad988]  INFO -- : MIQ(ExtManagementSystem.after_destroy) Removed EMS [erez] id [1]
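
For reference, a rough sketch (not the actual product source) of the queued destroy that those log lines reflect, assuming the usual MiqQueue.put options:

# Hedged sketch of how the provider destroy ends up on the queue; a generic
# worker then delivers it and ExtManagementSystem's after_destroy hook logs
# the "Removed EMS" line seen above.
ems = ExtManagementSystem.find(1)  # hypothetical id of the provider being removed

MiqQueue.put(
  :class_name  => "ManageIQ::Providers::ContainerManager",
  :instance_id => ems.id,
  :method_name => "destroy"
)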

Let's continue debugging together if there is nothing interesting in the logs.

There is a possibility that the bug is in the refresh. For example: after a refresh,
a container_service is added without an ems_id; then it will not be deleted. Can you maybe reproduce something like that?
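
A hypothetical console check for that orphan theory, assuming these models each carry an ems_id column:

# Rows saved by refresh without an ems_id would not be swept up when the
# provider is destroyed.
[ContainerProject, ContainerService, ContainerReplicator, ContainerGroup,
 ContainerNode, ContainerImage].each do |klass|
  puts "#{klass.name}: #{klass.where(:ems_id => nil).count} rows with no ems_id"
end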

Note: the relevant code is around ExtManagementSystem.after_destroy, and in EmsCommon where task == "destroy" is handled.
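
A condensed illustration (not the real ExtManagementSystem source) of how that cleanup is typically wired: subordinate container inventory hangs off the manager through dependent-destroy associations, so destroying the EMS row should cascade, and after_destroy logs the removal.

# Illustration only; the association list and options are assumptions, not the
# actual model definition.
class ExtManagementSystem < ActiveRecord::Base
  has_many :container_projects, :foreign_key => :ems_id, :dependent => :destroy
  has_many :container_services, :foreign_key => :ems_id, :dependent => :destroy
  has_many :container_groups,   :foreign_key => :ems_id, :dependent => :destroy
  has_many :container_nodes,    :foreign_key => :ems_id, :dependent => :destroy

  after_destroy do
    Rails.logger.info("Removed EMS [#{name}] id [#{id}]")  # the real code logs via the MIQ logger
  end
end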
Comment 7 ncatling 2016-03-15 10:08:02 EDT
Hi Mooli, I'm not able to reproduce this. I have since upgraded my environment to 5.5.2; perhaps that has resolved the problem. I'll close as not a bug.
