Description of problem:

Version-Release number of selected component (if applicable):
5.7.1.0

How reproducible:
Always

Steps to Reproduce:
1. Add an EC2 provider
2. Wait for a full refresh, then refresh again
3. Immediately after that, delete the provider

Actual results:
Items like VMs, key pairs, and others are not deleted from the VMDB.

Expected results:
Items should be deleted.

Additional info:
Matouš, do the items show up as archived? If so, then it's working as designed. MIQ doesn't delete objects like VMs when a provider is deleted because of the overhead of deleting related records like Metrics and Events.
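To illustrate the archiving behavior described above, here is a minimal toy sketch of the idea: a record stays in the database but loses its provider (EMS) link instead of being destroyed. The class and attribute names below are illustrative only, not the actual ManageIQ models.

```ruby
# Toy model of archiving-instead-of-deleting: when a provider is removed,
# its VMs are disconnected (ems_id cleared) rather than destroyed, so the
# expensive related records (metrics, events) never need to be deleted.
# Names here are hypothetical, not the real MIQ classes.
class ToyVm
  attr_accessor :name, :ems_id

  def initialize(name, ems_id)
    @name = name
    @ems_id = ems_id
  end

  # An "archived" VM is one no longer connected to any provider.
  def archived?
    ems_id.nil?
  end

  # Deleting the provider disconnects the VM instead of destroying it.
  def disconnect_ems
    @ems_id = nil
  end
end

vm = ToyVm.new("web-01", 42)
vm.disconnect_ems
puts vm.archived?  # => true
```

Under this model, "did the items show up as archived?" amounts to asking whether the records are still present but no longer attached to any provider.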
How can I find out whether they are archived? For example, when I do this, then add another EC2 provider and try to provision an instance, every image appears twice in the image selection. I don't think that is how it should behave.
I'm pretty sure this is a race condition. We should schedule the provider delete. https://github.com/ManageIQ/manageiq/pull/13204 evaluates this approach. Matouš, can you do this again and attach all logs to this BZ?

Related BZs:
https://bugzilla.redhat.com/show_bug.cgi?id=1369359
https://bugzilla.redhat.com/show_bug.cgi?id=1343328
Created attachment 1263043 [details] evm.log
The whole evm.log from a fresh appliance; I only added, refreshed, and deleted an EC2 provider. Version 5.7.1.3. The aws.log is also attached.
Created attachment 1263044 [details] aws.log
Reading the logs, this is indeed because the destroy of the provider was running at the same time as a refresh, so the refresh was creating inventory (templates) while the destroy was deleting it, leaving disconnected entities. Since https://github.com/ManageIQ/manageiq/pull/14848 we have an orchestrated destroy, which waits for all workers to stop and then performs the destroy. Matouš, I just checked with a current upstream appliance and it worked as expected. The only "issue" I had was that I needed to explicitly destroy the dependent network and storage managers. After that, the cloud manager was destroyed and left no templates or VMs. OK to close this?
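The orchestrated destroy described above can be sketched as a toy model: instead of destroying the provider while refresh workers may still be writing inventory, the provider is first disabled, its workers are stopped, and only then is the inventory deleted. All names below are illustrative assumptions, not the actual ManageIQ API.

```ruby
# Toy sketch of an "orchestrated destroy": disable the provider, stop its
# workers (so nothing can race the delete by creating new inventory), and
# only then delete the inventory. Names are hypothetical, not real MIQ code.
class ToyProvider
  attr_reader :workers, :inventory

  def initialize
    @enabled = true
    @workers = [:refresh_worker, :event_catcher]
    @inventory = [:vm1, :template1]
  end

  def stop_workers!
    @workers.clear
  end

  def orchestrated_destroy!
    @enabled = false  # stop accepting new refresh work
    stop_workers!     # wait until no worker can write inventory
    @inventory.clear  # now the delete cannot race a refresh
  end
end

provider = ToyProvider.new
provider.orchestrated_destroy!
puts provider.inventory.empty?  # => true
```

The key design point is the ordering: the workers must be fully stopped before the delete begins, which is exactly what the immediate-destroy path in the original report failed to guarantee.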
Dup'ing this on bug #1437549. I know this one is older, but that one has more details about the fix and is tracking the relevant PRs. *** This bug has been marked as a duplicate of bug 1437549 ***