Bug 1589834 - [RFE][XS-2] Add possibility to unregister a VM in RHV provider
Summary: [RFE][XS-2] Add possibility to unregister a VM in RHV provider
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat CloudForms Management Engine
Classification: Red Hat
Component: Providers
Version: 5.8.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: GA
Target Release: 5.8.4
Assignee: Moti Asayag
QA Contact: Ilanit Stein
URL:
Whiteboard:
Depends On: 1536628
Blocks:
 
Reported: 2018-06-11 13:31 UTC by Satoe Imaishi
Modified: 2022-07-09 09:46 UTC
CC: 15 users

Fixed In Version: 5.8.4.4
Doc Type: Enhancement
Doc Text:
Feature: Support unregistering a VM from ovirt-engine (removing the VM but retaining its disks).
Reason: Before this capability was added to CFME, the admin could only remove the VM entirely, without preserving its disks for later use (e.g. by attaching them to a different VM).
Result: With this fix, the user can remove the VM from ovirt-engine while preserving its disks. This feature is supported only for ovirt-engine versions that support API v4 (ovirt-engine 4.0 and above).
Clone Of: 1536628
Environment:
Last Closed: 2018-06-25 14:21:05 UTC
Category: ---
Cloudforms Team: RHEVM
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github ManageIQ manageiq pull 17533 0 None None None 2018-06-11 13:32:08 UTC
Red Hat Product Errata RHSA-2018:1972 0 None None None 2018-06-25 14:21:15 UTC

Comment 2 Ilanit Stein 2018-06-13 09:33:48 UTC
Verified on CFME-5.8.4.4/RHV-4.2.3, RHV-3.6.13.4:

Steps done under Automation:
===========================

1) In Automate->Explorer, added a new domain and a namespace.
(As a reference, this link can be used to create the domain/namespace:
https://pemcg.gitbooks.io/introduction-to-cloudforms-automation/content/chapter1/methods.html
See the "hello world" method creation flow.)

Copied the instance Redhat->Infrastructure->VMware->VimApi->"VMware_HotAdd_Disk"
to the created namespace path.
(See 'unregister_vm_copy_instance.png' attached.)

2) Edited the instance name to "Unregister_vm",
   and the method name to "unregister_vm".
(See 'add_method_for_unregister.png' attached.)
Set the method type to inline and added this content:

#
# Description: This method is used to unregister a vm
#
# Inputs: vm_name
#

# Get vm object
vm_name = $evm.root['vm_name']
$evm.log("info", "-------->>>> Trying to unregister VM: <#{vm_name}>")

$evm.root['vm'] = $evm.vmdb('vm').find_by_name(vm_name)
vm = $evm.root['vm']
raise "Missing $evm.root['vm'] object" unless vm

vm.unregister

3) Then ran Automate->Simulation, with the vm_name of the VM to unregister.
The VM should have a disk that does not belong to an existing template, and it must be powered down for the unregister to succeed.
(See automate_simulation.png.)
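
Since the unregister only succeeds for a powered-down VM, the method above could be extended with a small pre-check. A minimal sketch, assuming the service model's power_state attribute (reported as "off" for a stopped VM):

# Optional pre-check before calling vm.unregister (sketch; the power_state
# value "off" for a stopped VM is an assumption about the service model)
unless vm.power_state == "off"
  $evm.log("warn", "VM <#{vm.name}> is not powered off, unregister would fail")
  exit MIQ_STOP
end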

Results:
=======
* For the RHV-4.2 VM:
Once the simulation operation completed, the VM was removed,
and its disk remained (was not removed).
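
For context, this matches the API v4 behaviour of removing a VM with detach_only, which deletes the VM definition while detaching and preserving its disks. A minimal sketch of the equivalent call with ovirt-engine-sdk-ruby (engine URL, credentials and VM id are placeholders, and whether the provider issues exactly this call is an assumption):

require 'ovirtsdk4'

# Placeholder connection details
connection = OvirtSDK4::Connection.new(
  url:      'https://engine.example.com/ovirt-engine/api',
  username: 'admin@internal',
  password: 'password',
  insecure: true
)

# Locate the VM service and remove the VM while keeping its disks
vm_service = connection.system_service.vms_service.vm_service('VM_ID')
vm_service.remove(detach_only: true)

connection.close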


* For the RHV-3.6 VM:
The simulation operation failed: the VM was NOT removed.
No error was shown in the CFME UI;
however, evm.log contains this error:

[----] I, [2018-06-13T04:27:40.967619 #40492:cbf110]  INFO -- : MIQ(MiqAlert.evaluate_alerts) [request_vm_unregister] Target: ManageIQ::Providers::Redhat::InfraManager::Vm Name: [istein1], Id: [142]
[----] I, [2018-06-13T04:27:41.003182 #40492:cbf110]  INFO -- : <AutomationEngine> Followed  Relationship [miqaedb:/System/event_handlers/event_enforce_policy#create]
[----] I, [2018-06-13T04:27:41.004308 #40492:cbf110]  INFO -- : <AutomationEngine> Followed  Relationship [miqaedb:/System/Event/MiqEvent/POLICY/request_vm_unregister#create]
[----] I, [2018-06-13T04:27:41.005463 #40492:cbf110]  INFO -- : MIQ(MiqQueue#delivered) Message id: [14942], State: [ok], Delivered in [0.358122622] seconds
[----] I, [2018-06-13T04:27:41.077134 #40492:cbf110]  INFO -- : MIQ(MiqQueue#m_callback) Message id: [14942], Invoking Callback with args: [:raw_unregister, "ok", "Message delivered successfully", "#<MiqAeEngine::MiqAeWorkspaceRuntime:0x000000000c0b6830 @readonly=false, @nodes=[#<MiqAeEngine::MiqAeObject:0x000000000cc57dd8 @workspace=#<MiqAeEngine::MiqAeWorkspaceRuntime:0x000000000c0b6830 ...>, @namespace=\"ManageIQ/System\", @klass=\"Process\", @instance=\"Event\", @attributes={\"event_stream_id\"=>\"2208\", \"event_type\"=>\"request_vm_unregister\", \"ext_management_system_id\"=>\"18\", \"miq_event_id\"=>\"2208\", \"object_name\"=>\"Event\", \"vm_id\"=>\"142\", \"vmdb_object_type\"=>\"vm\", \"event_stream\"=>#<MiqAeService..."]
[----] I, [2018-06-13T04:27:41.080382 #40492:cbf110]  INFO -- : MIQ(ManageIQ::Providers::Redhat::InfraManager#with_provider_connection) Connecting through ManageIQ::Providers::Redhat::InfraManager: [rhv3.6]



[----] E, [2018-06-13T04:27:41.683907 #40492:cbf110] ERROR -- : MIQ(MiqQueue#m_callback) Message id: [14942]: version 4 of the api is not supported by the provider


[----] E, [2018-06-13T04:27:41.684215 #40492:cbf110] ERROR -- : MIQ(MiqQueue#m_callback) backtrace: /var/www/miq/vmdb/app/models/manageiq/providers/redhat/infra_manager/api_integration.rb:21:in `connect'
/var/www/miq/vmdb/app/models/manageiq/providers/redhat/infra_manager/api_integration.rb:124:in `with_provider_connection'
/var/www/miq/vmdb/app/models/mixins/provider_object_mixin.rb:12:in `with_provider_object'
/var/www/miq/vmdb/app/models/manageiq/providers/redhat/infra_manager/vm/operations.rb:13:in `raw_unregister'
/var/www/miq/vmdb/app/models/mixins/miq_policy_mixin.rb:115:in `check_policy_prevent_callback'
/var/www/miq/vmdb/app/models/miq_queue.rb:414:in `m_callback'
/var/www/miq/vmdb/app/models/miq_queue.rb:383:in `delivered'
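
This failure is expected: unregister is only supported when the provider speaks API v4 (ovirt-engine 4.0 and above). In a mixed environment the automate method could skip older providers up front. A minimal sketch, assuming the service model exposes ext_management_system and its api_version attribute:

# Skip providers that cannot serve API v4 (sketch; attribute names assumed)
ems = vm.ext_management_system
if ems.nil? || ems.api_version.to_s.split('.').first.to_i < 4
  $evm.log("warn", "Provider does not support API v4, skipping unregister")
  exit MIQ_STOP
end

vm.unregister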

Comment 4 errata-xmlrpc 2018-06-25 14:21:05 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:1972

