Bug 1334789 - Storage information was wiped out after running SSA on the VM of VMware providers
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat CloudForms Management Engine
Classification: Red Hat
Component: Providers
Version: 5.4.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: GA
Target Release: 5.6.0
Assignee: Adam Grare
QA Contact: Satyajit Bulage
URL:
Whiteboard: vm:smartstate
Depends On:
Blocks:
 
Reported: 2016-05-10 14:24 UTC by Hui Song
Modified: 2016-06-29 16:01 UTC (History)
8 users

Fixed In Version: 5.6.0.8
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-06-29 16:01:05 UTC
Category: ---
Cloudforms Team: ---
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:1348 0 normal SHIPPED_LIVE CFME 5.6.0 bug fixes and enhancement update 2016-06-29 18:50:04 UTC

Description Hui Song 2016-05-10 14:24:24 UTC
Description of problem:

After running SSA on a VM of a VMware provider, its storage information was changed to nil. This causes the second SSA run to always fail.


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Add a VMware provider to the appliance and refresh it.
2. Check the storage_id field of the vms table; the rows should have valid values.
3. Run an SSA on a VM. After the job finishes, check the storage_id of the vms again. Some of them are changed to nil.
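The check in steps 2 and 3 boils down to looking for vms rows whose storage_id has become nil. A minimal sketch in plain Ruby, using made-up row data (the names and IDs here are illustrative, not from the report):

```ruby
# Hypothetical shape of rows from the vms table after the first SSA job;
# the names and storage_id values are made up for illustration.
vms = [
  { id: 1, name: "cfme-appliance", storage_id: 42 },
  { id: 2, name: "scanned-vm",     storage_id: nil },
]

# After a healthy refresh, no row should have a nil storage_id.
broken = vms.select { |vm| vm[:storage_id].nil? }
broken.map { |vm| vm[:name] }  # => ["scanned-vm"]
```

On an actual appliance the same check could be run from the Rails console (e.g. `Vm.where(storage_id: nil)`) or directly against the vms table in the database.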

Actual results:


Expected results:


Additional info:

Comment 2 Hui Song 2016-05-10 14:35:17 UTC
Adam has investigated this issue and found the reason.

Comment 3 Adam Grare 2016-05-10 15:08:14 UTC
The problem appears to be that, when building the initial inventory cache, we "fix up" all VMs to add a 'summary/config/vmLocalPathName' property.  This is meant to take the VmPathName "[NFS Share] CFME (Agrare)/CFME (Agrare).vmx" and make it look more like the datastore URL "ds:///vmfs/volumes/c84ed2d3-b76003e0/".

What it does is take the datastore name from the VmPathName (i.e. "NFS Share") and look up the datastore by name to get the summary URL.  If you have more than one datastore with the same name it will get the wrong summary URL, and VMs will be linked to the wrong storage after the initial refresh.

The storage_id is sometimes nil after a targeted refresh if the wrong datastore isn't in the list of storages returned for the targeted refresh.
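The failure mode described above can be sketched in plain Ruby. The datastore names, MORs, and URLs below are made up for illustration: keying a lookup hash by datastore name silently drops all but the last datastore carrying that name, so any VM whose VmPathName references the name may resolve to the wrong storage.

```ruby
# Two distinct datastores that happen to share the display name "NFS Share".
# MORs (managed object references) are unique; display names are not.
datastores = [
  { mor: "datastore-101", name: "NFS Share", url: "ds:///vmfs/volumes/c84ed2d3-b76003e0/" },
  { mor: "datastore-202", name: "NFS Share", url: "ds:///vmfs/volumes/a11bc3d4-00000000/" },
]

# Keying the cache by name collides: the second entry overwrites the first.
by_name = datastores.each_with_object({}) { |ds, h| h[ds[:name]] = ds }
by_name.size  # => 1

# A VM whose VmPathName starts with "[NFS Share] ..." resolves to whichever
# datastore happened to be inserted last, which may be the wrong one.
vm_path_name = "[NFS Share] CFME (Agrare)/CFME (Agrare).vmx"
ds_name = vm_path_name[/\[([^\]]+)\]/, 1]  # => "NFS Share"
by_name[ds_name][:mor]                     # => "datastore-202"
```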

Comment 5 CFME Bot 2016-05-20 17:35:50 UTC
New commit detected on ManageIQ/manageiq/master:
https://github.com/ManageIQ/manageiq/commit/ad2fc3bc7f767169aa2689e16144210cc5caf906

commit ad2fc3bc7f767169aa2689e16144210cc5caf906
Author:     Adam Grare <agrare>
AuthorDate: Mon May 16 14:43:47 2016 -0400
Commit:     Adam Grare <agrare>
CommitDate: Mon May 16 15:20:13 2016 -0400

    Fix for VM linked to incorrect Storage
    
    If there are multiple different Storages in the environment with
    the same name, the vmLocalPathName will only point to one of them
    due to a hash collision when using the datastore name as a key.
    
    To fix this we use the MOR from the vm_inv['datastore'] property
    instead of trying to find the storage_uid from the vmLocalPathName
    when looking for the datastore the VM resides on.
    
    https://bugzilla.redhat.com/show_bug.cgi?id=1334789

 .../vmware/infra_manager/refresh_parser.rb         | 58 +++-------------------
 1 file changed, 7 insertions(+), 51 deletions(-)
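The fix, per the commit message, resolves the VM's datastore by the MOR from the vm_inv['datastore'] property rather than by name. A minimal sketch of that idea, with illustrative data (the MORs, names, and URLs are made up; this is not the refresh_parser.rb code itself):

```ruby
# Illustrative data: two datastores share a display name but have unique MORs.
datastores = [
  { mor: "datastore-101", name: "NFS Share", url: "ds:///vmfs/volumes/c84ed2d3-b76003e0/" },
  { mor: "datastore-202", name: "NFS Share", url: "ds:///vmfs/volumes/a11bc3d4-00000000/" },
]

# Keying by MOR cannot collide, since MORs are unique per managed object.
by_mor = datastores.each_with_object({}) { |ds, h| h[ds[:mor]] = ds }
by_mor.size  # => 2

# The VM inventory lists the MORs of the datastores the VM resides on, so the
# parser can resolve the correct storage directly, ignoring the display name.
vm_inv = { "datastore" => ["datastore-101"] }
storage = by_mor[vm_inv["datastore"].first]
storage[:url]  # => "ds:///vmfs/volumes/c84ed2d3-b76003e0/"
```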

Comment 9 Satyajit Bulage 2016-05-27 08:55:59 UTC
Both VMs are on the correct storage after running SmartState Analysis on them.

Verified Version:-5.6.0.8-rc1.20160524155303_f2a5a50

Comment 11 errata-xmlrpc 2016-06-29 16:01:05 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1348

