Bug 1394030 - VMware appliance with direct connect LUN devices fails scanning
Summary: VMware appliance with direct connect LUN devices fails scanning
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat CloudForms Management Engine
Classification: Red Hat
Component: SmartState Analysis
Version: 5.6.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: GA
Target Release: cfme-future
Assignee: Jerry Keselman
QA Contact: Satyajit Bulage
URL:
Whiteboard: vsphere:smartstate
Depends On:
Blocks:
 
Reported: 2016-11-10 20:59 UTC by Thomas Hennessy
Modified: 2019-12-16 07:21 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-14 20:34:32 UTC
Category: ---
Cloudforms Team: ---
Target Upstream Version:


Attachments
complete set of current logs from CFME server where scan is failing (1.56 MB, application/x-xz)
2016-11-10 20:59 UTC, Thomas Hennessy
png files from customer showing VI Client configuration of failing VMware appliance (243.86 KB, application/zip)
2016-11-10 21:01 UTC, Thomas Hennessy
LAST startup.txt file from the CFME 4.0 appliance also failing (64.04 KB, text/plain)
2016-11-10 21:30 UTC, Thomas Hennessy
last_startup.txt file from the appliance that the current logs represent (145.89 KB, text/plain)
2016-11-10 21:30 UTC, Thomas Hennessy

Description Thomas Hennessy 2016-11-10 20:59:45 UTC
Created attachment 1219546 [details]
complete set of current logs from CFME server where scan is failing

Description of problem: SmartState analysis fails for a VMware VM with direct-connect LUN storage.


Version-Release number of selected component (if applicable): 4.0 & 4.1


How reproducible: See the attached images for the VMware configuration of the failing appliance.


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:
extracted from failing scan process:
=====
[----] I, [2016-11-10T12:11:43.062248 #17385:fcd988]  INFO -- : MIQ(MiqSmartProxyWorker::Runner#get_message_via_drb) Message id: [101000000024867], MiqWorker id: [101000000000114], Zone: [default], Role: [smartproxy], Server: [59b8c6d2-a6b1-11e6-9874-005056a3a215], Ident: [smartproxy], Target id: [], Instance id: [101000000000002], Task id: [], Command: [MiqServer.scan_metadata], Timeout: [1200], Priority: [100], State: [dequeue], Deliver On: [], Data: [], Args: [#<OpenStruct args=["[vol1.san1.afg1.nor1.ems.encore.tech] sql2.afg1.nor1.ems.encore.tech/sql2.afg1.nor1.ems.encore.tech.vmx", "---\nems:\n  ems:\n    :address: vcs1.afg1.nor1.ems.encore.tech\n    :hostname: vcs1.afg1.nor1.ems.encore.tech\n    :ipaddress: \n    :username: administrator\n    :password: ********\n    :class_name: ManageIQ::Providers::Vmware::InfraManager\n  host:\n    :address: vmh24.afg1.nor1.ems.encore.tech\n    :hostname: vmh24.afg1.nor1.ems.encore.tech\n    :ipaddress: 10.99.1.33\n    :username: root\n    :password: ********\n    :class_name: ManageIQ::Providers::Vmware::InfraManager::HostEsx\n  connect_to: host\nsnapshot:\n  use_existing: false\n  description: 'Snapshot for scan job: aff66d62-a768-11e6-aa0d-005056a3a215, EVM Server\n    build: 20161017185613_7cee0a0 (embedded) Server Time: 2016-11-10T17:11:36Z'\n  create_free_percent: 100\n  remove_free_percent: 100\nvmScanProfiles: []\n"], method_name="scan_metadata", vm_guid="c537c3fa-a6c4-11e6-9920-005056a352bf", category="vmconfig,accounts,software,services,system", taskid="aff66d62-a768-11e6-aa0d-005056a3a215", target_id=101000000000006, target_type="VmOrTemplate">], Dequeued in: [6.45127472] seconds
[----] I, [2016-11-10T12:11:43.062553 #17385:fcd988]  INFO -- : MIQ(MiqQueue#deliver) Message id: [101000000024867], Delivering...
[----] I, [2016-11-10T12:11:44.248018 #17385:fcd988]  INFO -- : Connecting to [host(directly):vmh24.afg1.nor1.ems.encore.tech] for VM:[]
[----] I, [2016-11-10T12:11:44.250727 #17385:fcd988]  INFO -- : MIQ(MiqFaultTolerantVim._connect) EMS: [vmh24.afg1.nor1.ems.encore.tech] Connecting with address: [vmh24.afg1.nor1.ems.encore.tech], userid: [root]...
[----] I, [2016-11-10T12:11:44.828960 #17385:fcd988]  INFO -- : MIQ(MiqFaultTolerantVim._connect) EMS: [vmh24.afg1.nor1.ems.encore.tech] vmh24.afg1.nor1.ems.encore.tech is ESX, API version: 6.0
[----] I, [2016-11-10T12:11:44.829055 #17385:fcd988]  INFO -- : MIQ(MiqFaultTolerantVim._connect) EMS: [vmh24.afg1.nor1.ems.encore.tech] Connected
[----] I, [2016-11-10T12:11:44.829158 #17385:fcd988]  INFO -- : Connection to [host(directly):vmh24.afg1.nor1.ems.encore.tech] completed for VM:[] in [0.581104048] seconds
[----] I, [2016-11-10T12:11:45.948070 #17385:fcd988]  INFO -- : MIQ(ManageIQ::Providers::Vmware::InfraManager::Vm#agent_job_state_op) jobid: [aff66d62-a768-11e6-aa0d-005056a3a215] starting
[----] I, [2016-11-10T12:11:45.954758 #17385:fcd988]  INFO -- : MIQ(MiqQueue.put) Message id: [101000000024871],  id: [], Zone: [default], Role: [smartstate], Server: [], Ident: [generic], Target id: [], Instance id: [], Task id: [agent_job_state_1478797905], Command: [Job.agent_state_update_queue], Timeout: [600], Priority: [100], State: [ready], Deliver On: [], Data: [], Args: ["aff66d62-a768-11e6-aa0d-005056a3a215", "Scanning", "Initializing scan"]
[----] I, [2016-11-10T12:11:45.958555 #17385:fcd988]  INFO -- : MIQExtract using config file: [[vol1.san1.afg1.nor1.ems.encore.tech] sql2.afg1.nor1.ems.encore.tech/sql2.afg1.nor1.ems.encore.tech.vmx]  settings: [{"ems"=>{"ems"=>{:address=>"vcs1.afg1.nor1.ems.encore.tech", :hostname=>"vcs1.afg1.nor1.ems.encore.tech", :ipaddress=>nil, :username=>"administrator", :password=>"********", :class_name=>"ManageIQ::Providers::Vmware::InfraManager"}, "host"=>{:address=>"vmh24.afg1.nor1.ems.encore.tech", :hostname=>"vmh24.afg1.nor1.ems.encore.tech", :ipaddress=>"10.99.1.33", :username=>"root", :password=>"********", :class_name=>"ManageIQ::Providers::Vmware::InfraManager::HostEsx", :use_vim_broker=>false}, "connect_to"=>"host", :use_vim_broker=>false}, "snapshot"=>{"use_existing"=>false, "description"=>"Snapshot for scan job: aff66d62-a768-11e6-aa0d-005056a3a215, EVM Server build: 20161017185613_7cee0a0 (embedded) Server Time: 2016-11-10T17:11:36Z", "create_free_percent"=>100, "remove_free_percent"=>100}, "vmScanProfiles"=>[]}]
[----] I, [2016-11-10T12:11:45.958628 #17385:fcd988]  INFO -- : Loading disk files for VM [[vol1.san1.afg1.nor1.ems.encore.tech] sql2.afg1.nor1.ems.encore.tech/sql2.afg1.nor1.ems.encore.tech.vmx]
[----] I, [2016-11-10T12:11:47.181836 #17385:fcd988]  INFO -- : Snapshot create pre-check skipped for Datastore <vol1.san1.afg1.nor1.ems.encore.tech> due to Percentage:<>.  Space Free:<754302582784>  Disk size:<85899345920>
[----] E, [2016-11-10T12:11:47.524699 #17385:fcd988] ERROR -- : Unable to mount filesystem.  Reason:[Virtual machine is configured to use a device that prevents the snapshot operation: Device '' is a SCSI controller engaged in bus-sharing.] for VM [[vol1.san1.afg1.nor1.ems.encore.tech] sql2.afg1.nor1.ems.encore.tech/sql2.afg1.nor1.ems.encore.tech.vmx]
[----] E, [2016-11-10T12:11:47.524893 #17385:fcd988] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/VMwareWebService/MiqVimInventory.rb:2163:in `waitForTask'  
=====
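The ERROR at 12:11:47 shows why the scan fails: the snapshot create step is rejected because a SCSI controller on the VM is engaged in bus-sharing, and VMware does not allow snapshots of such VMs. In a VMX file this is declared with `scsiN.sharedBus` keys (values `none`, `virtual`, or `physical`). As a minimal offline check, a sketch like the following could flag the offending controllers; the helper name and the sample VMX fragment are illustrative, not taken from the customer's file:

```python
import re

def bus_sharing_controllers(vmx_text):
    """Return SCSI controllers whose sharedBus mode is not 'none'.

    VMX files declare bus sharing with keys like:
        scsi1.sharedBus = "physical"
    A controller in 'virtual' or 'physical' sharing mode blocks
    VMware snapshots, which SmartState Analysis depends on.
    """
    pattern = re.compile(r'^(scsi\d+)\.sharedBus\s*=\s*"(\w+)"', re.MULTILINE)
    return {ctrl: mode for ctrl, mode in pattern.findall(vmx_text)
            if mode.lower() != "none"}

# Illustrative VMX fragment (not the customer's actual .vmx)
sample_vmx = '''
scsi0.present = "TRUE"
scsi0.sharedBus = "none"
scsi1.present = "TRUE"
scsi1.sharedBus = "physical"
'''

print(bus_sharing_controllers(sample_vmx))  # {'scsi1': 'physical'}
```

Any controller reported by such a check would have to be reconfigured (or its disks moved off the shared bus) before SmartState's snapshot-based scan could succeed.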

Comment 2 Thomas Hennessy 2016-11-10 21:01:46 UTC
Created attachment 1219547 [details]
png files from customer showing VI Client configuration of failing VMware appliance

Comment 3 Thomas Hennessy 2016-11-10 21:30:01 UTC
Created attachment 1219554 [details]
LAST startup.txt file from the CFME 4.0 appliance also failing

Comment 4 Thomas Hennessy 2016-11-10 21:30:52 UTC
Created attachment 1219555 [details]
last_startup.txt file from the appliance that the current logs represent

Comment 5 Jerry Keselman 2016-11-14 20:33:27 UTC
Based on the VMware Knowledge Base article below, it appears the VM is configured in a way that prevents snapshot operations. The article provides pointers for alternative configurations.

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1006392

It does not appear there is anything Engineering can do with this.
Please let me know if there is anything else you need.  Thanks.

