Bug 1690294 - Hosted engine deployment failed as VDSM fails to query the guest agent with KeyError: 'appsList'
Summary: Hosted engine deployment failed as VDSM fails to query the guest agent with K...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhhi
Version: rhhiv-1.6
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHHI-V 1.6
Assignee: Sahina Bose
QA Contact: Mugdha Soni
URL:
Whiteboard:
Depends On: 1676893 1690301
Blocks: RHHI-V-1-6-Engineering-InFlight-BZs
 
Reported: 2019-03-19 09:00 UTC by Mugdha Soni
Modified: 2019-05-09 06:09 UTC
CC: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned to: 1690301
Environment:
Last Closed: 2019-05-09 06:09:09 UTC
Embargoed:


Attachments: none


Links
Red Hat Product Errata RHEA-2019:1121 (last updated 2019-05-09 06:09:18 UTC)

Description Mugdha Soni 2019-03-19 09:00:30 UTC
Description of problem:
------------------------
During hosted engine deployment, the deployment failed after the preparation of the local hosted engine VM with the error: "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Desktop does not exist]\". HTTP response code is 400."
It was later seen that the RHVM was up, but the additional hosts had not been added.


Version-Release number of selected component:
-----------------------------------------------
rhvh-4.3.0.5-0.20190313
glusterfs-server-3.12.2-47.el7rhgs
ovirt-hosted-engine-setup-2.3.6-1.el7ev
gluster-ansible-repositories-1.0-1.el7rhgs.noarch
gluster-ansible-maintenance-1.0.1-1.el7rhgs.noarch
gluster-ansible-features-1.0.4-5.el7rhgs.noarch
gluster-ansible-cluster-1.0-1.el7rhgs.noarch
gluster-ansible-roles-1.0.4-4.el7rhgs.noarch
gluster-ansible-infra-1.0.3-3.el7rhgs.noarch


How reproducible:
------------------
1/1


Steps to Reproduce:
--------------------
1. Complete the gluster deployment and the preparation of the local hosted engine VM.
2. Continue the hosted engine deployment; the failure occurs while it completes.

Actual results:
----------------
The deployment fails.


Expected results:
------------------
The deployment should succeed.


Additional info:
------------------
Snippet of the error (the JSON-escaped traceback from the ansible output, unescaped for readability):

Traceback (most recent call last):
  File "/tmp/ansible_ovirt_vm_payload__DJBWy/__main__.py", line 2115, in main
    wait_condition=vms_module.wait_for_down,
  File "/tmp/ansible_ovirt_vm_payload__DJBWy/ansible_ovirt_vm_payload.zip/ansible/module_utils/ovirt.py", line 738, in action
    getattr(entity_service, action)(**kwargs)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py", line 31302, in shutdown
    return self._internal_action(action, 'shutdown', None, headers, query, wait)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 299, in _internal_action
    return future.wait() if wait else future
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 55, in wait
    return self._code(response)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 296, in callback
    self._check_fault(response)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 134, in _check_fault
    self._raise_error(response, body.fault)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 118, in _raise_error
    raise error
Error: Fault reason is "Operation Failed". Fault detail is "[Desktop does not exist]". HTTP response code is 400.

Comment 1 Mugdha Soni 2019-03-19 09:04:15 UTC
The ovirt-hosted-engine-setup logs are available here:
http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/mugdha/ovirt-hosted-log/ovirt-hosted-engine-setup-ansible-create_target_vm-201921816045-e1lgu5.log

Comment 2 Mugdha Soni 2019-03-19 09:25:19 UTC
ovirt-engine logs:
http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/mugdha/ovirt-engine-log/

Comment 3 Simone Tiraboschi 2019-03-19 10:15:19 UTC
Let's focus on engine.log.

The engine sees the bootstrap local VM as an external VM:
2019-03-18 15:59:06,530+05 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-77) [] VM 'fdafa94f-07bc-4bfd-ae02-f44ddb2f7a43' was discovered as 'Up' on VDS '7a3ed5ac-786e-476c-8120-e815d9934f98'(rhsqa-grafton1.lab.eng.blr.redhat.com)
2019-03-18 15:59:06,550+05 INFO  [org.ovirt.engine.core.bll.AddUnmanagedVmsCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-77) [136031b] Running command: AddUnmanagedVmsCommand internal: true.
2019-03-18 15:59:06,553+05 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DumpXmlsVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-77) [136031b] START, DumpXmlsVDSCommand(HostName = rhsqa-grafton1.lab.eng.blr.redhat.com, Params:{hostId='7a3ed5ac-786e-476c-8120-e815d9934f98', vmIds='[fdafa94f-07bc-4bfd-ae02-f44ddb2f7a43]'}), log id: 95d0bca
2019-03-18 15:59:06,558+05 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DumpXmlsVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-77) [136031b] FINISH, DumpXmlsVDSCommand, return: {fdafa94f-07bc-4bfd-ae02-f44ddb2f7a43=<domain type='kvm' id='1'>
  <name>HostedEngineLocal</name>
  <uuid>fdafa94f-07bc-4bfd-ae02-f44ddb2f7a43</uuid>
  <memory unit='KiB'>16740352</memory>
  <currentMemory unit='KiB'>16740352</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.6.0'>hvm</type>
    <boot dev='hd'/>
    <bootmenu enable='no'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='custom' match='exact' check='full'>
    <model fallback='forbid'>Haswell-noTSX-IBRS</model>
    <feature policy='require' name='vme'/>
    <feature policy='require' name='f16c'/>
    <feature policy='require' name='rdrand'/>
    <feature policy='require' name='hypervisor'/>
    <feature policy='require' name='arat'/>
    <feature policy='require' name='xsaveopt'/>
    <feature policy='require' name='abm'/>
  </cpu>
  <clock offset='utc'>
    <timer name='kvmclock' present='yes'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/tmp/localvmDxh30G/images/4276937a-3a05-4a4a-a4c9-771eb4f9a0a9/010d2981-0230-4672-92f7-9c6ed0ef6bb9'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/tmp/localvmDxh30G/seed.iso'/>
      <backingStore/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0' model='none'/>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='00:45:54:19:48:01'/>
      <source network='default' bridge='virbr0'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/4'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/4'>
      <source path='/dev/pts/4'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-1-HostedEngineLocal/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <graphics type='vnc' port='5900' autoport='yes' listen='127.0.0.1'>
      <listen type='address' address='127.0.0.1'/>
    </graphics>
    <video>
      <model type='vga' vram='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='none'/>
    <rng model='virtio'>
      <backend model='random'>/dev/random</backend>
      <alias name='rng0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </rng>
  </devices>
  <seclabel type='dynamic' model='selinux' relabel='yes'>
    <label>system_u:system_r:svirt_t:s0:c119,c535</label>
    <imagelabel>system_u:object_r:svirt_image_t:s0:c119,c535</imagelabel>
  </seclabel>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+107:+107</label>
    <imagelabel>+107:+107</imagelabel>
  </seclabel>
</domain>
}, log id: 95d0bca
2019-03-18 15:59:06,727+05 INFO  [org.ovirt.engine.core.bll.AddVmFromScratchCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-77) [616c5678] Lock Acquired to object 'EngineLock:{exclusiveLocks='[external-HostedEngineLocal=VM_NAME]', sharedLocks=''}'
2019-03-18 15:59:06,815+05 INFO  [org.ovirt.engine.core.bll.AddVmFromScratchCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-77) [] Running command: AddVmFromScratchCommand internal: true. Entities affected :  ID: c7c98b60-4967-11e9-9569-004554194801 Type: ClusterAction group CREATE_VM with role type USER
2019-03-18 15:59:07,000+05 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-77) [] EVENT_ID: USER_ADD_VM(34), VM external-HostedEngineLocal was created by SYSTEM.
2019-03-18 15:59:07,001+05 INFO  [org.ovirt.engine.core.bll.AddVmFromScratchCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-77) [] Lock freed to object 'EngineLock:{exclusiveLocks='[external-HostedEngineLocal=VM_NAME]', sharedLocks=''}'
2019-03-18 15:59:07,012+05 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (EE-ManagedThreadFactory-engineScheduled-Thread-77) [] Received a console Device without an address when processing VM fdafa94f-07bc-4bfd-ae02-f44ddb2f7a43 devices, skipping device: {alias=serial0, type=console, device=console, deviceId=ec5ad979-5cd6-443c-97ca-6c005d0709f4}
2019-03-18 15:59:07,013+05 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (EE-ManagedThreadFactory-engineScheduled-Thread-77) [] Received a vga Device without an address when processing VM fdafa94f-07bc-4bfd-ae02-f44ddb2f7a43 devices, skipping device: {alias=video0, type=video, device=vga, deviceId=1359c3ee-0ea3-45ce-a212-7e34192c95e3}
2019-03-18 15:59:07,013+05 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (EE-ManagedThreadFactory-engineScheduled-Thread-77) [] Received a vnc Device without an address when processing VM fdafa94f-07bc-4bfd-ae02-f44ddb2f7a43 devices, skipping device: {alias=, type=graphics, device=vnc, deviceId=2bedfc6a-e134-4238-a5d1-e29aa8019d03}
2019-03-18 15:59:07,026+05 INFO  [org.ovirt.engine.core.bll.AddUnmanagedVmsCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-77) [] Importing VM 'HostedEngineLocal' as 'external-HostedEngineLocal', as it is running on the on Host, but does not exist in the engine.
2019-03-18 15:59:07,041+05 INFO  [org.ovirt.engine.core.vdsbroker.VdsManager] (EE-ManagedThreadFactory-engineScheduled-Thread-77) [] VMs initialization finished for Host: 'rhsqa-grafton1.lab.eng.blr.redhat.com:7a3ed5ac-786e-476c-8120-e815d9934f98'


That bootstrap VM in our case has uuid: fdafa94f-07bc-4bfd-ae02-f44ddb2f7a43

Then we restart the engine on the target VM on the shared storage,
and hosted-engine-setup tries to remove the bootstrap engine VM to avoid any leftovers, but it fails with:


2019-03-18 16:07:59,115+05 INFO  [org.ovirt.engine.core.bll.ShutdownVmCommand] (default task-1) [eeb95d8c-8bed-428a-9439-e4241f489a57] Running command: ShutdownVmCommand internal: false. Entities affected :  ID: fdafa94f-07bc-4bfd-ae02-f44ddb2f7a43 Type: VMAction group SHUT_DOWN_VM with role type USER
2019-03-18 16:07:59,138+05 INFO  [org.ovirt.engine.core.bll.ShutdownVmCommand] (default task-1) [eeb95d8c-8bed-428a-9439-e4241f489a57] Entered (VM 'external-HostedEngineLocal').
2019-03-18 16:07:59,138+05 INFO  [org.ovirt.engine.core.bll.ShutdownVmCommand] (default task-1) [eeb95d8c-8bed-428a-9439-e4241f489a57] Sending shutdown command for VM 'external-HostedEngineLocal'.
2019-03-18 16:07:59,142+05 INFO  [org.ovirt.engine.core.vdsbroker.DestroyVmVDSCommand] (default task-1) [eeb95d8c-8bed-428a-9439-e4241f489a57] START, DestroyVmVDSCommand( DestroyVmVDSCommandParameters:{hostId='7a3ed5ac-786e-476c-8120-e815d9934f98', vmId='fdafa94f-07bc-4bfd-ae02-f44ddb2f7a43', secondsToWait='30', gracefully='true', reason='', ignoreNoVm='false'}), log id: 16d89021
2019-03-18 16:07:59,156+05 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (default task-1) [eeb95d8c-8bed-428a-9439-e4241f489a57] START, DestroyVDSCommand(HostName = rhsqa-grafton1.lab.eng.blr.redhat.com, DestroyVmVDSCommandParameters:{hostId='7a3ed5ac-786e-476c-8120-e815d9934f98', vmId='fdafa94f-07bc-4bfd-ae02-f44ddb2f7a43', secondsToWait='30', gracefully='true', reason='', ignoreNoVm='false'}), log id: 66deda5d
2019-03-18 16:07:59,161+05 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (default task-1) [eeb95d8c-8bed-428a-9439-e4241f489a57] Destroy VM couldn't find VM 'fdafa94f-07bc-4bfd-ae02-f44ddb2f7a43'. If after Cancel Migration and VM is UP on source, ignore next error, it's just a clean-up call
2019-03-18 16:07:59,161+05 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (default task-1) [eeb95d8c-8bed-428a-9439-e4241f489a57] Failed in 'DestroyVDS' method
2019-03-18 16:07:59,169+05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-1) [eeb95d8c-8bed-428a-9439-e4241f489a57] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM rhsqa-grafton1.lab.eng.blr.redhat.com command DestroyVDS failed: Virtual machine does not exist
2019-03-18 16:07:59,169+05 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (default task-1) [eeb95d8c-8bed-428a-9439-e4241f489a57] Command 'org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand' return value 'StatusOnlyReturn [status=Status [code=1, message=Virtual machine does not exist]]'
2019-03-18 16:07:59,170+05 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (default task-1) [eeb95d8c-8bed-428a-9439-e4241f489a57] HostName = rhsqa-grafton1.lab.eng.blr.redhat.com
2019-03-18 16:07:59,170+05 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (default task-1) [eeb95d8c-8bed-428a-9439-e4241f489a57] Command 'DestroyVDSCommand(HostName = rhsqa-grafton1.lab.eng.blr.redhat.com, DestroyVmVDSCommandParameters:{hostId='7a3ed5ac-786e-476c-8120-e815d9934f98', vmId='fdafa94f-07bc-4bfd-ae02-f44ddb2f7a43', secondsToWait='30', gracefully='true', reason='', ignoreNoVm='false'})' execution failed: VDSGenericException: VDSErrorException: Failed to DestroyVDS, error = Virtual machine does not exist, code = 1
2019-03-18 16:07:59,170+05 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (default task-1) [eeb95d8c-8bed-428a-9439-e4241f489a57] FINISH, DestroyVDSCommand, return: , log id: 66deda5d
2019-03-18 16:07:59,170+05 ERROR [org.ovirt.engine.core.vdsbroker.DestroyVmVDSCommand] (default task-1) [eeb95d8c-8bed-428a-9439-e4241f489a57] Failed to destroy VM 'fdafa94f-07bc-4bfd-ae02-f44ddb2f7a43' in VDS = '7a3ed5ac-786e-476c-8120-e815d9934f98' , error = 'org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to DestroyVDS, error = Virtual machine does not exist, code = 1'
2019-03-18 16:07:59,170+05 INFO  [org.ovirt.engine.core.vdsbroker.DestroyVmVDSCommand] (default task-1) [eeb95d8c-8bed-428a-9439-e4241f489a57] FINISH, DestroyVmVDSCommand, return: , log id: 16d89021
2019-03-18 16:07:59,170+05 ERROR [org.ovirt.engine.core.bll.ShutdownVmCommand] (default task-1) [eeb95d8c-8bed-428a-9439-e4241f489a57] Command 'org.ovirt.engine.core.bll.ShutdownVmCommand' failed: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to DestroyVDS, error = Virtual machine does not exist, code = 1 (Failed with error noVM and code 1)
2019-03-18 16:07:59,199+05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-1) [eeb95d8c-8bed-428a-9439-e4241f489a57] EVENT_ID: USER_FAILED_SHUTDOWN_VM(74), Failed to initiate shutdown on VM external-HostedEngineLocal (Host: rhsqa-grafton1.lab.eng.blr.redhat.com, User: admin@internal-authz).
2019-03-18 16:07:59,217+05 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-1) [] Operation Failed: [Desktop does not exist]


But the root cause is here:

2019-03-18 16:08:07,344+05 WARN  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-10) [] Unexpected return value: Status [code=-32603, message=Internal JSON-RPC error: {'reason': "'appsList'"}]
2019-03-18 16:08:07,344+05 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-10) [] Failed in 'GetAllVmStatsVDS' method
2019-03-18 16:08:07,344+05 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-10) [] Command 'GetAllVmStatsVDSCommand(HostName = rhsqa-grafton1.lab.eng.blr.redhat.com, VdsIdVDSCommandParametersBase:{hostId='7a3ed5ac-786e-476c-8120-e815d9934f98'})' execution failed: VDSGenericException: VDSErrorException: Failed to GetAllVmStatsVDS, error = Internal JSON-RPC error: {'reason': "'appsList'"}, code = -32603
2019-03-18 16:08:07,344+05 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.PollVmStatsRefresher] (EE-ManagedThreadFactory-engineScheduled-Thread-10) [] Failed to fetch vms info for host 'rhsqa-grafton1.lab.eng.blr.redhat.com' - skipping VMs monitoring.


GetAllVmStatsVDS on rhsqa-grafton1.lab.eng.blr.redhat.com failed with error = Internal JSON-RPC error: {'reason': "'appsList'"}, code = -32603. As a result, the engine's VM list for that host was never brought up to date, and so ovirt-hosted-engine-setup failed to remove the bootstrap local engine VM, since that VM was missing from the engine's list.

Mugdha, can you please attach /var/log/vdsm/vdsm.log from rhsqa-grafton1.lab.eng.blr.redhat.com (for the relevant time) to better understand why GetAllVmStatsVDS there failed?

Comment 4 Mugdha Soni 2019-03-19 10:47:38 UTC
VDSM logs:
http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/mugdha/vdsm-log/

The error in the vdsm logs is as follows:
2019-03-18 16:07:59,545+0530 ERROR (jsonrpc/4) [jsonrpc.JsonRpcServer] Internal server error (__init__:350)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in _handle_request
    res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 194, in _dynamicMethod
    result = fn(*methodArgs)
  File "<string>", line 2, in getAllVmStats
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1388, in getAllVmStats
    statsList = self._cif.getAllVmStats()
  File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 567, in getAllVmStats
    return [v.getStats() for v in self.vmContainer.values()]
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1766, in getStats
    oga_stats = self._getGuestStats()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1967, in _getGuestStats
    stats = self.guestAgent.getGuestInfo()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/guestagent.py", line 505, in getGuestInfo
    del qga['appsList']
KeyError: 'appsList'
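
This single KeyError breaks monitoring for every VM on the host: as the traceback shows, clientIF.getAllVmStats builds the whole response in one list comprehension, so one VM whose guest-agent info lacks 'appsList' fails the entire call. A standalone toy illustration of that failure mode (not VDSM code):

class ToyVm(object):
    """Toy stand-in for vdsm.virt.vm.Vm; only the failure mode matters."""
    def __init__(self, name, guest_info):
        self.name = name
        self.guest_info = guest_info

    def getStats(self):
        stats = dict(self.guest_info)
        del stats['appsList']  # same unguarded delete as guestagent.py:505
        return stats

vms = [ToyVm('HostedEngine', {'appsList': []}),
       ToyVm('external-HostedEngineLocal', {})]  # no 'appsList' reported

try:
    # Mirrors clientIF.getAllVmStats: one comprehension over all VMs.
    all_stats = [v.getStats() for v in vms]
except KeyError as e:
    # One bad VM surfaces as "Internal JSON-RPC error" for the whole host,
    # and the engine then skips VM monitoring entirely.
    print('getAllVmStats failed: KeyError %s' % e)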

Comment 8 Charles J. Gruener 2019-03-27 21:42:31 UTC
Just chiming in to say we are experiencing this same error on an upgraded (4.2.8 to 4.3.2) oVirt instance.

Comment 9 Charles J. Gruener 2019-03-27 23:11:23 UTC
The following patch and a subsequent restart of vdsmd fixes my issue.

https://gerrit.ovirt.org/#/c/98499/1/lib/vdsm/virt/guestagent.py
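
For anyone applying that change by hand before upgrading: the essence of the fix is guarding the unconditional delete. A sketch of the defensive pattern (not the verbatim gerrit change):

def drop_apps_list(qga_info):
    """Remove 'appsList' from a guest-agent info dict if present.

    dict.pop with a default never raises, unlike the bare
    `del qga_info['appsList']` that crashed at guestagent.py:505.
    """
    qga_info.pop('appsList', None)
    return qga_info

print(drop_apps_list({'appsList': ['vim-enhanced']}))  # -> {}
print(drop_apps_list({}))                              # -> {} (no KeyError)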

Comment 10 Nicolas Ecarnot 2019-03-28 12:31:35 UTC
(In reply to Charles J. Gruener from comment #9)
> The following patch and a subsequent restart of vdsmd fixes my issue.
> 
> https://gerrit.ovirt.org/#/c/98499/1/lib/vdsm/virt/guestagent.py

Hello,

Just to say that I experienced this error in a non-gluster context (pure oVirt on iSCSI), and the patch above solved it.
Do you know how and when this patch will be included in the next oVirt release?

Comment 11 Sahina Bose 2019-03-29 13:51:54 UTC
(In reply to Nicolas Ecarnot from comment #10)
> (In reply to Charles J. Gruener from comment #9)
> > The following patch and a subsequent restart of vdsmd fixes my issue.
> > 
> > https://gerrit.ovirt.org/#/c/98499/1/lib/vdsm/virt/guestagent.py
> 
> Hello,
> 
> Just to say that I experienced this error in a non-gluster context (pure
> oVirt on iSCSI), and the patch above solved it.
> Do you know how and when this patch will be included in the next oVirt
> release?

Yes, this is addressed as part of Bug 1676893 in oVirt 4.3.3

Comment 12 SATHEESARAN 2019-03-29 18:59:04 UTC
Tested with RHVH 4.3 (rhvh-4.3.0.5-0.20190328.0+1)

1. HE deployment was successful and no issues were seen with the gluster storage domain.

Thanks Mugdha for verifying this scenario.

Comment 14 errata-xmlrpc 2019-05-09 06:09:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:1121

