Bug 1474102 - If libvirt iSCSI LUN detach fails, the KVM domain dies
Summary: If libvirt iSCSI LUN detach fails, the KVM domain dies
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.3
Hardware: x86_64
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: 7.5
Assignee: John Snow
QA Contact: Xueqiang Wei
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-07-23 21:47 UTC by Federico Iezzi
Modified: 2023-09-15 00:03 UTC
CC List: 23 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-12-05 21:52:13 UTC
Target Upstream Version:
Embargoed:



Description Federico Iezzi 2017-07-23 21:47:48 UTC
Description of problem:
Sometimes a Cinder VNX iSCSI volume detach operation fails. The "multipath -r" command fails, exiting with a non-zero code. This starts a chain of events that causes the KVM domain to die.
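For reference, the failing reload is directly observable on the compute node; a minimal check (a sketch, not taken from the original report):

# multipath -r; echo "exit status: $?"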


Version-Release number of selected component (if applicable):
RH-OSP9 with latest minor updates on RHEL 7.3

How reproducible:
Not easily; the detach of a multipathed iSCSI LUN must be made to fail.
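One way to force this (an assumption of mine, not a confirmed reproducer) is to hold the multipath map open on the compute node so that the flush on last-path deletion fails during the detach; the WWID below is the one from the multipathd logs further down:

# dd if=/dev/mapper/36006016076e044003f290fc0ff6de711 of=/dev/null bs=1M &
# multipath -f 36006016076e044003f290fc0ff6de711; echo "exit status: $?"

While the reader holds the map open, the flush should fail with a "map in use" error and a non-zero exit status.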

Actual results:
Sometimes the detach process fails and the KVM domain dies.

Expected results:
Even granting that the LUN detach process may sometimes fail, the KVM domain should not simply die as a result.


Additional info:
It's important to highlight that the Nova config is TripleO-generated and the multipath config is the one Red Hat QA'd.

##################################################
# Nova compute logs from the creation to the deletion.
2017-07-21 10:37:53.369 32703 INFO nova.compute.claims [req-0f27a8f6-deb1-4cb5-9efc-1f8db7d62abc ad0e5a68c95e45c5ae9523d945d31442 7f5d0faabc2542088f14d2bc65e6b69c - - -] [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] Attempting claim: memory 12000 MB, disk 40 GB, vcpus 8 CPU
2017-07-21 10:37:53.369 32703 INFO nova.compute.claims [req-0f27a8f6-deb1-4cb5-9efc-1f8db7d62abc ad0e5a68c95e45c5ae9523d945d31442 7f5d0faabc2542088f14d2bc65e6b69c - - -] [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] Total memory: 262018 MB, used: 96048.00 MB
2017-07-21 10:37:53.369 32703 INFO nova.compute.claims [req-0f27a8f6-deb1-4cb5-9efc-1f8db7d62abc ad0e5a68c95e45c5ae9523d945d31442 7f5d0faabc2542088f14d2bc65e6b69c - - -] [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] memory limit: 262018.00 MB, free: 165970.00 MB
2017-07-21 10:37:53.370 32703 INFO nova.compute.claims [req-0f27a8f6-deb1-4cb5-9efc-1f8db7d62abc ad0e5a68c95e45c5ae9523d945d31442 7f5d0faabc2542088f14d2bc65e6b69c - - -] [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] Total disk: 1676 GB, used: 520.00 GB
2017-07-21 10:37:53.370 32703 INFO nova.compute.claims [req-0f27a8f6-deb1-4cb5-9efc-1f8db7d62abc ad0e5a68c95e45c5ae9523d945d31442 7f5d0faabc2542088f14d2bc65e6b69c - - -] [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] disk limit: 1676.00 GB, free: 1156.00 GB
2017-07-21 10:37:53.370 32703 INFO nova.compute.claims [req-0f27a8f6-deb1-4cb5-9efc-1f8db7d62abc ad0e5a68c95e45c5ae9523d945d31442 7f5d0faabc2542088f14d2bc65e6b69c - - -] [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] Total vcpu: 40 VCPU, used: 42.00 VCPU
2017-07-21 10:37:53.370 32703 INFO nova.compute.claims [req-0f27a8f6-deb1-4cb5-9efc-1f8db7d62abc ad0e5a68c95e45c5ae9523d945d31442 7f5d0faabc2542088f14d2bc65e6b69c - - -] [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] vcpu limit not specified, defaulting to unlimited
2017-07-21 10:37:53.381 32703 INFO nova.compute.claims [req-0f27a8f6-deb1-4cb5-9efc-1f8db7d62abc ad0e5a68c95e45c5ae9523d945d31442 7f5d0faabc2542088f14d2bc65e6b69c - - -] [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] Claim successful
2017-07-21 10:37:53.892 32703 INFO nova.virt.libvirt.driver [req-0f27a8f6-deb1-4cb5-9efc-1f8db7d62abc ad0e5a68c95e45c5ae9523d945d31442 7f5d0faabc2542088f14d2bc65e6b69c - - -] [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] Creating image
2017-07-21 10:37:54.811 32703 INFO nova.virt.libvirt.driver [req-0f27a8f6-deb1-4cb5-9efc-1f8db7d62abc ad0e5a68c95e45c5ae9523d945d31442 7f5d0faabc2542088f14d2bc65e6b69c - - -] [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] Using config drive
2017-07-21 10:37:54.970 32703 INFO nova.virt.libvirt.driver [req-0f27a8f6-deb1-4cb5-9efc-1f8db7d62abc ad0e5a68c95e45c5ae9523d945d31442 7f5d0faabc2542088f14d2bc65e6b69c - - -] [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] Creating config drive at /var/lib/nova/instances/dec5b741-50f9-4f31-8c11-7ad78cdc50aa/disk.config
2017-07-21 10:37:57.051 32703 INFO nova.compute.manager [req-399f30e4-0683-4501-91ab-a5d75b9fad08 - - - - -] [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] VM Started (Lifecycle Event)
2017-07-21 10:37:57.120 32703 INFO nova.compute.manager [req-399f30e4-0683-4501-91ab-a5d75b9fad08 - - - - -] [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] VM Paused (Lifecycle Event)
2017-07-21 10:37:57.216 32703 INFO nova.compute.manager [req-399f30e4-0683-4501-91ab-a5d75b9fad08 - - - - -] [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] During sync_power_state the instance has a pending task (spawning). Skip.
2017-07-21 10:38:00.340 32703 INFO nova.compute.manager [req-399f30e4-0683-4501-91ab-a5d75b9fad08 - - - - -] [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] VM Resumed (Lifecycle Event)
2017-07-21 10:38:00.345 32703 INFO nova.virt.libvirt.driver [-] [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] Instance spawned successfully.
2017-07-21 10:38:00.345 32703 INFO nova.compute.manager [req-0f27a8f6-deb1-4cb5-9efc-1f8db7d62abc ad0e5a68c95e45c5ae9523d945d31442 7f5d0faabc2542088f14d2bc65e6b69c - - -] [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] Took 6.45 seconds to spawn the instance on the hypervisor.
2017-07-21 10:38:00.431 32703 INFO nova.compute.manager [req-399f30e4-0683-4501-91ab-a5d75b9fad08 - - - - -] [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] During sync_power_state the instance has a pending task (spawning). Skip.
2017-07-21 10:38:00.432 32703 INFO nova.compute.manager [req-399f30e4-0683-4501-91ab-a5d75b9fad08 - - - - -] [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] VM Resumed (Lifecycle Event)
2017-07-21 10:38:00.461 32703 INFO nova.compute.manager [req-0f27a8f6-deb1-4cb5-9efc-1f8db7d62abc ad0e5a68c95e45c5ae9523d945d31442 7f5d0faabc2542088f14d2bc65e6b69c - - -] [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] Took 7.11 seconds to build instance.
2017-07-21 10:38:02.505 32703 INFO nova.compute.manager [req-a59b59c5-22ac-4b75-9504-859a6d778932 ad0e5a68c95e45c5ae9523d945d31442 7f5d0faabc2542088f14d2bc65e6b69c - - -] [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] Attaching volume 217f3031-c701-43ce-b0c7-77104c4fcd67 to /dev/vdc
2017-07-21 15:28:18.256 32703 INFO nova.compute.manager [req-fd0a67ff-9796-4652-bf2d-507b0155a2a9 ad0e5a68c95e45c5ae9523d945d31442 7f5d0faabc2542088f14d2bc65e6b69c - - -] [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] Detach volume 217f3031-c701-43ce-b0c7-77104c4fcd67 from mountpoint /dev/vdc
2017-07-21 15:28:23.266 32703 ERROR nova.compute.manager [req-fd0a67ff-9796-4652-bf2d-507b0155a2a9 ad0e5a68c95e45c5ae9523d945d31442 7f5d0faabc2542088f14d2bc65e6b69c - - -] [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] Failed to detach volume 217f3031-c701-43ce-b0c7-77104c4fcd67 from /dev/vdc
2017-07-21 15:28:23.266 32703 ERROR nova.compute.manager [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] Traceback (most recent call last):
2017-07-21 15:28:23.266 32703 ERROR nova.compute.manager [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4790, in _driver_detach_volume
2017-07-21 15:28:23.266 32703 ERROR nova.compute.manager [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa]     encryption=encryption)
2017-07-21 15:28:23.266 32703 ERROR nova.compute.manager [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1465, in detach_volume
2017-07-21 15:28:23.266 32703 ERROR nova.compute.manager [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa]     live=live)
2017-07-21 15:28:23.266 32703 ERROR nova.compute.manager [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 335, in detach_device_with_retry
2017-07-21 15:28:23.266 32703 ERROR nova.compute.manager [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa]     self.detach_device(conf, persistent, live)
2017-07-21 15:28:23.266 32703 ERROR nova.compute.manager [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 365, in detach_device
2017-07-21 15:28:23.266 32703 ERROR nova.compute.manager [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa]     self._domain.detachDeviceFlags(conf.to_xml(), flags=flags)
2017-07-21 15:28:23.266 32703 ERROR nova.compute.manager [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 183, in doit
2017-07-21 15:28:23.266 32703 ERROR nova.compute.manager [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2017-07-21 15:28:23.266 32703 ERROR nova.compute.manager [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 141, in proxy_call
2017-07-21 15:28:23.266 32703 ERROR nova.compute.manager [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa]     rv = execute(f, *args, **kwargs)
2017-07-21 15:28:23.266 32703 ERROR nova.compute.manager [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 122, in execute
2017-07-21 15:28:23.266 32703 ERROR nova.compute.manager [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa]     six.reraise(c, e, tb)
2017-07-21 15:28:23.266 32703 ERROR nova.compute.manager [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 80, in tworker
2017-07-21 15:28:23.266 32703 ERROR nova.compute.manager [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa]     rv = meth(*args, **kwargs)
2017-07-21 15:28:23.266 32703 ERROR nova.compute.manager [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa]   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1190, in detachDeviceFlags
2017-07-21 15:28:23.266 32703 ERROR nova.compute.manager [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa]     if ret == -1: raise libvirtError ('virDomainDetachDeviceFlags() failed', dom=self)
2017-07-21 15:28:23.266 32703 ERROR nova.compute.manager [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] libvirtError: internal error: End of file from monitor
2017-07-21 15:28:23.266 32703 ERROR nova.compute.manager [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] 
2017-07-21 15:28:38.279 32703 INFO nova.compute.manager [-] [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] VM Stopped (Lifecycle Event)
2017-07-21 15:28:38.376 32703 INFO nova.compute.manager [req-ca75c923-e4c4-47b5-abb6-f57b88a4f6f9 - - - - -] [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] During _sync_instance_power_state the DB power_state (1) does not match the vm_power_state from the hypervisor (4). Updating power_state in the DB to match the hypervisor.
2017-07-21 15:28:38.531 32703 WARNING nova.compute.manager [req-ca75c923-e4c4-47b5-abb6-f57b88a4f6f9 - - - - -] [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] Instance shutdown by itself. Calling the stop API. Current vm_state: active, current task_state: None, original DB power_state: 1, current VM power_state: 4
2017-07-21 15:28:38.710 32703 INFO nova.compute.manager [req-ca75c923-e4c4-47b5-abb6-f57b88a4f6f9 - - - - -] [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] Instance is already powered off in the hypervisor when stop is called.
2017-07-21 15:28:38.762 32703 INFO nova.virt.libvirt.driver [req-ca75c923-e4c4-47b5-abb6-f57b88a4f6f9 - - - - -] [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] Instance already shutdown.
2017-07-21 15:28:38.765 32703 INFO nova.virt.libvirt.driver [-] [instance: dec5b741-50f9-4f31-8c11-7ad78cdc50aa] Instance destroyed successfully.
##################################################
# KVM domain
<domain type='kvm'>
  <name>instance-00006d67</name>
  <uuid>dec5b741-50f9-4f31-8c11-7ad78cdc50aa</uuid>
  <metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
      <nova:package version="13.1.4-1.el7ost"/>
      <nova:name>elasticsearch</nova:name>
      <nova:creationTime>2017-07-21 10:37:55</nova:creationTime>
      <nova:flavor name="flavor_8vC12M">
        <nova:memory>12000</nova:memory>
        <nova:disk>20</nova:disk>
        <nova:swap>0</nova:swap>
        <nova:ephemeral>20</nova:ephemeral>
        <nova:vcpus>8</nova:vcpus>
      </nova:flavor>
      <nova:owner>
        <nova:user uuid="ad0e5a68c95e45c5ae9523d945d31442">ieatenmc5b02_admin</nova:user>
        <nova:project uuid="7f5d0faabc2542088f14d2bc65e6b69c">Maintrack01</nova:project>
      </nova:owner>
      <nova:root type="image" uuid="c6e577b7-9bdf-4f01-a027-ebb7495a6555"/>
    </nova:instance>
  </metadata>
  <memory unit='KiB'>12288000</memory>
  <currentMemory unit='KiB'>12288000</currentMemory>
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <shares>8192</shares>
  </cputune>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>Red Hat</entry>
      <entry name='product'>OpenStack Nova</entry>
      <entry name='version'>13.1.4-1.el7ost</entry>
      <entry name='serial'>12215c5f-ae8a-4a56-81ca-04db2cd519e7</entry>
      <entry name='uuid'>dec5b741-50f9-4f31-8c11-7ad78cdc50aa</entry>
      <entry name='family'>Virtual Machine</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.3.0'>hvm</type>
    <boot dev='hd'/>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-model'>
    <model fallback='allow'/>
    <topology sockets='8' cores='1' threads='1'/>
  </cpu>
  <clock offset='utc'>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/nova/instances/dec5b741-50f9-4f31-8c11-7ad78cdc50aa/disk'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/nova/instances/dec5b741-50f9-4f31-8c11-7ad78cdc50aa/disk.eph0'/>
      <target dev='vdb' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/disk/by-path/ip-10.148.43.136:3260-iscsi-iqn.1992-04.com.emc:cx.ckm00171601134.b3-lun-136'/>
      <target dev='vdc' bus='virtio'/>
      <serial>217f3031-c701-43ce-b0c7-77104c4fcd67</serial>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/nova/instances/dec5b741-50f9-4f31-8c11-7ad78cdc50aa/disk.config'/>
      <target dev='hdd' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='1' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <interface type='bridge'>
      <mac address='fa:16:3e:62:95:14'/>
      <source bridge='qbrad513c6d-86'/>
      <target dev='tapad513c6d-86'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='file'>
      <source path='/var/lib/nova/instances/dec5b741-50f9-4f31-8c11-7ad78cdc50aa/console.log'/>
      <target port='0'/>
    </serial>
    <serial type='pty'>
      <target port='1'/>
    </serial>
    <console type='file'>
      <source path='/var/lib/nova/instances/dec5b741-50f9-4f31-8c11-7ad78cdc50aa/console.log'/>
      <target type='serial' port='0'/>
    </console>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <stats period='10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
</domain>
##################################################
# multipathd logs
Jul 21 15:28:05 overcloud-novacompute-prod5b-0.localdomain multipathd[5002]: sdk: remove path (uevent)
Jul 21 15:28:05 overcloud-novacompute-prod5b-0.localdomain multipathd[5002]: 36006016076e044003f290fc0ff6de711: load table [0 52428800 multipath 2 queue_if_no_path retain_at
Jul 21 15:28:05 overcloud-novacompute-prod5b-0.localdomain multipathd[5002]: sdk [8:160]: path removed from map 36006016076e044003f290fc0ff6de711
Jul 21 15:28:06 overcloud-novacompute-prod5b-0.localdomain multipathd[5002]: sdf: remove path (uevent)
Jul 21 15:28:06 overcloud-novacompute-prod5b-0.localdomain multipathd[5002]: 36006016076e044003f290fc0ff6de711: load table [0 52428800 multipath 2 queue_if_no_path retain_at
Jul 21 15:28:06 overcloud-novacompute-prod5b-0.localdomain multipathd[5002]: sdf [8:80]: path removed from map 36006016076e044003f290fc0ff6de711
Jul 21 15:28:06 overcloud-novacompute-prod5b-0.localdomain multipathd[5002]: sdg: remove path (uevent)
Jul 21 15:28:06 overcloud-novacompute-prod5b-0.localdomain multipathd[5002]: 36006016076e044003f290fc0ff6de711 Last path deleted, disabling queueing
Jul 21 15:28:06 overcloud-novacompute-prod5b-0.localdomain multipathd[5002]: 36006016076e044003f290fc0ff6de711: devmap removed
Jul 21 15:28:06 overcloud-novacompute-prod5b-0.localdomain multipathd[5002]: 36006016076e044003f290fc0ff6de711: stop event checker thread (139682822719232)
Jul 21 15:28:06 overcloud-novacompute-prod5b-0.localdomain multipathd[5002]: 36006016076e044003f290fc0ff6de711: removed map after removing all paths
Jul 21 15:28:06 overcloud-novacompute-prod5b-0.localdomain multipathd[5002]: dm-4: remove map (uevent)
Jul 21 15:28:06 overcloud-novacompute-prod5b-0.localdomain multipathd[5002]: dm-4: devmap not registered, can't remove
Jul 21 15:28:06 overcloud-novacompute-prod5b-0.localdomain multipathd[5002]: dm-4: remove map (uevent)

##################################################
# multipathd config
blacklist {
    # Skip LUNZ device from VNX
    device {
        vendor "DGC"
        product "LUNZ"
        }
}

defaults {
    user_friendly_names no
    flush_on_last_del yes
}

devices {
    # Device attributed for EMC CLARiiON and VNX series ALUA
    device {
        vendor "DGC"
        product ".*"
        product_blacklist "LUNZ"
        path_grouping_policy group_by_prio
        path_selector "round-robin 0"
        path_checker emc_clariion
        features "1 queue_if_no_path"
        hardware_handler "1 alua"
        prio alua
        failback immediate
    }
}
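To see the configuration the daemon actually runs with (this file merged with the built-in device defaults), the internal configuration can be dumped; a sketch, assuming the RHEL 7 multipath tools:

# multipath -t
# multipathd -k'show config'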

##################################################
# Qemu logs
2017-07-21 10:37:56.829+0000: starting up libvirt version: 2.0.0, package: 10.el7_3.9 (Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>, 2017-05-04-06:48:37, x86-034.build.eng.bos.redhat.com), qemu version: 2.6.0 (qemu-kvm-rhev-2.6.0-28.el7_3.9), hostname: overcloud-novacompute-prod5b-0.localdomain
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name guest=instance-00006d67,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-259-instance-00006d67/master-key.aes -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off -cpu Haswell-noTSX,+vme,+ds,+acpi,+ss,+ht,+tm,+pbe,+dtes64,+monitor,+ds_cpl,+vmx,+smx,+est,+tm2,+xtpr,+pdcm,+dca,+osxsave,+f16c,+rdrand,+arat,+tsc_adjust,+xsaveopt,+pdpe1gb,+abm -m 12000 -realtime mlock=off -smp 8,sockets=8,cores=1,threads=1 -uuid dec5b741-50f9-4f31-8c11-7ad78cdc50aa -smbios 'type=1,manufacturer=Red Hat,product=OpenStack Nova,version=13.1.4-1.el7ost,serial=12215c5f-ae8a-4a56-81ca-04db2cd519e7,uuid=dec5b741-50f9-4f31-8c11-7ad78cdc50aa,family=Virtual Machine' -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-259-instance-00006d67/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/nova/instances/dec5b741-50f9-4f31-8c11-7ad78cdc50aa/disk,format=qcow2,if=none,id=drive-virtio-disk0,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/nova/instances/dec5b741-50f9-4f31-8c11-7ad78cdc50aa/disk.eph0,format=qcow2,if=none,id=drive-virtio-disk1,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk1,id=virtio-disk1 -drive file=/var/lib/nova/instances/dec5b741-50f9-4f31-8c11-7ad78cdc50aa/disk.config,format=raw,if=none,id=drive-ide0-1-1,readonly=on,cache=none -device ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1 -netdev tap,fd=31,id=hostnet0,vhost=on,vhostfd=34 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:62:95:14,bus=pci.0,addr=0x3 -add-fd set=2,fd=39 -chardev file,id=charserial0,path=/dev/fdset/2,append=on -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 0.0.0.0:0 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -msg timestamp=on
char device redirected to /dev/pts/1 (label charserial1)
warning: host doesn't support requested feature: CPUID.01H:EDX.ds [bit 21]
warning: host doesn't support requested feature: CPUID.01H:EDX.acpi [bit 22]
warning: host doesn't support requested feature: CPUID.01H:EDX.ht [bit 28]
warning: host doesn't support requested feature: CPUID.01H:EDX.tm [bit 29]
warning: host doesn't support requested feature: CPUID.01H:EDX.pbe [bit 31]
warning: host doesn't support requested feature: CPUID.01H:ECX.dtes64 [bit 2]
warning: host doesn't support requested feature: CPUID.01H:ECX.monitor [bit 3]
warning: host doesn't support requested feature: CPUID.01H:ECX.ds_cpl [bit 4]
warning: host doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
warning: host doesn't support requested feature: CPUID.01H:ECX.smx [bit 6]
warning: host doesn't support requested feature: CPUID.01H:ECX.est [bit 7]
warning: host doesn't support requested feature: CPUID.01H:ECX.tm2 [bit 8]
warning: host doesn't support requested feature: CPUID.01H:ECX.xtpr [bit 14]
warning: host doesn't support requested feature: CPUID.01H:ECX.pdcm [bit 15]
warning: host doesn't support requested feature: CPUID.01H:ECX.dca [bit 18]
warning: host doesn't support requested feature: CPUID.01H:ECX.osxsave [bit 27]
(the block of 16 CPUID warnings above is repeated verbatim 8 times in the log; the remaining 7 repetitions are elided)
2017-07-21 15:28:23.266+0000: shutting down

Comment 2 Sahid Ferdjaoui 2017-07-24 07:06:24 UTC
I have some doubts that the problem is related to Nova. Nova is just reporting that libvirt was unable to detach the device. After that, Nova received an event indicating that the VM was shutting down, so Nova started the process of destroying it.

Comment 3 Federico Iezzi 2017-07-24 09:02:54 UTC
Indeed, but I opened the issue against Nova because I wanted to keep a record of the history. At this point, I believe you can reassign it to libvirt.

Comment 14 Peter Krempa 2017-07-28 14:45:13 UTC
Re-assigning to qemu. From the log above, it looks like Nova tried to detach a disk by calling the 'detachDeviceFlags' API in libvirt.

For disks, this translates to a 'device_del' call followed by a 'drive_del' (in this case the JSON variant added downstream) after we get the DEVICE_DELETED event. Unfortunately, I'm not sure in which phase qemu crashed and thus closed the monitor.
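For illustration, the monitor sequence for such a detach looks roughly like this (a sketch only: the aliases "virtio-disk2"/"drive-virtio-disk2" are what libvirt would typically derive for vdc, and the exact argument names of the downstream drive-del variant are an assumption):

{"execute":"device_del","arguments":{"id":"virtio-disk2"}}
(qemu later emits {"event":"DEVICE_DELETED","data":{"device":"virtio-disk2","path":"/machine/peripheral/virtio-disk2"}})
{"execute":"__com.redhat_drive_del","arguments":{"id":"drive-virtio-disk2"}}

A qemu crash at either step closes the monitor socket, which would surface as exactly the "internal error: End of file from monitor" seen in the Nova traceback above.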

Unfortunately, the information in the description does not seem to be covered by the sosreports, so I can't find more data.

The qemu-kvm version according to the sosreports seems to be:
qemu-kvm-rhev-2.6.0-28.el7_3.9.x86_64

Since qemu apparently crashed (or exited unexpectedly), it would be helpful if a backtrace could be provided. A libvirtd debug log would help in determining when qemu crashed.
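For the record, a minimal way to capture such a log on the compute node (a sketch using standard libvirtd.conf settings):

In /etc/libvirt/libvirtd.conf:
log_filters="1:qemu 1:libvirt 3:json 3:event"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"

# systemctl restart libvirtd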

Comment 19 Xueqiang Wei 2017-08-04 08:34:51 UTC
I tried to reproduce the issue on a RHEL 6.9 guest and a RHEL 7.4 guest, but did not hit it.

host kernel: 3.10.0-514.16.1.el7.x86_64
guest kernel: 2.6.32-671.el6.x86_64
guest kernel: 3.10.0-648.el7.x86_64
qemu-kvm-rhev-2.6.0-28.el7_3.9

steps as below:
1. prepare one iSCSI LUN (sdb)
2. boot a guest with sdb passed through twice
/usr/libexec/qemu-kvm \
-boot menu=on \
-name rhel7.4 \
-m 4096 \
-smp 4 \
-device virtio-scsi-pci,id=scsi0,bus=pci.0,ioeventfd=on \
-drive file=/home/rhel74-64-virtio-scsi.qcow2,id=drive-data-disk,if=none,cache=none,snapshot=off,format=qcow2,media=disk,aio=threads \
-device scsi-hd,id=data-disk,bus=scsi0.0,drive=drive-data-disk,bootindex=0 \
-qmp tcp:0:6666,server,nowait \
-device virtio-net-pci,mac=fa:f7:f8:5f:fa:5b,id=idn0VnaA,vectors=4,netdev=id8xJhp7,bus=pci.0,addr=06 \
-netdev tap,id=id8xJhp7,vhost=on \
-monitor stdio \
-device piix3-usb-uhci,id=xhci,bus=pci.0 \
-device usb-tablet,bus=xhci.0,id=tablet \
-spice port=8001,disable-ticketing \
-vnc :0 \
-drive file=/dev/sdb,if=none,format=raw,id=datadisk1 \
-device virtio-scsi-pci,id=scsipci1 \
-device scsi-hd,drive=datadisk1,id=datadisk1,bus=scsipci1.0,serial=test \
-drive file=/dev/sdb,if=none,format=raw,id=datadisk2 \
-device virtio-scsi-pci,id=scsipci2 \
-device scsi-hd,drive=datadisk2,id=datadisk2,bus=scsipci2.0,serial=test \
3. in guest check wwid
# scsi_id --whitelisted --replace-whitespace --device=/dev/sdb
# scsi_id --whitelisted --replace-whitespace --device=/dev/sdc
4. restart multipathd service by default configuration.
# mpathconf --enable
# systemctl restart multipathd.service
# systemctl enable multipathd.service
# fdisk -l
5. check multipath works
5.1 # ls /dev/mapper/
    # multipath -ll
5.2 run dd test on this datadisk.
    # dd if=/dev/zero of=/dev/mapper/mpatha bs=256k count=2000
    # dd of=/dev/null if=/dev/mapper/mpatha bs=256k count=2000
6. remove sdb.
# telnet host_ip 6666
{"execute":"qmp_capabilities"}
{"execute":"device_del","arguments":{"id":"datadisk1"}}
{"execute":"device_del","arguments":{"id":"scsipci1"}}
7. check multipath
# multipath -ll
8. do system_reset.
{"execute":"system_reset"}
9. hotplug the disk again.
{"execute":"__com.redhat_drive_add", "arguments": {"file":"/home/datadisk.raw","format":"raw","id":"datadisk1"}}
{"execute":"device_add","arguments":{"driver":"virtio-scsi-pci","id":"scsipci1"}}
{"execute":"device_add","arguments":{"driver":"scsi-hd","drive":"datadisk1","bus":"scsipci1.0","serial":"test"}}
10. check multipath
# multipath -ll
11. Create an LVM with related VG and LV
  # pvcreate /dev/mapper/mpatha 
  # vgcreate test_vg /dev/mapper/mpatha 
  # lvcreate -l 100%FREE -n test_lv test_vg
  # fsck -t ext4 -y /dev/mapper/mpatha
12. Create ext4 FS on the LV
  # mkfs.ext4 /dev/test_vg/test_lv
13. auto mount in booting time
  # echo "/dev/mapper/test_vg-test_lv  /mnt/t  ext4  defaults,_netdev  0 0 " >> /etc/fstab
14. do system_reset.
{"execute":"system_reset"}
15. check multipath after guest boot up
  # multipath -ll
16. remove sdc.
{"execute":"device_del","arguments":{"id":"datadisk2"}}
{"execute":"device_del","arguments":{"id":"scsipci2"}}
17. check multipath
# multipath -ll
18. do system_reset.
{"execute":"system_reset"}
19. hotplug the disk again.
{"execute":"__com.redhat_drive_add", "arguments": {"file":"/dev/sdc","format":"raw","id":"datadisk2"}}
{"execute":"device_add","arguments":{"driver":"virtio-scsi-pci","id":"scsipci2"}}
{"execute":"device_add","arguments":{"driver":"scsi-hd","drive":"datadisk2","bus":"scsipci2.0","serial":"test"}}
20. check multipath
  # multipath -ll
21. do system_reset
{"execute":"system_reset"}
22. check multipath after guest boot up
  # multipath -ll

After step 22, the guest works well; it does not crash or die.



Hi Fam,

Do you have any idea about this issue?

Comment 20 Fam Zheng 2017-08-04 12:02:54 UTC
Xueqiang, comparing your steps with the original bug description, I notice two major differences:

1) The customer used virtio-blk and you used virtio-scsi.

2) The customer VM was doing heavy I/O on the disk being detached.

Comment 21 Xueqiang Wei 2017-08-09 09:39:03 UTC
I want to confirm two items:
1) the return value of "multipath -ll" before the detach operation
2) is the iSCSI volume detach operation performed on the host or in the guest?
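For item 1, something like this on the compute node would capture it (sketch; the WWID is the one from the multipathd logs above):

# multipath -ll 36006016076e044003f290fc0ff6de711; echo "exit status: $?"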

thanks

Comment 22 Nikhil Shetty 2017-08-16 06:17:21 UTC
Hello,

The requested files and command outputs have been extracted to the collab shell. Please find the link below:

URL: http://collab-shell.usersys.redhat.com/01885331/

thanks
Nikhil Shetty

Comment 25 Fam Zheng 2017-08-22 06:47:05 UTC
The link in comment 22 doesn't work now (404), Nikhil?

Comment 26 Nikhil Shetty 2017-08-22 09:52:37 UTC
Hello,

The requested files and command outputs have been extracted to the collab shell again. Please find the link below:

URL: http://collab-shell.usersys.redhat.com/01885331/


thanks
Nikhil Shetty

Comment 27 Fam Zheng 2017-08-30 02:01:38 UTC
Neither I nor QE could reproduce this, and the files in comment 26 don't have any decisive clues about why the VM disappeared.

So, as the next step, it is necessary to obtain a backtrace or a libvirtd debug log, as suggested in comment 14.
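A sketch of how the backtrace could be captured, assuming core dumps are enabled for libvirt-managed guests:

In /etc/libvirt/qemu.conf:
max_core = "unlimited"

then restart libvirtd and, after the next unexpected exit:
# gdb /usr/libexec/qemu-kvm --core <core file> -batch -ex 'thread apply all bt'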

Comment 28 Federico Iezzi 2017-10-04 13:31:22 UTC
I left the customer over a month ago. At the time, they had managed to reproduce the issue multiple times, but the libvirt debug log was insufficient.

If possible, please ask them to provide more details/sosreports/logs, etc.
AFAIK the issue was present in both RHEL 7.2 and RHEL 7.3; 7.4 had just been installed in an OSP10z4 environment and there were no production workloads running on it.

Comment 36 Red Hat Bugzilla 2023-09-15 00:03:09 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days

