Bug 1679882 - Power cycling the VMs fails in RHV-M
Summary: Power cycling the VMs fails in RHV-M
Keywords:
Status: CLOSED DUPLICATE of bug 1666795
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 4.3.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Tal Nisan
QA Contact: Elad
URL:
Whiteboard:
Depends On:
Blocks: 1679884
 
Reported: 2019-02-22 06:48 UTC by bipin
Modified: 2019-02-25 10:06 UTC
CC List: 1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1679884 (view as bug list)
Environment:
Last Closed: 2019-02-25 10:06:50 UTC
oVirt Team: Gluster
Embargoed:


Attachments

Description bipin 2019-02-22 06:48:32 UTC
Description of problem:
======================
On an RHHI 1.6 setup, a powered-off VM fails to come up when it is restarted. This was seen repeatedly in the environment after powering off VMs.

Version-Release number of selected component (if applicable):
============================================================
ovirt-engine-4.3.0.4-0.1.el7.noarch
glusterfs-3.12.2-43.el7rhgs.x86_64
rhvh-4.3.0.4-0.20190220.0+1

How reproducible:
================
2/2

Steps to Reproduce:
==================
1. Deploy RHHI 1.6 (RHV 4.3 + RHGS 3.4.3)
2. Power off a VM
3. Start the VM (see the SDK sketch below)
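
For reference, the same power cycle can be driven through the engine's REST API; the snippet below is only a minimal sketch using the oVirt Python SDK (ovirtsdk4), with a placeholder engine URL, credentials and VM name rather than the actual values from this setup.

# Hedged sketch: power-cycle a VM via the oVirt Python SDK (ovirtsdk4).
# Engine URL, credentials and the VM name 'vm3' are placeholders.
import time
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)
try:
    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=vm3')[0]
    vm_service = vms_service.vm_service(vm.id)

    vm_service.stop()                      # step 2: power off the VM
    while vm_service.get().status != types.VmStatus.DOWN:
        time.sleep(5)                      # wait until the VM is reported Down

    vm_service.start()                     # step 3: start it again
finally:
    connection.close()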

Actual results:
==============
The VM fails to come up.

Expected results:
===============
The VM should start immediately.

Additional info:
===============
A little history on the setup:
I performed a major upgrade from 4.2 (RHV 4.2.8) to 4.3 (RHV 4.3.0.4-0) before running this test, so the VMs were originally created on 4.2.8.

Comment 1 bipin 2019-02-22 06:49:41 UTC
Engine log:
===========
2019-02-22 09:57:22,028+05 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-9) [] VM 'a9e20293-f2db-4651-a4af-c97a529552fa'(vm3) moved from 'Up' --> 'Down'
2019-02-22 09:57:22,038+05 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-34026) [442f5139-58b3-4fd3-9329-b589ada4acc4] EVENT_ID: USER_STOP_VM(33), VM vm3 powered off by admin@internal-authz (Host: rhsqa-grafton9-nic2.lab.eng.blr.redhat.com).
2019-02-22 09:57:22,084+05 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-9) [] EVENT_ID: VM_DOWN(61), VM vm3 is down.  
2019-02-22 09:57:25,872+05 INFO  [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-34030) [7d08656a-5413-4256-a376-85ff5c025e59] START, CreateVDSCommand( CreateVDSCommandParameters:{hostId='386144e8-1bc7-4f20-8973-b12f142b0a39', vmId='a9e20293-f2db-4651-a4af-c97a529552fa', vm='VM [vm3]'}), log id: 351b4564
2019-02-22 09:57:25,872+05 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-34030) [7d08656a-5413-4256-a376-85ff5c025e59] START, CreateBrokerVDSCommand(HostName = rhsqa-grafton7-nic2.lab.eng.blr.redhat.com, CreateVDSCommandParameters:{hostId='386144e8-1bc7-4f20-8973-b12f142b0a39', vmId='a9e20293-f2db-4651-a4af-c97a529552fa', vm='VM [vm3]'}), log id: 3c7defbb
  <name>vm3</name>
2019-02-22 09:57:25,981+05 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-34030) [7d08656a-5413-4256-a376-85ff5c025e59] EVENT_ID: USER_STARTED_VM(153), VM vm3 was started by admin@internal-authz (Host: rhsqa-grafton7-nic2.lab.eng.blr.redhat.com).
2019-02-22 09:57:40,243+05 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-25) [] VM 'a9e20293-f2db-4651-a4af-c97a529552fa'(vm3) moved from 'WaitForLaunch' --> 'Down'
2019-02-22 09:57:40,304+05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-25) [] EVENT_ID: VM_DOWN_ERROR(119), VM vm3 is down with error. Exit message: Bad volume specification {'address': {'bus': '0', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'serial': '1270d754-fde4-4fb1-9a39-9e84feec3935', 'index': 0, 'iface': 'scsi', 'apparentsize': '42949672960', 'specParams': {}, 'cache': 'none', 'imageID': '1270d754-fde4-4fb1-9a39-9e84feec3935', 'truesize': '1928600576', 'type': 'disk', 'domainID': '21aa5c27-bc4c-4d1d-b5ea-24ed29e1a3d0', 'reqsize': '0', 'format': 'raw', 'poolID': 'f3ea6394-34ff-11e9-8c1a-004755204901', 'device': 'disk', 'path': '/rhev/data-center/f3ea6394-34ff-11e9-8c1a-004755204901/21aa5c27-bc4c-4d1d-b5ea-24ed29e1a3d0/images/1270d754-fde4-4fb1-9a39-9e84feec3935/6ab99284-8f1e-4cfc-9359-bcef64e17da8', 'propagateErrors': 'off', 'name': 'sda', 'bootOrder': '2', 'volumeID': '6ab99284-8f1e-4cfc-9359-bcef64e17da8', 'diskType': 'file', 'alias': 'ua-1270d754-fde4-4fb1-9a39-9e84feec3935', 'discard': False}.
2019-02-22 09:57:40,304+05 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-25) [] add VM 'a9e20293-f2db-4651-a4af-c97a529552fa'(vm3) to rerun treatment
2019-02-22 09:57:40,330+05 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-34037) [] EVENT_ID: USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM vm3 on Host rhsqa-grafton7-nic2.lab.eng.blr.redhat.com.
2019-02-22 09:57:40,488+05 INFO  [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-34037) [] START, CreateVDSCommand( CreateVDSCommandParameters:{hostId='87f6f991-0bff-4497-a2f1-d9ac149f9c2c', vmId='a9e20293-f2db-4651-a4af-c97a529552fa', vm='VM [vm3]'}), log id: 4709fa56
2019-02-22 09:57:40,489+05 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-34037) [] START, CreateBrokerVDSCommand(HostName = rhsqa-grafton8-nic2.lab.eng.blr.redhat.com, CreateVDSCommandParameters:{hostId='87f6f991-0bff-4497-a2f1-d9ac149f9c2c', vmId='a9e20293-f2db-4651-a4af-c97a529552fa', vm='VM [vm3]'}), log id: 22b1bcf2
  <name>vm3</name>
2019-02-22 09:57:40,546+05 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-34037) [] EVENT_ID: USER_STARTED_VM(153), VM vm3 was started by admin@internal-authz (Host: rhsqa-grafton8-nic2.lab.eng.blr.redhat.com).
2019-02-22 09:57:42,012+05 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-9) [] VM 'a9e20293-f2db-4651-a4af-c97a529552fa'(vm3) moved from 'WaitForLaunch' --> 'Down'
2019-02-22 09:57:42,062+05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-9) [] EVENT_ID: VM_DOWN_ERROR(119), VM vm3 is down with error. Exit message: Bad volume specification {'address': {'bus': '0', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'serial': '1270d754-fde4-4fb1-9a39-9e84feec3935', 'index': 0, 'iface': 'scsi', 'apparentsize': '42949672960', 'specParams': {}, 'cache': 'none', 'imageID': '1270d754-fde4-4fb1-9a39-9e84feec3935', 'truesize': '1928600576', 'type': 'disk', 'domainID': '21aa5c27-bc4c-4d1d-b5ea-24ed29e1a3d0', 'reqsize': '0', 'format': 'raw', 'poolID': 'f3ea6394-34ff-11e9-8c1a-004755204901', 'device': 'disk', 'path': '/rhev/data-center/f3ea6394-34ff-11e9-8c1a-004755204901/21aa5c27-bc4c-4d1d-b5ea-24ed29e1a3d0/images/1270d754-fde4-4fb1-9a39-9e84feec3935/6ab99284-8f1e-4cfc-9359-bcef64e17da8', 'propagateErrors': 'off', 'name': 'sda', 'bootOrder': '2', 'volumeID': '6ab99284-8f1e-4cfc-9359-bcef64e17da8', 'diskType': 'file', 'alias': 'ua-1270d754-fde4-4fb1-9a39-9e84feec3935', 'discard': False}.
2019-02-22 09:57:42,063+05 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-9) [] add VM 'a9e20293-f2db-4651-a4af-c97a529552fa'(vm3) to rerun treatment
2019-02-22 09:57:42,088+05 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-34038) [] EVENT_ID: USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM vm3 on Host rhsqa-grafton8-nic2.lab.eng.blr.redhat.com.
2019-02-22 09:57:42,191+05 INFO  [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-34038) [] START, CreateVDSCommand( CreateVDSCommandParameters:{hostId='a37bef1d-9214-4476-a22b-2e556f71949f', vmId='a9e20293-f2db-4651-a4af-c97a529552fa', vm='VM [vm3]'}), log id: 738ee5b1
2019-02-22 09:57:42,191+05 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-34038) [] START, CreateBrokerVDSCommand(HostName = rhsqa-grafton9-nic2.lab.eng.blr.redhat.com, CreateVDSCommandParameters:{hostId='a37bef1d-9214-4476-a22b-2e556f71949f', vmId='a9e20293-f2db-4651-a4af-c97a529552fa', vm='VM [vm3]'}), log id: 21534285
  <name>vm3</name>
2019-02-22 09:57:42,228+05 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-34038) [] EVENT_ID: USER_STARTED_VM(153), VM vm3 was started by admin@internal-authz (Host: rhsqa-grafton9-nic2.lab.eng.blr.redhat.com).
2019-02-22 09:57:42,674+05 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM 'a9e20293-f2db-4651-a4af-c97a529552fa'(vm3) moved from 'WaitForLaunch' --> 'Down'
2019-02-22 09:57:42,686+05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-11) [] EVENT_ID: VM_DOWN_ERROR(119), VM vm3 is down with error. Exit message: Bad volume specification {'address': {'bus': '0', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'serial': '1270d754-fde4-4fb1-9a39-9e84feec3935', 'index': 0, 'iface': 'scsi', 'apparentsize': '42949672960', 'specParams': {}, 'cache': 'none', 'imageID': '1270d754-fde4-4fb1-9a39-9e84feec3935', 'truesize': '1928600576', 'type': 'disk', 'domainID': '21aa5c27-bc4c-4d1d-b5ea-24ed29e1a3d0', 'reqsize': '0', 'format': 'raw', 'poolID': 'f3ea6394-34ff-11e9-8c1a-004755204901', 'device': 'disk', 'path': '/rhev/data-center/f3ea6394-34ff-11e9-8c1a-004755204901/21aa5c27-bc4c-4d1d-b5ea-24ed29e1a3d0/images/1270d754-fde4-4fb1-9a39-9e84feec3935/6ab99284-8f1e-4cfc-9359-bcef64e17da8', 'propagateErrors': 'off', 'name': 'sda', 'bootOrder': '2', 'volumeID': '6ab99284-8f1e-4cfc-9359-bcef64e17da8', 'diskType': 'file', 'alias': 'ua-1270d754-fde4-4fb1-9a39-9e84feec3935', 'discard': False}.
2019-02-22 09:57:42,686+05 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] add VM 'a9e20293-f2db-4651-a4af-c97a529552fa'(vm3) to rerun treatment
2019-02-22 09:57:42,713+05 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-34039) [] EVENT_ID: USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM vm3 on Host rhsqa-grafton9-nic2.lab.eng.blr.redhat.com.
2019-02-22 09:57:42,719+05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-34039) [] EVENT_ID: USER_FAILED_RUN_VM(54), Failed to run VM vm3  (User: admin@internal-authz).


vdsm log
========
2019-02-22 09:57:26,107+0530 INFO  (vm/a9e20293) [vdsm.api] FINISH prepareImage error=Volume does not exist: (u'6ab99284-8f1e-4cfc-9359-bcef64e17da8',) from=internal, task_id=3ec7a723-77eb-4ff9-a7cb-c7d4cfb1c17e (api:52)
2019-02-22 09:57:26,107+0530 ERROR (vm/a9e20293) [storage.TaskManager.Task] (Task='3ec7a723-77eb-4ff9-a7cb-c7d4cfb1c17e') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "<string>", line 2, in prepareImage
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3199, in prepareImage
    legality = dom.produceVolume(imgUUID, volUUID).getLegality()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 822, in produceVolume
    volUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/glusterVolume.py", line 45, in __init__
    volUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 801, in __init__
    self._manifest = self.manifestClass(repoPath, sdUUID, imgUUID, volUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileVolume.py", line 71, in __init__
    volUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 86, in __init__
    self.validate()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 112, in validate
    self.validateVolumePath()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileVolume.py", line 131, in validateVolumePath
    raise se.VolumeDoesNotExist(self.volUUID)
VolumeDoesNotExist: Volume does not exist: (u'6ab99284-8f1e-4cfc-9359-bcef64e17da8',)
2019-02-22 09:57:26,107+0530 INFO  (vm/a9e20293) [storage.TaskManager.Task] (Task='3ec7a723-77eb-4ff9-a7cb-c7d4cfb1c17e') aborting: Task is aborted: "Volume does not exist: (u'6ab99284-8f1e-4cfc-9359-bcef64e17da8',)" - code 201 (task:1181)
2019-02-22 09:57:26,108+0530 ERROR (vm/a9e20293) [storage.Dispatcher] FINISH prepareImage error=Volume does not exist: (u'6ab99284-8f1e-4cfc-9359-bcef64e17da8',) (dispatcher:83)
2019-02-22 09:57:26,108+0530 ERROR (vm/a9e20293) [virt.vm] (vmId='a9e20293-f2db-4651-a4af-c97a529552fa') The vm start process failed (vm:937)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 866, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2759, in _run
    self._devices = self._make_devices()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2599, in _make_devices
    disk_objs = self._perform_host_local_adjustment()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2672, in _perform_host_local_adjustment
    self._preparePathsForDrives(disk_params)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1011, in _preparePathsForDrives
    drive['path'] = self.cif.prepareVolumePath(drive, self.id)
  File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 415, in prepareVolumePath
    raise vm.VolumeError(drive)
VolumeError: Bad volume specification {'address': {'bus': '0', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'serial': '1270d754-fde4-4fb1-9a39-9e84feec3935', 'index': 0, 'iface': 'scsi', 'apparentsize': '42949672960', 'specParams': {}, 'cache': 'none', 'imageID': '1270d754-fde4-4fb1-9a39-9e84feec3935', 'truesize': '1928600576', 'type': 'disk', 'domainID': '21aa5c27-bc4c-4d1d-b5ea-24ed29e1a3d0', 'reqsize': '0', 'format': 'raw', 'poolID': 'f3ea6394-34ff-11e9-8c1a-004755204901', 'device': 'disk', 'path': '/rhev/data-center/f3ea6394-34ff-11e9-8c1a-004755204901/21aa5c27-bc4c-4d1d-b5ea-24ed29e1a3d0/images/1270d754-fde4-4fb1-9a39-9e84feec3935/6ab99284-8f1e-4cfc-9359-bcef64e17da8', 'propagateErrors': 'off', 'name': 'sda', 'bootOrder': '2', 'volumeID': '6ab99284-8f1e-4cfc-9359-bcef64e17da8', 'diskType': 'file', 'alias': 'ua-1270d754-fde4-4fb1-9a39-9e84feec3935', 'discard': False}
2019-02-22 09:57:26,108+0530 INFO  (vm/a9e20293) [virt.vm] (vmId='a9e20293-f2db-4651-a4af-c97a529552fa') Changed state to Down: Bad volume specification {'address': {'bus': '0', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'serial': '1270d754-fde4-4fb1-9a39-9e84feec3935', 'index': 0, 'iface': 'scsi', 'apparentsize': '42949672960', 'specParams': {}, 'cache': 'none', 'imageID': '1270d754-fde4-4fb1-9a39-9e84feec3935', 'truesize': '1928600576', 'type': 'disk', 'domainID': '21aa5c27-bc4c-4d1d-b5ea-24ed29e1a3d0', 'reqsize': '0', 'format': 'raw', 'poolID': 'f3ea6394-34ff-11e9-8c1a-004755204901', 'device': 'disk', 'path': '/rhev/data-center/f3ea6394-34ff-11e9-8c1a-004755204901/21aa5c27-bc4c-4d1d-b5ea-24ed29e1a3d0/images/1270d754-fde4-4fb1-9a39-9e84feec3935/6ab99284-8f1e-4cfc-9359-bcef64e17da8', 'propagateErrors': 'off', 'name': 'sda', 'bootOrder': '2', 'volumeID': '6ab99284-8f1e-4cfc-9359-bcef64e17da8', 'diskType': 'file', 'alias': 'ua-1270d754-fde4-4fb1-9a39-9e84feec3935', 'discard': False} (code=1) (vm:1675)
2019-02-22 09:57:26,184+0530 INFO  (vm/a9e20293) [virt.vm] (vmId='a9e20293-f2db-4651-a4af-c97a529552fa') Stopping connection (guestagent:455)
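
For context, the start attempt dies in vdsm's file-volume path validation: prepareImage cannot find the volume file on the mounted Gluster storage domain and raises VolumeDoesNotExist, which the engine then surfaces as the "Bad volume specification" error above. A simplified sketch of that check follows; the real fileVolume.validateVolumePath resolves the path through the storage domain and its out-of-process helpers, so treat this only as an illustration.

# Simplified sketch of the check behind the traceback above (not the
# actual vdsm implementation).
import os

class VolumeDoesNotExist(Exception):
    pass

def validate_volume_path(vol_path, vol_uuid):
    # The volume file is expected under
    # /rhev/data-center/<pool>/<domain>/images/<image>/<volume>.
    # If it is missing on the mounted Gluster domain, prepareImage aborts
    # and the engine reports "Bad volume specification".
    if not os.path.lexists(vol_path):
        raise VolumeDoesNotExist(vol_uuid)
    return vol_path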

Comment 3 bipin 2019-02-25 10:06:50 UTC
Marking as a duplicate of the downstream Bug 1666795.

*** This bug has been marked as a duplicate of bug 1666795 ***

