Bug 1419872 - Creating second snapshot for suspended VM throws error in evm.log
Status: CLOSED ERRATA
Product: Red Hat CloudForms Management Engine
Classification: Red Hat
Component: Providers
Version: 5.7.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: GA
Target Release: 5.9.0
Assigned To: Adam Grare
QA Contact: Antonin Pagac
Whiteboard: snapshot
Depends On:
Blocks:
 
Reported: 2017-02-07 05:02 EST by Aziza Karol
Modified: 2018-03-01 08:09 EST
CC List: 5 users

See Also:
Fixed In Version: 5.9.0.1
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-03-01 08:09:24 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker ID: Red Hat Product Errata RHSA-2018:0380
Priority: normal
Status: SHIPPED_LIVE
Summary: Moderate: Red Hat CloudForms security, bug fix, and enhancement update
Last Updated: 2018-03-01 13:37:12 EST

Description Aziza Karol 2017-02-07 05:02:46 EST
Description of problem:
Creating a second snapshot of a suspended VM fails: the snapshot is not created and an error is thrown in evm.log.

Version-Release number of selected component (if applicable):
5.7.1.0.20170124142149_8a5f4d5 

How reproducible:
100%

Steps to Reproduce:
1. Navigate to Compute -> Infrastructure -> Virtual Machines.
2. Select a VM in the suspended state.
3. Take a first snapshot. The snapshot succeeds.
4. Take a second snapshot.

Actual results:
The flash message "Create Snapshot for VM and Instance "xxx" started" is displayed, but the snapshot is not created.
An error is thrown in evm.log.

Expected results:
The second snapshot is created for the suspended VM.

Additional info:
evm.log
[----] E, [2017-02-07T03:17:52.806020 #2897:1167140] ERROR -- : MIQ(MiqQueue#m_callback) Message id: [1386279]: Snapshot not taken since the state of the virtual machine has not changed since the last snapshot operation.
[----] E, [2017-02-07T03:17:52.806210 #2897:1167140] ERROR -- : MIQ(MiqQueue#m_callback) backtrace: (druby://127.0.0.1:42374) /var/www/miq/vmdb/gems/pending/VMwareWebService/MiqVimInventory.rb:2170:in `waitForTask'
(druby://127.0.0.1:42374) /var/www/miq/vmdb/gems/pending/VMwareWebService/MiqVimVm.rb:1218:in `waitForTask'
(druby://127.0.0.1:42374) /var/www/miq/vmdb/gems/pending/VMwareWebService/MiqVimVm.rb:385:in `createSnapshot'
(druby://127.0.0.1:42374) /opt/rh/rh-ruby23/root/usr/share/ruby/drb/drb.rb:1624:in `perform_without_block'
(druby://127.0.0.1:42374) /opt/rh/rh-ruby23/root/usr/share/ruby/drb/drb.rb:1584:in `perform'
(druby://127.0.0.1:42374) /opt/rh/rh-ruby23/root/usr/share/ruby/drb/drb.rb:1657:in `block (2 levels) in main_loop'
(druby://127.0.0.1:42374) /opt/rh/rh-ruby23/root/usr/share/ruby/drb/drb.rb:1653:in `loop'
(druby://127.0.0.1:42374) /opt/rh/rh-ruby23/root/usr/share/ruby/drb/drb.rb:1653:in `block in main_loop'
/var/www/miq/vmdb/app/models/manageiq/providers/vmware/infra_manager.rb:454:in `block in invoke_vim_ws'
/var/www/miq/vmdb/app/models/mixins/provider_object_mixin.rb:15:in `block in with_provider_object'
/var/www/miq/vmdb/app/models/mixins/vim_connect_mixin.rb:36:in `with_provider_connection'
/var/www/miq/vmdb/app/models/mixins/provider_object_mixin.rb:12:in `with_provider_object'
/var/www/miq/vmdb/app/models/manageiq/providers/vmware/infra_manager.rb:450:in `invoke_vim_ws'
/var/www/miq/vmdb/app/models/manageiq/providers/vmware/infra_manager.rb:347:in `vm_create_snapshot'
/var/www/miq/vmdb/app/models/vm_or_template.rb:330:in `run_command_via_parent'
/var/www/miq/vmdb/app/models/vm_or_template/operations/snapshot.rb:34:in `raw_create_snapshot'
/var/www/miq/vmdb/app/models/vm_or_template.rb:354:in `check_policy_prevent_callback'
/var/www/miq/vmdb/app/models/miq_queue.rb:416:in `m_callback'
/var/www/miq/vmdb/app/models/miq_queue.rb:385:in `delivered'
/var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:117:in `deliver_queue_message'
/var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:134:in `deliver_message'
/var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:152:in `block in do_work'
/var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:146:in `loop'
/var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:146:in `do_work'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:334:in `block in do_work_loop'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:331:in `loop'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:331:in `do_work_loop'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:153:in `run'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:128:in `start'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:21:in `start_worker'
/var/www/miq/vmdb/app/models/miq_worker.rb:343:in `block in start'
/opt/rh/cfme-gemset/gems/nakayoshi_fork-0.0.3/lib/nakayoshi_fork.rb:24:in `fork'
/opt/rh/cfme-gemset/gems/nakayoshi_fork-0.0.3/lib/nakayoshi_fork.rb:24:in `fork'
/var/www/miq/vmdb/app/models/miq_worker.rb:341:in `start'
/var/www/miq/vmdb/app/models/miq_worker.rb:270:in `start_worker'
/var/www/miq/vmdb/app/models/miq_worker.rb:150:in `block in sync_workers'
/var/www/miq/vmdb/app/models/miq_worker.rb:150:in `times'
/var/www/miq/vmdb/app/models/miq_worker.rb:150:in `sync_workers'
/var/www/miq/vmdb/app/models/miq_server/worker_management/monitor.rb:52:in `block in sync_workers'
/var/www/miq/vmdb/app/models/miq_server/worker_management/monitor.rb:50:in `each'
/var/www/miq/vmdb/app/models/miq_server/worker_management/monitor.rb:50:in `sync_workers'
/var/www/miq/vmdb/app/models/miq_server.rb:158:in `start'
/var/www/miq/vmdb/app/models/miq_server.rb:249:in `start'
/var/www/miq/vmdb/lib/workers/evm_server.rb:65:in `start'
/var/www/miq/vmdb/lib/workers/evm_server.rb:92:in `start'
/var/www/miq/vmdb/lib/workers/bin/evm_server.rb:4:in `<main>'
[----] I, [2017-02-07T03:17:53.314014 #17333:1167140]  INFO -- :
Comment 2 Greg Blomquist 2017-02-09 09:45:42 EST
Nice one Aziza!

So, the error message in the log shows what the problem is; it just looks like the error message needs to be propagated to the user somehow...
Comment 4 CFME Bot 2017-04-24 15:56:37 EDT
New commit detected on ManageIQ/manageiq/master:
https://github.com/ManageIQ/manageiq/commit/6f68389cf4b33c6867a9c18722fced54a099b56f

commit 6f68389cf4b33c6867a9c18722fced54a099b56f
Author:     Adam Grare <agrare@redhat.com>
AuthorDate: Mon Feb 20 15:00:16 2017 -0500
Commit:     Adam Grare <agrare@redhat.com>
CommitDate: Tue Apr 18 13:40:22 2017 -0400

    Create a notification when creating a snap fails
    
    https://bugzilla.redhat.com/show_bug.cgi?id=1419872

 app/models/vm_or_template.rb                     | 8 ++++++++
 app/models/vm_or_template/operations/snapshot.rb | 3 +++
 2 files changed, 11 insertions(+)
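
In outline, the change follows the notify-on-failure pattern: rescue the snapshot failure, record a user-visible notification, and re-raise so the caller still sees (and logs) the error. Below is a minimal, self-contained sketch of that pattern in plain Ruby; the class names, the :vm_snapshot_failure type and the stand-in provider call are assumptions for illustration only, not the code added by this commit.

    # Stand-in for the real notification model.
    Notification = Struct.new(:type, :options)

    class Vm
      attr_reader :name, :notifications

      def initialize(name)
        @name = name
        @notifications = []
      end

      def raw_create_snapshot(snap_name)
        provider_create_snapshot(snap_name)
      rescue => err
        # Record a user-visible notification instead of only logging the
        # provider error, then re-raise so the caller still sees the failure.
        notifications << Notification.new(:vm_snapshot_failure,
                                          { :subject => name, :error => err.message })
        raise
      end

      def provider_create_snapshot(_snap_name)
        # Simulate the provider refusing the snapshot, as in the evm.log excerpt above.
        raise "Snapshot not taken since the state of the virtual machine " \
              "has not changed since the last snapshot operation."
      end
    end

    vm = Vm.new("xxx")
    begin
      vm.raw_create_snapshot("second snapshot")
    rescue RuntimeError => e
      puts "evm.log would still record: #{e.message}"
    end
    puts "UI notification: #{vm.notifications.first.options[:error]}"

With something like this in place, the failure reaches the user as a notification rather than appearing only in evm.log, which is what comment 2 asked for.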
Comment 5 Antonin Pagac 2017-05-30 08:07:10 EDT
Issue is still present in 5.8.0.17.
Comment 6 Adam Grare 2017-05-30 08:55:18 EDT
Hi Antonin, this wasn't backported to 5.8.
Comment 7 Antonin Pagac 2017-10-12 04:48:42 EDT
Appliance version: 5.9.0.2.20171010190026_0413a06

The issue is still present, though with a different error in evm.log:

[----] E, [2017-10-12T04:21:30.336410 #12986:53113c] ERROR -- : MIQ(MiqQueue#m_callback) Message id: [1308]: uninitialized constant VmOrTemplate::Operations::Snapshot::MiqVmSnapshotError
[----] E, [2017-10-12T04:21:30.336681 #12986:53113c] ERROR -- : [NameError]: uninitialized constant VmOrTemplate::Operations::Snapshot::MiqVmSnapshotError  Method:[block in method_missing]
[----] E, [2017-10-12T04:21:30.336781 #12986:53113c] ERROR -- : /var/www/miq/vmdb/app/models/vm_or_template/operations/snapshot.rb:54:in `rescue in raw_create_snapshot'
/var/www/miq/vmdb/app/models/vm_or_template/operations/snapshot.rb:51:in `raw_create_snapshot'
/var/www/miq/vmdb/app/models/mixins/miq_policy_mixin.rb:115:in `check_policy_prevent_callback'
/var/www/miq/vmdb/app/models/miq_queue.rb:474:in `m_callback'
/var/www/miq/vmdb/app/models/miq_queue.rb:443:in `delivered'
/var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:116:in `deliver_queue_message'
/var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:134:in `deliver_message'
/var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:152:in `block in do_work'
/var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:146:in `loop'
/var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:146:in `do_work'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:331:in `block in do_work_loop'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:328:in `loop'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:328:in `do_work_loop'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:155:in `run'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:129:in `start'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:22:in `start_worker'
/var/www/miq/vmdb/app/models/miq_worker.rb:354:in `block in start_runner_via_fork'
/opt/rh/cfme-gemset/gems/nakayoshi_fork-0.0.3/lib/nakayoshi_fork.rb:24:in `fork'
/opt/rh/cfme-gemset/gems/nakayoshi_fork-0.0.3/lib/nakayoshi_fork.rb:24:in `fork'
/var/www/miq/vmdb/app/models/miq_worker.rb:352:in `start_runner_via_fork'
/var/www/miq/vmdb/app/models/miq_worker.rb:346:in `start_runner'
/var/www/miq/vmdb/app/models/miq_worker.rb:380:in `start'
/var/www/miq/vmdb/app/models/miq_worker.rb:263:in `start_worker'
/var/www/miq/vmdb/app/models/miq_worker.rb:150:in `block in sync_workers'
/var/www/miq/vmdb/app/models/miq_worker.rb:150:in `times'
/var/www/miq/vmdb/app/models/miq_worker.rb:150:in `sync_workers'
/var/www/miq/vmdb/app/models/miq_server/worker_management/monitor.rb:53:in `block in sync_workers'
/var/www/miq/vmdb/app/models/miq_server/worker_management/monitor.rb:50:in `each'
/var/www/miq/vmdb/app/models/miq_server/worker_management/monitor.rb:50:in `sync_workers'
/var/www/miq/vmdb/app/models/miq_server.rb:139:in `start'
/var/www/miq/vmdb/app/models/miq_server.rb:231:in `start'
/var/www/miq/vmdb/lib/workers/evm_server.rb:27:in `start'
/var/www/miq/vmdb/lib/workers/evm_server.rb:48:in `start'
/var/www/miq/vmdb/lib/workers/bin/evm_server.rb:4:in `<main>'

When trying again, a warning is briefly displayed in the UI and the error is thrown again in evm.log.

How reproducible:
100%

Steps to Reproduce:
1. Navigate to Compute -> Infrastructure -> Virtual Machines.
2. Select a VM in the suspended state.
3. Take a first snapshot. The snapshot succeeds.
4. Take a second snapshot.

Actual results:
The second snapshot is not taken; a warning is displayed in the UI and an error is thrown in evm.log.

Expected results:
The second snapshot is created for the suspended VM.

Additional info:
The above error also appeared in evm.log when I tried to create the first snapshot of a suspended VM. This happened only once and I can't reproduce it now.
Comment 8 Adam Grare 2017-10-13 10:27:22 EDT
This was an unrelated error, fixed by https://github.com/ManageIQ/manageiq/pull/16186
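
The NameError in comment 7 is the classic lazy constant lookup problem: the rescue clause in raw_create_snapshot references an error class that is not defined in (or reachable from) the enclosing namespace, so the reference only blows up when an exception actually hits that rescue. Below is a minimal, self-contained illustration of the pattern in plain Ruby; the module, class and constant names are invented for the example and are not the ManageIQ code or the fix in the PR above.

    module Operations
      module Snapshot
        def raw_create_snapshot
          provider_create_snapshot
        rescue => err
          # MiqVmSnapshotError is looked up lazily. If no such constant exists in
          # Operations::Snapshot, Operations, or at the top level, this line itself
          # raises "uninitialized constant Operations::Snapshot::MiqVmSnapshotError".
          raise MiqVmSnapshotError, err.message
        end

        def provider_create_snapshot
          raise "provider refused the snapshot"
        end
      end
    end

    class Vm
      include Operations::Snapshot
    end

    begin
      Vm.new.raw_create_snapshot
    rescue NameError => e
      puts e.message  # uninitialized constant Operations::Snapshot::MiqVmSnapshotError
    end

The usual fix for this pattern is to define the missing error class or to reference it by its fully qualified name from the namespace where it actually lives.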
Comment 9 Antonin Pagac 2017-10-26 06:01:53 EDT
Verified with 5.9.0.4. The second snapshot is not created for a suspended VM, and there is a notification informing the user about it. No error in evm.log.
Comment 12 errata-xmlrpc 2018-03-01 08:09:24 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:0380
