Bug 1478848 - Use migration profile in HE maintenance migration
Status: CLOSED CURRENTRELEASE
Product: ovirt-engine
Classification: oVirt
Component: BLL.HostedEngine
Version: 4.1.5.1
Hardware: x86_64 Linux
Priority: low  Severity: medium
Target Milestone: ovirt-4.2.0
Target Release: ---
Assigned To: Andrej Krejcir
QA Contact: Nikolai Sednev
Keywords: Triaged
Depends On: 1467063 1512534
Blocks: 1458745
Reported: 2017-08-07 06:10 EDT by Nikolai Sednev
Modified: 2017-12-20 05:49 EST
CC: 7 users

See Also:
Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of: 1467063
Environment:
Last Closed: 2017-12-20 05:49:15 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: SLA
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
rule-engine: ovirt-4.2+
rule-engine: planning_ack+
msivak: devel_ack+
mavital: testing_ack+


Attachments
sosreport from the engine (9.63 MB, application/x-xz)
2017-08-08 06:51 EDT, Nikolai Sednev
no flags
puma18 (11.03 MB, application/x-xz)
2017-08-08 06:52 EDT, Nikolai Sednev
no flags
puma19 (10.58 MB, application/x-xz)
2017-08-08 06:53 EDT, Nikolai Sednev
no flags


External Trackers
Tracker ID Priority Status Summary Last Updated
oVirt gerrit 84437 master MERGED agent: Fix parameter name in RPC call for VM migration 2017-11-22 03:55 EST
oVirt gerrit 84438 master MERGED agent: Use migration policy "Suspend workload if needed" 2017-11-22 03:55 EST

Comment 3 Doron Fediuck 2017-08-08 04:32:28 EDT
We're unable to reproduce in a clean environment.
Please provide a reproducer with its relevant log files in a clean environment.
Please state the load (memory and cpu) and the destination machine.
Comment 4 Nikolai Sednev 2017-08-08 04:44:58 EDT
(In reply to Doron Fediuck from comment #3)
> We're unable to reproduce in a clean environment.
> Please provide a reproducer with its relevant log files in a clean
> environment.
> Please state the load (memory and cpu) and the destination machine.

Bug #1467063 contains everything you've asked for, including the logs.
The reproduction steps are also mentioned in the original bug, e.g.:

1. Clean deployment of SHE over NFS, plus addition of one or two NFS data storage domains, on a pair of ha-hosts.
2. Ensure the SHE VM is running on the SPM ha-host.
3. Migrate the SHE VM to a non-SPM ha-host.
4. Migrate the SHE VM back to the SPM ha-host and monitor hosted-engine --vm-status from the CLI.
5. During the migration of the SHE VM to the SPM host, the score is expected to drop by 50 points due to the migration retry.
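The score in step 4 shows up in the text output of hosted-engine --vm-status. As a rough illustration only (this is not the ovirt-hosted-engine-ha code, and the hostnames and score values below are hypothetical sample data modeled on this report), a small Python sketch can extract the per-host scores from that output to spot the penalty:

```python
import re

def parse_scores(vm_status_output: str) -> dict:
    """Extract per-host HA scores from hosted-engine --vm-status text output."""
    scores = {}
    host = None
    for line in vm_status_output.splitlines():
        m = re.search(r"Hostname\s*:\s*(\S+)", line)
        if m:
            host = m.group(1)
        m = re.search(r"Score\s*:\s*(\d+)", line)
        if m and host is not None:
            scores[host] = int(m.group(1))
    return scores

# Hypothetical sample output: puma18 is mid-retry (50-point penalty applied).
sample = """\
--== Host 1 status ==--
Hostname                           : puma18.scl.lab.tlv.redhat.com
Score                              : 3350
--== Host 2 status ==--
Hostname                           : puma19.scl.lab.tlv.redhat.com
Score                              : 3400
"""
print(parse_scores(sample))
```

Running this repeatedly (e.g. in a loop with a short sleep) during the migration would make the temporary drop from 3400 to 3350 visible.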
Comment 5 Yaniv Kaul 2017-08-08 04:52:26 EDT
Re-setting needinfo per comment 3
Comment 6 Nikolai Sednev 2017-08-08 06:51 EDT
Created attachment 1310516 [details]
sosreport from the engine
Comment 7 Nikolai Sednev 2017-08-08 06:52 EDT
Created attachment 1310519 [details]
puma18
Comment 8 Nikolai Sednev 2017-08-08 06:53 EDT
Created attachment 1310520 [details]
puma19
Comment 9 Nikolai Sednev 2017-08-08 06:55:48 EDT
Puma 18 was the SPM host.
Comment 10 Doron Fediuck 2017-08-09 09:15:39 EDT
According to the engine logs the HE VM managed to migrate to the SPM host:
2017-07-02 07:03:15,175-04 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) [601534c9] START, FullListVDSCommand(HostName = puma18.scl.lab.tlv.redhat.com, FullListVDSCommandParameters:{runAsync='true', hostId='4962ca54-b194-40c5-99fd-01a402cc9ecd', vmIds='[5d543bb5-edfe-4414-a507-c06ea2d25368]'}), log id: 75ba1ffc
Comment 11 Nikolai Sednev 2017-08-09 09:48:11 EDT
(In reply to Doron Fediuck from comment #10)
> According to the engine logs the HE VM managed to migrate to the SPM host:
> 2017-07-02 07:03:15,175-04 INFO 
> [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
> (DefaultQuartzScheduler5) [601534c9] START, FullListVDSCommand(HostName =
> puma18.scl.lab.tlv.redhat.com,
> FullListVDSCommandParameters:{runAsync='true',
> hostId='4962ca54-b194-40c5-99fd-01a402cc9ecd',
> vmIds='[5d543bb5-edfe-4414-a507-c06ea2d25368]'}), log id: 75ba1ffc

And it also failed in the middle of the migration, so there was a retry; due to that retry the score was dropped by 50 points, then raised back to 3400 once the retry succeeded.
This bug is about the retry and what caused it.
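The penalty behavior described here can be sketched as simple bookkeeping. This is an illustrative model only, not the actual ovirt-hosted-engine-ha scoring code; the base score of 3400 and the 50-point penalty are taken from the values observed in this report:

```python
BASE_SCORE = 3400      # healthy host score observed in this report
RETRY_PENALTY = 50     # drop observed when a migration attempt fails and is retried

class HaScore:
    """Toy model of the HA score drop/recovery around a migration retry."""

    def __init__(self):
        self.score = BASE_SCORE
        self.pending_retry = False

    def migration_failed(self):
        # A failed migration attempt triggers a retry and a temporary penalty.
        self.score -= RETRY_PENALTY
        self.pending_retry = True

    def migration_succeeded(self):
        # Once the retry succeeds, the score recovers to the base value.
        self.score = BASE_SCORE
        self.pending_retry = False

s = HaScore()
s.migration_failed()
print(s.score)   # 3350
s.migration_succeeded()
print(s.score)   # 3400
```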
Comment 15 Nikolai Sednev 2017-12-12 06:44:07 EST
Tested on a cleanly deployed ovirt-hosted-engine-setup-2.2.1-0.0.master.20171206172737.gitd3001c8.el7.centos.noarch with ovirt-engine-appliance-4.2-20171210.1.el7.centos.noarch, over Gluster storage and with one NFS data storage domain.
The original issue was not reproduced, hence moving to verified.
See also the attached screencast.
Comment 17 Sandro Bonazzola 2017-12-20 05:49:15 EST
This bug is included in the oVirt 4.2.0 release, published on Dec 20th 2017.

Since the problem described in this bug report should be
resolved in the oVirt 4.2.0 release, published on Dec 20th 2017, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.
