Bug 1404623 - [el7] HA vms do not start after successful power-management.
Summary: [el7] HA vms do not start after successful power-management.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.6.5
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ovirt-4.0.7
Assignee: Vinzenz Feenstra [evilissimo]
QA Contact: Artyom
URL:
Whiteboard:
Duplicates: 1455016
Depends On: 1341106
Blocks:
 
Reported: 2016-12-14 09:50 UTC by rhev-integ
Modified: 2021-08-30 11:46 UTC (History)
17 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: When VMs were stopped during a host shutdown/reboot, the shutdown of those instances looked to VDSM the same as if the VM had been shut down gracefully from within the guest operating system, and that is what was reported to the RHV Engine. Consequence: The RHV Engine did not start HA VMs on a different host because it considered the stop user-initiated. Fix: With the help of the RHV guest agent, VDSM now detects that a VM was shut down by the host system, can therefore differentiate an unplanned shutdown, and reports it accordingly. Result: HA VMs stopped by a host shutdown (e.g. one initiated through fencing) are now restarted on a different host.
Clone Of: 1341106
Environment:
Last Closed: 2017-03-14 14:00:32 UTC
oVirt Team: Virt
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHV-43180 0 None None None 2021-08-30 11:46:03 UTC
Red Hat Knowledge Base (Solution) 3050321 0 None None None 2017-05-24 05:46:26 UTC
Red Hat Product Errata RHEA-2017:0500 0 normal SHIPPED_LIVE rhevm-guest-agent bug fix and enhancement update 2017-03-14 18:00:19 UTC
oVirt gerrit 64991 0 master NEW virt: Try to detect non guest iniated shutdowns 2016-12-14 09:52:46 UTC
oVirt gerrit 64994 0 master MERGED Report session start and stop on all Guest OSes 2016-12-14 09:52:46 UTC
oVirt gerrit 65342 0 ovirt-4.0 MERGED Report session start and stop on all Guest OSes 2016-12-14 09:52:46 UTC
oVirt gerrit 65394 0 master NEW Report session-startup on refresh 2016-12-14 09:52:46 UTC

Description rhev-integ 2016-12-14 09:50:25 UTC
+++ This bug is a downstream clone. The original bug is: +++
+++   bug 1341106 +++
======================================================================

Description of problem:
With power management configured and fencing succeeding, VMs marked as highly available should restart on another host, but instead they stay shut down.

Things tried so far:

A) Configure power management for the hosts.
B) Mark the VM as highly available.

 1] Click the Power Management drop-down menu and select Restart - [VMs remain down with Exit message: User shut down from within the guest]
 2] Run reboot / init 6 / init 0 on the host - [VMs remain down with Exit message: User shut down from within the guest]
 3] From the hypervisor console, power off the host - [VMs remain down with Exit message: User shut down from within the guest]
 4] Abrupt shutdown - [VMs restarted on another host once the host was fenced]
 5] ifdown the interface - [VMs go Unknown and start once the host is up]

Version-Release number of selected component (if applicable):
RHEVM 3.6.5

How reproducible:
Always

Steps to Reproduce:
1. Configure power management for the host.
2. Mark the VM as Highly Available 
3. Try to gracefully shutdown the host or choose from "Host --> Power Management --> (dropdown) Restart"

The host fence succeeds, but the VMs stay down with: Exit message: User shut down from within the guest

Actual results:
HA VMs do not restart on another host (or the same host).

Expected results:
HA VMs should restart on another host.

Additional info:
Also tried: echo c >/proc/sysrq-trigger
With this option (a forced kernel crash), the VM was restarted on another host.

(Originally by Ulhas Surse)

Comment 5 rhev-integ 2016-12-14 09:50:57 UTC
Just some further findings:

It looks like the IMM2 board from IBM/Lenovo always sends ACPI signals to the OS.

This is why systemd jumps in and kills the VM.
So we need to either
a) get systemd to ignore the ACPI signals (and thus not kill the VM), or
b) get IBM to not send ACPI signals to the OS in the case of an "Immediate Power off"
   (i.e. "power off" without "-s"), as it obviously does in that case.


Taken from the logs prior to a Poweroff-event from the IMM
(still waiting for some further logs for final confirmation):

qemu: terminating on signal 15 from pid 1
2016-05-26 05:48:42.924+0000: starting up libvirt version: 1.2.17, package: 13.el7_2.4 (Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>, 2016-03-02-11:10:27, x86-034.build.eng.bos.redhat.com), qemu version: 2.3.0 (qemu-kvm-rhev-2.3.0-31.el7_2.10)

(Originally by Martin Tessun)

Comment 8 rhev-integ 2016-12-14 09:51:16 UTC
While checking the logs from my previous tests, I found the following in the messages:

2016-06-01T08:19:37.172092Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config
main_channel_link: add main channel client
main_channel_handle_parsed: net test: latency 85.839000 ms, bitrate 1721352 bps (1.641609 Mbps) LOW BANDWIDTH
inputs_connect: inputs channel client create
red_dispatcher_set_cursor_peer: 
===> qemu: terminating on signal 15 from pid 1 <=== 

This shows the shutdown is triggered by systemd before the system is powered off.

Even better evidence can be found in the messages:
Jun  1 08:06:08 IDCRHLV01 root: PowerOff Test started
Jun  1 08:07:16 IDCRHLV01 systemd-logind: Power key pressed.
Jun  1 08:07:16 IDCRHLV01 systemd-logind: Powering Off...
Jun  1 08:07:16 IDCRHLV01 systemd-logind: System is powering down.
Jun  1 08:07:16 IDCRHLV01 systemd: Unmounting RPC Pipe File System...
Jun  1 08:07:16 IDCRHLV01 systemd: Stopped Dump dmesg to /var/log/dmesg.
Jun  1 08:07:16 IDCRHLV01 systemd: Stopping Dump dmesg to /var/log/dmesg...
Jun  1 08:07:16 IDCRHLV01 systemd: Stopped target Timers.
Jun  1 08:07:16 IDCRHLV01 systemd: Stopping Timers.
Jun  1 08:07:16 IDCRHLV01 systemd: Stopping LVM2 PV scan on device 8:144...
[...]

(Originally by Martin Tessun)

Comment 9 rhev-integ 2016-12-14 09:51:23 UTC
Sorry, submitted too early.
So maybe we should disable power management for hypervisors by default.

E.g.:

1. Shutdown and disable acpid
   # systemctl disable acpid
   # systemctl stop acpid

2. Change the ACPI Actions of systemd to "IGNORE":
   # mkdir -m 755 /etc/systemd/logind.conf.d
   # cat > /etc/systemd/logind.conf.d/acpi.conf <<EOF
[Login]
HandlePowerKey=ignore
HandleSuspendKey=ignore
HandleHibernateKey=ignore
HandleLidSwitch=ignore
HandleLidSwitchDocked=ignore
EOF

3. Restart systemd-logind
   # systemctl restart systemd-logind
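To confirm the drop-in actually takes effect, the merged [Login] settings can be checked with a short stdlib-only Python sketch (the paths are the ones used above; the function name is just illustrative):

```python
import configparser
import glob
import os

def effective_logind_settings(base="/etc/systemd/logind.conf",
                              dropin_dir="/etc/systemd/logind.conf.d"):
    """Merge logind.conf with its drop-ins (later files win) and
    return the [Login] section as a dict.

    Note: configparser lowercases option names by default, so look up
    'handlepowerkey', not 'HandlePowerKey'.
    """
    settings = {}
    paths = [base] + sorted(glob.glob(os.path.join(dropin_dir, "*.conf")))
    for path in paths:
        if not os.path.exists(path):
            continue
        cp = configparser.ConfigParser()
        cp.read(path)
        if cp.has_section("Login"):
            settings.update(cp["Login"])
    return settings

# After writing acpi.conf as above, this should report "ignore":
# print(effective_logind_settings().get("handlepowerkey"))
```

This only mirrors what systemd-logind does with its configuration directories; it does not query the running daemon, so restart systemd-logind after editing as shown above.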

(Originally by Martin Tessun)

Comment 10 rhev-integ 2016-12-14 09:51:30 UTC
(In reply to Martin Tessun from comment #7)
> ===> qemu: terminating on signal 15 from pid 1 <=== 
> 
> This shows the shutdown is triggered by systemd before the system is powered
> off.
> 
this would be ok, I guess; we should handle that and still identify it as an ungraceful shutdown. Was there perhaps any ACPI event inside the guest? What about libvirt, did it get SIGTERM too?

Generally, it is desired behavior that the host tries to terminate guests gracefully, but then we have to rethink how HA behaves: maybe restart an HA VM regardless of what the guest does and allow shutting down an HA VM only from the UI... which might be annoying

(Originally by michal.skrivanek)

Comment 11 rhev-integ 2016-12-14 09:51:37 UTC
(In reply to Martin Tessun from comment #8)
> Sorry submitted too early.
> So maybe we should disable powermanagement for Hypervisors by default.

No, we shouldn't. In case of a disaster, I expect IT to go into the server room and shut hosts down using the ON/OFF button, expecting a graceful shutdown.
This change would be unexpected.
I'm quite sure there's a way with IBM, via the BMC or whatnot, to ungracefully kill the host. We should look into it in the fence-agents code.

(Originally by Yaniv Kaul)

Comment 12 rhev-integ 2016-12-14 09:51:43 UTC
(In reply to Yaniv Kaul from comment #10)
> (In reply to Martin Tessun from comment #8)
> > Sorry submitted too early.
> > So maybe we should disable powermanagement for Hypervisors by default.
> 
> No, we shouldn't. In case of a disaster, I expect IT to go into the server
> room and shutdown using the ON/OFF button, expecting a graceful shutdown. 
> This change is unexpected.

Well, in case of a disaster, I don't expect anyone to go to the server room. It is a disaster, so there might be some risk in going into the server room.

In my 20 years of administration, I never used the power-off button to gracefully shut down a server. Either I have a serial console I can reach, or I do a hard power-off (maybe even NMI-triggered to get a crash dump), but probably never a graceful one, as that usually does not work in these cases.

Anyway, I can accept this point of view; changing it would of course break the current behaviour, which might lead to other cases requesting the exact opposite.

> I'm quite sure there's a way in IBM, via BMC or whatnot, to ungracefully
> kill the host. We should look into it, in the fence-agents code.

Sure, from my point of view the IBM IMM2 / BMC card has a firmware issue, as there is an option to gracefully shut down the server (power off -s).

Still, I agree with Michal that we should somehow handle this sort of issue (i.e. the case where the VM is killed by systemd), at least for HA VMs.

(Originally by Martin Tessun)

Comment 17 rhev-integ 2016-12-14 09:52:16 UTC
I have tried a couple more test cases related to this; maybe we can consider them in the same BZ.

When the admin logs on to the hypervisor and issues a shutdown or reboot, all the VMs running on the host exit with the same message, "User shut down from within the guest",

which means the guests will never start up again automatically and the admin will have to start all of these VMs manually.

The solution to this could be either:
1- to enable maintenance mode on the host as part of the shutdown sequence, which would gracefully move all VMs from that host to another functional one in the cluster.
2- to forcibly kill the guest VM processes instead of attempting a shutdown; this would then be picked up by RHV-M, which would automatically start the VMs on another host.
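Option 1 above can be scripted against the engine. The sketch below is hypothetical (the function name and the injected `deactivate` callable are placeholders, not code from this fix); with the oVirt Python SDK the engine-side call would map to the host service's `deactivate()` action:

```python
def drain_host_before_shutdown(host_name, hosts, deactivate):
    """Put a hypervisor into maintenance before an OS shutdown so the
    engine live-migrates its VMs instead of seeing their stops as
    guest-initiated shutdowns.

    hosts      -- mapping of host name -> current status string
    deactivate -- callable(host_name) performing the engine-side
                  'deactivate' (maintenance) request
    Returns True if a deactivate request was issued, False if the
    host was already in maintenance.
    """
    status = hosts.get(host_name)
    if status is None:
        raise ValueError("unknown host: %s" % host_name)
    if status == "maintenance":
        return False          # already drained, nothing to do
    deactivate(host_name)     # engine migrates VMs off the host
    return True
```

Wiring this into the host's shutdown sequence (e.g. a systemd unit ordered before shutdown.target) would implement option 1 without touching the VMs themselves.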

(Originally by Ahmed El-Rayess)

Comment 18 rhev-integ 2016-12-14 09:52:22 UTC
@aelrayes:

The workaround applied here will work for those scenarios as well (a hypervisor shut down by an administrator is not a user shutdown).

1) is the correct way to do this for an administrator

2) should be avoided if not necessary

(Originally by Vinzenz Feenstra)

Comment 23 Artyom 2016-12-21 10:27:33 UTC
I am moving the bug back to ASSIGNED until we have a build for 4.0.7.

Comment 24 Michal Skrivanek 2016-12-21 13:02:42 UTC
(In reply to Artyom from comment #23)
> I move bug to assigned until we will have build for 4.0.7

Moving back to ON_QA since it is testable. See (private) comment #22 for location

Comment 25 Artyom 2016-12-22 10:17:38 UTC
Verified on: 
rhevm-guest-agent-common-1.0.12-4.el7ev.noarch

Checked scenarios:
1) Power off the HA VM from the engine - the VM stays in state DOWN
2) Power off the HA VM from the guest OS - the VM stays in state DOWN
3) Power off the host where the HA VM runs via the engine power management action - the VM starts on another host
4) Power off the host where the HA VM runs via the host OS - the VM starts on another host
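The verified matrix above boils down to a simple decision rule: restart an HA VM only when its stop was not user- or guest-initiated. A minimal sketch of that rule (the exit-reason names are illustrative, not the engine's actual constants):

```python
# Illustrative exit reasons; the real engine uses its own enum values.
USER_SHUTDOWN_FROM_ENGINE = "user-shutdown-engine"   # scenario 1
GUEST_INITIATED_SHUTDOWN = "guest-initiated"         # scenario 2
HOST_SHUTDOWN = "host-shutdown"                      # scenarios 3 and 4

def should_restart_ha_vm(highly_available, exit_reason):
    """Restart policy matching the verified matrix: HA VMs stopped by a
    host shutdown are restarted elsewhere; deliberate user/guest
    shutdowns leave the VM DOWN."""
    if not highly_available:
        return False
    return exit_reason == HOST_SHUTDOWN
```

The whole fix in this bug amounts to making sure scenarios 3 and 4 are reported as the host-shutdown case rather than as a guest-initiated one.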

Comment 27 errata-xmlrpc 2017-03-14 14:00:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2017-0500.html

Comment 29 Vinzenz Feenstra [evilissimo] 2017-03-15 19:55:55 UTC
If you're affected by this issue, you are at the very least required to update VDSM on the hypervisors to the latest version, and additionally the guest agents on the VMs as well.

HTH

Comment 32 Germano Veit Michel 2017-05-24 04:02:17 UTC
*** Bug 1455016 has been marked as a duplicate of this bug. ***

