Bug 1886803 - [ppc64le] Need to document firmware Risk Level configuration needed to run RHV-M VMs on POWER9 hardware
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: Documentation
Version: 4.4.4
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ovirt-4.5.1
Target Release: 4.5.1
Assignee: Donna DaCosta
QA Contact: meital avital
URL:
Whiteboard: docscope 4.5
Depends On:
Blocks: 1880774
 
Reported: 2020-10-09 12:28 UTC by Milan Zamazal
Modified: 2023-09-15 01:30 UTC (History)
8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-06-22 14:57:23 UTC
oVirt Team: Docs
Target Upstream Version:
Embargoed:



Description Milan Zamazal 2020-10-09 12:28:01 UTC
Description of problem:

A user cannot start RHV VMs with pseries-rhel8.2.0 machine type on an IBM AC922 POWER9 8335-GTH machine, due to the following failure:

  qemu-kvm: Requested count cache flush assist capability level not supported by kvm, try appending -machine cap-ccf-assist=off

This happens on bare metal, with an up-to-date firmware (according to the user and the information available on IBM Fix Central).

Here is the complete original report: https://bugzilla.redhat.com/show_bug.cgi?id=1880774#c0

Version-Release number of selected component (if applicable):

qemu-kvm-4.2.0-29.module+el8.2.1+7712+3c3fe332.2.ppc64le

How reproducible:

100% in the user's environment.

Steps to Reproduce:
1. Start a VM with the pseries-rhel8.2.0 machine type in the environment described in the report linked above.

Actual results:

The VM fails to start, with the error message above.

Expected results:

The VM starts without switching ccf-assist off.

Additional info:

We don't have this hardware, so we cannot check it on our own setup or verify whether the problem still exists in RHEL/AV 8.3.

Comment 1 David Gibson 2020-10-12 07:41:00 UTC
Thanks for the info Milan.

I suspect the problem may be that the firmware is configured for the incorrect "Risk Level" which controls which Spectre mitigations are available.  There's some information on these here:

    https://github.com/linuxppc/wiki/wiki/Security-Mitigations
    https://wiki.raptorcs.com/wiki/Configuring_Spectre_Protection_Level

Basically you want "risk level 0" if you have a POWER9 DD2.2 chip, or "risk level 4" if you have a POWER9 DD2.3 chip (risk level 0 will also work for DD2.3, it will just be slower).

If that doesn't fix it, we'll need to look at why the mitigations we need don't seem to be available.
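The DD2.2/DD2.3 guidance above can be sketched as a small helper. The revision strings below are illustrative of what `grep -m1 revision /proc/cpuinfo` prints on POWER9 and are an assumption here, not output taken from this report:

```shell
# Map a POWER9 chip revision line (as printed by /proc/cpuinfo) to the
# Risk Level recommended above. The sample inputs are illustrative
# assumptions, not captured from the reporter's machine.
recommended_risk_level() {
    case "$1" in
        *revision*2.2*) echo 0 ;;        # DD2.2: needs Risk Level 0
        *revision*2.3*) echo 4 ;;        # DD2.3: Risk Level 4 (0 also works, slower)
        *) echo unknown ;;
    esac
}

# On real hardware: recommended_risk_level "$(grep -m1 revision /proc/cpuinfo)"
recommended_risk_level "revision : 2.2 (pvr 004e 1202)"   # → 0
```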

Comment 2 Milan Zamazal 2020-10-12 14:10:34 UTC
Wrong assignee? I'm afraid I can't fix this bug. :-)

Comment 3 Vinícius Ferrão 2020-10-12 14:13:41 UTC
Hi David, thanks for pointing this out. To be honest I don't know how to answer this properly, but after reading your references I can post the output of the verification checks. Here it is:

Last login: Thu Oct  8 22:48:22 2020 from 172.21.1.8
[root@rhvpower ~]# grep . /sys/devices/system/cpu/vulnerabilities/*

/sys/devices/system/cpu/vulnerabilities/itlb_multihit:Not affected/sys/devices/system/cpu/vulnerabilities/l1tf:Not affected/sys/devices/system/cpu/vulnerabilities/mds:Not affected/sys/devices/system/cpu/vulnerabilities/meltdown:Mitigation: RFI Flush, L1D private per thread/sys/devices/system/cpu/vulnerabilities/spec_store_bypass:Mitigation: Kernel entry/exit barrier (eieio)/sys/devices/system/cpu/vulnerabilities/spectre_v1:Mitigation: __user pointer sanitization, ori31 speculation barrier enabled/sys/devices/system/cpu/vulnerabilities/spectre_v2:Mitigation: Indirect branch serialisation (kernel only)/sys/devices/system/cpu/vulnerabilities/srbds:Not affected/sys/devices/system/cpu/vulnerabilities/tsx_async_abort:Not affected

[root@rhvpower ~]# ls /proc/device-tree/ibm,opal/fw-features/*/enabled
/proc/device-tree/ibm,opal/fw-features/fw-bcctrl-serialized/enabled
/proc/device-tree/ibm,opal/fw-features/fw-branch-hints-honored/enabled
/proc/device-tree/ibm,opal/fw-features/fw-l1d-thread-split/enabled
/proc/device-tree/ibm,opal/fw-features/inst-l1d-flush-trig2/enabled
/proc/device-tree/ibm,opal/fw-features/inst-spec-barrier-ori31,31,0/enabled
/proc/device-tree/ibm,opal/fw-features/needs-l1d-flush-msr-hv-1-to-0/enabled
/proc/device-tree/ibm,opal/fw-features/needs-l1d-flush-msr-pr-0-to-1/enabled
/proc/device-tree/ibm,opal/fw-features/needs-spec-barrier-for-bound-checks/enabled
/proc/device-tree/ibm,opal/fw-features/speculation-policy-favor-security/enabled
/proc/device-tree/ibm,opal/fw-features/tm-suspend-mode/enabled
/proc/device-tree/ibm,opal/fw-features/user-mode-branch-speculation/enabled

Thank you all!

Comment 4 Vinícius Ferrão 2020-10-13 21:02:54 UTC
Hello, in my last message the output of the first command was a little difficult to read, so I'm pasting it again. It was probably a copy-and-paste issue, since I answered from my phone. Here it is again:

[root@rhvpower ~]# grep . /sys/devices/system/cpu/vulnerabilities/*
/sys/devices/system/cpu/vulnerabilities/itlb_multihit:Not affected
/sys/devices/system/cpu/vulnerabilities/l1tf:Not affected
/sys/devices/system/cpu/vulnerabilities/mds:Not affected
/sys/devices/system/cpu/vulnerabilities/meltdown:Mitigation: RFI Flush, L1D private per thread
/sys/devices/system/cpu/vulnerabilities/spec_store_bypass:Mitigation: Kernel entry/exit barrier (eieio)
/sys/devices/system/cpu/vulnerabilities/spectre_v1:Mitigation: __user pointer sanitization, ori31 speculation barrier enabled
/sys/devices/system/cpu/vulnerabilities/spectre_v2:Mitigation: Indirect branch serialisation (kernel only)
/sys/devices/system/cpu/vulnerabilities/srbds:Not affected
/sys/devices/system/cpu/vulnerabilities/tsx_async_abort:Not affected

I'm adding some info that may be relevant:

[    0.000000] Linux version 4.18.0-193.19.1.el8_2.ppc64le (mockbuild.eng.bos.redhat.com) (gcc version 8.3.1 20191121 (Red Hat 8.3.1-5) (GCC)) #1 SMP Wed Aug 26 15:13:15 EDT 2020
[    0.000000] Kernel command line: root=/dev/mapper/rhel-root ro crashkernel=auto rd.md.uuid=fd143e8c:fb181637:3781e6b2:c819ac14 rd.lvm.lv=rhel/root rd.md.uuid=e36843f8:56b7bb2b:2ea70114:ba7bab10 

Thanks.

Comment 5 David Gibson 2020-10-19 01:24:59 UTC
Right.  The info from comment 3 (the device tree output, particularly) indicates the firmware is configured at Risk Level 1.

If you have BMC-level access, you'll need to adjust it to Risk Level 0, using the instructions from the second link in comment 1.  If not, you'll need to ask whoever does have BMC access to do that (eng-ops have done this for a number of our machines previously, if that helps).

Note that those instructions are for setting Risk Level 1.  To set Risk Level 0, you need to change the '0x00000001' to '0x00000000' in the register values.
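As a sketch of how one might check from the host whether the firmware already advertises what qemu's cap-ccf-assist needs: the feature node name below (fw-count-cache-flush-bcctr2,0,0) is taken from the linuxppc security wiki linked in comment 1 and should be treated as an assumption, not something confirmed in this report:

```shell
# Check whether OPAL firmware advertises count cache flush assist, the
# capability behind qemu's cap-ccf-assist. The node name is an assumption
# based on the linuxppc security wiki, not verified on this machine.
has_ccf_assist() {
    fw="${1:-/proc/device-tree/ibm,opal/fw-features}"
    [ -d "$fw/fw-count-cache-flush-bcctr2,0,0" ]
}

if has_ccf_assist; then
    echo "ccf-assist advertised by firmware"
else
    echo "ccf-assist not advertised - check the firmware Risk Level"
fi
```

Note that the fw-features listing in comment 3 contains no such entry, which is consistent with the VM start failure.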

Comment 6 Milan Zamazal 2020-10-29 10:59:34 UTC
Vinícius, could you please check whether the instructions from comment 5 helped?

Comment 8 Vinícius Ferrão 2020-11-02 18:05:40 UTC
Alright, with the changes proposed by David I was able to fire up a VM with pseries-rhel8.2.0.

Now the relevant part: the file didn't exist on the BMC. I had to create it and set it to Risk Level 0; the resulting file was:

root@rhvpower-ipmi:~# cat /var/lib/obmc/cfam_overrides
# Control speculative execution mode
0 0x283a 0x00000000  # bits 28:31 are used for init level -- in this case 0 (Kernel and User protection) 
0 0x283F 0x20000000  # Indicate override register is valid
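After rebooting with the override in place, one way to sanity-check the host is the spectre_v2 line in sysfs. The expected mitigation string below is an assumption based on the upstream kernel's wording ("Software count cache flush"), not output captured from this machine:

```shell
# Classify the host's spectre_v2 sysfs line: at Risk Level 0 the kernel is
# expected (assumption: upstream kernel wording) to report a count cache
# flush mitigation instead of "Indirect branch serialisation (kernel only)".
check_spectre_v2() {
    case "$1" in
        *"count cache flush"*) echo "count cache flush mitigation active" ;;
        *) echo "count cache flush mitigation not reported" ;;
    esac
}

# On real hardware:
# check_spectre_v2 "$(cat /sys/devices/system/cpu/vulnerabilities/spectre_v2)"
check_spectre_v2 "Mitigation: Software count cache flush (hardware accelerated)"
```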

After the changes I was able to properly boot a new VM with pseries-rhel8.2.0. For existing VMs, I can't simply change the machine type to pseries-rhel8.2.0, since it complains about a SCSI error: VM power.nix.versatushpc.com.br is down with error. Exit message: XML error: Multiple 'scsi' controllers with index '0'.

But this is not a problem; I will just destroy and recreate those VMs.

Now regarding the issue, what is a red flag for me is that the default on the IBM AC922 is not Risk Level 0; it's Risk Level 2. So by default we can't run any VM on this hardware. This should be documented somewhere, because I've been trying to fix this for almost six months now; since RHV 4.3 I was running without sxxm due to this issue. I wasn't aware of these fine-grained Risk Level controls. They are great, but they aren't explained anywhere, and not even IBM support was able to point this out.

I would like to thank you for the effort you have put into this issue, and I really recommend turning this into a documentation issue, so other people don't suffer like I did.

Thanks all.

PS: Regarding the delayed answer, I'm dealing with other issues on the machine, but that's a debug for another bugzilla. Basically the system refuses to boot, complaining about missing LVs, and I had to reboot the server multiple times until it finally booted properly. This machine is a bug fest. Anyway, I don't have the logs here. :(

Comment 9 David Gibson 2020-11-26 01:18:35 UTC
Vinícius,

Sorry for the delayed reply, I've been on vacation.

I will pass your feedback regarding the default Risk Level on to IBM.  I agree that having the system default to a mode which can't safely run VMs seems a poor choice, so I understand your frustration.

From the software side, there's not a lot we can do: defaulting to insecure VMs (even on a hardware/firmware configuration that's capable of better) seems like an even worse choice.  Autodetecting the hardware/firmware capabilities isn't really feasible either: it would mean either (a) treating Risk Level 0 and non-zero Risk Level machines as entirely incompatible, which causes additional problems throughout the whole management stack, or (b) allowing migration to silently turn a supposedly secure VM insecure, which is worse than all of the above.

Milan, who do we need to talk to about documenting this situation better?

Comment 10 Milan Zamazal 2020-11-26 16:17:29 UTC
It's probably best to open a documentation bug on RHV (product: RHEVM, component: Documentation) and discuss it there with the doc team. If you'd like to discuss things outside BZ, you can talk to Eli Marcus.

Comment 11 Vinícius Ferrão 2020-11-30 19:36:45 UTC
If you guys can make this visible in the documentation, my struggle will not have been in vain. I agree that the solution is documentation. Since IBM ships the machine in an insecure state, it's probably also a good idea to inform the user with a better error message within the Engine when the issue happens, because some people don't go to the documentation first.

That's my suggestion of course. And thanks for solving this.

Comment 12 ctomasko 2021-12-10 17:40:21 UTC
@mzamazal  I'm scheduling this bug for the docs team to fix. 

About the request for an error message. If you agree with https://bugzilla.redhat.com/show_bug.cgi?id=1886803#c11, then please clone this issue for the engineers to add a better error message within the Engine. The writers can work with the engineering team to ensure that the wording of the error message is clear, concise, and accurate.

Comment 19 Red Hat Bugzilla 2023-09-15 01:30:56 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 365 days

