Bug 1926018 - Failed to run VM after FIPS mode is enabled
Summary: Failed to run VM after FIPS mode is enabled
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 4.4.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ovirt-4.4.6
Target Release: 4.4.6
Assignee: Asaf Rachmani
QA Contact: cshao
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-02-08 03:31 UTC by cshao
Modified: 2021-06-01 13:22 UTC (History)
12 users

Fixed In Version: ovirt-engine-4.4.6.5
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-06-01 13:22:11 UTC
oVirt Team: Virt
Target Upstream Version:
Embargoed:
cshao: testing_plan_complete+


Attachments
rhvh + engine logs (3.54 MB, application/gzip)
2021-02-08 03:31 UTC, cshao
no flags Details
fips-vm-failed-screenshot (75.91 KB, image/png)
2021-02-08 03:32 UTC, cshao
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2021:2179 0 None None None 2021-06-01 13:22:39 UTC
oVirt gerrit 114240 0 master MERGED ansible: Call populate service facts before "restart libvirtd" task 2021-04-19 12:41:10 UTC

Description cshao 2021-02-08 03:31:13 UTC
Created attachment 1755593 [details]
rhvh + engine logs

Description of problem:
Failed to run VM after FIPS mode is enabled.

# fips-mode-setup --enable
Setting system policy to FIPS
Note: System-wide crypto policies are applied on application start-up.
It is recommended to restart the system for the change of policies
to fully take place.
FIPS mode will be enabled.
Please reboot the system for the setting to take effect.

# reboot

# fips-mode-setup --check
FIPS mode is enabled.


rhvh.log
============
Feb  8 03:03:58 hp-bl460cg9-01 vdsm[6127]: ERROR FINISH create error=Error creating the requested VM#012Traceback (most recent call last):#012  File "/usr/lib/python3.6/site-packages/vdsm/common/api.py", line 124, in method#012    ret = func(*args, **kwargs)#012  File "/usr/lib/python3.6/site-packages/vdsm/API.py", line 228, in create#012    "A VM is not secure: VNC has no password and SASL "#012vdsm.common.exception.CannotCreateVM: Error creating the requested VM
============
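The failing check comes from VDSM's secure-console guard: VM creation is refused when the VNC console has neither a password nor SASL authentication. As a quick host-side sanity check (a sketch, assuming the standard libvirt config path), one can verify the setting that the VNC SASL playbook is supposed to add:

```
# /etc/libvirt/qemu.conf — the line the ovirt-vnc-sasl playbook should have added
vnc_sasl=1
```

If this line is missing (or commented out) on the host, VDSM will reject VM creation with the error above.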


Version-Release number of selected component (if applicable):
redhat-virtualization-host-4.4.5-20210204.0.el8_3
kernel-4.18.0-240.15.1.el8_3.x86_64
imgbased-1.2.16-0.1.el8ev.noarch

Engine: 4.4.5-4

How reproducible:
100%

Steps to Reproduce:
1. Install RHVH via anaconda GUI.
2. Enable FIPS mode by running "fips-mode-setup --enable"
3. Reboot
4. Register RHVH to Engine
5. Add storage domain
6. Create VM

Actual results:
Failed to run VM after FIPS mode is enabled.

Expected results:
The VM runs successfully after FIPS mode is enabled.

Additional info:
No such issue after FIPS mode is disabled.

Comment 1 cshao 2021-02-08 03:32:26 UTC
Created attachment 1755594 [details]
fips-vm-failed-screenshot

Comment 5 Michal Skrivanek 2021-04-12 16:14:33 UTC
Indeed, SASL is not enabled on the actual host, hence the VM fails to start.
It seems the SASL enablement steps did not work or were not run correctly. Can you attach the output of that VNC SASL playbook?

Comment 6 cshao 2021-04-13 02:06:27 UTC
(In reply to Michal Skrivanek from comment #5)
> Indeed, SASL is not enabled on the actual host, hence the VM fails to start.
> It seems the SASL enablement steps did not work or were not run correctly.
> Can you attach the output of that VNC SASL playbook?


# ansible-playbook --ask-pass --inventory=10.73.32.8, ovirt-vnc-sasl.yml
SSH password: 

PLAY [all] ***********************************************************************************************************************************************************************************

TASK [Gathering Facts] ***********************************************************************************************************************************************************************
ok: [10.73.32.8]

TASK [ovirt-host-setup-vnc-sasl : Create SASL QEMU config file] ******************************************************************************************************************************
changed: [10.73.32.8]

TASK [ovirt-host-setup-vnc-sasl : Use saslpasswd2 to create file with dummy user] ************************************************************************************************************
changed: [10.73.32.8]

TASK [ovirt-host-setup-vnc-sasl : Set ownership of the password db] **************************************************************************************************************************
changed: [10.73.32.8]

TASK [ovirt-host-setup-vnc-sasl : Modify qemu config file - enable VNC SASL authentication] **************************************************************************************************
changed: [10.73.32.8]

RUNNING HANDLER [ovirt-host-setup-vnc-sasl : restart libvirtd] *******************************************************************************************************************************
fatal: [10.73.32.8]: FAILED! => {}

MSG:

The conditional check 'services_in_vnc_sasl['ansible_facts']['services'].get('libvirtd.service', {}).get('state') == 'running'' failed. The error was: error while evaluating conditional (services_in_vnc_sasl['ansible_facts']['services'].get('libvirtd.service', {}).get('state') == 'running'): 'services_in_vnc_sasl' is undefined

The error appears to be in '/usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-host-setup-vnc-sasl/handlers/main.yml': line 7, column 3, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

# already running.
- name: restart libvirtd
  ^ here


NO MORE HOSTS LEFT ***************************************************************************************************************************************************************************

PLAY RECAP ***********************************************************************************************************************************************************************************
10.73.32.8                 : ok=5    changed=4    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0   

#

Comment 7 Asaf Rachmani 2021-04-13 07:07:14 UTC
I ran the same command on the same machine and it works fine:

# ansible-playbook --ask-pass --inventory=10.73.32.8, ovirt-vnc-sasl.yml
SSH password:
 
PLAY [all] ***********************************************************************************************************************************************************************************************************************************
 
TASK [Gathering Facts] ***********************************************************************************************************************************************************************************************************************
ok: [10.73.32.8]
 
TASK [ovirt-host-setup-vnc-sasl : Create SASL QEMU config file] ******************************************************************************************************************************************************************************
ok: [10.73.32.8]
 
TASK [ovirt-host-setup-vnc-sasl : Use saslpasswd2 to create file with dummy user] ************************************************************************************************************************************************************
ok: [10.73.32.8]
 
TASK [ovirt-host-setup-vnc-sasl : Set ownership of the password db] **************************************************************************************************************************************************************************
ok: [10.73.32.8]
 
TASK [ovirt-host-setup-vnc-sasl : Modify qemu config file - enable VNC SASL authentication] **************************************************************************************************************************************************
ok: [10.73.32.8]
 
PLAY RECAP ***********************************************************************************************************************************************************************************************************************************
10.73.32.8                 : ok=5    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0


But this time I see that the "RUNNING HANDLER [ovirt-host-setup-vnc-sasl : restart libvirtd]" didn't execute.
I checked the code and see that the "Modify qemu config file - enable VNC SASL authentication" task notifies the "restart libvirtd" handler:

- name: Modify qemu config file - enable VNC SASL authentication
  lineinfile:
    path: '/etc/libvirt/qemu.conf'
    state: present
    line: 'vnc_sasl=1'
  notify:
    restart libvirtd


The handler file contains two tasks, but only the "restart libvirtd" task runs:

- name: populate service facts for sasl libvirtd restart
  service_facts:
  register: services_in_vnc_sasl

# libvirtd may not be started automatically on hosts >= 4.4 if not
# already running.
- name: restart libvirtd
  service:
    name: libvirtd
    state: restarted
  when: "services_in_vnc_sasl['ansible_facts']['services'].get('libvirtd.service', {}).get('state') == 'running'"
  listen: "restart libvirtd"
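So when a change notifies "restart libvirtd", Ansible runs only the handlers whose name or `listen` matches that notification; the "populate service facts for sasl libvirtd restart" task matches neither, so `services_in_vnc_sasl` is never registered and the `when` conditional fails. A minimal sketch of a fix, in line with the merged patch title ("Call populate service facts before 'restart libvirtd' task") though the actual patch may differ, is to make the facts task listen for the same notification so it runs first:

```yaml
# handlers/main.yml — hypothetical sketch; handlers that listen for the same
# notification run in the order they are defined, so facts populate first.
- name: populate service facts for sasl libvirtd restart
  service_facts:
  register: services_in_vnc_sasl
  listen: "restart libvirtd"

# libvirtd may not be started automatically on hosts >= 4.4 if not
# already running.
- name: restart libvirtd
  service:
    name: libvirtd
    state: restarted
  when: "services_in_vnc_sasl['ansible_facts']['services'].get('libvirtd.service', {}).get('state') == 'running'"
  listen: "restart libvirtd"
```

An equivalent alternative would be to run `service_facts` as a regular role task before the `lineinfile` task, so the registered variable is always defined by the time the handler fires.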

Comment 13 errata-xmlrpc 2021-06-01 13:22:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: RHV Manager security update (ovirt-engine) [ovirt-4.4.6]), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2179

